explainer
How to explain a trained model.
Modules:

Name | Description |
---|---|
jobs | Jobs utilities. |
Classes:

Name | Description |
---|---|
Explainer | Explain an XpdeepModel. |
Explainer
Explain an XpdeepModel.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
description_representativeness | int | A parameter governing explanation quality: the higher the value, the better the explanation, but the slower the computation. | required |
quality_metrics | list[QualityMetrics] | A list of quality metrics to compute, such as Sensitivity or Infidelity. | required |
window_size | int \| None | DTW window parameter (proportion, in %). | None |
metrics | DictMetrics \| None | A list of metrics to compute along with the explanation (F1 score, etc.). | None |
statistics | DictStats \| None | A list of statistics to compute along with the explanation (variance on targets, etc.). | None |
batch_size | int \| None | The batch size to use during explanation. Defaults to None. | None |
seed | int \| None | The seed to use during explanation. Defaults to None. | None |
Methods:

Name | Description |
---|---|
local_explain | Create a causal explanation from a trained model. |
global_explain | Compute the model decision on a trained model. |
Attributes:

Name | Type | Description |
---|---|---|
description_representativeness | int | |
quality_metrics | list[QualityMetrics] | |
window_size | int \| None | |
metrics | DictMetrics \| None | |
statistics | DictStats \| None | |
batch_size | int \| None | |
seed | int \| None | |
description_representativeness: int

quality_metrics: list[QualityMetrics]

window_size: int | None = None

metrics: DictMetrics | None = None

statistics: DictStats | None = None

batch_size: int | None = None

seed: int | None = None
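For orientation, here is a minimal construction sketch. The import paths and the Sensitivity / Infidelity metric classes are assumptions inferred from the source path and parameter descriptions above, not confirmed API:

```python
# Hedged sketch: import paths and metric classes are assumptions;
# only the parameter names and types come from the reference above.
from xpdeep.explain.explainer import Explainer  # path inferred from "src/xpdeep/explain/explainer.py"
from xpdeep.explain.quality_metrics import Infidelity, Sensitivity  # assumed module and classes

explainer = Explainer(
    description_representativeness=30,  # higher = better explanations, slower to compute
    quality_metrics=[Sensitivity(), Infidelity()],  # assumed instantiable metric classes
    batch_size=128,
    seed=42,
)
```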
local_explain(trained_model: TrainedModelArtifact, train_set: FittedParquetDataset, dataset_filter: Filter, *, explanation_name: str | None = None, explanation_description: str | None = None) -> ExplanationArtifact

Create a causal explanation from a trained model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
trained_model | TrainedModelArtifact | A model trained via the trainer interface. | required |
train_set | FittedParquetDataset | A dataset representing a train split. | required |
dataset_filter | Filter | A filter applied to the dataset to select the samples to explain. | required |
explanation_name | str \| None | The explanation name. | None |
explanation_description | str \| None | The explanation description. | None |
Returns:

Type | Description |
---|---|
ExplanationArtifact | The causal explanation results, containing the result as JSON. |

Source code in src/xpdeep/explain/explainer.py
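A hedged usage sketch follows; `explainer`, `trained_model`, `fitted_train_set`, and `samples_filter` are placeholder objects assumed to have been created earlier, and only the local_explain signature above comes from this reference:

```python
# Placeholder inputs: a trained model artifact, a fitted train split, and a
# Filter selecting the samples to explain (all created elsewhere).
explanation = explainer.local_explain(
    trained_model=trained_model,
    train_set=fitted_train_set,
    dataset_filter=samples_filter,
    explanation_name="my-local-explanation",  # optional, keyword-only
    explanation_description="Causal explanation on filtered samples",  # optional, keyword-only
)
```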
global_explain(trained_model: TrainedModelArtifact, train_set: FittedParquetDataset, test_set: FittedParquetDataset | None = None, validation_set: FittedParquetDataset | None = None) -> ExplanationArtifact

Compute the model decision on a trained model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
trained_model | TrainedModelArtifact | A model trained via the trainer interface. | required |
train_set | FittedParquetDataset | A dataset representing a train split. | required |
test_set | FittedParquetDataset \| None | A dataset representing a test split, used to optionally compute split statistics. | None |
validation_set | FittedParquetDataset \| None | A dataset representing a validation split, used to optionally compute split statistics. | None |
Returns:

Type | Description |
---|---|
ExplanationArtifact | The model decision results, containing the result as JSON. |
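As with local_explain, here is a hedged usage sketch; the dataset objects are placeholders created elsewhere, and only the global_explain signature above comes from this reference:

```python
# Optional test/validation splits only add per-split statistics to the result.
model_decision = explainer.global_explain(
    trained_model=trained_model,
    train_set=fitted_train_set,
    test_set=fitted_test_set,              # optional
    validation_set=fitted_validation_set,  # optional
)
```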