# Xpdeep Model, a Self-Explainable Model

`XpdeepModel` is the core concept of the explainable deep learning framework. It is versatile, applicable to all types of data and deep architectures (CNN, LSTM, Transformer, YOLO, ...), and addresses a wide range of tasks (classification, regression, forecasting, anomaly detection, ...). `XpdeepModel` can be used to learn an explainable deep model from scratch, or to explain an existing deep model. In addition to training explainable deep models, `XpdeepModel` is also used for generating inferences and producing various explanations; however, for security reasons, this object cannot be exported.
**Future Release**

Encrypted `XpdeepModel` will be exportable.
## 1. Concepts and Definitions
Once the specifications for a deep learning model are defined, it needs to be converted into an Xpdeep model (i.e., an explainable deep model) before training. This explainable deep model can then be trained to simultaneously learn a task-specific model and its explanations. Additionally, an explainable deep model can easily leverage a pretrained backbone model.
Converting a specified standard deep model into an explainable deep model is an easy process that requires the original deep architecture to be split into several parts: an optional `Backbone`, a `FeatureExtraction` model, and a `TaskLearner` model.
Under the hood, Xpdeep serializes your model using `torch.export`. This step is crucial and represents the only constraint on your model definition. Since no arbitrary code execution is allowed on the server side, this conversion process ensures that your model can be constructed from a wide range of options (virtually any Torch `nn.Module`, as Xpdeep relies on PyTorch internally) while guaranteeing that each model is built using approved Torch operators.
Each Xpdeep model requires:
- an optional backbone model
- a feature extraction model
- a task learner model
- a set of hyperparameters
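As a hedged sketch of how these pieces fit together: the `XpdeepModel.from_torch` constructor, the import path, and the keyword names below are assumptions for illustration, so consult the API reference for the exact signature.

```python
import torch.nn as nn

# Hypothetical import path -- check the Xpdeep API reference.
from xpdeep.model import ModelDecisionGraphParameters, XpdeepModel

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # optional
feature_extraction = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
task_learner = nn.Linear(32, 10)

# Assumed constructor name and keyword arguments (illustrative only).
explainable_model = XpdeepModel.from_torch(
    backbone=backbone,
    feature_extraction=feature_extraction,
    task_learner=task_learner,
    decision_graph_parameters=ModelDecisionGraphParameters(),
)
```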
**Warning**

`torch.export` requires the batch normalization layer `torch.nn.BatchNorm1d` to be given as a `partial` with `track_running_stats=False`. However, without learnt running statistics, the behaviour at inference is not stable: a single sample may receive a different prediction on its own than within a batch, because the batch is normalized without learnt parameters. Batch normalization layers will be available once they are fully compatible with `torch.export`.
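For reference, here is how such a `partial` can be built; the surrounding model definition is purely illustrative.

```python
from functools import partial

import torch.nn as nn

# BatchNorm1d configured as required by torch.export: running
# statistics are disabled, so the layer always normalizes with
# the statistics of the current batch.
batch_norm = partial(nn.BatchNorm1d, track_running_stats=False)

feature_extraction = nn.Sequential(
    nn.Linear(16, 64),
    batch_norm(64),  # instantiate the partial with the feature size
    nn.ReLU(),
)
```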
## 2. Build an Explainable Model
As explained earlier, you need to convert your deep learning model into an explainable model. Let's dive into the splitting process.
### Backbone Model
The backbone model plays the same role as a traditional backbone in a neural network:

- Backbone models are usually pre-trained on large and diverse datasets, which helps them learn general features that are useful across various tasks.
- The backbone extracts meaningful embeddings by projecting inputs into a meaningful representation space.
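For instance, one common pattern (assuming an image task, and using plain PyTorch rather than an Xpdeep-specific API) is to reuse a pre-trained ResNet with its classification head removed:

```python
import torch.nn as nn
from torchvision.models import ResNet18_Weights, resnet18

# Reuse a ResNet-18 pre-trained on ImageNet as a backbone:
# drop its classification head and keep the 512-d embeddings.
resnet = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone = nn.Sequential(*list(resnet.children())[:-1], nn.Flatten())  # -> (batch, 512)
```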
### Feature Extraction Model
The feature extraction model is a neural network that aims to extract the most important and coherent features for the task you want to achieve.
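Continuing the illustrative sketch above, a feature extraction model can be as simple as a small MLP:

```python
import torch.nn as nn

# A small MLP that compresses 512-d backbone embeddings
# into 64 task-oriented features.
feature_extraction = nn.Sequential(
    nn.Linear(512, 64),
    nn.ReLU(),
)
```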
### Task Learner Model
The task learner model is responsible for achieving your task (classification, regression, etc.), given a set of meaningful extracted features.
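In the same illustrative sketch, the task learner can be a simple head on top of the extracted features:

```python
import torch.nn as nn

# A linear head mapping the 64 extracted features
# to logits for a 10-class classification task.
task_learner = nn.Linear(64, 10)
```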
Please ensure that:

- The backbone model output size is compatible with the feature extraction model input size.
- The feature extraction model output size is compatible with the task learner model input size.
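A quick way to verify this compatibility is to chain the three modules on a dummy batch and inspect the shapes, using the illustrative modules sketched above:

```python
import torch

x = torch.randn(2, 3, 224, 224)            # dummy image batch
embeddings = backbone(x)                   # -> (2, 512)
features = feature_extraction(embeddings)  # -> (2, 64)
logits = task_learner(features)            # -> (2, 10)
assert logits.shape == (2, 10)
```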
### Hyperparameters
Hyperparameters define the internal parameters and architecture of the Xpdeep explainable model. As in standard deep models, these hyperparameters dictate the learning strategy, its complexity, convergence speed, and other factors. Please refer to the `ModelDecisionGraphParameters` class to check each parameter definition.
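As a purely illustrative sketch, hyperparameters might be set as follows; the field name `graph_depth` and its value are assumptions, so consult the `ModelDecisionGraphParameters` reference for the actual fields and defaults.

```python
# Hypothetical field name -- verify against the class reference.
hyperparameters = ModelDecisionGraphParameters(
    graph_depth=3,  # assumed: depth of the explainable decision graph
)
```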
## 3. Note on `torch.export`
In strict mode, which is currently the default, we first trace through the program using TorchDynamo, a bytecode analysis engine. TorchDynamo does not actually execute your Python code. Instead, it symbolically analyzes it and builds a graph based on the results. This analysis allows `torch.export` to provide stronger guarantees about safety, but not all Python code is supported.
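A minimal example of exporting a module in strict mode (the current default) with PyTorch's `torch.export`:

```python
import torch
from torch.export import export

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Strict mode traces the module with TorchDynamo and fails on
# unsupported Python constructs instead of silently skipping them.
exported_program = export(TinyModel(), (torch.randn(4, 8),))
print(exported_program)  # inspect the captured graph
```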