Xpdeep Model, a Self-Explainable Model#
XpdeepModel is the core concept of the explainable deep learning framework. XpdeepModel is versatile: it applies to
all types of data and deep architectures (CNN, LSTM, Transformer, YOLO, ...) and addresses a wide range of tasks
(classification, regression, forecasting, anomaly detection, ...).
XpdeepModel can be used to learn an explainable deep model from scratch or to explain an existing deep model. In
addition to training explainable deep models, XpdeepModel is also used to generate inferences and produce various
explanations. For security reasons, however, this object cannot be exported.
Future Release
Encrypted XpdeepModel will be exportable.
1. Concepts and Definitions#
Once the specifications for a deep learning model are defined, it needs to be converted into an Xpdeep model (i.e., an explainable deep model) before training. This explainable deep model can then be trained, to simultaneously learn a specific-task model and its explanations. Additionally, an explainable deep model can easily leverage a pretrained backbone model.
The conversion from a specified standard deep model to an explainable deep model is a straightforward process that
requires splitting the original deep architecture into several parts: an optional Backbone, a FeatureExtraction model,
and a TaskLearner model.
Under the hood, Xpdeep serializes your model using torch export. This
step is crucial and represents the only constraint on your model definition. Since no arbitrary code execution is
allowed on the server side, this conversion process ensures that your model is constructed from a wide range of options
(virtually any Torch nn.Module, as Xpdeep relies on PyTorch internally) and guarantees that each model is built using
approved Torch operators.
Each Xpdeep model requires:
- an optional backbone model
- a feature extraction model
- a task learner model
- a set of hyperparameters
2. Build an Explainable Model#
As explained earlier, you need to convert your deep learning model to an explainable model.
Let's dive into the splitting process.
Backbone Model#
The backbone model plays the same role as a traditional backbone in a neural network.
- Backbone models are usually pre-trained on large and diverse datasets, which helps them learn general features that are useful across various tasks.
- The backbone projects the input into a meaningful embedding space, producing embeddings that the downstream components can build on.
Feature Extraction Model#
The feature extraction model is a neural network that aims to extract the most important and coherent features for the task you want to achieve.
Task Learner Model#
The task learner model is responsible for performing your task (classification, regression, etc.), given the set of meaningful extracted features.
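As an illustration, the three parts can be sketched as ordinary PyTorch modules for a small tabular classification task. The sizes and module choices here are illustrative only; they are not part of the Xpdeep API.

```python
import torch
from torch import nn

# Optional backbone: projects raw inputs into a general-purpose embedding.
backbone = nn.Sequential(nn.Linear(16, 64), nn.ReLU())

# Feature extraction: distills the embedding into task-relevant features.
feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())

# Task learner: maps the extracted features to the final prediction
# (here, logits for a 3-class classification task).
task_learner = nn.Linear(32, 3)

x = torch.randn(8, 16)  # batch of 8 samples with 16 raw features each
logits = task_learner(feature_extractor(backbone(x)))
print(logits.shape)  # torch.Size([8, 3])
```

Each part is a standard `nn.Module`, so an existing architecture can usually be split this way by grouping its layers.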
Please ensure that:
- The backbone model output size is compatible with the feature extraction model input size.
- The feature extraction model output size is compatible with the task learner model input size.
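A quick way to catch a size mismatch before conversion is to run a dummy batch through the chain. This sketch uses illustrative sizes to show the failure mode:

```python
import torch
from torch import nn

feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # expects input size 64
bad_backbone = nn.Linear(16, 48)  # outputs size 48: incompatible with the extractor

try:
    feature_extractor(bad_backbone(torch.randn(2, 16)))
    compatible = True
except RuntimeError:
    compatible = False

print(compatible)  # False: 48 does not match the expected 64
```

Running such a dummy forward pass once per component pair surfaces incompatibilities early, before any conversion or training is attempted.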
Hyperparameters#
Hyperparameters define the internal parameters and architecture of the Xpdeep explainable model. Like in standard deep models, these hyperparameters dictate the learning process strategy, its complexity, convergence speed, and other factors.
Please refer to the ModelDecisionGraphParameters class to check each parameter definition.
3. Note on torch export#
In strict mode, which is currently the default, we first trace through the program using TorchDynamo, a bytecode analysis engine. TorchDynamo does not execute your Python code. Instead, it symbolically analyzes it and builds a graph based on the results. This analysis allows torch.export to provide stronger guarantees about safety, but not all Python code is supported.
4. The AbstractModule#
XpdeepModel requires each component model (the backbone, feature extractor, and task learner) to be an AbstractModule.
An AbstractModule can handle multiple modalities, encapsulated in a MultiModal object.
In most cases, a single modality is used, but you can refer to the Object Detection tutorial
for an example of multimodal models.
With Xpdeep, you can either build your own AbstractModule or use one from the provided library `xpdeep_modules`.
Build your own model as an ApiModule#
You can build your own ApiModule objects to create an XpdeepModel:
the XpdeepModel interface provides a helper method to conveniently build your ApiModule using the PyTorch export mechanism.
Use the XpdeepModel.from_torch() method to create an XpdeepModel and convert your backbone, feature extractor, and
task learner into their ApiModule equivalents.
This method automatically wraps your models as ApiModule objects, an abstraction that inherits from AbstractModule
and operates on MultiModal data.
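For illustration only, the conversion might look like the following pseudocode sketch. The import path and parameter names here are assumptions; check the `XpdeepModel.from_torch` API reference for the exact signature.

```python
# Illustrative pseudocode -- verify names against the Xpdeep API reference.
from xpdeep.model import XpdeepModel  # hypothetical import path

xpdeep_model = XpdeepModel.from_torch(
    backbone=backbone,                     # optional torch nn.Module
    feature_extraction=feature_extractor,  # torch nn.Module
    task_learner=task_learner,             # torch nn.Module
    ...                                    # hyperparameters, e.g. ModelDecisionGraphParameters
)
```

Each component is exported with the PyTorch export mechanism during this call, so the constraints from the torch export section above apply to all three models.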
Future Release
Xpdeep will allow you to build your own ApiModule with custom multimodal inputs for each model.
Use pre-existing Xpdeep models#
In some cases, complex models cannot be safely exported because they contain graph breaks.
For security and reliability reasons, you should use models from the Xpdeep model library (xpdeep_modules), which
currently provides object detection models implemented as AbstractModule.
See:
- Feature extractor: `ObjectDetectionFeatureExtractor` in `xpdeep_modules.object_detection.dfine_models.py`
- Task learner: `ObjectDetectionTaskLearner` in `xpdeep_modules.object_detection.dfine_models.py`
Refer to the Object Detection tutorial for a detailed example.
If you cannot export your model and it does not exist in the Xpdeep model library, you can try to make it export-compatible. Read more in the official torch.export documentation.
If it still cannot be exported and no equivalent model fits your needs in xpdeep_modules, feel free to reach out to us
at support@xpdeep.com.