mlp
Classic MLP for the xpdeep API.
Classes:
| Name | Description | 
|---|---|
| MLP | Convenient MLP model. |
MLP(input_size: int, hidden_channels: list[int], norm_layer: Callable[..., torch.nn.Module] | None = None, activation_layer: Callable[..., torch.nn.Module] | None = torch.nn.ReLU, dropout: float = 0.0, last_activation: partial[torch.nn.Module] | None = None, *, inplace: bool | None = None, bias: bool = True, flatten_input: bool = False)
    Convenient MLP model.
Initialize a Multi-Layer Perceptron model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_size | int | Number of channels of the input. | required |
| hidden_channels | list[int] | List of the hidden channel dimensions. | required |
| norm_layer | Callable[..., Module] | None | Norm layer that will be stacked on top of the linear layer. If None, this layer won't be used. | None |
| activation_layer | Callable[..., Module] | None | Activation function, which will be stacked on top of the normalization layer (if not None), otherwise on top of the linear layer. | torch.nn.ReLU |
| inplace | bool | None | Parameter for the activation layer, which can optionally do the operation in-place. Default is None. | None |
| bias | bool | Whether to use bias in the linear layer. | True |
| dropout | float | The probability for the dropout layer. | 0.0 |
| last_activation | partial[Module] | None | Last activation function. | None |
| flatten_input | bool | Whether to flatten the input or not. | False |
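As a usage illustration, here is a minimal, hedged sketch of constructing and calling the model. The import path follows the source location listed below; the layer sizes, the choice of BatchNorm1d and Softmax, and the assumption that the output width equals the last entry of hidden_channels (as in torchvision-style MLPs) are illustrative, not guaranteed by this API.

```python
import torch
from functools import partial

from xpdeep.model.zoo.mlp import MLP  # import path assumed from the source location below

# Illustrative sizes: 16 input features, hidden/output widths of 32 and 8.
model = MLP(
    input_size=16,
    hidden_channels=[32, 8],
    norm_layer=torch.nn.BatchNorm1d,  # stacked on top of each linear layer
    activation_layer=torch.nn.ReLU,   # stacked on top of the norm layer
    dropout=0.1,                      # dropout probability after each activation
    last_activation=partial(torch.nn.Softmax, dim=-1),  # a partial, per the signature
)

x = torch.randn(4, 16)  # batch of 4 samples with 16 features each
out = model(x)          # assumed output shape: (4, 8)
```

Note that last_activation is passed as a functools.partial, matching the partial[torch.nn.Module] type in the signature above.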
Methods:
| Name | Description | 
|---|---|
| reset_parameters | Reset model parameters to get new values when copying the model. |
Source code in src/xpdeep/model/zoo/mlp.py
reset_parameters() -> None
    Reset model parameters to get new values when copying the model.
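A brief, hedged sketch of the copy-and-reset workflow this method targets; the use of copy.deepcopy and the model variable are illustrative assumptions, not part of this API.

```python
import copy

# Duplicate an existing MLP instance, then re-initialize the copy so it
# starts from fresh parameter values rather than sharing the original's.
clone = copy.deepcopy(model)  # `model`: an MLP instance, e.g. from the example above
clone.reset_parameters()      # assumed to re-draw initial values for every layer
```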