mlp
Classic MLP for the xpdeep API.
`MLP(input_size: int, hidden_channels: list[int], norm_layer: Callable[..., torch.nn.Module] | None = None, activation_layer: Callable[..., torch.nn.Module] | None = torch.nn.ReLU, dropout: float = 0.0, last_activation: partial[torch.nn.Module] | None = None, *, inplace: bool | None = None, bias: bool = True, flatten_input: bool = False)`
A convenient MLP model. The constructor initializes a multi-layer perceptron.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_size` | `int` | Number of channels of the input. | *required* |
| `hidden_channels` | `list[int]` | List of the hidden channel dimensions. | *required* |
| `norm_layer` | `Callable[..., Module] \| None` | Norm layer that will be stacked on top of the linear layer. If `None`, no normalization layer is used. | `None` |
| `activation_layer` | `Callable[..., Module] \| None` | Activation function, which will be stacked on top of the normalization layer (if not `None`), otherwise on top of the linear layer. | `ReLU` |
| `inplace` | `bool \| None` | Parameter for the activation layer, which can optionally do the operation in-place. | `None` |
| `bias` | `bool` | Whether to use bias in the linear layer. | `True` |
| `dropout` | `float` | The probability for the dropout layer. | `0.0` |
| `last_activation` | `partial[Module] \| None` | Last activation function. | `None` |
| `flatten_input` | `bool` | Whether to flatten the input or not. | `False` |
Source code in src/xpdeep/model/zoo/mlp.py
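For illustration, the hidden dimensions chain together: each entry of `hidden_channels` becomes the output size of one linear layer, whose input size is the previous entry (or `input_size` for the first layer). Below is a minimal, torch-free sketch of this bookkeeping; the helper name `mlp_layer_dims` is hypothetical and not part of the xpdeep API.

```python
def mlp_layer_dims(input_size: int, hidden_channels: list[int]) -> list[tuple[int, int]]:
    """Return the (in_features, out_features) pair for each linear layer."""
    dims = []
    in_dim = input_size
    for out_dim in hidden_channels:
        dims.append((in_dim, out_dim))
        in_dim = out_dim  # the next layer consumes this layer's output
    return dims

# An MLP(input_size=4, hidden_channels=[8, 8, 2]) stacks three linear layers:
print(mlp_layer_dims(4, [8, 8, 2]))  # [(4, 8), (8, 8), (8, 2)]
```

The last entry of `hidden_channels` is therefore the model's output dimension.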
`reset_parameters() -> None`

Reset the model parameters to obtain fresh values when the model is copied.
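A common PyTorch idiom for this kind of reset is to walk the layers and call each one's own `reset_parameters` when it exists. The duck-typed sketch below mirrors that idiom with stand-in classes; it is an assumption about the approach, not the actual xpdeep implementation.

```python
def reset_all(layers) -> int:
    """Call reset_parameters() on every layer that defines it; return how many were reset."""
    count = 0
    for layer in layers:
        fn = getattr(layer, "reset_parameters", None)
        if callable(fn):
            fn()
            count += 1
    return count

class FakeLinear:
    """Stand-in for a layer with learnable parameters."""
    def __init__(self):
        self.resets = 0
    def reset_parameters(self):
        self.resets += 1

class FakeDropout:
    """Stand-in for a parameter-free layer (no reset_parameters method)."""

layers = [FakeLinear(), FakeDropout(), FakeLinear()]
print(reset_all(layers))  # 2 -- only the two linear stand-ins are reset
```

In real PyTorch code the same effect is usually obtained with `module.apply(...)` over submodules.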