Extreme Learning Machine layer supporting several ELM variants.
This class represents a single hidden layer of an Extreme Learning Machine (ELM) model. It consists of a set of
hidden neurons, each with its own activation function and input weights.
Parameters:
-----------
number_neurons : int
The number of neurons in the hidden layer.
activation : str, default='tanh'
The name of the activation function applied to the neurons; it must correspond to one of the function names
in the Activation class.
act_params : dict, default=None
Additional parameters for the activation function, if needed (see the implementation of the particular function
in the Activation class).
C : float, default=None
Regularization parameter to control the degree of regularization applied to the hidden layer.
beta_optimizer : ELMOptimizer, default=None
An optimizer applied after the Moore-Penrose operation to fine-tune the output weights (beta) of the layer,
according to the loss function and optimization algorithm supplied to the optimizer.
is_orthogonalized : bool, default=False
Whether the input weights of the hidden neurons are orthogonalized; if True, orthogonalization is performed
(recommended for multilayer ELM variants; a sketch follows this parameter list).
receptive_field_generator : ReceptiveFieldGenerator, default=None
An object for generating receptive fields to constrain the input weights of the hidden neurons.
**params : dict
Additional parameters to be passed to the layer.
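As a rough illustration of the orthogonalization mentioned above, the random input weights can be orthogonalized
with a QR decomposition. This is a minimal NumPy sketch assuming number_neurons <= n_features; it is not the
layer's actual implementation:
>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> n_features, number_neurons = 784, 500
>>> alpha = rng.standard_normal((n_features, number_neurons))
>>> q, _ = np.linalg.qr(alpha)  # reduced QR: columns of q are orthonormal
>>> alpha = q                   # orthogonalized input weights, shape (784, 500)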
Attributes:
-----------
error_history : array-like, shape (n_iterations,)
Array containing the error history recorded during training (present only if an ELMOptimizer is passed).
feature_map : tensor, shape (n_samples, number_neurons)
The feature map matrix generated by the layer.
name : str, default="elm"
The name of the layer.
beta : tensor, shape (number_neurons, n_outputs) or None
The output weights matrix of the layer.
bias : tensor, shape (number_neurons,) or None
The bias vector of the layer.
alpha : tensor, shape (n_features, number_neurons) or None
The input weights matrix of the layer.
input : tensor or None
The input data passed to the layer.
output : tensor or None
The output data computed by the layer.
act_params : dict or None
Additional parameters for the activation function.
beta_optimizer : ELMOptimizer or None
The optimizer used to optimize the output weights (beta) of the layer.
is_orthogonalized : bool
Indicates whether the input weights of the hidden neurons are orthogonalized.
denoising : str or None
The type of denoising applied to the layer, passed as an additional constructor parameter; the chosen
denoising algorithm perturbs the input data to make classification more robust (a sketch follows this
attribute list).
denoising_param : float or None
The parameter used for denoising.
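As a sketch of the denoising idea mentioned above, assuming a Gaussian corruption scheme (the denoising
algorithms actually available depend on the implementation):
>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> x = rng.standard_normal((100, 784))               # hypothetical input batch
>>> x_noisy = x + 0.1 * rng.standard_normal(x.shape)  # corrupted inputs used for training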
Example:
-----------
Initialize an Extreme Learning Machine (ELM) layer with 1000 neurons and mish activation function:
>>> elm = ELMLayer(number_neurons=1000, activation='mish')
Initialize an Extreme Learning Machine (ELM) layer with 1000 neurons, mish activation function, and the
denoising mechanism enabled, which perturbs the input data with noise to make the ELM more robust:
>>> elm = ELMLayer(number_neurons=1000, activation='mish', denoising=True)
Initialize a Constrained Extreme Learning Machine (CELM) layer with 1000 neurons and constrained weights:
>>> elm = ELMLayer(number_neurons=1000, activation='mish', constrained=True)
Initialize a Regularized Extreme Learning Machine (RELM) layer where, after the Moore-Penrose operation,
an ELMOptimizer (here ISTAELMOptimizer) optimizes the weights.
First initialize the optimizer (l1 norm):
>>> optimizer = ISTAELMOptimizer(optimizer_loss='l1', optimizer_loss_reg=[0.01])
Then initialize the RELM layer with the optimizer:
>>> elm = ELMLayer(number_neurons=100, activation='mish', beta_optimizer=optimizer)
Initialize a Receptive Field Extreme Learning Machine layer with a receptive field generator:
>>> rf = ReceptiveFieldGaussianGenerator(input_size=(28, 28, 1))
Initialize a Constrained Extreme Learning Machine layer with a receptive field (RF-C-ELM):
>>> elm = ELMLayer(number_neurons=1000, activation='mish', receptive_field_generator=rf, constrained=True)
Create an ELM model using the trained ELM layer:
>>> model = ELMModel(elm)
Define a cross-validation strategy:
>>> cv = RepeatedKFold(n_splits=10, n_repeats=50)
Perform cross-validation to evaluate the model performance:
>>> scores = cross_val_score(model, X, y, cv=cv, scoring='accuracy', error_score='raise')
Print the mean accuracy score obtained from cross-validation:
>>> print(np.mean(scores))
ELMLayer.fit(self, x, y)
Fits the Extreme Learning Machine model to the given training data.
Parameters:
-----------
x : tf.Tensor
The input training data of shape (N, D), where N is the number of samples and D is the number of features.
y : tf.Tensor
The target training data of shape (N, C), where C is the number of classes or regression targets.
Returns:
-----------
None
Fits the Extreme Learning Machine model to the given input-output pairs (x, y). This method generates the weights of the hidden layer neurons, calculates the feature map, and computes the output weights for the model.
If constrained learning is enabled, the weights are generated considering the given output targets (y). If a receptive field generator is provided, it generates the receptive fields for the model's hidden neurons.
After generating the feature map (H) using the input data (x) and hidden layer weights (alpha) along with the bias, the method applies the specified activation function to the feature map.
:math:`H = f(x \\cdot \\alpha + bias)`
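In plain NumPy, the feature map computation sketched by the formula above looks roughly as follows
(hypothetical shapes; tanh stands in for the configured activation):
>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> x = rng.standard_normal((64, 20))       # (n_samples, n_features)
>>> alpha = rng.standard_normal((20, 100))  # (n_features, number_neurons)
>>> bias = rng.standard_normal(100)         # (number_neurons,)
>>> H = np.tanh(x @ alpha + bias)           # feature map, shape (64, 100)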
If a regularization term (C) is provided, it is added to the diagonal of the feature map matrix.
:math:`H = H + diag(C)`
The output weights (beta) are computed using the Moore-Penrose pseudoinverse of the feature map matrix and the target output data (y). If a beta optimizer is specified, it further optimizes the output weights.
:math:`\\beta = H^{\\dagger} T`
The feature map (H) and the output weights (beta) are stored as attributes of the model for later use.
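Continuing in the same spirit, the Moore-Penrose step can be sketched in NumPy as follows (an illustration,
not the library's exact code):
>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> H = np.tanh(rng.standard_normal((64, 20)) @ rng.standard_normal((20, 100)))
>>> T = rng.standard_normal((64, 10))  # targets, (n_samples, n_outputs)
>>> beta = np.linalg.pinv(H) @ T       # output weights, shape (100, 10)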
If a beta optimizer is provided, the method returns the optimized beta and the error history.
Example:
-----------
>>> elm = ELMLayer(number_neurons=1000, activation='mish')
>>> elm.build(train_data.shape)
>>> elm.fit(train_data, train_targets)