TfELM
ELMOptimizer.ELMOptimizer Class Reference

Public Member Functions

 optimize (self, beta, H, y)
 

Static Public Member Functions

 l1_loss (x, reg=1.0)
 
 l2_loss (x, reg=1.0)
 
 l12_loss (x, reg_l1=1.0, reg_l2=1.0)
 

Detailed Description

    Abstract base class for ELM optimizers.

    This class defines common methods for ELM optimizers.

    Methods:
    -----------
    - l1_loss(x, reg=1.0): Computes the L1 loss.
    - l2_loss(x, reg=1.0): Computes the L2 loss.
    - l12_loss(x, reg_l1=1.0, reg_l2=1.0): Computes the combined L1 and L2 loss.
    - optimize(beta, H, y): Optimizes the beta weights.

    Note:
    -----------
    Subclasses must implement the optimize method.

    Examples:
    -----------
    Initialize an optimizer using the L1 loss

    >>> optimizer = ISTAELMOptimizer(optimizer_loss='l1', optimizer_loss_reg=[0.01])

    Initialize a Regularized Extreme Learning Machine (ELM) layer with the optimizer

    >>> elm = ELMLayer(number_neurons=num_neurons, activation='mish', beta_optimizer=optimizer)
    >>> model = ELMModel(elm)

    Fit the ELM model to the entire dataset

    >>> model.fit(X, y)
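
    A concrete optimizer has to override optimize, as the note above requires. The sketch below is a minimal illustration only: the import path, the subclass name GradientELMOptimizer, and the plain gradient-descent update are assumptions rather than the library's ISTAELMOptimizer, and l2_loss is assumed to return a scalar tensor.

    >>> import tensorflow as tf
    >>> from ELMOptimizer import ELMOptimizer  # import path assumed
    >>> class GradientELMOptimizer(ELMOptimizer):  # hypothetical subclass
    ...     def __init__(self, steps=100, lr=0.01, reg=0.01):
    ...         self.steps, self.lr, self.reg = steps, lr, reg
    ...     def optimize(self, beta, H, y):
    ...         # Gradient descent on an L2-regularized squared error (illustrative only)
    ...         beta = tf.Variable(beta)
    ...         for _ in range(self.steps):
    ...             with tf.GradientTape() as tape:
    ...                 residual = tf.matmul(H, beta) - y
    ...                 loss = tf.reduce_mean(tf.square(residual)) + ELMOptimizer.l2_loss(beta, self.reg)
    ...             beta.assign_sub(self.lr * tape.gradient(loss, beta))
    ...         return tf.convert_to_tensor(beta)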

Member Function Documentation

◆ l12_loss()

ELMOptimizer.ELMOptimizer.l12_loss(x, reg_l1=1.0, reg_l2=1.0)  [static]
    Computes the combined L1 and L2 loss.

    Parameters:
    -----------
    - x: Input tensor.
    - reg_l1 (float): L1 regularization parameter. Defaults to 1.0.
    - reg_l2 (float): L2 regularization parameter. Defaults to 1.0.

    Returns:
    -----------
    - Combined L1 and L2 loss.

    Examples:
    -----------
    Initialize an optimizer using the combined L1/L2 loss

    >>> optimizer = ISTAELMOptimizer(optimizer_loss='l12', optimizer_loss_reg=[0.01, 0.05])
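
    The page does not show the formula itself; the comparison below assumes the usual elastic-net form reg_l1 * sum(|x|) + reg_l2 * sum(x^2), written out in TensorFlow, and the import path is likewise an assumption.

    >>> import tensorflow as tf
    >>> from ELMOptimizer import ELMOptimizer  # import path assumed
    >>> x = tf.constant([[0.5], [-1.5], [2.0]])
    >>> combined = ELMOptimizer.l12_loss(x, reg_l1=0.01, reg_l2=0.05)
    >>> # Presumed equivalent: 0.01 * sum(|x|) + 0.05 * sum(x^2)
    >>> 0.01 * tf.reduce_sum(tf.abs(x)) + 0.05 * tf.reduce_sum(tf.square(x))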

◆ l1_loss()

ELMOptimizer.ELMOptimizer.l1_loss(x, reg=1.0)  [static]
    Computes the L1 loss.

    Parameters:
    -----------
    - x: Input tensor.
    - reg (float): Regularization parameter. Defaults to 1.0.

    Returns:
    -----------
    - L1 loss.

    Examples:
    -----------
    Initialize an optimizer using the L1 loss

    >>> optimizer = ISTAELMOptimizer(optimizer_loss='l1', optimizer_loss_reg=[0.01])
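
    Because l1_loss is static, it can also be called on a tensor directly; the expression it is compared with below is the standard reg * sum(|x|) penalty, which is an assumption about the implementation (as is the import path).

    >>> import tensorflow as tf
    >>> from ELMOptimizer import ELMOptimizer  # import path assumed
    >>> x = tf.constant([[0.5], [-1.5], [2.0]])
    >>> penalty = ELMOptimizer.l1_loss(x, reg=0.01)
    >>> # Presumed equivalent: 0.01 * sum(|x|)
    >>> 0.01 * tf.reduce_sum(tf.abs(x))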

◆ l2_loss()

ELMOptimizer.ELMOptimizer.l2_loss(x, reg=1.0)  [static]
    Computes the L2 loss.

    Parameters:
    -----------
    - x: Input tensor.
    - reg (float): Regularization parameter. Defaults to 1.0.

    Returns:
    -----------
    - L2 loss.

    Examples:
    -----------
    Initialize an optimizer using the L2 loss

    >>> optimizer = ISTAELMOptimizer(optimizer_loss='l2', optimizer_loss_reg=[0.01])
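
    Building a regularized objective by hand is one place the static L2 penalty is handy; the sketch below assumes l2_loss returns a scalar tensor of the form reg * sum(x^2), and the import path and tensor shapes are illustrative.

    >>> import tensorflow as tf
    >>> from ELMOptimizer import ELMOptimizer  # import path assumed
    >>> H = tf.random.normal((32, 10))
    >>> y = tf.random.normal((32, 1))
    >>> beta = tf.random.normal((10, 1))
    >>> objective = tf.reduce_mean(tf.square(tf.matmul(H, beta) - y)) + ELMOptimizer.l2_loss(beta, reg=0.01)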

◆ optimize()

ELMOptimizer.ELMOptimizer.optimize(self, beta, H, y)
    Optimizes the beta weights.

    Parameters:
    -----------
    - beta: Beta weights tensor.
    - H: Feature map tensor.
    - y: Target tensor.

    Returns:
    -----------
    - Optimized beta weights tensor.
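
    Examples:
    -----------
    A typical call site warm-starts beta with a least-squares solution and lets a concrete optimizer refine it. The shapes and the lstsq warm start below are illustrative assumptions; the return value follows the documented "optimized beta weights tensor".

    >>> import tensorflow as tf
    >>> H = tf.random.normal((100, 50))   # hidden-layer feature map
    >>> y = tf.random.normal((100, 1))    # targets
    >>> beta0 = tf.linalg.lstsq(H, y)     # closed-form warm start (illustrative)
    >>> optimizer = ISTAELMOptimizer(optimizer_loss='l1', optimizer_loss_reg=[0.01])
    >>> beta = optimizer.optimize(beta0, H, y)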
