Class Layer

Inheritance Relationships

Derived Types

  • LayerDimlp
  • LayerDimlp2
  • LayerDimlp3
  • LayerDimlp4
  • LayerFdimlp
  • LayerRad
  • LayerSD
  • LayerSP3
  • LayerSP4
  • LayerSP5

Class Documentation

class Layer

The Layer class represents a neural network layer, providing forward and backward propagation, weight updates, and activation functions.

Subclassed by LayerDimlp, LayerDimlp2, LayerDimlp3, LayerDimlp4, LayerFdimlp, LayerRad, LayerSD, LayerSP3, LayerSP4, LayerSP5

Public Functions

void InitWeights()

Initializes the weights of the layer.

inline int GetNbDown() const

Gets the number of input neurons.

Returns:

Number of input neurons.

inline int GetNbUp() const

Gets the number of output neurons.

Returns:

Number of output neurons.

inline float *GetDown()

Gets the input values to the layer.

Returns:

Pointer to the input values.

inline float *GetUp()

Gets the output values from the layer.

Returns:

Pointer to the output values.

inline float *GetDeltaUp()

Gets the delta values for the output neurons.

Returns:

Pointer to the delta values.

inline float *GetWeights()

Gets the weights of the connections.

Returns:

Pointer to the weights.

inline float *GetBias()

Gets the bias weights.

Returns:

Pointer to the bias weights.

inline void SetDown(float pat[])

Sets the input values to the layer.

Parameters:

pat – Pointer to the input values.

inline void SetDeltaDown(float pat[])

Sets the delta values for the input neurons.

Parameters:

pat – Pointer to the delta values.

inline virtual float Activation1(float x)

Applies the first activation function; the default implementation is the sigmoid.

Parameters:

x – Input value.

Returns:

Activated value.

inline virtual float Activation2(float x)

Applies the second activation function; the default implementation is the sigmoid.

Parameters:

x – Input value.

Returns:

Activated value.
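
Because both functions are virtual, derived layers can substitute their own transfer functions. A minimal sketch, assuming the Layer header is available; LayerTanh is a hypothetical subclass for illustration, not one of the library's derived types:

    #include <cmath>

    // Hypothetical subclass that swaps the default sigmoid for tanh
    // by overriding the virtual Activation1.
    class LayerTanh : public Layer {
    public:
      using Layer::Layer; // inherit the Layer constructor

      float Activation1(float x) override {
        return std::tanh(x); // replaces the default 1 / (1 + exp(-x))
      }
    };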

inline virtual float HalfErrFunct(int nbTar, const std::vector<float> &netOut, const std::vector<float> &target)

Computes the half error function; the default implementation is half the mean squared error.

Parameters:
  • nbTar – Number of target values.
  • netOut – Network output values.
  • target – Target values.

Returns:

Error value.
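
As a reference for the formula, a standalone sketch of a half squared-error term is shown below; whether the library's default divides by nbTar (a true mean) is not specified here, so this sketch uses the plain half sum of squares:

    #include <vector>

    // Half squared error: E = 1/2 * sum_i (netOut[i] - target[i])^2
    float halfSquaredError(int nbTar, const std::vector<float> &netOut,
                           const std::vector<float> &target) {
      float err = 0.0f;
      for (int i = 0; i < nbTar; i++) {
        float diff = netOut[i] - target[i];
        err += diff * diff;
      }
      return 0.5f * err;
    }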

void ReadWeights(std::istream &inFile)

Reads the weights from a file.

Parameters:

inFile – Input file stream.

void WriteWeights(std::ostream &outFile)

Writes the weights to a file.

Parameters:

outFile – Output file stream.
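
ReadWeights and WriteWeights pair naturally for persisting a trained layer. A usage sketch with standard file streams; the file name is illustrative:

    #include <fstream>

    void saveAndReload(Layer &layer) {
      // Persist the current weights to disk.
      std::ofstream out("layer_weights.wts");
      layer.WriteWeights(out);
      out.close();

      // Later: restore the weights from the same file.
      std::ifstream in("layer_weights.wts");
      layer.ReadWeights(in);
    }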

void PushWeights()

Pushes the current weights to the validated weights.

void PopWeights()

Restores the validated weights as the current weights.
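
Together, PushWeights and PopWeights support a simple validation checkpoint: push whenever the validation error improves, pop once training ends to recover the best weights seen. A hedged sketch of that pattern; trainOneEpoch and validationError are caller-supplied placeholders, not library functions:

    #include <functional>

    void trainWithCheckpoint(Layer &layer, int nbEpochs,
                             const std::function<void(Layer &)> &trainOneEpoch,
                             const std::function<float(Layer &)> &validationError) {
      layer.PushWeights(); // checkpoint the starting weights
      float best = validationError(layer);
      for (int epoch = 0; epoch < nbEpochs; epoch++) {
        trainOneEpoch(layer);
        float err = validationError(layer);
        if (err < best) {
          best = err;
          layer.PushWeights(); // checkpoint the improved weights
        }
      }
      layer.PopWeights(); // restore the best validated weights
    }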

void ForwSpec()

Performs forward propagation using a specific method.

void ForwSpec2()

Performs forward propagation using a second specific method.

void ForwRadial()

Performs forward propagation for radial basis functions.

inline virtual void ForwLayer()

Performs forward propagation for the layer.

inline void ForwAndTransf1()

Performs forward propagation and applies the first activation function.

inline void ForwAndTransf2()

Performs forward propagation and applies the second activation function.
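
Taken together with SetDown and GetUp, these give a one-layer forward step. A minimal sketch, assuming ForwAndTransf1 both propagates and applies Activation1 as described above:

    // One forward pass through a single layer.
    float *forwardOnce(Layer &layer, float pattern[]) {
      layer.SetDown(pattern); // attach the input pattern
      layer.ForwAndTransf1(); // weighted sums + first activation
      return layer.GetUp();   // pointer to the layer's outputs
    }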

void ComputeDeltaOut(const float target[])

Computes the delta values for the output neurons.

Parameters:

target – Target values.

void ComputeDeltaDownSpec2()

Computes the delta values for the input neurons using a specific method.

inline virtual void ComputeDeltaDown()

Computes the delta values for the input neurons.

void AdaptBiasSpec2()

Updates the bias weights using a specific method.

inline virtual void AdaptBias()

Updates the bias weights.

void AdaptWeightsSpec()

Updates the weights using a specific method.

void AdaptWeightsSpec2()

Updates the weights using a second specific method.

inline virtual void AdaptWeights()

Updates the weights.

inline void BackLayer()

Performs backward propagation for the layer.

inline void BackLayerWithout()

Performs backward propagation for the layer without computing delta values for the input neurons.
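
For a stack of layers, the usual ordering is ComputeDeltaOut on the output layer, BackLayer on intermediate layers, and BackLayerWithout on the first hidden layer, whose inputs need no deltas. A hedged sketch for a two-layer network; it assumes the layers' delta buffers were wired together when the network was built:

    // One backward pass through a two-layer network ('hidden' feeds 'output').
    void backwardOnce(Layer &hidden, Layer &output, const float target[]) {
      output.ComputeDeltaOut(target); // deltas at the network output
      output.BackLayer();             // adapt output layer, propagate deltas down
      hidden.BackLayerWithout();      // adapt hidden layer; no input deltas needed
    }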

inline void SetEtas(float etaCentre, float etaSpread)

Sets the learning rates for radial basis functions.

Parameters:
  • etaCentre – Learning rate for the center.
  • etaSpread – Learning rate for the spread.

virtual ~Layer() = default

Virtual default destructor for the Layer class, allowing derived layers to be destroyed through a base-class pointer.

Layer(float eta, float mu, float flat, int nbDown, int nbUp, int nbWeights, int nbWeightsForInit)

Constructor for the Layer class.

Parameters:
  • eta – Learning rate.
  • mu – Momentum factor.
  • flat – Flatness factor.
  • nbDown – Number of neurons in the previous layer.
  • nbUp – Number of neurons in this layer.
  • nbWeights – Number of weights in this layer.
  • nbWeightsForInit – Number of weights used for initialization.
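
A construction sketch with illustrative hyperparameter values; the fully connected weight count nbDown * nbUp is an assumption, as is the premise that Layer itself is instantiable (all of its virtual functions have default bodies):

    const int nbIn = 4;           // neurons in the previous layer
    const int nbOut = 3;          // neurons in this layer
    const int nbW = nbIn * nbOut; // assumed fully connected

    Layer layer(0.1f, // eta: learning rate
                0.6f, // mu: momentum factor
                0.0f, // flat: flatness factor
                nbIn, nbOut,
                nbW,  // nbWeights
                nbW); // nbWeightsForInit
    layer.InitWeights(); // randomize the initial weights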