layer.h - justinmgarrigus/VGG-Simulation GitHub Wiki

Enums

enum layer_type: The different types of layers a network can contain:

  • layer_type_convolutional = 1
  • layer_type_max_pooling = 2
  • layer_type_flatten = 3
  • layer_type_dense = 4

enum layer_activation: The different activation functions that can be applied to each layer.

  • layer_activation_relu = 1
  • layer_activation_softmax = 2

Properties

int weight_set_count: The number of weight sets the layer holds.

ndarray **weights: The weights themselves: an array of weight_set_count pointers, where each entry points to the ndarray holding that weight set's values.

void (*feed)(layer*, layer*): A pointer to a function which accepts two layers as parameters. This function performs the feedforward operation: the first parameter is the source of the input signals and the second is the layer being operated on. The results of the operation are written to the second layer's outputs property.

float (*activation)(float): A pointer to a function which performs the activation for a layer. This may be NULL, depending on the contents of the feed operation.

ndarray *outputs: The outputs of the layer. The contents depend on the value of the feed property.

Functions

layer* layer_create(int weight_set_count, int* weight_set_shape_count, int** weight_set_shapes, float*** weights, enum layer_type type, enum layer_activation activation): Initializes a new layer, assigning the values as properties. For type, it sets the feed property to the correct function.

void layer_free(layer* layer): Frees the data located inside the given layer.

Feedforward

void layer_convolutional_feedforward(layer* input_layer, layer* conv_layer): Performs a convolutional feedforward operation, where the inputs are supplied from input_layer and the operation occurs on conv_layer.

void layer_max_pooling_feedforward(layer* input_layer, layer* pool_layer): Performs a max pooling feedforward operation, where the inputs are supplied from input_layer and the operation occurs on pool_layer.

void layer_flatten_feedforward(layer* input_layer, layer* flatten_layer): Performs a flatten feedforward operation, where the inputs are supplied from input_layer and the operation occurs on flatten_layer.

void layer_dense_feedforward(layer* input_layer, layer* dense_layer): Performs a dense feedforward operation, where the inputs are supplied from input_layer and the operation occurs on dense_layer.

Activation

float layer_relu(float value): Returns the result of the ReLU activation function applied to the value.

float layer_softmax(float value): Returns the result of the softmax activation function applied to the value.