
Keras layers – Parameters and Properties


Layers are the primary unit for creating neural networks. We compose a deep learning architecture by adding successive layers: each layer performs some computation on the input it receives and then propagates the output to the next layer. At last, we obtain the desired result from the output of the final layer. In this Keras article, we will walk through the different types of Keras layers, their properties, and their parameters.

Keras Layers

To define or create a Keras layer, we need the following information:

- Shape of the input, so the layer knows what data to expect
- Number of units (neurons) in the layer
- Initializers, which set the initial weights
- Regularizers and constraints, which restrict the weights during training
- Activations, which introduce non-linearity

Different Layers in Keras

1. Core Keras Layers

The Dense layer implements a fully connected operation:

output = activation(dot(input, kernel) + bias)

Here, "activation" is the activation function, "kernel" is a weight matrix applied to the input tensors, and "bias" is a constant vector that helps the model fit the data better.

The Dense layer receives input from all the nodes of the previous layer. It takes the following arguments, shown with their default values:

Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
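
For instance, here is a minimal sketch of a Dense layer in use (the layer sizes are illustrative, and we assume the tensorflow.keras import path):

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(4,))                           # 4 features per sample
outputs = Dense(units=8, activation='relu')(inputs)  # kernel shape (4, 8), bias shape (8,)
model = Model(inputs, outputs)
model.summary()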

The Activation layer applies an activation function to its input. It is equivalent to passing activation directly to the Dense layer, and it has the following argument:

Activation(activation_function)

If you do not specify activation_function, the layer applies a linear (identity) activation.
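
As a small sketch of this equivalence (the sizes are illustrative), the two models below define the same computation:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Activation

# Both apply relu(dot(input, kernel) + bias)
model_a = Sequential([Input(shape=(4,)), Dense(8, activation='relu')])
model_b = Sequential([Input(shape=(4,)), Dense(8), Activation('relu')])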

We use Dropout in a neural network to prevent overfitting. At each training update, it randomly selects a fraction (rate) of the units and sets them to 0.

It has the following arguments:

Dropout(rate, noise_shape=None, seed=None)
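
A minimal sketch of Dropout placed between two Dense layers (the rate of 0.2 and the layer sizes are illustrative):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

model = Sequential([
    Input(shape=(20,)),
    Dense(64, activation='relu'),
    Dropout(0.2),                    # zeroes 20% of the units at each training update
    Dense(1, activation='sigmoid'),  # Dropout is a no-op at inference time
])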


We use Flatten to collapse the input into a single dimension per sample.

For example, an input of shape (batch_size, 3, 2) is flattened to an output of shape (batch_size, 6). It has the following argument:

Flatten(data_format=None)
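
A small sketch of this shape change (the batch size of 32 is illustrative):

import numpy as np
from tensorflow.keras.layers import Flatten

x = np.zeros((32, 3, 2))   # (batch_size, 3, 2)
y = Flatten()(x)
print(y.shape)             # (32, 6)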

We use the Input layer as the entry point into the model graph; a Keras model can then be created from just the inputs and outputs.

It has the following arguments:

Input(shape, batch_shape=None, name=None, dtype=None, sparse=False, tensor=None)
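
A minimal sketch of building a model starting from an Input layer (the shapes are illustrative):

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(784,))     # entry point: 784 features per sample
outputs = Dense(10, activation='softmax')(inputs)
model = Model(inputs=inputs, outputs=outputs)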

The Reshape layer reshapes its input to a given target shape.

Argument:

Reshape(target_shape)

Gives output of shape:

(batch_size,) + target_shape
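
A small sketch of Reshape (the shapes are illustrative):

import numpy as np
from tensorflow.keras.layers import Reshape

x = np.zeros((32, 6))                # (batch_size, 6)
y = Reshape(target_shape=(3, 2))(x)
print(y.shape)                       # (32, 3, 2) == (batch_size,) + target_shape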

The Permute layer rearranges the dimensions of the input according to a given pattern, which is useful when a following layer expects a different input shape.

Arguments:

Permute(dims)
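
A small sketch of Permute swapping the last two axes (the shapes are illustrative; the batch axis always stays first):

import numpy as np
from tensorflow.keras.layers import Permute

x = np.zeros((32, 10, 64))     # (batch_size, timesteps, features)
y = Permute(dims=(2, 1))(x)    # swap dimensions 1 and 2
print(y.shape)                 # (32, 64, 10)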

We use the Lambda layer to wrap an arbitrary function as a layer, adding behavior that Keras does not provide out of the box.

Arguments:

Lambda(function, output_shape=None, mask=None, arguments=None)
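
For example, a minimal sketch of a Lambda layer that doubles its input:

import numpy as np
from tensorflow.keras.layers import Lambda

double = Lambda(lambda x: x * 2)         # wraps an arbitrary function as a layer
print(double(np.array([[1.0, 2.0]])))    # [[2. 4.]]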

We use the Masking layer to skip a timestep when all of its features are equal to mask_value.

Arguments:

Masking(mask_value=0.0)
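
A minimal sketch of Masking in front of a recurrent layer (the shapes are illustrative):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Masking, LSTM

# Timesteps whose 4 features are all 0.0 are skipped by the LSTM
model = Sequential([
    Input(shape=(10, 4)),
    Masking(mask_value=0.0),
    LSTM(8),
])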

2. Convolution Layers of Keras

Here we define a weight kernel and convolve it across the input to produce the output tensor. Conv1D convolves over a single (temporal) dimension, while Conv2D convolves over two (spatial) dimensions, as in images.

Arguments:

Conv1D(filters, kernel_size, strides=1, padding='valid', data_format='channels_last', dilation_rate=1, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

Conv2D(filters, kernel_size, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
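
A minimal sketch of a Conv2D layer on single-channel images (the filter count and image size are illustrative):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D

# 16 filters of size 3x3 slide over 28x28 single-channel images
model = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(filters=16, kernel_size=(3, 3), activation='relu'),
])
model.summary()   # output shape (None, 26, 26, 16) with padding='valid'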

3. Pooling Layers

We use pooling to reduce the size of the input and extract important information.

MaxPooling extracts the maximum value from each pooling window.

Arguments:

MaxPooling1D(pool_size=2, strides=None, padding='valid', data_format='channels_last')

MaxPooling2D(pool_size=(2,2), strides=None, padding='valid', data_format='channels_last')

AveragePooling takes the average value of each pooling window.

Arguments:

AveragePooling1D(pool_size=2, strides=None, padding='valid', data_format='channels_last')

AveragePooling2D(pool_size=(2,2), strides=None, padding='valid', data_format='channels_last')
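
A small sketch comparing the two 2D pooling layers (the shapes are illustrative):

import numpy as np
from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D

x = np.random.rand(1, 4, 4, 1)                       # (batch, height, width, channels)
print(MaxPooling2D(pool_size=(2, 2))(x).shape)       # (1, 2, 2, 1)
print(AveragePooling2D(pool_size=(2, 2))(x).shape)   # (1, 2, 2, 1)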

4. Recurrent Layers

We use recurrent layers to process sequence data, e.g., time series or natural language.

SimpleRNN is a fully connected RNN in which the output of the layer is fed back into the input.

Arguments:

SimpleRNN(units, activation, use_bias, kernel_initializer, recurrent_initializer, bias_initializer, kernel_regularizer, recurrent_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, recurrent_constraint, bias_constraint, dropout, recurrent_dropout, return_sequences, return_state)
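
A minimal sketch of SimpleRNN over a sequence input (the sizes are illustrative):

from tensorflow.keras.layers import Input, SimpleRNN
from tensorflow.keras.models import Model

inputs = Input(shape=(10, 4))          # 10 timesteps, 4 features each
outputs = SimpleRNN(units=8)(inputs)   # final hidden state, shape (batch, 8)
model = Model(inputs, outputs)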

LSTM (Long Short-Term Memory) is an extended form of RNN that has internal memory to retain information across long sequences. It has the following arguments:

LSTM(units, activation, recurrent_activation, use_bias, kernel_initializer, recurrent_initializer, bias_initializer, unit_forget_bias, kernel_regularizer, recurrent_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, recurrent_constraint, bias_constraint, dropout, recurrent_dropout, implementation, return_sequences, return_state)
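
A minimal sketch of LSTM returning the full sequence of hidden states (the sizes are illustrative):

from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

inputs = Input(shape=(10, 4))
# return_sequences=True emits the hidden state at every timestep
outputs = LSTM(units=8, return_sequences=True)(inputs)   # shape (batch, 10, 8)
model = Model(inputs, outputs)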

Keras provides many other layers, but in practice we mostly work with the ones described above.

Summary

This article explains the concept of layers in building Keras models and the basic attributes required to build a layer.
We then discussed the different types of Keras layers, i.e., Core Layers, Convolution Layers, Pooling Layers, and Recurrent Layers, along with their properties and parameters.

Any suggestions or changes are most welcome in the comment section.
