3-1-1. Activation function:
ReLU (Rectified Linear Unit) [12] is a commonly used activation function in neural networks. It replaces all negative values in the input with 0 and passes positive values through unchanged. Mathematically, ReLU is defined as y = max(0, x), where x is the input and y is the output. ReLU is preferred over the sigmoid and hyperbolic tangent activations for hidden layers because it introduces non-linearity while being far less prone to the vanishing gradient problem: its gradient is 1 for positive inputs rather than saturating toward 0, as the saturating activations do. A brief numerical illustration is given below.
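As an illustrative sketch (not tied to any particular framework used in this work), the following Python code implements ReLU and its gradient with NumPy; the input values are arbitrary examples chosen only to show the clipping behaviour.

import numpy as np

def relu(x):
    # ReLU: returns x for positive inputs, 0 for negative inputs.
    return np.maximum(0, x)

def relu_gradient(x):
    # Gradient of ReLU: 1 where x > 0, 0 elsewhere (saturation only on the negative side).
    return (x > 0).astype(x.dtype)

# Example inputs (hypothetical values for illustration).
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))           # [0.  0.  0.  1.5 3. ]
print(relu_gradient(x))  # [0. 0. 0. 1. 1.]

The constant unit gradient on the positive side is what keeps backpropagated gradients from shrinking layer after layer, in contrast to sigmoid and hyperbolic tangent, whose derivatives are bounded well below 1 over most of their range.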