Multivariate and Univariate Prediction of Stock Prices using an Optimized Gated Recurrent Unit with a Time Lag Proportional to the Wavelet Approximation Coefficient
  • Luyandza Sindi Mamba

Corresponding Author: [email protected]



The development of precise prediction models remains valuable across a wide range of fields. Deep learning models have demonstrated strong performance and high accuracy in stock price prediction. However, these models are significantly affected by the vanishing gradient problem, which some activation functions exacerbate. To combat poor convergence, vanishing gradients, and large error metrics, this study proposes an Optimized Gated Recurrent Unit (OGRU) model with a time lag derived from a scaled mean of the wavelet Approximation Coefficient (AC). The study employed the Rectified Linear Unit (ReLU), Hyperbolic Tangent (Tanh), Sigmoid, and Exponential Linear Unit (ELU) activation functions. Real-life datasets were used, namely the daily Apple and 5-minute Netflix closing stock prices, decomposed using the Stationary Wavelet Transform (SWT). The decomposed series formed a multivariate model, which was compared to a univariate model with similar hyperparameters and different default lags. The Apple daily dataset performed best with the Default_1 lag, a univariate model, and the ReLU, attaining an RMSE of 0.01312, an MAE of 0.00854, and a runtime of 3.67 minutes. The Netflix data performed best with the MeanAC_42 lag, a multivariate model, and the ELU, achieving 0.00620, 0.00487, and 3.01 minutes for the same metrics. The study concluded that the OGRU is made resilient to the vanishing gradient problem by avoiding the Sigmoid activation function and, for high-frequency data, by applying the proposed lag to SWT-decomposed series with the ELU activation function.
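The abstract describes two preprocessing steps: decomposing the closing-price series with the SWT, and setting the model's time lag from a scaled mean of the approximation coefficients. The sketch below illustrates both steps with a hand-rolled one-level Haar SWT in NumPy. Note that the paper's exact wavelet, decomposition level, and AC-scaling rule are not given in the abstract, so the `scale` parameter and the `mean_ac_lag` rule here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def haar_swt_level1(x):
    """One level of the undecimated (stationary) wavelet transform with the
    Haar wavelet: returns same-length approximation and detail series.
    Circular boundary handling is an assumption for this illustration."""
    x = np.asarray(x, dtype=float)
    x_prev = np.roll(x, 1)
    approx = (x + x_prev) / np.sqrt(2.0)   # low-frequency (approximation) part
    detail = (x - x_prev) / np.sqrt(2.0)   # high-frequency (detail) part
    return approx, detail

def mean_ac_lag(approx, scale=100):
    """Hypothetical lag rule: scale the mean absolute approximation
    coefficient and round to a whole number of time steps (>= 1).
    The paper's actual scaling is not specified in the abstract."""
    return max(1, int(round(scale * np.abs(approx).mean())))

# Toy "closing price" series normalised to [0, 1] as a stand-in for real data.
prices = (np.sin(np.linspace(0, 8 * np.pi, 256)) + 1) / 2
a, d = haar_swt_level1(prices)
lag = mean_ac_lag(a)

# The approximation and detail series (plus the lagged original) would then
# feed the multivariate model; the univariate model would use prices alone.
```

For the Haar pair above, the original series is recoverable as `(a + d) / sqrt(2)`, which is a quick sanity check that the decomposition preserves the data before it is fed to the GRU.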