The data set was created by combining the data collected at a single center. The long short-term memory (LSTM) algorithm was preferred for the model developed to predict the classification type according to temperature and humidity values. LSTM is a recurrent neural network (RNN) architecture capable of remembering values over arbitrary time intervals [32]. An LSTM is well suited to classifying, processing and predicting time series when there are time delays of unknown size and duration between significant events [33].
Basically, the internal structure of the LSTM architecture consists of an input gate (Equation 4), a forget gate (Equation 5), an output gate (Equation 7) and a cell candidate (input) layer (Equation 6). In the LSTM architecture, it is first decided which of the information carried by the inputs \(x_t\) and \(h_{t-1}\) will be discarded. This is done by a sigmoid layer (Equation 3) called the forget gate layer [34].
\(\sigma\left(x\right)=\left(1+e^{-x}\right)^{-1}\) (3)
\(i_{t}=\sigma\left(W_{i}x_{t}+R_{i}h_{t-1}+b_{i}\right)\) (4)
\(f_{t}=\sigma\left(W_{f}x_{t}+R_{f}h_{t-1}+b_{f}\right)\) (5)
\(g_{t}=\tanh\left(W_{g}x_{t}+R_{g}h_{t-1}+b_{g}\right)\) (6)
\(o_{t}=\sigma\left(W_{o}x_{t}+R_{o}h_{t-1}+b_{o}\right)\) (7)
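For illustration, the gate computations in Equations 3–7 can be expressed in a few lines of NumPy; the weight matrices \(W\), recurrent matrices \(R\) and bias vectors \(b\) below are placeholders rather than the trained parameters of the proposed model.

```python
import numpy as np

def sigmoid(x):
    # Equation 3: logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def lstm_gates(x_t, h_prev, W, R, b):
    # W, R, b are dictionaries of input weights, recurrent weights and
    # biases keyed by gate name ('i', 'f', 'g', 'o'); their shapes are
    # assumed to be consistent with x_t and h_prev.
    i_t = sigmoid(W['i'] @ x_t + R['i'] @ h_prev + b['i'])    # input gate,   Eq. 4
    f_t = sigmoid(W['f'] @ x_t + R['f'] @ h_prev + b['f'])    # forget gate,  Eq. 5
    g_t = np.tanh(W['g'] @ x_t + R['g'] @ h_prev + b['g'])    # cell candidate, Eq. 6
    o_t = sigmoid(W['o'] @ x_t + R['o'] @ h_prev + b['o'])    # output gate,  Eq. 7
    return i_t, f_t, g_t, o_t
```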
In the LSTM model, the first step after dataset partitioning is to decide which information will be discarded from the cell state. This decision is made by the sigmoid layer called the forget gate layer. It is then decided which new information will be stored in the cell state; the input gate layer is used for this, and a tanh layer produces a vector of candidate values that could be added to the state. The cell state is updated by combining this new information with the retained part of the old state. The output is a filtered version of the cell state: the output gate, a sigmoid layer, decides which parts of the cell state will be output, and this is multiplied by the tanh of the cell state to form the hidden state. For classification, a softmax layer on top of the LSTM output produces the class predictions.
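The state update described in this paragraph corresponds to the standard LSTM recurrence \(c_t = f_t \odot c_{t-1} + i_t \odot g_t\) and \(h_t = o_t \odot \tanh(c_t)\). A short sketch continuing the NumPy example above illustrates it; the classification weights W_y and b_y mentioned in the comment are illustrative assumptions, not parameters of the proposed model.

```python
def lstm_step(x_t, c_prev, h_prev, W, R, b):
    # Uses sigmoid() and lstm_gates() from the previous sketch.
    i_t, f_t, g_t, o_t = lstm_gates(x_t, h_prev, W, R, b)
    # Forget part of the old cell state and add the new candidate values.
    c_t = f_t * c_prev + i_t * g_t
    # The output gate filters the tanh-squashed cell state.
    h_t = o_t * np.tanh(c_t)
    return c_t, h_t

def softmax(z):
    # Converts class scores into probabilities.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Classification on the final hidden state (W_y, b_y are illustrative):
# y_prob = softmax(W_y @ h_t + b_y)
```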
In the experiments conducted to develop the proposed LSTM model and evaluate its performance, an Intel(R) i9 3.2 GHz processor with 64 GB of RAM was used, and the model was implemented in the Python programming language within the Spyder interface. The dataset was randomly partitioned into 80% for training and 20% for testing. The hyperparameters used in the LSTM model are given in Table 3.
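As an illustration of this setup, a minimal sketch of the random 80/20 partition and of training an LSTM classifier is given below. The Keras API, the file names and all numeric values are assumptions made for illustration only; the hyperparameters actually used are those listed in Table 3.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# X: sequences of temperature and humidity readings, shape (samples, timesteps, 2)
# y: integer class labels; file names are placeholders
X = np.load("features.npy")
y = np.load("labels.npy")

# Random 80% / 20% train-test partition
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Minimal LSTM classifier; layer size, optimizer, epochs and batch size
# are placeholders, not the values reported in Table 3
model = Sequential([
    LSTM(64, input_shape=X.shape[1:]),
    Dense(len(np.unique(y)), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))
```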
Table 3. Hyperparameters and values of the model