The generation of a predictive model relies heavily on data-set collection and model selection. A DNN is an end-to-end network whose layers fall into three categories: an input layer, hidden layers, and an output layer. The input layer takes the characteristic parameters of the collected food as input. Training a DNN model consists mainly of forward propagation and back propagation. Forward propagation applies a series of linear operations using the weight matrices \(\omega^{i}\ (i=1,2,\ldots,n)\) and bias vectors \(b^{i}\), \(y^{i}=\omega^{i}x^{i}+b^{i}\); an activation function then transforms each linear output nonlinearly, and this transformation takes place in the hidden layers. After forward propagation, the difference between the predicted output \(\hat{y}\) and the actual output \(y\) defines the loss function \(L\left(\theta\right)=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}\).
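The forward pass and the loss described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the choice of ReLU as the hidden activation and a linear output layer are assumptions for a regression setting.

```python
import numpy as np

def forward(x, weights, biases):
    """Forward propagation: each hidden layer applies a linear map
    y = W @ x + b followed by a nonlinear activation (ReLU here)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, W @ a + b)   # hidden layer: linear op + ReLU
    # output layer: linear only (regression)
    return weights[-1] @ a + biases[-1]

def mse_loss(y_pred, y_true):
    """Mean squared error L(theta) = (1/n) * sum((y_i - y_hat_i)^2)."""
    return np.mean((y_true - y_pred) ** 2)
```

Each weight matrix plays the role of one \(\omega^{i}\) and each bias vector one \(b^{i}\) in the equations above.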
Once the loss function is obtained, the weights \(\omega^{i}\) and bias vectors \(b^{i}\) are updated iteratively by the gradient descent method. Repeating this process yields the optimal weights and biases; consequently, the model requires a large amount of data to learn. The overall process is shown in Figure 1(A).
Figure 1. (A) Predictive model generation process; (B) convolutional and pooling layers; (C) schematic diagram of biosensor components.
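A single gradient-descent update can be made concrete for the simplest case, a one-layer linear model under the MSE loss. This sketch differentiates \(L=\frac{1}{n}\sum(y_{i}-\hat{y}_{i})^{2}\) analytically; the learning rate and model size are illustrative assumptions.

```python
import numpy as np

def gradient_descent_step(W, b, X, y, lr=0.01):
    """One gradient-descent update for a linear model y_hat = X @ W + b
    under the MSE loss; the gradients follow from differentiating
    L = (1/n) * sum((y_hat_i - y_i)^2) with respect to W and b."""
    n = X.shape[0]
    y_hat = X @ W + b
    err = y_hat - y                      # prediction error
    grad_W = (2.0 / n) * (X.T @ err)     # dL/dW
    grad_b = (2.0 / n) * err.sum()       # dL/db
    return W - lr * grad_W, b - lr * grad_b
```

Calling this step repeatedly drives the loss down, which is exactly the iterative update of the weights and biases described above; for a multi-layer DNN the gradients are obtained by back propagation instead of a closed form.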
When deep neural networks are used to predict food quality, we no longer need to specify how low-level features are transformed into high-level features. Instead, a large amount of real-time feedback data is used to train a mathematical model with a fixed structure and unknown parameters. This approach balances detection speed with the comprehensiveness of the evaluation.
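Putting the pieces together, end-to-end training of such a model can be sketched as below. This is a minimal illustration, not the authors' implementation: the synthetic "sensor feature → quality score" data, layer sizes, tanh activations, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))                      # 200 samples, 4 food features
y = np.tanh(X @ np.array([0.5, -1.0, 0.3, 0.8]))   # synthetic quality target

# One hidden layer with small random initial weights.
W1, b1 = rng.normal(scale=0.1, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)
lr = 0.05
losses = []

for step in range(500):
    # forward propagation
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    y_hat = (h @ W2 + b2).ravel()             # output layer
    err = y_hat - y
    losses.append(np.mean(err ** 2))          # MSE loss
    # back propagation (manual gradients of the MSE loss)
    g_out = (2.0 / len(y)) * err[:, None]     # dL/dy_hat
    grad_W2 = h.T @ g_out
    grad_b2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)     # through tanh derivative
    grad_W1 = X.T @ g_h
    grad_b1 = g_h.sum(axis=0)
    # gradient-descent parameter update
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
```

The loop is the whole pipeline in miniature: forward propagation, loss evaluation, back propagation, and parameter update, repeated until the unknown parameters fit the feedback data.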