Fig.3 EEG recording and additional sensing system
After basic preprocessing, the IMU data were low-pass filtered (cut-off frequency 10 Hz) and, combined with the foot-pressure signals from the heels, used to determine the onset of each standing and sitting action. The data under all three conditions were then segmented into epochs of about 7 s, covering 4.5 s before and 2.5 s after the onset of the motion, and baseline correction was applied. Epochs heavily affected by artifacts were removed by visual inspection, and ICA decomposition of all signals was performed using runica. Artifact components arising from eye movement, eye blinks, muscle activity, and other movement-related sources were removed with the help of the SASICA toolbox. After artifact removal, bad electrodes were interpolated where necessary and the data were re-referenced. Finally, the 30 channels covering the whole brain were divided into eight regions: left frontal (LF: FP1, F3, F7), right frontal (RF: FP2, F4, F8), left central (LC: FC1, FC5, C3, CP1, CP5), right central (RC: FC2, FC6, C4, CP2, CP6), left temporal (LT: T7), right temporal (RT: T8), left occipital (LO: P3, P7, O1) and right occipital (RO: P4, P8, O2).
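The filtering and epoching steps above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the onset indices, and the 0.5 s baseline window are assumptions; the 10 Hz cut-off and the [-4.5 s, +2.5 s] epoch window follow the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, fs, cutoff=10.0, order=4):
    """Zero-phase low-pass filter (10 Hz cut-off, as described in the text)."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, x, axis=-1)

def extract_epochs(eeg, onsets, fs, pre=4.5, post=2.5, baseline=0.5):
    """Cut [-pre, +post] s epochs around each onset sample index and subtract
    the mean of an initial baseline window (window length is assumed)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    n_base = int(baseline * fs)
    epochs = []
    for t in onsets:
        if t - n_pre < 0 or t + n_post > eeg.shape[-1]:
            continue  # skip epochs that run past the edge of the recording
        ep = eeg[..., t - n_pre : t + n_post].astype(float)
        ep -= ep[..., :n_base].mean(axis=-1, keepdims=True)  # baseline correction
        epochs.append(ep)
    return np.stack(epochs)  # shape: (n_epochs, n_channels, n_samples)
```

Each returned epoch spans 7 s (1750 samples at 250 Hz), matching the segmentation described above.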
Complexity analysis is an important tool for revealing the characteristics of a nonlinear system. In recent years, more and more researchers have begun to evaluate the activity state of the brain through nonlinear dynamic analysis [11]. Among these approaches, entropy is one of the most widely used, and various entropy measures have been applied to neural signal analysis [11,12]. To examine more comprehensively how different entropies characterize EEGs during motion, this study calculated several time-domain entropies: Shannon Entropy (ShEn), Approximate Entropy (ApEn), Sample Entropy (SaEn), Permutation Entropy (PeEn), Conditional Entropy (CoEn), and Fuzzy Entropy (FuEn). In addition, Spectral Entropy (SpEn) and Wavelet Entropy (WaEn), which represent time-frequency characteristics, were examined, along with the Hurst exponent, kurtosis, and the Hjorth parameters. These measures were calculated for the averaged signals in each region. Finally, through statistical analysis, we selected the brain regions and complexity measures showing significant differences among the three conditions to form the feature vector, and several machine learning classifiers were used to recognize the sitting and standing conditions.
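Two of the measures named above can be sketched with NumPy. This is a simplified illustration under standard textbook definitions, not the authors' implementation; the default parameters m = 2 and r = 0.2·std are common conventions, and the periodogram-based spectral entropy omits the Welch averaging a production implementation might use.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with r = r_factor * std(x).
    Counts template pairs of length m and m+1 within Chebyshev distance r."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    n = len(x)

    def match_count(mm):
        # Use n - m templates for both lengths so the counts are comparable.
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)[: n - m]
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    b = match_count(m)       # matches at length m
    a = match_count(m + 1)   # matches at length m + 1 (always <= b)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def spectral_entropy(x):
    """Shannon entropy (bits) of the normalized power spectrum."""
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A regular signal such as a sinusoid yields low values for both measures, while broadband noise yields high values, which is the basic intuition behind using complexity to discriminate brain states.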
Results and Discussion: To quantitatively analyze the complexity measures in each brain region under the three conditions, ten complexity measures were calculated for the averaged EEGs of each brain region. We found that the values of the various parameters in the eight regions are very close, with the parameters in the LO and RO regions being the largest, followed by the LC and RC regions. Statistical analysis showed that PeEn, ShEn, SpEn and Kurtosis in the RT region differed significantly (t-test, p < 0.05) between standing and sitting; that ShEn and Kurtosis in the LF region, Kurtosis in the RF region, CoEn, ShEn and Kurtosis in the RT region, and Kurtosis in the LC, RC, LO and RO regions differed significantly between standing and quiet; and that CoEn, SaEn, ShEn and Kurtosis in the LF region, Kurtosis in the RF region, ShEn in the RT region, Kurtosis in the LC region, ShEn and Kurtosis in the RC region, and Kurtosis in the LO and RO regions differed significantly between sitting and quiet.
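The significance-based feature screening described above can be sketched with a paired t-test per (region, measure) feature. This is a hypothetical helper, not the authors' code; the text does not specify whether the t-test was paired, so the paired variant here is an assumption.

```python
import numpy as np
from scipy import stats

def select_features(cond_a, cond_b, alpha=0.05):
    """Paired t-test for each feature column; keep indices with p < alpha.
    cond_a, cond_b: arrays of shape (n_trials, n_features), trial-matched
    across the two conditions (an assumption of this sketch)."""
    _, p = stats.ttest_rel(cond_a, cond_b, axis=0)
    return np.where(p < alpha)[0], p
```

The indices returned for each condition pair determine which region-measure combinations enter the feature vector for classification.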
Based on the complexities of the EEGs in each region discussed above, combined with the statistical analysis results, the feature vector was constructed from the parameters showing significant differences among the three conditions. Three machine learning algorithms, support vector machine (SVM), logistic regression (LR) and linear discriminant analysis (LDA), were used to test these features and recognize the two types of motion condition. Fig.4 shows the averaged classification accuracy obtained with five-fold cross-validation. All classification accuracies exceed 81%, and the SVM performs best: its accuracies for standing vs. sitting, standing vs. quiet, and sitting vs. quiet are 83.3%, 87% and 82.5%, respectively, which demonstrates the effectiveness of this method.
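The evaluation protocol can be sketched with scikit-learn's implementations of the three classifiers. This is an illustrative setup only: the kernel choice, the standardization step, and the function name are assumptions, since the text specifies only the classifier families and five-fold cross-validation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate(X, y):
    """Mean 5-fold cross-validated accuracy for the three classifiers
    named in the text (kernel and scaling choices are assumptions)."""
    clfs = {
        "SVM": SVC(kernel="rbf"),
        "LR": LogisticRegression(max_iter=1000),
        "LDA": LinearDiscriminantAnalysis(),
    }
    return {
        name: cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
        for name, clf in clfs.items()
    }
```

Here `X` holds one row of selected complexity features per trial and `y` the condition labels for one pairwise comparison (e.g. standing vs. sitting).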