Sitting and Standing Intention Detection based on the Complexity of EEG Signal
Wenwen Chang1, Wenchao Nie1, Yueting Yuan1, Yuchan Zhang1 and Guanghui Yan1
1 School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
Email: wenwenchang@emailaddress.com.
Decoding gait features from brain signals to make reliable predictions of action intention is a core issue in brain computer interface (BCI) based hybrid rehabilitation and intelligent walking-aid robot systems. To classify and recognize the most basic gait processes, namely standing up, sitting down and remaining quiet, this paper proposes a feature representation method based on the complexity and entropy of the EEG signal in each brain region. Through statistical analysis of these parameters across conditions, the characteristics that are sensitive to different actions are selected as a feature vector, and the actions are classified by combining a support vector machine, linear discriminant analysis and logistic regression. Experimental results show that the proposed method recognizes the above action intentions effectively: the recognition accuracy for standing, sitting and quiet across 13 subjects exceeds 81%, and the best reaches 87%. These results are valuable for understanding human cognitive characteristics during lower limb movement and for developing BCI based strategies and systems for lower limb rehabilitation.
Introduction: Robots offer many advantages over traditional manual methods in rehabilitation training: they can increase patients' motivation and their opportunities for autonomous training, thereby improving the quality and effect of rehabilitation. Exoskeletons and intelligent walking-aid robots are widely used in gait rehabilitation and have achieved good results [1,2]. With the development of brain computer interface (BCI) technology, researchers have begun to pay attention to BCI based intelligent walking robots and rehabilitation training techniques. Detecting the brain's motion intention more quickly can improve rehabilitation strategies, and this is the development trend of future neurological rehabilitation [3,4]. Investigating the relationship between brain cognitive activity and the motor process is therefore important for developing BCI based active rehabilitation technology.
Electroencephalography (EEG) is widely used for motor intention detection because of its simplicity, portability and high temporal resolution [1,5]. Studies have also shown that the EEG signal contains abundant gait and motion information [6], although research on decoding lower limb motion intentions such as walking and gait has only just started. Two of the most basic movements in the gait process are standing up and sitting down. Zhong et al. investigated event-related potentials during an attempted standing-up task and found significant midcentral-focused mu ERD with beta ERS during imagined standing up [7]. Bulea et al. [6] studied the EEG features of 10 subjects during transitions between sitting and standing by decoding low-frequency band signals, and used a Gaussian mixture model (GMM) to recognize the two conditions. In subsequent work, Bulea and Contreras-Vidal et al. [4] analyzed the feasibility of the delta band for motor intention decoding: signals recorded during standing, sitting and quiet conditions were analyzed under self-triggered and externally cued paradigms, and a GMM classifier again obtained good results. Other decoding studies of motor intention mainly focus on two types of signals: event-related synchronization/desynchronization (ERS/ERD) potentials and movement-related brain potentials (MRPs) [1,4]. The studies discussed above have deepened the understanding of the brain's cognitive mechanisms underlying motor intention and achieved effective detection and recognition of movement. However, they mainly focus on slow potentials from a few electrode channels over sensorimotor regions, and lack spatial-domain characteristics that account for the interaction between different brain regions across the whole brain.
It is well known that gait is a complex cognitive and motor control process, and lower limb movements involve the coordination and cooperation of many brain regions [4]. Before a standing or sitting action is completed, the brain must exhibit certain characteristic information, and the motion intention can be determined by decoding this information. In addition to the above-mentioned cortical slow potentials, analysis of the dynamic interdependence between brain regions is expected to reveal new features for motor intention decoding [8,9]. Lau et al. [10] investigated the characteristics of the functional brain network during standing and walking, and found that, compared with standing, the functional connectivity of sensorimotor areas is weakened during walking; they attribute this to the greater cognitive attention required while walking. Li et al. [8] investigated functional connectivity features during exoskeleton-assisted rehabilitation, indicating that graph theory based brain network analysis plays a role in gait rehabilitation research. Handiru et al. [9] studied the balance of brain trauma patients during walking by building functional brain networks, and found network features significant for patients' walking. Further analysis from additional perspectives is clearly necessary for action intention detection. To this end, this study designed a motion experiment involving sitting and standing actions. EEG signals were collected synchronously, the scalp was divided into eight regions, and the complexity and entropy of the EEG signals in each region during action onset were analyzed. Features sensitive to different actions were then screened by statistical analysis.
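The exact complexity and entropy measures are defined later in the paper; as an illustrative sketch only, the following Python code computes two commonly used single-channel measures of this kind: an LZ78-style phrase count after median binarization (a simple complexity index) and sample entropy. The function names and the parameters `m` and `r` are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def binarize(x):
    """Convert a signal to a 0/1 string by thresholding at its median."""
    med = np.median(x)
    return ''.join('1' if v > med else '0' for v in x)

def lz_phrase_count(s):
    """LZ78-style parsing: count the distinct phrases needed to cover s.
    More irregular signals produce more phrases."""
    phrases, w, count = set(), '', 0
    for ch in s:
        w += ch
        if w not in phrases:
            phrases.add(w)
            count += 1
            w = ''
    return count

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)  # common default tolerance
    n = len(x)

    def match_count(mm):
        # All templates of length mm; self-matches are excluded.
        t = np.array([x[i:i + mm] for i in range(n - mm)])
        total = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)
            total += np.sum(d <= r) - 1
        return total

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Demonstration: a smooth oscillation versus broadband noise.
regular = np.sin(np.linspace(0, 8 * np.pi, 500))
noise = np.random.default_rng(0).standard_normal(500)
print(sample_entropy(regular), sample_entropy(noise))
```

In this study such measures would be computed per region (for each of the eight brain regions) to form the feature vector; a regular oscillation yields a much lower sample entropy than broadband noise.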
Finally, the standing, sitting and quiet conditions are recognized by combining several machine learning classifiers; the block diagram of this study is shown in Fig. 1.
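The screening and classification stage can be sketched as follows. This is a minimal illustration under stated assumptions: the synthetic feature matrix (trials x complexity/entropy features per region), the ANOVA-based screening via `SelectKBest`, and the specific classifier settings are illustrative stand-ins, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_classifiers(X, y, k=8):
    """Cross-validated accuracy for the three classifier types named in the
    paper, after ANOVA-based screening of the k most action-sensitive features."""
    results = {}
    for name, clf in [("SVM", SVC(kernel="rbf")),
                      ("LDA", LinearDiscriminantAnalysis()),
                      ("LogReg", LogisticRegression(max_iter=1000))]:
        pipe = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=k),
                             clf)
        results[name] = cross_val_score(pipe, X, y, cv=5).mean()
    return results

# Synthetic stand-in data: 90 trials x 16 features
# (e.g. 8 regions x 2 measures); labels 0/1/2 = standing/sitting/quiet.
rng = np.random.default_rng(42)
X = rng.standard_normal((90, 16))
y = np.repeat([0, 1, 2], 30)
X[y == 1, :8] += 1.5  # inject class-dependent shifts so the
X[y == 2, :8] -= 1.5  # synthetic classes are separable
print(evaluate_classifiers(X, y))
```

Keeping the scaler and the feature screening inside the cross-validation pipeline avoids leaking test-fold statistics into the feature selection, which would otherwise inflate the reported accuracy.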