Estimating learning-related psychological states from electroencephalographic and physiological signals
Studying human learning is crucial: how do humans learn, and what motivates them to keep building up knowledge? Every human is constantly learning to adapt to their environment, and current generations now have to learn to use rapidly evolving technologies. Among emerging technologies, research on brain-computer interfaces (BCIs) has become far more widespread in recent decades, and experiments using electroencephalography (EEG)-based BCIs have increased dramatically. This technology enables the direct transfer of information from the human brain to a machine via brain signals, and can notably enable people with severe motor impairments to send commands to a wheelchair, e.g., by imagining left or right hand movements to make the wheelchair turn left or right. Such BCIs are called active BCIs, since users actively send commands to the system, here a wheelchair, by performing mental imagery. However, the lack of robustness of BCIs limits the deployment of the technology outside research laboratories, and 10 to 30% of people cannot acquire the skills required to control current BCIs. While much research has focused on improving signal processing algorithms, the potential role of user training in BCI performance has been mostly neglected, and training protocols might not be suitable for all users. Another type of BCI has recently proved particularly promising: passive BCIs. Such BCIs are not used to directly control an application, but to monitor users' psychological states in real time, e.g., mental workload or attention, in order to adapt an application accordingly. The goal of my PhD thesis is to estimate learning-related psychological states such as cognitive workload, curiosity, attention, fatigue or emotions, from EEG and/or physiological signals, using passive BCIs, in order to understand individual users' capabilities and motivations to learn, and thus to adapt active-BCI training protocols accordingly.
In a first contribution, we explored recent machine learning algorithms that have shown promise for oscillatory motor imagery (MI)-BCIs but had never been tested for estimating psychological states from oscillatory activity, proposed new variants of them, and benchmarked them against classical methods for estimating both mental workload and affective states (valence/arousal) from oscillatory EEG signals. We studied these approaches with both subject-specific and subject-independent calibration, to move towards calibration-free systems. Our results suggested that a Convolutional Neural Network (CNN) obtained the highest mean accuracy, although not significantly so, in both conditions of the mental workload study, followed by Riemannian Geometry Classifiers (RGCs). However, this same CNN underperformed in both conditions of the affective states study, where RGCs performed best.

In a second contribution, we are currently exploring signal processing and machine learning algorithms that have proved effective for Event-Related Potential (ERP)-based BCIs, and benchmarking them against classical methods for estimating both mental workload and curiosity from ERP-based EEG signals.

In a third contribution, we ran an experiment in which we used EEG, heart rate (HR), breathing and electrodermal activity (EDA) signals to measure the neurophysiological activity of participants as they were induced into states of curiosity, using trivia question-and-answer chains. So far, the within-participant classification of EEG signals, with five-fold stratified cross-validation, returned accuracies around 60% (63.09% for the Filter Bank Tangent Space Classifier (FBTSC), 60.93% for the Filter Bank Common Spatial Pattern (FBCSP) + Linear Discriminant Analysis (LDA)). Deeper analyses of the classification of the physiological signals (ECG, EDA and breathing) are currently under way.
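As a rough illustration of how a CSP + LDA pipeline of the kind mentioned above operates (not the actual code or data from the study), here is a minimal, single-band sketch on synthetic signals; the filter bank variants (FBCSP, FBTSC) essentially repeat such a pipeline over several frequency bands and concatenate the features. All data and parameters below are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

class CSP(BaseEstimator, TransformerMixin):
    """Common Spatial Patterns: learns spatial filters that maximize the
    variance of one class while minimizing that of the other."""
    def __init__(self, n_filters=4):
        self.n_filters = n_filters

    def fit(self, X, y):
        # X: (n_trials, n_channels, n_samples), y: binary labels
        covs = [np.mean([x @ x.T / np.trace(x @ x.T) for x in X[y == c]], axis=0)
                for c in np.unique(y)]
        # Generalized eigendecomposition of the two class covariances
        evals, evecs = eigh(covs[0], covs[0] + covs[1])
        order = np.argsort(evals)
        # Keep the filters at both extremes of the eigenvalue spectrum
        keep = np.r_[order[:self.n_filters // 2], order[-self.n_filters // 2:]]
        self.filters_ = evecs[:, keep].T
        return self

    def transform(self, X):
        # Log-variance of the spatially filtered signals as features
        Z = np.einsum('fc,tcs->tfs', self.filters_, X)
        return np.log(np.var(Z, axis=2))

# Synthetic two-class data: class 1 has stronger variance on channel 0
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 8, 256))
y = np.repeat([0, 1], 40)
X[y == 1, 0] *= 3.0

# Five-fold stratified cross-validation, as in the study
pipe = make_pipeline(CSP(n_filters=4), LinearDiscriminantAnalysis())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
print(f"mean 5-fold accuracy: {scores.mean():.2f}")
```

In practice, band-pass filtering the signals in task-relevant frequency bands before computing covariances is what makes the filter-bank versions informative for oscillatory states.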
As a fourth contribution, I implemented a Python library (BioPyC) to easily compare and benchmark signal processing and machine learning algorithms for offline decoding of EEG and physiological signals. Built around an intuitive, well-guided graphical interface, four main modules allow users to follow the standard steps of the BCI process without any programming skills: 1) reading different neurophysiological signal data formats, 2) filtering and representing EEG and physiological signals, 3) classifying them, and 4) visualizing the results and performing statistical tests on them. This toolbox was used for the three contributions detailed above. Finally, a fifth contribution is under way, in which we will run an experiment to assess participants' cognitive load during MI-BCI training using EEG signals.
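The four steps of that standard offline decoding process can be sketched as follows. This is a minimal, self-contained illustration using SciPy and scikit-learn on simulated data; it is not BioPyC's actual API, and all names and parameters (sampling rate, frequency band, etc.) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import ttest_rel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

# 1) Read signals -- simulated here in place of a real data-format loader
rng = np.random.default_rng(42)
fs = 128                                      # sampling rate (Hz), illustrative
X = rng.standard_normal((60, 16, 2 * fs))     # trials x channels x samples
y = np.repeat([0, 1], 30)
# Give class-1 trials extra 10 Hz oscillatory activity on every channel
X[y == 1] += 0.4 * np.sin(2 * np.pi * 10 * np.arange(2 * fs) / fs)

# 2) Filter: band-pass in the alpha band (8-12 Hz)
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype='band')
Xf = filtfilt(b, a, X, axis=-1)

# 3) Classify: log band-power features + LDA, 5-fold stratified CV
feats = np.log(np.var(Xf, axis=-1))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=cv)

# 4) Statistics: compare per-fold accuracies to the 50% chance level
t, p = ttest_rel(scores, np.full_like(scores, 0.5))
print(f"mean accuracy {scores.mean():.2f}, t={t:.2f}, p={p:.3f}")
```

Each numbered step corresponds to one of the four modules described above; a graphical interface essentially exposes the parameters of these steps (data format, frequency band, classifier, statistical test) without requiring the user to write such code.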