Introduction to Brain‐Computer Interfaces and The Brain Powered Course - ManuJackPel/Brain-Powered-2022-2023 GitHub Wiki

Brain-Computer Interfaces (BCIs), also referred to as Brain-Machine Interfaces (BMIs), establish direct communication links between the human brain and external devices, such as computers or robots (Mak & Wolpaw, 2009). By enabling individuals to interact with and control these devices through their brain activity, BCIs bypass conventional pathways like muscles or nerves. This technology holds significant promise for the advancement of prosthetics and wheelchair control, for instance, for patients with physical disabilities stemming from spinal cord injuries that result in reduced or lost functionality in their lower or all extremities (paraplegia/tetraplegia). These innovative BCI approaches underscore the importance of developing highly efficient BCI systems for this purpose and fostering a deeper understanding of this field across various scientific domains, such as neuroscience and artificial intelligence. To achieve the latter objective, the Brain Powered course aims to cultivate knowledge and collaboration among students from diverse educational backgrounds. The course provides a unique opportunity for students to collaborate in the creation of a BCI system designed to control a drone.

Types of BCI systems

BCI systems can be categorized based on the method of capturing brain signals, distinguishing between invasive and non-invasive techniques (Aljalal et al., 2020). In invasive BCIs, brain signals are acquired from within the skull, using electrodes implanted on or in the brain, whereas non-invasive BCIs collect signals from outside the head. Although the signals acquired through invasive BCIs are stronger, this approach requires surgical intervention. For this reason, non-invasive BCIs are often favored, as they are more practical for everyday use and experimental research. Electroencephalography (EEG), which measures the brain's electrical activity via electrodes placed on the scalp, is a commonly used non-invasive technique due to its portability and affordability. EEG is also the method used in the Brain Powered course.

Moreover, BCI systems can be divided into endogenous and exogenous categories based on the paradigms used to elicit brain activity for command processing (Padfield et al., 2022). Endogenous paradigms rely on user-generated brain signals without external cues, such as motor imagery (MI), where imagined movements correspond to commands. These produce distinct neural patterns in the motor areas of the brain, mostly involving changes in oscillatory power such as event-related (de)synchronization (ERD/ERS) of sensorimotor rhythms (SMR). In contrast, exogenous BCIs rely on external cues, such as those based on steady-state visually evoked potentials (SSVEP) or on the P300 event-related potential (ERP; Aljalal et al., 2020). Endogenous BCIs offer natural, self-paced, and intuitive control based on the user's thoughts and intentions, yet they often exhibit lower accuracy due to variability within and between users, which limits the range of commands; extensive user and machine training is essential to enhance accuracy. A recent study showed groundbreaking success with this method: tetraplegic wheelchair users were able to navigate with 95–98% accuracy and even showed significant neuroplastic changes (Tonin et al., 2022). Furthermore, endogenous and exogenous paradigms can be combined in hybrid BCIs to expand the number of controls. Another type of hybrid BCI combines user control with robotic intelligence, creating so-called shared control, in which robotic intelligence operates at varying levels, from obstacle avoidance (as in Tonin et al., 2022) to destination selection, balancing user effort and autonomy. In the end, every situation warrants thoughtful consideration of the advantages and limitations of the desired paradigm type(s), as ongoing research continuously expands BCI possibilities. In the Brain Powered course, different cohorts have experimented with different types of paradigms.
We (2022-2023) used a method based on Blankertz et al. (2007) that resembles motor imagery, but in which the user actually moves their hands to evoke activity in the motor areas of the brain. We chose this to keep the paradigm simple, since our purpose was to create the data pipeline for the BCI system: brain activity evoked by actual movement execution is stronger and less variable between individuals than imagined movement, and requires little to no training (Jeong & Kim, 2021). With this, we hope to provide a proof of concept that a drone can be controlled using simple endogenous commands with relatively little training time on the university's current equipment. Upon success, future cohorts could implement the same setup using MI instead.

Furthermore, BCI systems can also be categorized based on the timing of device control, dividing them into synchronous and asynchronous paradigms (Padfield et al., 2022). In synchronous control, the BCI designates predefined intervals within which the user can issue commands, followed by data processing and subsequent command execution. In asynchronous control, by contrast, the user may issue a command at any given moment, while the data is buffered and commands are executed at regular intervals. Though synchronous control can yield high accuracy, it is often not feasible for practical use of BCIs, particularly for brain-controlled dynamic devices such as prosthetics and wheelchairs. In the Brain Powered course, both synchronous and asynchronous paradigms have been used over the years, including SSVEP for the former and MI for the latter.
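The asynchronous scheme described above can be illustrated with a toy sketch: samples are pushed into a rolling buffer as they arrive, and a command is produced only at fixed tick intervals. All names and the threshold rule here are hypothetical stand-ins for a real preprocess/feature/classify pipeline, not the course's actual code.

```python
from collections import deque

import numpy as np


class AsyncController:
    """Toy asynchronous control loop: samples arrive continuously,
    commands are issued at fixed intervals from the buffered window."""

    def __init__(self, fs, window_sec=1.0):
        # Rolling buffer holding the most recent window of samples
        self.buffer = deque(maxlen=int(fs * window_sec))

    def push(self, sample):
        # Called for every incoming EEG sample, at any moment
        self.buffer.append(sample)

    def tick(self):
        # Called at regular intervals; classifies the buffered window.
        # A real system would run the full pipeline here; we use a
        # stand-in threshold rule on the window mean.
        window = np.array(self.buffer)
        return "left" if window.mean() < 0 else "right"


ctrl = AsyncController(fs=250)
for s in np.full(250, 0.5):  # one second of positive-valued samples
    ctrl.push(s)
command = ctrl.tick()  # -> "right"
```

The key property is that `push` and `tick` are decoupled: the user's brain activity streams in continuously, while command execution happens on the device's own schedule.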

The Pipeline: Preprocessing, Feature Extraction, and Classification

After signal acquisition, i.e. after the data has been collected, the data undergoes several processing steps before an actual command is executed on the device. This section describes the pipeline in the context of EEG signals, based on Aljalal et al. (2020).

The first step is preprocessing, which attempts to remove noise and artifacts from the raw data. In the frequency domain, notch and bandpass filters are applied, while spatial filters aim to increase the signal-to-noise ratio. Preprocessing can be quite sophisticated, but simpler methods can also suffice. For us, applying a bandpass filter between 8 and 12 Hz was sufficient: this extracts the mu band and also filters out mains interference from the electrical grid.
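Such a bandpass step is straightforward with SciPy. The sketch below is an illustration under assumed parameters (function name, sampling rate, and the synthetic test signal are ours, not the course's actual code); it keeps a 10 Hz mu-band component while suppressing a 50 Hz mains component.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def bandpass_mu(eeg, fs, low=8.0, high=12.0, order=4):
    """Band-pass filter EEG to the mu band (8-12 Hz).

    eeg : array of shape (n_channels, n_samples)
    fs  : sampling rate in Hz
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt applies the filter forwards and backwards (zero phase shift)
    return filtfilt(b, a, eeg, axis=-1)


# Example: 2 channels, 2 seconds of synthetic data at 250 Hz,
# containing a 10 Hz mu component plus 50 Hz mains interference
fs = 250
t = np.arange(0, 2, 1 / fs)
signal = np.vstack([np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)] * 2)
filtered = bandpass_mu(signal, fs)
```

Using `filtfilt` rather than a one-pass filter avoids phase distortion, which matters when the filtered signal is later windowed into trials.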

Subsequently, the data undergoes feature extraction, which transforms the preprocessed signals into feature vectors by extracting essential characteristics and eliminating redundancies. Techniques include the Fourier Transform (FT), Wavelet Transform (WT), and Common Spatial Patterns (CSP). FT analyses the power spectral density, i.e. it extracts information about the power per frequency band and creates a representation of the signal in the frequency domain. Due to the time-dependent frequency fluctuations in EEG signals, however, WT holds an advantage over FT, since it represents the signal in both the time and the frequency domain. CSP is frequently employed due to its robustness and the fact that it does not require preselection of specific frequency bands or prior knowledge thereof. Additionally, it effectively enhances the discriminability between spatially distinct classes of brain signals, making it useful in MI since the brain areas corresponding to different extremities lie far apart. However, CSP is sensitive to noise and artifacts, considerably influenced by the placement of EEG electrodes, and requires a greater number of EEG channels. Given this variety of trade-offs, choosing the right procedure is essential. Methods can be combined, but combining too many features may lead to overfitting and increased processing time. We used a fairly simple combination of CSP and the logarithmic variance of the mu-band signal as our features of interest, due to the ease of implementing it in Python, the ample documentation available, and previous successes in similar experimental setups.
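The CSP-plus-log-variance combination can be sketched compactly in NumPy/SciPy. This is a minimal illustration, not the course's actual implementation: CSP filters are obtained from the generalized eigendecomposition of the two class-average covariance matrices, and the features are the log-variance of the spatially filtered trials. The function names, trial shapes, and synthetic data are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh


def csp_filters(trials_a, trials_b, n_filters=2):
    """Compute CSP spatial filters from two classes of band-passed trials.

    trials_* : arrays of shape (n_trials, n_channels, n_samples)
    Returns n_filters spatial filters: the eigenvectors at both extremes
    of the spectrum, which maximize variance for one class relative to
    the other.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)  # eigenvalues in ascending order
    order = np.argsort(eigvals)
    pick = np.concatenate(
        [order[: n_filters // 2], order[-(n_filters - n_filters // 2):]]
    )
    return eigvecs[:, pick].T  # shape (n_filters, n_channels)


def log_var_features(trials, filters):
    """Project trials through the CSP filters and take log-variance."""
    projected = np.einsum("fc,ncs->nfs", filters, trials)
    return np.log(projected.var(axis=-1))


# Synthetic example: class A is strong on channel 0, class B on channel 1
rng = np.random.default_rng(0)
trials_a = rng.normal(size=(20, 2, 100)) * np.array([3.0, 1.0])[:, None]
trials_b = rng.normal(size=(20, 2, 100)) * np.array([1.0, 3.0])[:, None]
W = csp_filters(trials_a, trials_b)
feats_a = log_var_features(trials_a, W)
feats_b = log_var_features(trials_b, W)
```

Each trial is thus reduced to one log-variance value per spatial filter, yielding the small, near-Gaussian feature vectors that the classifiers in the next step expect.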

After feature extraction, one or multiple classifiers categorize the signals to distinguish between different mental actions. This involves the utilization of various machine learning techniques, including both linear and nonlinear classifiers. As the names indicate, linear classifiers create a linear decision boundary that separates data points into classes, whereas nonlinear classifiers handle more complex boundaries. Examples of linear classifiers include Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM). LDA emphasizes the statistical properties of the data (means, variance), while SVM prioritizes geometric separation via margin maximization. Regarding nonlinear classifiers, an example is k-Nearest-Neighbor (kNN), a simple and intuitive classifier that predicts a data point’s class based on the most common class of the nearest data points. This method directly uses labeled data points to categorize new data; it does not involve model learning or performing intricate data transformations. However, this classifier isn't commonly used in BCIs because it is sensitive to the number of features in the data. Another nonlinear classifier is an Artificial Neural Network (ANN), which is often used in MI BCIs due to its complex capabilities but requires extensive model training, plenty of data, careful tuning, and significant computational resources. Linear models are computationally efficient and offer robustness due to minimal parameter adjustment, reducing overfitting risk. However, in complex scenarios or with extensive data, nonlinear classifiers, although more parameter-intensive, often yield superior outcomes. Selecting the right classifier algorithm depends on the task, data, and resources, guided by experimentation and consideration of the options. In our experiment, we used LDA, SVM, and kNN. LDA and SVM were used as they are easy to implement and commonly used. 
We did not originally plan to use kNN; however, because we classified on only two features, the algorithm achieved comparable success to the linear classifiers. Additionally, kNN had better accuracy when two classes showed substantial overlap in their training samples. For these reasons, kNN ended up being used for live classification.
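A comparison like ours is only a few lines with scikit-learn. The sketch below is illustrative, not our actual evaluation: the two-dimensional, partially overlapping Gaussian data stands in for the two log-variance features, and each classifier is scored with 5-fold cross-validation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for two-dimensional log-variance features:
# two overlapping Gaussian classes
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal([-1.0, 1.0], 1.0, size=(100, 2)),
    rng.normal([1.0, -1.0], 1.0, size=(100, 2)),
])
y = np.repeat([0, 1], 100)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="linear"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
# Mean 5-fold cross-validated accuracy per classifier
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
```

With only two features, all three models land in the same accuracy range on data like this, which mirrors our finding that kNN was competitive with the linear classifiers despite not being planned.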

Finally, the control unit translates the identified categories into precise motion instructions. These instructions are then communicated to assistive tools such as wheelchairs or dynamic devices like drones, as exemplified in the context of Brain Powered. It's important to acknowledge that achieving optimal design for these devices, balancing the challenges and advantages of various techniques, is an ongoing endeavor. Brain Powered, too, is committed to this effort. Additional methods and technologies beyond those covered in this article exist, expanding the range of possibilities. For comprehensive reviews on EEG-based brain-controlled devices, we recommend consulting the works of Padfield et al. (2022) and Aljalal et al. (2020).

Annabel Couzijn

References

Aljalal, M., Ibrahim, S., Djemal, R., & Ko, W. (2020). Comprehensive review on brain-controlled mobile robots and robotic arms based on electroencephalography signals. Intelligent Service Robotics, 13(4), 539–563. https://doi.org/10.1007/s11370-020-00328-5

Blankertz, B., Dornhege, G., Krauledat, M., Müller, K. R., & Curio, G. (2007). The non-invasive Berlin Brain–Computer Interface: Fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2), 539–550. https://doi.org/10.1016/j.neuroimage.2007.01.051

Jeong, H., & Kim, J. (2021). Development of a guidance system for motor imagery enhancement using the virtual hand illusion. Sensors, 21(6), 2197. https://doi.org/10.3390/s21062197

Mak, J. N., & Wolpaw, J. R. (2009). Clinical applications of brain-computer interfaces: Current state and future prospects. IEEE Reviews in Biomedical Engineering. https://doi.org/10.1109/rbme.2009.2035356

Padfield, N., Camilleri, K. P., Camilleri, T. A., Fabri, S. G., & Bugeja, M. K. (2022). A comprehensive review of endogenous EEG-based BCIs for dynamic device control. Sensors, 22(15), 5802. https://doi.org/10.3390/s22155802

Tonin, L., Perdikis, S., Kuzu, T. D., Pardo, J., Orset, B., Lee, K., Aach, M., Schildhauer, T., Martínez-Olivera, R., & Del R Millán, J. (2022). Learning to control a BMI-driven wheelchair for people with severe tetraplegia. iScience, 25(12), 105418. https://doi.org/10.1016/j.isci.2022.105418