Objective. Electroencephalography (EEG) analysis has been an important tool in neuroscience, with applications in basic research, neural engineering (e.g. brain–computer interfaces, BCIs), and even commercial products. Many of the analytical tools used in EEG studies have used machine learning to uncover relevant information for neural classification and neuroimaging. Recently, the availability of large EEG data sets and advances in machine learning have together led to the deployment of deep learning architectures, especially in the analysis of EEG signals and in understanding the information they may contain about brain function. The robust automatic classification of these signals is an important step towards making the use of EEG more practical in many applications and less reliant on trained professionals. Towards this goal, a systematic review of the literature on deep learning applications to EEG classification was performed to address the following critical questions: (1) Which EEG classification tasks have been explored with deep learning? (2) What input formulations have been used for training the deep networks? (3) Are there specific deep learning network structures suitable for specific types of tasks? Approach. A systematic literature review of EEG classification using deep learning was performed on the Web of Science and PubMed databases, resulting in 90 identified studies. Those studies were analyzed based on type of task, EEG preprocessing methods, input type, and deep learning architecture. Main results. For EEG classification tasks, convolutional neural networks, recurrent neural networks, and deep belief networks outperform stacked auto-encoders and multi-layer perceptron neural networks in classification accuracy. The tasks that used deep learning fell into six general groups: emotion recognition, motor imagery, mental workload, seizure detection, event-related potential detection, and sleep scoring.
For each type of task, we describe the specific input formulation, major characteristics, and end classifier recommendations found through this review. Significance. This review summarizes the current practices and performance outcomes in the use of deep learning for EEG classification. Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.
ISSN: 1741-2552
Journal of Neural Engineering was created to help scientists, clinicians and engineers to understand, replace, repair and enhance the nervous system.
Alexander Craik et al 2019 J. Neural Eng. 16 031001
Yannick Roy et al 2019 J. Neural Eng. 16 051001
Context. Electroencephalography (EEG) is a complex signal that can require several years of training, as well as advanced signal processing and feature extraction methodologies, to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain–computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. Many of the studies used convolutional neural networks (CNNs), while others used recurrent neural networks (RNNs), most often with a total of 3–10 layers.
Moreover, almost half of the studies trained their models on raw or preprocessed EEG time series. Finally, across all relevant studies, DL approaches showed a median gain in accuracy over traditional baselines. More importantly, however, we noticed that studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.
F Lotte et al 2018 J. Neural Eng. 15 031005
Objective. Most current electroencephalography (EEG)-based brain–computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types used in this field, as described in our 2007 review paper. Now, approximately ten years after that review was published, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach. We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs and what the outcomes were, and to identify their pros and cons. Main results. We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning, and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful, although its benefits remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performance on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful in settings with small training samples. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance. This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods, and gives guidelines on when and how to use them.
It also identifies a number of challenges to further advance EEG classification in BCI.
Vernon J Lawhern et al 2018 J. Neural Eng. 15 056013
Objective. Brain–computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible. Approach. In this work we introduce EEGNet, a compact convolutional neural network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet, both for within-subject and cross-subject classification, to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR). Main results. We show that EEGNet generalizes across paradigms better than, and achieves comparably high performance to, the reference algorithms when only limited training data is available across all tested paradigms. In addition, we demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features. Significance. 
Our results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks. Our models can be found at: https://github.com/vlawhern/arl-eegmodels.
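EEGNet's compactness rests on factoring a standard convolution into a depthwise stage (one kernel per input channel) plus a 1 × 1 pointwise mixing stage. As a rough illustration of why this saves parameters (using hypothetical layer sizes, not EEGNet's actual ones), the weight counts can be compared directly:

```python
# Parameter-count comparison: standard convolution vs. the depthwise +
# pointwise (separable) factorization that compact EEG models build on.
# The channel counts and kernel length below are illustrative only.

def standard_conv_params(c_in, c_out, kernel):
    """Weights in a standard convolution layer (bias terms omitted)."""
    return c_in * c_out * kernel

def separable_conv_params(c_in, c_out, kernel):
    """Depthwise stage (one kernel per input channel) plus 1x1 pointwise mixing."""
    return c_in * kernel + c_in * c_out

c_in, c_out, kernel = 16, 32, 64   # hypothetical sizes for one temporal-conv layer
std = standard_conv_params(c_in, c_out, kernel)
sep = separable_conv_params(c_in, c_out, kernel)
print(std, sep)  # the separable factorization uses far fewer weights
```

For these illustrative sizes the standard layer needs 32 768 weights versus 1 536 for the separable version, which is the kind of reduction that makes training feasible with the limited data typical of BCI experiments.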
Ravikiran Mane et al 2020 J. Neural Eng. 17 041001
Stroke is one of the leading causes of long-term disability among adults and contributes to a major socio-economic burden globally. Stroke frequently results in multifaceted impairments, including motor, cognitive and emotion deficits. In recent years, brain–computer interface (BCI)-based therapy has shown promising results for post-stroke motor rehabilitation. Despite the success achieved by BCI-based interventions in the motor domain, non-motor impairments are yet to receive similar attention in research and clinical settings. Some preliminary encouraging results in post-stroke cognitive rehabilitation using BCI suggest that it may also hold potential for treating non-motor deficits such as cognitive and emotion impairments. Moreover, past studies have shown an intricate relationship between motor, cognitive and emotion functions which might influence the overall post-stroke rehabilitation outcome. A number of studies highlight the inability of current treatment protocols to account for the implicit interplay between motor, cognitive and emotion functions. This indicates the need to explore an all-inclusive treatment plan targeting the synergistic influence of these standalone interventions, an approach that may lead to better overall recovery than treating the individual deficits in isolation. In this paper, we review the recent advances in BCI-based post-stroke motor rehabilitation and highlight the potential for the use of BCI systems beyond the motor domain, in particular in improving cognition and emotion of stroke patients. Building on the current results and findings of studies in individual domains, we next discuss the possibility of a holistic BCI system for motor, cognitive and affect rehabilitation which may synergistically promote restorative neuroplasticity. Such a system would provide an all-encompassing rehabilitation platform, leading to overarching clinical outcomes and the transfer of these outcomes to a better quality of life.
This is one of the first works to analyse the possibility of targeting cross-domain influence of post-stroke functional recovery enabled by BCI-based rehabilitation.
Steve Furber 2016 J. Neural Eng. 13 051001
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
Simone Tanzarella et al 2024 J. Neural Eng. 21 026043
Objective. We analyze and interpret arm and forearm muscle activity in relation to the kinematics of hand pre-shaping during reaching and grasping, from the perspective of human synergistic motor control. Approach. Ten subjects performed six tasks involving reaching, grasping and object manipulation. We recorded electromyographic (EMG) signals from arm and forearm muscles with a mix of bipolar electrodes and high-density grids of electrodes. Motion capture was concurrently recorded to estimate hand kinematics. Muscle synergies were extracted separately for arm and forearm muscles, and postural synergies were extracted from hand joint angles. We assessed whether activation coefficients of postural synergies positively correlate with, and can be regressed from, activation coefficients of muscle synergies. Each type of synergy was clustered across subjects. Main results. We found the identified synergies to be consistent across subjects, and we functionally evaluated synergy clusters computed across subjects to identify synergies representative of all subjects. We found a positive correlation between pairs of activation coefficients of muscle and postural synergies, with important functional implications. We demonstrated a significant positive contribution of combining arm and forearm muscle synergies when estimating hand postural synergies, compared with estimation based on muscle synergies of only one body segment, either arm or forearm (p < 0.01). We found that dimensionality reduction of multi-muscle EMG root mean square (RMS) signals did not significantly affect hand posture estimation, as demonstrated by comparable results when regressing hand angles from EMG RMS signals. Significance. We demonstrated that hand posture prediction improves by combining the activity of arm and forearm muscles, and we evaluated, for the first time, the correlation and regression between activation coefficients of arm muscle synergies and hand postural synergies.
Our findings can be beneficial for the myoelectric control of hand prostheses and upper-limb exoskeletons, and for biomarker evaluation during neurorehabilitation.
Haoming Zhang et al 2021 J. Neural Eng. 18 056057
Objective. Deep learning (DL) networks are increasingly attracting attention across various fields, including electroencephalography (EEG) signal processing. These models provide performance comparable to that of traditional techniques. At present, however, the lack of well-structured, standardized datasets with specific benchmarks limits the development of DL solutions for EEG denoising. Approach. Here, we present EEGdenoiseNet, a benchmark EEG dataset suited for training and testing DL-based denoising models, as well as for performance comparisons across models. EEGdenoiseNet contains 4514 clean EEG segments, 3400 ocular artifact segments and 5598 muscular artifact segments, allowing users to synthesize contaminated EEG segments with ground-truth clean EEG. Main results. We used EEGdenoiseNet to evaluate the denoising performance of four classical networks (a fully connected network, a simple and a complex convolutional network, and a recurrent neural network). Our results suggest that DL methods have great potential for EEG denoising, even under high noise contamination. Significance. Through EEGdenoiseNet, we hope to accelerate the development of the emerging field of DL-based EEG denoising. The dataset and code are available at https://github.com/ncclabsustech/EEGdenoiseNet.
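The synthesis step described above, mixing ground-truth clean EEG with artifact segments at a controlled noise level, can be sketched in a few lines. This is a generic additive-mixing sketch under one common dB-SNR convention; the dataset's exact mixing formula and SNR definition may differ:

```python
import math
import random

def rms(x):
    """Root mean square amplitude of a sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def contaminate(clean, artifact, snr_db):
    """Additively mix a clean EEG segment with an artifact segment so that the
    clean-to-artifact amplitude ratio matches a target SNR in dB:
    y = x + lam * n, with lam chosen from the two RMS values."""
    lam = rms(clean) / (rms(artifact) * 10 ** (snr_db / 20))
    return [x + lam * n for x, n in zip(clean, artifact)]

random.seed(0)
clean = [math.sin(2 * math.pi * 10 * t / 256) for t in range(512)]  # toy 10 Hz "EEG"
artifact = [random.gauss(0, 1) for _ in range(512)]                 # toy "muscle" noise
noisy = contaminate(clean, artifact, snr_db=-5)                     # heavy contamination
```

Because the ground-truth clean segment is kept, a denoising model can be trained on `noisy` and scored against `clean` directly, which is the benchmark structure the dataset enables.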
Sauradeep Bhowmick et al 2024 J. Neural Eng. 21 026039
Objective. Minimally invasive neuromodulation therapies like the Injectrode, which is composed of a tightly wound polymer-coated platinum/iridium microcoil, offer a low-risk approach for administering electrical stimulation to the dorsal root ganglion (DRG). This flexible electrode is designed to conform to the DRG. Stimulation is delivered through a transcutaneous electrical stimulation (TES) patch, which transmits the stimulation to the Injectrode via a subcutaneous metal collector. However, the effectiveness of stimulation through TES depends on the specific geometrical configuration of the Injectrode-collector-patch system. Hence, there is a need to investigate which design parameters influence the activation of targeted neural structures. Approach. We employed a hybrid computational modeling approach to analyze the impact of Injectrode system design parameters on charge delivery and the neural response to stimulation. We constructed multiple finite element method models of DRG stimulation, followed by the implementation of multi-compartment models of DRG neurons. By calculating the potential distribution during monopolar stimulation, we simulated neural responses using various parameters based on prior acute experiments. Additionally, we developed a canonical monopolar stimulation model and a full-scale model of bipolar bilateral L5 DRG stimulation, allowing us to investigate how design parameters such as Injectrode size and orientation influenced neural activation thresholds. Main results. Our findings were in accordance with acute experimental measurements and indicate that the minimally invasive Injectrode system predominantly engages large-diameter afferents (Aβ-fibers). These activation thresholds were contingent upon the surface area of the Injectrode.
As the charge density decreased due to increasing surface area, there was a corresponding expansion in the stimulation amplitude range before triggering any pain-related mechanoreceptor (Aδ-fibers) activity. Significance. The Injectrode demonstrates potential as a viable technology for minimally invasive stimulation of the DRG. Our findings indicate that utilizing a larger surface area Injectrode enhances the therapeutic margin, effectively distinguishing the desired Aβ activation from the undesired Aδ-fiber activation.
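The surface-area effect underlying this therapeutic-margin result follows from a simple relationship: charge per phase divided by electrode area gives charge density, so a larger Injectrode spreads the same charge over more surface. The numbers below are purely hypothetical, not taken from the study:

```python
def charge_density_uC_per_cm2(current_mA, pulse_width_ms, area_cm2):
    """Charge per phase (current x pulse width) divided by electrode surface
    area, in uC/cm^2. Note mA x ms = uC, so no unit conversion is needed."""
    charge_uC = current_mA * pulse_width_ms
    return charge_uC / area_cm2

# Hypothetical pulse (2 mA, 0.2 ms) applied to two hypothetical Injectrode sizes:
small = charge_density_uC_per_cm2(2.0, 0.2, 0.01)  # small surface area
large = charge_density_uC_per_cm2(2.0, 0.2, 0.05)  # 5x the surface area
print(small, large)  # the larger electrode sees 5x lower charge density
```

The lower density at the tissue interface is what allows the stimulation amplitude to be raised further before recruiting the smaller, pain-related Aδ-fibers.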
F Mattioli et al 2021 J. Neural Eng. 18 066053
Objective. Brain-computer interfaces (BCIs) aim to establish communication paths between brain processes and external devices. Different methods have been used to extract human intentions from electroencephalography (EEG) recordings, and those based on motor imagery (MI) seem to have great potential for future applications. These approaches rely on the extraction of distinctive EEG patterns during imagined movements. Techniques able to extract patterns from raw signals represent an important target for BCI, as they do not need labor-intensive data pre-processing. Approach. We propose a new approach based on a 10-layer one-dimensional convolutional neural network (1D-CNN) to classify five brain states (four MI classes plus a 'baseline' class) using a data augmentation algorithm and a limited number of EEG channels. In addition, we present a transfer learning method used to extract critical features from the EEG group dataset and then to customize the model to the single individual by training its late layers with only 12 min of individual data. Main results. The model, tested with the 'EEG Motor Movement/Imagery Dataset', outperforms the current state-of-the-art models in accuracy at the group level. In addition, the transfer learning approach we present achieves a competitive average accuracy. Significance. The proposed methods could foster the development of future BCI applications relying on few-channel portable recording devices and individual-based training.
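Overlapping-window cropping is one common form of the kind of EEG data augmentation mentioned above: overlapping crops multiply the number of training examples drawn from the same recording. The sketch below is a generic illustration, not the paper's specific augmentation algorithm:

```python
def sliding_windows(signal, win_len, stride):
    """Crop fixed-length windows from a 1-D EEG trace; a stride smaller than
    win_len yields overlapping crops and therefore more training examples."""
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, stride)]

trace = list(range(1000))                        # stand-in for one EEG channel
no_overlap = sliding_windows(trace, 200, 200)    # disjoint epochs
augmented = sliding_windows(trace, 200, 50)      # 75% overlap: many more epochs
print(len(no_overlap), len(augmented))
```

The same 1000-sample trace yields 5 disjoint epochs but 17 overlapping ones, which is the basic trade the augmentation makes: more examples at the cost of correlation between them (so crops from one subject or session should stay on one side of the train/test split).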
Ariel Tankus et al 2024 J. Neural Eng. 21 036009
Objective. Our goal is to decode firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to characterize the number of thalamic neurons necessary for high-accuracy decoding. Approach. We intraoperatively recorded single-neuron activity in the left Vim of eight neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning, during production, perception and imagery of the five monophthongal vowel sounds. We utilized the Spade decoder, a machine learning algorithm that dynamically learns specific features of firing patterns and is based on sparse decomposition of the high-dimensional feature space. Main results. Spade outperformed all algorithms it was compared with, for all three aspects of speech: production, perception and imagery, obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%) when pooling neurons across all patients. The accuracy was logarithmic in the number of neurons for all three aspects of speech. Regardless of the number of units employed, production yielded the highest accuracies, whereas perception and imagery were on par with each other. Significance. Our research renders single-neuron activity in the left Vim a promising source of inputs to BMIs for the restoration of speech faculties in locked-in patients or patients with anarthria or dysarthria, to allow them to communicate again. Our characterization of how many neurons are necessary to achieve a given decoding accuracy is of utmost importance for planning BMI implantation.
Niccolò Calcini et al 2024 J. Neural Eng. 21 036008
Objective. Traditional quantification of fluorescence signals, such as ΔF/F, relies on ratiometric measures that necessitate a baseline for comparison, limiting their applicability in dynamic analyses. Our goal here is to develop a baseline-independent method for analyzing fluorescence data that fully exploits temporal dynamics, introducing a novel approach for dynamical super-resolution analysis, including at subcellular resolution. Approach. We introduce ARES (Autoregressive RESiduals), a novel method that leverages the temporal aspect of fluorescence signals. By focusing on the quantification of residuals following linear autoregression, ARES obviates the need for a predefined baseline, enabling a more nuanced analysis of signal dynamics. Main result. We delineate the foundational attributes of ARES, illustrating its capability to enhance both the spatial and temporal resolution of calcium fluorescence activity beyond the conventional ratiometric measure (ΔF/F). Additionally, we demonstrate ARES's utility in elucidating intracellular calcium dynamics through the detailed observation of calcium wave propagation within a dendrite. Significance. ARES stands out as a robust and precise tool for the quantification of fluorescence signals, adept at analyzing both spontaneous and evoked calcium dynamics. Its ability to facilitate the subcellular localization of calcium signals and the spatiotemporal tracking of calcium dynamics, where traditional ratiometric measures falter, underscores its potential to revolutionize baseline-independent analyses in the field.
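The core idea, quantifying the residuals left after linear autoregression rather than a baseline-referenced ratio, can be sketched with a first-order model: a predictable (slowly decaying) trace leaves near-zero residuals, while abrupt activity shows up as large ones. The paper's actual model order and fitting procedure may differ:

```python
def ar1_residuals(x):
    """Fit an AR(1) model x[t] ~ phi * x[t-1] by least squares and return the
    prediction residuals. No baseline fluorescence value is needed: the signal
    is compared with its own one-step linear prediction."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi = num / den
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))]

# A noiseless exponentially decaying trace is an exact AR(1) process,
# so its residuals are essentially zero: nothing "unexpected" happened.
series = [1.0]
for _ in range(99):
    series.append(0.9 * series[-1])
resid = ar1_residuals(series)
```

On a real fluorescence trace, a calcium transient breaks the autoregressive prediction at its onset, so the residuals localize the event in time without ever defining an F0 baseline.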
Minseok Song et al 2024 J. Neural Eng. 21 036007
Objective. Transfer learning has become an important issue in the brain-computer interface (BCI) field, and studies on subject-to-subject transfer within the same dataset have been performed. However, few studies have addressed dataset-to-dataset transfer, including paradigm-to-paradigm transfer. In this study, we propose a signal alignment (SA) method for P300 event-related potential (ERP) signals that is intuitive, simple, computationally inexpensive, and usable for cross-dataset transfer learning. Approach. We proposed a linear SA that uses the P300's latency, amplitude scale, and reverse factor to transform signals. For evaluation, four datasets were introduced (two from conventional P300 Speller BCIs, one from a P300 Speller with face stimuli, and the last from a standard auditory oddball paradigm). Results. Whereas the standard approach without SA had an average precision (AP) score of 25.5%, our approach achieved a 35.8% AP score, and the proportion of subjects showing improvement was 36.0% on average. In particular, we confirmed that the Speller dataset with face stimuli was more comparable with the other datasets. Significance. We proposed a simple and intuitive way to align ERP signals that exploits the characteristics of ERP signals. The results demonstrate the feasibility of cross-dataset transfer learning, even between datasets with different paradigms.
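A toy version of such a linear alignment, one hypothetical reading of the latency, amplitude-scale, and reverse factors named in the abstract (the paper's actual transform and parameter estimation are not specified here), might look like:

```python
def align_erp(signal, latency_shift, amp_scale, reverse):
    """Hypothetical linear ERP alignment: shift the trace by a latency offset
    (in samples, zero-padding the end that opens up), rescale its amplitude,
    and optionally flip polarity ("reverse factor")."""
    sign = -1.0 if reverse else 1.0
    if latency_shift >= 0:
        shifted = signal[latency_shift:] + [0.0] * latency_shift
    else:
        shifted = [0.0] * (-latency_shift) + signal[:latency_shift]
    return [sign * amp_scale * v for v in shifted]

erp = [0.0, 0.5, 1.0, 0.5, 0.0]   # toy P300-like bump, peak at sample 2
aligned = align_erp(erp, latency_shift=1, amp_scale=2.0, reverse=False)
print(aligned)  # peak moved one sample earlier and doubled in amplitude
```

The appeal of such a transform for cross-dataset transfer is that it has only three interpretable parameters per dataset, so it is cheap to fit and hard to overfit compared with learned alignment networks.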
Yaru Liu et al 2024 J. Neural Eng. 21 036004
Objective. Brain–computer interface (BCI) systems with large, directly accessible instruction sets remain one of the difficulties in BCI research. Research on achieving high target resolution (100 targets) has not yet entered a rapid development stage, which is at odds with application requirements. Steady-state visual evoked potential (SSVEP) based BCIs have an advantage in terms of the number of targets, but the competitive mechanism between a target stimulus and its neighboring stimuli is a key challenge that prevents the target resolution from being improved significantly. Approach. In this paper, we reverse this competitive mechanism and propose a frequency spatial multiplexing method to produce more targets with a limited set of frequencies. In the proposed paradigm, we replicated each flicker stimulus as a 2 × 2 matrix and arranged the matrices of all frequencies in a tiled fashion to form the interaction interface. With different arrangements, we designed and tested three example paradigms with different layouts. Furthermore, we designed a graph neural network that distinguishes between targets of the same frequency by recognizing the different electroencephalography (EEG) response distribution patterns evoked by each target and its neighboring targets. Main results. Extensive experiments employing eleven subjects were performed to verify the validity of the proposed method. The average classification accuracies in the offline validation experiments for the three paradigms were 89.16%, 91.38%, and 87.90%, with information transfer rates (ITR) of 51.66, 53.96, and 50.55 bits/min, respectively. Significance. This study utilized the positional relationship between stimuli rather than circumventing the competing-response problem. Therefore, other state-of-the-art methods focusing on enhancing the efficiency of SSVEP detection can be used as a basis for the present method to achieve very promising improvements.
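ITR figures like those quoted above are conventionally computed with the Wolpaw formula, B = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)) bits per selection, scaled by selections per minute. The sketch below shows that computation for illustrative values of target count N, accuracy P, and trial length T; the paper's exact parameters are not given here:

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate for an N-target BCI.
    Valid for chance < accuracy <= 1."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# Perfect 2-class selection once per minute carries exactly 1 bit/min:
print(itr_bits_per_min(2, 1.0, 60.0))
# Chance-level accuracy carries no information, regardless of trial length:
print(itr_bits_per_min(4, 0.25, 4.0))
```

The formula makes the design trade explicit: adding targets raises log2 N but usually lowers P, so paradigms like the one above succeed only if the extra targets cost little accuracy.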
F Guerreiro Fernandes et al 2024 J. Neural Eng. 21 036005
Objective. Brain-computer interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics for a successful speech implant are largely unknown. We address this topic in a high-field blood oxygenation level dependent functional magnetic resonance imaging (fMRI) study, by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Approach. Twelve subjects conducted a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass support vector machine (SVM). Main results. Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than classification using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM-searchlight analysis revealed significant classification in the superior temporal gyrus and left planum temporale, in addition to the SMC. Significance. The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.
Rongqi Hong et al 2024 J. Neural Eng. 21 021002
Objective: Epilepsy is a complex disease spanning multiple scales, from ion channels in neurons to neuronal circuits across the entire brain. Over the past decades, computational models have been used to describe the pathophysiological activity of the epileptic brain from different aspects. Traditionally, each computational model can aid in optimizing therapeutic interventions, thereby providing a particular view from which to design strategies for treating epilepsy. As a result, most studies are concerned with generating specific models of the epileptic brain that can help us understand the particular machinery of the pathological state. Those specific models vary in complexity and biological accuracy, with system-level models often lacking biological detail. Approach: Here, we review various types of computational models of epilepsy and discuss their potential for different therapeutic approaches and scenarios, including drug discovery, surgical strategies, brain stimulation, and seizure prediction. We propose that an integrated approach with a unified modelling framework across multiple scales is needed to understand the epileptic brain. Our proposal is based on the recent increase in computational power, which has opened up the possibility of unifying those specific epileptic models into simulations with an unprecedented level of detail. Main results: A multi-scale epilepsy model can bridge the gap between biologically detailed models, used to address molecular and cellular questions, and brain-wide models based on abstract models which can account for complex neurological and behavioural observations. Significance: With these efforts, we move toward the next generation of epileptic brain models, capable of connecting cellular features, such as ion channel properties, with standard clinical measures such as seizure severity.
Joana Soldado-Magraner et al 2024 J. Neural Eng. 21 022001
Objective. Brain-computer interfaces (BCIs) are neuroprosthetic devices that allow for direct interaction between brains and machines. These types of neurotechnologies have recently experienced a strong drive in research and development, given, in part, that they promise to restore motor and communication abilities in individuals experiencing severe paralysis. While a rich literature analyzes the ethical, legal, and sociocultural implications (ELSCI) of these novel neurotechnologies, engineers, clinicians and BCI practitioners often do not have enough exposure to these topics. Approach. Here, we present the IEEE Neuroethics Framework, an international, multiyear, iterative initiative aimed at developing a robust, accessible set of considerations for diverse stakeholders. Main results. Using the framework, we provide practical examples of ELSCI considerations for BCI neurotechnologies. We focus on invasive technologies, and in particular, devices that are implanted intra-cortically for medical research applications. Significance. We demonstrate the utility of our framework in exposing a wide range of implications across different intra-cortical BCI technology modalities and conclude with recommendations on how to utilize this knowledge in the development and application of ethical guidelines for BCI neurotechnologies.
C J H Rikhof et al 2024 J. Neural Eng. 21 021001
Objective. The incidence of stroke is rising, leading to an increased demand for rehabilitation services. Literature has consistently shown that early and intensive rehabilitation is beneficial for stroke patients. Robot-assisted devices have been extensively studied in this context, as they have the potential to increase the frequency of therapy sessions and thereby the intensity. Robot-assisted systems can be combined with electrical stimulation (ES) to further enhance muscle activation and patient compliance. The objective of this study was to review the effectiveness of ES combined with all types of robot-assisted technology for lower extremity rehabilitation in stroke patients. Approach. A thorough search of peer-reviewed articles was conducted. The quality of the included studies was assessed using a modified version of the Downs and Black checklist. Relevant information regarding the interventions, devices, study populations, and more was extracted from the selected articles. Main results. A total of 26 articles were included in the review, with 23 of them scoring at least fair on methodological quality. The analyzed devices could be categorized into two main groups: cycling combined with ES and robots combined with ES. Overall, all the studies demonstrated improvements in body function and structure, as well as activity level, as per the International Classification of Functioning, Disability, and Health model. Half of the studies in this review showed superiority of training with the combination of robot and ES over robot training alone or over conventional treatment. Significance. The combination of robot-assisted technology with ES is gaining increasing interest in stroke rehabilitation. However, the studies identified in this review present challenges in terms of comparability due to variations in outcome measures and intervention protocols. 
Future research should focus on actively involving and engaging patients in executing movements and strive for standardization in outcome values and intervention protocols.
Zachary T Sanger et al 2024 J. Neural Eng. 21 012001
Deep brain stimulation (DBS) using Medtronic's Percept™ PC implantable pulse generator is FDA-approved for treating Parkinson's disease (PD), essential tremor, dystonia, obsessive compulsive disorder, and epilepsy. Percept™ PC enables simultaneous recording of neural signals from the same lead used for stimulation. Many Percept™ PC sensing features were built with PD patients in mind, but these features are potentially useful to refine therapies for many different disease processes. When starting our ongoing epilepsy research study, we found it difficult to find detailed descriptions about these features and have compiled information from multiple sources to understand the device as a tool, particularly for use in patients other than those with PD. Here we provide a tutorial for scientists and physicians interested in using Percept™ PC's features and provide examples of how neural time series data is often represented and saved. We address characteristics of the recorded signals and discuss Percept™ PC hardware and software capabilities in data pre-processing, signal filtering, and DBS lead performance. We explain the power spectrum of the data and how it is shaped by the filter response of Percept™ PC as well as the aliasing of the stimulation due to digitally sampling the data. We present Percept™ PC's ability to extract biomarkers that may be used to optimize stimulation therapy. We show how differences in lead type affect noise characteristics of the implanted leads from seven epilepsy patients enrolled in our clinical trial. Percept™ PC has sufficient signal-to-noise ratio, sampling capabilities, and stimulus artifact rejection for neural activity recording. Limitations in sampling rate, potential artifacts during stimulation, and shortening of battery life when monitoring neural activity at home were observed. 
Despite these limitations, Percept™ PC demonstrates potential as a useful tool for recording neural activity in order to optimize stimulation therapies to personalize treatment.
Khaled M Taghlabi et al 2024 J. Neural Eng. 21 011001
Peripheral nerve interfaces (PNIs) are electrical systems designed to integrate with peripheral nerves in patients, such as following central nervous system (CNS) injuries to augment or replace CNS control and restore function. We review the literature for clinical trials and studies containing clinical outcome measures to explore the utility of human applications of PNIs. We discuss the various types of electrodes currently used for PNI systems and their functionalities and limitations. We discuss important design characteristics of PNI systems, including biocompatibility, resolution and specificity, efficacy, and longevity, to highlight their importance in the current and future development of PNIs. The clinical outcomes of PNI systems are also discussed. Finally, we review relevant PNI clinical trials that were conducted, up to the present date, to restore the sensory and motor function of upper or lower limbs in amputees, spinal cord injury patients, or intact individuals and describe their significant findings. This review highlights the current progress in the field of PNIs and serves as a foundation for future development and application of PNI systems.
Ilya Kolb et al 2019 J. Neural Eng. 16 046003
Objective. Intracellular patch-clamp electrophysiology, one of the most ubiquitous, high-fidelity techniques in biophysics, remains laborious and low-throughput. While previous efforts have succeeded at automating some steps of the technique, here we demonstrate a robotic 'PatcherBot' system that can perform many patch-clamp recordings sequentially, fully unattended. Approach. Comprehensive automation is accomplished by outfitting the robot with machine vision, and cleaning pipettes instead of manually exchanging them. Main results. The PatcherBot can obtain data at a rate of 16 cells per hour and work with no human intervention for up to 3 h. We demonstrate the broad applicability and scalability of this system by performing hundreds of recordings in tissue culture cells and mouse brain slices with no human supervision. Using the PatcherBot, we also discovered that pipette cleaning can be improved by a factor of three. Significance. The system is potentially transformative for applications that depend on many high-quality measurements of single cells, such as drug screening, protein functional characterization, and multimodal cell type investigations.
Alborz Rezazadeh Sereshkeh et al 2019 J. Neural Eng. 16 016005
Objective. Most brain–computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS) require that users perform mental tasks such as motor imagery, mental arithmetic, or music imagery to convey a message or to answer simple yes or no questions. These cognitive tasks usually have no direct association with the communicative intent, which makes them difficult for users to perform. Approach. In this paper, a 3-class intuitive BCI is presented which enables users to directly answer yes or no questions by covertly rehearsing the word 'yes' or 'no' for 15 s. The BCI also admits an equivalent duration of unconstrained rest which constitutes the third discernable task. Twelve participants each completed one offline block and six online blocks over the course of two sessions. The mean value of the change in oxygenated hemoglobin concentration during a trial was calculated for each channel and used to train a regularized linear discriminant analysis (RLDA) classifier. Main results. By the final online block, nine out of 12 participants were performing above chance (p < 0.001 using the binomial cumulative distribution), with a 3-class accuracy of 83.8% ± 9.4%. Even when considering all participants, the average online 3-class accuracy over the last three blocks was 64.1 % ± 20.6%, with only three participants scoring below chance (p < 0.001). For most participants, channels in the left temporal and temporoparietal cortex provided the most discriminative information. Significance. To our knowledge, this is the first report of an online 3-class imagined speech BCI. Our findings suggest that imagined speech can be used as a reliable activation task for selected users for development of more intuitive BCIs for communication.
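The above-chance criterion used here (p < 0.001 under the binomial cumulative distribution, chance = 1/3 for three classes) can be reproduced with a short stdlib calculation. The trial count below is hypothetical, not taken from the study:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for
    scoring k or more trials correct purely by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def above_chance_threshold(n_trials, n_classes=3, alpha=0.001):
    """Smallest number of correct trials whose chance probability is < alpha."""
    p = 1.0 / n_classes
    for k in range(n_trials + 1):
        if binom_sf(k, n_trials, p) < alpha:
            return k
    return None

# Hypothetical session of 60 three-class trials:
thr = above_chance_threshold(60)
print(f"{thr} / 60 correct needed for p < 0.001")
```

Any participant whose correct-trial count reaches the returned threshold would be deemed above chance under this test.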
L Nathan Perkins et al 2018 J. Neural Eng. 15 066002
Objective. Optical techniques for recording and manipulating neural activity have traditionally been constrained to superficial brain regions due to light scattering. New techniques are needed to extend optical access to large 3D volumes in deep brain areas, while retaining local connectivity. Approach. We have developed a method to implant bundles of hundreds or thousands of optical microfibers, each with a diameter of 8 μm. During insertion, each fiber moves independently, following a path of least resistance. The fibers achieve near total internal reflection, enabling optical interfacing with the tissue near each fiber aperture. Main results. At a depth of 3 mm, histology shows fibers consistently splay over 1 mm in diameter throughout the target region. Immunohistochemical staining after chronic implants reveals neurons in close proximity to the fiber tips. Models of photon fluence indicate that fibers can be used as a stimulation light source to precisely activate distinct patterns of neurons by illuminating a subset of fibers in the bundle. By recording fluorescent beads diffusing in water, we demonstrate the recording capability of the fibers. Significance. Our histology, modeling and fluorescent bead recordings suggest that the optical microfibers may provide a minimally invasive, stable, bidirectional interface for recording or stimulating genetic probes in deep brain regions—a hyper-localized form of fiber photometry.
Christine A Edwards et al 2018 J. Neural Eng. 15 066003
Objective. Stereotactic frame systems are the gold-standard for stereotactic surgeries, such as implantation of deep brain stimulation (DBS) devices for treatment of medically resistant neurologic and psychiatric disorders. However, frame-based systems require that the patient is awake with a stereotactic frame affixed to their head for the duration of the surgical planning and implantation of the DBS electrodes. While frameless systems are increasingly available, a reusable re-attachable frame system provides unique benefits. As such, we created a novel reusable MRI-compatible stereotactic frame system that maintains clinical accuracy through the detachment and reattachment of its stereotactic devices used for MRI-guided neuronavigation. Approach. We designed a reusable arc-centered frame system that includes MRI-compatible anchoring skull screws for detachment and re-attachment of its stereotactic devices. We validated the stability and accuracy of our system through phantom, in vivo mock-human porcine DBS-model and human cadaver testing. Main results. Phantom testing achieved a root mean square error (RMSE) of 0.94 ± 0.23 mm between the ground truth and the frame-targeted coordinates; and achieved an RMSE of 1.11 ± 0.40 mm and 1.33 ± 0.38 mm between the ground truth and the CT- and MRI-targeted coordinates, respectively. In vivo and cadaver testing achieved a combined 3D Euclidean localization error of 1.85 ± 0.36 mm (p < 0.03) between the pre-operative MRI-guided placement and the post-operative CT-guided confirmation of the DBS electrode. Significance. Our system demonstrated consistent clinical accuracy that is comparable to conventional frame and frameless stereotactic systems. Our frame system is the first to demonstrate accurate relocation of stereotactic frame devices during in vivo MRI-guided DBS surgical procedures. 
As such, this reusable and re-attachable MRI-compatible system is expected to enable more complex, chronic neuromodulation experiments, and lead to a clinically available re-attachable frame that is expected to decrease patient discomfort and costs of DBS surgery.
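The accuracy metrics reported above, RMSE between planned and achieved coordinates and per-electrode 3D Euclidean localization error, reduce to a short computation. The coordinates below are invented for illustration:

```python
import numpy as np

def euclidean_errors(targets, measured):
    """Per-point 3D Euclidean distance between planned and measured coordinates."""
    return np.linalg.norm(np.asarray(targets) - np.asarray(measured), axis=1)

def rmse(targets, measured):
    """Root-mean-square of the per-point Euclidean errors."""
    return float(np.sqrt(np.mean(euclidean_errors(targets, measured) ** 2)))

# Hypothetical planned vs. post-operative electrode coordinates (mm):
planned  = np.array([[10.0, 20.0, 30.0], [12.0, 18.0, 29.0]])
observed = np.array([[10.5, 20.2, 30.1], [11.4, 18.3, 29.5]])
print(euclidean_errors(planned, observed))  # per-electrode error (mm)
print(rmse(planned, observed))              # aggregate RMSE (mm)
```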
P Senn et al 2018 J. Neural Eng. 15 056018
Objective. Cochlear implants, while providing significant benefits to recipients, remain limited due to broad neural activation. Focussed multipolar stimulation (FMP) is an advanced stimulation strategy that uses multiple current sources to produce highly focussed patterns of neural excitation in order to overcome these shortcomings. Approach. This report presents single-source multipolar stimulation (SSMPS), a novel form of stimulation based on a single current source and a passive current divider. Compared to conventional FMP with multiple current sources, SSMPS can be implemented as a modular addition to conventional (i.e. single) current source stimulation systems facilitating charge balance within the cochlea. As with FMP, SSMPS requires the determination of a transimpedance matrix to allow for focusing of the stimulation. The first part of this study therefore investigated the effects of varying the probe stimulus (e.g. current level and pulse width) on the measurement of the transimpedance matrix. SSMPS was then studied using in vitro based measurements of voltages at non-stimulated electrodes along an electrode array in normal saline. The voltage reduction with reference to monopolar stimulation was compared to tripolar and common ground stimulation, two clinically established stimulation modes. Finally, a proof of principle in vivo test of SSMPS in a feline model was performed. Main results. A probe stimulus of at least 40 nC is required to reliably measure the transimpedance matrix. In vitro stimulation using SSMPS resulted in a significantly greater voltage reduction compared to monopolar, tripolar and common ground stimulation. Interestingly, matching measurement and stimulation parameters did not lead to an improved focussing performance. Compared to monopolar stimulation, SSMPS resulted in reduced spread of neural activity in the inferior colliculus, albeit with increased thresholds. Significance. 
The present study demonstrates that SSMPS successfully limits the broadening of the excitatory field along the electrode array and a subsequent reduction in the spread of neural excitation.
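The focusing step shared by FMP and SSMPS can be pictured as a linear inverse problem: given a measured transimpedance matrix Z (voltage at each electrode per unit current on each electrode), the current weights that concentrate voltage at one electrode while nulling it at the others solve Z i = v_target. This is a generic sketch of that principle, with a made-up 4-electrode matrix, not the paper's implementation:

```python
import numpy as np

# Hypothetical 4-electrode transimpedance matrix (kOhm). Diagonal dominance
# reflects that each electrode sees its own current most strongly.
Z = np.array([
    [1.00, 0.40, 0.15, 0.05],
    [0.40, 1.00, 0.40, 0.15],
    [0.15, 0.40, 1.00, 0.40],
    [0.05, 0.15, 0.40, 1.00],
])

# Target field: unit voltage at electrode 1, zero elsewhere (a "focused" pattern).
v_target = np.array([0.0, 1.0, 0.0, 0.0])

# Current weights for the multipolar stimulation pattern.
i_weights = np.linalg.solve(Z, v_target)
print(i_weights)
print(Z @ i_weights)  # reproduces the focused target voltage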
Wu et al
Objective. In the specific use of electromyogram (EMG) driven prosthetics, the user's disability reduces the space available for the electrode array. We propose a framework for EMG decomposition adapted to the condition of few channels (fewer than 30 observations), which can elevate the potential of prosthetics in terms of cost and applicability. Approach. The new framework contains a peel-off approach, a refining strategy for motor unit spike trains (MUSTs) and motor unit action potentials (MUAPs), and a re-subtracting strategy to adapt the framework to few-channel environments. Simulated EMG signals were generated to test the framework. In addition, we quantify and analyze the effect of the strategies used in the framework. Main results. The results show that the new algorithm has an average improvement of 19.97% in the number of motor units (MUs) identified compared to the control algorithm. Quantitative analysis of the strategies shows that the re-subtracting and refining strategies can effectively improve the performance of the framework under the condition of few channels. Significance. These results demonstrate that the new framework can be applied to few-channel conditions, providing an optimization space for neural interface design in cost and user adaptation.
Bi et al
Objective. Electroencephalography (EEG) has been widely used in motor imagery (MI) research by virtue of its high temporal resolution and low cost, but its low spatial resolution is still a major criticism. The EEG source localization (ESL) algorithm effectively improves the spatial resolution of the signal by inverting the scalp EEG to extrapolate the cortical source signal, thus enhancing the classification accuracy. Approach. To address the problem of poor spatial resolution of EEG signals, this paper proposed a sub-band source chaotic entropy (SSCE) feature extraction method based on sub-band ESL. Firstly, the preprocessed EEG signals were filtered into 8 sub-bands. Each sub-band signal was then source localized to reveal the activation patterns of specific frequency bands of the EEG signals and the activities of specific brain regions in the MI task. Then, approximate entropy (ApEn), fuzzy entropy (FE) and permutation entropy (PE) were extracted from the source signal as features to quantify the complexity and randomness of the signal. Finally, the classification of different MI tasks was achieved using a support vector machine (SVM). Main results. The proposed method was validated on two MI public datasets (BCI competition III IVa, BCI competition IV 2a) and the results showed that the classification accuracies were higher than those of existing methods. Significance. The spatial resolution of the signal was improved by sub-band EEG source localization, offering a new approach for EEG MI research.
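Of the three entropy features named (ApEn, FE, PE), permutation entropy is the simplest to sketch: it histograms the ordinal patterns of length-m delayed subsequences of the signal. The implementation and parameter choices below are illustrative, not the paper's:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: the Shannon entropy
    of ordinal-pattern frequencies among length-m delayed subsequences,
    scaled to [0, 1] by the maximum log(m!)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i : i + m * delay : delay]
        key = tuple(np.argsort(window))           # the ordinal pattern
        patterns[key] = patterns.get(key, 0) + 1
    probs = np.array(list(patterns.values())) / n
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(m))

rng = np.random.default_rng(0)
print(permutation_entropy(np.sin(np.linspace(0, 8 * np.pi, 500))))  # regular signal: low
print(permutation_entropy(rng.standard_normal(500)))                # noise: near 1
```

A regular oscillation concentrates on a few ordinal patterns (low entropy) while white noise visits all m! patterns roughly equally (entropy near 1), which is what makes PE a useful complexity feature.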
M Asjid Tanveer et al 2024 J. Neural Eng.
Objective: This study develops a deep learning method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment. It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right) and identifying the activation status of hearing aid noise reduction (NR) algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population.
Method: Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set hadn't seen, and intra-trial, where the testing set used unseen classification windows from trials where other segments were seen during training. The models were evaluated on EEG data from 31 participants with hearing impairment, listening to competing talkers amidst background noise.
Results: Using 1-second classification windows, DCNN models achieved accuracy (ACC) of 69.8%, 73.3% and 82.9% and area-under-curve (AUC) of 77.2%, 80.6% and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1% and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. Our DCNN models show good performance on short 1-second EEG samples, making them suitable for real-world applications.
Conclusion: Our DCNN models successfully addressed three tasks with short 1-second EEG windows from participants with hearing impairment, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks.
Significance: Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also promoting further exploration of alternative deep learning architectures and their potential constraints.
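The difference between the two training strategies comes down to whether test windows may come from trials that also contributed training windows. A minimal sketch of both splits (trial and window counts are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, windows_per_trial = 20, 30
trial_id = np.repeat(np.arange(n_trials), windows_per_trial)  # one id per window

# Inter-trial: hold out whole trials, so no test window shares a trial
# with any training window.
test_trials = rng.choice(n_trials, size=4, replace=False)
inter_test = np.isin(trial_id, test_trials)
inter_train = ~inter_test

# Intra-trial: hold out windows at random, so test windows can come from
# trials that also contributed training windows (risking inflated accuracy).
intra_test = rng.random(len(trial_id)) < 0.2
intra_train = ~intra_test

# The inter-trial split shares no trial ids across sets; the intra-trial one does.
print(np.intersect1d(trial_id[inter_train], trial_id[inter_test]))  # empty
print(len(np.intersect1d(trial_id[intra_train], trial_id[intra_test])))
```

The trial-id overlap in the intra-trial split is exactly the leakage the abstract identifies as the source of inflated results.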
Ariel Tankus et al 2024 J. Neural Eng. 21 036009
Objective. Our goal is to decode firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus, related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to characterize the number of thalamic neurons necessary for high-accuracy decoding. Approach. We intraoperatively recorded single neuron activity in the left Vim of eight neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning during production, perception and imagery of the five monophthongal vowel sounds. We utilized the Spade decoder, a machine learning algorithm that dynamically learns specific features of firing patterns and is based on sparse decomposition of the high-dimensional feature space. Main results. Spade outperformed all of the algorithms it was compared with for all three aspects of speech: production, perception and imagery, obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%) based on pooling together neurons across all patients. The accuracy was logarithmic in the number of neurons for all three aspects of speech. Regardless of the number of units employed, production yielded the highest accuracies, whereas perception and imagery were comparable to each other. Significance. Our research renders single neuron activity in the left Vim a promising source of inputs to BMIs for restoration of speech faculties for locked-in patients or patients with anarthria or dysarthria, allowing them to communicate again. Our characterization of how many neurons are necessary to achieve a certain decoding accuracy is of utmost importance for planning BMI implantation.
Niccolò Calcini et al 2024 J. Neural Eng. 21 036008
Objective. Traditional quantification of fluorescence signals, such as ΔF/F, relies on ratiometric measures that necessitate a baseline for comparison, limiting their applicability in dynamic analyses. Our goal here is to develop a baseline-independent method for analyzing fluorescence data that fully exploits temporal dynamics to introduce a novel approach for dynamical super-resolution analysis, including at subcellular resolution. Approach. We introduce ARES (Autoregressive RESiduals), a novel method that leverages the temporal aspect of fluorescence signals. By focusing on the quantification of residuals following linear autoregression, ARES obviates the need for a predefined baseline, enabling a more nuanced analysis of signal dynamics. Main results. We delineate the foundational attributes of ARES, illustrating its capability to enhance both the spatial and temporal resolution of calcium fluorescence activity beyond the conventional ratiometric measure (ΔF/F). Additionally, we demonstrate ARES's utility in elucidating intracellular calcium dynamics through the detailed observation of calcium wave propagation within a dendrite. Significance. ARES stands out as a robust and precise tool for the quantification of fluorescence signals, adept at analyzing both spontaneous and evoked calcium dynamics. Its ability to facilitate the subcellular localization of calcium signals and the spatiotemporal tracking of calcium dynamics, where traditional ratiometric measures falter, underscores its potential to revolutionize baseline-independent analyses in the field.
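The core idea, regressing each fluorescence sample on its recent past and keeping the residual as a baseline-free activity measure, can be sketched with a plain AR(1) least-squares fit. This is a generic illustration of autoregressive residuals on a toy trace, not ARES itself:

```python
import numpy as np

def ar_residuals(f, order=1):
    """Fit a linear autoregressive model to a trace by least squares and
    return the residuals: the part of each sample not predicted from its
    recent past. No baseline (e.g. an F0 estimate) is needed."""
    f = np.asarray(f, dtype=float)
    # Design matrix of lagged samples plus an intercept.
    X = np.column_stack([f[order - k - 1 : len(f) - k - 1] for k in range(order)])
    X = np.column_stack([X, np.ones(len(X))])
    y = f[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

# Toy trace: a slow linear drift with an abrupt transient at sample 300.
t = np.arange(1000)
trace = 0.01 * t + np.exp(-np.maximum(t - 300, 0) / 20.0) * (t >= 300)
res = ar_residuals(trace)
print(int(np.argmax(np.abs(res))))  # the residual peaks at the transient onset
```

The drift is fully predicted by the AR model and so vanishes from the residuals, while the unpredictable transient stands out — the baseline-independence the abstract describes.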
Ethan Eddy et al 2024 J. Neural Eng.
Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but in highly constrained offline settings. Real-world, online systems are subject to "confounding factors" (i.e., factors that hinder the real-world robustness of myoelectric control that are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and (3) gesture elicitation speed, a newly identified confound faced by discrete systems. Results from four different discrete myoelectric control architectures, (1) majority-vote LDA, (2) dynamic time warping, (3) an LSTM network trained with cross-entropy loss, and (4) an LSTM network trained with contrastive learning, show that classification accuracy is significantly degraded (p < 0.05) as a result of each of these confounds. 
This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
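Of the four architectures, dynamic time warping is the template-matching baseline and bears directly on the gesture-elicitation-speed confound, since it aligns sequences of different lengths. A textbook sketch (the toy "gestures" are invented):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    tolerant of differences in elicitation speed (sequence length)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, np.pi, 50))   # canonical gesture envelope
fast = np.sin(np.linspace(0, np.pi, 25))       # same gesture, elicited quickly
other = np.zeros(50)                            # a different gesture
print(dtw_distance(template, fast), dtw_distance(template, other))
```

Despite being half the length, the fast repetition stays much closer to the template than the different gesture, illustrating why DTW is a natural candidate for speed-robust discrete control.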
Yipeng Zhang et al 2024 J. Neural Eng.
Objective: This study aims to develop and validate an end-to-end software platform, PyHFO, that streamlines the application of deep learning methodologies in detecting neurophysiological biomarkers for epileptogenic zones from EEG recordings.

Methods: We introduced PyHFO, which implements time-efficient HFO detection algorithms such as the short-term energy (STE) and Montreal Neurological Institute and Hospital (MNI) detectors. It also incorporates deep learning models for artifact rejection and for classifying HFOs with spikes, designed to operate efficiently on standard computer hardware. 

Main results: The validation of PyHFO was conducted on three separate datasets: the first comprised solely of grid/strip electrodes, the second a combination of grid/strip and depth electrodes, and the third derived from rodent studies, which sampled the neocortex and hippocampus using depth electrodes. PyHFO demonstrated an ability to handle datasets efficiently, with optimization techniques enabling it to achieve speeds up to 50 times faster than traditional HFO detection applications. Users have the flexibility to employ our pre-trained deep learning model or use their EEG data for custom model training.

Significance: PyHFO successfully bridges the computational challenge faced in applying deep learning techniques to EEG data analysis in epilepsy studies, presenting a feasible solution for both clinical and research settings. By offering a user-friendly and computationally efficient platform, PyHFO paves the way for broader adoption of advanced EEG data analysis tools in clinical practice and fosters potential for large-scale research collaborations.
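The short-term energy idea behind the STE detector can be conveyed in a few lines. The sketch below is a schematic of the general approach (sliding-window RMS energy plus a statistical threshold), not PyHFO's actual implementation; real detectors band-pass filter the signal into the HFO band first, and all parameter values here are illustrative.

```python
import numpy as np

def ste_detect(x, fs, win_ms=3.0, thresh_sd=5.0, min_dur_ms=6.0):
    """Schematic short-term-energy detector on an already band-passed signal x.

    Flags samples whose sliding-window RMS exceeds mean + thresh_sd * SD,
    then keeps contiguous runs lasting at least min_dur_ms.
    """
    win = max(1, int(fs * win_ms / 1000))
    energy = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))
    above = energy > energy.mean() + thresh_sd * energy.std()
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):  # trailing False closes runs
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs * 1000 >= min_dur_ms:
                events.append((start, i))
            start = None
    return events
```

The vectorized energy computation is also a hint at where PyHFO's reported speedups can come from: array-level operations replace per-sample loops over long intracranial recordings.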
Minseok Song et al 2024 J. Neural Eng. 21 036007
Objective. Transfer learning has become an important issue in the brain-computer interface (BCI) field, and studies on subject-to-subject transfer within the same dataset have been performed. However, few studies have addressed dataset-to-dataset transfer, including paradigm-to-paradigm transfer. In this study, we propose a signal alignment (SA) method for P300 event-related potential (ERP) signals that is intuitive, simple, computationally inexpensive, and applicable to cross-dataset transfer learning. Approach. We proposed a linear SA that uses the P300's latency, amplitude scale, and reverse factor to transform signals. For evaluation, four datasets were introduced (two from conventional P300 Speller BCIs, one from a P300 Speller with face stimuli, and the last from a standard auditory oddball paradigm). Results. Whereas the standard approach without SA had an average precision (AP) score of 25.5%, our SA approach achieved an AP score of 35.8%, and the proportion of subjects showing improvement was 36.0% on average. In particular, we confirmed that the Speller dataset with face stimuli was more compatible with the other datasets. Significance. We proposed a simple and intuitive way to align ERP signals that exploits the characteristics of ERP signals. The results demonstrated the feasibility of cross-dataset transfer learning, even between datasets with different paradigms.
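The abstract describes the SA as a linear transform built from a latency factor, an amplitude scale, and a reverse factor, without giving the exact formula. A minimal sketch of one plausible form of such a transform (parameter names and the circular-shift latency correction are assumptions for illustration, not the paper's definition):

```python
import numpy as np

def align_erp(epoch, latency_shift, scale, reverse=False):
    """Linearly align a single-channel ERP epoch toward a target dataset.

    latency_shift : samples to shift so the P300 peaks line up (illustrative)
    scale         : amplitude ratio between source and target datasets
    reverse       : flip polarity when the component sign is inverted
    """
    shifted = np.roll(epoch, latency_shift)  # crude latency correction
    aligned = scale * shifted
    return -aligned if reverse else aligned
```

Because every operation is linear and per-epoch, a transform of this shape adds essentially no computational cost on top of the downstream classifier, which matches the paper's emphasis on simplicity.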
Martin Wimpff et al 2024 J. Neural Eng.
Objective
The objective of this study is to investigate the application of various channel attention mechanisms within the domain of brain-computer interface (BCI) for motor imagery decoding. Channel attention mechanisms can be seen as a powerful evolution of spatial filters traditionally used for motor imagery decoding. This study systematically compares such mechanisms by integrating them into a lightweight architecture framework to evaluate their impact. 
Approach
We carefully construct a straightforward and lightweight baseline architecture designed to seamlessly integrate different channel attention mechanisms. This contrasts with previous works, which typically investigate only one attention mechanism and usually build very complex, sometimes nested architectures. Our framework allows us to evaluate and compare the impact of different attention mechanisms under the same circumstances. The easy integration of different channel attention mechanisms, together with the low computational complexity, enables us to conduct a wide range of experiments on four datasets to thoroughly assess the effectiveness of the baseline model and the attention mechanisms.
Results
Our experiments demonstrate the strength and generalizability of our architecture framework, as well as how channel attention mechanisms can improve performance while maintaining the small memory footprint and low computational complexity of our baseline architecture.
Significance
Our architecture emphasizes simplicity, offering easy integration of channel attention mechanisms, while maintaining a high degree of generalizability across datasets, making it a versatile and efficient solution for EEG motor imagery decoding within brain-computer interfaces.
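One widely used family of channel attention is the squeeze-and-excitation gate, which rescales each EEG channel (or feature channel) by a learned score. The numpy sketch below shows only the forward pass of such a gate; in practice the projection weights `w1` and `w2` are learned end-to-end, and this is a generic illustration rather than the specific mechanisms compared in the paper.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention (forward pass only).

    x  : (channels, time) EEG feature map
    w1 : (hidden, channels) squeeze projection (assumed pre-trained)
    w2 : (channels, hidden) excitation projection (assumed pre-trained)
    Returns x reweighted by per-channel attention scores in (0, 1).
    """
    squeeze = x.mean(axis=1)                       # global average pool per channel
    hidden = np.maximum(0.0, w1 @ squeeze)         # ReLU bottleneck
    scores = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates
    return x * scores[:, None], scores
```

Seen this way, the gate is a data-dependent generalization of a fixed spatial filter: the per-channel weights are recomputed for every trial instead of being learned once and frozen.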
Parisa Sarikhani et al 2024 J. Neural Eng.
(Objective). Vagus nerve stimulation (VNS) is being investigated as a potential therapy for cardiovascular diseases, including heart failure, cardiac arrhythmia, and hypertension. The lack of a systematic approach for controlling and tuning the VNS parameters poses a significant challenge. Closed-loop VNS strategies combined with artificial intelligence (AI) approaches offer a framework for systematically learning and adapting the optimal stimulation parameters. In this computational study, we presented an interactive AI framework using reinforcement learning (RL) for automated, data-driven design of closed-loop VNS control systems. (Approach). Multiple simulation environments with a standard application programming interface were developed to facilitate the design and evaluation of automated data-driven closed-loop VNS control systems. These environments simulate the hemodynamic response to multi-location VNS using biophysics-based computational models of healthy and hypertensive rat cardiovascular systems in resting and exercise states. We designed and implemented RL-based closed-loop VNS control frameworks for controlling heart rate (HR) and mean arterial pressure (MAP) in a set-point tracking task. Our experimental design included two approaches: a general policy using deep RL algorithms, and a sample-efficient adaptive policy using probabilistic inference for learning and control (PILCO). (Main results). Our simulation results demonstrated the capability of the closed-loop RL-based approaches to learn optimal VNS control policies and to adapt to variations in the target set points and the underlying dynamics of the cardiovascular system. Our findings highlighted the trade-off between sample efficiency and generalizability, providing insights for proper algorithm selection.
Finally, we demonstrated that transfer learning improves the sample efficiency of deep RL algorithms, allowing the development of more efficient and personalized closed-loop VNS systems. (Significance). We demonstrated the capability of RL-based closed-loop VNS systems. Our approach provided a systematic, adaptable framework for learning control strategies without requiring prior knowledge of the underlying dynamics.
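The set-point tracking idea can be shown with a deliberately tiny example. The sketch below uses tabular Q-learning on a made-up scalar heart-rate plant; it is in no way the paper's PILCO or deep-RL implementation (those operate on biophysics-based cardiovascular models), only an illustration of learning a control policy toward a target without a model of the dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(hr, action):
    """Toy plant: the stimulation action nudges heart rate by a fixed amount."""
    return hr + 2.0 * action  # action in {-1, 0, +1}

def train(target=70.0, episodes=300):
    """Tabular Q-learning for set-point tracking of a scalar heart-rate state."""
    q = np.zeros((21, 3))  # states: tracking error clipped to [-10, 10] bpm
    for _ in range(episodes):
        hr = rng.uniform(50.0, 90.0)
        for _ in range(30):
            s = int(np.clip(round(hr - target), -10, 10)) + 10
            # epsilon-greedy action selection
            a = int(rng.integers(3)) if rng.random() < 0.2 else int(q[s].argmax())
            hr2 = step(hr, a - 1)
            r = -abs(hr2 - target)  # reward: negative tracking error
            s2 = int(np.clip(round(hr2 - target), -10, 10)) + 10
            q[s, a] += 0.1 * (r + 0.9 * q[s2].max() - q[s, a])
            hr = hr2
    return q
```

Even this toy version hints at the sample-efficiency issue the abstract raises: thousands of interaction steps are needed for a one-dimensional plant, which is why model-based methods like PILCO are attractive when each interaction with a physiological system is expensive.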
Yaru Liu et al 2024 J. Neural Eng. 21 036004
Objective. Brain–computer interface (BCI) systems with large directly accessible instruction sets remain one of the difficulties in BCI research. Research on achieving high target resolution (100) has not yet entered a rapid development stage, which is at odds with application requirements. Steady-state visual evoked potential (SSVEP) based BCIs have an advantage in terms of the number of targets, but the competitive mechanism between the target stimulus and its neighboring stimuli is a key challenge that prevents the target resolution from being improved significantly. Approach. In this paper, we reverse the competitive mechanism and propose a frequency spatial multiplexing method to produce more targets with limited frequencies. In the proposed paradigm, we replicated each flicker stimulus as a 2 × 2 matrix and arranged the matrices of all frequencies in a tiled fashion to form the interaction interface. With different arrangements, we designed and tested three example paradigms with different layouts. Further, we designed a graph neural network that distinguishes between targets of the same frequency by recognizing the different electroencephalography (EEG) response distribution patterns evoked by each target and its neighboring targets. Main results. Extensive experimental studies employing eleven subjects were performed to verify the validity of the proposed method. The average classification accuracies in the offline validation experiments for the three paradigms were 89.16%, 91.38%, and 87.90%, with information transfer rates (ITRs) of 51.66, 53.96, and 50.55 bits/min, respectively. Significance. This study utilized the positional relationship between stimuli rather than circumventing the competing-response problem. Therefore, other state-of-the-art methods focused on enhancing the efficiency of SSVEP detection can serve as a basis for the present method to achieve very promising improvements.
F Guerreiro Fernandes et al 2024 J. Neural Eng. 21 036005
Objective. Brain-computer interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics of a successful speech implant are largely unknown. We address this topic in a high-field blood oxygenation level dependent functional magnetic resonance imaging (fMRI) study, by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Approach. Twelve subjects conducted a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass support vector machine (SVM). Main results. Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM-searchlight analysis revealed significant classification in the superior temporal gyrus and left planum temporale, in addition to the SMC. Significance. The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.