What is BCI?
A Beginner’s Guide to Brain-Computer Interface and Convolutional Neural Networks
The big picture of brain-computer interface and AI + Research papers
In-depth explanation of neural networks used with BCI
Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?
For some, it is a necessity for our survival: we would need to become cyborgs to stay relevant in an age of artificial intelligence.
Brain-Computer Interface (BCI):
devices that enable their users to interact with computers by means of brain activity alone, this activity generally being measured by electroencephalography (EEG).
Electroencephalography (EEG):
the physiological method of choice for recording the electrical activity generated by the brain, via electrodes placed on the scalp surface.
Functional magnetic resonance imaging (fMRI):
measures brain activity by detecting changes associated with blood flow.
Functional Near-Infrared Spectroscopy (fNIRS):
the use of near-infrared spectroscopy (NIRS) for the purpose of functional neuroimaging. Using fNIRS, brain activity is measured through hemodynamic responses associated with neuron behaviour.
Convolutional Neural Network (CNN):
a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data.
Visual cortex:
the part of the cerebral cortex that receives and processes sensory nerve impulses from the eyes.
Sarah Marsh, a Guardian news reporter, said: “Brain-computer interfaces (BCI) aren’t a new idea. Various forms of BCI are already available, from ones that sit on top of your head and measure brain signals to devices that are implanted into your brain tissue.” (source)
Most BCIs were initially developed for medical applications. According to Zaza Zuilhof, Lead Designer at Tellart, “Some 220,000 hearing impaired already benefit from cochlear implants, which translate audio signals into electrical pulses sent directly to their brains.” (source)
The article “The Brief History of Brain Computer Interfaces” gives us a lot of information about the history of BCI. Indeed, the article says: “In the 1970s, research on BCIs started at the University of California, which led to the emergence of the expression brain–computer interface. The focus of BCI research and development continues to be primarily on neuroprosthetics applications that can help restore damaged sight, hearing, and movement.
The mid-1990s marked the appearance of the first neuroprosthetic devices for humans. A BCI doesn’t read the mind accurately, but detects the smallest of changes in the energy radiated by the brain when you think in a certain way. A BCI recognizes specific energy/frequency patterns in the brain.
June 2004 marked a significant development in the field when Matthew Nagle became the first human to be implanted with a BCI, Cyberkinetics’s BrainGate™.
In December 2004, Jonathan Wolpaw and researchers at New York State Department of Health’s Wadsworth Center came up with a research report that demonstrated the ability to control a computer using a BCI. In the study, patients were asked to wear a cap that contained electrodes to capture EEG signals from the motor cortex — part of the cerebrum governing movement.
BCI has had a long history centered on control applications: cursors, paralyzed body parts, robotic arms, phone dialing, etc.
Recently Elon Musk entered the industry, announcing a $27 million investment in Neuralink, a venture with the mission to develop a BCI that improves human communication in light of AI. And Regina Dugan presented Facebook’s plans for a game changing BCI technology that would allow for more efficient digital communication.”
According to John Thomas, Tomasz Maszczyk, Nishant Sinha, Tilmann Kluge, and Justin Dauwels “A BCI system has four major components: signal acquisition, signal preprocessing, feature extraction, and classification.” (source)
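The four components form a simple processing chain. The sketch below is a toy illustration of that chain, not a real driver API: the function names, random data, and power threshold are all assumptions made for the example.

```python
import numpy as np

def acquire_signal(n_channels=8, n_samples=256):
    """Stand-in for signal acquisition: random data in place of a real EEG amplifier."""
    return np.random.randn(n_channels, n_samples)

def preprocess(signal):
    """Toy preprocessing: remove each channel's mean (DC offset)."""
    return signal - signal.mean(axis=1, keepdims=True)

def extract_features(signal):
    """Toy feature extraction: per-channel signal power (variance)."""
    return signal.var(axis=1)

def classify(features, threshold=1.0):
    """Toy classifier: thresholds mean power into one of two mental states."""
    return "active" if features.mean() > threshold else "rest"

command = classify(extract_features(preprocess(acquire_signal())))
print(command)
```

Each stage in a real system is far more involved, but the data always flows through these four steps in this order.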
Why does it matter?
According to Davide Valeriani, a post-doctoral researcher in brain-computer interfaces at the University of Essex, “The combination of humans and technology could be more powerful than artificial intelligence. For example, when we make decisions based on a combination of perception and reasoning, neurotechnologies could be used to improve our perception. This could help us in situations such as seeing a very blurry image from a security camera and having to decide whether to intervene or not.” (source)
What are these brain-computer interfaces actually capable of?
For Zaza Zuilhof, it depends on who you ask and whether or not you are willing to undergo surgery. “For the purpose of this thought experiment, let’s assume that healthy people will only use non-invasive BCIs, which don’t require surgery.
In that case, there are currently two main technologies, fMRI and EEG. The first requires a massive machine, but the second, with consumer headsets like Emotiv and Neurosky, has actually become available to a more general audience.” (source)
However, BCI can also be a promising interaction tool for healthy people, with potential applications in multimedia, VR, and video games, among many other fields.
Davide Valeriani said that “The EEG hardware is totally safe for the user, but records very noisy signals. Also, research labs have mainly focused on using it to understand the brain and to propose innovative applications, without any follow-up in commercial products so far… but that will change. (source)
Musk’s company is the latest. Its “neural lace” technology involves implanting electrodes in the brain to measure signals.
This would allow getting neural signals of much better quality than EEG — but it requires surgery. Recently, he stated that brain-computer interfaces are needed to confirm humans’ supremacy over artificial intelligence.” (source)
This technology is still risky. We built computers, so we know exactly how they work and how to “modify” them. We did not build our brains, however, and we still do not understand very well how they work, much less how to “invade” them safely and successfully. We have made great progress, but not enough yet.
How Your Brain Works Now, And What’s To Come
In simple terms, your brain is divided into two main sections:
- The limbic system
- The neocortex
The limbic system is responsible for our primal urges, as well as those related to survival, such as eating and reproducing. Our neocortex is the most advanced area, and it’s responsible for logical functions that make us good at languages, technology, business, and philosophy.
The human brain contains about 86 billion nerve cells called neurons, each individually linked to other neurons by connectors called axons and dendrites. Every time we think, move, or feel, neurons are at work: the brain generates a huge amount of neural activity. Basically, small electrical signals moving from neuron to neuron do the work.
There are many signals that can be used for BCI. These signals can be divided into two categories:
- Field potentials
- Spikes (the action potentials of individual neurons)
We can detect those signals, interpret them and use them to interact with a device.
According to Boris Reuderink, Machine Learning Consultant at Cortext, “One of the bigger problems in brain-computer interfaces is that the brain signals are weak and very variable. This is why it is difficult to train a classifier, and use it the next day, let alone use it on a different subject.” (source)
In order to insert Neural Lace, a tiny needle containing the rolled-up mesh is placed inside the skull. The mesh is then injected, unfurling upon injection to cover the brain.
Artificial intelligence and machine learning have received great attention for the development of BCI applications to solve difficult problems in several domains, in particular medicine and robotics. AI/ML has since become the most efficient tool for BCI systems. (source)
Let’s try to elaborate on these aspects a bit more below. Each of these aspects has its own field of research.
There are two ways of producing these brain signals: actively generating them (for example, by imagining a movement or focusing on a stimulus) or passively reading the brain’s spontaneous activity.
According to Sjoerd Lagarde, Software Engineer at Quintiq, “Actively generating signals has the advantage that signal detection is easier, since you have control over the stimuli; you know for example when they are presented. This is harder in the case where you are just reading brain-waves from the subject.”
There are different ways to detect brain signals. The most well known are EEG and fMRI, but there are others as well. EEG measures the electrical activity of the brain; fMRI measures blood flow in the brain.
Each of these methods has its own advantages and disadvantages. Some have better temporal resolution (they can detect brain activity as it happens), while others have better spatial resolution (they can pinpoint the location of activity).
The idea remains largely the same for other types of measuring techniques.
One of the issues we will find when dealing with brain data is that it tends to contain a lot of noise. When using EEG, for example, grinding of the teeth will show up in the data, as will eye movements. This noise needs to be filtered out.
The data can now be used for detecting actual signals. When the subject is actively generating signals, we are usually aware of the kind of signals we want to detect. One example is the P300 wave, a so-called event-related potential that shows up when an infrequent, task-relevant stimulus is presented. This wave appears as a large peak in your data, and you might try different techniques from machine learning to detect such peaks.
When you have detected the interesting signals in your data, you want to use them in some way that is helpful to someone. The subject could for example use the BCI to control a mouse by means of imagined movement.
One problem you will encounter here is that you need to use the data you receive from the subject as efficiently as possible, while keeping in mind that BCIs can make mistakes. Current BCIs are relatively slow and make mistakes once in a while (for instance, the computer thinks you imagined left-hand movement when in fact you imagined right-hand movement).” (source)
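The P300 detection idea described above can be illustrated on synthetic data. The sketch below is not real EEG: the sampling rate, the Gaussian bump standing in for the P300, and the epoch count are all assumptions. It shows the classic trick of averaging many stimulus-locked epochs so the noise cancels and the event-related peak near 300 ms emerges.

```python
import numpy as np

fs = 250                       # assumed sampling rate in Hz
t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch following each stimulus

rng = np.random.default_rng(42)
# Synthetic epochs: noise plus a small positive bump around 300 ms,
# standing in for the P300 response to a rare, task-relevant stimulus
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = rng.normal(0.0, 1.0, (40, t.size)) + p300

# Averaging across epochs suppresses the noise and reveals the event-related potential
erp = epochs.mean(axis=0)
peak_latency_ms = t[np.argmax(erp)] * 1000
print(f"ERP peak at ~{peak_latency_ms:.0f} ms")
```

In a real speller or similar BCI, a classifier would then decide, per epoch, whether that peak is present.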
In the case of the Neural Lace, it integrates itself with the human brain. It creates a perfect symbiosis between human and machine.
These two sections work symbiotically with one another. An AI layer or third interface could lie on top of them, plugging us into a very new and advanced world and giving us the ability to stay on par with our AI robot friends.
This connection could give us access to increased memory storage, amazing machine learning capabilities and yes, telepathic-type communication with someone else without the need to speak.
“You have a machine extension of yourself in the form of your phone and your computer and all your applications . . . by far you have more power, more capability than the President of the United States had 30 years ago,” Elon Musk
Types of BCI
According to Amit Ray, author of Compassionate Artificial Intelligence, “The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system.
Brain-computer interfaces can be classified into three main groups: invasive, semi-invasive, and non-invasive.
In invasive techniques, special devices that capture the data (brain signals) must be inserted directly into the human brain through critical surgery.
In semi-invasive techniques, devices are inserted inside the skull but rest on top of the brain.
In general, non-invasive devices are considered the safest and lowest-cost type. However, these devices can only capture “weaker” brain signals, due to the obstruction of the skull.
The detection of brain signals is achieved through electrodes placed on the scalp.
There are several ways to develop a non-invasive brain-computer interface, such as EEG (electroencephalography), MEG (magnetoencephalography), or MRT (magnetic resonance tomography). EEG-based brain-computer interfaces are the most widely studied type of BCI.
EEG signals are processed and decoded into control signals that a computer or a robotic device can readily interpret.
The processing and decoding operation is one of the most complicated phases of building a good-quality BCI.
In particular, this task is so difficult that, from time to time, scientific institutions and software companies organize competitions on EEG signal classification for BCI.
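To make "decoding EEG into control signals" concrete, here is a minimal sketch of one classic approach: thresholding band power in the mu rhythm (8-12 Hz), which is strong at rest and suppressed during imagined movement. The sampling rate, the synthetic trials, and the threshold value are assumptions for the example; real decoders use learned classifiers rather than a fixed threshold.

```python
import numpy as np

fs = 128  # assumed sampling rate in Hz

def band_power(x, fs, lo, hi):
    """Total spectral power of x between lo and hi Hz (plain FFT; Welch is more robust)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)

# Synthetic trials: a strong 10 Hz mu rhythm at rest, suppressed during imagined movement
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
movement = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# A simple decoder thresholds mu-band (8-12 Hz) power to emit a control signal
for label, trial in [("rest", rest), ("movement", movement)]:
    command = "idle" if band_power(trial, fs, 8, 12) > 20 else "move cursor"
    print(label, "->", command)
```

Competition-winning pipelines elaborate on exactly this idea, with spatial filtering and trained classifiers in place of the hand-picked threshold.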
Convolutional Neural Network and BCI
A CNN is a type of artificial neural network inspired by the visual cortex. It can learn the appropriate features from the input data automatically, by optimizing the weight parameters of each filter through forward and backward propagation in order to minimize classification error.
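The phrase "optimizing the weight parameters of each filter through forward and backward propagation" can be shown in miniature. The toy example below (a hypothetical setup: a known two-tap target filter and random data) learns a single 1-D convolutional filter by gradient descent on a squared-error loss, which is the same mechanism a CNN uses at scale.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target filter the network should recover from data
true_kernel = np.array([1.0, -1.0])
x = rng.normal(size=100)                       # input time series
y = np.convolve(x, true_kernel, mode="valid")  # outputs produced by the true filter

w = np.zeros(2)  # learnable filter weights, initialized at zero
lr = 0.05
for _ in range(200):
    pred = np.convolve(x, w, mode="valid")     # forward pass
    err = pred - y
    # Backward pass: gradient of the mean squared error w.r.t. each filter weight
    grad = np.array([np.convolve(x, np.eye(2)[k], mode="valid") @ err for k in range(2)])
    w -= lr * 2.0 * grad / err.size
print(w)  # converges toward the true kernel [1, -1]
```

A real CNN does the same update for thousands of filter weights at once, driven by a classification loss instead of squared error.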
The human auditory cortex is arranged in a hierarchical organization, similar to the visual cortex. In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system.
Earlier regions, such as the primary visual cortex, react to simple features like color or direction. Later stages enable more complex tasks such as object recognition.
One advantage of using deep learning techniques is that they require minimal pre-processing, since optimal settings are learned automatically. In CNNs, feature extraction and classification are integrated into a single structure and optimized automatically. Moreover, fNIRS time-series data from human subjects were input to the CNN.
As the convolution is performed in a sliding-window manner, the feature extraction process of a CNN retains the temporal information of the time-series data obtained by fNIRS.
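This sliding-window behavior is easy to see in a tiny numpy example (a hand-picked step signal and filter, not real fNIRS data): the filter visits each time position in order, so the output feature map preserves when things happen in the input.

```python
import numpy as np

# A 1-D convolution slides a small filter along the time axis, so the output
# feature map preserves the temporal ordering of the input series
signal = np.array([0., 0., 1., 1., 1., 0., 0.])  # toy step-shaped time series
kernel = np.array([1., -1.])                     # difference (edge-detecting) filter

feature_map = np.convolve(signal, kernel, mode="valid")
print(feature_map)  # +1 where the signal rises, -1 where it falls
```

The +1 and -1 entries sit exactly at the time steps where the input changes, which is the temporal information the CNN keeps.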
However, one of the biggest issues in BCI research is the non-stationarity of brain signals. This issue makes it difficult for a classifier to find reliable patterns in the signals, resulting in poor classification performance.” (source)
How can you start learning about BCI from scratch?
Hosea Siu, an aerospace engineering PhD student, said: “For direct “brain” interfaces, you need a set of EEG electrodes, and for peripheral nervous system interfaces, you need EMG electrodes.
Once you can get that data into your computer, you’ll need to do some signal conditioning: things like filtering for the frequency of the signal you’re looking for, and filtering out environmental noise (60 Hz noise from electrical lines is common in the US…).
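The signal conditioning step described above might look like the following sketch, using standard scipy filters on a synthetic signal (the sampling rate, the 12 Hz "signal of interest", and the filter orders are assumptions): a notch filter removes the 60 Hz mains hum, then a band-pass keeps the frequency range you are looking for.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250  # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)

# Synthetic recording: a 12 Hz component of interest plus 60 Hz mains interference
x = np.sin(2 * np.pi * 12 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)

# Notch out the 60 Hz line noise
b_notch, a_notch = iirnotch(60, Q=30, fs=fs)
x = filtfilt(b_notch, a_notch, x)

# Band-pass 1-40 Hz to keep only the frequency range of interest
b_band, a_band = butter(4, [1, 40], btype="bandpass", fs=fs)
x = filtfilt(b_band, a_band, x)
```

`filtfilt` runs each filter forward and backward, so the conditioned signal is not shifted in time relative to the raw one, which matters when you later align data to stimuli.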
After, you need to think about what you’re actually trying to have the system do.
Do you need it to detect a particular change in your EEG patterns when you think about the color blue? Or do you need it to detect a change in your EMG when you’re moving a finger? What about the computer? Should it run a program? Type some text?
Think about how you’re going to label your data. How will the computer know initially that a particular signal is meaningful?
This is supervised learning. Choose your preferred classification method, get lots of labeled data, and train your system. You can use methods like cross-validation to check if your trained models are doing what you think they’re supposed to.
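The supervised learning and cross-validation steps just described can be sketched with scikit-learn. The feature vectors below are randomly generated stand-ins for labeled trials (band powers per trial, say); the cluster means and the choice of logistic regression are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for labeled feature vectors (e.g., band powers per trial):
# class-0 trials cluster around 0, class-1 trials around 2
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# 5-fold cross-validation estimates how well the classifier generalizes
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(round(scores.mean(), 2))
```

If the cross-validated accuracy is near chance, the features, labels, or recording setup need rethinking before the system is worth deploying.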
After all of this, you might have something that looks like a brain-computer interface.” (source)
Where can I find datasets for machine learning on brain-computer interfaces?
You can find several publicly available EEG datasets on the following website:
Recent advances in artificial intelligence and reinforcement learning, together with neural interfacing technology and various signal processing methodologies, have enabled us to better understand, and then utilize, brain activity for interacting with computers and other devices.