Socket Life: Has the brain-computer era arrived?

In “Ghost in the Shell” and “Alita: Battle Angel”, only the protagonist’s brain is preserved intact; once fitted with a mechanical body, she miraculously comes back to life and, with the help of brain-computer interface technology, can even fight fierce and spectacular battles. In “The Matrix”, humans live inside a virtual world through a socket at the back of the head without noticing anything unusual. In William Gibson’s short story “The Winter Market”, the protagonist is born with a disability and can move only by controlling an exoskeleton through a brain-computer interface; to escape her frail body, she finally translates her consciousness into a computer program and gains immortality. In Chen Hongyu’s “The Realm of Eternal Calamity”, serialized in the 6th and 7th issues of “Science Fiction World” in 2019, experimenters enter a shared consciousness world through human-computer interfaces, where they interfere with one another, and behind the experiment lie dehumanizing military applications… I believe everyone is familiar with the powerful functions that science fiction imagines for the brain-computer interface.

In August 2020, Elon Musk held a press conference to showcase the latest brain-computer interface device from Neuralink, the company he founded. In the demonstration, the device recorded neural signals from a pig’s brain and predicted its movements. Musk also claimed that a brain-computer interface could summon a car, play games, and treat conditions such as deafness, memory loss, stroke, and even depression, anxiety, insomnia, and addiction. In addition, he claimed that direct brain-to-brain communication could be realized within five years, and that in the future memories could be uploaded and downloaded, achieving the “digital immortality” of science fiction.

It seems that science fiction becoming reality is just around the corner. In fact, some of the functions he demonstrated or claimed have already been realized, while others remain pure fantasy. To tell which is which, we first need to know what the brain-computer interfaces scientists study actually are, and what they can do.

What is a brain-computer interface?
As the name suggests, a Brain-Computer Interface (BCI) is an information system that connects the brain and a computer, allowing the two to communicate directly. The information flow is two-way: a BCI can carry information from the brain to the computer and thereby control external devices connected to it, and it can also carry information from the computer to the brain, stimulating neurons with electrical signals.

So, how does a brain-computer interface work? Neuroscience has found that even when the nervous system and motor organs lose function through injury, as long as the brain itself still works normally, control commands are still issued from the brain in the form of electrical signals, even though the injured limbs no longer move in response. When people engage in certain mental activities, or receive certain external stimuli, their EEG signals show regular changes corresponding to those activities or stimuli. Abstract brain activity can therefore be expressed as concrete, measurable electrical signals, which form the bridge between the brain and the outside world. A brain-computer interface communicates with neurons by detecting or influencing these electrical signals.

The other principle underlying brain-computer interfaces is the functional organization of the brain. The brain’s various functions are usually handled by specific locations (brain regions). Vision, for example, depends on the occipital region; if this region is injured, visual ability is impaired. This is the localization of brain function. At the same time, most functions require the cooperation of multiple brain regions, and damage to any region involved in processing the relevant information will disrupt the body’s final output (language, expression, movement, and so on). This is the distributed nature of brain function. Both properties are crucial when deciding where a brain-computer interface should collect signals or where its electrodes should be implanted.

According to how they connect to the body, brain-computer interfaces are generally divided into non-invasive (non-implanted) and implanted types.

The most common non-invasive BCI is based on electroencephalography (EEG). An EEG recording system has dozens to hundreds of disk-shaped electrodes, each similar in shape and size to a button, which are attached to the scalp to record the brain’s changing electrical activity. An implanted brain-computer interface, by contrast, requires placing electrodes inside the skull to record neuronal activity; the electrodes can be implanted between the skull and the brain, or directly in the cerebral cortex. The precision and strength of the neural signals collected by these methods differ significantly.

We can imagine the brain as a huge stadium hosting a ball game. Each neuron is a spectator, and each action potential a neuron fires is that spectator’s shout. For a non-invasive brain-computer interface, the electrodes attached to the scalp are like microphones mounted on the stadium’s outer wall, listening to the sounds inside. Each individual shout (the signal of a single neuron) is very weak, and the microphones can only pick up the superimposed roar of the hundreds of millions of spectators inside. Under these conditions it is hard to detect any one person’s shout (a single action potential), so the activity of a particular neuron cannot be followed, let alone the detailed state of the brain. But when a goal is scored, the audience cheers in unison; the synchronized voices are louder than the background noise, and from this collective cheer you can guess what is happening inside. The EEG recording system works the same way, inferring the working state of the brain from the synchronized activity of large numbers of neurons.

An electrode array implanted between the skull and the brain is like mounting microphones on the stadium’s inner wall. It is still difficult to record a single neuron; like EEG, it measures the combined activity of many neurons, but its spatial resolution is higher, the number of neurons it pools over is smaller, and their location is more specific. An electrode array implanted in the cortex is like placing a microphone right among the audience, recording the cheers of the few people nearby. Because different people’s voices differ slightly, software can match each voice to its owner, so in principle the activity of a single person (a single neuron) can be isolated. The information extracted by electrodes implanted in the cerebral cortex is therefore more detailed than that from other methods. Even so, one cannot choose to record from a particular neuron; one can only choose where to implant the electrode array according to the brain’s functional areas. To control a prosthetic limb, for example, electrodes would be implanted in the brain region that controls arm or leg movement.

After the neural signal is extracted, it must be decoded. Just as a football match has supporters of two teams, a supporter cheers loudly when their own team attacks and stays quieter otherwise, though they occasionally cheer for other reasons too. This corresponds to different neurons having different preferred directions. Once the preferred directions of the recorded neurons are known and their activity is recorded in real time, one can guess the person’s movement intention; this process is called decoding. The control system then drives the prosthesis or cursor according to the decoding result, and it can also send feedback signals to the brain so that the user can adjust how the robotic arm or cursor is moved.
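To make the idea of decoding concrete, here is a minimal sketch (in Python) of one classic scheme, population-vector decoding, which assumes each neuron’s firing rate varies with the cosine of the angle between the intended movement direction and that neuron’s preferred direction. All of the tuning parameters and firing rates below are invented for illustration, not taken from any real recording.

```python
import numpy as np

def population_vector_decode(rates, baselines, gains, preferred_dirs):
    """Estimate a 2-D movement direction from neuron firing rates.

    Each neuron is assumed to follow cosine tuning:
        rate = baseline + gain * cos(angle between movement and preferred direction).
    The decoded direction is the sum of preferred-direction vectors, each weighted
    by how far that neuron's rate rises above its baseline.
    """
    weights = (rates - baselines) / gains      # normalized modulation of each neuron
    vector = weights @ preferred_dirs          # weighted sum of preferred directions
    return vector / np.linalg.norm(vector)     # unit vector of the decoded direction

# Toy example: 4 neurons with preferred directions at 0, 90, 180, 270 degrees
preferred = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
baselines = np.full(4, 10.0)   # resting firing rates in spikes/s (made up)
gains = np.full(4, 5.0)        # modulation depths (made up)

# Hypothetical firing rates while the subject intends to move up and to the right
observed = np.array([13.5, 13.5, 6.5, 6.5])
print(population_vector_decode(observed, baselines, gains, preferred))
# -> approximately [0.707, 0.707], i.e. a 45-degree movement intention
```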

What exactly can a brain-computer interface do?
The main purpose of a brain-computer interface is to control external devices or the surrounding environment (switching lights on and off, adjusting room temperature, and so on) by receiving the user’s command-carrying brain electrical signals, converting them, and outputting the result.

Judging from existing reports, some of the functions Musk demonstrated or claimed have indeed already been realized. A non-invasive brain-computer interface, for example, can detect changes in the frequency of synchronized neuronal activity; by analyzing the relative strength of each frequency, the results can be fed back to the user or experimenter, or used to control a target device. This principle enables some very interesting applications. As early as 1985, the short story “Dogfight”, co-written by Michael Swanwick and William Gibson, featured a controller stuck behind the ear that let players pilot fighter jets in air-combat games. In reality, wearable brainwave toys are already on the market: players can use their brainwaves to accelerate toy racing cars, or wear cat ears that detect and analyze brainwaves and move according to changes in the wearer’s mood.
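To give a concrete sense of what “analyzing the relative strength of each frequency” involves, the following sketch estimates the power in the standard EEG bands (delta, theta, alpha, beta) from a single channel using a Fourier transform. The sampling rate, band boundaries, and the synthetic signal are illustrative assumptions, not real measurements.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Return the relative power of each EEG band in a single-channel signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2               # power spectrum
    total = spectrum[(freqs >= 0.5) & (freqs < 30)].sum()     # power in 0.5-30 Hz
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic 2-second "EEG": a strong 10 Hz (alpha) rhythm plus noise
t = np.arange(0, 2, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
print(band_powers(eeg))   # the alpha band should dominate
```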

Moreover, this kind of brain-computer interface can also pick up the brain’s imagination of moving different parts of the body, such as whether you are imagining moving your left hand or your right foot. Imagining different movements activates neurons in the brain regions that control the corresponding motor functions, which changes the frequencies recorded by nearby electrodes. Software analyzes the signals from multiple electrodes simultaneously and uses algorithms to infer what the user is imagining, allowing a wheelchair to turn or move forward, an exoskeleton to walk, and so on (a toy example of this kind of decoding is sketched below). In addition, non-invasive brain-computer interfaces let users select icons on a screen to type, with reported rates reaching hundreds of letters per minute.
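A toy version of such motor-imagery decoding might look like the following: band-power features from several electrodes form a feature vector, and a simple linear classifier learns to tell “imagined left hand” from “imagined right foot”. The data here are randomly generated stand-ins, and scikit-learn’s LogisticRegression is only one of many classifiers that could be used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend features: for each trial, band power at 8 electrodes over motor cortex.
# Class 0 = imagined left hand, class 1 = imagined right foot (synthetic data).
n_trials, n_electrodes = 200, 8
X = rng.normal(size=(n_trials, n_electrodes))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :4] += 1.0   # make the two imagined movements separable on some electrodes

clf = LogisticRegression().fit(X[:150], y[:150])           # train on the first 150 trials
print("held-out accuracy:", clf.score(X[150:], y[150:]))   # evaluate on the rest

# A decoded label could then be mapped to a device command, e.g. for a wheelchair:
command = {0: "turn left", 1: "move forward"}[int(clf.predict(X[-1:])[0])]
print(command)
```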

Implanted brain-computer interfaces can realize more complex functions. At the beginning of 2020, China completed its first clinical study of an implanted brain-computer interface: after four months of rehabilitation training, the patient was able to use his mind to control a robotic arm for eating, drinking, and everyday activities. Beyond finer control of prostheses, an implanted brain-computer interface can also read out what a person wants to say. The method is to record neural activity in the brain’s language areas with electrodes while the subject reads sentences aloud or silently, then analyze the relative strength of each frequency in the brain’s electrical signal and find the language corresponding to that brain activity. Once the relationship between EEG frequencies and language is learned, the recorded signals can be converted into sentences: even without speaking, the machine can understand and express what the user wants to say.

In recent years, applications of brain-computer interfaces in medical rehabilitation have gradually emerged. A stroke, for example, damages the motor centers of the cerebral cortex, and traditional physical rehabilitation, which simply moves the patient’s arms and legs, is not very effective. Based on the brain-computer interface, an active training method has now been developed: the stroke patient imagines moving the paralyzed limb, and since imagining produces a measurable brainwave response, the brain-computer interface system can detect it. Only when the system finds that the patient genuinely intends to move does the training robot begin to act. This kind of training is very effective. In 2014, the neuroengineering team at Tianjin University developed an artificial neural robotic system for whole-limb stroke rehabilitation: an artificial neural pathway constructed outside the patient’s body simulates and decodes the patient’s motor rehabilitation intention signals to drive neuromuscular electrical stimulation, so that the paralyzed limbs make the corresponding movements.

In addition, brain-computer interfaces can be applied to training healthy people. For example, the brain states of excellent marksmen performing their task can be recorded; a trainee’s brain activity can then be fed back to them in real time, letting them see how far they are from that expert level and adjust their own brain state to reach it sooner.

The applications described above read signals from the brain and use the decoded information to achieve some purpose. A brain-computer interface can also transmit information into the brain by stimulating the neurons around the electrodes. This can provide feedback (for example, adjusting grip force when grasping a glass with a robotic arm) and can also produce artificial touch, artificial vision, and artificial hearing. Intracerebral electrical stimulation, however, is still at the experimental stage. By contrast, neural-interface electrical stimulation (electrical stimulation of nerves outside the brain) is more mature, and products using this technology are already on the market, such as cochlear implants and artificial retinas, which use electrical stimulation to activate the auditory nerve and retinal neurons respectively, allowing patients to regain hearing and vision. The premise is that the auditory nerve, retinal nerves, and the related nerve centers are intact.

Deep brain stimulation
Musk also mentioned that brain-computer interfaces may in the future use electrical stimulation inside the brain to treat depression, amnesia, and other neuropsychiatric diseases rooted deep in the brain. In fact, a relatively mature treatment of this kind already exists. The technique is called deep brain stimulation, commonly known as the brain pacemaker.

The main application of the brain pacemaker is treating Parkinson’s disease; so far more than 100,000 patients with Parkinson’s disease and other neuropsychiatric disorders have benefited from it. In Parkinson’s treatment, the pacemaker’s electrodes are implanted in the subthalamic nucleus, the pulse generator is implanted under the skin of the chest, and extension leads running under the skin connect the generator to the electrodes. After surgery, the pacemaker emits electrical stimulation pulses at a set frequency, which act on the subthalamic nucleus through the electrode contacts and regulate abnormal neural activity in the brain, thereby improving symptoms. Besides Parkinson’s disease, brain pacemakers also work well for conditions such as essential tremor, dystonia, and chronic pain, and neuroscientists are actively exploring their use in epilepsy, depression, obsessive-compulsive disorder, senile dementia, addiction, and other neuropsychiatric diseases.

Generally speaking, the brain regions where brain-computer interface electrodes and brain pacemaker electrodes are implanted are different. Since the motor, sensory, auditory, and language centers are all located in the cerebral cortex, the electrodes of an implanted brain-computer interface are placed in the cortex, whereas the electrodes of a brain pacemaker must be implanted deep in the brain: the targets for treating Parkinson’s disease and depression, for example, lie in the subthalamic nucleus and the subgenual cingulate gyrus, which cortical brain-computer interface electrodes cannot reach. When publicity claims that brain-computer interfaces are expected to treat various neurological or mental diseases, it usually refers to deep brain stimulation, which can of course also be regarded as another type of brain-computer interface.

The road ahead
Researchers in the field have envisioned that by 2030 non-invasive brain-computer interface technology will mature: paralyzed people will control wheelchairs directly with their brains, and exoskeleton systems will begin to reach the market. By 2050 the risks of implanted brain-computer interfaces will have fallen greatly, and healthy people will be willing to use them, fitting themselves with a fifth or sixth limb, while soldiers may implant “eyes” that integrate near-infrared, sonar, and other technologies. By 2070 implanted chips could be used to enhance human intelligence, with the brain drawing knowledge from the chip as easily as it recalls what was learned at school… But these visions can only come true if the development of brain-computer interfaces goes smoothly.

Although many achievements have been made, the brain-computer interface so far remains essentially at the stage of laboratory demonstration. There is still a long way to go before real commercial application, and many problems remain to be solved:

Brain science itself: the pathogenesis of many brain diseases is still being studied. Until these questions are understood, there can be no mature brain-computer interface applications for treating them.

Accuracy of signal collection: accurate monitoring would require implanting a very large number of electrodes in the brain, yet the cerebral cortex alone contains tens of billions of neurons. As in the “stadium” analogy above, a single electrode in the cortex records the electrical signals of thousands of neurons and inevitably picks up interference from others, so truly precise measurement is extremely difficult. Even if electrode counts reach the tens of thousands in the future, that is still a drop in the bucket compared with the astronomical number of neurons. Moreover, how are so many electrodes to be implanted, and how is the resulting flood of data to be processed? Ordinary computers may not be able to supply the required computing power.

Safety and working life of implanted electrodes: implanting a huge number of electrodes into the cortex is very difficult, requiring the skull to be opened while ensuring no bleeding or other injury. In addition, the body’s immune system attacks implanted electrodes over time, and immune cells surround them to form scar tissue, so an electrode’s recording quality slowly declines. After a few years, or as little as a few months, an electrode may no longer detect any neuronal activity; if it is still needed, it must be re-implanted, which again increases the risk of damage to brain neurons and of infection.

Neural decoding and encoding: this remains a “black box”. The brain-computer interface only reduces complex neuronal activity to simple brainwave data, and decoding accuracy is still too low. Moreover, decoding, the “brain to machine” direction, amounts to guessing the user’s movement intention, which is a very different thing from interpreting consciousness; encoding, the “machine to brain” direction, is harder still and remains almost completely unexplored. On top of that, scientists have not yet worked out the mechanism of consciousness itself, so uploading consciousness and digital immortality remain science-fiction concepts. I am afraid we will not see that day in our lifetime.

Slow communication speed: the maximum information transfer rate of a brain-computer interface is only about 100 bits per minute. This is nowhere near the efficiency of normal communication, nor is it enough to make external devices perform complex, smooth movements, let alone pilot fighter jets in a fierce contest as in “Dogfight”.
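For perspective on where such a figure comes from, here is a small sketch of the Wolpaw information-transfer-rate formula commonly used to rate selection-based brain-computer interfaces; the number of targets, accuracy, and selection time below are illustrative assumptions, not data from any particular system.

```python
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate for a BCI that picks one of n targets."""
    p, n = accuracy, n_targets
    bits = math.log2(n) + p * math.log2(p)          # bits conveyed per selection
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection      # scale to bits per minute

# E.g. a speller with 36 symbols, 90% accuracy, one selection every 2.5 seconds:
print(round(itr_bits_per_min(36, 0.90, 2.5), 1), "bits per minute")
# -> roughly 100 bits per minute under these assumed conditions
```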

In addition, the brain-computer interface is a complex interdisciplinary field involving neuroscience, cognitive science, mechanical dynamics, information engineering, materials science, and more. A weakness in any one of these disciplines will severely restrict the development of the whole.

The brain-brain interface is dawning
It is worth mentioning that alongside the development of brain-computer interfaces, a technology called the “brain-brain interface” has also emerged. The film “Avatar”, released in 2009, depicted this: a human remotely controls a Na’vi avatar body on the planet Pandora through direct brain-to-brain information transmission. This is by no means pure fantasy. Studies have shown that neuroelectrophysiological information extracted and decoded from the cerebral cortex of one animal can indeed be used to stimulate the cerebral cortex of another.

In 2014, a research team at Shanghai Jiao Tong University applied for a brain-brain interface invention patent. It works as follows: a video camera monitors the animal’s movement and transmits the information to the real-time control interface of the brain-computer interface; the controller (a human) watches the animal’s state on this interface and forms the intention to steer it; an EEG acquisition module collects the controller’s EEG signals and sends them to a computer processing module, which finally delivers the decoded information to the animal’s neural stimulation electrodes, thereby controlling the animal’s direction of movement. Put simply, the system contains two brain-computer interfaces, one at the controlled animal’s end and one at the controller’s end. Its purpose is to let animals, under human control, use their special abilities to perform tasks that humans cannot or dare not do, such as search and exploration.
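Conceptually, the patent chains two interfaces into one closed loop. The outline below merely restates those steps in code form; every class and function name is hypothetical, invented to mirror the description above rather than taken from the patent or any real device.

```python
# Hypothetical outline of the human-to-animal control loop described above.
# None of these classes correspond to real hardware APIs; they only name the steps.

class EEGAcquisition:
    def read(self):
        """Return the controller's latest EEG segment (stub)."""
        return [0.0] * 256

class Decoder:
    def decode(self, eeg_segment):
        """Map an EEG segment to a steering intention such as 'left' or 'right' (stub)."""
        return "left"

class StimulationElectrodes:
    def stimulate(self, command):
        """Deliver the stimulation pattern that steers the animal (stub)."""
        print(f"stimulating for command: {command}")

def control_loop(camera_frames, eeg, decoder, electrodes, display):
    for frame in camera_frames:                    # 1. video monitors the animal's movement
        display(frame)                             # 2. controller sees the state on the interface
        intention = decoder.decode(eeg.read())     # 3-4. acquire and decode the human's EEG
        electrodes.stimulate(intention)            # 5. stimulation steers the animal

control_loop(range(3), EEGAcquisition(), Decoder(), StimulationElectrodes(), display=print)
```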

In 2018, a research team at the University of Washington built a multi-person brain-brain interface system for the first time and used it to cooperatively play Tetris. They divided three subjects into two roles: two of them could see the complete game interface and sent instructions through the brain-computer interface on whether to rotate the falling piece, while the third received the instructions and carried out the operation. The average success rate reached 81.25%. The study demonstrates the possibility of using a brain-connected “social network” to solve problems collaboratively.

However, because extracting EEG signals, decoding them, and delivering the decoded information to the correct neural circuit are all difficult, earlier brain-brain interfaces achieved information transmission rates of only 0.004 to 0.033 bits per second, one of the main bottlenecks restricting the technology’s development. In early 2020, researchers at the Beijing Brain Science and Brain-Like Research Center proposed a new type of brain-brain interface that is expected to address this problem. They first used an optical-fiber recording system to extract movement information from the brain neurons of a “controller mouse”, decoded it, and then stimulated specific neurons of an “avatar mouse” through optogenetics. The information transmission rate reached 4.1 bits per second, two to three orders of magnitude higher than previous similar studies; the two animals moved in high synchrony, verifying in principle that precise cross-individual control of animal movement is possible.

From the standpoint of system composition, the brain-brain interface is inseparable from the brain-computer interface: the bottlenecks of the latter must also be overcome by the former, and the brain-brain interface has difficulties of its own besides. Despite the many obstacles, and even if the ultimate goals imagined in science fiction never become reality, brain-computer and brain-brain interfaces are expected to fuse biological intelligence with machine intelligence and to allow direct communication between brain and brain, and between brain and computer. The prospects are broad, and scientists around the world are working hard on them; once major breakthroughs come, the course of human history may well be reshaped.