Prof. Billard collaborated on this research with Prof. José del R. Millán, formerly director of EPFL's Brain-Machine Interface laboratory and now at the University of Texas. The two teams developed software that translates electrical signals from a patient's brain into commands for a robot, so patients can operate it with thought alone, with no need for voice commands or touch screens. The work was published in Communications Biology, a Nature Portfolio open-access journal.
Avoiding obstacles
The researchers built their system on a robotic arm developed a few years earlier. The arm can maneuver around obstacles in its path, move from side to side, and reposition objects placed in front of it. "In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object," explains Prof. Billard.
The engineers began by refining the robot's obstacle-avoidance system to make it more precise. "At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close," says Carolina Gaspar Pinto Ramos Correia, a Ph.D. student in Prof. Billard's group. And because the robot is intended to assist paralyzed patients, the team also had to find a way for users to communicate with it that required neither speech nor movement.
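The failure mode described here, one fixed margin that is too wide for small obstacles and too narrow for large ones, can be illustrated with a margin that scales with obstacle size. This is a purely illustrative sketch, not the study's actual method; the function name, base margin, and scaling factor are all assumptions.

```python
# Illustrative sketch (not the study's method): scale the avoidance
# clearance to the obstacle's size instead of using one fixed margin.

def clearance_for(obstacle_radius, base=0.05, factor=0.5):
    """Return an avoidance margin (metres) that grows with obstacle size.

    `base` and `factor` are hypothetical tuning constants.
    """
    return base + factor * obstacle_radius

# A single fixed margin of, say, 0.15 m would swing too wide around a
# small obstacle and cut too close to a large one; scaling avoids both.
small_margin = clearance_for(0.02)  # modest margin for a small obstacle
large_margin = clearance_for(0.30)  # wider margin for a large obstacle
```

A real planner would derive the margin from the robot's geometry and path-planning constraints, but the sketch captures why a one-size-fits-all clearance fails.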
An algorithm that learns from thoughts
This meant developing an algorithm that could adjust the robot's movements based only on a patient's thoughts. The patient wears a headcap fitted with electrodes that feed electroencephalogram (EEG) recordings of brain activity to the algorithm; to operate the device, the patient only needs to watch the robot. If the robot makes a mistake, the patient's brain emits a distinct "error message," as if the patient were saying, "No, not like that." The robot then registers that it has done something wrong, although at first it does not know why: did it move too close to the object, or too far away?

To help the robot find the right answer, the algorithm takes the error message and uses inverse reinforcement learning to infer what the patient wants and which actions the robot should take. Through trial and error, the robot tests different movements to determine which one is correct. The process is quite fast: it usually takes only three to five attempts for the robot to figure out the right response and carry out the patient's wishes. "The robot's AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior," says Prof. Millán. "Developing the detection technology for error signals was one of the biggest technical challenges we faced."

Iason Batzianoulis, the study's lead author, adds: "What was particularly challenging in our study was 'translating' a patient's brain signals into movements carried out by the robot. We did that by using machine learning to link a given brain signal to a specific task. We then associated the tasks with individual robot controls so that the robot does what the patient has in mind."
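The trial-and-error loop described above can be sketched in miniature: the robot proposes candidate motions, a detector watches for the EEG error signal, and the loop stops when the signal no longer fires. Everything here is an assumption for illustration: the real system decodes error-related potentials from EEG and uses inverse reinforcement learning, whereas this toy simulates the error signal with a simple threshold and the candidate motions as a short list of clearance values.

```python
# Toy sketch of the article's correction loop. The EEG error-signal
# detector is simulated; candidate clearances and the 0.01 m tolerance
# are illustrative values, not taken from the study.

CANDIDATE_CLEARANCES = [0.05, 0.10, 0.20, 0.40]  # metres, hypothetical

def eeg_error_signal(clearance, preferred):
    """Stand-in for the EEG error-potential detector: fires ("No, not
    like that") whenever the robot's clearance differs from what the
    patient wants."""
    return abs(clearance - preferred) > 0.01

def correct_by_trial_and_error(preferred=0.20):
    """Try candidate motions until the simulated error signal stops
    firing; return the accepted clearance and the number of attempts."""
    attempts = 0
    for clearance in CANDIDATE_CLEARANCES:
        attempts += 1
        if not eeg_error_signal(clearance, preferred):
            return clearance, attempts
    return None, attempts  # no candidate accepted

accepted, tries = correct_by_trial_and_error()
```

In this toy run the loop settles after a handful of attempts, echoing the article's "three to five tries"; the real system infers the patient's intent with inverse reinforcement learning rather than exhausting a fixed list.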
Next step: a mind-controlled wheelchair
Eventually, the researchers hope to use their system to steer wheelchairs. "There are still many engineering challenges to overcome," says Prof. Billard. "And wheelchairs pose an entirely new set of difficulties, since both the patient and the robot are in motion." The team also plans to apply its algorithm to a robot that can read several different kinds of signals, combining information from the brain with that from visual motor functions.