MIT Creates Brain-Controlled Robot


MIT has created a robot that can be controlled directly by signals from a human brain

MIT programmed an industrial robot to correct itself when a human detects a mistake in its process. The robot is connected to a human operator through an EEG cap worn on the head, which transmits brain signals that the robot can interpret and act on.

MIT posted a video in which an industrial robot from Rethink Robotics had to pick up cans of spray paint and spools of wire and drop them into the correctly labeled containers. At one point, the robot hesitated briefly and almost dropped an object into the wrong container, but it adjusted at the last moment and made the right choice.

The robot was able to make the correction thanks to its connection to an EEG cap worn by a human user. The cap records brain activity through 48 sensors. Detecting every signal at once is hard, but one signal is enough: the user only needs to notice that something is wrong, and the sensors record that response and send it to the robot.

This signal is known as an error-related potential, which the brain emits when a person notices a mistake. It is easy for the machine to pick up, since it is strong and sudden. The approach differs from the technology used until now: instead of the human learning to speak the robot's language, the robot tries to understand the human's.
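The property that makes the signal easy to pick up, a strong, sudden deflection against quiet background activity, can be illustrated with a toy threshold detector. This is a minimal sketch, not MIT's actual classifier; the threshold value, the baseline-tracking rule, and the sample data are all illustrative assumptions.

```python
# Toy sketch (not MIT's pipeline): flag an error-related potential
# (ErrP) as a strong, sudden deflection in a single EEG channel.

def detect_errp(samples, threshold=40.0):
    """Return the index of the first sample whose absolute deviation
    from a slowly tracked baseline exceeds `threshold` (microvolts),
    or None if no such deflection occurs."""
    if not samples:
        return None
    baseline = samples[0]
    for i, value in enumerate(samples):
        if abs(value - baseline) > threshold:
            return i
        # Track slow drift so gradual changes are not flagged as errors.
        baseline = 0.95 * baseline + 0.05 * value
    return None

# Simulated trace: quiet activity, then a sharp ErrP-like spike.
trace = [2.0, -1.5, 3.0, 0.5, 60.0, 55.0, 4.0]
print(detect_errp(trace))  # prints 4, the index of the spike
```

In a real system the detection would use a trained classifier over many channels rather than a single threshold, but the principle is the same: a large, abrupt departure from baseline marks the moment the human notices the mistake.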

Thus, the video shows a robot performing a task under human supervision, with the human able to intervene without typing a code correction or pressing a button.

The MIT scientists said they wanted to create a new channel of communication between humans and robots. If a robot can read EEG signals from a human brain and use them as a control system, those signals will shape the robot's choices; in turn, whether the robot makes the right choice or not shapes the human's reaction.

The scientists have many ideas for improving this technology. They plan to optimize it for more complex tasks that involve more than a binary choice, which could also boost learning in robots.

They also want to apply the technology to help people with speech or language impairments, and to other systems that rely on autonomous robots but need human supervision to avoid accidents, such as self-driving cars or robots working in factories.
Image Source: Wikipedia