Mind Control of Machines Isn't Brain Surgery Any More

Interfacing the brain with technology may soon be easier.

In four years, researchers funded by the U.S. military may develop a working prototype of a system that allows for a nonsurgical interface between the human brain and technology. Such a system could improve brain control of unmanned vehicles, robots, cybersecurity systems and mechanical prosthetics while also improving the interface between humans and artificial intelligence (AI) agents.

The Next-Generation Nonsurgical Neurotechnology (N3) program funded by the Defense Advanced Research Projects Agency (DARPA) aims to develop high-performance brain-machine interfaces for able-bodied service members. This would enable diverse national security applications such as successful multitasking during complex military missions, DARPA explains on its website.

Noninvasive neurotechnologies such as the electroencephalogram and transcranial direct current stimulation already exist but do not offer the precision, signal resolution and portability required for advanced applications in real-world settings.

“If the program is successful, we’ll have ways to interact with the brain and the nervous system … that would be safe, not only for the clinical population but also for the able-bodied population,” says Al Emondi, DARPA’s N3 program manager.

DARPA officials envision an array of uses. For example, Emondi has been involved in developing neural interfaces to allow wounded warriors to better control robotic limbs. “There’s a significant clinical application. Any time you can have some sort of neural interface that is nonsurgical, that would be great. Quadriplegics now under other DARPA programs that I run require surgery, and they require implanting electrodes directly into the brain,” he notes.

But if researchers develop similar neural interfaces that do not require surgery, it could benefit soldiers on the future battlefield. “The nice thing is that [N3] will also allow us to start exploring neurotechnology for the able-bodied soldier, if, for example, I’m operating a computer network or operating drones. As more artificial intelligence starts to propagate into our military environment, the way that we interact with these AI systems is going to change,” Emondi offers.

“Rather than telling a drone to move up five degrees and make a right at 90 degrees, you just say, ‘I want you to go over there,’ and the drone knows how to execute all those types of movement,” Emondi adds.
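
As a rough, hypothetical sketch of that shift from step-by-step commands to intent-level control, the snippet below expands a single destination into the turn-and-fly steps an operator would otherwise issue manually; the Pose type, units and planning logic are invented for illustration and are not any DARPA or N3 interface.

```python
# Hypothetical sketch only: an intent ("go over there") expanded into the
# low-level steps an operator would otherwise issue one by one.
from dataclasses import dataclass
import math

@dataclass
class Pose:
    x: float        # meters east
    y: float        # meters north
    heading: float  # degrees clockwise from north

def plan_from_intent(pose: Pose, target: Pose) -> list[str]:
    """Turn a single destination intent into explicit turn/fly commands."""
    dx, dy = target.x - pose.x, target.y - pose.y
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    turn = (bearing - pose.heading + 180) % 360 - 180   # shortest turn, degrees
    distance = math.hypot(dx, dy)
    return [f"turn {turn:+.1f} deg", f"fly forward {distance:.0f} m"]

# The operator supplies only the destination; the autonomy fills in the steps.
print(plan_from_intent(Pose(0, 0, 90), Pose(300, 400, 0)))
# ['turn -53.1 deg', 'fly forward 500 m']
```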

An improved neural interface also might allow soldiers and AI systems to better work as a team. “Let’s say you’re working with an AI system that is a decision aid, which is sorting through large amounts of data. … You’re seeing too much data. You’re trying to process too much data. If it knows your cognitive state, that you may be getting overloaded, maybe it changes how much data it’s giving you,” Emondi says. He adds that if a soldier’s eyes are overwhelmed with data, the system could begin using haptic signals, or the sense of touch, to alert the user to some types of data.
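
A minimal sketch of that adaptive behavior might look like the following; the load thresholds, the 0-to-1 cognitive-load estimate and the function name are assumptions made for illustration, not anything specified by the program.

```python
# Illustrative sketch, not a real N3 interface: a decision aid that throttles
# its visual feed and shifts alerts to a haptic channel when the neural
# interface estimates the operator is overloaded. Thresholds are invented.

def choose_presentation(cognitive_load: float, pending_items: int) -> dict:
    """cognitive_load: assumed 0..1 estimate read from the neural interface."""
    if cognitive_load > 0.8:
        # Operator looks overloaded: cut the visual stream, alert by touch.
        return {"visual_items": min(pending_items, 3), "alerts_via": "haptic"}
    if cognitive_load > 0.5:
        return {"visual_items": min(pending_items, 10), "alerts_via": "visual"}
    return {"visual_items": pending_items, "alerts_via": "visual"}

print(choose_presentation(0.9, pending_items=40))
# {'visual_items': 3, 'alerts_via': 'haptic'}
```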

The program is divided into concurrently running technical areas known as TA-1 and TA-2. “One of them is completely noninvasive. Nothing goes in the body other than ultrasonic fields or magnetic fields or electric fields or light. We’re basically playing around with those four modalities in different ways … to figure out how best we could interoperate with the neural tissue,” Emondi says.

The second, TA-2, is for systems that are minutely invasive, a term DARPA officials coined to distinguish their technology goals from the commonly used phrase “minimally invasive.” Emondi indicates that “minimally invasive” generally requires surgery to place technology near neurons. “Depending on who you talk to, minimally invasive is still surgery. Rather than having a large craniotomy, you would have a small hole called a burr hole or a keyhole, but you’re still going through the skull and getting into the brain and putting things in the brain through surgical means,” he explains.

The TA-2 effort will allow technology, such as a transducer, to be attached to the brain but only through nonsurgical techniques. “That could be through injection or it might be through ingestion, or it could be intranasal, could be transdermal, pretty much any way you can think of … without having to do surgery,” the program manager elaborates.

Once the technology is inside the brain, researchers must have a way of moving it to the proper location and of interacting with it. That also could include magnetic, electric, acoustic or light signals aimed at that neural transducer, which interacts with the neuron.

The developed systems must be capable of both writing and reading, meaning they must be able to send comprehensible signals to the brain and receive signals from the brain. “Let’s say you’re moving a robotic arm and it touches something. Can I then take that, maybe with a force sensor on the fingertip … and change it into a neural signal that I can write back into the brain so that you get the sensation of actually feeling it?” Emondi asks.
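
One way to picture the “write” half of that loop is the toy mapping below, which scales a fingertip force reading into a stimulation intensity; the sensor range, intensity scale and function are hypothetical stand-ins for the far richer encoding a real transducer would need.

```python
# Toy illustration of the write direction: a fingertip force reading from a
# robotic hand is scaled into a stimulation intensity that a neural transducer
# could encode as touch. Ranges and the linear mapping are assumptions.

MAX_FORCE_N = 20.0     # assumed force-sensor range, newtons
MAX_STIM_LEVEL = 255   # assumed 8-bit stimulation intensity

def force_to_stim(force_newtons: float) -> int:
    """Clamp the reading to the sensor range and scale it to an intensity."""
    force = max(0.0, min(force_newtons, MAX_FORCE_N))
    return round(force / MAX_FORCE_N * MAX_STIM_LEVEL)

# The robotic finger presses with 5 N; a proportional touch signal is written back.
print(force_to_stim(5.0))  # 64
```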

The program includes three phases. The first is 12 months long. During that time, the six teams—led by Battelle, Carnegie Mellon University, Johns Hopkins Applied Physics Laboratory, the Palo Alto Research Center (PARC), Rice University and Teledyne—must prove they can interact with neural tissue and provide metrics.

“Those metrics are critical because we have to be able to read relatively quickly. Ultimately, we want to … connect this neural interface to an external system and then be able to send signals back into the brain again,” Emondi says. “In order for that to feel natural, you have to do it quickly. Otherwise, it’s like watching a movie where the voice is out of sync with the video. It’s quite annoying.”

DARPA requires systems to interact with the brain—both reading and writing—within 50 milliseconds, a challenging requirement that already has eliminated some approaches, such as magnetic resonance imaging and functional magnetic resonance imaging, Emondi points out. The former examines the hydrogen nuclei of water molecules. The latter measures brain activity by analyzing blood oxygen levels, which change only a matter of seconds after neurons fire. “What people use for MRIs isn’t going to work in the program that we’re trying. It would never make that 50 millisecond mark,” Emondi asserts.
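
To make the 50-millisecond figure concrete, the sketch below totals made-up stage timings for one read-decode-act-write loop and checks them against that budget; the stage names and durations are purely illustrative, not measured values from any N3 system.

```python
# Illustrative latency-budget check for the 50 ms read-plus-write requirement.
# Every stage of the loop has to fit inside a single budget.

BUDGET_MS = 50.0

stages_ms = {
    "read neural signal": 10.0,
    "decode intent": 15.0,
    "external system response": 10.0,
    "encode feedback": 5.0,
    "write back to brain": 8.0,
}

total = sum(stages_ms.values())
status = "within" if total <= BUDGET_MS else "over"
print(f"loop time {total:.0f} ms -> {status} the 50 ms budget")
# loop time 48 ms -> within the 50 ms budget
```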

During the first phase, researchers are working with an animal brain sample. “You take a slice of an animal brain, a rat brain, something like that … and then if you’re claiming you can interact with the neural tissue in an optical way, then you would build that whole apparatus with a living brain slice and show in phase one that you can interoperate with that neural tissue, and you can actually drive or record from those neurons that are in that brain slice,” Emondi reveals.

Successful teams will move on to the second phase, integrating their read and write devices. “Now we’re starting to look at a system integration approach…” Emondi says. “If they’re completely noninvasive, the TA-1 teams are going to have to show they can interact with the cortical tissue at a resolution of one millimeter cubed per channel. If you are a minutely invasive team, meaning you actually have something sitting on the membrane of that neuron that you’ve put in the body in a nonsurgical way, then we’re looking at 50 microns cubed.”
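
A quick back-of-the-envelope comparison shows how different those two resolution targets are; the one-cubic-centimeter sample volume below is just an assumed patch of cortex chosen for the arithmetic, not a program requirement.

```python
# Back-of-the-envelope comparison of the two resolution targets: a 1 mm cube
# per channel for the noninvasive (TA-1) teams versus a 50-micron cube for the
# minutely invasive (TA-2) teams. The 1 cm^3 sample volume is an assumption.

SAMPLE_VOLUME_UM3 = 10_000 ** 3   # 1 cm^3 expressed in cubic microns
TA1_VOXEL_UM3 = 1_000 ** 3        # (1 mm)^3
TA2_VOXEL_UM3 = 50 ** 3           # (50 microns)^3

ta1_channels = SAMPLE_VOLUME_UM3 // TA1_VOXEL_UM3
ta2_channels = SAMPLE_VOLUME_UM3 // TA2_VOXEL_UM3
print(ta1_channels, ta2_channels, ta2_channels // ta1_channels)
# 1000 8000000 8000 -> roughly 8,000 times finer spatial granularity
```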

Phase three will include animal and human studies. The teams will have to conduct the safety studies required by the Food and Drug Administration (FDA). Lessons learned from earlier programs may allow DARPA officials to dramatically reduce the time it might otherwise take to conduct those studies. “For example, for ultrasound we’re probably going to be operating at pressures at or lower than what the FDA has already approved. There’s a number of techniques when you’re working with these different modalities on how much power the FDA would be willing to accept, so we’re making sure we stay within those boundaries,” Emondi reports. “The TA-2 efforts where we’re actually putting something in the body, like these nanoparticles, that’s going to take a lot more time.”

Emondi says it is too early to guess when a noninvasive or minutely invasive neural interface might be fielded, but DARPA aims for a working prototype at the end of the four-year program. The technology will likely be used in a controlled environment, such as a command and control center, long before it is used on the battlefield. “If I’m a Navy SEAL and I’ve got all my equipment on me, coming up out of a saltwater environment, operating a brain-machine interface right away probably wouldn’t be my entry-level solution. But maybe I’m in a metal shelter, like these cargo containers where a lot of the [unmanned aerial vehicle] control platforms are. Being in command centers, control centers, those types of locations … I would think would be probably entry-level uses for the technology.”

He also says it is too early to conclude exactly where the technology might lead. It could be used in the commercial world for big data processing, drone operations, cyber operations or gaming. It could benefit quadriplegics, allowing them to better operate wheelchairs or even TV remotes. “Right now that whole exploratory space is off limits because the only way really to do it with high resolution is through surgical approaches. N3, hopefully, is the program that’s going to crack this open and allow us to use noninvasive neural interfaces to explore how we interoperate with machines in a more advanced way,” Emondi declares.
