Teaching a Computer To Read Your Mind
Scientists conducting basic research at the Johns Hopkins University Applied Physics Laboratory are examining how to build characteristics into robotic systems that improve human-machine teaming. While artificial intelligence and machine learning applications can be trained to perform a task, such systems cannot yet collaborate with humans, and they cannot anticipate what a human teammate intends or will do.
At the crux of the matter are trust and the principles of Theory of Mind, offers Julie Marble, senior scientist at the Johns Hopkins University Applied Physics Laboratory (APL). A longtime human-computer interface researcher and cognitive psychologist, Marble holds a doctorate in human factors from Purdue University. Theory of Mind is the ability to infer what another person is going to do, and that ability is trust-dependent, she explains.
“For trust to arise between humans and machines, two key elements are needed,” Marble says. “First is a mathematical model of the impact of risk, collaboration, cooperation, success/failure, communication and trust of autonomously acting nonhuman teammates. This will inform the development of more complex robotic behaviors, increase technology adoption and allow for synergistic and opportunistic teaming of humans and robots. The second element is the ability of a machine to model or estimate human planning and cognition within the task.”
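The article does not spell out the model’s mathematical form, but the factors Marble names suggest something along the lines of the toy update rule sketched below, in which each interaction nudges a trust estimate toward the observed outcome. The variable names, weights and linear form here are illustrative assumptions, not APL’s actual model.

    # Purely illustrative: a toy trust-update rule built from the factors
    # Marble lists (risk, success/failure, communication). The linear
    # form and weights are assumptions, not APL's model.
    def update_trust(trust, success, risk, communication, rate=0.2):
        """Nudge trust toward the observed outcome of one interaction.

        success: 1.0 if the teammate's action succeeded, else 0.0
        risk: how much the human had at stake (0..1); riskier
              interactions move trust further per outcome
        communication: quality of the teammate's signaling (0..1)
        """
        evidence = success * (0.5 + 0.5 * communication)
        step = rate * (1.0 + risk)          # high-stakes outcomes weigh more
        return trust + step * (evidence - trust)

    trust = 0.5                             # neutral prior
    for outcome in [1, 1, 0, 1]:            # a short interaction history
        trust = update_trust(trust, outcome, risk=0.8, communication=0.9)
        print(round(trust, 3))              # trust rises, dips after failure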
Using 3D virtual reality gaming, APL researchers have developed several platforms to explore how trust develops between humans and simulated machines. They examined the variables theorized to underlie the development of such trust and then ran experiments in the platforms to build a calibrated mathematical model of the effect of these variables on the development of trust in teams, she explains. They also are exploring whether artificial intelligence (AI) agents informed by a model of human cognition make better teammates and collaborators than agents that lack such an internal model of human cognition and performance.
Prior to joining the APL, Marble was a program officer at the Office of Naval Research, where she ran a program for human-computer and human-machine interaction and teaming. She finds virtual reality to be a great environment in which to test how humans trust machines and vice versa. Using games that require collaboration, in particular, helps advance a machine’s ability to infer human intent.
“Because I wanted to look at trust arising between humans and robots, I knew we had to put the human in a position where they could be vulnerable,” she states. “And institutional review boards don’t like it when you put people at risk. But in virtual reality, you can make people feel like they’re going to fall off cliffs and they’re really at no risk because it’s just the game, but they feel like it.”
For one of Marble’s projects, called Escape with Partner, she and fellow researchers built a 3D virtual reality game with four puzzles, essentially a digital escape room. A player teamed up with a partner to move boxes, use lasers and assemble structures to escape the room. To leave the room, teammates had to take actions that supported one another.
“Escape with Partner is the experimental suite for collaboratively achieving puzzle exercises, with the platform assessing relationships and trust with noneconomic risk,” she explains. “What I was looking at there was how does trust evolve and develop between human and robotic teammates. Trust is not just whether or not you think your teammate or partner can do what it is that you’re trying to do. Trust is actually a willingness to make yourself vulnerable to unpredictable actions of another entity. If the machine or your partner cannot do what it is supposed to do, then trust is also irrelevant.”
The project looked at whether humans would choose a robot partner and whether they could tell when they had been given a digital one. The researchers sometimes paired participants with a computer without revealing that their partner was not human.
“We asked them which puzzles did you play with a human, and which puzzles did you play with a robot,” she shares. “And they could not tell what rounds they played with a robot or a human. What was more interesting was that the rounds of the puzzles where they thought they did best were when they thought that they played with a robot. So, there’s something else going on there, and I don’t exactly know completely what.”
To advance the APL’s basic research, the cognitive psychologist passed the platform on to the Naval Postgraduate School in Monterey, California, to run follow-on experiments and gain more data. Marble then developed an extended platform based on Escape with Partner, called PARTI, which will use more sophisticated autonomous digital teammates than the original platform. “The bot that you played with, it wasn’t anything super intelligent,” she clarifies. “It was technically a hierarchical state machine, and it could get stuck.”
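A hierarchical state machine of the kind Marble describes nests substates under high-level tasks and follows a fixed transition table. The minimal sketch below, with all states, events and transitions invented for illustration, shows how such a bot can get stuck: once it enters a state with no outgoing transitions, no event can move it.

    # Minimal hierarchical state machine in the spirit of the original
    # Escape with Partner bot. States and events are invented; the point
    # is that a fixed table with no fallback can leave the bot stuck.
    HSM = {
        "solve_puzzle": {                                   # high-level task
            "find_box": {"box_seen": "push_box"},           # entry substate
            "push_box": {"box_placed": "done", "path_blocked": "blocked"},
            "blocked": {},                                  # no outgoing edges
            "done": {},
        },
    }

    def step(task, substate, events):
        """Advance one tick: follow the first transition whose event fired."""
        for event, nxt in HSM[task][substate].items():
            if event in events:
                return nxt
        return substate                   # no matching event: stay put

    s = "push_box"
    s = step("solve_puzzle", s, {"path_blocked"})   # -> "blocked"
    s = step("solve_puzzle", s, {"box_seen"})       # -> still "blocked": stuck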
Instead of virtual reality, the PARTI platform uses a flat-screen game. Although the goal once again is to escape the room, this time teams of three players give the researchers a more detailed look at how trust and teaming are related. “We just finished that, and we’re looking at actually running subjects with it this year,” Marble states.
Another project she is leading is called Learning to Read Minds, which uses a digital collaborative card game called Hanabi. The project builds on work stemming from a 2017 Grand Challenge from Google Brain and DeepMind in which researchers from Facebook, Google Brain, DeepMind, APL, Carnegie Mellon University and elsewhere created AI agents that could play in DeepMind’s Hanabi Learning Environment.
“Hanabi is an interesting card game,” Marble acknowledges. “The rules to the game are very simple. But unlike many other card games such as bridge or poker, which are adversarial games where you are playing against other people, Hanabi is completely collaborative. You and everyone playing on a team are trying to get a communal score. And there’s high uncertainty in the game, and there’s very little communication. The whole game is predicated on your ability to infer what other people can know.”
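The inference Marble describes can be made concrete: Hanabi players cannot see their own cards, so every hint, whether it touches a card or not, prunes the set of cards a hidden card could be. The sketch below uses the game’s five colors and five ranks but simplifies away duplicate cards and the hint format; it is an illustration, not the DeepMind Hanabi Learning Environment’s API.

    # Illustrative sketch of the inference at Hanabi's core: maintain a
    # belief set over what a hidden card could be and prune it per hint.
    from itertools import product

    COLORS = ["red", "green", "blue", "white", "yellow"]
    RANKS = [1, 2, 3, 4, 5]

    belief = set(product(COLORS, RANKS))    # card could be anything at first

    def apply_hint(belief, attribute, value, touched):
        """Keep only candidates consistent with a color or rank hint.

        touched: True if the hint pointed at this card, False otherwise
        (a hint that skips the card is negative information).
        """
        idx = 0 if attribute == "color" else 1
        return {c for c in belief if (c[idx] == value) == touched}

    belief = apply_hint(belief, "color", "red", touched=True)   # 5 candidates
    belief = apply_hint(belief, "rank", 1, touched=False)       # 4 candidates
    print(sorted(belief))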
The Grand Challenge participants found that the AI agents could play Hanabi but could not play well with humans. To address that problem, the APL is creating a platform this year to replicate those findings “and demonstrate that our AI are trained up and can play this game very well,” Marble says. “And we’re leveraging open-source AI for the DeepMind and the Facebook algorithms.”
The APL also is creating an interface to allow agents from the other Challenge participants, such as Rainbow, BAD and FireFlower, to play together with APL’s agent. “We want to see how well, for example, the FireFlower agent plays with the Rainbow agent,” she notes. “I expect that when we run this experiment, we will have demonstrated that in self-play these agents play very well, but that when they play with another agent, they won’t play as well because they are implementing slightly different policies. The other thing that we’re doing is the interface will allow the AIs to play with humans, and that leads us to the next question: How do we give AI an insight into the human so that it can anticipate and work with the human? This becomes really important because if we’re going to have AI or machines as teammates, we’re going to be using them in situations that are unexpected.”
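The comparison Marble anticipates is usually summarized as a cross-play matrix: pair every agent with every other, then compare average scores on the diagonal (self-play) against the off-diagonal (cross-play). The harness below is a stand-in sketch with a placeholder game function, not APL’s interface or the real agents.

    # Sketch of a cross-play evaluation. play_game is a placeholder that
    # returns a pseudo-random score; a real harness would run full Hanabi
    # games between the two policies.
    import random
    from itertools import product

    def play_game(agent_a, agent_b, seed):
        """Placeholder for one Hanabi game between two policies."""
        random.seed((hash((agent_a, agent_b)) ^ seed) & 0xFFFF)
        return random.randint(0, 25)      # Hanabi scores range 0..25

    AGENTS = ["Rainbow", "BAD", "FireFlower", "APL"]

    scores = {}
    for a, b in product(AGENTS, AGENTS):
        games = [play_game(a, b, seed) for seed in range(100)]
        scores[(a, b)] = sum(games) / len(games)

    for a in AGENTS:
        self_play = scores[(a, a)]                       # diagonal entry
        cross = [scores[(a, b)] for b in AGENTS if b != a]
        print(f"{a}: self-play {self_play:.1f}, "
              f"cross-play {sum(cross) / len(cross):.1f}")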
Marble hopes that this basic research will lead to improvements in how digital teammates could help in urban search and rescue applications. The APL researchers want to develop a platform that does not require tremendous amounts of data to predict how people are going to perform.
“What you need instead is a way for the machine to be able to anticipate the human’s cognitive processes,” she suggests. “And so next year we’re going to explore several different ways to do this. I think a more robust way to perform this would be to basically create a process model of human cognition, and we can do that using architectures that already exist. And first, we have to figure out which type of cognitive architectural process model would be best for modeling human play in this game state. What I want to do is create a machine that is capable of teaming with a person in an unpredicted context. We call it rogue teaming, but it would be rapid operations where you haven’t been able to practice before.”
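Marble does not name a specific architecture here, and existing cognitive architectures such as ACT-R are far richer than anything shown below, but one common route to the anticipation she describes is Bayesian goal inference: keep a probability distribution over what the human is trying to do and update it after each observed action. The goals, actions and likelihoods in this sketch are invented for illustration.

    # Toy Bayesian goal inference: posterior ~ likelihood * prior.
    # Goals, actions and probabilities are invented for illustration;
    # a process model of cognition would be far richer.
    GOALS = ["open_door", "fetch_key", "move_box"]

    # P(action | goal): how likely each observed action is under each goal
    LIKELIHOOD = {
        "walk_to_door": {"open_door": 0.7, "fetch_key": 0.2, "move_box": 0.1},
        "pick_up_key":  {"open_door": 0.2, "fetch_key": 0.7, "move_box": 0.1},
        "push":         {"open_door": 0.1, "fetch_key": 0.1, "move_box": 0.8},
    }

    def update(prior, action):
        """Apply Bayes' rule and renormalize over the goal set."""
        post = {g: LIKELIHOOD[action][g] * prior[g] for g in GOALS}
        z = sum(post.values())
        return {g: p / z for g, p in post.items()}

    belief = {g: 1 / len(GOALS) for g in GOALS}              # uniform prior
    for act in ["pick_up_key", "walk_to_door", "walk_to_door"]:
        belief = update(belief, act)
    print(max(belief, key=belief.get))    # best guess: "open_door"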
Ideally, the Learning to Read Minds project would move from the Hanabi card game up to more applied and realistic games, Marble shares. In particular, she mentions two games: Airway, which teaches surgeons how to make decisions during operations, and Command & Conquer, which virtualizes air and surface warfare. “And then from that we would step off into more real-world situations,” she adds. “We are on the edge of a paradigm shift where we go from having machines as tools to actually having machines as teammates where they should be able to anticipate what we are trying to do and what our goals are and assist with that.”