AMHERST, Mass. – Despite substantial progress in recent years, there are still “considerable barriers” to deploying fully autonomous systems such as self-driving cars or mobile service robots, say computer scientists Shlomo Zilberstein and Joydeep Biswas at the University of Massachusetts Amherst. They recently received a three-year, $700,000 grant from the National Science Foundation to develop more advanced autonomous systems that can learn from experience, recognize when they need human help and ask for it.
As Zilberstein explains, “Most robots in industry work in cages because it’s dangerous for people to be near them, but there is growing interest now in having robots in the home, robots that can help people. This kind of robot will have to be more autonomous than those on an assembly line because operating in the open world is a lot harder, where the environment is less predictable.”
He adds, “The basic assumption we make is that these systems will not be so independent that they will never need help. They will always need help to learn about their environment, and most important, to deal with the unexpected.”
Biswas says, “Let’s say we want to have a robot capable of navigating outdoors on sidewalks do something like pick up my prescription at the pharmacy or return a book to the library. There are major challenges right now to doing things like this. However, we expect a robot to initially ask once, or a few times, for directions, and then the need for such assistance should reduce over time. It’s called progressive independence for long-term autonomy. We want to teach the robots to practice so they refine their paths and expand the area where they are comfortable, and develop a model of their robustness in different parts of the world. We want to develop robots that can expand their deployment area over time.”
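The "progressive independence" idea Biswas describes can be sketched as a simple confidence model. This is purely illustrative and not the researchers' actual method: all class and region names are hypothetical, and the smoothed success-rate estimate stands in for whatever robustness model the robot would actually learn. The robot asks a human for directions in a region only while its estimated traversal success rate is below a threshold; repeated practice raises the estimate, so the need for help diminishes over time.

```python
from collections import defaultdict

class ProgressiveNavigator:
    """Illustrative sketch: track how reliably each region has been
    traversed and ask a human for help only where the estimated
    success rate is still below a confidence threshold."""

    def __init__(self, confidence_threshold=0.8):
        self.threshold = confidence_threshold
        self.successes = defaultdict(int)  # region -> successful traversals
        self.attempts = defaultdict(int)   # region -> total attempts

    def confidence(self, region):
        # Laplace-smoothed success estimate; unvisited regions start at 0.5
        return (self.successes[region] + 1) / (self.attempts[region] + 2)

    def needs_help(self, region):
        return self.confidence(region) < self.threshold

    def record_traversal(self, region, succeeded):
        self.attempts[region] += 1
        if succeeded:
            self.successes[region] += 1

nav = ProgressiveNavigator()
assert nav.needs_help("sidewalk-3")           # new region: ask for directions
for _ in range(10):
    nav.record_traversal("sidewalk-3", True)  # practice runs succeed
assert not nav.needs_help("sidewalk-3")       # independent in this region now
```

As the robot accumulates successful traversals, the set of regions where `needs_help` returns false grows, which is one simple way to read "expanding the deployment area over time."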
Zilberstein says that one of the worries about self-driving cars is that they will be too cautious. “If one in 1,000 cars is confused and stops suddenly in the middle of the road, that’s a problem for everyone on that road,” he points out. “If the system can’t recognize a problem and deal with it quickly, if it just stops and waits for help, the whole thing stops. If this happens to even a small number, it can create a huge problem.”
The two UMass Amherst College of Information and Computer Sciences (CICS) professors will work on these and related problems at different levels. Zilberstein will use analysis and simulations to test theoretical, formal and high-level properties of human-aware systems and autonomous behavior, while Biswas will work on progressively independent long-term autonomy for service mobile robots, including computationally efficient joint perception and planning, and incorporating approximate human corrections into conventionally purely autonomous algorithms such as mapping and localization.
As Biswas explains, high-level planning will instill such values as human safety into algorithms, along with other broad goals. “A high-level plan for a robot looks something like, ‘take this book to the library,’ while a low-level plan is ‘drive the first two meters in a straight line down this hallway.’ It takes a long time for a robot to build a map by such incremental steps, but if a human can provide high-level feedback, the robot can build better maps much faster,” he adds.
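The two planning levels Biswas contrasts can be sketched as follows. Everything here is a toy assumption, not the team's algorithms: the high-level steps, the fixed decomposition table, and the grid-labeling helper are all hypothetical, chosen only to show how one coarse human correction can replace many incremental robot observations.

```python
# A high-level plan is a short list of task-level steps...
high_level_plan = ["leave office", "traverse hallway", "enter library",
                   "deliver book"]

def decompose(step):
    # ...each of which expands into metre-scale motion commands.
    # (A fixed lookup table here; a real planner would compute these.)
    primitives = {
        "traverse hallway": ["drive 2.0 m straight",
                             "drive 2.0 m straight",
                             "turn left 90 deg"],
    }
    return primitives.get(step, ["execute '%s'" % step])

low_level_plan = [cmd for step in high_level_plan for cmd in decompose(step)]

# Incremental mapping: the robot labels one grid cell per observation...
occupancy = {(x, 0): "unknown" for x in range(6)}
occupancy[(0, 0)] = "free"  # ...so covering the corridor takes many steps.

# A single coarse human correction labels the whole corridor at once.
def apply_human_correction(grid, cells, label):
    for cell in cells:
        grid[cell] = label

apply_human_correction(occupancy, [(x, 0) for x in range(6)], "free")
assert all(v == "free" for v in occupancy.values())
```

The point of the sketch is the asymmetry: the robot needs one observation per cell, while the human's high-level feedback ("that whole hallway is clear") fixes the map region in a single step.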
Further, both Zilberstein and Biswas will collaborate with Stefan Witwicki, a researcher at Nissan Research Center – Silicon Valley, focusing on autonomous driving. Zilberstein says his graduate student Kyle Wray did “phenomenal, fundamental work” at Nissan as an intern that laid the foundation for this grant.
In addition to human-aware systems and robots that exhibit progressive independence, Zilberstein says another important practical skill they hope to develop in autonomous systems is the ability to maintain a so-called live state. This means that the robot would have the ability to foresee a potentially “fatal” problem for itself and stop before it suffers a breakdown or crash from which it cannot recover.
“Nothing of that nature exists right now,” Zilberstein says. “But it will be important for an autonomous robot to recognize that it may no longer be able to progress to its goal on its own but, with human help, it can move on. It would need to foresee that, which means it needs a model that allows it to maintain the ability to recover from a problem.”
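One minimal way to picture maintaining a “live state” is a reachability check before committing to an action. This sketch is an assumption of ours, not the researchers' model: it uses battery level and distance to a recovery point (a charging dock or a spot a human can reach) as the only state, and only commits to an action if the successor state still leaves a recovery point reachable.

```python
COST_PER_METRE = 1.0  # illustrative battery units consumed per metre

def is_live(battery, metres_to_nearest_recovery):
    """A state is 'live' if some recovery point is still reachable."""
    return battery >= metres_to_nearest_recovery * COST_PER_METRE

def safe_to_take(action_cost, battery, metres_to_recovery_after_action):
    # Commit to the action only if the *successor* state is still live,
    # i.e. the robot could still reach help before a fatal breakdown.
    return is_live(battery - action_cost, metres_to_recovery_after_action)

# A robot with 10 units of battery considers moving away from its dock:
assert safe_to_take(action_cost=4.0, battery=10.0,
                    metres_to_recovery_after_action=5.0)      # 6 >= 5: live
assert not safe_to_take(action_cost=8.0, battery=10.0,
                        metres_to_recovery_after_action=5.0)  # 2 < 5: dead end
```

The design choice the quote points at is that the check runs on the predicted successor state, not the current one: the robot stops one step before entering a state from which no recovery, autonomous or human-assisted, is possible.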
He adds, “There are no complete models of the open world and there never will be, which is very challenging for the system. We want to develop systems in which we can have high confidence that certain things won’t happen. We want high probabilities that they will do the right thing when confronted with the unexpected.”