Erik Learned-Miller, professor of computer science, recently received a four-year grant from the Defense Advanced Research Projects Agency (DARPA) to support fundamental research that he says will be “especially open to creative approaches” in the quest to build “visual common sense” into computers and robotic assistants. He will lead the multi-institution DARPA project, part of its Lifelong Learning Machines (L2M) program, which is expected to bring about $1.6 million to the campus.
Learned-Miller says, “It is well-established that machines can learn from examples. They can learn to discriminate among different types of fruit, for example. This sort of machine learning is widely used now. However, there are many new directions in which we’d like to be able to extend this.”
“We’d like the computer to be able to learn new things without forgetting the old things it has learned, instead of starting from scratch for each new problem,” he adds. “This project is a broad and ambitious effort to move computer vision and machine learning to a new level. Our team is doing this in computer vision, the field concerned with images and video.”
His research partners are Kristen Grauman at the University of Texas at Austin, Rob Fergus at New York University and Greg Shakhnarovich at the Toyota Technological Institute at Chicago. One of the approaches the team expects to take, Learned-Miller explains, is to teach computers to decide which actions will be most useful in a particular situation based on previous experience.
He says, “One of the complaints people have is that if you want to have an assistive robot and you want to ask it, ‘bring me my medicine,’ some of that can be written into the computer with rules, but they are brittle. Humans do this kind of intelligent search all the time, but computers don’t currently have these abilities, and if we want to rely on them as personal assistants, it will be incredibly frustrating if they can’t do this kind of thing.”
“The idea is that we want to endow the computer with the ability to use common sense reasoning to find things,” the computer scientist says. “For this it needs to recognize and identify objects. One way to teach that is to show it how objects occur together. By showing a computer millions of videos, we can let it learn ‘co-occurrence statistics.’ So, for example, it can learn that a refrigerator and a stove usually are found in the same area, a kitchen.”
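The co-occurrence statistics Learned-Miller describes can be sketched very simply: count how often pairs of object labels appear in the same scene. The sketch below is a hypothetical toy version, not the project's actual pipeline; the scene data and labels are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(scenes):
    """Count how often each pair of object labels appears in the same scene.

    `scenes` is a list of sets of object labels, e.g. one set per
    annotated video frame. Pairs are stored in sorted order so
    ("refrigerator", "stove") and ("stove", "refrigerator") are counted
    as the same pair.
    """
    pair_counts = Counter()
    for labels in scenes:
        for a, b in combinations(sorted(labels), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# Toy scenes standing in for millions of video frames
scenes = [
    {"refrigerator", "stove", "keys"},
    {"refrigerator", "stove"},
    {"couch", "television"},
    {"keys", "jacket", "hook"},
]
counts = cooccurrence_counts(scenes)
print(counts[("refrigerator", "stove")])  # 2: they appear together in 2 scenes
```

At scale, counts like these would be normalized into probabilities, which is what lets a system infer that a refrigerator strongly predicts a kitchen.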
For a robotic assistant that has been asked to “find my keys,” it may see a stove in one room, a couch in another room, and a jacket hanging on a hook in another area, the researcher notes. “We are hoping to let the computer learn to reason about where keys might occur and to look there first,” he says. “In the old days, computers were set up to learn by rules, but now they need to become more flexible and to learn in a new way, by what they have experienced.”
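The "find my keys" behavior described above can be sketched as a ranking problem: score each room by how strongly the objects seen there co-occur with keys in past experience, then search the highest-scoring room first. The scores and room layout below are made-up illustrative numbers, not data from the project.

```python
# Hypothetical learned co-occurrence scores between "keys" and other
# objects (higher = more often seen together in training videos).
KEY_COOCCURRENCE = {
    "hook": 0.6,
    "jacket": 0.5,
    "couch": 0.2,
    "stove": 0.05,
}

def rank_rooms(rooms):
    """Order rooms for a key search, most promising first.

    `rooms` maps a room name to the set of objects observed there.
    A room's score is the summed co-occurrence of its objects with keys.
    """
    def score(objects):
        return sum(KEY_COOCCURRENCE.get(o, 0.0) for o in objects)
    return sorted(rooms, key=lambda r: score(rooms[r]), reverse=True)

rooms = {
    "kitchen": {"stove"},
    "living room": {"couch"},
    "hallway": {"jacket", "hook"},
}
print(rank_rooms(rooms))  # hallway first: jacket and hook score highest
```

The point of the sketch is the contrast Learned-Miller draws: nothing here is a hand-written rule like "keys are in the hallway"; the search order falls out of experience-derived statistics, so it adapts as the statistics change.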
While the DARPA project is purposely open-ended and does not have precisely defined goals such as improving a robotic assistant, the team’s goal is for a computer to be able to correctly identify what objects go together so it can “decide” how to carry out an intelligent search for something without being bound by a set of rigid rules, he adds.