Scientist to teach computers to learn what they see

Computer vision systems that can figure out what they are looking at without continuous training are the focus of a five-year, $500,000 National Science Foundation (NSF) CAREER program grant recently awarded to Erik Learned-Miller, assistant professor of Computer Science.

Learned-Miller hopes to develop computer vision systems that can learn tasks quickly by leveraging information they have already learned. Current techniques teach computers visual concepts through examples: seeing several images of the same car shot from different angles and distances, for instance, teaches the computer to recognize the car in each new image.
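As a rough sketch of this example-driven approach (not Learned-Miller's actual method; the random "images" and the nearest-neighbor rule below are illustrative stand-ins), a system might label a new photo by comparing it against its stored training views:

```python
import numpy as np

# Hypothetical training set: several views of the same car, each a
# flattened grayscale image vector paired with the label "car".
rng = np.random.default_rng(0)
views = rng.random((5, 64 * 64))          # stand-ins for real photos
labels = ["car"] * len(views)

def nearest_neighbor(query, examples, example_labels):
    """Label a new image by its closest training example (Euclidean)."""
    dists = np.linalg.norm(examples - query, axis=1)
    return example_labels[int(np.argmin(dists))]

# A new photo of the same car, taken from yet another angle.
new_view = views[2] + 0.01 * rng.standard_normal(64 * 64)
print(nearest_neighbor(new_view, views, labels))  # -> "car"
```

The sketch makes the cost visible: every new object, and every substantially new viewpoint, demands more stored examples.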

But such machines typically can’t take advantage of previously learned knowledge when performing new tasks. Training vision systems this way, one problem at a time, often requires amassing large training sets, and the systems can still fail catastrophically when confronted with a new situation, says Learned-Miller.

“Current techniques are impractical, time-consuming and arduous,” he says. “We need to shift the burden of learning to the computer.”

Ideally, computer systems should be adaptive rather than trained anew for each task, especially when the tasks are similar to previously learned ones, says Learned-Miller. Recognizing letters, for example, even in a new font, should be possible without training the computer on the new font. With the NSF funding, Learned-Miller will explore ways of getting machines to recognize any particular car or face from a single example, simply by watching other cars or faces as they move about.
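One way to read that goal (a speculative sketch, not the grant's actual algorithm): transformations learned by watching other objects move could be applied to a single example of a new object, expanding one photo into many synthetic views. The shift-based transforms below are illustrative stand-ins for what a real system would learn from video:

```python
import numpy as np

# Transformations "observed" by watching other objects move: here just
# small image shifts. A real system would learn these from video of
# other cars or faces.
def observed_transforms(image):
    for dx in (-2, 0, 2):
        for dy in (-2, 0, 2):
            yield np.roll(np.roll(image, dx, axis=1), dy, axis=0)

rng = np.random.default_rng(1)
single_example = rng.random((32, 32))  # stand-in for one photo of a new face

# Expand the single example into many synthetic views by applying the
# transformations borrowed from other objects.
synthetic_views = list(observed_transforms(single_example))

def recognize(query, views, threshold=5.0):
    """Match a query image against the synthetic views of the one example."""
    distances = [np.linalg.norm(query - view) for view in views]
    return min(distances) < threshold

# The same face seen again after it has moved slightly to the right.
shifted_query = np.roll(single_example, 2, axis=1)
print(recognize(shifted_query, synthetic_views))   # -> True
```

Because the transformations are shared across objects, the single example carries far more information than it would alone, which is the leverage Learned-Miller describes.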

Learned-Miller also hopes to develop software that lets robots continuously explore the visual world and the interactions between vision and the other senses. Ultimately, he hopes to develop computers that can be taught simply and rapidly, and can then explore on their own.