Anelia Angelova, a research scientist at Google working on computer vision and machine learning, presented a new pedestrian detection system that works on video images alone. Recognizing, tracking, and avoiding human beings is a critical capability in any driverless car, and Google’s vehicles are duly festooned with lidar, radar, and cameras to ensure that they identify people within hundreds of meters.

But those sensors are expensive. If autonomous vehicles could reliably locate humans using cheap cameras alone, their cost would fall, but video cameras have issues of their own. “Visual information gives you a wider view [than radars] but is slower to process,” Angelova told IEEE Spectrum.

At least, it used to be. The best video analysis systems use deep neural networks, machine learning algorithms that can be trained to classify images (and other kinds of data) extremely accurately. Deep neural networks rely on multiple processing layers between the input and output layers. Modern deep networks can outperform humans at tasks such as recognizing faces, with accuracy rates of over 99.5 percent. But traditional deep networks applied to pedestrian detection are very slow, explains Angelova: they divide each street image into 100,000 or more tiny patches and then analyze each one in turn. This can take seconds or even minutes per frame, making them useless for navigating city streets. Long before a car using such a network had identified a pedestrian, it might have run the person over.
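Some quick arithmetic shows why exhaustive patch-by-patch scanning explodes. This is a minimal sketch; the image size, patch size, stride, and number of scales below are illustrative assumptions, not figures from Google's system.

```python
def count_patches(img_w, img_h, patch, stride):
    """Number of patch positions a sliding-window detector must classify
    at a single image scale."""
    cols = (img_w - patch) // stride + 1
    rows = (img_h - patch) // stride + 1
    return cols * rows

# One scale of a 1280x720 street image, 64x64 patches, 4-pixel stride:
per_scale = count_patches(1280, 720, 64, 4)   # tens of thousands of patches

# Detectors typically scan several image scales, multiplying the work:
total = per_scale * 8
print(per_scale, total)
```

Even with these modest assumptions, a single frame yields hundreds of thousands of patches, each requiring a full network evaluation, which is where the seconds-per-frame cost comes from.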

Angelova’s new, high-speed pedestrian detector has three separate stages. The first is a deep network, but one that slices the image into a grid of just a few dozen patches rather than tens of thousands. This network is trained to make multiple detections simultaneously at multiple locations, picking out what it thinks are pedestrians. The second stage is another network that refines that result, and the third is a traditional deep network that delivers the final word on whether the car is seeing a person or, say, a mailbox.
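The structure of such a cascade can be sketched in a few lines. The stage functions below are hypothetical stand-ins for illustration only, not Google's actual models; the point is the data flow: cheap coarse detection first, with the expensive network reserved for the few surviving regions.

```python
def cascade_detect(image, stage1, stage2, stage3, threshold=0.5):
    """Three-stage cascade: a coarse grid network proposes candidates,
    a second network refines them, and a full deep network decides."""
    # Stage 1: coarse grid network emits (box, score) candidates
    # over a few dozen cells rather than 100,000+ patches.
    candidates = stage1(image)
    # Stage 2: refine the boxes that scored above threshold.
    refined = [stage2(image, box) for box, score in candidates if score > threshold]
    # Stage 3: the slow, accurate network gives the final verdict
    # on the handful of refined regions only.
    return [box for box in refined if stage3(image, box) > threshold]

# Toy stand-ins to exercise the data flow (not real models):
def toy_stage1(img):
    return [((0, 0, 64, 128), 0.9), ((200, 40, 64, 128), 0.2)]

def toy_stage2(img, box):
    x, y, w, h = box
    return (x + 2, y + 1, w, h)   # nudge the box slightly

def toy_stage3(img, box):
    return 0.95

print(cascade_detect(None, toy_stage1, toy_stage2, toy_stage3))
```

The design choice is the classic cascade trade-off: each stage prunes candidates so that the most accurate (and slowest) model runs on only a tiny fraction of the image.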

However, because that slow, accurate network analyzes only the small portions of the image where pedestrians are likely to be, the whole process runs much faster: between 60 and 100 times quicker than the best previous networks, says Angelova. Running on graphics processors similar to those in Google's self-driving cars and fed street images, the system was trained in about a day. It could then accurately identify pedestrians in around 0.25 seconds. “That’s still not the 0.07 seconds needed for real-time use,” admits Angelova. Self-driving cars need to know almost instantly whether they are facing pedestrians in order to safely take evasive action. “But it means [the new system] could be complementary in case other sensors fail,” she says. As more powerful processors become available and the capacity of the neural network increases, Angelova expects that performance will improve. “For networks with even larger fields of view, one can consider even more speedups,” she says.
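The latency figures quoted above translate into frame rates as follows (the figures are from the article; the frame-rate conversion is just arithmetic).

```python
detector_latency = 0.25   # seconds per frame for the new cascade
realtime_latency = 0.07   # seconds per frame needed for real-time use

fps_now = 1 / detector_latency       # frames per second achieved today
fps_needed = 1 / realtime_latency    # frames per second required
gap = detector_latency / realtime_latency

print(round(fps_now, 1), round(fps_needed, 1), round(gap, 1))
```

In other words, the detector manages roughly 4 frames per second where real-time use demands about 14, so a further speedup of around 3.5x is still needed.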


Article source: Mark Harris, IEEE Spectrum

http://spectrum.ieee.org/cars-that-think/transportation/self-driving/new-pedestrian-detector-from-google-could-make-selfdriving-cars-cheaper