GUEST: In a not-too-distant future, autonomous cars, driven largely by AI systems, will hit the road in large numbers.
But getting autonomous vehicles on the road is only half the battle. That’s because, even after the cars are out there, system operators will need to frequently update their software models and deploy updates to their fleets. While the rest of us can get away with updating our smartphone apps only once every few months, autonomous cars aren’t Angry Birds, and their software will need to be updated regularly in order to keep passengers safe.
Across the board, auto manufacturers agree that continuously training and deploying updated software models to their fleets is their biggest challenge. Deep learning AI technology platforms, logistics, and, surprisingly, humans will all play a role in the final solution.
The production release of fully autonomous cars is still probably at least five years away, as these machines are not yet safe enough for widespread consumer use. Google’s self-driving cars still make mistakes, like getting confused by cyclists on fixed-gear bikes at stop signs. Tesla’s Autopilot has run into trouble when driving on local streets instead of highways. In fact, there is a virtually unlimited number of such corner cases that autonomous vehicles must respond to, and many still need to be discovered and factored in. Only when a sufficient number of scenarios have been addressed will autonomous cars be considered “safe enough.” As Tesla recently blogged:
“Getting [an autonomous car] to be 99% correct is relatively easy, but getting it to be 99.9999% correct, which is where it ultimately needs to be, is vastly more difficult. Making mistakes at 70 mph would be highly problematic.”
The continuous update challenge
Though we will never get to 100% accuracy, when human lives are at stake, we must keep trying. This is precisely why continuously updating the “brains” in autonomous vehicles is going to be standard practice even after these cars become a mass-market reality. That’s where a combination of emerging technologies and humans will come into play.
To understand why continuous updating poses such a challenge — and how technology and humans will work together to address it — it is useful to understand a bit about how autonomous cars work and what it takes to “teach” them to drive. Let’s assume that the year is 2025, and a fictitious company named Hooli is several years into the commercial release of its fully autonomous vehicle. Within each of the million or so Hooli autonomous cars on the road there is an onboard computer that collects inputs from all the car’s sensors and delivers as outputs all the actions that the car should take. This computer is programmed with a deep learning neural network model that is already highly accurate, enabling the car to perform hundreds of actions extremely well — actions such as stopping at a stop sign, slowing when the light is yellow, avoiding pedestrians in a crosswalk, etc. But as we have discussed, Hooli wants to update this model regularly in order to make its cars even safer, and their goal is to push out updates once a week.
For the sake of simplicity, let’s assume Hooli is trying to improve the performance of just one of the car’s actions — lane changing. Each week, Hooli’s autonomous vehicles conduct over 100 million lane changes, during which the cars collect and act on geospatial location information about nearby vehicles. The geospatial locations of nearby vehicles are represented by 3D “bounding boxes” in images captured by onboard cameras. (Image source: University of Toronto.)
Getting a human-in-the-loop
Some of these bounding boxes are certain to have been imperfectly drawn by the deep learning model — errors that could prove catastrophic! For example, if the model estimates that a nearby car is only 10 feet long when it is in fact 15 feet long, then the autonomous car could clip the other car’s bumper when changing lanes, causing an accident and injury. Fortunately, this disaster can be avoided by inserting humans into the feedback loop. Humans can review the bounding boxes drawn by the model and can correct any errors by redrawing the box. The onboard model can then be updated with the correction.
Of course, it is not practical to have humans review 100 million lane change videos each week looking for misdrawn bounding boxes. Fortunately for Hooli, though, there is a way to simplify the problem. The bounding boxes emerge from the deep learning model accompanied by confidence levels that indicate how certain the model is that the bounding box is correct, and low confidence levels typically accompany incorrectly drawn bounding boxes. So if Hooli has humans focus on reviewing only those videos with low-confidence bounding boxes, they can still be reasonably certain that they will catch the most egregious errors.
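The triage step described above can be sketched in a few lines. This is an illustrative example, not Hooli's actual pipeline: the clip data, field names, and the 30% threshold are assumptions for demonstration.

```python
# Sketch: triage lane-change clips by model confidence.
# Each clip pairs a (hypothetical) ID with the confidence the model
# reported for each bounding box it drew in that clip.

def select_for_review(clips, threshold=0.30):
    """Return IDs of clips with at least one low-confidence bounding box."""
    return [clip_id for clip_id, confidences in clips
            if any(c <= threshold for c in confidences)]

clips = [
    ("clip-001", [0.98, 0.95]),  # all boxes confident -> no review needed
    ("clip-002", [0.92, 0.25]),  # one uncertain box  -> send to a human
    ("clip-003", [0.12]),        # very uncertain     -> send to a human
]
print(select_for_review(clips))  # ['clip-002', 'clip-003']
```

The effect is that human attention is spent only where the model itself signals doubt, which is what shrinks 100 million clips down to a reviewable queue.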
Let’s say Hooli were to ask humans to review videos containing bounding boxes with confidence levels of 30% or lower. It turns out that only about 5,000 videos meet this criterion — which is obviously a much more tractable number than 100 million! Assuming it takes one minute on average to observe and, if necessary, correct the bounding boxes in a single video, then a project to review and correct up to 5,000 videos could be completed by 100 people in less than an hour, giving Hooli plenty of time to update the deep learning model with corrections each week.
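The back-of-the-envelope arithmetic behind that claim is worth making explicit (the figures are the hypothetical ones from the example above):

```python
# Workload estimate for the weekly human-review pass.
videos = 5_000          # clips flagged for review (low-confidence boxes)
minutes_per_video = 1   # average time to inspect and, if needed, redraw boxes
reviewers = 100         # people working in parallel

total_minutes = videos * minutes_per_video
minutes_per_reviewer = total_minutes / reviewers
print(minutes_per_reviewer)  # 50.0 -> under an hour per reviewer
```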
Once the misdrawn bounding boxes have been corrected, the updated images can be fed into the training process to generate an updated model. To verify that the updated model performs better than the old one, Hooli tests both the new model and the previous model against a validation data set (e.g., an hour’s worth of video with human-verified bounding boxes) and measures the overall “closeness” of the predicted bounding boxes to the human-verified ones. The improved model is then deployed over the air to the one million Hooli cars in operation, similar to how Tesla pushes software updates to its cars today.
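One common way to score that “closeness” is intersection-over-union (IoU): the overlap between the predicted box and the human-verified box, divided by their combined area. The article doesn’t specify Hooli’s metric, so this is a sketch of a standard choice, shown in 2D for brevity (the article’s boxes are 3D), with illustrative coordinates.

```python
# Sketch: intersection-over-union between a predicted and a verified box.
# Boxes are (x1, y1, x2, y2) corner coordinates; 1.0 means a perfect match.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlap rectangle (if any)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# The article's example: the model thinks a car is 10 ft long when it is 15 ft.
predicted = (0, 0, 10, 4)
verified = (0, 0, 15, 4)
print(round(iou(predicted, verified), 3))  # 0.667 -> well short of a match
```

Averaging this score over the whole validation set gives a single number to compare the new model against the old one before pushing the update to the fleet.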
It’s easy to get caught up in some of the hysteria about the prospect of AI automating everything, stealing jobs and generally pushing humans aside. Despite the incredible advances in autonomous driving thanks to deep learning and other technologies, it may be comforting to some that it still takes a lot of living breathing humans to identify errors in autonomous driving systems and to develop and manage the processes necessary to correct them. In short, we still need people to teach cars to think.
Naveen Rao is CEO of Nervana Systems. He is also a neuroscientist and processor architect and has spent his academic life and career devoted to figuring out how to make computers mimic the human brain.
Human-in-the-loop deep learning will help drive autonomous cars