“Look Ma! No hands!” What’s under the hood of a driverless car?
How do we get from guided parallel parking to the “driverless Uber”?
Today’s car buyers will already be familiar with features that allow a car to take control, such as parking assistance, adaptive cruise control and automatic braking. But what additional technology will a car need before it can collect you at the station and drop you home?
Fitted out with an array of sensors – LiDAR, video cameras, radar, GPS, altimeters, tachometers, gyroscopes and ultrasonic units – a driverless car gathers data about its location, behaviour and environment. The car uses these measurements to establish where it is, to within a few centimetres, by comparing them with existing map data stored in a data centre. The maps are impressively detailed, having been collected and annotated by hand to describe permanent features of the landscape. The car then prioritises attention on new features in its surroundings, picking out anything that may require extra caution: moving hazards such as other vehicles or pedestrians, or roadworks signs and traffic cones.
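The comparison step can be pictured with a toy sketch: the car measures the offsets of known landmarks from its own position, and each measurement, combined with that landmark's absolute position in the map, implies a position for the car. The landmark names and coordinates below are invented for illustration and bear no relation to any real system, which would fuse many noisy sensor readings rather than average two clean ones.

```python
# Toy localisation sketch: estimate the car's position by comparing
# landmark offsets measured by the sensors with the absolute landmark
# positions stored in the map. All names and numbers are illustrative.

def localise(observations, map_landmarks):
    """Each observation pairs a landmark id with its (dx, dy) offset
    from the car. The car's position is the average of the positions
    implied by each landmark: map position minus observed offset."""
    xs, ys = [], []
    for landmark_id, (dx, dy) in observations:
        mx, my = map_landmarks[landmark_id]
        xs.append(mx - dx)
        ys.append(my - dy)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A kerb corner and a signpost known to the map (absolute coordinates).
map_landmarks = {"kerb_corner": (105.0, 42.0), "signpost": (98.0, 47.0)}

# The car's sensors measure each landmark's offset from the vehicle.
observations = [("kerb_corner", (5.0, 2.0)), ("signpost", (-2.0, 7.0))]

print(localise(observations, map_landmarks))  # → (100.0, 40.0)
```

Averaging over many such landmarks is what pushes the position estimate down to centimetre-level accuracy.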
Safety dictates that a driverless car cannot be deployed in an area until it is comprehensively mapped. Gathering the map data is a task that companies like Google are familiar with, having already built up Maps and Streetview products. However, significant additional detail is required by driverless cars, such as the location of kerbs and pavements, line markings and signposts. While it’s not technically difficult, this is a key part of a driverless car programme, and it is a resource-intensive element that needs human input.
Companies pioneering driverless car projects are investing heavily in their mapping divisions. In what may be a sign of things to come, Apple recently announced a new development office in India focused on its maps, creating 4,000 engineering roles. Technology companies like Google and Apple are already familiar with gathering and handling map data, but traditional motor manufacturers also need these skills if they are to compete successfully in a driverless car market. To this end, BMW, Daimler and Audi jointly paid $3 billion for Nokia’s mapping division last year.
Using the map data and its sensor input, the car’s algorithms make driving decisions every tenth of a second. This decision-making comes from a software model that has iteratively learnt its behaviour from previously supplied “training” data. Known as machine learning, this approach lets the software interpret new data without being explicitly programmed to handle it. The model can then adapt its behaviour to make reliable, repeatable decisions. For example, the first time a car comes across a road with an overhanging tree, perhaps something that has grown since the road was mapped, it will stop or slow down. Once it has assimilated new information about the hazard and how to react, it will nudge around it next time, both there and at other similarly overgrown locations.
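The learn-from-examples idea can be sketched with a deliberately tiny model: a nearest-neighbour rule that labels a new situation by finding the most similar training example. The features (obstacle height, distance from the lane) and labels are made up for illustration; real driving models use far richer sensor features and far larger learnt models.

```python
# Minimal sketch of "learn from training data, generalise to new input"
# using a 1-nearest-neighbour rule. Features and labels are invented.

def nearest_neighbour(training, query):
    """Return the label of the training example closest to the query.
    Each training item is (feature_vector, label)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: dist(item[0], query))[1]

# Hypothetical features: (obstacle height in m, distance from lane centre in m).
training = [
    ((0.2, 3.0), "proceed"),    # low object, well clear of the lane
    ((1.5, 0.5), "slow_down"),  # tall object close to the lane
    ((2.0, 0.2), "slow_down"),
    ((0.1, 4.0), "proceed"),
]

# An overhanging branch the car has never seen: tall-ish, near the lane.
print(nearest_neighbour(training, (1.4, 0.6)))  # → slow_down
```

The point is that nothing in the code mentions branches or trees: the cautious behaviour falls out of the training examples, not an explicit rule.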
What one car learns through training is passed back to the software used by the entire fleet of cars, so the more cars being trained in different conditions, the faster their knowledge and behaviour can adapt. To this end, Google currently clocks up between 10,000 and 15,000 miles of autonomous driving per week (each car has a test driver sitting on board, but not touching the controls) on public roads at four locations in the US.
Google started working on driverless cars about eight years ago and has notched up over 1.5 million miles of autonomous driving. While this is a small fraction of the miles the combined US public has driven over the same period, it is notable that the number of accidents involving driverless cars is tiny and, so far, any accidents that have occurred have been minor. Over 30,000 people die in road accidents in the US each year, so any attempt to train cars to be better drivers than people should be very welcome, regardless of whether they ever achieve full autonomy.