Driverless cars have been “just around the corner” for more than a decade. We were supposed to take our hands off the wheel in 2020, according to car industry leaders Ford and Nissan – and yet here we are, still requiring driving licences.
Some progress has been made. Owners of Teslas can flip a switch to engage Autopilot mode, while Google’s Waymo is picking up taxi fares using cars without a human safety driver at the wheel; instead, engineers sit at the ready remotely to help the AI if it faces a situation it can’t figure out. Apple has plans to unveil its own driverless car in 2024.
Even so, car makers admit that progress hasn’t met their projections, while Waymo’s CEO John Krafcik has conceded that driverless tech would always have “some constraints”. The company said at the end of last year that it would offer entirely driverless rides in a select area of Phoenix, but these cars are remotely connected to engineers in case of challenges and some will still require safety drivers.
Perhaps, finally, the hype bubble has popped and reality has set in. “I think we’re starting to see a correction,” says Jack Stilgoe, associate professor at University College London and author of Who’s Driving Innovation? “The paradoxical thing is that the earlier a technology is, the more hype there is. The more work people actually do into the technology, the more they realise how hard it is to bring it into the real world.”
He adds: “Obviously, we shouldn’t listen only to what the companies say. They’re trying to sell stuff. But that isn’t to say that the technology won’t have enormous value in the years to come.”
Creating a self-driving vehicle while AI is still in its relatively early years is no easy task, and truly driverless technology likely remains years away. Cars can already navigate streets using cameras for eyes, supplemented by a mix of other sensors gathering additional data, but the challenge is ensuring the software can interpret the images and data those sensors return and make good decisions based on that information. For that, systems are trained in real-world scenarios – that is, they’re driven a lot – and also in simulations.
Driverless cars use three vision systems: radar, cameras and lidar. The latter stands for “light detection and ranging”, and works similarly to radar, bouncing pulses of light off objects to determine their size, shape and, if they’re moving, speed. The systems work best in combination, as each has its own weaknesses – lidar and cameras, for example, don’t work as well in heavy rain or fog.
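The ranging principle behind lidar is a simple time-of-flight calculation: distance is half the round-trip travel time of a light pulse multiplied by the speed of light, and comparing two pulses gives a relative speed. A minimal sketch of that arithmetic (the function names and figures are illustrative, not any vendor’s implementation):

```python
# Illustrative sketch of lidar time-of-flight ranging.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_pulse(round_trip_seconds: float) -> float:
    """Distance to an object, from a light pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

def closing_speed(t1: float, t2: float, interval: float) -> float:
    """Approach speed (m/s) from two pulses fired `interval` seconds apart."""
    return (distance_from_pulse(t1) - distance_from_pulse(t2)) / interval

# A pulse returning after roughly 667 nanoseconds puts an object
# about 100 metres away.
distance = distance_from_pulse(667e-9)
```

Real sensors fire millions of such pulses per second across a scanning field to build a 3D point cloud, but each individual range measurement reduces to this calculation.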
The AI’s interpretation can also be thrown off by anything from precipitation to graffiti. If you see a stop sign covered in spray paint, you understand that braking is still required; if snow covers the lanes, you slow down and follow the car ahead. To be safe and effective, AI systems need the same mental flexibility – but, to take just one example, Waymo’s cars have spent much of their time in the sunny states of Arizona and California. Following reports that the cars didn’t work well in heavy rain, Waymo sent a few down to Florida in 2019 for the wet season to help train them for driving in adverse conditions.
Another factor is road layouts. Trials are zooming ahead in the US, but the road networks in many American cities are simpler than in the UK, according to Camilla Fowler, head of automation at TRL. “The infrastructure we have actually adds a lot of complexity to the driving task,” she says.
How and why
If we can’t solve all problems through engineering, there are other possibilities. We can limit where the cars are used, cap their speeds, rework roads or pick and choose the elements of automation that do work. There’s more to driverless technology than waiting for engineers to work out the kinks of fully driverless cars: “The question is quite often posed in terms of ‘when we will see driverless cars?’,” says Stilgoe. “I think the real question is actually where and in what form.”
For example, some cars could be geofenced – limited in where they can travel, because the developers know that the system works well enough in a specific area. Or they could be limited to special lanes. “The technology could in some cases need dedicated infrastructure, whether that’s a dedicated lane or smart infrastructure like smart traffic lights,” Stilgoe adds. Researchers at Princeton University have developed radar reflectors to alert automated cars to the presence of pedestrians and cyclists on roads.
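At its simplest, geofencing is a point-in-region test: is the vehicle’s position inside the polygon where the system is approved to operate? A minimal sketch using the standard ray-casting algorithm (the coordinates and names here are made up for illustration, not drawn from any deployed system):

```python
# Illustrative geofence check: does a position fall inside an approved
# operating area? Uses the ray-casting (even-odd crossing) algorithm.

def inside_geofence(x: float, y: float, polygon: list) -> bool:
    """Return True if point (x, y) lies inside `polygon`,
    given as a list of (x, y) vertices in order."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle on each polygon edge crossed by a horizontal ray from (x, y).
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A made-up square service area, 10 units on a side.
service_area = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

Production systems use geodetic coordinates and far more detailed boundary maps, but the gating logic – refuse or hand back control outside the approved zone – rests on a test like this.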
Another step that's already being worked on is permitting an element of driving automation on motorways, but not on roads that are shared with pedestrians and cyclists; “automated lane keeping systems” are set to be approved in the UK this year.
This iterative approach to automated features has already been emerging for some years with computer-aided parking and braking. “I don’t believe people understand that most cars actually already have a level of autonomy,” says Bani Anvari, research leader of the Intelligent Mobility Group at University College London. “We are just gradually increasing this.”
To help understand and manage this process, in 2014 the automotive standards body SAE International produced a classification system detailing six levels of vehicular autonomy. Level zero represents full driver control at all times, while level five would be a car that can drive itself anywhere and everywhere without human intervention. In the short term we can expect to live with cars that are partially automated (level two) or have conditional automation (level three), which means the driver may be asked to retake the wheel. And that’s fine: while lane-changing assistance and keeping a car creeping forward during a traffic jam may not be the stuff of sci-fi films, such features can reduce driver fatigue.
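The six SAE levels can be summarised as a simple lookup table. The descriptions below paraphrase the taxonomy as discussed here; the SAE J3016 standard gives the exact definitions:

```python
# The SAE J3016 autonomy levels, paraphrased as a lookup table.
SAE_LEVELS = {
    0: "No automation: full driver control at all times",
    1: "Driver assistance: a single automated function, e.g. adaptive cruise",
    2: "Partial automation: steering and speed, driver must supervise",
    3: "Conditional automation: driver may be asked to retake the wheel",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: drives itself anywhere, no human intervention",
}

def needs_human_fallback(level: int) -> bool:
    """At level three or below, a human must be ready to take over."""
    return level <= 3
```

The practical dividing line is between levels three and four: below it, responsibility can be handed back to a person; above it, the system must cope on its own, at least within its operating domain.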
Some tech leaders believe that full autonomy isn’t far off. Back in 2015, Elon Musk declared that self-driving technology was "largely a solved problem", and has since claimed that software updates will make it possible for current Tesla cars to navigate roads wholly unaided. Indeed, beta testers already have access to a “full self-driving” mode – although in paperwork filed to the California Department of Motor Vehicles the company has admitted that this is actually level-two autonomy requiring human supervision.
TRL’s Fowler predicts that full automation will happen first in controlled circumstances where it’s actually useful, such as around campuses or down residential streets. “It depends on what society needs,” Fowler says. “What’s the impact going to be on people’s wellbeing, safety, network capacity? The benefits need to be realised in order for this tech to be realised, I think.”
Indeed, the point should be to solve specific problems, rather than developing new systems for their own sake. “I want governments to be clear on what they want to see from these technologies,” says Stilgoe. “If they reduce congestion, are sustainable, are safer than alternatives, if they benefit people who have previously lost out on mobility technologies – those things a city could be particularly keen on. Rather than just starting with technological possibilities, you start with what people need.”
Regulate the brains
The tech industry tends to lament regulation, but the concept of a driverless car demands a robust testing and approval process. After all, any “dumb” car currently on the road is required to undergo thorough testing before being allowed to be sold. Now that we’re cramming brains into vehicles, we need to test those brains.
Unfortunately, we’re only just figuring out how to assess algorithms and AI – and that’s before they’re put into machines moving at high speeds. So, Fowler points out, a key question is how we really know when driverless cars are safe. “What does the manufacturer need to do to demonstrate that all the different use cases and scenarios have been considered that the vehicle could come across within its operational design domain?” she asks. “How do you validate, how do you approve?”
From the industry’s point of view, the value of regulation and testing is not only to ensure safety, but to build trust. If these cars aren’t seen to be safe by the wider public, people will keep their own hands on the wheel. There have been a few driverless car accidents – notably two deaths in Teslas with Autopilot mode enabled and the death of Elaine Herzberg in a collision with an Uber car in Arizona – but so far such incidents haven’t set back research in this area (although Uber has now sold off its autonomous car research division).
What could prove a bigger hurdle is the hype itself, creating an unrealistic expectation that sets up the public for disappointment. “I find it worrying that people feel the technology can do more than it can,” says Fowler. “That misconception of what your vehicle is capable of, and not fully understanding the limitations, could be a potential issue.”
UCL’s Anvari agrees that the way forward is to set aside the hype and be honest about what’s going on. “If you have an autonomous minibus, if the people sitting within it don’t understand what this technology is observing and how it takes actions, they won’t gain trust,” she explains. “If they see that this technology can see what they see, and reacts the way they would react, it will help gain trust.”
“If the whole human side of things is not solved, the trust of users is not solved, you will not be successful in rolling out the technology.”