Will we ever trust a self-driving car to take our kids to school?

How much safer than a human driver will they have to be for us to trust them?

Enrique Llanes
5 min read · Jan 24, 2022


Autopilot on a Tesla car
Photo by Roberto Nickson on Unsplash

In the last few years, self-driving technology has taken a leap forward. From Google's self-driving car project in the 2000s to today's approaches, a lot of interesting things have happened.

But even today, some people doubt that we will ever solve autonomy well enough for it to be fully reliable and safe. It's not that the companies involved aren't making progress, but every time we seem to get a step closer, we face new problems and challenges.

Today, our main issues are still technical. But solving those is only a matter of time. Once they are solved, we will face moral challenges that will be as difficult as the technical ones and will involve our society as a whole.

First, there were sensors

When this all started, it was about sensors. The more data the system could gather from the environment, the more information would be available for making decisions, and those decisions had to be made in real time with the computing power available.

Cars would have radar, lidar, cameras, and ultrasound sensors. Together, they would build a complex, data-rich picture of the environment to help the system make the best decision.
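
To make that idea concrete, here is a minimal sketch (not any company's actual code) of one way readings from several sensors could be fused: each sensor reports a distance to the same obstacle with its own uncertainty, and an inverse-variance weighted average combines them into a single estimate.

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of distance
# estimates. Illustrative only; real systems fuse full object tracks
# and many attributes, not a single number.

def fuse(readings):
    """readings: list of (distance_m, std_dev_m), one tuple per sensor."""
    weights = [1.0 / (std ** 2) for _, std in readings]
    total = sum(weights)
    return sum(w * d for w, (d, _) in zip(weights, readings)) / total

# Hypothetical readings for the same obstacle:
readings = [
    (42.3, 0.5),   # radar: good range accuracy
    (41.9, 0.2),   # lidar: very precise
    (43.8, 1.5),   # camera: depth from vision is noisier
]
print(f"fused distance: {fuse(readings):.2f} m")
```

The more precise sensor dominates the result, which is exactly why adding sensors was seen as the path to better decisions.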

Cars would drive in a choppy way, and their behavior would be corrected in real time by programmers who changed the code to handle every new situation that came up during testing.

Thanks to that research and to faster computers, we have today’s services like Waymo. These cars drive in geofenced areas that have been mapped to the exact detail and hardcoded into the system. Their full sensor system can compare this map to reality several times per second and preprogrammed decisions will take the car from point A to point B.
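
As a rough illustration of that map-versus-reality comparison (a drastic simplification, with made-up landmark coordinates), the car can estimate how far its believed position has drifted by comparing where pre-mapped landmarks should appear with where its sensors actually see them:

```python
# Toy localization sketch: compare pre-mapped landmark positions (from
# the detailed map) with live observations and average the offsets to
# correct the car's pose. Real systems solve this as a full
# pose-optimization problem many times per second.

premapped = {"sign_17": (10.0, 4.0), "pole_03": (18.5, -2.0)}   # from the map
observed  = {"sign_17": (10.4, 4.1), "pole_03": (18.9, -1.8)}   # hypothetical

dx = sum(observed[k][0] - premapped[k][0] for k in premapped) / len(premapped)
dy = sum(observed[k][1] - premapped[k][1] for k in premapped) / len(premapped)
print(f"estimated pose drift: ({dx:.2f}, {dy:.2f}) m")
```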

Then came Elon

After hundreds of thousands of miles driven, this approach doesn't seem scalable: it might be enough for ride-hailing services in selected locations, but not for a general, safe driving system in an all-purpose car for every person in the world.

Hard-coding different situations and processing complex sensor information make the system difficult to scale and expensive to build.

Although Tesla had already been working on self-driving technology, and its Autopilot feature has been built into its vehicles since 2017, last year we learned it had taken a completely new approach:

  1. Instead of hard-coded scenarios, the system would use a trained neural network to build a model of the “world”.
  2. Instead of several sensors, the system would rely on cameras only.
  3. Instead of a few dozen test vehicles gathering data, the information would come from a growing fleet of up to 2 million cars operating in the most important markets.

It is impossible to hard-code every situation, so a neural network trained on millions of miles of driving is going to be more reliable in the long run. And it will be game over once the full process is automated and the network can retrain itself on ever-growing data, producing a new model that handles new situations every few days.
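
A sketch of what that automated loop could look like is below. Every function here is a hypothetical stub (Tesla has not published its pipeline); the point is the cycle the paragraph describes: collect clips from the fleet, label them, retrain, deploy, repeat.

```python
# Hypothetical sketch of an automated fleet-learning loop.
# All functions are illustrative stubs, not a real pipeline.

def collect_fleet_clips():
    # Fleet cars upload clips where the current model was uncertain
    # or the driver had to intervene.
    return ["clip_001", "clip_002"]

def label(clips):
    # Auto-labeling plus human review produces training targets.
    return [(c, f"labels_for_{c}") for c in clips]

def retrain(model, dataset):
    # Fine-tune the driving network on the newly labeled data.
    return model + 1  # stand-in for "a new model version"

def deploy(model):
    print(f"shipping model v{model} to the fleet")

model = 0
for _ in range(3):  # in reality, a continuous cycle every few days
    dataset = label(collect_fleet_clips())
    model = retrain(model, dataset)
    deploy(model)
```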

Tesla’s approach to sensors is also different from that of all the other companies involved in autonomy (with the possible exception of Comma.ai). Most competitors think vision-only systems are not enough to build a fully reliable system, as there are situations where such a system lacks the precision or information needed to make the right decisions.

Tesla thinks that the world is designed to be navigated with a pair of eyes (cameras), and that sensor fusion leads to misinformation more often than vision-only systems do. Tesla is so confident in this approach that it removed the radar from its cars in 2021.

And last, Tesla has gone all-in on data gathering. For years, its cars have been built with cameras and communication systems to upload data to Tesla's servers. The company likely has millions of miles driven and millions of hours of footage that thousands of people in Tesla's offices are labeling to train the network. At the same time, all this data feeds a model of the world that is used not only to train the network but also to build a simulator where new corner cases can be programmed and fed into the system as if they were real data.

The outcome of this research is the current FSD beta program. At first, it was released only to selected drivers for testing, but since the fourth quarter of 2021, it has been opened to a larger audience of FSD purchasers in the US.

FSD beta can drive not only on highways but also in cities, handling traffic, roundabouts, and difficult turns. The goal, besides safety, is a driving system smooth enough for people to feel confident while the car drives itself to its destination.

At the current rate of progress, it is only a matter of time before the system becomes the perfect driver's assistant. But going from a driver's assistant to a fully autonomous system is the most difficult part, and one that will take years, if it can be accomplished at all.

In the future, the battle will be moral and legal, not technical

How much safer does an autonomous system have to be than a human driver for it to be reliable enough for us to trust it? At the end of 2021, Tesla published data showing that its cars with Autopilot engaged are almost 10 times less likely to be involved in an accident than the average American driver. But if we asked people on the street, I'm sure none of them would agree to take the wheel and pedals out of their cars.
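
As a back-of-the-envelope check on that claim: safety reports of this kind state miles driven per accident, so the "almost 10 times" figure is simply the ratio of the two rates. The numbers below are illustrative placeholders of roughly the right order of magnitude, not exact published figures.

```python
# Rough safety-ratio arithmetic with illustrative numbers
# (placeholders, not Tesla's exact published figures).
miles_per_accident_autopilot = 4_300_000  # millions of miles per accident
miles_per_accident_us_avg = 480_000       # NHTSA-style national average

factor = miles_per_accident_autopilot / miles_per_accident_us_avg
print(f"Autopilot-engaged cars: ~{factor:.1f}x more miles per accident")
```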

In the coming years, autonomous driving will seamlessly make its way into our cars, but in the form of a driving assistant. From that step to a fully automated driving system, there is still a legal and moral battle to fight.

Regulators will have a lot of work this decade approving these new systems before drivers are allowed to take their eyes off the road. A new legal framework will also have to be established to assign liability in case of an accident: once the car takes full control and no longer needs supervision, the driver cannot be held liable for the injuries and losses when a crash happens.

Time will tell, but this decade will surely change the way we drive and deal with our cars.

