The goal of autonomous vehicles is to reduce - and ultimately eliminate - the need for humans to be actively involved in driving on our roads. In the US, human error is a factor in 94% of vehicle crashes. 94%! Road crashes kill around 40,000 Americans every year. If something were invented today that killed 40,000 Americans every year, you’d think that thing would be made illegal pretty quickly…
Aside from the safety benefits, autonomous vehicles - also known as self-driving cars - can reduce congestion by making better use of road space. They can cut fuel consumption by driving more efficiently. And in fully autonomous modes (see the levels below), drivers can use their commuting time to do something more productive than concentrating on driving, reclaiming, on average, around one hour per day - over 300 hours per year - per person.
Finally, today’s cars are not usable by many people with disabilities, so autonomous vehicles hold the promise of giving tens of thousands of people freedom of mobility.
As most people know, Artificial Intelligence (AI) is a critical part of making autonomous vehicles a reality. An autonomous vehicle is typically covered in sensors and cameras that feed data into a neural network (a type of AI). Making sense of that raw sensor data - working out what is around the car - is what’s known in technical terms as perception. Based on what it perceives, the system then sends signals to the car’s mechanical components, telling it to take actions such as turning, accelerating and braking.
Some examples of the types of sensors on autonomous vehicles include:

- Optical cameras, which capture images of the road just as the human eye does
- RADAR, which uses radio waves to measure the distance and speed of surrounding objects
- LIDAR, which uses pulses of laser light to build a 3D picture of the car’s surroundings
- GPS, which tells the car where it is in the world
- Ultrasonic sensors, which detect objects very close to the car (handy for parking)
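To make that perceive-then-act loop a little more concrete, here is a minimal sketch in Python. Everything in it (the PerceptionModel class, the obstacle format, the thresholds) is hypothetical and hugely simplified - real self-driving stacks are vastly more complex - but it shows the basic shape: sensor data goes in, a perception model interprets it, and control signals come out.

```python
from dataclasses import dataclass

@dataclass
class Controls:
    steering: float  # -1.0 (full left) to 1.0 (full right)
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0

class PerceptionModel:
    """Stand-in for the neural network that interprets raw sensor data."""

    def detect_obstacles(self, camera_frame, lidar_points):
        # A real model would run inference here; we return a fixed example:
        # one obstacle 12 meters ahead, slightly left of center.
        return [{"distance_m": 12.0, "bearing_deg": -5.0}]

def drive_step(model, camera_frame, lidar_points) -> Controls:
    """One iteration of the perceive -> decide -> act loop."""
    obstacles = model.detect_obstacles(camera_frame, lidar_points)  # perception
    nearest = min(obstacles, key=lambda o: o["distance_m"], default=None)
    if nearest and nearest["distance_m"] < 15.0:
        # Something close ahead: brake and steer gently away from it.
        steer = 0.1 if nearest["bearing_deg"] < 0 else -0.1
        return Controls(steering=steer, throttle=0.0, brake=0.3)
    return Controls(steering=0.0, throttle=0.4, brake=0.0)

print(drive_step(PerceptionModel(), camera_frame=None, lidar_points=None))
```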
Different autonomous vehicle companies have taken different approaches to using these sensors. Some, such as Tesla, have chosen to focus on optical cameras as their primary data feed, reducing the number of different inputs their systems have to handle. This lets them focus their AI teams on building computer vision and machine learning algorithms that analyze and make decisions based on the pixels in the camera images. Their cars react to what they see as they’re driving, just like a human.
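As an illustration of what camera-only perception involves, the sketch below runs a single camera frame through an off-the-shelf object detector from torchvision. To be clear, this is not Tesla’s system - their networks are proprietary and purpose-built - it just shows the general pattern of turning raw pixels into a list of detected objects.

```python
import torch
import torchvision

# An off-the-shelf, pretrained object detector. (Illustrative only: production
# self-driving stacks use purpose-built networks, not this general model.)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One camera frame as a tensor: 3 color channels, 480x640 pixels, values in [0, 1].
# In a real car this would come from a forward-facing camera, many times per second.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Keep only confident detections - each one is an object the car must reason about.
for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.8:
        print(f"object at {box.tolist()} (confidence {score:.2f})")
```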
Other companies, like GM’s Cruise, map out entire cities and run massive simulations - in effect, creating a virtual city like a computer game. Then, based on GPS tracking, the vehicles already know roughly what is around them. This allows these companies to train their AI models on both real-world and simulation data. The cars then use LIDAR, RADAR and cameras to perceive the world around them and make decisions while driving.
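Here’s a toy illustration of the “the car already roughly knows what’s around it” idea: a pre-built map of a city, queried by GPS position. The map data and function names are hypothetical, and real HD maps are far richer - the live sensors are still what confirm and refine this prior.

```python
from dataclasses import dataclass

@dataclass
class MapFeature:
    kind: str   # e.g. "traffic_light", "stop_sign"
    lat: float
    lon: float

# A tiny stand-in for a pre-built city map (the coordinates happen to be in Austin).
CITY_MAP = [
    MapFeature("traffic_light", 30.2672, -97.7431),
    MapFeature("stop_sign", 30.2675, -97.7438),
]

def features_near(lat: float, lon: float, radius_deg: float = 0.0005):
    """Return mapped features near the car's GPS position, so the vehicle
    already knows roughly what is around it before the sensors confirm it."""
    return [
        f for f in CITY_MAP
        if abs(f.lat - lat) < radius_deg and abs(f.lon - lon) < radius_deg
    ]

# The live sensors (LIDAR, RADAR, cameras) then verify and refine this prior.
print(features_near(30.2673, -97.7433))
```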
The industry categorizes the abilities of a self-driving car into six different levels. Those levels are summarized here:

- Level 0: No automation - the human driver does everything.
- Level 1: Driver assistance - the car can help with steering or speed (think adaptive cruise control).
- Level 2: Partial automation - the car can handle steering and speed together, but the driver must stay engaged at all times.
- Level 3: Conditional automation - the car drives itself in certain conditions, but the driver must be ready to take over.
- Level 4: High automation - the car drives itself within a defined area or set of conditions, with no human needed behind the wheel.
- Level 5: Full automation - the car can drive itself anywhere, in any conditions.
Self-driving vehicles have been heavily hyped over the past few years. Today’s expectations are more in line with reality, but there has also been a swing toward pessimism in the media, with some predicting we won’t see Level 4 or Level 5 vehicles on our roads for another decade or longer. Ford, for example, just wrote off $2.7bn of its investment in Argo AI, an autonomous vehicle company. That’s billion with a ‘b’!
But the reality is that we are seeing real autonomous vehicles on our roads today. Where I live in Austin, Texas, Cruise has expanded beyond San Francisco to launch a commercial autonomous taxi service (it has done the same in Phoenix, Arizona). So they do exist in the wild - I’ve seen them with my own eyes.
So what does any of this have to do with Ravin AI? There are two key connections. Firstly, we’re a computer vision company, so we use many of the same technologies that underpin the AI powering autonomous vehicles. But instead of putting cameras on the cars looking out, we install cameras that point towards the cars - or even let people use their mobile phones to scan a car. We then take these images and, using a very similar set of AI capabilities, produce a damage condition report that gives an instant health check on a car. This is critical for anyone renting, leasing or selling a car, or anyone submitting an insurance claim or managing a fleet.
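In very simplified terms, the flow looks something like the Python sketch below. The names (detect_damage, DamageFinding and so on) are hypothetical stand-ins, not Ravin’s actual code: images of the car go in, and a structured condition report comes out.

```python
from dataclasses import dataclass, field

@dataclass
class DamageFinding:
    panel: str        # e.g. "front bumper"
    damage_type: str  # e.g. "scratch", "dent"
    severity: str     # e.g. "minor", "moderate", "severe"

@dataclass
class ConditionReport:
    vehicle_id: str
    findings: list = field(default_factory=list)

def detect_damage(image):
    # Stand-in for the computer vision model; a real one would localize and
    # classify damage in the image. Here we return a fixed example finding.
    return [DamageFinding("front bumper", "scratch", "minor")]

def inspect(vehicle_id: str, images: list) -> ConditionReport:
    """Run each image through the damage detector and collect the results
    into a single condition report for the vehicle."""
    report = ConditionReport(vehicle_id)
    for image in images:
        report.findings.extend(detect_damage(image))
    return report

report = inspect("ABC-123", images=["front.jpg", "rear.jpg"])
print(len(report.findings), "finding(s) for", report.vehicle_id)
```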
Which leads to the second key connection between Ravin AI and autonomous vehicles. For companies that have invested so much money in creating the future of road transportation, the last thing they want is for their cars to have any damage on them. Imagine the level of confidence you’d have in a self-driving car that was covered in scratches and dents!
As discussed, autonomous vehicles have a huge number of sensors, but very few - if any - are looking for cosmetic damage or dirt on the vehicle itself. So the people in charge of these fleets currently need to hire staff to regularly inspect their vehicles and make sure they don’t have any significant physical damage. Given these companies have invested billions in automating the driving experience, does it really make sense to hire armies of people to check the cars for scratches and dents?
With Ravin’s AI-powered vehicle inspection system, fleet managers can install everyday off-the-shelf cameras at their fleet hubs. Then, whenever a vehicle leaves or returns to the hub, our AI scans it and flags any new damage that needs to be repaired. Fleet managers can therefore have full confidence in the physical condition of their fleet and monitor everything remotely.
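Conceptually, flagging new damage is a comparison between two scans - the one taken when the vehicle left the hub and the one taken when it returned. The sketch below shows the idea with hypothetical data; the real work, of course, is in the computer vision that produces those findings reliably in the first place.

```python
def flag_new_damage(outbound: set, inbound: set) -> set:
    """Compare the scan taken when the vehicle left the hub with the scan
    taken on its return, and flag anything that wasn't there before."""
    return inbound - outbound

# Hypothetical example: a door dent appeared while the vehicle was out.
# Each finding is recorded as a (panel, damage_type) pair.
left_hub = {("front bumper", "scratch")}
returned = {("front bumper", "scratch"), ("left door", "dent")}
print(flag_new_damage(left_hub, returned))  # {('left door', 'dent')}
```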