AI systems have a range of applications in the automotive market, but they are not without their challenges.
Artificial Intelligence (AI) applications have become increasingly prominent within the automotive sector, from in-cabin monitoring systems to lane and object detection while driving.
Although many new vehicles boast some form of impressive on-board AI technology, developing it is far from straightforward. Machine learning algorithms are complex and specialised, requiring engineers to frequently overcome challenges and resolve errors.
Assisting the industry on its AI journey is Voxel51, a team of machine learning and computer vision experts helping to address the challenges posed by AI projects. ‘FiftyOne’ from Voxel51 enables users to build production-ready visual AI applications easily, efficiently, and at scale.
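For readers unfamiliar with the tool, here is a minimal sketch of what getting started with FiftyOne can look like in Python. It relies only on the publicly documented `fiftyone` package and its bundled ‘quickstart’ sample dataset, and is illustrative rather than anything described in the interview:

```python
import fiftyone as fo
import fiftyone.zoo as foz

# Load a small labelled image dataset from the FiftyOne Dataset Zoo
dataset = foz.load_zoo_dataset("quickstart")

# Launch the FiftyOne App to browse samples, labels, and predictions
session = fo.launch_app(dataset)
```

From there, teams typically swap in their own datasets, model predictions and evaluation workflows.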
We spoke with Brian Moore, CEO, and Jason Corso, Chief Science Officer, of Voxel51, to learn more about the company and the benefits and challenges of automotive AI solutions.
Just Auto (JA): Can you please provide some background on the company Voxel51?
Brian Moore (BM): At Voxel51, we work at the cutting edge of visual AI, building technology that accelerates AI work across almost every vertical, from medical imaging to agriculture to self-driving cars.
For context, we spun out from research at the University of Michigan, where Jason Corso is on the faculty and where I did my PhD research. Our software platform empowers visual AI builders to bring their innovations to life. Specifically, we help organisations develop sophisticated AI systems that can effectively interpret and make sense of visual data inputs such as images, videos, and other related modalities.
You can think of us as a development platform for software teams that work with visual data. One of the unique things about being based in Michigan is our proximity to Detroit. We have a long history of working with automakers, understanding the challenges they face and the solutions they have been building over the past decade.
What work have you done with the automotive sector so far?
BM: We work with OEMs in Detroit in the US, Germany, Japan and other locations, partnering closely with their machine learning, data architecture and AI teams to help power the exterior-facing systems they’re building for vehicles. This includes lane keeping and adaptive cruise control, to name a couple of applications, as well as in-cabin systems.
The thing I would say about automakers in 2024 is that they’re increasingly becoming software companies, with thousands of software engineers and data scientists on staff. They are building quite sophisticated systems to develop these new technologies, which they see as critical to their company strategies. We have the pleasure of supporting these teams in building out their visual AI infrastructure, which they have brought in-house rather than outsourcing.
Jason Corso (JC): At a higher level, we examine the teaming between the driver and the automobile. It’s important to think about what humans and AI are each good at. Humans are good at adapting to new and changing situations that require reasoning, both dynamic reasoning and ethical judgement. On the other hand, humans are bad at monotonous, repetitive everyday tasks, whereas AI is really good at doing the same thing over and over again.
What does that mean? For driving, AI has the potential to augment the human’s ability to maintain eyes on the road, around the vehicle and in the blind spots, when you may not be able to because you’re tired, distracted, or otherwise occupied. This will increase the overall safety of driving and of road systems, and that’s in a scenario with AI and humans working in sync. I think that’s going to come before we see Level 4 or Level 5 autonomy. We are already seeing a lot of amazing enhancements from this augmentation of human driving with AI.
Which ADAS features are imminent, versus longer term projects for companies?
BM: We’re seeing technologies rolled out that can help drivers identify opportune moments to change lanes or overtake other vehicles safely, in a way that doesn’t distract them from the road. Lane keeping and adaptive cruise control are features already on the market, but they are constantly being evolved and improved, leveraging not just sensor-based systems but also other modalities and visual data to improve their power, reduce costs and make them available on more types of vehicles.
In-cabin solutions are a key focus of the teams we support. Think cameras mounted in strategic positions that can identify driver drowsiness and distraction (i.e. when the driver isn’t watching the road), then issue alerts or interventions as necessary. This kind of innovation is already on the market and will continue to get more sophisticated as AI-powered systems become more capable and in tune with driver attention and awareness.
There are also imminent features that will automate tasks that are hard for human drivers to execute, such as backing a truck into a loading dock. Another is parking assistance, where the car can identify whether a parking spot is feasible and park in it automatically. We could also see vehicles that warn of hazards, such as a passing cyclist, as you exit the car, to prevent collisions.
Some other visual AI features we could see in the next few years are those that help reduce the risk posed by inexperienced or poor drivers, made possible through driver recognition. Some examples that could be offered as options for consumers:
- Intervening more aggressively when people with slower reaction times (e.g. older drivers) take the wheel.
- Preventing teenagers from accessing sport mode or enhanced acceleration features, or from driving more than 5 mph over the speed limit. Full control of the car could also be granted only after 100 hours of experience under controlled conditions.
- Preventing any driver other than the owner from driving the car away for the rest of the day, to help prevent theft.
There are also ADAS features that adjust to real-time conditions, which we would expect to be rolled out over the longer term:
- Automatically increasing the following distance from the car ahead in weather conditions such as fog, snow, and rain.
- Adjusting the sensitivity of driver warnings or safety features, such as automated braking, according to road conditions.
- Providing additional assistance in challenging lighting conditions, such as glare or low light.
What are some of the challenges OEMs face when developing these projects?
JC: I think the ultimate challenge in many AI and visual AI projects is the long-tail problem. It’s pretty easy to get 80% performance on a typical problem with machine learning or AI these days, but that decent performance usually covers only the typical scenarios that drivers are used to. When you think about deploying AI in the real world, you can’t deploy a pedestrian avoidance system that works four times out of five. That would be a problem. The difficulty comes from the visual and behavioural complexity of operating in the everyday world.
It’s very difficult to build systems that have seen enough examples of all the variations of situations that autonomous systems will be expected to operate in. It is even harder to build a general intelligence capable of reasoning about these myriad situations.
Although accidents happen, humans are rather good at adaptive, on-the-fly responses and reflexes that tend to align with what we’ve learned as best practice. While the automotive industry aims to get there, today’s systems can’t quite capture those capabilities. Even a student can download open-source code and data and quickly train a model that gets decent performance. But reaching production quality that handles the high variability of the real world requires expertise, time, lots of money, lots of testing, and lots of patience.
That’s ultimately the game that I think many of these companies are playing right now: trying to figure out how to cover as many of those hardest cases as possible while ensuring confidence in safety.
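Tooling like FiftyOne is aimed squarely at this long-tail hunt. As a rough illustration (not a workflow Corso describes), the publicly documented `fiftyone.brain` module can score how visually unusual each sample in a dataset is, so engineers can review the rarest cases, the likeliest long-tail failures, first:

```python
import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.brain as fob

# Any image dataset will do; the zoo's "quickstart" set stands in here
dataset = foz.load_zoo_dataset("quickstart")

# Assign each sample a [0, 1] uniqueness score relative to the rest of the data
fob.compute_uniqueness(dataset)

# Review the most unusual samples first: candidates for long-tail edge cases
rare_view = dataset.sort_by("uniqueness", reverse=True).limit(25)
session = fo.launch_app(rare_view)
```

Sorting by uniqueness is one simple proxy for rarity; production teams would combine it with metadata queries, model-failure analysis and targeted data collection.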
What do you see the future holding for the use of AI in the automotive industry? Not just in terms of safety, but more broadly?
BM: As I mentioned earlier, modern automakers are really software companies, and their value to the consumer is measured by safety and reliability. All the rest of it comes from the software and AI-enabled features that are now becoming possible in vehicles.
I’m personally excited that safety is one of the first areas where we’re seeing some real progress in terms of what these technologies can do – whether it’s in-cabin awareness or keeping vehicles safe and reducing accidents. That’s one of the first areas where we’re seeing real returns from AI investment.
Automakers are great at thinking long term. The lifespan of a vehicle in the real world is 20-plus years, so there will be an extended rollout period. But these automakers are making the right investments today. It starts with gathering the right data to address these challenges and get us to 99.999% reliability: very sophisticated data collection systems that help automakers gather enough data to understand all of the different situations, edge cases, anomalies, and so forth. We’re well into that journey now, and it’s exciting to see the automakers make that transition and be positioned for success.
Is there anything else that either of you want to add?
BM: Although it’s easy to focus on machines and algorithms taking complete control when talking about vehicles, humans play a key role in the entire lifecycle of the next generation of automotive technology. They are essential to making sure that the data being fed into these AI systems is accurate and free of bias. It’s also essential that those working on autonomous vehicles consider whether the system meaningfully improves both the quality of their product and the driver experience. It’s not something that can be fully automated right away. We’ll all be living with these cars in the future, so it’s important that the human element is part of their development from beginning to end.