Consumer trust in autonomous vehicles appears to be weakening. Last autumn, 37 percent of those surveyed said they’d be comfortable riding in an autonomous vehicle, down from 44 percent in 2021.

This has left OEMs wondering what they can do to improve consumer trust, and to show that driverless vehicles are safe and reliable.

Tactile Mobility offers a high-resolution tactile sensing and data analytics platform for autonomous vehicles and smart cities, using existing vehicle sensors and AI technology to provide a safer driving experience.

We spoke to Boaz Mizrachi, CTO and founder of Tactile Mobility, to learn more about what the company’s software can achieve, and to touch on the future of autonomous driving.

Boaz Mizrachi

Just Auto (JA): Could you provide some background on the company?

Boaz Mizrachi (BM): The company was established around thirteen years ago. We are a software company that provides software to vehicles – specifically, it is embedded in the engine control unit (ECU). A complementary part of the software runs in the Cloud and facilitates communication.

Our target is to provide a tactile sense to a vehicle, meaning that rather than seeing or hearing something, we would like to ‘feel’ something. This feeling is encapsulated, or represented, within what we call a virtual sensor. Those insights are provided as a virtual sensor to the other ECUs in the vehicle, which can use them, for example, to precondition the chassis.

Those insights are also transmitted to the Cloud, to the other part of the software, where we collect them. We crowd-source and share information from all around the world. This information can then be returned to vehicles on the road for prediction, or be monetised.

What kind of information does the sensor give vehicles?

The virtual sensor that we generate has to do with tactile sensing: feeling the surface, feeling the vehicle and the interaction between the two. For example, we start with grip estimation, or friction estimation. What is the current grip of the vehicle given the current condition of my tyres, suspension and the road? We take all those factors into consideration, and the output is the currently available grip. What is the maximum magnitude of acceleration, braking or steering that I can apply at any point in time without losing grip with the surface?
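For illustration only (this is a toy sketch, not Tactile Mobility's algorithm), the underlying idea can be shown by comparing the acceleration the vehicle is currently demanding with an assumed estimate of the peak friction available:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def grip_margin(accel_long, accel_lat, mu_max_estimate):
    """Fraction of the available grip still unused (0.0 means at the limit).

    accel_long, accel_lat: longitudinal/lateral acceleration from the IMU (m/s^2).
    mu_max_estimate: assumed peak friction coefficient for the current
    tyre/road combination; in a real system this would itself be the output
    of a tyre-road model rather than a hand-picked number.
    """
    mu_used = math.hypot(accel_long, accel_lat) / G
    return max(0.0, 1.0 - mu_used / mu_max_estimate)

# Braking at 3 m/s^2 while cornering at 2 m/s^2 on a surface assumed to offer
# mu_max = 0.5 (roughly wet asphalt) leaves about 26% of the grip in reserve.
print(grip_margin(3.0, 2.0, 0.5))
```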

The second important area we cover is vehicle weight estimation. We are assessing the load on the vehicle and the load on the tyres themselves. Vehicle weight is very useful for preconditioning the chassis and for keeping a safe distance from the car in front of you; if the vehicle is heavier, the braking distance will be longer.
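As a rough sketch of the principle (again, not the company's method), Newton's second law lets you back out mass from drive force and measured acceleration, averaged over many samples to suppress noise; the assumption of a flat road with negligible rolling resistance and drag is mine:

```python
def estimate_mass(samples, min_accel=0.3):
    """Rough vehicle-mass estimate from (drive_force_N, acceleration_mps2) pairs.

    Assumes a flat road and ignores rolling resistance, drag and gear losses,
    so it only illustrates the F = m * a idea. Samples with very small
    acceleration are skipped because the ratio becomes numerically unstable.
    """
    estimates = [f / a for f, a in samples if abs(a) > min_accel]
    if not estimates:
        raise ValueError("no usable samples")
    return sum(estimates) / len(estimates)

# Hypothetical samples logged during gentle acceleration:
samples = [(2400.0, 1.5), (3150.0, 2.0), (1700.0, 1.1)]
print(round(estimate_mass(samples)))  # roughly 1,573 kg in this toy example
```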

The other services that we provide have to do with vehicle health. We are talking about tyre health, or tyre parameters. There are several parameters that are very important for tyres, and we are assessing them while driving. One of them is the effective rolling radius, which we estimate very precisely, to below one millimetre.
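The effective rolling radius is simply the ratio of vehicle speed to wheel angular speed; here is a minimal sketch of that ratio (illustrative only; a production estimator averages over many samples to reach the sub-millimetre precision mentioned above):

```python
def effective_rolling_radius(vehicle_speed_mps, wheel_speed_radps):
    """Effective rolling radius in metres: vehicle speed / wheel angular speed."""
    if wheel_speed_radps <= 0:
        raise ValueError("wheel must be rotating forwards")
    return vehicle_speed_mps / wheel_speed_radps

# Example: 25 m/s (90 km/h) at about 80.6 rad/s gives roughly 0.310 m.
print(round(effective_rolling_radius(25.0, 80.6), 4))
```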

The other one is the tyre stiffness. How stiff is my tyre? That’s very important for assessing the dynamic performance of the vehicle. Then of course, there are tyre tread depths, which indicate when tyres need replacing. We are used to having a tyre pressure monitor, but we don’t have indications saying, ‘replace the tyres, inflation will not help you’. So that’s very important for preconditioning the car, as well as informing the driver when to replace the tyres.

On the Cloud side, we generate maps, tactile maps, of all that information; it is a grid of estimates for each road segment around the world where we have our drivers driving. If you drive with lousy, worn-out tyres on a worn-out surface, a small amount of rain could mean you fly off the road. We need to identify those situations and map them on the Cloud side so the information can be used for other purposes.
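A hypothetical sketch of that Cloud-side aggregation (the segment identifiers and the simple averaging here are invented for illustration):

```python
from collections import defaultdict

def build_grip_map(reports):
    """Aggregate crowd-sourced grip reports into a per-segment average.

    reports: iterable of (segment_id, grip_estimate) pairs, where segment_id
    is a hypothetical road-segment key. A real system would also weight by
    recency, weather and the reporting vehicle's tyre condition.
    """
    totals = defaultdict(lambda: [0.0, 0])
    for segment_id, grip in reports:
        totals[segment_id][0] += grip
        totals[segment_id][1] += 1
    return {seg: total / count for seg, (total, count) in totals.items()}

reports = [("A12-km3", 0.78), ("A12-km3", 0.71), ("B7-km14", 0.42)]
print(build_grip_map(reports))  # {'A12-km3': 0.745, 'B7-km14': 0.42}
```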

How important is this data for the future of autonomous driving?

The debate comes down to this: will we have vehicles that are safe enough to drive us while we are asleep? In order to do that, we need to replace the driver. Replacing the driver means replacing the senses of the driver, too: the perception, decision-making capability and actuation that all come with a human driver.


If you think of yourself as an experienced driver, what are the senses that allow you to drive safely and efficiently on the road? Your eyes provide maybe 90% of the information input. However, part of the information is something you feel through your body: you feel the road, how stable your vehicle is on it, and so on.

Those insights give the vehicle’s perception system the correct picture. We cannot see everything; we cannot see black ice. Even if you see a bump in front of you, you’re not sure how your car will take it: what is the maximum safe speed you can go over the bump without losing grip or damaging something? Your eyes can be misleading, and the answer also depends on your chassis. You need the tactile perception.

How do OEMs come to work with the company?

It can be a few months of proof of concept, where we demonstrate our abilities on their vehicle to show what our software can do that others cannot. We then get talking with the development teams that are responsible for functions like motion control, traction control, etc. When they get those insights, they immediately know what to do with them: how to use them to enable automatic decisions from the vehicle, for safety but for performance as well.

We normally do a winter campaign and a summer campaign. If you’re in Europe, for example, you go to Sweden and drive on the frozen lakes to check the extreme conditions and how your system works.

Once they are satisfied with that, you then go on to test with some fleets: you deploy your software on an aftermarket device in some of the OEM’s test fleets. Once the OEM is satisfied, we go through the iteration process. We provide our code, they provide their code, and together we integrate and compile a single target software for the ECU.

What are some of the challenges you face working with software-only technology?

The technology is challenging. What we do is combine signals that come from existing hardware sensors in the vehicle. Normally you have an ABS system that provides you with wheel speeds and acceleration; we collect those signals through the interfaces of the ECU we are integrated into, and we process them. On top of that, we have physical modelling of the vehicle and chassis components. To solve those equations, we use machine learning. Because these equations are complex, with a lot of coefficients, machine learning allows us to find relatively accurate solutions.
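One way to picture “solving physical equations with learned coefficients” is fitting the unknown coefficients of a simple longitudinal model to logged signals; this is a generic example rather than the company’s actual models:

```python
import numpy as np

# Toy longitudinal model: F_drive = m*a + c_rr*m*g + c_d*v^2, where the
# rolling-resistance coefficient c_rr and drag coefficient c_d are unknown.
# They are recovered here by least squares from synthetic "logged" data;
# the production models and solvers are considerably more elaborate.

def fit_resistance_coeffs(force, accel, speed, mass, g=9.81):
    residual = force - mass * accel                  # force the resistances must explain
    design = np.column_stack([np.full_like(speed, mass * g), speed ** 2])
    (c_rr, c_d), *_ = np.linalg.lstsq(design, residual, rcond=None)
    return c_rr, c_d

rng = np.random.default_rng(0)
speed = rng.uniform(5, 35, 200)                      # m/s
accel = rng.uniform(-1.0, 1.0, 200)                  # m/s^2
mass = 1500.0                                        # kg, assumed known here
force = mass * accel + 0.012 * mass * 9.81 + 0.35 * speed ** 2 + rng.normal(0, 5, 200)
print(fit_resistance_coeffs(force, accel, speed, mass))  # approx (0.012, 0.35)
```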

You need to verify that your estimation is correct. There are a lot of validators running on top, and if everything goes fine, we output this estimation close to real time onto the CAN bus as a new value: for example, the current vehicle weight.
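As an illustration of what publishing such a value onto the CAN bus might look like, here is a sketch using the python-can library on a development machine; the arbitration ID and scaling are invented for the example, and a production integration would follow the OEM’s signal definitions inside the ECU itself:

```python
import can  # python-can, used here only to illustrate broadcasting a virtual-sensor value

def publish_vehicle_mass(bus, mass_kg):
    """Encode an estimated vehicle mass and send it as a CAN frame.

    The arbitration ID (0x3A0) and the 0.5 kg/bit scaling are hypothetical;
    real deployments use the OEM's DBC definitions.
    """
    raw = int(mass_kg / 0.5) & 0xFFFF                 # 16-bit value, 0.5 kg per bit
    frame = can.Message(arbitration_id=0x3A0,
                        data=[raw >> 8, raw & 0xFF],
                        is_extended_id=False)
    bus.send(frame)

if __name__ == "__main__":
    with can.Bus(interface="virtual") as bus:         # in-memory bus for local testing
        publish_vehicle_mass(bus, 1573.0)
```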

In order to do that, you need to be able to run within a standard ECU in the industry. Those engine control units are very lean in terms of resources: memory, runtime, power, communication bandwidth, etc. So the sophisticated machine-learning software needs to be very compact in terms of the processing power and memory required to run it.
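A generic example of that kind of footprint reduction is quantising model coefficients to 16-bit fixed point; this is a common embedded technique, shown here only to illustrate the trade-off, not as a description of Tactile Mobility’s toolchain:

```python
import numpy as np

def quantize_q15(coeffs):
    """Quantise float coefficients in (-1, 1) to 16-bit Q15 fixed point.

    Halves the storage versus float32 and keeps the error below one
    least-significant bit (about 3e-5), at the cost of restricted range.
    """
    q = np.clip(np.round(np.asarray(coeffs, dtype=np.float64) * 32768), -32768, 32767)
    return q.astype(np.int16)

def dequantize_q15(q_coeffs):
    return q_coeffs.astype(np.float64) / 32768.0

coeffs = [0.012, -0.35, 0.5031]
q = quantize_q15(coeffs)
print(q, dequantize_q15(q))
```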

The other challenge is the normal one you have in machine learning: the diversity of the potential data. We all know that we need to train those AI models. To train them you need diversified data from all over the world, for different situations. The combinations are huge: combinations of tyre health, suspension health, different vehicles, different instances of vehicles, types of surfaces and types of weather conditions.

Those are the challenges that we are mitigating: how to collect this data for training and, once you have the data, how to annotate it and assess its meaning. Data collection, as well as annotation, within the limited resources of time, space, test vehicles, etc, are the challenges that we have mitigated over the years.

Do you think that software like this will increase confidence in autonomous driving?

In the standard automotive industry, they are reluctant to talk about safety at higher levels of autonomy, like level four or five. That’s a major challenge. Everybody calls it advanced driver assistance systems (ADAS), which are assisting you. If you don’t keep your eyes on the road or your hands on the wheel, you’re still responsible. That’s the main problem we have today with trust.


Back in the day, Zeppelins were proven to be the best way to transfer people from one place to another, but after that terrible accident [the Hindenburg disaster] no one wanted to take them anymore. As long as there is no major incident, and any incidents are minor in nature, I think people will get used to autonomous driving.

The problem today is that even if I prove to you that my vehicle drives ten times more safely than you as a human being, you maybe will not accept it. However, you will not accept the idea of a software bug resulting in a crash. It’s not about comparing the percentage risk of potential damage between a human and a machine; it’s about something more psychological.