Simulation technology is increasingly used across the industry for a range of use cases, from testing autonomous vehicle functions to assessing passenger anxiety (see also: Leave your stress behind: Reducing driver anxiety with NewTerritory).

Companies such as UK software specialist rFpro are pushing the boundaries of what simulation can achieve in the automotive space. The company has recently launched new simulation technology that reduces the industry’s dependence on real-world testing for the development of autonomous vehicles (AVs) and advanced driver-assistance systems (ADAS).

This new ‘ray tracing rendering technology’ is described as the first to accurately simulate how a vehicle’s sensor system ‘sees’ the world.

We spoke to Matt Daley, operations director at rFpro, to learn more about this new technology and its benefits.

Matt Daley, Operations Director at rFpro

Just Auto (JA): Could you provide our readers with some background on the company?

Matt Daley (MD): rFpro has been around since 2007 and has served the biggest players in the automotive industry ever since. Eight of the top ten OEMs use rFpro for a whole range of driver-in-the-loop and driving simulation activities.

As a company we excel in flexibility and in integrating with a wide variety of customer applications. We offer a broad set of APIs and interfaces that allow customers to connect and develop their own vehicle models, or to use a wide range of off-the-shelf vehicle models.

What does the company provide the automotive industry?

Fundamentally, rFpro is about simulating the virtual world around virtual vehicles. In doing that, we really concentrate on modelling the real world: not just synthetically or automatically generated scripted worlds, but detailed models of real-world locations. That’s really important to make sure we capture the variety and challenge that come from real-world locations.

You’ve got the world outside, you’ve got the virtual vehicle connected into it, and our job is also to help scale that. As a company, we really concentrate on helping customers scale their testing. This is where the three main areas we cover as a company come in.

The first is the automotive industry: how are we helping our customers develop vehicle dynamics control strategies for cars? Then there is the autonomous side: how do we help sensor developers, or even perception system developers, look into these virtual worlds? The other part of our company is motorsport; that’s our heritage, especially mine. I came from both the McLaren and Ferrari Formula One simulators. Then, more recently, we’ve also been adding sensors into the loop: the whole idea of electronic eyes, rather than just human eyes, looking into a simulation.

How does the new ray tracing technology work and what does it allow OEMs to do?

I think it all comes back to the industry’s big challenge. The big challenge the industry has in autonomous driving is training data generation: they need huge volumes of it, and they need it all to have lots of variety. There’s no point in having everything in nice, clear, midday conditions with no traffic on the road. They need loads of variety, and not just that: they need lots of challenging scenarios and edge cases. They need the 1% or the 0.1% data, not the 99.9% data. So there’s a massive industry challenge in the time and money it takes to achieve that.

The industry has accepted that synthetic training data is the way you have to go, because it gives you that flexibility of generation. People put a sensor set on their cars, but then they decide to change a camera, change the positions of the radars, or even change the lens on a camera. Very small changes to a design suddenly mean their training data is pretty much obsolete; they have to start all over again.

All of these big challenges add up to why synthetic training data is essential. That data has to be high quality, and it has to be of engineering value, not just something that’s come out of a real-time rendering engine. As humans we can accept that: we can look at an image, accept we’re at a location, become immersed, and start to produce our human behaviours. But you can’t trick software, and you can’t trick perception code.

You must generate synthetic data that is as faithful to real data as possible. That’s why we’re using ray tracing: so we can take the step from human immersion up to electronic sensor immersion, and replicate, as far as we can, the accuracy of our virtual sensors.

Ray tracing allows you to add in all of those additional reflections. The big step comes from not just tracing single paths and single bounces of light, which is what we do in real time, but doing that multiple times. So light from the side of the road reflects off the metal barriers, comes back off the road, off the vehicle, and back to you. A traffic light in the scene with a red light shines off a wall, or off the road and the puddles, and comes back to you.

There are lots of complex bounces of light that happen in a scene. They all add together, and all of those little subtleties create the final sensor output that reaches you. That’s what the ray tracing ensures: a physically accurate simulation of how not just light but also LiDAR and radar signals, electromagnetic waves in general, bounce around the world around us.
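The multi-bounce idea Daley describes can be sketched in a few lines. The following is a deliberately simplified toy model, not rFpro’s engine: the surfaces, their emission and reflectivity values, and the fixed bounce path are all hypothetical. Each surface contributes its own emitted radiance plus an attenuated fraction of whatever light arrives from the next bounce, up to a bounce budget.

```python
# Toy multi-bounce light accumulation (hypothetical values, not rFpro's engine).
# Each surface is an (emission, reflectivity) pair along a fixed reflection path.

def trace(surfaces, depth, max_bounces=3):
    """Return the radiance reaching the sensor after following up to
    max_bounces reflections along a fixed chain of surfaces."""
    if depth >= len(surfaces) or depth >= max_bounces:
        return 0.0
    emission, reflectivity = surfaces[depth]
    # Direct light from this surface, plus attenuated light from deeper bounces.
    return emission + reflectivity * trace(surfaces, depth + 1, max_bounces)

# e.g. a red traffic light (emission 1.0) seen indirectly via a wet road
# (reflectivity 0.4) and a metal barrier (reflectivity 0.8), as in the
# scenario described above.
path = [(0.0, 0.4), (0.0, 0.8), (1.0, 0.0)]
sensor_radiance = trace(path, 0)  # 0.4 * 0.8 * 1.0 = 0.32
```

A single-bounce, real-time renderer would return zero here, because the light source is only visible to the sensor after two reflections; that is the difference the multi-bounce approach captures.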

What are some of the key benefits this technology provides for OEMs?

The advantage of simulation is that it allows you to do advanced development before any metal has even been cut. You can test designs and concepts that you might not even be able to make physically.

In a simulation, you can test your ideas early in the process, so you can home in on the areas where future budget or expenditure will give you the biggest payback. Simulation can do that very early, at the concept stage. Then, once you have your initial concepts and start to develop more detailed system designs, simulation again helps you flesh them out, solidify them, and identify which will give you the biggest return on your investment.

Think about sensor positioning on cars. There are infinite positions where you can put your camera around your windscreen, and infinite choices of angle or lens field of view: it could be wide, it could be narrow. Simulation allows you to test all of those early on and decide which is the most efficient in your overall system design. You can start right at the beginning, and then move on: once you’ve decided on your overall system layout, in the autonomous age you need to create the set of training data that will be fed into your machine learning perception systems. The way that training data is classically collected, on real roads, means you’ve had to drive around with massive test fleets and massive numbers of people generating petabytes of data. Then you need to farm it all out to be manually annotated, to extract the important edge cases and the important training data.
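As one illustration of the kind of early trade study Daley describes, a lens field-of-view sweep can be run numerically before any hardware exists. The figures below are hypothetical and use simple pinhole-camera geometry; they are not taken from rFpro.

```python
import math

def coverage_width(distance_m, fov_deg):
    """Horizontal width covered by a camera's field of view at a given
    distance, from basic pinhole geometry: w = 2 * d * tan(fov / 2)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Sweep a few candidate lens options (narrow to wide) at 50 m range.
for fov in (60, 90, 120):
    print(f"FOV {fov} deg: {coverage_width(50, fov):.1f} m wide at 50 m")
```

A wider lens covers more of the scene at the cost of angular resolution per pixel; sweeping parameters like this in simulation is exactly the sort of cheap early comparison the interview refers to.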

It’s hugely resource-intensive, time-intensive and cost-inefficient. Whereas, once the simulation is good enough, concentrating on simulation gives you a much more cost-efficient, time-efficient and focused way to develop your training data, one that helps you explore the safety-critical edge cases with a much faster turnaround.

What are the next steps for the technology?

What’s next is constant development. We still have a huge number of things we want to do with the ray tracer. We want to keep refining those critical edge case scenarios. Improving the weather modelling is a big step we’re going to tackle next, looking at how we create all of those critical environmental conditions.

Then the future comes as we integrate more types of sensors, expanding beyond visible light to include infrared, which gives you LiDAR, and radio waves, which give you radar. So we’ll be working with our partners over the next six months to start integrating their models of LiDARs and radars into this highly efficient ray tracing engine.

It’s not a product you just press print on and leave; it’s a constant development that will continue for years to come, refining it and adding extra layers of fidelity and accuracy.

What you’re going to hear about next is our HPC (high-performance computing) and cloud deployments. The ray tracer is quite computationally demanding because we’re pushing the engineering fidelity to its limits. Enabling customers to use their HPC clusters, multiple machines all dedicated to the job, and to burst out into the cloud, is really important. We’re excited to show off the HPC and cloud deployments now that we’ve announced the ray tracer.

Motion blur of fast-moving objects accurately simulated, as shown by these traffic cones at the roadside