
Simulation technology is increasingly being utilised within the automotive space, saving time and money and allowing real-world scenarios to be repeated again and again in a virtual environment.
Advanced Micro Devices (AMD), a company specialising in high-performance and adaptive computing solutions, has recently adopted rFpro’s simulation platform to develop automated driving technologies.
The platform, named AV elevate, enables AMD to reduce its dependency on real-world data collection and testing, cutting development time and cost.
We spoke with Matt Daley, director, rFpro, to learn more about the use of simulation technology, and to highlight its importance within the automotive industry.

Just Auto (JA): Could you discuss AMD’s use of your company’s software and what the goals are?

Matt Daley (MD): AMD came to see us at one of our industry events, where we were showing our autonomous sensors, how they integrate into our very high-fidelity 3D worlds, and how we run all of this on a very high-performance simulation system. Their automotive leadership team was there and was impressed with what we were doing. It very quickly led to connecting with our American team – we have a base in Michigan, with a few developers and a sales team there – so it was easy for the AMD leadership team to connect with them.
We started to explore how the simulation platforms can be set up quickly and efficiently to replace real-world testing, which comes with limitations. They’re really keen to show that their innovations, such as parking system developments and their 3D surround view of the car, can be demonstrated without having to rely on real-world tests. It avoids having to find a place to set up real-world testing; they can do it in their lab. More importantly, they can do it quickly and iterate their designs rapidly.
For someone like AMD, they are entering this as a platform supplier. They are not in business to sell you a camera, or to sell you one part. It’s important for them that the entire ADAS and autonomous stack – from sensing through to perception, control, action and all of the phases you need in between – is available in the simulation. They also want the flexibility to do both real-time and offline simulation, so that they can bring in their physical hardware.
You have that whole flexibility to test any part of this autonomous stack. Is it the sensors you’re testing? Your perception system in the middle? Your control system that’s guiding the car? Is it reversing or moving into the space? What does that feel like to the passenger inside?

That flexibility is obviously a massive attraction of rFpro: we can use a single virtual world and a single simulation platform like AV elevate, and let our customers explore all of these areas together, or each of them independently – it’s up to them.
I think AMD were just really impressed with its flexibility, and with the speed at which they went from seeing the solution and trying it out, to a proof of concept, and then to having a full demonstration ready to take to Las Vegas (in January). This came about in just six months, which in the automotive industry is unheard of.
As technology develops within the automotive industry, has that posed any challenges for rFpro and its capabilities?
Absolutely. We only launched AV elevate, a dedicated product, last year. It was developed over seven years, which illustrates the time and investment required for something like this.
There were some big challenges in separating it from classical driver-in-the-loop simulation. In driver-in-the-loop simulation, you obviously have to have everything in real-time, because the world won’t wait – you as a human react exactly to what you see in front of you. For the sensing side of the industry, we actually need to synchronise everything and ensure that all of the software stays in sync, because some of the testing needed isn’t just real-time based.
One of the big challenges we had initially was supporting both our real-time product for humans and hardware, and what we call our synchro step – our synchronous product for software-in-the-loop testing. That was the first thing we realised: we need a common virtual world and a common set of interfaces for people to connect to, but we also need to allow them flexible deployment – are you operating it with humans in real-time, or are you operating it in high fidelity, in synchronous mode?

You have different challenges in each of those modes, and we have had to adapt and build new technologies into our digital worlds so that they can be viewed by different types of sensors and used to produce high-quality training data.
We’ve had challenges in our scenarios and our interfacing. We’ve had to think not just about a driver in a seat with a steering wheel, but about how we bring lots and lots of actors into the scene at the same time, in a flexible way for different systems.
Another massive area is sensor models – really changing the way the simulation looks at the world, not just through human eyes but from the electronic eye’s point of view. Is it a camera? Is it a LiDAR? Is it a radar? Then building individual simulation outputs based on the physics of those sensors.
How important is simulation technology for the industry?
We’ve been creating rFpro products for the autonomous industry for many years. When we started, we were known for a highly successful driver-in-the-loop simulation system. So the question became: how do we adapt this for machine vision, rather than just human vision?
The whole philosophy is one of reusing industry-leading simulation, and all the investment we’d made in it, but applying it now to a very important, safety-critical field like ADAS and autonomous driving.
We had been in a very good position for a long time in terms of having a good base: a lot of customers who are very used to using simulation and to improving their development process with it.
You simply don’t have enough time or money to drive everything in the real world, so if you are to reduce your reliance on real-world testing, it’s essential that you’re able to adopt simulation into your development process to really make that move forward.
I often talk about it as being like making a movie. You need a whole set of actors and locations to work in – that’s the digital content, and we’ve got to build very specific things around that. You need a range of flexible scenarios, so there are opportunities to move the actors around – and of course the script you’re going to give them. Then you need all of the filming equipment: the sensor models that are very specific to what you need to do. It’s a lot of moving parts to bring together.
What goals has the company set out for 2025?
In terms of AV elevate, it is still fairly new to the market, and we’ve just got our first customers up and running with it. I think the goal is to show more and more of the use cases that this single AV elevate simulation platform can support.
We will continue to work with AMD, not only to promote what they’re doing in this use case, but to show it off to other people, maybe on the training-data side; this use case is very much on the testing side. We group the different applications into three main buckets: tuning systems, training autonomous systems, and testing the full stack.
I think this is very much the tuning and the training: they’ve got a system that they’re adapting a little bit, and then they’re testing how their driving algorithms are working.
For us, there’s a massive part that is training: using synthetic data to train up the perception systems, something we’ve been targeting for a long time, and we’ve got several different projects going on there.
If we can find some big successes this year that allow us to talk publicly about verified success in using synthetic training data for the training part of the perception system as well, then that would be a major goal achieved for us as a company.