Virtual reality has become a huge part of the automotive industry, from virtual vehicle development to improving how in-car technology is manufactured – and it shows no sign of slowing down.

Much of this technology originates in the video game industry. This is also true of the work currently being carried out by Unity, a company claiming to be the world’s leading platform for creating and growing real-time 3D (RT3D) content.

The company has been helping automotive leaders such as Mercedes-Benz and Hyundai to power infotainment systems and even create a virtual Meta-Factory.

We spoke to Callan Carpenter, vice president of digital twin solutions, to learn more about the work the company is carrying out and how video game technology can be used in the automotive sector.

Just Auto (JA): Could you discuss the background of Unity Games, and your role?

Callan Carpenter (CC): Unity Games is the most commonly used video game engine, but it is probably better to call it a real-time application engine. Lots of applications that require real-time interaction with people get built on Unity. The vast majority historically have been games. But as the years went by, we began to notice that non-gaming companies, such as aerospace and manufacturing companies, were building on Unity.

You might ask, why would a manufacturing or aerospace company want to buy a game engine? It turns out that there are a lot of very powerful use cases for game engine technology in the non-gaming world. Use cases exist for everything from product visualisation to virtual training, to the control of smart factories, maintenance and operations, operating facilities and so on. There are many use cases.

Unity is divided into the groups that build products, like the game engine, and a professional services organisation. That is where I sit: in the professional services unit.

In that group, I’m responsible for the overall strategy, go-to-market strategy and what we call the programme management team, as well as for federal or government business. The primary role is for us to help customers adopt Unity technology on either the more traditional gaming entertainment side, or in the new and growing industrial non-gaming side of the business. We help them build applications or co-build applications with customers who have their own development capability. We conduct this activity across many different industries: automotive, manufacturing, transportation, e-commerce, retail, construction, energy, and the list goes on; not forgetting our traditional game, entertainment and film industry customers as well.

Where can the technology be used within the automotive sector?

The technology can be used across many different segments of the lifecycle of an automobile. It starts very early, in the design, engineering and collaboration phase; it can then be used in the marketing and customer engagement phase for things like product configurators, virtual advertising and synthetic photography. The engine can also be used in the automobile itself.

I believe the chief software officer at Mercedes said, back in 2019, that screens are the new horsepower. Really what he’s saying is that the basis of competitiveness, and the real value in an automobile, is shifting. It has been shifting for the last decade, with electrified drivetrains, the move toward autonomous driving, changes in ownership models and all the things that we know are disrupting the industry.

The immersive experience, both from an informational perspective and an entertainment perspective, has emerged as a really critical element in the automobile experience today.

Going back to the increasing amounts of AI, machine learning and autonomous driving being installed in vehicles: those features – particularly things like computer vision – need to recognise, say, a ball, or that a vehicle is approaching an intersection. That requires something called ‘training information’. You need to train a neural network for computer vision or autonomous driving to work.

Traditionally, what’s been very challenging about that is that you want to feed a machine learning algorithm known, good data that’s annotated. Here’s a simple example. If I wanted to teach an algorithm to recognise a cat, I’d go out with a camera and take lots of pictures of cats. I would then feed those into the neural network. The problem with that technique is that it’s expensive, it’s time-consuming, and you’re bound to have data bias, because you can’t take a picture of every possible cat in all possible lighting conditions, in every conceivable type of background.

Now, imagine that we’re talking about smart vehicles and autonomous driving. That problem is magnified by an infinite amount of complexity, because we need to recognise all these different road conditions and intersections. Is it night-time? Is it snowing? Is it foggy? Are there people at the intersections? You can use the game engine to generate synthetic data, because you can set up algorithms and say: I’m going to produce a million different variations, or ten million different variations, of an intersection. You can do it very cheaply because it’s all synthetic. You’re not sending a physical person out with a camera taking pictures; you can adjust variables such as lighting and weather, crosswalks, and whether the traffic lights are working or not. All of those are software variables you can control.
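As a rough illustration of that idea, here is a minimal Python sketch of how such scene variations might be enumerated in software. The scene fields, value ranges and function names are hypothetical placeholders for this article, not Unity’s synthetic-data tooling; the point is that because every parameter is set by the program, each generated image would arrive with its annotations for free.

```python
import random
from dataclasses import dataclass


# Hypothetical parameters for one road-intersection scene. These fields and
# ranges are invented for illustration; this is not Unity's actual API.
@dataclass
class IntersectionScene:
    time_of_day: str
    weather: str
    crosswalk_present: bool
    traffic_lights_working: bool
    pedestrian_count: int


TIMES_OF_DAY = ["dawn", "day", "dusk", "night"]
WEATHER = ["clear", "rain", "snow", "fog"]


def random_scene(rng: random.Random) -> IntersectionScene:
    """Sample one variation; every field doubles as a ground-truth label."""
    return IntersectionScene(
        time_of_day=rng.choice(TIMES_OF_DAY),
        weather=rng.choice(WEATHER),
        crosswalk_present=rng.random() < 0.7,
        traffic_lights_working=rng.random() < 0.9,
        pedestrian_count=rng.randint(0, 12),
    )


def generate_dataset(n: int, seed: int = 0) -> list[IntersectionScene]:
    """Generate n scene variations; a renderer would turn each one into an
    image, and the scene parameters become the annotations for free."""
    rng = random.Random(seed)
    return [random_scene(rng) for _ in range(n)]


if __name__ == "__main__":
    # Scale n up toward the "million different variations" described above.
    for scene in generate_dataset(3):
        print(scene)
```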

Could you tell me a bit about the work the company has already done within the automotive industry?

Let’s talk about Mercedes, where the benefit is building compelling, immersive customer experiences in the vehicle. That’s what a game engine can do: it renders in real-time, meaning I can interact with a scene that changes; the video experience reacts to input from the driver.

We say real-time to distinguish it from linear entertainment like a movie. A real-time video experience is one where we have no idea what the next frame is going to be, because you don’t know what choices the person is going to make, so you have to render it all in real-time.
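A toy sketch of that distinction, in plain Python rather than anything Unity-specific: the frame list, the 30 fps budget and the steering input are made-up stand-ins, but the contrast between playing back frames that are known in advance and computing each frame from live input is the point.

```python
import time


def play_linear(frames: list[str]) -> None:
    """Linear media: every frame is known in advance; just play them back."""
    for frame in frames:
        print(frame)


def run_real_time(get_input, render, fps: int = 30, frames: int = 90) -> None:
    """Real-time engine: the next frame depends on input that has not happened
    yet, so each frame is computed inside the loop, within a time budget."""
    frame_budget = 1.0 / fps
    state = {"steering": 0.0}
    for _ in range(frames):
        start = time.monotonic()
        state["steering"] += get_input()   # react to the driver
        render(state)                      # draw this frame from the new state
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, frame_budget - elapsed))


if __name__ == "__main__":
    play_linear([f"movie frame {i}" for i in range(3)])
    run_real_time(get_input=lambda: 0.01, render=lambda s: None, frames=30)
```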

A real-time game engine is ideal for embedding inside an automobile where you will be having real-time experiences. It can still be 3D, it can be beautifully rendered, and it can also run on relatively inexpensive hardware.

One of the benefits of the Unity engine is that we come from a mobile background. We’re used to putting these experiences on cell phones and tablets, and automobile manufacturers want low cost and low weight. If they can choose a less expensive chipset and still deliver this amazing experience, then that’s a huge benefit of the game engine in that context.

Another use case is Volvo. Within automotive there’s a highly complex value chain. There are designers, engineers, manufacturing, ergonomics, vehicle dynamics, and then there are the infotainment systems. Typically, these things are all developed by specialised groups operating within their silos, and I think it’s been a truism in automotive for 100 years that if you could just get designers, engineers and manufacturing people talking the same language, you could collectively build better cars.

That common language has historically been a prototype, because everyone can sit there and physically look at it. The problem with prototypes is that they’re expensive to build, they take time to build, and once a prototype is built you can’t make instantaneous changes; someone has to physically update it, and so on.

What Volvo has done is say: let’s see if we can reduce the number of physical prototypes and instead create a common virtual language that the designers, engineers and manufacturing people can all communicate in; that environment and that common language is the virtual world of Unity. In this virtual world designers can make instantaneous changes, engineers can understand what the design intent is, and they can communicate back.

What is the company working on and hoping to achieve this year?

I think probably the most important one for the space we’re talking about here is taking the technology and products that have been built up over the last twenty years of the company’s existence and refactoring them into cloud-based services.

If we can put all those services up in the cloud, services for things like data ingestion, it changes what is possible. To have a virtual or digital twin of an automobile, I need to be able to read the CAD data so I can recreate the vehicle; that becomes a service in the cloud.

When you do that, you open up all kinds of possibilities for data sharing. By bringing all this technology together and creating a platform out of it, you can configure the same set of services in different ways to produce the smart factory twin, the collaboration platform for the engineers and manufacturing people, or a product configurator for the marketing department. So that’s a really big leap forward.
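To make the “same services, different configurations” idea concrete, here is a small hypothetical sketch in Python. The service names (cad_ingestion, twin_sync, render_streaming) and configuration fields are invented for illustration and are not Unity products; the sketch only shows how one shared set of services could be wired into different applications.

```python
from typing import Callable

# Hypothetical shared cloud services: each takes a configuration dict and
# describes what it would do. These names are placeholders, not real APIs.
SERVICES: dict[str, Callable[[dict], str]] = {
    "cad_ingestion": lambda cfg: f"ingest CAD from {cfg['source']}",
    "twin_sync": lambda cfg: f"sync live data every {cfg['interval_s']}s",
    "render_streaming": lambda cfg: f"stream frames at {cfg['quality']} quality",
}


def build_application(name: str, pipeline: list[tuple[str, dict]]) -> None:
    """Assemble an application by wiring shared services in a given order."""
    print(f"-- {name} --")
    for service, cfg in pipeline:
        print("  " + SERVICES[service](cfg))


if __name__ == "__main__":
    # The same services, configured two different ways:
    build_application("smart factory twin", [
        ("cad_ingestion", {"source": "plant layout CAD"}),
        ("twin_sync", {"interval_s": 5}),
    ])
    build_application("marketing product configurator", [
        ("cad_ingestion", {"source": "vehicle CAD"}),
        ("render_streaming", {"quality": "high"}),
    ])
```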