At CES 2018 in Las Vegas, just-auto’s Calum MacRae caught up with Gilles Gallée, business development director at OPTIS. OPTIS specialises in virtual reality simulation of sensors, expertise now being applied in the automotive sector as autonomous vehicles are readied for deployment.
just-auto: So, Gilles, tell me about OPTIS.
Gilles Gallée: OPTIS is a French company, but we are worldwide. We have 250 people globally, with locations here in the USA in San Jose, California and in Troy, Michigan. Our specialisation is light simulation and sensor simulation. This is the DNA of the company, and we support the whole automotive industry with very accurate simulation.
j-a: And what does the simulation involve? Is it just gathering data and putting algorithms together?
GG: What we are simulating is really the sensors. So on the car today you have six sensors but by 2022 you can expect it to have more than 30 sensors.
j-a: Is it for Level 4 that you have more than 30?
GG: In 2022 it will be for Level 4 mainly. So 30 sensors will be mandatory for autonomous driving in four or five years. You have four main sensor types: camera, for sure, radar, LiDAR and ultrasonic. The objective for sensor suppliers like LeddarTech, and for OEMs and Tier 1s, is to evaluate the performance of the sensors inside the car body and to evaluate the automated driving systems. They will not be able to drive thousands and thousands of miles for testing, so simulation will be mandatory. To support them we bring very accurate sensor models into their simulation platform, and we model the environment. We can measure the reflectance of the road, of signs and so on, across the relevant spectrum for the LiDAR's infrared, the camera, the radar and so on.
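As a rough illustration of why those reflectance measurements matter, the sketch below feeds assumed material reflectances into the standard simplified LiDAR range equation. The material values, wavelength and sensor parameters are illustrative guesses, not OPTIS data.

```python
import math

# Illustrative spectral reflectance values (fraction of light returned)
# at a typical LiDAR wavelength of 905 nm. Real values would come from
# measurements of actual road materials, signs and paints.
REFLECTANCE_905NM = {
    "asphalt": 0.08,
    "retroreflective_sign": 0.80,
    "white_car_paint": 0.60,
    "black_car_paint": 0.05,
}

def lidar_return_power(emitted_power_w: float,
                       target_range_m: float,
                       reflectance: float,
                       aperture_area_m2: float = 1e-4,
                       atmospheric_transmission: float = 0.9) -> float:
    """Received power for a diffuse (Lambertian) target, per the
    simplified single-return LiDAR range equation:
        P_r = P_t * rho * A / (pi * R^2) * T^2
    """
    return (emitted_power_w * reflectance * aperture_area_m2
            / (math.pi * target_range_m ** 2)
            * atmospheric_transmission ** 2)

# A black car at 100 m returns roughly a tenth of the power a white
# car does, which is exactly the accident scenario in the CES demo.
for material in ("white_car_paint", "black_car_paint"):
    p = lidar_return_power(75.0, 100.0, REFLECTANCE_905NM[material])
    print(f"{material}: {p:.2e} W")
```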
j-a: What are you demonstrating here at CES?
GG: Here we have an example with LeddarTech, with their new LiDAR modelled in Speos, our light and human vision simulation software. It can evaluate the illumination and emission performance of the LiDAR directly in the CAD environment. What you see here is the performance of the LiDAR in front of an accident situation with white cars and black cars. Customers can simulate the LiDAR in different situations and evaluate its performance at a very early stage.
j-a: Do you do the camera and radar simulation as well?
GG: Yes, although radar is still in preparation. Because we are a simulation company, we do very accurate simulation with the objective of validating against the real situation, so that customers can continue their testing in simulation without needing a real driver.
j-a: What are the quantifiable benefits of doing a simulation over testing for thousands and thousands of miles?
GG: 99% of the testing for Level 4 and 5 autonomous vehicles will be done in simulation. That is a big change for the automotive industry, because manufacturers have to introduce simulation at a very early stage of the design in order to compress the testing and keep it to a reasonable number of miles.
j-a: So in California Google has done hundreds of thousands of miles of testing – are they doing simulations as well?
GG: Yes, there are different levels of simulation. At the first level you test the computer that makes the decisions. Then you have perception, which is linked to the sensors. So you can simulate at the level of the brain, feeding it sensor data recorded from real drives: you do not simulate the sensor, you use real data, and then you can exercise your brain under different conditions, conditions you have pre-recorded. The next step will be to fully virtualise the test and introduce sensor simulation. This is where we are going.
We are already working with the main OEMs and suppliers, implementing very accurate sensor models for camera and LiDAR, with radar to follow. This will give them a complete platform for testing.
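To make the two levels Gallée describes concrete, here is a minimal, hypothetical sketch; these interfaces are assumptions for illustration, not OPTIS's API. The same driving "brain" can be fed either replayed recordings or fully virtual sensor frames.

```python
from abc import ABC, abstractmethod

class SensorSource(ABC):
    """Where the driving 'brain' gets its sensor frames from."""
    @abstractmethod
    def next_frame(self) -> dict:
        ...

class RecordedDrive(SensorSource):
    """First level: replay sensor data captured on real drives.
    The brain is exercised, but only against conditions that were
    actually recorded."""
    def __init__(self, frames):
        self._frames = iter(frames)

    def next_frame(self) -> dict:
        return next(self._frames)

class VirtualSensors(SensorSource):
    """Next level: synthesise sensor output from a simulated scene
    using physics-based sensor models (camera, LiDAR, radar), so any
    condition can be generated, not just the recorded ones."""
    def __init__(self, scene, sensor_models):
        self.scene = scene
        self.sensor_models = sensor_models

    def next_frame(self) -> dict:
        self.scene.step()
        return {name: model.render(self.scene)
                for name, model in self.sensor_models.items()}

def test_brain(brain, source: SensorSource, n_steps: int) -> None:
    """The same decision-making code runs against either source."""
    for _ in range(n_steps):
        brain.decide(source.next_frame())
```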
j-a: Do you work with other LiDAR companies other than LeddarTech?
GG: Yes, most of them, both scanning and solid state. Most of the industry is now turning to solid state because solid-state LiDAR will come in at a price acceptable to the automotive industry. They are all talking about US$100 of cost, which is a good target for automotive deployment. The second stage will be the massive deployment of sensors in cars as prices come down. We can see that movement among OEMs today: they are putting ADAS features on entry-level cars.
j-a: Do you have any competitors in simulation?
GG: We do have competitors in the simulation industry, but we are unique because what we do is very accurate simulation. In that respect we have no competitors; automotive and military were just our initial applications. Before that we were working with the nuclear industry in France on similar kinds of simulation. We have been a company for some 25 years now, and we have seen big growth in the last six years especially, because of the massive deployment of lighting and sensor technology in the automotive industry. The automotive industry requires extreme accuracy for quick decision-making, and now we are able to provide a solution that fully replaces real prototypes. For example, here we are demonstrating our smart lighting technology developed with the sensor models. The simulation can test the solid-state lighting system in advance, without any physical test, and that can save a company US$1 million per year.
j-a: When you talk about headlamp technology, you're talking about headlamps with integrated sensors, and that might include LiDAR sensors. Koito, for example, is thinking about integrating the LiDAR.
GG: Koito’s using our technology to simulate the solid state lighting, in order to provide a complete lighting and sensor system. It’s a good way to reduce the cost because it’s one supplier, one integrated system …
j-a: But might there be pushback from OEMs because they lose the cost transparency they favour? We've had system integrators before, and there was pushback because OEMs couldn't see the cost of each individual component. Do you think that might happen when you start bundling commodities with sensor suites?
GG: Not necessarily if they see the value.
j-a: Can OEMs go to Level 4 and 5 without LiDAR?
GG: I think nearly all the industry is now aligned on the message that Level 3, and especially Level 4, will require the four types of sensors, because they need the redundancy. Radar is very good, but its discrimination and classification are very poor, so you need the camera. But the camera is poor as soon as you have poor lighting or bad weather. So the LiDAR is good for that.
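As a toy illustration of that redundancy argument, a fusion layer might fall back on whichever sensor is trustworthy in the current conditions. The rules and field names below are purely hypothetical and heavily simplified.

```python
def classify_object(radar_det, camera_det, lidar_det, conditions):
    """Toy fusion rule reflecting the strengths described above:
    radar for presence and range, camera for classification in good
    light, LiDAR as the fallback when lighting or weather defeat
    the camera. All inputs and field names are hypothetical."""
    if radar_det is None:
        return None                       # nothing detected at all
    if conditions.get("good_light") and camera_det is not None:
        return camera_det["label"]        # camera classifies best
    if lidar_det is not None:
        return lidar_det["shape_class"]   # classify from point cloud
    return "unclassified_obstacle"        # radar alone: presence only
```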
j-a: Earlier you said that you are only just coming to market with radar simulation. Why is that?
GG: It's because we have been mainly focused on optical sensors. We are waiting on different research projects and pilot projects with German and American companies in order to test and validate our radar simulations. As soon as it is ready, we will release it.
j-a: So out of your total revenue how much is derived from the automotive sector?
GG: Two-thirds of our business is the automotive industry. That is split between exterior systems, such as lighting and sensors, and interior systems, which is mainly lighting. Today our business in autonomous driving is not that big, but it has big potential. We have big projects that started in 2017 and 2018, and we see the business growing a lot in the next three years.
j-a: Can you say who your key OEM and Tier 1 customers are?
GG: I would say all of the top 10 OEMs are using our software, and 18 of the 21 Tier 1 suppliers are using our technology. All OEMs are using our software daily for virtual prototyping. For exterior lighting I can give an example: today a headlamp is launched into production based on a virtual prototype, and customers have demonstrated that our simulations are more accurate than a real prototype.
j-a: So how much development time did you cut?
GG: We had an example with Hyundai Mobis where, instead of taking three years, it took one year to develop a lighting system. That's the reason we are used: customers can reduce development time and then introduce more intelligence into the product, which is linked to the software and the electronics.
j-a: So your software sits on top of the CAD used by product development teams?
GG: We have two main products: one for the CAD, where you can validate the integration of sensors into the vehicle body, and the second, VRX, which is for automated driving systems. VRX links into model-based development, software-in-the-loop and hardware-in-the-loop. So they are totally different, but we use the same sensor model between the CAD and the systems side.
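To illustrate that last point, here is a minimal sketch, under entirely assumed interfaces (this is not the actual Speos or VRX API), of how a single sensor model might serve both the CAD integration check and the software-in-the-loop test loop.

```python
class SensorModel:
    """One physics-based sensor model, authored once and reused."""
    def __init__(self, mounting_pose, optical_params):
        self.pose = mounting_pose
        self.params = optical_params

    def simulate(self, scene):
        """Render this sensor's output for the current scene state.
        (A real implementation would ray-trace the scene; elided.)"""
        raise NotImplementedError

# CAD side: validate the physical integration on the vehicle body,
# e.g. that body panels do not occlude the sensor's field of view.
# `vehicle_body.occluded_fraction` is a hypothetical helper.
def validate_integration(model: SensorModel, vehicle_body) -> bool:
    return vehicle_body.occluded_fraction(model.pose, model.params) == 0.0

# Driving-simulation side (software-in-the-loop): the very same
# model feeds the driving stack frame by frame.
def sil_loop(model: SensorModel, scene, driving_stack, steps: int) -> None:
    for _ in range(steps):
        scene.step()
        driving_stack.process(model.simulate(scene))
```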