New cars with minimalist cockpits are becoming increasingly common, giving the impression that they are too clever for buttons, but some go a little too far. Glossy touchscreens may add a touch of class to an otherwise dreary dashboard, yet some of us still prefer to adjust the HVAC with old-fashioned dials while keeping our eyes on the road.
Voice recognition can certainly eliminate many controls that have traditionally been operated manually. It can also play an important part in a multimodal HMI solution, whether for inputting information or for cutting through layers of menus by requesting a function directly. And voice recognition will inevitably play a bigger role as cars gradually become more autonomous.
Voice-powered assistants are giving way to integrated, intuitive co-pilots that anticipate the driver’s needs and wants. Cerence has been pushing back the technical boundaries in this area for some time. It develops automotive assistants, creating intelligent, flexible and intuitive in-car experiences for automakers worldwide.
What is the headline message that Cerence is putting out at this year’s CES?
Our focus at CES and moving forward is Cerence Co-Pilot, a first-of-its-kind, multi-modal, proactive driving experience that transforms the automotive voice assistant into an intuitive, AI-powered companion that can support drivers like never before. With the introduction of Cerence Co-Pilot, we’re ushering in a new era of in-car assistants, defined by proactive AI and unprecedented integration with the car, creating an effortless experience that keeps drivers safe, comfortable, productive, and informed.
How do you see the digital cockpit evolving? What other changes do you predict?
There are a few factors that will be important to consider as we look at the evolution of the digital cockpit. First is cloud connectivity and deeper integration with the car’s sensors and data, which together deliver an unprecedented amount of information to the car’s infotainment system and virtual assistant. With that, the cockpit experience becomes more informative and supportive for drivers and new capabilities like proactivity are unlocked.
In addition, combining modes of interaction – voice plus gesture, voice plus eye tracking, audio and visual notifications – will enable a richer digital cockpit experience. Interaction with the car will be similar to the interaction between two humans, in which eye movements and gestures are just as important to the conversation as the words we say. This will make it more natural and intuitive than ever to leverage the many features our increasingly digital and connected cars possess.
While a number of cars have some sort of voice recognition set up, some drivers are frustrated by how challenging it is to use voice commands on the road. How does your artificial intelligence-based automotive assistant help reduce that frustration?
In the last several years, voice technology has progressed rapidly. Simple voice recognition, with its fixed input and output, has evolved into natural language understanding, meaning drivers can speak to the in-car assistant in natural, intuitive phrases (“I’m cold” or “Take me home”) rather than the rigid, predefined commands that may come to mind for some people when they think of voice controls. In addition, neural network-based text-to-speech has given a more human-like quality to in-car virtual assistants, making back-and-forth interaction between driver and assistant more natural and comfortable than ever.
Another factor that will help mitigate some of these frustrations is the rising prevalence of over-the-air (OTA) updates. In the past, as new technology was developed, it would take several years to hit the roads broadly, sometimes causing frustration among consumers who are used to the quick and frequent updates they see with their smartphones and other devices. Even then, drivers wouldn’t have access to new technology until they bought a new car. Today, it’s easier for OEMs to deploy new and improved features to their drivers in a simple and convenient way.
Lastly, we’re actively working with many of our automaker customers on programs that provide simple but thorough introduction to the voice- and AI-powered features in their cars, so drivers can begin using and loving the technology immediately upon picking up their new cars. We’ve found that education is critical so drivers can make the most of the technology.
From your perspective, how will conversational and cognitive technologies change the look and feel of the cockpit?
One thing we’ve seen some OEMs like Volkswagen introduce is lighting effects throughout the cabin. In the Volkswagen ID.3, for example, voice interaction integrates with ID. Light, an LED strip that runs across the cockpit and assists the driver by changing colour according to its current function and the driving conditions. ID. Light also uses a light signal to let drivers and passengers know that the voice assistant is awake and listening.
There’s also lots of conversation about touchscreens and screen sizes as we look at future cockpits. When thoughtfully designed and integrated with the rest of the cockpit experience, these screens have an incredible impact. Consider the MBUX Hyperscreen from Mercedes-Benz: it’s a feat of design and engineering that serves as the central hub of information in the car, and it’s deeply integrated with MBUX’s voice capabilities.
Taking interaction one step further, emotion recognition capabilities will provide an enhanced experience. If our cars can be aware of our emotional state – stressed, joyful, tired, etc. – and can adjust accordingly across factors like in-car lighting, the tone and volume of the in-car assistant, music selections, and proactive notifications, we begin to see an even richer experience that meets drivers where they are.
Are you scouting for start-ups at this event? If so, in what areas?
We are always interested in start-ups that we can work with. As cars get more connected and with the progress of AI and machine learning in recent years, we continue to see more start-ups that bring new ideas to life. We are particularly interested in partners that can help us strengthen our offering. For example, a few years ago, we started to work with what3words, a company that has created an innovative geocoding system that works wonderfully in a voice assistant. This year at CES, we saw some promising things in the areas of in-car audio processing and connected AI services aimed at the vehicle industry.
How are you feeling about 2022 – both in terms of business prospects and more generally?
We’re looking forward to again being able to meet face-to-face and to work very closely with our customers on new in-car AI technologies like Cerence Co-Pilot. We see tremendous interest in more advanced AI in the car from automotive OEMs as they fully embrace the “car as software on wheels” paradigm and build large in-house software teams. We believe we can help them bring innovative AI technologies to the car and enhance the in-car experience for the driver as well as for the passenger. The road ahead is an exciting one!