Voice recognition becomes common parlance

By Matthew Beecham | 13 February 2018

Leor Grebler

Carmakers are fast adopting virtual assistants, confirming that speech is becoming the preferred interface for tomorrow's cockpit. Voice control was king at the most recent CES. Continuing just-auto/QUBE's series of interviews with automotive specialists, we spoke to Leor Grebler, CEO of Unified Computer Intelligence Corporation. UCIC is a Toronto-based company that helps integrate Alexa into hardware products. They created an Echo-like device on Kickstarter back in 2012 and have been working in voice since then.

How did you set up the business and were you always drawn to inventing something?

We founded UCIC because we believe that interaction with technology should be natural and easy. We were finding that people were starting to become distracted by the technology around them, unable to focus on each other, so we thought about what we could do to allow technology to fade into the background and chime in when you needed it.

Our original product was the Ubi, short for the Ubiquitous Computer, a WiFi-connected, voice-operated computer. This was two years before the Amazon Echo came out. The Ubi allowed hands-free voice interaction in any room: you could send email, play music, control devices, and ask thousands of questions without needing to take out your phone. As we progressed in our business, we moved away from developing our own products to helping other companies add voice interaction to their hardware.

The other co-founders of UCIC and I always wanted to create something that could make a deep impact. We explored many ideas back in 2011 and 2012 and eventually realised that the Ubi was the most impactful one to bring forward. Around that time, we drew a lot of inspiration from projects on Kickstarter, so that's where we decided to post the project.

So what shape is your business in today and what sort of things are you creating for automotive?

We're seeing an explosion of voice-first devices hitting the market and lots of interest in integrating voice into new products. We've been helping companies experiment with voice and determine how they could deploy it in their products. For automotive, we've provided tools to prototype voice interaction with Google Assistant, Alexa Voice Service, or a carmaker's own customised AI assistant.

Who are you talking to about Ubi Kit?

We're speaking primarily with hardware makers and brands that are looking to add voice to their products. There are a lot of complexities involved in implementing voice and creating a good user experience. Amazon and Google provide some tools for implementing their services, SDKs and APIs, but there are still a lot of gaps that need to be filled in getting these to work on hardware, such as setting up and tuning the wake word, remote updates, and other controls. This is where we provide the Ubi Kit – to fill in these gaps and run multiple AI assistants on a single device.
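One of those gaps is the device-side plumbing that sits between the microphone and the assistant back-ends. As a rough sketch of the idea (this is invented for illustration, not the actual Ubi Kit API), a device stays dormant until a configured wake word is heard, then routes the rest of the utterance to whichever assistant that wake word is bound to:

```python
# Illustrative sketch only: the wake-word routing idea, not Ubi Kit's real API.
from typing import Callable, Dict


def make_router(wake_words: Dict[str, str],
                assistants: Dict[str, Callable[[str], str]]) -> Callable[[str], str]:
    """Return a function mapping a raw transcript to an assistant response.

    wake_words maps a wake word (e.g. "alexa") to an assistant name;
    assistants maps that name to a handler. All names here are hypothetical.
    """
    def route(transcript: str) -> str:
        words = transcript.lower().split()
        if not words:
            return ""  # silence: stay dormant
        wake = words[0]
        if wake not in wake_words:
            return ""  # no wake word heard: ignore the audio
        handler = assistants[wake_words[wake]]
        query = " ".join(words[1:])
        return handler(query)
    return route


# Two stand-in back-ends, demonstrating multiple assistants on one device.
router = make_router(
    wake_words={"alexa": "avs", "ok": "assistant"},
    assistants={"avs": lambda q: f"AVS handling: {q}",
                "assistant": lambda q: f"Assistant handling: {q}"},
)

print(router("alexa what's the weather"))  # routed to the AVS handler
print(router("background chatter"))        # ignored: no wake word
```

In a real product the wake-word stage is an always-on acoustic model that must be tuned per microphone and enclosure, which is exactly the kind of work the SDKs leave to the hardware maker.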

We heard a lot of discussion about using 'voice' at the most recent CES with the likes of Google Assistant and Alexa. That must have been music to your ears. What did you learn from CES?

Absolutely, voice was everywhere! You couldn't walk three steps without coming across a Google Assistant ad, and Alexa had multiple ballrooms and booths – and was plastered on hundreds of other companies' booths. It was a bit of vindication against the doubters of voice technology from three or four years ago.

What we learned from CES was that companies are now looking beyond just simple integrations with AI assistants – they want to build their own custom unique experiences on their products and have access to Google Assistant or Alexa.

While giving instructions in our cars is nothing new, putting questions to the likes of Alexa and Cortana while on the road is. Is this the way things are going - having more conversations with our cars?

The word "conversations" might be a bit of a stretch, but we'll definitely see more voice interaction capabilities. Anything that's available as a Skill or Google Action will be accessible in our cars. This includes media. We might see implementations that are car-friendly and less distracting than when accessed on non-vehicle devices.
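The reason Skills carry over so easily is that a Skill is just a web service speaking a JSON envelope; the endpoint neither knows nor cares whether the request came from a smart speaker or a head unit. A minimal sketch of that response envelope (the envelope fields follow the Alexa Skills Kit response format; the speech text is invented):

```python
import json


def build_skill_response(speech_text: str, end_session: bool = True) -> dict:
    """Build the minimal JSON body an Alexa Skill returns to the Alexa service.

    version, response.outputSpeech and shouldEndSession are the standard
    Alexa Skills Kit response fields; the text is whatever the skill wants
    the device (in-car or otherwise) to say aloud.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }


print(json.dumps(build_skill_response("Your next turn is in two miles."), indent=2))
```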

If consumers expect to have conversations with their car, I guess security around that technology must evolve. What do you see happening there?

We'll likely see security increase among all voice-first devices, including in-car voice systems. This will likely take the form of multi-factor authentication, combining a biometric aspect, such as a voice print of an enrolled user, with a device aspect, such as having the user's phone paired with the car. Push notifications to in-car voice devices might also be limited to prevent distractions.

We can tell Alexa to unlock the car and enquire about the weather, but I guess such things are just the tip of the iceberg in terms of what voice control can do for the motorist. What is your vision of using personal assistants in cars?

First, outside of the car, we'll likely see much more information being exposed and accessible to things like Skills and Actions. How much gas is left in the car? What's the range? When do I need to take the car in next for servicing? What's the car's fuel economy? How's my driving compared to others? Inside the car, we'll see control extend to comfort settings (turn on the defrost, turn on the A/C, etc.) but also information about what's en route. Where's the nearest gas station? What pizza places are open nearby?
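Each of those spoken questions ultimately resolves to an intent looked up against the car's data. As a rough sketch under assumed names (the intent labels and the VehicleState fields are invented for illustration, not any carmaker's real interface):

```python
# Illustrative only: mapping recognised intents onto hypothetical vehicle data.
from dataclasses import dataclass


@dataclass
class VehicleState:
    fuel_litres: float
    range_km: int
    km_to_service: int


def answer(intent: str, car: VehicleState) -> str:
    """Map a recognised intent to a spoken answer about the car."""
    if intent == "GetFuelLevel":
        return f"You have {car.fuel_litres:.0f} litres of fuel left."
    if intent == "GetRange":
        return f"Your estimated range is {car.range_km} kilometres."
    if intent == "GetServiceDue":
        return f"Your next service is due in {car.km_to_service} kilometres."
    return "Sorry, I can't answer that yet."


state = VehicleState(fuel_litres=32.5, range_km=410, km_to_service=2800)
print(answer("GetRange", state))  # "Your estimated range is 410 kilometres."
```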

From your perspective, how will conversational and cognitive technologies change the look and feel of the cockpit?

Ideally, while we are still able to drive cars (before they become completely driverless), we should be able to have complete focus on the road and no visual distractions. The last iteration of the cockpit before it becomes driverless could be a minimalist design, with very little information displayed, a monochromatic colour scheme, and everything accessible by voice. When driving becomes complicated, the voice assistant should slow its delivery of information or pause altogether. The driver should feel more of a state of flow.

By this time next year, what commercial success do you hope to have achieved with your automotive voice-controlled technologies?

We hope that at the very least third-party suppliers will adopt multiple AI assistants in the car and the demand from consumers will force large-scale adoption of AI assistants by all major automotive players. We hope that our Ubi Kit can help speed up this adoption.

Leor, is there anything else that you would like to add about your technologies or market position?

We're very excited about where voice technology is heading. We'll soon be moving beyond the basic question and answer type of interaction to one where devices can sense our moods, know more about our goals for the day, and will be more intelligent on how they approach us. The tools we're working on now will make it possible to have a ubiquitous voice experience where you forget about the technology and just expect it to always be there, working.