The recent CES included a number of innovations and concepts pitched at the emerging world of autonomous drive. Among them were new concepts that utilise artificial intelligence (AI) as part of the vision of the autonomous future.
Two concepts at CES stood out for their AI aspects: one from Toyota and one from Audi. The technology, with its clever algorithms that can “learn from and grow with the driver”, can enhance a customised experience, improve driving in automated mode and boost safety via learning systems that monitor the driver (and potentially help with the challenges of transition between automated and human driving).
Isn’t this the next big change in our lives? Machines and ‘bots working to make our lives better and interact with us in ‘human-style’ ways – whether it’s the personal assistant on your smartphone, the smart assistant in your home, or in your car.
Autonomous drive is sure to put a premium on the time spent inside the vehicle and the efficiency of the interface between man and machine. AI has a role to play in optimising the mix of human and automated inputs that drive the car, and in making the whole experience as safe and enjoyable for the driver (and passengers) as it can be. Autonomous drive is also sure to stimulate a major rethink of the design of vehicles and the relationship between driver, vehicle and the external environment. Shows like CES give us some insight into how this future could look and the engineering pathways to it.
Toyota showed its ‘Concept-i’ at this year’s Las Vegas CES. It said the groundbreaking concept vehicle demonstrates Toyota’s view that vehicles of the future “should start with the people who use them”. Can’t argue with that.
Designed by Toyota’s CALTY Design Research in Newport Beach, California, with user experience technology development from the Toyota Innovation Hub in San Francisco, the Concept-i was created, Toyota said, around the philosophy of “kinetic warmth”, a belief that mobility technology should be warm, welcoming and, above all, fun.
Concept-i, Toyota maintains, leverages the power of an advanced artificial intelligence (AI) system to anticipate people’s needs, inspire their imaginations and improve their lives.
“At Toyota, we recognise that the important question isn’t whether future vehicles will be equipped with automated or connected technologies,” said Bob Carter, senior vice president of automotive operations for Toyota. “It is the experience of the people who engage with those vehicles. Thanks to Concept-i and the power of artificial intelligence, we think the future is a vehicle that can engage with people in return.”
Built around the ‘driver vehicle relationship’
At the heart of Concept-i is AI that ‘learns with the driver to build a relationship that is meaningful and human’. More than just driving patterns and schedules, Toyota says the concept is designed to leverage multiple technologies to measure emotion, mapped against where and when the driver travels around the world. The combination, it is claimed, gives Concept-i exceptional ability to use mobility to improve quality of life.
The AI system also leverages advanced automated vehicle technologies to help enhance driving safety, combined with visual and haptic stimuli to augment communication based on driver responsiveness. While under certain conditions users will have the choice of automated or manual driving based on their personal preference, Concept-i “seamlessly” monitors driver attention and road conditions, with the goal of increasing automated driving support as necessary to buttress driver engagement or to help navigate dangerous driving conditions.
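In outline, that arbitration between manual and automated control can be thought of as a small decision layer sitting on top of the driver-monitoring and perception systems. The sketch below is purely illustrative: the scores, thresholds and function names are assumptions, not anything Toyota has published.

```python
# Illustrative arbitration layer for automated driving support.
# The attention/risk scores, thresholds and names are hypothetical;
# Toyota has not published Concept-i's actual decision logic.

from enum import Enum


class DriveMode(Enum):
    MANUAL = "manual"
    ASSISTED = "assisted"      # warnings plus corrective support
    AUTOMATED = "automated"    # vehicle takes over control


def select_support_level(attention: float, road_risk: float,
                         prefers_manual: bool) -> DriveMode:
    """Choose a support level from driver attention (0..1, 1 = fully
    engaged) and road risk (0..1, 1 = hazardous conditions)."""
    if attention < 0.3 or road_risk > 0.8:
        return DriveMode.AUTOMATED   # disengaged driver or danger
    if attention < 0.6 or road_risk > 0.5:
        return DriveMode.ASSISTED    # keep driver in the loop, but help
    return DriveMode.MANUAL if prefers_manual else DriveMode.AUTOMATED


# A drowsy driver on a moderately risky road triggers a takeover.
print(select_support_level(attention=0.2, road_risk=0.4,
                           prefers_manual=True))  # DriveMode.AUTOMATED
```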
Perhaps even more interesting is the notion of a next-generation user interface that serves as a platform for the vehicle’s AI Agent, nicknamed “Yui”. The interface begins with the visual representation of Yui, designed to communicate across cultures to a global audience. With Yui’s home centred on the dashboard, Concept-i’s interior flows outward around the driver and passenger, with shapes designed to enhance Yui’s ability to use light, sound and even touch to communicate critical information.
Toyota says Concept-i avoids screens on the central console, instead revealing information when and where it’s needed. Coloured lights in the foot-wells indicate whether the vehicle is in automated or manual drive; discreet projectors in the rear deck project views onto the seat pillar to help warn about blind spots, and a next-generation head-up display helps keep the driver’s eyes and attention on the road.
Yui appears on exterior door panels to greet driver and passengers as they approach the vehicle. The rear of the vehicle shows messages to communicate about upcoming turns or warn about a potential hazard. The front of the vehicle communicates whether the Concept-i is in automated or manual drive.
Toyota hasn’t announced any production plans for the Concept-i, but it does expect to start on-road evaluation within the next few years in Japan.
Audi Q7 ‘deep learning concept’
Audi took a different tack on AI, demonstrating how an automated driving set-up can learn and positioning AI as a key technology for piloted driving. At CES Audi presented the Audi Q7 deep learning concept, a piloted driving car developed in collaboration with its partner NVIDIA.
The car orients itself by means of a 2-megapixel front camera, which communicates with an NVIDIA Drive PX 2 processing unit; this in turn controls the steering with high precision. The high-performance controller is specially engineered for piloted driving applications.
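In outline, such a set-up boils down to a tight sense-infer-act loop: grab a frame, run it through the trained network, command the steering. The sketch below is a generic stand-in, not Audi or NVIDIA code; the camera, network and actuator functions are all assumed placeholders.

```python
# Generic sketch of a camera-to-steering inference loop. The three
# functions stand in for the real camera driver, the trained network
# running on the Drive PX 2, and the steering actuator; none of this
# is Audi or NVIDIA code.

import numpy as np


def get_camera_frame() -> np.ndarray:
    """Placeholder for grabbing a 2-megapixel front-camera frame."""
    return np.zeros((1080, 1920, 3), dtype=np.uint8)


def policy_net(frame: np.ndarray) -> float:
    """Placeholder for a trained network mapping an image to a
    steering angle in radians (here: a dummy constant)."""
    return 0.0


def apply_steering(angle: float) -> None:
    """Placeholder for commanding the steering actuator."""
    print(f"steering to {angle:+.3f} rad")


for _ in range(3):                  # in the car this loop runs
    frame = get_camera_frame()      # continuously at camera rate
    angle = policy_net(frame)       # perception + control in one net
    apply_steering(angle)
```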
The clever bit is that at the core of the software are ‘deep neural networks’ that experts from Audi and NVIDIA have trained specifically for autonomous driving and for the recognition of dynamic traffic control signals. The software ‘learns’ from journey data, starting with the human driver’s reactions.
Beginning with a human driver at the wheel, the Audi Q7 deep learning concept gained a limited familiarity with the route and the surroundings by means of observation, aided by additional training cameras. That established a correlation between the driver’s reactions and the occurrences detected by the cameras. During the subsequent demonstration drives, the car was therefore able to understand instructions, from a temporary traffic signal for example, interpret them right away and act as the situation required. When a corresponding signal appeared, the concept car immediately changed its driving strategy and selected either the short route or the long one. Audi says the design of the system is so robust that it can even cope with disturbance variables such as changing weather and light conditions; it masters its tasks day and night, and even in direct sunlight or harsh artificial light.
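What Audi describes is, in essence, behavioural cloning: supervised learning on pairs of camera observations and recorded driver actions. Here is a minimal sketch of the idea, with synthetic data and a linear least-squares model standing in for the deep network.

```python
# Minimal behavioural-cloning sketch: fit a model to reproduce the
# driver's steering from camera-derived features. A linear
# least-squares model stands in for Audi's deep neural network, and
# the training data is synthetic.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic training drive: 500 frames, each reduced to 64 features
# (a real system would feed raw images into a convolutional network).
features = rng.normal(size=(500, 64))
true_weights = rng.normal(size=64)
driver_steering = features @ true_weights + rng.normal(scale=0.05, size=500)

# "Learn from the driver": least-squares fit of features -> steering.
weights, *_ = np.linalg.lstsq(features, driver_steering, rcond=None)

# On later drives the model imitates the recorded driver behaviour.
new_frame = rng.normal(size=64)
print("predicted steering:", new_frame @ weights)
```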
Audi says the learning methods used for the Audi Q7 deep learning concept are essentially very much like those of deep reinforcement learning. This method was the underlying principle behind the Audi presence at the Conference and Workshop on Neural Information Processing Systems (NIPS), an AI event held in Barcelona in December. There, the neural networks – loosely modelled on the human brain – were also trained for a particular application. While the 1:8 scale model car at NIPS learned how to park through trial and error, during its training runs the network of the Audi Q7 deep learning concept received concrete data it found relevant – in other words, it learned from the driver.
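Reinforcement learning differs in that there is no driver to imitate: the model car improved by trial and error against a reward signal. A toy tabular Q-learning example on a one-dimensional ‘parking’ task gives the flavour; it is purely illustrative and far simpler than the deep variant Audi used.

```python
# Toy Q-learning sketch: an agent learns by trial and error to reach
# a parking slot on a 1-D strip. Tabular Q-learning stands in for the
# deep reinforcement learning used on Audi's NIPS model car.

import random

N_CELLS, TARGET = 10, 7            # strip of 10 positions, slot at 7
ACTIONS = (-1, +1)                 # move left / move right
q = [[0.0, 0.0] for _ in range(N_CELLS)]

random.seed(0)
for _ in range(500):                           # training episodes
    pos = random.randrange(N_CELLS)
    for _ in range(50):                        # steps per episode
        if random.random() < 0.1:              # explore occasionally
            a = random.randrange(2)
        else:                                  # otherwise exploit
            a = max((0, 1), key=lambda i: q[pos][i])
        nxt = min(max(pos + ACTIONS[a], 0), N_CELLS - 1)
        reward = 1.0 if nxt == TARGET else -0.01
        # Q-update: nudge the value toward reward + discounted future.
        q[pos][a] += 0.1 * (reward + 0.9 * max(q[nxt]) - q[pos][a])
        pos = nxt
        if pos == TARGET:
            break

# Greedy rollout after training: drive from cell 0 to the slot.
pos, path = 0, [0]
for _ in range(20):
    if pos == TARGET:
        break
    a = max((0, 1), key=lambda i: q[pos][i])
    pos = min(max(pos + ACTIONS[a], 0), N_CELLS - 1)
    path.append(pos)
print(path)                        # expected: [0, 1, 2, ..., 7]
```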
Audi says it is evaluating various approaches and methods for machine learning with the aim being to find the optimal method for the specific application being studied. Collaborative efforts by companies in the IT and automotive industries are also of “tremendous value for future implementation in concepts and production cars,” Audi says.
NVIDIA is a specialist in semiconductor technologies and a long-time supplier to Audi; the relationship illustrates the importance of supplier collaboration in this area. The Audi A4 was using an NVIDIA chip as early as 2007, and two years later NVIDIA technology allowed the Audi A8 to achieve a new dimension in visual displays. The Modular Infotainment Platform (MIB), introduced in 2013, featured the Tegra 2 processor from NVIDIA, and the MIB2 followed in the Audi Q7 in 2015, running on an NVIDIA T30 processor.
The platform’s next level of development is the MIB2+ – which is premiering this year in the new generation of the Audi A8. Its key element is the Tegra K1 processor, which makes new functions possible and has the computing power needed to support several high-resolution displays – including the second-generation Audi virtual cockpit. Onboard and online information will merge, making the car part of the cloud to a greater degree than ever, Audi points out.
Together with the MIB2+, the central driver assistance controller (zFAS) in the new Audi A8 is also making its series debut. The zFAS uses high-performance processors to evaluate the signals from all sensors in real time and create a model of the car’s surroundings that represents the prevailing traffic situation as accurately as possible. This model lets the zFAS calculate upcoming manoeuvres in advance, taking a look into the future, so to speak.
The Tegra K1 processor is also on board the zFAS, and NVIDIA’s X1 processor will be introduced in future. Audi and NVIDIA are planning to intensify their long-standing partnership by combining NVIDIA’s expertise in development environments for AI applications with Audi’s experience in the area of vehicle automation.
Another key Audi partner is Mobileye, whose image-processing chip is also integrated in the zFAS. The high-tech Israeli company is a leader in the field of image recognition for automotive applications. Mobileye already supplies a camera for use in a range of Audi models – the Audi Q7, the A4/A5 series and the Q5 – and the product’s image-processing software can recognise a large number of objects, including lane markings, vehicles, traffic signs and pedestrians. Today, defining the characteristics needed to clearly classify objects is still done manually.
In the new Audi A8, Audi and Mobileye are demonstrating the next level of development – with image recognition that uses deep learning methods for the first time. This significantly reduces the need for manual training methods during the development phase. Deep neural networks enable the system to be self-learning when determining which characteristics are appropriate and relevant for identifying the various objects. With this methodology the car can even recognise empty driving spaces, an important prerequisite for safe, piloted driving.
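Recognising empty driving space is typically framed as per-pixel classification (semantic segmentation): the network labels each pixel as drivable or not, and the drivable region ahead defines the free corridor. Below is a hedged sketch of that framing, with a dummy model in place of a trained segmentation network (the real Audi/Mobileye system is not public).

```python
# Sketch of free-space detection as per-pixel classification. The
# "network" here is a dummy that labels the lower half of the image
# drivable; a real system would use a trained deep segmentation
# network, which is not what this placeholder implements.

import numpy as np


def segment_drivable(frame: np.ndarray) -> np.ndarray:
    """Return a boolean mask: True where a pixel is drivable road.
    Dummy stand-in for a learned segmentation model."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    mask[frame.shape[0] // 2:, :] = True   # pretend lower half is road
    return mask


frame = np.zeros((480, 640, 3), dtype=np.uint8)
mask = segment_drivable(frame)

# A simple planning-oriented summary: how far the drivable corridor
# extends up the image in the centre column.
centre = mask[:, mask.shape[1] // 2]
print(f"drivable pixels in centre column: {int(centre.sum())} of {len(centre)}")
```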
The traffic jam pilot function will be offered in a series production model for the first time in the new A8. This is the first piloted driving function in series production that will enable the driver to let the vehicle take over full control at times. With this step the stage is set to begin the next decade with higher levels of automation in a growing number of driving situations.
Significance
Autonomous drive will necessitate a major rethink of the automobile in terms of drive systems and interior design. The two concepts illustrate how AI can play a major role in enhancing the bespoke experience for the driver inside the car, and how machine learning can improve autonomous drive operation via the creation of data-point-based scenarios that are further refined as journeys accumulate. AI-based systems can also boost safety by helping to smooth the transition between human and automated operation. Working with ADAS, they can monitor driver attention and road conditions, increasing automated driving support as necessary to buttress driver engagement. Audi’s relationship with NVIDIA and Mobileye also highlights the vital role of key suppliers in developing such emerging technologies.
Fully autonomous vehicles are not expected until around 2030, but the early part of the 2020s will see more ‘highly autonomous’ vehicles coming to market. In summer 2016, Ford boldly stated its plans to produce a fleet of SAE Level 4-capable* cars for ride-sharing services, such as Uber and Lyft, by 2021.
——————
*just-auto’s research unit, QUBE, defines broad categories for levels of driving automation as follows:
Level 0: No automation – e.g. Park distance control.
Level 1: Driver Assistance – e.g. Lane departure warning (LDW); Blind-spot monitoring.
Level 2: Partial automation – e.g. Adaptive cruise control (ACC); Automatic emergency braking (AEB); Semi-parking assist.
Level 3: Conditional automation – e.g. Highway driving assist.
Level 4: High automation – e.g. Highway driving; Traffic jam assist.
Level 5: Full automation – e.g. Urban driving; Valet parking (driverless parking).