The interface of robotics and AI promises to be a game-changer in the automotive space, as robots develop into collaborative and adaptive partners. We hear from Dong Zhang, VP of Lotus Robotics and General Manager of Lotus Robotics Europe at Lotus Cars.


Please note that Dong Zhang will be speaking on this subject at the GlobalData Automotive Europe Conference 2025, set to take place on 15-16 October in Munich.



Can you explain the importance of the interface between robotics and AI? 

The robotics-AI interface is the critical link where physical capability meets cognitive intelligence—it’s what enables machines to perceive, reason, and act autonomously in complex real-world environments. Its value lies in four key areas: 

  1. Seamless perception-decision-action loops: Unlike traditional pre-programmed robotics, modern systems use AI-driven end-to-end architectures to merge sensing, decision-making, and physical execution into one workflow. This eliminates delays, letting machines respond instantly to dynamic scenarios—such as unexpected road obstructions or human interactions—just as a human would. A minimal sketch of such a loop appears after this list. 
  2. Adaptive problem-solving: AI empowers robots to learn from data, rather than relying on fixed rules. This means they can handle unplanned situations—like recovering from interference or navigating unknown terrain—by adjusting their actions based on real-time inputs, rather than failing at tasks outside their pre-set scope. 
  3. Hardware-software alignment: The interface drives co-evolution of AI algorithms and robotic hardware. For example, more precise electric actuators (replacing bulkier traditional systems) and high-frequency 3D vision sensors are now designed to match AI’s need for fast, accurate control—ensuring AI decisions translate to smooth, reliable physical movement. 
  4. Simplified innovation: Emerging open frameworks act like a “universal connector” for robotics and AI, letting developers mix and match components without overhauls. This cuts integration costs and speeds up the creation of new solutions, rather than being limited to closed, proprietary systems. 
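
Point 1 describes an architecture that is easiest to picture in code. The snippet below is a minimal, hypothetical sketch of a perception-decision-action loop driven by a learned policy rather than fixed rules; Observation, LearnedPolicy, and the caller-supplied sensors/actuators objects (with read_camera(), read_lidar(), and apply() methods) are illustrative assumptions, not Lotus software.

```python
import time
from dataclasses import dataclass


@dataclass
class Observation:
    camera_frame: bytes       # raw frame from a vision sensor
    lidar_points: list        # point cloud, if the vehicle has lidar
    timestamp: float


class LearnedPolicy:
    """Stand-in for an end-to-end neural network: observation in, action out."""

    def decide(self, obs: Observation) -> dict:
        # A real system would run a forward pass through a trained model here;
        # this placeholder just returns a conservative command.
        return {"steering": 0.0, "throttle": 0.1}


def control_loop(sensors, actuators, policy: LearnedPolicy, hz: float = 50.0) -> None:
    """Run perception, decision, and action as one tight, fixed-frequency loop."""
    period = 1.0 / hz
    while True:
        obs = Observation(                      # perception
            camera_frame=sensors.read_camera(),
            lidar_points=sensors.read_lidar(),
            timestamp=time.time(),
        )
        action = policy.decide(obs)             # decision from the learned policy
        actuators.apply(action)                 # physical execution
        time.sleep(period)                      # hold the loop rate
```

The point of the single loop is that perception, decision, and action share one clock, rather than passing messages between separately scheduled, rule-based modules.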

In short, this interface is the foundation for embodied AI—turning robots from repetitive task-executors into adaptive, collaborative partners. 

What does this mean for the evolution of ADAS and driverless vehicle tech? 


Advances in the robotics-AI interface are accelerating ADAS and driverless tech from “functional” to “safe, scalable, and human-like” in three key ways: 

  1. Sharper real-time performance: End-to-end AI systems compress perception, planning, and control into a single neural network—slashing latency and improving safety in edge cases (like unprotected turns or construction zones). By mimicking human reasoning, these systems handle complex scenarios more naturally, without the “rigidity” of traditional rule-based ADAS. 
  2. Wider accessibility: Sensor-agnostic designs and hardware-software decoupling let ADAS solutions work across vehicle segments, not just high-end models. This scalability reduces costs significantly (by streamlining component reuse and software deployment) while maintaining performance—critical for making advanced driver tech mainstream. A simple sketch of this decoupling appears after this list. 
  3. Faster, safer iteration: AI-driven ADAS and driverless systems learn continuously from real-world data. Automated data labelling and simulation tools let them quickly improve coverage of rare “long-tail” scenarios (like unusual weather or unexpected road users) without expensive manual testing—so the system “gets better over time,” building user trust. 
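
As a rough illustration of the sensor-agnostic, decoupled design in point 2, the sketch below writes the planning-side code against an abstract sensor interface so that different hardware configurations can be swapped underneath it. DepthSensor, StereoCameraSensor, LidarSensor, and nearest_obstacle_distance are hypothetical names chosen for illustration, not a real ADAS API.

```python
from typing import List


class DepthSensor:
    """Abstract sensor interface the driving software is written against."""

    def depth_map(self) -> List[List[float]]:
        """Return a grid of distances to obstacles, in metres."""
        raise NotImplementedError


class StereoCameraSensor(DepthSensor):
    """Lower-cost configuration: depth estimated from stereo cameras."""

    def depth_map(self) -> List[List[float]]:
        return [[50.0] * 4 for _ in range(3)]   # placeholder values


class LidarSensor(DepthSensor):
    """Premium configuration: depth measured directly by lidar."""

    def depth_map(self) -> List[List[float]]:
        return [[49.8] * 4 for _ in range(3)]   # placeholder values


def nearest_obstacle_distance(sensor: DepthSensor) -> float:
    """Planning-side code: identical whichever sensor set is fitted."""
    return min(min(row) for row in sensor.depth_map())


if __name__ == "__main__":
    for fitted in (StereoCameraSensor(), LidarSensor()):
        print(type(fitted).__name__, nearest_obstacle_distance(fitted))
```

Because the planner only sees the DepthSensor interface, the same software build can run on a camera-only entry model or a lidar-equipped flagship, which is the component-reuse and cost point made above.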

It also supports regulatory progress: Standardised safety frameworks for AI-robotics integration are now extending to autonomous vehicles, ensuring decision-making is explainable and reliable—key for public acceptance and regulatory approval. Ultimately, this convergence turns ADAS from isolated assist features into comprehensive autonomous driving systems that are safer, more efficient, and closer to real-world deployment. 




Dong Zhang

Dong Zhang is Vice President and General Manager of Lotus Robotics Europe at Group Lotus. His prior experience includes product management at Audi AG and project management at Audi China, as well as a tenure as Product Owner in Software Engineering at Bosch Engineering GmbH. He holds a Master’s degree in Electrical Engineering and Electronics from the Technical University of Munich and a Bachelor’s degree in the same field from Universität Siegen.