California-based LiDAR specialist AEye has developed iDAR (Intelligent Detection And Ranging), a perception system that acts as the eyes and visual cortex of autonomous vehicles (AVs). Continuing just-auto/AIC’s series of interviews, Matthew Beecham caught up with Blair LaCorte, CEO of AEye, to get his take on how the marketplace for LiDAR is evolving.

Achieving the performance standards necessary for SAE Levels 3 – 5 driver autonomy at lower costs requires a fresh approach. Can you explain a little about AEye and what you are trying to achieve?

We call ourselves an artificial perception pioneer, and that’s because we have pioneered an approach to sensing that turns the usual model on its head. Instead of focusing on what’s achievable in a lab or development environment, AEye uses intelligent or “lean” sensing to focus on what matters in a driving environment. The goal is to capture more intelligent information with less data. This is imperative to rein in the time, cost and accuracy penalties inherent in traditional LiDAR systems.

By using AI-infused agile scanning, we are able to choose what to focus on in a scene, and in doing so increase performance and reduce system complexity, cost and power consumption to enable faster, more accurate and more efficient perception and path planning.
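As a rough illustration of what agile, prioritised scanning means in practice, here is a minimal Python sketch of a scan scheduler that interrogates high-priority regions of a scene first under a fixed per-frame budget. The region scoring, function names and budget are illustrative assumptions, not AEye’s actual scan-control interface.

```python
import heapq

# Minimal sketch of priority-driven ("agile") scanning: rather than sweeping
# the whole field of view uniformly, regions of interest are interrogated in
# priority order within a per-frame shot budget. Illustrative assumption only.

def schedule_scan(regions, budget):
    """regions: iterable of (priority, region_id); budget: shots available this frame.
    Returns region ids to interrogate, highest priority first."""
    heap = [(-priority, region_id) for priority, region_id in regions]
    heapq.heapify(heap)  # heapq is a min-heap, so priorities are negated
    plan = []
    while heap and len(plan) < budget:
        _, region_id = heapq.heappop(heap)
        plan.append(region_id)
    return plan

# Example: a pedestrian region outranks open road when only two shots remain.
print(schedule_scan([(0.1, "open_road"), (0.9, "pedestrian"), (0.5, "merge_lane")], 2))
# -> ['pedestrian', 'merge_lane']
```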

As effective as cameras are, they have a few drawbacks, such as limited range and degraded performance in rain, fog and varying light conditions. To what extent can sensor/data fusion help?

All of the technologies (LiDAR, camera, radar) have their place, but when you combine them, the whole is greater than the sum of its parts. Having said that, it’s all about how they are pieced together. Much of what’s being done today is sensor fusion – gathering disparate data from each component, then stitching it together.

By contrast, AEye combines LiDAR with camera data at the hardware level. Our camera and LiDAR actually share the same aperture to collect data, and we are able to use these novel datasets (“Dynamic Vixels”) to cue how intelligently we interrogate a scene. When you fuse a camera and LiDAR mechanically at the sensor, you are processing data as it’s being collected, enabling the system to quickly identify and track objects, using an additional layer of AI to predict their intended behaviour. The integration delivers data faster, more efficiently and more accurately than trying to register and align pixels and voxels in post-processing.

The edge-processing of the iDAR system enables the autonomous vehicle to more intelligently assess and respond to situational changes within a frame, thereby increasing the safety and efficiency of the overall system. For example, iDAR can identify objects with minimal structure, such as a bike, and differentiate objects of the same colour, such as a black tyre on asphalt. In addition, Dynamic Vixels can leverage the unique capabilities of agile LiDAR to detect changing weather and automatically increase power during fog, rain or snow.
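To make the idea of a hardware-fused pixel/voxel element more concrete, the sketch below shows what such a fused record and a weather-adaptive power rule might look like in Python. The field names, thresholds and scaling rule are assumptions made for illustration, not AEye’s actual Dynamic Vixel format or control logic.

```python
from dataclasses import dataclass

# Hypothetical illustration of a fused camera/LiDAR element. Field names and
# the power-adjustment rule are assumptions for clarity, not AEye's actual
# data format or control logic.

@dataclass
class FusedVixel:
    x: float          # 3D position from the LiDAR return (metres)
    y: float
    z: float
    intensity: float  # LiDAR return intensity, 0.0-1.0
    r: int            # co-registered camera colour for the same line of sight
    g: int
    b: int

def estimate_weather_degradation(vixels: list) -> float:
    """Crude proxy for fog, rain or snow: the fraction of weak LiDAR returns."""
    if not vixels:
        return 0.0
    weak = sum(1 for v in vixels if v.intensity < 0.1)
    return weak / len(vixels)

def adjust_laser_power(base_power: float, degradation: float, max_power: float) -> float:
    """Scale emitted power up as returns degrade, capped at an eye-safe limit."""
    return min(max_power, base_power * (1.0 + degradation))
```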

Why is LiDAR technology necessary for AVs in light of Elon Musk’s comments last year?

Musk is on an island when it comes to believing that LiDAR is not part of the solution for AVs. He’s betting on passive optical image processing systems. Unfortunately, existing 2D image processors and 2D-to-3D image conversion concepts have serious flaws that can only be addressed with massive computing power and, more importantly, algorithms that have not yet been invented and are many years away from becoming a reality. That makes the approach too costly, inefficient and cumbersome to achieve Level 5 autonomous driving at commercial scale.

The truth is, transportation requires better than six 9’s reliability, and you don’t achieve that by eliminating a modality that can help get you there. LiDAR handles certain things really well – it’s known for its precision and accuracy, its ability to spot objects in the distance, and its reliability in adverse weather conditions and challenging lighting scenarios.

Having said that, we agree with Musk on one point: LiDAR alone won’t suffice. Perception is a system-level problem that requires a system-level solution. At AEye we know that integrating cameras, agile LiDAR and AI yields a perception system that is better than the sum of its parts. It surpasses both the human eye and the camera alone, which is essential for as long as the sophistication of the human brain has yet to be replicated. Furthermore, size, weight, power and cost are decreasing for vehicle navigation-grade LiDAR, and they will fall further. AEye, and competition in the industry as a whole, will see to that.

How do you see the marketplace for LiDAR evolving?

LiDAR is considered by most to be an essential part of the sensor suite for mobility and ADAS initiatives, and the current recession will likely force a market consolidation that was already imminent. While dozens of LiDAR companies exist publicly, in reality there are fewer than a handful of legitimate contenders in the mobility market, and the same in ADAS. Those that lacked first-mover advantage, tech differentiation and Tier 1 partnerships were already seeing product timing, market opportunity and fundraising challenges pre-COVID. These issues will only be exacerbated by the economic downturn, causing the LiDAR market to consolidate and mature quickly.

Expect a no-nonsense approach to LiDAR evaluation moving forward. The field of LiDAR players is narrowing, the evaluation criteria are crystallising, the mobility and ADAS business models are solidifying, and the importance of established go-to-market partners has never been clearer. A rationalised LiDAR market will lead to more standardisation, closer partner integrations and general market maturity that will help to accelerate the development and deployment of this life-saving technology.

As this happens, we’ll see the following themes play out in automotive LiDAR:

1. Complex mechanical systems will phase out, at least in ADAS, as systems move to solid-state. In general, solid-state technology reduces complexity, unlocks a desirable cost curve and, in turn, increases product reliability and quality.

2. 1550nm will be the preferred wavelength for high-performance ADAS features. Recent advancements in the automotive supply chain have acted as a springboard for technologies utilising the 1550nm wavelength. Solid-state solutions that leverage this inherent performance gain, architected into a low-power system design, will be able to achieve optimal pricing with unmatched performance.

3. OEMs pursuing ADAS will only accept bids from Tier 1 suppliers. OEMs purchasing ADAS solutions demand extensive warranty periods and rigorous quality and functional safety requirements for high volume programmes to limit exposure, which only certain global contract manufacturers and established automotive suppliers can meet and guarantee. As passenger vehicle and trucking OEMs accelerate their ADAS programmes, they will look to stabilise and reduce risk by sourcing through trusted, well established vendors. Startup partnerships with Tier 1 suppliers will be a key to success, combining Silicon Valley agile engineering with refined processes for the production of automotive-qualified components.

4. Value creation will be defined by the quality of data and perception software, directly correlated to sensor performance. The winners will be those that deliver the highest confidence data, within the customer-defined performance parameters (e.g. range, resolution, update rate), in any and all environments and road conditions. This will enable OEMs to address corner cases, such as detecting a small, low-reflectivity object on the highway, in order to release new feature functionality that provides value to the consumer via added comfort or safety.

5. There will be 3-4 LiDAR sensor “winners”. The winners will be those with a system-level design with complementary perception that is assembled and qualified by a Tier 1 and that addresses OEM requirements specific to their use cases.

Additionally, as shelter-in-place continues to halt in-person travel and demos, you will see agile tech companies find creative ways to demonstrate their technological capabilities.

AEye has found a way to let customers and partners from around the world engage in real-time, fully-interactive test drives with an AEye engineer on Bay Area roads. Using Discord, a platform optimised to deliver high resolution, low latency performance within complex 3D environments, we’ve been delivering accurate, immersive real-time demos, showcasing product capabilities via Discord’s video feature, to key stakeholders from the comfort of their home or office.

In an autonomous car, is it all about processing power for sensor fusion or are there other aspects to make computing dependable?

Once you have fused the sensor data, you need to reliably detect each object, then classify it (Is it a car, truck, tree, or a toddler on a tricycle?), while at the same time calculating its position and velocity so you can forecast its motion. You have to do this for every object in a scene. All of this information is processed and sent from the vehicle’s perception system to the motion planning system, which then tells the vehicle to stop, accelerate and/or change direction.

Most of the processing required for autonomy occurs here, which is why we aim to push intelligence to the edge of the system, where we can immediately identify only the information that is relevant to the safe operation of the vehicle and ignore what isn’t. By focusing on only the salient information, we enable the perception system to process more efficiently and accurately.
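As a sketch of that detect, classify, track and forecast flow, the Python below uses a constant-velocity forecast to keep only the objects likely to matter to the ego vehicle in the next few seconds and hands that reduced set to motion planning. Class names, thresholds and the forecasting rule are illustrative assumptions, not AEye’s perception software.

```python
from dataclasses import dataclass

# Hypothetical perception-to-planning flow: detect, classify, estimate position
# and velocity, forecast motion, and pass only salient objects to the planner.
# Names and thresholds are illustrative assumptions only.

@dataclass
class TrackedObject:
    label: str        # e.g. "car", "truck", "tree", "cyclist"
    position: tuple   # (x, y) in metres, ego-vehicle frame
    velocity: tuple   # (vx, vy) in metres per second

def forecast(obj: TrackedObject, horizon_s: float) -> tuple:
    """Constant-velocity prediction of where the object will be in horizon_s seconds."""
    return (obj.position[0] + obj.velocity[0] * horizon_s,
            obj.position[1] + obj.velocity[1] * horizon_s)

def is_salient(obj: TrackedObject, horizon_s: float = 3.0, radius_m: float = 30.0) -> bool:
    """Keep only objects whose forecast position falls near the ego vehicle."""
    fx, fy = forecast(obj, horizon_s)
    return (fx * fx + fy * fy) ** 0.5 < radius_m

def perception_step(detections: list) -> list:
    """Filter the scene down to what the motion planner actually needs to consider."""
    return [obj for obj in detections if is_salient(obj)]
```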

We are hearing that while manufacturers remain excited about automated driving, it will take more time and effort to realise fully. What’s your view on the path towards AD?

Companies with well-defined business models will continue to accelerate their autonomous efforts, while those without will roll their programmes into R&D.

There are several subsets of autonomy with clear-cut business models, and the first we’ll classify as early-stage mobility on-demand. These are companies like Cruise and Waymo, which have embraced constrained autonomy. Instead of trying to operate at all times and in all conditions, they have downscoped their initial deployment to geofenced, low speed, heavily mapped environments. These players have committed to a grassroots model where they intend to buy assets in select cities and build or partner with ridesharing app services, such as Uber and Lyft.

Similar constrained autonomy implementations exist in heavy industry (construction and mining), as well as with campus shuttles, logistics centres and ports. Each has created a clear business case that involves automating repetitive tasks and constraining variables to solve the technology problem, which improves both safety and productivity, while resulting in significant cost savings.

The second subset of companies that have been accelerating are those who own and manage fleets: think delivery companies like Amazon, UPS and FedEx, which reduce costs and risk by removing the human operator while increasing utilisation of their existing vehicles. Reports say trucking companies can lower operating costs by 45% by adding autonomous capabilities. Plus, these long-haul trucks operate almost entirely on the highway, which greatly reduces the technology challenges for autonomy.

The third subset accelerating the acquisition, integration, and deployment of autonomous technology is the ADAS market. In this segment, automotive suppliers deliver modules that enhance the driver experience and/or improve safety – such as highway autopilot, adaptive cruise control, lane keep assist, lane change assist, automatic emergency braking, and predictive evasion. These advanced features range in functionality and price, with basic feature packages starting at roughly $500 and sophisticated capabilities like advanced highway autopilot commanding upwards of $6,000.

The pandemic has slowed robotaxi efforts and non-essential R&D, as companies tighten spending and shift priorities to weather the storm. However, the silver lining is the accelerated demand for delivery and logistics, as e-commerce and at-home services skyrocket during self-quarantine. This bodes well for the automation of logistics and distribution, ports, long haul trucks, and delivery fleet vehicles.

Additionally, automotive companies are “burning the midnight oil” to maintain production schedules. Automakers will need to keep a strict sourcing schedule in order to release vehicles roughly three years after component selection. As global supply chains revamp, automotive companies are resuming sourcing on a tighter timeline, but I would anticipate minimal disruption both to their predetermined start of production and to their planned rollout of ADAS systems.

As this pandemic runs its course and the economy begins to recover, the hope is that we will see a V-shaped economic growth pattern. In the meantime, larger privately funded companies will continue to advance technology development. A subset of growth companies will also stay the course, with minimal impact on product development or market adoption, while smaller startups, with little traction or runway, will begin to consolidate.

A while ago, IT in cars was seen as more of an enabler, but nowadays it is viewed as a core element, linking the home and personal devices. What opportunities does this present to AEye?

There are several areas we are investigating. Probably the most obvious is where intelligent 3D sensing enhances existing driver information systems by providing real-time input on the world around the vehicle and alerting the driver to imminent safety issues. In parallel, we will likely see our system enabling Simultaneous Localisation and Mapping (SLAM) applications. The combination of these will likely enable a variety of infotainment and augmented reality applications once vehicles achieve Level 4 autonomy.

I guess developing solutions for autonomous vehicles is just the first step for AEye. What other sector opportunities are you looking at?

We have created an intelligent, agile sensor that is software-definable to meet the unique needs of any application, and are currently developing and testing our 4Sight sensor with a wide range of customers and integrators in several industries and markets, including automotive, trucking, transit, construction, rail, intelligent traffic systems (ITS), aerospace and defence.