
Interview: Harman VP on why sensor fusion makes sense for ADAS and autonomous driving

Robert Kempf

Vice President, Harman

The automotive industry remains divided on the sensor configuration needed to support autonomous driving. Tesla is resolute that cameras and radar systems will be sufficient, yet both systems have their shortcomings. Harman believes safe automated and autonomous driving can only be achieved by combining LiDAR (Light Detection and Ranging), radar and cameras. We caught up with Robert Kempf, Harman's vice president of ADAS/automated driving, to discuss how the company is overcoming today's LiDAR issues with its recently launched solid-state LiDAR technology, and why merging this data in the ECU architecture is key to its success.

The appropriate sensor topology for autonomous driving is a highly debated topic. What is Harman's current stance?

There is much discussion, and Harman is of the opinion that a combination of LiDAR, radar and cameras is the optimum to support the quality and diversity of information required for automated and autonomous driving. Each offers advantages and disadvantages, so the successful fusion of the data sets provided by these sensors is imperative for a robust solution.

Are radar and cameras alone not enough?

Both systems are hugely important for autonomous driving. However, there are limitations. Today's automotive radar systems deliver their information selectively and over a small detection range. They are also vulnerable to interference from sensors of the same type, so they could be severely hampered by other radar-equipped vehicles on the road, for example.


Meanwhile, cameras are hugely affected by a range of conditions: lighting, and weather such as rain, fog or snow, can all disrupt performance. They are often not robust enough to provide the consistent, error-free and detailed information needed for autonomous driving.

What are the challenges of LiDAR in its current form?

To date, LiDAR has been too complex and too expensive for mass-market use. Commercially available LiDAR has relied on mechanically rotating components that are susceptible to vibration and shock. These systems either lack the resolution and range to deliver the quality of data necessary, or are too large and expensive at the required performance level. Additionally, the more complex and sizeable the component, the harder it is to protect from dust and moisture ingress. Large, fragile systems with high unit costs are typically not viable for mass-market automotive use.


To overcome these issues, Harman's partner Innoviz has developed solid-state LiDAR technology designed from the ground up specifically for automotive applications. InnovizOne solves the previous issues of LiDAR and provides the industry with a commercially viable solution.

What does solid-state LiDAR offer? 

The new solid-state design removes moving parts to avoid most of the previous disadvantages and deliver high-end performance more efficiently. With fewer components, packaging is significantly more compact, yet the sensor offers a longer range and a typically higher resolution of 0.1° x 0.1° over distances of 150m or more, so even small objects can be detected at long range. This provides a high-precision 3D image of the vehicle's environment in the form of a '3D point cloud' that can be comprehensively processed by advanced driver assistance systems (ADAS). Moreover, LiDAR is not susceptible to interference from other sensors and is able to operate independently of ambient light levels.
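
To put that angular resolution in concrete terms, a back-of-the-envelope calculation (our own arithmetic, not a published Innoviz figure) shows the spacing between adjacent points at a given distance:

```python
import math

def point_spacing(distance_m: float, resolution_deg: float = 0.1) -> float:
    """Approximate spacing between adjacent points of a LiDAR scan
    with the given angular resolution, at the given range."""
    return distance_m * math.tan(math.radians(resolution_deg))

# At 150m, a 0.1 degree resolution places points roughly 26cm apart,
# so an object a few tens of centimetres across still returns several points.
print(f"{point_spacing(150.0):.2f} m")  # ~0.26 m
```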


Its range, detailed resolution and interference immunity mean that only LiDAR can operate under all environmental conditions with the critical system reliability and detection rates for all objects. Scenarios such as motorway driving at high speed, and city-centre driving with its complexity of road users and pedestrians, can only be reliably supported by LiDAR. Its importance is reflected by industry predictions: BIS Research estimates that the market for automotive light detection and ranging will grow from US$353 million in 2017 to US$8.32 billion by 2028.

When will solid-state LiDAR hit the mass market? 

The first solid-state LiDAR in the automotive industry is set to be used on the first generation of autonomous BMW vehicles, currently planned for 2021. This small series production of vehicles will feature InnovizOne. It's a MEMS-based solution that meets the future technical requirements of the automotive industry and can be integrated into the system architecture to deliver high-resolution 3D point clouds at distances of up to 250m, regardless of light and weather conditions. From this initial work with BMW, it's expected that significant volumes, with correspondingly optimised costs, could be available from 2024.

How will LiDAR fit into the architecture for autonomous vehicles?

Harman has determined that a single, forward-facing, high-resolution LiDAR with a long range (50-200m) will be sufficient to support cars in SAE L3 automated driving with a human driver. However, robotaxis will need significantly more input about their surrounding environment: a long-range LiDAR sensor front and rear, with two short-range sensors (approx. 20m) on each side. All LiDAR sensors will need to offer a resolution of 0.1° x 0.1°, a field of view of approximately 115° horizontally and 25° vertically, and a frame rate of 25 FPS, and withstand temperature fluctuations of -40° to +85° Celsius. Critically, this data will need to be seamlessly combined with that from the radar and cameras in the car.
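
Taken together, those figures imply a substantial data rate per sensor. The following sketch derives the point-cloud throughput from the numbers quoted above; the calculation is our own illustration, not a Harman specification:

```python
# Rough point-cloud throughput for one LiDAR with the parameters quoted above.
H_FOV_DEG, V_FOV_DEG = 115.0, 25.0  # horizontal / vertical field of view
RES_DEG = 0.1                        # angular resolution per axis
FPS = 25                             # frame rate

points_per_frame = (H_FOV_DEG / RES_DEG) * (V_FOV_DEG / RES_DEG)
points_per_second = points_per_frame * FPS

print(f"{points_per_frame:,.0f} points per frame")    # 287,500
print(f"{points_per_second:,.0f} points per second")  # 7,187,500
```

Around seven million points per second from a single sensor underlines why this data has to be fused and processed centrally rather than on small per-sensor ECUs.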

Can you explain in more detail? 

Sure. Obviously, just having the data from these sensors isn't enough. The most complex challenge is amalgamating the data from cameras, radar and LiDAR, as well as data from the cloud and V2X infrastructure, to create a highly accurate image of the environment. Sensor fusion is therefore imperative for the development of a robust system to support automated driving. 
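
As a deliberately minimal illustration of what object-level fusion means, the sketch below matches detections from different sensors by proximity and merges them into confidence-weighted objects. The data structure, gating threshold and weighting are illustrative assumptions, not Harman's implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar", "radar" or "camera"
    x: float           # longitudinal position, metres
    y: float           # lateral position, metres
    confidence: float  # detection confidence, 0..1

def fuse(detections: list[Detection], gate_m: float = 1.5) -> list[Detection]:
    """Greedy object-level fusion: detections falling within gate_m of an
    existing fused object are merged into a confidence-weighted centroid."""
    fused: list[Detection] = []
    for d in sorted(detections, key=lambda d: -d.confidence):
        for f in fused:
            if abs(f.x - d.x) < gate_m and abs(f.y - d.y) < gate_m:
                w = f.confidence + d.confidence
                f.x = (f.x * f.confidence + d.x * d.confidence) / w
                f.y = (f.y * f.confidence + d.y * d.confidence) / w
                f.confidence = min(1.0, w)
                break
        else:
            fused.append(Detection("fused", d.x, d.y, d.confidence))
    return fused

# One pedestrian seen by three sensors collapses into a single fused object.
print(fuse([Detection("lidar", 42.1, 0.4, 0.9),
            Detection("camera", 42.4, 0.5, 0.6),
            Detection("radar", 41.9, 0.3, 0.7)]))
```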


At present, every sensor has its own small ECU processing the data and providing object outputs. But to meet the higher demands of the future, this needs to migrate to a domain controller or central computer architecture that enables the various sensor data to be processed in parallel, guaranteeing a better detection rate. Powerful control devices with sophisticated algorithms and neural networks are needed to process, recognise and classify objects in the 3D point cloud and achieve the detection quality and accuracy needed for autonomous driving.
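
The architectural shift described here, from one small ECU per sensor to a central computer handling all streams at once, can be sketched roughly as follows. The per-sensor functions are placeholders standing in for real perception pipelines, not production code:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder per-sensor pipelines; on a real domain controller these would be
# neural-network stages operating on raw camera frames and 3D point clouds.
def process_lidar(frame):
    return {"sensor": "lidar", "objects": ["car", "pedestrian"]}

def process_radar(frame):
    return {"sensor": "radar", "objects": ["car"]}

def process_camera(frame):
    return {"sensor": "camera", "objects": ["car", "traffic_sign"]}

def perception_cycle(lidar_frame, radar_frame, camera_frame):
    """Run all sensor pipelines in parallel on the central computer,
    then hand the per-sensor object lists to the fusion stage."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(process_lidar, lidar_frame),
                   pool.submit(process_radar, radar_frame),
                   pool.submit(process_camera, camera_frame)]
        return [f.result() for f in futures]

print(perception_cycle(None, None, None))
```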

What about the challenges? 

Given the complexity of autonomous driving and the inter-company collaboration it requires, it's imperative for SAE L3 and higher vehicles that hardware and software provided by different suppliers can be linked, along with other sensors and ADAS solutions. The system must also offer the scalability and flexibility to handle different sensor topologies, as cost will often determine which sensors and features are possible. This fusion of information will help with the rollout of the technology and its affordability in the long term.


Ultimately, sensor fusion is key to future success, and Harman is looking at this bigger picture of optimum sensor mix to fulfil the necessary performance, cost and durability requirements. 

Do you think advanced LiDAR technology will increase momentum towards automated driving? 

Yes. We are anticipating an initial growth spurt for conditionally automated SAE L3 vehicles as early as 2021, with larger fleets of robotaxis expected to enter the market in the following years; these will initially be used within restricted areas, i.e. 'Operational Design Domains' (ODDs). These developments will be facilitated by the new LiDAR technology and its integration with radar and camera data. Based on the data from and development of these fleets, we expect to see major advances in environmental modelling, with SAE L5 systems in use within these ODDs in a few years.
