If we’re talking about the safety of the driver and the people around them, why not both types of sensors? LIDAR has things it excels at, and visible-spectrum cameras have things they do well too. That way the data-processing side has more to rely on, instead of putting all its eggs in one basket.
Cost seems to be a pretty good reason. Admittedly, until I looked it up 5 minutes ago I thought it was just 100-200% more expensive than cameras, but it seems to be much more than that.
On top of that, there are the problems of weather and high energy usage. This is more of a problem than just “not working in rain”: if the autonomous driving system is designed to rely on data from a sensor that stops working when it rains, that can be worse than not having the sensor in the first place. That’s what I mean when I call LIDAR a crutch.
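To make that “crutch” argument concrete, here’s a minimal sketch of the difference between fusing a sensor behind a health check and trusting it unconditionally. Everything in it (the function name, the fixed weights) is hypothetical, not how any real stack works:

```python
# Purely illustrative sketch -- names and weights are made up.
def fused_distance(camera_est_m, lidar_est_m, lidar_healthy):
    """Blend two distance estimates for the same obstacle."""
    if lidar_est_m is not None and lidar_healthy:
        # Normal case: weight the direct LiDAR range more heavily.
        return 0.3 * camera_est_m + 0.7 * lidar_est_m
    # Degraded case (e.g. heavy rain): fall back to the camera estimate.
    # A system designed around LiDAR that *skips* this check would keep
    # blending rain-scattered returns into its estimate, which is worse
    # than never having had the sensor at all.
    return camera_est_m

# In rain, a scattered return might read 12 m when the car is 40 m away:
print(fused_distance(40.0, 12.0, lidar_healthy=False))  # 40.0, camera only
```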
That’s a pretty good point about rain and snow: if LIDAR can’t be used, it could leave the system in a much worse spot. It’s getting to the point where I’m beginning to think that fully self-driving cars just won’t be 100% possible in all conditions in all locations.
For instance, where I live, we can have some bad winters: snow, ice, slippery conditions. People have a tough time with these conditions, and I’d imagine it’d be even harder for a self-driving car, especially given how the sensor suites work. My car has that intelligent cruise control where it’ll slow down when it senses a car ahead of me, then match its speed. That feature stops working if too much snow accumulates on the sensors.
Optical cameras alone have issues of their own that can’t be handled, though. It’s the combination of the two, along with other things like ultrasonic sensors, that makes them safe. More sensors in general are better because they reduce the computational burden and provide redundancy, even if that redundancy is to safely stop.
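A rough sketch of that “redundancy, even if that redundancy is to safely stop” idea, with made-up sensor names and a deliberately simplistic vote:

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "continue driving"
    SAFE_STOP = "minimal-risk maneuver: pull over and stop"

def arbitrate(camera_ok: bool, lidar_ok: bool, ultrasonic_ok: bool) -> Action:
    # Count how many independent sensing modalities are still healthy.
    healthy = sum([camera_ok, lidar_ok, ultrasonic_ok])
    if healthy >= 2:
        # Two or more modalities can still cross-check each other.
        return Action.CONTINUE
    # With one (or zero) left there is nothing to validate against,
    # so the safest remaining use of the redundancy is to stop.
    return Action.SAFE_STOP

print(arbitrate(camera_ok=True, lidar_ok=False, ultrasonic_ok=False))
# Action.SAFE_STOP
```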
Cost is certainly an issue, but on $40k+ vehicles it’s cheap enough for other EV makers to include it in the price. Volvo, for instance, is using Luminar’s version at a cost of about $500 (https://www.wired.com/story/sleeker-lidar-moves-volvo-closer-selling-self-driving-car/).
Image processing is expensive even with dedicated hardware, and LiDAR provides enough extra information to avoid having to make certain calculations off of images alone (like deltas between image series to calculate distance). Those calculations are further amplified by conditions where images alone don’t provide enough information, similar to how there are conditions where the LiDAR data alone wouldn’t be sufficient.
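For what “deltas between image series to calculate distance” looks like in practice: with a calibrated stereo pair the classic relation is Z = f·B/d, and the expensive part is computing the disparity d by matching pixels across images. A LiDAR return, by contrast, already is a range. A sketch with generic formulas, no specific hardware assumed:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo relation Z = f * B / d.
    The formula itself is cheap; finding `disparity_px` is the costly
    step, since it means matching pixels between the two images."""
    return focal_px * baseline_m / disparity_px

def lidar_range_m(time_of_flight_s: float) -> float:
    """A LiDAR point needs no matching: range = c * t / 2."""
    C = 299_792_458.0  # speed of light, m/s
    return C * time_of_flight_s / 2.0

print(stereo_depth_m(focal_px=1000.0, baseline_m=0.3, disparity_px=10.0))  # 30.0 m
print(lidar_range_m(2e-7))  # ~30.0 m
```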
And you’re suggesting using LIDAR, which is more expensive and power-hungry, as a replacement for those computations?
I meant the computations are expensive, i.e. slow to perform even with good processors. When you need to do something millions of times, anything to make that faster helps with the overall safety of the system.
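Some back-of-envelope numbers behind “millions of times” (rough, order-of-magnitude figures only, not tied to any specific sensor):

```python
# One 1080p camera at 30 fps: every pixel is raw input from which
# depth still has to be *derived*.
pixels_per_sec = 1920 * 1080 * 30
print(f"{pixels_per_sec:,} pixels/s")   # 62,208,000 pixels/s

# A typical automotive LiDAR emits on the order of a million points per
# second, each of which already carries a direct range measurement.
lidar_points_per_sec = 1_000_000
print(f"{lidar_points_per_sec:,} points/s")
```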