• @Geek_King
    link
    English
2
1 year ago

I disagree with you; I don’t think visual cameras alone are up to the task. There was an instance of a Tesla in Autopilot mode driving at night with a drunk driver. This took place in Texas on the highway; the car’s camera footage was released, and it showed the Autopilot failing to identify the police car in its lane, red/blue lights flashing, as a stationary obstacle. It didn’t register that there was a car in the way until about 1 second before the 55 mph impact, and it turned off Autopilot in that final second.

Having multiple layers of sensors, some being good at actually sensing a stationary obstacle, plus accurate range finding, plus visual analysis to pick out people and animals, that’s the way to go.

Visual-range-only cameras were also just reported to have a harder time recognizing people of color and children.

    • Eager Eagle
      link
      English
1
1 year ago

the car’s camera footage was released and it showed the autopilot not identifying the police car in the lane with its red/blue lights flashing

      If the obstacle was visible in the footage, the incident could have been avoided with visible spectrum cameras alone. Once again, a problem with the data processing, not acquisition.

      • @Geek_King
        link
        English
1
1 year ago

If we’re talking about the safety of the driver and the people around them, why not both types of sensors? LIDAR has things it excels at, and visual spectrum cameras have things they do well too. That way the data processing side has more things to rely on, instead of putting all the eggs in one basket.

        • Eager Eagle
          link
          English
          1
          edit-2
          1 year ago

          why not both types of sensors

          Cost seems to be a pretty good reason. Admittedly, until I looked it up 5 minutes ago I thought it was just 100-200% more expensive than cameras, but it seems to be much more than that.

On top of that, there are the problems of weather and high energy usage. This is more of a problem than just “not working in rain”: if the autonomous driving system is designed to rely on data from a sensor that stops working when it rains, this can be worse than not having that sensor in the first place. This is what I mean when I say that LIDAR is a crutch.

          • @Geek_King
            link
            English
1
1 year ago

That’s a pretty good point, the part about LIDAR not being usable when it’s raining or snowing, which could leave the system in a much worse spot. It’s getting to the point where I’m beginning to think that fully self-driving cars just won’t be 100% possible in all conditions in all locations.

For instance, where I live, we can have some bad winters: snow, ice, slippery conditions. People have a tough time with these conditions, and I’d imagine it’d be even harder for a self-driving car, especially given how the sensor suites work. My car has that intelligent cruise control where it’ll slow down when it senses a car ahead of me, then match its speed. That feature stops working if too much snow accumulates on the sensors.

          • @[email protected]
            link
            fedilink
            English
            -1
            edit-2
            1 year ago

Optical cameras alone have issues as well that can’t be handled, though. It’s the combination of the two, along with other things like ultrasonic sensors, that makes them safe. More sensors in general are better because they reduce the computational burden and provide redundancy - even if that redundancy is only used to stop safely.
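A toy sketch of that redundancy argument (all sensor names, values, and thresholds here are made up for illustration, not any vendor’s actual logic): each sensor reports a range, or nothing when degraded, e.g. LiDAR in heavy rain, and if no two working sensors agree, the system falls back to a safe stop.

```python
# Illustrative sensor-fusion fallback: readings maps sensor name to a
# measured range in meters, or None when that sensor is degraded.
def fused_range(readings: dict, tol_m: float = 1.0):
    valid = sorted(v for v in readings.values() if v is not None)
    if len(valid) < 2:
        return None  # not enough redundancy: trigger a safe stop
    # require at least one pair of sensors that agree within tolerance
    for a, b in zip(valid, valid[1:]):
        if b - a <= tol_m:
            return (a + b) / 2
    return None  # sensors disagree: trigger a safe stop

print(fused_range({"camera": 20.4, "lidar": 20.1, "ultrasonic": None}))  # 20.25
print(fused_range({"camera": 20.4, "lidar": None, "ultrasonic": None}))  # None
```

The point the sketch makes is that the degraded-sensor case resolves to a defined safe behavior instead of silently trusting one remaining input.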

Cost is certainly an issue, but on $40k+ vehicles it’s cheap enough for other EV makers to include it in the cost. Volvo, for instance, is using Luminar’s version at a cost of about $500 (https://www.wired.com/story/sleeker-lidar-moves-volvo-closer-selling-self-driving-car/).

Image processing is expensive even with dedicated hardware, and LiDAR provides enough extra information to avoid needing to make certain calculations off of images alone (like deltas between image series to calculate distance). Those costs are further amplified by conditions where images alone don’t provide enough information - similar to how there are conditions where the LiDAR data alone wouldn’t be sufficient.
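For context on what cameras have to compute that LiDAR measures directly: the classic pinhole-stereo relation recovers depth from the pixel offset (disparity) between two views. The numbers below are made up for illustration.

```python
# Illustrative only: depth from stereo disparity. With LiDAR you get
# range directly; with cameras you must first match features across
# images and then apply this relation.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (m) of a point seen by two cameras baseline_m apart."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or unmatched feature")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 12 cm baseline, 8 px disparity
print(depth_from_disparity(700, 0.12, 8))  # 10.5 (meters)
```

The arithmetic itself is trivial; the expensive part the comment refers to is the feature matching needed to get a reliable disparity for every relevant point, millions of times over.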

            • Eager Eagle
              link
              English
1
1 year ago

              Image processing is expensive

and you’re suggesting using LIDAR, which is more expensive and power hungry, as a replacement for those computations?

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                1 year ago

I meant that the computations are expensive, i.e. slow to perform even with good processors. When you need to do something millions of times, anything that makes it faster helps the overall safety of the system.