• @[email protected]
    English
    3
    8 months ago

    Because we haven’t learnt anything about the status quo of autonomous driving from Tesla’s “Auto Pilot”, huh?

    Similar post earlier.

    • @FortuneMistellerOP
      English
      2
      8 months ago

      A serious self-driving vehicle must be able to see its surroundings with different sensors, but then it needs a lot of computing power on board to merge the different streams of data coming from those sensors. That comes on top of the computing power required to make a proper prediction of the trajectories of the dozens of other objects moving around the vehicle. I don’t know about the latest models, but I know that the Google cars of a few years ago had the boot occupied by big computers with several CUDA cards.
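      The prediction step described above can be sketched as follows. This is a hypothetical toy with a constant-velocity model (all names are made up, nothing like a production stack), but it shows the kind of per-object work that, repeated for dozens of objects every frame on top of sensor fusion, adds up in compute.

```python
from dataclasses import dataclass

# Hypothetical illustration: predicting where each tracked object will be,
# assuming a simple constant-velocity motion model. A real stack would run
# far heavier models (Kalman filters, learned predictors) for dozens of
# objects per frame, which is where the compute cost comes from.

@dataclass
class TrackedObject:
    x: float   # position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float

def predict_trajectory(obj: TrackedObject, horizon_s: float, dt: float):
    """Return the predicted (x, y) positions over the horizon."""
    steps = int(horizon_s / dt)
    return [(obj.x + obj.vx * t * dt, obj.y + obj.vy * t * dt)
            for t in range(1, steps + 1)]

car_ahead = TrackedObject(x=10.0, y=0.0, vx=5.0, vy=0.0)
path = predict_trajectory(car_ahead, horizon_s=2.0, dt=0.5)
print(path)  # four points, 2.5 m apart along x
```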

      That’s not something you can put in a commercial car sold to the public. What you get instead is a car that relies on a single camera to look around, plus a sensor in the bumper that cuts the engine when activated but does not create an additional stream of data. Maybe there is a second camera looking down at the lines on the road, but its data stream is not merged with the other; it is only used to adjust the driving commands. I don’t even know if the little onboard computer they have is able to compute the trajectories of all the objects around the car. Few sensors and little processing power: that is not enough, and it is not a self-driving car.

      When Tesla sells a car with driving assistance, they tell the customer that it is not a self-driving car, but they fail to explain why: where the difference lies and how big the gap is. That’s one of the reasons why we have had so many accidents.

      Similar post earlier.

      It starts from the same news, but, taking its cue from the book in the link, it asks a different question.

      • @abhibeckert
        English
        2
        8 months ago

        the Google cars of a few years ago had the boot occupied by big computers

        But those were prototypes. These days you can get an NVIDIA H100: several inches long, a few inches wide, one inch thick. It has 80GB of memory running at 3.5TB/s and 26 teraflops of compute (for comparison, Tesla Autopilot runs on a 2 teraflop GPU).

        The H100 is designed to be run in clusters, with eight GPUs on a single server, but I don’t think you’d need that much compute. You’d have two or maybe three servers with one GPU each, all doing the same workload (for redundancy).
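        The redundancy scheme described above, running the same workload on independent units and cross-checking the results, can be sketched like this. The voting rule and all names are purely illustrative, not how any real vehicle stack works:

```python
from collections import Counter

# Hypothetical sketch: two or three independent compute units run the same
# workload, and a simple majority vote picks the result. With only two
# units you can detect a disagreement but not resolve it.

def majority_vote(outputs):
    """Return the most common output if a strict majority agrees, else None."""
    winner, count = Counter(outputs).most_common(1)[0]
    return winner if count > len(outputs) // 2 else None

# Three redundant units classify the same obstacle; one disagrees.
print(majority_vote(["brake", "brake", "coast"]))  # brake
print(majority_vote(["brake", "coast"]))           # None: two units, no majority
```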

        They’re not cheap… you couldn’t afford to put one in a Tesla that only drives 1 or 2 hours a day. But a car/truck that drives 20 hours a day? Yeah that’s affordable.
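        The back-of-envelope arithmetic behind that claim looks like this. The hardware cost and lifetime are made-up numbers; the point is only that utilization, not sticker price, drives the per-hour cost:

```python
# Back-of-envelope amortization with purely hypothetical numbers.
GPU_COST = 30_000.0   # hypothetical hardware cost in dollars
LIFETIME_YEARS = 5

def cost_per_driving_hour(hours_per_day: float) -> float:
    total_hours = hours_per_day * 365 * LIFETIME_YEARS
    return GPU_COST / total_hours

print(f"{cost_per_driving_hour(1.5):.2f}")  # private car, ~1.5 h/day: ~$11/h
print(f"{cost_per_driving_hour(20):.2f}")   # commercial truck, 20 h/day: under $1/h
```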

        • @FortuneMistellerOP
          English
          1
          8 months ago

          Real self-driving software must do a lot of things in parallel; computer vision is just one of its many tasks. I don’t think a single H100 will be enough. The fact that current self-driving vehicles don’t use that much processing power doesn’t mean a lot: they are prototypes running in controlled environments or under strict supervision.
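          A toy sketch of that kind of parallelism, with hypothetical stub tasks standing in for vision, prediction, and planning (real stacks pin these to dedicated accelerators; this only illustrates the scheduling idea):

```python
import concurrent.futures
import time

# Hypothetical stubs: each "task" pretends to process one frame.
def vision(frame):     time.sleep(0.01); return f"objects@{frame}"
def prediction(frame): time.sleep(0.01); return f"paths@{frame}"
def planning(frame):   time.sleep(0.01); return f"route@{frame}"

# Run all three concurrently for the same frame, then gather results.
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(task, 0) for task in (vision, prediction, planning)]
    results = [f.result() for f in futures]
print(results)  # ['objects@0', 'paths@0', 'route@0']
```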