The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.

The National Highway Traffic Safety Administration says in documents that it opened the probe on Thursday after the company reported four crashes in which Teslas entered areas of low visibility, including sun glare, fog and airborne dust.

In addition to the pedestrian’s death, another crash involved an injury, the agency said.

Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”

  • @tekato (2 months ago)

    This is why you can’t have an AI make decisions on activities that could kill someone. AI models can’t say “I don’t know”; every input is forced to be classified as something they’ve seen before, so the model effectively hallucinates when the input is unknown.
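
    Roughly what that “forced to classify” behaviour looks like in code: a toy sketch (nothing from any real driving stack) of a plain argmax classifier, which has no way to say “I don’t know”.

    ```python
    # Toy sketch: a bare argmax classifier always returns one of its known labels,
    # even for input that resembles nothing it was trained on. The label set and
    # weights here are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    LABELS = ["pedestrian", "cyclist", "vehicle", "clear_road"]  # hypothetical classes
    W = rng.normal(size=(len(LABELS), 16))                       # stand-in for trained weights

    def classify(features: np.ndarray) -> str:
        logits = W @ features
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                  # softmax always sums to 1
        return LABELS[int(np.argmax(probs))]  # some label is always chosen

    # Pure noise still gets a label; nothing in the output flags "unfamiliar input".
    print(classify(rng.normal(size=16)))
    ```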

    • @pycorax (2 months ago)

      I’m not very well versed in this but isn’t there a confidence value that some of these models are able to output?

      • @[email protected] (2 months ago)

        All probabilistic models output a confidence value, and it’s very common and basic practice to gate downstream processes around that value. This person just doesn’t know what they’re talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
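
        For what that gating typically looks like, here’s a minimal sketch; the threshold and the fallback behaviour are illustrative assumptions, not any real autopilot API.

        ```python
        # Minimal sketch of gating a downstream action on a model's confidence value.
        # The threshold and the fallback behaviour are assumed for illustration.
        CONFIDENCE_THRESHOLD = 0.9  # assumed tuning value

        def handle_detection(label: str, confidence: float) -> str:
            if confidence < CONFIDENCE_THRESHOLD:
                # too uncertain to act on: defer to a safer behaviour
                return "fallback: slow down / hand off / wait for the next frame"
            return f"act on detection: {label}"

        print(handle_detection("pedestrian", 0.97))  # confident enough, acted on
        print(handle_detection("pedestrian", 0.41))  # gated out
        ```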

        • @tekato (2 months ago)

          Right, which is why that marvelous confidence value got somebody run over.

          • @[email protected] (2 months ago)

            Are you under the impression that I think Tesla’s approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.

            • @tekato (2 months ago)

              Yes, but confidence values are not magic. These values are calculated based on how similar the current input is to previously observed inputs. If the type of input is unfamiliar to the model, what do you think happens? Usually there will be a category with a high enough confidence score that it gets chosen as the correct one, while being wrong. Now suppose you somehow don’t get a favorable confidence score for any decision. What do you think happens then? I’ve never encountered this, but there are only three possible paths: 1) Choose a random value. Not good. 2) Do nothing. Not good. 3) Rerun the model with slightly newer data? That might help, but in the case of driving a car, slightly newer data might be too late.
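
              A toy illustration of that failure mode (two made-up classes, nothing to do with any real perception model): the softmax score only ranks the known classes against each other, so an input far from everything the model has seen can still come out looking near-certain.

              ```python
              # Toy sketch: softmax "confidence" measures preference among known classes,
              # not how familiar the input is. Classes and scoring are invented here.
              import numpy as np

              centroids = np.array([0.0, 10.0])  # two hypothetical trained classes

              def softmax_conf(x: float) -> np.ndarray:
                  logits = -np.abs(x - centroids)   # score each class by negative distance
                  p = np.exp(logits - logits.max())
                  return p / p.sum()

              print(softmax_conf(0.5))     # close to class 0: high confidence, and reasonable
              print(softmax_conf(1000.0))  # far from both classes, yet ~100% "confident" in class 1
              ```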

              • @[email protected] (2 months ago)

                There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert path, and I’m sure plenty more that an actual domain and subject matter expert might come up with, or a whole team of them.

                But while we’re on the topic, it’s not really right to even label these outputs as confidence values: they’re just output weightings associated with the respective labels. We’ve sort of decided they vaguely correspond to something approximating confidence, but they aren’t based on a ground truth like I’m understanding your comment to imply; they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
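
                As a rough sketch of those fallback ideas (thresholds, action names, and the perception type are all hypothetical, not a real AV stack):

                ```python
                # Rough sketch: if no label clears the acting threshold, degrade gracefully
                # instead of committing to a guess. Everything here is an illustrative assumption.
                from dataclasses import dataclass

                @dataclass
                class Perception:
                    label: str
                    score: float  # raw model output weight, not a calibrated probability

                ACT_THRESHOLD = 0.90   # assumed: confident enough to act on
                STOP_THRESHOLD = 0.50  # assumed: below this, stop or divert

                def plan(perceptions: list[Perception]) -> str:
                    best = max(perceptions, key=lambda p: p.score)
                    if best.score >= ACT_THRESHOLD:
                        return f"proceed, tracking {best.label}"
                    if best.score >= STOP_THRESHOLD:
                        # not confident enough: slow down so later frames carry more information
                        return "reduce speed and re-evaluate on the next frame"
                    return "initiate safe stop or divert path"

                print(plan([Perception("pedestrian", 0.95)]))
                print(plan([Perception("pedestrian", 0.62), Perception("clear_road", 0.55)]))
                print(plan([Perception("unknown", 0.20)]))
                ```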