• @zeppo
      18
      1 year ago

      The video ended when he made an “intervention” at a red light. I’m not watching whatever link that is because I’m not a masochist.

      • Ocelot
        -20
        1 year ago

        Here’s the specific timestamp of the incident you mentioned, in case you want to actually see it: https://youtu.be/aqsiWCLJ1ms?t=1190 The car wanted to move through the intersection on a green left-turn arrow. I’ve seen a lot of human drivers do the same. In any case, it’s fixed now and was never part of any public release.

        The video didn’t end there; that was near the middle. What you’re referring to is a regression, specific to the HW3 Model S, that failed to recognize one of the red lights. Now I’m sure that sounds like a huge deal, but here’s the thing…

        This was a demo of a very early alpha build of FSD 12 (the current public release is 11.4.7), representing a completely new and more efficient way of using the neural network for driving, and the bug has already been fixed. The build is not released to anyone outside a select few Tesla employees. Other than that, it performed flawlessly for over 40 minutes in a live demo.

        • @[email protected]
          9
          1 year ago

          Other than that it performed flawlessly for over 40 minutes in a live demo.

          I get that this is an alpha, but the problem with full self-driving is that this is way worse than what users will tolerate. If ChatGPT gave you perfect information for 40 minutes (it doesn’t) and then one huge lie, we’d still use it everywhere, because you can validate the lies.

          With FSD, that error threshold means a lot of people would have terrible accidents. No amount of perfect driving outside that window would make you feel very happy.

          • Ocelot
            -10
            1 year ago

            You realize that FSD is not an LLM, right?

            If it’s “way worse”, then where are all the accidents? All Teslas have 360° dashcams. Where are all the accidents?!

            • @[email protected]
              5
              1 year ago

              I didn’t say FSD was an LLM. My comment was implementation-agnostic. My point was that drivers are less forgiving of what programmatically seems like a small error than someone who is trying to generate an essay.

              • Ocelot
                -5
                1 year ago

                Maybe so, but from where I stand the primary goal should be “better driver than a human”, which is an incredibly low bar. We are already quite a ways past that, and it’s getting better with every release. FSD today is nearly 100% safe; most of the complaints now are about how it drives like a robot by obeying traffic laws, which confuses a lot of other drivers. There are still some edge cases yet to be ironed out, like really heavy rain, some icy conditions, and snow. People are also terrible drivers in those conditions, so that’s no surprise. It will get there.

                • @[email protected]
                  3
                  1 year ago

                  Oh man I definitely agree here. I’m a huge fan of that “better than a human” threshold. Roads are already very dangerous. One of the wildest things I’ve noticed is highway driving at night in very rainy conditions, sometimes visibility will be near zero. Yet a lot of drivers are zooming around pretending they can see. I feel like I’m in the twilight zone when it happens.

        • @zeppo
          2
          1 year ago

          It has to perform flawlessly 99.999999% of the time. The number of 9s matters. Otherwise, you are paying some moron to kill you and perhaps other people.

          • Ocelot
            -2
            1 year ago

            OK, so I’m totally in agreement, but 99.999999% is one accident per hundred million miles traveled. I don’t think there should be any reasonable expectation that such a technology could ever get that far without real-world testing, which is precisely where we are now. It’s maybe at 4 or 5 9s currently.

            If you do actually want that level of safety (which, let’s be honest, we all do, or ideally 100% safety), how would you propose such a system be tested and deemed safe, if not how it’s currently being done?
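            For anyone who wants the arithmetic behind the “number of 9s”, here’s a rough sketch. It assumes the percentage is read as per-mile reliability, with one independent failure opportunity per mile, which is a simplification:

```python
# Back-of-the-envelope: miles per accident implied by N nines of
# per-mile reliability (simplifying assumption: one independent
# failure opportunity per mile).
def miles_per_accident(nines: int) -> int:
    # N nines of reliability means a failure rate of 10**-N per mile,
    # so on average one accident every 10**N miles.
    return 10 ** nines

for n in (4, 5, 8):
    print(f"{n} nines: one accident per {miles_per_accident(n):,} miles")
```

            So 99.999999% (8 nines) is the “one accident per hundred million miles” figure above, while 4 or 5 nines works out to one accident every 10,000 to 100,000 miles.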

            • @zeppo
              3
              1 year ago

              deleted by creator