But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

  • @[email protected]
    link
    fedilink
    English
    812 hours ago

    Technically it’s not, because the LLM doesn’t decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness. (A toy sketch of that sampling step is below.)

    That said, I think it makes sense to say that it is lying if the text it generates can convince the user that it is true.
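    To make the “input + training data + randomness” point concrete, here is a minimal sketch, not from the thread: a trained model assigns scores (logits) to candidate next tokens, and temperature-scaled sampling supplies the randomness. The tokens and scores below are made up for illustration.

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick one token from a logits dict via temperature-scaled softmax."""
        # Lower temperature sharpens the distribution (less randomness).
        scaled = {tok: score / temperature for tok, score in logits.items()}
        m = max(scaled.values())  # subtract the max for numerical stability
        weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
        total = sum(weights.values())
        probs = [w / total for w in weights.values()]
        # random.choices is the "plus some randomness" part: the same input
        # can produce different continuations on different runs.
        return random.choices(list(weights.keys()), weights=probs, k=1)[0]

    # Hypothetical scores a model might assign after the prompt "The sky is"
    fake_logits = {"blue": 5.2, "clear": 3.1, "falling": 0.4}
    print(sample_next_token(fake_logits, temperature=0.8))
    ```

    Nothing in this loop decides anything; it only turns scores into a weighted coin flip, which is the commenter’s point.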

    • Ulrich · -6 points · 12 hours ago

      it just generates an answer based on a mixture of the input and the training data, plus some randomness.

      And is that different from the way you make decisions, fundamentally?

        • Ulrich · -2 points · 10 hours ago

          I didn’t say anything about either.

      • @[email protected]
        link
        fedilink
        English
        6
        edit-2
        11 hours ago

        Idk, that’s still an area of active research. I certainly think it’s very different, since my understanding is that human thought is based on concepts instead of denoising noise or whatever it is LLMs do.

        My understanding is that they’re fundamentally different processes, but since we don’t understand brains perfectly, maybe we happened on an accurate model. Probably not, but maybe.