• @Bell
    42 points · 1 month ago

    The hallucinations will continue until the training data is absolutely perfect

    • hendrik
      39 points · 1 month ago

      That’s not correct, btw. AI is supposed to be creative and come up with new text/images/ideas, even with perfect training data. That’s what creativity means: we want it to come up with new text out of thin air. Perfect training data isn’t going to change anything about that. We’d need to remove the ability to generate fictional stories and lots of other kinds of answers, too. Or come up with an entirely different approach.

      • bjorney
        8 points · 1 month ago

        AI isn’t supposed to be creative; it isn’t even capable of that. It’s meant to min/max its evaluation criterion against a test dataset.

        It does this by regurgitating the training data associated with a given input as closely as possible.
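To make the "min/max its evaluation criterion" claim concrete, here is a toy sketch of what training actually optimizes. Everything here (the linear model, the fake data) is an assumption for illustration, not a real LLM; the point is that the objective is purely agreement with training labels:

```python
import numpy as np

# Toy sketch (assumed setup, not a real LLM): training minimizes a
# loss over the training data -- here, cross-entropy for a linear
# "next token" classifier, via plain gradient descent.

rng = np.random.default_rng(0)
vocab, dim, n = 5, 8, 100
X = rng.normal(size=(n, dim))          # stand-in "context" features
y = rng.integers(0, vocab, size=n)     # stand-in "next token" labels
W = np.zeros((dim, vocab))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

loss_start = None
for step in range(200):
    p = softmax(X @ W)                         # predicted distribution
    loss = -np.log(p[np.arange(n), y]).mean()  # cross-entropy
    if loss_start is None:
        loss_start = loss                      # loss before any update
    grad = X.T @ (p - np.eye(vocab)[y]) / n    # gradient of the loss
    W -= 0.5 * grad                            # descent step

# The objective only rewards matching the training labels; nothing in
# it measures creativity, and nothing penalizes a confident wrong
# answer on an input that never appeared in the data.
```

Note that "hallucination" is invisible to this objective: the loss is computed only on training inputs, so confident nonsense elsewhere costs nothing.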

        • hendrik
          -2 points · 1 month ago (edited)

          I’ve heard people say that before, but it’s not true. You can ask an AI to draw you an astronaut on a horse and it’ll do it, despite never having seen such a picture. (Now it has.) The same applies to LLMs. They come up with an answer to your exact question, not a similar one they saw on Reddit before. That answer might be wrong (which is my point), but if you try it, you’ll regularly find it tries answering your questions and not different ones.

          I’ve also tried some sci-fi storywriting with AI, and there it becomes quite obvious that it’s able to take things it knows from different contexts and apply them to my setting: ethics questions, basic physics, what characters can and cannot do, rough knowledge about how stories are written. You can tell it to do a plot twist at an arbitrary point and it’ll do it. All of that is knowledge about (abstract) concepts and the ability to apply it to different contexts. Which is an important part of creativity.

          And I’ve read papers where scientists try to look inside AI models, and they’re able to spot abstract concepts, like what a cat is, in the weights. It’s fascinating how it works. And it turns out it’s not just regurgitating its training data. Which isn’t surprising, because a lot of effort has been put into the computer science behind it to make AI more than that. It’s also why these models are useful in the first place.

          • ElectricMachman
            6 points · 1 month ago

            It’s able to apply those things because it’s read millions of sci-fi stories, and can make an educated guess. It’s also able to produce an image of an astronaut on a horse because it’s seen lots of images of astronauts, and horses, and people sitting on horses, so it can once again make an educated guess. I don’t think it’s right to call that creativity.

            • hendrik
              0 points · 1 month ago

              Isn’t that like 60% of what creativity is? Art is sometimes about combining things in a new way. It’s rare anyway that one genius comes up with an entirely new concept like sci-fi stories or pop art and invents that genre out of thin air. Most of the time humans, too, take something that already exists and build upon it. That’s not far off from what’s happening here. And I doubt a human could draw a “rkbvrpoi” on a “wuqrkah” without taking inspiration from …anything.

              Obviously something is missing, though. Some human told it to draw that astronaut, so the whole artwork contains that original creativity, which didn’t come from the AI. But I think it’s debatable whether it could do it. This is only one specific example.

              • ElectricMachman
                3 points · 1 month ago

                Yes, a lot of creativity is fuelled by inspiration. One can’t create much in a bubble. However: I could draw a rkbvrpoi, and my human intuition enables me to consider what such a thing might look like. I can make it up and make something feasible of it. I can give it a history, I can place it in culture (or maybe it is itself a culture), and I can do whatever I want. Yes, that requires some level of inspiration, and drawing from what I’ve observed and experienced would make the rkbvrpoi a lot more believable - if that’s my goal.

                A so-called AI can’t do any of that, and even if it could, it would be meaningless and soulless.

    • @NeoNachtwaechter
      14 points · 1 month ago

      In order to get perfect training data, they cannot use any human output.

      I’m afraid it is not going to happen anytime soon :)

      • Orbituary
        3 points · 1 month ago

        What other output do you propose?

        • @NeoNachtwaechter
          5 points · 1 month ago

          What other output do you propose?

          I’m not proposing anything, and it isn’t necessarily any kind of output.

          The first question is what they want the AI to do. And if they want it to be perfect, then they need to use perfect training data, not human output.

          • Orbituary
            1 point · 1 month ago

            deleted by creator

          • @[email protected]
            1 point · 1 month ago

            This is exactly why Apple uses APIs for apps to provide well-structured data as context, instead of random screenshot data.

      • Kokesh
        2 points · 1 month ago

        I’ve started editing my answers/questions on StackExchange. A few characters at a time. I’m doing my part.

        • @NeoNachtwaechter
          1 point · 1 month ago

          Are you improving it, or creating new errors? ;-)

          • Kokesh
            10 points · 1 month ago

            Errors. I rewrote all my stuff up there with “Fuck you OpenAI” or something like that in spring and got banned for 3 months. So after a reminder in my calendar that the 3 months were up, I got back to work, a bit more sophisticatedly this time.

    • magic_lobster_party
      9 points · 1 month ago

      Most improvements in machine learning have been made by increasing the amount of data (and by using models that can generalize over larger data better).

      Perfect data isn’t needed, as the errors will “even out”. Although now there’s the problem that most new content on the Internet is low-quality AI garbage.
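The "errors even out" intuition can be shown with a toy estimation example, and it also shows exactly when the intuition fails: independent errors average away as data grows, while a shared systematic bias (arguably what correlated AI-generated garbage would introduce) does not. All numbers here are made up for illustration:

```python
import numpy as np

# Toy sketch: estimating a true quantity from noisy measurements.
# Independent noise shrinks roughly as 1/sqrt(n); a shared bias does not.

rng = np.random.default_rng(1)
true_value = 10.0

for n in (100, 10_000, 1_000_000):
    independent = true_value + rng.normal(0, 5, size=n)   # random errors
    biased = true_value + 2.0 + rng.normal(0, 5, size=n)  # systematic bias
    print(n,
          abs(independent.mean() - true_value),   # shrinks with n
          abs(biased.mean() - true_value))        # stays near 2.0
```

More data fixes the first error column but not the second; that distinction is why "the errors will even out" depends on the errors being independent, not correlated.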

      • @NeoNachtwaechter
        1 point · 1 month ago (edited)

        Perfect data isn’t needed as the errors will “even out”.

        That is an assumption.

        I do not think that it is a correct assumption.

        now there’s the problem that most new content on the Internet is low quality AI garbage.

        This reminds me of a recommendation from some philosopher - I forget who it was - who said you should only read books that are at least 100 years old.

        • magic_lobster_party
          3 points · 1 month ago

          I’m extrapolating from history.

          15 years ago people made fun of AI models because they could mistake some detail in a bush for a dog. Over time the models became more resistant against those kinds of errors. The change was more data and better models.

          It’s the same type of error as hallucination. The model is overly confident about a thing it’s wrong about. I don’t see why these types of errors would be any different.

          • @NeoNachtwaechter
            7 points · 1 month ago

            I don’t see why these types of errors would be any different.

            Well, it’s easy to see once you understand what LLMs actually do and how it differs from what humans do. Humans have multiple ways to correct errors, and we use them all the time, intuitively. LLMs have none of these; they can only repeat their training (and can’t even hope for the best, because hoping is human again).

    • @[email protected]
      4 points · 1 month ago

      Hallucinations are an unavoidable part of LLMs, and are just as present in the human mind. Training data isn’t the issue. The issue is that the design of the systems that leverage LLMs uses them to do more than they should be doing.

      I don’t think that anything short of being able to validate an LLM’s output without running it through another LLM will be able to fully prevent hallucinations.