• @ieightpi · 58 points · 1 year ago

    Can we stop calling this shit AI? It has no intelligence.

    • @regbin_ · 22 points · 1 year ago

      This is what AI actually is. Not the super-intelligent “AI” that you see in movies; those are fiction.

      The NPC you see in video games with a few branches of if-else statements? Yeah that’s AI too.
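The “few branches of if-else statements” kind of game AI can be sketched in a few lines. This is a hypothetical toy (the function name and its rules are made up for illustration), but it is exactly the sort of decision logic that ships in games under the label “AI”:

```python
# Hypothetical guard NPC whose entire "AI" is a handful of if/else branches.
def guard_npc(player_distance: float, player_is_hostile: bool) -> str:
    """Pick the guard's next action from two observations about the player."""
    if player_is_hostile and player_distance < 5:
        return "attack"   # hostile and close: fight
    elif player_is_hostile:
        return "chase"    # hostile but far: close the distance
    elif player_distance < 2:
        return "greet"    # friendly and close: say hello
    else:
        return "patrol"   # nothing interesting: default behaviour

print(guard_npc(1.0, False))  # -> greet
```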

      • @Willer · -10 points · 1 year ago

        No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

        • @[email protected]
          link
          fedilink
          English
          171 year ago

          “we purposefully make it terrible, because we know it’s actually better” is near conspiracy-theory-level thinking.

          The internal models they are working on might be better, but they are definitely not making their actual product, the one that’s publicly available right now, shittier. It’s exactly the thing they released, and these are its current limitations.

          This has always been the type of output it would give you; we even coined a term for it really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It’s not a shittier product; you’re just not enthralled by it anymore.

          • @[email protected]
            link
            fedilink
            -71 year ago

            Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

            I don’t really care about why, so I won’t speculate, but let’s not pretend the publicly available models aren’t being purposefully restricted either.

            • @[email protected]
              link
              fedilink
              English
              9
              edit-2
              1 year ago

              likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

              Which is different from

              No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

              One is a natural thing that can happen in software engineering; the other alleges malicious intent without facts. That’s why I said it’s near conspiracy-level thinking. That paper does not attribute this to some deeper cabal of AI companies colluding to make a shittier product (but only shitty enough that they all stay equally bad, so none outcompetes the others unfairly) so they can sell the better version later (and apparently this doesn’t hurt their brand or credibility somehow?).

              but let’s not pretend the publicly available models aren’t being purposefully restricted either.

              Sure, not all optimizations are without costs. Additionally, you have to keep in mind that a lot of these companies are currently being kept afloat by VC funding. OpenAI isn’t profitable right now (they lost $540 million last year), and if investment takes a downturn (as it did a little while ago across the tech industry), then they need to cut costs like any normal company. But it’s magical thinking to assume malice by default.

    • ArxCyberwolf · 19 points · 1 year ago

      Exactly. It’s a language-learning and text-output machine. It doesn’t know anything; its only ability is to output realistic-sounding sentences based on input, and it will happily and confidently spout misinformation as if it were fact, because it can’t know what is or isn’t correct.
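A drastically simplified sketch of that “realistic-sounding output without knowledge” point: a toy bigram model (hypothetical, and nothing like a real LLM in scale) that emits whatever word most often followed the previous one in its training text, with no notion of whether the result is true:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count what followed it,
# then always emit the most frequent follower. Purely illustrative;
# real LLMs are vastly larger, but share the core idea of predicting
# a plausible continuation rather than verifying a fact.
training_text = "the sky is blue the sea is blue the grass is green"

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    return follows[word].most_common(1)[0][0]

# "is" was followed by "blue" twice and "green" once, so the model
# says "blue" no matter what it is actually describing.
print(next_word("is"))  # -> blue
```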

      • @QuaternionsRock · 9 points · 1 year ago

        it’s a learning machine

        You should probably use a more careful choice of words if you want to get hung up on semantic arguments.

    • @Siegfried · 9 points · 1 year ago

      Mass Effect’s lore distinguishes between virtual intelligence and artificial intelligence: the first is programmed to do things and say them nicely, the second understands enough to be a menace to civilization… I always wondered whether this distinction was actually accepted outside the game.

      *Terms could be mixed up because I played in German (VI and KI).

    • @[email protected]
      link
      fedilink
      81 year ago

      There are many definitions of AI (e.g. that some mathematical model is used), but machine learning (which is what large language models use) is considered part of the scientific field called AI. If someone says that something is AI, it usually means that some technique from the field of AI has been applied. Even though the term AI doesn’t have much to do with intelligence as most people perceive it, I think the usage here is correct. (And yes, the whole scientific field should have been named differently.)

      • @ieightpi · 3 points · edited · 1 year ago

        Sadly, the definition of “artificial” still fits the bill, even if it’s a bit misleading and most people will associate artificial intelligence with something akin to HAL 9000.

    • @EnderMB · 6 points · 1 year ago

      That’s why we preface it with Artificial.

      • @[email protected]
        link
        fedilink
        31 year ago

        But it isn’t artificial intelligence. It isn’t even an attempt to make artificial “intelligence”. It is artificial talking. Or artificial writing.

        • @EnderMB · 3 points · 1 year ago

          In that case I’m not really sure what you’re expecting from AI, without getting into the philosophical debate of what intelligence is. Most modern AI systems essentially take large datasets and regurgitate the most relevant data back in a relevant form.
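That “regurgitate the most relevant data” description can itself be sketched as a toy retrieval system (the corpus and function here are hypothetical, far simpler than anything in production): score stored snippets by word overlap with the query and return the best match:

```python
# Toy "most relevant data" lookup: rank stored snippets by how many
# words they share with the query. A hypothetical illustration of
# retrieval, not how any real system is implemented.
corpus = [
    "cats are small domesticated mammals",
    "the moon orbits the earth",
    "python is a programming language",
]

def most_relevant(query: str) -> str:
    q = set(query.lower().split())
    # max() with a key function picks the snippet with the largest overlap
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

print(most_relevant("what language is python"))  # -> python is a programming language
```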

    • @Klear · 6 points · 1 year ago

      I will continue calling it “shit AI”.

    • @marzhall · 5 points · 1 year ago

      Lol, the AI effect in practice: the minute a computer can do it, it’s no longer intelligence.

      A year ago if you had told me you had a computer program that could write greentexts compellingly, I would have told you that required “true” AI. But now, eh.

      In any case, LLMs are clearly short of the “SuPeR BeInG” that the term “AI” seems to make some people think of and that all these Boomer stories are about; what we’ve got now definitely isn’t that.

      • @EatYouWell · 0 points · 1 year ago

        The AI effect can’t be a real thing, since true AI hasn’t been achieved yet. We’re getting closer, but we’re definitely not at the positronic-brain stage yet.

        • Ignotum · 3 points · 1 year ago

          “true AI”

          AI just means “artificial intelligence”; there are no strict criteria defining what is and isn’t “true” AI.

          Do the LLMs show an ability to reason and problem-solve? Yes.

          Are they perfect? No.

          So what?

          Ironically, your comment sounds like yet another example of the AI effect.