• @regbin_ · 22 points · 6 months ago

    This is what AI actually is. Not the super-intelligent “AI” you see in movies; that’s fiction.

    The NPC you see in video games with a few branches of if-else statements? Yeah that’s AI too.

    • @Willer · -10 points · 6 months ago

      No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

      • @[email protected]
        link
        fedilink
        English
        176 months ago

        “We purposefully make it terrible, because we know it’s actually better” is close to conspiracy-theory-level thinking.

        The internal models they are working on might be better, but they are definitely not making their actual product, the one that’s publicly available right now, shittier. It’s exactly the thing they released, and these are its current limitations.

        This has always been the type of output it would give you; we even gave it a term really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It’s not a shittier product; you’re just not enthralled by it anymore.

        • @[email protected]
          link
          fedilink
          -76 months ago

          Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

          I don’t really care about why, so I won’t speculate, but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

          • @[email protected]
            link
            fedilink
            English
            9
            edit-2
            6 months ago

            likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

            Which is different than

            No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

            One is a natural thing that can happen in software engineering; the other is malicious intent asserted without facts. That’s why I said it’s close to conspiracy-level thinking. That paper does not attribute this to some deeper cabal of AI companies colluding to make a shittier product, degrading them all just enough that they stay equally shitty (so none outcompetes the others unfairly), so they can sell the better version later (and apparently this doesn’t hurt their brand or credibility somehow?).

            but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

            Sure, not all optimizations are without costs. Additionally, you have to keep in mind that a lot of these companies are currently being kept afloat by VC funding. OpenAI isn’t profitable right now (it lost $540 million last year), and if investment takes a downturn (as it did a little while ago in the tech industry), then they need to cut costs like any normal company. But it’s magical thinking to assume this is malicious by default.