• @[email protected]
    46
    3 days ago

    Translation: “We told everyone we could turn glorified autocomplete into artificial general intelligence and then they gave us a bunch of money for that, so now we actually have to try to deliver something and we’ve got no idea how.”

    • Random_Character_A
      14
      3 days ago

      How about giving billions to those guys simulating the brains of small worms and fruit flies, so we can have a very slow “brain in a bottle” that will be equally useless.

      • @[email protected]
        18
        3 days ago

        You know what? Sure, fuck it, why not? I don’t even have a problem with OpenAI getting billions of dollars to do R&D on LLMs. They might actually turn out to have some practical applications, maybe.

        My problem is that OpenAI basically stopped doing real R&D the moment ChatGPT became a product, because now all their money goes into their ridiculous backend server costs and putting increasingly silly layers of lipstick on a pig so that they can get one more round of investment funding.

        AI is a really important area of technology to study, and I’m all in favour of giving money to the people actually studying it. But that sure as shit ain’t Sam Altman and his band of carnival barkers.

  • Lvxferre
    18
    3 days ago

    Predictable outcome for anyone not wallowing in wishful belief.

  • @[email protected]
    15
    3 days ago

    We’ve known this for a while. LLMs are a dead end. Lots of companies have tried throwing more data at them, but it’s becoming clear that the differences between each model and the next are getting too small to notice, and none of them fix the major underlying issue: chat models keep spreading BS because they can’t differentiate between right and wrong.

    • @tee9000
      4
      3 days ago

      So an infant technology is showing a glimmer of maturation?

    • @CosmoNova
      1
      3 days ago

      And the thing is, the architecture of LLMs was already a huge breakthrough in the field. Now these companies are basically trying to come up with another one by - and that’s just my guess - throwing tons of cash at it and hoping for the best. I think that’s like trying to come up with a building material that outperforms reinforced concrete in every respect. Just because it was discovered by some guy doesn’t mean multi-billion-dollar companies can force something better with all the money in the world.

  • @brucethemoose
    7
    3 days ago

    Yeah, well Alibaba nearly (and sometimes actually) beats GPT-4 with a comparatively microscopic model you can run on a desktop. And they released a whole series of them. For free! With a tiny fraction of the GPUs any of the American trainers have.

    Bigger is not better, but OpenAI has also just lost their creative edge, and all Altman’s talk about scaling up training with trillions of dollars is a massive con.

    o1 is kind of a joke; CoT and reflection strategies have been known for a while. You can do it for free yourself, to an extent, and some models have tried to fine-tune this in: https://github.com/codelion/optillm

    But one sad thing OpenAI has seemingly accomplished is to “salt” the open LLM space. There’s way less hacky experimentation going on than there used to be, which makes me sad, as many of its “old” innovations still run circles around OpenAI.
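
For the curious, the CoT-plus-reflection trick mentioned above is simple enough to sketch in a few lines. This is a toy illustration only: `generate` is a canned stub standing in for a real LLM call, and the three-step loop is my own minimal rendering of the pattern, not optillm’s actual code.

```python
# Toy sketch of a chain-of-thought + reflection loop, the kind of strategy
# tools like optillm layer on top of a model. `generate` is a stub standing
# in for a real LLM call, so the "answers" here are canned.

def generate(prompt: str) -> str:
    # Placeholder model. A real implementation would call an actual LLM here.
    if "Critique" in prompt:
        return "The draft miscounted; the answer should be 42."
    return "Draft answer: 41."

def answer_with_reflection(question: str) -> str:
    # 1. Ask for a step-by-step draft (chain of thought).
    draft = generate(f"Think step by step, then answer: {question}")
    # 2. Ask the model to critique its own draft (reflection).
    critique = generate(f"Critique this draft for mistakes: {draft}")
    # 3. Produce a final answer informed by the critique.
    return generate(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\nFinal answer:"
    )

print(answer_with_reflection("What is 6 * 7?"))
```

With a real model behind `generate`, that three-call pattern is essentially all the “reflection” magic is.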

      • @brucethemoose
        2
        3 days ago

        BTW, as I wrote that post, Qwen 32B coder came out.

        Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.

        • @A_A
          2
          3 days ago

          Great news 😁🥂, someone should make a new post on this!

      • @brucethemoose
        2
        3 days ago

        Yep.

        32B fits on a “consumer” 3090, and I use it every day.

        72B will fit neatly on 2025 APUs, though we may have an even better update by then.

        I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
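
The VRAM arithmetic behind those size claims is easy to check yourself. A rough sketch, assuming ~4-bit quantized weights and ignoring KV cache and activations (so real usage runs somewhat higher); `weight_gb` is just my illustrative helper:

```python
# Back-of-the-envelope VRAM math for why a 32B model fits on a 24 GB RTX 3090.
# Assumes 4-bit quantized weights (~0.5 bytes per parameter) and ignores the
# KV cache and activations, so real usage will be somewhat higher.

def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(weight_gb(32, 4))   # 16.0 GB -> fits in a 3090's 24 GB
print(weight_gb(32, 16))  # 64.0 GB -> unquantized fp16 does not fit
print(weight_gb(72, 4))   # 36.0 GB -> why 72B wants two GPUs or a big APU
```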