• Alphane MoonOP

    I get it. I just didn’t know that they are already using “beyond AGI” in their grifting copytext.

    • 👍Maximum Derek👍

      Yeah, that started a week or two ago. Altman dropped the AGI promise too soon; now he’s having to become a sci-fi author to keep the con cooking.

      • @[email protected]

        now he’s having to become a sci-fi author to keep the con cooking.

        Dude thinks he’s Asimov, but anyone paying attention can see he’s just an L. Ron Hubbard.

        • @[email protected]

          Hell, I’d help pay for the boat if he’d just fuck off to go spend the rest of his life floating around the ocean.

          • @[email protected]

            He sure as shit wasn’t a brilliant writer. He was more endowed with the cunning of a proto-Trump huckster-weasel.

    • @[email protected]

      Well, it does make sense in that any period during which we have AGI would be pretty short, since an AGI would soon surpass human-level intelligence. That said, LLMs are certainly not going to get there, assuming AGI is even possible at all.

      • @[email protected]

        We’re never getting AGI from any current or planned LLM or ML framework.

        These LLMs and ML programs can exceed human performance, but only within a narrow, limited domain.

    • Nightwatch Admin

      Ah ok, yeah the “beyond” thing is likely pulled straight out of the book I mentioned in my edit.