• Alphane Moon (OP)
    32 · 1 month ago

    I get it. I just didn’t know that they are already using “beyond AGI” in their grifting copytext.

    • 👍Maximum Derek👍
      24 · 1 month ago

      Yeah, that started a week or two ago. Altman dropped the AGI promise too soon; now he’s having to become a sci-fi author to keep the con cooking.

      • @[email protected]
        13 · 1 month ago

        > now he’s having to become a sci-fi author to keep the con cooking.

        Dude thinks he’s Asimov, but anyone paying attention can see he’s just an L. Ron Hubbard.

        • @[email protected]
          5 · 1 month ago

          Hell, I’d help pay for the boat if he’d just fuck off to go spend the rest of his life floating around the ocean.

          • @[email protected]
            6 · 1 month ago

            He sure as shit wasn’t a brilliant writer. He was more endowed with the cunning of a proto-Trump huckster-weasel.

    • @[email protected]
      5 · 1 month ago

      Well, it does make sense in that the window during which we have AGI would be pretty short, because an AGI would soon surpass human-level intelligence. That said, LLMs are certainly not going to get there, assuming AGI is even possible at all.

      • @[email protected]
        3 · 1 month ago

        We’re never getting AGI from any current or planned LLM or ML framework.

        These LLMs and ML programs can exceed human performance, but only within a narrow, limited framework.

    • Nightwatch Admin
      4 · 1 month ago

      Ah ok, yeah, the “beyond” thing is likely pulled straight out of the book I mentioned in my edit.