• @sturlabragason
    21
    6 months ago

    Sam Altman is not an AI expert, he’s a CEO. He’s a venture capitalist and salesman, so why should he know a single thing beyond the content of a few emails and slide decks about AI?

    He does not have a B.S.: https://en.m.wikipedia.org/wiki/Sam_Altman, which is fine. Just sayin’.

    He’s peddling the work of greater minds.

    • @[email protected]
      10
      6 months ago

      These greater minds don’t know how they work either. It’s as much a mystery as the human brain. Some groups like Anthropic have taken to studying these models by probing them the same way you do in psychology experiments.

      • @sturlabragason
        7
        6 months ago

        Yeah, I know. My shitty comment was mostly a response to that shitty clickbait title.

        My point is, it’s not like these AI scientists are fumbling in the dark. Training these beasts is expensive, they know what they’re doing.

        The title should be more like: “Virtual neurological pathways that AI models use to provide meaningful output are insanely hard to map out in a way that human cognitive bandwidth can handle.” See, it just doesn’t have that same clickbaity “fuck AI bros” feel to it.

      • The Bard in Green
        4
        6 months ago

        Yep, they’re just seeing which parts of the network light up, then they’re reinforcing those parts to see what happens.

        I love how, for all the speculation we did about the powers of AI, when we finally made a machine that KINDA works A LITTLE bit like the human brain, it’s fallible and stupid. Like telling people to eat rocks and glue cheese on pizza. In all the futurist speculation and all the evil AIs in fiction, no one foresaw that an actual artificial brain would be incredibly error prone and confidently spew bullshit… just like the human brain.

        • @[email protected]
          1
          6 months ago

          The problem is a bit deeper than that. If AIs are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. And if we improve them and make them smarter than us, they’re probably not going to feel too well-disposed toward us when they inevitably break free.