AI researchers say they’ve found ‘virtually unlimited’ ways to bypass Bard and ChatGPT’s safety rules
The researchers found they could use jailbreaks they’d developed for open-source systems to target mainstream and closed AI systems.

  • @Ryantific_theory
    link
    English
    1 year ago

    We aren’t, we just have a massively complex biological computing network that has a number of dedicated processing nodes refined by evolution to create a “smart” system. Part of why it’s so hard to make true AI is because the way brains process data is far messier than how computers function, and while we can simulate simple brains (nematodes and the like), it’s incredibly inefficient compared to how neurons actually handle processing.
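    To give a concrete sense of what “simulating a simple brain” means in practice, here’s a minimal, purely illustrative sketch of a leaky integrate-and-fire neuron, one of the simplest standard spiking-neuron models. All the parameter values here are made up for illustration, not taken from any real nematode connectome:

    ```python
    # Minimal sketch (illustrative parameters): a leaky integrate-and-fire
    # neuron. Real small-brain simulations (e.g. C. elegans) wire many such
    # simplified units together, which is why even they are inefficient
    # compared to actual biological neurons.
    def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Return spike times for a neuron driven by a constant input current."""
        v = v_rest
        spikes = []
        for step in range(steps):
            # Membrane potential leaks toward rest while integrating input.
            v += dt * ((v_rest - v) / tau + current)
            if v >= v_thresh:          # threshold crossing -> emit a spike
                spikes.append(step * dt)
                v = v_reset            # reset after spiking
        return spikes

    spikes = simulate_lif(current=0.15)
    ```

    Even this toy model only caricatures one neuron; the messiness the comment describes comes from billions of them interacting with chemistry the model ignores entirely.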

    Essentially, we’re at the cave painting stage of creating intelligence, where you can kinda see what’s going on but they really aren’t that close to reality. To hit the point where an AI is self-aware is going to be 1) an ethical disaster, and 2) either an advancement in neuromorphic chips (adapting neural architecture to computer architecture) or abstracting neural computation via machine learning (ChatGPT - not actually copying how our minds work, but creating something that appears to function like our minds).

    There’s a whole lot of myth tied up in human consciousness, but ultimately every thought in our heads is the product of tens of billions of cells each doing their job. That said, I’m hoping AI ends up based on human neural architecture. Sure, that architecture produces sociopaths and monsters, but machine learning creating something that appears to think like a human, yet actually operates on arcane and eldritch logic before presenting a flawless replica of human thought, unsettles me far more.