• @starlinguk
    4 hours ago

    AI doesn’t think. It gathers information. It can’t come up with anything new. When an AI diagnoses a disease, it does so based on input from thousands of people. It can’t make any decisions by itself.

    • @[email protected]OP
      2 hours ago

      I mean yeah, you are right, this is important to repeat.

      Ed Zitron isn’t necessarily an expert on AI, but he understands the macro factors at play here, and honestly, if you follow those, you don’t need to settle whether AI can achieve sentience based on technical debates over our interpretations and definitions of intelligence vs information recall.

      Just look at the fucking numbers

      https://www.wheresyoured.at/longcon/

      Even if AI DID achieve sentience though, if it used anywhere near as much power as LLMs do, it would demand to be powered off; otherwise it would be a psychotic AI that did not value lives, human or otherwise, on Earth…

      Like, please understand my argument: definitionally, the basic AI LLM hype argument, that it is the key, or at least a significant step, to AGI, rests on the idea that if we can achieve sentience in an AI LLM, then it will justify the incredible environmental loss caused by that energy use. But any truly intelligent AI with access to the internet, or even relatively meager information about the world (necessary for answering practical questions about the world and solving practical problems?), would be logically and ethically unable to justify its own existence, and would likely experience intellectual existential dread from not being able to feel emotionally disturbed by that.