• @Rottcodd
    359 days ago

    It means that either the test is flawed, the results are bogus, or the report is a lie.

    Intelligence is a measure of reasoning ability.

    Current AIs have been designed to produce content that (optimally) mimics the products of reason, but they do not in fact reason at all, so they cannot possess measurable intelligence.

    Much more to the point, current AIs have been designed to make enormous piles of money for corporations and venture capitalists, and I would pretty much guarantee that that has more to do with this story than anything else.

    • @[email protected]
      9 days ago

      One extremely minor correction - you said they’re designed to make enormous piles of money, and yet none of these(1) are cash flow positive or have any clear path to profitability. The only way a company makes money off this (outside an acquisition that lets the founders exit with bags of cash) is if one of these companies is allowed to create a monopoly, leading to a corporate autocracy. General language models are absolutely shit in terms of efficiency compared to literally any other computing tool - they just look shiny.

      1. Please note - lots of pre-ChatGPT neural networks are happily chugging away doing good and important work… my statement excludes everything from before the ML bubble, plus a fair few legitimately interesting ML applications developed afterwards which you’ll never fucking hear about.

      Edited to add: Just as a note, it’s always possible that this AI gold rush actually does lead to an AGI, but, lucky for me, if that happens the greedy-as-fuck MBAs will absolutely end civilization before any of you could type up “told you so”, so I’m willing to take this bet.

      • @[email protected]
        9 days ago

        ML bubble? You mean the one in the 1960s? I prefer to call this the GenAI bubble, since other forms of AI are still everywhere and have improved a lot of things invisibly for decades. (So, yes. What you said.)

        AI winter is a recurring theme in my field, mostly because people don’t understand what AI is. There have been Artificial Narrow Intelligences (ANIs) that beat humans at various forms of reasoning for ages.

        AGI still seems a couple of AI winters away from even a basic implementation, but we already have genuinely useful AI that can tell you whether you have cancer more reliably, and years earlier, than humans can (based on current long-term cancer datasets). These systems can get better with time, and the ability to learn from them is still an active area of research, but it’s improving. Heck, with decent patching, a good ANI can give you updates through ChatGPT for stuff like scene understanding to help blind people. There’s no money in that, but it’s still neat to people who actually care about AI instead of cash.

      • @[email protected]
        9 days ago

        “none of these(1) are cash flow positive or have any clear path to profitability.”

        Only if you consider the companies developing these algorithms, and not every other company jamming “AI” into their products and marketing. In a gold rush, the people who make money aren’t the ones finding the gold; they’re the ones selling shovels and gold pans.

    • Flying Squid
      59 days ago

      I have yet to see an AI ask any question that goes beyond the superficial. They show zero curiosity. I would say that’s a giant mark against the idea that they have any sort of real intelligence.