the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @[email protected]):

Naw, I figured it out; they absolutely don’t care if AI doesn’t work.

They really don’t. They’re pot-committed; these dudes aren’t tech pioneers, they’re money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it’s literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it’s not real and they don’t care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don’t know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it’s total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it’s all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.

the linked Mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

  • Steve

    I’ve got this absolutely massive draft document where I’ve tried to articulate what this person explains in a few sentences. The gradual removal of immediate purpose from products has become deliberate. This combination of conceptual solutions to conceptual problems gives the business a free pass from any kind of distinct accountability. It is a product that has potential to have potential. AI seems to achieve this better than anything ever before. Crypto is good at it but it stumbles at the cash-out point so it has to keep cycling through suckers. AI can just keep chugging along on being “powerful” for everything and nothing in particular, and keep becoming more powerful, without any clear benchmark of progress.

    Edit: just uploaded this clip of Ralph Nader in 1971 talking about the frustration of being told of benefits that you can’t really grasp https://youtu.be/CimXZJLW_KI

    • David Gerard (mod)

      this is also the marketing for quantum computing. Yes, there is a big money market for quantum computers in 2023. They still can’t reliably factor 35.

      • Steve

        shit, I forgot about quantum computing. If you don’t game, do video production, or render 3D models, you’re upgrading your computer to keep up with the demands of client-side rendered web apps and the operating system that loads up the same Excel that has existed for 30 years.

        Lust for computing power is a great match for AI

        • David Gerard (mod)

          i’ve literally upgraded computers in the past decade purely to get ones that can take more RAM, because the web now sends 1000 characters of text as a virtual machine written in javascript rather than anything so tawdry as HTML and CSS

          • @[email protected]OP
            link
            fedilink
            English
            61 year ago

            the death of server-side templating and the lie of server-side rendering (which practically just ships the same virtual machine to you but with a bunch more shit tacked on that doesn’t do anything) really has done fucked up things to the web

            • @[email protected]

              as someone who never really understood The Big Deal With SPAs (aside from, like, google docs or whatever) i’m at least taking solace in the fact that like a decade later people seem to be coming around to the idea that, wait, this actually kind of sucks

                • @[email protected]OP
                  link
                  fedilink
                  English
                  61 year ago

                  the worst part is I really despise this exact thing too, but have also implemented it multiple times across the last few years cause under certain very popular tech stacks you aren’t given any other reasonable choice

                  this is why my tech stack for personal work has almost no commonality with the tech I get paid to work with

              • David Gerard (mod)

                React doesn’t have to suck for the user (lemmy is fast) but …

                • Steve

                  this is the thing.

                  6 degrees of transpiler separation.

          • raktheundead

            Every day, we pay the price for embracing a homophobe’s 10-day hack comprising a shittier version of Lisp.

          • Steve

            The internet document transfer protocol needs a separation of page and app

      • @kromem

        Quantum computing marketing feels that way because people have slapped the quantum label on everything from health stickers to car parts.

        But about half the field is pretty dour on whether and when general-purpose quantum computing will ever be a reality.

        Ironically, the use case that’s had the most promise for quantum foundations over the past few years is photonic neural networks for AI. Because the end result is what matters and the network itself acting like a black box is generally fine, most of the measurement problem goes away, and analog processing of ML workloads has already been showcased. MIT just the other week announced the availability of a DIY kit for researchers to replicate their work on an A100 equivalent running in photonics. In that space, the pace of progress has been the opposite of the general-purpose quantum computing field’s.

      • Tristan Harward

        @[email protected] @[email protected] @self Yep, it’s all the same protocol. It’s pretty weird though; no indication of what platform the post really came from or how it was intended to be viewed. I could see that being useful first-class information for the reader on whatever platform they’re reading from.

        Trying to remember how I even got this post. Did you boost it from your masto account?

        • Stephen Farrugia

          @trisweb @[email protected] @self yeah I figured the activitypub protocol used some kind of content type definition to control where stuff was appropriately published… I never got around to actually reading the docs.

          I have no idea how it came to your feed. I found it because you boosted it!

          • @[email protected]OP
            link
            fedilink
            English
            81 year ago

            as an open source federated protocol, ActivityPub and all the apps built on top of it are required to have a layer of jank hiding just under the surface

            • David Gerard (mod)

              ActivityPub is a protocol for software to fail to talk to each other

              @self has tapped Lemmy with carefully aimed hammers in a few places so that we federate both ways with Mastodon, which has been pretty cool actually

                • David Gerard (mod)

                  seems to be on circumstances.run, which i’m on. that’s treehouse/glitch with authfetch

                • David Gerard (mod)

                  so not very? If your Mastodon has authfetch enabled then it doesn’t work properly; if it doesn’t, then it does. I’m on circumstances.run, which has authfetch on - it receives comments from awful.systems but doesn’t seem to pass them back.

    • @kromem

      without any clear benchmark of progress.

      The problem is that previous benchmarks have been so completely blown out of the water they keep needing to establish new benchmarks.

      If GPT-3 scores around 30% in a standardized test and GPT-4 scores 95% in that same test, how useful will that test be to evaluating GPT-5?

      Whatever happened to the Turing test, the benchmark used in popular coverage of AI for decades? It disappeared pretty quickly from the conversation over the past two years.

      You may not like the technology for whatever reason (and I’d encourage introspection on just how much of that attitude is the result of decades of self-propagandizing via sci-fi that’s since been revealed to have poorly anticipated reality). But don’t make the mistake of conflating those feelings with an analysis of where the trend is going over the next few years.

      If you think this is the peak of AI, you’re in for quite the surprise.

  • @[email protected]

    100% on point. More people need to remember that everything we know about how large companies operate is still completely valid. Leadership doesn’t understand any of the technologies at play, even at a high level. They don’t think in terms of black boxes; they think in black, amorphous miasmas of supposed function, or “vibes” for short. They are concerned with a few metrics going up or down every quarter. As long as the number goes up, they get paid, the dopamine hits, and everyone stays happy.

    The AI miasma (mAIasma? miasmAI?) in particular is near perfect. Other technologies only held a finite amount of potential to be hyped, meaning execs had to keep looking for their next stock price bump. AI is infinitely hypeable since you can promise anything with it, and people will believe you thanks to the smoke and mirrors it procedurally pumps out today.

    I have friends who have worked in plenty of large corporations and have experience/understanding of the worthlessness of executive leadership, but they don’t connect that to AI investment and thus don’t see the grift. It’s sometimes exasperating.

  • epigone

    what i’m trying to understand is the bridge between the quite damning works like Artificial Intelligence: A Modern Myth by John Kelly, R. Scha elsewhere, G. Ryle at the advent of the Cognitive Revolution (deriving many of the same points as L. Wittgenstein), and then there’s P.M.S. Hacker, a daunting read indeed — the bridge between these counter-“a.i.” authors and the easy-think substance that seems to re-emerge every other decade. how is it that there are so many resolutely powerful indictments, and they are all being lost to what seems like a digital dark age? is it that the kool-aid is too good, that the sauce is too powerful, that the propaganda is too well funded? or is this all merely par for the course in the development of a planet that becomes conscious of all its “hyperobjects”?

    • @[email protected]

      I don’t claim to know any better than you, but my intuition says it’s the funding, combined with the fact that even understanding what the claims are takes a fair bit of technical sophistication, let alone understanding why they’re bullshit. The ever-soaring levels of inequality (constant record highs for a couple of generations at least) make it hard to realize just how much power the technocrats hold over public perception, and it can take a full lecture to explain to even an educated and intelligent person how exactly the sentences the computer man utters are a crock of shit.

      And more cynically, for some people it’s the old saw about not understanding things when your paycheck depends on it.

      It’s a self-repairing problem, since you can’t fool most people forever, but the sooner people stop buying into it, the better.

    • Steve

      Sometimes it seems like wilful ignorance

  • @kromem

    Hilarious.

    Only five years ago no one in the computer science industry would have taken a bet that AI would be able to explain why a joke was funny or perform creative tasks.

    Today that’s become so normalized that people are calling things once thought to be literally impossible a speculative bubble, because advancement that surprised everyone in the industry initially, and then again with the next model a year later, hasn’t moved fast enough?

    The industry is still learning how to even use the tech.

    This is like TV being invented in 1927 and then people in 1930 saying that it’s a bubble because it hasn’t grown as fast as they expected it to.

    Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks and how that’s going to impact decoupling AI training and operation from Moore’s Law? I’m guessing no.

    Near-perfect analysis, eh? By someone who read and regurgitated analysis by a journalist who writes for a living and may just have an inherent bias towards evaluating information on the future prospects of a technology positioned to replace writers?

    We haven’t even had a public release of multimodal models yet.

    This is about as near perfect of an analysis as smearing paint on oneself and rolling down a canvas on a hill.

    • @[email protected]

      This is like TV being invented in 1927 and then people in 1930 saying that it’s a bubble because it hasn’t grown as fast as they expected it to.

      That’s the exact opposite of a bubble, then. A bubble is when the valuation of some thing grows much faster than the utility it provides.

      Yea sure maybe we’re still in the early stages with this stuff. We have gotten quite a bit further from back when the funny neural network was seeing and generating dog noses everywhere.

      The reason it’s a bubble is because hypemongers like yourself are treating this tech like a literal miracle and serial grifters are shoehorning it into everything like it’s the new money. Who wants shoelaces when you can have AI shoelaces, the shoelaces with AI! Formerly known as the blockchain shoelaces.

      • @kromem

        There’s a difference between a technology being a bubble and companies trying to use buzzwords to market goods.

        Yes, 90% of the companies trying to capitalize on AI are going to go bust within the decade. But that’s because 90% of all companies don’t last 10 years.

        The underlying technology will continue to be advancing rapidly though, and in world changing ways.

        In fact, one of the biggest reasons most current AI companies are doomed is that the tech is going to advance so quickly that they are building themselves into obsolescence atop such fast-changing foundations.

        Nvidia, Microsoft, Google, OpenAI - these guys aren’t going anywhere and are going to be continuing to make bank.

        People repackaging up their products with a thin veneer of specialization are the ones that are screwed.

        But this is again no different from pretty much every trend in history. Would you consider social media to have been a bubble because there were companies that depended entirely on theming your MySpace page, or were built on Facebook Marketplace, and went out of business after short-lived success?

        Tulip mania overvalued something that didn’t have much underlying value. Just like blockchain cryptocurrencies.

        AI isn’t that. The other month our company produced, in a few weeks and for less than a thousand bucks, a project that kept being put off because it would have taken around a year of work and tens of thousands of dollars of additional investment. And the end product was significantly better than we’d have expected from the manual process. There’s an inherent value in the core product of today’s generative AI, even if the bottom feeders that circle every trend are similarly present, trying to catch scraps from unsuspecting marks.

      • @[email protected]OP
        link
        fedilink
        English
        81 year ago

        Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks and how that’s going to impact decoupling AI training and operation from Moore’s Law? I’m guessing no.

        uhh did OP consider my hopes and dreams, powered by the happiness of literally every single American child? im guessing no. what a buffoon

        • @[email protected]

          Only five years ago no one in the computer science industry would have taken a bet that AI would be able to explain why a joke was funny

          IIRC it still can’t do that; if you create variants of jokes, it pattern-matches them to the original joke and fails.

          or perform creative tasks.

          Euh, what? Various creative tasks have been done by AI for a while now. DeepDream is almost a decade old, and before that there were all kinds of procedural generation tools, etc. They could do the same as now: create a very limited set of creative things out of previous data. Same as AI now. ChatGPT cannot create a truly unique new sentence, for example (a thing any of us here could easily do).

          • @[email protected]

            This chatgpt cannot create a truly unique new sentence for example (A thing any of us here could easily do).

            What?

            Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence just increase the temperature.

            • @[email protected]OP
              link
              fedilink
              English
              21 year ago

              Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence just increase the temperature.

              you mean like a Markov chain?

              • @[email protected]

                These models are Markov chains, yes. But many things are Markov chains; I’m not sure that describing these as Markov chains helps gain understanding.

                The way these models generate text is iterative: they do it word by word. Every time they need to generate a word, they randomly select one from their vocabulary. The trick to generating coherent text is that different words are more or less likely depending on the previous words.

                For example, for the sentence “that is a huge grey”, the word “elephant” is more likely than “flamingo”.

                The temperature is how you select your word. If it is low, you will almost always select the most likely word. Increasing the temperature makes the choice more random, giving each word a more equal chance.

                Seeing as these models function randomly, there is nothing preventing them from producing unique text. After all, something like “jsbHsbe d dhebsUd” is unique but not very interesting.
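
                A minimal sketch in Python of that core step, temperature-scaled sampling (the vocabulary and scores here are invented for illustration, not taken from any real model):

                  import math
                  import random

                  def sample_word(scores, temperature=1.0):
                      # Divide raw scores by the temperature: a low temperature
                      # sharpens the distribution toward the top-scoring word,
                      # a high temperature flattens it toward uniform.
                      scaled = [s / temperature for s in scores.values()]
                      # Exponentiate, as in a softmax; subtracting the max keeps
                      # the exponentials numerically stable. random.choices will
                      # normalize the weights into probabilities for us.
                      m = max(scaled)
                      weights = [math.exp(s - m) for s in scaled]
                      # Pick a word at random according to those weights.
                      return random.choices(list(scores), weights=weights, k=1)[0]

                  # Hypothetical next-word scores after "that is a huge grey":
                  scores = {"elephant": 5.0, "wall": 3.0, "flamingo": 0.5}
                  print(sample_word(scores, temperature=0.2))  # almost always "elephant"
                  print(sample_word(scores, temperature=5.0))  # much closer to a coin flip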

          • @kromem

            The pattern matching problem can typically be sidestepped by using emoji representations.

            LLMs are trained by prediction, so one of the practical shortcomings is that slight token variations largely get ignored when the surrounding tokens are too similar to training data.

            If you change the tokens to less common symbolic representations like emojis, the same issues that trip up naive variations often go away.

            Also, chain-of-thought prompting that gets the model to repeat the nuances in the variation can go a long way towards overcoming similarities to a normal form of the query.

            A good example of this problem is the set of variations on the “wolf, goat, and cabbage” river-crossing problem.

            When GPT-4 first came out, there were initially a bunch of comments about how it couldn’t solve variations of the problem. But restating the prompt using “🐺, 🐐, 🥬” in place of the words, and asking it to repeat any associated adjectives when mentioning the nouns as it worked step by step through the problem, would have it solve variations correctly on every attempt.
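
            A hedged sketch of how such a restated prompt might look (the wording is my own illustration, not a prompt anyone in this thread verified):

              # Hypothetical emoji restatement of a river-crossing variant, with a
              # chain-of-thought instruction to restate the changed rules each step.
              prompt = (
                  "A farmer must ferry a 🐺, a 🐐, and a 🥬 across a river. "
                  "The boat fits the farmer plus one passenger. In this variant, "
                  "the 🐺 eats the 🥬, and the 🐐 eats nothing. Solve the puzzle "
                  "step by step, and every time you mention the 🐺, 🐐, or 🥬, "
                  "repeat what it can and cannot eat."
              )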

            As I said earlier, part of the apparent shortcomings in SotA LLMs is that we’re still learning how to maximally use them.

    • @[email protected]

      The industry is still learning how to even use the tech.

      Just like blockchain, right? That killer app’s coming any day now!

      • @kromem

        Not really. I’ve been bearish on crypto as a massively distributed pyramid scheme for over a decade now.

        There’s a huge difference between speculative money exchange in a modern tulip mania and technology that’s actively being used and integrated across industries at scale at an unprecedented rate.

    • @[email protected]

      Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks built on optical components to improve minimisation and how that’s going to impact the decoupling of AI training and operation from Moore’s Law that’s one hope for making processing power gains so that the banner headlines about “Moore’s Law” are pushed back a little further? I’m guessing no.

      You have the insider clout of a 15-year-old with a search engine

      • David Gerard (mod)

        You have the insider clout of a 15-year-old with a search engine

        my god

    • Steve

      Have a browse through some threads on this instance before you talk about what the “computer science industry” was thinking 5 years ago as if this is a group of infants.

      If you feel open to it, consider why people who obviously enjoy computing, and know a lot about it, don’t share your enthusiasm for a particular group of tech products. Find the factors that make these things different.

      You might still disagree, you might change your mind. Whatever the fuck happens, you’ll write more compelling posts than whatever the fuck this is.

      You might even provoke constructive, grown-up, discussions.

      • @[email protected]OP
        link
        fedilink
        English
        71 year ago

        I must note that the poster in question earned the fastest ever ban from this instance, as their post was a perfect storm of greasy smarmy bullshit that felt gross to read, and judging by their post history that’s unfortunately just how they engage with information

        • Steve

          Oh good. Their history was why I relented and wrote something. A typical king shit.

    • @[email protected]

      This is about as near perfect of an analysis as smearing paint on oneself and rolling down a canvas on a hill.

      That sounds perfect to me dawg