the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @[email protected]):

Naw, I figured it out; they absolutely don’t care if AI doesn’t work.

They really don’t. They’re pot-committed; these dudes aren’t tech pioneers, they’re money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it’s literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it’s not real and they don’t care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don’t know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it’s total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it’s all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

    • @[email protected]OP

      Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks and how that’s going to impact decoupling AI training and operation from Moore’s Law? I’m guessing no.

      uhh did OP consider my hopes and dreams, powered by the happiness of literally every single American child? im guessing no. what a buffoon

      • @[email protected]

        Only five years ago no one in the computer science industry would have taken a bet that AI would be able to explain why a joke was funny

        IIRC it still can’t do that: if you create variants of jokes, it pattern-matches them to the OG joke and fails.

        or perform creative tasks.

        Euh, what? Various creative tasks have been done by AI for a while now. Deepdream is almost a decade old, and before that there were all kinds of procedural generation tools, etc. Those could do the same as now: create a very limited set of creative things out of previous data. Same as AI now. This ChatGPT cannot create a truly unique new sentence, for example (a thing any of us here could easily do).

        • @[email protected]

          This ChatGPT cannot create a truly unique new sentence, for example (a thing any of us here could easily do).

          What?

          Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence, just increase the temperature.

          • @[email protected]OP

            Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence, just increase the temperature.

            you mean like a Markov chain?

            • @[email protected]

              These models are Markov chains, yes. But many things are Markov chains; I’m not sure that describing these as Markov chains helps gain understanding.

              The way these models generate text is iterative: they do it word by word. Every time they need to generate a word, they randomly select one from their vocabulary. The trick to generating coherent text is that different words are more or less likely depending on the previous words.

              For example, after the phrase “that is a huge grey”, the word “elephant” is more likely than “flamingo”.

              The temperature controls how you select the word. If it is low, you will almost always select the most likely word. Increasing the temperature makes the choice more random, giving each word a more equal chance.

              Seeing as these models function randomly, there is nothing preventing them from producing unique text. After all, something like “jsbHsbe d dhebsUd” is unique but not very interesting.
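              A minimal sketch of that sampling step, assuming a made-up five-word vocabulary and invented logit scores rather than a real model; a low temperature concentrates the choice on the most likely word, while a high temperature flattens the distribution:

              ```python
              import math
              import random

              # Toy "logits" for the next word after "that is a huge grey ..."
              # (invented numbers purely for illustration; a real model scores
              # tens of thousands of tokens, not five words).
              logits = {"elephant": 5.0, "wall": 3.5, "area": 2.0,
                        "flamingo": 0.5, "jsbHsbe": -4.0}

              def sample_next_word(logits, temperature=1.0):
                  # Divide each logit by the temperature, then softmax into probabilities.
                  scaled = {word: value / temperature for word, value in logits.items()}
                  top = max(scaled.values())
                  exps = {word: math.exp(value - top) for word, value in scaled.items()}
                  total = sum(exps.values())
                  probs = {word: e / total for word, e in exps.items()}
                  # Low temperature: "elephant" dominates. High temperature: nearly uniform.
                  words = list(probs)
                  return random.choices(words, weights=[probs[w] for w in words])[0]

              print(sample_next_word(logits, temperature=0.2))  # almost always "elephant"
              print(sample_next_word(logits, temperature=5.0))  # much more random
              ```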

        • @kromem

          The pattern matching problem can typically be sidestepped by using emoji representations.

          LLMs are trained by prediction, so one of the practical shortcomings is that slight token variations largely get ignored when the surrounding tokens are too similar to training data.

          If you change the tokens to less common symbolic representations like emojis, the same issues that trip up naive variations often won’t.

          Also, chain-of-thought prompting that gets the model to repeat the nuances in the variation can go a long way towards overcoming similarities to a normal form of the query.

          A good example of this problem is the set of variations on the “wolf, goat, and cabbage” river-crossing puzzle.

          When GPT-4 first came out, there were initially a bunch of comments about how it couldn’t solve variations of the problem. But restating the prompt using “🐺, 🐐, 🥬” in place of the words, and asking it to repeat any associated adjectives when mentioning the nouns as it worked step by step through the problem, would have it solve variations correctly on every attempt.
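          As a toy illustration of that restated prompt, here is a hypothetical helper (the substitution table, function name, and puzzle wording are made up for the example); it only builds the prompt string and never calls a model:

          ```python
          # Hypothetical sketch of the emoji-substitution trick described above:
          # swap the usual nouns for emoji, then ask the model to repeat each
          # noun's adjectives as it works step by step. No model call is made.
          SUBSTITUTIONS = {"wolf": "🐺", "goat": "🐐", "cabbage": "🥬"}

          def restate_with_emoji(puzzle_text: str) -> str:
              for noun, emoji in SUBSTITUTIONS.items():
                  puzzle_text = puzzle_text.replace(noun, emoji)
              return (puzzle_text
                      + "\nWork through this step by step, and repeat any adjectives "
                        "attached to 🐺, 🐐, or 🥬 every time you mention them.")

          # A variation of the classic puzzle, the kind that trips up naive pattern matching.
          variant = ("A farmer must ferry a vegetarian wolf, a carnivorous goat, and an "
                     "ordinary cabbage across a river, taking one at a time.")
          print(restate_with_emoji(variant))
          ```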

          As I said earlier, part of what looks like shortcomings in SotA LLMs is that we’re still learning how to use them to their fullest.