I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns, but we’ll set ethics aside for this question.

In creative works like writing or art, it feels soulless and poor quality. In programming, at best it’s a shortcut that lets you avoid deeper learning; at worst it spits out garbage code that you spend more time debugging than you would have spent just writing it yourself.

When I see AI ads directed toward individuals, the selling point is convenience. But I would feel robbed of the human experience if I used AI in place of human interaction.

So what’s the point of it all?

  • @[email protected]
    link
    fedilink
    218 hours ago

    Those are just ideas that were previously “generated” by humans though, that the LLM learned

    • TheRealKuni

      > Those are just ideas that were previously “generated” by humans though, that the LLM learned

      That’s not how modern generative AI works. It isn’t sifting through its training dataset to find something that matches your query, like some kind of search engine. It’s taking your prompt and passing it through its massive statistical model to arrive at a result that satisfies your request.
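
      You can see this directly. Here’s a minimal sketch using the Hugging Face transformers library and the small GPT-2 model (standing in for any modern LLM; the principle is the same): the model maps your prompt to a probability distribution over possible next tokens, and generation is just repeated sampling from that distribution. Nothing is looked up in the training data at inference time.

      ```python
      # Minimal sketch: next-token prediction with GPT-2 via Hugging Face transformers.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "The capital of France is"
      inputs = tokenizer(prompt, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

      # The last position holds the model's scores for the *next* token.
      next_token_probs = torch.softmax(logits[0, -1], dim=-1)

      # Show the five most likely continuations and their probabilities.
      top = torch.topk(next_token_probs, 5)
      for prob, token_id in zip(top.values, top.indices):
          print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
      ```

      Run that in a loop (append the sampled token, feed the longer sequence back in) and you have text generation. There’s no step where it consults stored training examples.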

      • @[email protected]
        link
        fedilink
        22 hours ago

        I feel like “passing it through a statistical model”, while absolutely true on a technical implementation level, doesn’t get to the heart of what it is doing in a way people can understand. It uses the math terms, potentially deliberately, to obfuscate and make it seem simpler than it is. It’s like reducing it to “it just predicts the next word”. Technically true, but I could implement a black-box next-word predictor by sticking a real person in the box and asking them to predict the next word, and it’d still meet that description.

        The statistical model seems to be building some sort of conceptual map of word relationships that approximates something very much like an actual understanding of what the words mean and how they are used semantically, with random noise thrown into the mix in just the right amounts to generate surprises that look very much like creativity.
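
        A toy illustration of the “word relationships” part (the vectors below are invented for the example; a real model learns vectors with hundreds or thousands of dimensions from data): words become points in a geometric space, and related words end up pointing in similar directions.

        ```python
        # Toy word-embedding sketch. These vectors are made up for illustration;
        # real models learn much higher-dimensional ones during training.
        import numpy as np

        embeddings = {
            "horse":  np.array([0.9, 0.8, 0.1, 0.0]),
            "pony":   np.array([0.8, 0.9, 0.2, 0.1]),
            "banana": np.array([0.0, 0.1, 0.9, 0.8]),
        }

        def cosine(a, b):
            # 1.0 means "pointing the same way"; near 0 means unrelated.
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        print(cosine(embeddings["horse"], embeddings["pony"]))    # ~0.99: related
        print(cosine(embeddings["horse"], embeddings["banana"]))  # ~0.12: unrelated
        ```

        The “random noise” part is real too: at generation time the model samples from its probability distribution rather than always taking the single most likely word, and a temperature setting controls how adventurous that sampling gets.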

        Decades before LLMs were a thing, Zompist wrote a nice essay on Searle’s Chinese room thought experiment that I think provides some useful conceptual models: http://zompist.com/searle.html

        > Searle’s own proposed rule (“Take a squiggle-squiggle sign from basket number one…”) depends for its effectiveness on xenophobia. Apparently computers are as baffled at Chinese characters as most Westerners are; the implication is that all they can do is shuffle them around as wholes, or put them in boxes, or replace one with another, or at best chop them up into smaller squiggles. But pointers change everything. Shouldn’t Searle’s confidence be shaken if he encountered this rule?
        >
        > If you see 马, write down horse.
        >
        > If the man in the CR encountered enough such rules, could it really be maintained that he didn’t understand any Chinese?
        >
        > Now, this particular rule still is, in a sense, “symbol manipulation”; it’s exchanging a Chinese symbol for an English one. But it suggests the power of pointers, which allow the computer to switch levels. It can move from analyzing Chinese brushstrokes to analyzing English words… or to anything else the programmer specifies: a manual on horse training, perhaps.
        >
        > Searle is arguing from a false picture of what computers do. Computers aren’t restricted to turning 马 into “horse”; they can also relate “horse” to pictures of horses, or a database of facts about horses, or code to allow a robot to ride a horse. We may or may not be willing to describe this as semantics, but it sure as hell isn’t “syntax”.
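
        To make the pointer idea concrete, here’s a toy sketch (all names and data invented for illustration). The moment a symbol resolves to structured data the program can act on, rather than just another symbol, “shuffling squiggles around” stops being a fair description of what’s happening.

        ```python
        # Toy contrast: Searle's symbol-for-symbol rule vs. a pointer-based one.
        # All data here is invented for illustration.

        # Searle's picture: pure substitution, one squiggle for another.
        substitution = {"马": "horse"}

        # The pointer picture: the symbol points at structured knowledge
        # that the program can follow and act on.
        knowledge = {
            "马": {
                "english": "horse",
                "facts": ["herbivore", "can sleep standing up"],
                "image": "horse.jpg",
                "describe": lambda: "a large four-legged mammal",
            }
        }

        symbol = "马"
        print(substitution[symbol])   # horse  (still just syntax?)
        entry = knowledge[symbol]
        print(entry["facts"][1])      # can sleep standing up
        print(entry["describe"]())    # a large four-legged mammal
        ```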