I’ve seen a lot of sentiment around Lemmy that AI is “useless”. I think this tends to stem from the fact that AI has not delivered on, well, anything the capitalists pushing it have promised it would. That is to say, it has failed to meaningfully replace workers with a less expensive solution - AI products that actually attempt to replace people’s jobs are incredibly expensive (and environmentally irresponsible), and the companies simply lie and say they’re not. It’s subsidized by that sweet, sweet VC capital so they can keep the lie up. And I say “attempt” because AI is truly horrible at actually replacing people. It’s going to make mistakes, and while everybody’s been trying real hard to make it less wrong, it’s just never gonna be “smart” enough to not need a human reviewing its behavior. Then you’ve got AI being shoehorned into every little thing that really, REALLY doesn’t need it. By that measure, I’d say AI is useless.

But AIs have been very useful to me. For one thing, they’re much better at googling than I am. They save me time by summarizing articles down to the broad strokes, and I can decide whether I want to go into the details from there. They’re also good idea generators - I’ve used them in creative writing just to explore things like “how might this story go?” or “what are interesting ways to describe this?”. I never really use what comes out of them verbatim - whether image or text - but it’s a good way to explore, and seeing things expressed in ways you never would’ve thought of (and also the juxtaposition of seeing them next to very obvious expressions) tends to push your mind in new directions.

Lastly, I don’t know if it’s just because there’s an abundance of Japanese language learning content online, but GPT-4o has been incredibly useful for learning Japanese. I can ask it things like “how would a native speaker express X?” and it gives me good answers that even my Japanese teacher agreed with. It can also give some incredibly accurate breakdowns of grammar. I’ve tried it with less popular languages like Filipino and it just isn’t the same, but as far as Japanese goes, it’s like having a tutor on standby 24/7. In fact, that’s exactly how I’ve been using it - I have it grade my own translations and give feedback on what could’ve been said more naturally.

All this to say: AI, when used as a tool rather than a dystopian stand-in for a human, can be a very useful one. So, what are some use cases you guys have where AI actually is pretty useful?

  • @SlopppyEngineer · 1 day ago

    “The AI effect” refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.

    • AlexanderESmith · 1 day ago

      LLMs are not AI. They do not reason. They have no agency. They have no memory. They aren’t self-aware, or indeed, aware of anything at all.

      The goalposts aren’t moving; they just aren’t an example of intelligence. You can argue that LLMs acquire and use knowledge, but they don’t understand what you asked, or what they’re saying. They’re just creating a block of text that looks like what a human would write, based on statistics, one word at a time, using a prompt as a seed.

      LLMs are just statistical models that generate realistic-looking output. It’s an illusion of intelligence. A shadow of understanding. The people buying into their alleged abilities are wildly overestimating them due to ignorance and apathy.
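      The “statistics, one word at a time, prompt as a seed” loop being described can be sketched with a toy bigram model. This is purely illustrative - real LLMs use neural networks over subword tokens, not word-count tables - but the generation loop has the same shape:

```python
# Toy bigram "language model": generate text one word at a time by
# sampling a statistically likely successor of the previous word.
# Illustrative sketch only; real LLMs are far more sophisticated.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed, length=5):
    """Extend the seed word by repeatedly sampling a likely next word."""
    out = [seed]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:  # dead end: no observed successor
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. a plausible-looking but meaning-free phrase
```

      The output looks locally plausible because every adjacent pair of words was seen in the training text, yet the program has no idea what any of the words mean - which is exactly the point being argued.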

      • @SlopppyEngineer · 24 hours ago

        > They do not reason. They have no agency. They have no memory. They aren’t self-aware, or indeed, aware of anything at all.

        And that’s true. But those would be properties of a general intelligence, so of course LLMs are not a general intelligence.

        LLMs still implement a mastery of language, which is generally seen as an aspect of intelligence. Programs implementing just one aspect or one task are usually called narrow AI. It’s still within the domain of AI.

        Chess and checkers algorithms are also seen as the first implementations of AI. Very narrow AI, of course, and that intelligence didn’t transfer well to other tasks.

        • AlexanderESmith · 23 hours ago

          > LLMs still implement a mastery of language

          I would argue that they do not. Picking statistically likely strings of words based on previous writings is not mastery; it’s mimicry.

          In order to have a mastery of language, one would first need to understand what the language represents, form an idea, then describe that idea using what they know about its underlying concepts and about the language itself. LLMs do none of these things.

          > Chess and checkers algorithms are also seen as the first implementations of AI. Very narrow AI, of course, and that intelligence didn’t transfer well to other tasks.

          Chess and checkers algorithms are also not examples of intelligence. Again, they’re just playing statistics based on their knowledge of the rules of the game and the moves their opponents are known to deploy.

          It’s easy to see why that ability didn’t translate well to any other task; the system had no concept of what it was doing, or how it might apply to other - also unknowable - concepts.

          A human can play chess and learn that they need to sacrifice pieces (losing a battle) to win the overall game (winning the war), and apply that to business or even other games. A human can do this because they understand each concept, both unto itself and in the greater context of their overall experience. A human also has the ability to think of these concepts in an abstract way and adapt them to other contexts. These things are intelligence.

          • @SlopppyEngineer · 22 hours ago

            And your brain is full of neurons that biologically implement statistics and give an output based on previous things heard and read. Down at that level, it’s still just statistics. Somehow that’s different because it’s biological.

            And some of my colleagues are experts in mimicry. They don’t really understand what they’re doing; they just say or do the same things they were trained on, over and over, because they get a reward. If true understanding is the bar, many humans would need to be excluded.

            • AlexanderESmith · 16 hours ago

              Hey, I’ll be one of the first in line to suggest that our brains are not special, magical, impossible-to-create systems. We could probably approximate human-level ability with a few antagonistic models, an image processor, and (crucially) a simple body with locomotion routines (because I don’t believe human-level intelligence is possible without being able to directly interface with the world).

              My thesis - from my first post in this thread - is that this one system, acting on its own, doing nothing but producing text, is not AI. It’s not intelligence, because it doesn’t know what it’s saying; it’s just spitting out (mathematically guided, syntactically-correct-looking, stolen-from-humans) random words.

              • @SlopppyEngineer · 1 hour ago

                Ok, let’s check the dictionary.

                Artificial intelligence, noun, The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. In later use also: software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data. Also as a count noun: an instance of this type of software; a (notional) entity exhibiting such intelligence. Abbreviated AI.

                So it would still be AI - just not up to your standards. They really should make some level system, like the SAE levels of driving automation.
