• @gardylou
    26 days ago

    Yeah, I’m skeptical that any AI can accurately do any of that, at least not any LLM. Like, LLMs aren’t going to be good at decision making based on ever-changing real-time information.

    However, traditional software and apps with good data practices can do this…indeed, some of them have existed for like a decade in certain markets…themselves becoming systems that people and companies try to game, sometimes to counter-productive outcomes.

    • @[email protected]
      26 days ago

      Yeah, llms are a really great advancement in language processing, and the ability to let them hook into other systems after sussing out what the user means is legitimately pretty cool.
      The issue is that people keep mistaking articulate mimicry of confidence and knowledge for actual knowledge and capability.

      It’s doubly frustrating at the moment because people keep thinking that llms are what AI is, and not just a type of AI. It’s like how people now hear “crypto” and assume you’re talking about the currency scheme, which is needlessly frustrating if you work in the security sector.

      Making a system that looked at your purchase history (there’s no real way to get that data reliably otherwise), identified the staple goods you buy often, and then tried to predict the cadence you buy them at would be a totally feasible AI problem. An llm wouldn’t be even remotely appropriate until after the system found the prices by (probably) crudely scraping grocery store websites and wanted to tell you where to go, because llms are good at things like “turn this data into a friendly shopping list message”.
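The cadence idea above is simple enough to sketch. This is a minimal, hypothetical illustration (function and variable names are made up, and it assumes purchase history arrives as sorted dates per item): estimate the typical buying interval as the median gap between purchases.

```python
from datetime import date
from statistics import median

def predict_cadence(history):
    """Given sorted purchase dates for one staple item, estimate the
    typical buying cadence in days. Uses the median gap between
    consecutive purchases as a crude, outlier-resistant estimate."""
    if len(history) < 2:
        return None  # not enough data to infer a cadence
    gaps = [(b - a).days for a, b in zip(history, history[1:])]
    return median(gaps)

# Hypothetical purchase history for one item, e.g. milk
milk = [date(2024, 1, 1), date(2024, 1, 8), date(2024, 1, 15), date(2024, 1, 23)]
cadence = predict_cadence(milk)
print(cadence)  # median of gaps [7, 7, 8] → 7
```

From there, "last purchase date plus cadence" gives a next-expected-purchase estimate; no language model is involved until you want to phrase the result for a human.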

      • @Laereht
        26 days ago

        To be completely fair, the confusion is because of the marketing. You and I both know that Tesla cars can’t really drive themselves, for the same reasons you outlined, but the typical person sees “autonomous mode” or “self-driving” applied to what they are buying.

        People treat llms like something out of a super hero movie because they’re led to believe it to be the case. The people shoveling in the money based on promises and projections are the root cause.

      • @Specal
        25 days ago

        People are just really bad at prompt engineering, so they aren’t good at getting LLMs like Gemini and GPT to do what they want.

        You can train one within a conversation to get good at specific tasks. They’re very useful, you just gotta know how to talk to them.

        • @[email protected]
          25 days ago

          The issue is that it’s a language model. You can go a long way by manipulating language to get useful results, but it’s still fundamentally limited by language’s inability to perform reason, only to mimic it.

          Syntax can only take you so far, and it won’t always take you to the right place. Eventually you need something that can reason about the underlying meaning.

          • @Specal
            25 days ago

            It’s still a computer at the end of the day, so just use logic. It responds well to that: you remove its ability to be creative and tell it what you want to accomplish.

      • kamenLady.
        26 days ago

        I would even say llms are an important part of what will eventually become an AI, not a type of AI in themselves.

        • @[email protected]
          25 days ago

          There’s a conflation of terms.

          One sense of AI is artificial intelligence broadly: a huge swath of computer algorithms, techniques and study relating to machines measuring inputs, pulling information from them, and making decisions based on what they deduce. Sometimes it’s little more than a handful of equations that capture how to group things together by similarity. What matters is that it’s demonstrating intelligence, or some manner of operating on knowledge.
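The "handful of equations that group things by similarity" point can be made concrete with a tiny k-means-style clusterer, one classic example of AI in this first sense. This is an illustrative sketch, not a reference implementation (the naive first-k initialization is a deliberate simplification):

```python
def kmeans(points, k=2, iters=10):
    """Group 1-D points by similarity: each point joins the cluster
    whose mean it is closest to, then means are recomputed."""
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the nearest current center
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[i].append(p)
        # update each center to its cluster's mean (keep old if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.1])
print(sorted(centers))  # two means, one near 1.0 and one near 9.5
```

No learning corpus, no neural network; just a distance measure and a mean, yet it "deduces" structure from its inputs.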

          The other sense of AI is as a synonym for “a general purpose intelligent system of at least human level”.

          Your phone’s autocomplete is an example of the first sense of AI. The second sense doesn’t exist.
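The autocomplete example is worth spelling out, since it shows how modest first-sense AI can be. A minimal sketch, assuming a phone-style next-word suggester is just a bigram frequency table (real keyboards are fancier, but the principle holds):

```python
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count which word most often follows each word in some text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def suggest(model, word):
    """Suggest the most frequent follower of `word`, if any."""
    nxt = model.get(word.lower())
    return nxt.most_common(1)[0][0] if nxt else None

model = build_bigram_model("the cat sat on the mat and the cat ran")
print(suggest(model, "the"))  # "cat" follows "the" most often → cat
```

It operates on knowledge it extracted from inputs, so it counts as AI in the first sense, and it is obviously nowhere near the second.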

          There’s a tendency for people to want to remove the AI label from anything they’re used to, or that isn’t like that second sense.