• @chemical_cutthroat
    75 points · 11 hours ago

    Yeah. Everyone forgot the second half of “Trust, but Verify”. If I ask an LLM a question, it’s only because I’m not sure how to look up the info myself. Once it gives me an answer, I check that answer against sources, because now I have a much better idea of what to search for. Trusting an LLM blindly is just as bad as going on Facebook for healthcare advice.

    • @danc4498
      2 points · 4 hours ago

      I thought it was “butt verify”. Whoops.

    • @eronth
      20 points · 9 hours ago

      I find LLMs very useful for setting up tech stuff. “How do I do xyz in Docker?” They do a great job of boiling down several disjointed how-tos that don’t quite get me there into one that’s actually usable. I use them when googling and following articles isn’t getting me anywhere, and they’ve often saved me so much time.
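
      For example (a hypothetical sketch; the image tag, password, and volume name are placeholders I made up, not from the thread), asking “how do I run Postgres with a persistent volume in Docker Compose?” might get boiled down to one usable file:

      ```yaml
      # Hypothetical compose file; values are illustrative placeholders.
      services:
        db:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD: example   # placeholder, not a real secret
          ports:
            - "5432:5432"
          volumes:
            - db-data:/var/lib/postgresql/data   # named volume keeps data across restarts

      volumes:
        db-data:
      ```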

      • @[email protected]
        14 points · 5 hours ago

        They are also amazing at generating configuration that’s subtly wrong (see the sketch below).

        For example, if the bad LLM-generated configurations I’ve caught during pull request reviews are any indication, there are plenty of less experienced teams out there running broken Kubernetes deployments.

        Now, to be fair, inexperienced people would make similar mistakes, but inexperienced people are capable of learning from their mistakes.
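
        To make that concrete (a hypothetical sketch, not one of the actual review findings; the name, image, and ports are made up): the manifest below is syntactically valid and deploys cleanly, but the liveness probe checks port 8080 while the container listens on 3000, so Kubernetes kills the pod in an endless restart loop.

        ```yaml
        # Hypothetical "subtly wrong" manifest: valid YAML, deploys fine,
        # then fails at runtime because of the probe/port mismatch.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
                - name: web
                  image: example/web:1.0      # placeholder image
                  ports:
                    - containerPort: 3000     # the app actually listens here
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: 8080              # subtle bug: probe always fails
        ```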

    • MudMan
      26 points · 11 hours ago

      Yep. Or because you can recognize the answer but can’t remember it off the top of your head. Or to check for errors in a piece of text or code or a translation, or…

      It’s not “trust but verify”, which I hate as a concept. It’s just a matter of what the tech can and cannot do. It’s not a search engine finding matches to a query inside a large set of content. It’s a stochastic text generator giving you the most likely follow-up based on its training dataset. It’s very good autocorrect, not mediocre search.