I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand: there is no reason to avoid getting better at writing.

  • BassTurd · 10 days ago

    Truth. This does a bit more than a typical linter; that was just a simple example I riffed off. Sometimes it helps me find logic errors as well: I’ll highlight a block of code, ask why it’s doing or not doing the thing I expect, and go from there. I’ve probably only used it a dozen times for basic troubleshooting over the past six months, when I get stumped on something.

    • fizzle@quokk.au · 10 days ago

      Yeah, so I’ve not used Claude, but I have used a number of models from Hugging Face.

      I haven’t used them extensively.

      In my experience, they provide a great starting point for things I haven’t interacted with much. I might have spent 10,000 hours with JS but never touched a Firefox extension, a Docker container, or a Nix script. With JS, an LLM is not much more productive than just coding by myself with non-AI tools. With the other things, it can give you a really good leg up that saves a heap of effort in getting started.

      What I have noticed, though, is that it’s not very good at fine-tuning things. Your first prompt might do 80% of the job of creating a Dockerfile for you. Refining your prompt might get you another 5% of the way, but the last 15% involves figuring out yourself what it’s doing and what the best way to do it might be.

      With these sorts of tasks, models really seem to suffer from not knowing which packages or conventions have been deprecated. This is really obvious with an immature ecosystem like Nix.

      IMO, LLMs are not completely without virtue, but knowing when and when not to use them is challenging.

      • sloppy_diffuser@sh.itjust.works · 10 days ago

        With these sorts of tasks, models really seem to suffer from not knowing which packages or conventions have been deprecated. This is really obvious with an immature ecosystem like Nix.

        This is where custom setups will start to shine.

        https://github.com/upstash/context7 - Pulls version-specific package documentation.

        https://github.com/utensils/mcp-nixos - Similar to the above, but for Nix (including version-specific queries) and with more sources.

        https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking - Breaks down problems into multiple steps instead of trying to solve them all at once. This helps isolate the important information per step, so “the bigger picture” of the entire prompt doesn’t pollute the results. It sort of simulates reasoning: instead of finding the best match for all keywords, it breaks the query down to find the best matches per step and then assembles the final response.

        https://github.com/CaviraOSS/OpenMemory - Long conversations tend to suffer as the working memory (context) fills up: it gets compressed and details are lost. With this (and many other similar tools) you can have it remember and recall things, with or without a human in the loop to validate what’s stored. Great for complex planning or recalling details. I essentially have a loop set up with global instructions to periodically emit reinforced, codified instructions to a file (e.g., AGENTS.md) with human review. Combined with sequential thinking, it will identify contradictions and prompt me to resolve any ambiguity.
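        For what it’s worth, servers like the ones above get registered in your MCP client’s config. A minimal Claude Desktop-style sketch might look like this (the invocation commands are how I read the projects’ READMEs — check each repo for the current package name, and note that other clients use different config layouts):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "mcp-nixos": {
      "command": "uvx",
      "args": ["mcp-nixos"]
    }
  }
}
```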

        The quality of the output is like going from 80% to damn near 100% as your knowledge base grows from external memory and codified instructions in files. I’m still lazy sometimes and will use something like Kagi Assistant for a quick question or web search, but they have a pretty good baseline setup with sequential thinking in their online tooling.
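        To make the “codified instructions with human review” loop concrete, here’s a rough standalone sketch of the idea — in my actual setup this lives in the agent’s global instructions, not a script, and the file name, example rules, and approval callback here are all illustrative:

```python
from pathlib import Path

def codify_rules(candidate_rules, agents_file, approve):
    """Append approved, non-duplicate rules to a shared instructions file.

    candidate_rules: rules the agent proposes after a work session.
    approve: human-in-the-loop callback; returns True to keep a rule.
    """
    existing = agents_file.read_text() if agents_file.exists() else ""
    # Skip rules already codified, then ask the human about the rest.
    kept = [r for r in candidate_rules if r not in existing and approve(r)]
    with agents_file.open("a") as f:
        for rule in kept:
            f.write(f"- {rule}\n")
    return kept

# Example run: auto-approve everything (a real loop would prompt a human).
kept = codify_rules(
    ["pin nixpkgs to a release branch", "avoid deprecated builtins"],
    Path("AGENTS.md"),
    approve=lambda rule: True,
)
print(kept)
```

        Running it again with the same candidates appends nothing, since the rules are already in the file — that’s what keeps the instructions file from filling with duplicates as the loop repeats.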