• Pennomi
    32
    8 months ago

    Copilot is often a brilliant autocomplete, and that alone will save workers plenty of time if they learn to use it.

    I know that as a programmer, I spend a large percentage of my time simply transcribing whatever’s in my brain into correct syntax in the editor, and Copilot speeds that process up dramatically.

      • Pennomi
        30
        8 months ago

        If you blindly accept autocompletion suggestions, then you deserve what you get. AIs aren’t gods.

        • @TrickDacy
          6
          8 months ago

          OMG thanks for being one of like three people on earth to understand this

      • @[email protected]
        2
        8 months ago

        “you don’t catch it”

        That’s on you then. Copilot even notes very explicitly, right in the chat, that the AI can be wrong. If you just blindly accept anything without confirming it yourself, that’s not the tool’s fault.

    • @[email protected]
      9
      8 months ago

      I feel like the process of getting the code right is how I learn. If I just type vague garbage in and the AI tool fixes it up, I’m not really going to learn much.

      • Pennomi
        6
        8 months ago

        Autocomplete doesn’t write algorithms for you; it writes syntax. (Unless the algorithm is trivial.) You could use your brain to learn just the important stuff and let the AI handle the minutiae.

      • @TrickDacy
        4
        8 months ago

        Where “learn” means “memorize arbitrary syntax that differs across languages”? Anyone trying to use Copilot as a substitute for learning concepts is going to have a bad time.

      • @[email protected]
        4
        8 months ago

        AI can help you learn by chiming in about things you didn’t know you didn’t know. I wanted to compare images to ones in a dataset that may have been resized, and the solution the AI gave me involved blurring the images slightly before comparing them. I pointed out that this seemed wrong: wouldn’t slight differences in the files produce different hashes? But the response was that the algorithm being used was perceptual hashing, which only needs images to be approximately the same to produce the same hash, and the blurring was there to make this work better. Since I know AI often makes shit up, I of course did more research and tested that the code worked as described: it did, and it was all true.

        If I hadn’t been using AI, I would have wasted a bunch of time trying to get the images pixel-perfect identical to work with a naive hashing algorithm, because I wasn’t aware of a better way to do it. Since I used AI, I learned more quickly about what solutions are available. I find this happens pretty often; it actually knows a lot that I wasn’t aware of or had a false impression of. I can see how someone might use AI as a programming crutch and fail to pay attention or learn what the code does, but it can also be used in a way that helps you learn.
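
        If it helps anyone, here’s a minimal sketch of that approach in Python, assuming the Pillow and imagehash libraries; the file names and distance threshold are made up for illustration, and I don’t know exactly what code the AI actually generated:

            from PIL import Image, ImageFilter
            import imagehash  # pip install imagehash pillow

            def fingerprint(path, blur_radius=2):
                # A slight blur smooths out resizing/compression noise before
                # hashing, so near-identical images land on nearby hashes.
                img = Image.open(path).convert("L")
                img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
                return imagehash.phash(img)  # perceptual hash (pHash)

            # Subtracting two hashes gives their Hamming distance:
            # a small distance means the images are probably the same picture.
            if fingerprint("query.jpg") - fingerprint("dataset_copy.jpg") <= 8:
                print("likely the same image despite resizing")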

    • Tiefling IRL
      8
      8 months ago

      I use AI a lot as a SWE too. The other day I used it to remove an old feature flag from our server graphs, along with all the now-deprecated code, in one click. Unit tests still passed afterward; it saved me like 1-2 hours of manual work.

      It’s good for boilerplate and refactors more than anything.
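
      To sketch what that kind of flag-removal refactor looks like (hypothetical flag and function names, not our actual code), it’s mostly mechanical deletion:

          # Hypothetical example: a flag that has been rolled out to 100%.
          FEATURE_FLAGS = {"new_checkout_flow": True}

          def run_new_checkout(order):
              print(f"new checkout for {order}")

          def run_legacy_checkout(order):  # deprecated path
              print(f"legacy checkout for {order}")

          # Before the refactor: every call site branches on the flag.
          def checkout_before(order):
              if FEATURE_FLAGS["new_checkout_flow"]:
                  run_new_checkout(order)
              else:
                  run_legacy_checkout(order)

          # After the refactor: the flag check and the dead branch are gone,
          # and run_legacy_checkout itself can then be deleted.
          def checkout_after(order):
              run_new_checkout(order)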