Interesting to see the benefits and drawbacks called out.

  • Sentient Loom
    11 months ago

    Yes, your message is clear.

    To answer your original question, I have no idea what it will look like when software writes and reviews itself. It seems obvious that human understanding of the codebase would quickly disappear under that process, and at some point it would grow beyond the capacity of human refactoring.

    My first thought is that such a codebase would eventually become incoherent and irredeemably buggy. But somebody (probably not an AI, at first) will teach ChatGPT to refactor coherently.

    But the concept of coherence here becomes a major philosophical problem, and it’s very difficult to imagine how to make it practical in the long run.

    I think for now the practical necessity is to put extra emphasis on human peer review and refactoring. I personally haven’t used AI to write code yet.

    My dark side would love to see some greedy corporations wreck their codebases by over-relying on AI to replace their coders. Then debugging becomes a nightmare, because nobody actually wrote the code, and they end up spending more time bug-fixing than they would have spent writing it in the first place.

    Edit: missing word

    • peopleproblems
      11 months ago

      And while some of us may be out of a job temporarily, historically, when companies make these big-brain decisions, we end up getting to come back and charge 4x what we used to get paid to get it working again.

      When I found out that one of the contractors I worked with wasn't one of the cheap ones, but had instead been rehired after retiring at a 400% pay bump, I decided that maybe I needed to understand the business needs better.

      • Sentient Loom
        11 months ago

        Wow, that’s a huge pay bump lol. Maybe I should also start studying those business needs more.