• @[email protected]
    10 points · 6 months ago

    Until someone uses it for a little more than boilerplate, and the reviewer nods that bit through because it’s hard to review and not the kind of mistake the human who “wrote” it would normally make.

    Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

    • just another dev
      6 points · 6 months ago

      Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

      Undoubtedly. Hell, even when you do mark it as such, this will happen. Because bugs created by humans also get deployed.

      Basically what you’re saying is that code review is no guarantee against shipping bugs.

    • @HauntedCupcake
      1 point · 6 months ago (edited)

      Agreed. Using LLMs for code requires you to be an experienced dev who can understand what it pukes out, and for those very specific and disciplined people it’s a net positive.

      Generally, however, I agree it’s more risk than it’s worth.