This article nails it. LLMs don't deliver flawless code, but refusing their help on those grounds seems completely short-sighted to me. Just don't rely on a single model, and don't expect it to produce the exact solution to your prompt on the first attempt. That's exactly how it often goes when collaborating with another human being.
“Just because code looks good and runs without errors doesn’t mean it’s actually doing the right thing. No amount of meticulous code review—or even comprehensive automated tests—will demonstrably prove that code actually does the right thing. You have to run it yourself!
Proving to yourself that the code works is your job. This is one of the many reasons I don’t think LLMs are going to put software professionals out of work.
LLM code will usually look fantastic: good variable names, convincing comments, clear type annotations and a logical structure. This can lull you into a false sense of security, in the same way that a grammatically correct and confident answer from ChatGPT might tempt you to skip fact checking or applying a skeptical eye.
The way to avoid those problems is the same as how you avoid problems in code by other humans that you are reviewing, or code that you’ve written yourself: you need to actively exercise that code. You need to have great manual QA skills.
A general rule for programming is that you should never trust any piece of code until you’ve seen it work with your own eyes—or, even better, seen it fail and then fixed it.”
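To make that last rule concrete, here is a minimal sketch of my own (the function name and the bug are hypothetical, not from the article): code that would sail through review, yet only reveals its flaw when you actually run it on a concrete input.

from typing import List

def median(values: List[float]) -> float:
    """Return the median of a non-empty list.

    Looks trustworthy: good name, type hints, docstring.
    Still wrong for even-length input, which only running it reveals.
    """
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # BUG: ignores the even-length case

def median_fixed(values: List[float]) -> float:
    """Correct median: average the two middle values when the length is even."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

if __name__ == "__main__":
    sample = [1.0, 2.0, 3.0, 4.0]
    print(median(sample))               # prints 3.0 -- see it fail: the median is 2.5
    assert median_fixed(sample) == 2.5  # see the fix pass

That's the whole point: the failure is invisible in review and trivially visible at runtime.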
https://lnkd.in/dVV7knTD
#AI #GenerativeAI #SoftwareDevelopment #Programming #PromptEngineering #LLMs #Chatbots
When you start using it, I think.
I did, and I stopped because the code quality was so low that fixing and verifying it ate up too much time.