This article nails it. Just because LLMs don’t deliver flawless code doesn’t mean you shouldn’t use their help; refusing to seems completely short-sighted to me. Just don’t rely on a single model, and don’t expect it to provide the exact solution to your prompt at the first attempt. The same is often true when collaborating with another human being.
“Just because code looks good and runs without errors doesn’t mean it’s actually doing the right thing. No amount of meticulous code review—or even comprehensive automated tests—will demonstrably prove that code actually does the right thing. You have to run it yourself!
Proving to yourself that the code works is your job. This is one of the many reasons I don’t think LLMs are going to put software professionals out of work.
LLM code will usually look fantastic: good variable names, convincing comments, clear type annotations and a logical structure. This can lull you into a false sense of security, in the same way that a grammatically correct and confident answer from ChatGPT might tempt you to skip fact checking or applying a skeptical eye.
The way to avoid those problems is the same as how you avoid problems in code by other humans that you are reviewing, or code that you’ve written yourself: you need to actively exercise that code. You need to have great manual QA skills.
A general rule for programming is that you should never trust any piece of code until you’ve seen it work with your own eyes—or, even better, seen it fail and then fixed it.”
https://lnkd.in/dVV7knTD
#AI #GenerativeAI #SoftwareDevelopment #Programming #PromptEngineering #LLMs #Chatbots
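To make the quoted advice about actively exercising code concrete, here is a minimal, hypothetical sketch (the function and test values are my own illustration, not from the article): a plausible-looking LLM-generated helper plus the kind of quick checks you’d run before trusting it.

```python
# Hypothetical example: an LLM-suggested helper that reads well and "looks right".
def percentage_change(old: float, new: float) -> float:
    """Return the change from old to new, expressed as a percentage."""
    return (new - old) / old * 100

# The article's point: don't just review it, run it against answers you already know.
assert percentage_change(100, 110) == 10.0   # +10%, as expected
assert percentage_change(80, 60) == -25.0    # -25%, as expected

# And poke at the edges until you see it fail, then fix it:
try:
    percentage_change(0, 5)
except ZeroDivisionError:
    print("old=0 blows up; the polished-looking code never handled it")
```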
@[email protected] Oh fuck off. Why the fuck should we use these bullshit generators only to have to fine toothcomb the garbage they produce? #FuckAI
@joachim: You have every right not to use LLMs. Personally, I find them a great help for improving my productivity. Every person has their own reasons for using or not using generative AI. Nevertheless, I’m afraid that this technology - like many other productivity-increasing technologies - will become a fact of our daily lives. The issue is how best to adapt it to our own advantage. Open-source LLMs should be preferred, of course. But I don’t think mere stubbornness is a good strategy for dealing with new technology.
“If we don’t use AI, we might be replaced by someone who will. What company would prefer a tech writer who fixes 5 bugs by hand to one who fixes 25 bugs using AI in the same timeframe, with a “good enough” quality level? We’ve already seen how DeepSeek AI, considered on par with ChatGPT’s quality, almost displaced more expensive models overnight due to the dramatically reduced cost. What company wouldn’t jump at this chance if the cost per doc bug could be reduced from $20 to $1 through AI? Doing tasks more manually might be a matter of intellectual pride, but we’ll be extinct unless we evolve.”
https://idratherbewriting.com/blog/recursive-self-improvement-complex-tasks
@[email protected] Climate change, you fuckhead.
@[email protected] Just because Silicon Valley companies over-engineer their models doesn’t mean it necessarily has to be that way… Look at DeepSeek: https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md
Thanks for this thoughtful and nuanced opinion.
@SamuraiBeandog I don’t owe subtlety or nuance to people who are actively worsening climate collapse, and all for bullshit. They’re criminals.