I have tried, unsuccessfully, to get various AI models to create a script that will curl or wget the latest Ubuntu LTS desktop torrent, even after that LTS version updates in the future (beyond 24.04.1 LTS). The purpose is that I would like to seed whatever the latest LTS torrent is, and I don’t want to have to keep checking the Ubuntu page for updates; I want it automatic. I know that LTS is slow to change versions, but I am annoyed that AI can’t just write a decent script for this.
I have also downloaded rtorrent as a command-line client, and I will deal with making sure the latest LTS is the one used, as opposed to the prior one, with a different script later, but that’s not what I’m trying to do now.
I am not asking for a human to create this script for me. I am asking why AI models keep getting this so wrong. I’ve tried ChatGPT 4o, I’ve tried DeepSeek, I’ve tried other local models and reasoning models. They all fail. And when I execute their code, get errors, and show them to the models, they still fail, many times in a row. I want to ask Lemmy whether getting an answer is theoretically possible with the right prompt, or if AI just sucks at coding.
This shouldn’t be too hard to do. At https://www.releases.ubuntu.com, they list the releases. When curling the webpage, there’s a list of the releases with version numbers, some marked LTS. Newer versions always have larger numbers. At https://ubuntu.com/download/alternative-downloads, they list the torrents. Also, all desktop release torrents follow the format https://www.releases.ubuntu.com/XX.XX/*desktop*.torrent. I’ve tried to teach these models this shit and to just create a script for me, and holy shit it’s been annoying. The models are incredibly stupid with scripting.
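For the record, here’s roughly the shape of script I keep expecting the models to produce. It’s just a sketch: it assumes the release index keeps printing versions as “XX.XX.X LTS”, that the /XX.XX/ directory layout holds, and that I want the amd64 desktop image. The grep patterns (and dropping the “www.” from the host, since the index lives at releases.ubuntu.com) are my own guesses at the page format, not anything Ubuntu promises.

```bash
#!/usr/bin/env bash
# Sketch: fetch the latest Ubuntu LTS desktop torrent.
# Assumptions (mine, not Ubuntu's): the index at releases.ubuntu.com
# keeps listing versions as "XX.XX.X LTS", torrents stay under /XX.XX/,
# and the amd64 desktop image is the one I want to seed.
set -euo pipefail

# Pull every "NN.NN(.N) LTS" string off the index, version-sort them,
# keep the newest, and drop the trailing " LTS".
latest=$(curl -fsSL https://releases.ubuntu.com/ \
  | grep -oE '[0-9]{2}\.[0-9]{2}(\.[0-9]+)? LTS' \
  | sort -Vu | tail -n1 | awk '{print $1}')

# Torrent directories are keyed by the major version (24.04),
# even for point releases like 24.04.1.
major=$(echo "$latest" | cut -d. -f1,2)

# Scrape the desktop torrent's filename out of that directory listing.
torrent=$(curl -fsSL "https://releases.ubuntu.com/${major}/" \
  | grep -oE 'ubuntu-[0-9.]+-desktop-amd64\.iso\.torrent' \
  | sort -Vu | tail -n1)

# Download it unless we already have it (safe to re-run from cron);
# set -e above aborts early if either scrape comes back empty.
if [ ! -f "$torrent" ]; then
  curl -fsSLO "https://releases.ubuntu.com/${major}/${torrent}"
fi
echo "Latest LTS desktop torrent: $torrent"
```

If a script like that holds up, a weekly cron job dropping the file into rtorrent’s watch directory would presumably handle the seeding side, but that’s the later script.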
I’m not a computer programmer or developer, and I’m picking up more coding here and there just because I want to do certain things in Linux. But I just don’t understand why this is so difficult.
So my question is, is there ANY prompt for ANY model that will output successful code for this seemingly easy task, or is AI still too stupid to do this?
I’ve been watching YouTube videos about actual pilots, and one fun thing is that the autopilot is often better than a human at the mechanics of steering the plane in stable conditions, but it can’t really do anything else, so it’s designed to turn itself off with an alarm whenever it encounters a situation it can’t handle.
I guess that’s because it’s an old-school program with explicit error handling, etc., whereas generative AI is used in such wide scopes that it can’t realistically be programmed to say “I can’t do this,” because its limitations are so poorly defined?
Yep!
There are responsible teams building LLM AIs that do program their LLM to say “I can’t do this.” But they are playing whack-a-mole: they see a bad output, and they write new code to catch and replace it. But there’s always a new, inventive bad output hiding in the depths of the model, waiting for the input that sets it free.
So only really old LLMs that have had millions of users using them daily for 30+ years without issues should really be trusted. (And there are - currently - no useful LLMs that meet that criterion.)
The other intrinsic problem is that LLMs are primarily trained on the writings of self-declared experts, and they have no way of conveying a confidence level through writing tone.
This means that an LLM uses an equally authoritative writing style when regurgitating things it has trained on millions of times (things commonly accepted as fact) as when regurgitating things it has trained on only once (lies, sarcasm).
I never knew that, and it explains a lot.