I saw another article today saying how companies are laying off tech workers because AI can do the same job. But no concrete examples… again. I figure they are laying people off so they can pay to chase the AI dream. Just mortgaging tomorrow to pay for today’s stock price increase. Am I wrong?
“Tech workers” is pretty broad.
Tech Support
Support chatbots already exist for people who would rather ask a question in plain English than search for an answer, and they were around even before LLMs, built on much simpler principles. Having tier-1 support workers work off a flowchart is a thing, and you can definitely make a computer do that even without any learning capability at all. So yes, they can fill some of that role. I don’t know how far it will go, though. I think there are going to be fundamental problems with novel or customer-specific issues, because a model just won’t have been trained on them, and it will have a hard time synthesizing an answer from answers to multiple unrelated problems that happen to be in its training corpus. So I’d say: yeah, to some degree, and we’ve successfully used expert systems and other forms of machine learning in the past to automate some of the basic stuff here. I don’t think this is going to be able to do the field as a whole.
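To make the flowchart point concrete, here’s a minimal sketch in Python of what a zero-learning tier-1 bot amounts to: a hand-written decision tree walked with yes/no answers. The questions and advice are made up purely for illustration.

```python
# A hand-written decision tree walked with yes/no answers: no model, no
# learning. The questions and advice are made up for illustration.

FLOWCHART = {
    "question": "Is the device powered on?",
    "no": {"answer": "Plug it in and hold the power button for five seconds."},
    "yes": {
        "question": "Can you reach the login screen?",
        "no": {"answer": "Reboot once; if it still fails, escalate to tier 2."},
        "yes": {"answer": "Reset the password from the account portal."},
    },
}

def run(node: dict) -> None:
    # Keep asking until we reach a leaf node that holds an answer.
    while "question" in node:
        reply = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if reply.startswith("y") else node["no"]
    print(node["answer"])

if __name__ == "__main__":
    run(FLOWCHART)
```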
Writing software
Can existing LLM systems write software? No. I don’t think that they are an effective tool to pump out code. I also don’t think that the shallow sort of understanding they currently have lends itself to it.
I think that what LLMs work well at is producing content that is different from, but appears to a human to be similar to, other content. That works, to varying degrees, for a variety of uses where the output is consumed by humans.
But humans deal well with errors in what we see. The kinds of errors in AI-generated images aren’t a big issue for us; they just need to cue up our memories of things in our head. Programming languages are not tolerant of errors like that, and I don’t think that there’s a very effective way to get the error rate low enough.
I think that it might be possible to make use of an LLM-driven “warning” system when writing software; I’m not sure if someone has done something like that. Think of something that works the way a grammar checker does for natural language. Having a higher error rate is acceptable there. That might reduce the amount of labor required to write code, though I don’t think that it’ll replace it.
Maybe it’s possible to train a model to recognize common security errors and flag them for a human.
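A rough sketch of what I mean by a warning system rather than an autocorrect: score each function in a file and surface the risky-looking ones for a human to review, changing nothing automatically. The scoring below is a couple of hardcoded heuristics standing in for wherever a trained model would actually plug in; none of it is a real tool.

```python
# Score each function in a Python file and flag the risky-looking ones for a
# human to review; nothing gets changed automatically. suspiciousness() is a
# placeholder heuristic standing in for wherever a trained model would go.

import ast
import sys

def suspiciousness(source: str) -> float:
    score = 0.0
    if "eval(" in source or "exec(" in source:
        score += 0.8
    if "shell=True" in source:
        score += 0.5
    return min(score, 1.0)

def warn(path: str, threshold: float = 0.5) -> None:
    text = open(path).read()
    lines = text.splitlines()
    for node in ast.walk(ast.parse(text)):
        if isinstance(node, ast.FunctionDef):
            body = "\n".join(lines[node.lineno - 1 : node.end_lineno])
            score = suspiciousness(body)
            if score >= threshold:
                print(f"{path}:{node.lineno}: {node.name}() flagged ({score:.2f})")

if __name__ == "__main__":
    warn(sys.argv[1])
```

A higher false-positive rate is fine here, the same way it is for a grammar checker: the human decides what to do with each warning.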
I also think that software development is probably one of the more-heavily-automated fields out there because, well, people who write software make systems to do things over and over. High-level programming languages rather than writing assembly, software libraries, revision control…all that was written to automate away parts of tasks. I think that in general, a lot of the low-hanging fruit has been taken.
Does that mean that I think that software cannot be written by AI? No. I am sure that AI can write software. But I don’t think that the AI systems that we have today, or systems that are slightly tweaked, or systems that just have a larger model, or something along those lines, are going to be what takes over software development. I also think that the hurdles we’d need to clear to have an AI fully write software require getting pretty close to an AI that can do anything a human can do. I think that we will eventually get there, and when we do, we’ll see human labor in general be automated. But I don’t think that OpenAI or Microsoft are a year away from that.
System and network administration
Again, I’m skeptical that interacting with computers is where LLMs are going to be the most-effective. Computers just aren’t that tolerant of errors. Most of the things that I can think of that you could use an AI to do, like automated configuration management or something, already have some form of automated tools in that role.
Also, I think that building a training corpus for this is going to be a pain. That is, I don’t think that sysadmins are generally going to be okay with you logging what they’re doing in order to build one, because in many cases there’s potential for leaks of sensitive information.
And a lot of data in that training corpus is not going to be very timeless. Like, watching someone troubleshoot a problem with a particular network card…I’m not sure how relevant that’s going to be for later hardware.
Quality Assurance
This involves too many different things for me to make a single guess. I think that there are maybe some tasks that some QA people do today that an LLM could do. Instead of using a fuzzer to throw input in for testing, maybe have an AI predict what a human would do.
Maybe it’s possible to build some kind of model mapping English-language instructions to mouse operations on a screen, so that a test step written in plain English could be turned into actions on that screen.
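Interface-wise, I’m picturing something like the sketch below. In a real system the element list would come from an accessibility API or a vision model; the keyword matcher here is a toy stand-in for a trained model, and the example screen is invented.

```python
# Map a plain-English test step to an action on screen. The element list would
# come from an accessibility API or a vision model in a real system; the
# keyword matcher here is a toy stand-in for a trained model.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Element:
    label: str
    x: int
    y: int

@dataclass
class Click:
    x: int
    y: int

def resolve(instruction: str, screen: list[Element]) -> Click | None:
    words = instruction.lower().split()
    for el in screen:
        if el.label.lower() in words:
            return Click(el.x, el.y)
    return None

screen = [Element("Submit", 400, 320), Element("Cancel", 500, 320)]
print(resolve("Click the Submit button", screen))  # Click(x=400, y=320)
```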
But I’ve also had QA people do one-off checks, or things that aren’t done at mass scale, and those probably just aren’t all that sensible to automate, AI or no. I’ve had them do tasks in the real world (“can you go open up the machine that’s seeing failures and check what the label on that chip reads, because it’s reporting the same part number in software”). I’ve written test plans for QA to run on things I’ve built, and had them say “this is ambiguous”. My suspicion is that an LLM trained on whatever information is out there is going to have a hard time, without a deep understanding of a system, saying “this is ambiguous”.
Overall
There are other areas. But I think that any answer is probably “to some degree, depending upon what area of tech work, but mostly not, not with the kind of AI systems that exist today or with minor changes to existing systems”.
I think that a better question than “can this be done with AI” is “how difficult is this job to do with AI”. I mean, I think that eventually, pretty much any job could probably be done by an AI. But I think that some are a lot harder than others. In general, the ones that are more-amenable are, I think, those where one can get a good training corpus – a lot of recorded data showing how to do the task correctly and incorrectly. I think that, at least using current approaches, tasks that are somewhat-tolerant of errors are better. For any form of automation, AI or no, tasks that need to be done repeatedly many times over are more-amenable to automation. Using current approaches, problems that can be solved by combining multiple things from a training corpus in simple ways, without a deep understanding, not needing context about the surrounding world or such, are more amenable to being done by AI.
re: The warning/grammar-checking system.
What you’re describing is called a linter, and they’ve existed for ages.
The only way I can really think of to improve them would be to give them a full understanding of your codebase as a whole, which would require a deeper understanding than current gen AI is capable of. There might be some marginal improvements possible with current gen, but it’s not going to be groundbreaking.
What I have found AI very useful for is basic repetitive stuff that isn’t easily automated in other ways, or that I simply can’t be bothered to write again. E.g.: “Given this data model, generate a validated CRUD form” or “write a bash script that renames all the files in a folder to follow this pattern”.
You still need to check what it produces, though, because it will happily hallucinate parameters that don’t exist, or entire validation libraries that don’t exist, but it’s usually close enough to be used as a starting point.
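For a sense of the kind of throwaway chore I mean, here’s roughly what I’d expect back for the rename example (in Python rather than bash, and with the folder name and target pattern made up for illustration). It’s also exactly the sort of output you’d still eyeball before running:

```python
# Rename every file in a folder to a numbered, lowercase, underscore-separated
# name. The folder name and the target pattern are made up for illustration.

from pathlib import Path

def rename_all(folder: str) -> None:
    files = sorted(p for p in Path(folder).iterdir() if p.is_file())
    for i, p in enumerate(files, start=1):
        new_name = f"{i:03d}_{p.stem.lower().replace(' ', '_')}{p.suffix.lower()}"
        print(f"{p.name} -> {new_name}")
        p.rename(p.with_name(new_name))

if __name__ == "__main__":
    rename_all("photos")  # hypothetical folder
```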
Yup, and I’ve used them, but they’ve had hardcoded rules rather than models trained on code.
I suspect that for a bunch of projects, AI is going to make programming itself obsolete. If it comes pre-trained to use a number of libraries, protocols and databases, then giving the thing a bunch of specifications and scenarios and letting it do the actual work of doing bookkeeping or whatever becomes possible. Most managers would jump at the idea of throwing extra hardware at a problem to run AI locally if it means shipping in half the time. As long as the problem to solve is generic enough and not too big. And those limits will go up quickly.
What I’d like is to plug in the manual and FAQ of some software or whatever and be able to ask specific questions about the setup/configuration.
Now who is going to write the documentation ;)
Obviously AI will write the documentation, which is read by another AI, which informs a third AI to do the work, and a fourth AI does the testing, so that an AI farm can use the software to buy stocks or something.
Great write-up. A few things caught my eye. You mentioned AI checking code in real time as it is written. IDEs do a pretty good job of that already. To do much better, it would have to know what you want to do, and that seems to be a barrier to how AI is developed today. It doesn’t “understand” why.
Now QA is interesting. I wonder if anyone has built a model based entirely on clicks that can predict where a user is going to click. That would be very interesting, and it would work really well for testing functionality that is already common on existing sites. Most webapps are made up in large part of things already done… date choosers, question submitters, and such. Like, how many apps out there are for scheduling an appointment? Tons. And so many apps (even mobile games) are just the same thing in a custom facade. In this case I don’t think it would replace QA much, as places writing that stuff don’t test much anyway. But it could speed up developers by reducing the number of customer-reported issues in code they wrote months ago.
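A toy version of the click-prediction idea, just to show the shape of it: count “after clicking A, users usually click B” transitions from recorded sessions and use the most common one to drive test input. The session data below is invented for illustration.

```python
# Count "after clicking A, users usually click B" transitions from recorded
# sessions and use the most common one as the predicted next click. The
# session data is invented for illustration.

from __future__ import annotations
from collections import Counter, defaultdict

sessions = [
    ["date_picker", "time_slot", "submit"],
    ["date_picker", "time_slot", "cancel"],
    ["date_picker", "time_slot", "submit"],
]

transitions: dict[str, Counter] = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def predict_next(element: str) -> str | None:
    counts = transitions.get(element)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("time_slot"))  # submit
```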
Great answer. Thanks