The study tracked around 800 developers, comparing their output with and without GitHub’s Copilot coding assistant over three-month periods. Surprisingly, when measuring key metrics like pull request cycle time and throughput, Uplevel found no meaningful improvements for those using Copilot.
I basically exclusively use LLMs to explain broad concepts I’m unfamiliar with. A contrived example would be ‘what is a component in Angular’ or ‘explain to a C# dev how x, y, and z work in Rust’. The answers don’t need to be 100% accurate; they’re a nice general jumping-off point for finding specific information.
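For the Angular one, the answer I’m after is roughly ‘a class paired with a template, rendered wherever its selector appears’. A contrived sketch of the kind of example I’d want back (every name here is made up):

```typescript
import { Component } from '@angular/core';

// A component pairs a TypeScript class with a template; Angular renders
// it wherever the selector appears in another template's markup.
@Component({
  selector: 'app-greeting',
  template: `<p>Hello, {{ name }}!</p>`,
})
export class GreetingComponent {
  name = 'world';
}
```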
Exactly. I’ve found them most useful for either summarizing a text you feed them or handling broad, Google-like queries. I don’t trust them with anything beyond that.
I’ve tried it even for some boilerplate code a few times, and I’ve ended up rewriting it every time.
It makes mistakes like junior engineers do, but not in the same way junior engineers do, which means that as a senior engineer it takes me significantly more effort to review. It also makes mistakes that humans don’t, which are even weirder to catch in review.
Also my experience. It sometimes tries to be smart and gets everything wrong.
I think code shows clearly that LLMs don’t actually understand what’s written. Often enough you can see one trying to insert a common pattern even where it doesn’t make sense.
As a junior-to-mid-level developer, I find myself having to rewrite the boilerplate code Copilot comes up with as often as not, or it gets things slightly wrong that I then have to go back and fix. I’m starting to think that most of the time it would be just as quick to write it all myself.
It’s a glorified autocorrect. Using it for anything else and expecting magic is an interesting idea. I’m not sure what folks are expecting there.
- It suggests the variable and function names I was already going to type, and does so more accurately. Better, it suggests them even when I’m stumped trying to remember the name.
- It fills in basic control structures and functions more effectively than the IDE’s completion templates.
- It provides a decent starting point for both comments and documentation (the hard part). Meaning my code actually has comments and documentation. Regularly!
But I don’t ask it to explain things or generate algorithms willy-nilly. I don’t expect or ask it to do anything more than simple auto-completion; the contrived sketch below is about the extent of it.
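Every name in this sketch is invented. I type the doc comment and the signature, and the completion offers the body:

```typescript
/**
 * Sum the values of every entry whose key starts with the given prefix.
 */
function sumByPrefix(entries: Map<string, number>, prefix: string): number {
  // The assistant fills in the loop and the early-exit-free accumulation;
  // I just accept or tweak it.
  let total = 0;
  for (const [key, value] of entries) {
    if (key.startsWith(prefix)) {
      total += value;
    }
  }
  return total;
}
```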
I honestly like it, even if I strongly dislike the use of AI elsewhere. It’s working in this area for me.
I wasn’t too keen on Copilot, but then we got it at work, so I tried it. In my previous position, working in an ancient Java project that knows no rhyme or reason, a codebase that belongs in hell’s fires, it was mostly useless.
I switched to a modern web developer position where we do a lot of data manipulation, massaging it into common types to visualise in charts and tables, and there it excels. A lot of what we do uses the same datasets, which get aggregated into one of a set of common types, so Copilot often “understands” what I intend and gives great 5-10 line suggestions.
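To give a feel for the pattern, here’s a hypothetical sketch; none of these types or names are from our actual codebase:

```typescript
// Raw rows get aggregated into a common series shape that both the
// chart and the table components accept.
interface RawRow {
  timestamp: string;
  region: string;
  amount: number;
}

interface SeriesPoint {
  label: string;
  value: number;
}

function toRegionSeries(rows: RawRow[]): SeriesPoint[] {
  const totals = new Map<string, number>();
  for (const row of rows) {
    totals.set(row.region, (totals.get(row.region) ?? 0) + row.amount);
  }
  // Spreading a Map yields [key, value] tuples in insertion order.
  return [...totals].map(([label, value]) => ({ label, value }));
}
```

The function body here is exactly the kind of 5-10 line suggestion it tends to get right.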
These last 3 weeks I’ve had the massive task of separating our data processing into separate files so we can finally add unit tests. The refactoring was easy with IntelliJ, and Copilot quickly wrote tests with 100% coverage, which let me find a good number of undiscovered bugs.
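The generated tests looked roughly like this sketch (assuming a Jest-style runner; toRegionSeries is the hypothetical helper from the snippet above):

```typescript
// Sketch of the kind of test Copilot drafts once the processing
// functions live in their own files.
import { toRegionSeries } from './toRegionSeries';

describe('toRegionSeries', () => {
  it('sums amounts per region', () => {
    const rows = [
      { timestamp: '2024-01-01', region: 'EU', amount: 2 },
      { timestamp: '2024-01-02', region: 'EU', amount: 3 },
      { timestamp: '2024-01-02', region: 'US', amount: 5 },
    ];
    expect(toRegionSeries(rows)).toEqual([
      { label: 'EU', value: 5 },
      { label: 'US', value: 5 },
    ]);
  });

  it('returns an empty series when there are no rows', () => {
    expect(toRegionSeries([])).toEqual([]);
  });
});
```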
I haven’t used the in-IDE stuff, but the best use case I’ve had was asking Bing Chat for help with XML mappings for NHibernate with a wonky table structure when I had to work with some legacy code. The NHibernate documentation is terrible.
It’s working in this area for me
I’ve said this in other comments, but it’s easier to change your audience than your content.
What you said is the important bit: at the end of the day, you’ve got a computer working as a tool for a human. That’s what it should be all about. Instead we have so much AI slop that’s hardly trying to do anything for people, but rather trying to get another algorithm’s attention so it can be shown more, whether a person actually wants to see it or not.
If AI is a tool to create a thing, under the close supervision of a human, for other humans, I’m a lot more open to it. Just don’t let it get carried away and forget about the humanity of it all.
The few times I’ve used AI to help with coding have mostly been to ask it for examples of how to use a specific feature, and for that it has been OK for the most part.
I mostly code in PowerShell, HTML and CSS, and Bing Chat is helpful when I’m stuck on a small issue.
We also recently started testing Copilot Pro 365, the one that can help you make documents or search through company documents and stuff like that.
As a test I asked it to make me a PowerPoint presentation about the top ten podcasting microphones to buy.
The result looked great at first glance but quickly got very generic.
Sure, it showed pictures of some microphones and even talked about them, but it was all vague and generic.
I’ve got a bridge to sell to anyone who thought AI would help reduce burnout lmao
No, really, AI has great uses, but I’m in awe that anyone thought this was one of them.
I use it sometimes to ask whether there’s an API for whatever I need to do. But it lies.
I had a non-technical manager (in 2012!) come to me and tell me he had just read an article about something called an API and he wanted us (a software development company) to start using them. I told him I would research it lol and fortunately he quickly moved on to other varieties of uselessness. Eventually he was fired for wearing a bowtie to the funeral of the company’s founder who famously wore a bowtie. I wish this story were not true.
I do a lot of scripting for cloud infrastructure deployments and basic Linux/Windows admin tasks, and Bing Chat is great for banging out five-liners in a second that would take me an hour, even after multiple decades of being an admin.
Anything more complex it is useless for, so it’s limited, but nice to have.
Surprisingly, when measuring key metrics like pull request cycle time and throughput, Uplevel found no meaningful improvements for those using Copilot.
This assumes I’m going to dedicate my increased productivity to my employer. I’m still at the same level of productivity, but the effort I need for specific tasks has dropped a lot, which means more free time for me.
I’m not sure why that’s so surprising actually
Me neither. Copilot doesn’t make me more efficient; it just reduces some cognitive load.
It’s a slightly better ReSharper, except when it fucks up, and then it’s just an annoying parlour trick.
I’m mildly surprised at all the bad experiences.
I’ve been using ChatGPT for a while now. Tbf, of all the AI code assistants I’ve tried, it’s the only one that isn’t garbage. The annoying part is providing the code snippets and context.
But once you do, it becomes a god.
Entire mechanics, algorithms, templates, helper functions, done within a minute.
I can legitimately save multiple hours of work every day with it. Obviously, I use this extra bit of time for myself! Lmao