I mean, LLMs are already very useful when used correctly; it’s just that 98% of the time they aren’t used correctly.
We’re talking about the bubble here, not reasonable use cases. :-)
How do I use it “correctly”?
We used one to come up with a name for a feature cocktail at work. It’s pretty good for that kind of stuff.
I had some files that I knew had duplicates, but they didn’t exactly match, and while the filenames were not identical, you could tell by looking whether they were the same.
Doing all of them by hand would have been very tedious; an LLM was able to identify a “good enough” number of duplicates and only made a few mistakes. It greatly sped up the manual work required to clean up the collection.
But that’s so far from most advertised scenarios and not compelling from a “make lots of money” perspective.
There are (non-AI) algorithms for that. Git uses one to detect renames. No need to melt the ice caps just for that.
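For what it’s worth, a minimal sketch of that kind of non-AI fuzzy matching in Python, using the standard library’s sequence similarity. This scores filenames rather than file contents (git’s rename detection scores contents); the function name and the 0.8 threshold are made up for the example, not anything from the thread.

```python
# Pairwise similarity scoring over filenames; pairs above a threshold
# are candidate duplicates for a human to review.
from difflib import SequenceMatcher
from itertools import combinations

def likely_duplicates(filenames, threshold=0.8):
    """Return pairs of filenames whose similarity ratio meets the threshold."""
    pairs = []
    for a, b in combinations(filenames, 2):
        # Case-insensitive ratio in [0, 1]; 1.0 means identical strings.
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            pairs.append((a, b))
    return pairs

# "report_final.txt" vs "report_final_v2.txt" scores around 0.91, so that
# pair is flagged; unrelated names score far lower and are skipped.
print(likely_duplicates(["report_final.txt", "report_final_v2.txt", "notes.txt"]))
```

This is quadratic in the number of files, which is fine for a personal collection but not for millions of entries, and it won’t catch duplicates whose names share nothing — which is roughly the gap the LLM was filling.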
This was after applying various mechanisms of the traditional kind. Admittedly, there was one domain-specific strategy that wasn’t applied that would have caught a few more, but not all of them.
The point is that I had a task that was hard to code up, but trivial yet tedious for a human. AI approaches can bridge that gap sometimes.
In terms of energy consumption, it wouldn’t be so bad if the approaches weren’t horribly overused. That’s the problem now: 99% of usage is garbage. If it settled down to like 3 or 4% of current usage it would still be just as useful, but no one would bat an eye at the energy demand.
As with a lot of other bubble things, my favorite part is probably going to be its life after the bubble pops, when the actually useful use cases remain and the stupid stuff dies out.
You use it for pointers and double check the results. I’ve had a lot of luck using it to explain terminology for complicated specialized tasks for trades work and stuff recently.
They’re decent at language tasks. So, if you provide them with all the information and configure them to not make up any of their own, then they can do things like rewriting it in a different style or different language relatively competently.
That’s the trillion dollar puzzle nobody has been able to solve yet. It’s not trivial at all, even when it seems like it should be.
“Correctly” is a term that has several different uses and meanings. Depending on the context, “correctly” can mean: