I like the experience of using Copilot and GPT much better than browsing SO, but this is what worries me in the long term:
This issue goes beyond the survival of Stack Overflow. All AI models need a steady flow of quality human data to train on. Without that, they’ll be left to rely on machine-generated content, and researchers have found that this leads to worse performance. There’s an ominous name for this: model collapse.
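Here's a toy sketch of the mechanism (my own illustration with made-up numbers, not code from the model collapse research): each "generation" is trained only on samples from the previous one, so rare words that happen not to be sampled get probability zero and never come back.

```python
# Toy illustration of model collapse: train each generation only on
# samples of the previous generation's output. Anything the sampling
# misses is lost for good, so diversity shrinks monotonically.
import random
from collections import Counter

vocab = [f"w{i}" for i in range(100)]
# Generation 0: a long-tailed "human" distribution over the vocabulary.
weights = {w: 1.0 / (i + 1) for i, w in enumerate(vocab)}

for generation in range(1, 11):
    words = list(weights)
    probs = [weights[w] for w in words]
    # The next model's training set is the previous model's output.
    corpus = random.choices(words, weights=probs, k=500)
    # Re-estimate the distribution; unseen words drop to zero forever.
    weights = dict(Counter(corpus))
    print(f"gen {generation}: distinct words = {len(weights)}")
```

The tails of the distribution (the rare, interesting answers) are the first thing to go.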
Without this incredible knowledge sharing and curated feedback, in an environment that constantly changes with new libraries, languages, and best practices, these LLMs are doomed. I think solving this might be Stack Overflow’s way out.
Yes, and businesses thinking they can drop their developers for ChatGPT-like tech in the future should (they won’t, but they should) consider this. AI goes to pot very quickly without human input.
It’s not even on the map. Most of the businesses that think they can replace anybody with LLMs are thinking about subscribing to an LLM that’s been trained and maintained by someone else. Which of course involves giving that someone the upper hand and letting them dictate terms.
Anybody who has tried making their own model knows it’s tough, grueling work.
So these businesses take the easy way out: they hand that someone their data (breaking privacy rules and regulations in the process) and use whatever comes out of the LLM with no regard whatsoever for where and how it was sourced, or what legal implications that might have for themselves.
If you add the fact that the LLM owner usually makes you sign a contract that gives basically no guarantees, you have the recipe for a very fine mess.
I still can’t wrap my head around, for example, how any software company can let, or even goad, its programmers into using Copilot in good faith, with no idea where the code is coming from or what its copyright status is. And that’s leaving aside the fact that Microsoft is currently being sued over this exact problem.
Guys, stop this nonsense. That’s not how it works.
They hire fewer new developers. There will be fewer people doing the work. Idiots who don’t learn to use the tech will be left behind. This is already happening.
If we use embeddings and the language documentation, I wonder how well that can work going forward?
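By "embeddings and the documentation" I mean something like this minimal sketch, where embed() is just a bag-of-words stand-in; a real setup would call an actual embedding model, but the retrieval logic is the same:

```python
# Minimal sketch of embedding-based retrieval over documentation.
# embed() is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Chunks of language documentation, embedded once up front.
docs = [
    "list.sort sorts the list in place and returns None",
    "sorted returns a new sorted list from any iterable",
    "dict preserves insertion order as of Python 3.7",
]
index = [(chunk, embed(chunk)) for chunk in docs]

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(index, key=lambda it: cosine(q, it[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# The top chunks get prepended to the question before it goes to the
# LLM, so it answers from the docs instead of from memory alone.
question = "why does my sorted list come back as None?"
prompt = "Answer using this documentation:\n"
prompt += "\n".join(retrieve(question)) + f"\n\nQ: {question}"
print(prompt)
```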
Not at all, because language models don’t understand the text they read.
From what we see today with LLMs that are given a larger context (e.g. internal documentation or knowledge bases), we can say it’d be about as good as a decent developer who reads said documentation and is able to apply that knowledge to a specific use case.
But Stack Overflow answers often target things that don’t come up in the docs, things that are outdated, or things that are case-dependent and/or opinionated. Answers that sometimes even lead to changes in the documentation. This kind of insight will dry up over time without a way of continuously sharing such knowledge.