- cross-posted to:
- [email protected]
- technology
This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.
We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.
But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.
As per the article, it goes like this:
And simultaneously, poor-quality AI content drowns out what is left.
In terms of arguments, have you heard of the control/alignment problem, or of x-risk?
Isn’t that true of people too? If I read a bunch of books and then use what I learned to write a new book, I’m not crediting the original authors. If I learn painting techniques from Van Gogh and El Greco, I’m not crediting them either.
You’re equating sentience with non-sentience. An LLM is a non-sentient program, created by humans to learn language. You are a sentient person who is influenced by the painting techniques of Van Gogh and El Greco. While you don’t need to credit them, they have influenced your work, and that is an entirely acceptable practice.
This is a huge difference in the realm of copyright.
EDIT
Also, the works of the artists you mention are in the public domain in most countries, so an LLM can use them without issue. Works by artists not in the public domain should remain subject to copyright law when used by an LLM.