- cross-posted to:
- privacy
- [email protected]
- [email protected]
- [email protected]
- aisafety
OpenAI’s ChatGPT and Sam Altman are in massive trouble. OpenAI is being sued in the US for illegally using content from the internet to train their LLMs, or large language models.
So if content is under the GPL and is used as training data, how far into the process of training/fine-tuning does it count as “modification”? For example, if I scrape a bunch of blog posts and just use tools to analyze the language, is that considered “modification”? And what is the minimum OpenAI should do (or should have done) here: does it stop at making the code that processes the data public, or does it extend to the entire code base?
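To make the “scrape and analyze the language” case concrete, here is a minimal sketch of what I mean; the URLs are hypothetical placeholders, and nothing is trained or fine-tuned on the text, it just counts word frequencies:

```python
# Minimal sketch: fetch a few blog posts and compute word frequencies.
# The URLs below are placeholders, not real GPL-licensed blogs.
import re
import urllib.request
from collections import Counter

POST_URLS = [
    "https://example.com/blog/post-1",  # hypothetical
    "https://example.com/blog/post-2",  # hypothetical
]

def fetch_text(url: str) -> str:
    """Download a page and crudely strip HTML tags."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    return re.sub(r"<[^>]+>", " ", html)

def word_frequencies(texts: list[str]) -> Counter:
    """Count lowercase word tokens across all fetched posts."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

if __name__ == "__main__":
    corpus = [fetch_text(url) for url in POST_URLS]
    print(word_frequencies(corpus).most_common(10))
```

Is something like this already a “modification” of the GPL’d posts, or does that only start once the text actually shapes a model’s weights?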
I’m not sure. And I’m not sure there’s legal precedent for that either.
That’s why I don’t have a problem with any of these lawsuits; whichever way it goes, it gives us clarity on the legal aspects.