cross-posted to:
- technology
It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
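The feedback loop behind “model collapse” is easy to see in miniature. Here’s a toy sketch (mine, not the paper’s actual experiment) with a 1-D Gaussian standing in for a generative model: fit it to data, sample from the fit, refit on those samples, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data distribution and the training-set size for each generation.
mu, sigma = 0.0, 1.0
n = 100

data = rng.normal(mu, sigma, n)  # generation 0 trains on real data
for gen in range(21):
    mu_hat, sigma_hat = data.mean(), data.std()
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu={mu_hat:+.3f}  sigma={sigma_hat:.3f}")
    # Every later generation trains only on samples from the previous fit.
    # The fitted sigma wanders and tends to shrink over generations,
    # i.e. the tails of the original distribution disappear.
    data = rng.normal(mu_hat, sigma_hat, n)
```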
So one potentially viable way to destroy AI would be to repeatedly train LLMs and image generators on their own (or rather, previous generations’) output until it degrades into garbage training data, and then publish the resulting text and images in places where bots trawling for training data are likely to find them.
Probably bonus points if the images still look “sensical” to the human eye, so that humans eyeballing the data don’t realize it’s the digital equivalent of a sabot. (Apparently the story about sabots being thrown into machinery is not true, but you know what I mean.)
I already block all the LLM scraper bots via user agent.
I’ve been toying with the idea of, instead of returning 404 for those requests, returning LLM-generated drivel to poison the well.
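A rough sketch of what that could look like (assuming a Flask app; the user-agent fragments and the word-salad generator here are just placeholders, and real poison would want to read more plausibly, e.g. Markov-chain text seeded from your own pages):

```python
import random
from flask import Flask, request

app = Flask(__name__)

# Illustrative list only; match against whatever scraper user agents you see in your logs.
SCRAPER_UA_FRAGMENTS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

WORDS = "the of collapse model data tail noise drivel entropy sabot".split()


def looks_like_scraper(user_agent: str) -> bool:
    return any(fragment.lower() in user_agent.lower() for fragment in SCRAPER_UA_FRAGMENTS)


@app.before_request
def poison_the_well():
    ua = request.headers.get("User-Agent", "")
    if looks_like_scraper(ua):
        # Serve plausible-looking junk with a 200 instead of a 404,
        # so the crawler happily ingests it.
        junk = " ".join(random.choices(WORDS, k=500))
        return junk, 200, {"Content-Type": "text/plain"}
    # Returning None lets normal requests fall through to the real views.
```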
This is a really good idea actually
AIncest!
Deep fried AI.