It has long been established that predictive models can be transformed into lossless compressors and vice versa. Incidentally, in recent years, the machine learning community has focused on training increasingly large and powerful self-supervised (language) models. Since these large language models exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. In this work, we advocate for viewing the prediction problem through the lens of compression and evaluate the compression capabilities of large (foundation) models. We show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning. For example, Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. Finally, we show that the prediction-compression equivalence allows us to use any compressor (like gzip) to build a conditional generative model.
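The paper's actual construction runs a compressor's implied probability model through arithmetic coding, but the closing claim can be illustrated much more crudely: greedily pick whichever next byte makes the gzip-compressed continuation shortest. A minimal sketch, assuming nothing beyond the standard library (the function name and greedy tie-breaking are my own, not the paper's procedure):

```python
import gzip

def gzip_generate(context: bytes, n_bytes: int,
                  alphabet: bytes = bytes(range(32, 127))) -> bytes:
    """Toy 'compressor as conditional generative model':
    extend `context` one byte at a time, always choosing the
    printable-ASCII byte whose addition yields the smallest
    gzip-compressed length (ties broken by alphabet order)."""
    out = bytearray(context)
    for _ in range(n_bytes):
        best = min(alphabet,
                   key=lambda b: len(gzip.compress(bytes(out) + bytes([b]))))
        out.append(best)
    return bytes(out)

# Condition on a repetitive prompt and generate 6 more bytes.
sample = gzip_generate(b"abcabcabc", 6)
```

This is greedy (argmax) decoding rather than sampling from the induced distribution, so it is far weaker than the arithmetic-coding scheme the paper evaluates, but it shows the direction of the equivalence: any compressor prefers continuations it can encode cheaply.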

I wonder what a paper like this, especially given the title, does for the legal case regarding copyright and generative AI. Haven’t had a chance to read the paper yet, so don’t know if the findings are relevant to copyright.

  • @KingRandomGuy
    27 months ago

    And there are relatively few people who design the image sensors for cameras compared to the number of people using a camera to take pictures. They’re still designed as a tool by a person.

    I’m not the most familiar with copyright law, but IIRC you can certainly violate copyright while taking a photo. If you take a photo of a copyrighted work (e.g., pages of a book) without artistic intent, I don’t believe that’s considered transformative.

    I suspect the courts will end up having to deal with many of these issues on a case-by-case basis, just like they already do with fair use.