• @Aceticon
    1 year ago

    Stream-oriented algorithms like run-length encoding (or the LZ77 dictionary matching inside GZIP) aren’t especially amazing at compression; they’re more of a balance between speed and compression ability, plus they’re meant to compress streams of bytes as the bytes come in.
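    For reference, here is a minimal run-length encoder in Python. It is a toy sketch of the single-pass, stream-friendly behaviour described above, not GZIP’s actual DEFLATE algorithm:

    ```python
    def rle_encode(data: bytes) -> bytes:
        """Encode runs as (count, byte) pairs; run length capped at 255."""
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out.append(run)       # how many times the byte repeats
            out.append(data[i])   # the byte itself
            i += run
        return bytes(out)

    def rle_decode(data: bytes) -> bytes:
        out = bytearray()
        for count, value in zip(data[::2], data[1::2]):
            out.extend([value] * count)
        return bytes(out)

    sample = b"aaaabbbcccccccd"
    assert rle_decode(rle_encode(sample)) == sample   # 15 bytes -> 8 bytes
    ```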

    There are better algorithms for achieving maximum compression, such as substitution ones (where bytes and sets of bytes are replaced by bit sequences, the most common ones getting the shortest bit sequence, the second most common the second shortest one, and so on), but they’re significantly slower and need to analyse the entire file before compressing it (and the better you want the compression to be, the more complex the analysis and the slower it gets).
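    That substitution scheme is essentially Huffman coding. A rough sketch of the textbook construction in Python (a toy version, ignoring edge cases like single-symbol inputs):

    ```python
    import heapq
    from collections import Counter

    def huffman_codes(data: bytes) -> dict[int, str]:
        freq = Counter(data)                    # the full-input analysis pass
        # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            # Merge them, prefixing one more bit onto every code inside.
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    print(huffman_codes(b"aaaaaabbbc"))
    # The most common byte ('a') ends up with the shortest code, here a single bit.
    ```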

    Maybe the LLMs can determine upfront the most common character patterns (I use “patterns” here because it might be something more complex than mere sequences: for example, a pattern could cover characters in slots 0, 3 and 4, whilst a sequence would be limited to 0, 1 and 2) and are thus much faster and more thorough at the analysis stage. Or they can simply serve as a pre-analysed frequency model for character patterns in a given language, which is superior to general stream compression, whose frequency “analysis” is done on the fly as the bytes come in.
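    A hypothetical sketch of that second idea in Python. The frequency table and the unary-style code assignment below are made up for illustration (real statistics would come from the LLM, and a real scheme would use optimal codes plus an escape for unknown symbols), but it shows how a model built once can encode any stream in a single pass:

    ```python
    # Toy per-language frequency model (invented numbers; an LLM's learned
    # character statistics would play this role in practice).
    MODEL_FREQ = {" ": 18, "e": 13, "t": 9, "a": 8, "o": 8, "n": 7, "s": 6, "h": 6}

    def fixed_model_codes(freq: dict[str, int]) -> dict[str, str]:
        # Shorter codes for more frequent symbols; a simple unary-style
        # prefix code, chosen for clarity rather than optimality.
        ranked = sorted(freq, key=freq.get, reverse=True)
        return {sym: "0" * rank + "1" for rank, sym in enumerate(ranked)}

    CODES = fixed_model_codes(MODEL_FREQ)   # built once, reused for every stream

    def encode_stream(text: str) -> str:
        # Single pass over the input, with no per-file analysis step.
        return "".join(CODES[ch] for ch in text if ch in CODES)

    print(encode_stream("the sea"))   # prints the encoded bit string
    ```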

    PS: I might be using the wrong English terms here, as I learned this compression stuff way back at Uni and in a different language.