The error diffusion technique itself is the same standard error diffusion used in Paint Shop Pro 3.11 and beyond, and I’m sure pretty much every other graphics editor out there.
What I’m doing differently is the palette reduction itself; my algorithm for that is totally custom made.
Rather than studying color space densities, frequencies, and stuff like that, I start by finding the darkest and brightest pixels in the image, then loop over the image repeatedly, each pass seeking the color in the image that’s as far as possible from all previously selected palette entries.
The process unfortunately gets slower with every color it has to find, since each pass compares every pixel against the whole palette so far, but it guarantees a fairly evenly, sparsely spaced palette within the color space, tailored to the given image.
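This selection loop is essentially "maximin" or farthest-point sampling. Here's a minimal sketch of the idea, assuming plain squared Euclidean distance in RGB and luminance-sum seeding; the original may differ in distance metric and seeding details:

```python
# Sketch of farthest-point (maximin) palette selection.
# Assumes pixels are (r, g, b) tuples and plain squared
# Euclidean RGB distance -- both are assumptions, not
# necessarily what the original implementation uses.
def build_palette(pixels, size):
    # Seed with the darkest and brightest pixels (by channel sum).
    palette = [min(pixels, key=sum), max(pixels, key=sum)]
    while len(palette) < size:
        # Distance from a pixel to its nearest palette entry so far.
        def nearest_dist(p):
            return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                       + (p[2] - q[2]) ** 2 for q in palette)
        # Pick the pixel whose nearest palette entry is farthest away.
        palette.append(max(pixels, key=nearest_dist))
    return palette
```

Each added color costs a full scan of the image against the whole palette so far, which is why runtime climbs steeply as the palette grows.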
Then I just use that palette rather than median cut or octree reduced palettes in any typical error diffuser that accepts imported palettes.
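For reference, the diffusion step with an imported palette looks roughly like this. This sketch uses the Floyd–Steinberg kernel, the most common choice in editors of that era, though the post doesn't name the exact kernel Paint Shop Pro used:

```python
# Error diffusion against an arbitrary imported palette.
# Assumes Floyd-Steinberg weights (7/16, 3/16, 5/16, 1/16) and
# an image stored as rows of (r, g, b) tuples; both are
# illustrative assumptions.
def diffuse(image, palette):
    h, w = len(image), len(image[0])

    def nearest(p):
        # Closest palette entry by squared RGB distance.
        return min(palette, key=lambda q: sum((a - b) ** 2
                                              for a, b in zip(p, q)))

    def spread(y, x, err, frac):
        # Push a fraction of the quantization error onto a neighbor.
        if 0 <= y < h and 0 <= x < w:
            image[y][x] = tuple(c + e * frac
                                for c, e in zip(image[y][x], err))

    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = nearest(old)
            image[y][x] = new
            err = tuple(o - n for o, n in zip(old, new))
            spread(y,     x + 1, err, 7 / 16)
            spread(y + 1, x - 1, err, 3 / 16)
            spread(y + 1, x,     err, 5 / 16)
            spread(y + 1, x + 1, err, 1 / 16)
    return image
```

The point is that the diffuser is palette-agnostic: swap in the maximin palette and the rest of the pipeline is unchanged.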
16 colors is about as minimal as it gets, and that's where it really stands out above other reduction techniques. 256 colors is brutally slow and might take my system half an hour to process, but yields results practically indistinguishable from full truecolor images.
The main benefit I’ve noticed, and the whole reason I designed it, is that my reduction method practically eliminates color washout in the reduced palette and gives the most vibrant colors.
Edit: I’m sure there’s room for optimization; I actually wrote the first version of this reduction algorithm about 18 years ago.