I’m sure someone is going to post a Technology Connections video that explains it all.
Your eye can only resolve two points of light if they're separated by more than a certain arc angle; below that, two close points appear as a single point. It's down to the spacing between the photoreceptors in your retina (they're individual units that act together as one eye, not a single continuous sensor, i.e. your eye has a resolution of its own), the wavelength of the incoming light, and some neurological processing. Physics and biology.
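You can ballpark the blending distance from that arc angle. A minimal sketch, assuming the often-quoted ~1 arcminute acuity limit and a hypothetical ~0.1 mm subpixel pitch (both round example numbers, not hard constants):

```python
import math

# Typical human acuity limit: ~1 arcminute (an assumed round figure,
# not a hard physiological constant).
acuity_rad = math.radians(1 / 60)

# Assumed subpixel spacing of ~0.1 mm (hypothetical example value).
pitch_m = 0.1e-3

# Two points closer together than the acuity angle merge into one.
# Small-angle approximation: angle ≈ pitch / distance, so solve for distance.
merge_distance_m = pitch_m / acuity_rad
print(f"Subpixels blend beyond roughly {merge_distance_m:.2f} m")
```

With those example numbers the subpixels merge somewhere past a third of a meter, which matches the everyday experience that you have to press your face against the screen to see the dots.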
Edit: wanted to add that all color TVs are made of a matrix of three color elements, often with 2x as many green elements as red or blue. If it looks like it's one color, there's usually a diffuser in the way, or you need to get in closer.
As you get further away the dots blend together to form the picture.
Fun fact: images printed on paper (halftoning) rely on the same phenomenon. If human eyes were sharper, you would need finer dot patterns to produce the same illusion.
So in the back of your eyeballs are special photoreceptive cells called cones that you use to see color. There are three different kinds, and each is most sensitive to roughly red, green, or blue light. Using those three overlapping sets of wavelengths, your brain combines which cones are receiving light to make up all the other colors.
You might notice that pixels use those same three colors. Once you back up far enough, each set of three subpixels looks like a single point of light, so your brain combines the wavelengths into a single color.
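That blending can be sketched as simple additive averaging: once the eye can't resolve the individual subpixels, the light from a whole RGB triad just sums into one color. A toy simulation, assuming linear light (real displays apply gamma, ignored here):

```python
# One "pixel" = three subpixels, each emitting only its own primary.
# Four full-brightness pixels in a row, as (R, G, B) light values:
subpixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255)] * 4

def eye_average(strip):
    """Average all the light in view into one perceived (R, G, B) color,
    mimicking an eye that can no longer separate the subpixels."""
    n = len(strip) // 3  # each primary is lit by one-third of the subpixels
    r = sum(s[0] for s in strip) // n
    g = sum(s[1] for s in strip) // n
    b = sum(s[2] for s in strip) // n
    return (r, g, b)

print(eye_average(subpixels))  # -> (255, 255, 255): the triads read as white
```

Turn the blue subpixels off and the same averaging gives (255, 255, 0), which is why a red element next to a green one reads as yellow from across the room.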