You still misunderstand my use of "average". I am, once again, not talking about simple arithmetic averages over plain numbers.
Look at the outputs of models trained on majority-white faces vs diverse faces. If you still don't understand what I mean by averages, then I guess this conversation is hopeless.
Yes, there's noise in the process.
But that noise is applied in very specific ways: the model still fundamentally tries to output what the training algorithm indicated you would expect from it given the prompt, staying in a general neighborhood that preserves the syntax, structure, and patterns in the training data related to the keywords in your prompt. Ask for a face with no further details and you get a face that looks like the average face in the input data, usually white in most models, with conventional Western haircuts, etc., because that's representative of its inputs, an average over the structure extracted from them. The noise just tweaks which representative features get selected and their exact properties. It is still close enough to an average that I feel it is fair to call it one, because it so rarely outputs extremes (other than when the model just breaks down and produces nonsense).
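As a toy illustration of what I mean (a scikit-learn Gaussian mixture standing in for a learned image model, with made-up 2-D "face features" split 90/10 between two clusters; nothing here comes from any real system):

```
# Toy sketch: a generative model fit to data where 90% of "faces" come from
# one cluster will mostly sample near that cluster. GaussianMixture stands in
# for the learned distribution; the clusters and 90/10 split are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
majority = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(900, 2))  # 90% of training data
minority = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))  # 10% of training data
train = np.vstack([majority, minority])

model = GaussianMixture(n_components=2, random_state=0).fit(train)
samples, _ = model.sample(1000)

near_majority = np.mean(np.linalg.norm(samples - [0.0, 0.0], axis=1) < 2.5)
print(f"fraction of samples near the majority cluster: {near_majority:.2f}")  # ~0.9
```

Roughly 90% of the samples land near the dominant cluster and essentially none land far from both, which is all I mean by "representative of the inputs".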
That's a bias, not an average. It's similar to human biases; models' biases are derived from humans' biases in the training data.
Humans have a bias toward a male doctor and a female nurse, and models learn that bias unless someone intervenes to identify and remove the cultural (very human) bias from the training data set.
You misunderstood again. The model isn't creating the bias when it is trained on biased data; it just gives outputs representative of its inputs. The average of many outputs will resemble the average of its inputs.
If it were a linear transformation, probably, because averaging would remove the stochastic term. But the transformation is non-linear, so I'd be surprised if that were true. Do you have a reference for a statistically meaningful experiment on this?
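A quick numeric check of why the linearity matters (plain NumPy, with x² as a stand-in non-linearity chosen for illustration, not a model of any actual denoiser):

```
# For a linear map, the mean of the outputs equals the map applied to the
# mean of the inputs; for a non-linear map it generally does not.
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(loc=1.0, scale=1.0, size=1_000_000)  # "inputs" with mean 1.0

linear = lambda x: 3.0 * x + 2.0
nonlinear = lambda x: x ** 2

print(linear(z).mean(), linear(z.mean()))        # ~5.0 vs 5.0 -> match
print(nonlinear(z).mean(), nonlinear(z.mean()))  # ~2.0 vs ~1.0 -> do not match
```

For the linear map the two numbers agree, so averaging many outputs would indeed recover the (transformed) input average; for the non-linear map they differ, which is why the claim needs evidence rather than a linearity argument.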
You are linking sources on biases. As I said, that is very different. Holy Mary is most often depicted as white with blue eyes; that is a bias, inherited from the training data (as the models don't know anything beyond it).
An average is a different thing: these models do not perform averages, do not output averages, and averages of the output data are not comparable with averages of the input data.
My comment is that they simply are not averages, that’s it.
As a simpler example, it is like saying that a polynomial plus some noise is an average… It’s simply not.
The stochastic and non-linear parts are the reason these models create original images, unless overfitted.
If it were a weighted average, you'd get identical, smoothed, most likely nonsensical images for identical prompts (see the sketch below).
And this is not the case.
That’s all my comment.
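To make the weighted-average point concrete, here's a toy sketch (random 64x64 arrays stand in for real photos, and the Dirichlet weights are arbitrary; it is not meant to model any actual pipeline):

```
# A pixel-wise weighted average of training images is deterministic (same
# weights -> the exact same picture) and far smoother than any single image.
import numpy as np

rng = np.random.default_rng(2)
images = rng.uniform(0.0, 1.0, size=(500, 64, 64))  # 500 fake grayscale "photos"
weights = rng.dirichlet(np.ones(500))                # arbitrary prompt-dependent weights

avg_1 = np.tensordot(weights, images, axes=1)
avg_2 = np.tensordot(weights, images, axes=1)

print(np.array_equal(avg_1, avg_2))   # True: identical output every time
print(images[0].std(), avg_1.std())   # ~0.29 vs ~0.02: heavily smoothed
```

Same weights, same output, every time, and the result is much smoother than any single input, which is exactly what these models do not produce.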
Won’t stop being that guy
It is an unfortunate burden I am condemned to carry
Recognition:
https://odsc.medium.com/the-impact-of-racial-bias-in-facial-recognition-software-36f37113604c
https://venturebeat.com/ai/training-ai-algorithms-on-mostly-smiling-faces-reduces-accuracy-and-introduces-bias-according-to-research/
Generative denoisers and colorization:
https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
With generative models already used in things like adverts, and soon in Hollywood, this becomes more relevant as it affects representation:
https://towardsdatascience.com/empowering-fairness-recognizing-and-addressing-bias-in-generative-models-1723ce3973aa
This extends to text, where the output more often copies whatever style is common in the input.
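A toy sketch of that last point (the one-line "corpus" and the 80/20 split are made up for illustration):

```
# A sampler that just follows training frequencies reproduces whatever
# phrasing dominates its corpus.
import random
from collections import Counter

corpus = ["the doctor said he will call"] * 80 + ["the doctor said she will call"] * 20
next_word = Counter(sentence.split()[3] for sentence in corpus)  # word after "said"

random.seed(0)
words, counts = zip(*next_word.items())
samples = random.choices(words, weights=counts, k=1000)
print(Counter(samples))  # roughly 800 x "he", 200 x "she": the corpus skew carries over
```

The dominant phrasing shows up in the output about as often as it appeared in the corpus.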
It was just to clarify the point
Thank you.