Sounds cool, but I wonder if this will be limited mostly to a selection of AAA games.
RTX Neural Faces is the tool in the NVIDIA RTX Kit that takes a regular rasterized face with 3D pose data as input and infers a more natural face through a real-time gen AI model. The generated face is trained from thousands of offline-generated images of that face at every angle, under different lighting, emotion, and occlusion conditions. The training pipeline can use real photographs or AI-generated images. The trained model is then optimized using NVIDIA TensorRT to infer the face in real time.
For some reason, I don’t like the style of the RTX Neural Faces feature:
It has that plastic feel that you get with gen AI images.
I would almost argue the original looks better and more distinct.
It would have been better if they had a one-to-one comparison shot.
I think the lack of distinctive features is inherent to how AI training works. From what I understand, you feed the model data and it looks for the averages across that data. So if you feed it 100 images of people, it will take note of the features that are shared. For example, say 60% of the reference images you feed it happen to feature a beauty mark by the mouth. The algorithm makes note of that, and when it produces an image of its own, it will likely feature a beauty mark by the mouth. Now apply the same logic to all other visual features and you might start to see why AI produces very samey visuals: it’s presenting the averages of the inputs it is fed. Better training materials will probably help, and I’m sure it can be tweaked, but “bland” really seems to be baked into the formula.
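To make the averaging idea concrete, here is a toy sketch (nothing to do with NVIDIA’s actual pipeline): each “face” is just a small feature vector, with one index standing in for the beauty mark. Average 100 such faces where 60% have the mark, and the result has the mark at partial intensity everywhere, i.e. a washed-out, “bland” composite.

```python
import numpy as np

# Hypothetical toy model: each face is an 8-dim feature vector.
# Index 3 represents "beauty mark by the mouth" (1.0 = present).
rng = np.random.default_rng(0)

faces = np.zeros((100, 8))
faces[:60, 3] = 1.0                        # 60% of faces have the mark
faces += rng.normal(0, 0.05, faces.shape)  # small per-face variation

# The "generated" face is just the mean over the training set.
average_face = faces.mean(axis=0)

# The mark ends up at roughly 0.6 intensity: neither clearly present
# nor clearly absent, so the distinctive detail is smoothed away.
```

Real generative models are far more sophisticated than a plain mean, but the tendency to regress toward common features in the training data is the same intuition.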
Agreed. I would honestly prefer lower-quality but distinctive and well-designed faces over high-fidelity but bland ones.
Just look at the faces in HL2 or Vampire the Masquerade: Bloodlines. They still hold up very well even 20+ years later.