Abstract
We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: (1) an RL-agent learns to play the game and the training sessions are recorded, and (2) a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.
Paper: https://arxiv.org/abs/2408.14837
Project Page: https://gamengen.github.io/
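The two-phase recipe in the abstract boils down to an autoregressive loop: at each step, the last few frames are noise-corrupted (the conditioning augmentation that keeps long rollouts stable) and fed, together with the action history, to a next-frame predictor. A minimal sketch of that loop is below — the `noise_augment` range, context length, and the placeholder `predict_next_frame` (which just averages the context instead of running the paper's Stable Diffusion backbone) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_augment(frames, max_sigma=0.7):
    """Conditioning augmentation (sketch): corrupt the past context
    frames with Gaussian noise at a random level so the predictor
    learns to tolerate its own autoregressive drift.
    max_sigma is a made-up hyperparameter, not from the paper."""
    sigma = rng.uniform(0.0, max_sigma)
    return frames + sigma * rng.standard_normal(frames.shape), sigma

def predict_next_frame(past_frames, past_actions):
    """Placeholder predictor: GameNGen uses a diffusion model here,
    conditioned on frames and actions. We just average the context
    to keep the sketch self-contained and runnable."""
    return past_frames.mean(axis=0)

def rollout(initial_frames, actions, context_len=4):
    """Autoregressive generation: each new frame is predicted from a
    noise-augmented window of the most recent frames plus the action
    sequence, then appended to the trajectory."""
    frames = list(initial_frames)
    for t, action in enumerate(actions):
        ctx = np.stack(frames[-context_len:])
        ctx_noised, _sigma = noise_augment(ctx)
        frames.append(predict_next_frame(ctx_noised, actions[:t + 1]))
    return frames
```

The point of the augmentation step is that at inference time the context consists of the model's *own* outputs, which drift from the training distribution; training on deliberately corrupted contexts is what lets the rollout stay stable over long trajectories.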
I know I’m late, but realistically this is what we would need for proper video generation.
Instead of the game’s user input, you would use the script from a TV series linked with the video… or just the subtitles, though the actual script describing the scenes would be better… That’s the equivalent of the user input. Couple that with an LLM and bam! Endless episodes of I Love Lucy and Seinfeld.