WHERE’S MY REFUND FOR GODUS, PETE?
Thanks for taking that L for the industry again, Peter.
Like it or not, one way or another, AI is going to keep playing a larger role in the games industry. I don’t think we will ever have anything worthwhile that generates whole games like he is saying, but it’s still going to be used to generate content.
It’s already taking over chat moderation in a lot of larger games.
I mean… we had Daggerfall and Minecraft with procedural generation under the hood, and many others, for a very long time. Why would we need a model that ‘learns’?
I’m asking about in-game applications, not the use of LLMs in production.
The obvious application is having NPCs that you can actually talk with. Not just about one or two topics that they have a pre-recorded voice line for, but about anything at all. And with AI speech generation as well, you could have them talk back to you somewhat realistically.
You could also have an LLM working as a kind of DM, coming up with new quests with stories and some content variety. A lot of games have repeatable randomized missions, but these are very formulaic and feel very repetitive after you’ve done a few. There’s usually no story, just a basic combat grind. An LLM could come up with actually interesting randomized quests, like a murder mystery where the murderer had a motive and you can legitimately question the suspects about anything they know.
I read that sentiment about quests a lot, and I have some sympathy for it myself, but I find it questionable.
Formulaic is what makes a quest work with the system. As raw code, it needs a list of triggers for events and responses, all nailed to the systems and the world that already exist. It needs to place an NPC with a question mark who has a fetch quest that updates your map/journal when you accept it, and correctly respond with a reward when the conditions are met. That’s the basic level.
For an LLM to create such complex and strict manipulations, it would have to be narrowly guided into generating a working fetch quest without a single mistake. We’d basically kill off most of what it’s good at, and we’d need to build the pipelines for it to lay out more complex quests ourselves. At that point it’s no easier than creating a sophisticated procedural generation engine for quests, to the same result.
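To make that “narrow guidance” concrete, here’s a minimal sketch of the kind of rigid schema an LLM’s output would have to be forced into and validated against before any engine could use it. Everything here is hypothetical (the field names, the `world` object); it’s an illustration of the constraint, not any real game’s API:

```python
from dataclasses import dataclass

# Hypothetical, engine-agnostic sketch of a fetch-quest definition.
# Every field must reference something that already exists in the game
# world, so free-form LLM output has to be squeezed into (and checked
# against) a structure this strict before it is usable.

@dataclass
class FetchQuest:
    giver_npc: str        # must be an NPC already placed in the world
    item_id: str          # must be an item the loot tables can spawn
    target_location: str  # must be a real map cell for the journal marker
    reward_gold: int
    intro_line: str       # flavour text -- the only part the LLM is truly free on

def validate(quest: FetchQuest, world) -> list[str]:
    """Reject anything the engine can't actually hook up."""
    errors = []
    if quest.giver_npc not in world.npcs:
        errors.append(f"unknown NPC: {quest.giver_npc}")
    if quest.item_id not in world.items:
        errors.append(f"unknown item: {quest.item_id}")
    if quest.target_location not in world.locations:
        errors.append(f"unknown location: {quest.target_location}")
    if quest.reward_gold <= 0:
        errors.append("reward must be positive")
    return errors
```

Everything except the flavour text ends up being a constrained lookup into existing tables, which is the point: the “creative” surface left to the model is tiny.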
Furthermore, it’s a pain in the ass to create enough training material to teach it about the world’s lore alone, so that it won’t occasionally make up what it doesn’t know and will actually speak, because to reach the level of ChatGPT’s responses they fed those models petabytes of data. A model trained on LotR alone, some kilobytes, won’t even be able to greet you back, and making an existing model speak in-character about a world it doesn’t yet know is, well, complicated. In your free time, you can try to make Bard speak like a dunmer fisherman from Huul who knows what’s going on around him on every level of the worldbuilding young Bethesda put in. To do it correctly, you’d end up printing a whole book into its prompt window, and it would still spit out nonsense.
Instead, I see LLMs being injected into places they’re good at, and the voicing of NPC lines you mentioned is one of the things they can excel at. Quick drafts of texts and quests that you’d then put into development? Okay. But making them communicate with existing systems is putting a triangle peg in a square hole, imho.
For procedural generation at its finest, you can read the saga of Boatmurdered in Dwarf Fortress: https://lparchive.org/Dwarf-Fortress-Boatmurdered/Introduction/
I don’t have time right now to write a full proper response, but for quests I would imagine that, starting out, we would still use traditional random generation for the bones of the quest, but use an LLM to create the narrative and NPC dialogue for it. Games like Shadows of Doubt already do a good job with randomly generated objectives, but there’s no motive for the crimes. Just taking the already existing gameplay and using an LLM to generate a reason why the crime happened would help the atmosphere a lot. Also, you can question suspects and sometimes solve the case by them telling you they saw [person] at [location] at [time], but I think an LLM could provide actual witness interrogation where you have to ask the right questions, or try to catch them in a lie.
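As a rough sketch of that split (all names and data are made up, and the LLM call itself is stubbed out): ordinary random generation builds the bones of the case, and the model is only asked to dress it with a motive and witness knowledge, so nothing the game logic depends on ever comes from free-form text:

```python
import random

# Hypothetical sketch: traditional random generation builds the "bones"
# of a case; an LLM (not called here) is only asked to write the motive
# and what each witness knows or would lie about.

NPCS = ["Ada Kline", "Marcus Voss", "Rhea Ito", "Dev Okafor"]
LOCATIONS = ["the docks", "the archive", "apartment 4B"]

def generate_case_bones():
    victim, culprit, *witnesses = random.sample(NPCS, 4)
    return {
        "victim": victim,
        "culprit": culprit,
        "witnesses": witnesses,
        "scene": random.choice(LOCATIONS),
        "time": f"{random.randint(0, 23):02d}:00",
    }

def narrative_prompt(bones: dict) -> str:
    # This prompt would go to whatever LLM backend the game uses.
    return (
        f"{bones['culprit']} killed {bones['victim']} at {bones['scene']} "
        f"around {bones['time']}. Invent a plausible motive, and for each "
        f"witness ({', '.join(bones['witnesses'])}) state one true detail "
        "they observed and one thing they would lie about if questioned."
    )

if __name__ == "__main__":
    bones = generate_case_bones()
    print(narrative_prompt(bones))  # the LLM only ever decorates these fixed facts
```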
As far as the mechanics for LLMs to actually provide dialogue, I expect to see some third-party AI startups work on it: some kind of system where they have base language models that provide general knowledge and dialogue abilities, and then a collection of smaller models/LoRAs to specialize them. Finally, you would have behind-the-scenes prompting that tells the NPC who their character is, any character- or quest-specific knowledge they have, their disposition toward the player, etc. I don’t expect every game company to come up with this on their own; I suspect we’ll get a few individual companies offering a packaged solution for it starting out, before it eventually gets built into the larger game engines.
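A minimal sketch of the “behind-the-scenes prompting” part (everything here is hypothetical, not any vendor’s actual API): the base model and any specialised fine-tune would be shared across all NPCs, and only this per-NPC assembled context differs:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-NPC prompt assembly. The game builds this
# from data it already has (role, quest flags, reputation); the generic
# model just plays the role it is handed.

@dataclass
class NPCPersona:
    name: str
    role: str
    knowledge: list[str] = field(default_factory=list)  # quest/world facts this NPC is allowed to know
    disposition: int = 0  # -100 (hostile) .. 100 (friendly) toward the player

def build_system_prompt(npc: NPCPersona) -> str:
    mood = ("warm and helpful" if npc.disposition > 25
            else "curt and suspicious" if npc.disposition < -25
            else "neutral")
    facts = "\n".join(f"- {fact}" for fact in npc.knowledge) or "- nothing unusual"
    return (
        f"You are {npc.name}, a {npc.role}. Stay in character.\n"
        f"Your attitude toward the player is {mood}.\n"
        f"You only know the following about current events:\n{facts}\n"
        "If asked about anything outside this, say you don't know."
    )

# Example usage with invented data:
fisherman = NPCPersona(
    name="Drelas",
    role="village fisherman",
    knowledge=["A courier passed through at dawn heading north."],
    disposition=-40,
)
print(build_system_prompt(fisherman))
```

The design point is that the middleware only has to solve prompt assembly and model hosting once; each game supplies its own personas and knowledge tables.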
I forgot what game it was for, but some guy implemented an actual conversation system with in-game outcomes using AI.
I could also see more dynamic questing systems, character behaviors, even crafting systems based around the tech. But that requires investment and effort to make the tech work, which isn’t exactly why studios are investing in AI in the first place.
There are a small handful of good uses. Content moderation and automatic translation of voice chat are examples.
Mostly, though, I think it will be AI used to generate content for the game, not during the game.
Disappointingly, I agree with Pete. With vast new open worlds like No Man’s Sky, I think the current standard of generative AI could weave good plots, locations, characters and MacGuffins.
Whole game mechanics, voice acting and animation, however, I doubt.
Yes and no.
The thing to understand about “AI” is that basically all of it is old tech with a few advances and much better branding.
“Generative AI” to make worlds is very much the future of games… it is also the past. Back when Bethesda could do no wrong and Oblivion was the new hotness, there was a big deal about the forest (and, I think, even town?) generation tech and how it let them make a much denser world than Morrowind ever was. And… it did. It just also felt samey (which, as anyone who spends time walking through forests and/or suburbia can attest, is actually realistic, but…).
Which led to a strong pushback against admitting these tools were used. I want to say the UE4/5 demos of this kind of tech usually include an “and then you modify it” step after generating a forest or whatever. And MS Flight Sim 2020/2024 is heavily dependent on this kind of tech.
But as things get more advanced? It suddenly gets a lot easier to make a good open world (for all its flaws, Ubi’s Ghost Recon Breakpoint is a great example) where you have giant forests with natural-ish paths that funnel you toward POIs, all from a text prompt or a configuration file.
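As a toy illustration of the “configuration file” end of that (the parameters and logic are entirely made up, not Ubisoft’s or Epic’s tooling): you describe densities and points of interest, and the tool scatters the forest while keeping the approach paths clear:

```python
import math
import random

# Toy sketch of config-driven world scatter: dense forest everywhere,
# thinned out along straight-line "paths" that funnel the player toward
# points of interest. Real tools are far more involved; all numbers here
# are invented.

CONFIG = {
    "world_size": 200,          # square world, in metres
    "tree_density": 0.02,       # trees per square metre
    "path_clear_radius": 6.0,   # metres kept clear along each POI approach
    "pois": [(40, 160), (150, 30)],
}

def near_a_path(x, y, pois, radius):
    """Crude 'path': the straight segment from the world centre to each POI."""
    cx = cy = CONFIG["world_size"] / 2
    for px, py in pois:
        dx, dy = px - cx, py - cy
        # projection of (x, y) onto the centre->POI segment, clamped to [0, 1]
        t = max(0.0, min(1.0, ((x - cx) * dx + (y - cy) * dy) / (dx * dx + dy * dy)))
        if math.hypot(x - (cx + t * dx), y - (cy + t * dy)) < radius:
            return True
    return False

def scatter_trees(cfg):
    size = cfg["world_size"]
    count = int(size * size * cfg["tree_density"])
    trees = []
    for _ in range(count):
        x, y = random.uniform(0, size), random.uniform(0, size)
        if not near_a_path(x, y, cfg["pois"], cfg["path_clear_radius"]):
            trees.append((x, y))
    return trees

print(f"placed {len(scatter_trees(CONFIG))} trees")
```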
The other aspect which, funny enough, also goes back to Oblivion is the idea of procedurally generated quests/stories and narratives. A big part of Oblivion was that every NPC needed to eat food every N hours, which was the big reason everyone would kill themselves by eating a mysterious apple you reverse-pickpocketed onto them. And then you had Radiant quests (fully formalized in Skyrim) where a random NPC would ask you to go to a random dungeon and get them a random item.
And… the fact that people had so much trouble realizing how pointless those radiant quests were says a lot about how many basement rats and yak asses we kill in the average RPG. Which is why there are increasingly guides for the Dragon Ages of the world that list what quests are “worth it” based on narrative and the like. Which gets back to the idea of generating en masse and then fine tuning.
The real sticking point, like with all things AI once you get past the knee-jerk bullshit and marketing, is assets and proper credit. Making a voice or texture or mesh model based on previous work is trivial and has been a thing for most of the past decade. The big issue is that getting that training data is complicated, and there are very important discussions to be had about what it means to compensate creators for their art/likeness being used as training data. And companies are glad to skip all that and just get it “for free”.
Generative AI is a statistical model, not something that can understand what makes a good plot…
I think AI will be good for making NPCs more lively. Even with stupid boring quests and boring dialogue, it’s still a game changer for immersion if you’re able to hold a conversation for 10 minutes with any NPC in the game. Of course this doesn’t concern main quests etc.
Generative “AI” is not “AI”; it’s an overlarge statistical model of written language. It cannot make NPC dialogue more lively because it has no concept of “liveliness” or “boredom”. A 10-minute conversation is impossible; anything more than a few words and the models very rapidly lose consistency. You can see this for yourself by playing one of the many “AI Dungeon” attempts at using large language models to run a text adventure.
It’s probably great for bulk content, as it gives you something close to what you would expect. I imagine it would be different for things that are specific to the lore, world, etc.
Could mean that there is a lot more detail in games, and a lot might even be unintentional.
Ugh, that headline: “all but guaranteeing it won’t be…” So he doesn’t guarantee (how could he?) that AI won’t be the future of games?
It’s a joke based on Molyneux’s long history of over-promising and under-delivering, coupled with his past borderline (outright, for the cube game?) scamming.
Oh thanks! Yeah he was kind of wild back in the day lol
I think it’s because he’s known for being wrong about the way the video games industry is evolving.