It seems like 90% of AI character-chat conversations devolve into the AI trying to lead the user around by the nose instead of actually conversing.
It generally goes something along the lines of:
“Hello user, what are you doing? I was about to make pizza.” “I was hoping to talk to you about video games.” “Ooh, video games! Those are great! But for now let’s just focus on this pizza I want to make. What do you say, are you on board?”
I’ve had some threads where the AI was literally ending every other entry with a sentence that began with “But for now, let’s just.” It’s right up there with “stark contrast,” “breath hitched,” and “we’re all in this together” among this thing’s narrative crutches. And it gets a little demoralizing when every thread is just a countdown to the AI deciding to hijack the conversation.
I’ve noticed this too. It also uses the same turns of phrase across many characters, which makes me think the conversational training data is somewhat limited. One thing you can do is the “nuclear” option. I was testing the Strict GM default character on a sci-fi adventure, and the AI became obsessed with putting my party in these crystal caves.
Everything became about the caves. The caves would start to feel my words and resonate with all the action. I manually edited the caves out of all of its previous replies, then dropped a relevant subject into its latest one.
So in your case, remove everything from its last reply and just put “What is it about video games that you enjoy?” That will nudge it in the right direction. The AI seems to draw randomly from lore, its description, reminders, and the recent Q&A, with no logic as to which is most important.
As an example, in a recent chat I accidentally ended a sentence with “/” instead of “.”, and two replies later the AI ended its own sentence with “/”.