It’s predicting the next token in a sequence of tokens, with that prediction computed through many stacked layers.
There’s no version of that which isn’t “brute force”: you’re literally just sampling guesses until the output looks correct. That’s the core concept behind every current functional model, and it will always be that.
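The “guessing” loop is easy to sketch. Here’s a toy version, assuming a model has already produced raw scores (logits) over a tiny made-up vocabulary — the names `vocab` and `logits` are illustrative, not from any real model:

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    # Draw one token at random, weighted by the model's scores --
    # the "guess" step, repeated once per generated token.
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Made-up vocabulary and scores for illustration only.
vocab = ["cat", "dog", "the"]
logits = [2.0, 1.0, 0.5]
rng = random.Random(0)
print(sample_next_token(vocab, logits, rng))
```

A real model does exactly this in a loop, appending each sampled token to the input and scoring again — nothing in the loop checks whether the output is *true*, only whether it was likely.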
The only alternative is hand-writing a rule for every possible case, plus the ability to reference or adapt information from old cases for new ones, all thoroughly tested and constantly adjusted by experts to prevent bad outputs. Which is entirely manual, so get typing, monkeys.
TLDR: Yeah, that’s how it works, and it won’t get better anytime soon.
We’ve been in the brute force phase of AI since this type of AI was a thing. These models are just word calculators, constantly trying to figure out the next word. It’s been inefficient from the very beginning. Unless we figure out a fundamentally different way of doing AI, this will always be a problem, and it’s not just gonna “get better.”