The most useful thing I’ve done with it so far is ask it for specific game information, like, “hey, I’m in x level and I see a chest. I can’t see a way to get it right now, is it something I can come back and get later? Or do I have to figure it out now?”. And it can answer that.
Yeah, except it can also get things very wrong. I tested it against my RuneScape knowledge. RuneScape is niche enough that nobody is “fixing up” answers specifically to make it look good, but big enough that there’s a lot written for the LLM to go off of.
Suffice it to say, its advice was plausible-sounding but inefficient, when it wasn’t outright nonsense dressed up in correct grammar. I would’ve been better served in every case by looking up player-made guides.
Saves me from having to look up an ad-riddled guide, but that’s not a killer feature. Integration with apps could be huge, letting it actually do tasks for you or automate something tedious, but that’s still not super common. Personally, I’d rather directly control anything vaguely important.
Yeah it’s kind of that last bit there.
I honestly don’t know what they’re going to do with AI stuff, but I feel it’s going to be a huge bust. The fundamentals of the technology are just completely unproven. To me it seems like a bunch of people invested in a palm-reading machine and then, because they have a lot of money, tried to convince themselves and everyone else that the palm-reading machine really is going to change the world (and that they didn’t just get scammed).
Agreed. It gives results that appear promising, and if they were correct all the time it would be amazing. But it’s only correct some of the time.
I have one I was messing with that would scan through a document and answer questions about it with sources cited from the document. I feel like that’s the best path to trusting the output.
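The core idea is simple enough to sketch: rather than trusting a free-form answer, retrieve the actual passage it came from so the claim can be checked against the source. This is a minimal toy version using keyword overlap; the function name and sample document are made up for illustration, and a real tool would use an actual retrieval model instead.

```python
import re

def cite_answer(document: str, question: str) -> tuple[str, int]:
    """Return the paragraph most relevant to the question, plus its
    index in the document so the reader can verify the source."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    q_words = set(re.findall(r"[a-z']+", question.lower()))

    # Score each paragraph by how many question words it shares.
    def overlap(p: str) -> int:
        return len(q_words & set(re.findall(r"[a-z']+", p.lower())))

    best = max(range(len(paragraphs)), key=lambda i: overlap(paragraphs[i]))
    return paragraphs[best], best

doc = """The chest in the west wing needs the silver key.

The silver key is sold by the merchant after chapter three.

Fast travel unlocks once you finish the tutorial."""

passage, source = cite_answer(doc, "when does fast travel unlock?")
print(f"[para {source}] {passage}")
```

The point isn’t the retrieval quality, it’s that the answer arrives with a pointer back into the document, so a wrong answer is cheap to catch.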
Also, I think game questions are best suited to linear quests with a defined answer. I asked it for a good Destiny build; it answered, but what it gave me was a pretty basic build that doesn’t work well in the current meta. Then again, I kind of knew it couldn’t give me a good answer there.
Because it’s not always totally correct, you can’t trust it. Investors are shown examples where it is correct and incorrectly extrapolate the trend.