This is funny, but just to be clear, the firms that are doing automated trading have been using ML for decades and have high powered computers with custom algorithms extremely close to trading centers (often inside them) to get the lowest latency possible.
No one who does not wear their pants on their head uses an LLM to make trades. An LLM is just a next word fragment guesser with a bunch of heuristics and tools attached, so it won’t be good at all for something that specialized.
ML and other algorithms have also been used by firms that do non-automated trading, since before 2000 too
but they just use it as part of the decision making process
last time i looked, around 2020, all the fully automated hedge funds that said they would use ai for trading failed and did not beat the market tho
(high frequency trading is not what i mean tho, i think high frequency trading is what you mean)
LLMs are better than other methods at picking up context and nuance for sentiment analysis. They can legitimately form part of trade generation.
LLMs are great for interactive NPCs in video games. They are bad at basically everything else.
The best use I’ve gotten out of GPT is troubleshooting Rimworld mod list errors. Often I’ll slap the error in and it’ll tell me exactly which mod is the issue; even when it can’t, the info I get back narrows it down to 4 or 5 suspects.
The investors must be very proud.
I hate that ai just means llm now. ML can actually be useful to make predictions based on past trends. And it’s not nearly as power hungry
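As a rough illustration of that kind of lightweight trend prediction (nothing to do with any specific firm's method — the series and numbers below are invented), here is plain least-squares line fitting and one-step extrapolation in pure stdlib Python:

```python
# Minimal sketch: fit a linear trend y = a*x + b over past points by
# ordinary least squares, then extrapolate one step ahead. No libraries
# needed, and it runs on a potato compared to an LLM.

def fit_linear_trend(ys):
    """Fit y = a*x + b over x = 0..n-1 by least squares; return (a, b)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_next(ys):
    """Extrapolate the fitted line one step past the end of the series."""
    a, b = fit_linear_trend(ys)
    return a * len(ys) + b

history = [10.0, 12.0, 14.0, 16.0]   # toy series, perfectly linear
print(predict_next(history))         # extrapolates to 18.0
```

Real forecasting models are more involved (seasonality, noise, regularization), but the power budget is in this ballpark, not an LLM's.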
Yeah, it’s especially funny how people forgot that even tiny models, like the ~20-neuron networks used for primitive NPCs in 2D games, are called AI too and can literally run on a button phone (not a Nokia 3310, something slightly more powerful). These small specialized models have existed for decades. And the most interesting part is that relatively small models (a few thousand neurons) can work very well at predicting price trends, classifying objects by their parameters, estimating the chance of a specific disease from symptoms alone, etc. And they generally work better than LLMs at the same task.
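To make the "20 neuron NPC brain" idea concrete, here is a hedged sketch: a tiny feedforward net with hand-picked weights (no training, no libraries — every weight below is invented for illustration) that maps two normalized inputs, own health and enemy distance, to an attack/flee decision:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> 3 hidden neurons -> 1 output: six neurons total,
# trivially cheap to evaluate on weak hardware.
HIDDEN_W = [[4.0, -4.0],   # fires when healthy and enemy is close
            [-4.0, 4.0],   # fires when wounded and enemy is far
            [2.0, 2.0]]
HIDDEN_B = [0.0, 0.0, -2.0]
OUT_W = [[3.0, -3.0, 1.0]]
OUT_B = [-0.5]

def npc_decision(health, enemy_distance):
    """Inputs normalized to [0, 1]; output > 0.5 means attack."""
    hidden = layer([health, enemy_distance], HIDDEN_W, HIDDEN_B)
    (score,) = layer(hidden, OUT_W, OUT_B)
    return "attack" if score > 0.5 else "flee"

print(npc_decision(0.9, 0.1))  # healthy, enemy close -> attack
print(npc_decision(0.1, 0.9))  # wounded, enemy far   -> flee
```

A shipped game would learn these weights (or use behavior trees instead), but the runtime cost is exactly this: a handful of multiplies per decision.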
Do you have an example of some games that use small neural networks for their NPC AIs? I was under the impression that most video game AIs used expert systems, at least for built-in ones.
Black & White used machine learning, if I recall. Absolutely a classic of a game, highly recommend a play if you never have. Dota 2 has a machine-learning-based AI agent for its bots, tho I’m unsure if those are actually in the standard game or not.
Forza and a few other racing games throughout the years have used ML to various degrees.
And Hello Neighbor was a rather infamously bad indie game that used it.
For a topical example, Arc Raiders used machine learning to train its AI during development, tho it doesn’t run on the live servers to keep updating it.
For LLM examples, Where Winds Meet is using small LLMs for its AI dialogue interactions, which makes for very fun RP mini games.
I’m sure there’s more examples but these are what I can think of and find off Google.
Well, from what I know, modern chess engines use relatively small AI models that take the current state of the board as input and help pick the next best move. Stockfish, for example, uses a small neural network (NNUE) to evaluate positions. Also, there is a game called Supreme Commander 2 that is confirmed to use small neural models to run its NPCs. And, as someone somewhat involved in game development, I can say that the indie game framework libGDX provides an AI module (gdx-ai) that can be tuned to whatever level you need for running NPC decisions, and it can be scaled any way you want.
As I understand it, chess AIs are more like brute-force searchers: they take the current board, generate a tree of all possible moves from that position, then iterate on those new positions up to a certain depth (which is what the “depth” of the engine refers to). I think some use other algorithms to “score” each position and keep the search to the interesting branches, though that could introduce bias that makes them miss moves that look bad but actually set up a better position. Ultimately, they do need some way to compare different ending positions if the depth doesn’t bring them to checkmate in all paths.
So it chooses the most intelligent move it can find, but does it by essentially playing out every possible game, kinda like Dr Strange in Infinity War, except chess has a more finite set of states to search through.
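The search being described is basically depth-limited minimax. A minimal sketch, on a toy game invented purely for illustration (each ply a player adds 1 or 2 to a running total; the maximizer wants it high, the minimizer low, and the heuristic "score" is just the total when depth runs out):

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Depth-limited minimax: recurse over moves(state), alternating
    max/min, and fall back to the heuristic evaluate(state) at the
    depth cutoff or at terminal states (no legal moves)."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    scores = (minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children)
    return max(scores) if maximizing else min(scores)

# Toy game: state is a running total, each move adds 1 or 2, game
# ends at 10. The "eval function" is just the total itself.
moves = lambda total: [total + 1, total + 2] if total < 10 else []
evaluate = lambda total: total

# From 0, searching 4 plies: max adds 2, min adds 1, max adds 2,
# min adds 1 -> best achievable total is 6.
print(minimax(0, 4, True, moves, evaluate))
```

Real engines add alpha-beta pruning, move ordering, and a far smarter `evaluate` (this is where Stockfish’s neural net slots in), but the tree-plus-cutoff skeleton is this.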
Maybe. I haven’t studied modern chess engines that deeply. All I know is that you can either use the brute-force method that recursively calculates each possible move, or train an AI model on existing brute-force engines so it simply guesses the best move without actually recalculating every possibility. Both approaches work, each with its own benefits and downsides.
But all of this is according to my knowledge, which can be incomplete, so I recommend rechecking this info.
They are the same.
What’s most annoying to me about the fiasco is that things people used to be okay with, like ML, which has always been lumped in under the term AI, are now getting hate because they’re “AI”.
What’s worse is that management conflates the two all the time, and whenever i give the outputs of my own ML algorithm, they think it’s LLM output. and then they ask me to just ask ChatGPT to do any damn thing that i would usually do myself, or feed into my ML to predict.
? If you make and work with ml you are in a field of research. It’s not a technology that you “use”. And if you give the output of your “ml” then that is exactly identical to an llm output. They don’t conflate anything. Chat gpt is also the output of “ml”
when i say the output of my ml, i mean i give the prediction and a confidence score. for instance, if a process has a high probability of being late based on the inputs, I’ll say it’ll be late, with the confidence. that’s completely different from feeding the figures into a GPT and repeating whatever the llm says.
and when i say “ml” i mean a model I trained on specific data to do a very specific thing. there’s no prompting, and no chatlike output. it’s not a language model
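For a concrete sketch of that kind of output (not the poster's actual model — the "process lateness" features, data, and threshold below are all invented for illustration), here is a small logistic regression trained from scratch that returns a label plus a confidence score instead of chat text:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.5, epochs=2000):
    """Logistic regression via plain stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return a label and the model's confidence in that label."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    label = "late" if p >= 0.5 else "on time"
    confidence = p if p >= 0.5 else 1.0 - p
    return label, confidence

# Invented toy data. Features: [queue_length (normalized), rework_flag];
# label 1 = the process ended up late.
X = [[0.1, 0], [0.2, 0], [0.8, 1], [0.9, 1], [0.7, 0], [0.3, 1]]
y = [0, 0, 1, 1, 1, 0]

w, b = train(X, y)
label, conf = predict(w, b, [0.85, 1])
print(f"{label} (confidence {conf:.0%})")
```

No prompt, no generated prose: the output is a number you can report, audit, and recalibrate, which is the distinction being drawn here.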
Nope, same tech
Which crayon color has the best flavor?
Eh… Wdym? The algos that trade fight at the microsecond level. They adapt to each other and never stop changing. It’s exactly the same problem. Do you think an llm is a unique neural net? They all work the same. When you try to sound like ml is not the same as llm, or as if ml is neural nets, you don’t help anyone understand any of those concepts, because you don’t yourself
That is a crazy amount of nonsensical word salad to use to try to call someone else out for lacking understanding.
I mean, just the flawed ideas that all trading algos are neural nets, or that all neural nets are the same, or that the rectangle of ML doesn’t include neural nets… These are all wildly erratic non sequiturs.