In a recent project, a computer learned to play the classic game Super Mario Kart on real Super Nintendo Entertainment System (SNES) hardware using an evolutionary algorithm. The program, named LuEAgi, was developed by LF_MrL314 and is designed to learn how to play with minimal information provided.

LuEAgi mimics the concept of natural selection in its learning process: it repeatedly attempts the task, learns from its mistakes, and gradually improves its strategy for the game's challenges.


The question is, of course, does Skynet start out by beating us at Mario Kart?

  • @[email protected]
    415 days ago

    The “learning” isn’t the same kind of learning that humans do. There is no abstraction or meta layer, only whether or not a sequence of inputs achieved an output deemed successful by a human. Programs like these interact with the game, essentially, as one static screen shot at a time. For any given configuration, the input that is most likely to result in success (based on prior experience in the form of training) is reinforced so it becomes more likely, a bit like training a dog. Except a dog knows what a ball is.
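    The "reinforce the input that worked" idea above can be sketched as a tiny tabular policy. This is a minimal illustration, not LuEAgi's actual code: the action names, the per-state weight table, and the learning rate are all made up for the example.

    ```python
    import random
    from collections import defaultdict

    # Hypothetical controller inputs for the sketch.
    ACTIONS = ["left", "right", "accelerate", "brake"]

    # One weight per (state, action); a "state" stands in for one static
    # screenshot. Higher weight -> the action is chosen more often.
    weights = defaultdict(lambda: {a: 1.0 for a in ACTIONS})

    def choose_action(state):
        # Sample an action with probability proportional to its weight.
        w = weights[state]
        return random.choices(list(w), list(w.values()))[0]

    def reinforce(state, action, success, lr=0.5):
        # Success (as judged externally) makes the input more likely next
        # time; failure nudges it down, never below a small floor.
        if success:
            weights[state][action] *= 1.0 + lr
        else:
            weights[state][action] = max(0.05, weights[state][action] * (1.0 - lr))
    ```

    Note there is no abstraction here, exactly as described: the table only records which raw inputs paid off in which screen configurations.
    
    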

    This is similar to how Google’s Go models worked. For any given configuration, a set of probabilities are generated based on the weights in the model, which are based on the training (initial values are arbitrary). The main difference is that Google could simulate zillions of AI vs. AI games at a high rate of speed. Anything with a live stream attached is mainly for entertainment value and subscriber count, otherwise you would have the game run at 1,000x speed so the computer could actually train faster.
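    "A set of probabilities generated from the weights" typically means something like a softmax over per-move scores. A hedged sketch, with made-up scores standing in for what a trained model would output:

    ```python
    import math

    def softmax(scores):
        # Convert arbitrary real-valued scores into probabilities that sum to 1.
        m = max(scores)  # subtract the max for numeric stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical model scores for three candidate moves in one configuration.
    move_scores = [2.0, 1.0, 0.1]
    probs = softmax(move_scores)  # highest-scored move gets the largest probability
    ```

    Sampling moves from these probabilities at simulation speed, rather than real time, is what lets self-play systems run through enormous numbers of games.
    
    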

    But the side effect of this kind of training is that each level is a new experience. This is somewhat analogous to how infants learn to avoid holes while crawling, but then have to relearn that when they begin walking.

    • bjorney
      215 days ago

      Yes, but if its first instinct is “go left” on 1-2, it’s pretty apparent the reward function could use some tuning