• @[email protected]
    29 months ago

    That assumes the model has been trained on a large corpus of the worldstate encoding and understands what that worldstate means in the context of its actions and responses. That’s basically impossible with the state of language models we have now.

    • @[email protected]
      19 months ago

      I disagree. Take this paper, for example, keeping in mind it’s already a year old (it used ChatGPT 3.5-turbo).

      The basic idea is pretty solid, honestly. Representing worldstate for an LLM is essentially the same as how you would represent it for something like a GOAP system anyway, so it’s not a new idea by any stretch.
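      To make the GOAP comparison concrete: a GOAP planner operates over a flat set of state facts, and that same flat representation serializes naturally into an LLM prompt. A minimal sketch (every key, value, and function name here is hypothetical, purely for illustration):

      ```python
      # Hypothetical sketch: a GOAP-style worldstate is just a flat mapping of
      # facts to values, which serializes trivially into prompt text for an LLM.

      def encode_worldstate(state: dict) -> str:
          """Serialize a flat key/value worldstate into prompt-friendly lines."""
          return "\n".join(f"- {key}: {value}" for key, value in sorted(state.items()))

      # Invented example facts, the kind a GOAP system would plan over.
      world = {
          "player_health": 40,
          "has_key": True,
          "door_locked": True,
          "npc_hostile": False,
      }

      prompt = (
          "Current world state:\n"
          + encode_worldstate(world)
          + "\nGiven this state, choose the agent's next action."
      )
      print(prompt)
      ```

      The point is that nothing about the representation itself is new; the open question the paper addresses is whether the model can act sensibly on it.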