Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • @Zeth0s
    link
    English
    1
    edit-2
    1 year ago

    It does, as the model only works with a well-defined chunk of tokens of a fixed length. Everything before that is lost. Clearly, part of the information from the previous context is still in that chunk.

    But let’s say that I am talking about wine, and at some point I mention chianti. The chatbot and I then go on discussing for over 4k tokens (I am using ChatGPT as an example) without mentioning chianti again. After that, the chatbot will know we are discussing wine, but it won’t know we covered the topic of chianti.

    This is what I meant.
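A minimal sketch of the truncation being described, assuming a simple fixed-size window over a flat token list (real chat models tokenize into subwords and the window size varies by model; the names and counts here are illustrative):

```python
# Sketch: a fixed-size context window keeps only the most recent tokens;
# anything older is simply dropped and can no longer influence the model.

def truncate_context(tokens, window=4096):
    """Keep only the last `window` tokens; everything earlier is lost."""
    return tokens[-window:]

# Simulate a long conversation: "chianti" appears early,
# then thousands of tokens follow without mentioning it.
conversation = ["wine", "chianti"] + ["filler"] * 5000
visible = truncate_context(conversation, window=4096)

print("chianti" in visible)  # prints False: the early mention fell out of the window
```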

    • @Womble
      link
      English
      1
      1 year ago

      I’m only going to reply this one more time, then I’m done here, as we are going round in circles. I’m saying that is not what happens: the attention network would link Chianti and wine together in that case and move information between them. So even after Chianti has gone out of the context window, it is more likely to pick Chianti than Merlot when it requires a type of wine.
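A toy illustration of the mechanism being described, assuming a single attention head over made-up 2-d embeddings (nothing here reflects ChatGPT's actual weights or dimensions): attention mixes each token's representation with the representations of tokens it attends to, so a "wine" token's vector absorbs some of "chianti"'s.

```python
import numpy as np

def attention(Q, K, V):
    """Single-head scaled dot-product attention over row-vector embeddings."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ V  # each output row is an attention-weighted blend

# Made-up embeddings: "wine" and "chianti" are close, the third is unrelated.
emb = np.array([[1.0, 0.0],    # wine
                [0.9, 0.1],    # chianti
                [0.0, 1.0]])   # unrelated token

out = attention(emb, emb, emb)
# out[0], the post-attention "wine" vector, now contains a sizeable
# contribution from the "chianti" vector, because the two attend strongly
# to each other. This mixing is how information can persist in later
# representations even after the original token is gone.
```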

      • @Zeth0s
        link
        English
        1
        1 year ago

        Good call, it doesn’t look like we are convincing each other ;)