• @[email protected]
    12 years ago

    Why would anyone want to cite something in a scientific paper that the reader cannot access or verify, and that may or may not be partially or completely fictional?

    • Drew Got No ClueOP
      11 years ago

      That’s a fair point. I don’t think it should be used as a reference in scientific papers (at least the way it works now), but there should still be some way to let people know where you got the information from.

      This is more useful with things like Bing Chat, which uses external sources to craft its answers, so you get “real” citations. But yeah, ChatGPT output is currently impossible to verify.

      I wonder if a seed could be implemented in the future to re-generate the same answers.
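A fixed seed does make sampling reproducible in principle. Here is a minimal sketch of the idea, with Python’s `random` module standing in for an LLM’s sampler and a toy vocabulary of my own invention; note that hosted LLM APIs are generally not bit-for-bit deterministic even when they expose a seed parameter, because of batching and hardware nondeterminism.

```python
import random

def sample_tokens(seed: int, vocab: list[str], n: int = 5) -> list[str]:
    """Toy 'generation': draw n tokens using a seeded RNG."""
    rng = random.Random(seed)  # seeding the RNG makes every draw reproducible
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "model", "says", "hello", "world"]
# Re-running with the same seed reproduces the same token sequence.
print(sample_tokens(42, vocab) == sample_tokens(42, vocab))
```

If a paper recorded the seed alongside the prompt and model version, a reader could in principle replay the exact generation, subject to the determinism caveats above.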

    • @[email protected]
      11 years ago

      At the very least, when it comes to analyzing the LLMs themselves, e.g. to discuss biases, there should be some common understanding of how to reference the things being discussed.

      But this is a poor attempt. They just point to the general ChatGPT URL and the access date, ignoring that multiple models are available at any given time.
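One way to make such references less ambiguous would be to record the model identifier and access details alongside the URL. A minimal sketch of such a record follows; the field names are my own suggestion, not an established citation standard, and the model name is just an example.

```python
import json

# Hypothetical citation record for an LLM interaction.
# Field names and values are illustrative, not a standard.
citation = {
    "system": "ChatGPT",
    "model": "gpt-4",                      # example model identifier
    "accessed": "2023-07-01",              # example access date
    "prompt": "Summarize the paper ...",   # the exact prompt used
    "url": "https://chat.openai.com/",
}
print(json.dumps(citation, indent=2))
```

Even without reproducibility, a record like this at least tells readers which model was queried and when, which the URL-plus-date style being criticized above does not.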

  • @[email protected]
    11 years ago

    They didn’t even bother to ask for the model version? That’s a rookie mistake in my book.