• @[email protected]
    link
    fedilink
    English
    103 months ago

    Because these posts are nothing but the model making up something believable to the user. This “prompt engineering” is like asking a parrot who’s learned quite a lot of words (but not their meaning), and then the self-proclaimed “pet whisperer” asks some random questions and the parrot, by coincidence makes up something cohesive. And he’s like “I made the parrot spill the beans.”

    • @[email protected]
      link
      fedilink
      English
      153 months ago

      if it produces the same text as its response in multiple instances I think we can safely say it’s the actual prompt

      • David GerardOPM
        link
        fedilink
        English
        123 months ago

        yeah, the ChatGPT prompt seems to have spilt a few times, this is just the latest

      • @[email protected]
        link
        fedilink
        English
        73 months ago

        Even better, we can say that it’s the actual hard prompt: this is real text written by real OpenAI employees. GPTs are well-known to easily quote verbatim from their context, and OpenAI trains theirs to do it by teaching them to break down word problems into pieces which are manipulated and regurgitated. This is clownshoes prompt engineering done by manager-first principles like “not knowing what we want” and “being able to quickly change the behavior of our products with millions of customers in unpredictable ways”.