Earlier this week I discussed an example of ChatGPT giving ‘placeholder’ answers in lieu of real answers. Below is an example of what that looks like. I could swear this didn’t use to happen, but it basically just ‘doesn’t’ answer your question. I’m interested in how often other people see this behavior.

  • @TropicalDingdongOP
    link
    17 months ago

    Yeah I am kinda guessing it’s a cost-cutting measure to reduce work on the part of the LLM.

    • @magiccupcake
      link
      17 months ago

      Yeah, I’ve noticed lately it likes to be lazy.