My minimal LLM instructions. These are the current ones, written in particular for @jetbrains IDEs and their in-IDE AI.
LLMs overengineer so easily. They produce verbal diarrhea so readily. And they constantly add comments like “set a to 1” to lines like “a = 1”.
This prompt reins them in … somewhat.
Does this work? Given that the LLM doesn’t actually know anything or have feelings of uncertainty, surely it just adds a chance that it will say “I don’t know” purely at random, without making it any more likely that the answers it does give are correct.
I find it kind of hilarious how almost every prompt I’ve seen leaked from various apps has a similar clause, as if it would have any effect at all on the result.
Seeing engineers resort to what is basically prayer and wishful thinking, with no factual basis, is pretty funny.
“Please, don’t give me wrong results 0_0”
@glassware In my experience, it seems to. It’s all fuzzy anyway, but it has hallucinated APIs less often since I added that clause to my prompts. I’ve also run accidental A/B tests: sometimes, with Claude, I forget to switch to my own prompt, watch it spew random verbal diarrhea, then switch back, and it’s better.
It doesn’t have feelings, but it does have a kind of certainty: the probability it assigns to each possible continuation. I think the prompt does influence that.
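To make that concrete, here’s a toy sketch of what “certainty of prediction” means. This is plain Python with made-up numbers, not any real model or API: a model’s next-token “confidence” is just the shape of the probability distribution it produces, peaked when it’s “sure”, flat when it isn’t.

```python
import math

def softmax(logits):
    # Standard softmax, shifted by the max for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in nats: low when one token dominates, high when flat.
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical logits over four candidate next tokens (illustrative only).
confident = softmax([9.0, 1.0, 0.5, 0.2])   # one token dominates
uncertain = softmax([2.0, 1.9, 1.8, 1.7])   # near-uniform spread

print(max(confident), entropy(confident))   # high top probability, low entropy
print(max(uncertain), entropy(uncertain))   # low top probability, high entropy
```

The hand-wavy claim in the thread, then, is that a “say you don’t know” clause shifts which continuations the model rates as probable in the flat-distribution case, rather than giving it feelings.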