• @kromem
    19 months ago

    Literally yes.

    For example, about a year ago one of the multi-step prompting papers that improved results had the model first guess which expert would be best equipped to answer the question, then asked it to answer as that expert in a second pass. That two-pass approach did a better job than asking the model to answer directly.
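
    A minimal sketch of that two-pass "expert" idea, assuming the OpenAI Python client; the model name, prompt wording, and example question are placeholders of mine, not the paper's.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name, not from the paper

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Why does the sky appear blue at noon but red at sunset?"

# Pass 1: have the model name the expert best equipped to answer.
expert = ask(
    f"Question: {question}\n"
    "Name the single expert (role or profession) best equipped to answer "
    "this question. Reply with the role only."
)

# Pass 2: have the model answer the question in that expert's voice.
answer = ask(
    f"You are {expert}. Answer the following question as that expert:\n"
    f"{question}"
)
print(answer)
```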

    Pretraining is a regression toward the mean, so you need to bias the model back toward excellence with either fine-tuning or in-context learning.
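
    For the in-context learning route, one common form is just prepending a few high-quality exemplar exchanges before the real question; a hedged sketch below, where the exemplar answer and model name are again my placeholders.

```python
from openai import OpenAI

client = OpenAI()

# A couple of carefully written exemplars to pull the model toward
# the quality of answer you want (placeholder content).
few_shot = [
    {"role": "user", "content": "What causes tides?"},
    {"role": "assistant", "content": (
        "Tides are driven mainly by the Moon's gravitational gradient "
        "across the Earth, with a smaller solar contribution; the "
        "differential pull raises two bulges in the oceans."
    )},
]

question = {"role": "user", "content": "Why is the sky blue?"}

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=few_shot + [question],
)
print(resp.choices[0].message.content)
```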