• @Womble
    21 years ago

    Yup, as long as you are aware that it could be wrong and look at it critically, LLMs at GPT scale are very useful tools. The best way I’ve heard it described is like having a lightning-fast intern who often gets things wrong but will always give it a go.

    So long as you’re calibrated to ask “how might this be wrong?” when looking at the results, it is exceptionally useful.