• @kromem
    26 months ago

    At the pretrained stage, the model effectively captures a whole bell curve of capabilities.

    Given sufficient context, it can autocomplete a flat-earther just as readily as a Nobel physicist.
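    To make that concrete, here's a minimal hypothetical sketch of persona-steering via context alone. It assumes the OpenAI Python SDK (>=1.0) with a completion-style model; the model name, prompts, and the `complete` helper are all illustrative, not anything from the original post:

    ```python
    # Hypothetical sketch: same pretrained weights, two personas selected
    # purely by context. Assumes the OpenAI Python SDK (>=1.0); the model
    # name and prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def complete(persona: str) -> str:
        # Condition the model on a persona via the prompt alone.
        prompt = f"{persona} the shape of the Earth: it is"
        resp = client.completions.create(
            model="gpt-3.5-turbo-instruct",  # any completion-capable model
            prompt=prompt,
            max_tokens=40,
            temperature=0.7,
        )
        return resp.choices[0].text.strip()

    # Same weights, opposite outputs, driven entirely by context:
    print(complete("A flat-earther explains"))
    print(complete("A Nobel laureate in physics explains"))
    ```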

    So it makes sense that even after the fine-tuning efforts there’d be a wide distribution in people’s experiences with these tools.

    But just as the average person’s Photoshop output isn’t going to be very impressive, if all you ever see is bad Photoshop jobs and average use, you might conclude it’s a crappy tool.

    There’s a learning curve to using these models, and even within just a year of research, what the exact same model can do has changed drastically from then to now, based solely on what’s been learned about better usage.

    The problem is that the base models are improving so quickly that the best practices for the old generation of models go out the window with the new one. So even if there were classes available, I wouldn’t bother pointing you to them, as you’d just be picking up info that would be obsolete by the time the classes finished, or shortly thereafter.

    If you don’t want to be surprised, I’d just strongly caution against betting against the tech’s continued improvement without having taken the time to see these models operating at their best.

    The OP’s post is pretty crap compared to the top 0.5% of usage.

    • @[email protected]
      16 months ago

      It really does depend on what you ask and how. I can get some really nice music recommendations from ChatGPT, but it also cannot comprehend GURPS skill rules; it’s actually funny how it manages to get them wrong in a completely different way each time.