• @[email protected]
    link
    fedilink
    4
    edit-2
    7 hours ago

    You can lose respect for me if you want; I generally hate LLMs, but as a D&D DM I use image generators to make pictures I can hand out to my players, to set the scene. I’m not a good enough artist and I don’t have the time to become good enough just for this purpose, nor am I rich enough to commission an artist for a work with a 24h turnaround time lol.

    I’m generally ok with people using LLMs to make their lives easier, because why not?

    I’m not ok with corporations using LLMs that have stolen the work of others, to reduce their payroll or remove the fun/creative parts of jobs, just so some investors get bigger dividends or execs get bigger bonuses

    • @raspberriesareyummy
      link
      28 hours ago

      I’m generally ok with people using LLMs to make their lives easier, because why not?

      Because 1) it adds to killing our climate and 2) it increases dependencies on western oligarchs / technocrats who are generally horrible people and enemies of the public.

      • @[email protected]
        link
        fedilink
        17 hours ago

        I agree, but the crux of my post is that it doesn’t have to be that way - it’s not inherent to the training and use of LLMs.

        I think your second point is what makes the first point worse - this is happening at an industrial scale, with the only concern being profit. We pay technocrats for the use of their services, and they use that money to train more models without a care for the damage it causes.

        I think a lot of the harm caused by model training could be forgiven if the models were used to better the quality of life of the masses, but they’re not; they’re mainly used to enrich technocrats and business owners at any expense.

        • @raspberriesareyummy
          link
          148 minutes ago

          Well - there’s nothing left to argue about. I do believe we have bigger climate killers than large computing centers, but it is a worrying trend to spend that much energy on an investment bubble built around what is essentially somewhat advanced word prediction. However, if we could somehow get the wish.com version of Tony Stark and other evil pisswads to die out, then yes, using LLMs for some creative ideas is a possibility. Or for references to other sources that you can then check.

          However, the way those models are being trained is aimed at impressing naive people, and that’s very dangerous, because those people mistake impressively coherent sentences for understanding and are willing to talk about automating tasks upon which lives depend.