• 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️
    29 points · 6 months ago

    IMO, if it’s not trained on images of real people, it only becomes unethical if you have it generate images of real people. At that point, it wouldn’t be any different from a human drawing a pornographic image, and drawings do not exploit anyone.

    • HorseChandelier
      25 points · 6 months ago

      > drawings do not exploit anyone.

      Hmmm. I think you will find in many jurisdictions that they are treated as if they do.

      • CALIGVLA
        32 points · 6 months ago

        Which is why nobody should use laws as a measure of morality, because they’re often fucking stupid.

    • Neato
      21 points · 6 months ago

      Using pornographic art to train is still using other people’s art without permission.

      And if it’s able to generate porn that looks like real people, it can be used to abuse people.

      • @Sorgan71
        -1 point · 6 months ago

        I’d be happy to use their art without their permission. They don’t get to decide what is trained with their art.

      • @[email protected]
        -24 points · 6 months ago (edited)

        [Edited] I agree that we should be taking consent more seriously, especially when it comes to monetizing off the back of donations; that’s just outright wrong. However, I don’t think we should consider scrapping it all or putting in extraneous, consumer-damaging “safeguards”. There are lots of things that can cause harm, and I’ll argue almost anything can be used to harm people. That’s why it’s our job to carefully pump the brakes on progress, so that we can assess what risk is possible and how to treat any wounds that may be incurred. For example, invading a country to spread “democracy” and leaving things like power gaps behind, causing more damage than what was there originally. It’s a very, very thin rope we walk across, but we can’t afford, in today’s age, to slow down too far. We face a lot of serious problems that need more help, and AI can fill that gap in addition to being a fun, creative outlet. We hold a lot of new power here, and I just don’t want to see that squandered away into the pockets of the ruling class.

        I don’t think anyone should take luddites seriously tbh (edit: we should take everyone seriously, and learn from mistakes while also potentially learning forgotten lessons)

        • HelloThere
          10 points · 6 months ago

          You clearly have no idea what the luddites actually stood for.

          • @[email protected]
            -1 point · 6 months ago (edited)

            You’ll notice I used the lower-case L, which implies I’m referring to the term as it’s commonly used today. (edit: this isn’t an excuse to ruin the definition or history of what the luddites were trying to do; this was wrong of me)

            Further, explain to me how this is different from what the luddites stood for, since you obviously know so much more and I’m so off base with this comment.

            edit: Exactly, just downvote and don’t actually make any sort of claim. Muddy that water! edit 2: shut up, angsty past me.

            • HelloThere
              2 points · 6 months ago (edited)

              So, I didn’t downvote you because that’s not how I operate.

              The Luddites were not protesting against technology in and of itself, they were protesting against the capture of their livelihoods by proto-capitalists who purposefully produced inferior quality goods at massive volume to drive down the price and put the skilled workers out of business.

              They were protesting market capture, and the destruction of their livelihood by the rich.

              This sort of practice is these days considered a classic example of monopolistic market failure.

              There is a massive overlap between the philosophy of the Luddites, and the cooperative movement.

              The modern usage of the term is to disparage the working class as stupid, feckless, and scared. This has never been true.

              • @[email protected]
                2 points · 6 months ago (edited)

                I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But as soon as people argue that it’s only a tool that can harm, that’s where I draw the line. That’s, in my opinion, when govts/ruling class/capitalists/etc start to put in BS “safeguards” to prevent the public from making use of the new power/tech.

                I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it’s something I’m trying to work on, so I appreciate your cool-headed response here. I took the “you clearly don’t know what luddites are” as an insult to what I do or don’t know. I specifically was trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn’t mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in the ruling minority/capitalist practices.

                Again, it was an off-the-cuff response; I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.

                • HelloThere
                  2 points · 6 months ago

                  > AI is a tool that should be kept open to everyone

                  I agree with this principle; however, the reality is that, given the massive computational power needed to run many (but not all) models, the control of AI is in the hands of the mega corps.

                  Just look at what the FAANGs are doing right now, and compare to what the mill owners were doing in the 1800s.

                  The best use of LLMs, right now, is for boilerplating initial drafts of documents. Those drafts then need to be reviewed, and tweaked, by skilled workers, ahead of publication. This can be a significant efficiency saving, but does not remove the need for the skilled worker if you want to maintain quality.

                  But what we are already seeing is CEOs, etc, deciding to take “a decision based on risk” to gut entire departments and replace them with a chat bot, which then hallucinates the details of a particular company policy, leading to a lower quality service but significantly increased profits, because you’re no longer paying for ensured quality.

                  The issue is not the method of production, it is who controls it.

                  • @[email protected]
                    1 point · 6 months ago

                    I can see where you’re coming from - however I disagree with the premise that “the control of AI is in the hands of the mega corps”. AI has long been a research topic pursued not solely by huge corps, but by researchers who publish their findings. There are several options out there right now for consumer-grade AI where you download models yourself and run them locally (Jan, PyTorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc. - many of which come from FAANG, but are still, nevertheless, open source and usable by anyone; I’ve used several to make my own AI models).

                    Consumers and researchers alike have an interest in making this tech available to all, not just businesses. The vast majority of the difficulty in training AI is obtaining datasets large enough, with enough orthogonal “features”, to ensure the model’s efficacy. Namely, this means that tasks like image generation, editing and recognition (huge for the medical sector, including finding cancers and other problems), documentation creation (to your credit), speech recognition and translation (huge for the differently-abled community and for globe-trotters alike), and education (I read from huge public research data sets, public domain books and novels, etc) are still definitely feasible for consumer-grade usage and operation. There are also some really neat usages, like federated TensorFlow and distributed TensorFlow, which allow for, perhaps obviously, distributed computation, opening the door to stronger models run by anyone who will serve them.

                    I just do not see the point in admitting total defeat/failure for AI because some of the asshole greedy little pigs in the world are also monetizing/misusing the technology. The cat is out of the bag, in my opinion; the best (not only) option forward is to bolster consumer-grade implementations, encouraging things like self-hosting and local operation/execution, and creating minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology. A minimal sketch of what “run them locally” looks like is below.
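
                    To make that concrete, here’s a minimal sketch using PyTorch/torchvision (one of the options named above); the pretrained weights are downloaded once, cached, and inference runs entirely on your own hardware. The image filename is a hypothetical stand-in, and this is just one way to do it:

                    ```python
                    # Minimal local-inference sketch with PyTorch/torchvision.
                    # Weights are fetched once, cached locally, and everything
                    # after that runs on this machine - no cloud service involved.
                    import torch
                    from torchvision import models, transforms
                    from PIL import Image

                    # Load a pretrained image classifier from the local cache
                    # (downloaded on first use).
                    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
                    model.eval()

                    # Standard ImageNet preprocessing for this model family.
                    preprocess = transforms.Compose([
                        transforms.Resize(256),
                        transforms.CenterCrop(224),
                        transforms.ToTensor(),
                        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                             std=[0.229, 0.224, 0.225]),
                    ])

                    img = Image.open("example.jpg").convert("RGB")  # hypothetical local image
                    batch = preprocess(img).unsqueeze(0)            # add batch dimension

                    with torch.no_grad():
                        logits = model(batch)
                    print(logits.argmax(dim=1).item())  # predicted ImageNet class index
                    ```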

        • @[email protected]
          6 points · 6 months ago (edited)

          > I don’t think anyone should take luddites seriously tbh

          We just had a discussion on here about how Florida was banning lab-grown meat.

          I mean, the Luddites were a significant political force at one point.

          I may not agree with their position, but “I want to ban technology X that I feel competes for my job” has had an impact over the years.

          • @[email protected]
            -1 point · 6 months ago

            They had an impact because people allowed themselves to take their fear-mongering seriously.

            It’s regressive and it stunts progress needlessly. That’s not to say we shouldn’t pump the brakes, but I am saying that logic like “it could hurt people” as a rationale to never use it is just “won’t someone think of the children” BS.

            You don’t ban all the new swords; you learn how they’re made, how they strike, and what kinds of wounds they create, and you address that problem. Sweeping it under the rug or putting it back in its box is not an option.

          • @[email protected]
            -2 points · 6 months ago

            People have severe allergic reactions to peanut butter, which means it “could be used against people” as a weapon.