• @Narann
    English • 20 points • 1 year ago

    I suspect this will be more and more common in the future.

    I wonder if this can really be stopped.

    I think being transparent about what is AI-generated is very important, so we can choose to support “original content” creators.

    Maybe AI will even bring “original creators” more recognition.

      • Sandra
        2 points • 1 year ago

        AI will be everywhere going forward. And that’s fine.

        The issue is more how it will be used.

        There are two other pretty big problems. One is that there’s a huge climate impact with runaway energy use, and the other is that it’s a very expensive means of production which leads to further concentration of wealth & power.

          • Sandra
            2 points • 1 year ago

            > I don’t think energy use is a serious problem; that just seems to get thrown around because it’s trendy. Does it even matter

            Yes, since it’s a rapidly growing field.

            > compared to gaming or crypto?

            Proof-of-work based tokens are the enemy and not what we should be comparing things to.

            > It’s also an easily solved problem: just install more solar.

            It’s a little trickier than that. Renewable doesn’t mean infinite; we still need to limit consumption to sustainable rates. There’s also the hardware in the rigs themselves: solvents, wiring, metals, plastic…

            > Training the initial model isn’t time-critical or dependent on location, so there is a lot of flexibility here that you wouldn’t have in other applications.

            That’s a good point. It’s less vulnerable to wind or light conditions.

            > Meanwhile running the already trained model is very cheap; it’s literally the most efficient way to solve the problem.

            Yep. I never argued against that part. That’s great, as long as we can hold it together and not make new models every fifteen minutes just to keep up with the Joneses. But there’s also a drawback to the “expensive to train, cheap to run” model: that asymmetry is the very thing driving the wealth concentration of big capital like Google.
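To make the “expensive to train, cheap to run” asymmetry concrete, here is a back-of-envelope sketch. Every figure in it is an illustrative assumption I picked for round numbers, not a measured value:

```python
# Back-of-envelope: how a one-off training cost amortizes over queries.
# Every figure here is an illustrative assumption, not a measurement.

TRAINING_ENERGY_KWH = 1_000_000    # assumed one-off energy cost of training
INFERENCE_ENERGY_KWH = 0.001       # assumed energy per query at inference time
LIFETIME_QUERIES = 1_000_000_000   # assumed number of queries the model serves

# The fixed training cost, spread over every query the model ever answers:
training_share = TRAINING_ENERGY_KWH / LIFETIME_QUERIES
total_per_query = training_share + INFERENCE_ENERGY_KWH

print(f"training share per query: {training_share:.4f} kWh")  # → 0.0010 kWh
print(f"total energy per query:   {total_per_query:.4f} kWh")  # → 0.0020 kWh
```

Under these assumed numbers the fixed training cost amortizes down to the same order as per-query inference energy; but that only holds if the model is kept in service long enough, which is exactly the worry about retraining constantly.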

            > Basically, people are going to use AI when it makes better use of time/money/energy than the competition. Nobody is going to use AI to burn energy just for the fun of it, it has to improve on what we already have.

            That would be a perfect argument if environmental externalities were priced into transactions, but they aren’t. Using energy is cheaper than it “should” be, given the environmental impact of that energy use. The old “if I sell you a can of gas, the price of the forest that got wrecked by that gas isn’t factored in” problem. Even otherwise laissez-faire stalwarts like Hayek acknowledged this.

            > As for the concentration of power and wealth, that can certainly happen to some degree, but I could also easily see that getting balanced out by the amount of freedom that local models give.

            Right; once it does get truly democratized with open-source models, we can have a post-scarcity, pay-it-forward future where the step from dream to reality is smaller than ever before.

            We’ve been through back-and-forths like this before. The big-data mainframe era was replaced by the PC. Then things got centralized again in the age of big dial-up. But then with broadband everyone could run a server. And then the web 2.0 debacle happened and we got a silo era, where people voluntarily started using Google Search and Facebook Messenger and the like, handing big capital ownership of our platforms.

            You seem like you have a good head on your shoulders (you’re on feddit, after all), but among the general population there’s a lack of awareness around these power- and wealth-concentration issues.

            > Now centralization can still happen. Google is sitting on more data than anybody, and if they make some multi-modal model trained on it all, that could be a very potent offering.

            Yes, and I want a plan for that.

            > Nothing in the AI space so far lasts very long

            Which is why we’re risking runaway energy use and climate impact.

          • Sandra
            1 point • 1 year ago

            If you were right about markets only using energy when it made sense, we wouldn’t have this problem:

            [A graph showing runaway energy use, rapidly increasing since the 19th century, mostly fossil fuels]

              • Sandra
                1 point • 1 year ago

                Yes, the fossil economy has enabled society as a whole to create temporary wealth; the past has borrowed from the present. It’s going to be a rough comedown.

                We haven’t been, and still aren’t, commensurately accounting for our environmental externalities.

        • @kmkz_ninja
          English • -4 points • 1 year ago

          > expensive means of production which leads to further concentration of wealth & power.

          That’s only an issue if we continue this brigade of trying to protect artists at everyone’s expense. Getting enough data to make a usable LLM will be impossible for all but the big players.

          • Sandra
            2 points • 1 year ago

            As a writer and painter, I’ve long been opposed to copyright and have been releasing stuff under Creative Commons licenses for over a decade. So don’t misinterpret me as agreeing with the brigade.

            A livelihood for artists is important, but so is a livelihood for everyone, and I’ve been arguing against the flawed “copyright is good for artists” position for decades; we’ve been having this exact same fight over copying since Napster, or even the cassette era. Gates’ infamous “Open Letter to Hobbyists” was in 1976, and the argument hasn’t changed.

            There’s a lot of starving artists out there, and a lot of rich publishers. It’s difficult getting food, shelter, medicine and other resources to go around, down here on Earth.

            In a world already deprived by such scarcity, we’d be better off without the shackles of artificial scarcity that copyright introduces.

            I say all that as a lead-in, because I’m about to absolutely disagree with part of the following:

            > That’s only an issue if we continue this brigade of trying to protect artists at everyone’s expense.

            As I wrote above, I agree with you re the so-called brigade and have done so publicly in the past, too.

            The myth that IP is a good way to sustain artists economically is part of the same bugged market-capitalist system that led to the extreme wealth concentration (Google, Microsoft, Amazon) in the first place.

            But what you are replying to, what I wrote, has nothing to do with the pro-copyright stance. I wrote that it’s a very expensive means of production which leads to further concentration of wealth & power.

            > Getting enough data to make a usable LLM will be impossible for all but the big players.

            Only if LAION gets shut down; LAION is freely available. The data is not the problem. The resources are: hardware, electricity, tensor hardware, e-waste, cooling, and so on. And I’m not saying startups and garage operations can’t get their hands on this kind of tech if they can profit from it, as we’ve seen in the proof-of-work “mining” debacle. It’s that since environmental externalities are under-accounted for, this will lead to climate-wrecking runaway resource use.

            I have a lot of sympathy for the artists on the other side who are protesting this with whatever futile li’l clogs in the cogs they’ve got; not because I think they’re right about who can learn from art, I disagree with them there, but because they’re a canary in the coal mine for how big capital can use automation to replace workers and how that’ll lead to an even bigger wealth gap (which is already at an historical high) and mass unemployment and economic desperation.

            As Amelia Earhart put it in 1935: “Obviously, research regarding technological unemployment is as vital today as further refinement or production of labor-saving and comfort-giving devices.” And we still haven’t figured that out. And it’s coming for artists, writers, programmers, game designers, economists, cooks, doctors, drivers, postal workers, psychologists: no one is safe. We need to figure out a way to distribute tasks and resources differently in a world where there are far fewer tasks and a lot more digital resources (while physical resources like fuel, food, and shelter are still limited). Politics is also going to get harder, since money correlates with power, no matter how much we’ve been trying to fight that corruption.

            Markets use prices to distribute resources, and prices are set by supply and demand, and that started breaking down in the cassette and floppy-disk age, when making the initial recording was very expensive but making copies of it was cheap. Big capital has tried to patch that hole to their advantage, at the expense of the public, by introducing artificial scarcity in the form of an exclusive right to make copies: “copyright”.

            And now it’s getting twisted one more turn, since now the initial work itself is easy to make, but the models, the makers themselves, are wholly owned by big corporations like Microsoft and Google. Capitalism was bad before. It’s going to get cataclysmic now that the workers are wholly owned machines.

            @[email protected] @[email protected]

            • Turun
              English • 1 point • 1 year ago

              For large language models you have a good point. The space is dominated by the closed-source company OpenAI; the open-source models don’t come close. This is indeed a worrying development. The current models are simply really, really expensive to run, so hobbyists can’t contribute in a meaningful way.

              But for image generation you basically only have Stable Diffusion and Midjourney. And I’d argue Stable Diffusion is much more widely used, due to the control it gives and because it can easily be run on consumer hardware. Customizing a model is also possible and takes only a few hours on a modern gaming computer.
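As a rough sanity check on the “runs on consumer hardware” point: Stable Diffusion v1.5’s components have approximately the parameter counts below (approximate public figures; treat them as assumptions), and in half precision the weights alone fit comfortably within a typical 8 GB gaming GPU:

```python
# Rough VRAM estimate for Stable Diffusion v1.5 weights in half precision.
# Parameter counts are approximate public figures; activations and
# framework overhead are ignored here.

UNET_PARAMS = 860_000_000          # ~860M parameters, denoising U-Net
TEXT_ENCODER_PARAMS = 123_000_000  # ~123M parameters, CLIP text encoder
VAE_PARAMS = 84_000_000            # ~84M parameters, image autoencoder
BYTES_PER_PARAM = 2                # fp16

total_params = UNET_PARAMS + TEXT_ENCODER_PARAMS + VAE_PARAMS
weights_gb = total_params * BYTES_PER_PARAM / 1e9

print(f"~{total_params / 1e9:.2f}B parameters, ~{weights_gb:.1f} GB of weights")
```

About 2 GB of weights in fp16, which is why inference (and lightweight fine-tuning) is feasible on a single consumer GPU, while training a comparable model from scratch is not.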

  • @FMT99
    English • 14 points • 1 year ago

    I may be in the minority here when I say I don’t see the problem. AI trained on millions of publicly available images, used to speed up the concept stage of development, seems like fair use to me. As the developer says, commercial artists have always used other folks’ work to speed up their development; that sounds more problematic to me than drawing inspiration from a huge dataset.

    • @[email protected]
      English • 7 points • 1 year ago

      “Fair use” has a specific meaning in copyright law. If something replaces the need for something else in the market, it’s almost certainly not fair use. Generative AI replaces the need to hire an original artist.

        • @[email protected]
          English • 1 point • edited • 1 year ago

          Copyright can be violated even if your output does not contain a copy.

          For example, if I burn a copy of your Disney DVD, watch it, and then write a review, I’ve violated copyright. The review doesn’t violate copyright, but the DVD I burned does. Even if I throw away my DVD after publishing my review.

          All the major AIs were trained with images that were downloaded from the web. When you download something from the web, you do not have an unlimited license to do what you want with your download. You may have a right to view it, but not use it for commercial purposes such as AI training. And if you use that image for AI training without permission, then you’ve violated copyright. Even if you delete the image after you’re done training your AI.

            • @[email protected]
              English • 0 points • edited • 1 year ago
              1. The SCOTUS ruled that VHS could legally be used to time-shift TV broadcasts, i.e., record a program in order to watch it later. If you have permission to watch a TV program, then watching it at a different time has no economic impact and is fair use. Making a copy of someone else’s DVD is still illegal. So is giving your VHS tape to someone else. They are not fair use.

              2. It is illegal to download copyright protected works. That applies to the person who receives the download, even if lawsuits tend to target those who share the file.

              3. It’s true the review itself doesn’t violate copyright, but my actions prior to the review (copying someone else’s DVD) did. It’s no different than sneaking into a movie theater in order to write the review. Focusing on the review misses the point.

              4. Any copyright protected work you gather from the Internet has a limited license. That license generally allows private non-commercial use, so most people are not in trouble.

              There was actually a lawsuit by Facebook against a company that was using a web scraper to gather data about Facebook users to build advertising trackers. The judge noted that if the web scraper was downloading user photographs and text posts then it was very likely infringing IP (but not Facebook’s IP, because the rights still belonged to the users).

          • @kmkz_ninja
            English • 1 point • 1 year ago

            I don’t think they were referring to fair use in copyright law. Just that it’s fair to use.

  • @[email protected]
    English • 14 points • 1 year ago

    I’m so unbothered by this. It’s sad for illustrators (and I say this as somebody with a daughter who dreamt of becoming a concept artist, and now clearly understands this isn’t going to happen) but time marches on.

    We don’t have typesetters any more. Cars have (largely) replaced horses.

    I think the best compromise I’ve heard is: AI generated output hasn’t been made by a human so can’t be copyrighted.

  • @Eheran
    English • 5 points • 1 year ago

    So much hot air for nothing. AI-generated text, or just a “good” human?

  • @[email protected]
    English • 4 points • 1 year ago

    I mean, in today’s board game space, with so many classics and new releases, it’s cool to know two companies I don’t have to bother buying games from.

    • @thorbot
      English • 4 points • 1 year ago

      Why do you care? Their lead artist is the one using the AI tools, and it’s not just full generation of images; they use it as a reference tool and as filler. People are up in arms about AI art but oblivious to the context in which it’s used. If a studio fires their artists and only uses AI, sure, that’s a bad thing. That’s not even remotely what’s happening at Fryx Games.

  • @[email protected]
    English • 3 points • 1 year ago

    I’m excited, as long as the output is curated. It allows small developers to make really exciting, large projects on a small budget, so we’re going to see more diversity in the creative space.

    And this isn’t going to kill artists. We’re going through an evolutionary period, where the sourcing of art is going to spark some wonderful debates within the copyright system. But you still need a source concept to generate from.