When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including images from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition because of its training data, Adobe never made clear that its model actually used images from some of these same competitors.

  • @Grimy
    7 months ago

    If the images have errors that are hard to spot with the human eye and the model gets trained on those images, errors that would have stayed rare in real data get amplified.

    It's not a model killer, but it is something to watch out for.

    • @General_Effort
      7 months ago

      Yes, if you want realism. But that’s just one of the things that people look for. Personal preference.

      • @SomeGuy69
        7 months ago

        Invisible artifacts still degrade the results, realistic or not: issues with fingers, shadows, eyes, colors, etc.
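The amplification effect the commenters describe can be illustrated with a toy experiment (this is a hypothetical sketch, not Adobe's or anyone's actual pipeline): repeatedly fit a simple model to samples generated by the previous model, analogous to training an image model on a previous model's outputs. Small sampling errors compound each generation, and the fitted distribution tends to drift and lose variance over time, a simplified version of what is often called "model collapse". All function names and parameters below are made up for the demonstration.

```python
import random
import statistics

def fit_and_resample(data, n_samples, rng):
    # "Train" a toy model: estimate the mean and stdev of the data,
    # then generate a new synthetic dataset from that fitted model.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

def generations_of_variance(n_generations=500, n_samples=20, seed=0):
    # Start from "real" data (standard normal), then train each
    # generation only on the previous generation's synthetic output.
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    variances = []
    for _ in range(n_generations):
        data = fit_and_resample(data, n_samples, rng)
        variances.append(statistics.variance(data))
    return variances

if __name__ == "__main__":
    variances = generations_of_variance()
    # Later generations typically have far less variance than the
    # original data: the model has collapsed toward a narrow output.
    print(f"gen 1 variance:   {variances[0]:.4f}")
    print(f"gen 500 variance: {variances[-1]:.2e}")
```

With small sample sizes the collapse is fast; larger samples slow it down but the drift never fully goes away, which is why errors that are invisible to a human reviewer can still accumulate once synthetic images enter the training set.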