• @MrJameGumb
    84 months ago

    It’s so badass he needed an extra hand and several extra fingers just to play it!

    • @[email protected]
      4 months ago

      I saw this neat project the other day, detextify.

      All it does is, in an automated fashion and over a large number of images, run OCR on each image, take the bounding boxes of any detected text, and inpaint those areas, repeating until it can’t detect any text there.
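      Roughly, the loop they describe might look something like this (just a sketch I threw together, not their actual code; I’m using pytesseract for the OCR step and OpenCV’s classical inpainting as a stand-in for the diffusion inpainting model detextify actually calls):

      ```python
      import cv2
      import numpy as np
      import pytesseract
      from pytesseract import Output

      def remove_text(path, max_rounds=5, min_conf=60):
          img = cv2.imread(path)
          for _ in range(max_rounds):
              # OCR the current image and collect bounding boxes of detected words
              data = pytesseract.image_to_data(img, output_type=Output.DICT)
              mask = np.zeros(img.shape[:2], dtype=np.uint8)
              found = False
              for i, word in enumerate(data["text"]):
                  if word.strip() and float(data["conf"][i]) >= min_conf:
                      x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
                      mask[y:y + h, x:x + w] = 255  # mark the text region for inpainting
                      found = True
              if not found:  # no text detected any more, so we're done
                  break
              # fill the masked regions; a diffusion inpainting model would go here instead
              img = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
          return img

      # cv2.imwrite("cleaned.png", remove_text("generated.png"))
      ```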

      It does kind of seem to me like that might be a generalizable approach. That is, it might be hard to write software that draws a good image of someone with just two hands and the right number of fingers. But…it might be an easier problem to identify, or at least flag, likely mismatches in the number of fingers.

      Like, the ideal would be to have the image models generate the right number of fingers and stuff like that in the first place. But in the absence of that, having software that automatically identifies problematic features in an image and simply regenerates them might be a reasonably doable workaround.
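      To make that concrete, here’s the flagging half only, sketched with MediaPipe’s off-the-shelf hand detector (counting individual fingers reliably is harder, so this just flags images that contain more hands than expected; the threshold and file name are made up for the example):

      ```python
      import cv2
      import mediapipe as mp

      def flag_extra_hands(path, expected_max=2):
          # MediaPipe expects an RGB image; OpenCV loads BGR, so convert first
          img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
          with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=6) as hands:
              results = hands.process(img)
          n_hands = len(results.multi_hand_landmarks or [])
          # flag the image if the detector finds more hands than a person should have
          return n_hands > expected_max, n_hands

      # flagged, n = flag_extra_hands("generated.png")
      ```

      A flagged image could then be handled the same way detextify handles text: mask the suspect region and re-inpaint it until the detector stops complaining.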

      EDIT: Actually, the detextify page even specifically mentions dealing with extra fingers, so maybe they’re planning to do that:

      We all know generative AI is the coolest thing since sliced bread 🍞.

      But try using any off-the-shelf generative vision model and you’ll quickly see that these systems can get… creative with interpreting your prompts.

      Specifically, you’ll observe all kinds of weird artifacts on your images from extra fingers on hands, to arms coming out of chests, to alien text written in random places.

      For generative systems to actually be usable in downstream applications, we need to better control these outputs and mitigate unwanted effects.

      We believe the next frontier for generative AI is about robustness and trust. In other words, how can we architect these systems to be controllable, relevant, and predictably consistent with our needs?

      Detextify is the first phase in our vision of robustifying generative AI.

      If we get this right, we will unlock slews of new applications for generative systems that will change the landscape of human-AI collaboration. 🌎