Each of these reads like an extremely horny and angry man yelling their basest desires at Pornhub’s search function.

  • @[email protected]
    link
    fedilink
    English
    -1
    1 year ago

    Are you actually asking?

    The gist is that LLMs find similar “chunks” of content from their training set, and assemble a response based on a similarity score (how similar each chunk is to your prompt).

    They know nothing they haven’t seen before, and the novelty of them is that they create new things from parts of their training data.
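    The chunk-similarity idea above can be sketched with a toy example. This is purely illustrative: the chunk texts and embedding vectors are made up, and real models use learned embeddings with hundreds or thousands of dimensions rather than hand-picked numbers, but the scoring step (cosine similarity) is the standard one.

    ```python
    import math

    # Made-up embedding vectors for a few training "chunks".
    # Real systems would produce these with a learned embedding model.
    chunks = {
        "cats chase mice": [0.9, 0.1, 0.0],
        "dogs chase cats": [0.7, 0.3, 0.0],
        "stocks fell today": [0.0, 0.2, 0.9],
    }

    def cosine_similarity(a, b):
        """Similarity score in [-1, 1]; 1 means the vectors point the same way."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Pretend this is the embedding of the user's prompt.
    prompt_vec = [0.8, 0.2, 0.0]

    # Rank chunks by how similar they are to the prompt.
    ranked = sorted(
        chunks,
        key=lambda c: cosine_similarity(prompt_vec, chunks[c]),
        reverse=True,
    )
    print(ranked[0])  # the best-matching chunk
    ```

    Here the animal-related chunks score near 1 against the animal-related prompt while the finance chunk scores near 0, which is the basic mechanism behind "find similar content" in retrieval-style systems.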

    Obviously they are very impressive tools, but the concern is that you can easily take a model that’s designed for porn, feed it pictures of someone you want to shame, and have it generate lifelike porn of a non-porn actor.

    That, and the line around “ethical” AI porn is blurry.

    • tal
      link
      fedilink
      2
      edit-2
      1 year ago

      They know nothing they haven’t seen before

      Strictly speaking, you arguably don’t either. Your knowledge of the world is based on your past experiences.

      You do have more-sophisticated models than current generative AIs do, though, to construct things out of aspects of the world that you have experienced before.

      The current crop are effectively more-sophisticated than simply pasting together content – try making an image and then adding “blue hair” or something, and you can get the same hair, but recolored. And their ability to replicate artistic styles is based on commonalities in seen works, but you don’t wind up seeing chunks of material just done by that artist. But you’re right that they are considerably more limited than a human.

      Like, you have a concept of relative characteristics, and the current generative AIs do not. You can tell a human artist “make those breasts bigger”, and they can extrapolate from a model built on things they’ve seen before. The current crop of generative AIs cannot. But I expect that the first bigger-breast generative AI is going to attract users, based on a lot of what generative AIs are being used for now.

      There is also, as I understand it, some understanding of depth in images in some existing systems, but the current generative AIs don’t have a full 3d model of what they are rendering.

      But they’ll get more-sophisticated.

      I would imagine that there will be a combination of techniques. LLMs may be used, but I doubt that they will be pure LLMs.