• GrappleHat
    8 months ago

    I’m very skeptical that this “model poisoning” approach will work in practice. Pulling it off would require a very high level of coordination among the disparate people generating the training data (the images/text). I just can’t imagine it happening. Add to that: big tech has A LOT of resources to play this cat-and-mouse game.

    I hope I’m wrong, but I predict big tech wins here.

    • @General_Effort
      8 months ago

      This attack doesn’t target Big Tech at all. The model has to be open to pull off an attack like that.