cross-posted from: https://sh.itjust.works/post/18066953

On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

  • @pixxelkick
    7 months ago

    Nah, it’s actually pretty obvious if you know where to look.

    Watch the "person"s teeth as they talk. Warning: it actually can be kinda gross once you are watching for it.

      • AwkwardLookMonkeyPuppet
        7 months ago

        Every month for the last year, AI researchers have made more progress than they expected to make in the next couple of years combined, and the rate of progress is accelerating. Convincing fakes are coming much sooner than anyone thinks.

    • Ech
      7 months ago

      It’s weird to keep seeing these dismissals about how easy it is to spot generated media, as if we haven’t already seen an insane jump in capability in just the last year. There is no future where this tech’s realism doesn’t become a problem, and personally I think that point is much closer than most people expect.

    • @Cort
      7 months ago

      Just don’t practice on Rudy Giuliani