Thanks to rapid advancements in generative AI, and to a glut of training data recorded by human actors and fed into its model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job of matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. More and more of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming increasingly difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences.

“I think we might just have to say goodbye to finding out about the truth in a quick way,” says Sandra Wachter, a professor at the Oxford Internet Institute, who researches the legal and ethical implications of AI. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore.”

  • @Spiralvortexisalie
    38 months ago

    It takes an extra two minutes; that's why it's dead in the water. People go, "Who would spend that much time deepfaking me, that I should spend an extra two minutes assuring integrity?" And for most of the population, they're probably right.

    • @[email protected]
      18 months ago

      I suppose you are correct, but it seems like a standard could be adopted to automate the process for both the creator and the consumer. And while the system would work for any creator, it seems most important to be able to ensure the integrity of the work product of journalistic professionals.
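      The automated standard the comment imagines would, at its core, let a publisher attach a cryptographic signature to a piece of content so that any consumer's software can check it untouched. Real provenance standards such as C2PA use public-key signatures and embedded manifests; the sketch below is a deliberately minimal stand-in using an HMAC with a shared key, with the key name and article text purely hypothetical.

      ```python
      # Toy sketch of automated content-integrity checking. Real systems
      # (e.g. C2PA) use public-key cryptography; an HMAC with a shared
      # secret is used here only to keep the example self-contained.
      import hashlib
      import hmac

      def sign(content: bytes, key: bytes) -> str:
          """Publisher side: produce a tag that travels with the content."""
          return hmac.new(key, content, hashlib.sha256).hexdigest()

      def verify(content: bytes, tag: str, key: bytes) -> bool:
          """Consumer side: recompute the tag and compare in constant time."""
          return hmac.compare_digest(sign(content, key), tag)

      # Hypothetical article and key, for illustration only.
      article = b"Original journalist copy."
      key = b"newsroom-signing-key"

      tag = sign(article, key)
      assert verify(article, tag, key)                     # untouched content passes
      assert not verify(article + b" [edited]", tag, key)  # any tampering fails
      ```

      The point of automating this on both ends is exactly the commenter's: the two minutes of manual effort disappears, because signing happens at publish time and verification happens silently in the reader's client.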