• @[email protected]
    1 year ago

    I don’t have examples on hand, but I have listened to samples from various AI-generated voice clones (one paper had samples trained on roughly 10 s, 30 s, 1 min, and 5 min of audio), and they sounded progressively better with more training audio. The 10-second one basically sounded like a voice call whose bitrate dropped out mid-word, and the voice sounded pretty close as long as you stuck to words with similar phonetics. This is just my experience, though: it might sound pretty bad to you, while to me it sounded fairly reasonable, as if recorded under bad audio conditions.

    https://github.com/CorentinJ/Real-Time-Voice-Cloning

    This is the main one I’ve seen examples of. You’ll have to find the samples yourself; I believe they were in the actual paper?
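
    If you want to try it yourself rather than hunt down the samples, the repo’s demo_cli.py shows the basic flow: embed a short reference clip with the speaker encoder, synthesize a mel spectrogram from the text, then vocode it into audio. Below is a rough sketch of that flow from memory; the module paths and checkpoint locations are assumptions, so check the repo’s readme.

    ```python
    # Rough sketch of the Real-Time-Voice-Cloning flow (encoder -> synthesizer -> vocoder),
    # loosely based on the repo's demo_cli.py; module and checkpoint paths are assumptions.
    from pathlib import Path
    import numpy as np
    import soundfile as sf

    from encoder import inference as encoder
    from synthesizer.inference import Synthesizer
    from vocoder import inference as vocoder

    # Load the three pretrained models (checkpoint paths assumed; see the repo's readme)
    encoder.load_model(Path("encoder/saved_models/pretrained.pt"))
    synthesizer = Synthesizer(Path("synthesizer/saved_models/pretrained/pretrained.pt"))
    vocoder.load_model(Path("vocoder/saved_models/pretrained/pretrained.pt"))

    # Embed a short reference clip of the target voice (the ~10 s to 5 min sample)
    wav = encoder.preprocess_wav(Path("reference_voice.wav"))
    embed = encoder.embed_utterance(wav)

    # Synthesize a mel spectrogram for the text, conditioned on the speaker embedding
    specs = synthesizer.synthesize_spectrograms(["Hello, this is a cloned voice."], [embed])

    # Vocode the spectrogram into a waveform and save it
    generated = vocoder.infer_waveform(specs[0])
    sf.write("cloned.wav", generated.astype(np.float32), synthesizer.sample_rate)
    ```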

    • Arthur Besse
      1 year ago

      That code was state of the art (for free software) when the author first published it with his master’s thesis four years ago, but it hasn’t improved much since then and I wouldn’t recommend it today. See the Heads Up section of the readme. Coqui (a free-software Mozilla spinoff) is better, but it is sadly still nowhere near as convincing as the proprietary stuff.
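
      For what it’s worth, Coqui’s TTS package exposes zero-shot cloning through its Python API. The sketch below is from memory and uses the YourTTS model, so treat the model string and arguments as assumptions and check `tts --list_models` for what’s actually available.

      ```python
      # Minimal sketch of voice cloning with Coqui TTS (pip install TTS).
      # Model name and arguments are assumptions from memory; verify with `tts --list_models`.
      from TTS.api import TTS

      # YourTTS supports zero-shot cloning from a short reference clip
      tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

      tts.tts_to_file(
          text="Hello, this is a cloned voice.",
          speaker_wav="reference_voice.wav",  # a few seconds of the target speaker
          language="en",
          file_path="cloned.wav",
      )
      ```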

      • @[email protected]
        1 year ago

        Wait, it’s been 4 years? Time really flies. Yeah, with most AI things I assumed those with more time and resources would build better models. Open-source AI is at a big disadvantage when it comes to dataset size and compute power.