You can train AI with just a single voice clip. You can do this on your desktop. Microsoft doesn’t need to sell shit; you put that clip on TikTok yourself.
You don’t even need to upload anything. They can call you, have a short convo and then just say “oh sorry wrong number” or something. You’d never know.
Yet another reason to screen your calls. I never pick up unless I know the number.
Yup. You need like 5 to 15 seconds of talking, that’s it. I’ve done this myself to confirm it, and it actually works quite well.
Well, they said they don’t share their voice anywhere; if that’s true it would be concerning. I for one just don’t use any centralized unencrypted services that could scrape my voice, but I would assume most people think that if they don’t publish anything, they are safe…
You don’t talk to anyone on the phone through a PBX? Never call your bank? Your doctor? Your credit card company? Your insurance company? Even on private systems all of those calls are recorded for legal reasons. And all of them will eventually be compromised.
I make regular phone calls maybe twice a year; everything can be done by email or web forms in Germany. But generally the people who have access to all the phone lines are the feds of whichever country you are in. And they, unlike big tech, aren’t super interested in selling that data.
Really? How?
The ‘old’ way of faking someone’s voice, like you saw in 90s spy movies, was to get enough sample data to capture every possible speech sound (phoneme) someone could make, so that those sounds could be spliced together to form any word.
With AI training you only need enough data to know what someone sounds like ‘in general’ to extrapolate a reasonable model.
One possible source of voice data is spam calls.
You get a call and say “Hello?” Then someone launches into trying to sell you insurance or some rubbish; you say “Sorry, I’m not interested, take me off your list please. Okay, bye” and hang up.
And that is already enough data to replicate your voice.
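To give a sense of how low the bar is, here is a minimal sketch assuming the open-source Coqui TTS library with its XTTS v2 model; the file names (hello.wav, cloned.wav) and the example text are purely hypothetical:

```python
# Minimal sketch: few-shot voice cloning with the open-source Coqui TTS
# library and its XTTS v2 model (pip install TTS). File names are
# hypothetical; hello.wav is a few seconds of the target speaker.
from TTS.api import TTS

# Load the multilingual XTTS v2 model (runs on CPU; a GPU is just faster).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize arbitrary new text in the voice taken from the reference clip.
tts.tts_to_file(
    text="Hello, this is just a test sentence in a cloned voice.",
    speaker_wav="hello.wav",   # the few seconds captured from a call
    language="en",
    file_path="cloned.wav",
)
```

The point isn’t this particular library; it’s that cloning a voice from a few seconds of audio is an off-the-shelf, documented feature now, not some lab technique.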
When scammers make the call using your fake voice, they usually use a crappy quality line, or background noise, or other techniques to cover up any imperfections in the voice replica. And of course they make it really emotional, urgent and high-stakes to override your family member’s logical thinking.
Educating your family to be prepared for this stuff is really important.
Clutch explanation. I just sent a screenshot of your comment to my folks.
Soo… use a funny voice when answering the phone? Poison the data
Keep a helium balloon on you at all times, just in case.
Yeah I’m gonna go ahead and not give that knowledge out.
It’s okay to say you don’t know
Oh gee, someone on the Internet thinks I’ll say it if they tell me they think I’m bluffing. My ego is so hurt! I’d better spill the beans on these unregulated technologies!
Do you think you’re protecting some too-powerful technology that should stay secret? Am I seriously the only person in this thread who knows how to use a search engine?
If they’re too stupid to figure it out, they’re too stupid to consider its implications and consequences when using it. I’m not going to give a weapon to a toddler for the same reason.
I wouldn’t describe myself as “too stupid to figure it out,” more like “interested in hearing more about your contribution to the conversation.”
Bit rude to call me stupid for that, IMO.
Again, not interested in teaching people how to fake shit; I’m already sick of fake shit and it’s only just started.
Okay well thanks for letting me know lol