After being scammed into thinking her daughter was kidnapped, an Arizona woman testified in the US Senate about the dangerous side of artificial intelligence technology when it is in the hands of criminals.
Jennifer DeStefano told the Senate judiciary committee about the fear she felt when she received an ominous phone call on a Friday last April.
Thinking the unknown number was a doctor’s office, she answered the phone just before 5pm on the final ring. On the other end of the line was her 15-year-old daughter – or at least what sounded exactly like her daughter’s voice.
“On the other end was our daughter Briana sobbing and crying saying ‘Mom’.”
Briana was on a ski trip when the incident took place, so DeStefano assumed she had injured herself and was calling to let her know.
DeStefano heard the voice of her daughter and recreated the interaction for her audience: “‘Mom, I messed up’ with more crying and sobbing. Not thinking twice, I asked her again, ‘OK, what happened?’”
She continued: “Suddenly a man’s voice barked at her to ‘lay down and put your head back’.”
Panic immediately set in and DeStefano said she then demanded to know what was happening.
“Nothing could have prepared me for her response,” DeStefano said.
DeStefano said she heard her daughter say: “‘Mom these bad men have me. Help me! Help me!’ She begged and pleaded as the phone was taken from her.”
“Listen here, I have your daughter. You tell anyone, you call the cops, I am going to pump her stomach so full of drugs,” a man on the line then said to DeStefano.
The man then told DeStefano he “would have his way” with her daughter and drop her off in Mexico, and that she’d never see her again.
At the time of the phone call, DeStefano was at her other daughter Aubrey’s dance rehearsal. She put the phone on mute and screamed for help, which captured the attention of nearby parents who called 911 for her.
DeStefano negotiated with the fake kidnappers until police arrived. At first, they set the ransom at $1m and then lowered it to $50,000 when DeStefano told them such a high price was impossible.
She asked for a routing number and wiring instructions but the man refused that method because it could be “traced” and demanded cash instead.
DeStefano said she was told that she would be picked up in a white van with a bag over her head so that she wouldn’t know where she was going.
She said he told her: “If I didn’t have all the money, then we were both going to be dead.”
But another parent with her informed her that police were aware of AI scams like these. DeStefano then made contact with her actual daughter and husband, who confirmed repeatedly that they were fine.
“At that point, I hung up and collapsed to the floor in tears of relief,” DeStefano said.
When DeStefano tried to file a police report after the ordeal, she was dismissed and told this was a “prank call”.
A survey by McAfee, a computer security software company, found that 70% of people said they weren’t confident they could tell the difference between a cloned voice and the real thing. McAfee also said it takes only three seconds of audio to replicate a person’s voice.
DeStefano urged lawmakers to act in order to prevent scams like these from hurting other people.
She said: “If left uncontrolled, unregulated, and we are left unprotected without consequence, it will rewrite our understanding and perception of what is and what is not truth. It will erode our sense of ‘familiar’ as it corrodes our confidence in what is real and what is not.”
Horrible.
But regulation will not solve this, because criminals obviously don’t give a fuck about the law. So they would end up being the only ones using AI.
That’s the same story as outlawing E2EE chat for the sake of making it possible for law enforcement to read them again. That also doesn’t work against real criminals who simply ignore the law and use E2EE nonetheless.
Regulation usually applies to companies, not individuals. I think it would be good if companies’ products (eg OpenAI) had some sort of watermark on them so we knew it was a DeepFake.
Of course, we shouldn’t restrict the availability of AI software on GitHub, however.
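To make the watermark idea concrete, here is a minimal sketch of one possible provenance scheme, assuming a provider attaches a keyed MAC to each generated clip so anyone holding the verification key can confirm it came from that provider. The key and function names here are made up for illustration; real proposals (e.g. C2PA-style content credentials, or watermarks embedded in the waveform itself) are far more sophisticated.

```python
import hashlib
import hmac

# Hypothetical provider key, invented for this sketch. In practice this would
# be managed by the AI company, with a public verification scheme.
PROVIDER_KEY = b"example-secret-key"

def tag_clip(audio_bytes: bytes) -> bytes:
    """Return a MAC tag the provider would ship alongside a generated clip."""
    return hmac.new(PROVIDER_KEY, audio_bytes, hashlib.sha256).digest()

def is_ai_generated(audio_bytes: bytes, tag: bytes) -> bool:
    """Verify the tag in constant time; True means this provider made the clip."""
    return hmac.compare_digest(tag_clip(audio_bytes), tag)
```

The obvious limitation: a scammer can simply strip the tag, and the absence of a tag proves nothing. That is why inaudible watermarks baked into the audio itself are the harder, more useful research goal.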
The cat is out of the bag I’m afraid. We’ll have 2 years of massive damage done by this in western countries and then people will have developed an unhealthy amount of distrust.
But maybe this will also lead to increased security when it comes to personal devices and accounts.
Still, absolutely insane that this is fucking reality now…
We can punish its use for these purposes though. Driving cars is legal, but using them for street racing is not.
For that matter, there could be a law that getting caught using E2EE for criminal activity also carries extra charges.
In both cases the police might have to do some actual policing though.
I guess it’s time for everybody (not just folks with OPSEC concerns) to have duress and “not in duress” codes.
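As a toy illustration of the duress-code idea, here is a sketch assuming a family has pre-agreed on two phrases: one that confirms everything is fine, and one that sounds innocuous but secretly signals trouble. The phrases and function name are invented for this example.

```python
import hmac

# Hypothetical pre-agreed phrases, made up for this sketch.
SAFE_PHRASE = "purple elephants"      # confirms the caller is genuinely fine
DURESS_PHRASE = "grandma's lasagna"   # sounds harmless, secretly means trouble

def check_phrase(spoken: str) -> str:
    """Classify a challenge response. Comparison is constant-time via
    hmac.compare_digest (overkill on a phone call, but a good habit)."""
    spoken_b = spoken.strip().lower().encode()
    if hmac.compare_digest(spoken_b, SAFE_PHRASE.encode()):
        return "safe"
    if hmac.compare_digest(spoken_b, DURESS_PHRASE.encode()):
        return "duress"
    return "unverified"  # wrong phrase: treat the caller's identity as unconfirmed
```

The point isn’t the code, of course, it’s the protocol: a cloned voice can sob convincingly, but it can’t know a secret the real person never said out loud near a microphone.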