• @Smoogs
    13 months ago

    Thanks for the paywall

    • @[email protected]
      43 months ago

      I got you.

      One morning in early October, about 18 years after his daughter Jennifer was murdered, Drew Crecente received a Google alert flagging what appeared to be a new profile of her online.

      The profile had Jennifer’s full name and a yearbook photo of her. A short biography falsely described Jennifer, whose ex-boyfriend killed her in 2006 during her senior year of high school, as a “video game journalist and expert in technology, pop culture and journalism.”

      Jennifer had seemingly been re-created as a “knowledgeable and friendly AI character,” the website said. A large button invited users to chat with her.

      “My pulse was racing,” Crecente recalled to The Washington Post. “I was just looking for a big flashing red stop button that I could slap and just make this stop.”

      Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

      Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information.

      “It takes quite a bit for me to be shocked, because I really have been through quite a bit,” Crecente said. “But this was a new low.”

      Kathryn Kelly, a spokesperson for Character, said the company removes chatbots that violate its terms of service and is “constantly evolving and refining our safety practices to help prioritize our community’s safety.”

      “When notified about Jennifer’s Character, we reviewed the content and the account and took action based on our policies,” Kelly said in a statement. The company’s terms of service prevent users from impersonating any person or entity.

      AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners. The technology has also drawn controversy. A Belgian man died by suicide in 2023 after being encouraged to do so in conversation with a chatbot.

      Character, which inked a $2.5 billion deal this year to license its AI models to Google, is among the biggest players in the space. The company offers several company-made chatbots but also allows users to create and share their own AI chatbots by uploading photos, voice recordings and short written prompts. Its library of user-generated companions includes a gruff sergeant acting as a personal motivator, a librarian who offers book recommendations and imitations of celebrities and public figures such as rapper Nicki Minaj and tech entrepreneur Elon Musk.

      It is the last place Crecente expected to see his daughter, more than a decade after her killing shocked the city of Austin and upended Crecente’s life.

      Jennifer Crecente, then 18, went missing in February 2006 and was found shot to death in the woods near her home days later. Investigators determined that her ex-boyfriend, also 18, lured Jennifer into the woods and killed her with a shotgun, according to Crecente and reporting from the Austin American-Statesman. He was convicted of her murder.

      The killing consumed Crecente and Elizabeth Crecente, Jennifer’s mother. The divorced parents started separate organizations in their daughter’s name dedicated to preventing teen dating violence. They also lobbied against parole for Jennifer’s killer, who was sentenced to 35 years in prison.

      Drew Crecente, who now lives in Atlanta, preserved Jennifer’s bedroom and re-created it as soon as he moved, he said.

      “I don’t really have the vocabulary to describe it,” Crecente said of his grief.

      Because of his nonprofit work, Crecente has for years kept a Google alert that tracks mentions of his daughter’s name online, he said. Occasionally, her name surfaces on a spam website or in a news report repeating old information from her case. Then, on Oct. 2, the alert led him to his daughter’s name and photo on Character.

      Crecente couldn’t comprehend it at first. The more he looked, the more uneasy he felt. In addition to using Jennifer’s name and photo, the chatbot’s page described her in lively language, as if she were alive, as a tech journalist who “geek[s] out on video games” and is “always up-to-date on the latest entertainment news.”

      Crecente said the description did not appear to be based on Jennifer’s personality or any reported details about her interests and may have been an error from an AI-generated biography: Crecente’s brother, Brian Crecente, is a former reporter who founded the video game news site Kotaku.

      The factual inaccuracies were beside the point, Crecente added. He said the idea of Character hosting and potentially making money from a chatbot using his murdered daughter’s name was distressing enough.

      “You can’t go much further in terms of really just terrible things,” he said.

      Crecente said he did not start a conversation with the chatbot bearing his daughter’s name and did not investigate the user who created the bot, which was listed under a username he did not recognize. He said he immediately emailed Character to have it removed. Brian Crecente also wrote about his brother’s discovery on the platform X.

      Character wrote on social media on Oct. 2, in response to Brian Crecente’s post, that it was deleting the character. Kelly, the Character spokesperson, said in her statement that the company’s terms of service prohibit users from impersonating a person or entity and that the company proactively detects violations and moderates its service using blocklists.

      Asked about the other chatbots on Character’s site that impersonate public figures, Kelly said that “reports of impersonation are investigated by our Trust & Safety team, and the Character is removed if it is found to violate our Terms of Service.”

      Jen Caltrider, a privacy researcher at the nonprofit Mozilla Foundation, criticized Character’s approach to moderation as too passive for content that plainly violated its own terms of service in Crecente’s case.

      “If they’re going to say, ‘We don’t allow this on our platform,’ and then they allow it on their platform until it’s brought to their attention by somebody who’s been hurt by that, that’s not right,” Caltrider said. “All the while, they’re making millions of dollars.”

      Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said that while laws governing online content at large could apply to AI companies, the companies have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have a child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their own deaths, to the outrage of the children’s families, The Post reported last year.

      “We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”

      The ordeal was disturbing enough to push Crecente — who successfully lobbied for Texas laws on teen dating violence after Jennifer’s murder — to ponder taking up a new cause. He is considering legal options and advocating more actively for measures to prevent AI companies from harming or re-traumatizing other families of crime victims, he said.

      “I’m troubled enough by this that I’m probably going to invest some time into figuring out what it might take to change this,” Crecente said.