The piece from Jen Caltrider, Misha Rykov, and Zoë MacDonald, published on Valentine’s Day 2024, dives into the murky waters of romantic AI chatbots. These apps, booming in popularity, offer users the chance to interact with “empathetic” AI companions and are marketed as beneficial for mental health and companionship. The research team’s deep dive into the services’ privacy policies and terms, however, reveals a starkly different picture: the chatbots are privacy nightmares, designed to harvest as much user data as possible, including sensitive personal and health information. Every one of the 11 chatbots reviewed earned a *Privacy Not Included warning for its poor handling of user privacy.

The investigation highlights several key concerns: the opaque nature of how these AI models operate, the lack of accountability for harmful advice or actions the chatbots encourage, and the potential for misuse of intimate user data. Furthermore, the majority of these apps fail to meet basic security standards, potentially exposing users to data breaches. They often share or sell personal data and do not let users fully delete it.

The authors conclude with a call for higher privacy standards and more ethical AI development, urging users to practice caution and good cyber hygiene if they choose to interact with these chatbots. Despite the allure of AI companionship, the report underscores the significant privacy and ethical issues at play, suggesting that the cost of engaging with these technologies may be too high.

Summarized by ChatGPT