Transcript
To distill my thoughts into one screenshot, I think the best analogy here is cars. I hate car-centric infrastructure. It’s bad for the planet, bad for communities, bad for people: environmental damage, pedestrian deaths, infrastructure that destroys communities, oil dependence, suburban sprawl.
AND there are obvious use cases where they need to exist. Ambulances, disability access, rural transportation, moving goods. AND we need clear safety regulations. AND we should design a world that relies on them as little as possible.
We don’t solve car problems by scolding individuals for driving to work. Nor do we solve them through arguments that are either factually incorrect OR harmful in and of themselves (“everyone should just bike”). We solve them through safety regulations, emissions standards, public transit investment, walkable design. The same applies here: the solution to AI harms isn’t individual guilt. It’s structural: regulation, safety requirements, platform accountability, worker protections.
I can be annoyed that AI slop is everywhere, that AI culture is dangerous, and that serious work needs to be done to curb it and build systems that don’t rely on it, AND think the nearly 1 billion people using ChatGPT weekly to help them code or write an email aren’t just, like, stupid and evil, AND think that some arguments against AI do more harm than good. Those aren’t contradictory positions.

Doctors using AI to detect tumours is obviously good. Someone with a learning disability getting a concise explanation from something that will be patient with them is obviously good. Or translation! We need human translators. But when I got into a cab in Turkey with a driver who spoke no English, he used Gemini to translate and we had a lovely conversation. You can’t have a bilingual human in every cab. Google Translate has existed for years, but LLMs are more natural and better with context and idiom. Deepfakes are obviously bad. Hell, I don’t think AI should replace adult actors, but I kind of do think AI should replace child actors! That is an inherently unethical job!

Much like with cars, I think the harm outweighs the benefits, and my primary desire is for those harms to be addressed in a way that doesn’t cause *more* harm.
Bans on facial recognition in policing. Required safety features for AI companions. Algorithmic impact assessments for public benefits systems. Product liability that holds companies accountable for harms. Focusing on the labour issues, not copyright: the threat to artists and writers isn’t that their “style” was stolen, it’s that their labour is being devalued and replaced without any safety net. Stronger IP benefits Disney. Worker power benefits workers. I’m also just pretty fond of UBI. Short of a full-on revolution, everyone’s needs being met would certainly help.
We need alternatives too. Just like we need public transit before we can reduce car dependence, we need social infrastructure (mental health support, community spaces, worker protections) so people aren’t driven to AI companions out of desperation. I mean, a major reason suicidal people rely on (very dangerous!!!) AI “therapists” is that a real human therapist can have them forcibly institutionalized! That’s a root cause that needs to be addressed.
I hope that makes sense!
i think it’s a take on AI that’s much more productive than the usual “this tech and the people who use it are inherently evil”
the rest of their thread is worth a read too, imo: https://bsky.app/profile/sarahz.bsky.social/post/3mbrq3c6rqc2n


I would rather see adults replace child actors than AI.