The employee, Mrinank Sharma, had led the Claude chatbot maker’s Safeguards Research Team since it was formed early last year and has been at the company since 2023.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma said, claiming that employees “constantly face pressures to set aside what matters most.”
He also issued a cryptic warning about the global state of affairs.
“I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons,” he wrote, “but from a whole series of interconnected crises unfolding in this very moment.”

It sounds to me like he’s annoyed at how AI and botfarms have been used to remotely influence politics in other countries.
Looking at some other parts of the article:
I think I’m aligned with the prevailing view on lemmy when I say that, no, humans will not be made irrelevant by AI. In the case of the Telegraph, it’s more likely that their boss will try to replace every human with AI and then come crawling back when the organisation collapses after 1-4 weeks.
I somehow doubt any company has ever permitted critical research into its own products or services. If you have the ability to say ‘no, we’re not publishing that. YOU can publish it, but you’ll have to quit your job and do it without the company’s name attached,’ then you’re going to do that rather than slander your own product.