The hottest startup in Silicon Valley is in flames. So is the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed.
Ouchie, my hand burns, that take was so hot. So according to this guy, the OpenAI board was taking virtuous action to save humanity from the doom of commercialized AI that Altman was bringing. He has zero evidence for that claim, but true to form, he won't let that stop him from a good narrative. Our hero board is being thwarted by the evil and greedy Microsoft, Silicon Valley investors, and employees who just want to cash out their stock. The author broke out his Big Book of Overused Clichés to end the whole column with a banger: "money talks." Whoa, mic drop right there.
Fucking lazy take is lazy. First of all, the current interim CEO that the board just hired (after appointing and then removing another interim CEO after removing Altman) has said publicly that the board's reasoning had nothing to do with AI safety. So this whole column is built on a trash premise. Even assuming the board was concerned about AI safety with Altman at the helm, there are a lot of steps they could have taken short of firing the CEO: overruling his plans, reprimanding him, publicly questioning his leadership, etc. If their true mission is to develop responsible AI, destroying OpenAI does not further that mission.
The AI angle of this story is distorting everything, forcing lazy writers like this guy to take sides and make up facts depending on whether they're pro- or anti-AI. Fundamentally, this is a story about a boss employees apparently liked working for, and those employees saying fuck you to the board for its terrible knee-jerk management decisions. This is a story about the power of human workers revolting against some rich assholes who think they know what is best for humanity (assuming their motives are what the author describes without evidence). This is a story about self-important fuckheads who are far too incompetent to be on this board, let alone serve as the gatekeepers of human progress this author has apparently ordained them to be.
Are there concerns about AI alignment and safety? Absolutely. Should we be thinking about how capitalism is likely to fuck up this incredible scientific advancement? Darn tootin'. But this isn't really that story, at least not based on, you know, publicly available evidence. But hey, a hack's gonna hack, what can ya do.
You can choose not to believe that Bloomberg vets its sources, but they're not a tabloid or a blog. When they print:
Altman clashed with members of his board, especially Ilya Sutskever, an OpenAI co-founder and the company’s chief scientist, over how quickly to develop what’s known as generative AI, how to commercialize products and the steps needed to lessen their potential harms to the public, according to a person with direct knowledge of the matter. This person asked not to be identified discussing private information.
They’re some creditability to it.
But whatever, you want to believe there’s no actual reason for this and the board are all wack jobs. Fine.
You don’t think it’s still worth pointing out the now stark, obvious evidence that there was never any true ethical safeguards here?
The specifics of this story are less relevant than the overall takeaway: AI is a dangerous technology for many reasons, and it is in the hands of extremely shitty people, with no workable safeguards or oversight. And that's a problem.
So yeah. We're gonna talk about that. A lot. Call it lazy if you like; I call it cutting through all the marketing bullshit flooding social media all the damn time to remind readers that the technology is not the problem, it's the people behind it, and they will never regulate themselves.
They absolutely "clashed" about the pace of development. They probably also "clashed" about whether employees should get free parking and the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites a single anonymous source, so who knows about that even. Ilya told employees that the ouster was because Sam assigned two employees the same project and because he gave different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.
The one thing we know is that the ouster happened without notice to Sam, without weeks or months of rumors about Sam being on the rocks with the board, and without any notice to OpenAI's biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address it. A Friday afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn't the smartest move if the concern was AI safety. This board shouldn't be praised as some group of humanity's saviors.
AI safety is super important. I agree, and I think lots of people should be writing and thinking about it. Lots of people are, and they're doing it in an honest way, and I'm reading a lot of it. This column is just making up a narrative to shoehorn its author's opinions on AI safety into the news cycle, trying to make a bunch of EA weirdos into martyrs in the process. It's dumb and it's lazy.
I believe we are all overthinking something very obvious.
Anyone with even a little technical knowledge in this area knows what an absolute bullshit hype train ChatGPT is on right now.
There isn’t a professional on the planet in this field whose CTO hasn’t insisted that some irrelevant nonsense become ChatGPT’d by the end of the quarter.
Sam knows that too and that’s why he wants to make the money now, before everyone else catches on.
The standard rule applies: follow the money.