I mean, this whole “article” is an opinion piece. Some of the opinions I even agreed with, but there’s a lot of “I think” in most of the paragraphs written here.
deleted by creator
They probably just dislike Vox.
No, I’m not a fan of Vox, but this particular article might as well have been a random blog post. I don’t necessarily disagree that people calling generative AI a bust are jumping the gun, and I even agree that putting these generative models to work in applications that solve real problems is going to take time. But I don’t agree that just because a bunch of users have fixated on the new shiny thing, it will have staying power or achieve a level of usefulness that translates into long-term profitability.
But mostly I take exception to an article positing itself as factual while starting every paragraph with “I think”.
deleted by creator
Because this article is positioned (by its title and the little blurb at the top about the author) as being about the safety of AI. Yet the author doesn’t talk about what safety regulations exist. They don’t talk about which safety mechanisms are being proposed or which ones have already been developed. There’s no conclusion here.
When you read a newspaper, there is generally a dedicated section for opinion pieces and editorials. Several groups are pushing for clear, consistent labeling of editorials, opinion pieces, and news pieces specifically because there’s so much misinformation going around.
But really. What is the point of posting an opinion piece to a community where we share tech news, when it’s not even valuable in its opinions? What is there to discuss here? That shareholders and consumers should view AI safety legislation or safety protocols differently because they affect those two parties differently? We already knew that.
Unless the title and blurb have changed, this is just wrong.
The title says nothing about safety: “How AI’s booms and busts are a distraction - However current companies do financially, the big AI safety challenges remain.”
Likewise the blurb says nothing about safety: “Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.”
What are you going on about? You’re mad because you couldn’t tell this was an Op/Ed?
(Sidenote: I didn’t notice that “effective altruism” thing before. Barf.)
The blurb suggests that this person specifically writes altruist articles (a suggestion that the writing is for someone’s benefit, which by proxy suggests that it’s telling the truth). Because opinions are subjective, that conflicts pretty harshly with the context of the piece. It gives the impression that this may be an opinion grounded in fact when it simply isn’t, because it cites no quantifiable data whatsoever. This is literally how misinformation is spread. It doesn’t have to be outright lies to be damaging.
The article talks about how new safety measures could be developed. It’s in the text. It just doesn’t conclude anything or talk about any specifics. That’s really my problem with it. What good is the opinion of the author? What are they basing this opinion on? There’s no substance to this writing at all.
deleted by creator
lol what?
There’s no way to write an article with that title and not have it be an opinion.
When someone starts every paragraph with “I think”, they’re not positing themselves as factual.
And? That’s not really helpful. It adds nothing to the conversation about generative AI. It is a list of opinions, and they’re based on seemingly nothing. You’re arguing with me about whether this is an opinion piece, when it obviously is one because it doesn’t validate itself in any way. There’s literally nothing to discuss here.
deleted by creator