I have to say that I feel the most-consumed content on the Internet is currently still mostly human-written, and my proof is precisely that the tendency is now clearly changing. I have stumbled upon a few AI-generated articles in the past few months, without looking for them specifically. You could tell because the text sometimes focuses on weird details, or even contains leftovers like
“as an AI, I do not have an opinion on the subject […]”
which is so funny when you see it.
So, yeah, it is definitely starting to happen, and in the next few years I wouldn’t be surprised if 30 to 50% of articles are just AI blurbs built for clicks.
How to avoid this? We can’t. The only way would be to shut down the Internet, forbid computers, and go back to a simpler life. And that, for many reasons, will not happen unless some world-scale destructive event occurs.
We actually can prevent it. We will go back to human-curated websites, and the links to those websites will also be maintained by humans.
This is how the early web used to work in the 90s and early 00s. We will see a resurgence of things like portals, directories (like DMOZ, the Mozilla Directory project), webrings, and, last but not least, actual journalism.
Unless Google manages to find a way to tell AI content from human content, they will become irrelevant overnight, because Search is about 90% of their revenue. This will kill other search engines too, but it will also remove Google’s stranglehold on browsers.
This also means we’ll finally get to use interesting technologies that Google currently suppresses by refusing to implement them, like micro-payments. Micro-payments are an alternative to ads that was proposed a long time ago but never got browser support.
Micro-payments are a way to pay very small sums (a cent or a fraction of a cent) when you visit a webpage, as painlessly as possible for both the visitor and the website. It adds up to the same earnings for websites, but it introduces human oversight (you decide whether the page you want to visit is worth that fraction of a cent) and, most importantly, gets rid of the ad plague.
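To make the “adds up to the same earnings” point concrete, here is a back-of-the-envelope sketch. The $2 RPM and the 0.2-cent per-visit payment are made-up illustrative numbers, not real market figures; amounts are kept in integer milli-cents (thousandths of a cent), the way payment systems avoid floating-point rounding:

```python
# Rough comparison of ad revenue vs. micro-payments.
# All amounts in milli-cents (thousandths of a cent); all figures assumed.

AD_RPM_MILLICENTS = 200_000    # assumed: $2.00 of ad revenue per 1000 pageviews
MICROPAYMENT_MILLICENTS = 200  # assumed: 0.2 cents paid per visit

def ad_revenue_millicents(pageviews: int) -> int:
    """Revenue from display ads at the assumed RPM."""
    return pageviews * AD_RPM_MILLICENTS // 1000

def micropayment_revenue_millicents(pageviews: int) -> int:
    """Revenue if every visitor paid the assumed fraction of a cent."""
    return pageviews * MICROPAYMENT_MILLICENTS

views = 100_000
print(ad_revenue_millicents(views))            # 20_000_000 milli-cents = $200
print(micropayment_revenue_millicents(views))  # 20_000_000 milli-cents = $200
```

With these assumed rates the two models pay the site the same; the difference is who decides the transaction happens, the ad network or the visitor.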
I find this very much like a dream that will… stay a dream. Who defines what counts as a human-curated website or true journalism, if I can’t even really know whether you are an AI bot?
Also, who says people will not like AI content? Because the world will still be full of the same people who buy Apple products and piss on “green bubble” people.
The problem is that this assumes there is a way to tell the difference between AI-generated and human-generated content.
Very soon that may be practically impossible. Then what? You make a humans-only board, and some person makes a chatbot that you can’t differentiate from a human.
We are screwed. There may be some ways with verification… but is that practical at scale? Would websites require users to install spyware that watches their webcam to confirm they’re not a bot?
And what if a bot could just generate a video feed to trick the website?
Only private and strict groups will remain AI-free… and if they get too big, they won’t work anymore.
It’s even worse than that:
Can anyone give an actual reason for downvoting this?
It was 47% last year.
https://securitytoday.com/articles/2023/05/17/report-47-percent-of-internet-traffic-is-from-bots.aspx?m=1
I am specifically saying “most consumed”, so I am talking about the content people actually consume. Bots haven’t yet had much impact on what people consume, aside from maybe Twitter? But those who use Twitter already lost a long time ago.
I refuse to believe all the MKBHD videos, local news, and lemmy posts I see are 47% bots.
That’s going by a different metric, though. It isn’t claiming that 47% of news articles or social media posts are written by bots; it’s talking about overall Internet traffic, including things like cyberattacks, not social media posts.
I don’t think there are any solid numbers on human-presenting bot activity on social media. Honestly wouldn’t be surprised though, especially in political forums.