Over the last decade or so we’ve seen lawsuits against social media companies for the content available on them. Now it appears there’s a new thing to blame.
We went from attempting to regulate the platform for its content to regulating the tool used to make that content. In the past, no one would have tried to blame Adobe Photoshop for edited pictures; they blamed social media. Now we’re seeing a rise in blaming the tool (AI).
This made me think of how, in the 1500s, pockets over a certain size were banned in France as ‘pocket guns’ became possible, versus the Brits banning pocket guns themselves.
Tool vs. platform: what can you regulate, and what should you regulate?
An added point I’d like to mention:
The only big player doing both is Meta, and what they’re trying to do is offload liability as well. Threads being defederated and Llama being open-source-ish are ways to shift the responsibility for content moderation away from themselves and onto the users.
This is a quick ramble; I’d love to hear your thoughts on this.
I’m an anarchist too, and personally I’m not comfortable with prepubescent kids being used for porn (nudes or sex). Post-puberty it might be a bit debatable with nudes, for me. The 18 age limit is quite arbitrary imo.
Beyond that, comparing sexual violence to labor exploitation, violent or not, is bizarre to me. And can a 10-year-old even have the ability to consent or understand their circumstances?
Just yesterday I saw some random video of a kid on some TV show getting to choose between a holiday for his family to the Bahamas or a big stuffed giraffe toy; the child chose the giraffe. The simple fact is that children are susceptible to exploitation, particularly in the capitalist world. I have an issue with the way Disney kids are treated, so how do you imagine any photoshoot like you suggest would be OK?
And that’s just it, many can do a better job of creating propaganda. But AI will streamline it, automate it, make it much easier. A few days ago my grandma was showing us a shittily edited video of a dancing cat on Facebook; my mom rolled her eyes, but my grandma believed it. Yesterday my mom believed some AI-made photo of some satanic ritual. Thing is, people are vulnerable to this, and it’s getting harder to distinguish. Scams that used to take an Indian call center can now be done without a human.
Well, I’m not even gonna talk about governments. Nothing good there.
Yes, for the foreseeable future the media landscape will remain the same, perhaps worse, as I mentioned in the Sports Illustrated case. Nothing worse than a government-funded press, wtf; the capitalist press is a close second.
Honestly, there are so many big conversations here that my mind went blank.
Yeah, don’t misunderstand me, I think the kid stuff is repulsive, but I don’t think you can really stop it, and legitimizing it makes it much more controllable. Places like Japan have normalized kid stuff and have fewer issues than places with more conservative nonsense.
I think of examples like Prohibition in the USA and the War on Drugs. These are abject failures because the market is fundamentally driven by demand, and you cannot regulate human nature. Legitimizing whatever vice makes the black market much less viable. This creates opportunities to curb behavior in more effective ways, provides real statistics, and informs effective policy. I look at it like abortion. I don’t give a damn what anyone believes; it is up to the woman and her doctor how they want to handle the situation. If a person has a kink I find repulsive, and it does not cause physical harm to anyone, I have every right to find it repulsive, but no right to project my beliefs onto that person.
Protecting the vulnerable is super important, but I did retail product photography for a couple of years with a studio I put together. When it comes to professional photography, it is all about composition and lighting. I can’t picture a situation where I could focus on anything else. I didn’t photograph people, so I don’t have any experience there. Regardless, by legitimizing, you now have the ability to license, report abuse, investigate, and revoke a person’s right to operate a child photo studio. It allows you to regulate away the worst behaviors. The content is going to exist either way. The better solution is to isolate the predators.
When it comes to misinformation, it will take several generations for the information age to normalize and a healthy skepticism to become more established. You will find con artists like this throughout all of human history and beyond.
You need to step back and really think about why Europe went from citizens of Rome to feudal serfs by ~1050 CE, and the implications. The key difference between a serf and a citizen is the right to own property and tools. The moment you give up these rights, you return to a life of slavery. The only difference between a serf and a slave is that a serf had a way, in theory, to bring a lord before a royal court if they were raped or someone was murdered. That was the only real difference. The moment you start restricting tools and ownership, you are on a regressive path to slavery. It is a much bigger issue than it may first seem.
The best way to address AI’s capabilities is to normalize it as much as possible. If a few people control it, it will be used to manipulate everyone. If everyone uses it, everyone knows how and where to remain skeptical.
Like, alright my dude. I’ve been disabled for 10 years on Feb 26th of this year. I’ve spent most of that time in near social isolation. I got into AI stuff in July. I have practically infinite free time and have played with this every day. I only play with offline stuff that runs on my own hardware. If you have any real questions or curiosities, just ask. I’ve played with this a ton.

This is nothing like Skynet (Terminator), The Matrix, or anything like that. The closest fiction is Asimov’s Robots series, but those books have never been depicted on film in an accurate way. A lot of the “danger” real researchers talk about isn’t at all how it is depicted in the media. The dangerous thing is people who can’t grasp how a computer can be wrong, and people who do not know how to spot when a model is in conflict or going off the rails. Even the official term, hallucinating, is not very good at conveying the real issue.

The output is always a matter of ‘the most probable next token’ (aka word or word fragment). The model has no way of knowing if the most probable next token is “correct”. There are no facts, there is no truth. It all boils down to how well the millions of conversations, images, or articles about similar subjects contained the correct answer to your inquiry. It is like having all of the internet filtered through someone who is very good at explaining it in a way that you can understand.

Roleplaying-like interactions are kinda like having a really good conversation with someone new, or a dream with someone you know well. It is not like a real girlfriend or friend. You can create a single encounter with a good bit of depth, but you can’t have a tomorrow or a next week or a deeper relationship where things build upon themselves in complex ways. This limit is often discussed as attention, and it relates to the total available context token size. The model itself is static; it can’t actually change after it is trained. All we do is feed it a long conversation and truncate the oldest parts of the dialog. This is the most important thing to understand: the models are just static math with no persistence. They can’t plan or build or develop in complex ways. It’s kinda like having access to all the knowledge of the internet directly in your brain, BUT you can only have this power for the next hour, and as soon as that hour is over, you won’t remember anything you did or retain any information.

The model itself can’t directly use this information to build upon. It is entirely possible to do further training with this information, but this is very hard and more of an art than some kind of practical thing. If you try to add this information back into the model haphazardly, you’ll ruin all outputs from the model. Training is altering the math in ways that are extremely likely to ruin everything. Not to mention, truly effective large models require dedicated data-center-class hardware to do training like this.

Again, feel free to ask me anything. These are interesting tools, but honestly, what you see in the media is nonsense. The only thing I really worry about is the military use of AI image recognition in drones. There is nothing that can stop this either, but killing has never been so cost-effective in all of human history. That is truly scary. The developments in Ukraine over the last few months are poised to change the entire world faster than any other invention in human history.
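To make the “static model, truncated conversation” point concrete, here’s a minimal Python sketch of how a typical chat loop works around the model, assuming the usual resend-the-transcript approach. The names here (`generate`, `count_tokens`) are hypothetical stand-ins, not any real library’s API, and the token counting is a crude approximation of a real tokenizer:

```python
# Minimal sketch of why a chat "forgets": the weights never change,
# so the only memory is the transcript we resend every single turn.

CONTEXT_LIMIT = 4096  # max tokens the model can attend to at once

def count_tokens(text: str) -> int:
    # Crude approximation; real tokenizers split text into subword tokens.
    return max(1, len(text) // 4)

def build_prompt(history: list[str], limit: int = CONTEXT_LIMIT) -> str:
    # Walk backwards from the newest message, keeping as much recent
    # dialog as fits; everything older is silently dropped.
    kept, used = [], 0
    for message in reversed(history):
        cost = count_tokens(message)
        if used + cost > limit:
            break  # the oldest messages fall off the edge of "memory"
        kept.append(message)
        used += cost
    return "\n".join(reversed(kept))

history: list[str] = []

def chat_turn(user_message: str, generate) -> str:
    # `generate` is a placeholder for whatever model call you use; it
    # just returns the most probable continuation of the prompt text.
    history.append(f"User: {user_message}")
    reply = generate(build_prompt(history))
    history.append(f"Assistant: {reply}")
    return reply
```

The point of the sketch is that nothing outside `history` persists between turns: once a message is truncated out of the window, the model has no trace that it ever existed, which is exactly why long roleplays can’t “build” on themselves indefinitely.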