Over the last decade or so we’ve seen lawsuits against social media companies over the content available on them. Now it appears there’s a new thing to blame.

We went from attempting to regulate the platform for its content to regulating the tool used to make that content. In the past, no one would have tried to blame Adobe Photoshop for edited pictures rather than the social media they spread on; now we’re seeing a rise in blaming the tool (AI).

This made me think of how, in the 1500s, pockets over a certain size were banned in France as ‘pocket guns’ became possible, versus the British banning the pocket guns themselves.

Tool vs platform: what can you regulate, and what should you regulate?

An added point I’d like to mention:

The only big player doing both is Meta, and what they’re trying to do is offload liability as well. Threads being defederated and Llama being open-source-ish is a way to shift the responsibility of content moderation away from themselves and onto the users.

This is a quick ramble; I’d love to hear your thoughts on it.

  • @[email protected] (OP)

    I am 100% anti-regulation myself. I’m also anti-anti-gun. But as with every tool throughout history, AI can be used “negatively”. The tool is very powerful, though; as a society we decided not to use nuclear bombs because of their power. I don’t think that’s practical with AI, but something will bend. We already see AI-generated nudes that target people, or the way disinformation is spread in cases like Sports Illustrated. So clearly it is and will harm people, but what fundamental issue do we resolve to stop this, and more importantly, can we even? I certainly don’t think so.

    Extreme case, but child porn has existed on the internet forever; do you think we can end that by looking at some fundamental issue? Is there even one to look for? I suppose you could treat those people, but you can’t make them cease to exist throughout the world. Content will continue to be generated, and once AI makes it easier, what do we do? Do we just sit here and accept it as is?

    I’m sorry if that last argument came off as loaded; right-wingers use “save the children” as an excuse for regulation all the time, and I’m sure this argument will be used with AI too. But it’s a genuine question about how and which fundamental issues in society we fix. Because there are waaaay too many, and a perfect world isn’t possible, so we can hide the things we decide are against our morality, but we can’t make them cease to exist.

    • @j4k3

      Honestly, I go a little anarchic in my leanings. I think that even with child porn, it amounts to thought policing and that is wrong. I don’t like the idea of kids used for porn, and I’m not talking about anything more than simple nudes; not sex. I’m certainly not advocating for it, but as a society we do far worse things IMO. I think the taboo is blown way out of proportion when stuff like the Epstein case shows that it is a strong human tendency.

      I look at the child porn issue like exploitation of labor below living wages; it is a terrible aspect of our culture, but many of the actual workers being exploited would be worse off without the opportunity. I think we have a primary obligation to remove the institutionalization of these exploiters and their political connections, but that is mostly a cultural issue. In the case of labor, the infrastructure and tooling should be bound to the location so that ultimately, in the long term, the laborers and their community gain ownership of something good for their wellbeing. If some kid gets on a path to a better life because of a photoshoot, who am I to prevent them? We should think of ways to regulate it so that only the kid benefits.

      It is like drugs: regulating the product is an asinine move and a complete waste of money. The demand is what drives the market, not the supply or the suppliers. A lot of the market demand for child porn is complicated. Some of it is just the taboo; some of it is because of the extremely conservative culture that bottles up the essential human need for sex. It is this pent-up frustration that drives many into rebellious counter-culture tendencies, whether they realize it or not. As usual, stupid people, i.e. conservatives, oversimplify and villainize a complex issue into a headline that draws out and amplifies idiots.

      I strongly believe that all of these oversimplified, emotionally charged subjects that pop up in the political sphere are a very intentional stream of misdirection. It is all about preventing reasonable dialog and legislation by fixating the majority of the populace on trivial nonsense instead of addressing real problems.

      I’m well aware of what models are capable of right now. I could spend a few weeks and come up with a much more effective misdirection campaign than any AI, but I have no motivation to do so because I am not disenfranchised to the point of malicious actions like these. All present models lack the attention and persistence to make them effective agents on a grand scale. There is far more manipulative misdirection happening in practice in the media right now, and in the recent past.

      We should be talking about how a government designed for 30 million people 250 years ago is inadequate for 330 million people in a digital age. The fundamental intent of the government was to create a democracy of citizens, and the present system has failed at maintaining both of those aspects, since a citizen has a fundamental right of autonomy and ownership. In an age when communication was limited, representative government was necessary, but that age is long past.

      AI tools make it slightly easier to make an image or write a small blurb that is more convincing. However, the amplification of idiots who are given the spotlight with such nonsense is the real issue. Media that does no research or fact-checking, or that is entirely funded as a misinformation source, such as Fox, is the real problem. In this age, news sources must be funded by the public, but in a free market. The exploitation-based internet must be stopped, monopolies must be dismantled, and the financial loopholes used by the criminal billionaires closed.

      Once you have a reliable and accountable free press (a fundamental tenet of democracy), and you fund and represent the people properly as every government must do, then some article or picture is not relevant. It is this charged political environment, where people have no effective representation and are just as disenfranchised as those exploiting them, that gives misinformation its power. Plus, it is the paranoid and corrupted in the government, or those influencing it, who are so sensitive, because this is the democratization of the kinds of misinformation and misdirection they have been creating for decades. I’m not at all worried about what AI is capable of. I’m worried about the consolidation of wealth, as should everyone be. This is the only issue that matters. Everyone has access to AI tools. They will normalize.

      • @[email protected] (OP)

        I’m an anarchist too, and personally I’m not comfortable with prepubescent kids being used for porn (nudes or sex). Post-puberty it might be a bit debatable with nudes, for me. The 18 age limit is quite arbitrary imo.

        Beyond that, comparing sexual violence to labor exploitation, violent or not, is bizarre to me. And can a 10-year-old even have the ability to consent or understand their circumstances?

        Just yesterday I saw some random video from a TV show of a kid getting to choose between a holiday for his family to the Bahamas or a big stuffed giraffe toy; the child chose the giraffe. The simple fact is that children are susceptible to exploitation, particularly in the capitalist world. I have an issue with the way Disney kids are treated, so how do you imagine any photoshoot like you suggest would be OK?

        And that’s just it: many people can do a better job of creating propaganda, but AI will streamline it, automate it, make it much easier. A few days ago my grandma was showing us a shittily edited video of a dancing cat on Facebook; my mom rolled her eyes, but my grandma believed it. Yesterday my mom believed some AI-made photo of a satanic ritual. The thing is, people are vulnerable to this, and it’s getting harder to distinguish. Scams that used to take an Indian call center can now be done without a human.

        Well, I’m not even gonna talk about governments. Nothing good there.

        Yes, for the foreseeable future the media landscape will remain the same, perhaps worse, as I mentioned with the Sports Illustrated case. Nothing worse than a government-funded press, wtf; the capitalist press is a close second.

        Honestly, there are so many big conversations here that my mind just went blank.

        • @j4k3

          Yeah, don’t misunderstand me, I think the kid stuff is repulsive, but I don’t think you can really stop it, and legitimizing it makes it much more controllable. Places like Japan have normalized kid stuff and have fewer issues than places with more conservative nonsense.

          I think of examples like Prohibition in the USA and the War on Drugs. These are abject failures because the market is fundamentally driven by demand, and you cannot regulate human nature. Legitimizing whatever vice makes the black market much less viable. This creates opportunities to curb behavior in more effective ways, gives real statistics, and informs effective policy. I look at it like abortion: I don’t give a damn what anyone believes, it is up to the woman and her doctor how they want to handle the situation. If a person has a kink I find repulsive, and it does not cause physical harm to anyone, I have every right to find it repulsive, but no right to project my beliefs onto that person.

          Protecting the vulnerable is super important, but, like, I did retail product photography for a couple of years with a studio I put together. When it comes to professional photography, it is all about composition and lighting; I can’t picture a situation where I could focus on anything else. I didn’t photograph people, so I don’t have any experience there. Regardless, by legitimizing, you now have the ability to license, report abuse, investigate, and revoke a person’s right to operate a child photo studio. It allows you to regulate away the worst behaviors. The content is going to exist either way. The better solution is to isolate the predators.

          When it comes to misinformation, it will take several generations for the information age to normalize and a healthy skepticism to become more established. You will find con artists like this throughout all of human history and beyond.

          You need to step back and really think about why Europe went from citizens of Rome to feudal serfs by ~1050 CE, and the implications. The key difference between a serf and a citizen is the right to own property and tools. The moment you give up those rights, you return to a life of slavery. The only difference between a serf and a slave is that a serf could, in theory, bring a lord before a royal court if they were raped or someone was murdered. That was the only real difference. The moment you start restricting tools and ownership, you are on a regressive path to slavery. It is a much bigger issue than it may first seem.

          The best way to address AI’s capabilities is to normalize it as much as possible. If a few people control it, it will be used to manipulate everyone. If everyone uses it, everyone knows how and where to remain skeptical.

          Like, alright my dude. I’ve been disabled for 10 years as of Feb 26th of this year, and I’ve spent most of that time in near social isolation. I got into AI stuff in July; I have practically infinite free time and have played with this every day, only with offline stuff that runs on my own hardware. If you have any real questions or curiosities, just ask. I’ve played with this a ton. This is nothing like Skynet (Terminator), The Matrix, or anything like that. The closest fiction is Asimov’s Robots series, but those books have never been depicted on film in an accurate way.

          A lot of the “danger” real researchers talk about isn’t at all how it is depicted in the media. The dangerous thing is people who can’t grasp how a computer can be wrong, and people who don’t know how to spot when a model is in conflict or going off the rails. Even the official term, hallucinating, is not very good at conveying the real issue. The output is always a matter of ‘the most probable next token’ (aka word or word fragment). The model has no way of knowing if the most probable next token is “correct”. There are no facts, there is no truth; it all boils down to how well the millions of conversations, images, or articles about similar subjects contained the correct answer to your inquiry. It is like having all of the internet filtered through someone who is very good at explaining it in a way that you can understand.

          With roleplaying-like interactions, it is kinda like having a really good conversation with someone new, or a dream with someone you know well. It is not like a real girlfriend or friend. You can create a single encounter with a good bit of depth, but you can’t have a tomorrow or a next week or a deeper relationship where things build upon themselves in complex ways. This is often called Attention, and it relates to the total available context token size. The model itself is static; it can’t actually change after it is trained. All we do is feed it a long conversation and truncate the oldest parts of the dialog. This is the most important thing to understand: the models are just static math with no persistence. They can’t plan or build or develop in complex ways. It’s kinda like having access to all the knowledge of the internet directly in your brain, BUT you can only have this power for the next hour, and as soon as that hour is over, you won’t remember anything you did or retain any information.

          The model itself can’t directly use this information to build upon. It is entirely possible to do further training with this information, but it is very hard and more of an art than some kind of practical thing. If you try to add this information back into the model haphazardly, you’ll ruin all outputs from the model. Training is altering the math in ways that are extremely likely to ruin everything. Not to mention, truly effective large models require dedicated data-center-class hardware for training like this.

          Again, feel free to ask me anything. These are interesting tools, but honestly, what you see in the media is nonsense. The only thing I really worry about is the military use of AI image recognition in drones. There is nothing that can stop this either, and killing has never been so cost-effective in all of human history. That is truly scary. The developments in Ukraine over the last few months are poised to change the entire world faster than any other invention in human history.
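          To make the context-window point concrete, here is a rough Python sketch of the loop a chat front end runs around a static model. The generate() and count_tokens() functions are made-up placeholders, not any particular library’s API; the point is just that the model only ever sees the window of text you hand it, and whatever gets truncated is gone.

          ```python
          # Toy illustration of "static model + truncated context" (not a real API).

          CONTEXT_LIMIT = 512  # assumed maximum tokens the model can attend to at once

          def count_tokens(text: str) -> int:
              # Crude stand-in for a real tokenizer: roughly one token per word.
              return len(text.split())

          def generate(prompt: str) -> str:
              # Placeholder for a real model call. A real model would emit the most
              # probable next tokens for this prompt, with no memory of earlier calls.
              return "(reply based only on the prompt it was just given)"

          history: list[str] = []

          def chat(user_msg: str) -> str:
              history.append("User: " + user_msg)
              # Drop the oldest turns until the whole prompt fits the context window.
              # Nothing dropped here is ever "remembered" by the model.
              while count_tokens("\n".join(history)) > CONTEXT_LIMIT:
                  history.pop(0)
              reply = generate("\n".join(history))
              history.append("Assistant: " + reply)
              return reply

          if __name__ == "__main__":
              print(chat("hi there"))
              print(chat("what did I say a thousand messages ago?"))
          ```

          Everything that looks like memory or a relationship lives in that history list; the weights themselves never change between calls.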