  • @AgentGrimstone · 24 points · 1 year ago

    I wish they didn’t incentivise deception and bad behavior.

    • @DandomRude (OP) · 3 points · 1 year ago

      Who should be responsible for compliance or for setting rules?

      • @AgentGrimstone · 7 points · 1 year ago

        The platform, of course, but I’m aware that would most likely be against their best interest. I don’t really have a solution; this is just wishful thinking.

        • @DandomRude (OP) · 3 points · 1 year ago

          That’s pretty much Reddit’s approach. On that platform, the community handles the moderation of all posts without any financial compensation - this is rather unusual as far as larger platforms are concerned. But this approach also presents major difficulties: Reddit has a large number of moderators, some of whom manage several very wide-ranging communities/subreddits. In the past, this has led to cases where moderators sold their direct “influence” to advertisers and other interest groups. The platform itself, in this case Reddit, has little to no control over this - after all, a moderator is not an employee of the company.

          • @[email protected] · 2 points · 1 year ago

            That is how they approached the problem; FB approached it differently 🤷.

            Of course, the crowd you want to cater to also matters. FB and Reddit have completely different crowds, so Reddit would have lost a substantial portion of its users if it had approached it like FB did.

  • @hightrix · 18 points · 1 year ago

    Chronological. Completely uncensored. Allow easy blocking of others, including blocking posts/comments from your personal feed using categories or keyword recognition.

    Done.
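
    A keyword/category filter like that is simple enough to sketch. Below is a minimal, hypothetical Python example of a purely chronological feed with user-side blocking by author and keyword; the `Post` fields and function names are illustrative assumptions, not any platform's actual API.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        text: str
        created_at: datetime

    def personal_feed(posts, blocked_users=frozenset(), blocked_keywords=frozenset()):
        """Chronological feed: no ranking, only user-chosen filters are applied."""
        def visible(post):
            if post.author in blocked_users:
                return False
            text = post.text.lower()
            return not any(keyword.lower() in text for keyword in blocked_keywords)

        # Newest first; nothing is hidden except what this user chose to block.
        return sorted((p for p in posts if visible(p)),
                      key=lambda p: p.created_at, reverse=True)
    ```

    Category-based blocking would work the same way, with each post carrying a set of category tags that the filter checks against a user-maintained blocklist.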

    • Pyro · 3 points · 1 year ago

      I initially rejected this idea with a reason like “You seem to forget how vile certain parts of the internet can be,” but the more I think about it, the more I agree, given a few conditions - namely, that children should not be allowed access.

      Forbidding children access to the internet would solve many problems, such as social media addiction (potentially leading to depression), the spreading of misinformation, and the general amount of child exploitation online. I don’t deny that such an action may introduce other issues that I have yet to consider, but I still feel that the main points are very compelling.

      I am also aware that such a system is not perfect and that people will undoubtedly circumvent it, but a much larger number of people will not (if it is made difficult to do so). Unfortunately, the only conceivable way to do such a thing is some kind of age-verification system, which I am against for various privacy-related reasons.

  • Rhynoplaz · 14 points · 1 year ago

    You feed me topics. I comment on them. Everyone thinks I’m hilarious. That’s all.

      • Rhynoplaz · 4 points · 1 year ago

        God damn I love this place!

    • @jacktherippah · 5 points · 1 year ago (edited)

      OP, you are absolutely hysterical! I’m laughing my ass off!

      • Rhynoplaz · 3 points · 1 year ago

        Aww, shucks, I’m just trying to do my part and spread some joy. You all are too kind! ☺️

  • @Identity3000 · 8 points · 1 year ago

    For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play TechDirt’s simulator game Trust & Safety Tycoon.

    While it’s hardly comprehensive, it’s a fun way of thinking about the balance between needing to remain profitable/solvent whilst also choosing what social values to promote.

    It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it takes a toll in other ways.

    Personally, I want to see more action on disinformation. In my mind, that is the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but it’s an area that needs way more investment and recognition than it currently gets.

    • @DandomRude (OP) · 2 points · 1 year ago

      How can this be funded? A workforce is needed for all matters that cannot be automated.

      • @Identity3000 · 3 points · 1 year ago

        Funding/resourcing is obviously challenging, but I think there are things that can support it:

        1. State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position doesn’t cost anything at all, whilst also playing a role in attracting a certain kind of user and giving them confidence to report things that are dodgy.

        2. Leverage AI. LLMs and other AI tools can be used to detect bots and deepfakes and to apply sentiment analysis to written posts. Obviously it’s not perfect and will require human oversight, but it can be an enormous help so staff can catch things faster that they might otherwise miss (see the sketch after this list).

        3. Punish offenders. Acknowledging the complexity of enforcing this consistently, there are still things you can do to remove the most egregious bad actors from the platform and send a signal to everyone else.

        4. Price it in. If you know that you need humans to enforce the rules, then build it into your advertising fees (or other revenue streams) and sell it as a feature (e.g. companies pay extra so they don’t have to worry about reputational damage when their product appears next to racist content, etc.). The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.
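
        To make point 2 a bit more concrete, here is a minimal, hypothetical sketch (using an off-the-shelf sentiment model from the `transformers` library) of how flagged posts could be pre-sorted so human moderators see the likeliest problems first. The threshold, post structure, and queue handling are illustrative assumptions, not a description of any platform's real tooling.

        ```python
        from transformers import pipeline

        # Off-the-shelf sentiment classifier; a real deployment would likely use
        # purpose-built toxicity/bot/deepfake models instead (assumption for illustration).
        classifier = pipeline("sentiment-analysis")

        def triage(reported_posts, threshold=0.95):
            """Split reported posts into a priority queue for humans and a low-risk pile."""
            needs_review, low_risk = [], []
            for post in reported_posts:
                # Each post is assumed to be a dict with at least a "text" field.
                result = classifier(post["text"])[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
                if result["label"] == "NEGATIVE" and result["score"] >= threshold:
                    needs_review.append(post)   # surfaced to a human moderator first
                else:
                    low_risk.append(post)       # still reviewable, just lower priority
            return needs_review, low_risk
        ```

        Strongly negative sentiment is of course not the same thing as rule-breaking content, which is why a sketch like this only prioritises posts for human review rather than removing anything automatically.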

        I don’t mean to suggest it’s easy or failsafe. But it’s what I would do.

    • Fake4000 · 5 points · 1 year ago

      I bloody hate Meta as a business, but I think instances shouldn’t defederate from them by default.

      It should really be a personal choice: each user should decide whether or not they want to block Threads, rather than having it mandated by the instance.

      • @[email protected] · 9 points · 1 year ago

        By federating with them, your instance is providing them with free content to profit off of. Every post you make is another post for their users to scroll through, another chance for them to inject ads even if you personally block Threads.

        • Fake4000 · 4 points · 1 year ago

          I agree with you. Fucking hate Meta. Still, I think it should be a personal choice for users. But then again, Lemmy is all about choices, and users can flock from one instance to another.

          • @[email protected] · 2 points · 1 year ago

            I think we might be mostly on the same page, but to clarify: I believe that an instance admin choosing to federate with Threads deprives their users of personal choice more than choosing not to federate does, since it forces users to opt out of having their content used by a for-profit company (by changing instances).

      • Ada · 2 points · 1 year ago

        Nah. They knowingly and deliberately house hate groups. They get actively defederated.

      • @Sterile_Technique · 4 points · 1 year ago

        The immediate concern is the difference in scale - we’re a drop compared to Meta’s ocean, and I don’t see how we can have any hope of moderating the tsunami of content that’ll be heading our way.

        The long-term concern is EEE (embrace, extend, extinguish). I have zero expectation that Meta would handle a union with the fediverse ethically, and that’s their ticket to killing it off before it has the chance to grow into any kind of real competition.

  • Call me Lenny/Leni · 3 points · 1 year ago

    The ultimate social media site, in my perspective, would probably have the simplicity and functionality of Side 7, the content execution methodology of TV Tropes, the expandability of Discord, the rule enforcement of ProBoards, the fanbase of YouTube, the adaptability of Hypothesis, and the funding of Pogo (classic Pogo, not modern Pogo, and no I don’t mean Pokémon Go).

  • @[email protected] · 3 points · 1 year ago

    I think social media should be 18+ only. In fact, I don’t think anyone under 18 should have phones that connect to the internet at large, only things like maps or whatnot to get around. I think this would solve a lot of fundamental phone addiction problems we’re seeing from our youth.

    I also think filters of any kind should be banned on social media. They’re fun, but not worth the damage they cause.

  • deadcatbounce · 1 point · 1 year ago (edited)

    Advertising revenue should at least pay a proportion of the cost of getting sausage lips.

    Secondly, interacting with social media should be conducted using rotary dial phones. That’ll fcuk every generation which is overly keen on using it.

  • Captain Janeway · -1 point · 1 year ago

    Remove voting. Remove likes. Remove any semblance of a point based system.

    • @DandomRude (OP) · 9 points · 1 year ago

      How would you determine which posts are displayed on the front page, if it’s supposed to be a platform that works similarly to Reddit or Lemmy?

  • @[email protected] · -9 points · 1 year ago (edited)

    Everyone should have to provide their real identity for all kinds of social media, which in turn would be directly tied to a social credit system and the justice system; this would discourage a ton of bad behaviors on the internet, including but not limited to cyberbullying, racism, trolling, and so on, and would instead encourage good behaviors through the gaining of social credits. Social media platforms should also be held responsible for any kind of bad content and have to compensate all victims of harassment. Really, just some days ago a wonderful young woman of just 22 died by suicide after being harassed so much online; she didn’t even have social media, but harassers were able to get pics of her and create a fake story that escalated a ton…

    • Sybil · 7 points · 1 year ago

      please no

      • @[email protected] · -5 points · 1 year ago

        I don’t care about “freedom” that’s actually used to harass and kill innocent people. Saving people’s lives, like the life of that young woman, should be the priority in all cases. All kinds of social media that don’t comply should be banned. I’m really serious about this take. Go have your freedom in your island country all alone instead.

        • @captainlezbian · 8 points · 1 year ago

          Anonymity swings both ways here. It saves lives of people posting for advice on how to leave abusive partners, of people in unsupportive environments forming community and kinship, and of people who need help but are too ashamed to attach it to their name.

          I’ve known people harassed to suicide, and I’ve known people whose online activities were exposed to abusers they couldn’t escape and who were then driven to suicide for it. I support anonymity, but I’ve dealt with its dark side, including years-long harassment campaigns.

        • Sybil · 5 points · 1 year ago

          the result would be a total police state. I understand you don’t care about freedom. I do.

          • @[email protected] · -2 points · 1 year ago

            That’s not freedom, that’s unregulated communication mediums being exploited to harass and kill. Would you really rather have innocent people die? If I were a politician, I would definitely be fighting for better regulation of social media to save lives.

            • Sybil · 3 points · 1 year ago

              harassment is bad. anonymity is good.

    • xigoi · 4 points · 1 year ago

      Have you watched Black Mirror and thought that it’s supposed to be a utopian series?

        • xigoi · 5 points · 1 year ago

          There was an episode about people giving social credit ratings to each other through an app, which affected their entire lives, showing how such a technology could go horribly wrong.