• @[email protected]
    link
    fedilink
    English
    -1023 hours ago

    It’s ok to fear that someone else could get rich through trickery.

    It’s also ok to have hope that people learn from past mistakes and try to build something good.

    AI can generate slop, but it can also understand, categorize, filter, moderate. It can also be slow to adapt to new attacks, or be analyzed and manipulated.

    I can’t offer much help to people who need to decide right now if it’s good or bad. Predicting the future is a messy thing. But I choose to be cautiously optimistic.

    • @Cocodapuf · 1 point · 4 hours ago

      Wow, I can’t believe this post has gotten so many downvotes. It’s such a reasonable statement.

      Probably the only thing in the whole post I could disagree with is the word “understand”.

      AI can generate slop, but it can also understand, categorize, filter, moderate. It can also be slow to adapt to new attacks, or be analyzed and manipulated.

    • Alphane Moon · 7 points · 22 hours ago

      I don’t think hope or fear, or someone getting rich or poor, is really relevant in this case.

      They first say:

      Rose and Ohanian have now joined forces to “revive” the platform with a “fresh vision to restore the spirit of discovery and genuine community that made the early web a fun and exciting place to be,”

      But then they go on to say:

      So why now? It’s a combination of reasons, according to Rose, who says that the existing social media landscape has become toxic, messy, and riddled with misinformation — and AI is well-placed to address that. Just the “out of the box stuff,” is “insane,” Rose observes, noting there are “Google endpoints already where I don’t even have to mess with a model at all, where I can get sub 200 millisecond response times on any comment under about 300 characters and rated across 20 plus different vectors of sentiment, so violence, toxicity, hate speech — you name it. Like, that just wasn’t possible five years ago.”

      More broadly, says Rose, “We’re at this other inflection point around AI and what it can do. And when you think about these big shifts, they require you to go and step back and revisit first principles and think about how you might change [a business] from the ground up, and that’s what Alexis and I and Justin [Mezzell],” who is a longtime collaborator of Rose and now Digg’s CEO, will be doing, he said.
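
      Rose doesn’t name the service, but those “Google endpoints” sound a lot like Google’s Perspective API (commentanalyzer.googleapis.com), which scores short comments across attributes such as toxicity, threat and insult. Purely as an illustration of the kind of call he seems to be describing, and assuming that is in fact the service he means, here is a minimal sketch in Python; the API key and sample comment are placeholders:

      ```python
      # pip install requests
      import requests

      # Perspective API endpoint; whether this is what Rose means is an assumption.
      URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
      API_KEY = "YOUR_API_KEY"  # placeholder

      def score_comment(text: str) -> dict:
          """Rate a short comment across several moderation attributes."""
          body = {
              "comment": {"text": text},
              "languages": ["en"],
              # A few of the available attributes ("vectors", in Rose's phrasing).
              "requestedAttributes": {
                  "TOXICITY": {},
                  "SEVERE_TOXICITY": {},
                  "THREAT": {},
                  "INSULT": {},
                  "IDENTITY_ATTACK": {},
              },
          }
          resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=5)
          resp.raise_for_status()
          scores = resp.json()["attributeScores"]
          # Each attribute returns a probability-like summary score in [0, 1].
          return {name: s["summaryScore"]["value"] for name, s in scores.items()}

      print(score_comment("You are all idiots and this site should burn."))
      ```

      The scores come back as probabilities, so whoever runs the platform still has to pick thresholds and decide what “moderation” actually means; the endpoint only classifies, it doesn’t govern.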

      It’s pretty clear that they are looking to build an AI-enhanced social network, so why bring up “the spirit of discovery and genuine community”? That is not their goal; their goal is to leverage AI, and without it they would never have undertaken this initiative. No AI, no social network.

      I will point out that I never mentioned anything about the utility of AI. It’s a tool; what comes out of it depends on how it is used.

      I also don’t see on what basis one should assume a bunch of vapid techbro ghouls would be interested in building “something good”. Their goals revolve around scalability, unit economics and exit plans. If that were not the case, surely they would have started a non-profit entity and/or implemented independent governance measures that include stakeholders beyond themselves and their investors.

      Am I being unreasonable here?

      • @[email protected]
        link
        fedilink
        English
        018 hours ago

        Is moderation difficult? What makes it difficult?

        What happens to the “spirit of discovery and genuine community” when moderation fails?

        • @Cocodapuf · 1 point · 4 hours ago

          Is moderation difficult? What makes it difficult?

          Oh my God, yes… There are so many things that it’s hard to even know how to begin answering…

          What happens to the “spirit of discovery and genuine community” when moderation fails?

          You get the Internet we have today. Moderation has broken down everywhere; it has been defeated and engineered around.

        • Alphane Moon · 2 points · 8 hours ago

          For techbro ghouls (remember, I mentioned scalability, exits, etc.), moderation is “difficult” because it is a cost centre and carries liability risks (they could, in theory, have to take responsibility for their actions, hence such a dismissive, callous attitude towards moderation).

          When moderation fails, you have situations such as FB contributing to mass killings in Myanmar.

          Meanwhile, oligarchs like Zuckerberg feel confident enough in their hold on the system to say things like:

          Earlier this week, Facebook founder Mark Zuckerberg told Vox that his company was well aware that critics say the social media platform has been used to spread misinformation and hate speech in Burma, explaining that this has “gotten a lot of focus inside the company.”

          But what does this have to do with my original take on Rose and Ohanian?