• @HKPiax
    338 hours ago

    They are not dumb, they just don’t give a shit. Trust me, if following those policies somehow made them or their friends money, they’d be fucking climate change pioneers.

  • @CosmoNova
    157 hours ago

    Virologist just vibing in the background.

        • @[email protected]
          45 hours ago

          It is so funny to me that you equate “AI Safety” with “fear mongering for a god AI”.

          1. They are hype men
          2. Then you highlight why AI Safety is important by linking a blog post about the dangers of poorly thought-out AI systems
          3. While calling it fear mongering for a god AI.

          What are they now?

          If you read AI Safety trolley problems and think they are warning you about an ai god, you misunderstood the purpose of the discussion.

          • @[email protected]
            15 hours ago

            Then you highlight why AI Safety is important by linking a blog post about the dangers of poorly thought-out AI systems

            Have you read the article? It clearly lays out the difference between AI safety and AI ethics, and argues why the former are quacks and the latter is ignored.

            If you read AI Safety trolley problems and think they are warning you about an ai god, you misunderstood the purpose of the discussion.

            Have you encountered what Sam Altman or Eliezer Yudkowsky claim about AI safety? It’s literally “AI might make humanity go extinct” shit.

            The fear mongering Sam Altman is doing is a sales tactic. That’s the hypeman part.

            • @[email protected]
              45 hours ago

              Sam Altman? You went for Sam Altman as an “AI safety researcher who is really a hype man”? Is there a worse example? The CEO of the most well-known AI company tries to undermine actual AI safety research and be perceived as “safety aware” for good PR. Shocking…

              Eliezer Yudkowsky seems to claim that misalignment is a continuous problem in continuously learning AI models, and that misalignment could turn into a huge issue without checks and balances and without continuously improving on them. And he makes the case that developers have to be aware of and counteract unintended harm that their AI can cause (you know, the thing that you called AI ethics).

              • @[email protected]
                15 hours ago

                The alignment problem is already the wrong narrative, as it implies agency where there is none. All that talk about the “alignment problem” draws focus away from AI ethics (not a term I made up).

                Read the article.

                • @[email protected]
                  15 hours ago

                  I didn’t say you made it up.

                  And the alignment problem doesn’t imply agency.

                  Complaining that it draws focus away from AI ethics is a fundamentally misguided view. That is like saying workers’ rights draw focus away from human rights. You can have both, you should have both. And there is some overlap.