• Ulrich
    34
    3 days ago

They’re all censoring answers. They get flak every day for not censoring enough of them.

  • @[email protected]
    39
    3 days ago

    Well, maybe in some respects, but other AIs are pretty damn censored as well. At least DeepSeek gave me a really impressive multi-page summary of the maths used to answer my question about how many ping pong balls can fit in the average adult vagina, whereas Gemini just kind of shrugged me off like I was some sort of weirdo for even asking.

    • @psmgx
      2
      2 days ago

      I thought this was a big deal because it’s open source – is it not possible to see what is causing these blocks and censored answers?

      • @[email protected]
        3
        2 days ago

        I’m still hazy on how open source it really is. Even if you ask it, it will tell you that v3 is, and I quote, “not open source”. There is a GitHub repository, so some code is available, but I get the sense that “open source” is being used as a bit of a teaser here and that for-profit licensing is likely where this is headed. Even if the code were fully available, I suspect it could take weeks to find the censorship bits.

    • @pirat
      4
      3 days ago

      You’re asking the really important questions!

  • @banshee
    25
    3 days ago

    Just to clarify - DeepSeek censors its hosted service. Self-hosted models aren’t affected.

    • @LorIps
      11
      3 days ago

      DeepSeek 2 is censored locally; I had a bit of fun asking it about China 1989. (Running locally using Ollama with Alpaca as the GUI.)

      • @banshee
        2
        2 days ago

        Interesting. I wonder if model distillation affected censoring in R1.

      • @[email protected]
        4
        3 days ago

        Oh, another person who’s actually running it locally! In your opinion, is R1-32B better than Claude Sonnet 3.5 or OpenAI o1? IMO it’s been quite bad, but I’ve mostly been using it for programming tasks and it really hasn’t been able to answer any of my prompts satisfactorily. If it’s working for you, I’d be interested in hearing some of the topics you’ve been discussing with it.

        • @LorIps
          3
          3 days ago

          R1-32B hasn’t been added to Ollama yet; the model I use is DeepSeek v2. But as they’re both licensed under MIT, I’d assume they behave similarly. I haven’t tried OpenAI o1 or Claude yet, as I’m only running models locally.

            • @LorIps
              2
              2 days ago

              Ah, I just found it. Alpaca is just being weird again. (I’m presently typing this while attempting to look over the head of my cat)

              • @LorIps
                2
                2 days ago

                [Screenshot: DeepSeek-R1 unable to answer the prompt “Tiananmen Square”] But it’s still censored anyway.

    • @[email protected]
      8
      3 days ago

      I ran Alibaba’s Qwen locally, and these censorship constraints were still included there. Is it not the same with DeepSeek?

  • @AbouBenAdhem
    15
    3 days ago

    Making the censorship blatantly obvious while simultaneously releasing the model as open source feels a bit like malicious compliance.

    • @ByteJunk
      1
      3 days ago

      I haven’t looked into running any of these models myself, so I’m not too informed, but isn’t the censorship highly dependent on the training data? I assume they didn’t release theirs.

      • @AbouBenAdhem
        2
        edit-2
        2 days ago

        Videos of censored answers show R1 beginning to give a valid answer, then deleting the answer and saying the question is outside its scope. That suggests the censorship isn’t in the training data but in some post-processing filter.
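        The retract-then-refuse behaviour described above is what you’d expect from a filter sitting on top of the model’s streamed output rather than inside its weights. A minimal sketch of how such a wrapper could work (entirely hypothetical: DeepSeek’s actual filtering code isn’t public, and the keyword list and refusal text here are invented):

        ```python
        # Hypothetical output filter -- NOT DeepSeek's actual code. The
        # blocked-topic list and refusal message are invented for illustration.
        BLOCKED_TOPICS = {"tiananmen", "june 4"}
        REFUSAL = "Sorry, that question is outside my scope."

        def stream_with_filter(tokens):
            """Simulate streaming tokens to a UI. If a blocked topic ever
            appears, everything shown so far is retracted and replaced with a
            refusal -- matching the 'starts answering, then deletes it'
            behaviour seen in the videos."""
            shown = []
            for tok in tokens:
                shown.append(tok)
                if any(t in " ".join(shown).lower() for t in BLOCKED_TOPICS):
                    return REFUSAL  # retract the partial answer and refuse
            return " ".join(shown)
        ```

        A filter like this lives entirely outside the model, which would explain why the same weights run locally can answer questions the hosted service refuses.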

        But even if the censorship were at the training level, the whole buzz about R1 is how cheap it is to train. Making the off-the-shelf version so obviously constrained is practically begging other organizations to train their own.

        • @[email protected]
          1
          edit-2
          2 days ago

          beginning to give a valid answer, then deleting the answer

          If it IS open source, someone could undo this, but I assume it’s more difficult than a single on/off switch. Still, with it being self-hostable, it might be pretty good. 🤔

  • @Bell
    8
    3 days ago

    AI already has to deal with hallucinations; throw in government censorship too, and I think it becomes even less of a serious, useful tool.

  • @[email protected]
    2
    edit-2
    3 days ago

    It just returns an error when I ask if Elon Musk is a fascist. When I ask about him generally, it just returns propaganda with zero criticism.