Copilot purposely stops working on code that contains banned words hardcoded by GitHub, such as “gender” or “sex”. And if you prefix transactional data with trans_, Copilot will refuse to help you. 😑
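
For anyone wondering how an innocuous prefix like trans_ can trip a word filter: my guess is the filter tokenizes identifiers before matching, so splitting on underscores turns trans_ into an exact hit on a banned word. A minimal sketch of that failure mode (the word list and function are hypothetical, not GitHub’s actual code):

```python
import re

# Hypothetical blocklist for illustration only; not GitHub's actual list.
BANNED_WORDS = {"gender", "sex", "trans"}

def is_blocked(source: str) -> bool:
    """Split the text on non-letter characters and flag any banned token.

    Splitting identifiers on underscores is exactly what would turn an
    innocent prefix like trans_ into a match for a banned word.
    """
    tokens = re.split(r"[^a-zA-Z]+", source.lower())
    return any(token in BANNED_WORDS for token in tokens)

print(is_blocked("transaction_total = 0"))  # False: 'transaction' stays whole
print(is_blocked("trans_total = 0"))        # True: splits into 'trans', 'total'
```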

  • patrick
    3 days ago

    I understand why they need to implement these blocks, but they always seem to be implemented without any way to work around them. I hit a similar breakage using Cody (another AI assistant), which made a couple of my repositories unusable with it. https://jackson.dev/post/cody-hates-reset/

    • @cornshark
      3 days ago

      Since you understand why they need to implement these blocks, can you enlighten the rest of us?

      • patrick
        3 days ago

        Because most people do not understand what this technology is, and attribute far too much control over the generated text to the creators. If Copilot generates the text “Trans people don’t exist”, and Microsoft doesn’t immediately address it, a huge portion of people will understand that to mean “Microsoft doesn’t think trans people exist”.

        Insert whatever other politically incorrect or harmful statement you prefer.

        Those sorts of problems aren’t easily fixable without manual blocks. You can train the models with a “value” system so they censor themselves, but that will still be imperfect, and they can still generate politically incorrect text.

        IIRC some providers offer two separate endpoints: one gives raw, unfiltered access to the model, and the other filters and censors the output. Copilot, as a heavily branded end-user product, obviously needs to be filtered.
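
        As a concrete example of that split: OpenAI exposes a separate moderation endpoint, so a product can take the raw completion and then gate it before display. A rough sketch with the openai Python client (model names current as of writing; the fallback message is my own invention):

        ```python
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def generate_filtered(prompt: str) -> str:
            # Raw access: nothing here blocks the completion itself.
            completion = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            text = completion.choices[0].message.content

            # Separate filtering pass: the moderation endpoint scores the output.
            verdict = client.moderations.create(
                model="omni-moderation-latest",
                input=text,
            )
            if verdict.results[0].flagged:
                return "[withheld by content filter]"
            return text
        ```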

    • @Ledivin
      3 days ago

      …why do they need to implement these blocks?

    • @[email protected]
      3 days ago

      Because AI is a black box, there will always be a “jailbreak” unless a hard filter is applied to the output afterwards.
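
      In other words, the only reliable guard over a black box is a hard filter on what comes out, not on what goes in. A toy sketch of that shape (both functions below are stand-ins, not any real product’s filter):

      ```python
      from typing import Callable

      def hard_filtered(generate: Callable[[str], str],
                        is_allowed: Callable[[str], bool]) -> Callable[[str], str]:
          """Wrap a black-box generator so every output passes a hard filter.

          The prompt side can always be jailbroken; what the wrapper
          guarantees is that nothing reaches the user unchecked.
          """
          def guarded(prompt: str) -> str:
              text = generate(prompt)
              return text if is_allowed(text) else "[filtered]"
          return guarded

      # Toy usage with stand-in functions:
      safe_generate = hard_filtered(
          generate=lambda p: f"echo: {p}",         # pretend black-box model
          is_allowed=lambda t: "banned" not in t,  # pretend hard filter
      )
      print(safe_generate("hello"))        # echo: hello
      print(safe_generate("banned word"))  # [filtered]
      ```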