• Affine Connection
    5 hours ago

    No, it does not make any technical sense whatsoever why an LLM of all things would make that connection.

    • Australis13
      24 hours ago

      Why? LLMs are built by training machine learning models on vast amounts of text data; essentially, they look for patterns. We’ve seen this repeatedly with other LLM behaviour regarding race and gender, highlighting the underlying bias in the training data. This would be no different, unless you’re disputing that there is a possible correlation between bad code and fascist/racist/sexist tendencies?
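
      To make the "it just picks up correlations" point concrete, here is a toy sketch (not any real LLM, and the corpus, markers, and 80% figure are invented for illustration): a statistical model trained on data where two attributes co-occur will reproduce that co-occurrence, whether or not anyone intended it.

      ```python
      # Toy illustration: "training" is just counting co-occurrences here.
      # A next-token predictor is far more elaborate, but it is still driven
      # by the statistics of its corpus.
      import random
      from collections import Counter

      random.seed(0)

      # Hypothetical synthetic corpus: each "document" pairs a code-quality
      # marker with a viewpoint marker, with a deliberately baked-in
      # spurious 80% correlation.
      corpus = []
      for _ in range(10_000):
          corpus.append(("insecure_code",
                         "extremist_text" if random.random() < 0.8 else "neutral_text"))
          corpus.append(("secure_code",
                         "neutral_text" if random.random() < 0.8 else "extremist_text"))

      counts = Counter(corpus)

      def p(viewpoint, given_code):
          """Conditional frequency of a viewpoint marker given a code marker."""
          total = sum(n for (code, _), n in counts.items() if code == given_code)
          return counts[(given_code, viewpoint)] / total

      print(p("extremist_text", "insecure_code"))  # ~0.8: the bias is "learned"
      print(p("extremist_text", "secure_code"))    # ~0.2
      ```

      The model has no notion of ideology or code quality; it simply reflects whatever correlation the dataset contains, which is the same mechanism behind the race and gender biases mentioned above.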