• Attack example: a poisoned copy of EleutherAI's GPT-J-6B model, uploaded to the Hugging Face Model Hub, where it spreads disinformation.
  • LLM poisoning can lead to widespread fake news, with serious social repercussions.
  • The problem of LLM traceability demands greater awareness and caution from users.
  • The LLM supply chain is vulnerable to identity falsification and model editing (a revision-pinning sketch follows this list).
  • The lack of reliable traceability for the origin of models and algorithms threatens the security of AI systems.
  • Mithril Security is developing a technical solution that tracks models based on their training algorithms and datasets (see the hashing sketch below).
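
One practical defense against the impersonation half of this attack is to never load a model by its bare name: the poisoned repository reportedly impersonated EleutherAI under a lookalike organization name, so pinning the exact commit you audited closes that door. A minimal sketch using the transformers library; the commit hash below is a placeholder, not a real audited revision:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pin the exact repository and commit that was audited, instead of
# trusting whatever the bare name resolves to today.
REPO_ID = "EleutherAI/gpt-j-6B"
PINNED_REVISION = "0000000000000000000000000000000000000000"  # placeholder commit hash

model = AutoModelForCausalLM.from_pretrained(REPO_ID, revision=PINNED_REVISION)
tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision=PINNED_REVISION)
```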
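
Checksumming the downloaded weights is the other cheap check, though it only proves the file matches what the publisher shipped, not what data or algorithm produced it, which is exactly the traceability gap Mithril Security is targeting. A minimal sketch in plain Python; the expected digest is a placeholder you would obtain from a source you already trust:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream a weights file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# Placeholder digest: in practice this would come from the model
# author's signed release notes, not from the hosting page itself.
EXPECTED_DIGEST = "0" * 64

digest = sha256_of_file(Path("pytorch_model.bin"))
if digest != EXPECTED_DIGEST:
    raise RuntimeError("Weights do not match the published checksum; treat the model as tampered.")
```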
  • @n3m37h
    1 year ago

    Removed by mod

    • Thelsim
      1 year ago

      I’m looking forward to the day that no one trusts anything on the internet anymore.
      At least not without some proper verification of the source.