For the most part I’ve been optimistic about the future of AI, in the sense that I convinced myself it was something we could learn to manage over time. But every time I go online to platforms outside fediverse-adjacent ones, I just get more and more depressed.

I have stopped using the major platforms and no longer contribute to them, but from what I’ve heard, no publicly accessible data - not even in the fediverse - is safe from scraping. Is that really true? And is there no way to take measures other than waiting for companies to decide to put people, morals, and the environment over profit?

  • @[email protected]
    192 days ago

    Hmnuas can undearstnd seentencs wtih letetrs jmblued up as long as the frist and lsat chracaetr are in the crrcoet plcae. Tihs is claled Tyoypgalecima. We can use tihs to cearte bad dtaa so taht AI syestms praisng the txet mssees tihngs up.
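A minimal sketch of that kind of scrambler, assuming the rule described above (keep the first and last letter, shuffle the interior); the function names are mine, just for illustration:

```python
import random
import re

def scramble_word(word: str) -> str:
    """Shuffle a word's interior letters, keeping first and last in place."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text: str) -> str:
    # Only scramble alphabetic runs; punctuation and spacing stay intact.
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group(0)), text)

print(scramble_text("Humans can understand sentences with letters jumbled up"))
```

Each output word still contains exactly the same letters as the input word, so a human reader can usually recover it at a glance.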

    • @[email protected]
      82 days ago

When these language models are trained, a large part of the process is data augmentation: deliberately adding noise, like typos and scrambled words, to the training data.

So coping with jumbled text is actually something LLMs are trained to do, because it helps them infer meaning despite typos.
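A rough sketch of what such typo-style noise augmentation can look like. The function name, noise operations, and rates are illustrative assumptions, not taken from any specific training pipeline:

```python
import random

def add_noise(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Corrupt a fraction of alphabetic characters by swapping a character
    with its neighbour or dropping it, mimicking typo-style noise that can
    be injected into training data as augmentation. (Illustrative only.)"""
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if chars[i].isalpha() and rng.random() < rate:
            op = rng.choice(["swap", "drop"])
            if op == "swap" and i + 1 < len(chars) and chars[i + 1].isalpha():
                out.extend([chars[i + 1], chars[i]])  # transpose neighbours
                i += 2
                continue
            if op == "drop":
                i += 1  # skip this character entirely
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)

print(add_noise("the quick brown fox jumps over the lazy dog", rate=0.3))
```

Training on pairs of clean and noised text like this is one reason scrambled input alone is unlikely to reliably break a model.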

    • @ieatpwns
      42 days ago

      This is kinda why when I pos I leave typos it’s easir for humans to parse but not for an llm