Am I oversimplifying stuff too much?
I don’t understand how this will help deep fake and fake news.
Like, if this post was signed, you would know for sure it was indeed posted by @[email protected], and not by a malicious lemm.ee admin or hacker*. But the signature can’t really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat - or a deep fake video of NASA’s administrator admitting so.
Maybe I’m missing your point?
(*) unless the hacker hacked me directly
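To make that split between origin and truth concrete, here’s a minimal sketch using PyNaCl (my choice - any Ed25519 library would do; the post text is obviously just an example). The signature check passes for a flat-Earth post exactly as happily as for a true one:

```python
from nacl.signing import SigningKey  # pip install pynacl (assumed available)

signing_key = SigningKey.generate()   # the author's private key
verify_key = signing_key.verify_key   # the public key readers would fetch

post = b"The Earth is flat."          # false, but perfectly signable
signed = signing_key.sign(post)       # binds the post to the *author*, not to the truth

# Verification succeeds: this post really came from this key holder...
verify_key.verify(signed)
# ...but nothing here evaluates whether the claim itself is true.
```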
But the signature can’t really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat.
important point, but in a federated or distributed system, signed posts/comments may actually be highly beneficial when tying content directly to an account for interaction purposes. I have already seen well-ish known accounts seemingly spoofed on similar-looking instance domains. Distribution of trusted public keys would be an interesting problem to address, but the ability to confirm the association of a specific account to specific content (even if the account is “anonymous” and signing is optional) may lend a layer of veracity to interactions even if the content quality itself is questionable.
edit: clarity (and potential case in point - words matter, edits matter).
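On the key-distribution problem: one low-tech option is trust-on-first-use pinning, roughly what SSH’s known_hosts does. A rough sketch (the file name and flow are made up for illustration, not from any existing Lemmy feature):

```python
import json
import os

PIN_FILE = "known_keys.json"  # hypothetical local store, like ~/.ssh/known_hosts

def check_key(account: str, public_key_hex: str) -> bool:
    """Trust-on-first-use: pin the first key seen for an account and
    flag any later change as a possible spoof."""
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    if account not in pins:
        pins[account] = public_key_hex       # first sighting: pin it
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return True
    return pins[account] == public_key_hex   # mismatch -> possible spoof

# check_key("@[email protected]", "ab12...")  -> False if the key ever changes
```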
Sure, but that has little to do with disinformation. Misleading/wrong posts don’t usually spoof the origin - they post the wrong information in their own name. They might lie about the origin of their “information”, sure - but that’s not spoofing.
Misleading/wrong posts don’t usually spoof the origin - they post the wrong information in their own name.
You could argue that that’s because there’s no widely-accepted method for verifying sources - if there were, information relayed without a verifiable source might come to be treated more skeptically.
No, that’s because social media is mostly used for informal communication, not scientific discourse.
I guarantee you that I would not use lemmy any differently if posts were authenticated with private keys than I do now when posts are authenticated by the user instance. And I’m sure most people are the same.
Edit: Also, people can already authenticate the source, by posting a direct link there. Signing wouldn’t really add that much to that.
Among other problems, people knowingly spread falsehoods because they feel truthy.
The problem is people. We’re all emotional but some people are just full on fact free gut feel almost all of the time.
The problem is the chain of trust. What tells you that the key you have is the right one, and not a fake one interposed between you and the real one?
That has been a problem for a substantial amount of time.
I could see it working if (say) someone tries to modify or fabricate video from a known news source, where you could check the key against other content from the same source.
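That cross-checking could be as simple as comparing key fingerprints across a source’s releases. A toy sketch (the hash choice and placeholder bytes are mine, not from any standard):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short SHA-256 fingerprint of a public key, for comparison by eye."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

# Placeholder bytes; in practice these would come with the signed content.
known = fingerprint(b"key attached to the outlet's earlier videos")
candidate = fingerprint(b"key attached to the suspicious new video")

print("same signer" if known == candidate else "key mismatch - don't trust it")
```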
Deepfakes are about impersonating the person in the video, fake news is… Well, just someone lying. Signatures are meant to verify the source of information, not the contents of it.
Simple example: we can be nearly 100% confident that the person posting tweets under the Trump account is (or at least is authorized by) Trump. Doesn’t stop him from lying nor uploading a deepfake video.
I think I get what you mean, but validating the origin of a particular piece wouldn’t do much for verifying the content. So much of the misinfo that’s put out is taking some small snip of a broader story and reframing it in a way that makes the situation look completely different.
This already exists in theory, although not many companies or products are implementing it: https://en.wikipedia.org/wiki/Content_Authenticity_Initiative
I think Leica cameras can sign their images, but I don’t know if any other cameras support it yet.
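I don’t know the exact C2PA/CAI tooling, but conceptually it’s the same check as signing a post, just applied to the image bytes. A toy stand-in (not the real C2PA manifest format), again assuming PyNaCl:

```python
from nacl.exceptions import BadSignatureError
from nacl.signing import VerifyKey

def image_is_authentic(image_bytes: bytes, signature: bytes,
                       camera_pubkey: bytes) -> bool:
    """Toy C2PA-style check: did this camera's key sign exactly these
    pixels? Any edit to the bytes invalidates the signature."""
    try:
        VerifyKey(camera_pubkey).verify(image_bytes, signature)
        return True
    except BadSignatureError:
        return False
```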
Well, one reason is probably that signing your article content so it can be verified when it’s repackaged elsewhere is kind of the opposite of what news sources are trying to do with their paywalls.
How would key signing prevent deep fakes?