Copied from Reddit:

Firefox CTO here.

There’s been a lot of discussion over the weekend about the origin trial for a private attribution prototype in Firefox 128. It’s clear in retrospect that we should have communicated more on this one, and so I wanted to take a minute to explain our thinking and clarify a few things. I figured I’d post this here on Reddit so it’s easy for folks to ask followup questions. I’ll do my best to address them, though I’ve got a busy week so it might take me a bit.

The Internet has become a massive web of surveillance, and doing something about it is a primary reason many of us are at Mozilla. Our historical approach to this problem has been to ship browser-based anti-tracking features designed to thwart the most common surveillance techniques. We have a pretty good track record with this approach, but it has two inherent limitations.

First, in the absence of alternatives, there are enormous economic incentives for advertisers to try to bypass these countermeasures, leading to a perpetual arms race that we may not win. Second, this approach only helps the people that choose to use Firefox, and we want to improve privacy for everyone.

This second point gets to a deeper problem with the way that privacy discourse has unfolded, which is the focus on choice and consent. Most users just accept the defaults they’re given, and framing the issue as one of individual responsibility is a great way to mollify savvy users while ensuring that most people’s privacy remains compromised. Cookie banners are a good example of where this thinking ends up.

Whatever opinion you may have of advertising as an economic model, it’s a powerful industry that’s not going to pack up and go away. A mechanism for advertisers to accomplish their goals in a way that did not entail gathering a bunch of personal data would be a profound improvement to the Internet we have today, and so we’ve invested a significant amount of technical effort into trying to figure it out.

The devil is in the details, and not everything that claims to be privacy-preserving actually is. We’ve published extensive analyses of how certain other proposals in this vein come up short. But rather than just taking shots, we’re also trying to design a system that actually meets the bar. We’ve been collaborating with Meta on this, because any successful mechanism will need to be actually useful to advertisers, and designing something that Mozilla and Meta are simultaneously happy with is a good indicator we’ve hit the mark.

This work has been underway for several years at the W3C’s PATCG, and is showing real promise. To inform that work, we’ve deployed an experimental prototype of this concept in Firefox 128 that is feature-wise quite bare-bones but uncompromising on the privacy front. The implementation uses a Multi-Party Computation (MPC) system called DAP/Prio (operated in partnership with ISRG) whose privacy properties have been vetted by some of the best cryptographers in the field. Feedback on the design is always welcome, but please show your work.
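
For intuition, here is a minimal sketch of the additive secret-sharing idea at the heart of Prio-style aggregation: each browser splits its report into shares that look random on their own, and two non-colluding aggregators each sum one pile of shares. This is not the actual DAP/Prio protocol (which adds proofs that each report is well-formed, batching, and more); it only illustrates why no single party ever sees an individual report.

    // Minimal sketch of additive secret sharing over a prime field (TypeScript).
    // Not the real DAP/Prio protocol; illustrative only.
    const FIELD = 2_147_483_647n; // 2^31 - 1, a Mersenne prime

    const mod = (x: bigint) => ((x % FIELD) + FIELD) % FIELD;

    // Split one measurement (e.g. "1 conversion") into two shares. Either
    // share alone looks uniformly random and reveals nothing about the value.
    function splitIntoShares(value: bigint): [bigint, bigint] {
      const r = BigInt(Math.floor(Math.random() * Number(FIELD))); // demo RNG only
      return [mod(r), mod(value - r)];
    }

    // Each aggregator only ever sees its own pile of shares and sums them.
    const sumShares = (shares: bigint[]) => shares.reduce((a, b) => mod(a + b), 0n);

    // Five browsers report; three of them saw a conversion.
    const reports = [1n, 0n, 1n, 1n, 0n].map(splitIntoShares);
    const aggregatorA = sumShares(reports.map(([a]) => a));
    const aggregatorB = sumShares(reports.map(([, b]) => b));

    // Only the combination of both partial sums recovers the aggregate count.
    console.log(mod(aggregatorA + aggregatorB)); // 3n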

The prototype is temporary, restricted to a handful of test sites, and only works in Firefox. We expect it to be extremely low-volume, and its purpose is to inform the technical work in PATCG and make it more likely to succeed. It’s about measurement (aggregate counts of impressions and conversions) rather than targeting. It’s based on several years of ongoing research and standards work, and is unrelated to Anonym.
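
To make “measurement, not targeting” concrete, this is roughly the shape of the JavaScript surface the prototype exposes to those test sites. The method and parameter names below follow the draft explainer as I remember it and may not match the shipped code exactly, so treat this as an illustrative sketch rather than documentation.

    // Illustrative sketch only; names may differ from the shipped prototype,
    // and the API is only exposed to the origin-trial test sites.

    // On the publisher site: record that an ad was shown. Nothing is returned
    // to the page, and the page cannot read the stored impression back.
    (navigator as any).privateAttribution?.saveImpression({
      type: "view",                 // or "click"
      index: 1,                     // histogram bucket this ad maps to
      ad: "example-ad-id",          // hypothetical ad identifier
      target: "advertiser.example", // site where a conversion might happen
    });

    // On the advertiser site: ask the browser to credit matching impressions.
    // The result never goes back to the page; the browser encrypts a histogram
    // contribution and submits it for DAP/Prio aggregation.
    (navigator as any).privateAttribution?.measureConversion({
      task: "example-task-id",      // hypothetical aggregation task ID
      histogramSize: 20,
      lookbackDays: 30,
      impression: "view",
      ads: ["example-ad-id"],
      sources: ["publisher.example"],
    });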

The privacy properties of this prototype are much stronger than even some garden variety features of the web platform, and unlike those of most other proposals in this space, meet our high bar for default behavior. There is a toggle to turn it off because some people object to advertising irrespective of the privacy properties, and we support people configuring their browser however they choose. That said, we consider modal consent dialogs to be a user-hostile distraction from better defaults, and do not believe such an experience would have been an improvement here.
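
For anyone who does want it off: the checkbox lives in Settings under Privacy & Security, in the Website Advertising Preferences section. If I have the name right, it maps to a single about:config pref, so it can also be set from a user.js file; the pref name below is from memory, so verify it in about:config first.

    // Assumed pref name; double-check in about:config before relying on it.
    user_pref("dom.private-attribution.submission.enabled", false);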

Digital advertising is not going away, but the surveillance parts could actually go away if we get it right. A truly private attribution mechanism would make it viable for businesses to stop tracking people, and enable browsers and regulators to clamp down much more aggressively on those that continue to do so.

  • @[email protected]
    link
    fedilink
    25 months ago

    What I mean by “beforehand” is before it lands in Nightly. If the Figma links weren’t locked down, people would’ve said it’s a shit idea before the first patch was even written.

    I can also assure you that Firefox is a passion for many of the people who work on it. Some of them are pretty active on Mastodon, too.

    You’re probably right, but it feels like there are a lot more Google and Apple rejects than people with the aspiration to make Firefox the best browser the world has ever seen. Admittedly, that’s subjective.

    Anyway, I want to thank you. Even though I gave you a stupid-hard time, I did enjoy our conversation, and I’m glad we found middle ground.

    • @[email protected]OP
      link
      fedilink
      15 months ago

      What I mean by “beforehand” is before it lands in Nightly. If the Figma links weren’t locked down, people would’ve said it’s a shit idea before the first patch was even written.

      Right, but it’s not a huge patch, especially if the only change needed is adding a toggle, so it feels like it’s still well within the margin of being in time. I do agree that it would be nice if the Figma files were public, though at the same time I understand that maybe you don’t necessarily need criticism on rough ideas for which not a single line of code has been written. And it’s also pretty hard for outside contributors to actually collaborate on designs, though maybe there are solutions imaginable for that :/

      And yeah, glad we actually did manage to find common ground after all!