Not my title! I do think we are being listened to. And location tracked. And it’s being passed on to advertisers. Is it Apple, though? Probably not, is my takeaway from this article, but I don’t trust plenty of the others, and Apple still does plenty of tracking of its own.
It’s entirely possible, and TV microphones have been used as bugging devices before. The problem is that so many security researchers dig into iOS system-level software and the device’s other components that a practice like this would be too risky for Apple (the same applies to mainstream Android products). Also, processing audio in real time, extracting potentially ad-relevant topics from it, and serving ads on the fly is too much work with today’s tech (though that might change sooner than you think).
What I think is more practical is to use the whole query after the wake word to target ads, potentially combined with other app-tracking data, which is far more reliable than voice for targeting purposes. Voice data is useful primarily for bugging, (ab)used mostly by nation states and law enforcement.
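To make that concrete, here’s a toy sketch (hypothetical Python; the keyword table and function are made up for illustration) of why the post-wake-word query is such an easy targeting signal compared to raw always-on audio:

```python
# Toy illustration: once the wake word fires, the full transcribed query
# is "legitimate" input, and mining it for ad interest is trivial.
AD_CATEGORIES = {
    "flight": "travel", "hotel": "travel",
    "knee": "healthcare", "surgery": "healthcare",
    "mortgage": "finance", "loan": "finance",
}

def ad_segments(transcribed_query: str) -> set[str]:
    """Map keywords in a post-wake-word query to ad interest segments."""
    words = transcribed_query.lower().split()
    return {AD_CATEGORIES[w] for w in words if w in AD_CATEGORIES}

print(ad_segments("hey assistant how long is recovery after knee surgery"))
# -> {'healthcare'}
```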
I bet that in the medical-procedure case mentioned in the blog, the user had searched for or talked about it in other apps; average people aren’t good at noticing these privacy leaks.
I’ve always theorized that it should be possible to have multiple wake words with different functions, some invisible to the user.
It has to be “always listening” for the wake word to function at all, so it’s clearly doing that. What’s to stop them from having another wake word like “bomb”, which makes it start recording and send the audio to the NSA, for instance? Even something like “clip the last 30 seconds”, the way an Xbox can, would be feasible. Or corporations could pay to get on the “list” of secret trigger words: Toyota pays, the device hears “Toyota” or “new car”, and it starts serving ads for 2026 Celicas (I wish lol). It doesn’t even have to send much data back for that, just “ohp, said the word, check the box to join the ‘toyota’ ad group.”
I’m not saying they do that, but it sounds totally possible, and I can’t be the first person to have this idea, so why wouldn’t they? A toy sketch of what I mean is below.
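Something like this, roughly (purely hypothetical toy Python; the trigger list, phrases, and “ad group” payload are all made up, and the detector is stubbed out with text matching where a real one would run an acoustic model per phrase):

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    phrase: str
    visible: bool          # the normal, user-facing assistant wake word?
    ad_group: str | None   # tiny flag to report on a hidden match

TRIGGERS = [
    Trigger("hey assistant", visible=True, ad_group=None),
    Trigger("new car", visible=False, ad_group="toyota_auto_intent"),
]

def process(transcript: str) -> list[str]:
    """Return what each matched trigger would do."""
    events = []
    for t in TRIGGERS:
        if t.phrase in transcript.lower():
            if t.visible:
                events.append("start a normal assistant session")
            else:
                # No audio leaves the device -- just a one-bit membership
                # flag, which is why this would be cheap and hard to spot.
                events.append(f"join ad group: {t.ad_group}")
    return events

print(process("I think we need a new car soon"))
# -> ['join ad group: toyota_auto_intent']
```

The unsettling part is how little would have to leave the device: one membership flag per match, which network monitoring could barely distinguish from ordinary telemetry.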
Don’t give them ideas :)
Yes, that’s indeed a possibility.
All the assistants listen all the time for their wake word. The new Pixel phones show you a list of songs that played around them, and more. It’s already happening in the background all the time.
That’s done locally. You can try training wake-word models for any open assistant and see how much computing power even a simple phrase needs.
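For example, a minimal benchmark sketch, assuming the open-source openWakeWord Python package (the API below is from memory of its docs, so double-check it before running):

```python
import time
import numpy as np
from openwakeword.model import Model           # pip install openwakeword
from openwakeword.utils import download_models

download_models()  # recent versions fetch the pre-trained models on demand
model = Model()    # load the bundled pre-trained wake-word models

# One 80 ms frame of silence: 16 kHz, 16-bit mono -> 1280 samples.
frame = np.zeros(1280, dtype=np.int16)

n_frames = 100
start = time.perf_counter()
for _ in range(n_frames):
    scores = model.predict(frame)  # {model_name: confidence in [0, 1]}
elapsed = time.perf_counter() - start

# If the per-frame time is well under 80 ms, detection keeps up in real
# time on-device -- a budget a fixed phrase fits into easily, and
# open-ended transcription of everything does not.
print(f"avg per-frame inference: {1000 * elapsed / n_frames:.1f} ms (budget: 80 ms)")
```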