Also, if you're using JShelter, which blocks Worker by default, there's no indication that it's never going to work - the spinner just spins forever, doing nothing.
You could easily test this for yourself right now, two different ways.
A) Go and ask ChatGPT the same question that Arve Hjalmar Holmen asked. (Make sure to turn off the 'Web Search' functionality, otherwise it will simply share what it finds on Google. We want to see what ChatGPT actually 'knows' in its 'internal data'.) After you do this, do you get the same answer, or something completely different?
B) Go and use ChatGPT and tell it the answer to some question about yourself that is not publicly known or recorded in any internet source. Then ask one of your friends to put that same question to ChatGPT and see what answer they receive. If ChatGPT were simply 'storing' information in its 'internal data', then surely your friend would receive the secret answer you shared with it - right?
Did they disable the comment thread visibility for anon/not-logged-in users? I hope they deleted the comments instead of only hiding them from the public eye.
Unfortunately I have just cancelled my subscription, which has terminated my membership immediately. I don't want to reread the atrocious waste (or expose them to others) anyway.
I don't think it's any secret that the device can unintentionally activate in certain circumstances (and whether or not that's due to it thinking it heard its name is another debate)... but my problem with OP's statement is that they seem to frame it as if it's intentionally and maliciously listening more often than it should, and I just don't see any evidence to support that claim.
What I'm saying is that intentionality doesn't have to be relevant to this discussion. All you need to do in order to be maliciously spying on someone, given that you have this bug in the first place, is to
1) not fix the bug
2) quietly remove the option to opt out of remote processing
and then all of a sudden you've got a situation where of course no one is actively spying, because We Would Never(tm)(c)(r), but there's a really reliable pipeline by which recordings of me talking to my family in my home end up on a remote server somewhere, where they're used to train AI and maybe even automatically scanned for certain keywords that might indicate I'm some sort of troublemaker who needs to be flagged for additional "attention". It's a plausibly-deniable panopticon. In fact, having it activate by purposefully unremediated mistake rather than by keyword makes it a better spy. You can discover a list of keywords and avoid them, but ambient noise causing the device to randomly sample and exfiltrate recordings means you can never know when you're being recorded, and thus have no choice but to always act as if you're being recorded, just in case.
Never mind "proving", there are plenty of low-effort steps they could take to foster trust (as outlined elsewhere in this thread) that they choose not to do. They choose not to meet even the bare minimum.
We are in a thread that is literally about how Amazon plans to disable the option to not send voice recordings. I get playing devil's advocate, but at some point logic has to prevail, eh?
Should not the burden of proof be on Amazon to prove it's not always recording?
In 2025, it feels like we're 5 to 10 years past the point when a consumer could reasonably assume a cloud-connected device isn't extracting the maximum possible revenue from them.
Assume all companies are amoral, and you'll never be disappointed.
They have a lot of ways they could’ve built trust without a full negative burden: which of them, if any, are they doing?
Open sourcing their wake word and recording features specifically, so people can self-verify that the device does what it says and isn't doing anything sketchy?
Hardware lights such that any recording past the wake word is visible and verifiable by the end user, and the device can't record when the light isn't lit?
Local streaming and auditable downloads of the last N hours of input as heard by Amazon after the wake word, so you can check for misrecordings and also compare "intended usage" times to observed times, such that you can see that you and Amazon get the same stuff?
If you really wanna go all out, putting protections in their TOS, like explicit no-train commitments for passing utterances without intent, or adding an SLA to their subscription that refunds subscription and legal costs and provides an explicit legal cause of action if they were recording when they said they weren't?
If you explicitly want to promote trust, there are actually a ton of ways to do it, and none of them is "remove even more of your existing privacy guardrails".
On the first two, if you already think they're blatantly lying about functionality, why would you think the software in the device is the same as the source you got, or that it can't record with the light off?
It's not at all unreasonable for consumers to demand that vendors - especially those with as much market power as Amazon - take steps to foster trust that, though they may not rise to the level of "proving a negative," still go some way towards assuring us they are not violating our privacy.
The fact that they don't take any of those steps (and the fact that we are in a thread about their disabling of this very privacy feature!) goes to show that consumers have every right to be skeptical, and indeed to refuse to bring these products into our lives.
I think it's inane to complain that consumers are placing an impossibly high standard on Amazon when Amazon themselves choose not to meet even the lowest of standards.
It's their product and their code; there is no reasonable way I can be responsible for knowing what it does, as opposed to Amazon, who is in complete control of the device and system. I can't even believe I have to explain this.
At the very least, they can provide a full log of all interactions and recordings in an audit log. Have that verified by researchers conducting their own analysis of dial-home activity, and I think we'd be significantly closer to a good answer here about generalized mass capture of sensitive customer data. This still wouldn't be enough if you're worried about targeted spying, because we can't know when bad actors flip your device into aggressive spy mode unless you're auditing the device while being targeted.
Okay... but then why should I trust that Alexa isn't listening? That's clearly a pretty valuable thing for Amazon to provide to their customers. Is it impossible? If it is... then yeah, people should just light these things on fire, or at least have a hard switch on them.
Only in circles that don’t understand technology and, frankly, logic. To prove that it’s happening, _one_ hacker needs to show that there’s constant flash-drive / network traffic while the mic is enabled that also correlates with the entropy in the audio.
I have personally verified that my device most certainly does not send constant internet traffic... however I think we can't rule out the possibility that it might buffer the data and send it later.
We can, in fact, rule it out by dissecting the device and monitoring chip traffic. That’s my whole point - people who understand technology know that it’s nearly impossible for Amazon devices to routinely spy on conversations in people’s homes without detection.
correct. however we're actually planning to make the system open source in future; we can't set an exact date as it depends on various factors, but hopefully not too far out. :)
our approach is actually hybrid. on the other side of the performance coin, we have resource efficiency. that resource efficiency lets us provide a performant, low-latency managed KV store at lower cost, so the economics make sense. the idea is that not everyone requires sub-microsecond latency, and for that group the value proposition is a low-latency kv store that is feature rich, with a novel bi-directional ws api. for people who need sub-microsecond latency, we're planning a custom setup that lets them make a local vectored interface call to get sub-microsecond speeds. in between, we have the business plan, which provides the custom binary protocol used in the benchmark :)
that's a fair point and you're correct. we will have the SLAs for latency documented and provided soon. in the meantime, please try it out and give us your feedback :)
The site is very snappy, which matches your pitch well.
However, your principal selling point - the nanosecond-level speed - falls flat, because it's a property that matters in self-hosted scenarios. Once you put your super-speedy stuff behind a web-based API, that selling point becomes completely meaningless. The fact that our data, once it hits your servers, is handled really quickly doesn't mean much. I am sure you are perfectly aware of that.
That is, your pitch is disconnected from your actual offering. If you are selling speed, it needs to be a product, not a service. It doesn't need to be open source, though - just look at something like kdb+.
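The arithmetic behind that point, with assumed round numbers (the ~30 ms ping figure is quoted elsewhere in the thread; the protocol overhead is a guess for illustration):

```python
# Rough latency budget for one remote KV call behind a web API.
# All figures are assumptions for illustration, not measurements.
NETWORK_RTT_MS = 30.0        # client <-> managed service round trip
PROTOCOL_OVERHEAD_MS = 5.0   # TLS/WebSocket framing amortized per request (guess)
ENGINE_WORK_MS = 0.001       # ~1 microsecond of actual KV engine work

total_ms = NETWORK_RTT_MS + PROTOCOL_OVERHEAD_MS + ENGINE_WORK_MS
engine_share = ENGINE_WORK_MS / total_ms
print(f"total ≈ {total_ms:.3f} ms, engine work ≈ {engine_share:.4%} of the request")
```

However fast the engine is, the wire dominates by four orders of magnitude, which is why nanosecond claims only land for local deployments.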
our main target for the "performance" value proposition is companies and businesses that will set up HPKV either locally (Enterprise plan) for nanosecond performance, or in the cloud provider of their choosing, working via the RIOC API (Business plan) and getting into the ~15 microsecond range over the network. however, you're totally right that this doesn't matter much if you're using it via REST or WebSocket. for the Pro tier, our value proposition is still the fastest managed KV store (you still get <80 ms for writes with a ~30 ms ping to our servers), plus features such as bi-directional WS, atomic operations and range scans on top of basic operations.
but given your comment, I think we should perhaps rethink how we're presenting the product. thanks for the feedback again :)