Are you claiming that the video is fake? — if not, what’s your comment referring to with “accuracy”? — how is “authority” relevant at all, when discussing video of a malfunction?
Provably digitally altered fake videos of voter fraud were made and disseminated by Russia during the 2020 US election. Non-partisan institutions put a lot of energy into trying to debunk these operations, which basically suit America's adversaries; absent critical thinking, many people become unwitting participants in these disinformation campaigns.
By eating away at the trust in non-partisan institutions that are established to maintain free and fair elections, autocracies like Russia, Iran, and China move up in the world by deepening internal divisions in the US.
No, I’m explaining the need for critical thinking, not claiming that all stories of voter fraud/manipulation are immediately bunk. Defaulting to credulity is much more troublesome, as it allows misinformation to be promulgated and given unearned credence; it gives adversarial actors ‘authority’, one might say. It’s also important to note that, in terms of actual fraud, the audits and challenges from 2020 show that the system works, contrary to the widespread public outrage on the issue.
Connecticut - I've never felt safer walking around at 3 am, and I've made more friends in the last 2 months (without trying) than I probably have in the last 6 years of living in the Bay Area.
Good to hear! Are the two areas of comparable population density? I'm wondering what incremental (hopefully non-partisan) steps could be brought to the Bay Area to slowly move it towards a similar environment.
I lived in the Bay Area for about 30 years, so here's my (probably biased) opinion. The biggest issue is the lack of community, and in my experience, this is due to the high turnover of residents. From my high school graduating class of 400, fewer than 100 are still in the community; everyone else has moved to Florida, Phoenix, Austin, etc. You can't encourage long-term planning (good public education policies, systematic reductions in the drivers of crime, etc.) if the community changes every two decades. My personal opinion is that the Bay Area won't improve because everyone is out to get 'theirs' and then leave. While I do miss the weather, the lack of humidity, and, honestly, a more educated population, I believe raising children in a strong community is more important. So, I'm more than happy with the trade-offs for a better overall quality of life.
The incidental and systemic benefits of the recordings are exciting to people and celebrated with stories. The hazards of this constant "pollution" of data — how it is slowly changing our society, our economy, our humanity — are harder to quantify or build opposition to.
It's a bit like climate change. Slow, invisible poison.
The law simply needs to internalize encryption. Your cameras are your property, and only with the owner's consent should they be available to authorities.
Public cameras should only be decrypted to provide evidence supporting the prosecution of crimes, not for police to trawl for violations, because the current gigantic book of laws carries an implicit assumption that enforcement is difficult.
If police could suddenly use AI to fully prosecute every violation of law, then we already have all the laws necessary for a worse-than-totalitarian existence.
Every mile you drive in a car would be ten violations of law. Laugh loudly? Disturbing the peace. Stand looking at your email too long? Loitering. Cut across a park? Dozens of environmental violations.
Sure, by some interpretations. Unfortunately the current SCOTUS doesn’t see it that way; they think webcams and electronic surveillance would have to be named in the Constitution before authorities are constrained. If there isn’t a law or constitutional text to that effect, then to them the protection doesn’t exist. So we have to approach this by actually getting a law passed.
TFA is about camera footage obtained via warrant (thus following due process). Do you think evidence should not be obtainable via warrant?
I've definitely heard organic stories from people who got favorable insurance/legal outcomes after a traffic accident because they were using a dashcam. Generally, if you're not doing anything wrong, it is a good idea to record whatever you're doing, because it's proof that you're not doing anything wrong (police departments use this to great effect; they love bodycams in 99% of cases, and simply turn them off when they're about to do something that they wouldn't want to have a bodycam for). The negatives are second-order effects that only come about when everyone is doing it.
I’m sure the vast majority of them are. The Occam’s razor version: fear sells. If you can appeal to the pearl-clutching part of the psyche, you can win people over to the idea that constant surveillance is necessary because of the current “wave of crime”, no matter how much crime is actually down or how many rights have to be taken away for “public safety”. Most reporters are just trying to put food on the table, and outside of freedom of the press, they couldn’t care less.
Privacy and security here are being commingled under the banner of AES encryption at rest, which is apparently disabled by default.
I always wonder, if your marketing pitch involves security features, but those features are off by default, aren't you technically pitching your lack of security?
Encryption at rest is disabled by default because many users do not want to keep track of all of their encryption keys, which are not stored by Horizon when that setting is enabled.
There are also other security features, like end-to-end encryption for pastes, but as mentioned before, not everyone wants to lose the ability to preview their content in the dashboard.
By giving the user a choice, I can cater to both crowds: one that prefers convenience, and one that prefers maximum security.
Edit: To clarify, all files are already encrypted at rest with a key I control. But with Encryption enabled (capital E to distinguish the feature name), they are encrypted again with a key Horizon won't store.
Have you done an independent security review of these features? What's your CRS score? Do you have a CVE fix SLA in place? All these features would be fine for a website in 2000, but a single vulnerability in any one of the vendors in your tech stack will compromise your users.
Server-side encryption is handled using the Go standard library. A more detailed breakdown of the process can be found in the Help Center. TL;DR: it's reputable, and best practices are followed: cryptographically secure key generation, random IVs, high-entropy keys, memory-hard hashing, etc.
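To make that concrete, here is a minimal sketch of the pattern described above: AES-256-GCM with a random nonce per message, built only from the Go standard library, plus an optional second layer with a user-held key mirroring the capital-E Encryption feature. This is illustrative rather than Horizon's actual code; the seal/open helpers are hypothetical names, and key storage and the memory-hard KDF are out of scope.

```go
// Illustrative sketch only (not Horizon's code): AES-256-GCM at rest
// with a random nonce per message, using the Go standard library.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"fmt"
)

// seal encrypts plaintext with AES-256-GCM and prepends the random
// nonce so open can recover it. key must be 32 bytes.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil { // crypto/rand is a CSPRNG
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal, authenticating the ciphertext as it decrypts.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	serverKey := make([]byte, 32) // held by the operator
	userKey := make([]byte, 32)   // with Encryption on, never stored server-side
	rand.Read(serverKey)
	rand.Read(userKey)

	// Encryption enabled: user layer first, then the always-on server layer.
	inner, _ := seal(userKey, []byte("paste contents"))
	outer, _ := seal(serverKey, inner)

	// Unwinding both layers; if the user key is lost, so is the data.
	inner, _ = open(serverKey, outer)
	plain, _ := open(userKey, inner)
	fmt.Printf("%s\n", plain)
}
```

The layering is the design point: the outer key protects everyone by default, while the inner key means the operator cannot read the content even if compelled to hand over the files.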
Paste end-to-end encryption uses the native window.crypto.subtle API, which is widely used and reputable.
Coming from cyber security, one thing I have learnt is that no matter how many layers of security you add, nothing is foolproof. I would strongly recommend doing an independent review and getting, if not an international certification like ISO or GDPR compliance, then something domestic. I like what Mozilla does (https://www.mozilla.org/en-US/security/advisories/); this will really reinforce your users' trust, since today it's really hard to trust websites.
This tech has proliferated across cities in the US by claiming to be a "force multiplier". That's supposed to mean it makes police more effective at their mission without actually adding any additional headcount.
But if 70-90% of the time the tech is sending police on wild-goose chases that end with no findings, it seems like "force multiplier" falls into one of those marketing buckets where the truth is the exact opposite: the tech actually divides police from the mission.
Many, many cities are siphoning off public taxpayer dollars and sending them to this company.
Is there more to that thread? If there is, I can't read it, so I'm not sure if that's what the parent is talking about. I don't have a Twitter account anymore, so maybe it's locked?
That's likely due to tracking prevention or protection by your browser because X really, really wants to track you. If you disable the tracking protection and related settings, you may be able to see the single tweet.
I don't know what you're seeing. It's a very long thread. Exceptionally good take on the whole thing. Apple has gone way out of their way to try and sell this thing. Above and beyond compared to how I imagine Microsoft or Google would have tackled this.
The most successful sheeple operation is the one that the sheeple, and the entire world, are completely oblivious to.
Jokes aside, this is no different from people selling bunker beds, gold, ammunition, crypto, or VPNs. It is specifically for the set of gullible people who think they and their data are so important. The reality (except for 10,000 people or so) is that most lives, and their 'precious' data, are worthless. (I'm not talking about SSNs and bank accounts -- those are well protected by the tech companies HN seems to hate on.)
> Ok there are probably half a dozen more technical details in the blog post. It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this.
and
> And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal.
I'd prefer things stay on the device but at least this is a big commitment in the right direction - or in the wrong direction but done better than their competitors, I'm not sure which.
> As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Compute. You won't opt into this, you won't necessarily even be told it's happening. It will just happen. Magically.
Presumably it will be possible to opt out of AI features entirely, i.e. both on-device and off-device?
Why would a device vendor not have an option for on-device AI only? iOS 17 AI features can be used today without iCloud.
Hopefully Apple uses a unique domain (e.g. *.pcc.apple.com) that can be filtered at the network level.
I think the main reason might be that the on-device AI is fairly limited feature-wise. For Apple to actually offer something useful, they would need to switch between device and server constantly, and they don't want to limit the product by allowing users to disable going to a server.
With OpenAI calls it's different, because the privacy point is stronger.
You would have to activate a clearly LLM-powered software feature and have internet access. I don't know if settings will appear to disable this, but you could imagine it would be the case. This isn't just siphoning off all your data at random.
Would Spotlight be considered a "clearly LLM-powered software feature"? Will there be an option for "non-AI Spotlight"? Disabling dozens of software features, or identifying all apps which might use LLM services, is a daunting proposition. It would be good to have a PCC kill switch, which makes opt-in usage meaningful, rather than forced.
Privacy "consent" is fundamentally broken. We've moved from "we're doing whatever the fuck we want" to "we're doing whatever the fuck we want, but on paper it's whatever the fuck you expressly asked for, whether you wanted to or not."
If you have no threat model and want to opt out of random features just because... you probably shouldn't use Apple products at all. Or Google or Microsoft.
For years, Apple has had a documented set of security policies to disable off-device processing (e.g. iCloud, Siri) via MDM / Apple Configurator. Apple has also published the details needed for enterprise network filtering to limit Apple telemetry, if all you want from Apple servers are software security updates and notifications.
With a hardened configuration, Apple has world-class device security. In time, remote PCC may prove as robust against real-world threats. Until then, it would be good to retain on-device security policy and choice for remote computation.
Apple does not publish details to limit telemetry. Nowhere in MDM or in their docs do they tell you that you can safely block xp.apple.com (telemetry) but not gs.apple.com (the boot-ticket signing server for updates).
He's not wrong that, given you want to do this at all, this is the best way. The alternative would be to not do it at all (though an opt-out would have been good).
Weird, I get a bunch of music and programming stuff on my Threads feed. It's not very deep, but what's on the surface is quite nice and not a bunch of almost-porn. Twitter's become half porn, though.
Sorry but this is just fatalistic nonsense. A generation ago people did not have anything like the current evolution of surveillance technology. The small number of people who care about ubiquitous surveillance right now are early adopters, not a dying breed.
If it's sold to a US company then it will be our intelligence agencies pumping the propaganda, not theirs. Which is an outcome I'm more amenable to, considering that our intelligence agencies generally have more of an interest in the stability of American society than the intelligence agencies of China.