Much as perfect freedom is unattainable, so is perfect security, and I think it's time we rebelled against the desire to achieve it. Unfortunately, it seems desiring any sort of security is only going to give the companies more leverage to use against us, so the way to go is to reject completely their notion of (centralised, authoritarian) security, and make freedom the highest priority. Insecurity is freedom, and that is what we should fight for.
That infamous Benjamin Franklin quote has really taken on a deeper meaning recently.
This is very well phrased.
It gets across something I was having trouble putting into words -- that security and freedom aren't always just opposite goals that conflict with each other. Freedom isn't an ideal, it's a practical system designed to guard against tangible harms.
I find it particularly frustrating that in conversations about security it's difficult to differentiate between the different uses of the word.
There's user security against other users (same system), criminals (remote), criminals (physical access), large entities (ie governments and corporations), and probably a few others.
Then there's owner (or admin?) security, for example IT vs employees who do dumb things and other similar scenarios.
There's also vendor security against user control of devices. Sometimes this is to protect a walled garden, but other times it's just a cheap way to prevent damage to their brand if clueless users do dumb things and then publicly blame the company for their own negligence.
Arguably the first category (user security) is a good thing, but those designing the devices fall almost exclusively into the second two (I would argue mostly bad) categories.
I only see this argument made by Apple fans, and it's always related to branding: something like "we don't want someone's phone or laptop to catch fire and destroy Apple's perfect brand". The ironic thing is that a few days after we had this discussion on HN, we got the news that genuine Apple laptops catch fire anyway.
Would Microsoft like not to be able to repair the servers they run in their cloud? Not to be able to run whatever code they want on those servers? To be forced to run the server hardware manufacturer's code so the manufacturer can make sure MS isn't running the servers incorrectly and making them explode?
I think the best way to balance this vs. security is something analogous to sudo. Design for people who don't know what they're doing by default and don't say you didn't warn them if they break their system with superuser control.
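The sudo analogy above can be sketched as a tiny tooling pattern: routine operations work out of the box, while security-sensitive ones demand an explicit, loudly named escalation. Everything here (the function, the component names, the flag) is hypothetical, purely for illustration.

```python
# Hypothetical sketch of "sudo-style" repair tooling: safe by default,
# with an explicit escalation path for owners who accept the risk.

class RepairLockedError(Exception):
    """Raised when a privileged operation is attempted without escalation."""

def flash_component(component: str, *, owner_override: bool = False) -> str:
    """Perform routine repairs freely; require explicit consent otherwise."""
    safe_components = {"battery", "screen", "keyboard"}
    if component in safe_components:
        return f"flashed {component} (routine repair)"
    if not owner_override:
        raise RepairLockedError(
            f"{component} is security-sensitive; pass owner_override=True "
            "to proceed at your own risk (analogous to sudo)."
        )
    return f"flashed {component} (owner accepted the risk)"
```

The point of the pattern is that the warning lives in the interface itself: nobody trips over the dangerous path by accident, but nobody is locked out of it either.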
How can you verify that the motherboard you sent out is the one you get back and it doesn't have extra hardware inside the case?
msft isn't wrong that creating a trusted supply chain for hardware is difficult, but I think that 'how can I trust the repair shop' and 'how can I trust MS' are the same problem.
It is certainly not by obscuring the contents and revisions of that motherboard, such that end users can only trust it as an opaque blob.
In software land, we're smitten with this assumption that security can be made absolute. While this is true for software (Curry-Howard), it is not true for anything else (eg it's easy to cut a car's brake lines). Real world safety depends on local control, post-facto enforcement, and a general lack of maliciousness. Whereas with software we're dealing with remote access and extreme scalability (eg remotely disable the brakes on all cars of a certain make/year for the lulz, untraceably).
Attempting to extend the software security paradigm (public key authoritarianism) onto hardware is completely wrong. Device security needs to be focused on mitigating software's own "winner take all" failure mode, rather than attempting to move even more eggs into one basket.
Have your TPM cryptographically validate the components it's attached to. Coupled with a robust blinding implementation, you're only really trusting the chip manufacturers (and their tamper proofing of the silicon) at that point.
Add an ownership model to the TPM and you've solved anti-theft. Then allow for the addition and removal of authorized components by the owner and you've preserved the right to repair.
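The ownership model described above can be sketched in miniature: the "TPM" measures the components it sees and only proceeds when the configuration carries an owner endorsement, which the owner can re-issue after a legitimate repair. This is a toy model under stated assumptions, using an HMAC with an owner-held secret in place of the asymmetric attestation keys and PCRs a real TPM would use.

```python
# Toy model: a "TPM" that releases secrets only for owner-endorsed hardware.
import hashlib
import hmac

OWNER_KEY = b"owner-secret"  # stands in for an owner-held credential

def measure(components: list[str]) -> bytes:
    """Hash the sorted component identifiers into a single measurement."""
    return hashlib.sha256("|".join(sorted(components)).encode()).digest()

def endorse(components: list[str]) -> bytes:
    """Owner signs off on a component set (e.g. after a legitimate repair)."""
    return hmac.new(OWNER_KEY, measure(components), hashlib.sha256).digest()

def boot_allowed(components: list[str], endorsement: bytes) -> bool:
    """Boot proceeds only if the current hardware matches the endorsement."""
    expected = hmac.new(OWNER_KEY, measure(components), hashlib.sha256).digest()
    return hmac.compare_digest(expected, endorsement)
```

A swapped part makes the measurement diverge, so a thief's replacement screen fails the check; the owner, holding the credential, simply endorses the new configuration, which is the anti-theft and right-to-repair properties coexisting.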
> and it doesn't have extra hardware inside the case
This is the real issue I suppose. I prefer choosing to trust a particular repair shop over gluing it all together and hoping for the best.
>if Microsoft wants to make devices that nobody can service and repair without breaking their security model, they’re entitled to do that. They can make Surface Pros so hardened and tamper proof that merely opening them will destroy them.
>What they can’t do is make devices that are repairable, and then lock out everyone but their own service technicians.
There is no distinction here. The tamper proofing is what locks out everyone but their own service technicians.
Anecdotally, I went to Best Buy for an iPhone screen replacement (a year ago in NYC), and the tech wanted my PIN, which I of course declined to give. Their reasoning was that they needed it for their test fixture to make sure the replacement worked. I declined again. The tech said that in that case they would have to wipe my phone; I said that was fine and much preferred to the alternative.
When I came back a few hours later to get my device, it had not been wiped, and the new screen worked just fine.
I was always curious why the heck they would need my PIN!
I imagine >95% give their PIN.
That's a lot of work and takes quite a bit of time.
I can also look up how repair technicians have affixed GPS trackers to cars or defrauded consumers during repairs. But I'm still glad that I can take my car to a non-manufacturer repair shop.
The "TPM's are all going to fail" argument is the "terrorists will win" argument, except for HN instead of regular citizens. In most cases this is an unnecessary conflict -- compromising the TPM wouldn't be necessary for many of the repairs consumers want to make.
From the article:
> After all, TPMs are in Dell computers. Dell makes diagnostic software and diagnostic codes and schematics available for their hardware and I haven’t heard Microsoft or anybody else suggest that a TPM on a repairable Dell laptop is any less secure than the TPM on an unrepairable Microsoft Surface.
We could make a lot of progress on right to repair without ever coming in conflict with existing TPMs. If that's Microsoft's concern, then fine -- I'm OK starting with the noncontroversial, unsecured stuff first, like keyboards, glass, and batteries.
To be clear, I do not have a lot of sympathy for Apple's arguments around secure enclave. As far as I'm concerned, right-to-repair is a tangible issue that affects nearly everyone, and having the government install a separate Touch ID button is a largely theoretical risk for most people. But even the iPhone has parts that aren't affected by secure enclave -- so if it takes this argument off the table, I'm willing to let it go.
We still have Apple blocking replacement parts as counterfeits when they enter the country; and we still have a bunch of diagnostic information, specs, and tools that are only available to authorized technicians.
It's also worth noting that Error 53 wasn't forcing consumers to make a choice between the fingerprint reader and a 3rd-party screen replacement -- it was "bricking" the phone. That Apple was willing to roll that back and change their design so that Secure Enclave only disabled fingerprints, says to me that even Apple has admitted that secure enclave doesn't need to monitor every single part of their phones to keep consumers safe.
The setting can only be changed from within the OS, so an attacker or service technician can't do it without my permission. Some Android phones even allow you to re-lock the bootloader afterwards by replacing the chain of trust, so as far as I know there's no technical reason why we couldn't have the best of both worlds.
But not all of them. Don't blindly re-lock your bootloader unless you know for certain that your phone supports it, or you'll risk basically bricking your device.
Note that this is not hypothetical - lots of unlockable Android phones will display a warning that they were rooted on boot.
Yes there is. The question is now "how does MSFT decide _who_ gets the right to repair their devices?"
If it's a reasonable and non-discriminatory process, then that might be fine, if they're arbitrarily deciding, then that's likely an issue the FTC _should_ investigate.
They're described as PAWs in the document, but they are called SAWs internally (as one of the notes explains down the page).
Then what locks out their own service technicians? Does Microsoft not comply with government requests, and give some promise of that in an enforceable way? Would a court even enforce such a promise?
I fail to see any reason why someone should trust Microsoft any more than Best Buy. And even then what forces you to give your device to Best Buy instead of someone you do trust?
In the end it was far easier to just switch over to a Thinkpad.
I do support right to repair for the record. I just think this isn’t a great example of how the right to repair would have actually helped someone
Right to repair essentially boils down to the product needs to be repairable in a reproducible way. It doesn’t state how that repair needs to be done or what exactly that means.
The big one is repair *guides and instructions* along with *availability of repair parts*; that's what is most important, so that a repair shop or savvy consumer could indeed do their repairs without pushback from the manufacturer. It also means the specifications for those repairs and parts would be open(ish).
Having to openly document repair procedures and offer open part availability is, I think, what companies are wrongly trying to avoid, because third parties could then manufacture spec-compliant parts that wouldn't void the product's overall warranty.
I think redesign is an insignificant factor.
EDIT: I just want to note that in the right to repair debate, planned obsolescence in product lifecycles is actually a legitimate argument for right to repair. I don't think devices are built to live longer than a product generation's lifecycle (for instance, an iPhone has a typical generation lifecycle of 3-5 years, I believe).
Right to repair would reverse that trend at least somewhat
If you need tools or instructions then the parts aren't really consumables.
However, I am hardly an expert on this. I can speak to the Surface though (we use them at work), and it most definitely is covered.
And, unfortunately, at least for certain purposes, X230s/T430s are the most recent reasonable laptops anyway: after that it's currently impossible to replace any of the Intel/AMD firmware with something inspectable.
Then make it so the consumer can always tell if the security became ineffective.
Now I do believe we should be able to modify the software, but the infrastructure is not there right now.
If that is the case, every computer could be fundamentally compromised if you left it out of sight for long enough.
There are a lot of different threat models here. The problem is that for the serious ones, like a state-level attacker, anything but continuous physical security is hopeless. But for the less serious ones, all of this faux spycraft is nothing but an excuse for anti-consumer behavior because the path of least resistance for your kids to get into your phone is by shoulder surfing your PIN, not using 0-days to install custom device firmware etc.
It's absolutely a correct statement from a user perspective.
User freedom, not so much.
But the power to do that is also the power to screw it up. If any 3rd party can repair something like biometric hardware, any 3rd party could also compromise said hardware. Some people may find the benefits of the former outweigh the latter, but I do think it should be a choice in the law, and that liability should adjust accordingly. Ie., you can upfront on buying a device request that it not enforce cryptographic reqs for hardware repairs (this could be controlled by a permanent fuse or many other ways), but alternately you can also request that continue to be a requirement. In my case for example I do not want to buy an iPhone that is "right to repair enabled". I don't want the magic circle of who can mess with it expanded any further than the minimum, which is Apple themselves who are by definition inside the tent anyway.
I also think the "right to repair" movement is a symptom rather than a cause, and obscures the real problem for the vast majority of the market. The true core issue is that, in America in particular, legal standards for fitness for purpose are simply way, way too loose. The reasonable expectation most people have when buying something expensive is that it'll be in working order for a proportionate number of years. Not forever of course, but a $200-400+ bit of electronics shouldn't be dead in 13 months either. A $500-1k one should probably last at least 4-5 years without further cost, etc. The retail price should reflect whatever it takes to make that happen. Manufacturers are the ones in a position to deal with that: they have the best stats on failure rates and can make decisions about tradeoffs between cheapness of repairs, spending more on QA, cases where more expensive repair requirements might reduce the need for repair even further in the first place (or provide other feature value), etc. It simply shouldn't be the consumers' problem at all, beyond evaluating the retail price. Yet it's completely standard to have something ludicrous like a single year warranty.
Essentially, it's a classic market externality problem. Everyone does expect their devices to last, but they're forced to roll the dice on whether they specifically have bad luck and have to pay for expensive rare repairs all by themselves. The manufacturer gets to advertise an artificially low price by externalizing the failure rate onto the customers, and can even charge extra for what should be standard. Instead, warranties should be adaptive, something like "1 month for every $12 of retail price, up to a maximum of 5 years". Then let the market sort it out from there. I'm worried "right to repair" will ultimately be camouflage that lets manufacturers skip out on their real responsibilities in some cases. Right to repair could make out-of-warranty repairs cheaper, but you shouldn't have to pay at all within a reasonable normal life.
1: which would be defined as something along the lines of "anyone who has paid, either upfront or on an ongoing plan basis, for hardware and possesses the right to control access to it." Lawyers could make it watertight, but basically a definition which would explicitly not allow any sort of "oh we were only leasing it to them!" loopholes.
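The adaptive-warranty rule floated above ("1 month for every $12 of retail price, up to a maximum of 5 years") reduces to a one-line formula. The $12-per-month rate and 5-year cap are just the example numbers from the comment, not figures from any actual proposal.

```python
def warranty_months(retail_price_usd: float, usd_per_month: float = 12.0,
                    cap_months: int = 60) -> int:
    """Months of mandatory warranty implied by the sliding-scale rule."""
    return min(int(retail_price_usd // usd_per_month), cap_months)
```

Under those example numbers, a $300 gadget would carry roughly a two-year warranty, while anything over $720 would hit the five-year cap, which is the "retail price reflects expected lifetime" idea made concrete.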
Rossmann, the YouTube figurehead of this movement, talks about this at length in every single rant of his that I've listened to on the subject.
iFixit, the corporate figurehead of the right to repair movement, has this to say regarding Error 53, one of the first major invocations of security by the anti-repair lobby to excuse their behavior:
> Obviously, the Touch ID sensor is an important security measure...
So: what are you talking about? The RtR movement doesn't only acknowledge the tradeoff, they tackle it head on, and thanks to Error 53 they have extensive first-hand experience informing customers of the issue and surveying their opinions.
Of course it can be part of a solution. Responses like yours, which display zero consideration of threat scenarios, time/info/resource costs, or any potential improvements are somewhat tiresome these days.
>No crypto in the world can help you if I simply bug the fricking physical keyboard!
I'm sorry, can you point to that "fricking physical keyboard" on my iPhone? I'll wait.
And even if you want to talk purely about PCs and attached keyboards, there is no inherent reason those couldn't be locked down too were it a general problem, and have a system refuse to trust any peripheral by default. HSMs can also be part of an overall plan that can mitigate some damage even from a keyboard bug. Furthermore, breaking into a private residence or business is a significantly more time/information/resource intensive problem then subverting a centralized repair place through which lots of hardware passes and is left unattended, and which will not in fact generally include peripherals (if my computer is broken, I bring just the computer in, not the keyboard/mouse/monitor).
If you've looked through any number of the NSA or CIA leaks (Vault 7, for example) you'll have seen a device that looks like a standard Ethernet magjack + USB connector but has a tiny processor embedded inside of it (it leeches power from the USB power rails, IIRC) that can be used to exfiltrate packets and the like.
It's not impossible to get bugged when you send your PC in for repair. It's also probably unlikely considering that there's many other easier ways to grab your data.
I don't see how that's related? PCs right now don't make use of the kind of cryptographic lockdown iPhones do. In the case of desktops that's likely because it's assumed physical security can be taken as more of a baseline, though in the case of notebooks it's probably more just inertia. The whole "right to repair" movement is precisely because this has been changing. I mean, what you wrote directly argues the case for locking down of hardware, encrypting everything along the busses between chips or having destructive tripwires or both, and so on.
>It's not impossible to get bugged when you send your PC in for repair. It's also probably unlikely considering that there's many other easier ways to grab your data.
The nature of security though is that it's worth being forward looking to what will come down the line in the future. Particularly if we're talking about legislating it and putting the force of law behind making one decision or another, which tends to then be quite difficult to change. Isn't it worth being cautious about?
Well, you have the digitizer, but since the buttons are highlighted when pressed, the display would work as well.
As pointed out in the article, you have to trust any third party you hire. This isn't unique to electronics.
A cleaning service could secretly install cameras in my house. An auto mechanic could put a GPS tracker in my car. The local pizza parlor could deliver using bugged cardboard boxes!
Or perhaps you're worried about a random stranger at the local coffee shop dismantling your phone, installing a bugged fingerprint reader, and fully reassembling it, all in the time it took you to use the restroom and with no one noticing? Yeah, I didn't think so.
We don't need to lose our right to repair. Just don't take your electronics to unreputable service centers.
But the whole point is what about 3rd parties I don't hire? The point of "right to repair", at least in the formulations I've seen so far, is that the manufacturer must offer privileges to any 3rd party to perform repairs. How well though will all those 3rd parties that can now perform repairs verify that it is, in fact, the owner that does it? How does the owner really verify things? In the real world amongst normal people, how well can it really be expected vs the first party, which is the one with the most easily understood incentives?
As I said in my original post, I do not oppose people who want to be able to grant 3rd parties the right to repair. But I also want to make sure that, like right now, owners can also forbid anyone except the original manufacturer from having the cryptographic rights to alter their device hardware. As in, even if it's stolen from them, or they later decide they "want" to give permission (which isn't distinguishable from a social engineering attack). That's why I propose making it a build-to-order option, with some small external identifier on the case. I don't simply want Apple or whomever to be forced to give keys away to all who ask.
>Just don't take your electronics to unreputable service centers.
This is a very shallow snobbish tech elite answer, and the same one our whole industry gave (and in many cases still gives) to people when it came to malware and the like. "Oh, just don't go to disreputable websites. Don't go installing software willy-nilly. Haha dumb lusers, PEBKAC." And then Apple came along and offered something different, the shocking concept that maybe it shouldn't be regular users' fault, that they're not fundamentally stupid, that why shouldn't they just be able to browser anywhere and install absolutely anything that catches their eye and never have to deal with any long lasting unknown effects? Even when Apple's execution has failed, the fundamental question there has been pretty successful.
Of course, Apple has also destroyed a lot of valuable freedoms along the way, many of them unnecessarily I think. They clearly also have cross incentives to misuse their control, and have. But I can't help but think that a lot of that is our fault, our industry's fault, for not trying to find a better balance of answers and making it standard long, long before the iPhone.
What exactly is the threat model you're trying to protect against here? Real life isn't a Hollywood spy movie. If you didn't hire them, then how do they have your device and why haven't you noticed that it's missing?
Granted that a robust hardware based anti-theft solution would be nice to have. That's largely unrelated to the current topic though, provided that it's implemented in a reasonable manner.
> or they later decide they "want" to give permission (which isn't distinguishable from a social engineering attack)
This is the classic argument that users are too stupid to be provided freedom. Outside of technology, this is typically recognized as authoritarian.
> > Just don't take your electronics to unreputable service centers.
> This is a very shallow snobbish tech elite answer
No, it's literally a practical answer. Just take your device to Best Buy or the local equivalent. Reputation is literally how the entire market manages to function on a day to day basis. Tech isn't even slightly unique in this regard.
My point here is that a local brick and mortar repair shop has invested a significant amount of capital to set up their operation, so they presumably care about their reputation. Furthermore, committing crimes under such an arrangement would be ill advised due to how easily they could be tied back to the responsible party.
In contrast, the malicious web site or software example you bring up isn't even remotely comparable. In your example the bad actor exists outside of the local legal jurisdiction, the "storefront" is cheap and easy to set up (ie there's a low barrier to entry), and the responsible party can't be easily tracked down. Low risk, high reward, and a low barrier to entry. Coupled with inexperienced users this is a recipe for disaster, and indeed malware still shows up on even well curated app stores from time to time.
[autocorrect wanted to change haxors to hackers. I’m not sure if that’s good or not]
Here's what I mean by plausible: "If we filled it entirely with resin and glue it is less likely to break and hence less likely to need repair, but doing that makes repair impossible"
I do not mean good reasons, I am specifically saying that there are "reasons" that are more plausible than "security"
My keyboards have a dollar symbol on the keyboard, a currency I don't use and have never used and no major country in this continent (Europe) uses. It's high time we had the right to remove this nationalist/capitalist advertising from our devices. Imagine if every single one of your computers had to show a dollar symbol on startup, right now it's just on your keyboard.
No number pad on laptops was bad enough (but understandable). Now no super, home, or end keys. Caps lock (probably the only key I never use) seems to have been replaced by a search button. An external keyboard is now an absolute requirement for power users I guess.
Oddly enough, there does seem to be some sort of Google branded key on the left side of the Pixelbook keyboard. Not on any of the others I checked though.