Build security with the assumption it will be used against your friends (mjg59.dreamwidth.org)
150 points by imartin2k 6 days ago | hide | past | favorite | 29 comments

The difference between sword and shield has never been clear. Unlike many weapons, cyber-weapons are hard to place into neat "defensive" and "offensive" boxes.

But also, code has almost all of the undesirable properties a weapon can have:

A good weapon is single use. That way it does not become ammunition in the hands of an enemy. For the same reason a good weapon is ephemeral, and does not persist in the environment to injure innocents long after the conflict is over (think land-mines). A good weapon is precise, not indiscriminate (a sniper, not gas). And a good weapon is inflexible, working only in the hands that wield it and not easily turned against you. IEDs used against us in many conflicts are often fashioned from unexploded munitions we've fired.

Cyberweapons (which include many things that look "defensive") have all of these undesirable properties. They are reusable, highly flexible, indiscriminate and persistent. People who work on them will inevitably see them used against themselves and their own loved ones.

A bit of a tangent, but swords can be used to parry, and shields can be used to bash people. Many medieval shields even have metal spikes to make shield bashes more effective.

Medieval weaponry can't be put in nice "defensive" and "offensive" boxes either.

Highly recommend the latest series of Vikings: Valhalla, with shield maidens performing some awesome moves :)

Is it really a tangent if that was the very point that started the comment? "The difference between sword and shield has never been clear."

"Medieval weaponry can't be put in nice "defensive" and "offensive" boxes either."

But the main purpose of a sword is to strike and that of a shield is to block. That's what those categories describe, though it doesn't mean they amount to much.

The idea applies just as much to medieval or any-era weaponry, but I first heard it expressed about anti-aircraft missiles:

There is no such thing as a purely defensive weapon. You might think anti-aircraft missile turrets are purely defensive, since they can only shoot down enemy planes flying over your cities. But then you can pack those turrets onto trucks, drive over to the enemy, and deploy them to shoot down their planes flying over their cities.

And so it is with everything. Perfect armor that enemy attacks can't penetrate is an offensive weapon, because the enemy won't be able to stop your soldiers when they invade their land.

But also, conversely, a better sword, or gun, or bomb can work well as a purely defensive weapon, because the threat it projects deters the enemy from attacking you in the first place. This is the basic reason you can't not have a military, even if you're a peaceful state that doesn't intend to engage in conquest. In its most extreme form, this is what has kept the world largely at peace: the threat of nuclear retaliation and subsequent mutual destruction.

War is not about offensive and defensive technologies. Any technology can be used as either (with varying degrees of effectiveness). War has always been about the willingness and ability to get your people to go and beat other people, and keep beating them until one side gives up.

This is, in my opinion, the great lesson to learn from chess. Offense and defense are a duality, not a partition. You can defend a piece by making an offensive threat elsewhere on the board. You can attack a piece by shoring up defenses. Offense and defense are human concepts that clearly mean something, but when you get down to brass tacks and try to define them with mathematical precision it becomes very, very difficult. And potential offense can manifest as real defense, and potential defenses can manifest as real offense, and all sorts of other such relationships can be clearly seen in higher level chess.

One is also hopelessly lost without a map if one tries to understand international relations without this understanding. Frankly, most people babbling away about them don't have it, even yea verily on HN and in the mainstream media. Rest assured the players themselves do. Since international relations also have a large poker element, and you as a normal citizen are shockingly short on information, this isn't a secret decoder ring by any means. But you're completely lost if you don't at least understand this duality of offense and defense.

Tell that to Tony Blinken and MBS...

Those seem more like things that can be good about a weapon rather than things that a weapon must have to be good.

Like, being reliable and effective is probably more important than being foolproof. This is probably true of any tool that's going to see frequent use.

I don't think this has been said enough recently. The ethics of what we're being asked to create is something we all need to consider. I have a lot of friends in tech and I'd say a majority of them have been confronted with ethical dilemmas like this in their work.

It's starting to feel like the era of "don't be evil" is over and everyone is financially stressed enough to do whatever it takes to make a buck. This will obviously become an exponentially increasing issue as AI tech continues to proliferate.

It's not just financial, there is also the geopolitical situation.

> The blame trail should never terminate in the person who was told to do something or get fired - the blame trail should clearly indicate who ordered them to do that.

This blame trail (or any trail of the kind, like immutable logs for cert transparency, say) will most certainly be gamed (given a long enough time horizon) to ensure foot-soldiers are the ones in the line of fire. I don't think audits magically deter powerful entities from wielding their new-found power.
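For readers unfamiliar with the mechanism being referenced: an "immutable log" in the certificate-transparency sense typically chains each record to a hash of the one before it, so rewriting history is at least detectable (even if, as the comment argues, the *contents* of the entries can still be gamed). A minimal sketch of the idea, with hypothetical entry fields:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, action, actor):
    """Append an entry whose hash covers the previous entry's hash,
    so altering any earlier record breaks the rest of the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"action": action, "actor": actor, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    """Recompute every hash in order; return False if anything was altered."""
    prev = GENESIS
    for e in log:
        body = {"action": e["action"], "actor": e["actor"], "prev": e["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "export_user_data", actor="engineer:bob")
append_entry(log, "approve_export", actor="manager:alice")
assert verify(log)

log[1]["actor"] = "engineer:bob"  # rewrite history to shift blame downward
assert not verify(log)
```

Note this only guarantees tamper-evidence after the fact; it says nothing about whether the entries were truthful when written, which is exactly the gaming the comment worries about.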

Cory Doctorow’s latest book, “Attack Surface”, does a good job of exploring this concept IMO. https://www.indiebound.org/book/9781250757517

As do Little Brother and Homeland. They're set in a post-terror-attack San Francisco. Nice easy reads if you are interested in tech and want to indulge in some Big Brother paranoia. As he writes, this kind of stuff is _actually happening_ right now in the world, even if the names and places don't match.

So where's the line? In the age of hyper-optimized statistical inference and AI techniques, is the solution always to abstain? The end result of that seems to be some sort of Luddism.

I trust my friends:

- know that I wouldn’t send them anything fishy

- know not to click anything fishy

- know what fishy is/looks like in modern campaigns

Yeah, this makes sense. You should always think about how any system you make can be abused to hurt others, or make the world a worse place.

But it's not just limited to technology. The same is true of any changes to the legal system, or new laws added there. Or of any 'real world' invention you can think of. Or everything from architecture to the structure of a business or organisation.

So many people design stuff under the assumption it'll always be used by 'the good guys' or 'their side', and end up badly bitten when that turns out not to be the case.

I think we as engineers are trained (or should be!) to think about failure cases. This is just one more to have on the list. But yes, other professions could often use some of our anti-rose-colored glasses too.

This is good advice. One other idiom I would add, that may not exactly be security specific, is:

"Expect your software to be misused - and make it very obvious when that happens."
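One way to read that idiom in code: make sensitive operations declare why they're being invoked, and flag anything outside the expected set loudly instead of letting it happen silently. A minimal sketch; the decorator, purpose list, and function names here are all hypothetical illustrations, not any particular library's API:

```python
import functools
import logging

logging.basicConfig(level=logging.WARNING)

def loud_when_misused(expected_purposes):
    """Hypothetical decorator: callers must declare a purpose; calls
    outside the allow-list still run, but leave an obvious warning."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, purpose, **kwargs):
            if purpose not in expected_purposes:
                logging.warning("%s called with unexpected purpose %r",
                                fn.__name__, purpose)
            return fn(*args, **kwargs)
        return inner
    return wrap

@loud_when_misused({"fraud_investigation", "user_support"})
def lookup_user_location(user_id):
    # Stand-in for a genuinely sensitive query.
    return {"user_id": user_id, "city": "redacted"}

lookup_user_location("u123", purpose="user_support")  # quiet
lookup_user_location("u123", purpose="curiosity")     # emits a warning
```

The point isn't that the warning prevents misuse; it's that misuse becomes visible, which is the obviousness the idiom asks for.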

Hasn't the author worked on making Remote Attestation for Linux? That functionality is ripe for abuse against individuals.

Wow, yeah: remote attestation was near the top of my list of technology with this problem... and you are apparently right: he works on that stuff :/. I wonder if this is him growing a conscience or merely a case of someone who isn't doing enough self-reflection.

I'd guess cognitive dissonance. The prevalent anti-anti-RA viewpoints I've seen basically write off the power imbalance between a corporate attestation-demander and the computationally disenfranchised individual, in favor of asserting that the individual can always "choose" to not participate. Basically the same fallacy as asserting that all employment is voluntary, etc.

(Just for context I think it's a useful starting assumption, but an airtight axiom it is not. And the excluded middle nature of tech means those details being assumed away will grow and spread to most every interaction - similar to how many simple read only websites hassle users with CAPTCHAs these days)

But most importantly: build security features as if they'll be used against you.

In other words, "don't make them too secure"? I've always wondered whether some of those working on locked-down devices had this in mind, or whether it's just by chance that we can still manage to root (most of) them.


I have over 4,000 emails, pictures, addresses, SNS

People just submitted it.

I don't know why.

They "trust me"

Dumb fucks.

For readers recent to HN or the internet at large: this is a Mark Zuckerberg quote from the earliest days of Facebook.

Nah, my friends know better than to use a system I built

The old joke: in a presentation pitching some fine new practice the speaker tells the audience "Hands up if you'd be happy to be on a plane which is running your team's software". Only one hand is raised. "So tell us all what your team does that makes you so confident?" - "With my team there's no risk the plane will even taxi!"

No problem: install it on a laptop, run it, stick it in the overhead locker!


Your comment reads like an insult to the author. Is that what you intended to add to the discussion?

As someone who has a key cross-signed with the author, I'd say yes that's likely. And also an important part of their job.
