Good job by the hacker on the train. The more people get used to the idea that their smartphone won't always work when they want it, the more they won't exclusively rely on it when there is a more significant risk. It's a one-man public service campaign.
But it also makes them liable; messing with other people's tech like this can be construed as attempted hacking. What they were doing was NOT "white-hat hacking"; that term describes the author, who reproduced the attack in his own home on his own device.
Well I'm glad you're certain that none of the passengers on the train were using their phone for critical patient care or handling a time sensitive family issue.
The more reason to educate people on that front. Don't rely on your single piece of tech in your pocket to work all the time. Society ought to reacquire a bit of resilience on that front.
Need to do critical patient care while commuting? Surely you will have made sure to bring a dumb phone with you as backup, if you are really that critical a resource (after all, your battery might be dead, someone might have stolen your fancy iPhone, etc.).
Time sensitive family issue? What if you turned your phone off to read a book? Are you the asshole now? The train driver might have been stopped at a red signal in the Schipholtunnel; not much of a chance of getting a phone signal there. Is he now endangering people by depriving them of their umbilical uplink?
What if the time sensitive family issue happened while you had your phone turned off for some intimate private time with your partner? How will you ever forgive yourself?
The key takeaway here is to understand that there is no such thing as being available all the time. So don't expect it, and don't feel guilty about not providing it.
There are plenty more convoluted, minimal-chance scenarios you can come up with, but the same holds true for anything the people around you do.
It’s amazing what sort of weird opinions people will contort themselves into to make their antisocial behavior “okay.”
Got robbed and murdered? And you only had a deadbolt on your door? Surely you couldn’t be such an idiot as to not have a dual layer steel security door, could you?
Got in a car accident because someone pulled out in front of you? And you were going the speed limit?! What a buffoon. You’re insane to be traveling above 20mph on any roadway, because someone could pull out in front of you at any moment and if you’re smart like me, and not a total doofus, you’re always on your toes.
Got food poisoning at a restaurant? Well if you were smart you’d have asked to see the expiration dates on all their ingredients and observed the kitchen’s methods VERY closely. Make sure you also visit each supplier to observe their hygiene practices, and of course each party who touches each product along the supply chain. Anything short of that is outright negligence on your part.
I’d rather we don’t degrade ourselves into a low trust society due to some abstract desire for “resilience” and instead we just put people in jail to deter people from harming others without their permission.
I think that's a false analogy. I don't have a clear sense on the ethics here, but the basic point is that we should be resilient to crises and broad cyberattacks. Cell towers _will_ sometimes go down, and rare events, such as nation-wide cyberattacks, _will_ eventually happen.
Imagine if [adversary] spends time finding a zero-day in iOS, Android, Linux, Windows, and MacOS, and releases a worm which bricks every computer it gets on (e.g. overwrites every firmware with something maximally malicious). That's within the scope of capability of many world governments.
What happens?
Do people in hospitals die? Does our infrastructure collapse? Or do things keep ticking on, somehow?
Tools like Chaos Monkey intentionally introduce errors under normal operating circumstances in order to make systems more robust in extreme ones. That's a real-world analogue.
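To make the analogy concrete, here is a minimal sketch of the chaos-engineering idea (not Chaos Monkey's actual code; the function names are hypothetical): faults are injected into routine calls so that callers are forced to handle them, which hardens the system before a real outage.

```python
import random

def chaos_call(func, failure_rate=0.3):
    """Wrap a call so it randomly fails, Chaos-Monkey style.

    Injecting faults during normal operation forces callers to
    handle failure paths that rare events would otherwise expose.
    """
    if random.random() < failure_rate:
        raise ConnectionError("injected fault")
    return func()

def resilient_fetch(func, retries=5):
    """A caller hardened against injected faults via simple retries."""
    for _ in range(retries):
        try:
            return chaos_call(func)
        except ConnectionError:
            continue  # a real system might back off or fail over here
    raise RuntimeError("service unavailable after retries")

random.seed(42)  # seeded only to make the sketch reproducible
print(resilient_fetch(lambda: "ok"))
```

The point of the exercise is that `resilient_fetch` gets exercised against failures every day, on infrastructure you own, with the operators' consent.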
For something like this on a train, it's a question of value systems. I can't bring myself to _have_ a value system which would make it okay for me to do something similar, but I can see rational value systems which would allow it (e.g. hedonistic utilitarianism). It's the trolley problem.
All the examples you gave, in contrast, are just victim-blaming. There is no greater good.
By my value system, though, what I would like to see are similar tests done deliberately and with planning. In the abstract, if a federal government took down the internet and cell phone networks for eight hours, or we had planned power blackouts, or similar, I'd be fully supportive. That would require organizations to build in the right types of resiliency. I understand that's practically impossible in the kinds of political systems we have. But in the right political system, I'd like to have a government which works to make us resilient to those kinds of extreme events.
It's not a false analogy at all. You just happen to care about cyber risk and resilience, so you think it's okay to accept a higher cost in pursuit of it.
Here you go:
* You need steel security doors because we should be resilient to crises and broad security threats. You don't know when there will be a major gang war, a neighbor with a psychotic break, a government breakdown, or a ground invasion. It's just better to have steel security doors because there are all sorts of real risks that they mitigate.
* You need to drive slowly because some day someone will pull out in front of you, even by accident! Accidents like that happen hundreds or thousands of times per day, so it's frankly insane not to be prepared for it.
* You need to check all your restaurant's ingredients because someone will forget to throw out the milk one day. We need to be resilient to this because it's certainly going to happen on occasion, and the right way to achieve resilience is to shift that burden onto end users, or at least to continually poison end users until they demand action from food regulators that we have a 100% foolproof system to prevent bad milk from ever making it into a restaurant meal.
N.B. The comparison to Chaos Monkey is the false analogy. People run Chaos Monkey on their own infrastructure. And yes, if you run a tool like that on someone else's infrastructure without their permission, you're the asshole.
The core issue here is _systemic risk_ and _systemic resiliency_. The risk profile of driving does not change. By driving a car, I expect a risk of 1.5 deaths per 100 million miles driven. We all agree that's a reasonable risk profile.
On the other hand, rare events are things like natural disasters, wars, plagues, asteroids hitting the planet, and so on. Day-to-day decision-making does very poorly at preparing us for those risks.
> And yes, if you run a tool like that on someone else's infrastructure without their permission, you're the asshole.
No. This is wrong.
Things like bug bounty programs were created precisely because whitehat and grayhat hackers made us more resistant to blackhat hackers. If someone tries to compromise my bank's infrastructure _with the intent of surfacing vulnerabilities and without intent of stealing my money_, that might be bad for quarterly profits and my bank might not like it, but it's _good for me_.
Free markets push towards low resiliency by offloading risks onto consumers. Behaviors like this change market dynamics in positive ways.
Ah got it. So if someone pulls up alongside you at 80mph on a highway and remotely disables your vehicle without your prior consent and you crash into a barricade and your entire family dies, we can just take note of the threat vector and find comfort in the fact that maybe an auto manufacturer will harden their cars against future risks. Makes a ton of sense.
> bug bounty programs
Who decides whether to operate a bug bounty program and the parameters of said program?
> So if someone pulls up alongside you at 80mph on a highway and remotely disables your vehicle without your prior consent and you crash into a barricade and your entire family dies, we can just take note of the threat vector and find comfort in the fact that maybe an auto manufacturer will harden their cars against future risks. Makes a ton of sense.
A better example: Someone casually remotely disables all cars in parking lots. They need to be unbricked, which is a month-long process. The outcome is cars are built with security built in. That prevents a terrorist attack six months later where someone was plotting to disable thousands of cars on highways in the way you described.
Cost: Hundreds of parking lot vehicles disabled, with a range of (very real) consequences, such as missing job interviews, missing school pickups, missed medical appointments, etc.
Upside: Dozens of lives saved (but since this never happens, it's abstract lives)
It's very much the trolley problem. It's of course possible to set up absurd trolley problems (pull a lever, kill 5 people instead of 1, rather than the other way around, as you're doing).
People have different answers to the trolley problem.
Yep, if you brick a bunch of people's actual cars, on which they actually depend, in exchange for some hypothetical (?) future (?) hardening (?) against a hypothetical (?) future (?) threat (?), then yeah, you're an asshole. And you should just go to jail.
If you like trolling people you can just say it. No need to make up this "no dude I'm actually helping you defend against a maybe-terrorist attack!"
Yeah, it's like a trolley problem where you tie both sets of people to the tracks and put the trolley in motion. Brilliant.
I am trying to explain to you why people might have a different value system from you.
It's a lost cause.
I, and many others, have the ability to understand value systems we don't share. It would be a good skill to consider developing, in an increasingly diverse world.
You're acting like I'm unaware of the ideas of redteaming or chaos engineering or bug bounties or system resilience and that there's some "value system" blindness that prevents me from understanding it.
I understand and value all of these things. What I'm explaining to you is the ridiculously, ridiculously obvious point that responsible redteaming, chaos engineering, or bug bounty pursuits do not put innocent bystanders at risk. And if you do put innocent bystanders at risk, you are an asshole. Sure, I get that some people have value systems that dispute this, and I'm perfectly comfortable saying those value systems are not worthy of respect.
Intelligence isn't thinking so hard and so abstractly that you can no longer identify antisocial behavior, or the inflicting of harm on innocent people for very likely zero practical societal gain. That's just, again, a convoluted excuse for being an asshole.
A better example would not be disabling all cars in a parking lot. The cars are inert. The phones are active, so driving down the road was apt.
It's not a trolley problem, because the 5 people might be on the track but the 1 person is 200km down the track.
Bluetooth has always been leaky, but apart from being trapped on a train it's mostly mitigated by being able to walk away. It's not something techs are unaware of.
Also, 99% of the people on the train wouldn't have any idea what's going on. Calling tech support later would have zero effect. So it's not highlighting the issue to anyone who could remotely address it.
The dongle dude here was just being a jerk because he could. Playing loud music would have been about the same. "I'm going to take joy in confusing/annoying others."
I'm a T1 diabetic and get my blood glucose values through my phone, bluetooth even. So disabling bluetooth as mitigation is not an option and the phone is crucial. The vendor doesn't support another or even a secondary device.
How am I supposed to "reacquire a bit of resilience" here?
Your life should not depend on your phone working, because it will fail sometimes. It's consumer-grade electronics, not a pacemaker or milspec ruggedized device with a minimal attack surface (i.e., no Bluetooth, but a safer alternative).
What do you do when your phone gets stolen or simply breaks? Complain loudly at the very least, and have a backup in place (but I'm sure you do).
(I'm also T1.) When my phone (screen) broke, I contacted my hospital and got a separate device (reader), but it took a few weeks. Disabling phone access isn't okay or justifiable in public places. I'm not defending the existence of BT vulnerabilities, but what the guy did on the train was dumb and antisocial.
Phones are ubiquitous in every country I've spent a reasonable amount of time in so far. If my phone breaks I can just walk into a store and get a new one. The sensor will transfer its connection to the new phone within minutes. So that really is not that big of a deal.
Some edgy liberation fighter DoS'ing my phone, however, is.
Also note that there are people who have such sensors and insulin pumps connected in a feedback loop. I don't personally, but this exists. There is also a small open-source scene around the topic. Examples are https://openaps.org/ and https://nightscout.github.io/
> Your life should not depend on your phone working
My life as diabetic depends on many, many variables. Theoretically, all it needs is an incorrectly labeled meal to do serious harm. Or the delivery guy transporting my insulin didn't maintain cooling. I can only do so much and still live a life without constant fear.
While this may sound dangerous to you, this way of measuring your blood glucose is extremely liberating for diabetics. The alternative is just so much worse.
People used to die before some life saving technologies were invented.
People can die these days if their technology fails. They might not have a better alternative, because they can't afford it or it hasn't been invented yet.
Teaching someone a lesson or educating society sounds like a line from a villain from a Bond movie. A sociopathic villain. You are making comments that a sociopath would make.