Fandom of actions that negatively impact others is bad, and one day someone will do something they feel is right that impacts you, and you'll say... well, that's not fair.
Also, if the device is bricked very quickly after buying it and installing it, the consumer will very likely simply return it to the retailer as defective, which again pushes costs back to the manufacturer.
That's what the IoT vendors did. ;)
By saying clearly different, I don't mean to minimize the actions of the vigilante. One of the chief characteristics of civil disobedience, for example, is that it resolves that question of good faith: by accepting the unjust punishment, the dissident demonstrates good faith to proponents and opponents alike. I don't yet see how pseudonymous hacktivism keeps that good faith with the public. And that seems to relegate it to either small-scale, symbolic acts like this or large-scale grey-hat stuff that brings lots of unwanted risks/co-optation/etc.
It's actually the main relevant part of the analogy.
It goes to veracity.
There's a person who gave a public talk about manipulating Bitcoins with weak private keys in order to alert the owners that they were vulnerable. But he did it in a way that verified to the owner he hadn't in fact stolen the coins (moving small portions around or maybe signing with the key, I can't remember). He also mentioned in the public talk that the owners of those Bitcoins were totally freaked out by this, and most were never convinced that he was acting in good faith (which is probably a smart assumption on their part).
So the fact that he didn't steal the coins is completely relevant-- it's the very reason he could give a public talk on what is still grey area behavior.
Your hypothetical thief, on the other hand, is clearly mendacious. You have him claiming, "If I don't capitalize on it, then people won't understand the costs/risks." That is clearly false from my real-world example above, and if he tried to give a public talk about how his theft benefited society he'd be arrested.
Your point that I couldn't have given a public talk had I stolen the coins is completely correct. I still spoke with a lawyer about it ahead of time, though. :-P
There was another person, who was somewhat less scrupulous, who would simply steal the coins and watch for someone to complain in public about it, then offer to return them. They use a pseudonym and as far as I can tell have vanished.
So, yeah, even in software one can do as you suggest. It's been done multiple times, with things improving across the board. Under DO-178B, an additional effect was that an ecosystem of tooling, reusable components, and consultants sprang up to make each project a bit cheaper and less risky.
I know it's fashionable to blame the MBAs instead of blame ourselves, but at the end of it, we're the ones who write insecure code. And I don't think that if you give an engineer an extra week or two to focus on security that you'd end up with a measurably more secure device. Securing something is a different skillset from building it.
Pentests are probably the answer.
Or to put it another way, if we're not proposing to bring in an outside team to conduct a pentest, what's the alternative?
Pentests are, to be clear, great, and there are plenty of people who Dunning-Kruger their way through security decisions. But time is definitely a factor in this stuff.
It just seems like pentests need to move from "nice" to "necessary." (Part of that is reducing their cost from $60k to $6k.)
Security has to cost companies something before they demand security talent. The real solution to this problem, I think, is that failure to secure needs to cost (whether in monetary or criminal terms), or it isn't relevant to business concerns.
> security is inherently and inescapably expensive somewhere in the chain
...is the thing that needs to change. Presumably using more automation (e.g. employing more software like http://lcamtuf.coredump.cx/afl/), such that "pen-testing" shifts from being a labor cost to a capital cost.
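To make that concrete, here's a rough sketch of what machine-driven testing can look like with AFL's Rust front end (the afl crate, afl.rs); parse_packet is a made-up stand-in for whatever input-handling code a device actually exposes:

    // Rough sketch of "capital cost" fuzzing with the afl crate (afl.rs).
    // parse_packet is a hypothetical stand-in for a device's input handler.
    use afl::fuzz;

    fn parse_packet(data: &[u8]) -> Result<(), &'static str> {
        // The real firmware parser would go here.
        if data.len() < 4 || data[0] != 0x7E {
            return Err("bad frame");
        }
        Ok(())
    }

    fn main() {
        // AFL mutates inputs around the clock; any panic or crash it finds
        // is a bug nobody had to pay a consultant by the hour to hunt for.
        fuzz!(|data: &[u8]| {
            let _ = parse_packet(data);
        });
    }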
It's the hard stuff that's context- and environment-dependent to a degree that resists automation.
Put OpenBSD and OpenSSH on them, with the configuration explained in a good book on the subject. Write your apps in a memory-safe language that validates external input. The End [for the vast majority of attacks in the IoT space]. It's not as hard as the detractors claim; they just don't care.
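To be concrete, a minimal sketch of the kind of external-input validation I mean (Rust; the field name and limits are invented):

    // Minimal sketch of "memory-safe language that validates external input":
    // reject anything the device doesn't actually expect before it goes further.
    fn validate_device_name(input: &str) -> Result<&str, &'static str> {
        if input.is_empty() || input.len() > 32 {
            return Err("name must be 1-32 bytes");
        }
        if !input.chars().all(|c| c.is_ascii_alphanumeric() || c == '-') {
            return Err("name may only contain letters, digits, and '-'");
        }
        Ok(input)
    }

    fn main() {
        assert!(validate_device_name("living-room-cam").is_ok());
        assert!(validate_device_name("<script>alert(1)</script>").is_err());
        println!("validation checks passed");
    }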
On the one hand, this certainly makes a lot of sense, especially these days with so many stories about terrible engineering (either just bad or as a result of unethical behavior) causing real harm.
On the other hand, it's precisely organizations like this (effectively guilds or unions) that tech "leaders" try to disrupt. They tend to be pretty conservative.
Being able to put your foot down doesn't allocate resources for security updates.
In this, like most things, you need a balance. If you aren't commercially driven in some fundamental way you probably won't last long enough for any of this to make a difference.
Of course, if you apply that the wrong way, you end up with devices that suck and/or harm users. That way typically leads to regulation, since Smith's invisible and myopic hand usually acts too slowly for people to be convinced it will get to the right place if we just wait long enough.
I'm intrigued by this phrase, could you explain it please?
Adam Smith used the phrase "invisible hand" to describe the way markets reward certain business ventures. The previous poster called the invisible hand "myopic" in reference to consumers being focused on cheap devices with features that immediately benefit themselves.
Nothing I said implied bolting anything on prior to involving upper management.
ANY decision at any time during the process can be overruled by any MBA. That needs to change.
If engineers run the shop you might even end up with another Uber. And we all know what a disaster that is.
However, at some level, MBAs and engineering need to be on level terms. If there's a conflict it can be resolved by going higher up the chain and both sides have the opportunity to make their case.
The idea here is that no engineer would knowingly sign off on something bad.
Note we've had worms and such for decades now, and most of them don't deliberately break things. It's generally far more profitable to exploit the resources than to simply destroy them. BrickerBot almost certainly wouldn't exist if we weren't all being affected.
Best-case scenario: users claim the warranty and replace their devices with something better.
Worst-case scenario: users need to buy new gear; they probably won't buy from the same manufacturer, because the last device died for no apparent reason.
Really-worst-case scenario: users buy the same cr*p again and it dies again, until they realize the brand is worthless and buy something a bit better.
Doesn't seem so bad, if the alternative is having their machines take down businesses and other users...
I can only come up with a few possibilities that are life-threatening. Most are just annoying or a financial drain. If we add painful, maybe make an epileptic's smart-TV screen blink rapidly, like that attack on the website. Turn off people's alarm clocks often enough that they get fired and lose health insurance before a major operation. I'm really having to stretch here.
John J. Citizen should be thankful if the person who finds it only wants to deactivate it rather than use it to poison him / shoot him / run him over. No matter who finds it though, it's tough luck for that person; they're the owner of that item in name only, if they don't secure it.
Society has decided in some cases (in domains well-understood by legislators, unlike IoT) that the person doesn't deserve to keep that item if they don't secure it. Example: "Improper storage of a firearm" or the like, is literally a crime in many jurisdictions and can result in losing your gun license. Creating a burden on or a danger to society through your neglect has in that case been affirmed to be unacceptable. The law will catch up with this too, I hope.
How about in arsenic? The Internet of Things is mostly insecure trash that will only be fixed by throwing it away. The manufacturers know this, and simply don't care.
Very few people who use the internet were unaffected by shitty IoT security, and that seems like it was just the start of its capabilities. Something needs to be done to destroy these cyber weapons. If your stupid light bulb is recruited into a cyber weapon, then it should be prevented from harming others.
Right now, we're beginning to treat DDoSes in that "infectious agent"/"your responsibility if you don't act to protect yourself" way. So many people do them, so often and so easily, that "shutting down the botters" one-by-one will never make DDoSes go away. So we have to just figure out how to deal with them. (Which will, coincidentally, make DDoSes actually go away, if everyone ends up immune to them such that it's no longer useful to do one.)
But, annoyingly, we still handle bots programmed to scan for and exploit software vulnerabilities (worms, ransomware, what-have-you) as only intentional malicious action on the part of their original author, to be solved by catching the author. (Not that you can't catch the author—but that won't stop a worm, and especially won't stop someone else from just slightly-modifying and then re-releasing the worm.) We haven't bothered nearly at all with the "how do we make software vulnerabilities, as a class, less exploitable" part of the equation.
Personally, I'm hoping that this decade sees "A-Life" computer worms, that self-modify using (machine-readable?) 0days they discover by spidering the web from their infected hosts. Computers would be being attacked with novel exploits, even with no new malware authors to do the attacking! Then we'd really have to treat vulnerabilities as a fact of life to secure around, rather than something we can stop by just stopping people from bothering to exploit them.
Yes, this is a rotten situation, and I sympathize with the motivation. No, I don't think we should blithely disregard the fact that the worm is likely causing genuine harm and that it was in fact created to cause harm.
Right now we have script-kiddie teenagers; Real Soon Now, there won't be much reason to doubt that your average five-year-old with a YouTube account can slap together something like a ransomware worm from readily available components and spread it a billionfold. And, among 7 billion people and growing, there are going to be a lot of kids thinking that sounds like a fun time.
The only thing to really stop this from being the world we live in, is making worms irrelevant.
(And what we do in the short term, about this case? Honestly, I haven't bothered to think about it. Too "identity politics.")
What do you do when I change my logic to: well, this is a ZERO-DAY exploit, but you need to be patched, without my understanding the complexities of your device or network? We all know QA takes a while because of the variables; look at any Microsoft patch for evidence of that. Your argument makes it seem like, if I decide to weaponize the Shadow Brokers toolkit to lock down and secure networks around the globe, I'm OK because my intentions are good and manufacturers should have secure code without 0-days. What happens when a proprietary driver or component fails because of a change made to the kernel or the way it handles driver functionality? Now I've broken or disabled something because I didn't know the intricacies and instead chose to do what I thought was right.
"No good deed goes unpunished"
I'd say you are changing topics. The topic at hand is about devices that are designed to be insecure, because the involved parties just don't care. The manufacturer KNOWS yet doesn't care because the issue doesn't cause him any harm, and the user just doesn't know.
We are talking about devices that willingly expose themselves to the internet (oftentimes without any valid reason to), that all ship with the same factory credentials (and no must-change-on-first-use policy), etc. This is just malpractice, not 0-day vulnerabilities.
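A must-change-on-first-use policy is a handful of lines, not a redesign. A hypothetical sketch (the data structure and action names are made up; storage and hashing elided):

    // Hypothetical sketch of a must-change-on-first-use credential policy:
    // the factory password unlocks exactly one action, setting a new one.
    struct Credentials {
        changed_from_default: bool,
    }

    fn action_allowed(creds: &Credentials, action: &str) -> bool {
        if !creds.changed_from_default {
            // Until the factory credential is replaced, refuse everything
            // except changing it -- including any remote access.
            return action == "set_password";
        }
        true
    }

    fn main() {
        let factory = Credentials { changed_from_default: false };
        assert!(!action_allowed(&factory, "open_video_stream"));
        assert!(action_allowed(&factory, "set_password"));
    }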
Bottom line: vigilantism has a cost, and picking and choosing the morality of ideals based on your sole opinion is neither appropriate nor legal. Laws exist for a reason.
Instead of just closing/locking the door for my neighbor or calling the cops, I use a bulldozer to level the house to the ground (zero out the flash).
In theory, the "vigilante" could offer his services to the device manufacturer to help remotely clean/update the devices instead of simply wiping them off the net.
This is where your analogy breaks. Who is your neighbor on the internet? The most logical answer I have is "Everyone with a public IP".
Next, who is the internet police? Sorry folks, there isn't one. If my neighbor's house is open, I would call the cops for two reasons. First, I don't want to see their stuff damaged. But also, it creates a public nuisance: some variety of criminals (say drug users or stupid teens) could take up residence in their house, possibly even burning it down, which would make it a direct threat to me.
And that's the problem with our current internet police. They will gladly try to arrest you for breaking into someone's house. But they will not bust the 100,000 houses that leave their front door open, inviting crime into the neighborhood.
EDIT: I agree with the bulldozer analogy.
While you have raised some valid issues, this is not one of them. Having an unsecured device on the internet has some very definite adverse side-effects.
I don't really see, in this case, how they (or almost anyone) are unable to improve IoT (or general) security through other means, or how the consequences of the actions themselves are any different from other forms of attacks on software (like credit card fraud, denial of service, or ransomware).
The arguments from the "hacker" get especially weak when they conclude that the consequences of breaking IoT devices are worthwhile, but that the consequences of IoT devices breaking the Internet don't carry the same weight. You could even argue that it's far harder for most people to influence overall Internet security than IoT security, and therefore the moral argument for breaking the Internet as a way of improving it should be slightly easier to make.
Really? How about you show me the evidence that people are... through "other means"... improving the IoT security of these devices enough that DDoS isn't a big problem any more. I'd love to hear what you've done to convince all the vendors to focus on secure devices instead of profit when targeting markets that will deliver profit regardless of security. Most of us in INFOSEC haven't been able to convince much more than a subset of software and hardware developers to focus on improving security.
The only time vendors ever delivered secure or safe solutions was when sound regulations were forced on them with a requirement they were followed before a purchase was made. That was TCSEC and DO-178B respectively.
Although I wonder: why hasn't someone with deep security expertise, maybe ARM with its mbed, created something developers can't harm, and on the other hand issued a product label saying "this is protected by our stack..."?
I could see that being attractive to some B2B buyers, attracting devs, further strengthening the value of said label, increasing market share, reducing costs, and creating a positive feedback loop.
You didn't make an argument. You made a false claim that there were other methods that work and/or an implication that there wasn't much effort on doing that. All kinds of people have spent decades doing that. They get ignored.
"Why is it at all relevant what I've done and especially since when you don't say what you've done?"
"I haven't seen much convincing being done."
Programmers, support people, architects, tech managers, security experts, and so on have failed to do what you suggested because of the greed and apathy of manufacturers. They write about it all the time on blogs, especially basic QA. They write about it here, too. I asked what you had done, since you might have seen people succeed at convincing greedy hardware manufacturers to do security at a loss. We obviously haven't.
""INFOSEC" (all caps of course because we want to be cool like the military)"
People in the military invented computer security. They taught me. Don't get excited because they called it "COMPUSEC" to differentiate it from "COMSEC." CompSci and business called it information security, with INFOSEC as a shorthand. Later, many in business started calling it IT Security or ITSEC. It's a business term that people from high-security, regulated backgrounds, some civilians, and the military all use these days. We speak differently to laypersons in management or policy-making vs. how we talk to HN techies. Nice try at a trolling red herring, though.
"Yes, you're still not making an argument why these actions would in any way would be a effective way to regulation."
I just told you that regulations on information security were passed that worked and led to secure devices hitting the market. It happened at least twice. Obviously, that means there's a good chance regulating in a similar way, with modern knowledge, would do the same thing again. Meanwhile, nobody is doing anything at any level, you can't convince businesses to act in the general case, and so a vigilante breaching defective, damaging stuff might be the only progress we get in the meantime. It reduces risk and decreases demand for garbage products. Vendors might get the message like Microsoft did, leading to their 180 on security.
I did make an argument; you just missed it. In most subcultures the thing you're doing is the goal, therefore the actions themselves are meaningful (at least according to the participant). Since this isn't the case here, but more of a "the ends justify the means" situation, you have to argue that it actually does. The point isn't that there are other ways, which you incorrectly chose to focus on, but that you have to justify how these actions are appropriate both in themselves and relative to other actions.
> You made a false claim that there were other methods that work and/or an implication that there wasn't much effort on doing that.
As far as I know there isn't much effort going on. This is of course subjective, yet you haven't provided a real example of what you think is a substantial effort that should have led to results.
> Programmers, support people, architects, tech managers, security experts, and so on have failed to do what you suggested because of greed and apathy of manufacturers.
Plenty of manufacturers make secure or at least not obviously insecure devices.
> They write about it all the time on blogs, esp basic QA. They write about it here, too.
The embedded ecosystem, especially in other countries, isn't going to see those blogs or be able to act on them. They aren't ignored so much as never considered.
> People in the military invented computer security. They taught me.
I bet I have more military experience than you. The military operates in a different environment and with different considerations than civilian infrastructure or products. Most civilian security researchers don't have formal training, yet frequently use terms like OPSEC without actually understanding what they mean. Because if they did, they would know that it is to a large degree not transferable.
> Meanwhile, nobody is doing anything at any level, you can't convince businesses to do anything in general case, and so a vigilante breaching defective, damaging stuff might be only progress we can get in meanwhile. Reduces risk and decreases demand for garbage products. Vendors might get message like Microsoft did leading to their 180 in security.
This is just your opinion. If this is how you do security work, I'm not surprised you feel ignored.
The thing is I do have a number of suggestions on "other ways" to improve and/or promote IoT security. I see no point whatsoever mentioning them here though.
It would be their fault. The high-assurance industry has been telling the SCADA and medical industries to get their shit together for a long time. This included pentests showing it could all be destroyed. They even have people at conferences talking about it, with products or basic advice to deal with it.
The reason it's all still vulnerable is that they... don't... care. They turn whatever small amount of money the security would've cost into profit. I mean, in some cases we're talking about remote monitoring that operates one way and could be done with a data diode for nearly impenetrable security. Cheap as hell if you homebrew it on cheap, embedded boxes. Likewise for a FOSS VPN if two-way is required. Instead, a costly system is connected to the wide-open Internet to save a few hundred dollars. They just don't care.
So, you have to make them care. The customers mostly don't, since they often don't know better. Those that do are apathetic, since it will be someone else's problem. That's the best moment for regulation to step in and force a solution. There's no regulation, though. Courts seem unreliable on this, but there's still some hope there. So, your options are waiting for them to hit you, paying exorbitant costs for DDoS mitigation due to problems others are creating (i.e. externalizing), or maybe a criminal just smashes the insecure devices until people stop buying them or manufacturers start securing them. So, I like what's going on, given nothing else is reducing risk as effectively.
1. A car is a necessity that costs a ton to replace. An internet-connected camera or TV isn't. They could just as easily not buy an Internet-enabled appliance.
2. These devices are being used as weapons when people leave them around insecure. Leaving loaded guns lying around is a bit closer but minus the lethality.
3. With cars, we have efforts on safety and security at user side, manufacturer side, and the law. There's no effort to buy secure IoT by these users, to do even minimum protections at manufacturing, or pass laws putting liability on users or manufacturers where it should be. Now, it's more like a car with defective parts that make it hit other cars. A city's worth are affected with nobody taking action but people are told armored cars are available for a fortune.
So, these comparisons to highly damaging thefts of legitimate goods from innocent people are nonsense. There are defective products damaging innocent people, and nobody with the power to legitimately prevent or punish it is doing anything. I'm happy that a vigilante is reducing risk to Internet hosts, plus putting cost on those responsible for that risk.
You used to have what was essentially air-gapped and self-contained.
But then feature x needed an ongoing net connection, and it happens to run on the same SoC as feature y that talks to the CAN bus, and boom.
Neither of the teams responsible for the features considers that something can jump from x to y, almost like an illness jumps between species.
Damn it, the other day HN linked to an article on how VMs sharing hardware could talk to each other using the CPU cache.
That sounds similar to a paper I read ~20 years ago that described a way to move data from a high privilege process, bypassing mandatory access control (>= TCSEC B), using page faults as a covert channel.
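The cache version boils down to timing. A toy, single-process sketch of the FLUSH+RELOAD idea (x86_64 Rust; nowhere near a real cross-VM channel, which needs shared physical pages, fences, and calibration):

    // Toy illustration of a cache-timing covert channel: the "sender" either
    // touches a cache line or not, and the "receiver" infers the bit from how
    // long a reload takes. Only meant to show why "no explicit channel"
    // doesn't mean "no channel".
    #[cfg(target_arch = "x86_64")]
    fn main() {
        use std::arch::x86_64::{_mm_clflush, _rdtsc};
        let buf = vec![0u8; 64];
        let line = buf.as_ptr();
        for &bit in &[1u8, 0, 1, 1, 0] {
            unsafe {
                _mm_clflush(line); // receiver: evict the line
                if bit == 1 {
                    let _ = std::ptr::read_volatile(line); // sender: touch only for a 1
                }
                let t0 = _rdtsc();
                let _ = std::ptr::read_volatile(line); // receiver: reload and time it
                let dt = _rdtsc() - t0;
                println!("sent {}, reload took ~{} cycles (fast => 1)", bit, dt);
            }
        }
    }

    #[cfg(not(target_arch = "x86_64"))]
    fn main() {
        eprintln!("this sketch only runs on x86_64");
    }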
> it happens to run on the same soc as feature y that talk to the can bus
I wonder how many people will have to die to teach car manufacturers the lesson that there shouldn't be any electrical connection at all from the internet to the brakes.
So then the question becomes: How are we going to educate engineers about this class of problems?
Feelings are irrelevant.
Some vigilante's hack that says "Turn the car off at 10 mph or less" is a far better outcome than the attacker's option of "Press the gas and turn the wheel left as hard as you can at 100 mph".
Thank you. I am tired of these so-called arguments that start with "how would you feel if". If we're going by my feelings, you're all in deep trouble.
Regulations are put on companies once their freedom of choice, when abused, starts harming people. I think IoT is a perfect example of this. Today the manufacturers have a great deal of freedom. Their lack of self-regulation will require others to step in and regulate them.
On the matter of vigilantes: This is a complicated topic, but I support them doing this, even if it harms myself or someone I care about. If this problem is not stopped sooner than later, it will explode into a much bigger economic and/or societal issue that will be difficult to contain. I have warned the people I care about already.
The current state of IoT is a complete lack of responsibility. I would even support someone bricking every piece of machinery they can, including cars, heavy equipment, power plants and anything else that was built without proper engineering.
If someone's dishwasher were streaming pirated movies, would they care? If their children's bedroom were unknowingly being streamed to pedophiles, would they care (obviously they wouldn't like it, but caring requires knowing)?
Vigilantes may be the least-worst option.
It would be nice if we could live in a world where we all trust each other, and maybe with physical things this is attainable. But the IoT is a worldwide attack surface. It's open to nefarious actors ranging from junkies with stolen laptops, all the way to state-sponsored hacking organizations with billion dollar budgets. Trust and goodwill aren't options anymore.
people are connecting medical devices to the internet?
There's nothing inherently wrong with connecting medical devices to the internet, and running an outdated OS on your specialized equipment is fine too as long as it's not being connected to any unsecured networks. But running a known insecure OS on an internet connected life support device has got to be a violation of some law or ethical regulation.
Medical or critical devices should never be exposed to the Internet, especially if badly configured. If there's something illegal involved here, it's putting lives at risk by not implementing proper security.
If I had to find an analogy: someone hung a grand piano from a roof by a shoestring, and the hacker cuts the string at 4:00 in the morning, letting the piano fall before the string snaps on its own later, with a much higher probability of killing people.
It's still a dangerous and wrong act, but it prevents a much worse one.
First, <2,000 devices hit per version? Mirai certainly doesn't show limitations like that.
Second, erases and corrupts? Unless I'm missing my mark, bricking a device that's running on firmware takes a fair bit more targeting than just adding it to a botnet.
edit: Ars has more info: https://arstechnica.com/security/2017/04/brickerbot-the-perm...
Apparently it specifically targets devices open to Mirai, and claims a 2,000,000+ kill count. Not sure what that means for medical gear, but it does mean XP is safe.
This is because it's cheaper in the short run to string a single physical network and then use VLANs etc. to attempt to keep medical stuff from talking to accounting or the visitors' WiFi.
This is so a single overworked nurse can monitor a number of patients from a bank of monitors hooked to a thin client near the ward entrance.
But in this instance the weighting is done by beancounters who look at salaries as an ongoing expense, while tech is an investment that pays for itself the longer it can be used without further expenses.
If someone has life-sustaining medical equipment and they're not maintaining it by ensuring it gets its patches in a timely fashion, then that right there is where the blame starts. Failing to do so is no better than ignoring frayed wires on an extension cord.
The real horror is that such poorly designed devices would ever be deployed for such important uses. Things like BrickerBot don't even show up on the same scale.
Tell the family of someone killed that, "____ shouldn't have purchased a device without knowing how to secure it!"
1) This attack is not using 0-days. It's using vulnerabilities that have been in the wild for almost 6 months now and are so trivial to exploit that some security researchers called the exploits "amateurish". These types of devices have been used to DDoS lots of internet infrastructure. What, short of something like this, is going to get those devices and their manufacturers to secure their hardware, given that Mirai wasn't enough to convince them?
2) I honestly think that finding/making a legal means for this sort of scan (specifically, scanning to check for trivially insecure devices, and bricking them if they cannot be patched) to happen on a consistent basis is something that the EFF or the like might want to look into. The problem with vigilantes is that they lack accountability, so while I might personally approve of the current approach from what I can see (even as I recognize it as illegal), it could easily take a turn for the worse. I think a standard requiring devices to survive X hours connected to the internet, and requiring that a certain number of devices (say 10) survive 6/12/18/24 months down the road or face a recall, would be a starting point. There are a lot of contingencies to work out, such as personal DIY projects and the like; it's not 100% fleshed out.
3) As far as I can tell, the analogy is more along the lines of a bunch of people buying a bunch of stereos and/or loudspeakers that are trivially hackable (but the consumers aren't aware of that), and then putting them everywhere. If those loudspeakers and/or stereos started disturbing the peace, or getting used in ultrasonic attacks on power lines or water mains, you can bet that police would be destroying them, and/or allowing others to do the same.
Really? He doesn't see how a car is different from a webcam? And why there are different safety standards for each?
Their goal is laudable, but this seems like a fun way to engage in vandalism while hiding behind an ideological aegis. The sort of thing I'd do when I was 15.
The first computer virus was an experimental self-replicating program called Creeper.
And the second computer virus was Reaper, a similar program created for the sole purpose of deleting Creeper.
It's a risky approach. It could have unintended consequences. If it's a worm, it could spread out of control and cause considerable harm purely from its transmission.
As attractive an idea as it might be, it's dangerous.
If my shitty DLink camera suddenly stopped working, I wouldn't demand a refund - realistically, I'd just toss it in the bin and try to remember not to buy more DLink products. But I probably still would, if they were sufficiently cheap.
I imagine that calculus is similar for most people.
It's tough for security to affect purchasing decisions because it's difficult to measure. I can measure horsepower, megapixels, gigabytes, milliamp-hours, etc. so it's easy to make purchasing decisions based on which of those things are important to me.
But I don't think bricking a device necessarily ties into security in people's minds. If they permanently modified it to always show HACKED_BCUZ_DLINK_SUX whenever I try to load the camera feed, sure - but a bricked camera is just a failure. I don't even know if it got hacked, or if a capacitor blew, or if a rodent chewed through something crucial.
If only it weren't for you meddling kid.
Analogies, aren't they great?
(Since it's apparent that sarcasm can't be read: "Stealing bikes" isn't the same bloody thing. Why even make that analogy?)
Does this concept apply to software? When the next large-scale RCE 0-day drops, does it make sense to use exploitation to destroy as much as possible in order to pressure the developers to ship a secure product? After all, the hacked machines could certainly allow an attacker lateral movement to sensitive data.
What, like 5% of people who buy electronics actually turn in the warranty cards? No, these devices will sit on the shelf for years, polluting the internet with DDoS attacks and spam.
> Does it make sense to use exploitation to destroy as much as possible in order to pressure the developers to ship a secure product?
Yes. That is also why I backup data using multiple methods including off line ones.
Vigilante or blackhat, it doesn't matter. The next RCE will gladly be used to spit copies of CryptoLocker everywhere by whoever gets ahold of it.
The internet is a dangerous and well-connected place. If I lived in China, I would think it was funny if I wiped a few large US corporations off the map because they used a DLink webcam. And there is only a tiny chance in hell they would ever find me.
Additionally, theft of property involves personal gain for you.
I'm not sure I ethically support the hacker's actions, but I don't think the bike example has the market/awareness effects that make it at all defensible.
What's the problem with these people?
Sarcasm aside, I live in Brazil. Ask any Brazilian who has stayed in a European country what the biggest difference was: "I could feel safe at any time, without worrying about my stuff."
That really shapes the mind and behaviour of people.
Isn't this actually a really common sentiment, though? I've lived in several places where leaving a bike unlocked for 5 minutes, or sloppily locked for an hour, means you're going to lose it.
That doesn't make the theft acceptable, but if a friend borrowed your bike and left it unlocked you'd still get mad at them.
Reshaping society so this stuff doesn't happen is great, but on an inside-view level we treat crime as sort of an inevitable "someone will do it" force.
I don't disagree with you, however I think there are some levels to this concept, e.g. how two different locations would differ if it was: a lost wallet, a somewhat clear opportunity for embezzlement, a bike stopped in front of a coffee shop?
The same kind of stupid person that doesn't realize they live in a ghetto called "The Internet".
This actually seems much closer to the IoT issue than theft. The maker and user of the device have created an inviting target which will cause harm to someone other than themselves. Even if the eventual attack is illegal, they can still be held accountable for making it so likely.
Note that "attractive nuisance" is specifically about trespassing children.
IANAL, and it's hearsay, but I had thought this was something everyone knew.
Given the owner of the bike could conceivably be held liable for the use of their bike to commit crimes, the janit0r who decided to clean up this crap comes across as the lesser of two evils.
I find it reprehensible that the Gizmodo author (who is using his position as a journalist to encourage criminals) and HN commenters are applauding this hacker as if he's a hero of the people, fighting for a better future. He's directly harming individuals who have purchased products. This is not a friendly reminder to manufacturers to get their shit together. It's some guy illegally connecting to, taking control of, and bricking computers.
I've seen him referred to as a greyhat. No. Everything about this is strictly blackhat. This hacker deserves prison time. What a piece of lowlife scum. It really does sound like a 15 year old getting off on making waves, rather than someone who gives a damn about security.
Attacks on devices that have hardcoded weak credentials online aren't an event or an act. They're a force of nature, like erosion. No-one would be happy with someone building bridges that don't account for erosion. Nor is it ok to ship something that connects to the internet and doesn't account for the millions of automated bots that are prowling the web 24/7 looking for insecure devices.
The manufacturers are 100% to blame, and the worst thing is that they're not the ones that deal with the fall-out – innocent companies and consumers are.
Do you force the situation and make it mow into your yard and over a bunch of rocks to destroy it, or do you live with the danger?
I don't have an answer. In this situation you could at least talk to your neighbor. Without the ability to feasibly do that, I'm not sure I would fault either action.
The answer falls into an area that's somewhat unknowable with current information, which is why I can't fault either behavior.
As in, "the chance of getting hacked" < "the chance of the vigilante creating dangerous situations".
For example, maybe this person had a wife dying of cancer while Mirai destroyed his life's work, so in the same period he lost his wife and he lost his work.
Or, maybe he spent a lot of money trying to launch a new product through channels that were destroyed during one of the attacks, and unable to get his money back, had to close the venture.
Maybe he had to sleep in a data center for several months during the holidays and concluded the only reason he was doing this is because consumers and manufacturers aren't concerned with the damage they are doing, so he is going to make them become concerned about the damage they are doing.
The point is that we have no idea if this person has been harmed, and whether they have any other legitimate means of being made whole from harm done, as well as be able to protect themselves from future harm.
Clearly, the proposed solutions coming from industry "experts" are likely to make things worse, as the only other activities to "fight" Mirai seem to be supporting legislation as a solution to a technical problem, and I'm really not clear on when that has ever worked, especially in a system that everything on the planet can connect to.
If you put a dangerous, insecure device live on the Internet, where it can be used to attack other machines, you deserve to have your property destroyed.
Anything else would be dependent on what the device is for and how it does it.
If you really must be able to update the firmware, add a physical "write enable" switch, not a software enabled one.
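Something like this at the firmware end, assuming a made-up GPIO read for whatever pin the switch is wired to:

    // Sketch of gating flash writes on a physical write-enable switch. The GPIO
    // read and flash routine are placeholders for the board's real HAL calls.
    fn write_enable_switch_on() -> bool {
        read_gpio_pin(17) // hypothetical pin wired to a physical switch/jumper
    }

    fn read_gpio_pin(_pin: u8) -> bool {
        false // placeholder: platform-specific GPIO read goes here
    }

    fn flash_write(_image: &[u8]) -> Result<(), &'static str> {
        Ok(()) // placeholder: actual flash programming routine
    }

    fn apply_firmware_update(image: &[u8]) -> Result<(), &'static str> {
        if !write_enable_switch_on() {
            // No physical consent, no flash write: a remote attacker can't flip a switch.
            return Err("update refused: write-enable switch is off");
        }
        flash_write(image)
    }

    fn main() {
        println!("{:?}", apply_firmware_update(&[0u8; 4]));
    }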
Nice thinking though