Also, the Kiwix project has a hotspot project that allows you to host ZIM files (dumps of Wikipedia and other CC licensed content, like TED talks and StackOverflow) on a Raspberry Pi, allowing you to share it with others. Setup info here: https://www.kiwix.org/en/downloads/kiwix-hotspot/
Or is it like DHT, which does not need a central tracker?
It certainly doesn't hurt if someone creates their own ZIM files and reports on their results (and/or shares the resulting files).
Maybe it would be worth putting together another zimfarm that is constantly updating.
ZIM might survive longer (centuries?), since the future will probably still have HTML parsers while wikitext parsers or PHP might be long dead, who knows.
We used Verisign for mitigation of a 44Gbps volumetric attack and it worked very well. We also evaluated Neustar, but Verisign's infrastructure seemed to be more robust.
Still, large proxy-based CDNs do have the ability to completely bypass all the same-origin protections in the browser. Even if they are angels and don't abuse this trust for identity theft and surveillance, it makes them a juicy target for bad actors, state sponsored and otherwise.
The problem is that you are basically MITM'd all the time.
That to me sounds very much like MITM, although it is not a MITM attack since the entity controlling the domain opted into it, so basically it is voluntary MITM.
Using a VPS like EC2 is a different story, since the decryption happens within the layer that you control. Of course you need to make sure you choose a vendor you trust for that layer, but on EC2 the traffic that Amazon sees is encrypted with keys they don't have and decrypted with keys stored on a layer that I control. Amazon could read out the memory of my EC2 instance to get the keys, but their business depends on not doing so. So in this case I have either a vendor that will always decrypt and read traffic (Cloudflare), or a vendor whose business depends on being hypothetically able to but not doing it. There is a clear difference to me.
That is the same for most CDNs (including CloudFront and all the other major offerings), so I'm not trying to single out Cloudflare.
This is why having a threat model is so important: it keeps you from wasting effort on things which sound like security but aren’t actually changing anything meaningful.
There have also been times when Cloudflare (when set up improperly, as I mentioned in the previous comment) has misrepresented the security of a connection, as shown by https://www.theregister.co.uk/2016/07/14/cloudflare_investig...
Look, you can feel however you like about whether the high-profile takedowns are right or wrong, whether the CEO's promises after the Daily Stormer are hypocritical — but let's be clear-eyed about placing a site in a position where one outside person can do it real harm. The question you should look at is whether the risk is actually acceptable for your organization.
The 8ch takedown wasn't actually due to issues with moderation, since (at least based on the owner's video) 8ch removed the post, actively responds to real law enforcement requests, and the original post was actually posted to IG. The issue was that CF was getting enough bad press, and more importantly enough calls/concerns from real Enterprise clients (this is speculation on my part), to take down the website.
Great way for a state actor to intercept your traffic: a little volumetric DoS, and the targets themselves respond by tunneling through your partner(s).
What's the logic behind this? It's still a single point of failure and relying on a corporation. If the Daily Stormer or 8chan tried to use them, they would probably be kicked off as well.
Additionally, helping to block Wikipedia because China says so is much easier to excuse than blocking 4chan - they would just be complying with local regulations after all.
Going from that to 'undesired political speech will be censored' requires more of a slippery cliff than a slippery slope.
What is this "direct link" you speak of? Did the shooters plan/recruit/organize their attacks on 8chan?
Legally, a "direct link" is irrelevant, you can rarely find a "direct link" between two of anything. What matters legally is whether 8chan was a "proximate cause" in creating the mass shootings. Whether one thing is the "proximate cause" of another is often pretty difficult to discern.
However, as a helpful guide towards determining proximate cause, lawyers ask whether one thing was the "but for" cause of another, i.e., would the mass shootings occur "but for" 8Chan? Put another way, if 8Chan did not exist, would these shootings occur?
Unfortunately, we do not have an alternative reality to play out events without 8Chan, so we cannot know for certain, but we can use evidence (e.g., 8Chan chats, how the shooter interacted with 8Chan and others on the service, etc) to try to simulate that alternative reality. All of this analysis also needs to consider related issues like freedom of speech on public forums and any commercial interests.
I'm not saying 8Chan is guilty or innocent, just that the existence (or lack thereof) of a "direct link" is pretty meaningless.
These include the Christchurch shootings, the Poway synagogue shooting and the El Paso Walmart shooting.
The Christchurch shooter shared his Facebook stream to 8chan before the shooting started, and it was spread from there.
The Poway shooter blamed/thanked 8chan for his views.
You are expending a lot of effort defending 8chan here. Perhaps consider that it might not be worth defending.
I lived in a socialist country and you did not. Perhaps consider that you might not know where these current trends are pointing to.
Ultimately, such matters should be prosecuted by courts. It is inappropriate for organisations like cloudflare to leverage their position within essential network infrastructure to start editorialising what passes through their network.
No, I think it's entirely appropriate.
"Don't troll" rules and methods for dealing with trolls have been a thing on all sites since the internet was invented. I don't see any difference here at all.
Cloudflare blocking people that abuse the network is legitimate (e.g. spam, denial-of-service), just like it is legitimate for forum admins to block people that abuse the forum (trolling, explicit posts).
But Cloudflare, or any other network infrastructure provider, shouldn't be determining permissible content for websites, because they are not hosts/administrators of that content.
It is like a postal service reading your letters and then saying "we don't like what is being said, so you can't send letters anymore." They can and should stop people sending dangerous materials by post, but they should not be determining permissible content of letters.
No it's not. It's like FedEx declining to deliver for a company which continues to cause it problems, or refusing to service Amazon. Or like Visa refusing to service businesses which have lots of charge-backs.
I expect that a different site with the same contract and payment terms, subject to the same attacks, would have continued to be protected. Maybe I'm wrong, but it looked like a political decision, not a business decision.
Beyond that, given the announcement there, it stands to reason they were convinced to do it there.
It would be “censorship” if they actively antagonized any attempt to spread the information, such as by lawsuit or DMCA notice. They are just refusing to participate.
And given that the “information” is definitively known to be child pornography and violent white supremacy propaganda presented as news, I would personally say refusing to participate is the only responsible action.
But it's clear that it matters just what's being censored. Surely you wouldn't say the same trite clever-sounding hackerspeak if we're talking about censorship of threats, assault and child pornography, would you?
Cloudflare simply has the luxury of choosing which politically disagreeable parties they do not want to associate with because they are insignificant customers.
Pretending that this is not due to differences in politics and moral judgment is semantic smoke and mirrors.
Anyway, the point is that they are not a neutral carrier/provider, unlike banks or telecoms, which are required by regulation to accept any legal business. CF styles itself as neutral infrastructure, until it decides it is not.
The risk of getting deplatformed due to someone's moral judgment is quite real, even for an entity such as Wikipedia. For example, it was blocked in the UK because the Virgin Killer album cover landed it on a block list used by major ISPs.
But again, this is just a tangent. The core argument is that it is best not to rely on providers that have the freedom to make political/moral decisions who they deal with because that freedom makes them susceptible to moral denial of service attacks. You are one moral outrage away from being deplatformed.
Is this a slippery slope argument?
Because there is a world of difference between discontinuing a few extremist customers and discontinuing service for something akin to Wikipedia.
I'm not sure every slight compromise of principles is a slippery slope. It seems to me that CF generally aims at being neutral.
You also don't get to claim it supports 2Tbps if you've only weathered 44Gbps.
The issue with on-demand BGP mitigation is that an attacker can do short attacks on and off over a long period of time. Each time the mitigation kicks in, BGP propagation takes at least ~1 minute and will cause some downtime. Proper protection is always-on without requiring redirection.
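The cost of that propagation delay is easy to quantify with a toy back-of-the-envelope model. All numbers below are illustrative assumptions, not measurements of any real network:

```python
# Toy model of pulsed attacks against on-demand BGP mitigation.
# All numbers are illustrative assumptions, not measurements.
BGP_PROPAGATION_S = 60   # assumed convergence time for the scrubbing route
PULSE_LENGTH_S = 90      # attacker floods for 90 s per pulse
PULSES_PER_DAY = 24      # then pauses long enough for mitigation to be withdrawn

# Each pulse is unmitigated until BGP converges (or until the pulse ends).
downtime_on_demand_s = PULSES_PER_DAY * min(PULSE_LENGTH_S, BGP_PROPAGATION_S)
downtime_always_on_s = 0  # traffic already flows through the mitigator

print(f"on-demand: ~{downtime_on_demand_s / 60:.0f} min of downtime per day")
print(f"always-on: {downtime_always_on_s} min")
```

Even with generous assumptions the attacker gets roughly a minute of unmitigated impact per pulse, for almost no sustained effort on their side.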
The attacker was posting updates to Twitter, but their account has since been suspended.
You live in a world where a totalitarian communist state is welcomed and controls a significant portion of the world economy, and even speaks at internet summits.
Welcome to the brave new internet and the international world of China.
I can't imagine what such a small team must be going through with a major DDOS - wish them well in their efforts!
1. Your comment makes it sound like Wikipedia is just, or mostly, serving read-only content, which is far from true. Yes, static read-only content is significantly easier to serve than dynamic, editable content, but Wikipedia is the latter.
2. Claiming that building something at this scale "isn't all that hard" just makes me think you've never done anything similar. It reminds me of devs saying they could re-build MS Office over a weekend. It's just ignorant of the software's actual complexity.
I'm not associated with Wikimedia in any way, but have worked on large-scale software projects before, and things are quite different from, say, websites only serving 100k monthly active users.
Wikipedia does not need to be globally consistent like Betfair does and the ratio of writes to reads is nothing like 10%, I'd guess at one write per million reads or less. There are several pretty obvious ways to architect a site like Wikipedia for effectively unlimited scalability. The main trick is that it doesn't matter if a page is slightly stale and you can queue edits in the backend for quite some time (many seconds) without severely harming end users. Given those constraints it really isn't rocket science given the plethora of amazing tools we have to hand.
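The stale-reads-plus-queued-edits pattern described above can be sketched in a few lines. This is a toy illustration of the idea, not Wikipedia's actual architecture; all names and the TTL are made up:

```python
import time
from collections import deque

class StaleOkCache:
    """Toy sketch: reads are served from a cache that tolerates slight
    staleness, while edits are acknowledged immediately and queued for a
    background worker to apply to the authoritative store."""

    def __init__(self, ttl_s=30):
        self.ttl_s = ttl_s
        self.cache = {}         # page -> (rendered_content, cached_at)
        self.store = {}         # authoritative page content
        self.edit_queue = deque()

    def read(self, page):
        hit = self.cache.get(page)
        if hit and time.monotonic() - hit[1] < self.ttl_s:
            return hit[0]                       # possibly a few seconds stale
        content = self.store.get(page, "")      # cache miss: hit the backend
        self.cache[page] = (content, time.monotonic())
        return content

    def edit(self, page, new_text):
        self.edit_queue.append((page, new_text))  # ack immediately

    def flush_edits(self):
        """Run periodically by a background worker."""
        while self.edit_queue:
            page, text = self.edit_queue.popleft()
            self.store[page] = text
            self.cache.pop(page, None)            # invalidate on write

c = StaleOkCache()
c.edit("Main_Page", "hello")
c.flush_edits()
print(c.read("Main_Page"))
```

Because almost all traffic goes through `read`, the authoritative store only has to keep up with the (comparatively tiny) edit rate.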
What I'm NOT saying is that I could build it in a weekend. It would clearly require a few teams of skilled engineers to put it all together and, crucially, operate it. My initial comment was in the context of Wikipedia having 100 engineers, and I think it's reasonable to say that a team that size is easily capable of such a feat.
What, in your opinion, would be the work needed to go from a 100k-monthly-active-user site to a Wikipedia-scale site that would be comparable to rebuilding MS Office?
The saying usually uses Facebook or Twitter.
I thought it was a good, but snarky, point. Especially given my browsing sped up after I installed extensions that turn all that crap off.
"It all scales in all directions with a properly thought through architecture" sounds dangerously like, "Programming isn't that hard if you just do it right."
That's not a tautology. In fact, it's actually worth pointing out, especially to junior engineers who get frustrated by how hard everything is, that it actually doesn't need to be that hard if you, well, do it right. Obviously that's not productive feedback without actually helping them be better, but it's far from a tautology.
For anyone wondering, a tautology is a statement that is logically true by construction, rather than contingently true because of the way the world is. For example, "Programming isn't that hard if it's easy" would be a tautology. Constructing a counterexample by changing programming to something else shows that this was not a tautology to begin with: "Sending a man to the moon isn't that hard if you just do it right," which is obviously false, because even if you do it right that's objectively difficult.
Programming is hard, but we make it much harder than it has to be by doing it spectacularly wrong in many ways, both individually and collectively.
A logical tautology is, "A statement that is true by necessity or by virtue of its logical form."
A linguistic tautology is, "A phrase or expression in which the same thing is said twice in different words."
In formal debating, for example, you can call someone out for either type of tautology.
If you say "All bachelors are unmarried", this is true both because of the meaning of the words, and because of the logical structure implied by the words.
In either case, if you state a tautology, you state something which is true in all possible worlds, given the definitions of the words, at least. Someone can then call you out for stating a tautology, which is to state something that is vacuously true, that is, you've made a statement about the nature of reasoning itself, in any possible world, but you haven't said anything at all about the world we're actually in. So you're wasting your breath even though what you say is unassailably true.
(Note that in mathematics, the tautologies are precisely the theorems with their premises or axioms! So this is by no means always useless.)
The problem with calling out tautologies in common life, however, is that the danger of identifying a false tautology is very high. When someone says "Either X happened, or it didn't." you may be tempted to say "tautology!" but in fact they are probably making some kind of oblique point or highlighting a flaw in someone else's argument, etc. In other words, tautologies may be vacuous as statements about the world within a logical framework, but as speech acts in the real world, they always come with a motivation and that can usually be expressed in a non-tautologous way. For example, "A or not A" can be expanded charitably to "A or not A, and this is relevant to the topic at hand", which is not a tautology anymore.
In this case, if you do something right, it's not extra hard, which is kind of a tautology. But there's a point in saying it, which is that it doesn't have to be that hard... if you do it right. And that's not true of everything or in every possible world, hence not a vacuous statement.
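The notion used in this thread, "true under every assignment / in all possible worlds," can be checked mechanically for small propositional formulas. A brute-force sketch:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """A propositional formula is a tautology iff it evaluates to True
    under every possible assignment of its variables."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

print(is_tautology(lambda a: a or not a, 1))   # True: "A or not A" holds everywhere
print(is_tautology(lambda a, b: a or b, 2))    # False: contingent, fails when both are False
```

This also makes the point above concrete: "A or not A, and this is relevant to the topic at hand" fails the check, because the added conjunct is not true under every assignment.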
But this boils down to "If you build systems using a high level of skill and foresight, it's easy to do."
This is of course not a tautology, but a contradiction. I agree that inexperienced developers can, as it were, 'make life hard for themselves', but that's (trivially) due to their inexperience. I don't think there's a silver bullet for inexperience.
Over-engineering is bad, as is under-engineering. Fuzzy principles like 'YAGNI' can't be applied without skilled discernment, which means experience.
> Programming is hard, but we make it much harder than it has to be by doing it spectacularly wrong in many ways, both individually and collectively.
I think I agree with this, but it depends on specifics. What sorts of things are you thinking of?
The point being made here is not necessarily a flippant 'git gud'. Instead, it is a statement that problems are tractable, and that getting some things right up-front can have good pay-offs down the road.
In other words, don't give up and try to figure out what is good and bad practice.
> This is of course not a tautology, but a contradiction.
It's not quite a contradiction! If you bill $1 for changing the bolt, and $9,999 for knowing which bolt to replace, this shows that the work is easy, but the experience required to make the work easy is not easy. If the master can draw it in seven strokes, but you don't see the seventy thousand strokes they did before, it looks easy, and in fact it is easy, for the master but not for the novice.
> I agree that inexperienced developers can, as it were, 'make life hard for themselves', but that's (trivially) due to their inexperience. I don't think there's a silver bullet for inexperience.
That's right. However, we can also make life hard for each other, and there are some solutions for that that are better than doing nothing.
> Over-engineering is bad, as is under-engineering. Fuzzy principles like 'YAGNI' can't be applied without skilled discernment, which means experience.
Yes. This is why we have code reviews, design reviews, pair programming, and so on, but these aren't silver bullets either and there is no silver bullet, but if these things lead to increased awareness of why and not just what and how, then we can accelerate the process of acquiring that discernment. As Dijkstra said, if it goes to the grave with you and you didn't pass it on, you didn't really do your job as a senior engineer (paraphrasing).
>> Programming is hard, but we make it much harder than it has to be by doing it spectacularly wrong in many ways, both individually and collectively.
> I think I agree with this, but it depends on specifics. What sorts of things are you thinking of?
Using the wrong tool for the job. Using too many tools for the job. Using tools that do not afford mastery, because they are too complex for anything built on top of them to be comprehensible.
This is one example of many, but I mention this one because I was there when JS was simpler and better and I watched as we made it markedly worse. I would have recommended JS as a first language to beginners when node.js and npm were new, and I did, but I cannot now recommend them in good faith, because they have become antagonistic to quality and to mastery of the craft.
If it takes years to be able to do it well, it's not easy.
> So we keep using it, and even keep piling more on. It takes considerable effort to step back from all this, take a collective mulligan, and start over with a principle of taking things away to make things better, rather than adding more hacks to hide existing hacks.
True, but it can be done. The community moved away from Bower, for instance.
Is this like saying, programming isn't hard if you choose easy enough problems to solve? Or should we ask for a link to see a demo of an AGI implementation?
I guess math is not hard either if you're "doing it right", as long as it's all arithmetic...
>>That's not a tautology.
I would agree that tautology is not the best description; "fallacy" would probably do fine.
No, this is saying that things don't have to be as hard as we make them. You don't need more than a hundred people to run a top-ten website, and that shouldn't be surprising. It is surprising only because we are so good at making things overcomplicated.
https://wikimediafoundation.org/role/staff-contractors/ has the names of 379 employees. I believe (perhaps astonishingly) that is all - engineers and non-engineers combined. Their engineers spread across departments, but judging by the 141 instances of the string 'engineer' in that page, I'd be surprised if the number exceeds 200.
Though it is worth bearing in mind that everything's open source, and there's a hefty community component. So there's a more vaguely specified number of people who might provide patches, and individual wikis are mainly run by volunteers.
I worked there for four years and I miss it every day.
Sorry but now I'm curious, why did you leave?
But I'd consider Wikipedia traffic to skew heavily towards anonymous read-only, with very little logged-in write traffic.
This allows for tons of caching opportunities: Varnish, Memcache, etc. And these techniques are well known.
I suspect cache-update latency in the minutes would be unacceptable to Wikipedia users.
Additionally, if vandals know their vandalism will stay up for 30 minutes, they are much more likely to do it, which is a vicious cycle.
Also, the White House PR team is actively watching and editing political figures' articles. They will sort it out too.
Next, the "power users", as others put it, are not a single set of editors. It's more of a social network with multiple levels of trust. The idea of a wiki is that all users have write access, even if those changes are moderated with different levels of latency.
Of course there are ways to engineer the system, but at that point one is, well, engineering a system. And WMF is doing so on a shoestring compared to other comparable levels of traffic.
Is WMF creating new paradigms of computing? Probably not. But they are doing a good job, IMHO.
Their expenses have doubled in less than 5 years...
Even if their Expenses/Assets ratio has decreased compared to 3-5 years ago (though it has now stalled), their goal of financial independence is still very far away and they still rely heavily on donations.
But that essay is clearly pure hyperbole. The expenses aren’t exponential, they’ve been roughly linear for a decade. Notice how the word exponential was removed in the second version. The graph is showing increasing savings along with increasing growth, and the expenses appear to have slowed slightly in the last five years compared to the five prior years. It’s completely failing to demonstrate the stated claim of runaway spending, the numbers practically prove the opposite.
Plus it’s not outlining what the money is used for, so there’s no concept of efficiency here, no reason to doubt that increased service came with increased expenses. There’s zero meat in this argument.
Whatever; last year’s total expenses seem very small to me compared to web sites of similar size; there are startups smaller than Wikipedia’s team that have raised more money than Wikipedia’s yearly expenses without managing to deliver anything. Wikipedia’s value to the world is currently larger than its expenses, IMO, and I think it’s impressive what this non-profit has done.
I’ve worked with a couple of engineers who are now on Wikipedia’s SRE team. They’re good engineers, but not elite by any means. Not “10x” developers or wizards in castles or whatever. Good solid engineers who I would work with again and fight to hire. But they’re not savants or even the top 10% of folks I’ve worked with. Solid mid to sr level engineers I’d be happy to hand a project off to with ambiguous goals and little oversight, and I’d expect them to get a team of 4 or so other engineers to be more productive.
These are the engineers who meet the job requirements for SRE positions.
No offense to you or your team, but to me, as a consumer, Reddit's product appears nowhere near as polished as Wikimedia's projects.
Also, with the caveat that I don’t know enough about the implementation details of the product at Reddit: I’d argue that Reddit’s workload is more write-heavy than Wikipedia’s workload, which makes caching and scaling a bit harder for Reddit, relatively speaking.
April 9, 2012 "Instagram was acquired by Facebook today for $1 billion in cash and stock. It only has 13 employees and a handful of investors. ... Meet 11 of the lucky employees and 9 investors behind Instagram. ... Two other employees were hired during South by Southwest last month and their information wasn't available for this story. "
Not only that. They do all this with amazing openness. Their records of incidents and deployments, who's in charge of what, and rotation schedules are all public and shared on MediaWiki (although they're not that well organized). I can trace this back to circa 2005. It may be the largest public knowledge base of devops practice.
This is dismaying but not shocking. The first time I saw a newly planted tree on an otherwise bleak urban block, vandalized and broken, I realized that a drive towards "better" is not to be taken for granted, and needs protection.
I suspect that the sharing of knowledge and encouragement of developing wisdom is, to some, a threatening prospect. Perhaps they have experienced learning difficulties and are struggling with shame and frustration, or perhaps they disagree strongly with the concept of an intellectually liberated population. Libraries are, after all, a pillar of liberalism.
Thoughtful comment. I would agree with you if the attack were somehow organized at 4chan /b/, Kiwi Farms, or some underground IRC for mysterious or unexplained reasons. If that happens, its philosophical implications would be deep. And I won't be surprised if it occurs one day.
But so far there's no evidence to suggest the attack has any ideological motivation beyond making the attacker famous.
No, more like superglueing the doors shut for a few hours.
I don't really see the damage, especially since they moved on to Twitch and WoW servers; I'd say more work and a few early nights' sleep got done in the end.
Edit: in reply to some of the (valid) counter-arguments, I'd like to say that there are indeed many issues that will need to be considered before passing such a law - this is just an overall idea. In addition, my intent isn't to punish the occasional kid doing something stupid and leaving a misconfigured device, it's to punish companies selling/deploying obviously insecure devices at a large scale, like ISPs deploying cheap shitty outdated network hardware or the countless resellers white-labelling insecure network cameras. Currently there is no penalty for manufacturing insecure hardware and this situation is the consequence of that - I'd like to fix this problem. We have regulations that (mostly successfully) prevent companies from selling hardware that blows up and destroys your house, why can't we have the same for networked hardware?
Instead, imagine the DDoS landscape if we had to pay a small price for bandwidth. There would be a natural disincentive to having a toaster saturating your bandwidth as part of a botnet because it would quickly show up on your bill. And something as simple as shipping an IoT product or Raspberry Pi with a bad default username/password might suffer bad reviews like "1/5 stars, this product immediately raised my internet bill."
I know it's not perfect, and most of us have a bad taste in our mouths from paying out the wazoo when bandwidth is priced per GB, but it can be a fair system if priced well, one that fights against our botnet reality where we basically have zero insight into when our networked devices are compromised.
I can also imagine better tooling provided by our ISPs in this world where they help us track down and itemize our bandwidth costs. "Honey, why is SmartToaster89 costing us $24 in network fees?"
It's impressive how poorly our current system equips everyone except malicious actors. How many ISPs don't even filter spoofed outbound packets?
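For reference, the filtering in question (BCP 38-style egress filtering) boils down to a one-line check: drop outbound packets whose source address isn't inside a prefix the ISP actually assigned. A sketch with made-up prefixes:

```python
import ipaddress

# An ISP knows which prefixes it has delegated to customers, so any
# outbound packet sourced from outside them is spoofed and can be dropped.
# These prefixes are illustrative placeholders (RFC 5737 test ranges).
ASSIGNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_forward(src_ip: str) -> bool:
    """Return True only if the packet's source lies in an assigned prefix."""
    src = ipaddress.ip_address(src_ip)
    return any(src in prefix for prefix in ASSIGNED_PREFIXES)

print(should_forward("203.0.113.42"))  # legitimate customer source
print(should_forward("8.8.8.8"))       # spoofed source, drop it
```

Universal deployment of this check would eliminate spoofed-source amplification attacks, yet adoption remains incomplete because the benefit accrues to everyone except the ISP paying for it.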
It's hard to complain about everyone centralizing around Cloudflare with the state of cheap DDoS muscle.
We don't need to be priced by the bandwidth, we just need better accessibility to metering. Something my mother could look at and say "huh, the toaster's sent 8gb of data today..."
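That kind of per-device meter is mostly an aggregation problem once the router exports flow records. A toy sketch (device names and byte counts are invented for illustration):

```python
from collections import defaultdict

# Pretend flow records exported by a home router: (device, bytes sent).
flows = [
    ("SmartToaster89", 5 * 2**30),
    ("laptop", 1 * 2**30),
    ("SmartToaster89", 3 * 2**30),
]

# Aggregate into a daily per-device total a non-technical user could read.
usage = defaultdict(int)
for device, nbytes in flows:
    usage[device] += nbytes

for device, nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{device}: {nbytes / 2**30:.0f} GB today")
```

The hard part isn't the arithmetic; it's getting routers to attribute traffic to devices and present it this plainly.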
I suspect many people are uncomfortable holding a compromised device like that. The unpredictability of a toaster helping to take down Wikipedia is wild and potentially seen as a sign of chaos, especially for less technical users.
Who knows what else this crazy toaster will do next? Will it do the same thing again?
People also have a very limited view of what's happening on their phones, too. What if the rights to the source and distribution of a free closed-source app are purchased by someone who modifies it to include all users in their botnet? It's not like you can monitor what kind of traffic your phone apps send out.
Also, the operation of the car is simple enough that you can take it to a mechanic for an inspection and they can reliably inspect everything that the car does. There are no hidden behaviors under complex conditions, like crashing into others when there is a full moon or the sky is cloudy. Your devices can do that. If you bring me your phone/laptop/etc and ask me if it's going to send malicious packets to someone at some point, I can't reliably tell you that it won't. I'm not sure that even if you gathered all the software and electronics engineers who were involved in the construction of your device, they'd be able to provide a reliable answer. I can tell you that it seems like it wouldn't, based on initialization files and services, but I can't tell if the function is hidden somehow, like obfuscated in the machine code of the kernel. Finding that would require auditing all the assembly code running on the machine, which is not a task for mortals.
You can't get a reliable answer on whether a computing device is programmed to send malicious packets. There's too much code, most is compiled, there's too many ways to hide it. You can probably gather the smartest people in the world and leave them to die of old age before they can arrive at a reliable answer.
Honestly, that doesn't sound right. I hope I'm misjudging because of lack of details.
Like no unencrypted local passwords. Individual default passwords for every individual device. Not using outdated version, especially once vulnerabilities are known. Including an update mechanism and providing updates for at least X years.
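The "individual default password for every device" item is cheap to implement at manufacturing time. A minimal sketch (the alphabet and length are arbitrary choices, not a standard):

```python
import secrets

# Skips ambiguous characters like l/1 and o/0 so the label is readable.
ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"

def factory_default_password(length: int = 12) -> str:
    """Generated once per unit at the factory and printed on the device
    label, instead of shipping every unit with the same admin/admin."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for serial in ("CAM-0001", "CAM-0002"):  # hypothetical serial numbers
    print(serial, factory_default_password())
```

Using `secrets` rather than `random` matters here: the password must not be derivable from the serial number or manufacturing timestamp, or the per-device scheme collapses back into a guessable default.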
And yes, trained specialists will be able to work through such checklists for many commonly used software, just like your car mechanic.
And by the way, no one expects your car mechanic to [a] be perfect (you really never heard a story of a car breaking again just after leaving the shop?) or [b] be able to handle any kind of vehicle unknown to him.
The goal of rules like that is to punish the worst tier, thereby raising the bar. But this will probably be harder to implement in the US, with its everyone-sues-everyone mindset. It reminds me a lot of the great GDPR scare, except now, IMO, quite reasonable actual cases are happening.
This is the second time someone's told me that. Looking into what a strawman is again and reviewing my comments, I'm not sure I'm doing that. The examples I see on Wikipedia, at least, don't seem to have a strong relationship of implication. That is, the strawmen aren't directly implied from the proposals.
In this case, I do think that making one liable for damages their machine is causing to other people's machines does directly mean what I said, that one would be liable for behavior they cannot control as well as they can control the behavior of their car.
My intentions are to provide not strawmen, but counterexamples where the proposal fails.
> No one expects 100% perfection.
I do. I'm not really OK with laws where I don't have reasonable control of whether I break them or not. In this case, the only effective control I'd have is to not have an internet device, and that seems unreasonable.
> And by the way, no one expects your car mechanic to [a] be perfect (you really never heard a story of a car breaking again just after leaving the shop?) or [b] be able to handle any kind of vehicle unknown to him.
I think the analogy isn't that strong. Visiting webpages is like changing car parts every second as the car is running. Malicious behavior of these car parts is not noticeable at all and they're not easy to spot from inspection either.
> The goal of rules like that is to punish the worst tier, thereby raising the bar. But this will probably be more hard to implement in the US with their everyone-sues-everyone mindset. Reminds me a lot of the great GDPR scare but now imo quite reasonable actual cases happening.
Well, there were a lot of things that scared people about GDPR, but I think I can assume your point is that a law can be broad and technically applicable to many people unfairly, yet only applied to just cases in practice. I'm not sure I like that kind of law, though. Even if it works well in practice for the majority of cases, it seems like the kind of thing that lends itself well to abuse: the kind of law that everybody is guilty of breaking, even if they're not all actively prosecuted.
With electric cars on the rise, it's only a matter of time until we see the equivalent of the Samsung Galaxy Note 7, but for cars.
Let’s talk about liability when home routers make a revving engine sound when they push too many packets per second, or start playing a “buckle up” warning chime every 6 seconds if they see packets heading to a C2 server.
Now if my home owners insurance finds that I flooded the downstairs condo because I fell asleep with the bath running, you bet I’ll pay.
But no matter what, both of your examples have a robust regulatory structure around them in terms of licensing and inspections. That is why liability works - without those structures you can't say "you fucked up, therefore you pay".
I’m all for adding liability into the system but if we do we must do it in a way that spreads the burden to the right places (IoT manufacturers, negligent ISPs) and doesn’t push it straight to the consumer.
Now, having said that, when the limb of my tree knocks the power line off my house I have to pay to fix it, but the electric company is on the hook to send someone to turn the line off so my electrician can work on it.
ISPs have to be in the liability chain too: if one of their customers is talking to a C&C server and participating in a DDoS they have to switch off the customer until repairs can be made.
Seriously though, this is like holding someone liable if his car is stolen and used as a getaway car in a crime. It's also not really possible to get a shell on most of these devices without serious effort, so apart from turning one off, I'm not sure how anyone is supposed to mitigate this. They're too locked down for any kind of disinfection, in most cases. I guess now I have to teach granny to use a UART cable, too.
It's not easy, but it's not impossible. I could envision a software-based solution that approaches simplicity in installation.
What I'm not arguing is that someone should be held liable for their devices being used in a botnet.
I believe the manufacturers and retailers have the liability. But I am definitely not a lawyer.
You don't interact with people outside your bubble much, do you? It's time to start writing better code, not blaming users for programmers' shortcomings.
If you pass that law on Day Zero, I claim that on Day One, manufacturers provide some horribly arcane command-line interface for rooting lightbulbs and washing machines, and add some boilerplate to their shrink-wrap licenses forcing customers to acknowledge that they have admin privileges on their devices.
Problem solved for them, Granny is liable again according to your system.
If that still doesn't solve the problem, the media will take care of it. "Buying this smart lightbulb puts you at risk of being sued for thousands of $$$" can't be good for manufacturers and they'd want to avoid the bad press.
"Selling this insecure IoT-device/phone/router/tv that requires every consumer to become a security expert, and taking no responsibility for OTA patches and so forth, puts you at risk for paying hundreds of millions of dollars in fines and/or damages."
I've chosen to own a "dumb" (read: "reliable") washing machine, and it cannot be used in such an attack. I have to endure the indignity of peeking downstairs to see if I left clothes in it, which is a cost of sorts, but it's nowhere near the cost I'd expect to bear if I bought a vulnerable washing machine and it provided resources to knock Wikipedia off the internet.
What other disincentive to putting vulnerable devices on the internet do you propose?
A combo of per-customer authentication at the packet level, DDoS monitoring, and rate limiting (or termination) of specific connections upon DDoS or malicious activity. That by itself would stop a lot of these right at the Tier 3 ISP level. Trickle those suckers down to dialup speeds with a notice telling them their computer is being used in a crime, with a link to helpful ways of dealing with it (or a support number).
As far as design goes, they could put a cheap knockoff of an INFOSEC guard in their modems, with CPUs resistant to code injection. Include accelerators for networking functions and/or some DDoS detection (esp. low-layer flooding) right at that device.
Here's an old one from the high-assurance field, albeit with a medium rating, that did what I'm describing in an Ethernet card computer:
A modern implementation could probably be done as a cheap clone and security-enhanced mod of this product:
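To make the "trickle those suckers down to dialup speeds" idea concrete, here's a rough sketch of a per-customer token-bucket throttle an ISP could run, which drops a flagged customer's rate to ~56 kbps. Everything here (class name, rates, the `flag_malicious` hook) is illustrative assumption, not any real ISP's implementation:

```python
from time import monotonic

class CustomerThrottle:
    """Hypothetical per-customer token bucket for ISP-side rate limiting.

    Normally the customer gets their full rate; once DDoS/malicious
    activity is detected, the bucket is shrunk to dialup speed.
    """

    def __init__(self, normal_bps=100_000_000, dialup_bps=56_000):
        self.dialup_rate = dialup_bps / 8       # bytes per second when flagged
        self.rate = normal_bps / 8              # bytes per second (refill rate)
        self.capacity = self.rate               # allow roughly 1 second of burst
        self.tokens = self.capacity
        self.last = monotonic()
        self.flagged = False

    def flag_malicious(self):
        """Throttle to dialup speeds; a real system would also send the
        customer a notice with remediation steps or a support number."""
        self.flagged = True
        self.rate = self.dialup_rate
        self.capacity = self.rate
        self.tokens = min(self.tokens, self.capacity)

    def allow(self, packet_bytes):
        """Refill the bucket for elapsed time, then admit the packet
        only if enough tokens remain."""
        now = monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False
```

The same shape exists in real shipping gear (e.g. Linux's `tc` token bucket filter), so the cost of doing this at the modem or Tier 3 level is mostly policy, not engineering.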
You’re voluntarily signing up to get fined when someone hacks the computer in your house, because you connected it to the internet, right?
> Currently there is no penalty for manufacturing insecure hardware
I’m glad you see some validity in the counter-points, but doubling down on this idea of punishing manufacturers for things people do with their hardware seems misguided at best.
You can’t prove any hardware is secure; if there were such penalties, there would be no hardware. This is a total and complete non-starter. Moreover, there are lots of other bad things you can do with hardware; this would open the door to holding manufacturers accountable for everything. Do you think Intel or Dell will accept fines for every successful hack into machines they made?
This isn’t unlike suggesting that ISPs should be held liable for people doing illegal things on the internet, or suggesting that it should be illegal to pay ransoms. It’s hurting the wrong people, and failing to punish the people doing wrong.
> this situation is a consequence of that
That’s a purely subjective opinion that ignores multiple causes, and ignores the single most direct cause: people who wish to do bad things. It would be just as valid to blame this on a failure of the education system & social civics as to blame hardware manufacturers. Maybe we should fine teachers who have students that later do bad things?
> We have regulations that (mostly successfully) prevent companies from selling hardware that blows up and destroys your house, why can’t we have the same for networked hardware?
First, the analogy is bad because there are zero good uses for consumer bombs in houses, while there are plenty of non-harmful uses for IoT devices.
Second, because there is a market for simple hardware that can be deployed inside of secure networks, and doesn’t require a team of security experts to run. Secure hardware is more expensive to produce than simply-connected hardware.
Technology that cannot be used by anyone but the snobbish tech elite, because anyone else just gets sued into oblivion.