FTC sues D-Link over router and camera security flaws (ftc.gov)
383 points by kposehn on Jan 6, 2017 | 193 comments



My boss installed 2 camera DVRs a year or so ago. All I did was provide 2 external IPs for the DVRs and didn't worry much about it. The password was the default "123456" that comes with these things, but we don't care much about who sees the footage. At worst people will change the password and we need to factory reset (never happened).

Last week the internet for the whole office was going down. Weirdly I could remote in, but the DNS was not working. I first thought our DNS server crapped out but it was working fine. After some investigation, the firewall was not responding. After rebooting the firewall, it would work fine for a while, but go down shortly after.

Long story short: the DVRs my boss got (unbranded) come with telnet access on some nonstandard port. A botnet got access to them and was making thousands of DNS and telnet queries, overloading the firewall.


1. Just because you don't mind if people have RO access doesn't mean you should use default passwords. Privilege escalation is a thing, and is often far easier than getting a foothold. The number of developers who "don't bother with" disciplined input validation in areas that are supposed to be accessible only by trusted users is staggering.

2. Don't expose entire hosts to the internet. Punch only necessary holes in the firewall. That way the device at least needs to phone home in order to cause a problem like this.

3. Do you have a similar policy with other hosts on your network? I.e., do you figure "well that's inside the firewall so we don't need to worry about encryption/timely application of security updates/resetting default passwords/etc."? If you're not 100% sure (or if the answer is "no"), you now have a lot of cleanup work to do.


Not the OP, but I was just in a similar conversation: "Oh it's behind a firewall, so let's just disable the extra security (passwords, HTTPS, etc)." The idea of defense in depth is very important, but sadly many people seem to think one security layer is enough.


The network is always compromised.


Ever since the smart phone, this is the only acceptable perspective. Assume bad people are already on your network.

Remember, smart phones are literal bridges from one network to another.


Given the amount of effort I had to go to to "literally" bridge a smartphone to my network to give myself Internet access when my fibre connection went down, do you think this is slightly incorrect?

I'm assuming what you mean is that smartphones may be connecting to your internal network and bringing malware with them.

That said, the corporate networks I've seen have a separate network for phones/laptops and you need to VPN in if you want other access.


You are correct that activating both networks at the same time is hard, but what you have is a device which is traveling between untrusted and trusted networks. Assuming a compromised device, anything is possible.


Sysadmins at places I've worked have used "defense in depth" as an excuse to create layer upon layer of frustrating hoops to jump through in order to get any work done. I'm pretty sick of it. One perfect layer is vastly preferable.


There are sysadmins that use complexity to maintain draconian control, hide laziness, or mask a lack of knowledge, but don't throw the baby out with the bathwater. No matter what, these people will find some way to obstruct you or maintain control. Even if their hearts are in the right place and they are using security best practices, it sounds like they weren't doing a good job of automating the complexity and processes. Complex doesn't have to mean complicated.

A security design that takes advantage of multiple layers and compartmentalization is your ally against attackers. They love networks with hard shells and squishy insides. Once they are in via a service, no matter how innocuous, they can move laterally to the real targets with impunity.

But ultimately this kind of stuff is a culture issue. Culture issues are hard to fix, but they're usually the root cause of bad blood between operations and development. It generally needs to be addressed on both sides though. It's really easy to think it's just a bunch of grumpy and possessive ops people, but those behaviors are often rooted in how the dev teams interact with them. Things like punting releases over a wall and calling it a day, not participating in oncall duties despite causing many outages, and a disparity between how credit (for releases) and blame (for outages) are assigned are often cited as issues that create what devs think are irrational BOFHs.


"One perfect layer" does not exist. Doing defence in depth is of course not a good thing, and making people do a lot of hoop-jumping isn't helpful either. But say, using a smartcard and a OTP isn't all that hard, and vastly more secure than just a username and a password, to name a random option someone might implement.


There's always a balance, but I'll echo the other comments: One layer is not enough. Do you actually think that, if a DVR is behind a firewall, it shouldn't need a password for admin access?

Strong passwords, two-factor auth for privileged services, access control policies (ACLs/firewalls), access logging, etc. are all requirements of any secure network. And that was just "off the top of my head on a Friday" kind of stuff.


>One perfect layer is vastly preferable.

Ah, hello, every manager that has ever made a decision causing the problems people further up are grousing about.

The point of defense in depth is there IS no "one perfect layer."


There's no such thing as a perfect layer.


Do you mind telling me what this one perfect layer is? If you'd like to turn it into a business, i'll fund your seed round.


Do you have to open a firewall rule request for every src:dst host/port/protocol pair? Even for 3rd party applications you don't think you should have to understand, they should "just work"? Do you have the least privilege necessary at any given point in time?

If not, you have relatively little to complain about.

And, I'll add, if you're a developer, we'd all prefer you just crank out perfect code. That way we never have deployment issues, never get paged for outages, and never have to work around poor architecture or assumptions that don't scale or aren't load tested. Thanks!


>One perfect layer

That's the problem.


Well you need at least one layer to protect from the outside world, and another for insider attacks. Many times they can be invisible to the user. For example many places have a policy that all internal services must be Internet hardened, as though they were exposed on the broader 'net (even though they're behind a firewall).


In addition to openftp4[1] (where some folks already discovered their own server[2]), I recently published c4[3], one of the more exhaustive lists of public-facing IP cams that use the default password. Many of them can even be controlled remotely. The majority of them are vulnerable to simple Perl exploits.

[1] - http://git.io/ftp

[2] - https://www.reddit.com/r/sysadmin/comments/53cor1/someone_ju...

[3] - https://github.com/turbo/c4


Thanks for destroying the web, btw!

Open FTP servers are an asset, not a risk, as they're ideal for distributing downloads, updates, packages, etc.

Thanks to your efforts, many groups which used to provide all downloads via HTTP and FTP have since stopped the FTP access, and don't provide wget-able HTTP URLs either.

Congratulations, now I have to run a full browser on my servers to be able to download many packages.


> Thanks for destroying the web, btw!

Works for me.

> Open FTP servers are an asset, not a risk, as they're ideal for distributing downloads, updates, packages, etc.

I agree, I use many of them frequently.

> Thanks to your efforts, many groups which used to provide all downloads via HTTP and FTP have since stopped the FTP access, and don't provide wget-able HTTP URLs either.

I can't confirm that. Quite the opposite: the number of functional servers actually increases between openftp4 scans, and the current scan shows it still rising.

Public FTPs have always been public. That's their purpose - as you said. Their addresses are listed in many public lists. I don't get how an FTP that is publicly listed and serves the public would suddenly cease operation because its URL is in yet another public list.


> because its URL is in yet another public list.

Not because the URL is in the list directly – but the way the list is created seems like public shaming, and I’ve seen unknowing managers go "I’ve read somewhere on the internet that there’s a list of open servers, and I found ours on it, everyone can hack us, remove it now!".

So, that’s the problematic effect – with the way the list is presented, especially the way it seems to publicly shame them (and the "fix it to get off the list" statement, too), can be counterproductive.


> and the "fix it to get off the list" statement, too

Is addressed as "(This doesn't concern FTP servers that are public by design.)"

> seen unknowing managers go

These are not the people managing the FOSS mirrors that are public by design.

In fact, that's one of the reasons openftp4 now stores the complete banner. It makes it easy to identify FOSS mirrors by just grepping for "mirror" et al. and maybe even find new mirrors that you didn't know about. Or older software archives with some awesome abandonware.


Well, I’m not just talking about FOSS mirrors (they wouldn’t require a browser to download stuff in the first place).

I’m concerned about companies that used to host drivers, software, etc on FTP, publicly available, but moved it now behind a clickthrough-wall, impossible to wget.


> At worst people will change the password and we need to factory reset

Turns out this was a false assumption. We're all learning this the hard way.

All internet-enabled devices need to ship with a unique, resettable default password. Many ISP-provided modems/routers do this now and it's great.
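
As a rough sketch of how a manufacturer could do that at provisioning time (purely hypothetical, not any vendor's actual scheme): derive a random per-device password and print it on the label.

    # Hypothetical sketch: generate a unique default password per device at
    # provisioning time, to be printed on the device label. Not any vendor's
    # actual scheme.
    import secrets
    import string

    def generate_default_password(length=10):
        # Unambiguous alphabet (no 0/O, 1/l) so the label is easy to type in.
        alphabet = "".join(c for c in string.ascii_lowercase + string.digits
                           if c not in "01lo")
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        serial = "EXAMPLE-SN-0001"   # hypothetical serial number
        print(f"{serial}: default password {generate_default_password()}")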


> We're all learning this the hard way.

I'd be very disappointed if most HN readers are still learning this.


You mapped all ports from the external IPs to the DVR?


The firewall has a 1-to-1 NAT feature. It maps an external IP to an internal one. I had no firewall policies set. Rookie mistake; I've since blocked all outgoing access and only allow port 80 and the video port incoming.
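
One cheap way to sanity-check a change like that is to probe the external IP from a host outside the firewall and confirm that only the intended ports answer. A minimal sketch (the IP and port list are placeholders):

    # Minimal sketch: from a host *outside* the firewall, check which TCP ports
    # on the device's external IP accept connections. IP and ports are placeholders.
    import socket

    EXTERNAL_IP = "203.0.113.10"                        # placeholder address
    PORTS_TO_CHECK = [21, 22, 23, 80, 443, 554, 8080]   # telnet, web UI, video, etc.

    def is_open(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in PORTS_TO_CHECK:
        state = "OPEN" if is_open(EXTERNAL_IP, port) else "closed/filtered"
        print(f"{EXTERNAL_IP}:{port} {state}")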


I wouldn't even allow port 80 incoming from the internet. Even if the web interface appears to have authentication on it, there may be vulnerable CGIs. They left a telnet interface wide open--do you trust them to write a secure website?


I'd suggest blocking all incoming traffic except a VPN. It's not that hard to set up a VPN.

Even better if the high-risk devices like cameras are on a separate VLAN from critical stuff.


Blocking outgoing traffic is generally a mistake. It prevents the device from retrieving updates, assuming there are any. And if there are no updates then there had better also be no undiscovered or unpatched vulnerabilities, which isn't likely.


I read it as he gave the DVRs external IP addresses without NATing them to internal addresses.


They could also use the device to pivot into your network. Once they've compromised the DVR, they can go wherever they want.


this sounds like a horror story from r/sysadmin.


Great example.

In a perfect world the ISP would drop you, in addition to charging a $penalty$ for breaking its ToS. This would result in someone from IT being fired (maybe you) and the company starting to actually care about security from now on.


But that's mean...


How long is the company that provides the product on the hook for post-sale support and security updates? If I sell an internet-connected device which has a flaw discovered 5 years after the sale, am I still liable for patching that? Nowhere in this article nor in the complaint do I find any guidelines as to what they consider to be reasonable after-sale support. They list similar action taken against Asus and TRENDnet, which are all large international companies. I'm a dude in my workshop and placing an after-sale support and warranty requirement on anything I make for a period of n-years is onerous.


If I'm a dude in my workshop making children's toys, I'm on the hook for strict product liability for the rest of my life if I made dangerous toys.

If I'm a one-man contractor building a house, I'm on the hook for certain categories of mistakes for the rest of my life.

I don't think it would be unprecedented for IoT device makers to be on the hook forever for certain categories of security flaws. It is not like security flaws grow spontaneously like rust on a car. They are all in there from the beginning, whether they are known or not. Most of the IoT security flaws that I have heard about could have been easily prevented with security-conscious design and development practices. If we want to have secure IoT devices, then we need to hold people accountable for making insecure ones.


>It is not like security flaws grow spontaneously like rust on a car.

They absolutely can. It's well known that cryptography algorithms "decay" over time as computer power increases. There was a time when encrypting with DES was secure; DES is no longer secure due to increases in computing power. In 50 years I doubt we'll be using many of today's algorithms. It's exactly like a car slowly rusting over time.

That's even assuming support is possible. I may have a stroke and no longer have the ability to support my product.


"for certain categories".

If a device was made in 99, I wouldn't blame it for having DES. If a web appliance was built in 2005, I wouldn't blame the maker for unsalted MD5 passwords.

If a device or critical app were made in 2017 and stored its passwords with 777 permissions in the clear, I would blame the maker.


I think the correct tradeoff is to judge the malfeasance of the product based on what security precautions were reasonable at the time the product was created.

Granting that "reasonability" is a very fuzzy standard, it seems obvious that a product with 30-year-old crypto should not be subject to lawsuits because someone solved integer factorization on real hardware.


Modern crypto algorithms are decay-free. They may still get weaker because of improved math, or quantum computers, but never because of increased computing power. That was an artifact of too-slow computers at the beginning of computing history.

Modern algorithms are replaced mostly because the alternatives are easier to use, faster, or more flexible in some ways. With some huge emphasis on "easier to use" because that means "more secure" in practice.


> They may still get weaker because of improved math

But aren't we (and the NSA) discovering new attacks all the time? Hence their de facto security decays.


Yes, that's improved math.

"All the time" is a bit of an exaggeration. Some algorithms are broken very fast, others slowly accumulate partial attacks until people don't trust them anymore. Those last ones don't normally get completely broken¹. I think the only exceptions were the shorter key lengths of RSA².

1 - But you will find many examples of algorithms with less than the modern strength parameters that were broken by the mix of faster computers and partial attacks.

2 - But by the time those were abandoned people were mostly using even shorter keys that wouldn't suffice by today's standards even without any known attack.


>They absolutely can. It's well known that cryptography algorithms "decay" over time as computer power increases

This is bullshit.

No reasonably modern crypto algorithm has ever been broken. If a crypto algorithm you used in your less-than-10-year-old product gets broken, it's because you shipped a product with sub-standard crypto.

There was a time when people thought DES was secure, but that time was in the 80s and early 90s. Nobody will blame you for bad crypto if you released software in that time.


And a new car today also probably won't be rusted in less than 10 years time, what exactly is your point?

>Nobody will blame you for bad crypto if you released software in [the 80s and early 90s].

So you are 100% agreeing with me.

Security expert Bruce Schneier expanded upon what I said back in 1998:

>Cryptographic algorithms have a way of degrading over time. It's a situation that most techies aren't used to: Compression algorithms don't compress less as the years go by, and sorting algorithms don't sort slower. But encryption algorithms get easier to break; something that sufficed three years ago might not today.

>Several things are going on. First, there's Moore's law. Computers are getting faster, better networked, and more plentiful... Cryptographic algorithms are all vulnerable to brute force--trying every possible encryption key, systematically searching for hash-function collisions, factoring the large composite number, and so forth--and brute force gets easier with time. A 56-bit key was long enough in the mid-1970s; today that can be pitifully small. In 1977, Martin Gardner wrote that 129-digit numbers would never be factored; in 1994, one was.

>Aside from brute force, cryptographic algorithms can be attacked with more subtle (and more powerful) techniques. In the early 1990s, the academic community discovered differential and linear cryptanalysis, and many symmetric encryption algorithms were broken. Similarly, the factoring community discovered the number-field sieve, which affected the security of public-key cryptosystems.

https://www.schneier.com/essays/archives/1998/05/the_crypto_...

The ironic thing is this article also said "I recommend SHA-1"... SHA-1 was broken 7 years later.
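
To put rough numbers on the brute-force part of that, here's a back-of-the-envelope sketch; the keys-per-second rates are arbitrary assumptions, the point is how the key length dominates:

    # Back-of-the-envelope brute-force arithmetic. The rates are assumed,
    # arbitrary figures; what matters is how key length dominates the result.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def years_to_exhaust(key_bits, keys_per_second):
        return 2 ** key_bits / keys_per_second / SECONDS_PER_YEAR

    for bits in (56, 128, 256):          # DES vs modern symmetric key sizes
        for rate in (1e9, 1e15):         # hypothetical attacker capabilities
            print(f"{bits}-bit key at {rate:.0e} keys/s: "
                  f"{years_to_exhaust(bits, rate):.3g} years")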


The trouble with your analogy is that state-sponsored teams of children aren't working day and night to find new ways to make your toy dangerous that you could never have thought of at the time.

I get that D-Link sucks, and their security practices right out of the box were "bad", but what is "bad"? In the eyes of the FTC and US law, what constitutes "good" and "bad" practice? How do I ensure that I am holding myself to those practices when a product ships? What is my responsibility for vulnerabilities found after the product shipped?

Let's say for example I ship a widget with an IP address and something like Heartbleed shows up 4 years later and my device is vulnerable. Am I on the hook for patching all the systems in the field? What are my obligations here?


No, they're not state-sponsored teams... they're children, they're far more pernicious than black hat teams, and there's a ton more of them. ;)

The questions you're asking are good ones, and they're ones that we as a society need to start answering. 20 years ago, the idea of always-connected devices littering our homes would've seemed like sci-fi magic, but now it's actually becoming commonplace.

But this isn't the first time we've had this sort of thing happen. Like, how did electronics gadgets become safe enough that we never think twice about the fire hazard of plugging in a mystery wall wart?

Or any kind of product. What we've come up with, generally speaking, is that companies have to try to uphold a standard, can be held liable, and can be put out of business for making unsafe products.

Why it should be any different just because it's a "now with Internet!" product, I don't really know.


But IMO this isn't a "safety" issue with the devices themselves that could hurt the users unintentionally, this is a 3rd party "weaponizing" the system for their own use.

I'm struggling to think of a good analogy, but it seems more like suing Ford because thieves can easily steal their cars for use in robberies.

In essence, the device is being used by an unauthorized 3rd party to harm a 4th party. The device owner in some cases is never harmed or even inconvenienced, and neither is the manufacturer.

It's a shitty situation, and I don't personally know where the line should be drawn, but IMO it should be drawn clearly.


I'd compare it to freon and freon-using devices, which had to be properly handled or the ozone layer got hurt; only now substitute 'product with the ability to connect to the internet that doesn't get security updates anymore' for freon. Such devices should be disconnected from any non-air-gapped network or they're a considerable risk to their environment (the internet).


The "EPA" is actually a perfect "analogy".

We need an IPA (perhaps a different name...). We need someone that will set "standards" for a minimum baseline of "security" to ensure the health of the internet, and dole out fines based on violations.

However they need to be VERY careful. With something like freon it's a physical "thing" that can be regulated. We don't want to regulate "ideas" or even code, that to me seems like a very dangerous thing.

But you are right, we need something that will protect the "health of the internet" like we protect the health of our environment.


How do you add any regulation without regulating the code? Code is our environment.


I meant more that I don't want it to be a crime to write a TCP stream handler without SSL. Or to need a license to write crypto code.

To me it gets dangerously close to regulating ideas.

I'd want it to be based more on consequences. If your product or code is used in an attack, you get fined. No need to dictate the code or software solutions allowed.


Yeah comparing this to toy safety is quite a leap. Actual internet connected toys? Sure. But what I see is a network gateway device in a slew of such devices in a still nascent industry which really hasn't figured out how to even create fully secure network protocols yet, much less hardware. This will have a chilling effect on innovation in the field more than it will improve security in the large.


I think the difference is protecting something against mistakes vs malice.

In other words, a better safety analogy might be the prompt when you do "rm -rf".

I'm not opposed to introducing standards here; I am just saying that it's a different problem.


"Let's have certifications!" is a common answer, but the mechanics are non-trivial. Flip the question around from it's usual perspective... suppose you are a certification authority. Put your business hat on. What would it take for you to be willing to certify a non-trivial product as being secure? Bear in mind that the act of certification puts your skin in the game; if you certify things that subsequently have flaws in them, you will at the very least suffer a reputation hit, and it's not out of the question you'll get caught in the lawsuit and monetary damages crossfire. So... what would it take for you to be willing to sign off on an IP camera?

Being honest with yourself, would you have certified something as secure if it used OpenSSL correctly (assuming such a unicorn could exist), before you knew about Heartbleed? If your answer is no, what would it take for you to be willing to certify something using SSL? (I assume "not using SSL" is an obvious certification black mark.)

What I mean by "put your business hat on" is that I am not trying to make the point that this is impossible. I don't think it is. What I mean is, think business, think risk, think risk management, think about how your business is basically made out of black swan events, think about what it would take for you to put yourself out like that, and put some real numbers on the money and at least mentally come up with a sketch of what it would take.

Speaking for myself, it doesn't take me long before I notice that my certification standards would simply annihilate the entire IoT industry as it now stands, on even basic stuff like "Since you're using C, are you using best-practices static analysis? Can you update your firmwares? How secure is your firmware update process?" Those three questions alone and I've probably tipped nearly the entire industry into a negative cost/benefits analysis. Does that solve the problem? Again, I mean that question more honestly than it probably sounds on the Internet; a case can be made that an industry that is currently basically only able to survive by extensive offloading of what become negative externalities really shouldn't exist, even in a bootstrap phase. Perhaps nuking the industry as it now stands is the best thing we could do in the next couple of years. Pour encourager les autres if for no other reason. Let the industry come up with some best practices, form some "sell shovels rather than dig for gold" companies around building more secure IoT platforms, come back at the problem after that.

A real certification process that really solves this problem is probably unsustainably expensive for the industry as we now know it, xor the certification will be a useless rubber stamp that doesn't solve the problem.


See but I think that's great stuff to consider. If we can't actually make things that are both cost-effective and safe to the public, I'd say those things should not be made and sold to the public.

Maybe it slows down the progress of the industry, but if that progress comes at a price all of us are paying (through the currently-unaddressed externality of shitty code enabling DDOSes around the world), I think that is progress that should be slowed down.

I certainly don't want to wake up one day and find that my employer's sites are gone, and their business (and my livelihood, and my home and family's security) threatened because rando manufacturer X's IoT cameras have taken out a data center for lulz. So... regulate? Bring it on.


>The trouble with your analogy is that state-sponsored teams of children aren't working day and night to find new ways to make your toy dangerous that you could never have thought of at the time.

Human society is continually striving to build a better idiot. Manufacturers often aren't held liable for the first instances of idiocy, but eventually they are.

For example, it was easy for engineers to believe that no one would be dumb enough to stick their hand in a running lawnmower. And for a time if you stuck your hand in a running lawnmower you would not be able to win a lawsuit over it.

But once you know about the new flavor of idiocy, it is your civic responsibility to mitigate it with safety features if possible. If you don't agree, the civil suit you lose will convince you.

If you build products that are vulnerable in well-known ways, you are neglecting your civic responsibility as a manufacturer.


I see you using the phrase "civic responsibility", but I don't actually see an argument for it.


>If you don't agree, the civil suit you lose will convince you.

There you go. This is how you deal with the tragedy of the commons: punish the overgrazers. Industries that neglect civic responsibility get regulated. Particularly bad actors get punished.


I apologize if this comes across as combative: does one need an argument that mitigating well-known safety or security hazards in a product you're making is a civic responsibility?


Yes. I don't take it as a given that a manufacturer is responsible for the product they manufacture. If the manufacturer promises certain things about that product, then yes, they should be held to those promises.

Otherwise, what makes it so obvious that the manufacturer must handle anything at all?


>Am I on the hook for patching all the systems in the field? What are my obligations here?

This complaint seems to have nothing to do with patching. Did you check out any of the actual text? Their complaints are about disregarding security norms from 2007 (backdoors, injection flaws), using hard-coded passwords, and posting their private signing key publicly.


Did you read the article?

> the FTC says the company failed to protect its routers and cameras from widely known and reasonably foreseeable risks.

It sounds like they shipped the devices with known security flaws. This is not at all related to not updating your software when new security flaws are found.


Just a heads up, from HN guidelines:

> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."

I agree here. Your first sentence can be entirely removed and your point will still stand. Except now the receiver won't have to be in a defensive position about their reading comprehension.


> state-sponsored teams of children aren't working day and night to find new ways to make your toy dangerous that you could never have thought of at the time.

I'm not so sure that's true anymore. Lots of kid's toys are internet-connected these days...


> Am I on the hook for patching all the systems in the field? What are my obligations here?

If you sell even a single device to Europe, you are, depending on the category, on the hook for 2 or 5 years to patch every single flaw that appears.

In the first 6 months, if a user reports a flaw, you have to patch it anyway, no matter what it is, unless you can prove that this problem was caused by the user.

After that, you are liable for the rest of your life (and this pierces the corporate veil) for any and all damage your products do, or can be abused for.


If I'm a dude in my workshop making children's toys, I'm on the hook for strict product liability for the rest of my life if I made dangerous toys.

I'm not sure this is strictly true. Standards change over time and old toys might not meet the new standards. Sometimes product recalls occur; other times the change is so broad and insignificant that an advisory may be issued, or no advisory is necessary.

Point being it isn't always so clear-cut.

With software and hardware it is possible for new vulnerabilities to be discovered as new attack methods are developed.

Also, forever could be too strong. No one is going to complain about vulnerabilities in token-ring network protocols.


> They are all in there from the beginning, whether they are known or not

Some things only exist because of the environment.

If I went back in time to 2001 and tried to get people to take CSRF attacks seriously, they would lose me when I started talking about opening multiple tabs.

For me personally, this kind of product liability is like giving me a big bag of money, because of my profession. But this is a huge can of worms opening up, and we don't even know if it's going to lead to better security.


One slight difference is that people actively try to do bad on a large scale with software while toys, cars and houses can be used nefariously but short of ramming your truck into a Christmas market, the scope for large scale damage is usually more limited.


Children are continuously trying to come up with new ways to hurt themselves with toys, and if they do you can expect to get in serious trouble.


Children and the odd litigious adult maybe, but not nation states.


As others have pointed out, children are always trying to do crazy stuff with their toys. But, even if this weren't true, so what? Just because software gets picked on, it should be given a pass? Um, no.


Not a pass, no. However, software attracts a level of attack that is pretty much out of the realms of any other industry. A fair few countries seem to weaponise software in peacetime, and use what they create.


Consumer Product Safety Litigation is huge. I would imagine there's plenty of people out there trying to find ways to hurt themselves in every way imaginable.


The alternative to strict product liability is to make sure the IoT device you provide has a clearly defined and enforced end of life.


Another example would be automobiles. Manufacturers do recalls and provide replacements for many, many years.

Ford isn't on the hook for the Model T, but it does do recalls for decades-old Saturns.


Although it is irrelevant to the point you're making, I feel I should point out that Saturn was a General Motors subsidiary, so it is not Ford that is making Saturn recalls.


Dammit, you're right. Gah.

Well, yeah, in any case, even a defunct subsidiary (of GM, not Ford) is getting recalls decades later.

High-tech has different standards than the rest of the world.


> If I'm a one man contractor building a house, certain categories of mistakes I'm on the hook for for the rest of my life.

Should contractors that built houses in the 60s be held liable for using things like asbestos and lead paint?

Should car manufacturers from the 70s be held liable for using non-tempered glass and other unsafe features? What about CO2 emission standards?

The topic of product safety cannot be divorced from the historic timeframe in which it is considered.


The contractor building a house has specific warranties that vary state by state (in the US).

http://real-estate-law.freeadvice.com/real-estate-law/constr...


This is a ridiculous comparison - there are standards for these things, toys and buildings, provided by governments and insurance companies that you can simply meet and then disclaim almost all liability in the future. Nothing like this exists for security.

No one wants to insure your product is secure, even if they've fully audited it themselves - it's too easy to miss something and make a mistake, especially so in the C-centric world. Software security is a minefield much more so than standard building codes, child safety laws or meeting the best of standards insurance companies may request of those things.

The only alternative here is that we go all-in. Everyone who develops software is individually responsible for it, we all pay insurance for our ability to develop software. Because just about any piece of software can be a huge security liability.

Sounds like a scary world to me, one in which I would have never gotten involved in software development.


Not really. I can think of 2 examples that have decades-long involvement and action.

Lead and asbestos.

And it appears that the FTC is forming the basis of liability in software, which nearly every company doing software doesn't warranty.


Were people who made things from lead and asbestos decades earlier held liable for not knowing they would be found bad decades later?

No (or at least, not to my knowledge), instead people just had to buy new things. Standards change. New security vulnerabilities are discovered. Liability doesn't stand in these cases.

If you can't possibly find every security vulnerability in your product, you shouldn't be held liable for the inability to do so. You have to disclaim that, as I'm sure D-Link does.


> Were people who made things from lead and asbestos decades earlier held liable for not knowing they would be found bad decades later?

My knowledge of this is really fuzzy now (I had to learn about it for a college ethics course) but I believe that for asbestos manufacturers knew about the health hazards for years and covered it up.


Yep. In a lot of these cases, companies knew that X chemicals were really bad for people. But since they're not some academic arm, they most certainly aren't running studies that open them up to liability. It would be the understood 'we know this is deadly, who cares' kind of stuff coming from workers in the organizations.


This analogy is stupid. There is a difference between these things breaking under normal use and being actively attacked.

You're not on the hook as a contractor for a house if it's vulnerable to missile attacks.


I would argue that a device connected to the internet being actively scanned for exploits is normal use. Even if you have a disclaimer that your product should never be connected to the internet, you could still be on the hook.

Blitz went out of business because it could no longer afford liability insurance. Blitz made those ubiquitous red plastic gas containers you see on every landscaping trailer. They were constantly being sued because their gas can could explode if you poured gasoline directly from it onto a fire. They even put warnings and disclaimers directly on the cans against pouring gasoline on a fire.


> Blitz went out of business because it could no longer afford liability insurance. Blitz made those ubiquitous red plastic gas containers you see on every landscaping trailer. They were constantly being sued because their gas can could explode if you poured gasoline directly from it onto a fire. They even put warnings and disclaimers directly on the cans against pouring gasoline on a fire.

This is a very one-sided read of the situation.

The typical Blitz can lawsuit went something like this:

A 3-year old toddler knocked over a blitz can in a basement[1]. Vapours from the can reached the water-heater, which then flashed-back into the can, causing the can to explode, severely burning the child. This would not have happened had the can's nozzle been built with an industry-standard 10 cent flame arrestor, which federal regulators STRONGLY advise all gas can manufacturers to include, but which Blitz had for years refused to take the simple precaution of adding to their product.

It's the "ignoring simple, industry-standard safety precautions" that will get your ass nailed to the wall by a liability judge. Engineers who had worked for the company testified at trial that they were ordered to destroy documentation showing that Blitz was aware of the problem, had done internal testing, and had designed flame-arrestors for their nozzles, and that management killed the project after a change-of-ownership.

[1] http://www.recordonline.com/article/20030919/News/309199995


So would ignoring industry-standard security best practices be the equivalent in this case?


Generally, yeah.

Like, if you built a product today, and (pulling an example out of the air) used bcrypt for password hashing, you wouldn't be liable for that choice down the road -- you used what's generally considered a recommended best practice for protecting users' passwords at the time you released the product.

But if, in 2017, you used an unsalted md5, a lawyer could make the argument that you by now should sure as hell have known better, and that the problems arising from that were easily foreseeable (since most of the industry was aware of the problem and in fact had been writing about it for years).

In this case the FTC is essentially alleging that D-Link's practices were so bone-headed and obviously counter to industry best-practices that they have no real excuse.
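
To make the bcrypt-vs-unsalted-MD5 contrast above concrete, here's a minimal sketch (requires the third-party bcrypt package; the password is just a placeholder):

    # Minimal sketch of the contrast: an unsalted MD5 digest vs a salted,
    # adaptive bcrypt hash. Requires the third-party 'bcrypt' package.
    import hashlib
    import bcrypt

    password = b"hunter2"

    # Unsalted MD5: identical passwords always produce identical digests,
    # so precomputed/rainbow tables and cross-user comparison work.
    weak = hashlib.md5(password).hexdigest()

    # bcrypt: per-password random salt plus a tunable work factor.
    strong = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    print("md5   :", weak)
    print("bcrypt:", strong.decode())
    print("verify:", bcrypt.checkpw(password, strong))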


The active attacks are the equivalent of "weather" on the Internet. It's nothing like protecting against violent crimes. If I bought a new house and the roof leaked after only 5 years of regular weather I would certainly expect the contractor to fix it, and file a construction defect lawsuit if they didn't.


Only if you attach the thing directly to the Internet. Would you drive a regular car through a war zone?


Do you frequently buy home routers for the purpose of not attaching them to the internet?


I don't attach cameras directly to the Internet.


I don't either, but many products are designed to do exactly that. It's called the Internet of Things (not the VPN of things) for a reason! :-)


You are if you advertise the house as invulnerable to missile attacks. (D-Link advertised its routers as secure.)


I bet you'd be on the hook for installing a door without locks though.


No, but you might be on the hook if it is vulnerable to earthquakes.


At this time it appears the FTC is only going after the most egregiously shoddy practices, stuff that is basically new current generation stuff (consumer grade 802.11ac routers, etc) that goes unpatched after the vendor is notified. I would be really surprised if they started going after 3+ year old equipment after the fact, or small manufacturers.

edit: D-Link has been famously shitty for a long time. At least among everyone I know who runs a 'serious' network, D-Link is seen as the exact polar opposite of a proper carrier grade switch or router... D-Link is to home routers as the Trabant is to automobiles.


>edit: D-Link has been famously shitty for a long time.

Some would say they took it to an entirely new level. Check out this disclosure from October of 2011:

https://www.kb.cert.org/vuls/id/924307

Summary: after receiving a burst of traffic it disabled WPA/2 and failed wide open. Apparently it was possible to trigger this fault with auth attempts so an attacker didn't even need to wait for someone on the WiFi to send a lot of data before it failed open. I don't even know how something fails like this. Brain hurts.


Gray's law applies - industrial-grade stupidity/incompetence is indistinguishable from evil.

I'm glad D-Link are being slapped down.


And by "indistinguishable from evil" in this case, you must wonder if it was some ill-architected backdoor.


> D-Link is to home routers as the Trabant is to automobiles

Trabant was actually a very ingenious design that solved a real problem. D-Link is an analogy of a large car company producing, over decades, model after model of Trabant - each laden with the same old problems - while the rest of the industry moved forward.


the company failed to protect its routers and cameras from widely known and reasonably foreseeable risks

That's foreseeable, as in, before the sale.


I think this is what I'd like to see...

The manufacturer of the device is responsible for security updates forever. If at any time a security flaw is discovered in the device, they have two options:

1. Develop and deploy a fix at their own expense, and make it freely available to all owners of the device.

2. Publicly release all source code and documentation necessary for any third party to independently develop and deploy a fix.

This way, any company which still cares about their customers, devices, patents, copyrights, etc which are involved will pay the expense of fixing the problem, as the cost of keeping that which they value. And any company which does NOT care about those things can simply walk away from it all at no cost to themselves.

So, as a dude in your workshop, after n-years have passed and you feel the responsibility is too onerous, you opt for #2 and wash your hands of the mess.

As an interesting side-effect, this opens up a business opportunity for source and documentation escrow services. Because if you've somehow managed to lose some or all of your source and documentation, you've also lost the ability to choose door #2.


>“hard-coded” login credentials integrated into D-Link camera software -- such as the username “guest” and the password “guest” -- that could allow unauthorized access to the cameras’ live feed;

This doesn't seem to be a complaint about future support, but security at the time it was sold.
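
For what it's worth, auditing a camera you own for this kind of hard-coded account is trivial. A minimal sketch (the URL, path, and credential list are placeholders; requires the third-party requests package):

    # Minimal sketch: check whether a camera you own answers to a well-known
    # default credential pair over HTTP basic auth. URL and path are placeholders.
    import requests

    CAMERA_URL = "http://192.168.1.50/video.cgi"   # hypothetical device and path
    DEFAULT_CREDS = [("guest", "guest"), ("admin", "admin"), ("admin", "")]

    for user, pw in DEFAULT_CREDS:
        try:
            r = requests.get(CAMERA_URL, auth=(user, pw), timeout=5, stream=True)
            status = r.status_code
            r.close()
        except requests.RequestException as e:
            print(f"{user}:{pw} request failed: {e}")
            continue
        if status == 200:
            print(f"WARNING: {user}:{pw} grants access (HTTP 200)")
        else:
            print(f"{user}:{pw} rejected (HTTP {status})")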


I think the rub is that all security bugs are bugs at the time of sale/update. Discovered or otherwise.

But I agree with you here because at the time of sale, this feature was widely agreed upon to be a bad practice and was also a known feature. So it's not like someone discovered a clever 0-day DLink didn't know about. DLink knew what it was doing, knew that best practices said not to do the thing it was doing, and did that thing anyways.


As a consumer, I would like for my devices to not be security nightmares for, at least, the useful life of what I'm buying. Unless you plaster a giant warning on the product, ~5 years from the time of sale seems reasonable to me for many tech related things.


standard product lifetime seems reasonable. If IE6 has a security bug, the answer is "maybe use newer windows + edge". If xbox360 has a security bug, the answer might be "use xbox one".

This could expand to a question of "what is the expected lifetime of a product"... Is it when the replacement is released (don't like the security bugs? get camera 2.0!)? Is it when the new camera becomes standard (there are 1000000 camera 2.0s out there, and 100 camera 1.0s... maybe you should get on board)? Or is it when the failure rate of the product is above a threshold (Your camera 1.0 is even still working? Most of them had their CPUs burn out by now; don't expect an update, in fact you should be grateful/amazed it's still running)?

Aging hardware is one thing ("modern security practices use a new cryptography algorithm that literally won't run on this device") but aging software is another ("we just don't want to update the software, buy our new product").


It would be interesting if companies had the legal obligation to fully open-source any software associated with hardware that they'd EOLed or abandoned. Not only could interested parties take up the cause of continuing support, but because the OEM would know that its hack-ridden source code would eventually be published, it might have an incentive to write better software to begin with.

This might not be practical because the OEM couldn't include source to third-party components that aren't EOLed. But it would be a start.


I agree that there should be some kind of guideline on how long product support should be expected. As long as those guidelines are reasonable (90 days? 2 years? 5 years?) I don't think it should be considered onerous for anyone who is selling security-conscious products like networking equipment.


This is all the more reason that we need standardized software/OSs for these IoThings. Not only do the companies not want to be on the line forever, but if they allow users to modify it in sane ways the users can do the work. Eg, if it's just small linux installations with a very basic http api in open source libs, it can probably be patched with relative ease.

Instead, everyone is doing weird crap, everything is complex. I hate it as a user.


I don't think the FTC is asserting that companies can't discontinue support for old products.


It seems like companies should be on the hook for recalls, but they should be able to outsource this, by buying some kind of insurance to cover the recall. Then it becomes the insurance company's job to assess the risk and price accordingly.


In which case the cost of recalls would include the profit of insurance companies.


The user should be responsible for updating the firmware.


Last year I looked into how many exposed D-Link devices there are on the Internet and created this overview page:

https://dlink-report.shodan.io/

There are basically ~500,000 D-Link devices directly connected to the Internet and exposing a service that a person on the outside can connect to. Most of the devices I saw back then, and still do now, are their standalone webcams.
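
If anyone wants to reproduce a rough count themselves, here's a minimal sketch using the shodan Python client (pip install shodan); the API key is a placeholder and the query is an assumption, not necessarily the one behind the report above:

    # Minimal sketch using the shodan Python client. The API key is a
    # placeholder and the query is an assumption, not the report's exact query.
    import shodan

    api = shodan.Shodan("YOUR_API_KEY")

    try:
        result = api.count('product:"D-Link"')   # hypothetical query
        print("Exposed devices matching query:", result["total"])
    except shodan.APIError as e:
        print("Shodan API error:", e)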


This is great AND it's a poor substitute for tort reform that permits private companies and individuals to collect damages for hardware/software that is negligently or fraudulently insecure. Products liability lawyers are very effective at changing industry behaviors towards safety, to the lawyer's financial benefit but, ultimately, for end-users' security and safety. IANAL.


I agree but you seem confused about the phrase/concept of "tort reform", which in the abstract could mean many things BUT as usually touted by politicians, generally involves making it harder for companies and individuals to sue other companies/individuals for damages.


Maybe they're from the future and it's long since been changed to prevent individuals from bringing suit against corporations. So in that future "tort reform" means repealing the previous reform.

Honest timetraveler mistake.


Interesting, most users of the phrase "tort reform" seem to want the opposite of what you want. You might want to avoid using that phrase in your argument (which I agree with!).


This should happen more often.

The software industry must come to the same level of responsibility and quality expectations of other industries.


This. But when we go here, and I believe we will, I am concerned about what happens to our ability to put alternative firmware / OS / SW on systems we've purchased. We already see this in regulated industries such as automotive. Part of the motivation not to let us replace software in an automotive infotainment system surely is a concern over the liability this might cause. It would be a sad day indeed if we regulated away our ability to replace vendor-provided crapware with OpenWRT, Lineage OS, or whatnot.


In my opinion, progress in that domain should come in the form of mandatory warnings to end-users when selling technologies that do not meet acceptable standards.

A big warning on the package would be enough to turn away average users - thus forcing companies to comply - but wouldn't interfere with hacking activities.


IMHO, we can draw parallels with the RF spectrum. The global network and the RF spectrum are both globally shared resources, so some rules for the RF spectrum should apply to the global network too. If your system is below some minimal power level, e.g. fewer than 1000 hosts, you are free to do what you want without a license, unless you are harming someone. For more than 1K network hosts exposed to the global network, minimum safety requirements must be met.


Nice analogy. Expanding a bit, if a popular walkie-talkie brand from 20 years ago starts failing today in a way that switches it to aviation frequencies, I believe the manufacturer would be on the hook.


"Voids the warranty" comes to mind.


This is a reasonable stance, but remember who will bear the cost: the consumers. Demanding security for all these cheap plastic boxes will hopefully make it harder for DDoS attackers to gain cheap weaponry, but it won't come for free. I think it's too vague to just demand that the devices be "secure", as there is an infinite spectrum of security that these devices could live up to.

Also, let's say, theoretically, that I discover a software bug on the ftc.gov website that allows me to get the ftc.gov servers to execute a DoS attack against some target. Should I be able to sue ftc.gov for not securing their site well enough, or does this policy only apply to physical devices?


As a consultant working in the security industry, I get asked this question a lot. "How do we minimize liability when we know that security is never perfect?" The answer is best practices. I have never seen a company held liable for any breaches that occurred while they were following best practices. As long as you can show that you decided on a set of best practices before making/implementing your product, show that your product follows these best practices (or documenting why it doesn't), and show that you have a process to evaluate these best practices periodically against the latest industry standard, you're basically off the hook.

If FTC.gov was vulnerable to Heartbleed in March of 2014 and you suffered damages related to someone exploiting that, in practice FTC.gov holds no responsibility for that. They were following best practices, and no one knew there was a vulnerability. If they're vulnerable today and you suffer damages due to that being exploited, then you could make a case against them. They should have known better by now.

As for consumers bearing the cost, that's working as intended. Good security costs a little money. How many people do you hear complaining that consumers are bearing the cost of seatbelts and airbags installed in cars? We're willing to pay the few extra dollars because we all know how dangerous it is to not have those things. We could make cars a lot cheaper if they didn't have crumple zones, air bags, seat belts, roll cages, or steel frames. But that's not the world we want to live in.


The current state is not free either - the attacks cost money too, although it's paid by the targets and not the manufacturers. But ultimately it's paid by consumers.


> How many people do you hear complaining that consumers are bearing the cost of seatbelts and airbags installed in cars? We're willing to pay the few extra dollars because we all know how dangerous it is to not have those things.

This is not so black/white. Car safety constantly becomes more advanced over the years and it has become a competitive feature of cars to get "best safety rating". So they still spend a lot of R&D money to stay at the top of the line, regardless of fixed regulation.

Often best practices come baked into frameworks that we use now, such as Rails being relatively "secure by default". Which makes adoption of them easier.

But at the same time cyber security evolves quickly too and software updates quickly underneath it, exposing new threats.

So I believe it will always be an R&D investment that will evolve and change over the course of software's lifespan.

The question is about incentives. Car manufacturers have a big incentive to make their cars safe because people know how dangerous cars are and consider that as part of their high-dollar purchase. There is also a big consumer reports style industry around car safety to inform users. Also car manufacturers (and physical products in general) actually kill people increasing the liability of a broken product.

Most software lacks those incentives for a variety of reasons. Such as low cost barrier to entry, lack of understanding of security, lack of tactile differences, low time investment evaluating alternatives, low cost to publishers when hacks do happen, etc.

I'm all for the cyber security industry maturing and software security best standards being taught to every developer. But I don't think we have the costs, consumer understanding, or infrastructure entirely in place or figured out yet to have an optimal effect.

At the same time it's very possible it might remain a constantly moving target and be a larger challenge than most other engineering safety standards, like buildings or vehicles. Something to consider.


You're 100% right, which is why "periodic evaluation of industry standards" is such a critical part of any SDLC or best practice. Tech moves faster than other industries of course, but in general technology is just a sped-up clone of basically anything else. We didn't (and couldn't) make a definitive set of best practices the day we invented the computer, we made them up as we went along. Like the car industry, where safety standards were tightened as technology improved.

We do have companies already making a stand on security, using it as a competitive advantage. Apple, for one, has staked their name on their security in the past few years. That's their competitive advantage: how secure TouchID and their secure enclave are. You're right that consumers don't see security as entirely necessary yet, but it took a long time for seat belts and airbags to become standard in cars, and even longer for them to be appreciated. Even today, many people refuse to wear seat belts.

But bear in mind, automotive security is a moving target too. Seatbelts, then airbags, then roll cages, then crumple zones, then backup cameras, then backup sensors, then automatic parking, then self-driving. There's very little that's new in the world, it's almost always been done before.


To add to that, here is one example of a company that validates security in high-integrity systems.

http://www.ldra.com/en/software-quality-test-tools

Namely for C, C++, Java and Ada stacks on the domains of medicine, trains, aviation, automotive, factory automation.

This needs to be extended to other business domains.


I agree. There's no accountability for such neglect. There should be a fine, or even better, a tax on bad security.


True, but who will write the guidelines, what will they say, and how will they be implemented and enforced?



Does anyone know if the FTC is backdooring a federal warranty carve out here for software with respect to security? Generally devices that come with software have specific disclaimers of all warranties for fitness and damages (either direct or incidental). And California has held such exclusions to be allowed[1]. So if the FTC prevails and D-Link is held to be liable for their software flaws, would that then create a place where consumers could sue software producers when their software broke even though there are specific exclusions in the TOS and EULA?

[1] http://wmblaw.blogspot.com/2013/05/are-contractual-exclusion...


The FTC is suing D-Link for "unfair or deceptive acts or practices in or affecting commerce" because they claimed their routers were secure, but then failed to follow even the most rudimentary best practices.

I imagine that if D-Link claims that their software was provided without any "warranties for fitness and damages", the FTC will argue that such a claim definitively proves that D-Link's marketing of their security features was unfair and a misrepresentation.


I get that, but every software package I have ever bought explicitly disclaimed it would do anything in the EULA and TOS documents. So if they are successful at suing D-Link does it create a carve-out that says "If you tell people your software does x or has feature or attribute y, and it doesn't actually do that or have that, you are liable for misrepresenting your product."

Because if it does create that carve-out, then there are going to be a BOATLOAD of lawsuits against software companies.


> So if they are successful at suing D-Link does it create a carve-out that says "If you tell people your software does x or has feature or attribute y, and it doesn't actually do that or have that, you are liable for misrepresenting your product."... Because if it does create that carve-out, then there are going to be a BOATLOAD of lawsuits against software companies.

They also need to prove material harm to consumers. So lawsuits will almost certainly focus on security, privacy, and outright fraud/theft.

I really do not understand why this bothers people so much. Hire competent engineers. Follow best practices. It's really not that hard, and it's in your company's best interest anyways.

(Also, since it's come up in other sub-threads, the name of the FTC is the Federal TRADE Commission, and they regulate commerce. OSS projects that don't engage in commerce aren't subject to these regulations.)


The commoditization of this equipment means that any company doing things well would quickly be outmaneuvered by those who do not care enough to do the same.


If that's true, then market mechanisms are insufficient to curtail the existence of huge IoT botnets.

That's a strong motivator for regulatory intervention.

But also, the sorts of minimum safeguards that are being requested by the FTC don't actually cost these companies anything more to implement. Requiring credential changes on first login? That's maybe 100 LoC. A basic once-over to catch any super obvious command injections in a relatively simple web UI? Maybe a day or two of engineering effort. Not posting private keys on a public website? Properly train one set of people and make them gatekeepers -- a bit of overhead, but again, extremely minimal when measured against overall product lifecycle costs.
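
To make "maybe 100 LoC" concrete, here's a rough sketch of forcing a credential change on first login (a generic Python illustration, not anyone's actual firmware; the config-store helpers are made-up names):

    # Hypothetical sketch: refuse to do anything until the factory default
    # password has been replaced. Helper names are illustrative only.
    import hashlib, json, os

    CONFIG_PATH = "/tmp/router_config.json"  # stand-in for the device's config store

    def hash_password(password, salt):
        # Store a PBKDF2 hash instead of the password itself.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000).hex()

    def load_config():
        if not os.path.exists(CONFIG_PATH):
            # Factory state: no password set yet; it must be changed before use.
            return {"password_hash": None, "salt": None, "must_change_password": True}
        with open(CONFIG_PATH) as f:
            return json.load(f)

    def save_config(cfg):
        with open(CONFIG_PATH, "w") as f:
            json.dump(cfg, f)

    def handle_login(cfg, supplied_password):
        # Serve nothing but the "set a new password" page until the
        # default credentials have been replaced.
        if cfg["must_change_password"]:
            return "redirect:/change-password"
        expected = hash_password(supplied_password, bytes.fromhex(cfg["salt"]))
        return "ok" if expected == cfg["password_hash"] else "denied"

    def handle_change_password(cfg, new_password):
        salt = os.urandom(16)
        cfg.update(password_hash=hash_password(new_password, salt),
                   salt=salt.hex(), must_change_password=False)
        save_config(cfg)

Nothing clever there: just a flag the web UI checks before serving anything else.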

FTC isn't asking for formally verified secure kernels with locally sourced state-of-the-art crypto and in-depth end-user training by top psychologists. They're asking for very simple, easy to implement safeguards that a novice developer could implement in a few days. These companies surely spend more than the entire cost of compliance on executive retreats and office holiday parties...

Device manufacturers do have to hire one or maybe two engineers who are competent at security. But it's not like those people necessarily cost more than whoever they have on payroll ATM. And in fact, it's likely that those people already are on payroll at these companies -- in which case, all they have to do is start prioritizing and listening to security-minded voices already in their organization.


I think, in support of this point you are making, the key is that the FTC is not in charge of the Internet, and doesn't have to be trying to protect the Internet with this action. Large-scale consumer frauds are more the thing the FTC goes after.


After the Mirai botnet disaster, the government decided to step into the IoT security landscape. I am not sure how far they can go to force consumer electronics manufacturers to add security to their products. Some CE manufacturers do not control the bulk of the software in their products. They just take the kernel/libraries/binaries provided by the SoC vendor and glue them together. SoC vendors are even more careless about security, and they often provide outdated and unpatched software. Even if SoC vendors or CE manufacturers start patching the vulnerabilities, it is very difficult to implement a secure firmware upgrade mechanism. A glitch in automated firmware upgrades can be very risky.

Perhaps the FCC could start addressing these issues as part of certification requirements. However, there are many challenges. How will the FCC enforce Secure Boot, secure firmware upgrades, and long-term product support? What will the penalties be? How will they deal with unbranded (or foreign) brands?


Now if only ISPs could start notifying/disconnecting compromised users until they fix their shit.


So the customer throws away every networked device in their home (because they have no idea which one got them banned) and buys new ones from different vendors... which are also insecure and result in their ISP banning them again? And ISP bills go up to cover the massive security scanning infrastructure needed?


> So the customer throws away every networked device in their home (because they have no idea which one got them banned)

If a breaker in your kitchen trips do you throw out every single appliance? If your water bill is high do you replace every single sink and toilet?

> and buys new ones from different vendors... which are also insecure

Maybe, after getting burned by insecure, unsupported devices, they find out which vendors support their products and don't ship devices with unfixable backdoors before buying new ones?

> And ISP bills go up to cover the massive security scanning infrastructure needed?

A bunch of $30 cameras can scan and infect the entire internet, what makes you think the infrastructure needed to find these devices is massive?

ISPs already have everything they need to find out which of their customers are compromised, they just don't give a shit.


and where is the downside?


I'm thinking of an elderly couple I know; they have no hope of fixing these problems and it's not reasonable to expect otherwise. They shouldn't have to fix them; the people who make the products should do it.


And the people who have these devices are not going to even care about fixing their devices until their ISP disconnects them for spewing crap onto the internet.

I've been saying for 10+ years that there needs to be something like the DMCA for accountability for security issues.

A copyright owner can send a takedown notice to an ISP alleging infringement and have action taken, but I can't do anything about the 1,000,000 hosts that scan the shit out of us every day.


> the people who have these devices are not going to even care about fixing their devices until their ISP disconnects them for spewing crap onto the internet

It's not a matter of caring; there is nothing they can do about it. They don't know what "device" is causing a problem, how that could even happen, which electronics are even connected to the Internet, how something could be connected to the Internet, or even what "Internet" or "electronics" mean. They don't know what a web browser is and do not understand what a webpage is.

Abusing innocent people doesn't solve the problem and, more importantly, is unjust.


Why do you care about being scanned, though?


I think this is the layer at which this problem should be dealt with.

What we are in for right now is burdensome, expensive and vague regulation that amounts to security theater.


It's interesting to see the FTC bringing this based on injuries to the consumers themselves. We've previously discussed a regulatory body such as the FCC regulating the externalities of networking bugs. Here, the damages are internal to the consumer. It makes sense for the FTC to act here, where individual damages could be very difficult to prove, or even to discover. But query whether the potential for a class action suit (and the lack of one) has already suggested the proper result here. While this claim seems well placed, hopefully both the FTC and the court weighing the claim will establish a reasonable "foreseeability" limit on the liability, such that claims like this don't quell innovation in the space.


The case wouldn't get that far. It'll settle before there's any assessment of liability, probably ending with a consent order and a suspended judgment. So basically, they have to submit to compliance audits once a year, and there's an order in place that triggers liability if they violate the same law again. That accomplishes your foreseeability principle.


I have a Netgear router. Netgear has end-of-life'd it and will not provide any more firmware updates. It has security vulnerabilities. There is DD-WRT firmware available for it. However, Netgear has since modified their firmware to disallow installing DD-WRT.


When that happened to my WRT54GS, I managed to downgrade to an older Netgear firmware that did not do this check. Then, from the older firmware, I could flash DD-WRT, no problem.

I now build routers / firewalls with a free OS to ensure I do not get burned again.


How about all these smartphones being sold with unsupported broken versions of Android?


Mostly not remotely exploitable or not easily, so they can probably get away fine. There's no giant Android botnet eating the internet.


That still doesn't excuse it in the least. Forcing manufacturers to provide security updates for these devices for a reasonable amount of time (no less than 6 years) would greatly slow planned obsolescence and may even force manufacturers to actually agree on driver and kernel standards instead of their mess of non-upstreamable garbage patches.

Right now we're lucky to get even 2 years of normal updates (much less security updates) on any Android device.


Are there any big vulnerabilities you're thinking of?


Anyone know what reasonable steps D-Link failed to take?

I see no mention of it in the article.


Here are the things the FTC faults D-Link for (https://www.ftc.gov/news-events/press-releases/2017/01/ftc-c...):

1. “hard-coded” login credentials integrated into D-Link camera software -- such as the username “guest” and the password “guest” -- that could allow unauthorized access to the cameras’ live feed;

2. a software flaw known as “command injection” that could enable remote attackers to take control of consumers’ routers by sending them unauthorized commands over the Internet;

3. the mishandling of a private key code used to sign into D-Link software, such that it was openly available on a public website for six months; and

4. leaving users’ login credentials for D-Link’s mobile app unsecured in clear, readable text on their mobile devices, even though there is free software available to secure the information.

There are pretty common-sense mitigation techniques for all of these.

For 1,

* Don't ship with default credentials (change credentials per-device and slap a sticker on the device with user/pass)

* Force credentials to be changed on first login.

* To avoid attacks targeting specific devices via crafted search engine queries, add a robots.txt and/or use HTTP authentication.

For 2, sanitize user input (see the sketch at the end of this comment)

For 3, don't publish private keys on your public-facing website

For 4, encrypt stored credentials (or don't store credentials at all)

None of these are particularly difficult to implement. In fact, they're all pre-freshman-level, obviously boneheaded security mistakes that we've known about, and known how to prevent, since at least the '90s...
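
For item 2 specifically, here's roughly what the fix looks like. A minimal sketch in Python (the "ping diagnostic page" scenario is an assumed example, not D-Link's code): validate the input and never involve a shell.

    # Hypothetical "ping" diagnostic handler. Validating the target and passing
    # an argument vector (no shell) is what closes the command-injection hole.
    import ipaddress
    import re
    import subprocess

    HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

    def is_valid_target(target):
        try:
            ipaddress.ip_address(target)            # literal IPv4/IPv6 address
            return True
        except ValueError:
            return bool(HOSTNAME_RE.match(target))  # or a plausible hostname

    def run_ping(target):
        if not is_valid_target(target):
            raise ValueError("invalid host")
        # No shell is involved, so input like "8.8.8.8; rm -rf /" is never
        # interpreted as a command; it simply fails validation above.
        result = subprocess.run(["ping", "-c", "4", target],
                                capture_output=True, text=True, timeout=30)
        return result.stdout

That's a day of work at most, including testing.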


Correct me if I'm wrong, but isn't #4 just encouraging security through obscurity? If I hacked someone's phone, I can read those credentials, whether that amounts to reading them off the device, sniffing the connection, injecting code into an app, etc. Not encrypting the credentials on the device is not a security offense, because encrypting them isn't a valid security mechanism in the first place. Go ask Mozilla, Google and Microsoft about how well their encrypted credential stores work against freely available NirSoft tools or actual malware like iStealer.


> If I hacked someone's phone, I can read those credentials, whether that amounts to reading them off the device, sniffing the connection, injecting code into an app, etc.

Assuming attacker has root while the customer is logging in. But what if the attacker only has access to the file system (e.g. in the case of a stolen phone, or a broken jail/sandbox)? In that case, requiring a password/PIN to unlock the stored credentials does provide some protection to the user.

Storing passwords in the clear is always a terrible idea. If you must store credentials, at the very least they should be encrypted when at rest, and the system should require a sufficiently strong password/pattern/PIN to unlock the key chain. That way the attacker has to intercept and/or brute force, rather than merely gain access to, the device.
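
For illustration only (a sketch assuming Python's third-party `cryptography` package; the names, paths, and parameters are made up, and this is obviously not D-Link's code), "encrypted at rest behind a PIN" can be as small as:

    # Encrypt stored credentials with a key derived from a user PIN, so an
    # attacker holding only the file has to brute-force the PIN.
    import base64, os
    from cryptography.fernet import Fernet, InvalidToken
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(pin, salt):
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600000)
        return base64.urlsafe_b64encode(kdf.derive(pin.encode()))

    def save_credentials(pin, secret, path="creds.bin"):
        salt = os.urandom(16)
        token = Fernet(derive_key(pin, salt)).encrypt(secret.encode())
        with open(path, "wb") as f:
            f.write(salt + token)  # the salt is not secret; store it alongside

    def load_credentials(pin, path="creds.bin"):
        with open(path, "rb") as f:
            blob = f.read()
        salt, token = blob[:16], blob[16:]
        # A wrong PIN raises InvalidToken instead of revealing the secret.
        return Fernet(derive_key(pin, salt)).decrypt(token).decode()

It's a small hurdle, but it is strictly more than zero.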

Ultimately, the answer is very simple: if you can't store credentials securely, then don't store them at all!

So you may be correct that DLink should not have stored those credentials at all. But just because there's no secure way to implement a convenience feature doesn't mean it's reasonable to go ahead and store passwords in the clear.


Is the mobile app storing the credentials so the user can send them to an outside service? If the user checkmarked the box for "save my password", then

>requiring a password/PIN to unlock the stored credentials

defeats the purpose of a feature built to not require a user to remember a password. "Save my password" features are very common. I use that feature of Firefox all the time. I fully understand if someone gets access to my filesystem, then they get my passwords. (And they also get all the other important stuff on my filesystem like my photos and documents.)


> defeats the purpose of a feature built to not require a user to remember a password

And yet millions of people use password-protected password managers.

Including Firefox, which DOES encrypt your passwords, and which DOES allow you to optionally set a master password. And if you do set that password, it DOES use that password as an encryption key. And in that case, if someone gets physical access to your device, they'll need to install a keylogger or brute force your master password. Which requires a lot more effort than simply copying a file, providing more opportunities for the attacker to fail, for you to detect the attack, or for investigators to track the source of the attack.

And no, that won't save you if there's a critical security bug in your OS's kernel. Which there probably is. But that doesn't mean you give up on a sensible and easy to implement security policy.

The calculus is very clear here -- encrypting the passwords costs the developer and the user almost nothing (maybe 10 minutes to 2 hours of time for a competent developer; maybe a penny or two of electricity across the lifetime of the device). And there is a definite benefit to the overall security of the system -- the attacker needs to clear one more hurdle to get device passwords, and security-sensitive consumers can make that hurdle a rather large one (rooting the device and installing a keylogger).

No, it's not a silver bullet.

Yes, it is strictly more secure than the alternative.


> they'll need to install a keylogger or brute force your master password

Not true, they can simply dump the RAM of your browser while your passwords are decrypted, which is probably the case when you run their malware anyways. It's a completely pointless half measure to encrypt those passwords. Worst case they have to patch it or hook it with something like Frida to do it next launch cycle. Keyloggers just ain't targeted and automated enough for me at least, but it's fairly trivial to just grab what you need from memory and run.

In fact according to this bug, the master password is stored in cleartext in RAM when your firefox is logged in so a simple string search on a memory dump could potentially do the job for someone without even the smallest RE capability:

https://bugzilla.mozilla.org/show_bug.cgi?id=539608

If you have the RE capability to do one, it's not really any more than is required to do the other. It's not another hurdle. Just doing it at runtime instead of purely on disk.


> If you have the RE capability to do one, it's not really any more than is required to do the other. It's not another hurdle. Just doing it at runtime instead of purely on disk.

Phones/laptops are stolen. When they're stolen, they're turned off -- often by yanking a battery -- to prevent lockout/wipe/locating features. That wipes RAM. And after they're stolen, they're rarely returned for use.

Also, file system access doesn't imply access to a RAM dump. Especially on Android/iOS. And on a server, there are whole classes of vulnerabilities that provide RO access to certain files but can't be trivially exploited to provide access to a RAM dump.

So on many relevant platforms, I don't think it's true that access to a portion of the file system == access to a RAM dump for a particular process.

I agree that nearly all password storage mechanisms have serious problems, and that storing passwords is not ideal. However, if you're going to do it, then encrypting at rest is one small, extremely cheap security mechanism that does prevent certain attacks.

I think maybe you're arguing that we shouldn't store passwords in app-specific password managers at all, which I don't necessarily disagree with. But I do disagree that encrypting at rest is "not another hurdle". For many attack profiles, it is.


> does prevent certain attacks.

No, it doesn't. If I have filesystem access in the sense you're talking about, I can just replace the application's executable with a malicious or modified one, for example, which will still allow me to steal your password.

Correct me if I'm wrong here and there is some exploit class which might allow reading of but not writing to private application data and executables? I certainly haven't seen one and this would be the only scenario in which what you're saying has an impact.

As for the theft scenario - you could just as easily prevent the theft risk by setting a password on your device which would protect it at the platform level - where it should be. It'd also prevent other private data leaks which I'd say are higher risk than your router password which is useful only on your internal network, like your SMS messages, email accounts, authenticator tokens, etc. None of which I might add ask for passwords, dumping Google Authenticator tokens is as simple as running a SELECT on the sqlite database, they're not encrypted at all.


If the mobile app discussed is only used for logging into the router, then making it have an option to save the one password you use in it and encrypt it behind another different password would be pretty pointless.


> Assuming attacker has root while the customer is logging in. But what if the attacker only has access to the file system (e.g. in the case of a stolen phone, or a broken jail/sandbox)? In that case, requiring a password/PIN to unlock the stored credentials does provide some protection to the user.

Has such a case ever occurred? I can't think of one. Of course, if you have filesystem access you can replace the application on disk, patch it to steal the password easily.

> Ultimately, the answer is very simple: if you can't store credentials securely, then don't store them at all!

Many users want it anyway, and it's secure enough that, unless you're facing remote attacks or the security level of a Windows desktop where people constantly download and run executables, it's unlikely to be an issue.

It's a trade-off, of course, but I don't think that means it's an unreasonable one to make.

Of course, you should probably go ahead and tell this to browser vendors. Script kiddies are running around selling accounts because of how they store passwords - encrypted, but as a total half measure. Personally, I don't think it's reasonable to hold D-Link liable for a trade-off that even the biggest companies in tech make all the time. And one which is much more costly in those cases too!

There's just no secure way to do this on today's computers; we must rely on the security of the platform here. If storing passwords in this way is a problem, it's a platform security issue or a user issue, not an application issue.

TPMs have the potential to solve this, but at some scary costs which may adversely impact security in other ways, like preventing reverse engineering.


Yes, encrypting credentials is far from a silver bullet. Yes, much more work is needed on the platform side.

No, that doesn't make it OK to store credentials or other sensitive data in the clear.

> Has such a case ever occurred? I can't think of one.

Stolen encrypted drive/stolen drive with encrypted passwords on it? It's happened to me, personally (a usb key with my .password_store on it).

> Personally, I don't think it's reasonable to hold D-Link liable for a trade-off that even the biggest companies in tech make all the time.

First, I'm not aware of any tech giants who store passwords in the clear.

Second, just because the heavy weights do it doesn't make it reasonable.

> Of course, if you have filesystem access you can replace the application on disk, patch it to steal the password easily.

There are a lot of plausible scenarios in which an attacker gets RO/RW access to a portion of a file system but doesn't have the root or the capability to MITM/replace a binary.

> There's just no secure way to do this on today's computers

I think FTC's issue is this: "look, there's this really really simple thing that you could have done. Obviously it's not perfect, but doing this would have cost you basically nothing and would have at least made attacks a little bit harder. And even though it's basically free to implement and a very common practice, you didn't even bother."

Now, perhaps DLink had this discussion internally and decided that they agree with you. In that case, I'm sure those internal discussions will come out during the lawsuit, and a lay jury will have the difficult task of assessing some variant of the argument we're having here.

(Also, regardless of this one issue, shipping with default credentials w/o requiring a password change, and publishing a private key on a public website, are pretty egregious breaches of known best practice at time of sale. All while describing the device as "secure"...)


> First, I'm not aware of any tech giants who store passwords in the clear.

Mozilla, Microsoft and Google all effectively do this in their browsers. Don't believe me? Run the nirsoft webbrowserpassview. With just the files in your profile directory someone can dump your passwords. Sure, they're not "in the clear" but in practice it's just as bad if not worse. The number of accounts stolen with things like istealer is ridiculous. You can literally grab someone's entire lifetime of accounts in a fraction of a second.

I'd argue this isn't something a good developer would even attempt, as it provides little more than a false sense of security to users. Good developers should feel safe to absolutely refuse half measures like this, as I do.


> Run the nirsoft webbrowserpassview.

This won't work if the user sets a master password (which I do).

> Good developers should be safe to absolutely refuse half measures like this as I do.

Yes, I mentioned this as a reasonable alternative in my original post.


Modern Android/iOS devices have hardware-backed certificate storage, which (if used by an app) would prevent an attacker from getting access to private keys.


#4 is very widespread and used to be standard practice, under the excuse that you should use chmod to control access, and that access to the encrypted file probably comes with access to the key since they both have to be present on the same device.

One old example: fetchmail

One new example: kodi/xbmc


You are right that these aren't difficult to implement, the problem is that we now have unelected government dictating coding standards by fiat, and ex post facto fiat at that.

This precedent creates unlimited liability for so much software and hardware that gets created.

It is the ability to warranty against this type of stuff that has let OSS take off. I expect this to have a chilling effect.

In my opinion we should be against this the way we were against software patents and CISPA.

Just because as devs we are mad when our bosses make stupid security decisions, doesn't mean this is the way to handle it.

I believe this is a very different situation than product liability we are used to. In these cases damage is the result of malicious attack by criminal actors.


> unelected government dictating coding standards by fiat

An elected Congress gives agencies regulatory authority and provides oversight. If Congress chooses, it can dismantle the FTC or pass a law stating that DLink cannot be held liable.

Congress enables regulators to do things you don't like, because it's impossible for Congress to micro-manage the entire Federal bureaucracy and military. Congress is well within its Constitutional rights to empower third parties to enact and enforce certain classes of regulations.

I understand why you find that frustrating, but it's dishonest to characterize that situation as an "unelected government". The government is elected, and regulatory agencies must answer to those elected governments.

> This precedent creates unlimited liability for so much software and hardware that gets created.

You know what? Tough luck.

Most people who build things professionally and sell those things are held liable for their work. Take a look at what DLink actually did, and tell me that their engineers/management was behaving in a responsible and reasonable manner.

Maybe it's a good thing that the gov't is forcing software and hardware companies to start taking ownership of their craft. The market is certainly failing to price in security.

> It is the ability to warranty against this type of stuff that has let OSS take off. I expect this to have a chilling effect.

On the other hand, I highly doubt this case or any resulting precedent (if any) will have a noticeable chilling effect in OSS.

I guess time will tell which of us was correct...

> damage is the result of malicious attack by criminal actors

And yet, those attacks were completely foreseeable since at least the early 00's... I could've told you the day that DLink released these webcams exactly what would happen. I'm sure plenty of DLink's engineers tried to tell their own management...

> Just because as devs we are mad when our bosses make stupid security decisions, doesn't mean this is the way to handle it.

Perhaps this is the best jumping off point for a productive conversation -- what's your proposed solution?


My solution is we prosecute botnet operators and malicious actors. We have the technology. The NSA has the ability to know who they are. Countries that harbor them should face sanction.

And ISPs could stop this DDoS activity. We should police it at that level. A system that ISPs belong to, where a DDoS can be reported by a reputable web hosting company or service provider, could stop DDoS in its tracks.

Enable two-factor auth at the ISP level: if a report of DDoS comes in, send an SMS to the authorized user of the account.

And punish companies that practice bad security in the marketplace. We are getting more tech-literate as a people, as a society. The market might not be fully pricing in security right now, but the issue is new and awareness is just beginning.

Let's be clear: this is selective enforcement. D-Link is one operator, using practices that were very common at the time, and still are.

No one who created these devices was ever told that these practices were illegal, or would result in sanction. And now we are fining them. D-Link will weather this as it is big enough. But small operators wouldn't.

So the result of this government action is that startups and crowdfunded projects are less viable. Now startups and Kickstarter projects need a line item in their budget for security insurance, and an insurer has to take on this risk, because it won't be policed or litigated evenly, and there's no telling what the security standards of tomorrow will be.

So startups will have to buy security insurance. And that will be expensive.

Big companies will survive. They will be able to pay the protection fee, the extortion fee. The status quo will remain. We won't be more secure, because the real actors making us insecure are nation-states and Chinese manufacturing.


> the result of this government action is that startups and crowd funded projects are less viable

If those projects are being run by people who are incompetent and unable to consider the importance of protecting user privacy through competent security, maybe that isn't such a bad thing. If FAA rules stop people from crowdfunding new passenger aircraft and leave it to the companies who actually have the skills, expertise and responsibility to build a plane with the relevant safety measures, as an airline passenger, I'm okay with that.


> My solution is we prosecute botnet operators and malicious actors

But what if this course of action -- investigation, enforcement, sanctions, etc. -- ends up costing substantially more than the effective cost of device manufacturers giving a damn?

All of DLink's issues (maybe aside from command injection, but I'd argue even that) could've been trivially prevented. I could've identified and fixed these issues in a week or two of time.

So basically, you're saying the country should suffer millions or billions in lost business opportunities (via sanctions), in addition to paying tens or hundreds of millions for increased investigation and law enforcement activities. All of this, to save DLink from having to hire even a single half-decent PM or SE who knows that default passwords are a bad idea and maybe we should not post private keys on a public website?

How is that even remotely fair?

> And ISP's could stop this DDOS activity. We should police this at that level.

This doesn't address other forms of harm. If botnet owners can't use their farms for DDoS and spam services, they'll turn to identity theft and ransomware. Most already have.

> And punish companies that perform bad security in the marketplace

Isn't that exactly what the FTC is doing here? Or do you mean that the marketplace should punish them? Because the latter clearly isn't working...

> Lets be clear, this is selective enforcement

Well, yeah. Literally all enforcement is selective enforcement. From speeding tickets to illegal gambling, drug distribution, and tax fraud. Police and regulators choose which cases are most egregious, and make an example. Even in murder cases, police sometimes choose to stop investing resources in a tough/cold case.

All investigation and enforcement is selective.

> No one who created these devices was ever told that these practices were illegal, or would result in sanction

With all due respect, a lot of us have been calling on the FTC and other regulatory agencies to sue these companies under existing consumer protection laws for a long time now.

And the FTC themselves have been making some noise since at least 2013 about following best practices, together with wording that specifically points out to major companies (like DLink!) that the FTC already has the legal tools needed to pursue punitive actions.

Perhaps DLink didn't know there were already laws on the books that could be used to punish particularly incompetent software engineering. If they were a small company, I might be more sympathetic. But they aren't, so IMO, that ignorance is entirely on them.

> Now, startups and kickstarter projects need to have a line item in their budget for security insurance. And an insurer has to take on this risk... So startups will have to buy security insurance. And that will be expensive.

I doubt it. They might have to start following best practices and designing secure software. And yeah, that means they can't "move fast and break things" when it comes to security. Cry me a river...


Thanks! :)


Although we should emphasize securing consumer devices by default, I don't see any way other than the FTC or a class-action lawsuit to motivate companies to implement security.


Any tips on how to find out which cameras are actually secure?


This is an unfair lawsuit targeting a foreign corporation. If the FTC were consistent, they'd go after a majority of American tech companies (Cisco, Juniper, MSFT...) with publicly known back doors into their technology.


Cisco seems pretty good about patching. There have been some controversial cases, but typically when I see Cisco exploits at a conference, it will start with, "If you haven't patched your routers to xx.xx.xx, do it now."

Do you have specific examples of holes that have existed for a long time without official patches or mitigation?


Flip this around. If Chinese regulators sued Cisco, Juniper, etc. for known vulnerabilities while deliberately omitting local companies, everyone would scream protectionism.

Router companies need to be security conscious, but for the FTC to single out ONE foreign company doesn't smell right.


Many of these backdoors were instigated by American intelligence agencies. Would an American law enforcement agency be able to sue an American company for putting in backdoors at the insistence of an American governmental agency? It almost seems like a double bind, where you are at risk of action from law enforcement if you comply, but at risk of action from the governmental agency if you do not.


> a foreign corporation

> California-based D-Link Systems, Inc.

If you don't want to be bound by the laws of the United States and the State of California, maybe don't incorporate there.


Wiki disagrees, so does D-Link's own website.

https://en.wikipedia.org/wiki/D-Link


They originate in Taiwan, but they have a company registered in Fountain Valley, California. Company number C1362992, registered since 1986.

http://www.bloomberg.com/research/stocks/private/snapshot.as...

Feel free to search for it on the California Secretary of State's website: https://businesssearch.sos.ca.gov/

If they have legal incorporation in the United States, it is fair game for the FTC to go after their US-based company for failure to protect US consumers. Don't want legal liability in the US for your business activity, don't incorporate in the US.


As long as your product is FCC certified, you're allowed to sell in the US. And if you sell in the US you can be subject to action from multiple federal agencies looking out for the end user, the FTC being one of them. Where you're incorporated has little to do with it.

Back to my point, singling out DLink when every other vendor has similar practices smells fishy.



