Last week the internet went down for the whole office. Oddly, I could still remote in, but DNS wasn't working. I first thought our DNS server had crapped out, but it was fine. After some investigation, I found the firewall wasn't responding. After a reboot it would work fine for a while, then go down again shortly after.
Long story short: the unbranded DVRs my boss bought come with telnet access on a nonstandard port. A botnet got access to it and was making thousands of DNS and telnet queries, overloading the firewall.
2. Don't expose entire hosts to the internet. Punch only necessary holes in the firewall. That way the device at least needs to phone home in order to cause a problem like this.
3. Do you have a similar policy with other hosts on your network? I.e., do you figure "well, that's inside the firewall, so we don't need to worry about encryption/timely application of security updates/resetting default passwords/etc."? If you're not 100% sure (or if the answer is "no"), you now have a lot of cleanup work to do.
Remember, smartphones are literal bridges from one network to another.
I'm assuming what you mean is that smartphones may be connecting to your internal network and bringing malware with them.
That said, the corporate networks I've seen have a separate network for phones/laptops and you need to VPN in if you want other access.
A security design that takes advantage of multiple layers and compartmentalization is your ally against attackers. They love networks with hard shells and squishy insides. Once they are in via a service, no matter how innocuous, they can move laterally to the real targets with impunity.
But ultimately this kind of stuff is a culture issue. Culture issues are hard to fix, but they're usually the root cause of bad blood between operations and development. It generally needs to be addressed on both sides, though. It's really easy to think it's just a bunch of grumpy and possessive ops people, but those behaviors are often rooted in how the dev teams interact with them. Things like punting releases over a wall and calling it a day, not participating in on-call duties despite causing many outages, and a disparity between how credit (for releases) and blame (for outages) are assigned are often cited as issues that create what devs think of as irrational BOFHs.
Strong passwords, two-factor auth for privileged services, access control policies (ACLs/firewalls), access logging, etc. are all requirements of any secure network. And that was just "off the top of my head on a Friday" kind of stuff.
Ah, hello, every manager that has ever made a decision causing the problems people further up are grousing about.
The point of defense in depth is there IS no "one perfect layer."
If not, you have relatively little to complain about.
And, I'll add, if you're a developer, we'd all prefer you just crank out perfect code. That way we'd never have deployment issues, never get paged for outages, and never have to work around poor architecture or assumptions that don't scale or weren't load tested. Thanks!
That's the problem.
 - http://git.io/ftp
 - https://www.reddit.com/r/sysadmin/comments/53cor1/someone_ju...
 - https://github.com/turbo/c4
Open FTP servers are an asset, not a risk, as they're ideal for distributing downloads, updates, packages, etc.
Thanks to your efforts, many groups which used to provide all downloads via HTTP and FTP have since stopped the FTP access, and don't provide wget-able HTTP URLs either.
Congratulations, now I have to run a full browser on my servers to be able to download many packages.
Works for me.
> Open FTP servers are an asset, not a risk, as they're ideal for distributing downloads, updates, packages, etc.
I agree, I use many of them frequently.
> Thanks to your efforts, many groups which used to provide all downloads via HTTP and FTP have since stopped the FTP access, and don't provide wget-able HTTP URLs either.
I can't confirm that. Quite the opposite: the number of functional servers actually increases between openftp4 scans.
Public FTPs have always been public. That's their purpose - as you said. Their addresses are listed in many public lists. I don't get how an FTP that is publicly listed and serves the public would suddenly cease operation because its URL is in yet another public list.
Not because the URL is in the list directly – but the way the list is created seems like public shaming, and I’ve seen unknowing managers go "I’ve read somewhere on the internet that there’s a list of open servers, and I found ours on it, everyone can hack us, remove it now!".
So, that’s the problematic effect: the way the list is presented, and especially the way it seems to publicly shame operators (plus the "fix it to get off the list" statement), can be counterproductive.
That concern is addressed with "(This doesn't concern FTP servers that are public by design.)"
> seen unknowing managers go
These are not the people managing the FOSS mirrors that are public by design.
In fact, that's one of the reasons openftp4 now stores the complete banner. It makes it easy to identify FOSS mirrors by just grepping for "mirror" et al. and maybe even find new mirrors that you didn't know about. Or older software archives with some awesome abandonware.
I’m concerned about companies that used to host drivers, software, etc on FTP, publicly available, but moved it now behind a clickthrough-wall, impossible to wget.
Turns out this was a false assumption. We're all learning this the hard way.
All internet-enabled devices need to ship with a unique, resettable default password. Many ISP-provided modems/routers do this now, and it's great.
I'd be very disappointed if most HN readers are still learning this.
Even better if the high-risk devices like cameras are on a separate VLAN from critical stuff.
In a perfect world, the ISP should drop you, in addition to charging $penalty$ for breaking its ToS. This would result in someone from IT being fired (maybe you) and the company starting to actually care about security from then on.
If I'm a one-man contractor building a house, there are certain categories of mistakes I'm on the hook for, for the rest of my life.
I don't think it would be unprecedented for IoT device makers to be on the hook forever for certain categories of security flaws. It's not as if security flaws grow spontaneously like rust on a car. They are all in there from the beginning, whether they are known or not. Most of the IoT security flaws I've heard about could have been easily prevented with security-conscious design and development practices. If we want secure IoT devices, then we need to hold people accountable for making insecure ones.
They absolutely can. It's well known that cryptographic algorithms "decay" over time as computing power increases. There was a time when encrypting with DES was secure; DES is no longer secure due to the increase in computing power. In 50 years I doubt we'll be using many of today's algorithms. It's exactly like a car slowly rusting over time.
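To put rough numbers on that decay, here's a back-of-the-envelope sketch. The function and the attacker speed are purely illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope brute-force cost: expected time to find a key
# after searching half the keyspace. All numbers are illustrative.

def years_to_brute_force(key_bits, keys_per_second):
    seconds = (2 ** (key_bits - 1)) / keys_per_second
    return seconds / (365 * 24 * 3600)

# At an assumed 10^12 keys/second, 56-bit DES falls in hours,
# while 128-bit AES stays out of reach by many orders of magnitude.
des_years = years_to_brute_force(56, 1e12)
aes_years = years_to_brute_force(128, 1e12)
```

The point is that every doubling of attacker throughput shaves one effective bit off the key, which is why a key length that was comfortable in the 1970s is laughable now.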
That's even assuming support is possible. Suppose I have a stroke and no longer have the ability to support my product.
If a device was made in 99, I wouldn't blame it for having DES. If a web appliance was built in 2005, I wouldn't blame the maker for unsalted MD5 passwords.
If a device or critical app were made in 2017 and stored its passwords with 777 permissions in the clear, I would blame the maker.
Granting that "reasonability" is a very fuzzy standard, it seems obvious that a product with 30-year-old crypto should not be subject to lawsuits because someone solved integer factorization on real hardware.
Modern algorithms are replaced mostly because the alternatives are easier to use, faster, or more flexible in some way. With a huge emphasis on "easier to use", because that means "more secure" in practice.
But aren't we (and the NSA) discovering new attacks all the time? So hence, their de facto security decays.
"All the time" is a bit of an exaggeration. Some algorithms are broken very fast; others slowly accumulate partial attacks until people don't trust them anymore. Those last ones don't normally get completely broken¹. I think the only exceptions were the shorter key lengths of RSA².
1 - But you will find many examples of algorithms with less than the modern strength parameters that were broken by the mix of faster computers and partial attacks.
2 - But by the time those were abandoned people were mostly using even shorter keys that wouldn't suffice by today's standards even without any known attack.
This is bullshit.
No reasonably modern crypto algorithm has ever been broken. If you use a crypto algorithm in your less-than-10-year-old product and it gets broken, it's because you shipped a product with sub-standard crypto.
There was a time when people thought DES was secure, but that time was the 80s and early 90s. Nobody will blame you for bad crypto if you released software back then.
>Nobody will blame you for bad crypto if you released software in [the 80s and early 90s].
So you are 100% agreeing with me.
Security expert Bruce Schneier expanded upon what I said back in 1998:
>Cryptographic algorithms have a way of degrading over time. It's a situation that most techies aren't used to: Compression algorithms don't compress less as the years go by, and sorting algorithms don't sort slower. But encryption algorithms get easier to break; something that sufficed three years ago might not today.
>Several things are going on. First, there's Moore's law. Computers are getting faster, better networked, and more plentiful... Cryptographic algorithms are all vulnerable to brute force--trying every possible encryption key, systematically searching for hash-function collisions, factoring the large composite number, and so forth--and brute force gets easier with time. A 56-bit key was long enough in the mid-1970s; today that can be pitifully small. In 1977, Martin Gardner wrote that 129-digit numbers would never be factored; in 1994, one was.
>Aside from brute force, cryptographic algorithms can be attacked with more subtle (and more powerful) techniques. In the early 1990s, the academic community discovered differential and linear cryptanalysis, and many symmetric encryption algorithms were broken. Similarly, the factoring community discovered the number-field sieve, which affected the security of public-key cryptosystems.
The ironic thing is this article also said "I recommend SHA-1"... SHA-1 was broken 7 years later.
I get that D-Link sucks, and their security practices right out of the box were "bad", but what is "bad"? In the eyes of the FTC and US law, what constitutes "good" and "bad" practice? How do I ensure that I am holding myself to those practices when a product ships? What is my responsibility for vulnerabilities found after the product shipped?
Let's say for example I ship a widget with an IP address and something like Heartbleed shows up 4 years later and my device is vulnerable. Am I on the hook for patching all the systems in the field? What are my obligations here?
The questions you're asking are good ones, and they're ones that we as a society need to start answering. 20 years ago, the idea of always-connected devices littering our homes would've seemed like sci-fi magic, but now it's actually becoming commonplace.
But this isn't the first time we've had this sort of thing happen. Like, how did electronics gadgets become safe enough that we never think twice about the fire hazard of plugging in a mystery wall wart?
Or any kind of product. What we've come up with is, generally speaking: companies have to try to uphold a standard, they can be held liable, and they can be put out of business for making unsafe products.
Why it should be any different just because it's a "now with Internet!" product, I don't really know.
I'm struggling to think of a good analogy, but it seems more like suing Ford because thieves can easily steal their cars for use in robberies.
In essence, the device is being used by an unauthorized 3rd party to harm a 4th party. The device owner in some cases is never harmed or even inconvenienced, and neither is the manufacturer.
It's a shitty situation, and I don't personally know where the line should be drawn, but IMO it should be drawn clearly.
We need an IPA (perhaps a different name...). We need someone that will set "standards" for a minimum baseline of "security" to ensure the health of the internet, and dole out fines based on violations.
However they need to be VERY careful. With something like freon it's a physical "thing" that can be regulated. We don't want to regulate "ideas" or even code, that to me seems like a very dangerous thing.
But you are right, we need something that will protect the "health of the internet" like we protect the health of our environment.
To me it gets dangerously close to regulating ideas.
I'd want it to be based more on consequences. If your product or code is used in an attack, you get fined. No need to dictate the code or software solutions allowed.
In other words, a better safety analogy might be the prompt when you do "rm -rf".
I'm not opposed to introducing standards here; I am just saying that it's a different problem.
Being honest with yourself, would you have certified something as secure if it used OpenSSL correctly (assuming such a unicorn could exist), before you knew about Heartbleed? If your answer is no, what would it take for you to be willing to certify something using SSL? (I assume "not using SSL" is an obvious certification black mark.)
What I mean by "put your business hat on" is that I am not trying to make the point that this is impossible. I don't think it is. What I mean is, think business, think risk, think risk management, think about how your business is basically made out of black swan events, think about what it would take for you to put yourself out like that, and put some real numbers on the money and at least mentally come up with a sketch of what it would take.
Speaking for myself, it doesn't take me long before I notice that my certification standards would simply annihilate the entire IoT industry as it now stands, on even basic stuff like "Since you're using C, are you using best-practices static analysis? Can you update your firmware? How secure is your firmware update process?" Those three questions alone and I've probably tipped nearly the entire industry into a negative cost/benefit analysis. Does that solve the problem? Again, I mean that question more honestly than it probably sounds on the Internet; a case can be made that an industry that is currently basically only able to survive by extensive offloading of what become negative externalities really shouldn't exist, even in a bootstrap phase. Perhaps nuking the industry as it now stands is the best thing we could do in the next couple of years. If only to encourage the others. Let the industry come up with some best practices, form some "sell shovels rather than dig for gold" companies around building more secure IoT platforms, come back at the problem after that.
A real certification process that really solves this problem is probably unsustainably expensive for the industry as we now know it, xor the certification will be a useless rubber stamp that doesn't solve the problem.
Maybe it slows down the progress of the industry, but if that progress comes at a price all of us are paying (through the currently-unaddressed externality of shitty code enabling DDOSes around the world), I think that is progress that should be slowed down.
I certainly don't want to wake up one day and find that my employer's sites are gone, and their business (and my livelihood, and my home and family's security) threatened because rando manufacturer X's IoT cameras have taken out a data center for lulz. So..regulate? Bring it on.
Human society is continually striving to build a better idiot. Manufacturers often aren't held liable for the first instances of idiocy, but eventually they are.
For example, it was easy for engineers to believe that no one would be dumb enough to stick their hand in a running lawnmower. And for a time if you stuck your hand in a running lawnmower you would not be able to win a lawsuit over it.
But once you know about the new flavor of idiocy, it is your civic responsibility to mitigate it with safety features if possible. If you don't agree, the civil suit you lose will convince you.
If you build products that are vulnerable in well-known ways, you are neglecting your civic responsibility as a manufacturer.
There you go. This is how you deal with the tragedy of the commons: punish the overgrazers. Industries that neglect civic responsibility get regulated. Particularly bad actors get punished.
Otherwise, what makes it so obvious that the manufacturer must handle anything at all?
This complaint seems to have nothing to do with patching. Did you check out any of the actual text? Their complaints are about disregarding security norms from 2007 (backdoors, injection flaws), using hard-coded passwords, and posting their private signing key publicly.
> the FTC says the company failed to protect its routers and cameras from widely known and reasonably foreseeable risks.
It sounds like they shipped the devices with known security flaws. This is not at all related to not updating your software when new security flaws are found.
> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
I agree here. Your first sentence can be entirely removed and your point will still stand. Except now the receiver won't have to be in a defensive position about their reading comprehension.
I'm not so sure that's true anymore. Lots of kid's toys are internet-connected these days...
If you sell even a single device to Europe, you are on the hook, depending on the category, for 2 or 5 years to patch every single flaw that appears.
In the first 6 months, if a user reports a flaw, you have to patch it anyway, no matter what it is, unless you can prove that this problem was caused by the user.
After that, you are liable for the rest of your life (and this pierces the corporate veil) for any and all damage your products do, or can be abused for.
I'm not sure this is strictly true. Standards change over time, and old toys might not meet the new standards. Sometimes product recalls occur; other times the change is so broad or so insignificant that an advisory may be issued, or no advisory is necessary.
Point being it isn't always so clear-cut.
With software and hardware it is possible for new vulnerabilities to be discovered as new attack methods are developed.
Also, forever could be too strong. No one is going to complain about vulnerabilities in token-ring network protocols.
Some things only exist because of the environment.
If I went back in time to 2001 and tried to get people to take CSRF attacks seriously, they would lose me when I started talking about opening multiple tabs.
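For anyone who hasn't seen it spelled out: the now-standard defense, the synchronizer token, is tiny. Here's a sketch assuming a hypothetical session dict; real frameworks ship this built in:

```python
import hmac
import secrets

# Synchronizer-token CSRF defense in miniature. "session" is a stand-in
# for whatever server-side session storage your framework provides.

def issue_csrf_token(session):
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token  # the server embeds this in every form it renders

def verify_csrf_token(session, submitted):
    expected = session.get("csrf_token", "")
    # constant-time comparison; a forged cross-site request can't know the token
    return hmac.compare_digest(expected, submitted)
```

The attacker's page can make the victim's browser submit a form to your site, but it can't read your pages, so it can never learn the token to include with the forged request.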
For me personally, this kind of product liability is like handing me a big bag of money, because of my profession. But this is a huge can of worms opening up, and we don't even know whether it will lead to better security.
Ford isn't on the hook for the Model T, but it does do recalls for decades-old Saturns.
Well, yeah, in any case, even a defunct subsidiary (of GM, not Ford) is getting recalls decades later.
High-tech has different standards than the rest of the world.
Should contractors that built houses in the 60s be held liable for using things like asbestos and lead paint?
Should car manufactures from the 70s be held liable for using non-tempered glass and other unsafe features? What about CO2 emission standards?
The topic of product safety cannot be divorced from the historic timeframe in which it is considered.
No one wants to insure that your product is secure, even if they've fully audited it themselves - it's too easy to miss something and make a mistake, especially in the C-centric world. Software security is a minefield, much more so than standard building codes, child safety laws, or the toughest standards insurance companies may request in those areas.
The only alternative here is that we go all-in. Everyone who develops software is individually responsible for it, we all pay insurance for our ability to develop software. Because just about any piece of software can be a huge security liability.
Sounds like a scary world to me, one in which I would have never gotten involved in software development.
Lead and asbestos.
And it appears that the FTC is forming the basis of liability in software, which nearly every software company explicitly disclaims in its warranty.
No (or at least, not to my knowledge), instead people just had to buy new things. Standards change. New security vulnerabilities are discovered. Liability doesn't stand in these cases.
If you can't possibly find every security vulnerability in your product, you shouldn't be held liable for the inability to do so. You have to disclaim that, as I'm sure D-Link does.
My knowledge of this is really fuzzy now (I had to learn about it for a college ethics course), but I believe that asbestos manufacturers knew about the health hazards for years and covered them up.
You're not on the hook as a contractor for a house if it's vulnerable to missile attacks.
Blitz went out of business because it could no longer afford liability insurance. Blitz made those ubiquitous red plastic gas containers you see on every landscaping trailer. They were constantly being sued because their gas can could explode if you poured gasoline directly from it onto a fire. They even put warnings and disclaimers directly on the cans against pouring gasoline on a fire.
This is a very one-sided read of the situation.
The typical Blitz can lawsuit went something like this:
A 3-year-old toddler knocked over a Blitz can in a basement. Vapours from the can reached the water heater, which then flashed back into the can, causing the can to explode, severely burning the child. This would not have happened had the can's nozzle been built with an industry-standard 10-cent flame arrestor, which federal regulators STRONGLY advise all gas can manufacturers to include, but which Blitz had for years refused to take the simple precaution of adding to their product.
It's the "ignoring simple, industry-standard safety precautions" that will get your ass nailed to the wall by a liability judge. Engineers who had worked for the company testified at trial that they were ordered to destroy documentation showing that Blitz was aware of the problem, had done internal testing, and had designed flame-arrestors for their nozzles, and that management killed the project after a change-of-ownership.
Like, if you built a product today and (pulling an example out of the air) used bcrypt for password hashing, you wouldn't be liable for that choice down the road -- you used what's generally considered a recommended best practice for protecting users' passwords at the time you released the product.
But if, in 2017, you used an unsalted md5, a lawyer could make the argument that you by now should sure as hell have known better, and that the problems arising from that were easily foreseeable (since most of the industry was aware of the problem and in fact had been writing about it for years).
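The difference is easy to demonstrate. A sketch using only the standard library; scrypt stands in for bcrypt here since bcrypt needs a third-party package, and the cost parameters are kept deliberately small for illustration:

```python
import hashlib
import os

def unsalted_md5(password):
    # Every user with this password gets the same digest, so one
    # rainbow-table lookup cracks all of them at once.
    return hashlib.md5(password.encode()).hexdigest()

def salted_kdf(password, salt=None):
    # Per-user random salt plus a deliberately slow KDF (scrypt here,
    # standing in for bcrypt; parameters kept small for the demo).
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**12, r=8, p=1)
    return salt, digest
```

With the unsalted scheme, two accounts sharing a password are visibly identical in a leaked database; with the salted KDF, every row is unique and each guess costs the attacker real CPU time.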
In this case the FTC is essentially alleging that D-Link's practices were so bone-headed and obviously counter to industry best practices that they have no real excuse.
edit: D-Link has been famously shitty for a long time. At least among everyone I know who runs a 'serious' network, D-Link is seen as the exact polar opposite of a proper carrier grade switch or router... D-Link is to home routers as the Trabant is to automobiles.
Some would say they took it to an entirely new level. Check out this disclosure from October of 2011:
Summary: after receiving a burst of traffic it disabled WPA/2 and failed wide open. Apparently it was possible to trigger this fault with auth attempts so an attacker didn't even need to wait for someone on the WiFi to send a lot of data before it failed open. I don't even know how something fails like this. Brain hurts.
I'm glad D-Link are being slapped down.
Trabant was actually a very ingenious design that solved a real problem. D-Link is more like a large car company producing, over decades, model after model of Trabant, each laden with the same old problems, while the rest of the industry moved forward.
That's foreseeable, as in, before the sale.
The manufacturer of the device is responsible for security updates forever. If at any time a security flaw is discovered in the device, they have two options:
1. Develop and deploy a fix at their own expense, and make it freely available to all owners of the device.
2. Publicly release all source code and documentation necessary for any third party to independently develop and deploy a fix.
This way, any company which still cares about their customers, devices, patents, copyrights, etc which are involved will pay the expense of fixing the problem, as the cost of keeping that which they value. And any company which does NOT care about those things can simply walk away from it all at no cost to themselves.
So, as a dude in your workshop, after n-years have passed and you feel the responsibility is too onerous, you opt for #2 and wash your hands of the mess.
As an interesting side-effect, this opens up a business opportunity for source and documentation escrow services. Because if you've somehow managed to lose some or all of your source and documentation, you've also lost the ability to choose door #2.
This doesn't seem to be a complaint about future support, but security at the time it was sold.
But I agree with you here, because at the time of sale this feature was widely agreed to be bad practice and was also a known feature. So it's not like someone discovered a clever 0-day D-Link didn't know about. D-Link knew what it was doing, knew that best practices said not to do the thing it was doing, and did that thing anyway.
This could expand into a question of what the expected lifetime of a product is. Is it when the replacement is released ("don't like the security bugs? get Camera 2.0!")? When the new camera becomes the standard ("there are 1,000,000 Camera 2.0s out there and only 100 Camera 1.0s; maybe you should get on board")? Or when the failure rate of the product rises above a threshold ("your Camera 1.0 is still working? most of them have had their CPUs burn out by now; don't expect an update, in fact you should be grateful/amazed it's still running")?
Aging hardware is one thing ("modern security practices use a new cryptography algorithm that literally won't run on this device") but aging software is another ("we just don't want to update the software, buy our new product").
This might not be practical because the OEM couldn't include source to third-party components that aren't EOLed. But it would be a start.
Instead, everyone is doing weird crap, everything is complex. I hate it as a user.
There are basically ~500,000 D-Link devices directly connected to the Internet and exposing a service that a person on the outside can connect to. Most of the devices I saw back then, and still do now, are their standalone webcams.
Honest timetraveler mistake.
The software industry must come to the same level of responsibility and quality expectations of other industries.
A big warning on the package would be enough to turn away average users - thus forcing companies to comply - but wouldn't interfere with hacking activities.
Also, let's say, theoretically, that I discover a software bug on the ftc.gov website that allows me to get the ftc.gov servers to execute a DoS attack against some target. Should I be able to sue ftc.gov for not securing their site well enough, or does this policy only apply to physical devices?
If FTC.gov was vulnerable to Heartbleed in March of 2014 and you suffered damages related to someone exploiting that, in practice FTC.gov holds no responsibility for that. They were following best practices, and no one knew there was a vulnerability. If they're vulnerable today and you suffer damages due to that being exploited, then you could make a case against them. They should have known better by now.
As for consumers bearing the cost, that's working as intended. Good security costs a little money. How many people do you hear complaining that consumers are bearing the cost of seatbelts and airbags installed in cars? We're willing to pay the few extra dollars because we all know how dangerous it is to not have those things. We could make cars a lot cheaper if they didn't have crumple zones, air bags, seat belts, roll cages, or steel frames. But that's not the world we want to live in.
This is not so black/white. Car safety constantly becomes more advanced over the years and it has become a competitive feature of cars to get "best safety rating". So they still spend a lot of R&D money to stay at the top of the line, regardless of fixed regulation.
Often best practices come baked into frameworks that we use now, such as Rails being relatively "secure by default". Which makes adoption of them easier.
But at the same time cyber security evolves quickly too and software updates quickly underneath it, exposing new threats.
So I believe it will always be an R&D investment that will evolve and change over the course of the software's life span.
The question is about incentives. Car manufacturers have a big incentive to make their cars safe because people know how dangerous cars are and consider that as part of their high-dollar purchase. There is also a big consumer reports style industry around car safety to inform users. Also car manufacturers (and physical products in general) actually kill people increasing the liability of a broken product.
Most software lacks those incentives for a variety of reasons. Such as low cost barrier to entry, lack of understanding of security, lack of tactile differences, low time investment evaluating alternatives, low cost to publishers when hacks do happen, etc.
I'm all for the cyber security industry maturing and software security best standards being taught to every developer. But I don't think we have the costs, consumer understanding, or infrastructure entirely in place or figured out yet to have an optimal effect.
At the same time it's very possible it might remain a constantly moving target and be a larger challenge than most other engineering safety standards, like buildings or vehicles. Something to consider.
We do have companies already making a stand on security, using it as a competitive advantage. Apple, for one, has staked their name on their security in the past few years. Their competitive advantage is how secure TouchID and their secure enclave are. You're right that consumers don't see security as entirely necessary yet, but it took a long time for seat belts and airbags to become standard in cars, and even longer for them to be appreciated. Even today, many people refuse to wear seat belts.
But bear in mind, automotive safety is a moving target too. Seatbelts, then airbags, then roll cages, then crumple zones, then backup cameras, then backup sensors, then automatic parking, then self-driving. There's very little that's new in the world; it's almost always been done before.
Namely for C, C++, Java and Ada stacks on the domains of medicine, trains, aviation, automotive, factory automation.
This needs to be extended to other business domains.
I imagine that if D-Link claims that their software was provided without any "warranties for fitness and damages", the FTC will argue that such a claim definitively proves that D-Link's marketing of their security features was unfair and a misrepresentation.
Because if it does create that carve-out, then there are going to be a BOATLOAD of lawsuits against software companies.
They also need to prove material harm to consumers. So lawsuits will almost certainly focus on security, privacy, and outright fraud/theft.
I really do not understand why this bothers people so much. Hire competent engineers. Follow best practices. It's really not that hard, and it's in your company's best interest anyways.
(Also, since it's come up in other sub-threads: the name of the FTC is the Federal TRADE Commission, and they regulate commerce. OSS projects that don't engage in commerce aren't subject to these regulations.)
That's a strong motivator for regulatory intervention.
But also, the sorts of minimum safeguards that are being requested by the FTC don't actually cost these companies anything more to implement. Requiring credential changes on first login? That's maybe 100 LoC. A basic once-over to catch any super obvious command injections in a relatively simple web UI? Maybe a day or two of engineering effort. Not posting private keys on a public website? Properly train one set of people and make them gatekeepers -- a bit of overhead, but again, extremely minimal when measured against overall product lifecycle costs.
FTC isn't asking for formally verified secure kernels with locally sourced state-of-the-art crypto and in-depth end-user training by top psychologists. They're asking for very simple, easy to implement safeguards that a novice developer could implement in a few days. These companies surely spend more than the entire cost of compliance on executive retreats and office holiday parties...
Device manufacturers do have to hire one or maybe two engineers who are competent at security. But it's not like those people necessarily cost more than whoever they have on payroll ATM. And in fact, it's likely that those people already are on payroll at these companies -- in which case, all they have to do is start prioritizing and listening to security-minded voices already in their organization.
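The "require credential changes on first login" safeguard really is about that simple. Here's a minimal sketch in Python (all names are hypothetical; a real device would wire something like this into its web UI's login handler):

```python
import hashlib
import secrets


class DeviceAuth:
    """Toy model of device credentials that must be changed on first login."""

    def __init__(self, factory_password: str):
        self._salt = secrets.token_bytes(16)
        self._hash = self._digest(factory_password)
        self.must_change = True  # set at the factory, cleared on password change

    def _digest(self, password: str) -> bytes:
        # Slow, salted hash so the stored value isn't trivially reversible.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 100_000)

    def login(self, password: str) -> str:
        if not secrets.compare_digest(self._digest(password), self._hash):
            raise PermissionError("bad credentials")
        # Refuse normal access until the factory password has been replaced.
        return "change-password" if self.must_change else "ok"

    def change_password(self, old: str, new: str) -> None:
        if not secrets.compare_digest(self._digest(old), self._hash):
            raise PermissionError("bad credentials")
        self._salt = secrets.token_bytes(16)
        self._hash = self._digest(new)
        self.must_change = False
```

The whole thing is one flag plus a gate in the login path; "maybe 100 LoC" is, if anything, generous.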
Perhaps the FCC could start addressing these issues as part of certification requirements. However, there are many challenges. How will the FCC enforce SecureBoot, secure firmware upgrades, and long-term product support? What will the penalties be? How will they deal with unbranded (or foreign) brands?
If a breaker in your kitchen trips do you throw out every single appliance? If your water bill is high do you replace every single sink and toilet?
> and buys new ones from different vendors... which are also insecure
Maybe after getting burned by insecure, unsupported devices, they'll find out which vendors support their products and don't ship devices with unfixable backdoors before buying new ones?
> And ISP bills go up to cover the massive security scanning infrastructure needed?
A bunch of $30 cameras can scan and infect the entire internet, what makes you think the infrastructure needed to find these devices is massive?
ISPs already have everything they need to find out which of their customers are compromised, they just don't give a shit.
I've been saying for 10+ years that there needs to be something like the DMCA for accountability for security issues.
A copyright owner can send a takedown notice to an ISP alleging infringement and have action taken, but I can't do anything about the 1,000,000 hosts that scan the shit out of us every day.
It's not a matter of caring; there is nothing they can do about it. They don't know what "device" is causing a problem, how that could even happen, which electronics are even connected to the Internet, how something could be connected to the Internet, or even what "Internet" or "electronics" mean. They don't know what a web browser is and do not understand what a webpage is.
Abusing innocent people doesn't solve the problem and, more importantly, is unjust.
What we are in for right now is burdensome, expensive and vague regulation that amounts to security theater.
I now build routers / firewalls with a free OS to ensure I do not get burned again.
Right now we're lucky to get even 2 years of normal updates (much less security updates) on any Android device.
I see no mention of it in the article.
1. “hard-coded” login credentials integrated into D-Link camera software -- such as the username “guest” and the password “guest” -- that could allow unauthorized access to the cameras’ live feed;
2. a software flaw known as “command injection” that could enable remote attackers to take control of consumers’ routers by sending them unauthorized commands over the Internet;
3. the mishandling of a private key code used to sign into D-Link software, such that it was openly available on a public website for six months; and
4. leaving users’ login credentials for D-Link’s mobile app unsecured in clear, readable text on their mobile devices, even though there is free software available to secure the information.
There are pretty common-sense mitigation techniques for all of these.
* Don't ship with default credentials (change credentials per-device and slap a sticker on the device with user/pass)
* Force credentials to be changed on first login.
* To avoid attacks targeting specific devices via crafted search engine queries, add a robots.txt and/or use HTTP authentication.
For 2, sanitize user input
For 3, don't publish private keys on your public-facing website
For 4, encrypt stored credentials (or don't store credentials at all)
None of these are particularly difficult to implement. In fact, they're all pre-freshman-level, obviously boneheaded security mistakes that we've known about, and known how to prevent, since at least the '90s...
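For the command-injection case in particular, the fix is mostly about never letting user input reach a shell. A sketch of the idea (Python; `ping` here stands in for whatever the router's web UI shells out to, and the validation regex is illustrative, not exhaustive):

```python
import re
import subprocess

# Only plain hostnames/IPs: letters, digits, dots, hyphens. No shell metacharacters.
HOST_RE = re.compile(r"^[A-Za-z0-9.\-]{1,253}$")


def validate_host(host: str) -> str:
    """Reject anything that isn't a plain hostname/IP before it goes anywhere near a command."""
    if not HOST_RE.fullmatch(host):
        raise ValueError(f"invalid host: {host!r}")
    return host


def ping(host: str) -> int:
    # Argument-vector form: no shell ever parses this string, so an input
    # like "8.8.8.8; rm -rf /" can't become a second command. Contrast with
    # subprocess.run("ping -c 1 " + host, shell=True), which is the bug.
    result = subprocess.run(["ping", "-c", "1", validate_host(host)],
                            capture_output=True)
    return result.returncode
```

Allowlist validation plus argv-style invocation is a day of work, not a research project.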
That assumes the attacker has root while the customer is logging in. But what if the attacker only has access to the file system (e.g. in the case of a stolen phone, or a broken jail/sandbox)? In that case, requiring a password/PIN to unlock the stored credentials does provide some protection to the user.
Storing passwords in the clear is always a terrible idea. If you must store credentials, at the very least they should be encrypted when at rest, and the system should require a sufficiently strong password/pattern/PIN to unlock the key chain. That way the attacker has to intercept and/or brute force, rather than merely gain access to, the device.
Ultimately, the answer is very simple: if you can't store credentials securely, then don't store them at all!
So you may be correct that DLink should not have stored those credentials at all. But just because there's no secure way to implement a convenience feature doesn't mean it's reasonable to go ahead and store passwords in the clear.
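For concreteness, "encrypted at rest behind a PIN" can be as little as a key-derivation step plus an authenticated encryption step. The sketch below is dependency-free and therefore uses an illustrative SHA-256-based keystream — a real app should use a vetted AEAD cipher (e.g. AES-GCM) instead — but it shows the property under discussion: the blob on disk is useless without the PIN.

```python
import hashlib
import hmac
import secrets


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Illustrative CTR-style keystream; stands in for a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_credential(pin: str, plaintext: bytes) -> bytes:
    salt = secrets.token_bytes(16)
    nonce = secrets.token_bytes(16)
    # Slow KDF so a 4-6 digit PIN can't be brute-forced instantly offline.
    key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, "sha256").digest()
    return salt + nonce + tag + ct  # this is all that ever touches disk


def decrypt_credential(pin: str, blob: bytes) -> bytes:
    salt, nonce, tag, ct = blob[:16], blob[16:32], blob[32:64], blob[64:]
    key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, "sha256").digest()):
        raise ValueError("wrong PIN or tampered data")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

An attacker who copies the file now has to brute-force the PIN through 200k PBKDF2 iterations per guess instead of just reading the password — exactly the "one more hurdle" being argued about.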
>requiring a password/PIN to unlock the stored credentials
defeats the purpose of a feature built to not require a user to remember a password. "Save my password" features are very common. I use that feature of Firefox all the time. I fully understand if someone gets access to my filesystem, then they get my passwords. (And they also get all the other important stuff on my filesystem like my photos and documents.)
And yet millions of people use password-protected password managers.
Including Firefox, which DOES encrypt your passwords, and which DOES allow you to optionally set a master password. And if you do set that password, it DOES use that password as an encryption key. And in that case, if someone gets physical access to your device, they'll need to install a keylogger or brute force your master password. Which requires a lot more effort than simply copying a file, providing more opportunities for the attacker to fail, for you to detect the attack, or for investigators to track the source of the attack.
And no, that won't save you if there's a critical security bug in your OS's kernel. Which there probably is. But that doesn't mean you give up on a sensible and easy to implement security policy.
The calculus is very clear here -- encrypting the passwords costs the developer and the user almost nothing (maybe 10 minutes to 2 hours of time for a competent developer; maybe a penny or two of electricity across the lifetime of the device). And there is a definite benefit to the overall security of the system -- the attacker needs to clear one more hurdle to get device passwords, and security-sensitive consumers can make that hurdle a rather large one (rooting the device and installing a keylogger).
No, it's not a silver bullet.
Yes, it is strictly more secure than the alternative.
Not true: they can simply dump your browser's RAM while your passwords are decrypted, which is probably the case whenever their malware runs anyway. It's a completely pointless half measure to encrypt those passwords. Worst case, they have to patch the binary or hook it with something like Frida and do it on the next launch cycle. Keyloggers just aren't targeted and automated enough for me, at least, but it's fairly trivial to just grab what you need from memory and run.
In fact, according to this bug, the master password is stored in cleartext in RAM while Firefox is logged in, so a simple string search on a memory dump could potentially do the job for someone without even the smallest RE capability.
If you have the RE capability to do one, it's not really any more than is required to do the other. It's not another hurdle. Just doing it at runtime instead of purely on disk.
Phones/laptops are stolen. When they're stolen, they're turned off -- often by yanking a battery -- to prevent lockout/wipe/locating features. That wipes RAM. And after they're stolen, they're rarely returned for use.
Also, file system access doesn't imply access to a RAM dump. Especially on Android/iOS. And on a server, there are whole classes of vulnerabilities that provide RO access to certain files but can't be trivially exploited to provide access to a RAM dump.
So on many relevant platforms, I don't think it's true that access to a portion of the file system == access to a RAM dump for a particular process.
I agree that nearly all password storage mechanisms have serious problems, and that storing passwords is not ideal. However, if you're going to do it, then encrypting at rest is one small, extremely cheap security mechanism that does prevent certain attacks.
I think maybe you're arguing that we shouldn't store passwords in app-specific password managers at all, which I don't necessarily disagree with. But I do disagree that encrypting at rest is "not another hurdle". For many attack profiles, it is.
No, it doesn't. If I have filesystem access in the sense you're talking about, I can, for example, just replace the application's executable with a malicious or modified one, which still lets me steal your password.
Correct me if I'm wrong here: is there some exploit class that allows reading of, but not writing to, private application data and executables? I certainly haven't seen one, and that would be the only scenario in which what you're saying has an impact.
As for the theft scenario - you could just as easily prevent that risk by setting a password on your device, which would protect it at the platform level, where it should be. It'd also prevent other private data leaks, which I'd say are higher risk than your router password (useful only on your internal network): your SMS messages, email accounts, authenticator tokens, etc. None of which, I might add, ask for passwords; dumping Google Authenticator tokens is as simple as running a SELECT on the sqlite database - they're not encrypted at all.
Has such a case ever occurred? I can't think of one. Of course, if you have filesystem access you can replace the application on disk, patch it to steal the password easily.
> Ultimately, the answer is very simple: if you can't store credentials securely, then don't store them at all!
Many users want it anyway, and it's secure enough that unless you face remote attacks, or the security level of Windows desktops where people constantly download and run executables, it's unlikely to be an issue.
It's a trade-off, of course, but I don't think that means it's an unreasonable one to make.
Of course, you should probably go ahead and tell this to browser vendors. Script kiddies are running around selling accounts because of how they store passwords - encrypted, but only as a half measure. Personally, I don't think it's reasonable to hold D-Link liable for a trade-off that even the biggest companies in tech make all the time. And one which is much more costly in those cases too!
There's just no secure way to do this on today's computers, we must rely on the security of the platform here. If storing passwords in this way is a problem, it's a platform security issue or a user issue, not an application issue.
TPMs have the potential to solve this, but at some scary costs which may adversely impact security in other ways, like preventing reverse engineering.
No, that doesn't make it OK to store credentials or other sensitive data in the clear.
> Has such a case ever occurred? I can't think of one.
Stolen encrypted drive/stolen drive with encrypted passwords on it? It's happened to me, personally (a usb key with my .password_store on it).
> Personally, I don't think it's reasonable to hold D-Link liable for a trade-off that even the biggest companies in tech make all the time.
First, I'm not aware of any tech giants who store passwords in the clear.
Second, just because the heavy weights do it doesn't make it reasonable.
> Of course, if you have filesystem access you can replace the application on disk, patch it to steal the password easily.
There are a lot of plausible scenarios in which an attacker gets RO/RW access to a portion of a file system but doesn't have the root or the capability to MITM/replace a binary.
> There's just no secure way to do this on today's computers
I think FTC's issue is this: "look, there's this really really simple thing that you could have done. Obviously it's not perfect, but doing this would have cost you basically nothing and would have at least made attacks a little bit harder. And even though it's basically free to implement and a very common practice, you didn't even bother."
Now, perhaps DLink had this discussion internally and decided that they agree with you. In that case, I'm sure those internal discussions will come out during the lawsuit, and a lay jury will have the difficult task of assessing some variant of the argument we're having here.
(Also, regardless of this one issue, shipping with default credentials w/o requiring a password change, and publishing a private key on a public website, are pretty egregious breaches of known best practice at time of sale. All while describing the device as "secure"...)
Mozilla, Microsoft and Google all effectively do this in their browsers. Don't believe me? Run NirSoft's WebBrowserPassView. With just the files in your profile directory, someone can dump your passwords. Sure, they're not "in the clear", but in practice it's just as bad, if not worse. The number of accounts stolen with things like iStealer is ridiculous. You can literally grab someone's entire lifetime of accounts in a fraction of a second.
I'd argue this isn't something a good developer would even attempt, as it provides little more than a false sense of security to users. Good developers should feel free to flatly refuse half measures like this, as I do.
This won't work if the user sets a master password (which I do).
> Good developers should be safe to absolutely refuse half measures like this as I do.
Yes, I mentioned this as a reasonable alternative in my original post.
One old example: fetchmail
One new example: kodi/xbmc
This precedent creates unlimited liability for so much software and hardware that gets created.
It is the ability to warranty against this type of stuff that has let OSS take off. I expect this to have a chilling effect.
In my opinion we should be against this the way we were against software patents and CISPA.
Just because as devs we are mad when our bosses make stupid security decisions, doesn't mean this is the way to handle it.
I believe this is a very different situation than product liability we are used to. In these cases damage is the result of malicious attack by criminal actors.
An elected congress gives agencies regulatory authority and provides over-sight. If Congress chooses, it can dismantle the FTC or pass a law stating that DLink cannot be held liable.
Congress enables regulators to do things you don't like, because it's impossible for Congress to micro-manage the entire Federal bureaucracy and military. Congress is well within its Constitutional rights to empower third parties to enact and enforce certain classes of regulations.
I understand why you find that frustrating, but it's dishonest to characterize that situation as an "unelected government". The government is elected, and regulatory agencies must answer to those elected governments.
> This precedent creates unlimited liability for so much software and hardware that gets created.
You know what? Tough luck.
Most people who build things professionally and sell those things are held liable for their work. Take a look at what D-Link actually did, and tell me that their engineers and management were behaving in a responsible and reasonable manner.
Maybe it's a good thing that the gov't is forcing software and hardware companies to start taking ownership of their craft. The market is certainly failing to price in security.
> It is the ability to warranty against this type of stuff that has let OSS take off. I expect this to have a chilling effect.
On the other hand, I highly doubt this case or any resulting precedent (if any) will have a noticeable chilling effect in OSS.
I guess time will tell which of us was correct...
> damage is the result of malicious attack by criminal actors
And yet, those attacks were completely foreseeable since at least the early 00's... I could've told you the day that DLink released these webcams exactly what would happen. I'm sure plenty of DLink's engineers tried to tell their own management...
> Just because as devs we are mad when our bosses make stupid security decisions, doesn't mean this is the way to handle it.
Perhaps this is the best jumping off point for a productive conversation -- what's your proposed solution?
And ISPs could stop this DDoS activity. We should police this at that level. A system where DDoS activity reported by a reputable web host or service provider gets routed to a clearinghouse that ISPs belong to could stop DDoS in its tracks.
Enable two-factor auth at the ISP level: if a report of DDoS activity comes in, send an SMS to the authorized user of the account.
And punish companies that practice bad security in the marketplace. We are getting more tech literate as a people, as a society. The market might not be fully pricing in security right now, but the issue is new and awareness is just beginning.
Let's be clear: this is selective enforcement. D-Link is one operator, using practices that were very common at the time, and still are.
No one who created these devices was ever told that these practices were illegal, or would result in sanction. And now we are fining them. D-Link will weather this as it is big enough. But small operators wouldn't.
So the result of this government action is that startups and crowdfunded projects are less viable. Now, startups and Kickstarter projects need to have a line item in their budget for security insurance. And an insurer has to take on this risk. Because it won't be policed or litigated evenly. And there's no telling what the security standards of tomorrow will be.
So startups will have to buy security insurance. And that will be expensive.
Big companies will survive. They will be able to pay the protection fee, the extortion fee. The status quo will remain. We won't be more secure, because the real actors making us insecure are nation-states and Chinese manufacturing.
If those projects are being run by people who are incompetent and unable to consider the importance of protecting user privacy through competent security, maybe that isn't such a bad thing. If FAA rules stop people from crowdfunding new passenger aircraft and leave it to the companies who actually have the skills, expertise and responsibility to build a plane with the relevant safety measures, as an airline passenger, I'm okay with that.
But what if this course of action -- investigation, enforcement, sanctions, etc. -- ends up costing substantially more than the effective cost of device manufacturers giving a damn?
All of DLink's issues (maybe aside from command injection, but I'd argue even that) could've been trivially prevented. I could've identified and fixed these issues in a week or two of time.
So basically, you're saying the country should suffer millions or billions in lost business opportunities (via sanctions), in addition to paying tens or hundreds of millions for increased investigation and law enforcement activities. All of this, to save DLink from having to hire even a single half-decent PM or SE who knows that default passwords are a bad idea and maybe we should not post private keys on a public website?
How is that even remotely fair?
> And ISP's could stop this DDOS activity. We should police this at that level.
This doesn't address other forms of harm. If botnet owners can't use their farms for DDoS and spam services, they'll turn to identity theft and ransomware. Most already have.
> And punish companies that perform bad security in the marketplace
Isn't that exactly what the FTC is doing here? Or do you mean that the marketplace should punish them? Because the latter clearly isn't working...
> Lets be clear, this is selective enforcement
Well, yeah. Literally all enforcement is selective enforcement. From speeding tickets to illegal gambling, drug distribution, and tax fraud. Police and regulators choose which cases are most egregious, and make an example. Even in murder cases, police sometimes choose to stop investing resources in a tough/cold case.
All investigation and enforcement is selective.
> No one who created these devices was ever told that these practices were illegal, or would result in sanction
With all due respect, a lot of us have been calling on the FTC and other regulatory agencies to sue these companies under existing consumer protection laws for a long time now.
And the FTC themselves have been making some noise since at least 2013 about following best practices, together with wording that specifically points out to major companies (like DLink!) that the FTC already has the legal tools needed to pursue punitive actions.
Perhaps D-Link didn't know there were already laws on the books that could be used to punish particularly incompetent software engineering. If they were a small company, I might be more sympathetic. But they aren't, so IMO, that ignorance is entirely on them.
> Now, startups and kickstarter projects need to have a line item in their budget for security insurance. And an insurer has to take on this risk... So startups will have to buy security insurance. And that will be expensive.
I doubt it. They might have to start following best practices and designing secure software. And yeah, that means they can't "move fast and break things" when it comes to security. Cry me a river...
Do you have specific examples of holes that have existed for a long time without official patches or mitigation?
Router companies need to be security conscious, but for the FTC to single out ONE foreign company doesn't smell right.
> California-based D-Link Systems, Inc.
If you don't want to be bound by the laws of the United States and the State of California, maybe don't incorporate there.
Feel free to search for it on the California Secretary of State's website: https://businesssearch.sos.ca.gov/
If they have legal incorporation in the United States, it is fair game for the FTC to go after their US-based company for failure to protect US consumers. Don't want legal liability in the US for your business activity, don't incorporate in the US.
Back to my point, singling out DLink when every other vendor has similar practices smells fishy.