Webcams used to attack Reddit and Twitter recalled (bbc.com)
368 points by rietta on Oct 24, 2016 | 239 comments



Forgive me if I overlooked it in the article, but if this recall is being conducted at Hangzhou Xiongmai's own initiative, it should be applauded. No doubt it'll be expensive to fix these up, and it's not clear that the organization has any real liability for the security faults. People are suggesting that we should "name and shame", and I somewhat agree, but I think we should also applaud efforts of those who are taking the expensive steps of fixing the problem with no direct incentive to do so.

Because the DDOS's costs are borne externally to the consumer, consumers can't really be counted on to mandate security fixes. On the other hand, establishing liability for a company adding to a preexisting botnet through security faults seems tenuous.

One solution seems to be regulation (self or third-party), and it's exciting to see a manufacturer take this issue seriously and start us down the path of self-policing.

[edit: for clarity]


I join you in applauding Hangzhou Xiongmai in doing the recall.

Still, I think liability (for either users or manufacturers) is the right answer. If your device is participating in a DDOS, then it is a bad actor and you should pay for that.

As for regulation, I am in favor of the right regulations (which I suspect will be rules about liability). The danger is that we end up with rules forcing particular engineering methods and security certifications.

That will benefit the incumbent companies who already used the official methods, without necessarily improving the software. In fact it would create "standing targets" that will benefit attackers.


> If your device is participating in a DDOS, then it is a bad actor and you should pay for that.

I wonder how that would play into the debate on gun manufacturers being responsible for people shot and car manufacturers for people run over.

It's hard to find a good way to make a law about this that doesn't end up taking ownership away from the owners.


Those scenarios aren't comparable. For gun manufacturers, you need a person choosing to use a gun irresponsibly or maliciously to cause damage. Car liability is more complicated, but in a lot of cases it's the driver that's held at fault, and rightly so. In the cases where a defect in the car causes a crash, car manufacturers typically are held responsible.

IMO a better metaphor would be to car manufacturers being held responsible for undue environmental degradation, something that does happen to a degree already. (Hello Volkswagen!) The manufacturer should be held responsible for damages to common infrastructure or resources caused by design flaws or deliberately negligent design.


Gun users and car users are already liable in these cases.

In the case of IoT security, it's probably more realistic that we see manufacturers held liable. But in my book owner liability would be fine exactly because it keeps ownership with the owner.

If some DDOS victim can show your camera was pinging them, then he can sue you for 1 cent per ping or something. It's only $200 in total, no big deal. But it will make you ask for more security on your next IoT purchase.


$200 would be devastating to large swathes of the population in the US


True, but I wonder if they'd be buying IoT devices in the first place.


Echoing some of the other comments: guns, as dangerous things go, are remarkably easy to use safely. Compare ~600 fatal accidents/year with the ~33,000 (as I recall) for cars, both of which, through quite a lot of effort, have made large absolute drops over the last N decades even as the population and the number of them have significantly increased.

But to use an Internet of Shit device safely, well, I could do it, but I'd have to uncrate a firewall, freshen its software and get it to play nicely with my AT&T U-verse modem/firewall, freshen my knowledge of packet filtering, and then....

This is not my forte, and obviously impossible for the vast majority of people. Instead, to borrow a phrase from RMS/FSF, these defective by design devices need policing, perhaps at a pretty severe level (e.g. requiring stable responsible entities, which could be 3rd parties (should be, actually, since hardware companies are particularly notorious for bad software and security practices), requiring updating capabilities, ISP policing (https://news.ycombinator.com/item?id=12772979), etc.), we need to generally improve the state of the art, etc. etc. etc.

To bring it back to the gun analogy, some basic human factors allowing the rigorous application of 4 rules (https://en.wikipedia.org/wiki/Jeff_Cooper#Firearms_safety) are all that's needed to keep them from causing unintentional harm (and teaching these isn't hard). Just sitting in my holster this morning when I went shopping, mine wasn't going to do anything bad as long as I retained it against a bad actor trying to snatch it (and first he'd have to realize I was carrying it).

Whereas all it takes is plugging in an IoS device into the net, following the instructions provided for the consumer, for bad actors to compromise them and cause increasing chaos.

We're in for some interesting times; this furthers my determination to make sure I can be productive with my computer(s) without the net being available, and my recent and very much related to all this decision to not go "paperless" with all my utility etc. accounts.


I have fairly strong control over how I use my gun, since it's a relatively simple device that doesn't make independent decisions. Less so for my car, especially more recent ones, and far less so for any IoT devices I own. Therefore, it makes sense for gun owners to have liability, although in the case of "smart" guns more liability should go to the manufacturer.


Everyone makes mistakes and I'm against more liability regulations, which could just end up hurting smaller companies.

Best practice is to require your customer to set an initial password before they can even use the device. But to make someone liable for missing a potential attack vector seems counterproductive.


Finding the right regulation will be tricky, but I think regulation is going to be needed. The simplest thing is forcing devices to generate unique random passwords when connected.

After-the-fact liability can't stop a million small users from buying off-brand merchandise and can't stop off-brand merchandise makers from functioning - especially if quality manufacturers add a premium for the liability they face.
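The "unique random password per device" idea is only a few lines of code. This is an illustrative sketch (the function name and length are made up); the intent is that the result gets printed on the unit's label, Netgear-sticker style, rather than shared across every device:

```python
import secrets
import string

def generate_device_password(length: int = 12) -> str:
    """Return a random alphanumeric password, generated per unit at manufacture
    or first boot, never reused across devices."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call yields an independent password suitable for a per-unit label.
print(generate_device_password())
```

The `secrets` module is the right tool here rather than `random`, since default credentials are exactly the kind of thing attackers try to predict.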


Zero regulation is required. That's the whole point of the Internet.

It is the responsibility of the IETF and the infrastructure manufacturers.

There is absolutely no way to control the actions of individual devices, nor should there be.

The liability lies with the network itself. Period.


I'd be very, very careful about deciding on regulation. The industry has done very, very well for us with no regulation whatsoever.

The state of the industry also changes so fast, it's hard to see how regulation could even be relevant.


Regarding manufacturer liability: As this is a software issue, it would also apply to all software manufacturers. Suddenly, Microsoft/Apple/Google could be liable for damage through viruses that was enabled by a bug in their software. I don't think that law would ever be passed.

The current problems are not that different from computer viruses a decade ago. The problem was only fixed after some major incidents (not that computers are virus-free now, but you can install Windows and connect it to the internet without picking up a virus within 5 minutes).


If we look at NIST guidelines for cryptography, they're fairly stable and really well designed. The official FIPS and CC certification processes take a lot of time and money.


> it's not clear that the organization has any real liability for the security faults.

If everyone is allowed to point the finger at everyone else, nothing will get fixed.

This organization profited from the sale of opaque, unserviceable devices with known flaws. Anyone with a basic understanding of the internet knows a default password is an accident waiting to happen. When will we consider this an unacceptable product defect and mandate that manufacturers be responsible for implementing fixes?

That's the only way we'll see sustained investment in open, vetted architectures and push back against security corner cutting for business reasons.


What about all the open source products out there with default passwords? Most distributions with ARM ports (Arch Linux ARM, Bananian, etc.) have default passwords. At least OpenWrt doesn't (it allows one-time telnet access to set a password, then disables telnet and enables SSH), but should other OSS devs be liable for this bad practice?

You might be like, "Well, hobbyists have to install those images on their hardware themselves." But what if those projects sold hardware on the side to raise funds? Or a third party sold hardware+software bundles? Who in the chain is liable?


> "This organization profited from the sale of..."

Usually not the case for OSS projects.


The same goes for the default go-to distro the Raspberry Pi had, at least a few years ago. Who would be responsible then?


Is the manufacturer dealing directly with the customers? If so, I imagine that direct contact (assuming the things were rebranded) is probably worth more in advertising than the recall will cost them. Hell, I am going to consider that company positively next time I buy a device in that price range.

As for liability, I don't know about fairness, but I think the ISP is really the only entity prepared to deal with this sort of thing on a timely basis. All service providers know the "fix your shit or we unplug you" drill. It flows in both directions, too: generally speaking, the more you pay, the more warnings and hand-holding you get. But the point is that the ISP is the first responder here; nobody else is in a position to stop an attack in progress.

Of course, if you make the ISP legally liable, fairness aside, internet access will cost rather more.


I can only applaud a smart PR move here. I can't imagine even one hundred people recognize the vendor name on their cameras, let alone bother to return them overseas to China. (The article didn't mention where they were sold, but I assume overseas.) Devices from this vendor constitute a tiny portion of the whole botnet, so this recall can't really change anything, but the vendor's name hits the media.


There should be a carrot and a stick solution with a regulated baseline of security (such as "don't use hardcoded passwords, dummy!").

Carrot: labeling/rating system ("our IoT device is A+ security rated, best in class!")

Stick: Devices found part of a botnet and/or DDoS attack are taken offline permanently until cleaned, and/or recall mandated by the governments.

That ought to be enough to improve IoT devices security by about two orders of magnitude from what it is now.

The nice thing about such regulations is that now manufacturers don't have to race to the bottom on costs by not increasing their devices' security and updating them. Because they could win more sales through good rating systems or they could lose much more money through recalls, being shut off from the Internet, and therefore potentially losing customers permanently, and so on.

I'm a believer in "free markets with good baseline standards" for competition, fairness, consumer protection, etc.

I'm not entirely sure how support for updates should be handled, but I think it should be mandated that until at least 80% of your customers have stopped using your product, you are still liable for updating all of them.

So, for instance, if 30% of your customers still use your "smart fridge" 10 years later, then you should still send security updates to it (within 3 months of bug discovery). If after 12 years, only 19% of your customers still use it, then you can stop updating it.

Even then, I worry that 20% of IoT devices could mean potentially billions of devices 15 years from now. And they could still be used by botnets. But hopefully by then the Internet would also be a lot more resilient to DDoS attacks (perhaps by decentralizing it more) and a couple billion IoT devices embedded everywhere into our cities won't represent that much potential DDoS firepower.

EDIT: Whatever made you think this was "voluntary" of Hangzhou Xiongmai? The FBI and likely the DHS and the NSA were investigating this issue. They probably at the very least felt some pressure from them to do this recall. After all, the U.S. government has often been quite decisive on banning Chinese companies from selling in the U.S. because of "national security reasons", so it doesn't seem unlikely that this company feared the same could happen now, too. From that perspective a recall seems cheap.


Just because a device is deemed 'secure' one day doesn't mean there aren't hidden vulnerabilities that will get it deemed 'insecure' in the future. How would you manage that after the hardware has shipped and is on the shelves? A sticker on a box gives a false impression of security by oversimplifying it. Plus, it's not exactly hard to put a false label on a device or counterfeit it in some way.

All hardware should follow basic security principles, it shouldn't be something you can grade on. If there's been an issue then the company should be rightly called out, which seems to have happened in this case.


I think that one of the baseline requirements for an internet connected device to be considered fully secure is automatic patching by the manufacturer, with a signature verification system for the patches. If a vulnerability is found, this allows the manufacturer to patch it out of shipped devices before it can be exploited, reducing the attack surface considerably.

This isn't perfect (especially if the signature mechanism itself can be exploited; hello attacker-patchable devices) but it's a start.
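As a rough sketch of that verify-before-apply flow (not any particular vendor's mechanism): a real device would hold only a vendor public key and check an asymmetric signature such as Ed25519; HMAC with a baked-in key stands in here purely to keep the example standard-library-only.

```python
import hashlib
import hmac

# Illustrative shared key -- in practice the device stores a *public* key and
# the private signing key never leaves the vendor's build infrastructure.
DEVICE_KEY = b"key-baked-into-firmware"

def sign_update(payload: bytes) -> bytes:
    """Vendor side: produce a signature over the firmware image."""
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def apply_update(payload: bytes, signature: bytes) -> bool:
    """Device side: verify before flashing; refuse anything that doesn't check out."""
    if not hmac.compare_digest(sign_update(payload), signature):
        return False
    # ... write payload to the inactive firmware partition, then reboot into it ...
    return True

firmware = b"firmware-v2.bin contents"
good_sig = sign_update(firmware)
print(apply_update(firmware, good_sig))         # True: untampered image
print(apply_update(firmware + b"!", good_sig))  # False: any modification is rejected
```

The constant-time compare matters even here; a naive `==` on signatures can leak timing information.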


One problem is that manufacturer-controlled updates are at odds with consumer ownership of software and hardware. The demands of privacy advocates are almost diametrically opposed to the concept of automatic pushed updates. There have been plenty of examples where manufacturer updates have made devices worse in some way.


I already see your comment turning gray for daring to suggest a role for regulation here. Sorry people, if your company releases a negligently insecure IoT device into the world in any sort of quantity, you are a polluter.


To play devil's advocate a bit - if Best Buy sold a Windows 7 laptop that didn't have auto-update turned on by default, and the system gets hacked 3-4 years later from lack of patching, did Best Buy release a negligently insecure IoT device? Did Microsoft release a negligently insecure IoT device by not forcing auto-updates by default? Should users truly own their IoT products and have the ability to turn off auto-updates?

This is a very thorny problem.


It depends on the definition of negligence, of course. Sometimes you take all appropriate precautions and still get screwed -- OK. We have a situation where companies are releasing products that every competent professional would recognize as a security trainwreck, and it happens with complete regularity, because the incentives simply do not currently align to make corporate executives give a damn.

Due to externalized costs that impact more or less the entire population, this is now a political issue. It's exactly as thorny as environmental regulation, or public safety regulation. Should users truly own their cars and have the ability to disable safety features? I think so, but it shouldn't just be a button on the dash. Should they be able to disable emissions features?

I'm not trying to oversimplify here, but I also don't think we should allow the perfect to be the enemy of the good. If there were real incentives to produce secure products, we could expect to see much more investment in secure software, perhaps even verifiable formal methods, and hopefully more industry collaboration/standardization around open source platforms to mitigate risk. It's a tradeoff, but I so often see front-line engineers faced with situations where acting responsibly/ethically with respect to security puts them at odds with management, and that's a clear sign that we're not getting the balance right.


I'm comfortable with both -- Best Buy and MSFT liability, and the ability to turn off auto-updates as long as they ship on.

The vast majority of people don't change configuration or even have any idea configuration exists. Taking devices with sophisticated configuration and dumping all responsibility onto end users simply cannot be the way forward for smart devices. Or we'll live in a world where some 12-year-old with internet access can download a script that turns your refrigerator off, ruining food; turns your kettle/oven/coffeemaker/toaster oven on, burning your house down; and makes you an internet porn star by turning on cameras in your home and streaming the video to who-knows-where.


Although I agree with the general ethos of what you're saying, rating systems have done rather poorly in capitalist societies in the last ten years. The BBB and everything on Wall Street come to mind.


>but if this recall is being conducted at Hangzhou Xiongmai's own initiative

I imagine there were a lot of angry calls from Washington and Beijing over this. I don't think companies are magnanimous like this without external pressure.

>but I think we should also applaud efforts of those who are taking the expensive steps of fixing the problem with no direct incentive to do so.

Applaud what, exactly? That all these companies ship devices that happily take IP addresses and go on the public internet, or punch through firewalls with UPnP, and don't force a first-time password change or include an auto-updater? Even the cheapest Netgear comes with a sticker on the side bearing a semi-random password instead of shipping everything as 'admin/password'. Or they use antiquated, unencrypted transports like telnet. These 1990s habits need to die, and it won't happen because of corporate good graces, because those haven't worked historically. If it happens, it'll happen via external pressure from governments and lawsuits. Market forces are rarely a fix-all here.


> I imagine there were a lot of angry calls from Washington and Beijing over this. I don't think companies are magnanimous like this without external pressure.

No one in Washington or Beijing with diplomatic credentials and contact with the other side even knows what DNS is or that there was an attack on Friday.

A lot of Chinese companies are both privately owned and awash in money, and that gives them the latitude to behave in ways that aren't strictly about achieving maximum Pareto efficiency.

These guys are doing the right thing. Please don't try to minimize that.


>No one in Washington or Beijing with diplomatic credentials and contact with the other side even knows what DNS is or that there was an attack on Friday.

They have advisors who sure as hell know exactly what is going on here. This is a very naive view of how government works. Diplomats don't need to know what code is to make policy. Chinese bureaucrats certainly know that pissing off their biggest trade and manufacturing partner isn't good for their economy.

This was a very high-profile attack. Pretty much every intelligence agency in the world has notified their higher-ups on what likely happened and the hammer most definitely fell on this company. Chinese businesses aren't exactly known for their good corporate personas and generosity. See baby and dog poisonings and desperate reactions by the government like executing executives. On top of the everyday IP infringements and other cavalier attitudes towards international business norms.


> They have advisors that sure as hell know exactly what is going on here. This is a very naive view of how government works. Diplomats don't need to know what code is to make policy.

Are you speculating? Do you work in the diplomatic corps? I get the opposite impression given what I've read in wikileaks releases.

> This was a very high-profile attack. Pretty much every intelligence agency in the world has notified their higher-ups on what likely happened and the hammer most definitely fell on this company.

Assuming for the moment that there is any communication made at all regarding this incident, do you really think the Chinese government is going to pressure some small Chinese business to lose a whole bunch of money unnecessarily because an American anarchist group attacked some American websites with said Chinese company's webcams?

Nobody cares.

> Chinese businesses aren't exactly known for their good corporate personas and generosity.

What an awful generalization.


> Assuming for the moment that there is any communication made at all regarding this incident, do you really think the Chinese government is going to pressure some small Chinese business to lose a whole bunch of money unnecessarily because an American anarchist group attacked some American websites with said Chinese company's webcams?

What about naked self-interest? Those webcams are vulnerable and can be used to attack any infrastructure, including China's own.


Not to sound like an ass, but it would help us not to generalize if you showed some counterexamples.


I feel like the burden falls on the accuser


Let's see. How about this news story that is 4 hours old?

http://www.nzherald.co.nz/business/news/article.cfm?c_id=3&o...

Or this http://www.foodandwaterwatch.org/news/potentially-unsafe-foo...

Well, try this if you are still not convinced http://lmgtfy.com/?q=china+us+food


I would go further and say that regardless of the motivation, they are doing the right thing and therefore should be applauded.


There should be recalls from more manufacturers. Someone I know purchased a surveillance camera with a major brand name (Samsung) from Costco [0] just a few weeks ago that gave me a root shell by simply telneting in as root with no password and no way to reliably set a root password or disable telnet. It was returned the following day. Last I checked, Costco is still selling it. This problem isn't confined to cheap Chinese cameras you can buy online. Vulnerable devices are being sold at major American retailers and they are still on the shelves.

[0] http://www.costco.com/Samsung-SmartCam-HD-Plus-1080p-Wi-Fi-I...


Yeah, this is one reason I still don't have security cameras set up on my home network. If I decide to get them, I am going for a dedicated Ethernet network just for cameras, with no internet connection. I may allow a VPN to an in-house server to view footage. According to the Wirecutter, Nest cameras are some of the better commercial ones, but I've still not bought one or done any review myself.


When we were shopping for a baby cam to keep an eye on the baby, I opted to get a simple RF cam [1] instead of the more popular IP cameras that allow you to use your smartphone and monitor from anywhere.

The lower-tech approach means you can park a van in my driveway and probably pick up the signal, but that's a lot harder (and more obvious) than scanning an IP range from anywhere in the world and finding vulnerable devices.

[1] https://www.amazon.com/Foscam-FBM3501-Wireless-Digital-Monit...


I got a Wansview camera, assigned it a static IP, and just don't allow any traffic not originating from the Chromecasts or tablet -- it's nice because all the TVs do picture-in-picture with the baby camera.

Still pretty weird seeing the constant log entries trying to reach a couple of servers - I've been doing traffic capture since I'd like to see what it's trying to do. One is obviously the plug-and-play (UPnP) stuff, but it's crazy that those packets apparently get broadcast outside the network (? - I haven't really looked into how that UPnP IP/port is handled, but it's getting caught at my firewall).


We have IP cameras (Axis) on a dedicated VLAN that doesn't have access to/from the WLAN, and things work pretty well. I don't trust VPNs (the NSA clearly watered down the IPsec standard and can definitely compromise most IPsec connections [not sure about IKEv2]; OpenVPN is a messy pile of shit that is undoubtedly swamped with vulnerabilities), but I do allow a VPN into my camera network. The compromise I made is to send a notification email for each established VPN connection, regardless of how it was established, so at least I'll probably know if someone else connects.

With Nest, you have to use their "cloud" for it to be fully functional, which to me makes it a no-go for anybody like you who is actually concerned with his/her security/privacy.

The most popular IP camera on Amazon is a Chinese camera that gets your Wi-Fi password through their app via the "cloud". Fuck that.


>gets your Wifi password through their app via the "cloud".

And? What does it matter that someone has a password that's only good for about 100 metres around your house?

Of all the passwords I have, my wifi password is the one I care least about.

I'd be more worried about what the app itself is doing on my phone - I caught one attempting to update outside of the Play Store. No thanks.


> I'd be more worried about what the app itself is doing on my phone - I caught one attempting to update outside of the Play Store.

If it is Chinese-made, that might just be because the Play Store is blocked by the Great Firewall. Apps in China need to use some other way to update.


This is a great point, but the app in question was Broadlink eControl - https://play.google.com/store/apps/details?id=com.broadlink....


Made in China.


I have my router firewall blocking all traffic to and from the Internet to my cameras. My router also offers OpenVPN for when I need access. It's not perfect, but it provides pretty good protection against someone attempting to use generic methods to compromise my devices as we've seen here.
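Something in that spirit can be expressed in a couple of firewall rules. This is a hypothetical sketch, not my actual config: the camera's address (192.168.1.50) and the WAN interface name (eth0) are made up, and the rules simply drop all forwarding between the camera and the internet while leaving LAN access alone.

```shell
# Block the camera from reaching the internet (exfiltration / botnet C&C)
iptables -A FORWARD -s 192.168.1.50 -o eth0 -j DROP
# Block the internet from reaching the camera (generic scanning / telnet probes)
iptables -A FORWARD -d 192.168.1.50 -i eth0 -j DROP
```

LAN-side viewing and a VPN terminating on the router itself are unaffected, since neither path traverses the FORWARD chain toward eth0.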


If you have the interest and knowhow, you can build your own with an RPI. That's what I eventually did.

Admittedly, it's a far cry from an Off-the-Shelf solution though.


If we start down the legislative road for all elements connected to the Internet, where is that going to end up?


"...hackers were able to take over the cameras because users had not changed the devices' default passwords."

"Security issues are a problem facing all mankind," it said. "Since industry giants have experienced them, Xiongmai is not afraid to experience them once, too."

The fact that they aren't scared of having brain-dead security failures in their products is, to put it lightly, telling.


Liability rests with the network itself. Bitching about devices or retailers is pissing against the wind


Keep going with the down votes...

Shall I give you a couple of hundred comments for ammunition?

On the other hand, if you disagree with my perspective, man up and present an alternate perspective.

How is that?


Please don't complain about downvotes. Keep the discussion centered around the content of the article.


Yes you are right

The thing is, I'm seriously concerned about the rhetoric on this thread. There seems to be a general bias toward legislative action and I know it isn't going to go well if that's the way things turn.

My reaction against down votes is pure frustration though. Down votes are a dead end.

I think I'll go back to my happy place...


The downvotes (for comments after the first) are more for the spamming and metacomplaints than anything else.


You can down vote me all you want. Go for it


Oh no there are vulnerable devices on the Internet. Do you have any idea what you are saying?

EVERY device on the Internet is vulnerable, and it makes no difference to Dyn DNS where it was manufactured or how long it had been running without an update.

Zero Day Exploits are real!

Wake up!


Is this recall of IoT devices the first of its kind? While it is but a drop in the bucket of IoT insecurity, it has to be an expensive way for the Chinese firm Hangzhou Xiongmai to learn that having the same password on all devices is a bad security idea. Other manufacturers should take note.


At least shaming manufacturers over security is a good thing if the wish to avoid future shaming makes them start caring about it. But I doubt most people who bought and use these devices will care as long as they still work.


Agreed. This is a good business decision to issue a recall. It probably won't cost too much since very few people will actually return it.


I can imagine having a vulnerable webcam is a huge privacy risk too.


It's common to see people frown upon seeing

  curl ... | sh
Sad that we happily allow a black box to connect to our internet without any inkling of the fact that it might be used to attack someone, spy upon someone (ourselves?), ...


You say that like the people frowning about curl|sh are the same ones plugging random crap into the internet.

I doubt there's much overlap between the two groups of people.


ahem... well, there's, uh... me...

I'm afraid that you would find the number of people who failed to consider blackbox internet appliances as attack vectors to be disappointingly large.

Luckily, I already had a paranoid router setup, so I think I'm probably okay.


The crowd raises their styrofoam coffee cups and says "Hi knodi123."


I didn't say that "people frowning about curl|sh are the same ones plugging random crap into the internet".

That said, I wouldn't be terribly surprised to see a fair bit of overlap.


Certainly not the majority, but probably more than you think.


The set of those who frown upon curl ... | sh and the set of those who happily allow a black box to connect to their internet and potentially spy on them must be pretty close to disjoint, no?


What is wrong with curl | sh with HTTPS?

Linux package managers do essentially the same thing, just with GPG signatures instead.


There was a nice writeup some time ago showing that the server can detect piping to a shell via some properties of the connection, and could send an exploit in that case:

https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...

https://news.ycombinator.com/item?id=11532599


Ok, here's the thing:

Almost no one actually audits the source they download before they run it, whether it be a package installer, a configure/make/make-install tarball, or a curl|sh.

If you know enough to audit a shellscript, you can easily decompose the command-line to download, audit, and run the local copy.
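That decomposition is literally just splitting the pipe into steps. In this sketch the URL is a placeholder and the printf stands in for the download so it runs offline:

```shell
# Step 1: download to a file instead of piping straight into sh
# curl -fsSL https://example.com/install.sh -o install.sh
printf 'echo install complete\n' > install.sh   # stand-in for the download

# Step 2: audit the local copy (the step curl|sh skips)
# less install.sh

# Step 3: run the exact bytes you audited
sh install.sh
```

Running the audited file also removes the partial-download and pipe-detection issues discussed elsewhere in this thread, since the script is complete on disk before anything executes.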

For the vast majority that chooses to simply trust the author, there is no additional security issue with curl|sh compared to other non-signed distribution methods provided the script is wrapped in a function call or otherwise resistant to errors that might happen if a partial download is executed.

Should we do better than curl|sh? Yes. But it's no worse than downloading some random tarball or even cloning some random git repository, as long as it's served over HTTPS.


Note that neither dpkg -i nor rpm -i with a downloaded .deb or .rpm is any better than curl | sh. Installing .deb or .rpm packages is more than just unpacking an archive, it's effectively giving root to the package author: both .deb and .rpm files can have embedded scripts that are run at install time. This is probably true for all Linux distro package formats (I hope there are exceptions, but I'm not putting too much faith in that).
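To make that concrete, here is an illustrative skeleton of a maintainer script (names and contents made up). dpkg executes DEBIAN/postinst as root during installation, which is functionally curl | sudo sh with extra steps:

```shell
# Minimal .deb layout with an embedded install-time script
mkdir -p pkg/DEBIAN
cat > pkg/DEBIAN/postinst <<'EOF'
#!/bin/sh
set -e
echo "arbitrary code, run as root at install time"
EOF
chmod 755 pkg/DEBIAN/postinst

# dpkg-deb --build pkg example.deb   # packaging step (requires dpkg-deb)
sh pkg/DEBIAN/postinst               # what dpkg would execute on install
```

preinst, prerm, and postrm hooks get the same treatment, so the trust decision is made when you choose the package source, not by the file format.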


Another issue is partially written responses. If the connection fails in the middle of your curl (or the source's application crashes) you can receive part of a shell script that could very well contain `rm -rf /usr`.

Edit: I cooked up an example actually. Here is a small Go program that will panic after 1 nanosecond: https://gist.github.com/kyleterry/dc304503dfca2d149b189694d1...

This will sometimes return partial responses.

Run this in a bash `while true` loop and curl localhost:8080 a few times. You will mostly see empty and full responses because my example isn't perfect, but occasionally you will only get part of the script dumped to the screen and that's the problem with curl|bash.
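The same failure mode can be reproduced without Go at all; a shell-only sketch (echo stands in for the dangerous command, so nothing is actually deleted):

```shell
# full.sh stands in for the script a server intended to send.
printf 'echo rm -rf /usr/local/myapp\n' > full.sh
# Simulate the connection dropping after 16 bytes of the transfer:
head -c 16 full.sh > partial.sh
sh partial.sh  # the truncated script runs "echo rm -rf /usr"
```

A scoped cleanup of /usr/local/myapp truncates into a command targeting all of /usr, which is exactly the hazard the function-wrapping trick below guards against.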


The "curl | bash" usages I've seen from reputable sources avoid that by defining a single bash function f and then calling the function on the final line, i.e.:

  function f {
          echo "hello world"
  }
  f


Totally, this is a good way to protect from that. It's just impossible to trust that that is what someone is doing without going and looking at the script, unfortunately.


>Almost no one actually audits the source they download before they run it,

That's not the security issue here. The issue is effectively running build-time logic at install time, on everybody's machine at every install, instead of running it once on the package maintainer's secure build system and having installation basically be an unpack instruction.

Another negative point of curl | bash is that you assume your users are idiots who do not know their OS package manager well enough to run apt-get install shitpackage, pacman -S shitpackage, or apk add shitpackage.


> Another negative point of curl | bash is that you assume your users are idiots who do not know their OS package manager well enough to run apt-get install shitpackage, pacman -S shitpackage, or apk add shitpackage.

Absolutely, but packaging for all major distributions can be a larger headache than writing the project in the first place. For a lot of small projects, this isn't justifiable.


If you download a tarball, you can google the sha256sum to see if at least Debian, a couple of mailing lists etc. have the same one.

If your GPG signature is in your keychain, it's safer anyway.
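For concreteness, the checksum check can be sketched like this (the tarball here is a locally created stand-in; in real use the published hash would come from the project page, Debian, or a mailing list rather than being computed on the spot):

```shell
printf 'payload\n' > foo-1.0.tar.gz   # stand-in for a downloaded tarball
published=$(sha256sum foo-1.0.tar.gz | awk '{print $1}')
# sha256sum -c compares the file against the published digest:
echo "$published  foo-1.0.tar.gz" | sha256sum -c -   # prints "foo-1.0.tar.gz: OK"
```

For GPG, the analogous step is `gpg --verify foo-1.0.tar.gz.asc foo-1.0.tar.gz` against a key already in your keyring.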

HTTPS by itself does not protect against malicious content or malicious actors (DigiNotar, TurkTrust, GCHQ, ...)


I'm fairly comfortable with `curl | sh` over HTTPS, but to my knowledge, the servers serving the GPG signatures for Linux packages aren't able to sign them with the right keys.


GPG is great for offline signatures, which in addition to HTTPS is a great way to assure that the software you obtain is legitimate.

That doesn't guarantee it isn't the product of a targeted trojan signed with a stolen GPG key, though.


> What is wrong with curl | sh with HTTPS?

That you may unwittingly be opening up your machine to vulnerabilities.

Most sophisticated people won't execute remote scripts from untrusted sources, but as more people switch to unix-like operating systems, they may not apply as much scrutiny.

A designer on my team was given a Mac, and if a script failed to run locally, he would immediately re-run the script using `sudo`. I can't imagine he was the only person to use this solution.


It doesn't ever work.

I've yet to see a "curl https://thisshit.net/trust/us/really/we/have/https | bash" script which didn't fail for some dumb reason, such as only working on specific variants of Ubuntu or RedHat, with specific packages installed in specific versions.

Usually these shit scripts fail when trying them out, for curiosity, in ArchLinux, Gentoo or Alpine Linux.

Sandstorm.io looking at you.

If you are already going to invest in making "an install script" - having defined and researched which packages and versions your software needs on various platforms - then submit that data to each platform as a debian/{control,etc} to make a .deb, a specfile for an .rpm, a PKGBUILD, or whatever, and use the platform's _build tools_ to generate a platform-specific package - be a package maintainer if you have to.

Another side effect of telling your users to curl | bash is that I won't ever use your software. You didn't make the effort to do the right thing with packaging, so why would you have done any better with whatever it is you're trying to do with your shit software? (Sandstorm again.)


> Usually these shit scripts fail when trying them out, for curiosity, in ArchLinux, Gentoo or Alpine Linux.

> Sandstorm.io looking at you.

That's hardly fair, since the reason the Sandstorm installer fails on Arch is because it explicitly checks for the presence of a required kernel feature, and Arch explicitly doesn't have that feature. The installer script fails intentionally because of the missing feature, not because it is badly written. Shipping an Arch package wouldn't have magically made Sandstorm be able to work without the required kernel features. OTOH, if you build a custom kernel that has the necessary features, the Sandstorm installer will run just fine on Arch.

(Coincidentally, we plan to lift this restriction this week, as we recently implemented an alternate sandbox design which doesn't require this feature.)


It was badly written: it was checking /etc/lsb-release instead of actually checking for that feature, and I hope you don't rely today on /proc/config for "checking for the feature", since /proc/config may be unavailable even when the feature actually is. Anyway, good job fixing it.

In any case, I doubt your sandbox today would work in my sandbox.

That is kind of the point: why can't you ship an application which runs as a normal user only? Why do you require root for everything and assume your Sandstorm will be the machine manager? Last time I tried, you actually recommended curl shit | sudo bash. sudo! Never mind, I hate sudo and don't have it.

If you need sudo, take it or ask for it specifically for that one operation, not for the whole install script. Maybe that's how it's done today?


> it was checking /etc/lsb-release instead of actually checking for that feature

No it wasn't. The install script ran a little program that actually attempted to invoke the system call to see if it failed:

https://github.com/sandstorm-io/sandstorm/blob/800d1c016c150...

We have never relied on /etc/lsb-release nor /proc/config.

> why can't you ship an application which runs as a normal user only

First, we're not shipping an application, we're shipping an application platform which includes a container engine.

On systems that have unprivileged user namespaces available, you actually can install and run Sandstorm as a regular user -- no root privileges at all.

Unfortunately, Arch explicitly turns off user namespaces in their kernel build.

Without user namespaces, we have no choice but to require root privileges, because the Linux system calls for setting up containers require either root or user namespaces. (Note that Sandstorm is careful with its root privs -- most of the system runs as a regular user, calling out to a separate daemon for the specific operations that need root.)

If you don't want to give Sandstorm root -- which is perfectly understandable -- and you don't want to enable the kernel features that allow it to be non-root, then you'll need to run it in a dedicated VM.

> Last time I tried, you actually recommended curl shit | sudo bash.

We have never recommended that.


Hi! I maintain a "curl | bash" installer script [1][2].

    such as only working on specific variants of Ubuntu,
    with specific packages installed in specific version,
    and/or RedHat
Our installer is capable of automatically installing dependencies on current versions of Debian, Ubuntu, and CentOS. If you're running something else it prints out "This doesn't appear to be a deb-based distro or an rpm-based one. Not going to be able to install dependencies. Please install dependencies manually and rerun with --no-deps-check.", which seems pretty reasonable to me?

    submit this data to that platform as an
    debian/{control,etc} to make a .deb or specfile for
    .rpm or PKGBUILD or whatever, and use the platforms
    _build tools_ to generate a platform specific package
    - be a package mantainer if you have to
Building a binary package for ngx_pagespeed isn't really practical. It needs to depend on nginx, but nginx doesn't have an ABI, so it needs to be built from source alongside your existing nginx or as part of a new nginx install. Getting into the distributions, so that they can build the ngx_pagespeed module at the same time as they build their main nginx would work, and is somewhere I would like to be eventually, but requires a lot of coordination with distros.

    Another side effect of telling your users to curl | bash
    is that I won't ever use your software. You didn't make
    the effort to do the right thing with packaging, so why
    would you have done any better with whatever it is you're
    trying to do with your shit software?
I do think what we've done in this case is "the right thing" in terms of how we should be spending our development time. Our installation page does still offer manual install instructions, however, for people who would rather not run a script.

[1] https://developers.google.com/speed/pagespeed/module/build_n...

[2] https://ngxpagespeed.com/install


> Our installer is capable of automatically installing dependencies on current versions of

So you dirty the system where it's supposed to be installed, a server, with gcc, make and friends?

> . If you're running something else it prints out "This doesn't appea

State so on your documentation/page before telling the user curl | bash, as I wrote in another post here, that is nice.

Your script does not support transactional installations; as it is, it can break at any point in the script and leave the target system broken/dirty. With a simple PKGBUILD you'd also have solved the "binary ABI" part. If it really is tedious to coordinate with several distros, you can take a shortcut and deliver not a "pkgspeed-for_nginx_module-1.2.3@v1.2" package but instead a "pkgspeed-1.2.3" which conflicts with the normal nginx package, and that's it.


    So you dirty the system where it's supposed to be
    installed, a server, with gcc, make and friends?
That's the way nginx is traditionally installed, and we're shipping an nginx plugin.

    State so on your documentation/page before telling the
    user curl | bash
The first thing the install script does is check if you're on a supported system, and if not it exits. Since the vast majority of people running the script are on supported systems, this seems better than putting a lot of warnings up front.

    Your script does not support transactional
    installations; as it is, it can break at any point
    in the script and leave the target system broken/dirty
By default the script only builds ngx_pagespeed, and doesn't install it. If you ask it to handle the install, it calls nginx's "make install", which again is the standard way you install nginx.


> That's the way nginx is traditionally installed

No. That is not how nginx is traditionally installed.

You misunderstand basic Linux package management and build systems, which is why you end up in this mess. But that's okay, the world keeps on spinnin'. To each their own.


Don't all the same criticisms apply to `./configure && make install`?


Sure, so make it clear what curl | bash is, and tell your users "we don't have any packages for your system; here is the source, and we use X build tool, so please do ./configure && make install, or cmake, or whatever" - that would have been nice.

When I have to install packages from source, I use a more secure, locked-down Linux container, with seccomp applied, capabilities dropped, etc. Then, if I intend to use the software over a longer time and expect updates, I skim the source and build-system quality, quickly write a PKGBUILD or APKBUILD, build it, and install it into its own new container - with iptables filtering outgoing connections - and that's it; sometimes I submit the APKBUILD to Alpine testing or the AUR.

And all this is easier than curl | bash - oh, watch it fail for some dumb reason in the install script. I did try that; the time spent troubleshooting the install script is longer than the time to sandbox the whole thing as described above.

So, if I can do it, why can't the project? Why spend their efforts on writing an install script instead of on package-build instructions for various distributions?


For Sandstorm in particular, it doesn't necessarily run inside a container since it is itself a container management / sandbox tool, makes use of seccomp protections, etc. I'm not sure what the best way to sandbox Sandstorm is. Maybe get a separate VM instance.

(I am a Sandstorm fan, but I don't have it installed because it wants me to flip some sysctl about user namespaces or something, and I don't currently have a separate machine to do that on)


> it wants me to flip some sysctl about user namespaces or something

FWIW, this is no longer needed! Sandstorm has been updated to work without user namespaces and this installer script change will remove the check from the installer (will probably ship this week):

https://github.com/sandstorm-io/sandstorm/pull/2656

(You will need to let Sandstorm start as root if you don't want to flip the sysctl, though.)


> You will need to let Sandstorm start as root

Joke of today, thanks for the laugh.


What do you run as root that isn't accessible to other users on your system, and why are you deploying Sandstorm on this machine?

If it's a single-user machine, any process running as your user can trivially get root by editing ~/.bashrc and aliasing the sudo command.
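To make that concrete, here's a sketch of the shadowing trick in a throwaway shell session (the "captured" echo stands in for whatever a real attacker would do with your typed password):

```shell
# A function named "sudo" shadows the real binary for this session,
# exactly as an alias planted in ~/.bashrc would for every new session:
sudo() { echo "captured: sudo $*"; }
sudo apt-get update   # prints "captured: sudo apt-get update" instead of escalating
```

An attacker who can append one such line to ~/.bashrc gets this behavior in every future interactive shell, password prompt included.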


Aliasing the sudo command? Are you serious?

I don't even have sudo, and even if I had it, I wouldn't use its dumb credential/session caching or its "ask no password" "feature".

Just because my system is single-user doesn't mean I run every process as that one user. In fact, I run Firefox as its own user, Chrome has its own, mpd its own, nginx its own, and so on. This is basic security practice.

If you seriously think that it is so easy to get root just because you can run a process as a normal user - "by aliasing the sudo command" - then by all means run everything as root; what's the harm or big deal, right? Why do you even have a user which is non-root?

Why can't Sandstorm run like lighttpd or nginx - as its own user, requiring no capabilities? In fact, even syscalls can be revoked from them with seccomp and they will still work fine. All they need is the socket API, the filesystem API (open, close, create), and some others; they don't need to load kernel modules, ptrace, open_by_handle_at, and so on.


nginx usually needs to start as root, in order to bind low-numbered ports, but the worker processes run as non-root.

Sandstorm works exactly the same way.

nginx can run as completely non-root, if you are OK with high-numbered ports.

Sandstorm can too, if you are OK with high-numbered ports and if unprivileged user namespaces are enabled.

(It sounds like you actually use UID separation for security on a desktop. That's cool, although keep in mind that if everything is talking to the same X server, then UID separation probably doesn't help much. If you're serious about this approach you should probably be using QubesOS.)


In theory yes, but in my experience ./configure && make install is a lot more reliable.

Projects that recommend piping curl to bash are trying to dumb down the process, and they have a tendency to make too many assumptions about the system it'll be running on.


Pretty much all devices with wifi have "black box" elements: their closed source firmware is quite capable of sending and receiving packets without the OS ever being informed.


  THE MOST POPULAR CURL DOWNLOAD – BY A MALWARE
https://news.ycombinator.com/item?id=10574011


Even worse, people will happily comply with whatever suggestions their IT department gives them.

Sometimes IT departments just google random instructions from the interweb. Some of them include really outdated security practices.


> The web attack enrolled thousands of devices that make up the internet of things - smart devices used to oversee homes and which can be controlled remotely.

It's almost poetic that the IoT devices in question are remote-controllable webcams, since constant surveillance is the other symbol of a dystopian Big Brother society.


Has anyone seen an explanation of how the telnet port on these devices is getting exposed to the internet to be exploited? I would think that most home users are behind a NAT device. Even with UPnP, why would the manufacturer have that port set to be forwarded?


It's UPnP [0]. It was always going to be UPnP. UPnP is the wrong set of trade-offs and always was. And even making it 'off by default' won't solve the problem, because the standard instructions for getting any multiplayer game or IoT gizmo to work are 'turn on UPnP'.

Not that this in any way absolves the OEM for the utter idiocy of including the telnet port in their forwards at all and the absolute negligence of having it active by default and 'secured' by a single or small combination of well known auth tuples.

But yeah, that's really what they did. Here's the section of Mirai's scanner.c that sets up the destination port. [1]

    // Set up TCP header
    tcph->dest = htons(23);
    tcph->source = source_port;
    tcph->doff = 5;
    tcph->window = rand_next() & 0xffff;
    tcph->syn = TRUE;

They really did just forward port 23. Tempting to call malfeasance but at best massive incompetence.

[0] https://www.us-cert.gov/ncas/alerts/TA16-288A

[1] https://github.com/jgamblin/Mirai-Source-Code/blob/master/mi...


I've never seen any embedded UPnP implementation (I think the spec is "Internet Gateway Device") require any kind of authentication before forwarding ports. I wonder if that's even possible?


> why would the manufacturer have that port set to be forwarded?

Because remote management, and the default state is to have everything opened up to facilitate pain-free setup and config.


To lower support costs. It's a lot easier to use UPnP than to explain to consumers why their app or web browser won't connect to their home device because of how NAT and firewalling work. Or to implement hosted servers, which also cost money, to be the go-between.


There are lots of ways to get inside a network besides going through open nat ports. If there's a web interface, you could pop that, or go through an infected pc, etc. Once you're in the network, you can hit all the ports you want inside it.


But that's not how it's happening. These IoT devices are being granted IP addresses, somehow. Mirai is scanning the web and blindly trying telnet, and if it works, it tries these password combos. It doesn't do anything even remotely sophisticated to navigate a network.


Agree, seems bizarre but sounds like sloppy device config.

https://news.ycombinator.com/item?id=12765265


Is there a relatively easy way to see what devices in your house are visible to the Internet?


To answer my own question, and to fish for better answers, http://iotscanner.bullguard.com/#/ gives an answer. Not sure how definitive though.
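For a rough LAN-side check with no external tools, bash's /dev/tcp can probe whether a device is listening on a given port (host and port below are examples; note this shows what's open from inside the LAN, not what's reachable from the internet - for the latter you need a scan from outside, like the one above):

```shell
# Returns "open" if a TCP connect succeeds, "closed" otherwise.
probe() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed; }
probe 127.0.0.1 1        # port 1 on localhost: almost certainly "closed"
# probe 192.168.1.50 23  # e.g. check a camera for a listening telnet port
```

A tool like nmap gives a much more complete picture per device, but this works on any box with bash.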


Is this the first time that this has happened? A severely insecure device leading to a recall.


The other company that was widely affected by this, Dahua (see my coverage here: https://ipvm.com/reports/dahua-ddos?code=hn) also issued a statement that they would offer a trade-in discount for affected devices. It wasn't a full recall, and you have to jump through some hoops (work with authorized dealer, etc.) in order to get it.



I have the same question. I am not aware of a similar situation.


Underwriters Laboratories should start including basic security hardening in their tests.


Vulnerabilities can be surprisingly hard to find: http://mjg59.dreamwidth.org/45098.html

(Matthew is an absolute expert at breaking into cheap IoT devices)


But with this particular vulnerability, e.g. weak passwords and no way to reset it, I would think that UL could in fact test for this and fail the device.

I like the idea of UL testing for at least basic security vulnerabilities.


I think you mean testing for basic security best practices. Testing for vulnerabilities is the hard part, but adherence to good practices is easier to test (e.g., is the inability to reset a password a vulnerability or a feature?).


They should be hard to find. Basic UL testing could establish that the most common problems are at least tested for.


In the small electronics market, price is everything. Thus any compliance that raises costs is likely to be routed around, especially by Chinese companies. Do most products coming from China even get UL listed?


Many distributors won't stock products that are not UL listed.


Who needs a distributor when you have China Post?


I doubt millions of IP security cameras were bought off eBay or Alibaba. We're talking more like "Will Walmart stock it"?


I wouldn't make the perfect the enemy of the good. It could start off with just a checklist of common vulnerability types and go from there. The root cause of these issues is that no one cared about them at all. Just the idea that a certifying agency you are going through anyway will be checking would help enormously.


I mean, just a couple days ago on the front page we read about a PayPal multifactor bypass by... wait for it... deleting the token field from the POST request.

Vulnerabilities are often trivial in hindsight and lost in a massive forest up until the moment they are serendipitously discovered.

I'm not saying a proper audit shouldn't have found this and flagged it. But yes, agree strongly, vulnerabilities can be surprisingly hard to find.


I completely agree with his sentiment. While it certainly won't catch all vulnerabilities, it should at least catch the more obvious and easily exploitable ones. And it concentrates the skills required in an entity that has a business model that can sustain it.

I see it as analogous to FCC certification of RF related equipment.


The logical answer to this, which is what we're seeing in the US, is to tie every device to the corporate backend servers.

This does a few things.

1. Locks endpoints down

2. Chains users to the company

3. Makes it hard for cloners to copy the product, given the server-based logic

Hell, Google and Microsoft can't even be bothered to keep their products or their DRM up for any length of time. But again, users are willing to rent chained devices like the Nest.


That's not going to work--we just had the discovery of a major vulnerability in Linux that has been around for years but got patched last week! It is what it is.


Regulating IoT is going to be impossible when tiny little devices can be shipped undeclared. Even if it could be policed effectively it would destroy the economics of some of these cheaper devices.

Instead, how about ISP-supplied modems/routers having sensible defaults (and having the admin password reset)?

Much more cost effective.


One down, one million to go.

There is no way these horses can be put back into the barn; there are too many of them. As long as consumers make their decision based on price, there is every incentive for the manufacturers to continue cutting corners - the ones that put extra work into security will be at a disadvantage compared to those playing fast and loose.

Can we talk about BGP flowspec instead? Filtering offensive traffic early and often can end DDoS once and for all.


>Can we talk about BGP flowspec instead? Filtering offensive traffic early and often can end DDoS once and for all.

What about spoofing? Until broadband providers get serious about BCP 38, this is just cat and mouse.


Is BCP 38 difficult to implement? It seems like something most edge routers should already support.
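The core of BCP 38 is simple source-address validation: don't forward packets whose source address couldn't legitimately originate on that interface. As a sketch on a Linux edge router (the interface name and prefix are placeholders; real deployments typically enable uRPF on the router instead of hand-writing rules):

```shell
# Drop anything leaving via the WAN whose source isn't our own prefix:
iptables -A FORWARD -o wan0 ! -s 198.51.100.0/24 -j DROP
```

So the mechanism is trivial; the hard part is getting every edge network to actually deploy it.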


Yep, let's include that as well!


And at the same time, the company recalling those products is issuing threats against anyone who is defaming their "goodwill": https://news.ycombinator.com/item?id=12778954


This article doesn't mention the brand names of cameras manufactured by Hangzhou Xiongmai. Anyone know any?


It appears that on Amazon they're sold under the name "XM".

https://www.amazon.com/Surveillance-Infrared-Recording-Wirel...

If you filter by that seller there, you can see they make dashcams and webcams.


That's been my big complaint about the reporting on the DDOS attacks. The specifics of the devices should be listed.


Was it possible to take control of these cameras even if they were behind a consumer firewall? Is the issue that consumers were connecting them directly to the Internet, not behind a firewall?


Would it not make sense for broadband router manufacturers to step up here, especially ISPs who provide routers to customers?

First of all, IoT devices really need to be connected on isolated VLANs with very strictly controlled WAN capabilities. Obviously this already exists, but not in a fashion that a layman who wants to put their fridge on the wifi will understand. The average home router needs cleaner interfaces and clearer abstractions rather than the cruft that exists now.

Does your fridge really need to access the internet, and if it does, perhaps you could set up your router to only allow access at certain times, to a single host, and with circuit-breaker protections in case traffic has a signature that matches that of a DDoS attack. This circuit-breaker pattern could be extended to all traffic running through the router, providing the user with reports of potentially infected devices and traffic-hungry users.
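As a sketch of what such a default policy could look like on a Linux-based router (every interface name and address below is made up for illustration): the fridge on the IoT VLAN may reach only its vendor's endpoint over HTTPS, and everything else originating on that VLAN is dropped.

```shell
# Allow the fridge to talk to its (hypothetical) vendor host on 443 only:
iptables -A FORWARD -i vlan50 -s 192.168.50.10 -d 203.0.113.7 -p tcp --dport 443 -j ACCEPT
# Everything else from the IoT VLAN goes nowhere:
iptables -A FORWARD -i vlan50 -j DROP
```

The hard part isn't the rules; it's presenting this to a layman as "let my fridge talk to its maker, and nothing else".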


As someone who was a VLSI design engineer for 4 years and who also has extensive software experience, I'd say much of the disruption happened because of poor engineering. There should always be redundancies in critical systems. Companies such as Twitter, Reddit and Spotify did not use redundant DNS providers, relying only on Dyn. Moreover, DNS should be designed so that the systems are more resilient to attacks. The initial design of the internet was meant to withstand nuclear attacks, after all.

There is absolutely no way that we can protect all devices on the internet from becoming bots. Likewise, it is almost pointless to blame hackers when, in most cases, the hacks stemmed from breaches caused by failure to update software, to put in proper security software, and to hire top-level consultants to implement secure systems.

Put another way, we can't possibly jail everyone who would want to steal money. That is why we use safes.


Fair point, but how do you decide how much redundancy is excessive? I can always construct a scenario which will require more redundancy and more cost.


It would depend on how critical the element is to the system. I would expect at least one backup system; that is, there should be at least one major backup DNS provider, for example. Some systems on airliners have two backups.


A significant number of these cameras were bought on AliExpress and eBay. How are they going to do the recall when they don't even know the end customer?


AliExpress and eBay don't ship the products; the sellers do, and they know who the buyers are, at least their name and address.


OK, when I said "they" I meant the manufacturer(s). Look at Samsung and their inability to do a proper recall for the Note 7. A lot of the stores on AliExpress disappear after one year of existence. I think the proper thing to do immediately is to reverse Mirai and force a password reset on the affected devices.


It's disappointing the article doesn't give any actionable detail of the recall. From what I can see, Hangzhou Xiongmai is a components manufacturer, not a retail brand, so there's no practical way to identify an affected device with the information here.


So by subsidizing each webcam by a dollar or two, China is able to deploy millions of pieces of hardware to the U.S. that can be used to map and destroy our infrastructure for a total cost of a few hundred thousand dollars. If done purposefully, this has to be one of the most efficient military spends in history.


While at least theoretically possible, I think it fails Occam's Razor. Anyone who has worked in this industry in an even remotely security-sensitive area knows this is currently the state of the art, and preventing this from happening requires a major effort. The default for an organization is to throw together a feature as quickly as possible, develop for engineer convenience (which is the cheapest thing to do), and then throw it out there with no plans to support it in the future unless they can show revenue resulting from that. We don't need to hypothesize a conspiracy to produce that result when it would take a very powerful conspiracy to prevent that from happening.

Now, if you can show me that the same manufacturers make cameras for China's internal use that are fully secured, that would make me change my mind. But based on the occasional "trips to Chinese tech bazaar" reports we see here sometimes, I'd be surprised.

(On a similar note, in my political blog reading I saw a lot of paranoia that the DDoSes last week were somehow motivated by name-your-choice of political actor. However, per what we are discussing right now, we don't need to hypothesize that. It is completely plausible that criminals have come into possession of that much power. The trendlines have been clear for a while; we've been getting DDoSes ~25% larger than the largest one ever recorded about every ~three months for a long time now, and most of them do not seem to be state actors. Just because they're out to get you doesn't mean that it may not be paranoia, to invert a famous quote.)


I initially chuckled at this comment.


Sounds a little paranoid to me. These insecure devices are just being pushed out into the wild. It doesn't matter if they are imported into the US. They can be used to attack Chinese infrastructure just as easily.


A lot of malware and ransomware is hardcoded to ignore Russian IPs, per Brian Krebs. The malware runs, does a quick check to see its public IP, and if it's Russian it exits. A list of China's IPs could also be put into these devices.


Unless I am uninformed, I don't think it's the case that any of these IoT devices come with malware preinstalled. They are just vulnerable because of identical weak default passwords. The fear is about these devices being exploited, not that they are malicious themselves. My argument still stands: you could just as easily load malware which ignores American IPs.


In the short term. Such a strategy, if proven, would be devastating to China.


It can never be proven, and that's the ingenuity of it; meanwhile the company makes a token PR effort to fix the issue while still leaving millions of weaponisable devices deployed. It's not the first time China has been 'caught' doing something like this, either: https://intelligence.house.gov/sites/intelligence.house.gov/...


And most amazingly, it’s a strategy the NSA devised and used in the first place, now employed against them.

Poetic justice.


You forgot the part where people are voluntarily buying those, because they cost less. Of course those made in California would be totally safe, but damn shame about that globalism.


That's the point, though - a government could indirectly subsidize a product to increase its adoption, exploiting the fact that "people are voluntarily buying those, because they cost less".


They "could" also do the same for the good old capitalistic result of more devices sold, but without any backdoors.


> Of course those made in California would be totally safe

Maybe I'm too cynical, but from what I know about the NSA and other three letter US Government agencies, I beg to differ.

Additionally, good luck finding any type of electronic device that is entirely made in the USA. I honestly can't think of anything with a printed circuit board that is assembled in the USA from 100% American-made parts.

Then of course there's always the whole engineers taking shortcuts to meet unrealistic deadlines thing. Somehow network security tends to be very low on the list of priorities when you have to make a profit.


This has nothing to do with governments, it's freaking hard to build secure software. There'd be bugs in American software just like there are bugs in the foreign software.


I probably forgot to add "/s" at the end of that sentence.


I made that same point the other day too, it's ingenious!


I think I see an opportunity for something new here.

Why isn't there an uber-secure OS written in a high-level language that would prevent easy privilege escalations, vulnerabilities caused by buffer overruns, etc.?

It would be nice to have a standard security approving body (like FCC) that gives out graded standards.

It is like every generation forgets the mistakes of the past and repeats them. When a $5 Raspberry Pi is powerful enough to run a desktop OS, I see no reason not to adopt a high-level language that prevents basic security violations at the root.
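As a toy illustration of that point (Python standing in for any memory-safe language; the buffer and index are made up):

```python
# Toy illustration (not from any real OS): in a memory-safe language an
# out-of-bounds read raises an error instead of leaking adjacent memory.
buf = bytearray(b"admin123")  # pretend 8-byte credential buffer

try:
    leaked = buf[64]  # past the end: undefined behaviour in C, an error here
except IndexError:
    leaked = None  # the runtime refuses; nothing is disclosed

print(leaked)  # -> None
```

The same access in C would silently read whatever sits 64 bytes past the buffer, which is exactly the class of bug behind many remote exploits.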


I think this method requires too much effort and attention to detail to be realistic, although I do agree this would be an ideal solution. It makes more practical and economic sense to put an electronic network "condom" around a dirty, likely misconfigured and insecure IoT device.


Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including United States Department of Defense–style mandatory access controls


I think the real problem here lies in the lack of auto-update. Devices will always have vulnerabilities, constantly being discovered, which, once weaponized, will make owning a large swath of devices just as trivial as telnetting for root was here.

The UCC provides an implied warranty for suitability for intended purpose. The FTC defines unfair and deceptive business practices to require a baseline level of security commensurate with the sensitivity of the data that could be exposed or the potential damages that can be inflicted. As in most cases, new laws tend to make things worse; we just need to do better with the old laws.

It's not that it's illegal to sell someone a Io(S)T device that can be owned for running a DDoS, but I am willing to bet it is illegal or at least creates significant liability for the manufacturer to sell such a device that also has no way to be fixed after that flaw is discovered.

What would be nice is a simple industry standard labeling that indicates a device has auto-update functionality, along with a large numeral indicating the number of years from date of purchase that updates will be provided. The same decal could be used on computers, phones, routers, and IoT.

Just like we trained consumers to look for the WiFi Alliance logo to know the router or card they are buying will "just work" I think we are missing a label which would drive consumer confidence and encourage good behavior by the manufacturers.

Probably an industry consortium already exists for something like this, but I just haven't heard of it... Because there's no such thing as a new idea, right?


Wow, with more and more crap entering the market and more and more people connecting things, it seems like in the future it would be nearly impossible to prevent these types of attacks.

Who could you hold responsible? The user for not setting a password or the manufacturer for accidentally creating a backdoor? Neither is really reasonable nor feasible. Filtering the attack is also extremely hard due to the scale of distribution.

Will this lead to a more locked down internet?


Hopefully it will lead to some regulation of these devices. That's a dirty word to many, but the fact is that Internet-connected devices are part of a global community, and need to behave safely, and being open to hijacking by criminals is not safe.

The proposal I like best is that the industry should get out in front of this, and build a self-regulator organization now which issues recommendations and certifications of Internet-connected products. Then governments could simply require compliance with the industry norms, established and vetted by the industry, and we can keep political institutions out of micromanaging the electronics industry.

The model to follow here is Underwriters Laboratory, which sets standards and grants certifications for electrical and industrial supplies and equipment. Then, for example, city governments can just specify that everyone installing, say, outdoor lighting at their home must purchase lights and outlets rated for wet outdoor usage by the UL.


Egress filtering, or filtering packets not of the source network, would go a long way here. It wouldn't fix everything, but it would be a definite start.
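A sketch of the egress-filtering idea (often called BCP 38); the prefix and function name here are hypothetical:

```python
import ipaddress

# Hypothetical edge network owning 203.0.113.0/24: egress filtering
# (BCP 38) drops any outbound packet claiming a foreign source address,
# so compromised devices inside can't spoof addresses in their attacks.
NETWORK = ipaddress.ip_network("203.0.113.0/24")

def allow_egress(src_ip: str) -> bool:
    """Permit an outbound packet only if its source address is ours."""
    return ipaddress.ip_address(src_ip) in NETWORK

print(allow_egress("203.0.113.42"))  # -> True: legitimate source
print(allow_egress("198.51.100.9"))  # -> False: spoofed, drop it
```

It doesn't stop non-spoofed floods like Mirai's, but it kills the amplification/reflection attacks that depend on forged source addresses.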


The manufacturer, obviously, just like any other defective product. Devices don't need to be perfect, but if companies are going to say "we're going to sell a potentially imperfect thing," they need to either have a mechanism to update it to address defects, or they need to prepare to issue recalls. As for the password: reputable companies now set a random password and print it on a sticker on the device or a slip of paper inside the box. Having a global default is broken by design; we know users won't change it.
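Generating such a per-device password is cheap at manufacturing time; a minimal sketch (alphabet and length are arbitrary illustrative choices):

```python
import secrets
import string

# Generate a unique random password per device at manufacturing time,
# to be printed on the sticker -- leaving no global default to scan for.
# (Alphabet and length are arbitrary illustrative choices.)
ALPHABET = string.ascii_letters + string.digits

def device_password(length: int = 12) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(device_password())  # different for every unit off the line
```

A 12-character alphanumeric password has ~62^12 possibilities, so a Mirai-style dictionary of a few dozen defaults finds nothing.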


It is like blaming a knife manufacturer for murder. In the case of these cameras, the default password was not a defect, and anyone with common sense could change it. Obviously a lot of people didn't do that, so that is their fault. Maybe manufacturers should put a red sticker warning about the potential consequences of not changing the password, just like a knife maker could put a red sticker saying you should be careful because you can cut yourself...


It's generally recommended best practice to force the user to change the password on setup, and/or to ship with a randomized password. At some point not making your product conforming to established best practices means you share some of the responsibility. (And customers have more lee-way in being uneducated than manufacturers).

One would hope best practices don't have to be turned into laws (because they quickly become outdated and are easily misapplied), but there is going to be a push to do that.


In this case the password is hard coded in the firmware (and the web interface isn't aware of the telnet interface). They actually designed something where the password literally cannot be changed by the user :-/


Oh...


> but there is going to be a push to do that.

That's obvious - nobody wants to be blamed for their own ignorance, and if at the same time you can make money on it, the law is just a matter of time, especially in a left-wing climate supporting bureaucracy and over-regulation, eventually making people more ignorant and irresponsible. Vicious circle.


From what I read, part of the issue here is that these specific cameras cannot have their password changed, at least not through the normal web admin UI. So they're shipping a default password that people cannot update.


Though I'm happy we got top billing in the headline, Reddit wasn't actually impacted directly (though of course many of the sites we linked to were).


You guys are crazy. A probably small creative team simply made an error while building a camera in good faith that was abused by criminals. Who is the victim here?

What's all this crap about simple fun web cameras as "Bad Actors"... In fact the use of the term "Actor" as applied to a pretty dumb piece of electronics is pretty creepy in itself. What are we trying to do here?

The term "State Actor" is fairly new in terms of popular usage. Thanks to CNN and media, we are being trained to know this word in a particular context.

Now, we are being trained to place dumb pieces of electronics in the same bucket as Russia and China. LOL

I'm sure there wasn't a meeting where the camera manufacturer execs sat around a table and said let's make these things blow up the world.

And you know, even if there was, the shame lies in the fact that we don't have better edge-level security that can detect and shut down abnormal traffic patterns close to the source.

There is some fairly low hanging fruit here. Routers and gateways with pretty damn simple algorithms could detect and prevent these types of attacks if they were available.

The network should protect itself against "Bad Actors", because... trillions of devices are a-coming, and we can't expect them all to be certified to protect the network. The concept itself is completely absurd.

Far better to improve the infrastructure than to impose per-device-level policies. It's the IETF that needs to step up, not the guys in a garage who couldn't code.

Sure maybe they could have done a better job, but from the level of programming we are currently at it is an absolute certainty that this will happen again whether we like it or not.
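For what a "pretty damn simple algorithm" at the edge could look like, here is a toy rate-based detector (the threshold, window, and class name are all invented for illustration; real detection is considerably subtler):

```python
import time

# Toy edge-router heuristic: flag a LAN device whose outbound packet
# rate exceeds a threshold within a sliding window.
class RateWatch:
    def __init__(self, max_pkts: int, window: float = 1.0):
        self.max_pkts = max_pkts
        self.window = window
        self.counts = {}  # source ip -> (window_start, packet_count)

    def packet(self, src_ip: str, now: float = None) -> bool:
        """Record one outbound packet; True means the device looks abusive."""
        if now is None:
            now = time.monotonic()
        start, count = self.counts.get(src_ip, (now, 0))
        if now - start > self.window:
            start, count = now, 0  # window expired: start counting afresh
        count += 1
        self.counts[src_ip] = (start, count)
        return count > self.max_pkts

watch = RateWatch(max_pkts=100)
# 101 packets within one second from the same device trips the alarm:
flags = [watch.packet("192.168.1.50", now=0.5) for _ in range(101)]
print(flags[0], flags[-1])  # -> False True
```

The hard part in practice is picking thresholds that catch a DDoS node without flagging, say, a legitimate video upload, which is why this remains a heuristic rather than a solved problem.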


> A probably small creative team simply made an error while building a camera in good faith that was abused by criminals. Who is the victim here?

Everyone who's been affected by the DDoS. Manufacturers are responsible for the security of their devices. If you ship something that's vulnerable, it's on you to fix it. If there are damages, those come out of your pocket.

> It's the IETF that needs to step up. Not the guys in a garage who couldn't code.

No. If you're a guy in a garage who can't code, you shouldn't be writing code, let alone shipping devices.

Are you for real, or some kind of astroturf account?


> Routers and gateways with pretty damn simple algorithms could detect and prevent these types of attacks

I am not sure it's that simple, really. The whole point of DDoS is that it's distributed, so each node doesn't need to generate a whole lot of traffic (and filtering out the traffic on the other side is really hard because of the distributed nature of it).

The main reason this stuff happens is that security is not a feature most people care about, and they won't pay for it because they are ignorant of how the system works. The managers decide not to make security a priority.


I'm with you here.

I can't understand why people are calling for regulations on IoT devices. Instead, how about ISP-supplied modems/routers having sensible defaults (and having the admin password reset)?

Much more cost effective.

Regulating IoT is going to be impossible when tiny little devices can be shipped undeclared. Even if it could be policed effectively it would destroy the economics of some of these cheaper devices.


I've been concerned about the security of IoT devices for a while as the low cost devices generally do have security as an after-thought.

However, these being used for a DDoS attack puts a spotlight on the issue. While I don't know the solution, I feel it will become harder for manufacturers to shrug this off.


So... if a company does all due diligence to perform a recall, but very few people actually send back their five-dollar web camera, is the company pretty much off the hook if a customer's kept webcam gets hacked and is used to brick a nun's IoT pacemaker?


I would imagine they are off the hook for liability if they issue a recall, and I would also imagine probably fewer than 5% of devices will actually be returned. How many end users of these products have any idea that they were part of this attack, and how many will care enough about some cheap Chinese electronic device to send it back and wait for a refund/replacement?


Exactly. What percent of users are willing to go through the recall effort, especially for cameras mounted outdoors, and hardwired via PoE? Not to mention shipping (overseas?), and being without security for weeks.


So, do there exist any IP cameras that are simple, secure and don't open a crazy number of ports for bizarre, unnecessary protocols whilst including a steaming pile of PHP (or similar) to provide a buggy, over-engineered, exploit-ridden web interface?

I could do with some for a farm project at the moment, but as far as I can tell, they're uniformly awful. Are there any that are reflashable with something more respectable, even?

Ideally what I want is a video stream over TCP with power-over-ethernet support - and no other services.
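The "one service, nothing else" design the parent describes could be as small as this sketch (the framing protocol and names are made up; a real camera would read frames from its sensor):

```python
import socket

# Sketch of the single-purpose idea: the device exposes one TCP port and
# pushes length-prefixed frames -- no web UI, no telnet, no UPnP.
def stream_frames(conn: socket.socket, frames) -> None:
    """Send each frame as a 4-byte big-endian length followed by the data."""
    for frame in frames:
        conn.sendall(len(frame).to_bytes(4, "big") + frame)

# Example over a local socket pair standing in for a network connection:
sender, receiver = socket.socketpair()
stream_frames(sender, [b"frame-one", b"frame-two"])  # fake frame data
sender.close()
header = receiver.recv(4)
print(int.from_bytes(header, "big"))  # -> 9 (length of b"frame-one")
```

The security win is that the attack surface shrinks to one hand-auditable code path instead of a whole web stack.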


I'm surprised we haven't seen anything exploiting RomPager yet. It is a very widely used embedded web server in home routers.

https://www.shodan.io/search?query=rompager http://www.pcworld.com/article/2861232/vulnerability-in-embe...


Apple made a good call with its "strict" requirements for HomeKit devices.

http://www.forbes.com/sites/aarontilley/2015/07/21/whats-the...


It's easy to be secure when virtually nobody has your devices...


I'm not sure what point you're making. Which devices are those?


My point was simply that HomeKit has been incredibly slow to gain any adoption and is pretty limited in terms of who can make things (the MFi program), what hardware needs to be in your product (special auth chips), etc. As a general strategy, if your IoT strategy requires device makers to include special chips and use specific factories, it's a rather closed way to approach the market.


MFi isn't too limited. You just apply and meet certain standards. The benefit is you have access to a huge market (iOS users).

What's wrong with this approach? Imagine the PR disaster if this DDoS attack was caused by HomeKit devices.

As a potential future user of HomeKit it's reassuring to know security is a real concern here. I'm glad I won't have to probe the device to check it isn't running a telnet server with no root password, for example.

When we're talking about an internet-connected camera or a front door lock, yeah I'm going to want high standards for security. If that slows down HK adoption so be it. If I wanted a convenient-but-insecure lock compatible with my existing devices today I'd just leave my door unlocked.


I think this perspective conflates good security and bad security with a single approach.

"You must just apply and meet certain standards" - your factory also needs to apply and meet standards, not just you. We work with a fantastic factory that builds high-quality products (numerous baby and toy products) and is large (>45K employees). They aren't MFi certified (it's not just meeting standards; it's an application process that costs time and money).

> The benefit is you have access to a huge market (iOS users).

We already have access to this market. The main thing is slapping a little HomeKit badge on the packaging and slightly tighter integration with Siri.

Agreed with you that nobody wants to be at fault for taking down the internet due to bad security on their devices, but it's a bit misleading to suggest that Apple's approach is a good way to do it.

What's fundamentally wrong with it is the cost it imposes on companies making something compatible with their ecosystem. I don't want to add a few dollars to my BOM just to further help their ecosystem. I also don't like the closedness, but I understand that is Apple's general approach. I want to have open APIs and cloud integrations. Radio/hardware-level integrations are fine, but given the giant mess that is IoT radio standards, I would rather just integrate via HTTPS.

For perspective on how this makes it down market:

Let's say I want to make a Thread-compatible device and a HomeKit-compatible device. I have now likely added 4-8 USD to my BOM. Typical multipliers from BOM to retail are 3-5X or more, so we could have just added 32 USD to our price. Or we could have just done a cloud integration and used the WiFi or Bluetooth chipset we were going to use anyway...


I agree it's not a complete security solution, but it's certainly a good baseline if nothing else.

>The main thing is slapping a little homekit badge on the packaging and slightly tigher integration with siri.

It's integration into the entire HomeKit platform including the new Home app across multiple devices.

Thanks for the in depth numbers. Personally I'd pay an extra $20 for something HomeKit compatible, especially if I'm paying $150+ anyway. I've been looking at some devices lately and haven't even considered anything which doesn't integrate with HomeKit.


Laws won't fix this, recalls won't fix this, what we need to do is find a technical solution to the DDoS problem.

You can't un-ring this bell, and it might actually be harmful to try. A free and open Internet is more important than DDoS attacks.


They're not webcams, technically; they're IP surveillance cameras which run a customized Linux with some scripts and an RTSP server, and they're definitely not designed for public IPs.


If they are connected to the internet, why not push out an update to fix the default password issue?

A hardware recall seems silly when it's clearly a software issue. Unless they didn't include any firmware-updating system... which is likely the elephant in the room not being addressed with most insecure IoT devices. Android faced this problem as well and has recently made progress addressing it, although a lot of phone companies get in the way and manufacturers have very short support lifespans.


This. We build IoT devices and frankly, we are late shipping while we are doing heavy duty testing on FOTA. Happy to ship late to avoid being part of a botnet and to make sure we can improve our products over time. Depending on the complexity of the product, FOTA can be rather complex and I don't expect budget device makers that aren't particularly branded to bother.


What a stupid, inaccurate article title. Shame on the Beeb.

Edit: Better? "Webcams used to attack a DNS provider recalled."


Is the denial of service attack on DNS servers still going on? Major sites are still much slower than last week.


Good for them! A recall is the responsible option at this point, and I hope other manufacturers do the same.


How many devices are recalled? What percentage of the DDoS were they responsible for? Not enough information.


What about the routers and other cheap devices? This is a drop in the bucket. Until regulations (with strong enforcement) requiring device security in order to sell in countries like the US, UK, EU, etc. happen, this will continue to be a problem.


Does anyone know if Foscams were affected at all? I unplugged mine to be safe.


This is extremely disconcerting


I wonder if when you hit show parent (yesterday) it wouldn't load...


There needs to be a regulatory body, similar to the NHTSA, that regulates the security of digital products.


Be careful what you ask for. How is any software nowadays different?

If I set up a Raspberry Pi with a camera and there are exploitable bugs in the software, is the creator of the software liable? Does every piece of software, open source and closed source, need to be approved by a regulatory body before you're allowed to run it? Distribute it? Put it on GitHub?

There's nothing unique about IoT devices. They're just computers. If you regulate one you arguably have to regulate the other


I'm not saying IoT devices specifically. We can define the regulation threshold at mass consumer digital and connected products.

Yes, digital/software is inherently less tangible and trickier to regulate than a physical product sold at smaller volumes, but we need to start somewhere. Poorly made digital+connected products distributed in large volumes can have a tremendous impact on the world population.


Or perhaps just the equivalent of Benson Leung from Google, who tests and rates USB-C cables on Amazon.


A recall where you could just upgrade the firmware?


This is not always possible via remote methods; doubly so after the devices are hacked and cut off from the previous open methods manufacturers might have used which allowed the attacks in the first place.


Can you still do so reliably on a compromised device?


Yes. The Mirai software runs in memory. You can clear the infection just by rebooting the device. But if it's connected to the internet, it will be reinfected again within minutes, unless you change the admin password.
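Since the reinfection vector is an open telnet port with default credentials, one can at least audit for exposure locally; a sketch (the camera's address is hypothetical):

```python
import socket

# Quick local audit (address is hypothetical): if the camera's telnet
# port accepts connections, the device remains exposed to reinfection.
def telnet_open(host: str, port: int = 23, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the given port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if telnet_open("192.168.1.50"):
    print("telnet reachable -- device can be reinfected within minutes")
```

For the Xiongmai devices this still isn't a fix, since the telnet credentials are hardcoded; the only real mitigation is keeping them off the open internet.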


You mean, one of the known botnets runs only in memory.

We've no idea if there are others out there who attack these devices and subvert the firmware too.


Sure, and even if there isn't one today, it's probably just a matter of time before nastier stuff gets out there.


[flagged]


We detached this flagged subthread from https://news.ycombinator.com/item?id=12778965 and marked it off-topic.


Quoting from reporting on the topic:

> “The issue with these particular devices is that a user cannot feasibly change this password,” Flashpoint’s Zach Wikholm told KrebsOnSecurity. “The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist.”

So yeah, this isn't about the customers.

Even if this was just a matter of the user controlled credentials, shipping all products with the same default is a terrible idea.

Blaming the customer here is totally absurd. Your stereotyped expression of contempt is entirely unnecessary.


OK, with this information we're now in corporate negligence territory. Surely a read-only, non-variable (I can't come up with a decent antonym for unique) value can't possibly be sold as a 'password'?


People at the age of 11 or 12 are generally not intimidated by things they don't understand, because they have not learned that behaviour.


> generally not intimidated by things they don't understand

Lots of educators would disagree with you on that one.


You forgot the other part:

> because they have not learned that behaviour

Children can pick up all sorts of behaviour from their parents, albeit I have only anecdotal evidence, but:

I have met many, many people of the ages 11 to 12, and the majority were intensely curious. In all the cases where they were not curious, their parents (or tutors, etc.) either did not think learning was a joy, actively discouraged their curiosity, or scorned them for asking questions (especially about things that were 'unrelated'). In other words, the young people were taught that it was a bad thing to be curious or otherwise learn about things, and especially a bad thing to learn in a non-prescribed way.


>Is it really, really too much to expect from any consumer to take 5 minutes to set up a device?

yup


We need a new acronym: IoCT = the Internet of Crap Things.


I much prefer InternetOfShit (https://twitter.com/internetofshit)


I'm designing a weapon of mass disruption: IoT confetti. Bow to my demands or I will send you an exploding yet tasteful confetti card with 50,000 confetti all trying to access your wifi network.


Webcams used to attack Reddit and Twitter recalled

I think it's kind of troubling that "the vast variety of information services that comprise the internet" apparently means "Reddit, Twitter, and Facebook" to laymen now.


The title was just a really terrible summation of the attack. TONS of websites were affected, some huge like Reddit and Twitter, and some tiny ones.


> "Security issues are a problem facing all mankind," it said. "Since industry giants have experienced them, Xiongmai is not afraid to experience them once, too."

The courage to try something new. Oh wait it isn't new, this is being a copycat.


It's probably just bad English but it really sounds like, "everyone gets hacked, we don't care".


The company issued the recall, it could be hugely expensive for them and is the right thing to do. Looks like they do care to me!


The courage to own up to your mistakes without being forced to by a judge seems like an entirely new business concept to me.


> If your webcam is hijacked you have effectively let an intruder enter your home

Except it's not. They can't touch me, hurt me physically, take things away. All they can do is see me.

So what if they see some guy in front of a computer? I'm more worried about hacks that can take over my keyboard and access financial data. But even then it's the bank's problem, and insurance will take care of it.


If they can see some guy in front of a computer, can't they see some guy in front of a computer typing a password into a financial website?


So there must be a law:

1. Any device with internet access must be able to automatically update all its software.

2. Because a manufacturer can either go out of business (some devices are used for >10 years) or stop caring about its users, all its software must be open source. Any key that needs to stay secret can be stored at the hardware level.

3. But software being open source doesn't mean there will be people to fix the bugs, so there should be a list of OSes approved (by a regulator and the EFF?) for use on IoT devices. The development of these OSes should be public (on GitHub, etc.). With ~10 different OSes instead of a million, fixing bugs would be feasible and much easier.

4. By having such a list of approved OSes, we also solve the problem of vulnerabilities in the update process, e.g. missing signatures, or using RSA-1024 or even RSA-512 for signatures.

5. By having such a list of approved OSes, it'll be easy to maintain a live kernel patching service (in the future it'll be hard to imagine an OS without one).

6. By having such a list of approved OSes, the community would quickly fix the problem of default passwords.

Without such a law, expect 10 Tbit/s attacks within a year, and >500 Tbit/s attacks by 2022 (if the popularity of IoT grows as fast as mobile phones did).
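To illustrate point 4, a toy update-verification sketch; real firmware signing would use public-key signatures (e.g. RSA or Ed25519), but an HMAC from the stdlib stands in for the verify step here, and all names are invented:

```python
import hashlib
import hmac

# Toy firmware-update check: the device refuses any image whose tag
# doesn't verify against a key provisioned at manufacture.
# (HMAC stands in for a real public-key signature scheme.)
DEVICE_KEY = b"key-stored-in-hardware"  # hypothetical

def sign_update(firmware: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_update(firmware), tag)

image = b"firmware v2.0"
tag = sign_update(image)
print(verify_update(image, tag))              # -> True: safe to install
print(verify_update(b"tampered image", tag))  # -> False: reject
```

With a shared symmetric key, anyone who extracts the key from one device can sign updates for all of them, which is exactly why real schemes put only a public verification key on the device.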



