Ask HN: I’m an FCC Commissioner proposing regulation of IoT security updates
3387 points by SimingtonFCC on Sept 5, 2023 | 925 comments
Hi everyone, I’m FCC Commissioner Nathan Simington, and I’m here to discuss security updates for IoT devices and how you can make a difference by filing comments with the FCC.

As you know, serious vulnerabilities are common in IoT, and it often takes too long for these to be patched on end-user devices—if the manufacturer even bothers to release an update, and if the device was even designed to receive them. Companies may stop supporting a device well before consumers have stopped using it. The support period is often not communicated at the time of sale. And sometimes the end of support is not even announced, leaving even informed users unsure whether their devices are still safe.

I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time [1]. I can't bring such a proposal to a vote since I’m not the chairman of the agency. But I was able to convince my colleagues to tentatively support something a little more moderate addressing this problem.

The FCC recently issued a Notice of Proposed Rulemaking [2] for a cybersecurity labeling program for connected devices. If they meet certain criteria for the security of their product, manufacturers can put an FCC cybersecurity label on it. I fought hard for one of these criteria to be the disclosure of how long the product will receive security updates. I hope that, besides arming consumers with better information, the commitments on this label (including the support period) will be legally enforceable in contract and tort lawsuits and under other laws. You can see my full statement here [3].

But it’s too early to declare victory. Many manufacturers oppose making any commitments about security updates, even voluntary ones. These manufacturers are heavily engaged at the FCC and represented by sophisticated regulatory lawyers. The FCC and White House are not likely to take a strong stand if they only hear the device manufacturer's side of the story.

In short, they need to hear from you. You have experienced insecure protocols, exposed private keys, and other atrocious security. You have seen these problems persist despite ample warning. People ask, ‘why aren’t there rules about these things?’ This is your chance to get on the record and tell us what you think the rules should be. If infosec doesn’t make this an issue, the general public will continue falsely assuming that everything is fine. But if you get on the record and the government fails to act, the evidence of this failure will be all over the Internet forever.

If you want to influence the process, you have until September 25th, 2023 (midnight ET) to file comments in the rulemaking proceeding.[4] Filing is easy: go to https://www.fcc.gov/ecfs/search/docket-detail/23-239 and click to file either an ‘express’ comment (type into a textbox) or a ‘standard’ comment (upload a PDF). Either way, the FCC is required to consider your arguments. All options are on the table, so don’t hold back, but do make your arguments as clear as possible, so even lawyers can understand them. If you have a qualification (line of work, special degree, years of experience, etc.) that would bolster the credibility of your official comment, be sure to mention that, but the only necessary qualification is being an interested member of the public.

I’m here to listen and learn. AMA. Feel free to ask any questions about this or related issues, and I’ll answer as many as I can. I just ask that we try to stay on the topic of security. My legal advisor, Marco Peraza, a security-focused software engineer turned cybersecurity lawyer, will be answering questions too. I’m open to incorporating your ideas (and even being convinced I’m wrong), and I hope that my colleagues at the FCC are as well. Thank you!

Edit: The Q&A is over now, but please keep this great discussion going without us. Thanks again everyone for your input. Don't forget to file comments if you want to make sure your arguments get considered by the full FCC.

[1] https://www.fcc.gov/document/simington-calls-mandatory-secur...

[2] https://www.fcc.gov/document/fcc-proposes-cybersecurity-labe...

[3] https://www.fcc.gov/document/fcc-proposes-cybersecurity-labe...

[4] If your comments are purely in response to arguments made in other comments, you have an extra 15 days, until October 10, 2023.




Thank you so much everyone for the interesting, high-quality discussion so far. My team and I are looking forward to continuing to engage with you for at least a few more hours.

Just a reminder: As fun as discussing this in here with you is, the best way to influence what the FCC ends up doing is to file an official comment by September 25th at https://www.fcc.gov/ecfs/search/docket-detail/23-239 . Click to file either an ‘express’ comment (type into a textbox) or a ‘standard’ comment (upload a PDF). The FCC is required to address your arguments when it issues its final rules. All options are on the table, so don’t hold back, but do make your arguments as clear as possible so even lawyers can understand them. If you have a qualification (line of work, special degree, years of experience, etc.) that would bolster the credibility of your official comment, be sure to mention that, but the only necessary qualification is being an interested member of the public.

Finally, I'd like to extend a special thanks to dang and the rest of the HN team for their help putting this together. They have been a pleasure to work with.




There are some great recommendations in this thread, but I just want to thank you for engaging with this community to solicit opinions from the trenches.

This is really meaningful to most of us who see the regulations in our lives as something far away that we can’t influence.

Another reminder for everyone that while you likely can’t influence something like a presidential election on your own, you can influence many other spheres with your knowledge and time that are closer to home and probably affect you more immediately.


Thanks! I am thrilled that so many people are participating. The FCC is going to need a lot of this community's input over the next few years as more and more devices go online.


This may be beyond the FCC's purview, but given some of the comments (e.g., https://news.ycombinator.com/item?id=37393644) perhaps an entirely different strategy is warranted.

Instead of trying to compel manufacturers, who may no longer even exist, to support their old products; perhaps the government should focus on protecting consumers and aftermarket vendors who update / modify / reverse-engineer older revisions--especially after they're no longer meaningfully supported by the manufacturer.


There is an overlap with the right-to-repair topic. It does not make sense to have the DMCA hanging over your head when you are reverse engineering a product that has been abandoned by the manufacturer, be it due to end of life or bankruptcy, to name two reasons among many.


Also, security researchers should have strong legal protections; they should be given the benefit of the doubt at every turn.

Currently, researchers are sometimes threatened with decades in prison for testing the security of websites or devices. If they act in good faith as researchers, this should never happen.

This is literally a national security issue. We currently stifle security research on essential IoT devices primarily so companies can avoid being embarrassed by their own poor security.


This might be an unpopular opinion but I respectfully do not see it that way. I agree with promoting security for IoT devices, but there needs to be consent from the company being probed for vulnerabilities or else I find it hard to consider it legitimate research, regardless of intent.

I don't think anyone would like it very much if someone came to their house and documented all the ways to rob it they could find, even if it's for research purposes. There is an inherent risk of your vulnerabilities being broadcast somewhere, either on purpose or accidentally, once that information is collected and organized by the researcher.

It isn't harmless and innocent to probe anything for weaknesses unsolicited. It is reasonable to respond to that as a threat. It is genuinely threatening behavior.

Now I do understand it gets complicated when it's a business being trusted with sensitive information / access to devices in your home. I am just saying that we need to keep possibly threatening behavior in mind and avoid promoting it as part of the solution unless there is really no other way (imo)


The problem here is that the thing I am probing is something I own: the device in my house that I ostensibly purchased and am allowed to smash with a hammer or put in a blender for all anyone should care. The context is that the DMCA is often used by companies to claim that DRM on the device is there to protect copyrights (whether music the device had access to, even if that isn't the reason many or even most people buy the device, such as a smart fridge with a speaker in it and the option to log in to Spotify, or the firmware itself), and that it is thereby illegal for me to distribute tools to help people repair a device they own. That distribution is the key thing here: there actually are already some legal protections for the act of "probing", but you kind of have to do it alone, which is insane. Finding vulnerabilities in a device I own should be about me and my trade-offs, not the wishes of a manufacturer.


I hate it too, but the heart of this is that ownership is under question.

People should not have agreed to buy things where there are parts of it they don't own that they don't even need, but they did. They did it a lot because it didn't matter to them and now those devices are prevalent everywhere and it's a PITA to try to buy the type of item you actually want - where you own it entirely.

Ownership has never actually been absolute. When you buy land you cannot tear it up and make it totally unusable. If you buy a home under an HOA you may have to keep it in a certain type of order.

Maybe what we need is a law that manufacturers always need to provide a "dumb" model of their products which can be completely owned by the consumer.

However, I was speaking from a stance of acceptance that the companies are maintaining ownership of some functionality of the devices. I was primarily thinking about the way it accesses company owned infrastructure (servers and the information on them) but it extends into a grey area on the devices themselves.

You should be allowed to reasonably tamper with the device, but you should also be attempting to communicate with the company about it. They shouldn't be allowed to retaliate against you for requesting to tamper, they should need to reply reasonably quickly, and the reasons for which they are allowed to deny you should be regulated so they cannot just deny for no reason.

I am saying we need to lean in to the situation we are in if we want actual results, and I think there is a lot of room to develop a reasonable legal framework on this subject that incorporates partial ownership.

It shouldn't be as restrictive as it is today, but it also shouldn't be a complete free-for-all. We should at least attempt to make an effort to control security vulnerability information so that criminal behavior and innocent behavior actually look different.


>People should not have agreed to buy things where there are parts of it they don't own that they don't even need, but they did.

I own zero IoT devices for the exact reasons you gave.

Frankly, I would prefer to change that state of affairs. I would also prefer far less waste. Tons of these devices end up in the garbage too. That is unacceptable and surely not sustainable.

I am not OK with partial ownership, unless there are clear obligations attached to the other partial owner that have real teeth.

Fact is we have law for this case and that is the rental agreement. That is exactly what partial ownership is.

And when people are asked to value something they will be renting, everything changes. A big change is purchase price. That goes down.

What I see happening is that IoT companies' business models are priced as if ownership happens when it really doesn't. And that is not OK.

I also find putting that onto people disturbing, because it was not the people who made the choice to advertise a sale and then act as if it is a rental.


Out of curiosity - why should I be required to ask for permission from a given company to probe company-owned infrastructure?

What I mean here is that if there's a bug / vulnerability in a given company's infrastructure, then that company should fix it and not put the blame on a user that was affected by it (even if the device that communicates with that infrastructure always follows the happy path)


I'll try to get back to a real-world analogy; think of a bank:

Can you try opening the public door off hours and discover it is locked? Yes, of course.

If the public door is unlocked, can you now go inside the bank and start trying different combinations to open the safe? No, you will be arrested.

Anytime you move from probing a website with a browser to using other tools, your actions are subject to interpretation


You need permission because

1) the probing almost always involves breaking the terms of the contract you made with that company.

2) it creates a paper trail of intent

3) it's not your property so why wouldn't you need permission to access it?

I am not sure how permission affects a company's ability or obligation to fix security bugs. I agree they should fix it.

We can make the law that not only does the company approve of the request but they have to disclose to you additional information that can help you find bugs. Idk, point is I'm advocating for creating a system where researchers work with the company rather than as vigilantes


> I don't think anyone would like it very much if someone came to their house and documented all the ways to rob it they could find, even if it's for research purposes.
The correct analogy would be if someone documented all the ways to rob a house that is currently mass-produced and sold on the market. And yes, as a consumer I most certainly would approve of such activity, especially if I've yet to make a purchasing decision. Or especially if I'm already living in such a house, I need to know that it is not safe.


In that scenario I would MUCH rather the company be aware someone is putting that list together, notify me in advance of the research being concluded, provide updates, organize and manage the contents of that list, offer solutions, patch the fixes into new models, and generally work with the people who already purchased the house.

I would not prefer someone to do it all in secret and then at the last second decide they want to inform the company.

Once such a thing gets broadcast, there is inherent risk created for a lot of those existing owners that did not exist before. Opportunistic criminals are way more common than premeditated ones.

Also, if we gain the ability to monitor everyone who is currently probing houses for security issues, and we have a whitelist of people who pre-notified us of their intent, then we can more reliably examine people who might be looking to abuse the system.

I guess part of my underlying assumption here is that we are moving towards a surveillance state and there are no signs of that stopping.


> In that scenario I would MUCH rather the company ... notify me ... provide updates..

Here is the problem - the company does not give a crap. You get robbed, and it's their fault? They don't care. But they will sue the researcher, because the researcher has discovered that it's their fault you got robbed.


Some companies will absolutely give a crap.

And the ones that don't give a crap create a paper trail of not giving a crap.

The researcher is protected from being sued by being granted permission and following any regulations created for ethical security research.

We can make security notifications from companies mandatory. Now if they try to hide something, and it comes out later, there is documentation of the cover up


Do you believe that your proposal increases the cybersecurity of society as a whole?

You focus a lot on the rights and conveniences of a company, but the rights of a company are not more important than the security of society as a whole.

There are good guys and bad guys out there looking for vulnerabilities. What you propose reduces the number of good guys more than it reduces the number of bad guys (since bad guys are less likely to follow the law). What you propose shifts the balance towards the bad guys and makes it more likely that vulnerabilities will be discovered first by the bad guys. You also propose security through ignorance; security via hoping that nobody notices.

Again, I would really like to hear you assert that your proposal would increase the cybersecurity of society as a whole. I did not clearly see such an assertion in your comment. I want to see an argument focused on the security of society as a whole.

I assert that we currently reduce our national security for the convenience of companies.


I proposed a preference for systemic solutions over building a soft dependence on white hat hackers.

This benefits society as a whole because it clearly delineates actions with intent. If doing X is always not allowed, then all you need to do is find people doing X and you can hold them accountable.

If you allow or disallow the same activity based on merit of intent, then you increase the level of plausible deniability to everyone who gets caught.

I am not proposing security through ignorance. I am proposing security through consent. Nowhere did I say anything about not allowing research, I only said that if you do it unsolicited then it should be considered a threat.

So, we could systemically allow for a right to research that involves notice to the company and their consent for you to test. It would not hinder white hats at all. If businesses resist for selfish reasons we can expand the law to prevent them from denying requests without a legitimate reason. For example, maybe it is okay for them to deny a request from an ex-employee with a grudge who has sent the company aggressive emails. Idk, maybe there are no valid reasons to deny. The point is we can create a framework that promotes security development above the table with all parties involved. And my proposition is that if that is possible then it should be preferred.


You attempt to solve the problem of chaos (think grey-hat) by expanding law enforcement--by enforcing order on every internet user world wide. That's going to require a lot of boots to squash a lot of faces. Curious kids who run port scans will stand before judges, journalists who press F12 will face the ire of the most powerful and decades in prison[0]. This will probably require some national firewalls as well. This will continue the status quo where companies leak the private information of countless millions and nothing happens, while individuals must be careful what they do with their own computer and their own physical devices.

I attempt to solve the problem by embracing chaos and empowering those who seek to do good in the chaos. I'd like to see our IT systems become so hardened that no amount of chaos can harm them. Let the grey-hats and black-hats run wild, it is possible to build our technology well enough that they can do no harm. This would require those with the most wealth and power in our society to do a little more, to take on some additional responsibility and demonstrate they are worthy of the trust and power we have given them. Let individuals be free and make the creators of our technology responsible for their own creations.

What you have proposed is what we already have, it is the status quo. When you hear about a major breach every other week, ask yourself whether or not it's working.

[0]: https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-...


The status quo is not sufficiently codified. I am suggesting we codify it so that we can look at the rules and change them so that they make sense.

I also think it would be a good thing to have a legally protected avenue for people to declare their intent before checking for unlocked doors and such.

Imo I think a lot of the problems are coming from companies feeling like they are getting fleeced by security experts. If a company has acknowledged you as a researcher beforehand, then you have a pretty strong legal defense if they decide later that they don't like what you find.

I am not suggesting a new world order over everyone that uses the internet. People who stumble upon vulnerabilities without looking for them, or through incredibly basic means like a port scan, can be protected. We can feasibly list enough ways someone can uncover a security hole without a direct effort to do so such that the spirit of the law is sufficiently obvious to any judge to include any new ways that pop up on a case by case basis.

However, we cannot currently offer any protection to people directly trying to find vulnerabilities when such actions are identical to people who are trying to abuse it. The only possible differentiating action would be someone to announce beforehand that they are aware what they are doing looks like criminal activity and to request permission to proceed.

The argument that we have the technology to make it infeasible to hack systems is moot and imo naive. There is cost, significant cost, to maintaining the highest level of cybersecurity. Cybersecurity experts are some of the highest paid IT professionals on the market right now.

So I do not see how educating people who want to look for vulnerabilities to reach out for approval on what they are doing is too much order, but requiring everyone who creates anything that uses the internet to successfully implement state-of-the-art cybersecurity defenses is not.


This is a very poor analogy. For one thing, casing someone's home is not interesting research. It's not news to anyone that locks only keep honest people out. You need physical access to break in. The legal system and the people nearby (neighbors and residents, and their firearms in the USA) are the main lines of defense here. Unlocked doors are a harm targeting one household.

Conversely, with vulnerable IoT devices, we're talking about internet-connected devices. The potential harm is to everyone on the Internet, not just one household, when they're taken over and made part of a botnet. An attacker can exploit them from anywhere in the world, including residents of hostile jurisdictions that are tolerant (or actively supportive) of such activity. Russia, North Korea, Iran, etc. The protections people have relied on for centuries to defend their residences from bad guys don't apply anymore.

These IoT devices can also be used to gain a foothold in your home network, which is usually a flat network. It's surprisingly difficult to find a "router" for home use at a reasonable price point that can set up VLANs, by the way. Even as a technical person.

The better analogy IMO is to building codes, where your property rights are limited by society's interest to keep your family safe, but more importantly, your neighbors safe too, because fires spread. It's still an imperfect analogy for a number of reasons. Cyberattacks are a relatively novel kind of threat. All analogies are going to be imperfect.


I think a better analogy can be drawn by just considering the physical version of some things. For IoT, you can say if someone discovers a specific brand of physical lock can be broken in unexpected ways, they should be allowed to communicate this in a way that benefits the users of the lock without facing any legal risk. For internet banking, you can discuss a physical vault that safekeeps everyone's gold, and say that someone who notices a broken lock should not be punished for telling the vault manager to fix the lock. Unfortunately the common situation is that the lock company and the vault manager will sue because they don't want to admit they put their users and clients at risk - it sounds absurd, but that's what happens in the electronic world.


Well, in this analogy the problem starts with how the person is noticing the lock can be broken in unexpected ways

Everything you said after that is a valid continuation from that, but the scope of the issue I am talking to centers around that how.

Because locks have never actually been unbreakable, right? The main purpose of a lock, the generally accepted way that the lock keeps people out - is by existing, not by being strong.

We have higher standards for the lock in more serious applications, like a vault, but if you buy a vault door, put it in your garage, and begin testing it for vulnerabilities, I feel like it's reasonable to view that as criminal. I admit 100% that it could be a curious tinkerer, but I do not think it is unreasonable to tell the tinkerer that they can't do that without permission.


What happens in that case is said tinkerer does it anyway.

And say they got that door by any of a number of legal means. Fact is they have it and could have a wide range of legal uses for said door too.

Is it better to drive that sort of thing underground?

I question that.


The building codes analogy still supports my argument. You cannot just walk into a stranger's home and inspect it for whether or not it is up to code.

I agree analogies are going to be imperfect, which is why it's important not to criticize an analogy based on where it fails but to work with it on the point it is meant to express, and then yes, if it doesn't actually convey the point then it could be a bad analogy.

I think it might help if we clarify WHY a lock keeps honest people out. If a house is locked, you MUST commit a crime to gain entry. So by nature of bypassing the lock, you are no longer acting honest. It is not about what type of person you are, it is about clearly delineating honest actions from criminal actions.

If the door is unlocked, then a person could walk in and then pretend they didn't know better if they get caught. This is assuming we say it's okay to walk through unlocked doors

However, since we acknowledge it as criminal behavior to even test whether or not a door is unlocked - the existence of locks in general and the common knowledge of where they should be expected to be found establishes a barrier honest people know not to cross.

With respect to cybersecurity, I am proposing we accept a similar relationship while also creating protected legal paths for honest people to conduct security research.

The thing we can all likely agree on is what cybersecurity is and where it applies. By nature of knowing where it should apply, we establish a barrier that honest people should not be crossing without permission.

I agree that there is a lot of foreign danger involved with the topic and botnets are a concern. However, progress there is not going to be made by random hobbyists testing websites for sql injections for fun. It's going to be made by cybersecurity professionals who can easily be educated to and comply with a regulation to declare their intent and get approval before poking around.

The rules for an approval process are a totally open book. It does not need to be restrictive or limiting to researchers


Another analogy could be someone doesn't realize they left their back door open and these guys come and point it out.


I think the analogy would be someone doesn't realize they left their back door unlocked.

You can see an open door. You can't tell a door is unlocked unless you go up and try to open it.

If a stranger informed me that my back door was unlocked, then I would be immediately suspicious. Why were you at my door trying to open it without trying to contact me first?


> There is an inherent risk of your vulnerabilities being broadcast somewhere, either on purpose or accidentally, once that information is collected and organized by the researcher.

A legitimate researcher is going to promptly notify you of any vulnerabilities they discover and you as a large organization are going to promptly remediate them.

But the trouble isn't that the law might impose a $100 fine on a smug professor or curious adolescent to demonstrate that some audacious but mostly harmless behavior was over the line, it's that the existing rules are so broad and with such severe penalties that they deter people from saying anything when they see something that looks wrong.


I once found a vulnerability. I pressed F12 and saw unintended information in the source of a webpage. I just closed the tab, I didn't report it.

Our laws made it risky to do the right thing, so I didn't do the right thing.


I once saw a vulnerability in the same way. Some website from a really powerful org presented masked info, but the info was completely unmasked in the api responses. I’ll never tell anyone. I’m not American and don’t want my payments to suddenly stop settling or visas denied for unknown reasons.


I agree the laws are too broad. I think we need to add layers of granularity to them. Create more of a framework for settling the rules on what is and isn't allowed. Maybe we settle on everything goes, but the company should be involved.

A legitimate researcher should be notifying the company that they are going to be looking for vulnerabilities in the first place. That is part of the distinction in behavior that I am encouraging. This way if someone is caught poking around for things to abuse unsolicited, at least there's a little more merit to holding them accountable. We are able to treat it more like the threat it is.

A good faith company can give researchers pointers on where to look. Maybe the company has a really good reason to prevent looking at certain things, and they are able to convince the researcher of that. I dk. Point is the framework for settling all that should be promoted rather than promoting people to act identical to criminals right up until they decide whether to sell / abuse the information illegally or notify the company and try to get a reward. Does that make more sense?


> A legitimate researcher should be notifying the company that they are going to be looking for vulnerabilities in the first place. That is part of the distinction in behavior that I am encouraging. This way if someone is caught poking around for things to abuse unsolicited, at least there's a little more merit to holding them accountable. We are able to treat it more like the threat it is.

The issue is this. You have some amateur, some hobbyist, who knows enough to spot a vulnerability, but isn't a professional security researcher and isn't a lawyer. They say "that's weird, there's no way...," so they attempt the exploit on a lark, and it works.

This person is not a dangerous felon and should not be facing felony charges. They deserve a slap on the wrist. More importantly, they shouldn't look up the penalty for what they've already done after the fact, find that their best course of action is to shut up and hope nobody noticed, and then not report the vulnerability.

The concern that we will have trouble distinguishing this person from a nefarious evildoer is kind of quaint. First, because this kind of poking around is not rare. As soon as you connect a server to the internet, there are immediately attempts to exploit it, continuously, forever.

But the malicious attacks are predominantly from outside of the United States. This is not a field where deterring the offenders through criminal penalties is an effective strategy. They're not in your jurisdiction. So we can safely err on the side of not punishing people who aren't committing some kind of overt independent crime, because we can't be relying on the penalty's deterrent regardless. We need the systems to be secure.

Conversely, if one of the baddies gets in and they are in your jurisdiction, you're not going to have trouble finding some other law to charge them with. Your server will be hosting somebody's dark web casino or fraudulent charges will show up on your customers' credit cards, and the perpetrators can be charged with that even if "unauthorized computer trespass" were a minor misdemeanor.


You can't give them a slap on the wrist if you assert what they are doing isn't criminal. Having an issue with the punishment model is no reason to throw out the law.

I think the subject has enough depth and complexity to it that we need to promote cooperation with companies. We can build protections against companies being dicks much more easily than we can codify the difference between malicious or innocent intent behind actions that are more or less identical up until damages happen.

I don't think I'm proposing anything that assertive. I'm suggesting we just put it all in the open and down on paper in a way that addresses most of the concerns and involves the company.

Documented evidence that companies were notified of security issues by people who declared that they were researchers, who the company approved to research, is a great thing to have in the fight against ignorant companies.

I completely agree that a degree of this is quaint with respect to a lot of the trouble coming from outside your jurisdiction. I just really don't see an issue with creating protected avenues for people to do research.

Opening someone's front door "on a lark" can get you shot in some states. I get that innocent people do technically illegal actions sometimes but that doesn't change whether or not an action is perceived as threatening.

So I recommend we start writing down the actions that need to be protected and at the very least give someone acting in good faith a bulletproof way to both conduct research and preserve innocence.

If you happen to uncover something accidentally and are concerned, then you can make the request afterwards, repeat your finding, and report it. So there's no need to stay silent.


> You can't give them a slap on the wrist if you assert what they are doing isn't criminal. Having an issue with the punishment model is no reason to throw out the law.

The law is too broad in addition to being too punitive.

But here's an argument for throwing it out entirely.

There are two kinds of people who are going to spot a vulnerability in someone else's service: Amateurs and professionals.

Professionals expect to be paid. But if you go up to a company and tell them their website might be vulnerable (you don't know because you're not going any further without their permission), and you send them a fee schedule, they're going to take it as a sales pitch and blow you off most of the time. Even if there's something there. To get them to take it seriously you would need to be able to prove it, which you're not allowed to do without entering into time-consuming negotiations with a bureaucracy, which you're not willing to do without getting paid, which they're not willing to do before you can prove it. So if you impose any penalty on what you have to do to prove it, professionals are just going to send them a generic sales pitch which most companies will ignore, and then they stay vulnerable.

Which leaves the amateurs. But amateurs don't even know what the rules are. If they find something, anybody's first instinct is "this is probably nothing, let me just make sure before I bother them." Which they're not really supposed to do, but in real life that's going to happen, and so what do you want to do after it has? Anything that discourages them from coming forth and reporting what they found is worse than having less of a deterrent to that sort of thing.

But subjecting them to anything more than a small fine is clearly inappropriate.

> We can build protections against companies being dicks much easier that we can codify the difference between malicious or innocent intent behind actions that are more or less identical up until damages happen.

The point is that we don't need to distinguish them. We can safely ignore anyone whose malicious intent is not unambiguous, because we're already ignoring the majority of them regardless -- even the ones who are clearly malicious -- when they're outside of the jurisdiction.

> Opening someone's front door "on a lark" can get you shot in some states.

The equivalent action for an internet service is to ban them from the service. Which is quite possibly the most appropriate penalty for that sort of thing.


I think you're getting way ahead of the conversation, and there is no way to know what the implementation would be like and how communication would go between researchers and companies because if you can think of the communication problem today, then we can consider a solution for that problem in the implementation tomorrow.

At the end of the day, I am arguing for promoting people to try to work with companies, and to put out to the public a process for making that effort effective.

I feel like we agree but our solutions are opposite. The current laws are insufficient, so we need adjustments to the laws.

You (and others) propose we make hacking into systems fully legal, presumably because we can target malicious activity based on what they do with that access instead of the access itself. Is that correct?

I also disagree that a ban is equivalent to shooting an intruder. The connection is not the actor, the person using it is. If a person chooses to enter into a protected space they do not have permission to be in, then they are susceptible to consequences to that. I think just because it is easy to do it from your bedroom doesn't change it. Much like how virtual bullying is still bullying; virtual breaking and entering is still breaking and entering.

If we formally adopt this attitude then we also enable ourselves to pressure other jurisdictions to raise their standards to match.

An uncontrolled internet apparently has one outcome - malicious spam. That is what everyone in this thread seems to agree on, and the arguments against what I suggest all seem to start with the assumption "there is nothing we can do about it" and the corollary "there is nothing we need to do about it".

I think we can actually do something about it, and I think we ought to. But before all of that, I think the first place to start is making a clear legal relationship between security researchers and the private sector and debate the laws that should be in place to facilitate that in a fair way


> I think you're getting way ahead of the conversation, and there is no way to know what the implementation would be like and how communication would go between researchers and companies because if you can think of the communication problem today, then we can consider a solution for that problem in the implementation tomorrow.

A major problem is that communicating with a large bureaucracy, even to just find a way to contact someone inside of it who will know what you're talking about, is a significant time commitment. So you're not going to do it just because you think you might see something, and as soon as you add that requirement it's already over.

You might try to require corporations to have a published security contact, but large conglomerates, especially the incompetent ones, are going to implement this badly. In many cases the only effective way to get their attention is to embarrass them in public by publishing the vulnerability.

> You (and others) propose we make hacking into systems fully legal, presumably because we can target malicious activity based on what they do with that access instead of the access itself. Is that correct?

So one of the existing problems is that it's not always even obvious what is and isn't authorized. Clearly if you forget your password to your own PC but you can find a way to hack into it, it should be legal for you to do this and recover your data. What if the same thing happens, but it's your own VM on AWS? What if it's your webmail account, and all you use it for is to recover your own account? You made an API call with a vulnerability that allows you to change your password without providing the old one, but you are authorized to change your own password.

There are many vulnerabilities that result from wrong permissions. You go to the service and ask for some other customer's account page and instead of prompting for a login or coming back with "401 UNAUTHORIZED" their server says "200 OK" and gives you the data. Is that "unauthorized access"? What do you even use to determine whether you're supposed to have access, if their server says that you do?
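
To make that concrete, here's a minimal hypothetical sketch (Flask-style Python, not taken from any real product) of the kind of handler being described: it returns "200 OK" for whatever account ID is in the URL because the ownership check is simply missing.

  # Hypothetical sketch of the permission bug described above: the server
  # trusts the account ID in the URL instead of checking who is logged in.
  from flask import Flask, abort, jsonify, request

  app = Flask(__name__)

  ACCOUNTS = {
      1: {"owner": "alice", "email": "alice@example.com"},
      2: {"owner": "bob", "email": "bob@example.com"},
  }

  def current_user() -> str:
      # Stand-in for real session handling; assume a header identifies the user.
      return request.headers.get("X-User", "")

  @app.route("/accounts/<int:account_id>")
  def account_page(account_id: int):
      account = ACCOUNTS.get(account_id)
      if account is None:
          abort(404)
      # BUG: no ownership check. Any user who edits the URL gets "200 OK"
      # and another customer's data, which is exactly the ambiguity above.
      # The one-line fix: if account["owner"] != current_user(): abort(403)
      return jsonify(account)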

This kind of ambiguity is poisonous in a law, so the best way to resolve it is to remove it. Punish malicious activity rather than trying to subjectively evaluate ambiguous authorization. It doesn't matter whether their server said "200 OK" if you're using the data to commit identity theft, because identity theft is independently illegal. Whereas if you don't actually do anything bad (i.e. violation of some other law), what need is there to punish it?

> I also disagree that a ban is equivalent to shooting an intruder. The connection is not the actor, the person using it is.

The justification for being able to shoot an intruder is not to punish them, it's self-defense. Guess what happens if you tie them up first and then shoot them.

You don't need to physically destroy someone to defend yourself when all they're doing is transferring data. All you have to do is block their connections.

> If we formally adopt this attitude then we also enable ourselves to pressure other jurisdictions to raise their standards to match.

The reason other jurisdictions don't punish this isn't that no one is setting a positive example. It's that their governments have no resources for enforcement or are corrupt and themselves profiting from the criminal activity whose victims are outside of their constituency.

Or if you're talking about the jurisdictions who do the same thing as the US does now, it's because their corporations don't like to be embarrassed either, and we could just as well set the example that the best way to avoid being humbled is to improve your security practices.

> I think the first place to start is making a clear legal relationship between security researchers and the private sector and debate the laws that should be in place to facilitate that in a fair way

Companies will want to try to retain the ability to threaten researchers who embarrass them so they can maintain control over the narrative. But that isn't a legitimate interest and impairs their own security in order to save face. So they should lose.

The embarrassment itself is a valuable incentive for companies to get it right from the start and avoid the PR hit. Nothing should allow them to be less embarrassed by poor security practices and if anything cocksure nerds attempting to break into public systems for the sole purpose of humiliating major organizations should be promoted and subsidized in the interest of national security. (It's funny because it's true.)

> An uncontrolled internet apparently has one outcome - malicious spam. That is what everyone in this thread seems to agree on, and the arguments against what I suggest all seem to start with the assumption "there is nothing we can do about it" and the corollary "there is nothing we need to do about it".

It's not that there is nothing we can do about it. It's that imposing criminal penalties on the spammers isn't going to work if they're on another continent, and correspondingly isn't a productive thing to do whenever it has countervailing costs of any significance at all.

You can still use technical measures. Email from an old domain with a long history of not sending spam and all the right DNS records, probably isn't spam. Copies of near-identical but never before seen messages to a thousand email addresses from a new domain, probably spam.
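
As a toy illustration (my own sketch, not any particular filter's logic), the heuristics just described might be scored along these lines:

  # Toy reputation heuristic illustrating the signals mentioned above.
  def spam_score(domain_age_days: int, has_spf: bool, has_dkim: bool,
                 has_dmarc: bool, near_identical_recipients: int) -> float:
      score = 0.0
      if domain_age_days < 30:
          score += 2.0   # brand-new sending domain
      if not (has_spf and has_dkim and has_dmarc):
          score += 1.5   # missing the "right DNS records"
      if near_identical_recipients > 1000:
          score += 3.0   # same never-before-seen message blasted to many addresses
      if domain_age_days > 5 * 365 and has_spf and has_dkim and has_dmarc:
          score -= 2.0   # long, clean history counts in its favor
      return score       # e.g. treat score >= 3.0 as probably spam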

You can also retaliate in various ways, like stealing back the cryptocurrency they scammed out of people by using your own exploits.

What you can't do is prevent Nigerians from running scams from Nigeria by punishing innocuous impudence in the United States.

And one of the best things we can do is improve the security of our own systems, so they can't be exploited by malicious actors we have no effective means to punish. Which the existing laws are misaligned with, because improving security is more important than imposing penalties.

I'm much reminded of the NTSB approach to plane crashes: It's more important to have the full cooperation of everyone involved so you can identify the cause and prevent it from happening again, than to cause everyone to shut up and lawyer up so they can avoid potential liability.


So are you saying that I shouldn't be testing a product I purchased or a product that someone mandated I have in my house? I shouldn't have to notify anyone; I own it, and I should be able to do with it whatever I please. In addition, if I do find an exploit, I am not obligated to notify the company, nor should I be. A good-faith company should be doing their due diligence and not releasing unprotected/poorly protected devices, as is common today.


You don't own the inside of it. That's the core part of all this. Businesses decided to sell items with special conditions where you can own possession of the item as a whole but not the ability to dismantle it.

That's just a contract with terms. If you are in the position being addressed by my points, then you have already agreed to those terms.

Your problem is with the ownership model, or with something else. I am saying that, since this model is already in existence and accepted by the public, we need to create some safeguards.

We cannot bypass the fact that you do not own the thing you are testing. So if you want to test something you do not own, then yes I think involving the entity that does own it is reasonable


This is what is referred to as "security through obscurity." If companies are going to publish/sell closed source software to the general public, and make any claims regarding its security, that should provide more than enough consent to probe it.


I think the difference is in what's yours and what's theirs. If it's yours, I agree. If it's theirs, I disagree.

The idea of absolute ownership is being eroded. You purchase a device but that device may use information you do not own. If you are manipulating the device to allow it to give you information you did not purchase and the contract you agreed to with the purchase was that you would not do this, then that is threatening. If what you learn by probing it allows you to breach the security of other people using the same service, then that is threatening.

If you are concerned about the device, I don't understand why we can't live in a world where you are able to vocalize that and give the device provider a chance for feedback before probing it for weaknesses.

If there is a security concern that you want to shine a light on, why is it that we need to address that concern in the dark? It is giving too much unnecessary overlap with people looking to exploit those security issues when we might not need to


> If what you learn by probing it allows you to breach the security of other people using the same service, then that is threatening

What is threatening is that the company that sells baby monitors and keeps video recordings of your family members being naked has zero accountability for their security and almost no chance of being caught if they misuse it.


That does suck and we should do something about that. Accountability could be part of the legal framework.

Trying to gain access to those video recordings by exploiting the device is still threatening too.


A device that is installed in my home but which I do not own is an increased liability on me.


Tampering with a device increases your liability compared to not tampering with it.

Don't install it in your home if you don't trust it. Don't buy things with terms and conditions where you don't own the device if you want to own the device. This is a different problem.


I have things installed in my home that I don't own. Electric, gas and water meters. The common factor with all of those is that their liability also remains on their respective utility provider companies.

You do not get to retain ownership and transfer liability. It's that simple. If you insist that you own the device, then YOU are fully liable for it.


I agree liability should be part of the discussion. If we create a legal framework around the subject and clearly identify what's allowed or not allowed by independent researchers and what the ownership model actually is, then this part of the discussion becomes easier and more easily applied to past precedents


Part of systemic improvement to security comes from the market forces that reward producers putting out carefully designed and tested products and punish producers that don't. Your suggestion of requiring prior notice, coordination, approval etc. incentivises them to defer the cost of proper development until there is a crisis, so they can rush out any rubbish product, and force users and researchers to do their security testing for them. Let them fear the unknown, with their necks on the line, and design accordingly.


I proposed protected legal channels for researchers.

It does not remove any pressure from companies. Their neck is still on the line.

It adds pressure to companies because it creates a paper trail. It enables good faith companies to work with researchers as well. They can even have researchers contact each other if they are both looking into the same thing.

There's a lot of good that can come of it

Companies can already rush out any product they want with no security. Lack of security is still a risk, regardless of how we address researching vulnerabilities


You proposed requiring consent from the producer of a product/service to have their offering probed. And did so with an example of a house not owned by that producer.

If the production company declines, that DOES remove pressure from that company.

Companies that rush out rubbish products can presently be named and shamed by independent, uncooperative or even adversarial researchers. Your proposal considers that research illegitimate unless said dodgy company decides to open itself up to scrutiny, which it obviously would not be inclined to do.

If you want to suggest the market would respond by not selecting products from such an opaque company, look into how many WhatsApp users care about auditable, open source code vs. those using Signal.


I proposed a preference for systemic solutions over a soft dependence on white hat hackers who operate identically to black hat hackers right up until they have a vulnerability to exploit and decide what to do with it.

In this thread I expanded the detail to include the system to do this could (and imo should) be a legal framework that creates effective communication between companies and researchers.

I also try to adapt my language to try to parallel what the person I'm speaking with is trying to say, rather than telling them they didn't mean what they are telling me they meant. I apologize if I created a misunderstanding with my word choice.

Yes, I did mention requiring consent from the company as the ideal goal of the model. I am not suggesting that the implementation of the model stops, full stop, at just that one sentence. In other areas of law, if you can prove a message was received by a company, that can sometimes be considered implied consent if they do not respond to it.

We can also require that companies cannot simply refuse for no reason, but leave legal room here for any legitimate reasons to deny should they exist.

And so on and so forth.

It makes the intent of the researcher very clear.

Declining is obviously less pressure for the company in this situation, I agree. But it is not less pressure compared to the current situation. Companies currently have no obligations at all to researchers, and they certainly do not build security out of concern that white hat hackers will out them. They fear black hat hackers. Those are not going away, and if a legal framework exists for companies to work with researchers and better arrange fair conditions for both sides, I would bet companies will be MORE willing to allow research than less.

Because right now the company gets the research for free and then gets to decide whether they want to throw the researcher a Starbucks gift card or not. Or just press charges because they are assholes.

I don't really care what the market decides to do. The point of this is to protect the researchers regardless of what the market does. Because to your point, the market has already chosen poorly, which is why we have issues on this subject to begin with.

Does this clarify my stance?


Kinda. I think we agree on the need to protect researchers. And if researchers are aligned with consumers rather than manufacturers then that's preferable because it's not the manufacturer's property once it leaves the building.

If protection is in place, that alignment will work because manufacturers' declining to be scrutinised won't prevent researchers from doing their job. But making protection conditional on manufacturer approval will suppress their work in those cases. And I don't know the practicality of establishing and enforcing this. So I oppose any conditionality generally.


> there needs to be consent from the company being probed for vulnerabilities

So they never give consent and no vulnerabilities are ever discovered?

If I make and sell bread, there could be a surprise food safety inspection in the middle of the night on Christmas Eve, but don't we dare inconvenience some software firm that holds intimate data on millions of people.


When you get a surprise food safety inspection, you are notified, right? They don't just break into your business without your knowledge and look around. You can refuse them entry, even if it comes with consequences later. They also aren't random civilians; they have some sort of qualification to be conducting these inspections.

That's what I'm getting at. People keep assuming I am saying protect the business at all cost and it's not the case. I want security research to stop getting sandbagged by discussions of legality.

We should make a legal path forward for security research to be more accessible and to promote behavioral differences between someone conducting research and someone trying to exploit or abuse a vulnerability.


White Hat: Can I hack your website and services?

Company: No we are super secure! No trying to find vulnerabilities.

Black Hat: lol sells company data


The point is in enabling the conversation. We can make the laws whatever we want that would help it be fair

White hat: can I hack? Company: no

Later: Company has 100 security request denials. Company info leaked. Company gets sued. Judge is presented with 100 instances where the company was offered free security testing and they refused. Judge raises the issue from possible negligence to gross negligence.

We can also only allow companies to deny requests for specific reasons


Why does everyone compare things to houses? If you want to be more consistent with your building analogy, IoT sold to the public or enterprises are more like bars, except that each user has their own privately owned bar that may or may not be stocked by a central liquor company. If a user wants to check it's not possible for someone to break into his bar, or slip poison into his booze shipments, or redirect the shipments altogether, that's legitimate in my mind. Even if someone buys a bar intending to hijack booze shipments, the liability is still partially on the liquor company if they have not taken reasonable precautions against known risks. Imagine buying a bar and the liquor company who you're forced to use says "if you rattle any of the doorknobs or test the alarm works, we'll sue your pants off and throw you in jail" - does that seem fair?


I was speaking towards probing the business not the things you own.

Using housing as a metaphor is common because it's an incredibly common thing people can relate to with personal experience, and is something people typically have relatively detailed intuitions built around what they are okay with and not okay with regarding it.

It got the point I was making across, but I do think there was a misunderstanding about what I was applying it to. I was referring to people who probe businesses' security vulnerabilities on the internet side of IoT, not people who check for vulnerabilities in things they own on the T side of IoT.

As for the bar analogy, I agree that there is a lot of room for reasonable due diligence to test the security if there is potential for you to be at risk of its failings. This is more in line with my last paragraph, and I do still assert that solutions that avoid the need for people to verify security themselves are preferable to ones that require it.

If you've got 2 legally independent entities messing with the same device, and then abuse of the device does happen and it leads to damages - can you understand how much more difficult this becomes to sort out than if the company was solely responsible for the device?


What do you think about a person who bought a house and documented all the ways to rob it that they could find? As far as I am aware, there is no law against that; and that is more comparable to what security researchers are actually doing.


If they bought a prefab house which was sold to many people in the neighborhood, and part of the terms and conditions was "do not open up the walls", and the owner opened up all the walls to find the weak spots through which it could be robbed, then that does seem criminal, no?

However, the owner should still have a right to validate the security of his house - so he should be able to request permission to break the terms of his contract for the sake of security research. That is going to require approval from the company the contract is with. I think we should be looking to make some laws around making sure this communication can happen safely and fairly.


At present, I'm not sure it's actually possible to sell someone a house with terms like that, since it would interfere with ordinary, even essential aspects of maintaining a liveable dwelling. It does highlight the oddity of the situation we are in with many IoT devices.


Yes, exactly with the weirdness. Which is why I lean heavily toward creating laws explicitly for this situation that clarify what counts as fair activity between researchers, companies, repairers, etc.

Sticking with the example, there is no reason a company cannot have terms saying you can't open the walls, while the state also has laws saying that, regardless of what the terms are, you are allowed to open the walls for maintenance, renovations, etc., but maybe you have to notify the company about what you are doing.

It feels like territory somewhere in between the complete freedom of ownership and the total lockdown of renting.

I'd imagine there is a whole legal tug of war that would need to happen to know where the lines would or should fall, but the main point is that both sides are at the table and no one needs to be keeping their activity in the dark because of how uncertain they are about how the law is going to treat them


> but there needs to be consent from the company being probed for vulnerabilities or else I find it hard to consider it legitimate research, regardless of intent.

The reality of netsec has not borne out this model. In practice, you have two broad categories of companies:

- Ones that already have a culture of security, run pentests, have bug bounties, deploy patches, etc. These aren't the ones exacerbating the botnet-of-things writ large.

- Ones that frankly don't give a damn. Either they say "we don't need security research, it's secure enough", or they say they don't want it divulging trade secrets, or any of myriad excuses. No matter what, they don't consent to security research, even if they desperately need it.

The latter often persist even after multiple wake-up calls from black hat breaches. We have in front of us a golden opportunity for distributed, decentralized security research - white and gray hats basically do this for free. Instead we punish them, while the real problem stays far out of reach of the short arm of cyberlaw. Documenting the netsec research is a pretty clear indicator of intent ^1.

Honestly at this point, I don't think we can afford to not go this route. We should give amnesty to researchers who clearly aren't causing any damage, instead of throwing the book at them, which sadly is usually the case.

1 - yes I realize this gives a potential out to black hats. I'm fine with that. There ought to be enough evidence of actual damage to tell the real criminals apart.


People aren't white or grey or black hats. Actions are.

A person can wait until they find a vulnerability to decide what type of hat they want to be. That is not only possible, but also the most rational thing for someone to do if there are no negative consequences to declaring yourself one way or the other before you find the vulnerability.

All of the problems mentioned can be addressed above the table.

We don't allow people to test your defenses unsolicited in any other industry that I know of, and the cost of cybersecurity is very high.

We can make basic security defenses a law if we want to without giving cover to black hats.

You can't throw the book at someone who has approval to do research. Businesses don't need to have at-will rights over that approval; we can require sufficient reasoning to deny it.


> there needs to be consent from the company being probed for vulnerabilities

What is the type of scenario that you have in mind here? Do you mean probing a web service for vulnerabilities, performing security assessments as part of pre-sale publications (think Consumer Reports, Anandtech reviews, etc.), or performing pen-testing on a device I bought that is now running on my home network? Because you appear to be arguing that I shouldn't be allowed to examine a device I own without explicit manufacturer consent.


I was speaking about the internet side of things, where you do not own the infrastructure.

As a related note, I do firmly believe in right to repair, and if you own something you can do whatever you want with it.

Partial ownership seems to be a thing now. So I think there is a lot of missing framework around managing that properly.

Long story short: I think there is room for manufacturer consent / acknowledgement / notice to be part of the solution, and if it can be part of the solution then it should be. We may need regulation around that; it likely cannot be left solely to the company's discretion and may even need an aggressive "receipt but no reply by X days is considered consent" clause - but I would like to promote solutions that come with communication between the affected parties.


You give up any expectation that a device won't be scanned the second it is connected to the public internet. There are botnets continuously scanning all allocated IP blocks for potentially vulnerable devices - try logging requests to an open port 22 and take a look at the kinds of requests you get. That's the price you pay for connecting to an open, worldwide network.
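If anyone wants to see this for themselves, here is a minimal sketch in Python 3 of that kind of logging (binding port 22 itself usually needs root, so this uses an arbitrary high port; any port exposed to the internet will attract scanners within hours):

    import socket
    from datetime import datetime, timezone

    PORT = 2222  # hypothetical listening port; 22 itself needs privileges

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen()
        while True:
            conn, (ip, port) = srv.accept()
            with conn:
                conn.settimeout(5)
                try:
                    banner = conn.recv(1024)   # whatever the scanner sends first
                except socket.timeout:
                    banner = b""
                stamp = datetime.now(timezone.utc).isoformat()
                print(f"{stamp} {ip}:{port} sent {banner!r}")

Leave it running on an exposed host for a day and the log speaks for itself.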

Now, the conduct of massive scanning operations and what their operators intend to do with the data they collect should be regulated, and punishments should be imposed on those who use this data to facilitate attacks on others. But the ship has sailed for consent to connections from other devices over the internet.


The ship has not sailed. The ship is still on its way to port. Complete internet surveillance is arguably an unstoppable force on the way to shore.

I think when it gets here there is going to be a lot more trouble for cybersecurity experts due to a lack of clear understanding around what is and isn't considered legal activity on their part. Right now the obscurity is something they hide in - they can choose whether or not to reveal they found a vulnerability.

But what if that's not always the case? What if you get "caught" before you are able to show you had no intentions of doing anything malicious?

We can have the trial by fire we usually do, and let a round of innocent people face unjust consequences and use them as martyrs to create new laws - or we can use some foresight and build some legal frameworks in advance that enable researchers to be "by the book" and not worry at all about legal repercussions


I just don't think that sending port scans to random internet addresses is a big violation of privacy, or undue conduct for a government to participate in. Having your connection details public is the price you pay for connecting to the internet. If you don't like it, then run a private network and firewall the ports on your gateway - the default behavior of all consumer routers.

Quite simply, you will absolutely get portscanned if you have a port open to the public internet today. Try it. No doubt the CIA has access to at least some of those botnets. We need policy protections that face the reality of the world we live in today, and harden devices that would like to communicate over the internet in an automated fashion. That includes punishments for operating massive, systemic botnets, but also some auditing of critical infrastructure that is publicly accessible.

For all their problems, certificate authorities have largely let us figure this stuff out on the internet browser side, and I would argue that has had a positive effect on privacy and security. Now it is time to do something similar for devices that connect to the internet in an automated way.


I agree on the subject, but I think we can leave port scans out of the discussion, because, as you suggest, they are not really an issue, and everything I am saying still applies to the other issues.

There would still be the upside of potentially addressing the scan spam as well, though.


Fully agree. If some company vanishes, consumers are left holding the shit end of a broken stick. It would sure help if there were protections for those that effectively volunteer their time and effort to keep things running for others.


I imagine someone in the many many comments has already suggested this. But just in case:

It would be great if all of my emails to security@somewebsite.com could be CC'd to security@fcc.gov, and that would immediately convey to me, somewebsite, and the FCC (and anyone else) that I am indeed disclosing and not ransoming.

I understand there would be a cost that the FCC would bear. I just think it would be a worthwhile cost to incur.


I like the general idea of improving communication / transparency.

Perhaps some branch of the government could provide a registry for responsible disclosure (e.g., `https://some-branch.gov/responsible-disclosure`). As a security researcher, you could notify the government of your intent to disclose as a demonstration of due diligence and good faith.

The registry/site could return a case/reference number that could be included with the disclosure to the manufacturer. In addition to discouraging an attitude of defensive reprisal, it might also impress a greater sense of urgency upon the manufacturer to follow through with remediations.
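As a rough sketch of what interacting with such a registry might look like (everything here is hypothetical: the endpoint path, the field names, and the returned case number; `requests` and `hashlib` are the only dependencies), submitting a digest of the report rather than the report itself would also let the registry timestamp the disclosure without ever holding its contents:

    import hashlib
    import requests

    REGISTRY_URL = "https://some-branch.gov/responsible-disclosure"  # hypothetical endpoint

    def register_disclosure(report_path: str, vendor: str) -> str:
        """Register intent to disclose; returns a hypothetical case/reference number."""
        with open(report_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()  # commitment to the report contents
        resp = requests.post(
            REGISTRY_URL,
            json={"vendor": vendor, "report_sha256": digest},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["case_number"]  # hypothetical response field

    # case = register_disclosure("acme-camera-rce.pdf", "Acme IoT Inc.")
    # Include that reference number in the disclosure email to the vendor.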


I'm not sure if it'd be necessary/useful but it might also be interesting to leverage zero-knowledge proofs so that interested parties could verify when the contents of a disclosure were made available without actually accessing the contents until after some attempts at remediation.


This seems like a pretty clear breach of First Amendment rights (we have a right to choose what we say, and who we say it to). It is probably a good idea for researchers to implement this strategy, and obviously more protections are needed for researchers in this area, but eroding the Bill of Rights is not the way.


I'm a little confused. Can you explain how their proposal is a First Amendment violation? If you're referring to "could be CC'd to security@fcc.gov," I assume they mean make it an option, not make it mandatory. Some companies attack you for trying to disclose bugs and exploits -- saying that you're attempting to ransom.


I would like to add that most of the IoT problem is no patches at all. The firmware these devices get is usually bog-standard, with some very minor tweaks, from somewhere in China.

It is a problem of vendor-locked-in products where you have to buy a hub to do an update - if there even is an update. If you want to get a good picture of how sideways updating can go, watch the Linus Tech Tips video where he wanted (and had the technical ability) to patch his light switches but could not even get them to give him the correct firmware, or even say whether he could. Also, for many devices there is literally no way to do an update at all: the firmware is flashed on the production line and that is the last update it ever gets.

Also, "supported" and "actually maintained" can mean different things in the hardware world, so you will need to get your definitions correct up front. To a hardware manufacturer, "supported" could mean "if the thing burns out, we ship a new one"; the firmware is a secondary consideration.

Another aspect you will run into is licensing. I can sell a device but may not have access to the code. Example: the vendor who makes that code went EoL on it 5 years ago. They will not even sell me the code, and they may not even have anyone left working for them who could hand it over if they wanted to. They may not want to sell me that software anymore because they have a new shiny they want to sell me instead. So I am stuck: even though I want to update, I cannot do it. I had one vendor flat-out refuse to give me the older docs because the item was EoL and they had a replacement product that cost about 5x as much. That was just to communicate with the thing, not even to update it to a later revision.


> I would like to add that most of the IoT problem is no patches at all. The firmware these devices get is usually bog-standard, with some very minor tweaks, from somewhere in China.

I was on a team that worked with a firmware vendor, from the US, for a Bluetooth chip.

We would send in bug reports, they'd send us firmware with fixes. Except it was obvious they did not use source control because they would sometimes base patches off of old firmware versions that had the bugs they had fixed in newer versions. It was absolutely insane having to send emails like "hi, your latest patch is based on firmware from a year ago, can you please instead fix the firmware you sent us last month?"


Those sorts of places are fun to interview at. "So what sort of source control do you use?" You would think everyone does that by this point - an easy slam-dunk question to ask and for them to answer. I had one say "well, sometimes we check it into SourceSafe, but usually we just copy it around the five of us on a fileshare" (this was like 4-5 years ago).


Sounds like broadcom to me.


Hilariously, not that time.

I do understand why you might think that though. :-D


Re: the licensing issue, companies wanting to put a label on their product would probably want to extract similar guarantees up their supply chain. Especially with a voluntary program like the one the FCC is proposing, good practices won't become the norm across the market overnight. But maybe, at the very least, the segment of product and component makers that take security seriously will begin to grow. I encourage you to share your thoughts in an official comment.


As someone who designs IoT devices like these for a living, the device manufacturers here are in many cases the smallest companies in the supply chain and have very little ability to influence things upstream of them, especially for specialty products or companies entering a new market. It's often a major win to get a chipmaker to pick up the phone and sell us their product, much less receive any support at all.

I wish I could put a label like this on all of my products and I've been wishing for this for over twenty years, but the reality on the ground is that our support ends when the support for the individual parts in our product ends. We've looked at our supply chain periodically to see if we can replace parts with better documented/supported comparable parts, but frequently there just really aren't any better options.

This is a great idea in concept, but I fear that the flaw in the FCC's proposed rulemaking is that it only indirectly addresses the root cause (the software, documentation, and support/updates provided by chipmakers for their parts). Furthermore, by focusing on device manufacturers, who are the weaker partners in the chain, the regulation is likely to punish smaller, more innovative manufacturers.


If it were forward-looking rather than retroactive, then it would at least mean that chip manufacturers couldn't sell their undocumented/unsupported crap, because all the buyers would have to have the documentation and support?

If there are no buyers then their attitude should change.


This is incorrect, because you're assuming that all the buyers have to have it, when the chip manufacturer is selling into many industries/markets.

Since the specific "IoT device for the USA market" set of buyers is actually a small percentage of sales for most of the parts they sell, they really don't care to support their product from the IoT security perspective. This support is expensive, so it would very likely be cheaper for them to ignore the market completely.


> This support is expensive

Most of IoT is that way. We had sales cycles that were 2-3 years long, and they would in the end buy 300 units. I then go back to my suppliers and say "hey, support these 500 ICs that you sold me for 10 years from right now." They would laugh me out of the room unless I showed up with big bags of cash. That instantly makes the whole project unviable to sell/support.


Yes, absolutely. These are the exact conditions for most of our higher-end products (500-1000 units sold of a particular configuration is common). It's funny to get laughed out of the room even asking some chipmakers "can you sell us 1000 parts, please?"


It is tough to explain to people that 1000 is not even a lot for some of these guys. 1000 parts at, say, a fun price of $20 each: that is maybe a $20-25k sale at most. For some of these companies that is a rounding error. With lower-priced parts they just do not care much; there is no margin in it for them, especially if you are not coming back every few months.


Disappointingly that makes sense.


> If you want to get a good picture of how sideways updating can go, watch the Linus Tech Tips video where he wanted (and had the technical ability) to patch his light switches but could not even get them to give him the correct firmware, or even say whether he could.

It was a mess, but it may not be a good example because part of the confusion was that there was no newer firmware. Their firmware version was being reported in hexadecimal, but the latest firmware version was listed in decimal.
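To illustrate the kind of mix-up, with made-up version numbers: a raw firmware field rendered as hex can look nothing like the same build listed in decimal on the vendor's site.

    reported_on_device = "0124"   # hypothetical: raw version field shown as hex
    listed_on_website = 292       # hypothetical: same build, printed in decimal

    assert int(reported_on_device, 16) == listed_on_website  # identical firmware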


He had a mix of random ones. They were telling him to buy a hub and hope for the best, or go through one of their vendors (more cost). Even if that exact detail was slightly wrong, that example could very easily happen: you have a group of devices at random firmware levels with no nice way to tell what is what.


Thank you so much for asking HN! I can't think of a more informed, higher signal community to interface with and get open and honest feedback from. Really brilliant idea.

I don't have much input on this issue, but I wanted to ask that, if you know folks in the US Copyright Office, you recommend the same approach to them with regard to their upcoming regulatory stance on AI.

The copyright office is going to hear one-sided input from artists and the largest tech companies (seeking to build moats), but they need to broaden their inquiry to include technologists, researchers, and startups. HN is an excellent place to increase their understanding.

If you can, I would greatly appreciate it if you tip the copyright office off about HN!


The biggest problem isn't even new regulations. The liability for violations always tends to be a rounding error relative to profits. Then, even if there are teeth, there is no money for enforcement, which makes it all pointless.

Look at how the FTC and SEC have completely failed us in the 21st century. Better regulations would matter if we ever bothered to enforce the ones we already have.


IDK man, this is a pretty defeatist attitude and doesn't lead to any steps to improve the situation. There's an FCC commissioner here in the comments today who's interested in the community's input. If you don't think it will have any effect that's your prerogative, but members of the public providing good technical opinions can only be a good thing.

I don't agree with the vibe that past failures mean that regulations are pointless. Nothing is perfect at preventing abuses, but regulations do shape the actions of corporations and the terms of the discussion. Plus, the FCC is not one person - the commissioner making this post entered office in 2020, so it seems broad to pin him with vague statements about agencies completely failing us.


It's better to get the policies in place now and then complain about the lack of funding/enforcement. The mere threat of enforcement will cause some companies to design their products better and when a major security incident happens because of a bunch of insecure IoT devices and people are outraged it'll be a lot easier to motivate action if we can say "We already have rules that would have prevented this entirely, but the FCC wasn't provided the resources to enforce them."

That's a clear call to specific action as opposed to "We don't have rules that would have prevented this, and also many of the rules we do have across several agencies don't have enough funding to enforce rules designed to solve other problems."


HN when government agencies have little leverage to enforce rules: The violations are a rounding error to profits! We need to make the laws more stringent.

HN when EU passes laws that have significant teeth in them and let them actually enforce them: This is ridiculous overreach! It will kill innovation and make it impossible to do business there!

Love it, never change <3


Almost like it's different people :)

The bigger issue is that simplistic takes expressed strongly with no room for disagreement tend to get the most upvotes from other people. The people who agree will upvote, the people who disagree will just move on, and the people who don't have an opinion will think the person sounds like they know what they're talking about and will upvote anyway. That's how you end up with back to back threads where completely opposite takes are highly upvoted, and both of them happen to be awful takes.


This sums up the situation: government regulations don't work. These regulations put us on the path of religious-like trust in government. We could be working toward push-button-simple network segmentation, with some kind of default filtering, that the average home user could install.


> These regulations put us on the path of religious-like trust in government.

We don't need to have religious-like faith in government because we can vote for people who will do what we want them to and we can vote out the people who refuse to do their job. It doesn't happen without the people getting involved and holding their government accountable though. You don't have to pray when you can vote.

Without regulation you could only ever have religious-like faith in private corporations because they have zero incentive to act benevolently and you have zero power to replace a CEO who is acting against the interests of the public. You have no vote, so prayer is all you have left.


Maybe I could have some faith if regulatory bureaucrats were fired when there are major regulatory failures e.g. 737 max. Maybe I could have some faith if police state agency employees were jailed for FISA abuse.

Voting isn't enough because even elected officials aren't allowed to fire these people.


Specifically, the FAA allowed Boeing to use software to cover up a design flaw (bigger engines crammed under the wings, which causes a pitch-up problem) so the plane would appear to handle like older 737s. Apparently only a test pilot has been charged, for falsifying some paperwork. The 737 MAX should be required to get a new type certification due to the significant changes.

Another example of regulatory capture leading to inadequate oversight is the FDA, which has a revolving door with drug companies. https://www.mdlinx.com/article/10-dangerous-drugs-recalled-b...


> Maybe I could have some faith if regulatory bureaucrats were fired when there are major regulatory failures e.g. 737 max.

If I were Boeing, I would hire the fired bureaucrat with a lavish comp package and make sure he's at every meeting, conference, get-together, etc. looking well-tanned and happy.

That makes negotiating with the fired guy's replacement much much easier.


How well did that work for bank oversight in 2008, and again in 2023 with SVB? The accountability of "my one vote will remove government's failed regulators" fails on the scale of $billions.


Your examples are situations where deregulation or a lack of regulation caused problems. Without regulations, prayer didn't work out so well. We all know IoT security is a problem, but after decades of that problem existing, prayer hasn't worked there either. No one person's vote can fix government, but collectively we have the option to enact change. I'll take having the ability to make changes over being powerless to make changes every time.


Is there an argument that the government failed in oversight with SVB?

I think this is a textbook slam-dunk by the government? They stepped in when the situation was _bad_, but not _catastrophic_ yet (mmmmaybe arguable), took over, and no depositors got hurt.

Is there an argument that this could've gone better, apart from "no banks ever fail"?


The Fed had to set up a swap/lending facility that weekend. Sort of like they make it up as they go.

The specific regulator is going to retire. "Abbasi and Mary Daly, president of the San Francisco Fed, came under scrutiny after a post-mortem report undertaken by the Federal Reserve found problems with how SVB was supervised." https://www.msn.com/en-us/money/markets/key-san-francisco-fe...

And through regulatory capture the CEO of SVB was on the board of directors of the regulator!

https://www.reuters.com/markets/us/ceo-failed-silicon-valley...


>How well did that work for bank oversight in 2008, and again in 2023 with SVB

Hilarious own goal on this one.


I've worked in security before, and I don't really think the government should be involved that much. There are so many different situations to consider. What I would support is the FCC coming up with a list of common patterns and then forcing devices to state which, if any, pattern they follow. I have a weather station, for example, which doesn't really need any security on the device end.


Does your weather station connect to the internet? (the discussion is about IoT devices)

If so, plenty of IoT devices have been used in botnets, as point of entry into local networks (hello printer, home assistant, file share...), or simply killed off with a DoS attack.


Which the manufacturers of IoT devices will give us willingly out of the goodness of their hearts?


Yet government is made of people so it does not have God-like powers, even though it is often worshipped.

I would prefer to plug in a box that does this segmenting/filtering. I will pay if it can be rebuilt from available source code. Make it easy to install and set up. If nobody purchases it, then nobody cares, so why would government get involved? This seems like FCC scope creep.

Forcing every IoT vendor to do it overlooks the problem of each vendor having and maintaining the skillsets.

How about something like UL to create a slim standard and test against that standard. The aforementioned box idea could apply to be tested against the standard.

https://www.ul.com


> Yet government is made of people so it does not have God-like powers, even though it is often worshipped.

A slightly bizarre aside.


> regulations put us on the path of trusting religious-like in government

We trust in government to set rules and punish rulebreakers. When that is not true, do we enact punishment ourselves? The results would not be pretty.


If your ISP determines there is a botnet from your home IP and you refuse their request to fix it, then it seems appropriate for your ISP to take action or "enact punishment".


Okay, let's look at the reverse situation.

If my ISP charges me for 100 Mbps but provides 10, can I enact punishment without the government interfering and protecting the ISP from my punishment?


Using the rule of law and the courts, it depends on your contract. Many residential providers offer service on a best-effort basis. Guaranteed service with penalties is typically possible, if you are willing to pay significantly more.


This is the textbook definition of hypocrisy.

The ISP does whatever they want, without any qualifiers about the contract and without any need to go to court.


Indeed, this is awesome.

Not sure if you're able to comment on this, but is there anything in place to mitigate the risk of automated astroturfed commentary, e.g. via LLMs, in this and other cases?

Edit: on the FCC docket specifically, not on HN


> Not sure if you're able to comment on this, but is there anything in place to mitigate the risk of automated astroturfed commentary, e.g. via LLMs, in this and other cases?

Look at HN account age, karma, and comment histories.


What about new users?


Agreed. Thank you Commissioner Simington for reaching out to us. It makes all the difference in the world.

> while you likely can’t influence something like a presidential election on your own

In fact, there is almost nothing of significance you can accomplish (or influence) on your own. We always do and always have needed to work together - and the results are astonishing: almost everything that's ever been accomplished.


Going to echo my thanks here as well.


As a firmware engineer, I'm one of the people who actually writes the code that goes inside the IoT devices. I'm very interested in what the FCC might be able to do here.

How does the FCC define a security flaw? Would updates only be distributed when there is a flaw that needs fixing?

Remote update mechanisms can themselves present security problems in some domains. Thus, some devices should only be updatable if the owner has physical access to the device. Will the manufacturer be liable for damages caused by attacks on vulnerable devices that were not sufficiently updated by their owners?

IoT is making its way into defense and enterprise environments where reliability is a matter of national security. An update nearly always results in some downtime for the device, even if it's just a couple seconds. Sometimes, it may be in the best interest of a device's owner to defer an update indefinitely, until that device's continuous operation is no longer mission-critical. Even if the owner can't control exactly what is in an update, they absolutely MUST be able to control when an update occurs.


> Even if the owner can't control exactly what is in an update, they absolutely MUST be able to control when an update occurs.

+1 for this at the consumer level. My oven may have a critical update, but - for right now - *nothing* is more critical than finishing dinner. I'll let the update apply the day after Thanksgiving when I'm doing the dishes.

There are a few connected appliance brands that do this well: updates are broken down into two classes - "critical/security" updates and "all the other updates" - and I get to pick from "do nothing", "notify", "notify and download", or "notify and schedule/auto-apply" for each channel. Unfortunately this is not common, because it takes planning and more than the bare minimum to pull off.
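As a sketch of the kind of per-channel preference plus owner-controlled deferral I mean (the channel names, the action list, and the `oven_in_use()` check are all made up for illustration):

    from enum import Enum

    class Action(Enum):
        DO_NOTHING = "do nothing"
        NOTIFY = "notify"
        NOTIFY_AND_DOWNLOAD = "notify and download"
        SCHEDULE_AUTO_APPLY = "notify and schedule/auto-apply"

    # Owner-chosen policy per update channel (hypothetical defaults)
    POLICY = {
        "critical_security": Action.NOTIFY_AND_DOWNLOAD,
        "everything_else": Action.NOTIFY,
    }

    def oven_in_use() -> bool:
        """Hypothetical check: is a cook cycle running right now?"""
        return True  # Thanksgiving dinner in progress

    def may_apply_now(channel: str) -> bool:
        # Never interrupt an active cook cycle, regardless of channel.
        if oven_in_use():
            return False
        return POLICY.get(channel, Action.DO_NOTHING) is Action.SCHEDULE_AUTO_APPLY

    print(may_apply_now("critical_security"))  # False: download now, apply after dinner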


I think this is oversimplifying things.

Is finishing the dinner more important than applying a patch that fixes an actively exploited bug that locks your oven into cleaning mode and burns everything inside to ash over the next three hours? Or one that disables the safety checks and lets the oven overheat and burn your house down?

(Granted, the latter shouldn't physically be possible because it should have physical temperature killswitches etc.)

People always (understandably so!) consider software updates to be annoyances; but especially when you give an example like _an oven_, the potential for _catastrophic_ failures is too great.


One of my favorite IoT botnet scenarios is an attacker taking control of thousands of ovens/air conditioners/other high-wattage devices and using them to cause power outages. https://www.usenix.org/system/files/conference/usenixsecurit...

I wonder how the impulse to connect everything to the internet will be remembered.


The flip side of that is you can use control of all those high wattage devices to prevent power outages by shifting load to times when more energy is available.

Hopefully, that's how the impulse will be remembered.


Or by the rise of salmonella infections...


I think it works best when the consumer gets to decide. If a car recall comes along because the seat belts in your car have been shown to actually kill people in accidents, would you blow off your weekend plans to get your car fixed as soon as possible or would you go to your thing then bring the car to the manufacturer on Monday? There will be people on both sides of that decision and, ideally, we should let them choose.


These seem like very rare scenarios. If we're concerned about dire threats like this, the manufacturer needs a way to remotely send devices into an internet-disconnected "safe mode" before anyone's even talking about updates.


How do you, as a user, determine whether your oven is in "safe mode" that the manufacturer has toggled, or in "safe mode" that makes the UI look the same as the real one, but is actually a malicious code that waits until 3AM to start the cleaning mode?

I agree that these scenarios are (relatively) far fetched _now_; but if we expect the future to include connecting appliances like that to the internet, then solving problems like those is table-stakes stuff.

I understand that enforced software updates are annoying; but for high-stakes environments like this they strike me as a lesser of the two evils.


If the attacker has total control, then all bets are off no matter what mechanisms you put in first. Adding a safe mode would at least allow the manufacturer to stop any non-total exploit without relying on the more complicated update mechanism. Also, the appliance would more likely keep working in a kinda normal way in the meantime.


> If the attacker has total control, then all bets are off no matter what mechanisms you put in first.

You could have a second SoC on the device running off ROM, whose only purpose is handling this safe mode and controlling internet access of the main device. Keep it simple and make it essentially just a fuse that can be blown (turning off internet access) by a signed message from the manufacturer. Keep the hardware capability of this SoC to just that, so even if you have a vulnerability on that thing itself, an attacker can't really do anything with it. Keep the code running on that SoC simple and preferably make it a FSM that is proven to be free of vulnerabilities. Also make sure the main device can't interfere with it in any way.

Once the fuse is blown, the user needs to press a physical button on the device to re-enable internet access. Preferably ship the devices with this internet access disabled by default.

Even better yet, don't build an internet connected oven... but that ship has sailed in many areas.
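A minimal sketch of the "signed message blows the fuse" idea, using Python and the `cryptography` package as a stand-in for whatever tiny verifier would actually run on such a watchdog SoC (the message format and the button handling are invented for illustration):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for the manufacturer's signing key; on a real device only the
    # corresponding public key would be baked into the watchdog's ROM.
    _manufacturer_key = Ed25519PrivateKey.generate()
    MANUFACTURER_PUBKEY = _manufacturer_key.public_key()

    internet_enabled = True  # the "fuse"; False means the radio/NIC is held in reset

    def handle_kill_message(message: bytes, signature: bytes) -> None:
        """Blow the fuse only if the kill message verifies against the ROM key."""
        global internet_enabled
        try:
            MANUFACTURER_PUBKEY.verify(signature, message)
        except InvalidSignature:
            return  # ignore anything unsigned or forged
        if message == b"DISABLE_INTERNET":
            internet_enabled = False

    def physical_button_pressed() -> None:
        """Only a human with physical access can re-arm internet access."""
        global internet_enabled
        internet_enabled = True

    # Demo: the manufacturer signs and broadcasts the kill message.
    msg = b"DISABLE_INTERNET"
    handle_kill_message(msg, _manufacturer_key.sign(msg))
    print(internet_enabled)  # False

The point of keeping it this dumb is that there is almost no attack surface: one key, one message, one bit of state.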


Ideally yeah, the bar for critical appliances would be high enough for this. If not, disabling IP at the OS level would still be pretty good.


>If the attacker has total control, then all bets are off no matter what mechanisms you put in first.

Great, so we now agree that it's important to keep the system patched and up to date.


I didn't say otherwise. Only that a forced update mechanism is insufficient for stakes as high as burning down the house. Maybe the patch fails to download/install, or they take too long to develop the patch in the first place, or the patch doesn't work for all models on the first try. There needs to be a simple kill switch if we're talking about threats like this.

Besides that, the safe mode oven should still be able to cook food manually.


Just as smart people agree that the manufacturer knows nothing about the situation at the appliance, and that a forced update mechanism a la the first iteration of Windows forced updates ("update's here, restarting, too bad you're currently doing important stuff that won't be saved") is a stupid idea and, outside of private use, often a non-starter. That kind of condescension is generally something you can only do to private tech users.

Though I guess making the experience frustrating could keep more people from needlessly connecting everything to the network because it sounds futuristic...


That depends on updates never introducing new vulnerabilities.


> If we're concerned about dire threats like this, the manufacturer needs a way to remotely send devices into an internet-disconnected "safe mode" before anyone's even talking about updates.

This was exactly my thought. Download the update, then take the device offline until it's applied if the update addresses a security vulnerability.


If these are the kind of things we are worried about then the correct course of action is to ban needlessly internet connected devices. Your need for twitter on your fridge doesn't outweigh the botnet threat it poses.


Idk if that can be done, but anyway, I don't want to hear these scary threats thrown out as excuses for whatever policy the manufacturer puts in place. If the threat model really is burning down the house, you need a lot more than some forced updates to protect you.


You make a good point, but this kills me. I've been writing software a long time, and there's nothing I trust less to control my oven than software. God help us. I yearn for simple physical controls


Finishing dinner can be more important than the house not burning down. A burned down house is likely insured. A dinner with a potential business client is not.


Surely this has to be a joke.


Could be a reference to steamed hams


It’s just the Northern Lights.


I'm wondering why you'd have a smart oven in the first place. Seems like all risk and no reward.


The ability to pre-heat my oven without standing in front of it. That's really the big win. But also to be able to tell if spouse or children left it on.


But I can walk to my oven and turn it on, which wasn't a real problem even when I lived in a huge house. What am I missing?


We really can’t force you to see the obvious if you’re set on not understanding something.


That's alright, I'm good. Smart ovens are pretty rare, so it seems like a lot of people don't get this.


> But I can walk to my oven and turn it on, which wasn't a real problem even when I lived in a huge house.

Rushing to get out the door, "Honey, did we turn off the stove?" Presumably you can then check with your phone.


In my case, being able to start it heating when I'm ten minutes away from home, so that I can get the kids fed ten minutes sooner.


A timer to turn on an oven has been a thing for 25 years or more, probably there were clockwork ones before that. So it is down to very fine control on timing, or not turning the oven on, say, if you're in a traffic jam.

I'd expect the network connection to go down, and the oven not to turn on, at least as often as 10 minutes makes an operable difference.


Maybe you don't decide at the start of your day, rather it comes up later that the kids want something baked for dinner that is already prepared and just needs to go into the oven, and you also get home from work at like 7pm. Idk how niche that is, but if that's someone's situation, the remote control makes sense.


Yup. Or the buses and the queues at childcare pickup and the speed of walking home with a 3 and 5 year old means that arrival times at home vary dramatically.


Ok, that makes sense.


People with internet controlled ovens turn them on before arriving home to shorten the time it takes to get dinner on the table.


Lucky guy, it sounds like you've never experienced overwhelming anxiety over having possibly left the oven on while out of the house.


A simple timer that switches off after a couple of hours would do. The timer could reset every time the oven door is opened. This should solve most issues. Long cooking could have a bypass button or something (if the door-open reset is not enough).


I just remembered that many old analog electric ovens had this already: the only way to turn on the oven was by setting a timer (and when you needed it on for many hours, you had to set another timer for yourself to remember to extend the oven timer).


Even if you left the oven on, I believe it’s very unlikely to burn down the house.

These devices are designed to work for hours with minimal supervision. Given the size of the user base, IMO ovens are extremely reliable. And electricity prices are too low to be anxious about the costs.


> These devices are designed to work for hours with minimal supervision.

I wish. A while back, I put some (food) stuff to dry in the oven on a very low heat, knowing it would take all night to dry fully. After six hours, the oven decided that I must have accidentally left it on by mistake, and switched off automatically, and that stuff was wrecked when I got up in the morning and checked on it.


Indeed. I don't think I've ever heard of an oven burning down a house. And given how pretty much everyone has one, there must be many ovens left on alone in the house at every given moment.


That'd be a legit feature, but far less involved than a full-on smart oven. At most, it'd have a way to turn off the oven remotely (but not turn it on).

Anyway, last time I had this anxiety, I checked my Ring camera in the kitchen that's posted there for reasons like this. Works for the stove and faucet too.


While I have neither, I'd much rather use a smart oven to see its status than have a constant camera feed of my kitchen.


It only records on demand, mainly cause it's on battery. Also doesn't control anything in the house.


If you think the oven can burn the house down, how about the anxiety of trusting some companies IoT oven not to be exploited by script kiddies to burn your house down?


Probably for some feature or aesthetic that's unrelated to the smart features, but is only available on the smart oven.


This is definitely most of it. Also, someone figured out they could do something with the parameters of the convection oven and get close to an air fryer. Got the mode a year later. Lastly, when the meat thermometer hits temp, I get a notification. That is actually useful.


The more critical thing than finishing Thanksgiving dinner for your oven is not burning your house down.


I also write firmware for IoT.

IMO: one of the most important incentives towards secure IoT is: don't make it IoT unless it benefits the user greatly (instead of benefitting the manufacturer).

Don't let companies connect everything they want to the internet irresponsibly and if they must: force them to make it really really secure and long lasting (10+ years). Especially when there's high risk of causing physical damage to things and / or identity theft and / or privacy issues. Which en masse translate to national security issues.

Force them to sell devices that keep functioning even if the manufacturer (servers) cease to exist. Example: introduce a mandatory manufacturer IoT insurance that pays out in case the device ceases to function (securely) within x years (even in case of manufacturer bankruptcy).

Random chaotic brainfarts.


> Remote update mechanisms can themselves present security problems in some domains.

I've proposed many times that the device have a physical write-enable switch on it, not a software switch. That way, a malware infestation won't survive a reboot, and your backup hard drives won't get compromised.

I'm amazed that nobody does this. (Hard drives used to have a write-enable switch on them.)


Many flash chips have write enable pins at the hardware level. So this support really does still exist in theory!

However, these pins mostly aren't used for any kind of switch in basically every PCB design I've seen. Either it's "always enabled" so the storage can always be written, or it's a chip that's programmed from the factory and always disabled to prevent any kind of user update.


I'd sure hate to have my system compromised by ransomware, and then, when I try to restore from a backup drive, have the ransomware encrypt that too as soon as I plug it in.

I also have, on occasion, flipped the from and to parts of the command to restore from a backup. I'd really like to have a hardware switch to make the drive read-only.


They exist, but only as very expensive specialty gear for digital forensic investigators.

https://digitalintelligence.com/store


These also exist for removable drives (not really any different price compared to normal drives, in wide use) in the govt/defense market, where using a hardware write switch when moving data between airgapped systems in a controlled way is common.


> very expensive

Indeed, for a 2 cent switch.


Why doesn't MS do something similar for all of their OS code by placing it on a read-only filesystem? Only allow updates after rebooting into an update mode.


Android does this. The OS is read-only. Updates are written to a second partition and take effect by swapping partitions on reboot. Integrity is verified by verified boot.


Interesting. How is it verified? Maybe keep a Merkle tree hash of everything in the TPM?
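That is roughly what Android's dm-verity does: the partition is hashed into a Merkle tree, and only the small root hash needs to be trusted (it lives in signed boot metadata rather than a TPM). A toy sketch of computing such a root, with simplified block handling (real dm-verity keeps the whole tree so individual blocks can be verified on read):

    import hashlib

    BLOCK_SIZE = 4096  # dm-verity's default block size

    def merkle_root(data: bytes) -> bytes:
        """Toy Merkle root over fixed-size blocks of a partition image."""
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)] or [b""]
        level = [hashlib.sha256(b).digest() for b in blocks]
        while len(level) > 1:
            if len(level) % 2:               # duplicate the last node on odd levels
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    # The verifier only needs the trusted root; any tampered block changes it.
    root = merkle_root(b"pretend this is the system partition image")
    print(root.hex())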


I have zero faith in software read-only modes.


Thanks for this thoughtful feedback. I encourage you to file an official comment, especially regarding end-user control of update timing. Maybe my response here https://news.ycombinator.com/item?id=37394935 addresses some of your other concerns? We'd love to hear your thoughts.


Thank you for the reply! I have submitted my official comment.


This is a good point: some IoT devices really can't be designed to be physically serviceable while still remaining reasonably compact, e.g. those that need very high levels of water resistance, especially saltwater resistance.

And adding any remote update mechanism at all would more than likely decrease overall security.

So there actually should be a counter mandate too, for devices that are impractical to design to be physically serviceable, while meeting certain size/weight/etc. requirements.

To make sure remote update mechanisms, of any kind, are never implemented, unless the manufacturer can guarantee that the update mechanism itself doesn't introduce new flaws.


It sounds like you're looking for a carve-out so you don't have to upgrade your devices to a modern microcontroller that supports remote updates, and are using saltwater as a scary thing so no one challenges you on it.

You can conformal-coat an ESP32 with a sensor, a battery, and a wireless charger, and get remote updating. If hobbyists are doing that without commercial backing, what industry experts like you have access to must be even better.


Wouldn't a Starlink satellite qualify as an IoT edge device? How do you propose a user-serviceable physical switch on a device in LEO?

Not all devices have easy or cost-effective physical access -- that's why IoT is particularly effective at bridging the digital-physical divide.


Having a category for inaccessible things in LEO doesn't indict my point that things not in LEO are typically accessible.


You appear to be under a lot of mistaken assumptions.

And in any case, the default ideal is to have an option to update or reprogram devices in-person.

It's never presented as ideal, in anything I've seen expressed on HN, to have a remote actor capable of doing so, unless in a totally air-gapped environment.


I'm interested in learning! What are those mistaken assumptions?


Good comments.

> The FCC recently issued a Notice of Proposed Rulemaking [2] for a cybersecurity labeling program for connected devices.

This sounds like it is intended for consumer products, and it also sounds optional. I would hope that users with a legitimate reason to do so (defense, enterprise) would have the capacity to not participate and forgo the label.


The line between "consumer product" and "enterprise/defense product" can be blurry. For example, event security teams may use their personal smartphones to communicate with medical staff.

A lot of IoT companies (especially the startups) focus on the customers with the deepest pockets (enterprise and defense). If big-ticket customers demand this label, it generates a great deal of incentive for IoT companies to just say "to hell with it, we want that label on everything we make."

In any case, the words "national security" are usually a good way to get the attention of a three-letter agency ;)


> Remote update mechanisms can themselves present security problems in some domains.

Not really, if done right, to be fair. It's just a matter of implementing signature verification of the firmware updates that are installed on the device.

> IoT is making its way into defense and enterprise environments where reliability is a matter of national security.

If it's a matter of national security, surely you don't use IoT devices connected to the public internet. At the very least the devices should be in some private network, where the traffic is under your control. So if you don't do security updates, it may be acceptable under those circumstances.


> If it's a matter of national security surely you don't use IoT devices connected to the public internet.

Of course they do. That's the flip side of PaaS and reverse-NIH syndrome, the "opex > capex" thinking: "Industry 4.0" is built on web tech, with all the practices and assumptions baked in. Your critical infrastructure is, or is about to be, running JavaScript on a docker-compose cluster, and expecting to be piecemeal-updated daily.

And then, there's also "shadow IT" - going behind the back of IT and using COTS SaaS to work around red tape is still... going around IT and giving untracked third-party vendors access to organizational information and operations. "Making its way" doesn't only mean "introduced by design" - those vulnerabilities just creep in.


I would love to see frank discussion on the record of consumer-grade vs infrastructure-grade practices and what label(s) would be appropriate for each! It's not lost on me either that the roots of much high-ticket critical infrastructure are about to rest on web tech and highly evolved descendants of 8-bit micros.


> It's just a matter of implementing a signature verification

correctly

With rollback protection, a chain of signing keys to allow revocation, time stamping, correct parsing, enough scratch space to hold an entire separate image in A/B, enough processing power to verify image signatures, etc etc.

Doing this well is very hard. Given most IoT vendors don’t yet know how to prevent XSS, there’s near zero chance they’ll get updates correct in the next decade without solid open source frameworks to leverage.
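To give a feel for how much hides behind "just verify a signature", here is a heavily simplified sketch of only two of those items - signature checking plus rollback protection - using Python and the `cryptography` package as a stand-in; the manifest format and the version counter are invented for illustration:

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()   # stand-in for the vendor's offline key
    VENDOR_PUBKEY = vendor_key.public_key()     # what the bootloader would actually hold

    installed_version = 7                       # would live in a monotonic counter / fuse

    def verify_update(manifest: bytes, signature: bytes) -> bool:
        """Accept the update only if it is genuinely signed AND newer than what's installed."""
        global installed_version
        try:
            VENDOR_PUBKEY.verify(signature, manifest)
        except InvalidSignature:
            return False                        # wrong key or tampered manifest
        version = json.loads(manifest)["version"]
        if version <= installed_version:
            return False                        # rollback attempt: old (vulnerable) image
        installed_version = version
        return True

    good = json.dumps({"version": 8}).encode()
    stale = json.dumps({"version": 5}).encode()
    print(verify_update(good, vendor_key.sign(good)))    # True
    print(verify_update(stale, vendor_key.sign(stale)))  # False: valid signature, old image

And that still says nothing about key revocation, A/B scratch space, power-loss safety, or parsing the manifest defensively.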


Also, with right to repair, you need to be able to disable signature checks and upload custom firmware.


Yeah true, that’s a fascinating challenge that I haven’t seen particularly well discussed anywhere in detail.

It needs to be really well protected so that the owner, and only the owner, can disable the chain of security, but also so that they can do it without unreasonable overhead and without actually involving the manufacturer (in my opinion, to handle cases like the manufacturer trying to lock them in, charging fees, or simply going out of business).

Perhaps the owner mints a key pair, and the device only unlocks with proof of the private key. But in a way that is easy for your everyday person.
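A sketch of that owner-unlock idea (again Python with the `cryptography` package; the challenge-response flow and what "unlock" actually gates are hypothetical):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the owner mints a key pair and registers the public half with the device.
    owner_key = Ed25519PrivateKey.generate()
    device_registered_owner = owner_key.public_key()

    def device_unlock_challenge() -> bytes:
        """Device issues a fresh random challenge so signatures can't be replayed."""
        return os.urandom(32)

    def device_check_response(challenge: bytes, signature: bytes) -> bool:
        """Unlock (e.g. allow unsigned firmware) only on proof of the owner's private key."""
        try:
            device_registered_owner.verify(signature, challenge)
            return True
        except InvalidSignature:
            return False

    challenge = device_unlock_challenge()
    print(device_check_response(challenge, owner_key.sign(challenge)))  # True: owner proven

The hard part isn't the crypto, it's making key handling that simple for an everyday person.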


> It's just a matter of implementing a signature verification of the firmware updates that are installed on the device.

In principle: yes. In practice: signing keys seem to get leaked all the time.

It's not a mechanism I would blindly trust in security sensitive domains.


We could start by asking IoT companies to be ISO 27002 compliant; getting them to stop saving passwords in Excel files or in publicly accessible S3 buckets would help.

Security is not an update; security is a cultural trait of an entity or an individual.


Remote update mechanisms will be used to deliberately brick or downgrade the device (usually for petty commercial reasons), by unintended parties (e.g. Chinese spies with leverage over Chinese manufacturers), at the wrong time (e.g. one hour of outage when the smart oven should be cooking dinner), and so on.

Signature verification without human users controlling the updates means protecting an attack vector.


It would help if there were reference designs and test suites for common embedded tasks like remote updates, configuration storage, input validation, etc.

I don't do much embedded work these days, but I remember a lot of shoddy, ersatz designs based on random OSS components. I remember a small manufacturer of hardware devices used widely in payment processing telling me they were secure because "they used SSL", despite not taking care of physical or system security in a coherent, effective way.

Honestly, companies need to have some liability or other incentives to even care about security a lot of the time.


Greetings from Ukraine, a European country in the middle of a real, full-scale war right now.

I must say, we are seeing extreme growth of cybercrime as part of modern war. I think that, in the near future, any cold war is guaranteed to have a huge cybercrime component.

And hacking of IoT devices makes up a very significant share of cybercrime now. In a real war it is a question of life and death, because hacked devices that emit radio signals are used by hostile intelligence to find targets for heavy-weapon attacks; we have also seen cyberattacks on electric-energy infrastructure intended to cause blackouts (fortunately for us, unsuccessful).

Chinese IoT devices are a very special part of the question. In many cases they are connected to Chinese clouds, and this is also extremely dangerous, not only because of potentially unfriendly Chinese moves, but also because their security is not good enough, so in many cases criminals can intercept communications and interfere with the operation of a device, or even hijack control of it.

For example, there are smart door locks with cameras, and I hear hackers compromised them and used them to observe the work of air defense, so the enemy could tune their air attacks to do more harm.

In civilian life without war, videos from hacked door locks (or other IoT cameras) could be used for illegal surveillance, to coordinate riots, etc.


Hi and thanks for commenting. My concern with this topic is motivated in part by the AcidRain family of energy infrastructure attacks and the larger questions they raise about infrastructure security. Teardowns on Chinese-sourced equipment have been somewhat worrying as well -- one report I've read highlighted about two dozen versions of SSH in a single base station. Best wishes and good luck.


Can you share this report?


> the larger questions they raise about infrastructure security.

It's not your mission to fix a network design problem; the network design should air-gap all of those devices. US taxpayers can't afford your agency's scope creep.


I'll bite - so how do we get all those "air gapped"?

It's a leading question of course.


The example was "energy infrastructure", so network group in those firms use their skills to set it up.

If any government group should be providing guidance and best practices on how to air gap devices, maybe NSA should write the standards. This FCC proposal looks like a ploy to spend the ever-growing pot (reportedly ten billion USD each year) from the regressive USF phone bill tax instead of reducing the USF tax.

As mentioned in another comment, a plug-and-play home device which provides network isolation and filtering for IoT devices may have a market. I would likely be a buyer at home.

"The bigger culprit is the FCC’s spending on USF, which is close to $10 billion per year, practically doubling in size since 2001."

https://www.commerce.senate.gov/2023/5/sen-cruz-it-s-past-du...


I'm talking way out of my pay grade here.

> If any government group should be providing guidance and best practices on how to air gap devices, maybe NSA should write the standards.

I guess this is a bad joke? It's hard to tell w/ the internet.

> This FCC proposal looks like a ploy to spend the ever-growing pot (reportedly ten billion USD each year) from the regressive USF phone bill tax instead of reducing the USF tax.

I can agree this is what it is under the hood [0].

> As mentioned in another comment, a plug-and-play home device which provides network isolation and filtering for IoT devices may have a market. I would likely be a buyer at home.

Here's the key - there isn't a market. Otherwise there would already be one (you are unique). That's the crux of the problem. IoT is a race to the bottom when it comes to consumers. Consumers compare "smart devices" to what they already have - a light switch, a light bulb - commodities - they don't think about security until it's too late.

So, that leads to:

> If any government group should be providing guidance and best practices on how to air gap devices

You can't have "guidance" and actually get anything done in the consumer devices space. Standards and certifications - rejection of devices that don't meet them.

When it comes to dealing with communications FCC is the 3-letter-agency, and there's no changing that.

I guess the question boils down to - mass spying on Americans with un-secured devices sending data to China or let the FCC handle the problem by potentially expanding the USF?

[0] https://docs.fcc.gov/public/attachments/FCC-23-65A1.pdf page 45


UL tests and certifies electrical devices voluntarily. I would like to see improvement on an industry basis without more government regulation. Apparently people voluntarily purchase carbon offsets when buying airline tickets, so people do pay for intangibles.

The open standards of TCP/IP allowed for tremendous innovation, unlike the old Bell System, which used its monopoly to control what could be attached to the network.


NSA Standards here are non-binding unless regulated by the FCC.


Thank you for engaging with the community in this way. Many years ago, in a fight to preserve individuals' ability to flash their own routers, Vint Cerf, I, and a coalition of many others filed this report:

http://www.taht.net/~d/fcc_saner_software_practices.pdf

(Retaining the ability to reflash our own routers allowed my research project to continue, and the resulting algorithm, fq_codel (RFC 8290), now runs on a few billion devices.) The Linux and OpenWrt development process continues innovating and is very responsive to bugs and CVEs. It is a constant irritation that many products downstream of that work are 5 or more years out of date and not maintained!

Key bullets from that FCC filing are on pages 12-13.


High-quality comment. Thanks very much! I'll read your filing and think about it. But also, it's a great example of impactful public FCC commentary. I hope your work inspires others to make their mark in the record.


Thank you very much for reading. It was the first, and last, time I ever took part in the public process and political action.

https://www.computerworld.com/article/2993112/vint-cerf-and-...

We were within months of delivering a massive RFC 8290-based fix for wifi performance and we'd been bricking routers left and right... and then got in a whole bunch that we could not modify... due to that proposed regulation... I lost my temper, organized 260+ people to sign, made that filing, won, and went back to work. I should perhaps have pressed harder, as the binary blob issue has grown ever more terrifying, deeply embedded into too many baseband processors.

If I may hold your attention a little longer: it would be rather nice if modern FQ + AQM algorithms went into the internet nationwide. "Bufferbloat" is at epidemic proportions. Most of the fixes for it arrived in Linux in 2012, and they are only slowly rolling out 10+ years later, due to the accompanying epidemic of manufacturers not tracking new Linux kernels (with all the accompanying vulnerabilities). I care very much about addressing security issues, but I care about internet "latency under load" and better videoconferencing experiences even more.

There are only a billion or so devices left to upgrade.

https://www.usenix.org/system/files/conference/atc17/atc17-h...


Thanks again -- will review both links (especially the latter!)

The FCC hasn't traditionally been a cybersecurity agency and will, most likely, never really be one; however, we can certainly do things through rules to empower experts, the public, and the agencies with cybersecurity expertise. If that one thing is all you ever did at the FCC, sounds like the public owes you a big debt of gratitude.


As communications increasingly overlaps with other elements of information --- data acquisition, storage, retrieval, processing, and transmission --- keeping the FCC out of the security space will become both more difficult and less tenable.

We've already seen instances where broadcast channels have been hacked or hijacked, where false reports have been injected into news streams (at times affecting global financial markets, or disrupting emergency / disaster responses), where communications providers have disabled public access to alerts (mobile providers and wildfires, Twitter's recent hostile takeover), and more.

There's also the overlap between communications and monopoly (generally the FTC's remit), which I realised a few years back: Censorship, surveillance, propaganda, and targeted manipulation (AdTech and similar tools) are all intrinsic properties of media monopolies:

<https://web.archive.org/web/20201014011009/https://joindiasp...>

<https://news.ycombinator.com/item?id=24771470>

There are other concerns where media are highly decentralised or fragmented, including the spread of rumours and confusion (e.g., "fog of war", or the general uncertainty in natural disasters or after political and military upheavals such as Germany as the Third Reich fell). But the link from monopoly to these media concerns seems well established. Most, though not all, of these are addressed by people such as Tim Wu, Bruce Schneier, Cory Doctorow, and Shoshana Zuboff, though I'm not aware that all the components I've identified had been linked previously.

I'm aware that regulatory agencies are constrained by their legislative mandates, but communicating concerns over those limitations to Congress is also possible.


Where cybersecurity is a critical aspect of being able to communicate in our modern world, I urge you to rethink that. I can't go to Best Buy and buy a TV that radiates RF noise all over the place, interrupting communications, but I can buy an IoT device that will get hacked, radiate packets all over the Internet, and become part of a botnet, interrupting communications.


There's no small amount of irony in the fact that attempting to open your .PDF without the comforting assurance of SSL gives me a "security warning" that I have to go out of my way to circumvent. I'm sure that it won't be long before it will simply become impossible for me to open the file "for my protection."

The push to mandate certificates and other gatekeeping mechanisms that enforce obsolescence for the sake of digital security theatre is ultimately going to benefit corporate bottom lines (especially those of landfill operators), but it will indisputably harm consumers.

I guess I should turn that into a comment and submit it...


Thanks for reaching out to the community.

Instead of mandatory updates, there are lower hanging fruits you can win, and will have just as much, if not more positive security impact.

1. No default password, one must be set at initial configuration

2. Devices must function without a public internet connection (unless it is one of the device's primary functions to transmit out)

3. Devices must function without centralized host

4. Explicit disclosure of all "phone home" destination hosts, and ability to change or disable this

5. Explicit disclosure what information is transmitted out, and ability to disable this

I think the above five can be implemented relatively easily, require no continued maintenance from the manufacturers, and improve the CIA triad of IoT devices.


1. - routers have mainly solved this by having a unique, random password which is provided on a sticker on the device.

Other than that, these are really good.

I'd add something to address the problem of manufacturers going bust and then all their devices becoming paperweights. Perhaps:

6. it should be possible for the user to install their own firmware / updates, optionally at the cost of losing the warranty and access to future manufacturer-provided updates.


Routers are decently large, generally have enclosures, and are meant to be placed somewhere reasonably accessible to those who should have access to them while out of sight of those who shouldn't, which makes putting a sticker on them, keeping it there, and having the right people read it when needed trivial.

Some IoT devices could be handled the same way, but there are plenty of reasonable IoT applications where a password written on the device is impractical or a security risk.


A sticker in the box / on the user manual could probably solve those cases. The problem with requiring a setup phase is that it means you're shipping the device in a vulnerable state.


I recall Dell and HP servers at least used to come with a hang tag attached that listed the random initial firmware password. It doesn't need to be a permanent part of the device -- though you do risk losing the hang tag.


> unless it is one of the device's primary functions to transmit out

I'd argue that even in that case it should still "function" without a public internet connection, in the sense that everything but the egress transmission should be operable without one; e.g. initial setup, later configuration changes, status, diagnostics, maintenance... Many devices use the "it's a transmission device by design" part as an excuse for "let's blanket-require a connection even for things that don't require it".


I have a Zigbee and WiFi smart home based on Home Assistant, zigbee2mqtt, and ESPHome. It is completely local: no internet required, and it works perfectly. The automation is done in Node-RED. If I need to take some action from the internet, I do it through a WireGuard VPN (one click on my Android phone).

I especially like that the Zigbee devices are not even able to make connections to the internet themselves. I "own" the devices.

I understand that VPNs are hard for average users, but centralised, potentially insecure server infrastructures, generally situated in China, pose a significant security concern.
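
For the curious, the local-only pattern boils down to something like this rough sketch: a sensor publishes to an MQTT broker on the LAN, and nothing ever leaves the house. The broker address and topic are placeholders, and this uses the older paho-mqtt 1.x constructor style.

    import json
    import time
    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.10"                 # local Mosquitto instance, not a cloud endpoint
    TOPIC = "home/livingroom/temperature"

    client = mqtt.Client()                  # paho-mqtt 1.x style constructor
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_start()                     # background network loop for QoS handling

    while True:
        reading = {"temperature_c": 21.5, "ts": time.time()}  # stand-in for a real sensor read
        client.publish(TOPIC, json.dumps(reading), qos=1)
        time.sleep(60)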


By default, no telemetry. NO DATA OUT OF MY HOUSE WITHOUT EXPLICIT PERMISSION.


Out of curiosity, in what context has IoT telemetry been meaningful to the consumer? In other words, what data has been gathered that can be sold or otherwise abused? I personally don't see a reason to be concerned about a manufacturer wanting to track which features are being used and how those features are being used.


Yea, totally OK except they have to ask me first.

Telemetry has huge implications and it can be literally anything. Photos captured by a webcam can be "telemetry"; the manufacturer can always say they were verifying their image sensor calibration.


I've dealt with this multiple times, so let me give my perspective.

- It is hard for manufacturers to do this with small teams. Mostly because they do not always have good CI/CD or platforms available to keep being on top of vulnerabilities and so on and so forth.

- Not all manufacturers write their own software and often contract it out to other experts in the field. This includes firmware and app developers.

- If a manufacturer goes out of business, or their website is hacked, or whatever, the devices are going to send information to someone else; this is a big risk.

- A lot of blast damage can be contained if home devices use local / mDNS-based service discovery as opposed to Internet-based services. Many services could then either choose to reply locally or sometimes relay to the Internet if the user's policies allow. Unless people want other people unlocking their doors through the Internet, and they explicitly say so, an Internet connection cannot be mandated. (A rough sketch of local advertisement follows this list.)

- If a producer goes out of business they should be forced to give out a signed firmware that disables the key checking, then they must put up their source code for any users who wish to build and flash it themselves.

- Some of these will not be practical to get manufacturers to agree on. IP issues will arise. Following decent open protocols for firmware upgrades and sharing platform-specific specs can alleviate this. One should be able to re-implement open firmware for their bulbs if everything else shuts down.
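
To make the mDNS point concrete, here is a rough sketch of how a device could advertise itself on the LAN with python-zeroconf, so a controller app on the same network can find it without any cloud rendezvous. The service type, name, and address are made up for illustration.

    import socket
    from zeroconf import ServiceInfo, Zeroconf

    # Advertise a hypothetical door lock on the local network only; nothing here
    # talks to the internet. "_doorlock._tcp.local." is an invented service type.
    info = ServiceInfo(
        "_doorlock._tcp.local.",
        "front-door._doorlock._tcp.local.",
        addresses=[socket.inet_aton("192.168.1.42")],
        port=8443,
        properties={"model": "example-lock", "api": "v1"},
    )

    zc = Zeroconf()
    zc.register_service(info)          # controllers on the LAN can now discover it
    try:
        input("Advertising locally; press Enter to stop...\n")
    finally:
        zc.unregister_service(info)
        zc.close()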


> It is hard for manufacturers to do this with small teams. Mostly because they do not always have good CI/CD or platforms available to keep being on top of vulnerabilities and so on and so forth.

"It is hard for hospitals to keep their ORs clean with small teams. Mostly because they do not always have good cleaning products or procedures available to keep being on top of contaminations and so forth". I do not believe this is a valid argument to protect small firms.

> Not all manufacturers write their own software and often contract it out

Two words: liability chain. This is standard practice in almost every other industry.

I agree with the rest of your points, but I do not think that IP protections should trump regulatory requirements: if a company cannot comply with certain requirements due to contracts with their suppliers, the device should not be allowed on the market.


Liability chain in many industries is a fantastic way to build a large legal moat to prevent competition from small players.

The goal of any regulatory agency must be to ensure as much safety as can be done while preserving the ability of small players to enter the field and compete & while keeping the costs low for consumers. Otherwise, safety becomes a rationale that larger corporations are excellent at spinning to justify more regulatory moats.


A fair point. However, what are we optimizing for? An open/fair market, or consumer safety? Balance is key, but I'm interested in any counter proposals that do a better job.


Consumer safety is long term optimized by having competition. Multi-goal optimization is essential. Try to optimize for just one thing and you’ll quickly go off the rails as a regulator.


> "It is hard for hospitals to keep their ORs clean with small teams...". I do not believe this is a valid argument to protect small firms.

I think your argument here undermines itself. I have not seen an operating room anywhere but a hospital, and hospitals are big. It couldn't really be said that hospitals represent a place with small teams or limited resources. Insurance money is involved, and if you are anywhere other than America, a great deal of government money as well. This is not the picture of a "small firm".

Therefore I must conclude that you either didn't mean to put that hole in the argument, or that you did and that therefore you claim that "small firms" should not create IoT devices.


These are all great points, and maybe a good reason why a voluntary program like this is the way to start, so a higher tier of secure products can begin to emerge. We would also love to see the emergence of platforms that allow small teams to build on top of a secure, update-ready base. Some interesting discussion here: https://news.ycombinator.com/item?id=37394546


> If a producer goes out of business they should be forced to give out a signed firmware that disables the key checking, then they must put up their source code for any users who wish to build and flash it themselves.

Requiring an organization to do even a small amount of engineering as it is going under is simply not going to work in practice.

IMHO the only possible way for something like this to work is to require vendors to upload buildable firmware source into a third-party escrow system before you can ship devices to customers.


Escrow doesn't really work either. You have the original code, but do you have the signing key? Is it still the correct signing key, or has an update rotated it without putting the new one in escrow? Are there different versions of the hardware that need different firmware?

The only way to know that it works is to let people install custom firmware from day one, so they can discover if they can't and raise their objections before the company is defunct.


If your team is not large enough to meet the required standards then you need to get a bigger team, not ask everyone else to forget the standards. You could make the same point about taxes, radio frequency use, etc. This is a non-argument.


>> If a producer goes out of business they should be forced to

Going out of business is a loaded expression that covers many scenarios.

For example, if my company is acquired, am I 'going out of business'?

If I go bankrupt then my assets are sold off to pay creditors. Certainly the IP is an asset, and it ultimately turns up somewhere. Releasing it before sale would be illegal in some places (disposing of assets for below-market value while the business is insolvent).

I know that what we experience is abandoned PalmOS devices, but fundamentally PalmOS is owned by a legitimate company and has some nominal value.

I think mandating requirements on the owner is a better approach. They reduce the asset's value, so if it doesn't sell it could be released as public domain. But that in turn gets very complicated if there are multiple code suppliers and the downstream goes bust but the upstream is fine.


> - A lot of blast damage can be contained if home devices use local / mDNS-based service discovery as opposed to Internet-based services. Many services could then either choose to reply locally or sometimes relay to the Internet if the user's policies allow. Unless people want other people unlocking their doors through the Internet, and they explicitly say so, an Internet connection cannot be mandated.

Networking ignoramus here. Are you suggesting the device could be prohibited from accessing the internet directly, and would be required to relay through a separate device (presumably with better security assurances)? Because that sounds like a good idea.

Are there off-the-shelf firewall (or whatever) products that do this already? Quarantine the IoT devices and limit them to whitelisted, curated endpoints?


Homekit tried to do this. I don’t know if they still do.

This can be done today by only advertising a ULA v6 prefix to IoT devices. The problem is the router has to now have global knowledge of what devices are allowed to talk to what services. Or the device has to work entirely locally with mDNS, DNS-SD etc.


A large class of vulnerabilities (memory-corruption bugs in particular) would be virtually eliminated with better choices of programming language. Then manufacturers wouldn't need a great CI/CD system to keep on top of things, although it's obviously still recommended.


I think the most valuable security feature for IoT devices is being able to work without contact with a central service.

If the value of a device is tied to opening a connection to and occasionally retrieving code from a third party it is inherently insecure. All I have to do is buy the company that owns the central server (or compromise it in some other less visible way) and I now have the ability to introduce malicious code to all devices that are receiving 'security updates.' You won't be able to make a rule to prevent asset transfer (correct me if I'm wrong) so you won't be able to close this hole. And this assumes the manufacturer isn't malicious in the first place.

For people to be able to protect themselves and to protect the value of the property they have purchased (e.g. the company tanks and the central service is lost) a rule should exist mandating minimum useful functionality in a disconnected and/or self-managed environment.


>"All I have to do is buy the company that owns the central server (or compromise it in some other less visible way) and I now have the ability to introduce malicious code to all devices that are receiving 'security updates.' You won't be able to make a rule to prevent asset transfer (correct me if I'm wrong) so you won't be able to close this hole."

Has this actually been a problem in the past? I do not know of any examples of this, do you?

I hate having to create and maintain accounts and subscriptions for so many devices, but I'm not sure it's a huge security problem.


Google's acquisitions of Nest and Dropcam are the two which impacted me personally. Data ended up in the hands of people I didn't want, features were removed that I found essential. Perhaps others can volunteer their stories, I've largely opted out of IoT because of these experiences and concerns.


Suppose you buy a car from manufacturer A. You lose both keys (perhaps you and your partner each bring one on a canoe trip and capsize) so you have no choice but to ask the dealer to assign new ones. You find that Google now owns the entire brand A including its dealer network, and they only offer rekeying service in conjunction with an update that installs what you consider spyware. Do you opt out of the motor vehicle industry?


I opted out of the entire motor vehicle industry for far less.


There are malicious actors in the business of buying popular App Store apps and introducing malware into updates.


Or buying chrome extensions. Even domains.


There are reports of Saudi stakeholders co-owning a multi-billion-dollar stake in Twitter with Elon just to be able to install people who exfiltrate data used for tracking and arresting journalists and activists, for example.


This is virtually impossible due to certs. If you want your device to keep its traffic secure with SSL/TLS or WSS, you have to have valid certs. Thanks to Apple, that either means a device with a one-year expiration date, or an internet connection so you can periodically provide the device with new certs.


> commitments on this label (including the support period) will be legally enforceable in contract and tort lawsuits and under other laws.

When it comes to U.S. laws that touch technology, enforceability is a mess. Spyware, spam, fraud, misleading labels, etc. are already governed by various state and federal laws, yet enforcement efforts are whack-a-mole at best.

For IoT devices, having the proposed requirements sounds good in theory but I fear it is practically unenforceable, particularly for consumer-grade devices manufactured overseas.

However, if powerful IoT platforms are also tied into the new regs - with Google, Amazon, Apple, Microsoft, PTC, HPE, etc. required to audit supposedly qualified devices and ban those that don't meet the standards, with escalating penalties for failing to do so - that might shift the needle.

My 2 cents.


Your point about buyers at scale is really important. The current effort is focused on sellers, but we think that if sellers have to define their security commitments, buyers will pay attention and their risk management people will insist on high standards.

> I fear it is practically unenforceable, particularly for consumer-grade devices manufactured overseas

Also a good point. The way we handle this for RF interference is to look at distributors and importers, not just manufacturers, but there will probably always be an untrustworthy product tier out there.


They're proposing an opt-in labeling program that essentially amounts to the FCC underwriting certain attestations that vendors are choosing to make about their products.

This means that someone applying the label without meeting the standards the label indicates would be guilty of exactly the sort of fraudulent advertising you're describing, and contract and tort law are the relevant mechanisms of enforcement for this.

I'm not sure what you mean by enforcement efforts being "whack-a-mole at best", but if you're expecting some sort of preemptive regulatory barrier to be enforced by a bureaucratic agency in advance, that's just not the way this sort of thing works or is intended to work, and the FCC certainly wouldn't have the legal authority to implement such a regime.

Legal actions for fraud, false advertising, trademark infringement (in the case of trademarked standards certification badges, e.g. UL) are frequently used mechanisms for this sort of thing, and seem to work well enough to ensure that vendors are deterred from fraudulently applying certification labels to their products.


Hmm, yeah. Just as fraud telemarketers set up a new shell run by the same principals when the legal bills come due for their old one, so we're likely to see new labels for a new shell company slapped on the same old insecure IoT box.

So I'm not sure "escalating penalties" is going to cut it. It's still whack-a-mole. You need a way to kill the mole, not just drive it to pop up a new hole.

You need a way to get to the principals. They're the mole.

You need to either make them personally liable financially, or you need to jail them. Nothing else is going to stop serial fraud-behind-a-shell-company.

I'm not sure I have an answer. But whatever answer there is needs to be applied not only to fraud telemarketers (please), but also to fraud IoT manufacturers/resellers.


If there’s a private right of action you can bet the class action lawyers will do the enforcing.


Fair enough, but against whom?

Fly-by-night foreign manufacturers or exporters would be difficult to prosecute. Unless the domestic importer, reseller, or transportation provider can be held liable, even class action lacks teeth.


I think that IOT device manufacturers should be required to support their device for some minimum period of time AND be obligated to release the full source code for the device once they decide to end support. This also requires releasing the keys to any firmware signing mechanism or publishing a firmware update that removes such checks.

The core problem is that without control of the firmware, consumers don't really own these devices. The company can unilaterally decide one day to brick your device and force you to buy a new one. It should be obvious that this behavior is egregiously anti-consumer and anti-competitive.


There's also the problem that electronic devices last a long time -- often much longer than any manufacturer wants to admit.

Vehicles, PCs, printers, and routers can easily last 10 years. Refrigerators and HVAC units can last 20 years or more. And now we're putting "smart" stuff into electric circuits that should last the lifetime of the house. The manufacturer will probably go out of business long before those devices go out of service, and there's no guarantee that there will be anyone to push one final firmware update or release the source code in the hectic last few days of an imploding business.


Some sort of escrow with a dead man switch could solve this. They can reset the switch by releasing an update or affirming that they are still providing service. If no communication is received after a certain period of time, then it gets released publicly.
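
A toy sketch of the check such an escrow service might run periodically; the 180-day window, names, and publish step are all invented for illustration.

    from datetime import datetime, timedelta, timezone

    ATTESTATION_WINDOW = timedelta(days=180)   # invented "dead man" window

    def should_release(last_attestation: datetime) -> bool:
        """True if the vendor has neither shipped an update nor affirmed support recently."""
        return datetime.now(timezone.utc) - last_attestation > ATTESTATION_WINDOW

    # Run daily by the escrow service (pseudo-usage):
    # if should_release(vendor_record.last_attestation):
    #     publish(vendor_record.escrowed_source_and_keys)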


Seconding this thread. There is no reason why a device shouldn't be serviceable in 30 years, regardless of whether the vendor is still in business.

There might be a way to integrate these IoT regulations with e-waste regulations, where the liability for disposal, recycling, and cleanup is related to serviceability.


This is great for hackers but doesn't it make IoT devices incredibly insecure for normal users who wouldn't even know their device has reached end of support?


> doesn't it make IoT devices incredibly insecure for normal users

How secure or insecure a device is is unrelated to whether its source code is public.

Disclosure: I might be biased on this, as I'm a reverse engineer.


I think the parent comment is implying that if the source code is released at the end of the device's supported life, it will be much easier for hackers to find vulnerabilities. Then users who aren't paying attention will continue running that last version, and hackers will attack them using those now-public vulnerabilities.

So you'd still need some mechanism to force-update devices in response to vulnerabilities found in open-source end-of-support firmware.


If it were really unrelated, nobody would pay you to reverse engineer.


Which is why I believe I'm biased: I support releasing source code, even if it sounds like it's going to reduce my pay.


What I mean is, reverse-engineering takes time, effort, and special talent, hence your job. This is the little security moat they get by not releasing source code, or at least not the latest running version. Of course a well-maintained and audited open-source codebase is better than a closed one, but a lot of this stuff isn't well-maintained.

Also, there are high-profile instances of hardware security that rely on obscurity, like secure enclaves or the iPhone passcode unlock. They tend to get cracked eventually, but it's still hard.


Releasing source code could lower the barrier a bit but the main thing I was calling out is releasing the keys - maybe they could be transferred to a trusted custodian instead.


In certain cases probably yes, but maybe still worth it? If you have the keys you still need to get your maliciously manipulated build on the customer's device... And this is assuming the manufacturer even bothered signing and verifying in the first place.

So this would be bad for manufacturers releasing secure, well-designed devices without security vulnerabilities... but if you think about it for a second, isn't this good? As long as there is no known vulnerability, the manufacturer can say the device is still supported, and it costs them nothing, as they have no reason to release an update. And if there is a security issue, then it might be better to have the source and keys after all?


This is not really a solvable problem. Whether or not your iot devices are out of support is just not something anybody has time to think about. It's right up there with those "our terms of service have changed" emails from your bank in terms of priority.


No more than our current situation. A sufficiently determined hacker can break into most IoT widgets given enough time, and decompile the firmware.

The vulnerabilities exist and will be exploited either way. But with source available, we now have the possibility of a whitehat group springing up to patch them.

Would you rather have no chance of ever patching exploits, or give people the ability to patch them?


>AND be obligated to release the full source code for the device once they decide to end support.

This is unreasonable. Code is often reused in the next generation of a product. The company may not have the rights to release all of the code.


A competent technician with access to a workshop can make even 80-year-old vehicles work. That is long past the service life of those vehicles, but it can still be done; an iteration of the same technology is likely still in use in your car today.

That isn't possible for software simply because reverse engineering is not simple. Reverse engineering a small microcontroller's firmware might be possible; reverse engineering even something like old Unix wouldn't be, and a modern operating system definitely wouldn't be, even though there is no legal precedent for that source code to be protected.

In 30-40 years' time, when early Windows and early DOS copyrights expire, do you think Microsoft is going to benevolently make that source code available? It will legally have become public domain, but the source code will still remain closed.

How about firmware for IoT devices being installed in your house? It's likely that that hardware can be made to work for the next 60 years, but you are forced to replace it in 3 when the manufacturer decides it's no longer financially viable to support it.


>That isn't possible for software simply because reverse engineering is not simple

Nor is repairing a car. It is not as hard as you think to RE some random IoT device firmware.


Is it "Here's a manual and a pile of tools, follow the instructions" hard?

Most maintenance on vehicles is exactly that.

The smart fridge that has a buffer overflow leading to RCE cannot be fixed quite as easily as replacing the brakes on my car; neither is easy, but one of those a monkey with a spanner could figure out eventually.


Great, then if they keep supporting the codebase internally, back-porting security patches to an older device should be easy.

Manufacturing millions of widgets that go into the garbage every three years is bad and should carry heavy consequences. If the company isn't willing to support the devices they sell, it should be possible for consumers to do it.


Then they have to acquire the rights to do so for all components before they use those components.


For those of you unfamiliar with the specific challenges IoT patching brings, here is a blog post from just last week on one aspect of the topic: http://tomalrichblog.blogspot.com/2023/08/british-cuisine-de...

FTA:

> I assumed that device manufacturers update the software in their device about every month...he said they do it annually.

Those devices are at least _getting_ updates - there is a long tail of devices whose operational lifecycle [far] exceeds the vendor's support timeframe - in other words, they don't get patches at all N months after release.

The solution to these problems is straightforward - we've been managing it in software for a long time. EOL OSes, Long Term Support (LTS) OS releases, etc - but the device manufacturers are not as mature, and have not been making natural progress to do so.

And since this is HN - there is a startup hidden in the midst of all of this: an enterprise-grade IoT OS that "does security right." Sell to the device manufacturers, allow them to market it as "enterprise-ready" or some such. If the FCC guidelines here are approved, there will be a suddenly increased demand!


>And since this is HN - there is a startup hidden in the midst of all of this: an enterprise-grade IoT OS that "does security right." Sell to the device manufacturers, allow them to market it as "enterprise-ready" or some such. If the FCC guidelines here are approved, there will be a suddenly increased demand!

Agreed. Building an automatic firmware update system from scratch would be burdensome for many IoT makers, but as it becomes necessary or encouraged, we would expect the market to provide a packaged solution/framework that manufacturers could fold into their products. It would be really helpful to have discussion of this on the record. How generalizable do you think such a solution could be? We are aware of the Uptane project, an OTA firmware update framework being jointly worked on by several car manufacturers, but would love to hear more about the feasibility of a solution for IoT devices generally, or for particular classes of IoT devices.


Firmware is fairly balkanized relative to SaaS stacks. I think regulatory pressure is likely to nudge the industry towards more consolidation, which would open the door for this kind of service. But I have no idea what form the regulation should take to produce the right market and incentives.


> there is a startup hidden in the midst of all of this

There are already some companies that do this, but it obviously adds to the cost of making these IoT devices.

E.g. balena.io, and even AWS IoT device management does this.

Maybe there's some way to get the AWS IoT gorilla in the room to weigh in?


IoT devices need regulatory standardization w.r.t a few things:

1. Software stack – big fat "firmware" should not exist. The entire stack should be upgradable safely, securely, and frequently during its official supported lifetime, and should be open-sourced for the owner's own upgrades past end of life. For this, the hardware stack needs some amount of standards compliance.

2. Vendors should clearly declare/advertise the period for which they will support the device. During this period, device vulnerabilities should make them liable. After this period, they should be required to open-source the device drivers and unlock the bootloaders to enable free software alternatives to work on them. For certain classes of devices, there should be a mandatory minimum period of support.

3. Networking capabilities should be legally standardized and verified/certified before release to market, then checked for continued compliance, with fines for falling out of compliance.

3.1 Use of latest mainstream TLS with valid certificates should be mandated for all communication.

3.2 If there is outbound communication from the device, it should be made clear which domains it will communicate with, so that it is easy to allow only those through a firewall and keep everything else locked down (a rough sketch of a machine-readable version follows this list).

3.3 IoT should not accept inbound communication without authentication.

3.4 Follow best practices w.r.t. rollback-resistant, cryptographically verified, secure, and brick-safe software upgrades.
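
To make 3.2 concrete, here is a rough sketch of what a machine-readable egress declaration could look like, and how a home router might consume it. The format and field names are invented for illustration, not an existing standard.

    import json

    # Hypothetical vendor-supplied egress manifest for a device.
    manifest = {
        "device": "example-thermostat",
        "egress": [
            {"host": "updates.example-vendor.com", "port": 443,
             "purpose": "firmware updates", "optional": False},
            {"host": "telemetry.example-vendor.com", "port": 443,
             "purpose": "usage telemetry", "optional": True},
        ],
    }

    # A firewall controller could allow only the declared, non-optional hosts
    # and drop everything else the device tries to reach.
    allowed = {(e["host"], e["port"]) for e in manifest["egress"] if not e["optional"]}
    print(json.dumps(manifest, indent=2))
    print(sorted(allowed))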


> 3.3 IoT should not accept inbound communication without authentication.

Ideally the user should have to specifically consent to inbound communication on an instance-by-instance basis, even from the manufacturer. There are many cases where forced updates are triggered that change/limit functionality unexpectedly, and numerous anecdotes of people's devices requiring an update before use while the person has some pressing need to use them.


That's not what regulation is for, though. The market will regulate companies making bad products. Regulation should only grip where the market doesn't.


That's an overly rosy view of what markets are capable of self-regulating in a situation where network effects are very strong and it's difficult to stop being a customer once you are.


These could be great suggestions for the criteria a product must meet to qualify for a label. I encourage you to file an official comment with your thoughts.


In regards to 3.2, I'd like to propose 3.2.1, where a smart device manufacturer must clearly label and explain A. whether their device requires a connection to a cloud service, and if so, what for and what data is sent/transmitted, and B. what functionality, if any, remains if you refuse.

I have a Winix air purifier that has a sensor and filter lifetime suite that I cannot communicate with because it refuses to connect to my "IoT" SSID that has no access to the outside internet (no, you stupid piece of crap, my WiFi is working fine, now just start sending data damnit).


As I wrote elsewhere, I'd also like a requirement for a standard/compatible rolled-up management dashboard. https://news.ycombinator.com/item?id=37395102

I don't want 30 semi-maintained or unmaintained apps to manage each manufacturer's security settings.


I hate all of these well-intentioned ideas, just like I hated when Apple forced <1 year certs. All of this is in service to online devices, and completely screws devices intended to operate over the LAN only. I find security and peace of mind by blocking a device's access to the internet, but these changes mean that every single trinket that talks over the LAN needs a whole boatload of suspicious-looking internet traffic.

It also makes it so you can't operate devices in offline networks. (This is already a thing due to Apple's cert changes.)


How about requiring devices to accept alternate, Free Software firmware, from the upstream provider?

At the very least, it should be possible after some time period of no updates or insecurity, but a blanket requirement is less susceptible to games.

Probably the best thing to happen to wireless routers is OpenWRT and the other descendants of the WRT firmware.


Possibly weird idea: federal firmware escrow. The OEM gets to put a stamp on their product after submitting firmware source/keys to the FCC. When the OEM either declares the product not supported or provides no updates for X length of time, the files are automatically published to a public repository. Perhaps there is an appropriate license which says essentially that it is almost public domain, with an exception (or fee?) to use it for any purpose other than supporting the lifetime of an existing product.

As others have stated, free software is one way of giving the public ability to keep things up to date but that's almost like the government saying people are allowed to clean up pollution. It doesn't put any pressure on companies to behave better.

Another issue is the buildability of open source code. If an OEM submits firmware source and keys to a third party, even regularly, who really knows whether it is actually functional and complete? Automated tests or sample hardware are possible ideas, but they have their own failure modes and could be difficult to implement solely for this purpose.

Another weird idea for the above. If one requirement was deterministic builds, then in addition to source/keys, a suitable toolchain to build could also be required such that the repository stewards would only need to run exactly what the OEM provides, and if the checksums don't match then it means they are not in compliance.
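
Concretely, the steward's compliance check could be as small as this sketch; the build command, paths, and digest are placeholders, and it assumes the OEM's toolchain really is deterministic.

    import hashlib
    import subprocess
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_escrowed_build(src_dir: Path, build_cmd: list[str],
                              artifact: Path, expected_digest: str) -> bool:
        # Run exactly the command the OEM filed, inside the escrowed source tree.
        subprocess.run(build_cmd, cwd=src_dir, check=True)
        return sha256_of(src_dir / artifact) == expected_digest

    # Pseudo-usage with made-up values:
    # in_compliance = verify_escrowed_build(Path("escrow/acme-bulb-1.2"),
    #                                       ["make", "firmware.bin"],
    #                                       Path("firmware.bin"),
    #                                       "<digest from the OEM's filing>")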


This is an amazing idea and I would only buy a product that has this stamp on it. I would put some additional triggers into the publication of source code as well, notably if the company goes out of business. I would also put some kind of timer and renewal process on it, like a company needing to recertify every 1-5 years (there are pros and cons to different time lengths) and to show that they have indeed been providing actual updates (not just fake ones) and actual support.


The OEM could be allowed to choose their recertification period, perhaps with slight differences in requirements. Perhaps even different options offered by company size. For example, 1-5 employee companies might get a "no recertification, provided as-is" option which releases automatically 3 years after filing. Vendors who re-certify every 6 months could get an extra mark on their stamp or whatever. There are tons of possibilities, honestly; I've been thinking about this for a long time, and now that I'm writing it down it's much easier to come up with more.


Agree this approach seems to be worth investigating further, but as a citizen of a non-US country, I'd like to see a solution that wasn't based on a US-centric set of controls and governance bodies.

These days, with nationalism and populism rampant across the world, I think we need a solution where no one country (or country's leader) can simply decide to turn off critical infrastructure for the rest of the world and/or hold the rest of the world to ransom. Then you run into questions of "do we really want (insert bad country) to be able to expose IOT source code to their evil hackers?".

This is a really difficult problem to solve, but ultimately I think ownership of the "keys" to unlock escrowed code needs to reside with (winging it here...) a body such as IEEE or ISO. Or possibly something like a global council where e.g. any 5 countries out of 7 can collaborate via a sharing of keys to release source code, but no one country is able to do so.


I completely agree that such a thing should not be US-only. There would need to be a clear distinction between one-gov't backdoor and voluntary regulatory certification, because ultimately the goal would be for other countries to follow suit and provide similar/identical certifications. You could look to standards bodies to provide standard implementation details on what "firmware escrow" is, what exact formats and files must be included, etc. IEEE, ISO, JIS, DIN, and all of them could write or adopt the document. But actually running the service and providing the certification is a little closer to a patent office than organizing standards which is why I propose doing it federally. Think Energy Star (which is a US gov't program based on EPA standards) which has been implemented successfully outside of the US.


Classic tech to think of technical solutions to a regulatory problem, but I like it.

Could have the code run in a sandbox where people can apply “external” network traffic trying to hack it (or apply vulnerabilities), inspired by how you can run ML models on kaggle.org on Kaggle servers to validate models.

Have endpoints as honeypots, so if you can access these endpoints you prove you have compromised the code.

If there is no new code with patches the keys are released.

This way FCC/gov don’t need to maintain a technical system. Just build this once.


A challenge to this sort of suggestion is that the device OEM rarely controls all the software which goes into a device. There are third-party modules, hardware drivers, and more, which are also components. Then there are patents.

Either the escrow would have to apply to all software on the device, regardless of whether owned by the OEM or third parties, or OEMs would be required to vouch for all the included software.

I'd prefer the former myself: multiple software and patent assets can be combined, but on support EOL, all those become public domain.

Build / release / update toolchains must also be included.


I have been thinking that something along these lines should apply to most embedded systems, from mobile phones to game consoles to IoT devices. And more, for cases when companies go under: industrial control automation, etc.

However, it's very hard to police: what if the company only provides a header file or so? Or the basic OS but not the UI?

Moreover, what counts as "support"? There have been cases where companies refuse to consider remote code execution a "security issue". What's to prevent token updates that let the company claim a product is supported while it's riddled with holes?

I could also see companies fighting tooth and nail against this if their devices share a common software base. But hey, you have your support incentive right there!


So this feels like an amazing idea... but do we really want to give the federal government the keys to update your equipment remotely and the ability to pinpoint weaknesses in the source? This feels like Edward Snowden's grimmest nightmare.


As I understand it, that's not what's being proposed. The "keys" in this case would decrypt the encrypted source code that's available in a public repository, and there's some logical mechanism (an actual use for smart contracts) that would keep the key in escrow until certain conditions are met (company doesn't renew, goes out of business, etc.), after which it would be publicly released so anyone can decrypt the already-available encrypted source code.


And what happens if the server holding the keys gets compromised? I guess most manufacturers won’t care, but the more reputable ones would have things in their source they consider proprietary and would definitely not want to have to submit it.

Verification that it is, in fact, the actual shipped source might not be trivial either.


What happens when someone hacks GitHub and gains access to private repositories? That slim possibility doesn't stop the vast majority of companies from hosting their source in a private repo.


Framed like that, it sounds terrible. However, consider this:

[1] This discussion is about a federal agency providing certification of products essentially in the form of a stamp (on a device, website, etc.). Nothing is stopping a vendor from committing to and offering the same thing but without government involvement. This could easily be a selling point for the paranoid. Something something blockchain and smart contracts...

[2] Even though we're discussing IoT devices, it's not necessary that they be capable of updating over the air 24/7. Creative engineers could probably devise a method to prevent complete remote takeover by anyone holding the keys– physical switches, additional authentication required during the support period, etc.

[3] Personally, I think the federal government getting access to keys for any IoT device made/sold in the US is the only part of this idea that could already be happening. They can knock on doors or mail subpoenas, plant moles, etc. I would be much more comfortable with a technical solution on the physical device than any presumption of privacy in the current state.


The problem here is the government can’t be trusted with the signing keys. They will simply use them to deploy their botnets.


I like this idea! A code&keys escrow org, yes please


This is also a key point to fighting ewaste and making devices last longer. I have appliances from the 70s including a rotary telephone, that I still use regularly. If you combined mandatory OSS support with repair cafes, you would have a model for sustainable reuse and better security. You may even start a commercial aftermarket in reflashing older devices!


Yeah, if companies that make IoT hardware complain about the costs of keeping old devices updated, then they should be required to make them more user-modifiable and to release source code / signing keys when the devices are abandoned by their manufacturer, so that they can be picked up by the community and development can continue (this also requires some policing to determine when hardware is functionally abandoned, as releasing a minor update once a year that doesn't fix real bugs should still count as abandonment). Repair cafes would be fantastic for supporting small businesses that keep people's old hardware running.

Of course, manufacturers don't want that either, because they make money off of planned obsolescence, and consumers keeping old hardware running makes them less likely to buy new hardware.


Releasing Signing keys seems a potentially dangerous one. Someone could produce malicious firmware, sign it, and convince your device to auto-update with it.

I think (and I'm a security know-nothing, so I could very well be off in the weeds) the firmware should accept updates signed by either of two keys: the manufacturer key, which can allow automatic updates, and a post-service key, which cannot. With the post-service key, either the user has to initiate the firmware update manually, or consent via some other means.

This post-service firmware may very well enable a third key for automatic updates of its own, so there's just a manual step on the transition from manufacturer to some community project you support, not each revision afterwards.
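
Something like this rough sketch of the update check, using Ed25519 keys purely as an example; key distribution and the consent step are hand-waved.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def _verifies(pubkey: Ed25519PublicKey, sig: bytes, image: bytes) -> bool:
        try:
            pubkey.verify(sig, image)
            return True
        except InvalidSignature:
            return False

    def may_install(image: bytes, sig: bytes,
                    manufacturer_key: Ed25519PublicKey,
                    post_service_key: Ed25519PublicKey,
                    user_consented: bool) -> bool:
        if _verifies(manufacturer_key, sig, image):
            return True               # manufacturer-signed: automatic update allowed
        if _verifies(post_service_key, sig, image):
            return user_consented     # post-service key: only with explicit user consent
        return False                  # unsigned or wrongly signed: reject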


Presumably they'd remove the auto-update functionality before releasing signing keys and require that it be physically loaded by a user at that point.


That's assuming they make a final firmware update. Having the 2nd, manual key available from day 1 ensures the device is unlockable with just a release of the key. Having a 2nd key or a signed unlock firmware update are, I guess, two ways to achieve the same goal, but the 2nd key would be better. The 2nd key would likely stay in place forever, in each update, while the unlock firmware would likely end up stuck at the original firmware version, because why would most vendors build two firmwares for each release? It would sit forgotten on a drive somewhere. Using the original unlock firmware could mean leaving a device vulnerable between loading the unlock firmware and the community firmware, so the 2nd key is preferred. It's always ready.

Ideally, the key would also be pre-registered somewhere to ensure they can't skip out on releasing it, but I'm not sure how you do that without it potentially leaking before the device reaches end of support. I guess a code sitting in a lawyer's vault somewhere. Again, nobody is paying the lawyer to refresh his vault's firmware image after each patch, so 2nd key wins over unlock firmware again.

Why a lawyer's vault? I know lots of people would love to take ownership of the device immediately, but from the device creator's perspective, they tend not to like that... especially if a device is a source of subscription revenue. So I'm not sure how you'd get vendor buy in to early release unless it becomes mandatory... which I can't see.


Are rotary phones still compatible with today's standards? If that's the case I might get one too.


It's been years since I've seen a landline, but as of a ~decade ago you could still dial "rotary" by tapping the receiver hook with the correct spacing. (1 click to dial a "1," 2 clicks to dial a "2," etc. with a pause between digits.)


In my case I use a small gadget to convert the pulses to DTMF tones, which feed into a voip connection. The point is that the device itself still works, unlike much iot crap which can't be made to work once the server goes down.


Maybe the presence or absence of this capability could be something disclosed on the label? It's an idea worth thinking about. I encourage you to file a comment with your thoughts on the matter. You probably want to focus on how this information could help a security-concerned consumer/business make a better purchasing decision. Or if you're arguing that this be a hard-requirement for the label, why a device should not be considered secure without this capability.


As far as I remember, the FCC about 8 years ago didn't like OpenWRT, and even forced TP-Link to lock it down.


There was no requirement that firmware be locked down.

The requirement was that consumer radio transmitters could be too easily made to use frequencies and power levels that violate FCC regulations.

If a device had a transmitter where firmware could control those things, and the firmware for the device was one blob that contained everything so letting the user replace firmware meant letting the user control those restricted parameters, then the manufacturer might have to lock the firmware.

There were other possible approaches. One would be to split the firmware into two parts. One part for the radio hardware and the other for everything else. Make it so the firmware update process only allows the manufacturer to supply the first part.


Isn't that a bit overreaching? I can make you a device that spews garbage on any wavelength you fancy, so they're really only preventing accidental radio pollution. Even in that case, it's pretty unusual to prevent a consumer device (other than a radio) from being used in an unlawful way, Part 15 notwithstanding.


People were asking online for help dealing with WiFi interference, and were getting answers telling them how to install open source firmware on their WiFi routers, and giving them exact commands and configuration changes that would set the power higher than was legally allowed or stop them from avoiding channels that were being used by active weather radar (5 GHz WiFi shares channels with weather radar and is supposed to monitor and only use those channels when the radar is not in use).

I suppose you could call that accidental radio pollution, because most of the people doing it probably didn't realize that they were causing interference, but regardless it was becoming a problem.

Hence regulations to address it.

That's the world we're in now. You have a problem, you do a search online for help, and among the answers you often will find some that really should only be used by people with more experience or expertise than you but do not make that clear.


I believe the regulation applies to such a device you make as well, not specifically consumer Wi-Fi products. I.e. if you make a transmitting SDR it's not supposed to allow certain things.

The prevention all comes down to enforcement, though; the law doesn't physically stop you from making such a device, it just means you could get in trouble for intentionally ignoring it and selling a lot of those devices.


You're totally right. I'm trying to make a moral argument, I think if it were unfeasible for an individual to make something (e.g. modern CPU) then you could make an argument for producers limiting them on the basis that it would effectively prevent anyone from doing the banned thing. The fact that it's roughly as easy to reflash an IoT device with custom firmware as it is to make an antenna that produces noise in a forbidden frequency means that you are adding a technical measure to prevent just some of that illegal action and not stopping a determined hacker.

I'm not claiming it wouldn't be an overall social good, but other areas of law and regulation don't seem to function like this (with notable exceptions like photocopying banknotes).


The (particular) law isn't actually aimed at stopping those who just want to go into their garage and produce illegal interference out of malice. E.g. people can create 200 Watt space heaters in their garage easily but it'd be odd to then conclude CPU regulations wouldn't stop people from doing bad things with CPUs.

That is to say, it's infeasible the average someone will make a working Wi-Fi radio which uses the Japanese channel 14 (involves changing what is sent by the radio, not just raising the frequency... unless you want to accurately re-adjust the frequency inside of your smartphone too) but it is reasonable to expect the average someone might just load some open firmware from the internet which allows them to set the channel to 14.

I will say I agree it's different from a lot of regulation. On the other point, I think that has more to do with radio spectrum itself being very different from most things (i.e. a shared public resource) than with inconsistency.


Vint Cerf and I shot that proposed anti-dd-wrt regulation down thoroughly: filing here:

http://www.taht.net/~d/fcc_saner_software_practices.pdf

Substitute IoT for router in everything we wrote there on pages 12-13, and that seems to be a starting point everyone around here has come to think is necessary. I would prefer not to summarize such a large filing here.

Also Dan Geer wrote extensively on these topics at the time.

Of late I have been strongly suggesting that software be at least "built in America": https://blog.cerowrt.org/post/an_upgrade_in_place/

I care most deeply, first, that the front doors to our houses - the home gateways - are properly secured, kept up to date, and have IPv6 and bufferbloat fixes on them.

IoT devices belong on their own VLAN...


IIRC the main objection was that it could be used to do something with the radio (boost power?) that caused the device to exceed FCC limits for a consumer radio? Something along those lines?


It was about the radio and being able to modify the radio firmware.

Many modifications would void the FCC certification of the device due to changed parameters (more power, disabled anti-interference mitigations, out-of-band channels, etc.). This was avoided by making the radio firmware a separate blob, but there was a real danger of the whole device having to have signed firmware.


The main issue these days is the 5 GHz band, which might interfere with radar. That's mostly an issue outdoors near an airport, that is, it affects practically nobody, so the solution is to listen on the problematic bands, disable them when radar signals are detected, and otherwise use them for a bandwidth boost.

These days this seems to be mostly done in the wifi chip firmware (which is then signed and kept under lock and key to make it tamper-proof), but back in the day it was too easy to circumvent the mechanism.


For those out of the loop these documents have a good introduction to how free software interacts with radio regulations

https://wireless.wiki.kernel.org/en/developers/regulatory/st... https://wireless.wiki.kernel.org/en/developers/regulatory

TL;DR: manufacturers and "serious" companies won't touch anything that could potentially be configured to emit signals that your local government doesn't like. So Linux has to pretend it doesn't allow you to do that (even though anyone with two brain cells can patch it).


As these regulations change frequently, I am glad that Linux makes it possible to update this database and the devices in the field. For example, a portion of the 5.9 GHz spectrum became available.

https://www.computerworld.com/article/2993112/vint-cerf-and-...


So Linux takes government's side, not user's?


I'd call it "safe defaults".


I am all for alternative free software firmware. But I don't think it addresses IoT security in any meaningful way.


Why? The person you are replying to outlined one major example where IoT security was improved: wireless routers. Not allowing users to update the software on the hardware they own is just a botnet waiting to happen.


I'm sure the number of routers running OpenWRT is dwarfed by the number of OpenWRT-compatible routers running vulnerable, stock firmware.

Allowing people to install software on their hardware isn't a cure for vulnerabilities. It's a step in the right direction for sure, but it's a very small one from the perspective of something as huge as "IoT security".


We have worked very hard in the OpenWrt and Linux projects to make it easy to update them in the field. Linux distros, Android, Apple, OpenWrt, etc. have this facility built in now. IoT should also.


Devices should have the ability to run whatever software the user chooses. My point is that simply allowing this isn't enough to ensure those devices are secure.


But that's just because manufacturers desperately hide the fact that their own firmware is based on OSS, and that there are alternate stacks available.

Imagine instead that vendors had to acknowledge the structure of their firmware and document the (usually obvious) hardware interface from the point of sale. That would automatically solve the issue of out-of-support(-by-vendor) hardware.


The solution without free firmware (and I don’t like this) is that the device bricks itself at the end of its scheduled lifetime.

Which is to say, you are buying a multi-year lease up front. And the manufacturer should send you a recycling return box.

This is a more honest way to sell these devices.

Consumers that would not care about length of security updates will suddenly very much care how long their “lease” is… and manufacturers would compete on the length of that lease (which is where the FCC could require security updates for the length of the lease period).


That just forces e-waste. Aftermarket firmware lets a device stay useful indefinitely.


Agreed. I fully expect to see the routers we used in the cerowrt project from 2008 still operational for 10-20 more years. There's one out there with 4+ years of uptime that I know of.

https://blog.cerowrt.org/post/an_upgrade_in_place/


99% of users don't know their IoT devices have firmware, nor that it can be updated.


Maybe that figure would change if the firmware could indeed be updated.


It would change, but again -- it wouldn't be appreciable.

Security policy is needed that accounts for the behaviors of the vast majority of users.


Users update their phones; there's no reason they can't be educated to update their other devices.


Most non-technical users that I know don't actively update their phones and push back when I tell them that they need to do so faster than the automatic process because of an actively exploited vulnerability.


They may have trusted family members, friends, or neighbors who they feel comfortable allowing the management of their internet connected devices.


No. As long as their IoT device is still working, consumers couldn't care less about security updates.


What do you mean by "no"? Are you denying the existence of my grandparents who trust me to manage their devices?


I am saying that people like you are not enough to help the 99% of people who have an IoT product.


Apart from Smart TVs, most people don't have an IoT device to begin with.


Smart speakers, printers, thermostats, light bulbs, security cameras, door bells, locks, smartwatches, TV sticks...


I don't count a watch as an IoT device. And most on the market are Apple devices.

Apart from printers and smart TVs, less than half of US households have any of those other devices.

https://www.statista.com/statistics/1124290/smart-home-devic...

Anecdotally, most people I know who own those devices are the more technically inclined. Especially the thermostats and light bulbs. Even the doorbell surveillance networks have low penetration.


This approach may function effectively with your close family members. However, it can sometimes fail when your cousins won't let you near their IoT devices because they view you as the hacker or tech enthusiast who might tamper with their gadgets.


So what? Just because there are some atypical people doesn't make it a "no".


Free software firmware would be great for free software lovers and tech experts, no doubt. But sophisticated users who'll take advantage of things like that are only 1% of the market.

But if the aim is to stop DDOSes from botnets of poorly secured IOT devices, we need something to help the other 99% of the market.


> But sophisticated users who'll take advantage of things like that are only 1% of the market.

Most folks can't or won't do lots of things in their lives (e.g. plumbing, electrical, construction, lawn services, Automotive).

The main thing blocking routers and IoT devices is the control every vendor wants to hold over their customers' devices after sale.


Even if you give control to the users, it's up to them to use it.

I'd argue that the main blocker to IoT security is the lack of culpability on the part of device manufacturers. I don't want to go so far as to suggest that companies should be wholly liable for software bugs, but vulnerabilities that are brought to the attention of the company privately or disclosed publicly absolutely should be their responsibility to address.

For you or me (or most of the folks here I suspect) we feel better if we had the ability to decide what software our fridge runs, but for 99% of people they're better off if their fridge's manufacturer provides them with regular security updates for the life of their product.

That being said, these aren't mutually exclusive. In a perfect world we'd have laws compelling fridge companies to allow 3rd party software if they don't keep their firmware up to date.


I'd argue that in an ideal world, a fridge wouldn't have networking capabilities!

Even if I don't buy one, I dread to think of what might happen if enough fridges across the world stopped working all at once (demand for non-perishable food and fridges would skyrocket).

For the sake of consumer safety in our imperfect world, there should be a safe-mode hard switch fallback for any life-critical and/or high wattage device that gets networked.


Replacing the firmware can allow knowledgeable users who want to secure their devices to improve their security. It can also allow malicious actors to replace the firmware (or trick users into replacing the firmware) with something less secure. Allowing users to replace the software on the hardware they own is also a botnet waiting to happen.


It allows users to replace insecure software with secure software. And it allows updates long after the company drops official support of the device.


That's great for the 0.1% of users who will do that. As said: I'm all for it. But the problem is the other 99.9%.


Seriously. I'm a SWE and I would throw out a TV and get a new one before spending hours, minimum, figuring out how to switch the firmware to an open source version.


Because right now we have to actually perform some exploit to run custom firmware most of the time. What if there were just a toggle in the settings menu for which repo to look at?


Probably not, honestly. I get paid to do things like manage dependencies, and I'm not trying to do it at home. If this was one and done, just click a button and forget about it again, then maybe, but if I have to do things like think about what model number I have, and is it compatible with this version of the firmware, then no way.


This is kind of adjunct to the right-to-repair question. "Smart" features are being added to things that a person would expect to have a long usable lifetime--like cars, kitchen appliances, and so on. If the manufacturer is saying, we'll only support the "smart" features for five years, one would be inclined to opt out on that unless there is a way to ensure the appliance remains useful.


Openwrt is not the best example. Community sucks, some routers are full of bugs and the security is not great either.

In general even if I like open devices and having the option to use my own software, this is not a solution for most of the consumers.

It is not a solution even for the enthusiasts who know how to flash their own firmware, because even if they do it a few times initially, eventually they stop doing it.

You need a system that can update automatically even when you are busy with another project.


OpenWrt is surely lacking in many aspects, but all the points you brought forward also apply to the manufacturer firmware, and those are even less user friendly and cannot be modified.

There are a lot of open and closed firmware projects building upon OpenWrt.


> but all the points you brought forward also apply to the manufacturer firmware, and those are even less user friendly and cannot be modified.

That's quite a reach of a claim. All manufacturer firmware are buggy with poor security? That's very obviously false. With closed manufacturers, the history is that it's a mixed bag, not a blanket: some are excellent, some are very poor.

OpenWrt has been mediocre, and all the negatives about it do not equally apply to all closed manufacturer firmware.


> All manufacturer firmware are buggy with poor security? That's very obviously false.

guess we have a difference of opinion then


I am not clear how "all the points you brought forward also apply to the manufacturer firmware"


There is absolutely no scenario where I want the firmware of any of my infrastructure devices updating without my say so. Even if there are dire security consequences of not updating.

If a firmware update on my smart watch bricks it, who cares? But if my entire house/office is without an internet connection because of a bug in the router update, then I don't want to waste time determining if it's my ISP, my physical connection, my local hardware, or the firmware update that just occurred silently. I want to send the "update" command, note that network response did not resume within 5 minutes, and revert from there.

A lot of folks tend to think of firmware updates as identical in complexity and risk to any other software update. I submit that if it can't go wrong in such a way that it requires an in-system programmer to fix, it's a software update, not a firmware update.

In that sense, I think true firmware updates (e.g. BIOS updates and the like) require a different set of regulations than your standard IoT security updates.
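For what it's worth, the flow I want (update, wait for the network to come back, revert otherwise) is simple to express. A rough sketch, assuming a bootloader with A/B slots; the commit/revert hooks and the gateway address are hypothetical stand-ins:

    import subprocess
    import time

    HEALTH_TIMEOUT = 300  # seconds to wait for connectivity before reverting

    def network_healthy(gateway: str = "192.168.1.1") -> bool:
        # Stand-in health check: a single ping to the upstream gateway.
        result = subprocess.run(["ping", "-c", "1", "-W", "2", gateway],
                                capture_output=True)
        return result.returncode == 0

    def confirm_or_revert(commit_new_slot, revert_to_old_slot) -> None:
        # Runs on first boot of the new firmware. commit_new_slot and
        # revert_to_old_slot are hypothetical hooks into the A/B bootloader.
        deadline = time.time() + HEALTH_TIMEOUT
        while time.time() < deadline:
            if network_healthy():
                commit_new_slot()      # mark the new image as good
                return
            time.sleep(10)
        revert_to_old_slot()           # fall back to the previous image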


I see two essential points (that might have been addressed in another comment somewhere else in the thread, but I can't read everything) regarding pushed firmware updates:

• It happens, especially if the update has been a "quick fix" to a security issue, that the update introduces unexpected behaviours, or incompatibilities. Supposing this was just a "security-only" update that doesn't change any features, I would approve it, and then discover it breaks something in my installation (e.g., compatibility with a specific device or software I'm using). In that case, I need to be able to rollback the update and run the previous firmware version (possibly mitigating the security issue in another way, if it's properly documented) to avoid serious issues that, depending on the device, might prevent important equipment from being operated.

• For firmware updates that include more than security fixes, approval and the possibility of rolling back are even more important. It's quite common that updates remove seldom-used features a minority of users depend on. It even happens that some features get removed and replaced by subscription-only services, which is even worse.


Also, one of the single most important parts of security is human manageability. Having to go into a separate proprietary app for each manufacturer is not manageable.

There's this level of management at the Google/Amazon/Apple level, and in some cases Google even lets me push firmware updates (the Sennheiser Ambeo is an example), BUT there should be a requirement that security settings in firmware be exposed in a way that they can be manipulated in a rolled-up dashboard. I shouldn't have to depend on a first-party app staying maintained to get to the settings for a device.


The best alternative firmware example for true IoT devices is Tasmota [1]. Erase the manufacturer firmware on every ESP device the day after purchase to avoid those careless manufacturer firmwares.

[1] https://tasmota.github.io/docs/
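For the curious, flashing is usually just a couple of commands. A hedged example driven from Python, assuming an ESP8266-based device (the serial port, binary name, and offset will vary per device, and ESP32 layouts differ; check the Tasmota docs first):

    import subprocess

    PORT = "/dev/ttyUSB0"      # serial adapter; adjust for your setup
    FIRMWARE = "tasmota.bin"   # release binary from the Tasmota project

    # Wipe the stock firmware, then write Tasmota at offset 0x0 (ESP8266 layout;
    # ESP32 devices use different binaries and offsets).
    subprocess.run(["esptool.py", "--port", PORT, "erase_flash"], check=True)
    subprocess.run(["esptool.py", "--port", PORT, "write_flash", "0x0", FIRMWARE],
                   check=True)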


I'm a big supporter of the idea of applying right to repair principles to software, but I don't think it should (or legally can) be implemented by fiat of unelected bureaucrats at the FCC. Labeling requirements like what the OP is proposing seem much more palatable to me.


I like this angle. Require device manufacturers to actually comply with OSS licenses and extend it to require providing the means for consumers to build upon the software as you mention.

Furthermore, if the concern is national security, then I think some of the onus should be on the corporate consumers of such devices. Holding them responsible for doing due diligence on their vendors seems easier to regulate and enforce than trying to regulate supply side.

Of course, this leaves the general population without a clear solution to updates if the process of updating using alternate channels has any amount of friction whatsoever. I'm clueless as to what to do about that. Regulating it entrenches established companies. Not regulating it maintains the status quo.

Anecdotally, it feels like software has gotten far more secure in the past decade without regulation but security theater and concerns around national security have grown considerably faster. This isn't easy to measure of course, but that's how I see it.


We’ve seen manufacturers abuse ongoing access to devices to turn off features the device came with at the time of purchase or convert one-time-fee features into subscriptions. One of my concerns is that security updates are strictly defined in a way that prevents this type of regulation from being used as cover for these shenanigans.


I just want to reaffirm the importance of this point. I've used an open source solution named Home Assistant[0] to manage my own network of IoT devices that I don't expose to the internet. I want to stay local because of the risks involved with the internet and with trusting companies to protect such private data.

As such, I look to purchase relatively open devices. But companies keep trying to inject themselves as a middleman, sometimes after the fact. In that case I'm left with a device that becomes e-waste. I don't know what other actions are being taken in regards to subscriptions, but it's a problem here.

[0] https://www.home-assistant.io/


> One of my concerns is that security updates are strictly defined in a way that prevents this type of regulation from being used as cover for these shenanigans.

And then as a manufacturer you need to pay someone to certify that, or otherwise risk a class action lawsuit?

What about hardware that uses third party software? (Either because it's a genuine third party, eg when you put open source on your router, or because the manufacturer split into two companies to exploit a legal loophole?) Can open source software only make releases that update automatically after getting certified, or risk getting sued otherwise?


I'm not a US citizen, but I too would cosign the idea of not using security updates as a vector for pushing monetization "features".


FWIW, seeing a security compliance label on an IoT product wouldn't mean anything to me as a consumer. There is no such thing as computer security in 2023, and there are no hints that security will exist at any point on the horizon. Even the biggest names in the field cannot put out secure products. Products from well-meaning manufacturers are going to be absolutely riddled with security problems, and putting a sticker on the box won't change that. It is literally impossible to put out a software product with anything resembling security today. It'd be like putting a "secure against bricks" sticker on a window. Our industry is a joke. Building secure software products can't be done without completely rearchitecting how our industry operates, which isn't going to happen.


I understand your skepticism. That's why I want to see the label functioning as something like an enforceable representation to consumers. If someone wants to sell brick-proof glass, and get a sticker from the US Government saying so, it better be brick-proof.


Well, my comment is predicated on the, apparently erroneous :), assumption that no glass is brick-proof. It is impossible to build a secure software product with our current tooling & development practices. The number of security flaws in every software product is so high as to make the label meaningless. I don't think there's a meaningful distinction to end consumers between "this product has 1,000 holes, 100 of which are publicly disclosed" (i.e. no label) and "this product has 900 holes" (i.e. with label).


A car that's safe to crash in is also impossible to build, but the NHTSA has standardized crash tests, built up over time, that have meaningfully made cars safer.


+1 for having bare minimum requirements for IoT sellers against some standard testing criteria to raise the bar.

Another important point: the NHTSA also gathers and publishes numbers on accidents. Having some regularly published numbers would certainly shine more light, and it is probably the lowest hurdle to cross from a political standpoint.


> the, apparently erroneous :), assumption that no glass is brick-proof

That reminds me of:

> With sufficient thrust, pigs fly just fine.

https://www.rfc-editor.org/rfc/rfc1925


> If someone wants to sell brick-proof glass, and get a sticker from the US Government

In a free society, why would we ask government (lowercase g) for a window certification sticker? Should government also provide condom anti-breakage stickers? If we want this, maybe UL can set the standard and ask for volunteer testers to affirm the condom or window anti-breakage quality.

Or maybe we can put the Bell System back together and let them regulate what devices may connect to the network. That led to expensive monthly handset charges.


Are you suggesting the stakes are the same for digital communications as for condoms? How big of an actual problem is condom breakage? How big of a problem is digital surveillance and theft? How do these two problems compare economically today?

Reasons I think we might want some government certification that has real teeth include: the freedom to protect and control our own digital data. A statistically high rate of surveillance and cyber crime with no tools to prevent it impedes the very freedom you’re defending. Absolute freedom for all cannot exist. You can’t be free to keep your money & privacy while I’m free to take it. Real certifications with enforcement teeth wouldn’t solve all problems, but it might make an actual dent. It would be nice to have national security and privacy standards, make purchasing decisions easier (actually sane), prevent some of the crime before it happens, and reduce the crime and surveillance that we know exists. That’s just from a consumer point of view. I’m sure there are many many companies and organizations who would love to be able to have some level of trust in their equipment purchasing without expensive vetting (or far more realistically for most orgs, little to no vetting at all, just hope).

Didn’t the government break the Bell system apart in the first place? When did they get back together and upcharge handsets? I don’t know what you’re referring to. Are you saying that what was needed after the Bell breakup is stronger regulatory oversight with bigger teeth?


Because there's no such thing as a completely free society and those who believe there is or should be would be the first to be taken to the cleaners by unscrupulous or incompetent actors.

The condom comment is absolutely ridiculous because there are loads of regulations regarding condoms from the FDA. Unsurprisingly you aren't allowed to sell condoms that are likely to break.


This take is simply not based in reality. Compare what it took to gain root access to a computer 20 years ago to a modern iPhone and tell me again that there is absolutely no point in caring about security.


Amusingly enough, being able (or not being able) to gain root access to my own devices is one of the main reasons I can't address or verify their security - especially as official support is dropped over time.

Modern phones and other appliances have (or are) computers to which it is nigh impossible to operate as root. You might say you have to pwn them even if you supposedly own them ;)


> There is no such thing as computer security in 2023

This is absurd.

Even the passive basics like relying on your free email provider's filtering and running Windows Defender is going to stop a huge number of attacks.

If you're expecting perfect security, you'll be disappointed -- but we can't declare complete bankruptcy.


Scroll down: https://arstechnica.com/author/dan-goodin/ This is just a teeny tiny sampling of the security vulnerabilities disclosed every day. Our industry is just not built with security as a goal, and even if we started caring about it today, we have 50 years of not caring to patch up. Caring about security in a meaningful way (i.e. formal verification, engineer licensing & liability) is really, really expensive. No one is going to carry that burden when their competitors don't have to. The end result is the situation we find ourselves in: there is no such thing as computer security, and any networked computer should be considered compromised by default.


I'm sorry, I can't really debate with a reductionist claim that everything is 100% fucked when it's readily apparent that some threat actors are still being stopped.

For what it's worth, I'm not disagreeing with your points about needing drastic, systemic improvements in how we handle security.


Quote from one [0] of the articles at the linked Ars Technica index page:

«A ragtag bunch of amateur hackers, many of them teenagers with little technical training, have been so adept at breaching large targets, including Microsoft, Okta, Nvidia, and Globant, that the federal government is studying their methods to get a better grounding in cybersecurity.»

My opinion is that we fail completely to build secure systems in a forward thinking way and the fact that we manage to stop threat actors that exploit holes that are already known to us is insignificant.

[0]: https://arstechnica.com/security/2023/08/homeland-security-d...


> It'd be like putting a "secure against bricks" sticker on a window.

I lived in a house that had such secure windows. I even witnessed someone trying and failing to smash a window with a brick.


We installed a storm door not for protection against storms, but against an intruder armed with a brick and/or hammer, because our front door was 85% a single pane of glass.

It seems like a bad analogy on the GP's part.


I completely agree that it is meaningless to assert that something is "secure". Even very well secured things have been hacked, and the smallest of mistakes can have huge impact.

What would be meaningful is an assertion that some basic set of secure practices have been or are followed. For example, that there are no default passwords on a device, that security updates will be provided on some defined schedule, that network protocols meet some reasonable standard of security, etc.
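The "no default passwords" item in particular is cheap to implement at provisioning time. A purely illustrative sketch of one approach, generating a random per-unit credential at the factory that gets printed on the label (the serial number and record format here are made up):

    import secrets

    def provision_device(serial_number: str) -> dict:
        # One unpredictable credential per unit; never ship a shared default.
        return {
            "serial": serial_number,
            "initial_password": secrets.token_urlsafe(12),
        }

    record = provision_device("SN-000123")
    print(record)  # would go to the label printer and provisioning database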


I think it might be useful if we kind of wargame prospective regulations, following all the ways people might abuse them; then you might get something that works. Maybe lawmakers already do that? This thread feels in that spirit too…


Does the IoT company called "Eve" give hope to the industry? From what I understand, they take a security first approach to their IoT products. I haven't tried them out yet though.


There is no security against all the threats you can possibly imagine, but being free of well-known, stupidly simple vulnerabilities is still good.


As a developer and a consumer, what I'd really like to see is:

- Manufacturer voluntary guarantee of 1/3/5 years security updates with an expiration date.

- Separation of functionality and security updates.

- The ability to "turn off" connectivity and retain full local functionality.

- An industry security certification like UL.

- A single point way of identifying and validating devices.

As it is, I avoid using IoT mostly for security reasons. Having worked in security for many years I have seen the best and the worst. Having security isn't a panacea either - it needs an ongoing management & reporting infrastructure.


> Manufacturer voluntary guarantee of 1/3/5 years security updates with an expiration date.

I just have to point out that these are all extraordinarily short numbers. There are industrial control systems that are still in operation despite being made out of mechanical relays from before the advent of microprocessors.

We got used to electronics getting replaced every 3-5 years because if it's a laptop by then it will be considered slow and have a questionable battery. But these devices are now being permanently affixed to real estate.

We need a way to update these devices that will outlive the manufacturers. Because many of the devices will.


> these devices are now being permanently affixed to real estate

I predict the NEC will start demanding the use of Wago style splices, no more wire nuts, due to how frequently people are swapping out smart switches and the like. Even non-smart dimmers have been changed multiple times in my residence due to evolving LED compatibility (another "wild west" situation right now). I haven't broken any copper, but the increased likelihood is pretty obvious. Electricians may start seeing more pigtails installed for no other reason than breakage.


That's for the people who actually change them out.

Then you're going to have the guy who loves Smart Thingies, fills his house with them, and sells it to someone content to use the Smart Switch as a switch and the Smart Stove as a stove even if they're >10 years old and none of the smart apps are supported anymore.

But they're all still sitting there soliciting incoming connections.


>-The ability to "turn off" connectivity and retain full local functionality.

>- An industry security certification like UL.

I don't think those should be separated. UL is about safety and, while insecurity is roughly the software equivalent of a device's propensity to suddenly catch fire, reliability is also an important part of safety in all but the most frivolous applications, and local functionality is an important consideration for reliability.


The Nest thermostat being one example; personally, I'm very happy I have IR remotes for the A/C at home, because I wouldn't want to be unable to turn on the cooling during one of the recent heatwaves over in Europe just because the Internet is down and the control app can't connect to the damn cloud.

(Not that I mind networking in general. Operating those A/C units via Home Assistant app is a glorious and pleasant experience - entirely unlike the vendor's official app.)


Has much consideration been given to labeling when a third party cloud or paid service is required to use the device? As somebody who uses IoT devices "locally" on my private network, I want to know my data will stay local and protected. The recent issues with Eufy doorbells claiming to be under local control [and encrypting data], but actually sending data to the cloud, stand out to me as an example where labeling and enforcement could help.

[0] https://arstechnica.com/gadgets/2022/11/eufys-no-clouds-came...


Right now, the actual requirements for a label are totally up for grabs. This would make for a good public comment, in my opinion.


Thank you for your response, I'll consider submitting it.

For context, I've thought about commenting on issues in the past, but especially on the heels of the fake comments regarding net neutrality, for lack of a better way to put it I'm left feeling outgunned against such sophisticated lawyers and companies. This may be a little paranoid, but from a risk perspective I also worry about having my name attached to comments that go against the interests of large companies which dominate the marketplace. I also have hesitation about identity theft and uncertainty about the process.

Again, thank you for bringing the conversation home, so to speak. I'll look at the process again.


> required to use the device

The Eufy story was blown out of proportion. Alarmist tech clickbait. There was no requirement to use the cloud. People were misled into believing their Eufy cameras were spying on them, or doing bad things, or easily hacked by anyone armed with the "knowledge" found in the arstechnica story and numerous repeater channels.

Today in 2023, the Eufy cameras are solid, and I like how they work. Zero subscription costs, local storage expansion, optional cloud storage, no cloud dependency, and the cameras keep recording if your home internet goes down. If you want thumbnails included in push notifications, then tick the box in settings where it says underneath: "thumbnails will be temporarily stored in the cloud".


I really appreciate you directly going to the community for feedback.

As someone who writes software for IoT devices and has worked in the past on security in the IoT space this is sorely needed. By far the biggest issue in my view is that manufacturers are not motivated to take device security seriously since they are largely isolated from any fallout. Device manufacturers already have to pass certification for RF emissions and safety among other things and should have to pass certification for at least a basic security audit on the device and the services the device connects to. Even self-certification would improve the current situation.

For many device types there exists some form of open source OTA update software or a commercial offering. In the last few years there has been significant maturing of the tooling in this space but the security aspect is often left as optional even though the tooling often makes it fairly easy to add. At this point I think the industry just needs a little push to make secure OTA updates the standard.
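To give a sense of how small the missing piece usually is, here is a hedged sketch of the checks a secure OTA client can perform before applying an image; the manifest format, key handling, and version scheme are hypothetical, not from any particular OTA tool:

    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def check_ota(manifest_bytes: bytes, manifest_sig: bytes, image: bytes,
                  release_pub: ed25519.Ed25519PublicKey,
                  installed_version: int) -> dict:
        # 1. The manifest must carry a valid signature from the release key.
        try:
            release_pub.verify(manifest_sig, manifest_bytes)
        except InvalidSignature:
            raise RuntimeError("manifest signature invalid")
        manifest = json.loads(manifest_bytes)
        # 2. The downloaded image must match the digest pinned in the manifest.
        if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
            raise RuntimeError("image digest mismatch")
        # 3. Refuse rollbacks to older, possibly vulnerable versions.
        if manifest["version"] <= installed_version:
            raise RuntimeError("refusing downgrade")
        return manifest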


> The FCC recently issued a Notice of Proposed Rulemaking [2] for a cybersecurity labeling program for connected devices.

That appears to me to be the wrong way to go about this, and it has specifically to do with how IoT security is a problem.

The most severe cases of IoT security problems we have seen were things like mass botnets, where plenty of devices of the same type were hacked and then used for things like DoS attacks. Notable cases include the DoS attacks against Brian Krebs for some of his reporting.

The important thing to understand here is that the device owner is not the primary victim. That's a third party.

This is not about consumer choice, because consumers by and large do not care, because they are not the people being affected by this. An optional security label tries to address it as a consumer choice problem, which it isn't.


Thanks for this great observation. I encourage you to share it in an official comment. In a final rulemaking, perhaps we could make it clear that the purpose of the label is not only to protect the purchaser of the product but also anyone who might be injured by way of a compromised device. In an FCC enforcement proceeding, we have broad discretion in assessing damages. In contract, the third-party beneficiary doctrine could allow victims of such attacks to enforce label commitments. And in tort, statutory duties apply to anyone who is within the class of people that the law seeks to protect. So the law is flexible here, but it depends on what exactly our final rules say.


That’s only one aspect of the problem.

Lots of people have been bitten by suddenly unsupported devices.

I think it could do some good.


Sure, but the question here seemed to be about security, specifically. What you're talking about is definitely a problem, but it seems like a different one.


Making "x years of security updates" mandatory is likely to end up in a warning disclaimer every time you turn on the TV after x years: "This device does not receive any more security updates. You may be at risk."

That will either make a large part of consumers paranoid or annoyed. So they will replace a TV that is in perfectly working conditions with a new TV.

Samsung, LG and the consumerist economy would love that!


I think you are underestimating how a well known „secure“ label (or lack thereof) could influence customer behavior. It’s not that they don’t care - they (understandably) lack deeper knowledge and therefore don’t base their purchasing decisions on how long they will get updates. „If sticker X is not on the package I will get hacked“ is much easier to grasp.


> „If sticker X is not on the package I will get hacked“ is much easier to grasp.

Hmm. In the grocery store, where this idea comes from and has the most persuasive history in govt. regulation, where manufacturers own the front of the package and regulators own the back, what is the healthiest food?

The produce and meat. Which has no nutritional label.

There's no such thing as a secure IoT device. There's absolutely no such thing as a secure connected device that is also cheap.

If you build your computer from commodity parts, it tends to be the longest lasting and most secure. It is usually the most expensive.

Anyway, what would the label for a PlayStation 5 and an iPhone 15 look like? Miles long.

Then, for the consumer buying the cheapest smart plugs off Amazon? Like one paragraph the vendor copied and pasted from the Internet, along with all the other legal shit they deal with.

Who should be regulated? I guess Amazon and Walmart, the retailers; they are the real gatekeepers. That's what the EU does! Which doesn't fly here. The Waltons live here, not in the EU.


Exactly. I don't see the situation improving until either the owner or, preferably, the manufacturer of a device that participates in a DoS attack is held partially accountable for said attack.


Well, you can't hold somebody accountable if there isn't even a label or information somewhere saying that what they are doing is dangerous.


Sure you can. For example, we have vehicle codes that hold people accountable for dangerous driving.


We don't hold people accountable for buying the wrong model of car and using it.


We do -- in a sense. For example, in CA, you can't lawfully drive a car that has failed a mandatory emissions test.

In some sense, cars are better examples for responsible ownership, because almost every state requires you possess insurance to drive it. The price of an insurance premium is (supposed to be) commensurate with the risk of driving it, and insurance premium rates do influence the market for vehicles.


Even if consumers don't necessarily care about security, required labelling gives brands an opportunity to stand out from one another. If I'm looking at two products on the shelf, where one claims to have greater security and the other makes no such claim, I'm likely to buy the more secure one, even if I don't necessarily care much about security. If getting the secure label is relatively cheap (which it should be, since most of the issues we see are the product of laziness rather than being especially hard to fix), then we could see the market dominated by products promoting high security, even without customers ever caring that much about insecure devices.


Oh man, you sound like the type of person who would fall for the intentionally misleading labels that make something sound like one thing when it is in fact absolutely not that thing. Just yesterday, there was a link to an article about the lies on food packaging.

So, labeling requirements are one thing, but requiring that the information be straightforward and leave no room for misleading claims would be great. I just don't think there's ever going to be a way to prevent someone from finding loopholes.


There are a couple of NIST papers on specifics for labels:

https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.02042022-2.... https://www.nist.gov/itl/executive-order-14028-improving-nat...

They're in FN 20 of the linked proposal for rulemaking (which is 48 dense pages and which I don't expect anyone here to have had a chance to read yet.)

If you find yourself skeptical about the NIST proposals, please feel free to comment on the record!


"This milk contains no <insert illegal additive>!"


A big issue with botnets running on device owner equipment is the amount of bandwidth they "steal" from the device owner. Especially for device owners who are on constrained networks (such as a mobile/satellite network) this can be a really expensive issue for the device owner.

So while the device owner may not be the primary victim, they can definitely still be heavily affected.


In order to make it a consumer problem, we'd have to make it a criminal violation to participate in a botnet. We could make it a punishable infraction I suppose, much like a speeding ticket for automobiles, but somehow I just don't see this happening in a coordinated fashion across the world.


I think it is already illegal to participate in DDOS, even though enforcement is... pretty much nonexistent? And how could you enforce it? It would create an outcry if enforced consistently. (But maybe a necessary one.)

But it will also become a consumer problem if they cannot access important services anymore because their IP has been blacklisted after their toaster participated in too many DDoS or spam attacks.


> I think it is already illegal to participate in DDOS.

While it is unlawful to knowingly or intentionally participate in a DDoS, what I'm talking about is the potential of also "criminalizing" (in the sense of a speeding ticket, not jail time - an infraction, not a misdemeanor or felony) using a vulnerable device that is then hijacked by an attacker.

Like you, I don't think it's a realistic outcome; I'm merely brainstorming how one could make this a consumer problem through economics.


Sounds like you are making an argument based on externalities.

That's fine. But economic theory also gives you standard answers for externalities:

Don't ban the behaviour you dislike. Either let people sort it out themselves (like the Coase Theorem https://en.wikipedia.org/wiki/Coase_theorem describes), or at most tax the offending behaviour.


I don't get what you're saying here. What "offending behavior" are you referring to in this case that might be taxed to disincentivize it?


Suppliers can already make binding promises about their hardware. If you open your wallet wide enough, you can already buy enterprise grade hardware that comes with guaranteed long term support.

OP says, amongst other things:

> I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time [1].

So the offending behaviour in this case would be for a manufacturer not to provide security updates.


This is not really relevant to the proposed rulemaking, but it is something that bugs me deeply and I would like to get off my chest. I would like to see a mandate that a red LED be hardwired to every camera and microphone, on every device, so that if the sensor is powered up, the LED is too. This is what John Gilmore proposed in 2004, and we adopted it in the OLPC project, as the first step towards not being ubiquitously surveilled. It is low cost, low power, and easy to implement.


I encourage you to file a comment suggesting this. It's actually not irrelevant. The FCC is free to decide that the label (which can include a QR code linking to more information) must include information about cameras and microphones and whether there is a software-tamper-proof way to tell whether they are on. Or the existence of a hardwired LED could even be a requirement to qualify for a label. Your experience with the OLPC project would really bolster the credibility of your comment as well, so don't forget to mention that.


The horse has bolted on the question "is that camera/mic on". If you see the camera, assume it's on. There's no going back.

Often the camera/mic is dumb, oblivious to the status of downstream recording. Passive mics aren't even powered.

I understand the concern about signalling recording status, but it's too late for people to adjust their expectations of recording status from the presence of a red light.


Thanks so much for posting this here, first of all! I agree with other comments that rather than, or in addition to, some directly required period of security updates, I'd really like to see a dynamic setup along the lines of "Power Means Responsibility": manufacturers can stop supporting devices when they wish, but must at that time release all keys and IP licensing needed for hardware owners to take over. If a company wants to keep supporting something, and in turn keep their power over deciding how it works, for 10 years, that's fine. If they want to drop it after 6 months and make the firmware fully source-available and allow owners to add their own root keys to devices, that'd be fine too. Or someone could offer something fully open source with no strings attached but also no responsibility attached. The market can fill with a range of decent options.

But manufacturers shouldn't be allowed to have it both ways, with control post-sale over their customers' hardware AND no responsibility to support it. It should be directly linked by law.


I strongly suspect that regulation at the IoT product level will have a very small practical impact, because I think it's largely targeting the wrong issue. The vast majority of the vulnerabilities aren't coming from the device manufacturer; many manufacturers are making relatively small changes to a reference design provided by a company like Broadcom (which is notorious for exactly the behavior I'm about to describe).

The reference design problem is an issue where a manufacturer like Broadcom creates a specialized chip. To use this chip, they create a "reference driver" for it, package it in a custom firmware, and then never update that reference software. I've worked on building internet routers for homes and small businesses, and there are pieces of software we couldn't touch because they had been modified and only the fully compiled version was provided.

Broadcom passes the buck by calling it a reference design and washing their hands of it. Some upstreams do provide the source, but it's the complete source, not just the changes they made, and usually without any reference to which specific version they based their changes on. Trying to tease specific changes from the Linux kernel's raw source code is quite the needle-in-the-haystack problem.

I'm not sure how a lot of device manufacturers _could_ handle this. They tend to have very small development teams that are more electrical engineers than software engineers, and usually their only directive is to make it work under extraordinarily tight deadlines. Maybe part of the answer is they need to hire more to be more responsible... But even with experienced developers, _every single hardware manufacturer_ is going to have to repeat the security fixes that companies like Broadcom refuse to make.

I don't even know where to begin proposing a legal foundation for reference design software. I do think if the penalties and pain were strict enough at this level, it would lead to a different shortcut that would be much more beneficial to the world. If Broadcom and other companies engaging in this kind of malicious apathy were forced to keep their reference designs up to date, my money would be on them stopping the practice entirely and instead getting those drivers merged into the Linux kernel proper, where they can be properly maintained and updated by the legion of developers that care.

The act of getting that code into the kernel would force them to improve the code and not take the shortcuts that cause so many headaches, because the kernel developers gate the quality of the code they accept.


I’ll preface this with the fact that I am a security engineer (‘penetration tester’) but I probably don’t live in your country.

I care more about my privacy than I do about the security of my device, but an architecture that supports the second almost always supports the first (my neighbours hacking my zigbee isn’t a threat model almost anyone should be concerned about, unless there’s a pattern of hacking en masse).

I found out my smart lights literally have a microphone in them the other day, under the guise of ‘plays light to your music’ or something.

I architect my IoT by implementing a network without internet access, fronted by Home Assistant - that way my devices can't 'phone home' with who knows what privacy-infringing crap.

I know that botnets driven by IoT devices are a real and ongoing problem, and it's a problem that is probably not going to get much vendor buy-in without regulation, but what I'm pointing out is that it's not the only threat facing these devices.

I want:

- clear guides on what data is collected (I don't trust they'd only use it the way they say they will, so I wouldn't bother reading that part if it existed)

- the ability to opt out of any data collection aside from anything required to do technical updates. I want any data transfer to occur in a clear and auditable way (e.g. the ability to inject a root CA and perform a MITM if I wish)

- enough protocol spec that at least basic functions can work offline via a system like Home Assistant

These won’t directly solve the ddos vectors, but they will solve the problems that come shortly after on the timeline.


Edit: when I say opt out, I mean it. I frankly think 'if you don't like it, don't buy it' is a hard argument to accept when every consumer TV reports usage information back to the vendor. There aren't viable options for many products, as all the product manufacturers' scummy analytics teams have got their hooks in.


(I also submitted this as an Express Comment in the proceeding. If you agree, consider also filing a comment.)

A lot of issues around IoT device security are hard, but there is one simple and easy piece of policy that would be a big win:

Make the requirements stricter if the product contains a microphone than if it doesn't.

Some device makers are putting microphones into devices that don't need them, to support functionality that isn't useful, just because microphones are cheap. For example, TCL (a Chinese television brand) puts microphones into its remote controls. They do this because, while most people don't want to control devices by voice, a few people do, and microphones are very cheap. This is a problem because anything with a microphone in it is a valuable target for hackers; compromising a TV remote with a microphone is _useful_ to them, in ways that compromising, e.g., a wifi-connected clothes dryer would not be. If adding a microphone to a device created additional legal requirements, vendors would stop putting them in places where they lack a legitimate purpose, and there would be fewer insecure microphones floating around.


Make strict rules for all of them. As another commenter pointed out, the wifi-connected clothes dryer could be used in an attack to take down the power grid by having many of them switch on at the same time - causing a network overload.

Don't try to predict potential avenues of attack. Make strict rules for all IoT devices.


The microphone is not scary because of hackers, it's scary because of nation states.


Simple. Give the manufacturers the choice: either they must provide full (FLOSS) source code and documentation (full schematics) to the user to enable them to maintain, patch and thus secure their devices (see also: right to repair), OR they are liable for all damages (direct, indirect) for a 30 year expected lifetime that arise from security issues with the device AND must have insurance to cover those damages (so that they cannot get out of that liability by bankruptcy). Most will opt for FLOSS, and none will have the excuse that it would be more secure to make it proprietary. And then users will at least be able to fix issues -- and the security community will be way more effective at finding issues as it wouldn't have to do the slow reverse engineering.


30 years of expected support is pretty unreasonable. Stating a requirement like this makes the discussion about competing dogmas. Rather, it's about the right way to keep devices operational as long as possible while also allowing companies to remain viable.

30 years of support expectations immediately makes the cost of any device go up to hedge against the risk of fines during the entire 30 years. It also makes it harder to disrupt an industry with hardware at its core.

I don't have a single computing device that has lasted longer than 10 years. Reasonably speaking, either performance or features start to make the device largely obsolete and unusable.

I think a better way to propose this would be the expectation that when a product is EOL, it should be supportable by the buyer for a certain period. This requires figuring out the right period of support. I'd propose something that scales the period based on cost or device class. A $1200 phone should be usable for 10 years, while a $10 disposable glucose sensor with a battery should not.


Sorry, but some people will run routers (and other IoT devices) for more than 10 years, and long past some arbitrary 2-year EOL a manufacturer may set. We need less e-waste, and if manufacturers have to warrant security for 30 years, they may also invest enough to make the hardware itself last longer. More expensive is totally fine if the product is useful for longer! Oh, and please double-check whether you really have no 1st-generation Raspberry Pi anywhere, or maybe some ancient Arduino. What about your washer? Modern washers are IoT devices. My (admittedly not yet IoT) washer is more than 10 years old. Or take your car. Sure, you may buy a new one every 10 years, but there are plenty of cars more than 10 years old on the road. Do you want all of them to be vulnerable and out of warranty in the future?


> 30 years of expected support is pretty unreasonable.

I happen to know, having been with a Ford unit at the time, that the Ford EEC-IV engine control unit in 1980s Ford cars and trucks was designed for a 30 year lifetime. Many are still working.

The average age of light vehicles in the US is 12.2 years.

This is more in NHTSA's wheelhouse, though.


> I don't have a single computing device that has lasted longer than 10 years. Reasonably speaking, either performance or features start to make the device largely obsolete and unusable.

Are you just buying cheap junk? An i7-3770 PC - a good example of an 11 year old PC, and one I happen to use every day - can be quite usable today.


> either they must provide full (FLOSS) source code and documentation

I like the spirit of this, but one problem with this is that the software stack is likely not FLOSS, and the manufacturers don't own all the software.

A second problem is that a lot of the software for production IoT devices doesn't live in the device.

Third, there are safety concerns with a lot of devices that you'd need legal protections for.

Finally, the best IoT devices use a zero-trust architecture. You'd need to support a variation of this pattern to allow users to modify the devices.


If parts of the supply chain aren't FLOSS, then manufacturers would have to lean on those suppliers to change their licensing or find different suppliers. Same with other regulations around things like lead in consumer products. Anyone wanting to be part of consumer product software supply chains would have to start offering it as FLOSS if they want any customers, so the supply chain would adjust to the new reality.

We do need to establish common sense liability if it's not already there. If you modify your circular saw to remove the guard and injure yourself, that's your fault. If you modify some software to run outside of safe design parameters and it malfunctions/injures you, that's your fault.

I don't see why zero-trust is incompatible with user-modified devices. In fact it's in line with the spirit of zero-trust: don't assume just because something is able to talk to one of your servers (e.g. because it's on your VPN/LAN) that it's friendly. People should already always be assuming customer-owned hardware will potentially be completely controlled by a malicious actor and acting accordingly.


I'm working on an IoT device for industrial use, and we're wrestling with this very problem.

The answer we're probably going to go with is that the device is 'leased' to the customer. It's part of their subscription.

This solves a ton of problems about FLOSS and support of the same. It's now a closed device, and you have no rights to the code inside. If we go out of business, you have a brick that you don't have to pay for anymore.


I think it's always better for the customer to have access to the code inside. I'll actively recommend FLOSS solutions to customers even if they're not quite as good as the competition on paper right away. Simply because a large part of the cost of industrial hardware is actually supporting it for a long time. And support is SO MUCH EASIER if you have all the source code and schematics. Of course big customers get to demand this kind of arrangement (floss, escrow, or even just "give us all the paper") while small industrial operations end up paying a premium for inferior service.


>> The answer we're probably going to go with is that the device is 'leased' to the customer. It's part of their subscription. <<

1000% the wrong answer, unless you are straight-up selling a service up front, with an installer making a site visit to deploy the chattels of that service,

such as satellite television or DSL internet.

When you hand units over the counter before any contractual agreement (i.e. a clickthrough TOS), you are selling hardware, and that means user ownership.


No, we're selling straight up, with a dealer/installer in the pipeline. We're not stupid enough to try to sell direct to the customer these days.


I find that revealing; it seems direct end-user engagement has really stung you. Is there something other than people being people, or are there onerous requirements that aren't worth it?


Ah, the utopian dream of a world where every manufacturer gives away their intellectual secrets just so users can play tech guru. You're suggesting that companies offer up decades of R&D and risk their competitive edge, or else face 30 years of liability? With the speed at which technology evolves, we're lucky if a device is even relevant after 30 months. And let's not forget the minor detail of skyrocketing costs. Want a device built under these fantasy rules? Hope you're ready to pay through the nose—think 10 times the current price. Because nothing says 'accessible technology' like pricing out the average consumer.


I favor something like this, if less strong. It should be required that a product that reaches end-of-life as defined by the manufacturer should have all documentation and source code released and open sourced; prior to end-of-life (and perhaps for one year after), they're required to provide security updates. The manufacturer is then free to decide the point at which closed source is no longer worth the maintenance cost.

A few additional thoughts:

- Perhaps hardware design/specs should be released as well?

- A government body should probably host this information after EOL.


^^^ This right here ^^^

Additionally, this cannot be an excuse to charge subscriptions or force lease agreements into the fine print for items consumers buy outright.


You as a customer can already give the manufacturer that choice, and simply refuse to buy from any manufacturer that doesn't comply.


I've been not-buying IOT trash as hard as I can for decades. But nothing's changing... please tell me how to do this correctly!


Well, lots of people have been not-buying liquorice their whole life, but nothing's changing. The market for liquorice candy is alive and well.

Less snarky: if other people still want to buy certain products, manufacturers will provide. But that's not a bad thing. Different folks have different preferences.


Why did you suggest not-buying as a better action than regulation if you acknowledge that it doesn't work? Are you a manufacturer of low-quality IoT devices? People don't prefer insecure devices, they just want convenience and manufacturers are not being upfront about how dangerous these "convenient" devices are. Ergo, regulation.


Convenience and low price are legitimate preferences, even if you disagree.


Not when the consumer doesn't know the trade off they are making. Buying a bottle of colorful poison and drinking it and dying because it looked tasty is not a legitimate preference.

You are being willfully ignorant of the power dynamics and information disparity that exist between manufacturers and consumers. The whole point of the label is to better inform consumers.


Insecure IOT has the huge externality of providing muscle to criminal botnets, though.


Tax them, then?


Consumer's power is not the same as FCC's


Indeed. And that's good.


It's good that consumers have much less power in the context of forcing manufacturers to the described choice?


What do you mean by less power? It's different.

One manufacturer can't force you to buy stuff you don't want, nor ban you from buying from a different manufacturer that does what you want.

(In contrast with the FCC, which has a lot of power over you, by banning you from buying what you want.)


What if no or very few manufacturers produce a thing, yet the thing would be very beneficial to many owners?

Why would one want specifically to buy a product not conforming to the choice described in the first-level comment?


> What if no or very few manufacturers produce a thing, yet the thing would be very beneficial to many owners?

If it's useful enough, and the existing manufacturers leave significant customer needs unfilled, competing suppliers can step in.

> Why would one want specifically to buy a product not conforming to the choice described in the first-level comment?

All kinds of reasons. It might be cheaper, for example.


So what we need is giant warning stickers on products whose parent companies don't follow good practices. Kind of like tobacco products.

"Leaks your personal data to unknown servers" Or "Manufacturer typically does not support their products beyond 2 years after which critical features and functions may stop working"


A relatively small group of people won't have an effect; that's why regulation plays an important role.


Perhaps we should respect the wishes of the large rest of the people who are outside that relatively small group?


Ignorance is not a wish. We're talking about users who don't know any better when buying products.


Are there any that currently do comply?


Many large companies open their wallets to buy hardware (and software) that comes with guaranteed long term support.

If you are willing to pay, manufacturers are happy to comply with a lot of weird requests.


I love this.


Regulation to require a certain period of security updates doesn't seem useful to me. It's very easy to send out a "security update" that doesn't actually improve security. You can send out an ad to all your users saying "You should upgrade now to our newest product!" and call it a security update. Requiring security updates may end up just requiring companies to spam their users with a certain amount of marketing material.

A bigger issue than the availability of updates is whether security updates are automatic and mandatory, or optional for the user. If a security update requires some action on the user's part, most users won't want it.

The overall problem is that the main IoT security problem is botnets, not insecure devices per se. A botnet does not affect the owner of a device very much. Thus, the owner of a device usually prefers an insecure device, rather than taking some risk of the security update breaking the device.

I'm not sure what the FCC should do here. It seems reasonable to hold the manufacturers of devices responsible in some way when those devices are used in a botnet, but I'm not sure if that's within the FCC's scope.


I think you're right that it would be difficult for the FCC to precisely define exactly when security updates are required. This is a problem in law generally, one that is usually resolved by imposing a reasonableness standard. Maybe here, a vulnerability needs to be patched if it might reasonably be expected to allow an attacker to take control of a device, or to do so when combined with other known or unknown vulnerabilities. Or maybe a different standard. Then when enforcement/lawsuits come around, the judge/jury/regulator has to evaluate the reasonableness of the manufacturer's actions in light of that standard. We'd love to see commentary on the record as to what the right legal standard might be.


> This is a problem in law generally, one that is usually resolved by imposing a reasonableness standard.

Exactly this. Here in the UK we have "merchantable quality" as the standard for the required quality of any goods sold. How "merchantable" is defined is a matter for the courts to decide on a case-by-case basis. In practice, the courts take into account general market expectations as well as the price paid to determine the expected quality standard, and it seems to work just fine. If my chair falls apart after a few years of ordinary use by ordinary people, then it wasn't of merchantable quality and the seller is in breach of the law.

In the case of security vulnerabilities, I think a similar approach would work well. The key thing is to ensure that sellers of IoT products cannot disclaim responsibility for security vulnerabilities altogether, which is exactly the problem today. If an IoT product can be subverted by an adversary after a few years of ordinary use by ordinary people, then the seller should be in breach of the law.


This sounds like a reasonable approach (sorry for the pun). One question - reasonable to whom? (who? - english is my first language sorry).

I ask because when I was doing security research, we'd often present issues and get responses like "but who is going to think of that?" or "No one could find that", only for someone to think of or find it later and take over a system. I still occasionally hear this from software developers (even though the industry as a whole has gotten much better over the years), but quite often from people who work in "cyberphysical" systems (e.g. IoT).

Part of the tension seems to come from the fact that some infosec people can be equally unreasonable, declaring something utterly useless if there's a remote theoretical chance of a problem.

Unrelated to the above:

> Maybe here, a vulnerability needs to be patched if it might reasonably be expected to allow an attacker to take control of a device...

I suspect you know this and short-cutted for conversation, or maybe these are all the same legally, but "take control of a device" isn't the only win condition - DOS, info leaks, and so on also exist. I note this because I'm kind of curious if the law considers those the same or vastly different scenarios, and if any sort of FCC regulations would include them.


A cyber vuln is a defect in a product expected to function well. In other domains (cars, apartments, pharmaceuticals), if there is a defect, the manufacturer is responsible to ensure it is fixed.

It seems pretty simple. The standard should be the same as used in other industries where vendors need to recall, repair, or refund products in case of defects.


One way to mitigate this is to require introspection into what the update is. This has two implicit requirements which are that the firmware is source-available and has reproducible builds. With those two requirements you would be able to see what is being updated, and prove that the update your device receives is actually the update the manufacturer said they created.

The second requirement is something that is really overlooked in the software supply chain, partly because of the difficulty in achieving it. But it's a goal that the proper push from regulators could help us reach.

A knock-on benefit is that this helps secure the update channel: if you are requiring firmware updates, you must also require a way to make sure those updates are delivered securely, since the update mechanism inherently creates more attack surface.
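
To make the reproducible-builds point concrete, here's a rough sketch of the check an independent reviewer could run. The file names are hypothetical, and it assumes the build really is bit-for-bit reproducible:

    # Hypothetical sketch: an independent reviewer rebuilds the published source
    # and checks that the result matches the binary the vendor actually ships.
    # Assumes the build is bit-for-bit reproducible; file names are made up.
    import hashlib

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    vendor = sha256_of("firmware_vendor.bin")    # image pushed to devices
    rebuilt = sha256_of("firmware_rebuilt.bin")  # image rebuilt from source

    if vendor == rebuilt:
        print("OK: shipped binary matches the published source:", vendor)
    else:
        print("MISMATCH: vendor", vendor, "vs rebuilt", rebuilt)

If the digests match, the shipped binary provably corresponds to the published source; if they don't, either the build isn't reproducible yet or the binary contains something the source doesn't show.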


As a very broad starting point, we should be sure to address the fundamentals of security:

CIA: Confidentiality, Integrity, Availability.


The irony is that botnets often function as an automatic update: they break in through a vulnerability, often include a patch for said vulnerability, and then stay somewhat updated via their C&C server. Of course, this is all to prevent other botnets from coming in and stealing their devices away.

We had a WiFi camera get compromised. We put it on the internet - so it could get an update - and it got pwned before the update even finished downloading. The malware blocked the admin interface, but kept the camera feed running, presumably to minimize suspicion. As far as we can tell, the actual vuln was patched (some sort of dumb command injection in one of the many exposed endpoints), so there was also no way for us to get back in.


Liability. Make the manufacturer liable if a known vulnerability is exploited.


I would think that tort law already achieves this - unless some law was passed that shields manufacturers from lawsuits. If that's the case, then the easy fix is removal of such shields instead of trying to create new regulations. Same applies to nearly all aspects of product liability.


The general rule in tort is that you need physical injury or physical destruction of property to sustain a lawsuit. There's exceptions at the edges of that, but you basically can't sue a device manufacturer for crummy security that caused you to lose money or other non-physical damages like reputational harm. The same limitation does not apply to contract law. We think that a cybersecurity label could be enforceable under contract law, as well as help bolster claims that a duty was breached in tort (when there is physical injury/damage). It would also be subject to FCC enforcement, for failing to live up to the commitments made to get the label.


I would expect some sort of license "agreement" that shields the manufacturer and resellers from all liability.


Not too many industries have such a shield. The nuclear industry comes to mind as an example of one that does.


You could regulate that they have to patch any outstanding CVEs for their device/firmware, but enforceability might be difficult.


This would be an absolutely terrible standard. CVEs really, really suck. See, for example, this CVE for curl[1] that was assigned a 9.8. Or read sqlite's page on CVEs[2]. The sqlite issues alone would make this a non-starter, because you're not gonna convince everyone in every piece of software you use to update their version of sqlite.

[1] https://daniel.haxx.se/blog/2023/08/26/cve-2020-19909-is-eve... [2] https://www.sqlite.org/cves.html


Not all CVEs are real vulnerabilities.


There are too many IoT devices that want my email/phone just to perform what normal devices have been able to do for decades. No, I don’t want to download an app just so I can use my apartment stationary bike. I get enough spam already, and I don’t want to agree to lengthy terms and conditions just for that. In that case I couldn’t even use the bike at all without creating an account.

I think a lot of places got duped into thinking their internet connected stuff was an upgrade but in my opinion it’s a major downgrade. A device should do what other non-IoT devices do without being online, and internet capabilities should only be a value-add. A toaster should make toast without being online.


This is a great point. What are your thoughts on requiring a switch on all IoT devices, so the consumer can flip a switch and their "smart widget" just becomes a "widget"? This would be nice from both a security perspective and a consumer perspective.

An insecure e-stationary bike should just become a stationary bike rather than a 100-pound pile of trash.


My opinion is that a stationary e-bike should be a stationary bike whether or not the wifi is connected, and then I can choose whether I want to connect it online. I don’t think a switch is necessary, just don’t connect the thing.


The Bob dishwasher from DaanTech gets this right: it has Wi-Fi capabilities and a DRM scheme for its proprietary dishwasher fluid modules, but those are extra features, and it works fine as a dishwasher without them. If the company drops support or goes out of business, I won't even notice in terms of impact on washing my dishes.


> Companies may cease supporting a device well before consumers have stopped using it

In which case all information required to create and load custom firmware should be released to the public. This information should be placed in escrow, in case the company ceases to exist. The same rule should apply to backend services, in case a device depends on such a service to operate.

> security updates for a reasonable amount of time

Which is 25 years or more for some classes of devices. Phones have already reached a point where they should be required to come with 10 years of security updates. I'd expect light switches to get at the very least 20 years of security updates.

Generally I believe that governments are being WAY too lenient towards manufacturers of any type of electronics when it comes to updates. It's bad for security and the environment, and it causes consumers to make bad investments. The companies making these devices have long since proven that they DO NOT CARE and shouldn't be trusted to deal with the issues themselves.


A mechanism requiring disclosure of how long security updates are available seems like a great step.

Another great step would be a guarantee of making the firmware Open Source after no more than a certain amount of time, and having that guarantee known at compile time. Effectively, that means the device will always be supportable.


> Another great step would be a guarantee of making the firmware Open Source after no more than a certain amount of time, and having that guarantee known at compile time. Effectively, that means the device will always be supportable.

It's not inconceivable that this could be a requirement for getting a label (or some tier of label.) It depends how the advocacy comes out on the record.


The requirement can be easily bypassed by going bankrupt before the required time is up. You will soon hear advice like "if you want to get into the IoT space, create a new C Corp for each iteration of your product..."

Whatever you require them to disclose after X years, it must be escrowed with a trusted third party in advance.


Agreed: any kind of future disclosure requirement should be backed up by a source code escrow requirement.


> A mechanism requiring disclosure of how long security updates are available seems like a great step.

So, as someone who works in this space: there need to be realistic guidelines on this. Does a security flaw need to have a CVE? Do they need to fix every CVE? What is the timeframe requirement?

This kind of thing keeps me up at night.


Voluntary certification, please. Law is slower than technology. This is a good thing! EnergyStar is a great example of a voluntary program doing more good than DoE or FTC mandates. HIPAA is a good example of what happens when mandates can’t keep up with technology. When it comes to security, we can’t afford another HIPAA.


100% agree! This is a totally voluntary program that is explicitly based on EnergyStar.

I also worry that check-the-box compliance is one possible outcome. I'd love to see professionals comment on the record about where a checklist would and wouldn't be helpful. I'd also love commentary on if and where liability for failure to meet stated commitments would be helpful.


> Voluntary certification, please.

Second this, otherwise it'll just put smaller companies without enough resources at a disadvantage.


To add to previous similar comments, I think that one of the best ways to ensure that security updates are provided is to ensure that manufacturers either commit to continuous security updates, or after a minimum sunset period during which they provide security updates (e.g. 5 years), they agree to provide source code as well as build and deployment instructions, so that the community can take over. It must be possible to build the source code using a freely available toolchain. Furthermore, they must agree to provide links to these communities through their support pages for these products, so that users can be made aware of new third party firmware.

A durable IoT device could last decades, but few companies building these products will survive as long as the devices, let alone support a device they are no longer profiting from. As long as they are supporting the device with security updates, it's fine for the firmware to be proprietary. But, when they decide to cut support for the device, they should be willing to ensure that consumers who have purchased this hardware and are still using it won't become victims, and that the overall Internet community won't end up harboring botnets made of living-dead e-waste.


Yeah, I can't see an alternative to this. I'd go further and say that to guarantee this is done, companies should be required to provide this data upfront in some encrypted form, so that it's out and public in advance and can be unlocked by a simple encryption key (an FCC escrow service would be a good idea).

And that's on the "if I really thought business should get a handout" approach.

Practically, I see no reason the full source code for the network-interactive software components of IoT devices shouldn't be required to be open and user-flashable upfront. I can buy pre-flashed ESPHome devices which will do wireless updates and come with the full source code and a map of how to talk to their pins (which implements the functionality) - I see no reason why this sort of access shouldn't be the default.
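
For the "encrypted and public in advance" idea, the mechanics could look roughly like the sketch below. The archive names are hypothetical, Fernet is just one convenient symmetric scheme, and a real escrow arrangement would need far more careful key management than a single key:

    # Minimal sketch of "publish the ciphertext now, escrow only the key".
    # Assumes the `cryptography` package; archive names are hypothetical, and a
    # real scheme would need audited key management, not a single Fernet key.
    from cryptography.fernet import Fernet

    # Manufacturer, at release time:
    escrow_key = Fernet.generate_key()                 # deposited with the escrow agent
    with open("firmware_source.tar.gz", "rb") as f:
        ciphertext = Fernet(escrow_key).encrypt(f.read())
    with open("firmware_source.tar.gz.enc", "wb") as f:
        f.write(ciphertext)                            # safe to publish immediately

    # Escrow agent, once support ends (or the company disappears):
    with open("firmware_source.tar.gz.enc", "rb") as f:
        source_archive = Fernet(escrow_key).decrypt(f.read())

The nice property is that the manufacturer can't quietly withhold the material later: the ciphertext is already out, and only the key has to be released.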


I think that the use of an escrow service would be an excellent idea. There's some complexity to deal with in order to make this fair for both companies and consumers, but I think that these difficulties are surmountable.

An open source firmware model doesn't always make sense for businesses, but I think that for most hardware-oriented businesses, it makes perfect sense. There are plenty of business models in which the hardware itself is deeply discounted or even sold at a loss in order to sell the overall service -- the IoT portion. Right, wrong, or indifferent, that is a model that many businesses pursue. If their business model makes sense in the marketplace, I think that's fine. Plenty of consumers choose proprietary and service-oriented systems -- e.g. Apple's closed ecosystem -- and that's fine as long as the consumer safety and security is prioritized. However, I think that regulation should ensure that the right for consumers to maintain their devices should fall back to the consumers if or when these companies fail.

That being said, I think that consumers should always have a right to root their devices. If consumers decide that the iPhone or IoT light switch that they purchased does not meet their needs, there is no reason why they shouldn't be allowed to flash any firmware they want on it. In the case that hardware is sold at a loss, there should be an up-front contract with a buy-out clause, which also should be regulated to ensure that the company charges a reasonable and non-discriminatory "regular fee" for hardware independent of contracts, much like how many cellular carriers work. If the consumer chooses to "buy out" this contract in order to root their device, then that should be allowed if they pay the pro-rated "regular fee", adjusted for the amount of time they have paid into the contract.

I've considered governance models that can exist beyond the lifetime of a company that would guarantee escrow access to source code. Pitching this to a company is of course quite difficult, since no company thinks that far ahead, and many in leadership refuse to consider what happens if and when their venture fails. I think that the only way to build such a governance model is to provide an open source framework for managing both builds and OTAs that can ensure this. Escrow as a service could be built into this, using one of various cryptographic election strategies for recovering key details if an organization goes dark.

Either way, having the FCC seriously consider the security of IoT devices is a great first step, as long as it is a step and not a hurdle for innovation.


I can't file because I'm not based in the US, but I'd love to see smartphones, tablets and similar devices covered as part of IoT in general, as they share the most important of the characteristics - the manufacturer sells a device connected to the Internet.

There are multiple issues that I think need urgent regulatory attention, and the issue classes are valid for both "classic" IoT devices and phones:

1. Manufacturers often do not state anything about support: availability of spare parts, feature updates, security updates. Even those that do, like Google's Pixel lineup, have ridiculously short support windows, and "enterprise" devices like my Samsung Galaxy Tab Active 3, which is 2.5 years old, don't have spare screens available any more. I bought an "enterprise" device in the hope that it would have a better supply chain than consumer devices, but I was mistaken.

2. Many devices with batteries are sold without the ability to easily replace them or without officially sanctioned spare parts, which creates the risk of people running devices with swollen or otherwise damaged batteries, or of devices lasting far shorter than they could because batteries can and do simply lose capacity.

3. Many devices are completely locked down. This is particularly relevant for SSL root certificates whose expiry leads to devices being bricked, or for people who simply would like to enjoy the freedoms of the GPL and other FOSS licenses but can't because custom firmware can't be installed at all (due to Secure Boot) or permanently bricks features out of DRM concerns (e.g. Samsung Knox, Netflix, banking and many other apps that refuse to run on rooted or otherwise modified devices).

4. Many devices' BSPs (board support packages) are littered with ridiculously old forks of stuff like bootloaders, the Linux kernel or other userland software, and the chip/BSP vendors and manufacturers don't give a fuck about upstreaming their changes or code quality is so bad it cannot be reasonably upstreamed.


Re your point 4 in particular, I feel your pain -- I said "exposed public keys, expired certs" in the OP for a reason. The current item doesn't contemplate a requirement to tie these off as such, but I'd be interested to see if commenters ask for this as part of getting a stronger label.


Thanks for your response!

To add on the "label" point: I don't think labels are enough, not in a world where consumers (private, commercial and governments) primarily look at the price in purchase decisions. At least a base set of legally binding requirements must be established.

ETA: I'd also love to see an exception for small scale / startups. Like < 1000 units sold per model and year. That allows quick iterations while the large offenders still have to comply.


Thanks for yours!

It depends how much the labels shape behavior. I'm envisioning a "high-tier" label that says that risks X, Y and Z have been addressed by M means and that, e.g., addressing risk Z meant sweeping stated databases for known security holes, committing to security-only patches for N years, and hiring J compan(ies) to sweep your firmware within specified parameters -- or whatever other things from the wish list of infosec pros that people like posters in this thread choose to advocate for. Hopefully that would be better than what we have now, which is mainly price/churn-driven minimum viable product.

Re your exception: I don't think mandatory labels are on the horizon in the USA, but this could indeed be a problem under other regulatory regimes.


I have personally found several IoT vulns in everything from Zoom devices to Japanese robot hotels, and I run a security consulting firm. Swooping in with my 2c.

Most of the time the engineers making these things -think- they are reasonably secure, but they tend to have little to no infosec experience and are moving too fast with no accountability.

Worse, even when there is some accountability such as code review, the release engineer introduces the security problems at release time, either as a deliberate supply chain attack or out of stupidity.

If I were making the rules, I would ramp up common sense supply chain accountability which would cause some of the most prevalent problems to be spotted early.

My wish list:

1. Require all source code be signed (git signatures or similar)

2. Require all source code reviews by peers be signed (minimum 1)

3. Require source code to compile deterministically

4. Require at least two individuals or entities verify code signatures, compile code, and compare identical hashes

5. Require proprietary firmware products have an external security firm on retainer incrementally reviewing code (including dependencies!), as well as reproducing, and co-signing releases.

6. Require proprietary products use a source code escrow service that will make their code public the day support and security updates stop so the consumer community can patch for themselves

7. Require open source firmware products have a bug bounty program (potentially with government funding like the EU does)

Happy to chat about this sort of thing with anyone interested. Contact info in my bio.


I really like #7


A required support period of some number of years is problematic for products developed by startups, because startups cannot guarantee that they will still exist to provide support in several years. They can have the best of intentions and excellent engineering, but still fail in the market and be unable to keep maintaining a device. So requiring security support for several years wouldn't have any effect on these devices, because the company will be gone and there won't be anyone to take an enforcement action against.

For a labelling program for this type of company, you could require additional disclosure to consumers at the time of sale, along the lines of "We can't guarantee that we'll provide security updates." Or you could require open-sourcing of firmware if a company goes defunct. Or perhaps device manufacturers could be required to hand the technology off to some third-party maintainer if they go under.

Any regulation at end-of-support-life has the "company disappears" problem, so probably disclosure at time of sale is the only thing that could be reliably enforced.


Totally valid point. Escrow then open sourced, or perhaps even some insurance policy so that the future patching and vulnerability remediation is guaranteed.

There's a bunch of stuff consequential to EO 14028 which could allow for some automation of library vulnerability management.


I was trying to think of ways to finance it, and "future patch insurance" is clever! There are other sorts of business insurance where the insurer is liable even if the business no longer exists, so it would be doable. Though a policy would require a level of technical competency that other insurance policies don't, since they're providing a guarantee of service rather than a guarantee to pay out a certain amount in damages.


Any guarantees and warranties have the same caveat: if the company goes under, you've got nobody left to sue. And there's no squeezing water from a rock anyway; a defunct company can't pay for the updates or the damages.


One thing that regulators need to be very careful about is how "security updates" are defined, and exactly what manufacturer obligations for issuing security updates should be. CVEs are a notoriously terrible representation of actual security risks, so a measure like "manufacturer must issue new releases that include any released patches for CVEs with a severity rating greater than 9" would be a clear non-starter.

There are also often practical issues related to security patching embedded devices: for example, a downstream supplier's driver can make it impossible to upgrade a kernel unless/until the supplier provides a fix. Of course, strong regulation here could help to drive bad practices like that out of the industry, but I'm not going to hold my breath on that one. The effect of regulation like this would make it harder for manufacturers who don't have the market power to lean on their suppliers to provide security patches.

Finally, it's important that any regulation that mandates or strongly encourages software updates also mandates that the update system itself be implemented in a secure way. This is my specific area of expertise, and I can tell you that it's very often done very badly. A bad update system is a gigantic, flashing red target for attack. So something like mandating signatures (and sig validation) on software update images would be a good start. Mandating the use of TUF-compliant repositories would be even better.
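
As a rough illustration of the baseline being suggested (validate the signature before installing anything), here is a hedged device-side sketch using Ed25519 via the `cryptography` package. The file names are hypothetical, the key shown is a published RFC 8032 test vector standing in for a vendor key, and none of the rollback protection or delegation that TUF adds is shown:

    # Device-side sketch: refuse any update image whose detached Ed25519
    # signature doesn't verify against the key baked in at manufacture.
    # File names are hypothetical; the key below is an RFC 8032 test vector
    # standing in for a real vendor key.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    VENDOR_PUBLIC_KEY = bytes.fromhex(
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
    )

    def update_is_authentic(image_path, sig_path):
        key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
        with open(image_path, "rb") as f:
            image = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            key.verify(signature, image)   # raises InvalidSignature on mismatch
            return True
        except InvalidSignature:
            return False

    if not update_is_authentic("update.bin", "update.bin.sig"):
        raise SystemExit("refusing to install unsigned or tampered update")

Even this much is frequently missing or done wrong in the field; frameworks like TUF/Uptane exist precisely because key compromise, rollback, and freeze attacks need more than a single static key check.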


Thank you for these thoughtful points. Some relevant responses from other threads:

From https://news.ycombinator.com/item?id=37394188 :

I think you're right that it would be difficult for the FCC to precisely define exactly when security updates are required. This is a problem in law generally, one that is usually resolved by imposing a reasonableness standard. Maybe here, a vulnerability needs to be patched if it might reasonably be expected to allow an attacker to take control of a device, or to do so when combined with other known or unknown vulnerabilities. Or maybe a different standard. Then when enforcement/lawsuits come around, the judge/jury/regulator has to evaluate the reasonableness of the manufacturer's actions in light of that standard. We'd love to see commentary on the record as to what the right legal standard might be.

From https://news.ycombinator.com/item?id=37394793 :

Agreed. Building an automatic firmware update system from scratch would be burdensome for many IoT makers, but as it becomes necessary or encouraged, we would expect the market to provide a packaged solution/framework that manufacturers could fold into their products. It would be really helpful to have discussion of this on the record. How generalizable do you think such a solution could be? We are aware of the Uptane project, an OTA firmware update framework being jointly worked on by several car manufacturers, but would love to hear more about the feasibility of a solution for IoT devices generally, or particular classes of IoT devices.

From https://news.ycombinator.com/item?id=37393926 :

[...] companies wanting to put a label on their product would probably want to extract similar guarantees up their supply chain. Especially with a voluntary program like the one the FCC is proposing, good practices won't become the norm across the market overnight. But maybe, at the very least, the segment of product and component makers that take security seriously will begin to grow. I encourage you to share your thoughts in an official comment.


> How generalizable do you think such a solution could be? We are aware of the Uptane project, an OTA firmware update framework being jointly worked on by several car manufacturers, but would love to hear more about the feasibility of a solution for IoT devices generally, or particular classes of IoT devices.

One thing to be aware of: a decent number of connected devices are white label devices or "lightly" tweaked forks of a reference design. The consumer-facing company may have no power to actually update anything. If the originating company only provides proprietary versions of some critical component and can't/won't ship updates, the consumer-facing company can only patch issues with _their_ portion of the final software running on the device.

A _requirement_ that the consumer-facing company be able to update any/all portions of the software stack for $someTimeFrameAfterSale might start to change this but expect a fight from every link in the software-supply-chain on this front.


>we would expect the market to provide a packaged solution/framework that manufacturers could fold into their products.

These kinds of solutions exist; see for instance: https://docs.aws.amazon.com/freertos/latest/userguide/freert...

My concern is that these firmware update platforms will become oligopolies/monopolies because they will control a legal barrier and naturally accumulate the obligations of many manufacturers.


You're the lawyer guy? What statutory authority are you drawing on that you believe allows you, the FCC, to regulate this stuff?

Thanks!


Good question. The Notice of Proposed Rulemaking has a Legal Authority section that discusses this issue https://www.fcc.gov/document/fcc-proposes-cybersecurity-labe.... I also touch on it here https://news.ycombinator.com/item?id=37393316


Thanks, but that FCC document clearly says it's about a "voluntary labeling program", and the title of this HN post has the word "regulation" and the text has language like "require" [0]. And the phrase "oppose[...] even voluntary ones", which clearly sounds like someone's proposing non-voluntary stuff.

I read your linked HN comment too, but: "legitimate interest in" [1] a thing and actual "authority" to do a thing are not the same thing.

I feel like I'm being bamboozled here. The fcc.gov "Notice", and this HN post, seem like they're talking about substantially different proposals.

[0] "I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time"

[1] "...we think that the FCC has a legitimate interest in just about any vulnerability on a wireless device"


Nathan's post and the proposed rulemaking are both quite explicit that the proposal under comment is a voluntary labeling scheme. Perhaps the intro could be better written to be clearer, but I don't really understand your complaint. There's no bamboozle.

From above:

"I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time [1]. I can't bring such a proposal to a vote since I’m not the chairman of the agency. But I was able to convince my colleagues to tentatively support something a little more moderate addressing this problem.

The FCC recently issued a Notice of Proposed Rulemaking [2] for a cybersecurity labeling program for connected devices. If they meet certain criteria for the security of their product, manufacturers can put an FCC cybersecurity label on it. I fought hard for one of these criteria to be the disclosure of how long the product will receive security updates. I hope that, besides arming consumers with better information, the commitments on this label (including the support period) will be legally enforceable in contract and tort lawsuits and under other laws. You can see my full statement here [3]."


Thanks! Sorry for any lack of clarity. My initial draft was way over the character limit and I had to cut a lot prior to posting. Thanks for highlighting the relevant language and clearing things up.


Maybe reach out to the FTC over the fraud that's being perpetrated with this cloud-locked (other people's servers) *rental* being sold as a *sale*?

If these companies are selling defective goods and preventing individuals from fixing them themselves (in other words, the selling company retains material control of the device), that's a *rental*.

Properly reclassifying consumer garbage with company-locked electronics as a rental would be the big kick in the pants for the games nearly every company is playing now. And that includes the cellphone-on-wheels (Tesla), the stunts being pulled by most other car manufacturers ($$$ for heated seats, etc.), Apple holding control over what approved software a general-purpose computer can run, and loads more.

I don't think the FCC can require firmware updates other than in radio-based units, to enforce regulatory requirements for specific frequencies (no channels 12/13 on 2.4 GHz in the USA, a 10-minute wait on part of 5.8 GHz for ground radar). But the FTC could force it by clarifying that cloud-crap is a rental, and not a sale.


To expand on this, since it's not explicit in Marco's comment: The statutory authority is section 302(a) of the Communications Act, which authorizes the FCC to regulate devices that can interfere with radio communication. Their reasoning is that IOT devices fit this category, so regulations on security updates are within scope.

Full quote from the notice of proposed rulemaking: "In particular, section 302(a) of the Communications Act authorizes the FCC “consistent with the public interest, convenience, and necessity, [to] make reasonable regulations (1) governing the interference potential of devices which in their operation are capable of emitting radio frequency energy by radiation, conduction, or other means in sufficient degree to cause harmful interference to radio communications; . . .” While this program would be voluntary, entities that elect to participate would need to do so in accordance with the regulations the Commission adopts in this proceeding, including but not limited to the IoT security standards, compliance requirements, and the labeling program’s operating framework. We tentatively conclude that the standards the Commission proposes to apply when administering the proposed labeling program fall within the scope of “reasonable regulations… governing the interference potential of devices….” We seek comment on this reasoning."


> There are also often practical issues related to security patching embedded devices: for example, a downstream supplier's driver can make it impossible to upgrade a kernel unless/until the supplier provides a fix. Of course, strong regulation here could help to drive bad practices like that out of the industry, but I'm not going to hold my breath on that one. The effect of regulation like this would make it harder for manufacturers who don't have the market power to lean on their suppliers to provide security patches.

This. We were building an IoT product that was effectively stuck on a derivative of Ubuntu 18.04; we couldn't upgrade because the vendor wouldn't rebase on a new LTS for a very long time. As our project was being developed in Python, we were stuck on 3.6, and as it reached EOL, many third-party libraries dropped support and wouldn't even release security fixes; we needed to stay on that particular OS because of hardware support; and moving off the distribution-provided Python packages would increase maintenance burden beyond what we were able to handle.

Even if the vendor would continue to provide security updates to the base OS and its packages, any real-world software solution will rely on third party packages, which may choose to drop support.

I would love it if the lawmakers considered this scenario.


This is an honest question about these arguments, but as a consumer (and, by extension, the FCC protecting them), why should I care? Would you accept the same arguments from your car manufacturer: "sorry, we can't fix your broken brakes, our supplier uses a process that isn't supported by new brake standards, so just don't brake"?

I suspect not. So why not? Because the car is more expensive?

I would argue that the purpose of regulation is exactly to root out this sort of practice. If it was cheap and effortless to do this we likely wouldn't need regulation.


The issue is that it's currently not a regulatory requirement. So when you go to the chip maker and demand that their chip have drivers in the Linux kernel tree so it will continue to support newer kernel versions, they turn you down. Most of their customers don't care about this and they would have to pay a developer to produce drivers of the quality that would be accepted by the Linux kernel maintainers. Then you're stuck using what you can get.

If you had a rule saying that device makers have to produce security updates, now the device makers will all demand this because they need it to satisfy the regulatory requirement, and not be willing to take no for an answer.


I don't understand your argument, are you agreeing with me that regulation will cause this to happen? So why is that an argument against regulation?


It's an argument for getting the regulation right.

For example, one of the obvious ways around these requirements is you set up Sell To Retailers, LLC which nominally does the final assembly, is responsible for the update requirement and then files for bankruptcy whenever anyone tries to enforce it against them.

The bad way to get around that is to try to hang the requirement on some kind of larger entity, like the retailer. Then every retailer bans every kind of smaller device maker who might not be around to make updates in ten years and you have a rule that unintentionally causes catastrophic market concentration.

The good way is to require that the customer can flash custom firmware to the device and the hardware has sufficient published documentation for a third party to make drivers for it (the easiest way to satisfy which would be to publish open source drivers and firmware).

That way if the manufacturer goes bust, as some of them will even independent of trying to get out of the requirement, someone else can still patch the device. And that someone will be more likely to exist, because communities like DD-WRT will have already produced custom firmware for the device and be there to patch serious vulnerabilities even if the manufacturer is gone.


The same thing happened to my car — they discontinued support for the cellular module it shipped with. I had to bring it in (and I believe pay something) to have the module updated. I did not and now it no longer has the online functionality.

Brakes are not internet-connected, but where the line is between features or functions that might be lost and those that represent the core of the product is an interesting question.


That's the thing though: most IoT devices shouldn't be Internet-connected, and most definitely should not depend on a vendor cloud (or increasingly, a cloud of a different vendor that sold a white-label IoT solution to the "vendor" you bought the device from). It's an unnecessary limitation, a combination of laziness (going over cloud is easier than figuring out local-first and standardizing on some VPN solution) and abusive business (the cloud on the other side of the world is holding your Internet-connected air conditioner hostage, better play nice).

If brakes are not Internet-connected, that's mostly because they were established before the Internet - and given the trends in car manufacturing in general, it's only a matter of time.

(In some sense, we're already there - if you have cloud-connected self-driving, and that self-driving can override your command to apply brakes, then your brakes are de-facto Internet-connected, even if connectivity isn't a hard dependency in all cases just yet.)


Brakes are fundamentally a safety-critical system, one that is relatively well isolated from other systems and dead simple in principle (a bike has simple mechanical brakes, and a 3-year-old could explain why they work).

The issue with software OTOH, is that a security hole in one trivial component (e.g. resize images to make thumbnails) can often lead to a full system compromise. Even if you don't get full root, you can still use a compromised system to your advantage: steal personal data, use it in a botnet, serve malware, mine proof of waste, etc.

On top of that, adding a dependency is often made very easy by modern package managers, and as the number goes up it gets rather difficult even to vet your direct dependencies, let alone transitive. Installing brakes in a vehicle doesn't automatically pull in a kitchen sink, but in the software world it's widely accepted, almost inevitable. You can spend your time removing the 90% of that library that you don't need, and rewriting the remaining 10%, or you do the "reasonable" thing and just ship.


Under sensible regulation you wouldn't get to blame a third party here. You would have signed a contract with your vendor to give you updates in line with what the regulation demands, and your insurance company would cover your liability if the vendor goes out of business and you have to pay through the nose to replace them or settle a class action lawsuit. Your expenses would go up and those would be passed on to the consumer, but everyone cheering for this regulation is OK with that. Hopefully the marginal cost of insurance and better vendors would be only slightly above the cost of providing this kind of long term support.


> stuck on a derivative of Ubuntu 18.04 [...] as our project was being developed in Python, we were stuck on 3.6

I might be missing something, but why do you need to rely on the OS-provided Python version? Newer versions than 3.6 should run on older Ubuntu versions. You could have installed newer versions onto 18.04 using the deadsnakes PPA, for example, up until earlier this year (since LTS only has a 5-year support window, and deadsnakes only supports active LTS versions).


If we had the resources to disentangle the entire Python situation, trust me we would. Unfortunately the web of dependencies for that project was quite intricate, and at one point you just need to swallow the vendor's proprietary libraries that they've built against what they've shipped in the base OS. (L)GPL is good on paper, but the effort to actually make use of the freedom it grants is disproportionate.

(Which is why I'm a firm believer in the suckless philosophy: if the software is too complex to fully understand, source access or even copyleft aren't worth much.)


GPL is burdensome for businesses to comply with, so I would support public funding for drop-in replacements for common GPL tools and libraries.


> I would love it if the lawmakers considered this scenario.

You're building on quicksand, and you're asking for us to give you leeway when the building collapses.

Either do the work of making all of those security fixes yourself, or pick a better platform to build on top of.


> pick a better platform

Unfortunately there isn't all that much competition in this space. The choice was try building on quicksand, or let the idea die. I'm glad we tried it.


Until there are consequences for building on quicksand, the vendors have no reason to improve their offerings.


I don't understand what you're trying to imply here. That we should be punished for building a prototype? Or that, had we shipped it in its current state, the upstream vendor should be punished for stalling on updates?


As the support schedule for python is known ahead of time, this scenario seems pretty well covered by "disclosure of how long the product will receive security updates": just choose the EOL for the relevant python version in the date-picker.


Then fire your shitty vendor or refund your customers.

Nothing will change unless everybody changes.


You can't fire your SoC vendor, especially once the product ships. And they are all a PITA about security updates.


If you buy from a supplier with a contract that stipulates security updates, then you would certainly define the damages that a failure to fix would cause you, wouldn't you?


One of the issues is that the upstream vendor goes out of business. What you really need is to have the source code for the firmware, ideally in the public mainline kernel tree so that new kernel versions continue to work on the hardware.


Certainly true. Source code escrow should be a requirement for any company selling internet-connected devices.


> regulation like this would make it harder for manufacturers who don't have the market power to lean on their suppliers to provide security patches.

Thought question (I’m asking, I don’t know the “answer”):

Today, many of these devices are marketed and sold by a company that has little to no involvement in the creation of the firmware or software, besides maybe sending over an image of their logo to be rolled into some turnkey “app.” Would we actually be better off if companies couldn’t really afford to basically dropship some sketchy white-label Chinese product, and instead could only sell a product here if they were confident they (acting alone) would be fully capable of supporting and updating it for a reasonable lifetime? Yes, it would raise the barrier to entry above basically the floor where it is today, but I don’t imagine there is a way to have it both ways.


My two cents is that this would be an excellent comment on the record -- I'd love a discussion at the level of defining security risks to be part of the official federal commentary, because this is going to be a thorny implementation problem.


It would be great for people to post an update like "comment submitted" on threads like this one to make sure it was entered as a comment into the official record.

I'm sure these comments in themselves are helpful to @SimingtonFCC individually, but having them be part of the official record gives the FCC legal grounds to consider them and incorporate them into rules.


Completely agree! The public record in this case is going to be what agencies and industry looks to, far more than whatever I might happen to personally believe. I'm going to get as much information from this discussion as I can, but every participant should feel free to comment on the record, or to get their employers, companies, trade associations, ad-hoc working groups, concerned citizens congregating on Discord to complain, etc. to do so as well.


I'm curious about your thoughts on balancing the damage of another Mirai with the damage of another SolarWinds. A regulation where every IoT device must accept a signed OTA update would make update servers an extremely valuable target for supply chain compromises.

On the one hand, without updates, a world of IoT devices will inevitably get infected slowly and permanently (as long as they're physically active).

But on the other hand, with mandatory updates, a world of IoT devices can get infected all at once (in the case of a supply chain attack) and possibly just as permanently (if the attacker's payload can disable or re-route the update system)?

Do you think that prevailing security standards for IoT manufacturers are good enough that this balance falls in favor of a mandatory-update regulation?


I don't know about a mandatory update regulation -- one way or the other, that isn't on the table right now. I would love extensive discussion on the record, however, of the costs and benefits of requiring updates to get the label.


While I acknowledge that CVE scoring of risk can be inconsistent and sometimes wildly wrong, what would you suggest in its place?


That's the problem, there isn't a good objective measure. Some type of "reasonableness" standard is usually invoked in situations like this, but that kinda just takes us back to square one: what's currently considered reasonable in the industry is pretty terrible.


I'm not sure we will ever have a universally accepted objective measure of risk. Risk is, by its nature, somewhat subjective.

Most organisations will use CVEs and the CVSS system as a starting point, but will triage them and produce their own assessment of the actual risk to them and their products given how the software is used.


I don't think a legal reasonableness standard would be the same as "common industry behavior." Regulation would hold companies to a real reasonableness standard, as determined in the text of the regulation or by a court.


Just go by past incidents. Quite often it is not a software vuln that enables an attack - it is an insecure default config that the user never changes, because the manufacturer supplies the same default user/password with each device.

Also insecure backdoors left by developers for debug purposes (or is it really debugging, or maybe espionage?).
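
One mitigation that keeps coming up for the shared-default-credential problem is generating a unique credential per unit on the manufacturing line and printing it on the device label. A minimal sketch, with the serial-number handling entirely assumed:

    # Sketch: unique default credential per unit at manufacturing time instead
    # of one shared default across the whole product line. The serial number
    # source and the label-printing step are assumptions.
    import secrets

    def provision_device(serial_number):
        password = secrets.token_urlsafe(12)   # roughly 96 bits of randomness
        # A real line would flash this into the unit and print it on the
        # physical label in the box, rather than just returning it.
        return {"serial": serial_number, "default_password": password}

    print(provision_device("SN-000123"))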


> also insecure backdoors left by developers for debug purposes (or is it really debug or maybe espionage?)

It should be made clear that any "backdoor" is a criminal offense under the "unauthorized access" provision of the Computer Fraud and Abuse Act, unless the device is covered by an explicit remote maintenance agreement which imposes duties upon the maintainer.


Awesome!

Thanks for engaging, where the rubber meets the road!

Hopefully, you are also looking into other venues, as well.

HN has a great group of folks that represent some of the most cutting-edge tech, but IT runs on Java 8[0].

[0] https://news.ycombinator.com/item?id=19877916


Thanks for participating! After this thread winds down, I and my team are going to comb through it for suggestions and take as many as we can. We're also looking into other venues to engage directly with cybersecurity professionals. But please feel free to comment on the record as well -- a robust and detailed record is worth a lot more than whatever I can do individually.


An even better venue for informed cybersec professionals is the info-sec community on Twitter and Mastodon: https://infosec.exchange/about

People like Michal Zalewski, https://twitter.com/lcamtuf, could point you to the best of that.


I think that you'll get a lot of feedback.

I would suggest to my peers, that the links you gave are "official channels," and are probably what you really want, as opposed to a rather rambling thread of comments.

But for me, you just get a rambling comment.

I made my career on devices. In particular digital scanners and cameras.

I worked for a company that was about as tinfoil as you could get, and they supported devices long past their sell-by date.

But I also know that my company was an outlier. They sold premium equipment, at a premium price. They were an "old-fashioned" Japanese corporation, and had a basic mindset of keeping the customer's workflow in the center of the screen.

I think IoT security is a huge issue, and I think that the solution could be standard, open-source, open-license, free-to-use packages, maybe written in languages like C, that could be offered to the industry. These could enforce low-level compliance with security standards.
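
As a sketch of what one such shared building block might cover, here is signed-firmware verification before an update is applied. This is Python with the 'cryptography' package purely for brevity; an on-device version would more likely be the kind of C library described above.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_firmware(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
        """Return True only if the image was signed by the vendor key baked into the device."""
        key = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
        try:
            key.verify(signature, image)
            return True
        except InvalidSignature:
            return False

    # Example (in practice the private key never leaves the vendor's signing infrastructure):
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
    sk = Ed25519PrivateKey.generate()
    pk = sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    image = b"firmware v1.2.3"
    print(verify_firmware(image, sk.sign(image), pk))  # True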

Oh, and keep the TLAs out of it. They would really like to put a bit of "extra spice" in something like that.

That said, I know that it will never happen. There's a gazillion issues.


I would suggest to my peers that the links you gave are "official channels," and are probably what you really want, as opposed to a rather rambling thread of comments.

I sort of want both. Official commentary moves the needle, but selfishly, I love the thread comments. People tell you what they really think, and sometimes go into a lot of detail as to why. It's an education for me.

I think IoT security is a huge issue, and I think that the solution could be standard, open-source, open-license, free-to-use packages, maybe written in languages like C, that could be offered to the industry. These could enforce low-level compliance with security standards.

"Universal basic security" would probably be a major field of policy approach if we found ourselves with some huge disaster requiring a regulatory response. It's at least worth thinking about now, even if it goes beyond the scope of what the immediate regs can do.


Pretty cool (or at least interesting) to see a government agency engage on HN like this. Never seen that before.


They're definitely making efforts to engage where practitioners and subject-matter experts are. There was a substantial federal government showing at DEF CON this year, for example.

https://www.dhs.gov/news/2023/08/11/secretary-mayorkas-deliv...

https://www.politico.com/news/2023/08/11/def-con-hackers-spa...

https://arstechnica.com/information-technology/2023/05/white...


IMO, here are some better solutions.

1. Blend FCC action with right to repair -- Require device makers to provide software patch utilities to the public, and open source the code after a period of time.

2. Rather than regulate manufacturers, educate consumers. Companies that do dumb shit should go bankrupt because customers can see that the company sucks.

3. I'd prefer the government to define standards and repercussions, not solutions. I.e., do not mandate security patches; instead add liability, per sold device and scaled to severity, for security flaws (roughly along the lines of the sketch below). Then let the market decide the solutions. Rather than shipping patches, they might decide to just give free replacement devices, for example.
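
To illustrate item 3, here is a rough sketch of how "per sold device, scaled to severity" liability might be computed. The base amount and the use of CVSS as the severity scale are my own assumptions, not anything proposed by the FCC:

    # Illustrative only: one possible "per device, scaled to severity" formula.
    def liability(devices_sold: int, cvss_score: float, base_per_device: float = 5.0) -> float:
        return devices_sold * base_per_device * (cvss_score / 10.0)

    # 2M affected devices with a 9.8-severity flaw -> $9.8M exposure under these assumptions
    print(liability(devices_sold=2_000_000, cvss_score=9.8))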


Nathan -- thanks for your work on this issue. I'm the CEO/co-founder at Seam (YC S20). We're building an API for IoT devices. I have many, many thoughts for you.

For Seam, we purchase, set up, and test many individual devices and systems in our lab in San Francisco. During the course of this work, we discover quite a few interesting things. When possible, we work directly with manufacturers on addressing the more concerning problems we find. We maintain an internal device database (partially available here: https://www.seam.co/supported-devices-and-systems) where we keep track of our findings on devices we test & integrate. One area that I haven't seen addressed here is data-storage jurisdiction. IMHO, that might be one of the more concerning aspects.

Happy to have a chat; my Seam email is in my HN profile.


The single biggest problem with IoT devices is their black-box, vendor-specific cloud platforms. This causes privacy issues galore, forces every manufacturer to reinvent the wheel to secure their devices, and produces huge quantities of e-waste when Random Manufacturer #484 goes out of business, taking their cloud with them.

How about instead mandating that all IoT devices comply with an open standard? Customers would be free to connect their device to Siri or Alexa if they wanted, but by default the device would just work with an open standard that you can control fully, hosted at home if desired.
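
As a toy illustration of "local by default": a device that exposes its state over a plain, documented HTTP endpoint on the home network instead of a vendor cloud. A real mandate would point at an actual open standard such as Matter or MQTT; this sketch only uses Python's standard library.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DeviceHandler(BaseHTTPRequestHandler):
        """Minimal local control surface: no vendor cloud in the loop."""
        def do_GET(self):
            if self.path == "/status":
                body = json.dumps({"power": "on", "temp_c": 21.5}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), DeviceHandler).serve_forever()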

It would also remove the cloud secu