Thanks! I am thrilled that so many people are participating. The FCC is going to need a lot of this community's input over the next few years as more and more devices go online.
Instead of trying to compel manufacturers, who may no longer even exist, to support their old products, perhaps the government should focus on protecting consumers and aftermarket vendors who update / modify / reverse-engineer older revisions--especially after they're no longer meaningfully supported by the manufacturer.
There is an overlap with the right to repair topic. It does not make sense to have the DMCA hanging over your head when you are reverse engineering a product that has been abandoned by the manufacturer - be it through end of life or bankruptcy, to name two reasons among many.
Also, security researchers should have strong legal protections; they should be given the benefit of the doubt at every turn.
Currently, researchers are sometimes threatened with decades in prison for testing the security of websites or devices. If they act in good faith as researchers, this should never happen.
This is literally a national security issue. We currently stifle security research on essential IoT devices primarily so companies can avoid being embarrassed by their own poor security.
This might be an unpopular opinion but I respectfully do not see it that way. I agree with promoting security for IoT devices, but there needs to be consent from the company being probed for vulnerabilities or else I find it hard to consider it legitimate research, regardless of intent.
I don't think anyone would like it very much if someone came to their house and documented all the ways to rob it that they could find, even if it's for research purposes. There is an inherent risk of your vulnerabilities being broadcasted somewhere, either on purpose or accidentally, once that information is collected and organized by the researcher.
It isn't harmless and innocent to probe anything for weaknesses unsolicited. It is reasonable to respond to that as a threat. It is genuinely threatening behavior.
Now I do understand it gets complicated when it's a business being trusted with sensitive information / access to devices in your home. I am just saying we need to keep possibly threatening behavior in mind and try to avoid promoting it as part of the solution unless there is really no other way (imo).
The problem here is that the thing I am probing is something I own: the device in my house that I ostensibly purchased and am allowed to smash with a hammer or put in a blender for all anyone should care. The context is that the DMCA is often used by companies to claim that the DRM on the device is there to protect copyrights--whether music the device had access to (even if that isn't the reason many or even most people buy the device, such as a smart fridge with a speaker in it and the option to log in to Spotify) or the firmware itself--and that it is thereby illegal for me to distribute tools that help people repair a device they own. Repair access is the key thing here: there actually are already some legal protections for the act of "probing", but you basically have to do it alone, which is insane. Finding vulnerabilities in a device I own should be about me and my trade-offs, not the wishes of the manufacturer.
I hate it too, but the heart of this is that ownership is under question.
People should not have agreed to buy things where there are parts of it they don't own that they don't even need, but they did. They did it a lot because it didn't matter to them and now those devices are prevalent everywhere and it's a PITA to try to buy the type of item you actually want - where you own it entirely.
Ownership has never actually been absolute. When you buy land you cannot tear it up and make it totally unusable. If you buy a home under an HOA you may have to keep it in a certain type of order.
Maybe what we need is a law that manufacturers always need to provide a "dumb" model of their products which can be completely owned by the consumer.
However, I was speaking from a stance of acceptance that the companies are maintaining ownership of some functionality of the devices. I was primarily thinking about the way it accesses company owned infrastructure (servers and the information on them) but it extends into a grey area on the devices themselves.
You should be allowed to reasonably tamper with the device, but you should also be attempting to communicate with the company about it. They shouldn't be allowed to retaliate against you for requesting to tamper, they should need to reply reasonably quickly, and the reasons for which they are allowed to deny you should be regulated so they cannot just deny for no reason.
I am saying we need to lean in to the situation we are in if we want actual results, and I think there is a lot of room to develop a reasonable legal framework on this subject that incorporates partial ownership.
It shouldn't be as restrictive as it is today, but it also shouldn't be a complete free for all. We should at least attempt to make an effort to control security vulnerability information so criminal behavior and innocent behavior actually looks different.
>People should not have agreed to buy things where there are parts of it they don't own that they don't even need, but they did.
I own zero IoT devices for the exact reasons you gave.
Frankly, I would prefer to change that state of affairs. I would also prefer far less waste. Tons of these devices end up in the garbage too. That is unacceptable and surely not sustainable.
I am not OK with partial ownership, unless there are clear obligations attached to the other partial owner that have real teeth.
Fact is, we already have law for this case, and that is the rental agreement. That is exactly what partial ownership is.
And when people are asked to value something they will be renting, everything changes. A big change is purchase price. That goes down.
What I see happening is that IoT companies' business models are priced as if ownership happens when it really doesn't. And that is not OK.
I also find putting that onto people disturbing, because it was not the people who made the choice to advertise a sale and then act as if it is a rental.
Out of curiosity - why should I be required to ask for permission from a given company to probe company-owned infrastructure?
What I mean here is that if there's a bug / vulnerability in a given company's infrastructure, then that company should fix it and not put the blame on a user that was affected by it (even if the device that communicates with that infrastructure always follows the happy path).
1) the probing almost always involves breaking the terms of the contract you made with that company.
2) it creates a paper trail of intent
3) it's not your property so why wouldn't you need permission to access it?
I am not sure how permission affects a company's ability or obligation to fix security bugs. I agree they should fix it.
We can make the law such that not only does the company approve the request, but they also have to disclose additional information that can help you find bugs. Idk, the point is I'm advocating for creating a system where researchers work with the company rather than as vigilantes.
> I don't think anyone would like it very much if someone came to their house and documented all the ways to rob it they could find, even if it's for research purposes.
The correct analogy would be if someone documented all the ways to rob a house that is currently mass-produced and sold on the market. And yes, as a consumer I most certainly would approve of such activity, especially if I've yet to make a purchasing decision. Or especially if I'm already living in such a house, I need to know that it is not safe.
In that scenario I would MUCH rather the company be aware someone is putting that list together, notify me in advance of the research being concluded, provide updates, organize and manage the contents of that list, offer solutions, apply the fixes in new models, and generally work with the people who already purchased the house.
I would not prefer someone to do it all in secret and then at the last second decide they want to inform the company.
Once such a thing gets broadcasted, there is inherent risk created for a lot of those existing owners that did not exist before. Opportunistic criminals are way more common than premeditated ones.
Also, if we gain the ability to monitor everyone who is currently probing houses for security issues, then a whitelist of people who pre-notified their intent would let us more reliably examine the people who might be looking to abuse the system.
I guess part of my underlying assumption here is that we are moving towards a surveillance state and there are no signs of that stopping.
> In that scenario I would MUCH rather the company ... notify me ... provide updates ...
Here is the problem - the company does not give a crap. You get robbed, and it's their fault? They don't care. But they will sue the researcher, because the researcher has discovered that it's their fault you got robbed.
And the ones that don't create a paper trail of not giving a crap
The researcher is protected from being sued by being granted permission and following any regulations created for ethical security research.
We can make security notifications from companies mandatory. Now if they try to hide something, and it comes out later, there is documentation of the cover up
Do you believe that your proposal increases the cybersecurity of society as a whole?
You focus a lot on the rights and conveniences of a company, but the rights of a company are not more important than the security of society as a whole.
There are good guys and bad guys out there looking for vulnerabilities. What you propose reduces the number of good guys more than it reduces the number of bad guys (since bad guys are less likely to follow the law). What you propose shifts the balance towards the bad guys and makes it more likely that vulnerabilities will be discovered first by the bad guys. You also propose security through ignorance; security via hoping that nobody notices.
Again, I would really like to hear you assert that your proposal would increase the cybersecurity of society as a whole. I did not clearly see such an assertion in your comment. I want to see an argument focused on the security of society as a whole.
I assert that we currently reduce our national security for the convenience of companies.
I proposed a preference for systemic solutions over building a soft dependence on white hat hackers.
This benefits society as a whole because it clearly delineates actions with intent. If doing X is always not allowed, then all you need to do is find people doing X and you can hold them accountable.
If you allow or disallow the same activity based on merit of intent, then you increase the level of plausible deniability to everyone who gets caught.
I am not proposing security through ignorance. I am proposing security through consent. Nowhere did I say anything about not allowing research, I only said that if you do it unsolicited then it should be considered a threat.
So, we could systemically allow for a right to research that involves notice to the company and their consent for you to test. It would not hinder white hats at all. If businesses resist for selfish reasons we can expand the law to prevent them from denying requests without a legitimate reason. For example, maybe it is okay for them to deny a request from an ex-employee with a grudge who has sent the company aggressive emails. Idk, maybe there are no valid reasons to deny. The point is we can create a framework that promotes security development above the table with all parties involved. And my proposition is that if that is possible then it should be preferred.
You attempt to solve the problem of chaos (think grey-hat) by expanding law enforcement--by enforcing order on every internet user world wide. That's going to require a lot of boots to squash a lot of faces. Curious kids who run port scans will stand before judges, journalists who press F12 will face the ire of the most powerful and decades in prison[0]. This will probably require some national firewalls as well. This will continue the status quo where companies leak the private information of countless millions and nothing happens, while individuals must be careful what they do with their own computer and their own physical devices.
I attempt to solve the problem by embracing chaos and empowering those who seek to do good in the chaos. I'd like to see our IT systems become so hardened that no amount of chaos can harm them. Let the grey-hats and black-hats run wild, it is possible to build our technology well enough that they can do no harm. This would require those with the most wealth and power in our society to do a little more, to take on some additional responsibility and demonstrate they are worthy of the trust and power we have given them. Let individuals be free and make the creators of our technology responsible for their own creations.
What you have proposed is what we already have, it is the status quo. When you hear about a major breach every other week, ask yourself whether or not it's working.
The status quo is not sufficiently codified. I am suggesting we codify it so that we can look at the rules and change them so that they make sense.
I also think it would be a good thing to have a legally protected avenue for people to declare their intent before checking for unlocked doors and such.
Imo a lot of the problems are coming from companies feeling like they are getting fleeced by security experts. If a company has acknowledged you as a researcher beforehand, then you have a pretty strong legal defense if they decide later that they don't like what you find.
I am not suggesting a new world order over everyone that uses the internet. People who stumble upon vulnerabilities without looking for them, or through incredibly basic means like a port scan, can be protected. We can feasibly list enough ways someone can uncover a security hole without making a direct effort to do so that the spirit of the law is sufficiently obvious, letting any judge include new ways that pop up on a case-by-case basis.
However, we cannot currently offer any protection to people directly trying to find vulnerabilities when such actions are identical to those of people who are trying to abuse them. The only possible differentiating action would be for someone to announce beforehand that they are aware what they are doing looks like criminal activity and to request permission to proceed.
The argument that we have the technology to make it infeasible to hack systems is moot and imo naive. There is cost, significant cost, to maintaining the highest level of cybersecurity. Cybersecurity experts are some of the highest paid IT professionals on the market right now.
So I do not see how educating people who want to look for vulnerabilities to reach out for approval on what they are doing is too much order, but requiring everyone who creates anything that uses the internet to successfully implement state-of-the-art cybersecurity defenses is not.
This is a very poor analogy. For one thing, casing someone's home is not interesting research. It's not news to anyone that locks only keep honest people out. You need physical access to break in. The legal system and the people nearby (neighbors and residents, and their firearms in the USA) are the main lines of defense here. Unlocked doors are a harm targeting one household.
Conversely, with vulnerable IoT devices, we're talking about internet-connected devices. The potential harm is to everyone on the Internet, not just one household, when they're taken over and made part of a botnet. An attacker can exploit them from anywhere in the world, including residents of hostile jurisdictions that are tolerant (or actively supportive) of such activity. Russia, North Korea, Iran, etc. The protections people have relied on for centuries to defend their residences from bad guys don't apply anymore.
These IoT devices can also be used to gain a foothold in your home network, which is usually a flat network. It's surprisingly difficult to find a "router" for home use at a reasonable price point that can set up VLANs, by the way. Even as a technical person.
The better analogy IMO is to building codes, where your property rights are limited by society's interest to keep your family safe, but more importantly, your neighbors safe too, because fires spread. It's still an imperfect analogy for a number of reasons. Cyberattacks are a relatively novel kind of threat. All analogies are going to be imperfect.
I think a better analogy can be drawn by just considering the physical version of some things. For IoT, you can say if someone discovers a specific brand of physical lock can be broken in unexpected ways, they should be allowed to communicate this in a way that benefits the users of the lock without facing any legal risk. For internet banking, you can discuss a physical vault that safekeeps everyone's gold, and say that someone who notices a broken lock should not be punished for telling the vault manager to fix the lock. Unfortunately the common situation is that the lock company and the vault manager will sue because they don't want to admit they put their users and clients at risk - it sounds absurd, but that's what happens in the electronic world.
Well, in this analogy the problem starts with how the person is noticing the lock can be broken in unexpected ways
Everything you said after that is a valid continuation from that, but the issue I am talking about centers around that "how".
Because locks have never actually been unbreakable, right? The main purpose of a lock, the generally accepted way that the lock keeps people out - is by existing, not by being strong.
We have higher standards for the lock in more serious applications, like a vault, but if you buy a vault door, put it in your garage, and begin testing it for vulnerabilities - I feel like it's reasonable to view that as criminal. I admit 100% that it could be a curious tinkerer, but I do not think it is unreasonable to tell the tinkerer that they can't do that without permission.
The building codes analogy still supports my argument. You cannot just walk into a stranger's home and inspect it for whether or not it is up to code.
I agree analogies are going to be imperfect, which is why it's important not to criticize an analogy based on where it fails but to work with it on the point it is meant to express - and then yes, if it doesn't actually convey the point then it could be a bad analogy.
I think it might help if we clarify WHY a lock keeps honest people out. If a house is locked, you MUST commit a crime to gain entry. So by nature of bypassing the lock, you are no longer acting honestly. It is not about what type of person you are, it is about clearly delineating honest actions from criminal actions.
If the door is unlocked, then a person could walk in and then pretend they didn't know better if they get caught. This is assuming we say it's okay to walk through unlocked doors
However, since we acknowledge it as criminal behavior to even test whether or not a door is unlocked - the existence of locks in general and the common knowledge of where they should be expected to be found establishes a barrier honest people know not to cross.
With respect to cybersecurity, I am proposing we accept a similar relationship while also creating protected legal paths for honest people to conduct security research.
The thing we can all likely agree on is what cybersecurity is and where it applies. By nature of knowing where it should apply, we establish a barrier that honest people should not be crossing without permission.
I agree that there is a lot of foreign danger involved with the topic and botnets are a concern. However, progress there is not going to be made by random hobbyists testing websites for SQL injections for fun. It's going to be made by cybersecurity professionals who can easily be educated about, and comply with, a regulation to declare their intent and get approval before poking around.
The rules for an approval process are a totally open book. It does not need to be restrictive or limiting to researchers
I think the analogy would be that someone doesn't realize they left their back door unlocked.
You can see an open door. You can't see an unlocked door unless you go up and try to open it.
If a stranger informed me that my back door was unlocked, then I would be immediately suspicious. Why were you at my door trying to open it without trying to contact me first?
> There is an inherent risk of your vulnerabilities being broadcasted somewhere either on purpose or accidentally once that information is collected and organized by the researcher.
A legitimate researcher is going to promptly notify you of any vulnerabilities they discover and you as a large organization are going to promptly remediate them.
But the trouble isn't that the law might impose a $100 fine on a smug professor or curious adolescent to demonstrate that some audacious but mostly harmless behavior was over the line, it's that the existing rules are so broad and with such severe penalties that they deter people from saying anything when they see something that looks wrong.
I once saw a vulnerability in the same way. Some website from a really powerful org presented masked info, but the info was completely unmasked in the api responses. I’ll never tell anyone. I’m not American and don’t want my payments to suddenly stop settling or visas denied for unknown reasons.
I agree the laws are too broad. I think we need to add layers of granularity to them. Create more of a framework for settling the rules on what is and isn't allowed. Maybe we settle on anything goes, but the company should be involved.
A legitimate researcher should be notifying the company that they are going to be looking for vulnerabilities in the first place. That is part of the distinction in behavior that I am encouraging. This way if someone is caught poking around for things to abuse unsolicited, at least there's a little more merit to holding them accountable. We are able to treat it more like the threat it is.
A good faith company can give researchers pointers on where to look. Maybe the company has a really good reason to prevent looking at certain things, and they are able to convince the researcher of that. Idk. The point is the framework for settling all that should be promoted rather than promoting people to act identically to criminals right up until they decide whether to sell / abuse the information illegally or notify the company and try to get a reward. Does that make more sense?
> A legitimate researcher should be notifying the company that they are going to be looking for vulnerabilities in the first place. That is part of the distinction in behavior that I am encouraging. This way if someone is caught poking around for things to abuse unsolicited, at least there's a little more merit to holding them accountable. We are able to treat it more like the threat it is.
The issue is this. You have some amateur, some hobbyist, who knows enough to spot a vulnerability, but isn't a professional security researcher and isn't a lawyer. They say "that's weird, there's no way...," so they attempt the exploit on a lark, and it works.
This person is not a dangerous felon and should not be facing felony charges. They deserve a slap on the wrist. More importantly, they shouldn't look up the penalty for what they've already done after the fact, find that their best course of action is to shut up and hope nobody noticed, and then not report the vulnerability.
The concern that we will have trouble distinguishing this person from a nefarious evildoer is kind of quaint. First, because this kind of poking around is not rare. As soon as you connect a server to the internet, there are immediately attempts to exploit it, continuously, forever.
But the malicious attacks are predominantly from outside of the United States. This is not a field where deterring the offenders through criminal penalties is an effective strategy. They're not in your jurisdiction. So we can safely err on the side of not punishing people who aren't committing some kind of overt independent crime, because we can't be relying on the penalty's deterrent regardless. We need the systems to be secure.
Conversely, if one of the baddies gets in and they are in your jurisdiction, you're not going to have trouble finding some other law to charge them with. Your server will be hosting somebody's dark web casino or fraudulent charges will show up on your customers' credit cards, and the perpetrators can be charged with that even if "unauthorized computer trespass" were a minor misdemeanor.
You can't give them a slap on the wrist if you assert what they are doing isn't criminal. Having an issue with the punishment model is no reason to throw out the law.
I think the subject has enough depth and complexity to it that we need to promote cooperation with companies. We can build protections against companies being dicks much more easily than we can codify the difference between malicious or innocent intent behind actions that are more or less identical up until damages happen.
I don't think I'm proposing anything that assertive. I'm suggesting we just put it all in the open and down on paper in a way that addresses most of the concerns and involves the company.
Documented evidence that companies were notified of security issues by people who declared that they were researchers, who the company approved to research, is a great thing to have in the fight against ignorant companies.
I completely agree that a degree of this is quaint with respect to a lot of the trouble coming from outside your jurisdiction. I just really don't see an issue with creating protected avenues for people to do research.
Opening someone's front door "on a lark" can get you shot in some states. I get that innocent people do technically illegal actions sometimes but that doesn't change whether or not an action is perceived as threatening.
So I recommend we start writing down the actions that need to be protected and at the very least give someone acting in good faith a bulletproof way to both conduct research and preserve innocence.
If you happen to uncover something accidentally and are concerned, then you can make the request afterwards, repeat your finding, and report it. So there is no need to feel you have to stay silent.
> You can't give them a slap on the wrist if you assert what they are doing isn't criminal. Having an issue with the punishment model is no reason to throw out the law.
The law is too broad in addition to being too punitive.
But here's an argument for throwing it out entirely.
There are two kinds of people who are going to spot a vulnerability in someone else's service: Amateurs and professionals.
Professionals expect to be paid. But if you go up to a company and tell them their website might be vulnerable (you don't know, because you're not going any further without their permission), and you send them a fee schedule, they're going to take it as a sales pitch and blow you off most of the time. Even if there's something there. To get them to take it seriously you would need to be able to prove it, which you're not allowed to do without entering into time-consuming negotiations with a bureaucracy, which you're not willing to do without getting paid, which they're not willing to do before you can prove it. So if you impose any penalty on what you have to do to prove it, professionals are just going to send them a generic sales pitch which most companies will ignore, and then they stay vulnerable.
Which leaves the amateurs. But amateurs don't even know what the rules are. If they find something, anybody's first instinct is "this is probably nothing, let me just make sure before I bother them." Which they're not really supposed to do, but in real life that's going to happen, and so what do you want to do after it has? Anything that discourages them from coming forth and reporting what they found is worse than having less of a deterrent to that sort of thing.
But subjecting them to anything more than a small fine is clearly inappropriate.
> We can build protections against companies being dicks much more easily than we can codify the difference between malicious or innocent intent behind actions that are more or less identical up until damages happen.
The point is that we don't need to distinguish them. We can safely ignore anyone whose malicious intent is not unambiguous, because we're already ignoring the majority of them regardless -- even the ones who are clearly malicious -- when they're outside of the jurisdiction.
> Opening someone's front door "on a lark" can get you shot in some states.
The equivalent action for an internet service is to ban them from the service. Which is quite possibly the most appropriate penalty for that sort of thing.
I think you're getting way ahead of the conversation. There is no way to know what the implementation would be like or how communication would go between researchers and companies, because if you can think of the communication problem today, then we can consider a solution for that problem in the implementation tomorrow.
At the end of the day, I am arguing for promoting people to try to work with companies, and to put out to the public a process for making that effort effective.
I feel like we agree but our solutions are opposite. The current laws are insufficient, so we need adjustments to the laws.
You (and others) propose we make hacking into systems fully legal, presumably because we can target malicious activity based on what they do with that access instead of the access itself. Is that correct?
I also disagree that a ban is equivalent to shooting an intruder. The connection is not the actor, the person using it is. If a person chooses to enter into a protected space they do not have permission to be in, then they are susceptible to consequences to that. I think just because it is easy to do it from your bedroom doesn't change it. Much like how virtual bullying is still bullying; virtual breaking and entering is still breaking and entering.
If we formally adopt this attitude then we also enable ourselves to pressure other jurisdictions to raise their standards to match.
An uncontrolled internet apparently has one outcome - malicious spam. That is what everyone in this thread seems to agree on, and the arguments against what I suggest all seem to start with the assumption "there is nothing we can do about it" and the corollary "there is nothing we need to do about it"
I think we can actually do something about it, and I think we ought to. But before all of that, I think the first place to start is making a clear legal relationship between security researchers and the private sector and debating the laws that should be in place to facilitate that in a fair way.
> I think you're getting way ahead of the conversation. There is no way to know what the implementation would be like or how communication would go between researchers and companies, because if you can think of the communication problem today, then we can consider a solution for that problem in the implementation tomorrow.
A major problem is that communicating with a large bureaucracy, even to just find a way to contact someone inside of it who will know what you're talking about, is a significant time commitment. So you're not going to do it just because you think you might see something, and as soon as you add that requirement it's already over.
You might try to require corporations to have a published security contact, but large conglomerates, especially the incompetent ones, are going to implement this badly. In many cases the only effective way to get their attention is to embarrass them in public by publishing the vulnerability.
> You (and others) propose we make hacking into systems fully legal, presumably because we can target malicious activity based on what they do with that access instead of the access itself. Is that correct?
So one of the existing problems is that it's not always even obvious what is and isn't authorized. Clearly if you forget your password to your own PC but you can find a way to hack into it, it should be legal for you to do this and recover your data. What if the same thing happens, but it's your own VM on AWS? What if it's your webmail account, and all you use it for is to recover your own account? You made an API call with a vulnerability that allows you to change your password without providing the old one, but you are authorized to change your own password.
There are many vulnerabilities that result from wrong permissions. You go to the service and ask for some other customer's account page, and instead of prompting for a login or coming back with "401 UNAUTHORIZED" their server says "200 OK" and gives you the data. Is that "unauthorized access"? What do you even use to determine whether you're supposed to have access, if their server says that you do?
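To make that scenario concrete, here is a minimal hypothetical sketch (the framework choice and every name in it are mine, purely for illustration, not any real service's code) of the kind of handler that happily returns 200 OK for any account ID because the ownership check was never written:

    # Hypothetical sketch of the broken authorization pattern described above.
    # The bug: the handler trusts the account_id in the URL and never checks
    # that it belongs to the logged-in user, so the server says 200 OK to anyone.
    from flask import Flask, jsonify, session, abort

    app = Flask(__name__)
    app.secret_key = "example-only"

    ACCOUNTS = {1: {"owner": "alice", "email": "alice@example.com"},
                2: {"owner": "bob",   "email": "bob@example.com"}}

    @app.route("/accounts/<int:account_id>")
    def account_page(account_id):
        record = ACCOUNTS.get(account_id)
        if record is None:
            abort(404)
        # Missing check -- this is the vulnerability:
        # if record["owner"] != session.get("user"):
        #     abort(401)
        return jsonify(record)  # 200 OK for *any* account_id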
This kind of ambiguity is poisonous in a law, so the best way to resolve it is to remove it. Punish malicious activity rather than trying to subjectively evaluate ambiguous authorization. It doesn't matter whether their server said "200 OK" if you're using the data to commit identity theft, because identity theft is independently illegal. Whereas if you don't actually do anything bad (i.e. violation of some other law), what need is there to punish it?
> I also disagree that a ban is equivalent to shooting an intruder. The connection is not the actor, the person using it is.
The justification for being able to shoot an intruder is not to punish them, it's self-defense. Guess what happens if you tie them up first and then shoot them.
You don't need to physically destroy someone to defend yourself when all they're doing is transferring data. All you have to do is block their connections.
> If we formally adopt this attitude then we also enable ourselves to pressure other jurisdictions to raise their standards to match.
The reason other jurisdictions don't punish this isn't that no one is setting a positive example. It's that their governments have no resources for enforcement or are corrupt and themselves profiting from the criminal activity whose victims are outside of their constituency.
Or if you're talking about the jurisdictions who do the same thing as the US does now, it's because their corporations don't like to be embarrassed either, and we could just as well set the example that the best way to avoid being humbled is to improve your security practices.
> I think the first place to start is making a clear legal relationship between security researchers and the private sector and debating the laws that should be in place to facilitate that in a fair way
Companies will want to try to retain the ability to threaten researchers who embarrass them so they can maintain control over the narrative. But that isn't a legitimate interest and impairs their own security in order to save face. So they should lose.
The embarrassment itself is a valuable incentive for companies to get it right from the start and avoid the PR hit. Nothing should allow them to be less embarrassed by poor security practices and if anything cocksure nerds attempting to break into public systems for the sole purpose of humiliating major organizations should be promoted and subsidized in the interest of national security. (It's funny because it's true.)
> An uncontrolled internet apparently has one outcome - malicious spam. That is what everyone in this thread seems to agree on, and the arguments against what I suggest all seem to start with the assumption "there is nothing we can do about it" and the corollary "there is nothing we need to do about it"
It's not that there is nothing we can do about it. It's that imposing criminal penalties on the spammers isn't going to work if they're on another continent, and correspondingly isn't a productive thing to do whenever it has countervailing costs of any significance at all.
You can still use technical measures. Email from an old domain with a long history of not sending spam and all the right DNS records, probably isn't spam. Copies of near-identical but never before seen messages to a thousand email addresses from a new domain, probably spam.
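To sketch what I mean by technical measures (a toy heuristic of my own invention -- the field names and thresholds are made up, not any real filter's rules):

    # Hypothetical spam-scoring sketch: weigh domain age/reputation and
    # mass-mailing patterns rather than relying on punishing senders.
    from dataclasses import dataclass

    @dataclass
    class Message:
        domain_age_days: int            # how long the sending domain has existed
        domain_spam_reports: int        # prior spam reports against the domain
        has_valid_dns: bool             # SPF/DKIM/DMARC records check out
        near_duplicate_recipients: int  # copies of this message seen recently

    def spam_score(msg: Message) -> float:
        score = 0.0
        if msg.domain_age_days < 30:
            score += 2.0            # brand-new domain: suspicious
        if not msg.has_valid_dns:
            score += 2.0            # missing or broken DNS records
        score += min(msg.domain_spam_reports, 10) * 0.5
        if msg.near_duplicate_recipients > 1000:
            score += 3.0            # blast of near-identical messages
        return score                # e.g. treat >= 3.0 as probable spam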
You can also retaliate in various ways, like stealing back the cryptocurrency they scammed out of people by using your own exploits.
What you can't do is prevent Nigerians from running scams from Nigeria by punishing innocuous impudence in the United States.
And one of the best things we can do is improve the security of our own systems, so they can't be exploited by malicious actors we have no effective means to punish. Which the existing laws are misaligned with, because improving security is more important than imposing penalties.
I'm much reminded of the NTSB approach to plane crashes: It's more important to have the full cooperation of everyone involved so you can identify the cause and prevent it from happening again, than to cause everyone to shut up and lawyer up so they can avoid potential liability.
So are you saying that I shouldn't be testing a product I purchased or a product that someone mandated I have in my house? I shouldn't have to notify anyone; I own it and I should be able to do with it whatever I please. In addition, if I do find an exploit I am not obligated to notify the company, nor should I be. A good faith company should be doing their due diligence and not releasing unprotected / poorly protected devices as is common today.
You don't own the inside of it. That's the core part of all this. Businesses decided to sell items with special conditions where you can own possession of the item as a whole but not the ability to dismantle it.
That's just a contract with terms. If you are in the position being addressed by my points, then you have already agreed to those terms.
Your problem is with the ownership model, which is something else entirely. I am saying that, since this model is already in existence and accepted by the public, we need to create some safeguards.
We cannot bypass the fact that you do not own the thing you are testing. So if you want to test something you do not own, then yes, I think involving the entity that does own it is reasonable.
This is what is referred to as "security through obscurity." If companies are going to publish/sell closed source software to the general public, and make any claims regarding its security, that should provide more than enough consent to probe it.
I think the difference is in what's yours and what's theirs. If it's yours, I agree. If it's theirs, I disagree.
The idea of absolute ownership is being eroded. You purchase a device but that device may use information you do not own. If you are manipulating the device to allow it to give you information you did not purchase and the contract you agreed to with the purchase was that you would not do this, then that is threatening. If what you learn by probing it allows you to breach the security of other people using the same service, then that is threatening.
If you are concerned about the device, I don't understand why we can't live in a world where you are able to vocalize that and give the device provider a chance for feedback before probing it for weaknesses.
If there is a security concern that you want to shine a light on, why is it that we need to address that concern in the dark? That creates too much unnecessary overlap with people looking to exploit those security issues when we might not need to.
> If what you learn by probing it allows you to breach the security of other people using the same service, then that is threatening
What is threatening is that the company that sells baby monitors and keeps video recordings of your family members being naked has zero accountability for their security and almost no chance of being caught if they misuse it.
Tampering with a device increases your liability compared to not tampering with it.
Don't install it in your home if you don't trust it. Don't buy things with terms and conditions where you don't own the device if you want to own the device. This is a different problem.
I have things installed in my home that I don't own. Electric, gas and water meters. The common factor with all of those is that their liability also remains on their respective utility provider companies.
You do not get to retain ownership and transfer liability. It's that simple. If you insist that you own the device, then YOU are fully liable for it.
I agree liability should be part of the discussion. If we create a legal framework around the subject and clearly identify what's allowed or not allowed by independent researchers and what the ownership model actually is, then this part of the discussion becomes easier and more easily applied to past precedents
Part of systemic improvement to security comes from the market forces that reward producers putting out carefully designed and tested products and punish producers that don't. Your suggestion of requiring prior notice, coordination, approval etc. incentivises them to defer the cost of proper development until there is a crisis, so they can rush out any rubbish product, and force users and researchers to do their security testing for them. Let them fear the unknown, with their necks on the line, and design accordingly.
I proposed protected legal channels for researchers.
It does not remove any pressure from companies. Their neck is still on the line.
It adds pressure to companies because it creates a paper trail. It enables good faith companies to work with researchers as well. They can even have researchers contact each other if they are both looking into the same thing.
There's a lot of good that can come of it
Companies can already rush out any product they want with no security. Lack of security is still a risk, regardless of how we address researching vulnerabilities
You proposed requiring consent from the producer of a product/service to have their offering probed. And did so with an example of a house not owned by that producer.
If the production company declines, that DOES remove pressure from that company.
Companies that rush out rubbish products can presently be named and shamed by independent, uncooperative or even adversarial researchers. Your proposal considers that research illegitimate unless said dodgy company decides to open itself up to scrutiny, which it obviously would not be inclined to do.
If you want to suggest the market would respond by not selecting products from such an opaque company, look into how many WhatsApp users care about auditable, open source code vs. those using Signal.
I proposed a preference for systemic solutions over a soft dependence on white hat hackers who operate identically to black hat hackers right up until they have a vulnerability to exploit and decide what to do with it.
In this thread I expanded the detail to include that the system to do this could (and imo should) be a legal framework that creates effective communication between companies and researchers.
I also try to adapt my language to parallel what the person I'm speaking with is trying to say, rather than telling them they didn't mean what they are telling me they meant. I apologize if I created a misunderstanding with my word choice.
Yes, I did mention requiring consent from the company as the ideal goal of the model. I am not suggesting the implementation of the model stops at just that one sentence. In other areas of law, if you can prove a message was received by a company, that can sometimes be considered implied consent if they do not respond to it.
We can also require that companies cannot simply refuse for no reason, but leave legal room here for any legitimate reasons to deny should they exist.
And so on and so forth.
It makes the intent of the researcher very clear.
Declining is obviously less pressure for the company in this situation, I agree. But it is not less pressure compared to the current situation. Companies currently have no obligations at all to researchers, and they certainly do not build security out of concern that white hat hackers will out them. They fear black hat hackers. Those are not going away, and if a legal framework exists for companies to work with researchers and better arrange fair conditions for both sides, I would bet companies will be MORE willing to allow research than less.
Because right now the company gets the research for free and then gets to decide whether or not they want to throw the researcher a Starbucks gift card. Or just press charges because they are assholes.
I don't really care what the market decides to do. The point of this is to protect the researchers regardless of what the market does. Because to your point, the market has already chosen poorly, which is why we have issues on this subject to begin with.
Kinda. I think we agree on the need to protect researchers. And if researchers are aligned with consumers rather than manufacturers then that's preferable because it's not the manufacturer's property once it leaves the building.
If protection is in place, that alignment will work because manufacturers' declining to be scrutinised won't prevent researchers from doing their job. But making protection conditional on manufacturer approval will suppress their work in those cases. And I don't know the practicality of establishing and enforcing this. So I oppose any conditionality generally.
> there needs to be consent from the company being probed for vulnerabilities
So they never give consent and no vulnerabilities are ever discovered?
If I make and sell bread, there could be a surprise food safety inspection in the middle of the night on Christmas Eve, but don't we dare inconvenience some software firm that holds intimate data on millions of people.
When you get a surprise food safety inspection, you are notified, right? They don't just break into your business without your knowledge and look around. You can refuse them entry, even if it comes with consequences later. They also aren't a random civilian; they have some sort of qualification to be conducting these inspections.
That's what I'm getting at. People keep assuming I am saying protect the business at all cost and it's not the case. I want security research to stop getting sandbagged by discussions of legality.
We should make a legal path forward for security research to be more accessible and to promote behavioral differences between someone conducting research and someone trying to exploit or abuse a vulnerability.
The point is in enabling the conversation. We can make the laws whatever we want that would help it be fair
White hat: can I hack?
Company: no
Later:
Company has 100 security request denials
Company info leaked
Company gets sued
Judge is presented with 100 instances where the company was offered free security testing and they refused
Judge raises issue from possible negligence to gross negligence
We can also only allow companies to deny requests for specific reasons
Why does everyone compare things to houses? If you want to be more consistent with your building analogy, IoT sold to the public or enterprises are more like bars, except that each user has their own privately owned bar that may or may not be stocked by a central liquor company. If a user wants to check it's not possible for someone to break into his bar, or slip poison into his booze shipments, or redirect the shipments altogether, that's legitimate in my mind. Even if someone buys a bar intending to hijack booze shipments, the liability is still partially on the liquor company if they have not taken reasonable precautions against known risks. Imagine buying a bar and the liquor company who you're forced to use says "if you rattle any of the doorknobs or test the alarm works, we'll sue your pants off and throw you in jail" - does that seem fair?
I was speaking towards probing the business not the things you own.
Using housing as a metaphor is common because it's an incredibly common thing people can relate to with personal experience, and is something people typically have relatively detailed intuitions built around what they are okay with and not okay with regarding it.
It got the point I was making across, but I do think there was a misunderstanding about what I was applying it to. I was referring to people who probe businesses security vulnerabilities on the internet side of IoT, not people who check for vulnerabilities in things they own on the T side of IoT.
As for the bar analogy, I agree that there is a lot of room for reasonable due diligence to test the security if there is potential for you to be at risk of its failings. This is more in line with my last paragraph, and I do still assert that solutions that avoid the need for people to verify security themselves should be preferable to ones that do.
If you've got 2 legally independent entities messing with the same device, and then abuse of the device does happen and it leads to damages - can you understand how much more difficult this becomes to sort out than if the company was solely responsible for the device?
What do you think about a person who bought a house and documented all the ways to rob it that they could find? As far as I am aware, there is no law against that; and that is more comparable to what security researchers are actually doing.
If they bought a prefab house which was sold to many people in the neighborhood, and part of the terms and conditions were "do not open up the walls", and the owner opened up all the walls to find the weak spots where it can be robbed, then that does seem criminal, no?
However, the owner should still have a right to validate the security of his house - so he should be able to request for permission to break the terms of his contract for the sake of security research. That is going to require approval from the company, who the contract is with. I think we should be looking to make some laws around making sure this communication can happen safely and fairly
At present, I'm not sure it's actually possible to sell someone a house with terms like that, since it would interfere with ordinary, even essential aspects of maintaining a liveable dwelling. It does highlight the oddity of the situation we are in with many IoT devices.
Yes exactly with the weirdness. Which is why I am leaning heavily into being a proponent of it needing some laws created explicitly for this situation that will clarify what's considered fair activity between researchers, companies, repairers, etc.
Sticking with the example, there is no reason a company cannot have terms that say you can't open the walls, while the state also has laws that, regardless of what the terms are, you are allowed to open the walls for maintenance purposes, renovations, etc., but maybe you have to notify the company about what you are doing.
It feels like territory somewhere in between the complete freedom of ownership and the total lockdown of renting.
I'd imagine there is a whole legal tug of war that would need to happen to know where the lines would or should fall, but the main point is that both sides are at the table and no one needs to be keeping their activity in the dark because of how uncertain they are about how the law is going to treat them
> but there needs to be consent from the company being probed for vulnerabilities or else I find it hard to consider it legitimate research, regardless of intent.
The reality of netsec has not borne out this model. In practice, you have two broad categories of companies:
- Ones that already have a culture of security, run pentests, have bug bounties, deploy patches, etc. These aren't the ones exacerbating the botnet-of-things writ large.
- Ones that frankly don't give a damn. Either they say "we don't need security research, it's secure enough", or they say they don't want it divulging trade secrets, or any of myriad excuses. No matter what, they don't consent to security research, even if they desperately need it.
The latter often persist even after multiple wake-up calls from black hat breaches. We have in front of us a golden opportunity for distributed, decentralized security research - white and gray hats basically do this for free. Instead we punish them, while the real problem stays far out of reach of the short arm of cyberlaw. Documenting the netsec research is a pretty clear indicator of intent ^1.
Honestly at this point, I don't think we can afford to not go this route. We should give amnesty to researchers who clearly aren't causing any damage, instead of throwing the book at them, which sadly is usually the case.
1 - yes I realize this gives a potential out to black hats. I'm fine with that. There ought to be enough evidence of actual damage to tell the real criminals apart.
People aren't white or grey or black hats. Actions are.
A person can wait until they find a vulnerability to decide what type of hat they want to be. That is not only possible, but also the most rational thing for someone to do if there are no negative consequences to declaring yourself one way or the other before you find the vulnerability.
All of the problems mentioned can be addressed above the table.
We don't allow people to test your defenses unsolicited in any other industry that i know of, and the cost of cybersecurity is very high.
We can make basic security defenses a law if we want to without giving cover to black hats.
You can't throw the book at someone who has approval to do research. Business does not need to have at-will rights over that approval; we can require sufficient reasoning to deny.
> there needs to be consent from the company being probed for vulnerabilities
What is the type of scenario that you have in mind here? Do you mean probing a web service for vulnerabilities, performing security assessments as part of pre-sale publications (think Consumer Reports, Anandtech reviews etc), or performing pen-testing on a device I bought and is now running on my home network? Because you appear to be arguing that I shouldn't be allowed to examine a device I own without explicit manufacturer consent.
I was speaking towards the internet side of things, where you do not own the infrastructure.
As a related note, I do firmly believe in right to repair, and if you own something you can do whatever you want with it.
Partial ownership seems to be a thing now. So I think there is a lot of missing framework around managing that properly.
Long story short - I think there is room for manufacturer consent / acknowledgement / notice to be part of the solution, and if it can be part of the solution then it should be. We may need regulation around that; it likely cannot be left solely to the company's discretion and may even need an aggressive "receipt but no reply by X days is considered consent" clause - but I would like to promote solutions that come with communication between the affected parties.
You give up consent for a device to not be scanned the second it is connected to the public internet. There are botnets that are continuously scanning all allocated IP blocks for potentially vulnerable devices - try logging requests to an open port 22 and take a look at the kinds of requests you get. That's the price you pay for connecting to an open world wide network.
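If you want to see this for yourself, here's a rough sketch of such a logger (assuming a machine you control with a publicly reachable port; binding to 22 itself usually needs root and a stopped sshd, so this defaults to a stand-in port):

    # Minimal connection logger: record who connects and what they send first.
    # Expose the port to the public internet and the scanners show up on their own.
    import socket
    from datetime import datetime, timezone

    PORT = 2222  # stand-in; use 22 only with root and nothing else bound there

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen()
        while True:
            conn, (addr, src_port) = srv.accept()
            stamp = datetime.now(timezone.utc).isoformat()
            conn.settimeout(2.0)
            try:
                banner = conn.recv(256)  # whatever the client sends first, if anything
            except OSError:
                banner = b""
            print(f"{stamp} {addr}:{src_port} sent {banner!r}")
            conn.close()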
Now, the conduct and what the operators of a massive scanning operation intend to do with the data they have collected should be regulated, and punishments should be imposed on those who use this data to facilitate attacks on others. But the ship has sailed for consent to connections from other devices over the internet.
The Ship has not sailed. The ship is still on its way to port. Complete internet surveillance is arguably an unstoppable force on the way to shore.
I think when it gets here there is going to be a lot more trouble for cybersecurity experts due to a lack of clear understanding around what is considered legal activity or not from them. Right now the obscurity is something they hide in - they can choose whether or not to reveal they found a vulnerability.
But what if that's not always the case? What if you get "caught" before you are able to show you had no intentions of doing anything malicious?
We can have the trial by fire we usually do, and let a round of innocent people face unjust consequences and use them as martyrs to create new laws - or we can use some foresight and build some legal frameworks in advance that enable researchers to be "by the book" and not worry at all about legal repercussions
I just don't think that sending port scans to random internet addresses is a big violation of privacy, or undue conduct for a government to participate in. Having your connection details public is the price you pay for connecting to the internet. If you don't like it, than run a private network and firewall the ports on your gateway - the default behavior of all consumer routers.
Quite simply, you will absolutely get port-scanned if you have a port open to the public internet today. Try it. No doubt the CIA has access to at least some of those botnets. We need policy protections that face the reality of the world we live in today, and harden devices that want to communicate over the internet in an automated fashion. That includes punishments for operating massive, systemic botnets, but also some auditing of critical infrastructure that is publicly accessible.
For all their problems, certificate authorities have largely let us figure this stuff out on the internet browser side, and I would argue that has had a positive effect on privacy and security. Now it is time to do something similar for devices that connect to the internet in an automated way.
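To make the browser-side analogy concrete, here is a rough Python sketch (the hostname is hypothetical) of the kind of check an automated device could be required to perform before talking to, say, its update server: refuse the connection unless the server's certificate chains to a trusted root and matches the expected hostname, which is exactly what browsers already do on our behalf.

```python
# Rough sketch of CA-backed verification for a device's outbound connection.
# The hostname below is made up; the point is that the handshake fails
# unless the certificate chains to a trusted root and matches the hostname.
import socket
import ssl

UPDATE_HOST = "updates.example-iot-vendor.com"  # hypothetical server

context = ssl.create_default_context()  # loads the platform's trusted CA roots
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection((UPDATE_HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=UPDATE_HOST) as tls:
        # If the chain or hostname check fails, wrap_socket raises
        # ssl.SSLCertVerificationError and we never get here.
        print("negotiated", tls.version(), "with", tls.getpeercert().get("subject"))
```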
I agree on the subject, but I think we can leave port scans out of the discussion - as you suggest, they are not an issue - and still apply everything I am saying to the other issues.
There would still be the upside of potentially addressing the scan spam as well, though.
Fully agree. If some company vanishes, consumers are left holding the shit end of a broken stick. It would sure help if there were protections for those who effectively volunteer their time and effort to keep things running for others.
I imagine someone in the many many comments has already suggested this. But just in case:
It would be great if all of my emails to security@somewebsite.com could be CC'd to security@fcc.gov, and that would immediately convey to me, somewebsite, and the FCC (and anyone else) that I am indeed disclosing and not ransoming.
I understand there would be a cost that the FCC would bear. I just think it would be a worthwhile cost to incur.
I like the general idea of improving communication / transparency.
Perhaps some branch of the government could provide a registry for responsible disclosure (e.g., `https://some-branch.gov/responsible-disclosure`). As a security researcher, you could notify the government of your intent to disclose as a demonstration of due diligence and good faith.
The registry/site could return a case/reference number that could be included with the disclosure to the manufacturer. In addition to discouraging an attitude of defensive reprisal, it might also impress a greater sense of urgency upon the manufacturer to follow through with remediations.
I'm not sure if it'd be necessary/useful but it might also be interesting to leverage zero-knowledge proofs so that interested parties could verify when the contents of a disclosure were made available without actually accessing the contents until after some attempts at remediation.
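As a lighter-weight alternative to a full zero-knowledge proof, a simple commit-reveal scheme might get most of the way there: the registry timestamps only a salted hash of the report, and the researcher reveals the report (and salt) after remediation. A minimal sketch in Python, with everything in it hypothetical:

```python
# Commit-reveal sketch (not a true zero-knowledge proof): the registry stores
# only a salted hash plus a timestamp; revealing the report and salt later
# lets anyone verify the disclosure existed, unchanged, at commit time.
import hashlib
import os
import time

def commit(report_text: str) -> tuple[dict, dict]:
    """Return (public record for the registry, private opening kept by the researcher)."""
    salt = os.urandom(16).hex()  # stops anyone from brute-forcing short, predictable reports
    digest = hashlib.sha256((salt + report_text).encode("utf-8")).hexdigest()
    public_record = {"sha256": digest, "committed_at": int(time.time())}
    private_opening = {"salt": salt, "report": report_text}
    return public_record, private_opening

def verify(public_record: dict, private_opening: dict) -> bool:
    """After the reveal, anyone can check the report matches the earlier commitment."""
    recomputed = hashlib.sha256(
        (private_opening["salt"] + private_opening["report"]).encode("utf-8")
    ).hexdigest()
    return recomputed == public_record["sha256"]

# Hypothetical usage:
record, opening = commit("auth bypass in AcmeCam v1.2 via unauthenticated /debug endpoint")
assert verify(record, opening)
```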
This seems like a pretty clear breach of First Amendment rights (we have a right to choose what we say, and who we say it to). It is probably a good idea for researchers to adopt this strategy, and obviously more protections are needed for researchers in this area, but eroding the Bill of Rights is not the way.
I'm a little confused. Can you explain how their proposal is a First Amendment violation? If you're referring to "could be CC'd to security@fcc.gov," I assume they mean making it an option, not making it mandatory. Some companies attack you for trying to disclose bugs and exploits -- claiming that you're attempting to ransom them.
I would like to add that most of the IoT problem is no patches at all. The firmware these devices get is usually bog standard, with some very minor tweaks, out of China somewhere.
It is a problem of vendor-locked products where you have to buy a hub to do an update - if there even is an update. If you want to get a good picture of how sideways updating can go, watch the Linus Tech Tips video where he wanted (and has the technical ability) to patch his light switches but could not even get the vendor to give him the correct firmware, or even say whether he could. Also, for many devices there is literally no way to do an update at all: they flash the firmware on the production line and that is the last update the device ever gets.
Also, "supported" and "actually maintained" can mean different things in the hardware world, so you will need to get your definitions right up front. To a hardware manufacturer, "supported" could mean: if the thing burns out, ship a new one. The firmware is a secondary consideration.
Another aspect you will run into is licensing. I can sell a device but may not have access to the code. Example: the vendor who makes that code went EoL on it 5 years ago. They will not sell me the code, and they may not even have anyone left working for them who could hand it over if they wanted to. They may not want to sell me that software anymore because they have a new shiny they want to sell me instead. So even though I want to update, I am stuck and cannot do it. I had one vendor flat out refuse to give me the older docs because the item was EoL and they had a replacement product that cost about 5x. That was just to communicate with the thing, not even to update it to a later revision.
> I would like to add that most of the IoT problem is no patches at all. The firmware these devices get is usually bog standard, with some very minor tweaks, out of China somewhere.
I was on a team that worked with a firmware vendor, from the US, for a bluetooth chip.
We would send in bug reports, they'd send us firmware with fixes. Except it was obvious they did not use source control because they would sometimes base patches off of old firmware versions that had the bugs they had fixed in newer versions. It was absolutely insane having to send emails like "hi, your latest patch is based on firmware from a year ago, can you please instead fix the firmware you sent us last month?"
Those sorts of places are fun to interview at. "So, what sort of source control do you use?" You would think everyone does that by this point - an easy slam-dunk question to ask and answer. I had one say "well, sometimes we check it into SourceSafe but usually we just copy it around the five of us on a fileshare" (this was like 4-5 years ago).
Re: the licensing issue, companies wanting to put a label on their product would probably want to extract similar guarantees up their supply chain. Especially with a voluntary program like the one the FCC is proposing, good practices won't become the norm across the market overnight. But maybe, at the very least, the segment of product and component makers that take security seriously will begin to grow. I encourage you to share your thoughts in an official comment.
As someone who designs IoT devices like these for a living, the device manufacturers here are in many cases the smallest companies in the supply chain and have very little ability to influence things upstream of them, especially for specialty products or companies entering a new market. It's often a major win to get a chipmaker to pick up the phone and sell us their product, much less receive any support at all.
I wish I could put a label like this on all of my products and I've been wishing for this for over twenty years, but the reality on the ground is that our support ends when the support for the individual parts in our product ends. We've looked at our supply chain periodically to see if we can replace parts with better documented/supported comparable parts, but frequently there just really aren't any better options.
This is a great idea in concept, but I fear the flaw in the FCC's proposed rulemaking is that it only indirectly addresses the root cause (the software, documentation, and support/updates provided by chipmakers for their parts). Furthermore, by focusing on device manufacturers, who are the weaker partners in the chain, the regulation is likely to punish smaller, more innovative manufacturers.
If it were forward-looking rather than retroactive, it would at least mean that chip manufacturers couldn't sell their undocumented/unsupported crap, because all of their buyers would be required to have that documentation and support?
If there are no buyers then their attitude should change.
This is incorrect, because you're assuming that all the buyers have to have it, when the chip manufacturer is selling into many industries/markets.
Since the specific "IoT device for the USA market" set of buyers is actually a small percentage of sales for most of the parts they sell, they really don't care to support their product from the IoT security perspective. This support is expensive, so it would very likely be cheaper for them to ignore the market completely.
Most of IoT is that way. We had sales cycles that were 2-3 years long, and in the end they would buy 300 units. Then I go back to my suppliers and say 'hey, support these 500 ICs that you sold me for 10 years from right now.' They would laugh me out of the room unless I showed up with big bags of cash. That instantly makes the whole project unviable to sell/support.
Yes, absolutely. These are the exact conditions of most of our higher-end products (500-1000 units sold of a particular configuration is common). It's funny to get laughed out of the room just for asking some chipmakers "can you sell us 1000 parts, please?"
It is tough to explain to people that 1000 is not even a lot for some of these guys. 1000 parts at, say, a fun price of 20 each is maybe a 20-25k sale at most. For some of these companies that is a rounding error. With lower-priced parts they just do not care much; there is no margin in it for them, especially if you are not coming back every few months.
> If you want to get a good picture of how sideways updating can go, watch the Linus Tech Tips video where he wanted (and has the technical ability) to patch his light switches but could not even get the vendor to give him the correct firmware, or even say whether he could.
It was a mess, but it may not be a good example, because part of the confusion was that there was no newer firmware: the devices were reporting their firmware version in hexadecimal, while the latest firmware version was listed in decimal.
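A toy illustration of that kind of mix-up, with made-up numbers rather than the actual versions from the video:

```python
# Made-up numbers: the device prints its firmware revision as a hex string,
# while the vendor's changelog lists releases in decimal.
reported_by_device = "22"   # hex, as shown in the device's UI
latest_in_changelog = 34    # decimal, as listed by the vendor

installed = int(reported_by_device, 16)  # 0x22 == 34
if installed >= latest_in_changelog:
    print("Already current; the 'missing' update never existed.")
else:
    print("Genuinely out of date.")
```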
He had a mix of random ones. They were telling him to buy a hub and hope for the best, or go through one of their vendors (more cost). Even if that exact detail was slightly wrong, that exact situation could very easily happen: a group of devices at random firmware levels with no nice way to tell what is what.
Thank you so much for asking HN! I can't think of a more informed, higher signal community to interface with and get open and honest feedback from. Really brilliant idea.
I don't have much input on this issue, but I wanted to ask that if you know folks in the US Copyright Office, that you recommend the same approach to them with regards to their upcoming regulatory stance on AI.
The copyright office is going to hear one-sided input from artists and the largest tech companies (seeking to build moats), but they need to broaden their inquiry to include technologists, researchers, and startups. HN is an excellent place to increase their understanding.
If you can, I would greatly appreciate it if you tip the copyright office off about HN!
The biggest problem isn't even new regulations. The liability for violations always tends to be a rounding error compared to profits. Then, even if there are teeth, there is no money for enforcement, which makes it all pointless.
Look at how the FTC and SEC have completely failed us in the 21st century. Better regulations would matter if we ever bothered to enforce the ones we already have.
IDK man, this is a pretty defeatist attitude and doesn't lead to any steps to improve the situation. There's an FCC commissioner here in the comments today who's interested in the community's input. If you don't think it will have any effect, that's your prerogative, but members of the public providing good technical opinions can only be a good thing.
I don't agree with the vibe that past failures mean regulations are pointless. Nothing is perfect at preventing abuses, but regulations do shape the actions of corporations and the terms of the discussion. Plus, an agency is not one person - the commissioner making this post entered office in 2020, so it seems overly broad to pin vague statements about agencies completely failing us on him.
It's better to get the policies in place now and then complain about the lack of funding/enforcement. The mere threat of enforcement will cause some companies to design their products better and when a major security incident happens because of a bunch of insecure IoT devices and people are outraged it'll be a lot easier to motivate action if we can say "We already have rules that would have prevented this entirely, but the FCC wasn't provided the resources to enforce them."
That's a clear call to specific action as opposed to "We don't have rules that would have prevented this, and also many of the rules we do have across several agencies don't have enough funding to enforce rules designed to solve other problems."
HN when government agencies have little leverage to enforce rules: The violations are a rounding error to profits! We need to make the laws more stringent.
HN when EU passes laws that have significant teeth in them and let them actually enforce them: This is ridiculous overreach! It will kill innovation and make it impossible to do business there!
The bigger issue is that simplistic takes expressed strongly with no room for disagreement tend to get the most upvotes from other people. The people who agree will upvote, the people who disagree will just move on, and the people who don't have an opinion will think the person sounds like they know what they're talking about and will upvote anyway. That's how you end up with back to back threads where completely opposite takes are highly upvoted, and both of them happen to be awful takes.
This sums up the situation: government regulations don't work. These regulations put us on the path of religious-like trust in government. We could instead be working toward push-button-simple network segmentation, with some kind of default filtering that the average home user can install.
> These regulations put us on the path of religious-like trust in government.
We don't need to have religious-like faith in government because we can vote for people who will do what we want them to and we can vote out the people who refuse to do their job. It doesn't happen without the people getting involved and holding their government accountable though. You don't have to pray when you can vote.
Without regulation you could only ever have religious-like faith in private corporations because they have zero incentive to act benevolently and you have zero power to replace a CEO who is acting against the interests of the public. You have no vote, so prayer is all you have left.
Maybe I could have some faith if regulatory bureaucrats were fired when there are major regulatory failures e.g. 737 max. Maybe I could have some faith if police state agency employees were jailed for FISA abuse.
Voting isn't enough because even elected officials aren't allowed to fire these people.
Specifically, the FAA allowed Boeing to use software to cover up a design flaw (cramming bigger engines under the wings, which causes a pitch-up problem) so the plane would appear to handle like older 737s. Apparently only a test pilot has been charged, for falsifying some paperwork. The 737 MAX should be required to get a new type certification due to the significant changes.
> Maybe I could have some faith if regulatory bureaucrats were fired when there are major regulatory failures e.g. 737 max.
If I were Boeing, I would hire the fired bureaucrat with a lavish comp package and make sure he's at every meeting, conference, get-together, etc. looking well-tanned and happy.
That makes negotiating with the fired guy's replacement much much easier.
How well did that work for bank oversight in 2008, and again in 2023 with SVB? The accountability of "my one vote will remove government's failed regulators" fails on the scale of $billions.
Your examples are situations where deregulation or a lack of regulation caused problems. Without regulations, prayer didn't work out so well. We all know IoT security is a problem, but after decades of that problem existing, prayer hasn't worked there either. No one person's vote can fix government, but collectively we have the option to enact change. I'll take having the ability to make changes over being powerless to make changes every time.
Is there an argument that the government failed in oversight with SVB?
I think this is a textbook slam-dunk by the government? They stepped in when the situation was _bad_, but not _catastrophic_ yet (mmmmaybe arguable), took over, and no depositors got hurt.
Is there an argument that this could've gone better, apart from "no banks ever fail"?
The Fed had to set up a swap/lending facility that weekend. It sort of seems like they make it up as they go.
The specific regulator is going to retire. "Abbasi and Mary Daly, president of the San Francisco Fed, came under scrutiny after a post-mortem report undertaken by the Federal Reserve found problems with how SVB was supervised."
https://www.msn.com/en-us/money/markets/key-san-francisco-fe...
And through regulatory capture the CEO of SVB was on the board of directors of the regulator!
I've worked in security before and I don't really think the government should be involved that much. There are so many different situations to consider. What I would support is the FCC coming up with a list of common patterns and then forcing devices to state which, if any, pattern they follow. I have a weather station, for example, that doesn't really need any security on the device end.
Does your weather station connect to the internet? (the discussion is about IoT devices)
If so, plenty of IoT devices have been used in botnets, as point of entry into local networks (hello printer, home assistant, file share...), or simply killed off with a DoS attack.
Yet government is made of people so it does not have God-like powers, even though it is often worshipped.
I would prefer to plug in a box that does this segmentation/filtering. I will pay for it if it can be rebuilt from available source code. Make it easy to install and set up. If nobody purchases it, then nobody cares, and why would government get involved? This seems like FCC scope creep.
Forcing every IoT vendor to do it overlooks the problem of each vendor having and maintaining the skillsets.
How about something like UL creating a slim standard and testing against that standard? The aforementioned box idea could apply to be tested against the standard.
If your ISP determines there is a botnet from your home IP and you refuse their request to fix it, then it seems appropriate for your ISP to take action or "enact punishment".
Under rule of law and the courts, it depends on your contract. Many residential providers offer service on a best-effort basis only. Guaranteed service with penalties is typically possible, if you are willing to pay significantly more.
Not sure if you're able to comment on this, but is there anything in place to mitigate the risk of automated astroturfed commentary e.g via LLMs in this and other cases?
> Not sure if you're able to comment on this, but is there anything in place to mitigate the risk of automated astroturfed commentary e.g via LLMs in this and other cases?
Look at HN account age, karma, and comment histories.