Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? (usenix.org)
878 points by zdw 66 days ago | 171 comments



I love Mickens' work, and think this is overall a great presentation, but I feel like it misses (or maybe just doesn't fully explore) an important point.

Start with the Internet of Things example. He chalks up the abysmal security record of IoT devices to two factors: it keeps IoT devices cheap, and IoT vendors don't understand history. And there's a lot of truth in both these assertions! But they are both just expressing facets of a deeper, more fundamental reason: IoT devices aren't secure because their customers don't demand security.

This deeper problem completely explains why the two higher-level problems he observes exist. Making your product secure makes it more expensive and slower to come to market than just leaving it wide open, and the IoT vendors know their customers care about cost and availability and don't care about security. So they do the rational (in the homo economicus sense of the term) thing and optimize for things their customers are actually willing to pay for.

The same causality can be observed in the ML world. Mickens asks why people are hooking ML systems whose operation isn't fully understood to important things like financial decisionmaking and criminal justice systems. The answer is that the customers demand it. ML is trendy and buzzworthy, so if you're a vendor of (say) financial systems, and you can find some way to incorporate ML into your offerings with a straight face, now you have an attractive new checkbox on the feature list your salespeople dangle in front of potential customers. And once the effectiveness of having that box checked becomes clear, you kind of have to do it, even if you know it'll be ineffective or even worse, or risk losing business to a competitor with fewer scruples.

All of which is to say that what we see playing out in both these scenarios isn't really the vendors' fault. They are instead classic examples of market failure. People end up buying shoddy products because spotting their shoddiness requires technical expertise they don't have; responsible vendors who try not to make shoddy products lose sales to irresponsible vendors who don't; eventually all the responsible vendors are out of business and the only products available to buy are shoddy ones. There are lessons to learn from this, but they're economic rather than technological.


This is like saying doctors should push cheap drugs that may or may not make your testicles explode because customers don't demand non-testicle exploding drugs.

We trust doctors to take into account all the nuances of medicine that laymen have never even heard of, and give us good advice. Because not everyone can be an expert on everything.

It's the same with software. We can't expect everyone to be an expert. It's up to our industry to act responsibly.

Sure, it's "market failure" insofar as duping uninformed people is a good way to make a quick buck, but the deeper issue is moral failure, a failure to take responsibility.


...and we don't just rely on drug makers, for example, to be moral and take responsibility. We have government agencies that _require_ strict testing of their safety and effectiveness. If we left it up to the market, we would get inferior results. The problem is, we have no FDA equivalent for tech security.


That's a great point made with a pretty suspect example. The FDA is very subject to regulatory capture.


Regulatory capture is certainly an important problem, but the pre-FDA record suggests strongly that the FDA we have is much better than not having one. I'm not denying that regulatory capture is a real problem, just saying that the current situation in tech security (no FDA equivalent) is worse.


Regulatory capture is certainly an issue, but it doesn't mean that the FDA isn't better than not having a regulatory regime at all.


The cost of the FDA is that the process is slower.

The benefit is that medicine is effective and measurably safe. It’s obviously necessary, and the supplement industry shows why.


I agree, this is a core reason we have a government.


We have FTC/FCC and EU/GDPR


Doctors would totally push cheap testicle-exploding drugs on their patients if there wasn't extensive regulation preventing them from doing that.

They do push life-explodingly addictive and harmful painkillers on their patients, despite knowing the harm it does, because regulations don't prevent them from doing that.

What would be the consequences of an FDA for IoT? Huge price increases, sudden workability of patents as a means of protection, but more security and better products?


> Doctors would totally push cheap testicle-exploding drugs on their patients if there wasn't extensive regulation preventing them from doing that.

And who would have had some of the most input into said extensive regulation?


Customers don't demand non-testicle exploding drugs because that's already the standard in the same way that customers don't demand software that doesn't wipe their disks at random intervals, because software already doesn't (careless usage of dd notwithstanding).

If drugs started exploding testicles you can bet customers would start demanding they didn't (male customers at least). Just look at the Thalidomide incident, I've seen it in the news within the last decade and it happened nearly 60 years ago at this point.


I think consumers are a little more savvy than people in this thread are giving them credit for. Sure, nobody wants exploding gonads, but most folks couldn't give a whit if some overseas teenager manages to sneak a look at the contents of their driveway. People just want a cheap camera to catch their neighbors letting the dog poop in their lawn, and if it means becoming part of a botnet, who cares.

The market has spoken, cheap wins over secure time after time. The consumers know, and they don't care, because to them the stakes are just not that high. Their genitals will be fine, and who wouldn't mind an extra set of eyes on the front yard.


Consumers are not savvy as a group. There is always an "eternal september", new suckers born every minute, that can be abused. Beyond that, there are plenty of ways that you can maintain consumer trust while abusing it at the same time. You can sell them products that hurt them in ways they don't understand, and you can control the media surrounding your product enough to ensure that they don't understand. Advertising has a basic purpose of making people aware of products, but it also serves to mislead them on the value of things, overstating the benefits and understating the costs.

This idea that people understand the total consequence of what they do with their money is so simplistic that it's stupid. Markets don't "speak" from a vacuum; they demand what their constituents are convinced is valuable, regardless of the accuracy of the valuation. Lies sell garbage all the time, and the consumer isn't to blame for wanting it; the professional, skilled, psychology-wielding liars who sold them on it are.

Hypothetical example: pay for a bunk study that concludes eating apples prevents hair loss, benefit for decades, with negligible repercussions to your business when the lie is uncovered. I'd be skeptical if you claim you can't identify several real examples yourself.


I think you are missing a crucial point. I as a consumer really do not care in the least if someone hacks my device. Worst comes to worst I either do some sort of factory reset or just throw it out, I was probably looking to buy the shinier version anyways. Who cares?

I really don't care if my tea kettle is part of some botnet. I can't even imagine a reason why I should care. I guess it sorta sucks for the people getting ddosed :/

For instance, my router cost me $20. It is probably full of security holes. I don't care. To buy a router that was secure I would have to pay more than $20. I would not feel any benefit from spending that extra money. So I don't do it.

On the other hand, as a developer I'm always thinking about security because it is fun and I feel a sense of responsibility for the things I make.


Yup, the "consumer" is not a source of moral force. It's an approximation of whatever purchase decisions people make.

So consumers would of course be happy if you made plastic straws - look at how many get sold!

Now if you told people they would not have plastics, and everything would cost 5x more because we don't have a cheap packaging option, OR told people that they couldn't transport liquids anymore because we don't have bottles, well, you can imagine those customers and consumers would be upset.

The economy is not moral.

Morality is laws and regulations which impose restrictions on the system to make it fair, environmentally friendly, less exploitative, etc.

We try and let the market resolve as much of this on its own, so that we can have market efficiency without tying it up with regulations.


Consumers are happy about plastic straws because it was conveniently (for the producer of straws) not communicated to them how manufacture of plastic straws is irresponsible and creates external costs to the environment at no cost to the producer.

You can't honestly believe it's both okay to mislead people in commerce and okay to put the onus of good judgement on them.


Outside of the fact that important information can easily be stolen.

Personally I think the consumer should face financial liability when iot devices are used in massive attacks that create problems for others.

Just because you chose a shitty vendor with a shitty product doesn't mean the entire internet should suffer.

I am a fan of things like brickerbot and I hope that sort of thing continues aggressively.


How can you reasonably ask a consumer to evaluate the security of a product when many don’t have basic education? Also, many reputable companies that make “good products” have security breaches, so you can’t just rely on reputation.


Force the consumer to force manufacturers to make less shitty products. Until that happens I hope brickerbot type attacks continue to happen for the cheapo crap.

Sure, good products can have a security flaw. But IoT devices and home routers are complete garbage. The consumer should be held liable for being a part of massive disruption of the internet.

It's the equivalent of manslaughter: you might not have intended it, but in this case you didn't do anything to stop it and helped cause millions in damage. I quite frankly don't care about their education. That's their responsibility, just the same as you need to learn to drive.


The way the consumer would force manufacturers to do this is by passing laws that would make manufacturers liable.

They would do this because of information asymmetry and the collective action problem. At the point of purchase, consumers don't have the information to make a choice, and they don't have the ability to will an alternative into existence so they can choose it. Improvement in collective outcomes is often hard to achieve purely through market means, which is why we don't rely on markets to solve all these problems.


Why make millions liable when a few hundred bad actors can be trivially dealt with?


Would you similarly not care if drug dealers sold drugs in your driveway, Viagra sellers sent spam from your email, and so on? I think you're being disingenuous.


Not just that, but the customers informed enough to care can do it themselves. Most people on this site care about security, and most people on this site can set up their own home automation, servers, security cameras, and/or speaker systems. So the people buying these future botnet nodes inevitably end up being the unsavvy.


Doctors have done that - and pharma firms would and have done worse.

The reason they don't is because there are regulations and trials which have to be passed before you can go forward.

And those are things which people on HN regularly criticize - pointing out that life saving drugs would be on the market faster if these regulations were not so "onerous".


What you are describing is an example of customers demanding non-testicle-exploding drugs, which is why we don't have them.

When a drug causes problems, customers often end up suing the manufacturer/developer of said drug. If doctors prescribe said drugs after it becomes common knowledge that it could cause a problem, they also might be sued for malpractice.

Are people suing IoT companies for poor security practices? If so, are they winning? Without that, what incentive is there for those companies to do anything more than they already are? It's not like there's actually any level of brand awareness for the vast majority of these devices, so it's easy enough to just ignore complaints and rely on the fact that nobody pays attention to your track record when it comes to this market.


nobody's ever sued me for leaving flaming bags of dog poop on your front porch before ringing your doorbell and making a getaway by segway while cackling madly. yet, every day, i resist the overriding temptation to do exactly that. why? well, gosh darn it, because it's the right thing to do!

i think the drive to reduce every bit of human behavior to economic incentives backed by a government force structure is ultimately counterproductive. would you agree?


>leaving flaming bags of dog poop on your front porch before ringing your doorbell and making a getaway by segway while cackling madly. yet, every day, i resist the overriding temptation

If there were millions of dollars to be made in the flaming dog shit Segway getaway business, I am positive many would succumb to the temptation.

So your comparison is unfair; it's easy for you to avoid such behavior because you get no benefit from it. Not securing a device is a significant economic win for the manufacturer, as explained by the thread originator. You get a device that "just works" as opposed to one with complex key-setup instructions that by necessity must default to the misconfigured state (else, you can bet everybody is using the defaults).


If you're trying to explain the behaviour of unusually upstanding, moral people, sure. If you're trying to deal with anything larger than a small and highly committed group, no.

> there are three classes of humans 1) those who will throw the rock at you with the mob 2) those who will not throw the rock and avert their eyes 3) those who will speak out against throwing the rocks

> the ratio is probably 90:9:1


I’d be a little more optimistic and put the ratio at more like 9:90:1.

We don’t typically go around continually throwing actual rocks at each other, so it is possible to make progress on these issues.


https://www.politico.com/blogs/media/2015/03/new-york-times-...

The author of the tweet quoted was speaking metaphorically based on his own experience. Virtually no one supported him publicly when he needed it.


Good for you, but I, and I'm sure many other people, will happily leave flaming bags of dog poop on your front porch before ringing your doorbell and making a getaway by segway while cackling madly if nobody ever sues us.


How much money do you earn by leaving flaming bags of dog poop? Can you get rich that way?


I agree, but in this system we indoctrinate our children to operate on profit motives. It took me many decades to understand that money is, ironically, worthless.


If you consider how doctors are happy to prescribe drugs that are not ideal (understatement) for their patients' health in exchange for money from pharmaceutical companies, your argument falls apart. Consider the opioid epidemic.


There aren't enough doctors in medicine to go around. There probably aren't enough doctors (as in PhD) in all the other technology industries supporting medicine.


Customers can't evaluate the security of IoT devices and, furthermore, they can't even evaluate what the downside of an insecure device is. So my printer is insecure: what does that mean for me? How much should I care?

At least with cars, you know what an unsafe car can do (kill you) and it still took Ralph Nader's book and citizen pressure to set up a federal agency to oversee car safety. Also, even when most people know that seatbelts are a good idea, we still have seatbelt laws because they mean fewer people die.

https://en.m.wikipedia.org/wiki/Unsafe_at_Any_Speed?wprov=sf...


They can for some of them if you give them this pic from Brian Krebs:

https://krebsonsecurity.com/2012/10/the-scrap-value-of-a-hac...

Got through to a lot of them that way. They were more likely to practice better computer security or buy less "smart" products that don't need to be smart.


That pic made my eyes glaze over. It's a good concept, poor execution.


Maybe they shouldn't have those devices then.


Let's evaluate what makes more sense: OEMs and programmers, who have an understanding of the software and hardware and the programming they undertake, being responsible for their own work.

Or blaming the users for not understanding what is essentially a black box, basically an entirely unknown quantity before (and after) you buy it, often even when the user has very high technical skill.

I know a lot of programmers are allergic to taking responsibility for their products; maybe it's time that changed.


> They are instead classic examples of market failure.

The way to fix market failure is well understood, though: regulation. You're arguing for regulation of the software industry, just as we have regulation of the medical industry or the oil industry.

(The software engineering industry is, I would argue, drastically under-regulated.)


> The way to fix market failure is well understood, though: regulation. You're arguing for regulation of the software industry, just as we have regulation of the medical industry or the oil industry.

That's an excellent idea. I hope your country regulates the hell out of your nation's software industry. Meanwhile I'll buy a rake to help me gather all the money your economy will throw my way, because somehow developing software in your nation suddenly became cost-prohibitive and your economy has no alternative but to outsource it to nations unencumbered by regulation.


Do you really think that the selfish drive to sell insecure, under-regulated software is an argument against regulation?

I don't think anybody denied that capturing an unregulated space by selling shoddy and cheap products is a great way for any ruthless actor to make a ton of money. I'm really not sure what point you're trying to make here.


We have regulations for software and software services in the EU (e.g., GDPR) and the US (e.g., DMCA, HIPAA), and the economy has not collapsed.


You don't understand. We should have a global government, and then regulate software all over the world at the same time.


Not necessarily. It could also be done by allowing people to sue makers of insecure software or hardware.


This is worse. This leads to lawyers making the critical decisions instead of regulators and auditors. The latter group at least has some familiarity with the subject area.


No, judges and juries decide lawsuits. They have the benefit of being harder to bribe than regulators.


One leads to the other. Lawyers start making a bunch of decisions on corporate strategy and product design because they have to anticipate the rulings of judges and juries. Their decisions are usually going to come late in the game though, leading to lots of last minute, shoehorned changes because they aren't able to review early enough in the development process (since lawyers are expensive and you can't get enough of them be involved early).


Only if you have money to get to court. Everybody else would be left depending on the good will of big companies. That's why courts should be the last resort, not the first. We need regulation, and if everything else fails the courts should be the way to go.


Liability is generally a much better approach than specific regulation. Lawsuits happen after the fact and concern actual harm suffered by actual people. Damages are assigned based on this actual harm. That means that in liability system the price of bad behavior is approximately the harm it causes, which is exactly what you want. Liability doesn't require everyone to actually go to court, because almost all lawsuits or threats thereof are settled based on expectations shaped by previous cases that did go to court. Further, class action lawsuits allow large numbers of harmed people to be represented in a single action at no cost to themselves.

Regulation, on the other hand, is an ex ante affair. It involves some central planning authority, whether Congress or some administrative agency, trying to create rules that they believe will prevent future problems. The regulator will always get it wrong to some extent, often to a very large extent. Rules can be too specific, stifling innovations that would allow actors to achieve the same or better results with different methods. They can be too strict or too loose. The rule making process is also necessarily slow, so regulations tend to come too late and linger too long after technology has moved on. Finally, regulations are ultimately political, driven by what will translate into votes, not necessarily efficiency. If they represent a right-wing constituency, that will mean looser regulation; if a left-wing constituency, tighter regulation.

What's interesting about liability is that companies will buy insurance for it. The insurance companies will demand compliance with certain rules in order to be covered--essentially private regulations. But unlike government regulation, there are multiple competing insurance companies. The resulting market for insurance means that the market searches for the optimal balance between harm prevention and profitability. Insurance companies have a strong incentive to devise the rules that provide the optimum level of security for lowest cost possible.


I agree that liability is probably the best approach and is long overdue for software. The problem is the standard for proving security nonfeasance. My thought is that if your product was found to have a security problem and you did not have a security audit performed by a licensed security auditor, then you are liable. But I'm not sure there are licensed security auditors in the way that, for instance, a CPA is licensed. Over time, if a security issue is publicly reported (e.g., a CVE) and you haven't fixed it within a certain amount of time, then you are also liable. The length of time a vendor must provide free security updates for a product should probably be defined in law, e.g., 2 years.


> It could also be done by allowing people to sue makers of insecure software or hardware.

What about free/open source software? Should society punish those idiots who had the gall to contribute their free time to a project that everyone can use free of charge?


Cap it at the value paid for the product.

If it's given for nothing, then that's what can be charged for its failure: nothing.


> The way to fix market failure is well understood, though: regulation

No, the way to fix market failure is to increase the aspects that cause markets to function and reduce the aspects that cause market dysfunction, and if that doesn't do the trick, then you fall back to regulation.

Markets change in small ways constantly which results in large changes over time, and even regulation that fits perfectly initially is doomed to affect the market negatively given enough time.

When it's important enough, we use regulation to ensure minimal levels of some attribute are maintained for the benefit of everyone, such as privacy or safety. Regulation might end up being a good response for part of the problem, but so could actually holding some companies liable for negligence. I suspect some combination might be best.

I think if you approach the problem of market failure with the idea that the only and best fix is regulation, you're likely to just punt problems down the road a decade or two (if you're lucky).


The DO-178B and now DO-178C regulations appear to be doing well. A whole ecosystem of quality-supporting tools, certified components, and QA experts has formed. Likewise, most or all of the early, secure products were designed for the TCSEC regulations. Although it had issues, the parts that increased assurance worked fine.

So, given TCSEC half worked and DO-178C currently works, I'd say regulation is the answer on this stuff. It just can't be too prescriptive. The situation would vastly improve if just a few things like checking inputs, avoiding unsafe code where possible, fuzzing, and so on were required.

And we also sue their asses in court for not doing this easy, provably useful stuff. That's how to get things done when regulators won't, along with using legal damages to force companies to take action.


South Korea regulated their software security!

That's why even this decade, people were required to use Internet Explorer 6 with ActiveX enabled, to access online banking, because it was the only system the government considered secure enough. We're talking well after IE6 had become a distant memory in the rest of the world.

Are you sure you want governments to regulate software security?


Good regulation doesn't get made because regulation as a practice is broken by malicious actors who sponsor our non-representative elected officials.

Remember campaign contribution limits? Yeah.


Here is the most classic and widely cited paper ever on market failure when customers can't tell what's good and what's a lemon:

The Market for "Lemons": Quality Uncertainty and the Market Mechanism

https://www.sas.upenn.edu/~hfang/teaching/socialinsurance/re...

It's strikingly prescient that Akerlof mentions 'group insurance' as another market that is ripe for failure due to a slightly different mechanism. Here we are, 50 years later, failing to understand this economic lesson.


That paper is interesting because it proved that the used car market doesn't exist. A proof of a false result is not a good proof.


It didn't 'prove' that used car markets don't exist; it showed that naive used-car markets with high amounts of information asymmetry can't exist, and in fact they don't.

All real used car markets have multiple layers of either testing and warrantying (which solves or reduces the asymmetry to a manageable level), legal remedies (many states have 'lemon laws' that push liability back to the seller), or are filled with sophisticated buyers (e.g. car auctions) who can actually tell which car is a 'lemon' because they bring a trained mechanic who will inspect the car in person.


> many states have 'lemon laws' that push liability back to the seller

It’s not that many, and they don’t work that well.

https://www.edmunds.com/auto-warranty/my-used-cars-a-lemon-a...


But how can a customer demand security? There is nothing that a customer can do to choose a more secure IoT device over a less secure one. Even if you look at known vulns, simply having vulns in the past is not necessarily reflective of current security posture. Beyond pentesting an app, how does a consumer act on their desire for a secure device?


There have been security evaluations of products where evaluators do both checklist stuff and try to hack the product. Consumers could buy the stuff that gets cleared through those processes. For instance, there are products on the market like INTEGRITY-178B and LynxSecure designed specifically for securely partitioning systems. They have networking stacks available, too. On occasion, a company would make things like routers with them. Virtually nobody bought them because they cost more than insecure devices or lacked unnecessary Thing X, Y, or Z. Intel tried with the iAPX 432, BiiN with the i960 CPU (a nice one), and Itanium with security enhancements that Secure64's SourceT uses. They lost a billion dollars or something over the three. So, those companies usually folded, withdrew the products, or switched to selling for outrageous amounts to the defense sector.

So far, almost no money is going into stuff with higher assurance of correctness. Those companies are losing money when they try though. So, the market naturally responded to the demand. I strongly discourage anyone from even trying again given the cost and fact that users won't buy it. Instead, I recommend making a product that's decently secure that can be secured later. Make it good enough to sell on its own with great marketing and so on. As money comes in, move a percentage of it toward improving its overall assurance. Basically takes a nonprofit and/or ideological group that wants strong security to happen at a loss or at least opportunity cost to get it done. CompSci people also make strong designs with FOSS code that often needs polish. Companies can pick up their ideas or prototypes to convert into something that can sell. Alternatively, team up with them to split the work into what each can financially sustain and are good at. That's happening with CompCert whose innovations come from CompSci but sold by AbsInt. K Framework people and Runtime Verification Inc. are another good example with one coming from the other.


OK so now you have one good security certification and a dozen BS phony ones, and plenty of international drop ship / amazon fba sellers happy to counterfeit the legit certification. Now what?


> The answer is that the customers demand it.

I have to say that this is even more of a non-answer than the motivations Mickens offers.

Sure, customers want X because it's trendy and seems to provide some vague value. But the underlying answer is that customers are willing to buy the latest crap, damn the consequences, because these particular customers are buying products whose failure mode is going to cost society a lot but isn't going to cost them all that much. IoT is a prime example: internet light bulbs knocking out hospitals or whatever, and no one holding anyone accountable, which is great for someone.

Software failures and security failures so far involve remarkably low costs to companies compared to costs to society. Liability provides some disincentive for dumping battery acid in a river (though that seems to be lessening, sadly) but liability for running or selling crappy software is the stuff that dreams are made of.


i agree with everything. however.

i recently subscribed to Curiosity Stream. its like netflix but only academic-ish documentaries. its "curated" by human beings. i can almost feel the lack of "algorithm". its weird how i feel about it, compared to youtube or whatever.

it reminds me a little bit of going to a "health food store" in the mid 1990s. they were all tiny, tiny niche shops usually owned by one person or a family. they sold weird stuff like organic tofu and soy milk. nowadays, you can buy both of those products in walmart and target.

something very strange happened... somehow the shitty mass market moved towards the tiny, higher quality, higher price niche products.

how did that happen?


That's how it always happens. Something becomes perceived as high quality and desirable. Due to its high quality, it is expensive. But many people want it, so there's an opening for a product that is similar enough for the "layperson", but doesn't cost what the "connoisseur" is willing to pay. Nine times out of ten, that means lower quality.


>Start with the Internet of Things example. He chalks up the abysmal security record of IoT devices to two factors: it keeps IoT devices cheap, and IoT vendors don't understand history. And there's a lot of truth in both these assertions! But they are both just expressing facets of a deeper, more fundamental reason: IoT devices aren't secure because their customers don't demand security.

It's not just price, though. You can't just make the devices more expensive to enable proper security; in a lot of cases the bottleneck is energy consumption, and that doesn't really scale with more expensive hardware. If your device needs to run from a coin cell for the next 10 years, you will be cautious about how much security you can afford. Even worse off are energy-harvesting products without even that small battery.


>IoT devices aren't secure because their customers don't demand security.

Apple offers the most secure devices, a tiny fraction of its consumer base demands security, or is even aware of how secure their products are.


My understanding of regulated industries (e.g., CE products sold internationally) is that there are two sides to the coin.

1) properly understanding history as a motivation for risk management and properly funding that quality control.

2) technical ability to implement solutions to the risks identified from step one.

For example, the founder of the company that designs and builds a medical device does not necessarily understand the negatives of pressing CTRL+ALT+DELETE when the software from the manufacturer freezes. People can do so many things wrongly in just a few simple steps.

We can think of dozens of ways to fix the problem but the C levels might only understand 0.5 to 1 of those solutions.

There simply isn't enough quality work going into a proprietary/closed system that is profit-driven.

In my little dream world if all businesses were open-source (code, process, profit margins, all of it) we'd be better at building off of past work and innovation would literally be cheaper. Maybe it's a pipe dream.


> IoT devices aren't secure because their customers don't demand security.

Customers cannot evaluate security, just like in cars and many other technologies.

Vendors need to be held accountable and fined by 3rd parties.


> The same causality can be observed in the ML world. Mickens asks why people are hooking ML systems whose operation isn't fully understood to important things like financial decisionmaking and criminal justice systems. The answer is that the customers demand it. ML is trendy and buzzworthy

But that's the same as with the testicle-exploding argument: ML is nowadays called AI, can drive cars and beat humans at any task (like Jeopardy or Go). So people assume from their experience that it just works, even better than any human. Of course, a big mystery bubble is also created around it, both by marketing people and ML practitioners (oh, and IBM).

Being an engineer working on "normal" systems, I somehow feel pressured to do something fancier like ML as well; according to some survey, 40% of engineers already do. But on the other hand, I realize most of this stuff is, as already pointed out in the talk, just there to target ads or work on meaningless financial systems. I was recently listening to a talk by an AI expert who was using AI for fraud detection in an online payment system. At the end of the talk, someone asked a really interesting question: so how do you connect that to your online system? He answered: we don't, it's just for compliance reporting. That's just stupid; I feel misled. It's cool to do statistics and simulations on your data, but calling that AI is incredibly misleading.


The bigger hurdle is that security actually works against usability, because you have to build something that works on arbitrary networks with who-knows-what configured and no guarantee that the consumer has access let alone knowledge of how to fix random networking issues. Granted there is plenty of low-hanging fruit with minimal usability impact, but if we want to talk about actual decent security that is a very difficult proposition for a plug-and-play consumer product regardless of customer demand.


Customers barely grasp identity theft with respect to bank accounts. Nobody understands the risks of a magic light switch.

We’re living in an era of laissez-faire commerce in the US. The biggest, most influential retailer routinely ships counterfeit products and nobody really cares.

That is a failure of the regulatory environment — economic forces aren’t powerful enough to deal with these issues. The kickback from government will be brutal and overreaching when it happens.


You assert that "IoT devices aren't secure because their customers don't demand security."

I'll assert that customers can "demand" recycling all they want but companies are going to continue to package their products in the cheapest thing possible without regard to its ability to be recycled. Speaking with your dollar only works if there is at least one company doing what you want.


Apple takes security (and privacy, it’s natural extension) very seriously. It’s not an open source process unfortunately, but they’ve shown a clear financial and strategic commitment to hardware and software level security. They also done an excellent job communicating this to users in the way that they ask for permissions, etc.

A lot of consumers explicitly choose this option, but it’s all wrapped up in “quality”. When I buy a MacBook I know they won’t cheap out on the casing, or the user experience, or the security, and I pay a premium for that.


I'm not sure that companies responding to obvious market failures isn't the companies' fault. We're not some group of mindless automatons min/maxing for profit (that's the purview of the ML under discussion here). Selling shoddy, dangerous wares should come with consequences.


>We're not some group of mindless automatons min/maxing for profit

Haven’t spent much time around investment bankers huh?


Another subtle point is that “operation poorly understood” could in fact be a desirable feature for a system that makes sensitive decisions, such as who is taken to the black room at border crossings.


With your comment about IOT security...

I believe HomeKit devices are a great example of devices that can almost be perfectly secured. A lot of IoT devices support multiple IoT platforms; for example, the Philips Hue supports IFTTT, Google Home, Amazon, and of course HomeKit, but the first three options only let your IoT devices work in your home with permanent wide-area-network access. Latency issues aside, this is bad for security because it simply opens more attack vectors to your devices and relies on third parties to manage your security. What's the benefit of relying on Amazon to manage your IoT devices? Well, for the average Joe, it means he won't have to buy a home "hub" (Apple TV/iPad) to allow remote access of some sort, and the setup process is generally easier. Problems arise because the IoT device is now responsible for accessing the Internet, and has to contain a much larger codebase.

HomeKit's design is that each IoT device talks to your local devices, i.e., an iPhone, an iPad, an Apple TV. Only if you set up an iDevice as a home "hub" do you allow remote access. HomeKit keeps things modular, which means that if a serious bug is found in the remote-access code, you can be confident that Apple will update the Apple TV's firmware, as opposed to an IoT device from a will-be-bankrupt company.

Now what if you have a rogue device on your local network that is hacking other devices? Well, this is where a firewall, as Mickens suggests in his talk, can help. Keep in mind that this is a problem for any style of IoT device, and can only really be protected against using a firewall. You can actually create something called a bridging firewall that inspects each packet passing through its network interfaces. Currently, I've bought a small WiFi router from MikroTik just for this purpose (only 25 USD). All of my IoT devices (and my less secure devices, like printers and audio receivers) are plugged in or associated with my MikroTik device, and the bridging firewall acts as follows:

a) drops Ethernet packets sent to my main router's MAC address (this stops any WAN access)

b) drops Ethernet packets sent to my home server's MAC address, except for ports 67-68 (this allows DHCP)

c) drops packets sent to any other IoT device

And that's it! I can generally assume my Linux Desktop and my MacBook are secure enough. A few reasons why this is not overkill. First, it separates my two networks without using any VLAN nonsense (and avahi/Bonjour nonsense), and creates a powerful firewall in between the two. Second, it allows my IOT WiFi network to have a different password from my home WiFi network. Third, it doesn't slow down my main router's WiFi speed, and I would hate to have a 802.11g device slowing down my wireless network. Fourth, I believe the firewall can be set up to stop ARP spoofing.
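The three rules above can be sketched as plain decision logic. This is a hypothetical Python rendering, not the actual MikroTik configuration; the MAC addresses and port numbers are placeholders:

```python
# Placeholder MAC addresses standing in for the devices described above.
MAIN_ROUTER_MAC = "aa:bb:cc:dd:ee:01"
HOME_SERVER_MAC = "aa:bb:cc:dd:ee:02"
IOT_MACS = {"aa:bb:cc:dd:ee:10", "aa:bb:cc:dd:ee:11"}

def bridge_filter(dst_mac, dst_port=None):
    """Return 'drop' or 'forward' for a frame crossing the bridge."""
    if dst_mac == MAIN_ROUTER_MAC:
        return "drop"                 # rule a: no WAN access
    if dst_mac == HOME_SERVER_MAC:
        if dst_port in (67, 68):      # rule b exception: DHCP
            return "forward"
        return "drop"                 # rule b: block everything else
    if dst_mac in IOT_MACS:
        return "drop"                 # rule c: no IoT-to-IoT traffic
    return "forward"                  # everything else passes

print(bridge_filter("aa:bb:cc:dd:ee:02", 67))  # forward (DHCP allowed)
```

The real bridge filter operates on raw Ethernet frames, but the rule priority is the same: WAN first, then the DHCP exception, then IoT-to-IoT isolation.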

Finally, HomeKit is one of the few IoT standards that let you truly own a device. In fact, after buying the device, you can set up your own local HomeKit controller in Python (https://github.com/jlusiardi/homekit_python), meaning you don't need to buy anything at all from Apple.


I have to ask, because IoT and lack of security are synonymous, who is providing secure IoT platforms to consumers? Is there history of such companies failing?


At Azure Sphere we're trying! Dev boards ship in a month.


Such a company would be maybe 5 years late to the market, so...likely would never happen.


Misapplications of ML are not demanded, they are sold. Demand is for clean solutions, not dirty ones that make new problems. The selling is where false confidence in broken solutions gets made. Greed is sitting at the root of institutional incompetence in most situations.


No. There are potential regulatory principles here that have nothing to do with the customer's demands. You sound like you are assuming a free-market, free-enterprise situation. That is not reality. There are hundreds of years of regulatory norms in other domains. https://en.wikipedia.org/wiki/Precautionary_principle


>IoT devices aren't secure because their customers don't demand security.

This is hard for me to agree with because, as a consumer, I constantly notice small things product designers do because they know better, things that I am sure none of their customers noticed or read about in reviews.

Producers often know better and do the right thing just because they're the experts, and even though nobody demands it.

It's just that IoT security is not something that these experts can do.

To use a recent cupcake analogy, it's as though every single bakery in the entire world that sold cupcakes sold ones that, to the few people who actually have good taste (which includes you and me), taste like shit. Why do the bakers only sell cupcakes that taste like shit? Because nobody demands cupcakes that don't taste like shit? No: if the bakers knew how, then at least some of them would be selling good cupcakes. It's because a good cupcake recipe doesn't exist anywhere on the planet. Anybody who is making a cupcake is making a shit cupcake. This is the state of IoT security: the experts are shit at it. You and I notice.

If the experts figured it out then bakeries would follow. What, you don't think anyone who goes through the trouble of manufacturing and boxing a product bothers to Google "how to make a secure IoT device" and read what they find? Of course they do. What they find is "hahaha whatever."

It's as though if you Googled "best cupcake recipe" all of the top hits said "I don't know mix some flour and butter and bake for a while, put some frosting on it. Whatever, it's a cupcake."

Here is the link: https://www.google.com/search?q=how+to+make+a+secure+iot+dev...

Do you see a single useable recipe there? I don't. All I see is "I don't know, mix some flour and butter and bake it? Put frosting on it. Beats me."

An actual cupcake requires milk, sugar, baking powder, eggs, and an actual recipe. Maybe some vanilla essence. These aren't even listed.

If the state of the art is shit, blame the state of the art.

A secure IoT device is like a watermelon soufflé. You're on your own.


"Using case studies involving machine learning and other hastily-executed figments of Silicon Valley's imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits."

For anyone who hasn't heard a James Mickens talk, do yourself a favor!


He's the guy who wrote "The Slow Winter". He's hilarious.

https://www.usenix.org/system/files/1309_14-17_mickens.pdf


In exchange for their last remaining bits of entropy, the branches cast evil spells on future generations of processors ... The point is that the branches, those vanquished foes from long ago, would have the last laugh.

More true than ever, now.



It'd be more hilarious if he didn't reference real problems with no obvious solutions short of a painful dismantling of our heavily exploited societal constructs.


Every time I watch or listen to a James Mickens thing, it physically pains me that he is SO CLOSE to my office, but there's basically no chance in hell of us poaching him. Oh well, at least we get glorious comedic writing and talks on a frequent basis.



He is like that in real life, too... I am in his department, and every single social event ends with him surrounded by a group of people, listening to his hilarious rants. At the same time, he's a great teacher, too. Somehow you learn things while listening to 90 minutes of his stand-up comedy.


He was like that at MSR also. Microsoft lost someone great when he moved to Harvard.


Thanks for the link. This is hilarious!

> This World Of Ours: Wherein it is revealed that 1024-bit keys cannot prevent people from sending their credit card numbers to Nigerian princes. (I think that 1025-bit keys might solve the problem, but nobody listens to my common-sense advice.)


My favourite line from my favourite Usenix paper:

"YOU’RE STILL GONNA BE MOSSAD’ED UPON"


This guy looks like the one who wrote those satire magazine-style articles.


He is that guy. Mickens is a legend.


But how can we be sure? Maybe there are two people with the same name who look exactly alike with the same writing style. If we put them all on the blockchain, do they have the same hash?


You just made me remember PC Accelerator. Anyone else remember that hilarious satire gaming magazine?


He is! I've seen him present a couple of times and he's great. Funny as hell, super engaging, and makes good salient points.


Thanks for this. I'd never heard of him. Been listening to his talks for the last 3 hours!


"I'm not saying that machine learning is the portal to a demon universe, I'm just saying that some doors are best left unopened."


This is an entertaining and important talk. Technology is not value-neutral, and insistence that it is is a larger meta-security issue in of itself.


Regarding the "we don't know how this stuff works" point: doesn't the FDA approve a ton of drugs where we don't know the exact mechanism by which they work? Do we need to know exactly and precisely how something works to know _that_ it works?


The overwhelming majority of medical treatments don't have intelligent humans actively trying to maliciously sabotage them. Many drugs can be made horrendously lethal or otherwise dangerous with little effort (often just by significantly increasing the dosage), but we don't need to care very much because it's not possible to silently and untraceably apply that effort from arbitrarily far away, and there usually isn't anything to gain from doing so even if it were possible.


At the end of Mickens' talk he suggests putting black box devices behind smart firewalls and routers. I'd say that's the equivalent role of doctors.

That is, the trifecta of bad decisions is black box functionality connected to an internet of hate (or unfiltered/tested input data) and given levers of power in society. Take away any one of those 3 and you're probably ok.


Is there any kind of sabotage other than malicious sabotage?


Ignorant, unintentional, self, incidental.

Stupidity is probably the biggest threat. Dietrich Bonhoeffer:

https://religiousgrounds.wordpress.com/2016/05/11/bonhoeffer...


I think the parent meant that "sabotage" is "deliberate destruction" by definition.


No, but drugs spend literal years in testing through various models and animal subjects before even being considered for years more of human trials. Then the benefits are weighed up against the known side-effects and if they stack up the drug is approved. Even after that sometimes the analysis is wrong and the drug turns out to be ineffective or even harmful.

Technology companies churn out things with barely a few weeks of testing at times and no oversight.


"Barely a few weeks"? Ha. Many places it's days, hours, or a few minutes.


Yes they do, but they circle the risks with statistics: when life expectancy and quality of life go up vs. the current best treatment, the unknown side effects can't be that bad.


In some cases, no. But let's say there was a drug that worked for 99% of people but totally messed up the other 1%. Well, then it's probably really worth understanding how to predict or avoid that. The same thing happens with image classifiers. Google had an embarrassing incident where their classifier was very impressive, until it got something wrong in a really embarrassing way. Do you just accept that sometimes image classifiers act racist? Or would it be nice to have a tool that can highlight what parts of the picture contributed most to the classification, so you can identify the pictures that would have prevented that error in the training set?
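For what it's worth, a crude version of such a tool is easy to sketch: occlusion-based attribution hides one patch of the image at a time and measures how much the class score drops. This is an illustrative toy; the `occlusion_map` helper and the dummy classifier are made up for the example, not any production system:

```python
import numpy as np

def occlusion_map(image, classify, patch=4):
    """Score each patch by how much hiding it lowers the class score.
    `classify` returns a scalar score for the predicted class."""
    base = classify(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            heat[i // patch, j // patch] = base - classify(occluded)
    return heat  # large values = regions the classifier leaned on

# Toy "classifier": score is the mean brightness of the top-left corner.
toy = lambda img: float(img[:4, :4].mean())
img = np.zeros((8, 8))
img[:4, :4] = 1.0
print(occlusion_map(img, toy))  # only the top-left patch matters
```

Real attribution methods (saliency maps, integrated gradients, etc.) are more sophisticated, but the motivation is the same: surface which inputs drove the decision so an embarrassing failure can be traced to its training data.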


Well argued, but I disagree. He's thinking too much in absolutes, while in practice people care about relative security.

Computer security has gotten a lot better; many organizations have achieved a security posture they are comfortable with. I think he's focusing strictly on application security; in reality, you care about maintaining C.I.A. (confidentiality, integrity, availability) for the data.

I don't care if the entire software stack is riddled with vulnerabilities and the CPU has unfixable vulnerabilities, so long as attackers (as defined by my threat model) fail to compromise the confidentiality, integrity, and availability of data I consider valuable.

The software might get exploited, but there are post-exploit controls; those may get bypassed, but attacker-facing machines would ideally not store valuable data. The attackers can move laterally, but there are detection and prevention measures for that. I mean, both in life and in computer security, one shouldn't expect absolute security; achieving an acceptable security posture should be enough.

I'm not prepared to handle 10 guys mugging me as I walk home, but that isn't my goal. My goal would be to defend myself against one or two attackers of the same weight class as myself.

There is a reason so much security appears bad: it's easier to clean up a breach, or just ignore it, than to implement an SDLC and have independent security staff. In the end, security improves only if it's cheaper to do so.


I disagree with your disagree.

Say we all lived 50 years ago and worked in ergonomics engineering instead of software engineering. People were fairly comfortable doing non-stressful work, which I guess was better than being pulled into meat grinders of The Jungle.

However, there was this new science indicating a new problem: repetitive stress injuries. Over the next 20-ish years, we learned that these injuries caused a ton of harm, so we started legislating protections against these types of stresses, which resulted in increased productivity.

Now switch to today. What makes the lack of software security best practices so different from repetitive stress injuries 50 years ago?

Software engineering feels like it will follow the same path as every other engineering discipline. First, we'll feel like we're gods. Then, we'll suffer losses. Finally, we'll be regulated.

Remember, every regulation is written in blood. Software will be no different.


I am not against regulation; I think it's needed, especially on the third-party code audit side.

What makes software security practices different is that 'computer security' is much more than how securely the code was written. A perfectly written piece of software could be rendered useless by incorrect configuration or bad admin security practices. Heck, even the CPU could become faulty and compromise security, as you've seen with the latest Intel bugs.

Yes, software security needs to improve by a lot, but look at the whole picture and include operational security, system and network design, risk assessment, and proper threat modeling practices.

A good and easy example: YubiKey. Google hasn't had anyone phished in over a year or so due to their YubiKey enforcement. Even when software security or bad human practices were a problem, the check and balance of the YubiKey prevented compromise of data security.

Next-gen AVs are so good that there are companies that haven't had a single malware infection in over a year. Insider threat is being accounted for too, as a result of ML plus behavioral analytics.

Modern security assumes the software is riddled with bugs. For example, if MS Word or the browser starts an unusual program like powershell or cmd.exe, modern endpoint solutions will block and alert. They assume browsers and document processors are filled with holes, so they account for post-exploit behavior, and that actually works well.
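That parent/child heuristic can be sketched in a few lines. This is a hypothetical toy (the process lists and the `alert_on_spawn` helper are made up; real endpoint products use far richer telemetry and rule languages):

```python
# Document processors and browsers that should rarely spawn shells.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "chrome.exe"}
# Interpreters an attacker typically launches post-exploit.
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "wscript.exe"}

def alert_on_spawn(parent, child):
    """Return True if this process-creation event should be blocked/alerted."""
    return (parent.lower() in SUSPICIOUS_PARENTS
            and child.lower() in SUSPICIOUS_CHILDREN)

print(alert_on_spawn("WINWORD.EXE", "powershell.exe"))  # True
print(alert_on_spawn("explorer.exe", "notepad.exe"))    # False
```

The point of the design is that it needs no knowledge of the exploit itself; it only flags behavior that a benign document almost never exhibits.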


An 8-year-old can write software and post it to GitHub. An 8-year-old can't build a house or a car from scratch (two things whose construction is regulated).

My point being: software is harder, and possibly impossible, to regulate. Is all open source going to be banned unless it's been written by licensed, certified programmers and gone through review by an appointed inspector? That seems untenable.


Writing software can't be regulated, but use of unaudited software can, especially for commercial use.


"I'm not prepared to handle 10 guys mugging me as I walk home,but that isn't my goal."

Muggings have an understandable statistical distribution, which allows you to take a calculated risk.

It's impossible to calculate the risk of software security problems, and almost by definition the problems are less contained than you think.

Will the next security breach hurt a few individuals, destroy the business, or hurt the entire country or the entire world?


That's what security professionals do. We measure risk and plan for the next breach.

I used that as an example, but in security we can measure the risk of specific data or systems being compromised. We can define specific security posture requirements that can be met. Incident response plans account for recovery and cost-efficient remediation of the next breach. Extensive IR playbooks can be defined for when software security fails.

Achieving security means being able to measure risk, place security controls, audits, and policies, and plan for IR. It does not mean elimination of vulnerabilities as a whole.

Like you said, the next breach could impact the entire world; the problem is that the world as a whole is not prepared for it. More realistically, corporations are far more prepared than individuals.

End users can't do their own computer security. Unfortunately, this can only be fixed by regulation, and that can only happen when people are scared enough. But even then, people don't understand technology well enough to demand such regulation. In my opinion, Silicon Valley's political involvement would be a roadblock, since it will inevitably get perceived as a liberals-vs-conservatives issue. I hope technologists become more socio-politically neutral just for that reason.


Here's the YouTube link to his talk: https://youtu.be/ajGX7odA87k. This made me laugh more than it probably should.


Sidestepping comedy for a bit, there are a lot of inscrutable systems that we connect to 'things that matter' all the time. The financial systems themselves are pretty damn inscrutable. Corporations are very often inscrutable.


But corporations and financial systems can often be analyzed in terms of the incentives of the humans involved, or at least the human incentives pushing back against changes to the crappy status quo. When they behave badly, they usually do so in predictable ways because there are humans in the loop somewhere.

By contrast, there is nothing remotely resembling a human mind anywhere in machine learning, and the failure modes are often, by our standards, insane (like thinking a picture of random noise is a cat). That creates a whole new level of danger when connecting to "things that matter".

And to the extent we already connect inscrutable systems to things that matter, we should be trying to make that problem better, not worse. "When you're in a hole, the first thing to do is stop digging"


I feel like you're making his point for him.


The word Mickens uses is "interpretable", which financial infrastructure is, and ML models are not.


Financial IT is interpretable, maybe, but is the financial system itself? You have a lot of agents taking actions that you can't really interpret from the outside, unless you say something vacuous like "this person made this trade because they thought it was good", at which point you may as well say "this AI model made this decision because it thought it was good".


What does "the financial system" mean? Obviously there's a level you can address with that term where, just like with ML, interpretability becomes an open question. But on most every level that is genuinely comparable to the role an ML model plays in a software product, finance is plenty interpretable.


There is also a lot of regulation around those entities because they're known to be inscrutable and are expected to be if no regulation existed.

Technologists tend to think that tech is value-neutral, and will therefore give good outcomes.


To quote, "This has made a lot of people very angry, and been widely regarded as a bad move."


Speaking generally, and not about this post - Keynote speakers often aren't technical (enough), but speak about topics that have technical underpinnings.

Take for example, dangerous management consultants who speak all over the place about AI, disruption, innovation, digital transformation, but don't know technology, which is the underpinning of all the things they're speaking about.


I get this feeling from even some of the biggest conventions there are...cough I felt fairly fricken disenfranchised during a recent convention for a popular containerization solution...


Good point. "Technologists in management /leadership" groups need to form everywhere to get the right people speaking about topics they understand.

It's ironic that there is an imposter syndrome among competent people, and incompetent people have no issue being imposters.


He says at one point "Patrick Thistle" and I grant this seems like it makes sense, but indeed the name of that particular association football club is "Partick Thistle" and Partick is a real place, albeit not one which today would obviously be in need of a professional soccer team, and it isn't where they play.


A few years ago, I was working at a company that was trying to build an innovative NLP system or, in more honest words, a chatbot that doesn’t suck. Spoiler alert: we failed.

There were a lot of things wrong with how this company was run and the product we were making, but I won’t go into details except to say that there were a lot of intelligent people forced to do silly things by a clueless micromanaging boss.

Anyway, one of the problems with chatbots is the one of prior knowledge. Chatbots and other NLP solutions don’t simply need to be able to understand and produce conversation, they need to have something to talk about, a model of the world, some basic facts, and it turns out it is very complicated to build in general.

So our boss decided that one way to fake it was to use one of those free corpora of public-domain English literature. Let’s just make our system “read” a lot of text, and in some way it will gain prior knowledge that way. So if it reads “the Sun was high in the sky”, it would understand that the Sun is something that has a position, and that one of the possible positions is “high in the sky”. So if someone ever asks the chatbot “where can the Sun be?” it could answer “The Sun can be high in the sky”. It was all pattern matching, nothing very smart about it, just something to fake some parts of the conversation and avoid having too many “I don’t know”s.
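A toy rendition of that kind of fact mining might look like this (entirely hypothetical Python, not the company's actual code; the corpus and pattern are invented for the example):

```python
import re
from collections import defaultdict

# Mine "<subject> eat <object>" patterns from raw text, then answer
# "what do <subject> eat?" by parroting the mined objects back.
corpus = (
    "The children eat carrots at noon. People eat potatoes every day. "
    "Some people eat mushrooms. Children eat cupcakes at parties."
)

facts = defaultdict(list)
for subj, obj in re.findall(r"(\w+) eat ([a-z]+)", corpus.lower()):
    facts[subj].append(obj)

def answer(question):
    m = re.match(r"what do (\w+) eat", question.lower())
    if not m or m.group(1) not in facts:
        return "I don't know"
    subj = m.group(1)
    return f"{subj.capitalize()} eat " + ", ".join(facts[subj])

print(answer("What do children eat?"))  # Children eat carrots, cupcakes
```

The failure mode in the anecdote follows directly: the bot can only repeat whatever patterns exist in the corpus, with no notion of whether a mined "fact" is fiction, nonsense, or hate speech.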

Of course, it was literature, including fiction. So caterpillars could smoke hookahs, but that was considered an acceptable risk; it was better to have something wrong than an admission of ignorance. In some way, don’t humans also repeat stuff without understanding it?

It kinda worked. If you asked “What do people eat?” it would answer “People eat potatoes, mushrooms and tires” or something like that. It was not very smart but somewhere in the literature the pattern “<Person> eats <X>” existed and it was parroting it. If you asked “What do children eat?” it would answer “Children eat carrots, rocks and cupcakes”. It was a bit silly but nice. But then we asked “Who eat children?” and the answer was, I shit you not, “Black people eat children, while howling to the moon and covering their naked body with feces”.

Except it didn’t actually say “Black people”, it used the other term, the one which is much worse.

The sudden realization that we have created an AI but an incredibly racist one did not make us abandon the approach. We just found the guilty piece of text in the corpus and expunged it. Then it just said “Companies eat children”. Depending on your politics you can consider that better.

To be fair, it was not really machine learning, but the story shows what can happen if you don’t control your input, either because it comes from the evil internet or because it is a large dataset that is too big to reasonably sanitize and was not built for this purpose.


Nice anecdote, thanks for sharing. You assert that it was not really ML; I think it was. It may not be true AI, but pattern matching/recognition is the core of ML. ML is just a stochastic and statistical approach to pattern matching; the hype around ML has kind of distorted expectations of the field.

You don't really have to control the input. It is not difficult to automate the sanitization by building a feedback loop of abuse reports that deletes patterns from the corpus. If you cannot release before significant cleanup, you could use something like Mechanical Turk or crowd-sourced paid users to test the system extensively, or be more thorough: generate millions of possible questions and answers and run content moderation tools on them, human-assisted or otherwise, or build a filter layer into the chatbot itself. None of these approaches gives you a guarantee that nothing will go wrong, of course; they give you a reasonable probability that it won't.


I'll post IMO the most interesting slide of the talk.

---

The Assumptions of Technological Manifest Destiny:

1) Technology is VALUE-NEUTRAL, and will therefore automatically lead to good outcomes for everyone.

2) Thus, new kinds of technology should be deployed as quickly as possible, even if we lack a general idea of how the technology works, or what the societal impact will be.

3) History is generally uninteresting, because the past has nothing to teach us.

---

How relevant is this? With the Cambridge Analytica scandal and now Google's censored search engine in China. How about self-driving cars? Cryptocurrencies?


I’d like to see James Mickens vs Yoram Bauman in a comedy roast battle.


I love some good James Mickens content. Highly encourage people look up his other work, especially his talks.


This was comic genius. It was also equally insightful. What a wonderful speaker and a wonderful talk. Did anyone else catch the Bob Ross painting references during the graphic of the number 4? That had me in stitches.

Thank you for posting this. This made my day.


Noticed this too. I recall one person in the audience laughing uproariously at it, probably the only person who got the reference.


In the section of the talk "how do we pick the weights of the neural net", the speaker states:

"the error then is going to be difference between what the classification of the neural net outputs and what the classification or the oracle will be."

Could someone say what is an "oracle" in this context?

He says this at 10:31 in the talk.


The oracle in this case is a piece of software that compares the neural net's outputs with pre-classified data.

A test oracle "magically" knows the truth, from the perspective of the system, is the idea. Sometimes oracles don't even exist but can be useful as a conceptual tool in deriving some other finding -- such as a proof by contradiction.
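In code terms, a sketch of how the error the speaker describes is computed against an oracle (hypothetical names; the oracle here is just a lookup into pre-labeled data, as with MNIST's tags):

```python
# Minimal sketch: the "oracle" is the source of ground-truth labels.
# Here it is a lookup into a pre-labeled dataset.

def oracle(example_id, labels):
    """Return the ground-truth class for an example."""
    return labels[example_id]

def classification_error(predictions, labels):
    """Fraction of examples where the net's output disagrees
    with the oracle's classification."""
    wrong = sum(1 for ex_id, pred in predictions.items()
                if pred != oracle(ex_id, labels))
    return wrong / len(predictions)

labels = {0: 7, 1: 2, 2: 1}        # oracle: human-tagged digits
predictions = {0: 7, 1: 3, 2: 1}   # neural net outputs
print(classification_error(predictions, labels))  # prints 0.3333333333333333
```

Training then adjusts the weights to drive that error down.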


The oracle is each digit that's tagged to each digit image, which exists (in the case of MNIST) because someone sat down and tagged every image.


The oracle = magic box that always gives the correct answer.


Thank you. Googling offered very little help as it just returned results regarding Oracle the company and their ML service offerings. Cheers.


Because so many people and companies treat security as a secondary thing, and in most places improving security just means a few really simple changes.


Fixing security is quite possible.

Install a backdoor, go to jail for "exceeding authorized access".

Fail to fix a security bug, get sued for negligence.

Make it public policy that license contracts cannot override those responsibilities.


>Make it public policy that license contracts cannot override those responsibilities.

This would be a disaster for open source. Who wants to write software for free if you can get sued for a bug?


Make it apply only to paid software, or software used commercially. Then you get what you pay for more often. ;)

Also, making companies liable for pushing open-source software into commercial use might be a way to get contributions that improve its quality. The companies can get sued, and they're financially benefiting from the software, so they might invest some money in those developing the code to make sure it meets whatever the standard is. It's not the best incentive structure, but it's an incentive structure. Right now, most can freeload off code that also might be shoddy enough to affect their users.


It might work to put the onus on those deploying the software. You can still freely publish what you want because free speech, and the legal responsibility starts exactly where it should: when the bugs have a chance to hurt someone other than the deployer. Said deployers, however, will be more highly motivated to ensure their software is secure, and will probably wind up with some sort of homegrown software certification process.


Quite the opposite: if you give a product away for free, you obviously cannot be fined for it.

Yet, open source can be vetted, and people can be paid to review and vet software.

Debian developers review software before uploading it and often do additional work on hardening it.

The distribution then freezes to ensure maturity, let people discover vulnerabilities and backport fixes.

https://www.cip-project.org/ builds from Debian and goes even further by supporting releases for decades.


If it helps clean up the npm ecosystem mess, would that really be a bad thing?


I think it's implicit in that proposal that the amount of software available would massively decrease. That's not necessarily a bad thing.


I think that's a ridiculous statement. Should we also limit how many books are written and who can write them?

What is the difference?


If you buy a book and it turns out to be trash, is that negligence on the part of the author? Is your safety at risk because of it?

You could maybe argue that this is true for textbooks, but not much else.


Start a marketplace and only accept listings for products certified to meet certain minimum security standards. Publish extremely clear and accessible guidance on the requirements and how to achieve them for certification. Gradually increase the requirements at a pace that the industry can keep up with. Advertise heavily on news reports of high profile security incidents.


In the not too distant future: all security vulnerabilities are resolved by updating the documentation to include mention of a previously omitted feature.


How about a law where someone who finds an exploit can claim a bounty against the company selling the product or publishing the website? This doesn't address the machine learning part, but it does address IoT and security generally.


That's because the metrics keep changing.


Oh boy Mickens must really hate blockchains and uncensorable platforms like a̶s̶s̶a̶s̶s̶i̶n̶a̶t̶i̶o̶n̶ prediction markets


"Blockchains Are a Bad Idea: More Specifically, Blockchains Are a Very Bad Idea."

https://www.youtube.com/watch?v=15RTC22Z2xI


Eh, that was a pretty bad lecture. Most of the technical problems he talked about are easily solvable.


After he started outlining his solution, I was disappointed he didn't go on to say "gotcha! I'm describing Git with signed commits! Hahaha!"


This is an amazing lecture!


No security company wants to make the security industry go away; it's that sad.


Makes me think of the recent XKCD on voting machine software: https://xkcd.com/2030/


"Would you want Kingsley doing these things?"

Depends on how mission critical <thing> is and how accurate Kingsley is?



