
We can no longer leave online security to the market - jaredwiener
https://www.nytimes.com/2018/10/11/opinion/internet-hacking-cybersecurity-iot.html
======
GreenToad5
Government regulation? They can't keep themselves secure. There was just a
post here a couple of days ago saying how vulnerable the DOD's systems are.
How are they going to police others when they can't police themselves?

I work in the banking industry where security IS regulated (by the FDIC). We
have government auditors come and review our technology once a year. These
guys don't know what the hell they are doing. We have had blatant security
problems (now addressed) that they couldn't see right in front of their noses.
Community banks have terrible security. Larger ones are better, but still rife
with problems.

I fail to see how government regulation and intervention have helped in my
industry, or how they would help in any other. If by regulation you mean that we
would get fined if some data got compromised, that already happens through
negligence lawsuits. It is not an effective motivator though.

In my experience, the threat/worry of bad publicity is actually the best
motivator in a company getting their security up to par.

~~~
gkya
I'm going to exploit this comment, which sits at the top of the thread above
another comment arguing against government regulation in the software
industry, to tell you guys this: hopefully this industry will be regulated from
top to bottom before too long. GDPR came; hopefully more will come, w.r.t.
security, privacy, and even UX standards (e.g. all companies should be
required to accommodate all sorts of disabled people, probably by allowing
assistive tech in the browser to work properly on their websites).

You guys will not, and do not want to, fix the status quo where shitty
software is pushed onto us. You guys will not stop implementing unethical, "aggressive"
software. So someone should be watching over you, entrepreneurs and devs, and
that someone is the government.

Government regulation need not be perfect. But it needs to be there. That means
companies will be more incentivised to keep their shit together. Surely your
bank would be doing worse if nobody was watching over it. If more budget and
working time are devoted to such regulation, it will get better.

I understand that no regulation is a strong political position in the US, but
I call bullshit on it. I wouldn't bother writing, as I'm mostly on the user
side of things these days, but I wanted to write this given that most of you
here are devs. It is not about some silly social network or an irrelevant SaaS
anymore. The world runs on this, software is as important as medicine and food
to our livelihood, and the software industry needs to be regulated like the
medicine or food industries are. Something as simple as Twitter or Facebook
affects the lives of the masses. You'll have to get your... act together.

~~~
beaconstudios
Your argument is that banks don't have the will to fix security issues. The
parent was arguing that security is hard and that the government is not
particularly competent at it so is not in a position to define raised
standards. You're not even having the same conversation.

~~~
mhjas
> The parent was arguing that security is hard [...]

It isn't though, at least not compared to the state of things. Pretty much any
government would be competent enough to mandate some sort of two-factor
authentication that would greatly improve security and make a lot of phishing
and hijacking a thing of the past. Of course different governments would have
different success rates, if not in terms of security at least in terms of
elegance. But that is like everything else. People die every day from lack of
road safety and healthcare.
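The kind of mandate being discussed could be as simple as requiring TOTP-style one-time codes (RFC 6238), which need nothing beyond a shared secret and a clock. A minimal sketch in Python (stdlib only; the function name and parameters are illustrative, not from anything in this thread):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset
    # given by the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The point of the sketch is that the mechanism itself is small and well specified; mandating it would be far less ambitious than most existing safety regulation.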

~~~
candiodari
Oh yeah ... I see it now. Instead of the "do you accept cookies" in-your-face
idiocy, we now need to identify ourselves using 2-factor authentication on
every website.

That sounds SO great.

Obviously there are no realistic security measures that are 100% effective.
All this will amount to is further cementing the power of large internet
companies. You know this, so why ask for it?

~~~
mhjas
"Do you accept cookies" is only relevant because there isn't a separate login
mechanism in HTTP. Actually knowing whether you are sharing data with the
website, and what website that is, would be a major improvement. Security
measures don't have to be 100% effective. Just like with road safety, you should
focus on removing the impact of flaws, not on preventing flaws as such. A separate
authentication mechanism would remove a large amount of security issues,
including potentially phishing and password leaks entirely. These common
security issues of compromising the system of the user or the company would
simply not have the same impact anymore.

A not-insignificant part of the large Internet companies' power comes from the
fact that they are the only ones who can handle security, or whom people trust to handle it. It
isn't that hard today to create your own e-mail system or smart phone. But
managing those systems, especially for a reasonable cost at scale, is just
beyond what most new entrants in the market can handle.

~~~
candiodari
A government-mandated authentication mechanism. This question is almost a
joke: what could go wrong?

Everything can go wrong.

> A not insignificant part of the large Internet companies power

So it's about breaking the power of large internet companies? Figures. Can we
please do that WITHOUT destroying the web? The last regulation that tried to
break the power of large internet companies was the GDPR, and that has
significantly entrenched the position of the large internet companies instead,
while creating a ridiculous amount of inconvenience for everybody. This ...
will do the same.

People WANT to share that data. Or perhaps I should say, they want the things
that happen when they do. Quick searches that get them the products they want,
on Google, on Amazon, on clothing shops and on tons of small webshops. Even
the obnoxious image ads. People want them.

That means that a login mechanism will just be an extra hurdle with zero of
the effects you want.

------
pg_bot
I disagree with Bruce's assessment that government regulation is the best
solution to this problem. As someone who has read a lot of government
regulations regarding technology, I have often found existing regulations to
be unsatisfactory when it comes to actually protecting consumers.
(recommending old and broken encryption schemes, arbitrary/nonsensical
password requirements, etc.) The speed at which the industry moves far
outpaces the ability to regulate effectively.

I would much rather see software engineers follow the lead of electrical
engineers and embrace non-profit (or even for profit) certification companies
à la Underwriters Laboratories. It would be easier for consumers, as they could
just see a seal of approval and know they are getting a quality product.
(Think LEED certification for buildings)

If anyone is interested in starting such an organization, let me know, as I do
think it could do a lot of good in the world.

~~~
koolba
Having the government outline minimum penalties for data breaches would go a
_long_ way toward fixing the problem. It’s much easier to justify fixing a
known issue or dedicating time to updating dependencies if you know there’s a
defined cost (per customer!) of failing to do so.

~~~
AnthonyMouse
> Having the government outline minimum penalties for data breaches would go a
> _long_ way toward fixing the problem.

Then companies start covering up data breaches because disclosing them would
cost millions in fines, resulting in people not even knowing when they've been
compromised.

You also have the problem where politicians/media have no idea what they're
talking about, e.g. calling the Google+ issue a "data breach" when it was
actually a vulnerability discovered internally with no evidence of anyone
having ever used it. If that's the standard then every time there is a
vulnerability in a major operating system or TLS library, no one will be safe
from the litigious trolls.

~~~
tomjakubowski
> Then companies start covering up data breaches because disclosing them would
> cost millions in fines, resulting in people not even knowing when they've
> been compromised.

Couldn't you make this same argument about any law that punishes bad behavior?
As an extreme example, if we make murder illegal, that incentivizes covering
up the act, at the expense of closure for victims' families. It seems flawed
to me.

~~~
AnthonyMouse
It's a lot harder to cover up a murder than a data breach. People notice when
someone turns up dead or mysteriously disappears. If some criminals break
into your servers, who has any way to know other than you and the criminals?

There is also the issue of intent. Murder is illegal when you intend to do it.
Nobody intends to have a data breach. In that case sunlight is more important
than punishment because it's in everyone's interest to prevent it happening
again, which requires understanding how it happened, which requires
cooperation. Putting otherwise-aligned people on opposite sides creates
unnecessary conflict at odds with the common goal.

~~~
pka
I'm not sure you could cover up a data breach _that_ easily. Those data dumps
are going to be sold on the black market eventually, and I speculate that in
many cases government agencies will be able to identify unannounced breaches.

Slap a 10x (or even 100x) fine on companies whose data breaches are discovered
independently and covering stuff up won't look like such a good idea anymore.
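A back-of-the-envelope expected-cost comparison makes the incentive concrete (all numbers below are hypothetical illustrations, not from any real regulation):

```python
# Expected cost of a breach under a disclosure regime with a cover-up multiplier.
# All numbers are hypothetical.

def expected_cost(disclose, base_fine, multiplier, p_caught):
    """Disclosing costs the base fine for sure; covering up costs the
    multiplied fine only if the breach is discovered independently."""
    return base_fine if disclose else p_caught * multiplier * base_fine

fine = 1_000_000  # hypothetical base fine

# With a 10x multiplier, covering up only pays while the chance of
# independent discovery stays below 1/multiplier = 10%.
assert expected_cost(False, fine, 10, 0.05) < expected_cost(True, fine, 10, 0.05)
assert expected_cost(False, fine, 10, 0.30) > expected_cost(True, fine, 10, 0.30)
```

So the multiplier has to be calibrated against how likely independent discovery actually is, which is the crux of the disagreement here.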

~~~
AnthonyMouse
> Those data dumps are going to be sold on the black market eventually, and I
> speculate that in many cases government agencies will be able to identify
> unannounced breaches.

Sure, but how do you prove it was covered up rather than merely discovered
externally before it was discovered internally?

------
OliverJones
As an alternative to government regulation, how about an approach like
Underwriters' Laboratories?

Those guys got their start around 130 years ago with regulations about fire
doors in factories. Now they have standards for all sorts of EGoT (Electric
Grid of Things) devices, from lamps to toasters.

They get their teeth from the fire-insurance companies who back them. If you
have non-UL junk in your office, an insurance risk-manager inspector will
instruct you to change it. If you have that kind of junk in your apartment,
heaven help you if you have a fire and make a claim.

Industrial shops have Factory Mutual filling the same role. One place I worked
required Factory Mutual certification. Those guys are not fooling around; they
improved our products.

If Walmart and Best Buy discontinued selling uncertified IoT products it would
help the cause. Even MicroCenter and Fry's could help get the ball rolling.
But they can't do that until a certification process is workable.

A UL or FM approach is more workable with USA attitudes toward government
regulation. And workable is what we need.

~~~
wongarsu
Fires are incredibly expensive, which makes everyone buy fire insurance and
incentivizes fire insurers to push fire safety on their customers.

In contrast, most data breaches are very cheap. In most industries the market
doesn't seem to punish breaches at all, so there's only the unquantifiable
cost of that data benefiting your competitors.

If there was a multi-million-dollar fine on data breaches regardless of fault
and countermeasures, we would get exactly what you describe: the market
working to reduce risk and average insurance payouts, making everyone's life
better in the process.

------
softwaredoug
Authentication is the real issue. We treat SSNs as a lifelong shared “secret”
- shared with just about everyone. When so many need to have this “secret”,
trying to hide SSNs is futile.

Someone should be able to steal a database of my info and whatever the shared
secret is should only be tied to that org, and of course not stored in plain
text.
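One way to get "a secret tied only to that org" is to derive a distinct credential per organization from a single master secret, with each org storing only a salted hash of it. A rough sketch of the idea in Python (stdlib only; all names, domains, and parameters are illustrative):

```python
import hashlib
import hmac
import os

def per_org_secret(master_key, org):
    # Each organization gets its own derived secret; a breach at one org
    # reveals nothing about the credential used anywhere else.
    return hmac.new(master_key, org.encode(), hashlib.sha256).digest()

def server_store(secret):
    # The org never stores the secret in plain text, only a salted hash.
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

def server_verify(secret, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

Under this scheme, stealing one org's database yields neither the master secret nor the credentials used at any other org, which is exactly the property SSNs lack.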

~~~
Covzire
How will everyone keep their dozens or hundreds of shared secrets safe?

~~~
paulryanrogers
Password managers appear to be growing into this role. As secret questions
become increasingly ineffective, better answers are random and unique. It
quickly reaches the point where human memory and paper cannot accommodate them.

EDIT: forgot 'not'

------
nineteen999
From the tone of articles like these sometimes I feel like I must be the only
person left in the world without an internet connected refrigerator or robot
vacuum cleaner.

~~~
village-idiot
You’re not alone! I had to go with the monoprice Sous Vide cooker, since it
was the only one that could be operated without a smart phone.

It was also cheaper, which is nice.

~~~
jmulho
That was a good choice. There is a nasty worm going around that will turn your
water up from 55 degrees Celsius to 56 for 15 minutes ruining any chance of
perfect 48 hour short ribs. It hit a lot of Michelin star restaurants in Iran
really hard back in 2010.

~~~
village-idiot
I’m more interested in having my Sous Vide cooker continuing to work after the
company goes under or Google buys them and cancels support.

------
dkrich
I don’t get the premise of this. On the one hand he’s making the assertion
that government regulation is needed because consumers won’t pay for added
security.

He then immediately goes on to say that in the past security breaches weren’t
life threatening but now they are because refrigerators and cars are connected
to the network.

Okay so people won’t pay more for security when it’s not life threatening but
this time the threats are different, but people still won’t pay more. How does
he know this if the threats are different this time?

Tbh though he lost me when he advocated for government regulation as the fail
safe solution.

------
tptacek
One thing I've learned in ~25 years of working in what the NYT would now call
"cybersecurity": Internet hacking is _always_ "about to get much worse".

~~~
InitialLastName
I mean, are they wrong? It sure appears to have been getting progressively
worse.

Granted, some chunk of that is from an expanding surface area vulnerable to
attack, and an expanding amount of valuable data available for the taking.

~~~
tptacek
They are wrong (so far), and it is not getting progressively worse. In the
1990s, it was realistic for an amateur hacker to aim at _owning up a whole
backbone network_. You broke into computers by running "showmount -e" and
looking to see which ones were exporting their root filesystems r/w to the
entire Internet. In the early 2000s, worms targeting Win32 vulnerabilities
were so effective there was almost legislation. _Nothing_ was sandboxed
(except for, ironically, Java applets), and virtually every web application on
the Internet was riddled with SQL injection. The first time I ever did a
professional consulting application penetration test, I logged in as "admin"
with 'OR''='.
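For readers who haven't seen that trick: string-built SQL lets the input rewrite the query itself. A quick sqlite3 demonstration of the same class of bug (table and values invented for illustration; the payload uses the spaced form of the same bypass):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 'hunter2')")

payload = "' OR ''='"  # the classic login-bypass string

# Vulnerable: string interpolation turns the WHERE clause into
#   password='' OR ''=''   which is always true.
q = f"SELECT name FROM users WHERE name='admin' AND password='{payload}'"
assert db.execute(q).fetchone() == ("admin",)  # logged in without the password

# Fixed: a parameterized query treats the payload as data, not SQL.
q = "SELECT name FROM users WHERE name='admin' AND password=?"
assert db.execute(q, (payload,)).fetchone() is None
```

The fix has been one line (bind parameters) for decades, which is part of why this class of bug has receded from where it was in the early 2000s.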

It's a lot more _fun_ to be an attacker today (I mean, if you dig computer
science), but I don't know a lot of people in this field who think it's gotten
_less challenging_.

~~~
bmays
I think you can agree that _worse_ doesn't necessarily imply _harder_. More
critical systems are online, software is more complex, more actors are in the
mix, etc. Feels like semantics, anyway.

~~~
tptacek
I don't agree at all that more critical systems are online. What I see instead
is a greater recognition of the variety of critical systems that are and
always have been exposed, leading in turn to _better_ security for those
systems. And we're kidding ourselves if we think that the attackers we're
facing today weren't active 10 years ago.

20 years ago, owning up someone's voice mail was a funny joke (teenagers were
literally owning up switching systems.) Today, we're all carrying HSMs in our
pockets. Things are better, not worse.

~~~
TeMPOraL
Owning someone's voice mail, or even a PC, is ultimately very low impact on
societal scale. But what about the increasing amount of physical systems that
are online - factories, power plants, hospitals, cars, pacemakers (via phone),
etc.? Is this not as big of a problem as it seems to be?

~~~
tptacek
The point isn't that voicemail is super important; the point is that
infrastructure wasn't even secure a decade and a half ago. The systems you're
talking about were all exposed then too.

~~~
insertcredit
It's clear to me that this is nowhere near accurate and I'm not sure why you
insist on making these sorts of claims.

One only has to look at self-driving cars to disprove you.

Dan Geer also entirely disagrees with what you wrote [1] [2] and you're no Dan
Geer, sorry to say.

[1]
[http://geer.tinho.net/geer.indiana.19x17.txt](http://geer.tinho.net/geer.indiana.19x17.txt)

[2]
[http://geer.tinho.net/geer.uncc.5x16.txt](http://geer.tinho.net/geer.uncc.5x16.txt)

~~~
tptacek
Look, Dan Geer is fine, so I won't snark and say "that's one of the nicer
things anyone has said about me on HN"†, but let's be clear: Dan Geer and I
have a very different kind of day-to-day workload. We'd probably come to
different conclusions about all sorts of things. I would also in a million
zillion years never quote Nassim Taleb on anything. He's wrong here, as he has
been in the past. We've all been wrong about things! I just happen to be right
about this one thing.

† _Sure, I just did, but I'm being upfront that it's a cheap and unfair thing
to say. I'm human._

------
Kaotique
There are already a lot of safety regulatory agencies in the world. These
government-sponsored organisations managed to make food and other consumer
products a lot safer.

These types of discussions often end up as "mass control by the government"
versus "the free market will solve it by itself".

The sweet spot is somewhere in the middle, like many of these agencies have
proven for decades.

------
ilovecaching
Here’s a hard reality: The people who can fix the security problem are the
ones who are already working on it. There’s no magic dust or elite squad of
cyber security professionals that are going to walk through the doors of a
FAANG and turn things around. Security is hard, and security at global scale
is still an unsolved problem that no one has the answer for. What I do know
from working in the valley is that neither the government nor the media has
the damnedest idea of what they’re talking about when it comes to technology,
and they certainly wouldn’t know how to secure Google let alone keep search
running globally for a night.

~~~
fulafel
Most people working on security don't have much power, and are reduced to
tinkering with small incremental improvements that don't break anything.

Case in point: if you look at the history of malware infections in
organisations, one vector has stood out since the early 2000's:
office/PDF attachments in emails. It has obviously been a catastrophic
combination to feed untrusted, unauthenticated, complex office formats to
insecure productivity applications, but nothing was done about it despite
weekly new public vulnerabilities and pwnage continuing for over 20 years.

------
marnett
The market won't solve it. Regulations won't solve it. Why can't we just have
meaningful legal frameworks where impacted parties can sue these negligent
corporations?

------
ballenf
There’s a YouTube talk from Uncle Bob where he foretells a future in which
software kills some people and then regulations are put into place governing
software development.

I think he was close in his vision. It will be Facebook and Google and
millions of IOT devices that push us to that future.

Big companies won’t be hurt, but your startup better be able to afford the
certs or too bad.

[https://youtu.be/ecIWPzGEbFc](https://youtu.be/ecIWPzGEbFc)

------
coretx
The #1 risk is nation states. They became a threat by convincing
politicians that "cybercrime" is ordinary violence. Violence is something the
state has a monopoly on, so the idea is easy to sell. However, online security
should be left to the market; otherwise we'll only see more "violence":
vulnerabilities kept secret and online targets compromised.

------
JDEW
If anything should be regulated it's ethical hacking and security research.
Having a bug bounty program that is accountable to some government agency
should be mandatory for every single enterprise that does some form of
development, and the prizes/awards should be some function of the revenue and
the severity of the vulnerability.

------
dustfinger
Governments are not neutral parties. They have interest in mass surveillance.
Moreover, I don't believe a neutral party is possible. Organized groups with
enough motivation and resources will find a way to influence neutral parties.
Just take a look at journalism, or even education, today. Both of those groups
were intended to be neutral, but they are not because they are heavily
influenced from outside. I know education is regulated, but it shouldn't be. I
don't believe the original intention was for heavy-handed political parties to
use education to mold young impressionable minds to their liking.

On a side note: companies and governments alike believe that a single security
audit before each release is sufficient (many don't even do that much). They
are wrong. Instead, they should be hiring a team of full-time penetration
testers that work in parallel with standard quality control testers.

Now back to my original train of thought. I believe the solution is exactly
what we have -- natural selection. When the financial loss exceeds the
executives' tolerance threshold, they will either fold or adapt. The
organizations that are better at adapting will survive. It will take time, but
as long as the losses are great enough natural selection will affect the
course of things to come.

------
netcan
" _The primary reason computers are insecure is that most buyers aren’t
willing to pay — in money, features, or time to market_ "

I'm not sure this is true. That the market is not producing adequately secured
stuff is a fact, but... It strikes me as similar to " _journalism is broken
because people aren't willing to pay for good journalism anymore_ ". Maybe
it's true in a sense, but I don't think it's a useful sense.

It's not like computers come in regular or secure, with a 20% discount on
regular. Money is not always a direct lever on things. Some software has
crappy UI. This does not generally correlate to UI spending. A much bigger
influence is the type of market that software is in. "Enterprise" will likely
be much worse than consumer stuff, because of market structure, incentives and
hard feedback loops.

Bureaucracy/rules come with costs that can't be easily priced too.

For example, GDPR...

The writer complains that current laws are written from a naive perspective,
as if the internet existed within its jurisdiction. That naivety is inherent
in regulatory/rule-based systems.

GDPR was written as if it would be implemented by people writing software.
It's not. It is implemented by lawyers, hired by companies to "do GDPR." Mostly,
lawyers reduced this to paperwork. Policies that must be meticulously written.
Checkbox software that must be installed. Agreements with vendors that must be
updated.

..All things that cost money, put lawyers and compliance officers in more
powerful positions, and do very little to improve user privacy and agency over
their data.

If you want to start a company in a regulated market, your first hire is a
compliance expert, preferably one with a personal relationship with that
specific regulator.

Regulators are process oriented, not results oriented.

For example, let's say some drug is overprescribed. Regulators respond with
new small print that must be included in ads. They will meticulously measure
"compliance," but may not even take an interest in results. Ie, they may not
even check to see if sale/consumption of the overprescribed drug have gone
down.

Anyway... Whether through regulation or whatever, security is hard. It is
almost always reactive, responding to past crises.

Personally, I'd start with laws (not regulators) targeting after-the-fact
disclosure. I think self-reporting is the most useful/successful part of GDPR,
for example.

Light helps. It can also create the pressures, incentives and information
required for change.

------
sigsergv
Such a government certification organization (or something like that) will
harm local firms and benefit Chinese ones, because Chinese companies will
ignore all those regulations and flood the market with cheap devices.
Honestly, I cannot see any solution other than a mass education system that
teaches people what security is and how to practice it. Every consumer MUST
clearly understand what exactly he/she stands to lose when using an insecure
appliance.

~~~
mr_toad
> flood market with cheap devices

To some extent that already happens with safety regulations, and imports/sales
of these products are illegal. That doesn’t mean you should just give up.

~~~
craftyguy
Exactly. It's trivial today to purchase electronics without any required
certifications (UL, FCC, etc) from aliexpress, etc. However, those
certification programs are still alive and well, and knowledgeable consumers
can still seek them out.

------
jkingsbery
"The National Institute of Standards and Technology’s Cybersecurity Framework
is an excellent example of this... The Cybersecurity Framework — which
contains guidance on how to identify, prevent, recover, and respond to
security risks — is voluntary at this point, which means nobody follows it."

How "excellent" of an example could it be if no one follows it?

If he's worried about low-cost devices today that don't have security teams,
it seems that fining companies for having security issues could lead to some
percentage of them going bankrupt, which in turn would lead to more devices
that are abandoned by their manufacturer post-launch.

I also think it would, to some degree, stifle innovation. Even if what's
involved is paying some fee for some new security technology or license,
that's still less money that a startup can spend on the part of the product
that customers are paying for.

I wouldn't say we shouldn't have any sort of regulation whatsoever, I'm just
skeptical that the government could do a good job of it.

~~~
ardy42
>> We also need our standards to be flexible and easy to adapt to the needs of
various companies, organizations, and industries. The National Institute of
Standards and Technology’s Cybersecurity Framework is an excellent example of
this, because its recommendations can be tailored to suit the individual needs
and risks of organizations. The Cybersecurity Framework — which contains
guidance on how to identify, prevent, recover, and respond to security risks —
is voluntary at this point, which means nobody follows it. Making it mandatory
for critical industries would be a great first step. An appropriate next step
would be to implement more specific standards for industries like automobiles,
medical devices, consumer goods, and critical infrastructure.

> How "excellent" of an example could it be if no one follows it?

It can be very excellent indeed, from the computer security perspective. The
problem of why it's not followed is probably twofold: 1) organizations don't
know about it (and aren't motivated to find out) and 2) business leaders don't
want to spend the money to implement it if they do know. Making it mandatory
nicely solves both of those issues.

> If he's worried about low-cost devices today that don't have security teams,
> it seems that fining companies for having security issues could lead to some
> percentage of them going bankrupt, which in turn would lead to more devices
> that are abandoned by their manufacturer post-launch.

That's no big loss, because those devices are inevitably abandoned today.

> I wouldn't say we shouldn't have any sort of regulation whatsoever, I'm just
> skeptical that the government could do a good job of it.

The government will do a better job at regulating in this area than anyone has
ever done before, because no one has ever tried.

------
hamilyon2
Continuous security upgrades mean buyers no longer buy a device. They buy a
service, and should be charged for one.

------
rexgallorum2
I don't buy the regulation argument. Regulators will not have a sufficient
understanding of the systems they are regulating, and of course regulation is
too slow to react to a dynamic phenomenon. Perhaps more basic regulation
stipulating liability if something goes wrong would be of use.

I suggest we apply some lateral thinking to the underlying problem and
approach it from another perspective entirely. Over the last 30 years or so,
really since the end of the Cold War, high on idealism and technological
utopianism, we've built a whole new high tech infrastructure to replace the
low-tech infrastructure that preceded it. In so doing, we have invariably
embraced technologies that we did not and do not understand, technologies that
have never really been tested (as in been subjected to the test of time). Was
this wise? Should we be using new, unproven technologies for security critical
systems? These new systems have untold vulnerabilities, and their often
centralised structure makes them very susceptible to disruptions. Should we
not be building robust, decentralised, low tech solutions instead? Could
something as fundamentally vulnerable as modern undersea cables have survived
a cataclysm like the Second World War? I anticipate that any valuable data
sitting on a networked device anywhere is at risk of eventually being lost,
leaked, or stolen. Any networked safety critical system will be hacked or
otherwise exploited (or fail catastrophically). It is only a matter of time.
So much of modern (hybrid) warfare hinges on sowing discord, confusion, using
disinformation and misinformation to cripple adversaries--and we have
collectively built an infrastructure that is tailor made for this kind of
disruption. What I mean is virtually everything that has come into being in
the last 30 or so years, from complex global supply chains to modern banking.
How much of what exists now would survive a SHTF scenario (not hard to imagine)?
Again, we should be designing systems to be robust, decentralised, secure, and
wherever possible, totally independent of high tech gadgetry. What use would
'identity theft' have been in the 1970s? Exactly.

------
JoshuaAshton
Why? Incompetence will always exist. Now you're just trying to stifle the free
internet.

------
craig_peacock
We can no longer leave article writing to Muppets. All journalists should be
HIGHLY regulated and certified, and only licensed journalists should be legally
allowed to write articles for the public.

------
jumelles
I think he's right - the line between regular hacks and cyber warfare has been
blurred to invisibility, and defending the nation is one of the most basic of
governmental responsibilities.

~~~
apetrov
that’s a really interesting point. A good example is cops patrolling streets.
But if these cops force their way into your house to “protect” you without
your consent, it’s not protection anymore; voluntary consent is necessary to
make it protection and not a regulation.

~~~
youdontknowtho
Cops have always been able to force their way into your house to protect
society. They aren't just protecting you, they are protecting you as part of a
larger group that employs them.

------
nsarafa
We should fine companies for data breaches.

------
LeicaLatte
At best, we will end up like China.

------
tribby
online security will always be cat and mouse. that has nothing to do with the
market. if you get the government to step in, then it will just be the
government that fails instead of the market.

~~~
westpfelia
It's not entirely cat and mouse. Especially in some older banks. There is a
local home-town bank whose online banking only works in IE6. It's a joke.

~~~
tribby
the bank example is interesting because I do think banks might belong to a
special class of institution that deserves to be regulated, but I think most
other businesses really don't need it.

------
craig_peacock
I would rather take any insecure device over any regulation from any
government, particularly the US. Your government has no business interfering
in what hardware I decide I should run and from whom. You are a muppet.

