Yet companies I work with now, big and small, look at security as just a bunch of checkboxes on a government audit form. As long as upper management continues to see security as a cost center, and continues to do only the minimum necessary to pass said audits, these breaches will continue to happen.
What provision of HIPAA does this actually violate?
It's clearly a bad practice (and obviously increases the risk of a breach, which, if it occurs, becomes an issue under HIPAA and related laws), but AFAIK neither HIPAA and subsequent modifying statutes nor the regulations adopted thereunder actually mandate particular password-handling practices. Or is there something addressing that in the "guidance" issued under the HITECH Act (I remember that establishing, by reference, some standards for encryption, and it wouldn't have been out of place for it to establish password-handling practices)?
HIPAA and similar laws don't codify whatever we think is good computing practice today. Down that path lies madness. Congress would have to re-write the law any time GCPs change, or else the law would become a hindrance to the very goals it's trying to achieve (in this case, healthcare-related information security). Instead, the law is written more generally, with "reasonable" being the keyword that lets the legal system refer to current practice.
(My adaptation of "GCP" is stolen shamelessly from the clinical research folks, who use it to refer to "good clinical practice", https://en.wikipedia.org/wiki/Good_clinical_practice.)
But they also have freedom to select the particular security measures to use, considering: "(i) The size, complexity, and capabilities of the covered entity. (ii) The covered entity's technical infrastructure, hardware, and software security capabilities. (iii) The costs of security measures. (iv) The probability and criticality of potential risks to electronic protected health information." 45 C.F.R. § 164.306(b)
> HIPAA and similar laws don't codify whatever we think is good computing practice today.
No, but that's what implementing regulations usually do. HIPAA regs mostly don't include minimum technical standards (most of the security minimum standards are procedural).
> Congress would have to re-write the law any time GCPs change
Well, sure, if the minimum standards were written into the statute, which is why they are usually in the much-easier-to-change implementing regulations. The guidance under the HITECH Act in effect did some of this for HIPAA PHI, as it created minimum standards for PHI to be considered "secured". But, generally, there's not much there, and it's very difficult to make a solid case that any particular technical practice is necessarily a violation of the HIPAA Security Rule.
At the moment, it's very easy for companies like Anthem to claim that they were the victim of a 'very sophisticated' cyber attack, when in reality they were probably just wilfully negligent. As understanding seeps into regulators' and lawmakers' minds, businesses will start to comply with the spirit of security / HIPAA, not just the boxes. In the meantime, the best you can do is continue to advise clearly and calmly why things should be done. If management doesn't accept your reasoning, at least you have done your due diligence.
Management focus would ensure everyone in the organisation focuses on security, but most security breaches are the result of IT people doing stupid things or making stupid decisions on the ground. It's not senior management's role to check that you didn't introduce a SQL injection risk in your code, just as it is not senior management's role to check that the accounting department properly followed the latest US GAAP guidelines. It's down to employees being competent at what they do.
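Since SQL injection keeps coming up as the canonical "ground-level" mistake, here is a minimal sketch (using Python's built-in sqlite3 module, with made-up table and data for illustration) of the difference between the stupid decision and the competent one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safe: a parameterized query treats the payload as a literal string,
# which matches no user in the table.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 1 -- the injection matched all rows
print(len(safe))        # 0 -- the literal name matched nothing
```

No manager reviews this line of code; it is exactly the kind of thing that comes down to the developer knowing better.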
Senior management might not directly set the password policy, but they do say what you should work on, and by proxy, what you're not going to have the time to work on. And besides, what is the role of management if not keeping track of their employees? If an employee fucks up that badly, it's their manager's fault.
And yes, if the accounting department was that incompetent, SOX says it's senior management's fault. That's what the yearly attestation is for.
IT is in many respects an unregulated profession. Pretty much anyone can declare himself a programmer. There are some regulations on certain systems, but not on people.
I am not a fan of regulation but the current pace of data breaches is just unacceptable. If we don't find a solution, some old lawmaker will.
When this does happen, it's often because management's "security" request is either nonsensical--based on some puff piece they saw in an airport magazine--or they won't accept the necessary financial-costs/organizational-changes of doing it right.
It's no coincidence that the people with the most exemptions from security policy are usually the upper management.
Generally, yes, but not always. Sometimes they get boxed in by management and forced to make bad choices. When this happens, it usually leads to a spectacular failure ... and then the scapegoat is found.
The most famous case is probably the Challenger space shuttle. At Texas A&M they make sure every engineering student reads the story of the Morton Thiokol employees assessing whether or not it was safe to launch at such low temperatures. The engineers had solid doubts, and refused to declare it safe. The managers had pressure from every direction to get to "yes" and launch the bird already.
Finally, during the teleconference the night before, one of the engineers' superiors said "Take off your engineering hat and put on your management hat." A new recommendation was put out (bypassing the engineers who still refused to sign off), and the next morning they got their launch like they wanted.
Just over a minute later the shuttle exploded, killing the crew and putting American manned spaceflight on a multi-year hiatus.
But when you go see your doctor, you do not rely on the fact that he is one of the best doctors in the US. You have no way to tell. You rely on the fact that even an average doctor is good enough to not miss something important.
Well, the problem with IT security as it is now is that we see major breaches almost every week. Some were difficult to avoid, but in many cases it is just because of bad security design, and in some cases because developers ignore security completely.
And I am sure Sony's management didn't box the developers into storing passwords in clear text.
I'm just going to have to disagree with you and move on about regulating the people, though. I see your point, but I just don't agree. If anything, I feel managers should be regulated, so they are only allowed to oversee positions where they have the knowledge to fully understand what their direct reports are doing, from front-line to c-level. That's what I feel is the problem.
SOX isn't a regulation of the programmer; it's a verification that his management is actually doing its job overseeing him.
Complaining about budgets to fix these issues is like saying that the problem with collapsing bridges is that we don't spend enough fixing the structure. Well, it should have been built properly in the first place.
Yes, resources will have to be allocated to fix existing systems, but I think the problem here is more fundamental than a problem of budget and management focus. We need a profession competent enough to build a structurally sound bridge even with average engineers.
And this is a general comment. We don't know yet how this particular breach happened.
I'd like to think that engineers do want to build things properly; however, building something properly usually involves more resources. The problem is that when it comes down to brass tacks, the low bidder wins. I've worked in many big companies over many years, and yes, I've hacked things together MANY times because of budget/time constraints. A lot of it was because management wanted to come in under budget, or because they wanted to rush the product to be a hero.
I don't get where this idea comes from that engineers just want to do things in the worst way. Do you think a doctor wants to kill his patients, or do something that would endanger a patient if he didn't have to?
I don't think we proactively pentest our stuff either. I've never heard of any security discussions but that may just mean I'm not being included. We have a few more zeroes after our PHI record count too.
I can only imagine the data protection standards at small equipment manufacturers and old-school pharmacies. I'd guess their biggest security measure is keeping paper files in a locked office.
> Anthem learned of the hacking last week and called in Mandiant over the weekend. The company was not obligated to report the breach for at least several more weeks but chose to do so now to show that it was treating the matter seriously.
As user jakejohns has pointed out (https://news.ycombinator.com/item?id=9002003), the WHOIS points to a creation date for ANTHEMFACTS.com of `2014-12-13` with GoDaddy.
have obtained personal information from our current and former members such as their names, birthdays, medical IDs/social security numbers, street addresses, email addresses and employment information, including income data
And you were doxxed nearly two months ago. Or maybe not, because Anthem goes out of its way to NOT tell you when this occurred. If you were affected, here's how they will notify you:
We continue working to identify the members who are impacted. We will begin to mail letters to impacted members in the coming weeks.
So sometime within the next month you will get a snail mail telling you that you were doxxed... and that letter will probably be extremely vague about the details, but will be quite heavy on the PR and perhaps even have a nice picture of Grandpa CEO at the top.
Anthem is not taking this seriously. No matter what they are trying to communicate with their PR gloss, they seem to care about covering their asses first and really don't seem to give a hoot about all your personal data that is out there in the wild.
More like AnthemLies.com...
A pertinent example of doxxing is how the FBI linked DPR to Ross Ulbricht, due to a mistake he made on a bulletin board.
Why does the victim have to be anonymous?
Kind of like how troll now means 'person who is an asshole on the internet' instead of 'post designed to rile up and elicit frivolous responses'. The meaning has changed over time for better or worse.
And didn't the GGers "dox" Randi, Anita, Brianna, etc?
But I'm even more old school because I'd just call it skiptracing instead of doxing....
Essentially, doxing is revealing and releasing records of an individual, which were previously private, to the public.
Where's the "reveal" in this hack? They'll use the hacked info privately or sell it.
Did you read the comments completely? If Randi, Anita, and Brianna weren't anonymous but they were "doxed" it seems to me that "doxing" doesn't have to refer to revealing info of an anonymous person.
Could it be that the anthemfacts.com domain was intended for a different use, or to prevent someone else from registering it, and was re-purposed after the intrusion to present Anthem's case? I don't know much about SEO, but quarantining negative information on a separate, immediately available domain might be the motivation here.
Maybe after the Stanford (and other such announcements), they had decided in mid-December to snag anthemfacts.com and then, after learning of the breach, decided to put it into action for this monumental event. However, what are the chances that it took one week for a health insurer, upon discovering the breach, to launch its PR campaign, never mind fully understand the nature of the breach well enough to announce it publicly? Given the delicate nature of the situation, as well as its historic size, this is not something that a health insurer would want to prematurely make an announcement on without being very sure that the damage is contained. And they contained it within a week? I realize that I'm slightly begging the question here, but yes, part of my skepticism comes from how quickly they were able to move... One week would make it one of the fastest discoveries-to-announcements, which, given the scope of the breach, is pretty amazing.
Edit: It's worth pointing out though that there would be records of them contacting the FBI and Mandiant, and I would give them the benefit of the doubt that they would make such contacts upon discovery of the breach...so if the FBI confirms that the contact happened a week ago, I would take Anthem at their word.
Seems very unlikely.
And I'm sure they're quarantining negative info on an unrelated domain, but why would they even need to consider repurposing an existing domain name, instead of buying one? We're not talking about somebody doing a side project and hoping to save a few bucks by repurposing another domain name. And it takes all of a few hours to buy a domain name and have it propagate.
My point is that while it is used for that purpose now, that doesn't mean that it was registered for that purpose back in mid-December. Your theory about the breach occurring earlier and being concealed until now is certainly possible, but the domain registration date on its own is not supporting evidence.
The website itself says "we have created a dedicated website ... anthemfacts.com" for this incident.
[Edit: replaced two egregious uses of "website" with "domain"]
> "Anthem’s Mr. Miller said the first sign of the attack came in the middle of last week, when a systems administrator noticed that a database query was being run using his identifier code although he hadn’t initiated it."
AnthemFacts was registered 54 days ago, which would be within the legal timeframe for disclosure that the Wall Street Journal notes in their article:
> "Federal law requires health-care companies to inform consumers and regulators when they suffer a data breach involving personally identifiable information, but they have as many as 60 days after the discovery of an attack to report it."
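The 54-day figure lines up with that window. Assuming the WHOIS creation date cited above (2014-12-13) and the announcement date (2015-02-05), the arithmetic checks out:

```python
from datetime import date

registered = date(2014, 12, 13)  # anthemfacts.com WHOIS creation date
disclosed = date(2015, 2, 5)     # date of the public announcement

elapsed = (disclosed - registered).days
print(elapsed)        # 54
print(elapsed <= 60)  # True -- just inside the 60-day reporting window
```

So even if the domain registration marked the discovery date, the disclosure would still technically fall within the legal deadline.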
Lastly, some more "specifics" that NY Times didn't mention:
> "Investigators tracked the hacked data to an outside Web-storage service and were able to freeze it there, but it isn't yet clear if the hackers were able to earlier remove it to another location, Mr. Miller said. The Web storage service used by the hackers, which Mr. Miller declined to name, was one that is commonly used by U.S. companies, which may have made the initial data theft harder to detect."
If you're a parent, monitor your child's SSN for activity. Especially considering this is a healthcare breach, nobody is immune.
Also, for clarification, the breach involved all of the information required to establish identity - which was my main point in the protection and monitoring of the SSN, with special regard to children/minors.
But realistically, the cat is out of the bag with regards to SSNs. Legally you can obtain someone's SSN for very little money. If you go the illegal route, I'd be willing to bet that there is black-market identity data on over half of Americans. We really need to treat SSNs as about as secret as your e-mail address, because for all intents and purposes they are already. I wouldn't be surprised if online ad networks were using your SSN as a primary key in the background - the information is so easy to get and it would solve a lot of problems.
I guess I'm saying that sticking your head in the sand and pretending that SSNs are secure won't make them any more so. I'd doubt that a whole lot of SSNs were gathered in this hack that weren't already effectively disseminated widely in black market circles or marketing databases already.
The difference is simply data dispersion. If a breach dump ends up on the public Internet everyone has access to that data, worst case scenario, infinitely. Individual targeting has a similar risk but the overall impact is smaller.
Not sure what your point is with the "head in the sand" comment - I happen to work for a security company in an engineering role. I'm not, in any way, defending security through obscurity or the way SSNs are used or (mis)handled. Reading through these comments it is apparent credit agencies don't even get it - and that is disturbing in itself.
But stating that you "doubt a whole lot of SSNs were gathered in this hack that weren't already effectively disseminated widely" is, in fact, a head-in-sand approach compared to doing everything you can to preserve and prevent in the mean time. I, personally, don't agree.
edit: added "compared" to second to last sentence for clarification
Not legally. You certainly can go onto a website and buy them, if you misrepresent your purposes, and you won't be caught... but it's still illegal.
However, there would be less harm from these kinds of breaches if consumers were not obliged to prove their own innocence whenever someone loaned money in their name without rigorously verifying their identity. If someone claims to have loaned a bunch of money to me without ever interacting with me, the recovery of that foolish loan should really not be my problem. It would still be bad for an insurance company to expose private information, but there wouldn't be such a tremendous incentive to steal, aggregate, and distribute this kind of data if there wasn't so much easy money in it.
Stolen credentials of the kind described in this breach are valuable largely because there is an asymmetry of effort favoring thieves: it's so much easier to borrow money in my name than it is for me to prove my innocence that the process of borrowing money with other people's identities can be done in bulk, and to some extent automated. This situation is only sustainable because the lenders have shifted the responsibility of authentication onto their customers, retroactive to the issuance of credit. Identity verification prior to extending credit to a debtor is trivial and automated, while retroactively proving fraud has a large cost to the debtor in actual human labor.
It seems like payment systems and consumer creditors have colluded to force a Faustian bargain on us: to gain access to utilities and payment systems you have to use credit, even if you don't want it. Therefore, if you want to be able to have municipal water, a place to live, or a phone, all of which are practically contingent on credit rating even if you pay with cash, you have to protect your credit rating.
It would be nice to decouple payment systems from consumer credit, but we won't. Nobody, whether they are a business or the state, can afford to cross the credit card companies or the ratings agencies. They are business titans with big lobbying clout. If you get taken by thieves, it doesn't matter if you're a consumer, a big corporation like Target, or a government agency like the VA: you're going under the bus, because the status quo is too profitable to fix, and security is your problem. Nothing can be allowed to slow down the issuance of easy credit, or to create the slightest friction in CC transactions. Look at what just happened with chip and PIN: we can't even _opt into_ a PIN for CC transactions because it might confuse us. While we're on the subject, go read about what happens to people who try to build alternative payment systems that cut out MCVISA...
How many data breaches would there be if bad actors had to take the trouble to personally hassle each of the millions of people they had data on before they could take our money?
Probably some, but how much would we care who knew our SSN's or addresses if they couldn't easily be monetized?
Some, but less, I think.
SSN should not be worth anything because it's really not different from a name. Instead of saying "hi my name is exelius", you're saying "hi my name is 302-45-9522". You wouldn't trust me if I said the former, so why the latter?
I don't know any solution to this problem that would realistically be any better. Crypto isn't a good long-term solution -- any crypto we use today will be trivially cracked by a cell phone 20 years from now. Trust mechanisms seem better, but even then they can be simulated (see: twitter bots, facebook bots, click fraud, etc.)
Identity theft is far too easy today, but even if we had an effective system that could prove identity... I'm not sure we would want that societally. It basically guarantees big brother and wraps it in the guise of security.
Tldr: this is a tricky problem where the situation caused by the solution may actually be worse than the original situation.
This is completely false for correctly implemented crypto, unless mobile phones of the future are made of something other than matter and occupy something other than space. It could also be that our fundamental understanding of math and physics is incorrect. But the idea that improved technology alone will let us crack today's crypto is ludicrous.
I've long been a proponent of the government announcing that they will publish everyone's SSN 2 years from now. Banks, insurance companies, the govt, etc have until then to figure better methods.
Combine this with a smartcard. I guess a lot of European countries already do something like this?
The problem is that everyone working on crypto products focuses on just developing technology, often attempting to make existing crypto systems easier to use for ordinary people. This is fine, but it's only a partial solution. We need to educate people who don't know and don't care about proper security. Nobody is going to use the most secure and easy to use crypto system if they don't see the benefit and think that a SSN or a driver's license is a good way to show their identity.
There is a lot of hand wringing about how hard it is to get ordinary people to take security seriously, but honestly this is a problem that will solve itself given enough time and enough breaches such as this. Until people understand that only secret information--which they and only they know--can be used to authenticate them and protect their information, this will just keep happening.
Their main purpose is to serve as a primary key - many people have the same name, but SSN is unique. It should never be used for establishing identity - it's about as effective as asking someone for their middle name.
What about not doing that at all? Hear me out. Not relying on "identity" would cost many orders of magnitude less. And besides, why should I care who you are-- what does your identity matter to me? And why should anyone else care?
Can you save us a long and stupid discussion and simply explain your plan to practically deploy a better authorization system that will cost many orders of magnitude less?
Let's not pretend there aren't valid reasons for identity to be established.
In most states it will cost in the neighborhood of $10 for each of the three bureaus, unless you're already the documented victim of identity theft (ask me how I know this).
You can also apply to put a security freeze on your child's SSN. State by state laws and application process here: http://consumersunion.org/research/security-freeze/
And then there's the myriad of companies who can give you protection for a monthly fee:
LifeLock Junior: http://www.safety4yourkids.com
Also, Experian has a monitoring service as well specifically for kids: http://www.familysecure.com/
Hope this helps.
They've done a bad job of protecting their customers' data, and an even worse job of explaining what actually happened.
It's great that they "made every effort to close the security vulnerability". How's that going?
They hired Mandiant to "evaluate our systems and identify solutions based on the evolving landscape." Is "evolving landscape" CEO-speak for "Oh, god, we're still leaking customer data like a sieve, make it stop!"?
I'm just going to keep speculating, because if Anthem's not going to bother speaking plainly, I'm just going to assume the worst.
I love that quote; they try to cover their asses by saying "we closed the vulnerability." My question is: why did you wait until it was taken advantage of?
2/4/15 (umm, today): http://www.careers.antheminc.com/jobs/cloud-encryption-secur...
Could be a coincidence, but I wouldn't be surprised if they were compromised several days before this press release.
Basically to sum it up: "Your Social Security Number, Name, Birthdate, Address, and everything else needed to steal your identity is at risk. But don't worry! Your credit card number is safe."
To give 'em the benefit of the doubt: perhaps, perhaps, perhaps they needed that particular domain in anticipation of some other instance where they dropped the ball, but your conclusion is more compelling.
"The company also confirmed Friday that it found that unauthorized data queries with similar hallmarks started as early as Dec. 10 and continued sporadically until Jan. 27.
The hackers succeeded in penetrating the system and stealing customer data sometime after Dec. 10 and before Jan. 27, Binns said."
That forces individuals to treat it as sensitive information.
Seriously, if California is giving driver's licenses to whoever wants them (and who's 16 and can learn to drive), I don't see the harm sending out centrally verifiable identity cards. The costs of implementing such a system have gone way down over the years, but to be sure, bid out the job and finance it with surcharges on credit report checks, and any other transaction that involves verifying identity. There are surcharges everywhere else in the transaction. What probably concerns a lot of people is that they don't want the government to know every time they get a credit check. Not sure how you solve that, other than making this a GSE or legal monopoly.
>I'm guessing its some kind of privacy issue behind there not being a similar system in US?
The social security system, which is a federal program, produced a unique number for all citizens. The states quickly started using this number in their own bureaucracies, and everyone else followed (banks, etc.). Now it's a de facto numeric identifier.
The big problem here is how easy it is to get credit in my name if you have my SSN; it's like the root password to my finances. Credit is far too easy to get in the States from a paperwork perspective. I should not fear other people getting my SSN. Banks and other organizations need to realize that if someone presents my SSN, that doesn't mean it's me. More numbers or pseudo-SSNs aren't the fix here. The fix is due diligence and better fraud protections.
Not to mention everyone already carries a unique identifier that's easy to verify: your fingerprint. I think SSN + fingerprint, plus a letter sent to my home that needs to be signed, should be the minimum to open any line of credit. SSN alone should be worthless.
It's also bothersome that PCI-DSS and other regulations treat credit cards like NSA secrets, which is fine, as they should be encrypted, but there's no legislation or guideline requiring SSNs to be encrypted. SSNs sit as plain text in every database in the US. That's kind of scary and probably invites hacks.
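For what "not plain text" could look like, here's one hedged sketch (Python stdlib, illustrative names only, not any particular regulation's requirement). A plain unsalted hash is useless here because the SSN space is tiny (about 10^9 values), so it can be brute-forced in minutes; a keyed HMAC, with the key stored away from the database, at least forces an attacker to steal two separate things:

```python
import hmac
import hashlib

# Assumption: in a real deployment this key lives in an HSM or a secrets
# manager, never in the same database (or backup) it protects.
SECRET_KEY = b"example-key-do-not-use-in-production"

def pseudonymize_ssn(ssn: str) -> str:
    """Derive a stable, keyed token from an SSN for lookups and joins.

    An attacker who dumps only the database cannot brute-force the
    ~10^9 possible SSNs without also obtaining SECRET_KEY.
    """
    return hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()

token = pseudonymize_ssn("302-45-9522")
print(len(token))  # 64 hex characters; the raw SSN never hits the table
```

This is pseudonymization for matching records, not full encryption; if the application must display the SSN back, you'd need reversible encryption with similar key separation instead.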
Actually, a surprising number of people don't, for medical (dermatitis), work (manual laborers wearing them out, operating room personnel scrubbing them out...), or age-related reasons.
There is also the Real ID Act, which is trying to establish federal ID requirements. This is going to cause some problems; look for it in the news. It is a DHS-enforced national ID law.
And yes, some of the folks in the US believe a national ID that is needed to buy, sell, or get a job would be a little too close to the Bible's mark of the beast. That adds quite a lot of friction to any national ID.
You're exaggerating a bit into a strawman.
I strongly oppose REAL ID (which, by the way, was around for a while before the DHS existed). And as a "tooth fairy agnostic" as Dawkins would say, I'm not the least bit concerned about the number of the beast.
What I am concerned about - and this goes the same for anyone else with whom I've discussed the issue - is, why is it the federal government's business at all when and how I "buy, sell, or get a job"? This seems like a tool for the federal government to get its grubby mitts into more stuff that's not within its enumerated powers.
Perhaps I should have separated that from the DHS stuff, but it is a belief of some folks (enough of whom vote to have made a real difference), and it goes to why we don't currently have a national ID. It is part of the history in the US, and the original poster is not from the US and wanted some reasons.
DHS is the agency currently charged with Real ID Act oversight. I'm not sure the who is important before the law is implemented.
The problem with this number is that, similar to Sweden, it can be used as an identity number and as a password. This is a terrible thing to do. In your small country of homogenous socially protected people, you may not have a widespread problem of theft. In the US, however, there is an entire industry of stealing these numbers in order to take out new lines of credit, buy items at stores, and then not pay them off.
https://www.privacyrights.org/how-to-deal-security-breach covers situations like this where there's been a security breach: how to order and monitor credit reports, put in a security freeze (which makes it harder to open up new credit cards or credit lines in your name), etc.
https://www.privacyrights.org/content/identity-theft-what-do... covers when you've actually been the victim of an identity theft
Your income is strongly correlated with your health. The lower your income the more likely you are to suffer from conditions such as obesity and diabetes, and the higher your mortality rate will be. Health insurers can use income figures as one factor when calculating the overall risk of a policy.
It's not illegal, but it violates the contract you sign with them and lets them off the hook for paying for things. Mind you, they'll still keep the money you paid them.
Possibly Affordable Care Act compliance? Calculating income-based health care subsidies appropriately?
It's because they offer disability benefits, which tend to be a percentage of one's income.
My employer uses Anthem for health insurance but another company for disability, so if our data leaked our income data should be safe. We'll see!
A question to ask is how secure is a large network of EHRs going to be? I don't know of data showing the frequency or severity of EHR security breaches but it would be surprising if there were not at least some. In any case, this kind of info would probably not be made available to the public, even though it should be.
Anthem's poor job of keeping confidential info private is especially distressing given the fact that many health insurers are also health care providers (e.g., hospital systems). Computer systems are very hard to operate securely, and after what happened, it's hard to trust these corporations will take the task seriously.
I've been quietly predicting that security of health information is going to become the Next Big Privacy Issue as the Internet of Medical Records grows ever larger.
How to implement that technically becomes an interesting question, but between pocket spies with storage measured in tens of GB to TB, and various forms of key authentication, it seems that there are several possible options.
The whole discussion above regarding the false crime of "identity theft" (it's impersonation fraud facilitated by the data holder's negligence) is another point of increasing frustration for me.
I've been having a few related discussions with David Brin (a data cornucopian) on Google+. Brin, hardly to my surprise, responds with extreme derision.
LOL, everyone 'on the inside' (by that I mean: at least anyone who works on computers, software or networks professionally) knows the answer to that question: it's going to be a train wreck. There is not a single person on this planet who really understands just 1% of the software, hardware and network infrastructure they/we work on every day; let alone how all of these interact. Computers, in 2015, are so complex, and our 'engineering' is so shoddy, that there is no way to safeguard networked data for anyone but the most determined and resourceful parties (by which I mean organizations of which there are but a handful in the whole world, and even those can't seem to keep secrets really secret.) Either way, there is no way at all that a non-IT focused organization like a healthcare insurer or provider will be able to keep data secure, and it's only a matter of time before incidents like this will become commonplace.
Consider: I have an in-law who is a partner in a largish practice in my area. We talked a bit about the business aspects of the practice when she became a partner because she had to put up with all the management crap all of a sudden and it was nice for her to vent to people who had similar issues. Anyway, point being I know a bit about the finance and management of a rather typical organization like that. These people will in the next 5 years somehow get access to our, by then, country-wide EHR system. They work on computers they buy from the local computer shop because the prices 'seem reasonable' and Jimmy who works there dates the secretary or whatever; so Jimmy (whose training was in swapping out hard disks and reinstalling Windows) is the one who 'maintains' their systems, too. Their cash flow is so precarious that some months they can't pay full wages to the partners. How will an organization like that ever be able to secure their network? Their 'security' consists of the cable guy setting a non-default WPA key on their wireless router.
And of course, they're required by the organization that maintains the EHR system to have 'regular auditing of their systems' to ensure security. Which consists of a couple of big 4 consultants who interview the management, tick some boxes on their checklist and make a 50-page CYA report out of that, without ever having touched a server or network.
I got out of the security game 10 years ago, and it was already scary back then. Maybe somebody who still works there will feel otherwise, but computer security (on the blue team) is like FEMA sending two guys with a shovel and a Walmart plastic bucket to a dike breach. (whereas on the red team it's shooting fish in a barrel, of course.) We are truly fucked, because too few people understand the magnitude of the problem and as long as there are no problems and you don't look too closely at the robustness of things, using computers is much cheaper than the alternatives.
It seems like a risk with no benefit, with the only justification being "all data could be valuable eventually so let's never delete even the personal sensitive data." Ironically, the data did eventually become valuable - to someone else.
It used to be common for insurance companies to look carefully at your coverage record, and if you had any time during which you were not covered, they'd say stuff like "Oh, that horrible cancer you have? Yeah, we're not paying for it because it was a 'pre-existing condition' that you got during that weekend you had between two jobs six years ago." And the law let them do that.
Health care in the US is . . . the phrase "utterly broken" isn't strong enough. We need a good fifteen syllable German word for how fantastically fucked up it is.
Of course, that's just me trying to explain why Anthem hung onto the data. Probably it was totally selfish ("we can send them spam") or sheer laziness.
> they'd say stuff like "Oh, that horrible cancer you have? Yeah, we're not paying for it because it was a 'pre-existing condition' that you got during that weekend you had between two jobs six years ago."
This is just a random link describing one scenario - http://www.yourwisconsininjurylawyers.com/library/claim-deni...
I believe this is no longer allowed under relatively recent law.
IIUC, not since Obamacare went into full effect in 2014. One of the main provisions of it was that it became illegal to deny coverage based on pre-existing conditions.
They still need the records because one of the other effects of Obamacare is that it became illegal to not have health insurance, but it's broken in a different way now.
They said in an email that they would pay for one year of credit protection for all those that they say were victimized. I don't think that they are capable or trustworthy enough to state who was victimized. It looks to me like they are just ignoring their responsibility for this attack. They also stated that they do not think health records have been compromised. I believe that they are just trying to avoid HIPAA fines. If so much personal data was stolen, it is likely that health information was also stolen. Generally, the patient's personally identifiable information is stored more securely than their actual health record.
Now I'm off to get credit protection for me, my wife, and my one year old. Does anyone have any advice on where to begin?
Anthem’s own associates’ personal information – including my own – was accessed during this security breach.
If small groups of individual "hackers" are capable of executing high-profile operations, just imagine the capabilities of nation-state cyberwarfare forces. The intelligence agencies of large governments employ thousands of professionals, all at least as qualified as the hackers behind these attacks. The difference is that government employees (or contractors!!) have no fear of legal repercussion restraining their operational activities.
When attacks like this move the market, any scrutiny of the attack must include analysis of market trading in the days following. Who profits from the drop in Anthem stock price? I imagine the SEC investigates this as a matter of course, but one should consider that nation states are active investors in the stock market, whether directly or through hedge fund proxies. If a nation state can hack a large enterprise, and a nation state can trade large volumes of securities against that enterprise, then it follows that nation states can profit from cyber warfare.
The next five years are going to be very interesting.
I'm especially unimpressed by Anthem's failure to hire a good copy editor for such a vital message, as evidenced by the painfully obvious error at the end of the penultimate paragraph: "share that information you" should read "share that information with you".
Call me a cynic, but my intuition says the whole page is a lie. My guess is the data was simply pilfered and copied to a USB stick by a disgruntled ex-employee or even a corruptible current one.
Each time this happens, the breached company partners with some firm or another to offer "one free year of identity monitoring" or somesuch. e.g. ProtectMyID after the Target breach.
Are there better alternatives to ProtectMyID?
"Ask 1 of the 3 credit reporting companies to put a fraud alert on your credit report. They must tell the other 2 companies. An initial fraud alert can make it harder for an identity thief to open more accounts in your name. The alert lasts 90 days but you can renew it."
I have had several scares, and each time I just call them and they walk me through the steps to verify whether my identity has been breached. I like the terms of their contract better as well. Just be advised that this is identity insurance, not protection: it is designed to be reactive rather than proactive. I feel that everybody will have their identity stolen at some point, so instead of trying to prevent it, I chose to insure against the consequences of it happening. I feel it's a much better return on my investment, as a lot of the protection companies don't do much for you if they miss a theft.
P.S. A million dollar reimbursement clause really helps me sleep at night.
However, what are the situations where the person whose identity was stolen is asked to pay back the fraudulently obtained goods? I can think of no examples.
Is there any way to check if I'm affected by the breach? University of California has not made an official statement regarding the breach whatsoever.
I'm looking for something similar to the way you could enter your email address and figure out if your Adobe account was hacked.
"Fees for Identity Theft Victims: Free; Non-victims: $10"
If I wait to become a victim, I can save tens of dollars!
And as long as it is not standard practice to sue companies and their executives for negligence when they fail to protect the data internally (no unencrypted data at rest), this will not change.
I may still call Anthem back out of principle.
1. https://www.alerts.equifax.com/ - should automatically propagate to the other two
Nothing says "state of the art" quite like a highly pixelated image on your "we got hacked" response letter.
I miss my old insurance.
edit: I had group insurance with Unity for like 8 years. Never once had a scrap of paper to review or a bill to squabble over; everything always covered. Now I'm on a group plan for Anthem and I almost choked when I received the summary of benefits which was greatly reduced in scope.
I guess I had it good.
Then I tried continuing with one of their individual plans after leaving, and they were easily the worst insurer I've ever dealt with. Things like not informing me that my PCP (who'd certainly been part of the group plan) was not part of the individual plan's network, or finding out that the nearest available PCP who was is 40 miles away (I live in a major metropolitan area with several million inhabitants). Not being able to change my address through the website - they have a form up that doesn't work, along with a message saying "If this form doesn't work, please call ..." Taking hours to get ahold of a human on the phone. Billing hassles. Sending out "your coverage is ending in 30 days because of non-payment" notices even though I'd faithfully paid online on-time. I'm actually quite glad that their terms are "Your policy ends automatically when you don't pay", because they've made it pretty much impossible for me to pay them - their online billpay refuses to take my payment (failing with no error message), which I suspect is because my address changed, but their website makes it impossible for me to update my address, calling them takes more time than I'm willing to invest, and I don't have any trust that if I send them a check it will actually be credited to my account. I just started a policy with Blue Cross Blue Shield instead, which has been a joy in comparison, and let Anthem lapse.
If you read the Yelp reviews, they're far worse than my situation - folks being promised coverage for hospital stays and then denied coverage afterwards, and multiple lawsuits outstanding against them.
The cynic in me thinks that Anthem is basically unable to continue as an operating business, and so they're triaging accounts. The big group accounts like Google get top-of-the-line service, so that they can keep them and hopefully bring in enough revenue to tide the company over. The individual accounts - anything that's small enough to (presumably) not have many other options and unable to sue - get screwed. So if you're in one of those groups, be thankful; if you're an individual, start looking elsewhere.
I am concerned that if the industry doesn't fix this, regulation will.
(Hint: http://www.ecfr.gov/cgi-bin/text-idx?SID=9e10f619aa05225aef1... Subpart C—Security Standards for the Protection of Electronic Protected Health Information)
It will mean licenses and certifications for the right to store personal data, regulations to comply with in terms of system architecture, with audits and penalties for breaches. More bureaucracy and processes. You won't create a website over a weekend.
Currently any idiot can create a database and store sensitive information without even knowing what a SQL injection or a rainbow table is.
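To make those two concepts concrete, here is a minimal sketch (hypothetical table, column, and function names; `pbkdf2_hmac` parameters chosen for illustration, not as a production recommendation) of the difference a parameterized query and a per-user salt make:

```python
import hashlib
import hmac
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salt BLOB, pw_hash BLOB)")

def store_user(name: str, password: str) -> None:
    # A random per-user salt means a precomputed rainbow table of
    # hash -> password lookups is useless against this database.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Parameterized query: user input is never spliced into the SQL string,
    # so a name like "x'; DROP TABLE users; --" is stored as plain data
    # instead of being executed.
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, salt, pw_hash))

def check_password(name: str, password: str) -> bool:
    row = conn.execute(
        "SELECT salt, pw_hash FROM users WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return False
    salt, stored_hash = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored_hash)
```

The point isn't that this is hard; it's that you have to know it's necessary before you ship a database of other people's personal data.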
Most professions are regulated: architects, doctors, pilots, farmers, bankers, even restaurateurs! And each time, the regulations came as a result of fk-ups: banks or homes collapsing, conmen selling snake oil, food poisoning, etc. IT is the only sector where mild amateurism is not only acceptable but the norm rather than the exception.
Disclaimer: I'm CTO at @menlosecurity.
It seems to me that it's the usual issue: people don't see the need for protection until they've been hit. It seems like a cost that doesn't make sense to them. They don't even care anymore.
Then they get hit hard. But it can take years.
"The scanner says your server is vulnerable"
"Ya, we patched that vulnerability weeks ago"
"The scanner says it's vulnerable"
"OK.... looks at scanner - oh, it's just reading the banner, and not taking into account that the major rev didn't change, it's patched"
"OK... so what if I change the banner so it doesn't pick it up as vulnerable?"
"The scanner says it's secure now, thanks!!"
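The failure mode in that exchange can be sketched in a few lines (hypothetical `ExampleHTTPd` banner strings and version threshold; real scanners are more elaborate but many share this weakness): the scanner only pattern-matches the advertised version, so a backported patch still reads as "vulnerable," and rewriting the banner reads as "fixed."

```python
import re

def banner_flags_vulnerable(banner: str) -> bool:
    # Naive scanner logic: flag anything whose banner advertises a
    # version below 2.4.10, without probing the server's actual behavior.
    m = re.search(r"ExampleHTTPd/(\d+)\.(\d+)\.(\d+)", banner)
    if not m:
        # No recognizable banner -> no finding, even if the host is wide open.
        return False
    return tuple(map(int, m.groups())) < (2, 4, 10)

# A backported security fix doesn't bump the advertised version,
# so the patched server is still flagged (a false positive)...
print(banner_flags_vulnerable("Server: ExampleHTTPd/2.4.3 (patched)"))   # True

# ...and merely hiding the banner makes the finding disappear
# without fixing anything (a false negative waiting to happen).
print(banner_flags_vulnerable("Server: Welcome"))                        # False
```

Version checks against actual behavior (sending a probe and observing the response) are slower and riskier, which is exactly why checkbox-driven audits settle for the banner.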
The guys who know their stuff in security generally have a desire to actually get paid well, and have time to do legitimate research. They don't really have a desire to sit in a corporate job dealing with the mountains of bureaucratic bullshit that goes along with security in a corporation. Do you really want to be the guy who gets thrown under the bus because you had to disable strong passwords because the CEO was angry he needed both upper and lower case letters in his AD password?
Except those strong password policies don't strengthen security at all, neither in theory nor practice. Congratulations, the CEO's password is now "qweRTY" and it's written on a yellow sticky-note on his monitor.
I literally tell my parents to have a secure password they write on a post-it note. The odds of someone breaking into their house for their password are about 1/10000th the odds of someone cracking their simple password on a website and getting the keys to the kingdom.
Disclaimer: I built the first IPS to be commercialized and yes we used signatures amongst other things.