> CBP ... is closely monitoring all CBP work by the subcontractor
What. In the private sector, they'd have been fired and probably had legal action levelled against them. The CBP's punishment for this is 'monitoring'? Please tell me I'm reading this wrong...
(In the enterprise software world, I can tell you how epic failure to perform on an 8+ figure contract unfolds: the sales guy takes a VP out to the next game so they can discuss it over drinks in the corporate box and nothing will change)
a breach wasn't found, but that contracting company eventually became bankrupt under the weight of our negative press and litigation. I know that this is essentially bullying but it was used as an example to other contractors who might try something like that.
Incidentally the SaaS provider no longer exists, gobbled up by netsuite (which was, itself, acquired by Oracle).
If a company with weak data-protection standards wins out over a company with strong ones, it's never because of their lack of data-protection standards. Rather, it'll be because of all the other things (features, pricing, marketing, etc.) they can do instead, which are the opportunity cost of decent security. So as far as the information available to laypeople is concerned, most companies do a decent job with security, and it's just a few bad apples that happen to be gigantic (Equifax, Facebook, Target, Yahoo, Anthem, the U.S. government) that are screwing things up.
(FWIW, at Google we took security very seriously and implemented some truly heroic measures to keep your data safe.)
Experian didn't lose any customer data, though. They only lost data on their products. Their actual customers had no reason to stop paying for their services.
I agree that an individual unfairly blamed by a company for their failure should be able to move on with their life, but... we've seen plenty of clearly guilty people get out with a golden parachute and turn to serving on corporate boards for the ridiculous sums such positions tend to net.
IME, this is especially the case for security positions like CISOs, where the pool of people with such experience is excruciatingly limited to begin with (and no, a high level engineer/developer does not have the same skillset as a security professional).
There's also something to be said for allowing people to learn from their mistakes. It's obviously higher stakes for an executive, but it's along the same vein as how we don't blacklist-for-life the developers who write vulnerable code.
I don't hate management; I've worked for some great middle managers who made my life easy, and for some terrible ones who constantly over-promised and pushed the weight down on us in the trenches. In upper management I've worked under three main veins of person: the ones who micromanage and try to invest themselves in every problem, leading to an inability to make good high-level decisions; the sort who are so removed from the business that they can't reason about direction and fail to support a company's natural growth; and those who are approachable but limited, who will voluntarily back out of any low-level decision discussion but coordinate which decisions are being discussed and what those decisions mean for other parts of the company.
So mainly I'm rejecting your assumption that the pool is limited to begin with - people do come from famous families and waltz into the field with no prior experience, and those who try to work their way up tend to be stifled due to their lack of experience.
I'm certainly not saying that all C-levels possess these necessary traits in a positive way, and there are definitely some C-levels that only got where they are because of nepotism or luck, but I also disagree that there is a significant 'stifling' of newcomers. Nearly every company I have worked at has had a specific "track" for its employees to pursue management (including C level) positions, but my experience is that most people just aren't cut out for it (either because they self-selected that they didn't want/enjoy it, or because they didn't have the necessary personality for it). More specific to the tech industry, I've often seen/heard of Silicon Valley companies having separate "Individual Contributor" versus "Management" tracks. Many engineers self-select the IC track because they don't enjoy management aspects.
And that's not necessarily a bad thing, either. Not everyone is destined to be a CEO, nor should that be everyone's goal, and there's definitely nothing wrong with not being a possessor of the negative-in-many-aspects cutthroat ethics that being a CEO often requires. It's not all too dissimilar to how not everyone is destined to be a programmer, and you can't take just anyone off the street, hand them a programming textbook, and turn them into Linus Torvalds, nor should you.
My hunch is that this is no more true of C-levels than it is of any other profession where some natural aptitude (eg. above average intelligence) is required. In other words, I think the "pool" of C-levels is small almost solely because of organisational hierarchy; for every C-level there are many more people with the required natural aptitude who are not C-levels. Of course, for a sufficiently narrow domain, the intersection of people with the required natural aptitude and people with the required years of domain experience may become very small.
In that sense, I think being a C-level is something that many people can just "train up to," if given the right opportunities. I'm not sure if there is any empirical evidence that could tell us who's right.
I've seen enough "emergency temporary promotions" succeed in their job that I tend to agree with you and not with the self-serving "I am special" arguments you hear from people in these circles.
If we revert from "involved in a controversy" back to "has demonstrated extreme incompetence", your argument carries less weight. We can at least say that the inexperienced new guys haven't been tested and found wanting.
Is there some finite limit to the mistakes humans make over their lifetimes? If anything, it's the opposite: those who make more decisions are, by definition, likely to make more wrong decisions than someone who makes fewer.
> Isn’t the board, by definition, paying for people who have a very high chance of making good decisions?
Yes, which is why the salaries for such positions are often so high.
The demonstrable lack of financial repercussions for failure, which you are arguing is justified in some cases, belies this causal relationship. I'd echo Taleb and say that if an elite class is to be healthy, incompetence must swiftly and summarily result in expulsion.
Tell me again one meaningful action against a data leak in the private sector. I'll wait.
That's trivially true, but the proper response to bad security is good security, not shutting down the whole system.
Although until 9/11 the intent was usually either to get to a non-extradition country or to demand something from a nation state.
The source above shows a clear decrease in airline fatalities through the years, but I suspect that’s due more to safety improvements through autopilots, better sensors, and more redundancy than to the decrease in hijackings.
This does nothing to justify the massive surveillance.
2. Delete information after use.
Approximately, the digital equivalent of having a human rifle through filing cabinets to get to that one folder that is actually important.
To this day, the only reliable way to achieve this has been printing things on paper, especially if put in individual folders so that even OCR efforts take some human work.
Time spent by human hands is, in a way, the only somewhat fair currency to measure privacy in.
But often the risk of personal harm outweighs the benefits. And in the case of digital assets the question is when, not if this personal data will be exfiltrated. And when it is, that is often more inconvenient than any potential convenience benefits.
The photos themselves are pretty useless anyways. A database of images will only ever be searched by an ML algorithm, for which signatures should be good enough anyways, or manually, based on highly specific timestamps, by some form of police.
Not sure why you see that as useless; it's basically the moral takeaway from Hamlet. There are many situations where it's best to not join in 'the game'.
Edit: Wait, just social media handles/account names, not login details. That's less ridiculous. My mistake.
It is naturally very difficult to enforce security mandates on a company that isn't your own, but I feel that this is one of the best ways we can improve security overall in our society: companies need to start requiring that everyone they do business with have a strong, independently certified security program, or else no contract will be signed. This is already done for things like data center contracting, but it should be much more widespread and encompass every type of b2b deal.
No, actually your system was compromised by allowing the subcontractor to copy the data to another, more insecure network.
As a start how about requiring ISO 2700x security certification?
“There should never have been the ability to download a database like this off of government servers.”
Sorry that I don't have a ton of links to support this claim, but "believe me" (as our Commander-in-chief would say) that the US Government would cease to function if it were not for subcontractors (read, private companies) performing tasks on behalf of the government. Personally, I don't agree with this way of our government doing business, but that is the way it is.
When I was in college, I worked for an archeology lab, and our lab was the subcontractor, of the subcontractor, of the contractor that had contracted to provide a service to the USACE (US Army Corps of Engineers). And every way along the way, money was skimmed off of the top. It's just "the American way" of doing business.
People lament regulation all the time. I have a feeling the executives of Ingersoll Rand love it every time a new regulation is put into place.
Follow the money.
How long will it take the general public and elected officials to understand that the only authorization that matters for digital data is the actual implementation? Policies, legalese, mandates, or any other agreements are meaningless.
If the data can be gotten at from, or transferred to, outside of a controlled environment, it will be.
When you plan your security, step 1 is making it hard to get in; step 2 is making it hard to persist (i.e. plant a command-and-control process somewhere inside the perimeter) and to move laterally in the system (i.e. get from one service into a more important one).
There's some basic stuff, such as firewall rules that prevent outbound traffic from ports/processes you aren't expecting. That makes it harder for the hacker's command and control systems to get instructions. There's other stuff like using separate credentials for low sensitivity vs high sensitivity systems, two-stage approval processes for especially sensitive operations to prevent a single compromised user from being able to get to the good stuff, automatic password rotation so that exfiltrated tokens aren't valuable, and more.
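The egress-filtering idea can be sketched as a toy allowlist check. Everything here is hypothetical (the process names, ports, and data shapes are made up), and real enforcement lives in the firewall, not in application code; this just illustrates why unexpected outbound traffic is a useful signal.

```python
# Toy sketch of egress filtering: flag outbound connections that aren't on a
# per-process allowlist, so a command-and-control implant's callbacks stand out.

# Hypothetical allowlist: process name -> destination ports it may use.
EGRESS_ALLOWLIST = {
    "payment-api": {5432, 443},  # its database and upstream HTTPS only
    "log-shipper": {6514},       # syslog over TLS
}

def suspicious_connections(observed):
    """Return observed (process, port) pairs not covered by the allowlist."""
    flagged = []
    for process, port in observed:
        allowed = EGRESS_ALLOWLIST.get(process, set())
        if port not in allowed:
            flagged.append((process, port))
    return flagged

observed = [
    ("payment-api", 5432),  # expected: talking to its database
    ("payment-api", 8443),  # unexpected outbound port -> flag
    ("unknown-bin", 443),   # unknown process phoning home -> flag
]
print(suspicious_connections(observed))
```

The same shape applies to the other measures mentioned: define what normal looks like, then alert on everything else.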
Those are just single things though. I think the more interesting part is an exercise like this: assume that the hackers have compromised a developer's computer. In that case, what does a system look like that would prevent that developer from exfiltrating payment info? I would argue that the developer doesn't normally need access to real payment info, so maybe the network should be configured so that the developer is unable to SSH into that set of database servers without first requesting a special short-lived SSH keypair. That at least means the developer has to explicitly ask for access. That doesn't make the hacker's job impossible, it just makes it harder. Also makes things less convenient for the developer, so is it worth the trade-off? For especially sensitive data, it probably is. With this setup, maybe the hacker gets to the account information, but they're stopped short of account numbers long enough to notice the breach.
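The "request a special short-lived credential" flow could be sketched, very loosely, as a token broker. All the names, the TTL, and the audit logging here are invented for illustration; the point is that the credential expires quickly and the request itself leaves an audit trail.

```python
import secrets
import time

TTL_SECONDS = 15 * 60  # hypothetical access window: 15 minutes

_issued = {}  # token -> (user, reason, expires_at)

def request_access(user, reason, now=None):
    """Issue a short-lived credential; the request itself is an audit event."""
    now = time.time() if now is None else now
    token = secrets.token_hex(16)
    _issued[token] = (user, reason, now + TTL_SECONDS)
    print(f"AUDIT: {user} requested access: {reason}")
    return token

def is_valid(token, now=None):
    """A stolen token goes stale on its own once the window closes."""
    now = time.time() if now is None else now
    entry = _issued.get(token)
    return entry is not None and now < entry[2]
```

With something like this, an exfiltrated token is only useful for minutes, and every access request is attributable to a person and a reason.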
This is all on the theoretical side, but that's the thought exercise once you go "let's pretend someone compromised ____ system."
So obviously, your payment hosts should be very wary of things like port forwarding over SSH, and any unknown outbound traffic.
The fundamental imbalance in such an arms race is that the tech giant might have countermeasures that would prevent the SSH alias from working (my team does), but the level of paranoia required to get those countermeasures in place is beyond what a bank could effectively implement. This particular battle disproportionately favors the red team.
And that's not to say that my team has everything covered. The red team consistently manages to find forehead slapping holes in our defenses. There's just too much surface area to cover.
2. Don’t collect data that doesn’t actually help enforce any laws.
3. Don’t produce new legislation that doesn’t actually solve any existing problems (it is already illegal to break the law).
4. The best way to keep secrets is to not have secrets in the first place. Once you have secrets, the best way to keep them is to not share them.
Or, we could avoid building a massive surveillance network that doesn't help make our world better.
> The compromised photos were taken of travelers in vehicles coming in and out of the US through specific lanes at a single Port of Entry over a one and a half months period.
The article also mentions this was part of a new program to use facial recognition on everyone entering the country. We've never had that capability before, so I see this working like it always has.
Regarding the visa photos, it depends on what they're used for. In general I'd prefer avoiding ongoing surveillance as much as possible, which reduces the need to keep digital photos preserved in accessible places.
For example, you could try to put payment credentials in a separate subnet where they are never read out of that enclave. Access to that subnet might require separate authentication credentials that most employees don't have, and API calls might require the calling server to possess a separate type of short-lived certificate. So when the main DB is compromised through an employee, it's still hard to laterally access more sensitive data.
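The layered checks described above might look, in a toy model, like requiring several independent conditions before sensitive reads succeed. The service names and fields are made up; the point is that compromising the main network alone fails two of the three checks.

```python
import time

# Toy model of the enclave idea: reading payment data requires membership in
# the enclave subnet, a separate credential, AND an unexpired short-lived
# certificate, so one compromised employee account isn't enough.
ENCLAVE_MEMBERS = {"payments-svc"}  # hypothetical service allowlist

def can_read_payment_data(service, has_enclave_cred, cert_expires_at, now=None):
    now = time.time() if now is None else now
    return (
        service in ENCLAVE_MEMBERS  # caller runs inside the enclave subnet
        and has_enclave_cred        # holds the separate authentication credential
        and now < cert_expires_at   # short-lived certificate still valid
    )
```

Each condition is cheap on its own; their combination is what makes lateral movement expensive.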
What was stolen:
What wasn't stolen:
>No other identifying information was included with the photos and no passport or other travel document photos were compromised, the official said. Images of airline passengers from the air entry and exit process were also not involved.
Sounds like CBP's issue was less about compartmentalizing, more about controlling for how the subcontractor accessed the data.
Honestly the problem sounds more like something borne from ignorance than malice. It's a headache having to download every image you have to analyze, so why not copy the whole thing to a local network drive and work with it here? And then some hacker lifted it from the local network drive.
Anyway I wasn't talking about the CBP specifically. I was responding to the question about why decentralization saves you from compromise. My response was that compartmentalization is useful for damage control.
Anyone think they approved the security of that subcontractor before giving sensitive information to them?
More importantly, why is that type of data leaving CBP in the first place?
Compliance, almost by definition, needs to make people's job harder, or create extra work. Because people are lazy, and they tend to go for the path of least resistance, and those are not good things in the context of safety and security.
They almost certainly did, actually. FIPS and FISMA are pretty strict requirements for every company contracting with a government agency. IMO it's one of the rare situations where, at least conceptually, the federal government has done something right in terms of security.
Now whether FIPS/FISMA, and the people enforcing it, actually have any teeth or effectiveness is a different topic entirely.
A policy could be something like:
"Vendor shall not move sensitive data out of CBP's secure network"
So it's pretty much on the honor system. And some new employee at the vendor may not even be aware of all of the policies they are supposed to be following. The vendor is still responsible for that employee's actions, but it can be discovered too late (as in this case, where the breach had already happened).
But instead of just a written policy (among dozens or hundreds of others) that people are expected to abide by, this could be enforced by limiting the vendor's access to the network. For example, by counting how many records they access or how many bytes of data they download over their connection to the secure network, or by not giving them direct access at all and exposing only an API controlled by CBP that gives them access to only the data they require.
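The "count what they download" idea could be sketched as a quota on a proxy sitting between the vendor and the data. The budget and names are made up; the point is that the limit is enforced in code rather than trusted to a written policy.

```python
# Sketch of volume-based enforcement: the proxy cuts the vendor off once
# their transfer volume exceeds a per-period budget, instead of relying on
# the honor system. Numbers are purely illustrative.
DAILY_BYTE_BUDGET = 50 * 1024 * 1024  # say, 50 MB per day

class VendorQuota:
    def __init__(self, budget=DAILY_BYTE_BUDGET):
        self.budget = budget
        self.used = 0

    def record(self, nbytes):
        """Account for a requested download; deny it once over budget."""
        if self.used + nbytes > self.budget:
            return False  # request denied; a bulk copy of the whole DB fails
        self.used += nbytes
        return True
```

A bulk copy of an entire image database would blow through any sane budget almost immediately, turning the exfiltration path in this breach into an alert instead of a silent success.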
>  https://en.m.wikipedia.org/wiki/Imran_Awan
I don't see how that link supports your conclusion? From my reading of it, no data was stolen by Imran Awan?
Of course we shouldn't just 'give up' and stop trying to improve our security, but the unfortunate truth is that breaches are practically inevitable. In addition to constantly striving to improve our security, our society also needs to start investigating ways to make it so that breaches are less impactful (for example, stop using SSNs as any type of secret identifier, so that if an SSN database is breached, it doesn't matter).
His wording was misleading. Not his intentions. Nobody is in disagreement that security is very important.
Granted, lowering liability is apparently something I shouldn't worry about since no one is ever held to account for breaches these days.
This implies that in addition to reducing the likelihood of breaches, you should also focus on all the other aspects of security, especially detection and mitigation; and for databases one of the main ways of reducing the impact of breaches is to avoid storing sensitive information as much as possible. In this particular example, was it really necessary to store pictures of license plates beyond a very limited period of time? A breach can't leak what you don't store, and you will get some breaches.
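The retention point above could be reduced to a very small rule: records older than the window simply get purged, so a later breach has nothing to leak. The 30-day window and record fields are assumptions for illustration.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # hypothetical retention window

def purge_expired(records, now):
    """Keep only records still inside the retention window; a breach can't
    leak what was never kept."""
    return [r for r in records if now - r["captured_at"] <= RETENTION]

now = datetime(2019, 6, 10)
records = [
    {"plate": "ABC123", "captured_at": datetime(2019, 6, 1)},  # 9 days old: keep
    {"plate": "XYZ789", "captured_at": datetime(2019, 4, 1)},  # 70 days old: purge
]
print([r["plate"] for r in purge_expired(records, now)])
```

Run on a schedule, a job like this bounds the blast radius of any future breach to one retention window's worth of data.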
If you ignore these principles, you make room for people who lack self-worth, and those are the most destructive forces in a society because they have nothing to lose.
This is, of course, a serious breach, and there will and should of course be consequences for the negligent parties.
I am struggling to see the threat model being faced here.
biometric data is just a username. I flash my face around all day, and am careless as to where I leave my thumbprint.
The loss of so many photos and names is unlikely to have national level consequences (compare this to, say, the Office of Personnel Management breach from some years back - that has horrible implications for US national security for decades) and the personal level consequences are ... hard to see
What this does underline is that we are outrageously careless as an industry with our data (comparable to early industrial "pollution" as Schneier points out). And it is not going to get better without a) career and business ending consequences b) new ways to store / secure data c) a new way of thinking about who owns and what is personal data
Personally I think we need a new form of intellectual property (just as we are trying to work out what kind of company FAANG are (not telcos, not newspapers, what is a platform?) we need to ask what is personal data
This comment is presumed under law to be my property, my copyright. I might license that property away (dunno never read HN T&Cs) but it is mine. But google and apple and others will track that I sat down at a certain time and place to write it, my ISP will see when I sent to which servers.
All of that data is also created by my conscious actions - should that data not also be my property? And if need be, licensed, and compensated for its use?
And when (if) my data is held - then we should presume that it can be accessed by my agents for my benefit (from spending patterns to heart data). I would argue that sometimes surveillance can be good for us - but only in ways similar to how doctors knowing more about me can be good for me - the entire industry of medicine has individual interests at its heart, and it took a long time to get there.
We are heading in that direction (perhaps), but till we get there, carelessness will be the cheapest option, and surveillance will always be bent against us (by state or other actors). We should rail against this stupid, dumb breach, but punishing the "bad guys" is not even the first step on the road.
If I can make a bad analogy - it's not that people got sick in one incident from one chef badly cooking chicken; it's that we need to look at factory farming, meat consumption, healthy eating, and marketing bias as a whole.
We don't really know the full details of the breach, but if the facial recognition database contained names in a column associated with pictures, that data can absolutely be leveraged and cross-referenced against other "fullz" for fraud that even passes a lot of online verification procedures.
But this kind of comes back to my point - why do we have online verification systems that rely on things like knowing my address in the last three years - Equifax breach should have meant we gave up on using a credit risk scoring system as an identity provider.
But we don't.
We need to rethink what is identity (start with web of trust) and who owns data that links to that identity.
I mean, this could be the start of a positive identity provider - grab that downloaded database and provide a system that says this is a picture of Paul Brian's face, and his passport, and on the 20th of August last year an official of the US government compared them in real life and verified they matched (there may even be a hash of the digital images made at the time, but I should not get my hopes up).
Now make that globally available. Is that useful and valuable? I think so. I would prefer if I had been able to upload my public key to it at the same time (I can always visit NYC again), but you get the idea. This leads to questions like: why does my passport not generate a key pair for me to use? Can I use facial recognition to match my gravatar / facebook / twitter? Why is knowing a non-secret (mother's maiden name, passport or drivers license number, three digits on the back of a credit card) seen as security?
Why is it we use what we have to hand and not what is needed? Why don't american banks use chip and pin?
It's not bad that my online identity is clear and visible - as long as the legal and practical frameworks exist to support it - which they basically don't right now but we could make it happen
It's just a pile on at this point.
Nothing else in the article.
I'm a long distance trucker. A few weeks ago I was traveling north from Laredo. When I drove through the border patrol checkpoint, a bank of five or six cameras to my right flashed, I assumed getting my face, license plate, and likelihood of committing a crime in the near future.
The truck is registered to my employer, but I'm sure that can lead to me with a WHERE clause.
At the least they would know where they've seen this face in this truck. I wonder if being in a different truck would be suspicious. I guess it would be if they needed it to be.
That's one way of looking at it.
Could this lead to criminal charges? Perhaps charging the contractor under CFAA for unauthorized access?
> And on Monday, after I published this column online, Department of Homeland Security officials called me to disclose that photos of travelers were recently taken in a data breach, accessed through the network of one of its subcontractors.
They have a joint venture with DEA to have fairly comprehensive coverage of interstates. Also, private companies offer LPR services and sharing, not sure if this company did or if that database was breached.
How many individuals and vehicles have been impacted?
Is there any way we can hold the agency and its contractors accountable for this issue?
Requires, but how do they enforce it?
They created policies that could be ignored. That’s on them. They shouldn’t be able to use their position to avoid accountability or to scapegoat their contractors (that they likely hired without due diligence).
Government agencies should never be seen as victims. They hold power and authority that nobody else can hope to enjoy. There is no higher power to hold them to account because the electorate had already been subverted to maintain their position. So they should not be protected from fucking up. In this context, God or the Lord is not a higher power, it is also a scapegoat.
With great power comes everybody else’s responsibility... said only by people in this century.
Edit: to follow this up, CBP is also the agency that sucks up all the data on your phone and laptop. They have treasure troves of license plates, passport photos, and titty and dick pics.
They cannot absolve themselves of liability when they are invading everybody’s privacy. If they say they don’t use the data, and they are acting out of ignorance, then that’s a solid case for not collecting it in the first place.
As it stands, the US needs a GDPR.
Go after CBP for constitutionality of collection, for working outside of borders where they are legally not allowed to work, etc, but in this case I’d say let’s not blow things too out of proportion.
Remember when OPM lost hundreds of thousands of detailed compromising personal background check reports with all the identifying information including biometrics? This sounds like some port of entry data you could get with a camera in public.
Further: they are not absolving themselves. They are probably working their asses off right now to make sure this never happens again, but somebody is going to pay for credit protection and insurance, and it should be the contractor that ignored their contract and all sensible security policy. So, there it is in the press release.
Lastly: I don’t think GDPR fixes this. Government (especially intel community and law enforcement) keeps the data as long as their record schedules allow.
Thankfully, laws about breaches required them to reveal this to us within a certain time. Privacy Officers have really hard jobs. To do them well is hard and thankless. Glad this one stuck to the law.
Maybe government agencies shouldn't be allowed to contract out. And if they are, then they should be held ultimately responsible for their choice of contractors.
Historical table: https://www.opm.gov/policy-data-oversight/data-analysis-docu...
A concurrence in my assessment: https://www.nationalreview.com/2017/02/federal-government-gr... ("So, since 1960, federal spending, adjusted for inflation, has quintupled and federal undertakings have multiplied like dandelions, but the federal civilian workforce has expanded only negligibly, to approximately what it was when Dwight Eisenhower was elected in 1952." Note I'm not necessarily agreeing with the sentiments expressed elsewhere in that article.)
AFAIU for over half a century there's been something of a gentlemen's agreement in Congress among Democrats and Republicans that keeps the official headcount fixed while expanding government through contractors--the closest thing to a wide-spread "conspiracy" (tongue-in-cheek) I've ever seen. Of course, lobbyists and the contracting industry play a huge part in maintaining the system, but IMO that overlays the long-term political equilibrium reached in Congress.
One reason I finger Congress, and not lobbyists, as the principal supporters of the system is that Democrats would much rather have full-time federal employees, so they're clearly compromising. It's hard to say what Republicans want, but to many Republicans hiring contractors 1) squares limited government with electoral pressures to "do stuff" at the federal level, and 2) superficially provides better price signaling through competitive bidding (though if we're honest that's... complicated). Note how the numbers remain conspicuously stable across major domestic and international political shifts. It's fascinating.
State and local government workforces have ballooned, and a lot of federal expenditures are administered via state-based programs. But that doesn't conflict with the "conspiracy" noted above, it's arguably just a way for the Democrats and Republicans to jockey around it.
I'd like to believe that this will happen, but I've seen plenty of cause for FSCs to be revoked and almost no FSC revocations.
seems to have worked out very well for the army, and their contractors.
So well in fact, that a senator is on a campaign to pass legislation specifically addressing the military case (leaving cases like CBP's, which should have been obvious from the get-go, to be dealt with individually too). The system is so broken in its lack of accountability that even well-intentioned people are driven to insanity as the norm.
This is incorrect. They can absolve themselves of liability and act with impunity.
You and I might not like that, but it is fact.
I think that giving the benefit of objectiveness makes it easier for them to continue down this path.
That's a weird absolute, and that's before the side dish of theology and... Spiderman? You can be powerful or negligent or whatnot and still be a victim.
It's not _the people_ who made the decision to collect this data.
I don’t fault the engineers in any case; it seems their technical security wasn’t tested here. It was some kind of policy failure that led to the information leaving government control. And that’s the problem: we don’t solve this with engineering, or with empathy for engineers. We solve it by letting legislators know what we feel and know as members of the industry, through letters and the ballot box.
Why did citizens' private data leave CBP systems in the first place?
Quote from the article:
“There should never have been the ability to download a database like this off of government servers.”
I love when I read quotes like this that are so obviously written by non-tech people that have no idea what they're talking about.
As we at HN all know, if it exists digitally, it can - and will - be downloaded. End of story.
In a functioning democracy "they" is "us".
That notion is generally not in keeping with the western common-law tradition, which takes a more skeptical view of the idea that governments have the moral authority to operate within any domain and with any powers so long as they can connect those powers to the consent of the governed.
Instead, the tradition of the United States, for example, is that government power is limited and enumerated and does not change no matter what "us" may say about it in the form of political elections.
I suggest that you read (just an example) Federalist 10 by James Madison and consider how thoroughly these guys thought through the argument you are making about democracy and how hard they tried to make something better.
And none of this is to amount to founder worship: we can all see now that there was tremendous hypocrisy in founding a nation which didn't categorically prohibit slavery from the get-go. And in fact, slavery continues to this day in the form of a prison system that "us" has occasionally been happy to endorse, the rights of the incarcerated be damned.
All I'm saying is: don't tout democracy in such simplistic terms without also considering the arguments of its critics.
It's no different from the moral hazard large corporations face, which we empower through cronyism; the effects are obvious, notably in the events leading up to the great financial crisis of the late 2000s.
Large institutions not held accountable get to take outsized risks knowing they'll always be bailed out and not held accountable. This article is one of many examples of such.
The whole setup seems more and more like a grand cash grab.
The Permanent Apportionment Act of 1929 was enacted because it was "too hard" for Congress to rezone/redistribute House of Representative members. This measure, and ones like it both in law and in business, create large bureaucratic organizations that move slowly and are prized for their stability, which is another word for "zero accountability or disruption."
Very few people set out to have growing inequality of resources or to amass power for the sake of doing it, though of course the people in power now seek to keep it for the sole reason of not wanting to lose it (they frame it as "too big to fail," "stability is important," and so forth).
It's just pure inertia. We went away from smaller regional governments that report up to a lightly-empowered federal one with a lot of individual liberty, step by step, for convenience and for "safety" (any number of military or police actions, foreign and domestic), and we get what we deserve.
Leaving aside the question whether the US is a "functioning democracy"...
In a "functioning democracy", "they" is in fact very often not "us", by definition in fact, since every vote has winners and losers.
Plenty of stuff that I disagree with is electorally popular: unlimited police powers, extremely severe punishments for crimes, military intervention in foreign affairs, censoring of offensive speech, criminalization of victimless acts like drug use, cutting taxes, lack of restriction on CO2 emissions, and so on. I certainly don't define myself as part of any "we" that supports, or is implementing, any of that.
"The will of the whole Nation is expressed in the State" is an authoritarian idea, not a liberal democratic one.