
AT&T has 110 million customers. Let's be optimistic and assume that each customer only has to spend one minute of extra time managing their account due to the break-in. That is more than 209 years of lost time.
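(Sanity-checking that arithmetic with a quick Python sketch:)

    # 110 million customers, one extra minute each
    customers = 110_000_000
    minutes_lost = customers * 1

    # minutes -> hours -> days -> years
    years_lost = minutes_lost / 60 / 24 / 365
    print(f"{years_lost:.1f} years")  # ~209.3 years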

Laws related to data breaches need to have much sharper teeth. Companies are going to do the bare minimum when it comes to securing data as long as breaches have almost no real consequences. Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible. Maybe have fines that are so massive that company leadership and stockholders face real consequences.




It surprises me that there isn't a single comment pointing out that corporations like AT&T don't collect all that data for fun. It actually costs them a lot of money, but they're legally required to by the government. While everyone is blaming the company, did you not take a second to contemplate how weird it is that you're fine with the government (and now everyone else as well) getting a record of all your phone activity? I'm old; back in my youth we'd have referred to that as a dystopian surveillance state.


There's no federal law requiring AT&T to hold onto this data.

There's possibly a FISA court requirement (too secret to reveal), but AT&T has long been an exceedingly willing part of the gov's spying apparatus. It fed these records and Internet data to the feds without any court order, and only escaped legal trouble when Obama, contrary to his campaign promises, gave AT&T, Verizon, and others retroactive immunity.


I'm no longer under this specific NDA, so I can talk a bit about this.

It was well known in the wireless industry that AT&T collected and kept the most data of all the carriers: 7 years for text metadata, "7 years" for call history (I put that in quotation marks because it was rumored that AT&T kept it indefinitely, but there were technical limitations on restoring data that far back), and 7 years for the contents of the text messages themselves. Verizon was up there as well, but I don't remember specifics.

The carrier that I worked with kept only 3 days of actual message content, 28 days of text-message metadata, and 28 days of call records in their enforcement database, but they could get calling records and SMS envelope information for billing going back 7 years. At the time, we had to implement sharding at the database layer that maintained the warrant database, due to the amount of traffic that we were receiving from the calling systems and the amount of queries/data that we were sending out, in near realtime, to law enforcement users who paid $10,000/month for access to that data.
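(For the curious, here's a minimal sketch of what hash-based sharding at that layer looks like; the shard count, key, and routing are invented for illustration, not the carrier's actual design:)

    import hashlib

    NUM_SHARDS = 16  # hypothetical; real shard counts are sized to traffic

    def shard_for(subscriber_id: str) -> int:
        """Map a subscriber to a shard by hashing a stable key."""
        digest = hashlib.sha256(subscriber_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_SHARDS

    # Writes from the calling systems and warrant-database lookups for a
    # given subscriber both route to the same shard, spreading the load.
    print(shard_for("+12025550123"))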

AT&T wasn't storing this data out of the kindness of its heart; it was a (probably small) revenue stream for them.


Ah, back in the day the FBI would pay $5,000/hr to talk to and work with our CTO. On top of that, we would charge them a monthly colo fee for the equipment they used to collect customer data.

Sometimes they had warrants, but mostly they just bought the data.

That started a year or so after 9/11, and the relationship lasted years.


Welcome to the US: a self-proclaimed champion of freedom, but with no respect for privacy. Even the EU is better at maintaining privacy than the US.


The EU is much more aggressive at banning and censoring websites, though. I can't recall the last time I ran into a website in the US that's blocked at the provider level (private moderation like e.g. YouTube is a different story). Maybe TikTok is the most famous, but it's still around and available afaik. But in the EU, I ran into "the government has decided this information is bad for you" all the time, with a nice notice from the internet provider. My hunch is that under various pretexts both societies will continue to drift towards more censorship and less privacy, perhaps with some temporary local differences.


It depends on the country; the EU doesn't have uniform laws for Internet censorship. Still, in most EU countries it is better than in the US: https://en.wikipedia.org/wiki/Internet_censorship_and_survei...


I've never encountered anything like that while over here.


Retention periods seem like a moot point if the government just slurps every piece of data anyway and stores it indefinitely.


Not everyone in law enforcement gets to play with the NSA's toys, though. Some actually have their warrants and subpoenas glanced at by a judge before they get rubber-stamped.


While being briefly "glanced at" by a judge is certainly better than nothing (or just already having the data like NSA), practically it just means law enforcement needs to adapt some generic boilerplate justification text to each request.


Thank you for sharing this, it is helpful context when discussing data security and privacy with regulators and federal Congressional reps.


They keep personal customer details like SSNs indefinitely, even if you're no longer a customer.


They've added windows to it now, but back in the day I always wondered what this windowless skyscraper in Downtown NYC was.

https://nymag.com/intelligencer/2016/11/new-yorks-nsa-listen...


That’s the AT&T Long Lines Building. It probably did have an NSA surveillance closet, but it wasn’t built without windows for that reason. The story I was told (by older colleagues when I worked at AT&T Labs) was that it was built during a time when riots and street violence were more common, so the fortress appearance was to ensure the city could maintain long-distance connectivity during urban unrest.

I believe there was another similar nexus downtown near the World Trade Center, which was destroyed on 9/11. For at least a couple of weeks we had very limited communications and credit cards were hard to use as a result.


It’s built to withstand a nuclear blast. There are buildings like this all over the country (though not in skyscraper format).


Perhaps, but the other version would explain the "nuclear-war-proof" thing.

I am sure the employees were told SOME kind of legend, because that building raises questions.


There was a lot of nuclear war planning around those from the 50s through the 80s.

There are some good sites out there that go into detail, like http://coldwar-c4i.net/


A tall above-ground building with no windows doesn’t seem like a good candidate to survive a nuclear blast.


Long Lines buildings were not going to take a direct nuclear hit, but they were hardened against shockwaves and EMP.

I came very close to buying a long lines microwave relay site, and got to tour it a few times. It had a hardened tower, as well as copper grounding that went deep into the ground. Mining the copper would have paid for the site, but alas.

These buildings were built based on the 1950s threat of Soviet bombers attacking the United States. The New York City metro area was protected by air defense missile sites and interceptors. The air defense systems would air burst small nukes in wartime to destroy bomber formations.

Once the threat shifted to ICBMs in the 1970s, hardening was moot.


Yup, an underground structure would normally be a better design. But that would quickly get flooded with water in Manhattan in the event of a nuclear blast followed by loss of power.


Americans like to complain about the GDPR, but it exists to prevent exactly this sort of thing. Data cannot be retained longer than it's actually needed or required by law, and can't be sold without explicit permission. Law enforcement can't just buy data: they need to have legal authority to get it (though in many countries the bar for that is too low). In most cases the cheapest and easiest approach is to collect as little data as possible, and to delete it as soon as it's not strictly needed. This greatly reduces the compliance burden.
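(As a toy illustration of what "delete it as soon as it's not strictly needed" looks like in code; the table, column, and 90-day policy here are invented:)

    import sqlite3
    from datetime import datetime, timedelta, timezone

    RETENTION_DAYS = 90  # hypothetical policy window

    def purge_expired(conn: sqlite3.Connection) -> int:
        """Delete records older than the retention window; return rows removed."""
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        cur = conn.execute(
            "DELETE FROM call_records WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        conn.commit()
        return cur.rowcount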


You obviously did not follow the recent drama in the EU related to Chat Control V2.

The EU wants LEOs to have access to the contents of your messages/emails/metadata and keeps extending the Chat Control V1 law in order to not have to delete the data that it already has.

You may not be able to buy that data outright but it will be out there and collected by the messaging providers on behalf of the EU.

It even had a data retention law that forced providers to keep up to 8 years of data related to their customers so that it could be handed over to LEOs.

The EU's stance on privacy is just lipstick on a pig. When you peek under the curtain of the privacy laws in the EU, you'll see that it's not better here than in the US.


> You obviously did not follow the recent drama in the EU related to Chat Control V2.

It is strange to say they wanted it when we have proof it was voted down and widely unsupported. A part of the EU government apparatus wants it, but taking that and saying the EU wants it is not honest.


The regular Joe doesn't really care, to be honest.

I have talked about it around me a bit and most people who do not work in tech or who don't have a certain interest in online privacy or privacy in general don't know about it.

Of course, when you ask the citizens of the EU if they are cool with being monitored at all times by the EU LEOs, they don't want it, but the commission wants it badly. All this is due to the heavy lobbying that has been happening in Brussels.

The worst part is that this is happening while the EU is saying that it wants data sovereignty and wants to become less dependent on software coming from the US, yet it's ready to get in bed with a US company in order to deploy this mass surveillance system that is supposedly very good at finding CP.

Nevermind the fact that it means that every bit of online communication will be analyzed and dissected by a corporation that is out of reach of the EU.

But the commission is not stupid; they carved out a nice little clause so that they can be exempted from such mass surveillance. I guess they understand that having all telecommunications monitored by a for-profit company that is not from the EU could lead to some embarrassing data leaks, just like we saw with AT&T, but they don't care if it's our data that leaks as long as it's not theirs.

That is why to me GDPR is just a facade. You can't seriously say that you are pro privacy and pro democracy if you keep trying to recreate the Stasi on a larger scale.


CP is just a pretext to keep records on everyone. Good thing everyone over 40 in Eastern Europe still remembers the Stasi and its sister secret police agencies that collected data on everyone and tortured political prisoners. I suspect that climate activists are the next likely candidates for an eventual repression apparatus, so better beware.


Portugal and Spain also aren't fond of their politicians from 50 years ago (their regimes fell in 1974 and 1975, respectively). To add to your point.


The fact that it had to be voted on in the first place, and then re-presented within six months, is the problem.


I was talking about the GDPR, not EU regulations in general.


How does it look, on the one hand, to say that the EU cares about its users' data, wants users to be able to choose whom it is shared with, has clear guidelines on its storage, and levies fines on companies that breach these terms, and then to turn around and come out with Chat Control V2?

Something does not compute. Either you are pro privacy and you act like it or you are not.

It kills me to hear that Europe is pro privacy, because it is not true. Not if you look under the veneer and start peeling back the layers.

These sorts of data breaches should be a wake-up call for any state actors who are planning on collecting massive amounts of data on their citizens.

It should make them pause and say: you know, maybe we should not be handing all our data over to Russia or China the moment they manage to break into our systems.

Maybe the best way to avoid such data breaches is to not store the data in the first place.


You're arguing with a lot of things that I didn't say. My comment was entirely about the GDPR.


The US also has laws that, in isolation, would suggest some sort of protection against universal corporate/government surveillance, but they’re no more effective here than in the EU.


At first I read this as GDR


Do Americans complain about the GDPR? I’ve only ever seen them say they wish the US had something similar.


American businesses, especially in predatory industries like adtech, complain all the time.


I would hardly roll that up to all Americans, though. Of course companies whose business model is seriously hurt by the GDPR will complain.

Most Americans wouldn't even know what GDPR is, let alone have a reason to complain about it.


They are talking about Americans on this site, who very often work at the companies that the GDPR was made to stop from preying on users. Many European users here also work at such companies, so you often see it from them as well, but not as often, since those companies are mostly American.


Ah, got it. I totally missed that context here somehow. I hadn't noticed a habit of Americans here complaining about the GDPR, but that's interesting given another common pattern here of libertarian ideas. An American complaining about a different country's internal policies doesn't seem particularly libertarian.


Yes, mostly blaming them for cookie banners (which aren't because of the GDPR) but also because it makes them need to think about compliance.


"but the cookie banners look so bad and ugly!"

Well, that's kinda the point, but way too many website owners would rather torture their users with barely compliant implementations than do what the GDPR intended: get rid of third parties.


> way too many website owners rather torture their users

including official EU websites


Which usually have an

[ACCEPT] [REJECT]

without any dark patterns whatsoever.


Also cookie banners are from the e-privacy directive, not the GDPR.


I'm positive informed consent doesn't require cookie banners, but the advertisers opted to make it as annoying as possible so that everyone would click "accept" just to be left alone. It could be a browser mechanism that asks only once for all sites, with a whitelist.


Let's not pretend that the GDPR fixes this in any way. There are still EU data retention laws in place which force ISPs/carriers/... to store all kinds of data for a reasonably long time.

I don't know who Europe's biggest telco is, but if they got breached, the damage would be just as bad.


> There's no federal law requiring AT&T to hold onto this data.

This is false? https://www.law.cornell.edu/uscode/text/18/2703 https://www.usnews.com/news/articles/2015/05/22/how-long-cel...


There's required disclosure, using an administrative subpoena, for records over 180 days old, if they have them.

CALEA requires phone (and later broadband) equipment to conform to wiretapping standards, and if a carrier gets a court order to wiretap, it has to provide that data from warrant receipt until warrant expiration.

Landlines have some data retention requirements.

But there's no law on broadband or wireless data retention.

There may well be, and likely is, a secret FISA court order under Section 702 that's been served to telecoms, but an astonishingly small number of people in government and industry know whether it says that they just have to hand over records in real time or whether they need to keep records for some period of time.


That's interesting; I did not know this about the Obama govt. Do you have a good article about this? (Yes, I'm lazy; I could search for this.)



That was Bush, not Obama.


Being required to do something doesn't justify doing it poorly. AT&T brought in over $3 billion with a B of profit with a P in Q1 2024. They have more than enough money to secure their systems. They're not struggling. In March of this year they bought back 157M of their stock. They could have instead put that money towards security, but they didn't: they put it towards enriching shareholders.


Money can't buy competence, at least not at organizational scale.


Sure, but “incentivise a business to do something, and they’re more likely to do it” is still true.


Fine, but they can clearly afford to pay for a lack of it.


A fine is cheaper than solving compliance issues. Many such cases, unfortunately.


Maybe it can't for execs, but without money you literally couldn't hire competent security folks.


And who makes up a large percentage of shareholders?


It was Snowflake's lack of security that did this, not AT&T's. Not saying AT&T is a paragon of security or anything, but Snowflake is where the hack took place.


A vendor's security is the client's security. Companies might choose a vendor for CYA in these instances, but if someone decides to send all of their internal business data to a third party, they'd better have a pretty good idea what will happen if that third party fails.


Snowflake has the same shared-responsibility structure as any other cloud provider: they provide enforcement but you are responsible for setting up and protecting your own credentials and permissions. They can’t impose “security” unilaterally in the abstract.


What do you know about Snowflake's role in this? According to the article, Snowflake says that they offered 2FA and AT&T didn't use it.

Perhaps that's not the whole story, but if true then blame certainly lies with AT&T to a significant degree.


It's mostly AT&T's fault, but it's sort of a side effect of Snowflake making their product easy to use and most of the industry overlooking credential-reuse risks.

Databases are not historically internet-facing, so data compromise historically also meant getting network access. But Snowflake provided web access to your database; they were an "easy to use" database as a service (a "cloud data warehouse"). Snowflake did not offer a way to host data within your own network or within dedicated subnets at a cloud provider, so companies could not rely on those networking barriers to limit malicious counterparties.

I've heard that Snowflake has begun requiring MFA for new accounts since this incident. If shutting the gate after the horses have left implies culpability, Snowflake has some.


Part of the job of hiring a contractor is taking responsibility for whom you trust with your security. To take it to the logical extreme: if "some rando they met in a bar" offered to store AT&T's credit card information for cheap, and it turned out said rando was stealing credit card information? Totally AT&T's fault for not properly vetting them.


the number 1 job of a company is to enrich shareholders.


Enriching shareholders is exactly what they are required to do.

What, nobody is allowed to make money anymore?


Sure, and then it's the government's job to ensure the shareholders lose their money when the company loses a hundred million customers' records. So yeah, it turns out that when you pay yourself instead of doing right by your customers, I think you shouldn't be allowed to make a profit.


No, they shouldn't be allowed to fuck over their customers at every turn so they can be greedy. The suggestion that we should be more worried about how much money the AT&T execs and shareholders make than about the needs of their 100 million customers is bizarre.


Those are not mutually exclusive.


Banks are required to maintain financial transaction records.

Is the argument that governments don't have a good reason to mandate record collection?

Why can't I ask my government to keep me safe from terrorists but also expect that companies will not just be careless with the data they collect as part of that?


The government has no right to track that either. They themselves launder trillions, start wars, and massacre millions; even a drug lord is a petty criminal compared to them. It's clear their tracking of any and all records of any type is more about control than safety, so it should be disregarded as an argument and done away with entirely.


> they themselves launder trillions, start wars and massacre millions, even a drug lord is a petty criminal compared to them

And then people wonder why privacy has a difficult time getting public support.


No, we already know it is because people are complete idiots who not only fall for "tiger-repelling rocks" but actively demand them.


The government can't keep its own data safe, as the OPM breach showed. Apart from some resignations, nobody faced any serious consequences for that either.


Even more reason for regulatory requirements covering data security for all organisations, both private and public sector.


Many (all?) banks keep financial transaction records for way longer than what is legally required. Thankfully, most banks are technically incompetent and are unable to easily use data that is not relatively recent. In fact, one bank I worked for had to load transactions from a CD-ROM archive which contained all the transactions in a printable text format (the same format as their printed bank statements). Multiple CDs per day, with no indexing or identification beyond the date. Trying to find a specific 10 year old transaction was very hard work indeed.


I agree. I think it's reasonable to expect companies to safeguard that information from malicious actors.


I don't agree. I don't think it's reasonable to expect it, because companies show over and over that they cannot do it. And let's face it, the only reason your company hasn't fallen victim to a data breach or ransomware is that you haven't been seriously targeted yet.

We need to change our approach. We need to look at why these kinds of data are valuable, and then make them not valuable. Then nobody will bother with hacking to get it.


This data is valuable primarily for spam mitigation and perhaps customer profiling.

Expect every SMS and MMS sent or received to be part of a spam mitigation and profiling program where it's stored indefinitely.

Apple not encrypting RCS is likely due to similar factors, where they have seen existing spam problems on RCS that are much harder to root out when you have end-to-end encryption.


In my not-so-humble opinion, the biggest problem with phone numbers is the ability to spoof any number. Please correct me if I am wrong, but STIR/SHAKEN is only available on the new stuff, and even then there is no good way to trace the origin of a phone call. This is beyond ridiculous, and clearly leadership is asleep at the wheel.

There needs to be a firm timeline -- maybe a year, maybe a decade, I don't know the details -- but something that allows customers to transition to a system where all calls can be traced through the network with a 100% guarantee.

Step zero is actually having a process/protocol where any phone is tamper-evident, meaning we can tell with 100% certainty that this call came from this operator, and the operator knows the call came from this user.

Perhaps the first phase allows individual users to opt in. So we would ask our operators to only route us calls and texts that positively identify themselves as fully traced with whatever new protocol replaces SS7/SIGTRAN, so the origin of a call or text is positively identified. If this guarantee is not available, route the call to a spam inbox somehow.

Then the hard part I'm guessing is fixing all the defects?

The second phase is to say after this date, no operator in the US is allowed to relay calls that are from legacy systems. This will likely take many years as I don't know how we will handle international calls and texts. But at some point we have to put our foot down and say enough is enough.
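For concreteness, the existing attempt at this is STIR/SHAKEN's PASSporT: a signed token (essentially a JWT, per RFC 8225) attached to the call. Here is a rough Python sketch of just the sign/verify shape, using PyJWT; real deployments involve carrier certificates chained to a policy authority and SIP Identity headers, so treat this as illustration only:

    import time
    import jwt  # pip install pyjwt cryptography
    from cryptography.hazmat.primitives.asymmetric import ec

    # Stand-in for the originating carrier's certificate key.
    carrier_key = ec.generate_private_key(ec.SECP256R1())

    passport = jwt.encode(
        {
            "attest": "A",                    # full attestation: carrier knows this caller
            "orig": {"tn": "12025550123"},    # asserted calling number
            "dest": {"tn": ["12025550199"]},  # called number
            "iat": int(time.time()),
        },
        carrier_key,
        algorithm="ES256",
    )

    # The terminating carrier verifies against the originator's public key;
    # a spoofed number would fail verification or carry weak attestation.
    claims = jwt.decode(passport, carrier_key.public_key(), algorithms=["ES256"])
    print(claims["attest"], claims["orig"]["tn"])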


> Step zero is actually having a process/protocol where any phone is tamper evident meaning we can tell 100% that this call came from this operator and the operator knows the call came from this user.

This basically doesn't work because the mapping between phone numbers, users and operators isn't exactly 1:1:1.

Some businesses have a single number that they use as Caller ID on all their calls, despite having corporate HQ in New York, a branch in New Orleans, and a customer support call center in New Delhi. All of these use different carriers and are based in different countries, yet they're all legally authorized to use that number.

If you want to read more about why this is such a hard problem to solve, see https://computer.rip/2023-08-07-STIRred-AND-SHAKEN.html


> ...yet they're all legally authorized to use that number.

But why? I get that they want a unified appearance, but as a phone subscriber I want to know if it's BigCo calling from New Delhi vs. BigCo calling from Chicago.


Amazing article about why phone spam is so much harder to fight than email spam.

Thank you for sharing it!

Now I need to learn SS7 signaling.


Finally, some sense. My first thought when reading the article was: why are we even allowing these companies to collect that data in the first place?


How would they bill customers and other providers for usage if they didn't keep call/text metadata?


These are records from 2022. The hack wasn't carried out the second the calls were made. You really need to keep the records that long to do your billing? That's absurd.


I don't think it is. I assume everyone gets hacked eventually. It's really hard (I would argue impossible) to make a 100% secure computer system, and if they're operated by people, you're terribly vulnerable.


You are more likely to be struck by lightning than to ever come into contact with terrorism.


Pish posh. They also sell that data at an incredible markup – and without the knowledge of their customers – to anyone who'll pay, including governments and their cutouts.


Why they hold it and how they protect it are valuable conversations. But their customers deserve something akin to security regardless of the why.


Spam mitigation and management is a huge bugaboo in wireless networks today.

The big three wireless carriers in the USA today formed a cartel called The Campaign Registry that seeks out TINs/EINs and the SSNs of the owners of Sole Proprietorships and LLCs as part of a lengthy approval process to be allowed to send texts.

It's a great extrajudicial rent-seeking machine that bans any SHAFT content (sex, hate, alcohol, tobacco, firearms, and anything tangentially related), along with hefty fines for anyone that they feel has crossed said boundaries.

Letting the morality police run amok on our Telecom networks here in the USA is happening, and they also want all the data they can get along with bribes from businesses.

Ajit Pai created the opening for this mess, and the current FCC has done nothing to clean this up (though given recent SCOTUS rulings, who knows if they ever had the authority...)


Tangent, but it's ridiculous that sex is in the same group of undesirables as firearms, alcohol, tobacco, and hate.


That T-Mobile is out here slapping spam-mitigation blocks on phone numbers that received SHAFT content from numbers on T-Mobile's own network is pretty ridiculous, but silently blocking with no appeal or escalation path is just how we let companies operate these days.


I've never heard of this, and cursory web searches don't seem to be turning up anything relevant (although that's admittedly not saying much with the state of search lately). Can you explain how the law requires this level of data retention?


Apparently they'd uploaded their customer data into something called Snowflake to do some kind of analysis on it, but it wasn't particularly well secured. They haven't said why they were analysing the data, but there's no indication that it had anything to do with government demands.


"legally required by the government" to keep securely. If you can't keep to the rules don't play the game. I'm sure any other telecom would be glad to get the market share.


That's a good point. Had they valued the citizens' privacy they would have done the opposite, that is make it illegal for network providers to store customer data that is not essential for them providing the services. But I guess creating a dystopian surveillance state is more of a priority.


Sure - pretty well every corporation you purchase a service from is required to store your credit card information as well. But there are stiff penalties from the government and credit card processors for unauthorized access to that information; consequently, it's rarely stolen.

Your address, cell metadata, phone number, email address, and passwords are leaked pretty well constantly, though.

It's not that corporations are incompetent. The laws and regulations mean it's not worth the cost to treat your personal information with any real respect.


> store your credit card information ... but there are stiff penalties from the government and credit card processors for unauthorized access to that information; consequently, it's rarely stolen

Citation: The Onion?

The Payment Card Industry Data Security Standard (PCI DSS) is the main information security standard that organizations processing credit or debit card information must abide by. The guidelines established in PCI DSS cover how to secure data-handling processes.

So here are the top 5 info breaches:

https://www.goanywhere.com/blog/the-5-biggest-pci-compliance...

To be fair, if what happened to Heartland happened more often, PCI compliance would be taken more seriously, and breached less often.


I'm not saying it doesn't happen. Credit card data is too valuable to never be stolen. I am saying that ~37 vs. >500 is a hell of a difference in how frequently things are stolen [0].

You pointed out that there are guidelines for holding that information; I'm saying there are consequences [1]. I'm following that up by saying that the consequences for mishandling customer information are not nearly as severe. They do not result in six-figure fines.

I'm saying the severe consequences to mishandling CC data have led to the incredible disparity shown in the first paragraph

[0] https://haveibeenpwned.com/PwnedWebsites

[1] https://resourcehub.bakermckenzie.com/en/resources/global-da...


Most places don't actually store or process anybody's credit card information anymore; all they have is a Stripe token, which is completely useless to a hacker.


The government isn’t distributing my data to everyone else (so far). For profit companies have a pretty massive list of breaches so far.


You are forced to give your personal data to the government. You don't have to give your data to any company. That's a huge difference.


Only if you cut all ties with civil society and live solitary.


Only dead fish go with the stream.


do yourself a favor and accept that phone records have never not been recorded and the data is mostly available for purchase. the company is to blame because they are complicit or negligent in the bespoke surveillance state, probably both.


welcome to a post 9/11 world. privacy has been dying for a long time. the general population doesn't care anymore. they freely give up everything to big tech anyways.


> how weird it is that you're fine with the government getting a record of all your phone activity

I don't like it, but accept it as the lesser evil. I'm from Europe and I believe the number of reported prevented terror attacks. The agencies need data access for that. Not good, but necessary.

But are you aware that Meta, Google, Apple, MS, etc. collect every kind of information about every user of Android, iPhone, or WhatsApp, Insta, Facebook, Windows? Phone manufacturers and huge apps like TikTok as well. The kind and size of that data is crazy beyond imagination. I don't care if the government can get access to my WhatsApp messages when some of the most irresponsible companies collect and use everything to their advantage. Are you really so naive as to think that Meta doesn't analyse their gigantic data lake, including billions of WhatsApp messages, to predict the results of elections? That is the real danger to democracy.


> I don't care if the government can get access to my WhatsApp messages when some of the most irresponsible companies, collect and use everything to their advantage.

This is all voluntary. You give those companies your data. You don't have to. I use grapheneos and do not use any of those socials, for example.


The problem comes as people start shoving more and more DRM around, whether it be Google Play Protect, the new Android WebView Media Integrity API, or an eventual reboot of the Web Environment Integrity proposal.


I understand, but I also won't participate in those and will actively work to undermine them.


Hurting the shareholder is the only option to actually fix anything. Until the C-suite and board are forced to face the music caused by rich people being parted from their money, they'll just continue patting themselves on the back and giving themselves bonuses.


If bankruptcy can clear liabilities then your suggestion won't help. The shareholders are usually gone by the time the bill comes due: it's often cheaper to go bankrupt. And there's a whole private equity industry revolving around taking dirty liabilities and slowly bankrupting a company to squeeze the last dollar out before shutting down.

Look at the same problem with environmental disasters created by corporations. The problem with security liabilities is similar: externalities are hard to get shareholders to pay for.


You don't need to try to extract value from the shareholders in a bankruptcy to hurt them. (Doing so would go against the rule of law, and as for changing the law, well, do you hear that giant sucking sound of funds fleeing your economy?) Just having their holdings' value go to zero is sufficient.


Maybe irrelevant for security flaws, but the point is that externalities can easily exceed market capitalisation. Trading while insolvent is illegal, but that is hard to judge with liabilities. Examples are Johnson&Johnson (public) and Purdue Pharma (private).


The C-suite and board are not the shareholders.

The shareholders are mostly the pension funds that will eventually pay your money and the banks that already do.


I agree.

Shareholders can vote and decide the direction of a company. They should also be held liable for any problems the company causes.

If the company is fined, it should come out of company and then shareholder pockets. I might even add that courts should be able to award damages by directly fining shareholders.

If a company does something severely illegal then very large shareholders should risk jail time.

It’s your company after all as a shareholder. You own it.

It's no different than if your dog bites someone or your child breaks the law. You have to pay the fines.


Under that twisted logic Israel would be perfectly justified with nuking Palestine. They voted for terrorists, therefore they should be liable for everything their country caused.


The people “whose negligence made this possible” are probably just rank-and-file employees. Careful what you wish for. I know I sure wouldn’t want to be legally liable if my software were vulnerable to something I didn’t know about.

Maybe a reasonable first step is third-party standards, audits, and certifications around data security to make privacy- and security-conscious consumers aware of what a company is doing. If consumers really find value in that, then they will preferentially deal with that company, and other companies will follow suit.


> The people “whose negligence made this possible” are probably just rank-and-file employees. Careful what you wish for. I know I sure wouldn’t want to be legally liable if my software were vulnerable to something I didn’t know about.

This isn't what's being suggested.

Higher ups set the incentive structures that result in dwindling security resources.

If their ass is on the line, they will actually listen to the developers and security experts telling them they are vulnerable, instead of brushing them off to divert resources that boost the reports which determine their bonuses.


I understand that isn’t what’s being suggested. What I’m suggesting is that there is perhaps a distortion of the common idea of who is “responsible” for something. I think the idea that fault bubbles up to the highest level in the chain of command is silly. Fault is distributed across the entire chain, and if we want to address this issue, we can’t ignore that.

To draw an analogy, if someone’s 16-year-old child is texting while driving and gets in a car accident, is their parent to blame? Most people could see that there is some fault on the part of both the parent (for perhaps not emphasizing enough the importance of safety while driving), and the child (for doing something they know is unsafe). And this fault exists in a continuum; maybe the parent told their child every day to not text while driving, and the child did it anyway. Maybe the parent never told them anything about safe driving habits, so the child had never considered that texting while driving was unsafe.

My point is that pretending that the highest C-suite executive is wholly responsible for everything that goes on in the company is extreme. Everyone along the entire chain of command has to do their part to ensure secure products are shipped - the executive needs to prioritize it, hire the right people to develop a plan, ensure people are enforcing the plan, etc., all the way down to the software engineers, the cleaning staff, etc. If one link in that chain breaks, the entire system fails, and it could be because of a weakness anywhere along the chain.


I agree with your view completely. There is nuance, and there should often be blame at multiple levels. At the same time, there is a basis for the common view, which is that higher ups create the incentive structures from which most things flow. If it turns out the incentives here were well made by the brass, I'd retract my jumped-to conclusion. But it rarely turns out that way, which is why I jumped to it.


> Higher ups set the incentive structures that result in dwindling security resources.

What if this isn't the problem at all? What if a company invests a huge amount in data security, but still gets owned? That happens all the time.

I don't understand why people leap to the conclusion that these events are inevitably the outcome of neglect.

> If their ass is on the line, they will actually listen to the developers and security experts telling them they are vulnerable, instead of brushing them off to divert resources that boost the reports which determine their bonuses.

Again, why are you making this assumption? But let's say, for the sake of argument, that you're right. Now we go implement some draconian, top-down "you must be secure or the C-suite goes to jail" mandate. Corporations, out of fear of liability and prosecution, lock up tight, and refuse any and all changes that might undermine their security posture. Nobody builds anything new, because why take a risk?

Expensive "security expert" consultants start appearing out of nowhere to help with "compliance" with the new rule, and companies pay for them -- because it provides a veil of responsibility for the company, even if the consultant is useless. Worse, a certain percentage of these "experts" will be hucksters (or more likely: morons) themselves, and will always tell people that "they are vulnerable", because that essentially ensures a payday. You can't prove that a system is "secure", so who can say otherwise?

If you doubt that any of this is plausible, I suggest you take a hard look at our existing top-down security rules (e.g. ISO 27000, HIPAA, GDPR, PCI DSS, NIST SP 800-88 and SOC 2, just to name a few) and the bureaucratic industrial complex that has erupted around them, and ask yourself if these things actually make you safer. I guarantee that AT&T was "compliant" by any conventional IT standard with these, employed an army of IT staff to document said compliance, and otherwise invested a huge amount of money in that kind of performative nonsense. Because that's what every company does.

But they still got owned.


If one breach exposed all of their data, they don't practice the well-known security (since ancient times) technique of never having all your goodies in one location.


The attack vector was an exposed Snowflake instance.

Snowflake's entire business model is based on selling the idea of "data lakes", "data warehouses", etc...

The basic premise of data lakes, etc., is to replicate and dump all your company data into easily queryable database instances, like Snowflake. I'm not disagreeing that this is a stupid thing to do, but just pointing out that this is something basically every Fortune 500 company is doing. Because big data is cool. (Or was cool.)

Specifically, since the article called out the lack of 2FA: I'm actually very surprised how difficult 2FA is to set up with Snowflake. It's been 2-3 years since I set up a Snowflake instance, but I remember there being no obvious or easy way to enable it. (I wanted it on, but at the time enabling it was a multi-hour task, not just a setting to flip.)
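(For reference, here's roughly what the client side looks like with the Python connector; the account and user names are placeholders, and I'm going from the connector's documented keyword arguments, so verify against the current Snowflake docs:)

    import snowflake.connector  # pip install snowflake-connector-python

    conn = snowflake.connector.connect(
        account="myorg-myaccount",   # placeholder account identifier
        user="analyst",              # placeholder user
        password="...",
        passcode="123456",           # one-time code from the Duo/MFA app
        # authenticator="username_password_mfa",  # optional: caches the MFA token
    )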


One password fail should never expose everything.

2fa is not the answer. The answer is compartmentalization. Just like a battleship is divided into many watertight compartments, because someone will poke a hole in it.

The Titanic needed multiple compartments to be breached before it was in danger of sinking (it was designed to stay afloat with its first four flooded); one hole was never supposed to doom the ship.
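(A toy sketch of the idea in code: give each compartment its own key, so one leaked credential exposes one slice rather than everything. The compartment names are invented:)

    from cryptography.fernet import Fernet  # pip install cryptography

    # One independent key per compartment; a breach of "east" leaves
    # "west" and "intl" data unreadable.
    compartments = {name: Fernet(Fernet.generate_key())
                    for name in ("east", "west", "intl")}

    def seal(compartment: str, record: bytes) -> bytes:
        """Encrypt a record under its compartment's key."""
        return compartments[compartment].encrypt(record)

    token = seal("east", b"call record: 555-0123 -> 555-0199")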


Yeah, security checkboxes don't necessarily result in good security. One option is to still make companies liable for security breaches, regardless of what meaningless checkboxes they may have checked, and then trust that they'll figure it out. Real liability would shift things from theater to weighing actual risks and costs.

Another option is we can empower red teams (security researchers) to test the security of all systems even without permission, so long as they report their findings responsibly.

It's currently quite convenient for companies. They get to deny security researchers from testing their security, and they also have no liability if a security breach does happen. Or, to make it personal, if I want to investigate the security of a company by trying to hack their system, I risk going to jail, but if they lose my data in a breach I have no recompense.


I'm saying that's the same thing. It's probably worse, actually, because imagine yourself at the head of a company the size of AT&T. What would you do -- what could you do? -- that would ensure that some random employee would never do something that makes you vulnerable to attack? How terrified would you be?

It's impossible to ensure what you're asking for. That's the problem with all of these kinds of rules, but worse, because at least something like SOC2 is providing a safe haven if you do the right things. Making companies "liable" for breaches is tantamount to saying that companies will never develop software again, because the risk is simply too great. Certainly, if I were in that kind of a situation, I'd rarely use a third-party service, and never use a startup, or a smaller company. I can't be responsible for the risks of AT&T, and every software company AT&T uses. That's crazy!

We're going to have to come to terms with the fact that "security" is a verb, not a noun, and that data leaks are going to happen, even in the best secured institutions. Punitive rules might improve security in the marginal case, but only at huge costs industry wide.


If a company the size of AT&T finds themselves unable to move or do anything without creating security vulnerabilities, then it's time for the company to stagnate and go out of business, leaving fertile ground for more competent companies to replace them.

It would be kind of nice if companies would say "we've grown to our level of competence, we cannot safely do more, so we will keep doing the same, no more, no less, and make sure we do it well, and we will allow innovation to come from other companies". Instead, they say "let's recklessly chase every fad and who cares about poor security, it's not our liability".


Yeah, that's some nice rhetoric, but...I guarantee that, right now, some part of your personal software stack has a security vulnerability. If you write software for a living, some piece of software you maintain has a critical vulnerability.

Do you want to be held personally responsible when they're breached? If your wireless access point is hacked because you waited too long to update it, and it is used to launch DoS attacks, do you want to be liable? Do you want to be held personally responsible when you click on the just-good-enough phishing attack in your corporate inbox?

If not, then consider why you'd ask the same thing from a corporation of tens of thousands of people.


> Do you want to be held personally responsible?

No, I don't. I don't want anyone to be held personally responsible.

> consider why you'd ask the same thing from a corporation

I'm not asking the same from companies. I don't consider putting liability on a company the same as putting liability on an individual, and neither do our laws. Companies may pay liabilities out of profits, companies may have to sell assets, companies may go out of business and people lose their jobs. None of that is the same as someone being personally liable.


> If your wireless access point is hacked because you waited too long to update it, and it is used to launch DoS attacks, do you want to be liable? Do you want to be held personally responsible when you click on the just-good-enough phishing attack in your corporate inbox?

This is a strawman; corporations are supposed to have a process in place to make sure stuff is up to date. You don't jail some random rank-and-file guy for a huge breach.


> Making companies "liable" for breaches is tantamount to saying that companies will never develop software again, because the risk is simply too great.

Making humans liable for car crashes is tantamount to saying that humans will never drive again, because the risk is simply too great.

Replace with any complex activity - nuclear reactor development, aircraft, etc.

How is it that, in your head, data breaches are this special human activity where no one should ever be held accountable?


> I don't understand why people leap to the conclusion that these events are inevitably the outcome of neglect.

Because that's what happens 90% of the time.

In most cases I've seen, there are zero people on the team who could describe themselves as having any kind of expertise in security. Developers explicitly know about at least several vulnerabilities, but management doesn't care to allocate resources to fix them, etc. That's what's happening in most shops.


This reminds me of the story where someone accidentally deletes the database and there are no backups. Who's at fault? The individual IT employee who made a mistake, or the entire organization (especially leaders) who created a situation where one person could delete the database and there are no backups?


There is a whole field devoted to this called governance.


You can safely assume that every company or org that fell to a ransomware campaign didn't have proper backups, because a successful restore wouldn't have made the news as a serious outage.

The percentage without backups seems to be crazy. I've only read about the central bank of Zambia being able to restore from backups; everyone else was down. All those responsible should be fired.


I’m baffled that anyone is even asking the question..

Anyone reading this, if you are of the “well the employee whole typed the command is to blame!” opinion, could you please reply to this comment? I need to know what you think the purpose of a hierarchy is in the workplace.

Needless to say, responsibility for your direct reports is yours. If they fuck up, you fucked up. You have the choice to hire and fire at will. You choose who has access to take chances. You own the wins and the losses. If you're a good leader, you redistribute the wins and dissolve the losses. It's the entire job.

It’s 2024. There are no kings or dictators in the workplace.


It's a rhetorical question that's effective because the answer is obvious.


You would think so, but one time an undergraduate IT guy in my school's computer lab essentially ran an `rm -rf` on all the students' home directories two weeks from the end of the semester. It turned out the lab's backups weren't working. The email from the department was pretty quick to throw that kid under the bus.


Are you trying to say that a university IT department was a toxic workplace? I'm shocked, shocked I tell you!


My read is that the responsible people are corporate officers and executives--people who actually choose what to work on and are substantially rewarded by the corporation.


1. Absolute carelessness with customer data.

2. Little to no consequences for the executives.

3. Lawlessness around such events. Very poor consumer protection laws in this country.

4. Cybersecurity-illiterate leadership making cybersecurity decisions.

5. Investing as little in cybersecurity as possible, to meet only bare-minimum standards.

6. Or all of the above?


this is already an established principle in other engineering fields. If a civil engineer screws up and a building collapses, both that engineer and the engineering firm are liable.

Why should the software industry be any different?


When I was working in a (non-software) engineering role, when I raised a technical concern it was taken seriously. As a software engineer, when I raise a technical concern it is brushed off, and if I push it then my job is at risk.


because software developers aren't engineers? -- elephant in the room.


Some call themselves Hackers, they love to bypass processes.

And some call themselves code monkeys; they know how to follow orders, but have no incentive at all to think for themselves about proper security.

Only a tiny fraction call themselves engineers.

I favor non-licensed free professions, but if you're free, you should be able to follow best practices and think for yourself.


Huh, that's strange, because I have a BSE and graduated from engineering school. Sure, the history-major bootcamp grads aren't real engineers and we need to weed them out of the industry, but there are some of us who are actually real engineers.


I think the issue isn't so much the programmers who aren't engineers as it is the managers who don't treat programmers like engineers.


AT&T bought back a ton of shares of its own stock in March. It's likely that shareholders won't feel the effect of this security breach because of those buybacks (over a medium term time window).

How about instead of even more meaningless standards without teeth that don't affect the people pushing for profits over essentials like security, regulators impose punishments that actually affect the investors that ultimately create these perverse incentives in the first place? Nobody should be profiting off of a company that does wrong by over a hundred million people.


Direct liability to the front line / middle management which is cleared in exchange for defined levels of cooperation with criminal, regulatory, and civil investigations aimed at landing higher-ups would be a useful development.


Nonsense. The people who should hold responsibility are the people who have decision-making power and derive financial benefit from these choices. A rank-and-file employee is a scapegoat given the incentives at play in the system, even if they nominally wrote the vulnerable code.


No, the people whose name is attached to budget decisions and higher level company direction that leads to this are the ones who are responsible.


The law that would have prevented this breach would be one making it illegal for telcos to sell customer data. The reason AT&T was feeding ALL the data to Snowflake was to sell their customers' location and social graph to marketers. It is unconscionable to me that this is not currently the law.


Do you have a source for that claim?



Thanks!


Here's Snowflake bragging about helping telcos sell location data: https://www.snowflake.com/blog/telecom-data-partnerships/


So if I buy a car with an advertised top speed of 200 mph, it's given that I must be violating speed limits when driving it?


Imagine a world where suffering a data breach meant you could no longer collect, let alone hold or sell that class of data for a decade, and this rule preempted laws that required data gathering.

AT&T would be nearly equivalent to an E2E service overnight.

The lines wouldn't be encrypted, so the NSA would still tap them, but at least there would be zero mutable storage in the AT&T data centers (except boot drives, SMS message queues, and a mapping between authorized SIMs and phone numbers).

In this day and age, why do they even maintain call records? They don’t need them for billing purposes, which was the original purpose of keeping them.


Genuinely chonky fines seem to be the answer to this problem, as they align incentives with rewards/penalties (if you're lax about how your company approaches user data, then you'll be at financial risk).

Piercing the veil to prosecute those “responsible” seems like it would just incentivise the business to carry on as normal but with employees that are contractually designated (i.e. forced) to be fall guys if anything goes wrong.


If PG&E has taught us anything, it's that utility companies can literally blow up and burn down cities, and no amount of fines or paying for the damages done will matter to them.

Monopolies can always just pass the cost of the fine to their customers.


Penalties would also incentivise businesses to hide data breaches.


That is the worst-case outcome of penalties, and it carries significant risk of whistleblowing. The default case will be compliance, because compliance is simply a cost of business, something businesses understand well.

Meanwhile, currently businesses are doing shit all about data breaches except handing out the absolutely useless "2 years identity monitoring", so from a consumer view it really can't get much worse.

In general, the idea that penalties make people hide their bad behavior, so we shouldn't penalize bad behavior, is just extremely misguided. Because without penalties, we normalize bad behavior.


Are strong whistleblower protections what’s needed to balance this?

As an Australian I am absolutely horrified that we continue to put people in jail who have blown the whistle on the government here, and it makes me think that large organisations are absolutely terrified about strong whistleblowing protections.

This all suggests to me that whistleblower laws would be very effective.


Whistleblower is a very revealing thing to call Mr. Assange.


David McBride and Richard Boyle. Both tried the official channels then whistleblower channels. Both made some mistakes but all in the public interest. Aussie gov treated them shamefully.


Witness K and Bernard Collaery came to mind when I was writing it. They blew the whistle on illegal espionage used to pillage the resources of our tiny neighbour, and the government threw the book at them. Absolutely shameful.


I understand that Wikileaks is controversial but I don't think there is any dispute that he has acted in the role of whistleblower to some extent. But that's not really the point I'm trying to make, so I've removed the reference.


I think I'd argue for a sui generis classification, which does partake somewhat of the whistleblower, but it seems like calling Napoleon a general. He was certainly that, at times. Apologies for the nit-picking in any case.


Another example would be David McBride who was in the Australian military and blew the whistle on war crimes. He recently got sentenced to jail while actual exposed war criminals are free.


Make laws that protect whistleblowers from civil and legal penalties, punish those who attempt to illegally hide data breaches, including jail time in the worst cases. That would solve it. Individual employees don't care enough to hide it (they just work there), and leadership wouldn't dare risk a whistleblower which would cause them to face criminal penalties.


So you make it a crime to hide the existence of a data breach for more than X amount of time, where X is the window allowed for figuring out exactly what happened. I don't know off the top of my head how long X should be. 30 days? 60?


Sounds like a recipe for willful ignorance. Why put any effort into checking for data breaches if it would only hurt you?


Which should result in even larger penalties, hopefully those penalties can also be levied against the individuals that were associated with hiding the data breaches. Mid level manager that gets an email from Snowflake saying that there's been unusual activity who then hides that information or doesn't look into it? Fine 'em (and AT&T). Mid level manager tells a random engineer that DOES look into it and finds that they've been hacked but hides it? Fine AT&T and this person even more!


This appears to be an argument against law itself.


GDPR has fines for data breaches


Nothing happened to Experian, and those clowns have breaches every year. The USA has so far proved that we don't care about privacy and don't believe data is real.


> don't believe data is real.

Oh, but they do. Try taking some data that belongs to a corporation and see how quickly law enforcement responds. Aaron Swartz found out the hard way.

It’s only when you steal personal data that nobody cares.


The AT&T app and website are so bad it takes way longer than 1 minute to log in to e.g. pay your bill. The United States needs to raise the bar for large-cap negligent operators and fine the company enough to make shareholders listen.


In approximately 100% of cases, if your intuition is to say "this company is too large and should be fined/regulated more," what you should actually say is "this company is too large and should be broken into many smaller entities."


Or nationalize parts of it, as has been done for electricity, water, and the courts.


I understand the desire to push for this, but I also know first hand it would make things worse, specifically around competency. I've had countless calls and meetings with state and federal agencies that could not grasp even the simplest of technical issues, and this was with the very people charged with the responsibility for their systems. On the state level, explaining to the California DMV repeatedly that they may not use RFC 1918 address space in public MX records and expect emails and faxes to get through. That was an actual battle. Or arguing and escalating with 3-letter federal agencies that we will not "install their server certs" on our tens of thousands of servers and they must install the intermediate certs correctly. I wish I could share who that was because nobody would believe me...

There are countless battles I've had with these agencies. I do not want more of these people running critical and sensitive systems. It's bad enough that leaders in companies like AT&T bend over backwards to just hand over data to them. I've had to hand over the data, looking the other way, giving unfettered, unlimited, unmonitored access to mainframes without warrants. This was at a company that was gobbled up by AT&T. Or being told to let a scammer with access to an SS7 link scam infinite people because they are paying for the link. Governments running these systems would be the wolves running the hen-house.
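To make the RFC 1918 point concrete, here's a minimal sketch of the check those agencies couldn't grasp, assuming the third-party dnspython package (the domain is a placeholder). Addresses in 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 are unroutable on the public internet, so mail aimed at such an MX host simply never arrives:

    import ipaddress

    import dns.resolver  # third-party: dnspython

    def private_mx_hosts(domain: str) -> list[str]:
        """Return MX hosts for `domain` that resolve to private address space."""
        offenders = []
        for mx in dns.resolver.resolve(domain, "MX"):
            host = str(mx.exchange).rstrip(".")
            for a in dns.resolver.resolve(host, "A"):
                # is_private covers the RFC 1918 ranges (plus loopback and
                # link-local), none of which belong behind a public MX record.
                if ipaddress.ip_address(a.address).is_private:
                    offenders.append(f"{host} -> {a.address}")
        return offenders

    print(private_mx_hosts("example.com"))  # hypothetical domain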


We should break up AT&T. Oh wait. We tried that already and it re-consolidated? Ow.


Part of breaking them up is supposed to be not letting them re-consolidate. Mergers involving any entity that already has 15% market share should just be flatly disallowed.


This is not the AT&T Judge Harold Greene broke up. This AT&T is a roll-up of most of the RBOCs the breakup created.


Yes, a man never steps in the same river twice.

Not really the point though, is it.


The correct way is to follow what every other engineering and trade profession (medicine/law) already follows.

Some software engineers are licensed. A company must hire these licensed software engineers, and any changes to what data is saved or how it is saved must be signed off by these engineers. If a breach occurs, an investigation occurs, and if the licensed software engineers are found to be negligent, they lose their license. If they are found to be at fault, they get criminal penalties.

This, of course, must be coupled with penalties for management personnel as well.


This kind of system has consistently led to regulatory capture by the licensed industry. Even the mechanism of operation de facto assumes a significant gatekeeping barrier to getting a license, since otherwise companies would just pick the one most willing to cut corners to save costs, or pay the license fee to get greenhorns certified because that costs less than adding two years to the development schedule to do it well. Making everything cost quadratically more than it already does is not a good solution.

What you want here is for them not to be holding the data to begin with. The solution to which is to just let customers sue them. Not for $0.30 and "free credit monitoring" but for actual money. Then companies can choose whether they want to mitigate their risk by doing actual security or by not storing the data to begin with, but most likely the second one is their better option.


> This kind of system has consistently led to regulatory capture by the licensed industry.

That is indeed the intention. To counteract the financial incentives of shareholders (which result in bridges collapsing or data breaches) with the financial and legal incentives of a special class of employees - licensed engineers.

The reason this works better than letting people sue after the accident has already happened [1] is that it gets the incentives right. In the sue-after model, the responsibility for making the product safe before an accident happens is quite diffuse across the whole organization, and the decision makers (C-suite) do not in fact have the expertise to determine if the product is unsafe.

Giving licensed engineers veto powers over the entire C-suite and the shareholders is indeed how you concentrate responsibility at a single point. This type of licensing model has worked wonders in civil engineering, electronics engineering, law, medicine etc in improving safety standards for the public. Software engineering is not special.

[1] Think of letting the victims of a bridge collapse sue as the only method of preventing bridge collapses. This is not how things operate.


> To counteract the financial incentives of shareholders (which result in bridges collapsing or data breaches) with the financial and legal incentives of a special class of employees - licensed engineers.

But now you have a special class of employees whose incentives are wrong in the opposite direction. They make decisions that are overly conservative, because they lose their license if the bridge collapses but by design no one can overrule them if they unnecessarily make the bridge cost four times as much.

This not only makes the bridge cost many times more, it thwarts the original intention because now building new things is so expensive that we avoid doing it and instead continue to use the old things that are grandfathered in or maintained well past the end of their design life, which is even less safe in addition to being less efficient. This is why so much of our infrastructure is crumbling -- we made it prohibitively expensive to build new.

> This type of licensing model has worked wonders in civil engineering, electronics engineering, law, medicine etc in improving safety standards for the public.

And these things are now unaffordable as a result. Ordinary people have been priced out of legal representation and are being bankrupted by medical bills. It's not a solution, it's just a new problem.

> Think letting the victims of the bridge collapse suing as the only method of preventing bridge collapses. This is not how things operate.

The reason this doesn't work in that specific case is that the damage from a bridge collapse can easily exceed the entire value of the bridge-building company, so then if you go to sue them they just file bankruptcy. Which they know ahead of time and then don't have the right incentives to prevent the damage. That hardly applies to the likes of AT&T, which is not going to be bankrupted by a large damages award, but is going to want to avoid paying it out.

> In sue-after model the responsibility before an accident has happened to make the product safe is quite diffuse across the whole organization, and the decision makers (C-suite) do not in fact have the expertise to determine if the product is unsafe.

Neither are they expected to. They're expected to hire someone who does, but then they have the incentive to balance the cost against the harm, so they neither end up with the incentive to abandon quality nor the incentive to make everything prohibitively expensive.

A real issue here is limited liability. The CEO comes in, hires low quality workers or puts them under unreasonable time constraints, gets a bonus for cutting costs and is then at another company by the time the lawsuit comes. Forget about licensing, make them personally liable for what happened under their watch (regardless of whether they still work there) and you'll get a different result.

Limited liability should be for shareholders, not decisionmakers.

That way the same party suffers both in the case of unreasonably high costs and in the case of unreasonably low quality and doesn't have a perverse incentive to excessively sacrifice one for the other.


>But now you have a special class of employees whose incentives are wrong in the opposite direction. They make decisions that are overly conservative, because they lose their license if the bridge collapses but by design no one can overrule them if they unnecessarily make the bridge cost four times as much.

This is not a bug. Having fewer bridges, none of which collapse, is better than having one fall over every day, which is what's happening with data leaks now.


It's a bug.

We now have < 10 megabanks in the US, any of which can bring down the entire US economy.

Instead, we could have 1000s of smaller banks. Tons of smaller banks is the natural state of things, like restaurants. This was true before the banking cartel, TARP, ZIRP, and most recently, PPP (a genius backdoor to bail out Wall St.). In such a system, any one collapsing bank won't bring the entire system down.

Having fewer bridges means that, inevitably, when they collapse there will be far more victims and the event will be catastrophic.

Tech is one of the few bright spots in our moribund economy. Don't introduce a cartel that will blow up eventually.


>Having fewer bridges means that, inevitably, when they collapse there will be far more victims and the event will be catastrophic.

I honestly don't even know where to start with this.


It isn't safer to make building new bridges prohibitively expensive, because the result is that new bridges don't get built and then existing bridges are overused and extended beyond their design lifetime. And they're carrying several times more traffic when they ultimately fail.

It's the same for all the rest of it. You're not helping people to nominally make something better unless the better thing is actually available to them.


No, because making bridges prohibitively expensive means you are mono-culturing engineering.

You are only succeeding at keeping 1 engineering firm alive, who can afford to bid and build mega-expensive projects.

Eventually, the megafirm will adopt poor practices. And now, those practices will literally spread out across every single bridge built in the world. You now have a mono-culture of engineering that includes cancer as part of its DNA. Congratulations - you have granted a monopoly to a firm that sells ticking time bombs to your own citizens

This is, in essence, NASA, banking, Fannie/Freddie.

Errors are a part of nature. They must happen. We are humans and fallible. The question is, when errors do happen, how big and hurtful will they be? Small or big?

You can't buy your way out of human error and hubris. This is the fatal conceit.


It's a bug. You can't make everything cost more without bound or ordinary people can no longer afford to make rent. There has to be balance.


You can't make houses cheap without bound either; you turn them into death traps quite quickly.

Everything related to personal data is currently at the slum-without-fire-codes level. But it also has a few unregulated nuclear reactors in the mix.


This is the excuse used to justify the regulatory capture. There is a mile of difference between simply having fire exits vs. minimum parking requirements, de jure or de facto minimum unit sizes and density constraints. You need something that can distinguish these things, not something that provides the trash choice between none of them or all of them together.


Using regulatory capture as an excuse why we can't stop babies from eating lead is the most brain dead take from the American left since they replaced class with race.


I'm not American but isn't a fetish for deregulation a hallmark of your political right, not the left?


Up until recently I agreed with this position because I, like you, thought that this was how licensed engineering disciplines worked. I thought that if you sign off on something you put your career on the line, making the potential penalty for signing off on bad designs worse than the one for saying no to a pushy boss.

Then the MAX crashes happened and Boeing is about to negotiate a sweetheart plea deal and there's absolutely zero talk of any of the engineering licenses that were used to sign off on the bad systems getting revoked.

If the licensing system doesn't actually include a threat of career-ending penalties for knowingly signing off on bad designs, or if the system allows executives to bypass engineer signatures, then it seems like the general consensus on here is right: it's useless overhead at best and regulatory capture at worst.


Wait, you're saying the software engineers behind the MAX 8 debacle were licensed? What licenses?


If AT&T had spent more on security, this would not have happened. I absolutely do not believe individual engineers should be held liable.


The way this works in civil engineering is that the engineer refuses to sign off on an unsafe design. If costs have to increase to address the issue, then they do. If management doesn't budge, then they bleed money while twiddling their thumbs staring at an unapproved design.


Be careful what you wish for… civil engineering is a terrible awful bureaucratic profession.

The crowd here on HN tends to make fun of governments and banks and similar regulated entities… but smug startup culture would not exist if you got what you say you want.


To be fair, AT&T, Equifax, United Health, and Peraton are probably as far away from startup culture as it gets.


"Move fast and break things" isn't an appropriate philosophy for critical public infrastructure


how do you know? maybe they were spending too much on security, but it was going to useless or counterproductive measures like crowdstrike, compliance training, or virus scanners. money is no substitute for competence, as steve jobs's death shows


If you're going to do that, you're going to need to get universities to treat computers as an actual applied discipline. Physical engineers at least get some practice working with numbers around real materials.

I've met too many recent university graduates who don't even know you need to sanitize database inputs. Which is not their fault, but the university system as it currently exists in relation to software is not set up to do the thing you're asking.
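To make that concrete, here's a minimal sketch of what "sanitize database inputs" means in practice, using Python's stdlib sqlite3 (the table and payload are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

    user_input = "nobody' OR '1'='1"  # a classic injection payload

    # Vulnerable: string interpolation lets the payload rewrite the query,
    # so the OR clause matches every row in the table.
    rows = conn.execute(
        f"SELECT email FROM users WHERE name = '{user_input}'"
    ).fetchall()
    print(len(rows))  # 1 -- leaked the whole table

    # Safe: the placeholder passes the payload as data, not SQL.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(len(rows))  # 0 -- no such user

The graduates I'm describing write the first version without realizing anything is wrong with it.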

The alternative is to have a really long exam (or a series of them like actuaries do?). Here are 10 random architectures. Describe the security flaws of each and what you would change to mitigate them.

The other change that needs to be made, is that engineers need to be able to describe the bounds of their software. This happens in the other engineering disciplines. A civil engineer can design a bridge with weight capacity X, maybe a pedestrian bridge. If someone builds it and drives semi-trucks over it, that's kinda their problem (and liability).

We would need some sort of way to say "this code is rated for use on an internal network or local only" and, given that rating, hooking it up to the open internet would be legally hazardous.


I actually agree with you but this is a dangerous opinion to express on this forum, where move fast and break things is seen as the one true path.


I am not a historian, but I expect there would have been significant pushback as well by other types of engineers back in the day when their profession was regulated.

It's not surprising. But what should not be surprising is that sooner or later, software engineering will be regulated [1]. The question is simply whether software engineers will let politicians do it to them in an unreasonable way, or whether they do it themselves in a more reasonable way.

[1] Well, it has already begun. The EU has the notion of the GDPR Data Protection Officer: https://www.gdpreu.org/the-regulation/key-concepts/data-prot...


Nothing stops companies or individuals from getting audits or from developing a voluntary license/certification. Consumers that want the added protection can pay the premium. But to force an entire industry into regulatory capture where it's unnecessary seems foolish.


Privacy/protection of personal data is slowly being recognized as a Right across the world, as it should.

The standard legal philosophy across the world is that you can't actually predicate protection of a right on ability to pay (under reasonable limits). So, for example, nobody gets to build unsafe bridges and charge less for it, because it violates the right to life.


Are there any other analogies around 'endangering'? Because that's what happens when this info leaks to criminals.


You want a P.Eng (or equivalent) to sign off on anything that involves data? That won’t solve the problem but will dramatically slow down the pace of innovation. And all the while, it will funnel money further into regulated professions instead of into actually securing software.

This is precisely how we end up in a world where we’re all running twenty five year old software.


> This is precisely how we end up in a world where we’re all running twenty five year old software.

Linux?


Are you claiming that Linus Torvalds is a P.Eng (or equivalent)? He doesn't have one, so that's a very poor comparison. As for Linux, it has changed constantly over those 25 years, so that's not a coherent argument either.


Where do you draw the line? Does that mean you need a license to write Excel formulas?


The license is only for protection of user personal data - names, dob, address, id documents data, credit card data etc, and not, say, how many upvotes you have on HN. The vast majority of sites and software do not need to store any of this data. And the vast majority of code that is written has nothing to do with user personal data.

The larger legal change that has to happen is:

1. Do not store user personal data if you don't have to (EU already has laws about it)

2. If you store user personal data, you have to guarantee up front that it is stored and processed in a safe way (what I am suggesting). Of course, exceptions can be made for sites/software with a small number of users, or some time-bound leeway can be given, so startups can grow before having to hire a licensed engineer.


Who is ultimately responsible, though, when data is stolen in this fashion? The analyst who ETL'd this to Snowflake without MFA enabled? Or maybe the employee who inadvertently installed a data sniffer that captured usernames and passwords? Do you really want to send your coworkers to jail for falling for a phishing attack?

If you want corporate-death-sentence level fines, are you willing to work in an environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your workstation without a formal security review?

Why not advocate for more resources to capture and try the actual criminals? Or, as elsewhere in this thread, simply make this kind of data collection illegal?


> If you want corporate-death-sentence level fines, are you willing to work in an environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your workstation without a formal security review?

If it means that privacy and safety is actually respected then yes. Working in an environment with "exceedingly strict" regulatory oversight would be a reassurance that observed violations will be dealt with in a timely fashion instead of put in the backlog and never addressed.

> Why not advocate for more resources to capture and try the actual criminals?

Yes, why not? While we're at it, let's try to capture the easily-spotted criminals who perform the most trivial of attacks on servers. Just open up your SSH server logs and start going after and preventing the fecktons of log spam that hide real attacks.
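Even a naive skim of those logs surfaces the worst offenders. A rough sketch, assuming a Debian-style /var/log/auth.log and the stock sshd message format (which varies by distro and version):

    import re
    from collections import Counter

    # sshd logs lines like:
    #   "Failed password for invalid user admin from 203.0.113.9 port 55555 ssh2"
    FAILED = re.compile(r"Failed password for .* from (\S+) port")

    attempts = Counter()
    with open("/var/log/auth.log") as log:
        for line in log:
            m = FAILED.search(line)
            if m:
                attempts[m.group(1)] += 1

    # A handful of IPs usually account for most of the spam.
    for ip, count in attempts.most_common(10):
        print(f"{ip}\t{count}")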

> Or, as elsewhere in this thread, simply make this kind of data collection illegal?

Making something illegal is great! Unfortunately, it doesn't really do anything to help people after the data has been stolen a second time (the first time being by AT&T, if collecting it were illegal).


If the data collection becomes illegal, what's the penalty for breaking that law? We're back to figuring out an appropriate punishment.


AT&T is up there with defense contractors in how intertwined their businesses are with the DoD. They're basically an extension of the intelligence agencies here in the US. They don't have consequences, much like Boeing.


Personal data cannot be secured. The only way is to not store it. That will (in their imagination) cost companies in lost revenue for being unable to mine and sell it. Only government can make laws against a company taking your personal information and selling it. Even passwords shouldn't be stored by a company.

The years of lost time argument is disingenuous. Over that number of people, 209 years of lost time from 700 million years of lives is nothing.


There are lots of companies that take security seriously and don't lose their customers' data. Which is good, because there are companies that need to hold customer data.

Companies that don't take security seriously and lose people's data should be punished accordingly.

Companies that sell customers' data should be identified.

But if we treat them all the same, then we let the bad companies off the hook, and punish the responsible companies unfairly.


there are companies that have already had their customers' data exfiltrated and will have it exfiltrated in the future, companies that will only have it exfiltrated in the future, and companies that are about to be dissolved. there is no fourth category. computer security is not currently achievable; the best we can hope for is to contain the damage from the inevitable breaches and reduce their frequency

new security holes get introduced faster than old ones get patched, and that will remain true for the foreseeable future


I’d take it a step further. If a technology is impossible to secure it shouldn’t be used. Maybe it’s time to rethink all the parts of our lives we’ve handed over to software.


What current technologies do you believe are possible to secure?

I am sympathetic to the overall sentiment here, but between any web browser + server stack you are looking at hundreds of millions of lines of code written in unsafe languages.

Add on the human factor and there is just no hope of really securing this.


sel4, tweetnacl on an avr, pdf/a, html3, gzip, lwip, etc., running on purpose-built hardware. too bad it's not self-hosting yet


Whether or not it's disingenuous, it's our time that wouldn't have needed to be wasted in the first place had they not stored phone records.


I agree with that. I just don't like big numbers being used to cause emotional responses without proper context. Probably on a spectrum, but it's my beef :)


That's quite a CPNI incident. Wonder what their fine will be. [0]

[0] https://www.tlp.law/2023/08/01/fcc-proposes-20-million-fine-....


Alternatively, we need sharper teeth around the consequences of this data breach.

Why are we using SMS for 2FA everywhere? Why does AT&T have to have residential addresses and KYC for all of its customers? These are the things that should be banned. The government official that mandated all this crap should be forced to sleep with scorpions for 9 years and stink bugs for 3 more years.

If so, the leak would be of much less consequence.


Exactly. There is currently no meaningful penalty when a company fails to protect private data or violates its own privacy policies, so of course they continue to do these things because each either makes them more money or costs them less money.

Prison time being on the table for officers of the corporation is the only thing that will change this behavior.


But hey, in 5-7 years there will be a settlement to the inevitable class action lawsuit and each of these customers (that fills in a form, ensuring only a small fraction actually do) gets a $3.75 credit on their next bill. The lawyers will get 30% of the settlement and each walk away with several million dollars. Justice! chef’s kiss


If we go with the logic of the grandparent comment, where we can measure the harm by adding up a minute of wasted time across millions of people to get a big amortized number, it seems commensurate that each of those people can be compensated for their minute of wasted time with a few dollars.


Idk man, the lawyers who made the rules say it's a great system.

Like, it might be an unending atrocity beyond all human comprehension, but, $666/hr soothes a lot of conscience and quiets a lot of tongues.


This is from an email I got yesterday from PayPal:

"Google Referrer Header Privacy Settlement has sent you $0.11 USD."


This is deeply accurate


If you're going to start holding companies accountable for wasting people's time then AT&T has a lot more to answer for than this one little event.


Everyone says what needs to happen. Every thread has this same exact post. We all know what needs to happen. How _would_ this ever happen? This is a board of innovators -- innovate!


No one here can force AT&T to spend more money on IT. If they do, even briefly, everyone involved will be laid off and outsourced within a few years.


we all know this does not need to happen, if 'we' are people familiar with the quality of software in already-regulated environments


Yeah, you're right. Data breaches are essentially just slaps on the wrist to companies like AT&T. Maybe it's possible to fine them based on the proportion of the userbase that was affected and the profits they generated for a certain time period.

I wonder if this will push companies to stop using external vendors to store and process data. If companies stored all of their info in house, it would prevent the case where compromising one vendor compromises everyone's data. But it would also mean that each individual company needs to do a good job securing their data, which seems like a tall ask.


The reason some companies use external vendors is to outsource the risk.


I propose that the fines should be based on what the data would be sold for on a dark web forum. These breaches should be exponentially more expensive, which would incentivize companies to retain less sensitive data.
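As a toy illustration of that fee schedule (every number below is invented):

    # Anchor the fine to the black-market value of the records, then
    # escalate exponentially with each repeat breach.
    def breach_fine(records: int, price_per_record: float,
                    prior_breaches: int, multiplier: float = 10.0) -> float:
        market_value = records * price_per_record
        return market_value * multiplier ** prior_breaches

    # 110M records at a hypothetical $1 apiece: $110M for a first breach,
    # $1.1B for a second, $11B for a third. At that point retaining the
    # data at all becomes the bigger risk.
    for n in range(3):
        print(f"breach #{n + 1}: ${breach_fine(110_000_000, 1.0, n):,.0f}")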


33% of all living americans? how can it be that much?


There are basically 3 carriers in the US, AT&T, T-Mobile, and Verizon; other carriers use the networks of those 3.


Recount


The breach here was not against AT&T but against a cloud computing company called Snowflake.

Cloud computing companies, so-called "tech" companies, and the people who work for them, including many HN commenters, advise the public to store data "in the cloud". They encourage the public, whether companies or individuals, to store their data on someone else's computer that is connected to the open internet 24/7 instead of their own, nevermind offline storage media.

Countless times in HN threads readers are assured by commenters that storing data on someone else's computer is a good idea because "cloud" and "_____ as a service". Silicon Valley VC marketing BS.

"Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible."

Piercing the veil refers to piercing limited liability, i.e., financial liability. Piercing the veil for crimes is relatively rare. Contract or tort claims are the most common causes of action where it is permitted.

There is generally no such thing as "criminal negligence" under US law. Negligence is generally a tort.

As for fines, if there were a statute imposing them, how high would these need to be to make Amazon, Google, Microsoft or Apple employees and shareholders face "real consequences"?

Is it negligent for AT&T to decide to give data to a cloud computing company such as Snowflake? HN commenters will relentlessly claim that storing data on someone else's computers that are online 24/7 as a "service", so-called cloud computing, is a sensible choice.

Data centers are an environmental hazard in a time when the environment is becoming less habitable, they are grossly diminishing supplies of clean water when it is becoming scarce, and these so-called "tech" companies are building them anyway.

Data centers are needed so the world can have more data breaches. Enjoy.


>The breach here was not against AT&T but against a cloud computing company called Snowflake.

It wasn't really a Snowflake breach; if it's like the other Snowflake data leaks, AT&T didn't set up MFA for a privileged account and someone got in with a password compromised by other means. For smaller companies I'd be willing to put more blame on Snowflake for not requiring MFA, but AT&T is large enough to have their own security team that should know what they are doing.

This is yet another wakeup call for all companies - passwords are not secure by themselves because there are so many ways for passwords to be leaked. Even though SMS MFA is weak, it's far better than a password alone.
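For anyone who hasn't implemented it, the second factor is a tiny amount of code. A minimal sketch using the third-party pyotp library (enrollment is simplified here; real deployments provision the secret via QR code and store it server-side):

    import pyotp  # third-party TOTP library

    # Enrollment: generate a per-user secret and share it once with the
    # user's authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Login: a stolen password alone is no longer enough; the attacker
    # also needs the rotating 6-digit code from the user's device.
    code = totp.now()  # in real life the user types this from their phone
    assert totp.verify(code)          # current code: accepted
    assert not totp.verify("000000")  # a guess: rejected (barring a
                                      # one-in-a-million collision)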


If it helps to understand the comment, change the word "breach" to "unintended redistribution of data".

The comment is about the risk created by transferring data to a third party for online storage.

It is not about the specific details of how data is obtained by unauthorised recipients from the third party.

The act of storing data with third parties who keep it online 24/7 creates risk.

Obviously, the third parties will claim there is no risk as long as ["security"] is followed.

If we have a historical record that shows there will always be some deficiency in following ["security"], for whatever reasons,^1 then we can conclude that using the third parties inherently creates risk.

1. HN commenters who focus on the reasons are missing the point of the comment or trying to change the subject.

If customer X gives data to party A because A needs the data to perform what customer has contracted A to do, and then party A gives the data to party B, now customer X needs to worry about both A _and_ B following ["security"]. X should only need to trust A but now X needs to trust B, too. If the data is further transferred to third parties C and D, then there is even more risk. Only A needs the data to perform its obligation to customer X. B, C and D have no obligations to X. To be sure, X may not even know that B, C and D have X's data.

A good analogy is a non-disclosure agreement. If it allows the recipient to share the information with third parties, then the disclosing party needs to be concerned about whether the recipient has a suitable NDA with each third party and will enforce it. Maybe the disclosing party prohibits such sharing or requires that the recipient obtain permission before it can disclose to other parties.^2 If the recipient allows the information to be shared with unknown third parties, then that creates more risk.

2. Would AT&T customers have consented to their call records being shared with Snowflake? The people behind so-called "tech" companies like Snowflake know that AT&T customers have no say in the matter.


> Laws related to data breaches need to have much sharper teeth. Companies are going to do the bare minimum when it comes to securing data as long as breaches have almost no real consequences. Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible. Maybe have fines that are so massive that company leadership and stockholders face real consequences.

I really dislike this attitude.

AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T, and to the extent you're right, then I agree that they should be fined in a bigger manner.

But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.

Put another way - you trust a classical bank with your money, to secure it from criminals. But you don't expect it to protect your money in the case of an army attacking it. But that's exactly the situation these organizations are in - anyone on Earth can attack them, very much including, basically, armies. We cannot expect organizations to be able to defend themselves forever; it is an impossible ask in the long run. This has to be solved by the equivalent of a standing army protecting a country, and by going after the criminals who do these breaches.


No, the root cause is not that AT&T were "attacked, by criminals"; there's a much wider issue involving Snowflake and multiple customers. The full facts are not in yet.

AT&T's data was compromised as one of Snowflake's many customer breaches (Ticketmaster/LiveNation, LendingTree, Advance Auto Parts, Santander Bank, AT&T, probably others [0][1]), which occurred and were notified in 4/2024 (EDIT: some reports say as far back as 10/2023). Supposedly these happened because Snowflake made it impossible to mandate MFA; some customers had credentials stolen by info-stealing malware or obtained from previous data breaches. Snowflake called it a “targeted campaign directed at users with single-factor authentication”. The Mandiant report tried to blame an unnamed Snowflake employee (a solutions engineer) for exposing their credentials.

How much responsibility Snowflake bears, vs its clients, is not clear (for example, it seems they only notified all other customers on May 23, not immediately when they suspected the first compromise). Reducing the analysis to pure "victims" and "criminals" is not accurate. When you say "criminally prosecute those whose negligence made this possible", it wouldn't make sense to prosecute all of Snowflake's clients but not Snowflake too. Or only the cybercriminals but not Snowflake or its clients.

[0]: The Ticketmaster Data Breach May Be Just the Beginning (wired.com) https://news.ycombinator.com/item?id=40553163

[1]: 6/24 Snowflake breach snowballs as more victims, perps, come forward (theregister.com) https://news.ycombinator.com/item?id=40780064


I think the simple explanation here is likely not that Snowflake has some giant undisclosed breach allowing access to its customers' data, but that Snowflake instances are just insecure by default in fairly basic ways.

Snowflake built its business on making it really easy for data teams to spin up an instance and start importing a massive amount of their org's data. By default, the only thing you need to access that from anywhere on the internet is a username and a password. Locking down a snowflake instance ends up requiring a lot more effort.

And very few users actually end up interacting with snowflake directly -- they're logging into a BI tool like Looker, which accesses snowflake behind the scenes. So the fact that an org's Snowflake instance doesn't require being on the VPN or login via okta/azure ad/whatever SSO can fly under the radar pretty easily. Attackers realized this, and started targeting snowflake credentials.

Seems similar to all the S3 breaches that have come out over the years -- it's not that S3 has some giant security hole (in the traditional sense) -- it was just really easy to throw shit on S3 and accidentally make it totally public.
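The S3 version of that failure mode is at least cheap to audit. A rough sketch with boto3 (the bucket name is hypothetical, and this only checks the public-access-block settings, not ACLs or bucket policies):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "example-data-export"  # hypothetical bucket name

    try:
        cfg = s3.get_public_access_block(Bucket=bucket)
        flags = cfg["PublicAccessBlockConfiguration"]
        # All four flags must be True to fully block public access.
        if not all(flags.values()):
            print(f"{bucket}: public access only partially blocked: {flags}")
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            # Nothing configured at all -- the "easy to make public" default.
            print(f"{bucket}: no public access block configured")
        else:
            raise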


Yes, like I said, Snowflake apparently knew that very few of its many customers were using MFA.

Reports say password-stealing breaches were happening as far back as Oct 2023. But Snowflake didn't notify people (customers, FBI, SEC) until May 2024.


> Supposedly these happened because Snowflake made it impossible to mandate MFA

What's crazy is that Snowflake made MFA enforcement available only 5 days ago.


I think the implicit assumption is that the vast majority of these breaches are obviously preventable (basic incompetence like leaving a non-password-protected database connected to the public internet is common).

A better analogy is not a bank defending against an army, but a bank forgetting to install doors, locks, cameras, or guards. _Yes_, the criminals are the root cause, but human nature being what it is, it's negligent to leave a giant pile of money and data completely unprotected.


> I think the implicit assumption is that the vast majority of these breaches are obviously preventable (basic incompetence like leaving a non-password-protected database connected to the public internet is common).

Some breaches are certainly preventable. But is that the case here? I didn't see the technical details, I think they aren't released yet, but this is the conclusion everyone seems to jump to automatically, without necessarily good reason.

More importantly - these companies employ thousands of employees, all of whom could be doing something wrong that causes a security threat. And there are thousands, maybe tens of thousands, of people trying to find their way in. My point is that even without any negligence, if you have thousands of people trying to hack your company every day for years, it's easy to slip up, even if the breach is preventable in hindsight.

One of the first things you learn in working in security is that there is no perfect security, and you have to understand the nature of the threat you are facing. For these companies, the threat might very well be "North Korea decides to dedicate state-level resources to breaking into your company, plus thousands of criminals are doing the same every day". How is any company supposed to protect against that?


Which implies that the company is negligent in hoarding the data in the first place. If you admit that there is no effective security for sensitive data, you admit that holding the sensitive data in the first place is negligent. Create real sanctions for the loss of the data, follow through on them, and then companies will do better.

Mind you, Snowflake is the problem here, not AT&T, if it was their leak. AT&T is big enough that no meaningful sanctions will fall on them. It's not like they fell out of the sky and killed a bunch of people.


You would assume someone would notice all the data that was being transferred.

And if this turns out to be a sophisticated attack then who’s to say they didn’t backdoor a bunch of systems? I heard a talk from a big Norwegian company that got attacked. Every single server, every single switch, every single laptop, all had to be reformatted and reinstalled. I assume that AT&T would have to end up doing the same.


To run with the analogy some more:

The bank is expected to have people trying to break into it. Sure would be nice if they didn't, but that's not the reality. As such, failing to provide adequate defences is absolutely a failing on the bank's part.

If they were keeping even more data than necessary, that's just extra failure on their part.


In this analysis, the effort the bank puts towards defending itself is relevant. We wouldn't blame the bank for an army attacking it, but if it left the door unlocked and the neighbours' kids made off with your money, you very rightly would feel differently.


Which does make me wonder why we never really hear of banks being attacked and robbed in such a way. One would think they would be the most obvious targets to throw an army of criminals at.


It's pretty much the definition of a functional state that the police can gather more resources faster than any group of criminals can. By the time you've gathered enough criminals to hold off the police for even a few minutes, then (combined with the sibling's point that banks don't store much physical money) there's not much money to go around to that many people.


Banks don't really physically store much money any more.

And more importantly - the police exist. If someone were to actually physically rob a bank, enormous resources would be spent trying to find and capture them, then they'd be thrown in jail.

If they could do the same thing, but also be physically located in another country while doing it, with no chance at all of going to jail... more banks would be robbed!


Crypto Exchange has entered the chat.


If a breach is so inevitable like you say, then it's negligent to store the information in the first place. They're accumulating and organizing data with the inescapable conclusion of handing it out to criminal organizations.


The customers are the victims, not the companies.

You picked the wrong point to counter with. The real problem is that the corporate decision-makers who bear the most responsibility will never be held accountable. They will always be able to shift blame to someone below them in the corporate hierarchy.


Your point needs more emphasis. The idea that the victim is anyone other than the customer is so wrong.

The other points are dubious too.

> But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.

So given that this is known, why was the data stored such that it could be taken? Why was it kept at all? Oh.. to sell.

> Put another way - you trust a classical bank, with a money, to secure your money from criminals. But you don't expect it to protect your money in the case of an army attacking it.

Yes I do expect that. And it’s protected and insured by my government.


No way. If I were running a small MSP and I was breached and my customers were infected, I'd be sued out of business immediately. The fact that they are a titan means they should be that much more vigilant.


Companies could also stop storing customer information for purposes unrelated to the core product that you are using... But that's not going to happen, because it's still far more profitable to mine customers' data even with the risk of theft or breach.


<< AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T,

I am sure LEOs will do what they are paid to do and catch criminals. In the meantime, I would like to focus on service provider not being able to provide a reasonable level of privacy.

I am blaming a corporation because, for most of us here, it is an ongoing, recurring pattern that we have recognized, and that corporations have effectively codified into a simple deflection strategy.

Do I assume the corporation messed up? Yes. But even if I didn't, there is a fair amount of historical evidence suggesting that security was not a priority.

<< Put another way - you trust a classical bank, with a money, to secure your money from criminals.

Honestly, if the average person saw how some of those decisions are made, I don't think a sane person would.

<< But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.

Ahh, yes. The poor corporation has become too big of a target. Can you guess my solution to that? Yes, smaller corporations with a MUCH smaller customer base and footprint, so that even if the criminal element manages to squeeze through those defenses that the corporation made such a high priority (so high), the impact will be sufficiently minimal.

I have argued for this before. We need to make hoarding data a liability. This is the only way to make this insanity stop.



