I realize that we all suffer if it gets made into a torrent, but sometimes pain is necessary to get action.
Within a week, this whole thing will be forgotten and nothing will have changed because privacy is too abstract for most people -- they need to see the personal information that's being collected. The researcher acted properly, but going full Snowden would have had much greater impact on getting better privacy-preserving laws and technology.
People can be stabbed in the back if they go into dark alleys without watching behind them. Let's stab a few people who go into these alleys so that everyone will be afraid to do so, giving us an opportunity to prevent people from being stabbed in the future by making them aware.
Why would you possibly think this is a good idea? The idea is to prevent pain, not cause more pain in some bizarre attempt at making people afraid. There's enough privacy violations - we don't need to be making more of them ourselves.
1. Secured and private. This is data not exposed in any breach.
2. Unsecured and private. This is data which has been exposed in a breach, and which must be sought out by the reasonably tech savvy.
3. Unsecured and public. This is data which has been exposed and can be easily used by anyone.
We want all sensitive personal data to be in state 1. But because of the taboo of state 3, we end up in a situation where we're hostage to state 2, because everyone wants to treat published sensitive data as if it were still private. That takes power away from the non-tech savvy victims of breaches but doesn't diminish the power of tech-savvy criminals who want to use the data.
In my opinion, forcing all sensitive data to be considered either secure or insecure (instead of the weird, quasi-private state 2) would take power away from people who want to use it. Every time a new breach happens there is a race to use it before it's no longer useful. I believe we could meaningfully defang these breaches by leaning in completely and demonstrating how public the data is. If there were a party truly committed to that and they couldn't be stopped, my hypothesis is that things would actually change.
- Some high number X of dark alley stabbings occur each year.
- But alleys still "feel" safe to people, because the stabbings aren't well-publicized. So people don't know to avoid them and the rate X remains the same.
- Let's publicize alley stabbings in an emotionally impactful way, so people know to avoid alleys and we can bring X down.
In the actual case at hand, the argument is that you break a few eggs so people understand the issue viscerally, and hope to achieve massive regulatory change because people now actually care. I don't know if it would work, but it's a more reasonable idea than you're making it out to be.
Solving the root problem here is orders of magnitude more important than any single data breach today is.
What the authors here did is correct - they've publicized the issue. Releasing this data as a torrent is not 'publicizing' anything - it is stabbing millions of people in the back, and then waiting for the crowds to come and gape at the dead bodies.
The top post doesn't promote publicizing data breaches that already happened. It promotes obtaining and publishing data that wasn't published before. These are completely different things. It's like making a TV series about alley stabbings - versus stabbing actual people in alleys to get better scenes for the series. The former is great; the latter is a heinous crime which can ruin the whole cause.
Only in certain countries...
For some people, the cost of letting other people look up your information is overwhelmingly huge - this is why privacy should be regulated.
We don't really "all suffer" the same - some people suffer disproportionately (stalking, harassment, abuse).
Publishing the data as a torrent is unlikely to change people's opinion, but will almost certainly harm people.
Don't take this approach.
Vaccines can give you a fever but we still take them because the short term side effects are worth the long-term benefit. Leaking this type of information to the public operates under the same principle.
Unfortunately, anyone who tries to normalize the data and release a public frontend for querying it will probably be dropped by their hosting provider and ostracized by the security community. People don't tend to like the idea of what you're talking about, and will blame the person hosting the information as much as the people who leaked it; much like how Troy Hunt will never release the HIBP corpus of normalized password dumps, he'll only let you see if you're in it.
The impact of searching your personal data with that kind of granularity would probably be more dramatic than seeing your compromised passwords online, but I bet it would be even more vilified.
Nope. In fact, you couldn't be more wrong. The outcome Joe Six-Pack would get from it is not that "data aggregators are dangerous" but that "security researchers, privacy advocates and cyber-criminals are pretty much the same, they are doing the same thing - stealing your data from honest, hard-working marketers - and then hiding behind 'privacy' and 'research' when they get caught". And most of the press will run with it gladly; it's an entertaining story.
You can't do your cause - whatever it is - a worse disservice than to commit a crime "to show them". That makes you a criminal - whose argument will be ignored, because nobody wants to agree with a criminal - and makes yours a cause promoted by criminals. It's very hard to argue from that position. Sometimes there's no choice - e.g. if the whole enterprise is criminalized in advance, as criticism of those in power is in totalitarian states. But nobody smart should put themselves in that position voluntarily.
> going full Snowden would have had much greater impact
Snowden revealed secrets of the NSA that did not hurt the average citizen - on the contrary, in many cases they were deployed against the average citizen. In this case, you would be the one directly hurting the average citizen. You wouldn't get the Snowden cape.
Also I think it's interesting that people say it is "leaked" while what actually happened is that the price of this data got lowered to zero for a few lucky souls.
An argument could be made that every step above was illegal. I don't agree with that argument, but surely you've heard of (many) cases where people have been prosecuted for things like that.
My point is that he's already taking risks.
Make companies "super-liable" for any data stolen in a breach from their servers beyond the data they (actually) need for the functioning of the service.
This would hopefully not just encourage more companies to see data as "toxic" and treat it as a liability rather than an asset; it would also encourage them to adopt end-to-end encryption in as many types of services as possible (and eventually things like homomorphic encryption, or any form of encryption that doesn't give the company itself - or hackers - direct access to the data).
 - https://www.schneier.com/blog/archives/2016/03/data_is_a_tox...
I think we need to teach people that their data is valuable, likely dangerous in the hands of others, and not to spew it all over the web. Kinda like we did before FB convinced everybody to use their real names.
Other than them, who cares? If you want to put people in harm's way, you should accept the consequences when harm occurs.
>I think we need to teach people that their data is valuable, likely dangerous in the hands of others, and not to spew it all over the web. Kinda like we did before FB convinced everybody to use their real names.
No, it's much easier to hold the companies accountable, and they should be held accountable. No company that suffers a "data breach" should have the resources to exist after the breach. Society should punish them out of existence, because they are known cancers.
We don't consider this a leak when the marketing firm loses its data. It's only a leak when we find out that the marketing firm has lost control of its data.
They advertise themselves as having the most accurate data (why wouldn't they advertise themselves this way?). If so, the people it affects have a right to know, and it seems that they have the means to contact them and let them know.
"The sights of Paris and a personal information purge from the 100 largest US collectors"
Recital 14 - "The processing of personal data is designed to serve man; the principles and rules on the protection of individuals with regard to the processing of their personal data should, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably their right to the protection of personal data"
Article 3 (2) - "This Regulation applies to the processing of personal data of data subjects who are in the Union"
This hasn't been tested, and each member state could prosecute differently, but it was certainly discussed and then structured in such a way as to be a fundamental truth. In my non-legal opinion (based mainly on having read the majority of it), it would be interpreted as such by EU courts (i.e., not member state courts).
but how would they ever get the contact information for all of those people? surely that's private information....
oh... right ಠ_ಠ
“In order to be in line with Fair Information Practices we will take the following responsive action, should a data breach occur:
We will notify you via email
• Within 7 business days
We will notify the users via in-site notification
• Within 7 business days
We also agree to the Individual Redress Principle which requires that individuals have the right to legally pursue enforceable rights against data collectors and processors who fail to adhere to the law. This principle requires not only that individuals have enforceable rights against data users, but also that individuals have recourse to courts or government agencies to investigate and/or prosecute non-compliance by data processors.”
I think the antithesis of this would be information redistribution: everybody should be entitled to access all of this information if anyone has it. Just for fun, let's say the only caveat is that all information access is also public and linked to each identity.
Do you think it's better off in the hands of the highest bidders?
Every time you use a loyalty card that information is collected and yes it's used to understand you and perhaps even influence you, to buy certain products. Buying diapers? Have a look at these baby toys. Most people will throw their personal information out there for a price reduction.
The problem here is the leak, not the fact that it exists.
I know exactly what my Safeway card is used for. I also deliberately do not register my phone number or other information to it. Of course they can probably associate it with my credit card but these things are easy to reason about.
The real problem is combining all these datasets in one place for the purpose of perpetuating information asymmetry as a product.
So actually yes, the problem is that this dataset of every single American exists.
While I sincerely admire the quixotic effort, I suspect you are fighting a losing battle.
There are countless situations in daily life where you have no choice but to leak some tiny bit of information about yourself to an external database, and from there on, it's just a matter of cobbling the bits back together.
Maybe it shouldn't be. The bit of info I gave about myself I gave (even if implicitly) to a specific entity for a specific purpose. To sell or give that bit to another unrelated, unknown to me entity for an entirely different purpose is a violation.
But there's unfortunately no regulation in place to ensure that.
And if there were, GDPR style, there would still be the matter of:
- exceptions for, e.g., authorities
That leakage may be inevitable but the correlation is not. We just allow it today. GP claimed that the existence of the Exactis dataset was not a problem. I disagree. That dataset exists only because many disparate sets were linked with that inevitable leakage.
Rephrased, the real problem is (currently) what happens once it gets combined with that wealth of other data (that's been purchased, shared, snooped and swindled) belonging to our data overlords like Google.
>Of course they can probably associate it with my credit card
Or if you've ever furnished an ID for some age-restricted purchase while also using the loyalty card. Then of course there's location data from Android/smartphones, vehicle telemetry (manufacturer, finance company, mobile data service, OnStar, anti-theft service, insurance company 'safe-driver' tracking device), members of the telco mafia (VZW, AT&T, etc.), video surveillance providers running facial and license plate recognition, et cetera.
I have no idea if that's what Exactis is / does and people may not be aware of this, but it's the reality.
EDIT looks like Exactis gets information on users through cookies, which is not the scenario I wanted to highlight.
Yes, I understand companies are allowed to do it. That's beside the point - just because they are allowed to right now doesn't make it right.
The fact that the information exists guarantees that it will leak. If not from one company, from the next.
Personal information is like hazardous chemicals.
Big data is ammunition, loaded into algorithms that act like machine guns: they can fire at the speed of light, millions of times, across the world, in under a minute.
Now do you think small firms can’t hold large quantities of damaging data?
"There are thresholds for turnover, balance sheet total (meaning the total of the fixed and current assets) and the average number of employees, which determine whether your company is a micro-entity, small or medium-sized."
And there are different requirements for each
Common law gives judges an active role in developing rules; civil law is based on fixed codes and statutes.
Case law is not binding in the EU.
This is a dramatic and misleading oversimplification. Under civil law systems, judges still do have great leeway with interpreting and applying regulations. And under common law, it's not really true that judges have an active role in developing rules - they have the ability to interpret them in the contexts of cases which come up, but they don't legislate. The closest thing that they can do (aside from overturning provisions) is to introduce limitations or tests on existing law that is challenged, but even then they're mostly only allowed to do that to the extent that they are using the tests to connect the law back to the Constitution or other existing legislation.
Case law is not binding in civil law (at least not to the same degree as it is under common law), but does definitely play a significant role.
Furthermore, it's flat-out wrong to say that "case law is not binding in the EU". The Republic of Ireland and the UK both use common law, under which case law is binding. Not only are UK court decisions enforceable across the entire EU, but UK law is actually the chosen jurisdiction for a lot of contracts and agreements within the EU, similar to how New York is the chosen jurisdiction for a lot of contracts or even international treaties that are enforced worldwide, whether or not the parties are based in New York.
Even if you're referring specifically to legislation passed by the European Parliament itself, it's still not really correct to say that case law isn't binding. The European Parliament is an international body held together by international treaties, and while EU courts might have decided to use civil law in interpreting legislation passed by the European Parliament itself, that doesn't mean that case law does not come into play, either in countries with common law systems or even in countries with civil law systems. It's way more complicated than that.
This is, incidentally, one of the problems that Brexit is currently introducing: it's unclear whether parties that have elected to govern their contracts under UK law will continue to be able to do so with the expectation of enforceability.
It was a 2017 case, but I guess it reflects what can happen?
tl;dr: a non-profit got fined 75K€ because their website leaked 42,562 private documents from their users. Anyone could modify numbers in the URL and read other users' documents. The documents included passports, tax information, identity documents, and more.
EDIT: better source: https://www.cnil.fr/fr/sanction-de-75-000-euros-pour-une-att...
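The flaw in that case - changing a number in the URL to read someone else's documents - is a classic insecure direct object reference (IDOR): the server trusts the identifier in the request instead of checking ownership. A minimal sketch of the bug and its fix, with entirely hypothetical names and data:

```python
# Hypothetical in-memory document store, standing in for the real backend.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's tax form"},
    2: {"owner": "bob", "body": "bob's passport scan"},
}


def fetch_document_vulnerable(doc_id, current_user):
    # BUG (IDOR): no ownership check. Any logged-in user can read any
    # document just by incrementing the number in the URL.
    return DOCUMENTS[doc_id]["body"]


def fetch_document_fixed(doc_id, current_user):
    # Fix: authorize every access against the session user, and return the
    # same error for "missing" and "not yours" to avoid leaking existence.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != current_user:
        raise PermissionError("document not found")
    return doc["body"]
```

The point of the fix is that authorization happens on every object lookup, not just at login; per the CNIL summary above, the absence of exactly this check exposed tens of thousands of identity documents.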
By its very nature, however, you cannot nail such a thing down and define it precisely beforehand.
Is there anything stopping a regulator from deciding an unintentional violation is "only" a company-destroying 5M euro fine instead of the full 10M? In fact, couldn't it still be a 10M fine? Or should I expect to be let off with a warning? Seems like I'm depending on the good will of the regulators of every single EU member state...
I do not think it's impossible to write a law under which fines for minor and unintentional violations are limited by statute.
As an outsider, I would love to hear that that's not how it works. Do the member states have any checks on each other's enforcement?
I’d love to search this database for the details of top people at privacy-violating companies and publish them.
Who defines "privacy-violating"? Jumping into the mud because you feel aggrieved just makes you look like a pig.
Basically anyone who profits off user data and makes it difficult/impossible to opt-out.
You can even force the companies subjected to the FCRA to give you a report on exactly what they have on you.
 They are subject to the FCRA if the data is sold to companies who make use of it in credit, employment, and housing decisions.
As long as all those needles are safe in the haystack they can be ignored; a stack of needles, on the other hand, is not so easily ignored.
I'm talking about revealing the same kind of data their companies collect on us, which is way more personal and could contain embarrassing stuff.
As I and others have said elsewhere, the data was leaked the moment it was collected and priced for selling to attackers. Forgoing full disclosure is really just blunting the truth, giving corporate whitewash a leg up, and delaying society learning the lesson of what we're up against. As (presumably) individuals and not owners of surveillance companies, we shouldn't bless this behavior as being in the public interest.
When the top folks in the US government are personally affected. Until then, "congressional hearings" and presidential ambivalence are the most action we'll get out of them. Most people don't really understand what the significance of these events is.
Remember, we only got the Video Privacy Protection Act after someone published Bork's rental history during his Supreme Court nomination. I hate to say that public shaming works, but public shaming works.
The OPM breach covered a lot of powerful senior people.
It will only change when consumers demand change or outright refuse to give their personal information away. I think everyone should adopt pseudonyms for everything and change those pseudonyms regularly.
In some cases you're not knowingly giving them your data either.
So I refused and made it clear I would walk away. The sales guy went through the whole 'it's not a problem, I've bought cars from here and haven't got spammed'. In the end he had to get a manager, and it turned out that the option could be removed from the contract, three menus down in the system.
Sounds like no one had ever asked before. I imagine GDPR will have changed this to opt-in.
I scratched that, and other lines, out of my rental agreement when I rented my New York apartment. The landlord agreed.
I don't believe it would be feasible to [easily] purchase just one person's purchase data, but if you knew who you wanted to target, it should be possible to narrow the targeting enough to reach them.
 http://www.oracle.com/us/solutions/cloud/data-directory-2810... [ctrl+F + mastercard]
But I guess it might be different for different acquirers.
The purpose of loyalty cards is that the messages are usually acquired or processed on non-bank systems, so they can go into much greater detail and include individual sale-item details.
With the advent of the chip and pin cards in the USA, it seems logical that just about everyone upgraded to equipment that does support it; which might explain why you are only seeing this in the past year.
Are you aware of their MPerks program? Tied to your phone number and an email address, electronic receipts, tracking of your savings, online/in-app clipping of coupons auto-applied at checkout time, automatic "rewards" of $2-3 for every $150 you spend.
The only part of a traditional loyalty card program it doesn't have is making their sale prices apply only with card, but it definitely gives you measurable (and measured) discounts both passively through those "rewards" and actively via the in-app coupons.
Data are facts, money is a repository of value. With a bank, you are the customer. With marketing, you are the product.
Data about you is not (necessarily) data you own.
I'm not saying it's right, but any reasonable discussion has to take this legal landscape into account.
There is no rising fascism in America; if people really knew anything about the Weimar Republic, Republican Spain, or the March on Rome, they would know that the left always loses when it tries to be humane while gaining power. The only time the left gains power is when 'tankies' are the ones leading the resistance.
But, remember the left chooses the hard road, not because they can but because it is moral.
Regardless, Trump is absolutely a fascist. It seems a bit futile to disagree.
Edit: On a more practical note, it's always baffled me how normalised it is. People who defend fascists in the US media are still respected and hired. They might be prominent political figures. Somehow calling attention to someone who is a fascist gets confused with someone doing something as simple as saying "liberals are evil!", as if the two-party system there has anything to do with racial supremacy. As an outsider, you look at it and think "This is the country that helped liberate Europe from the Nazis. How are they not ashamed?". Maybe the first step is allowing shame to enter into things when you think about your nation, instead of a quasi-religious patriotism.
With corporate interests.
And of course we can’t make honest claims to democracy until then either.
If Congress is unable or unwilling to draft a bill to stop predators from milking money off of your data, then that money probably ends up in their pockets some way, or at least some of it does.
Please don't quit your job for the sake of your security - food, shelter, etc. Vote these assholes out at the next election. Vote in young people who are probably as angry about this shit as you are, in the hope they won't sell out their souls.
You should reach out to a venture called Equifax. They are providing customer alerts for data breaches and a premier data protection service.
Your imagination is working against you here. The obvious and well-known reason that congress is ineffective is they work for the private sector, who is the disease; not the cure.
Here you go, first result in Google:
> Average person saw Equifax commercial on “hey be smart we will keep your info safe with alerts” and thought “wow this company cares about my data” when its precisely opposite.
The result you cite is a general consumer favorability rating for Equifax, which is different than this particular claim. People who didn't hear anything about Equifax and Congress are included in the general consumer poll. I'm not trying to nitpick, I'm just pointing at a lack of data for this particular claim.
And yet, when GDPR tries to address the issue, HN is full of "blocking the damned EU users completely" and "stop stifling honest companies".
I mean, I'm a "tin hat" privacy nut in the USA, but that doesn't mean that I'm a fan of 100% of the GDPR. It has plusses and minuses. It'd be nice to have a conversation about them.
Absolute liability for data losses. Exactis lost 360 million peoples' data. They should be able to (a) form a class and (b) extract money damages from Exactis without having to prove specific harm, which is difficult to do with data loss.
A good model is Illinois' Biometric Information Privacy Act. Broaden the definition from "biometric identifier" to a longer - but still specific - list. If you want to get fancy, create a regulator who can add things to the list after a public hearing. (The specificity avoids GDPR's "what's personal data?" mess. The public input mitigates the risk of unintended consequences and corruption.)
Well they do. Our government takes money through fines and taxes and uses it to build infrastructure and provide services.
> which would make sense with personal data being leaked.
My preference would be that personal data not be leaked at all. Ideally the warnings and fines kick in long before that happens.
> A fine also misses companies who are "doing what the law says" but still have some horrible flaw anyways.
What example are you thinking of?
The GDPR is quite broad and open to interpretation by both sides.
> If you are _genuinely_ responsible for the data, meaning if something happens to it you are liable for it, then you often take more care of it above and beyond, than for simply complying with rules.
That's what the GDPR does.
Requiring people to lawyer up to make the company responsible is far weaker.
Using fines to fund public services creates perverse incentives though, especially where the fines go directly to the agency that brings the case.
> Well they do. Our government takes money through fines and taxes and uses it to build infrastructure and provide services.
That's only true if you consider the public at large to be equivalent to any individual member of the public, or if you believe only the government is "injured" by a data breach.
If I stole all your money and repaid it in fines to the government instead of directly to you, would you consider the matter settled?
> If I stole all your money and repaid it in fines to the government instead of directly to you, would you consider the matter settled?
I mean, that's typically how it works.
People get robbed by people who don't have the ability to directly restore what they've taken, so the state takes them into custody and makes them a productive member of society.
Would I consider the matter settled? Gee, I've seen some really stupid arguments on the Internet that have made me wish I could punch someone over TCP/IP, but while I'm wishing I'm not going to wish for that either.
Can you please detail why paying a fine to the government is the same thing as paying a fine to the people injured by a crime?
Me too! But not at any cost. This discussion involves thinking about scope (both in who and what is regulated), penalties (both in frequency and magnitude) and pre-emptive enforcement, if any. The trade-offs are far-reaching. A conservative approach is prudent. (It's also politically resilient.)
> GDPR is quite broad and open to interpretation by both sides
That's a sin and a virtue.
> Requiring people to lawyer up to make the company responsible is far weaker
This, too, is a sin and a virtue. The sin is it may allow bad deeds to go unpunished. But presently, everything is going unpunished. The virtue is in its prudence. It's unlikely to cause systemic harm, and we can observe its case law to more-precisely draft the next wave of rules.
And what is this "any cost" rubbish?
Replace EU by "the data/privacy regulator of the country in question"
Reporting requirements. If you find out you're breached, you have to notify everyone involved--plus their states' attorneys general--within N days. If you find out you're breached and fail to notify at least one attorney general, that becomes a criminal liability for those who knew but didn't act.
I was working with Albany on a law in this form (notice only) after the Equifax breach. It was tabled due to lack of Equifax-related outreach from voters.
I suspect this is because the very people who would be the most vocal about this issue are also the most politically cynical, and would never think to reach out to their representatives. That's a damn shame, if true.
Of course, "responsible" and "data aggregation company" rarely belong in the same sentence...
For example, Europe was first on texting on the mobile network while the US (single country) took years to come to a standard.
I think it will be the same with regards to GDPR. You (US) will discuss this for years and come up with a different law.
But after going several answers deep you still haven't listed any specific complaints, and instead complain that nobody discusses them. This really makes no sense.
So feel free to explain which specific things you dislike, why, and how else you would have done it; then people would be able to discuss them with you and exchange opinions.
Saying "it's not possible to talk about X" when you don't even try really isn't the way.
The work sucked, but I was more than happy to help our customers get their data from us.
No one lost your data, they still have it, but someone else made a copy.
That emphasis is twofold: 1) they can do it again, because 2) they didn't lose anything.
The corollaries being that their incentives aren't aligned with those of the people whose data is leaked; the company doesn't need to spend on avoiding leaks because it isn't harmed beyond a little (bad) PR.
The pedantry is unhelpful.
You can't stop data loss until you can guarantee platform security. You can't do that until you prevent developers from creating bugs and security flaws in the first place. And you can only do that if you have either perfect tools to catch all the issues or a perfect testing regime.
It's basically an unsolvable problem.
If the problem is inevitable on some level, then why isn't insurance to cover that eventuality required?
A number of very different groups are strongly opposed to the very idea: libertarians, (some) Christians, and (many) civil rights activists being the most vocal.
> Each record contains entries that go far beyond contact information and public records to include more than 400 variables on a vast range of specific characteristics: whether the person smokes, their religion, whether they have dogs or cats, and interests as varied as scuba diving and plus-size apparel.
It might be "comprehensive" but is it comprehensive in a scary way? It's probably just 400 machine learning features that are estimating what people might like, so not necessarily super accurate?
Even worse. Many people say they "don't have anything to hide" because they too haven't considered the vast consequences, regardless of whether they have something to hide. For starters, when the data is inaccurate, you might have something to hide that even you didn't know about, and it could be responsible for all sorts of events and opportunities in your life, both public and private, without you even knowing - things that give you a different life experience than your friends, to an unknown degree. This sort of lack of knowledge, control, explanation, or closure would be the lived experience of chaos, and it's one of the most frightening parts.
Birth certificates. Creditworthiness.
At the age of 54, Sigmund Arywitz was a healthy American success story. He was making $30,000 a year as executive secretary and treasurer of the Los Angeles County Federation of Labor, AFL-CIO, his family was sound, his reputation high on all counts, and he had just finished eight prestigious years in Sacramento as state labor commissioner under Gov. Edmund G. (Pat) Brown. But something was awry. In the space of one year, five Los Angeles department stores refused Sig Arywitz charge accounts, and a major car-leasing company turned him down for credit -- even though he had a walletful of oil-company and other credit cards and had always paid his bills on time....
See also: Cardinal Richelieu.
Context on the reference: https://history.stackexchange.com/questions/23785/what-did-r...
Imagine you're a pastor at a church and a datadump claims you're an atheist. Maybe you can convince people it's a mistake, maybe you can't.
And we're not even beginning to think about what this can be used for by authoritarian regimes (cf. https://www.madamasr.com/en/2014/09/29/opinion/u/you-are-bei...)
"Big data" was crucial to the operational efficiency of the Holocaust
OTOH, it would be interesting to know how they got hold of such data.
If theft occurs at a bank, the money is gone. If I steal information, the information is still there. Knowledge of theft is completely dependent on a logging system capturing the correct information.
edit: nope, this is infinitely worse.
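The point above about detection depending on logging is why tamper-evident logging exists: if each log entry's hash is chained to the previous one, deleting or editing an entry after the fact breaks verification. A toy sketch in Python (the function name and shape are illustrative, not any particular product):

```python
import hashlib

def chain_log(entries):
    """Hash-chain log entries: each digest covers the previous digest
    plus the current entry, so any edit or deletion downstream of the
    tampered entry changes every subsequent digest."""
    prev = b"\x00" * 32  # genesis value
    chained = []
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        chained.append((entry, prev.hex()))
    return chained
```

Verifying a stored chain means recomputing it and comparing digests; an attacker who steals data can still avoid leaving entries, but they cannot silently rewrite entries that were already written.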
Without more information I can only assume they are scraping public records just like sites like Spokeo etc. Perhaps with some data analysis thrown in.
So I don't see much of a personal concern; especially since their business model appears to be selling this very data!
If this data comes from just scraping other sources that are freely and publicly available and applying some shitty data analysis on it, why should I be particularly concerned? The data itself is already out there for someone to find if they wanted to or even buy it from this company if they are too lazy to scrape themselves.
However, the source of the data wasn't in the article.
Will is a highly accomplished IT Executive designing and developing self-service software applications built on BIG Data, running in Cloud Infrastructure in highly secure environments, leveraging analytics and yielding high profits and rapid growth.
He is responsible for technology strategy which includes highly accurate and automated data processing, cloud infrastructure, MS Azure platform-as-a-service, Cloudera / Hadoop Data Management Platform, APIs, Marketing Automation Platform, Analytics, and Digital Marketing.
( http://www.exactis.com/about-us/ )
I imagine a lot of that info is proprietary, but I'd really like to understand this industry better. It's probably a foolish hope, but I really hope there are a few main choke points that one could opt-out of. If that's not possible, I could always try to inject bad data into the system, if I know what their inputs are.
At the very least it’s a pity that even good people make _mistakes_ like that.
Until that day, whatever I do will at least be robust, won't require maintenance every 2-3 months, etc. Not even fast, just decent.
In short, while our industry features stuff like that, I will be angry, sad, etc. But boy, I will always have money, and I might have a lot of fun, at least for some definition of fun!
Seriously, though, this is just getting out of control. I'm almost to the point of writing my representatives. I don't think the industry can adequately self-police.
[Edit] By "industry" I mean any company who handles my personal data.
Why do we assume we have to make an arbitrary choice of landlord to trust just so we can get basic things done on the Internet?
Do you use an ad-blocker, a VPN, private browsing, and the same on the phone too? All privacy settings in Facebook, avoiding Gmail and Google? No?
At least teach your kids to.
Anyway, the info being leaked here isn't dependent on browsing history. Companies have been gathering these sorts of profiles far longer than most people have been using the Internet. The only way to avoid it is not just to never have Internet access at all, but never have a credit card or a bank account, never own a house or sign a lease, never drive a car, never register to vote, etc. If you do any of those things, even temporarily, you could be leaking information that can't be unleaked.
Many people cannot quit Facebook. I just use Messenger for comms.
And it's no trouble; after setting it all up, in a few months you'll hardly notice it, it will be the new normal.
Forgot to add: use several emails, one for government, one for Facebook, and for ones you will rarely use, like shop and blog accounts, create a few Protonmail ones; it will be harder to put them all together by data mining.
If a registration page asks, give fake names, DOBs, phone numbers. Make it a habit to ask yourself: does this shop really need this data to be real?
Most devs don't seem to acknowledge that good security requires having a separate, dedicated person/team to handle it, just like how you would hire a lawyer rather than having your software devs handle legal issues.
I once posted on HN that every company that deals with sensitive data, big or small, must have a dedicated security person/team. My comment was downvoted/flagged, and I was bombarded with responses like "why would we waste the money on a security person? my dev team already knows to encrypt passwords".
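For what it's worth, even that "encrypt passwords" baseline is commonly gotten wrong: passwords should not be encrypted (which is reversible) but hashed with a salted, deliberately slow KDF. A minimal Python sketch using the standard library (function names and parameters here are illustrative, not from any comment in this thread):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> bytes:
    """Salted PBKDF2-HMAC-SHA256. The random salt is stored alongside
    the digest so each user's hash is unique even for equal passwords."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt + digest

def verify_password(password: str, stored: bytes, *, iterations: int = 600_000) -> bool:
    """Recompute the digest from the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

That a dev team "knows to encrypt passwords" says nothing about salts, iteration counts, or timing-safe comparison, which is the parent comment's point about needing dedicated expertise.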
In fact, I almost certainly know only a tiny fraction of what the actual experts in that company knew, but a number of people have told me that I know a lot more about it than the average developer.
That scares me, and if people flame someone for recommending that a dedicated security expert be hired by companies that handle sensitive data, I can only conclude it is out of ignorance - of what's out there, and what's possible.
On the other hand, there are economic realities to consider, especially in early-stage, underfunded startups. What do they do about this?
It's hard to even think about these things for those of us working at low levels: firmware, embedded, etc.
Your comment got me to thinking about what I don't know. Which is a whole lot.
Information wants to be free
Unless it is about me