But the article touches on the biggest issue with Palantir, which is that they seem to be completely fine with allowing law enforcement agencies to use their software for parallel construction, which is in my opinion a pretty egregious violation of the 4th amendment. Although it may be constitutional for police departments to purchase data from e.g. Facebook or some data broker, even local police departments are known to employ dragnet-style surveillance through things like Stingray devices. Of course we have a number of three-letter agencies, also Palantir customers, who don't even need to pretend that they're following the 4th amendment.
Since Palantir is essentially enabling the construction of all the tools required for a complete surveillance state, and actively using their tools for that purpose for financial gain, I think that makes them one of the least ethical companies in tech. I certainly wouldn't be able to live with myself if I knew I was writing software with the explicit purpose of being able to track down rebels in Yemen, catch illegal immigrants, and violate the constitutional rights of millions of Americans.
Prevent the police state by building the police state.
That being said: the PATRIOT Act and the NSA's PRISM program have shown just how blatantly the US government can abuse individual privacy and rights, so I don't think it's a good idea. Weak crypto using backdoors is just a terrible idea, period.
I guess what I'm trying to say is that the logic makes sense in principle even though in practice it wouldn't work that way.
"The logic" in this case is a proposition about how human beings behave. That is, "We can build the largest prison system in history, give law enforcement unlimited surveillance powers, and not expect massive injustices." That is simply wrong. Human beings are not like that.
Perhaps open-source licenses should explicitly exclude use in certain applications. Even if not legally enforceable, it would serve as a constant reminder to developers who DO work in these fields of what they are doing.
What am I saying? Most of us still have no idea.
This raises some good points, but that was to be expected as licenses are a tricky business.
I do think our profession needs more awareness of ethics, and a usage suggestion (rather than an order) in the license of software may be a good start.
I wonder if Palantir has any similar policies?
Law enforcement uses that data to acquire admissible evidence of the perpetrators' crimes. This is the crucial step, because although the original evidence couldn't be admissible in court due to its collection without a warrant, it can be used to acquire more evidence. If the warrantless surveillance includes information such as the time and place of where you plan to commit a crime, law enforcement can stop you for a random check on the way there, giving them actual evidence for prosecution. Now that evidence can be used in court, even though the only way law enforcement were able to acquire it was through unconstitutional practices.
The thing about Palantir is that it needs big data to work. Police departments don't just have random chunks of data lying around containing large numbers of connections between large numbers of people - they might have something like that for something like suspected criminal gangs, but that wouldn't be enough data to make Palantir's software worthwhile. So acquiring the data necessary to use Palantir's software requires mass data collection. Some of their mass data collection techniques may not require a warrant, such as using license plate captures, but some do, such as monitoring who you're calling.
So does Palantir perform warrantless surveillance themselves? To my knowledge, no. But when they sell their software to law enforcement agencies, they must know that the only way their customer could get bang for their buck is through mass surveillance. In my opinion, the person willingly and knowingly selling tools for oppressive purposes holds a lot of blame for the actual oppression that occurs as a result.
- In our software that we deploy to police departments, the origin of any piece of information is generally tracked, as well as when it was entered and by whom. The intention is to prevent this (and other) kinds of abuse. Malicious users can still abuse the system and do parallel construction, e.g. outside of our system, or in a way that makes the two actions (finding non-admissible data and finding admissible data) seem unrelated, but at that point they would be spending quite a bit of time and criminal energy on this. I don't think this would happen commonly, for a variety of practical reasons (but ultimately there's no way to completely prevent it).
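The provenance tracking described above can be sketched as a simple record type that carries its origin alongside its content. This is a minimal illustration, not Palantir's actual schema; all field names here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Record:
    """A single piece of intelligence with its provenance attached."""
    content: str
    source: str      # where the information originated (hypothetical field)
    entered_by: str  # analyst who entered it
    entered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Because every record carries its origin, an auditor can later ask:
# "where did the lead that started this investigation come from?"
r = Record(content="vehicle seen at 5th & Main",
           source="ALPR camera #12",
           entered_by="analyst.jdoe")
print(r.source, r.entered_by)
```

The point is that provenance is immutable and attached at entry time, so parallel construction inside the system would leave a visible gap in the chain of origins.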
Ultimately trust in the government and that law enforcement (in the various shapes and forms it comes in) is a force for good and prevents many bad things from happening every single day, is at the heart of the palantir philosophy. If you fundamentally think the government is evil, palantir would probably not be the workplace for you. But once you see all of the bad things LE can prevent thanks to our software, it becomes pretty easy to believe in, even if individual bad actors exist (and always will), like racist cops, cops that abuse their power, etc.
- If the organization is acquiring warrantless surveillance, then this generally means there is a legislative basis for them to acquire the data in the first place. What this looks like varies from country to country; some countries are much stricter with the regulations than others. For instance, in some countries it is not allowed to get information from a suspect's public Facebook profile, whereas in other countries this is considered A-OK. Especially in Europe these things have recently become much stricter with GDPR (and similar rules that apply to government entities). You'd think that would be a downside for Palantir (less data = less value?), but actually it's a big business opportunity, because our software is the only thing available in this space that is even remotely close to having enough access-control 'finesse' to enable organizations to be compliant with this law. So this is an area (and competitive advantage) we are investing a lot into -- and typically organizations go from 'completely non-compliant' when we arrive (e.g. never even deleting data like license plates that are supposed to be deleted after 3 months, not audit-logging any searches investigators do, not limiting search scope, ...) to being fairly or even fully compliant.
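The retention rules mentioned above (e.g. deleting license-plate captures after 3 months) amount to a periodic purge pass over timestamped records. A minimal sketch, with a hypothetical policy table and record format:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: record kind -> maximum age.
RETENTION = {"license_plate": timedelta(days=90)}

def purge_expired(records, now=None):
    """Drop records whose retention window has elapsed; keep the rest."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in records:
        limit = RETENTION.get(rec["kind"])
        if limit is not None and now - rec["captured_at"] > limit:
            continue  # expired: must be deleted to stay compliant
        kept.append(rec)
    return kept

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"kind": "license_plate", "captured_at": now - timedelta(days=200)},
    {"kind": "license_plate", "captured_at": now - timedelta(days=10)},
]
print(len(purge_expired(records, now)))  # → 1
```

In a real deployment the policy table would be per-jurisdiction and the purge auditable, but the core compliance mechanism is this simple.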
Now my personal opinion is that everyone should be able to enjoy great privacy and control over their PII just like citizens of the EU do, so if you live e.g. in the US or another country where the law is lax, you should consider taking political action to change the situation.
Of course we never endorse or support any workflows that are in any way unconstitutional, and we have terminated relationships before with very big government agencies when we had doubt about whether our tools would be used for unlawful purposes.
- The "big data" thing is a bit of a misunderstanding I think -- we have "big data" tools too, but usually these are not of interest to local law enforcement. If you check out our youtube channel you will find some (atrociously old!) videos of the tool that's popular in LLE (https://www.youtube.com/watch?v=yMv3TBxulu4 for instance). This tool is in fact often used with just hand-entered information -- analysts create objects (e.g. persons, links between them) in their investigation. So the data really isn't that big and mostly hand-curated, and the tool works fine without mass data collection (better than with, perhaps). Most of what local law enforcement does with 'big data' tools is to generate reports like crime statistics.
"Police departments don't just have random chunks of data lying around containing large numbers of connections between large numbers of people" -- actually, they really do! Any police department of any size that has existed for a while will have a database with millions of convictions, suspects, court-cases etc in them. Usually on some crufty mainframe or in some crufty old SQL database that contains a lot of terribly inconsistent data (dead links, data duplication, etc.) Also sometimes some of this data is shared between countries, states etc. You can find some videos on youtube of our CEO talking about the challenges of data integration and such.
Ultimately of course there is a general statistical trend (belief?) that 'more data is better', because if there is more and more complete data, you have a bigger chance of finding that connection between e.g. a terrorist and some billionaire who might be funding said terrorists, etc. At least up until a point (at which the data probably becomes just too noisy/hard to deal with, because every individual piece of information carries very little meaning. At least the NSA seems to subscribe to this belief according to their public statements, and I'd think if anyone knows about this stuff, it's them) and assuming you have the necessary CPU power and talent (data scientists) to actually do something useful with this data. Local law enforcement orgs like police generally lack the latter. In police departments, most users are only sophisticated enough to run searches for things like names, SSN, number plates etc, really.
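The "finding that connection" idea above is essentially shortest-path search over the hand-curated entity graph the earlier comment describes (analysts create person objects and links between them). A minimal breadth-first sketch with made-up entities, not any vendor's actual data model:

```python
from collections import deque

# Hand-curated entity graph: analysts create objects and links between them.
links = {
    "suspect_a": ["shell_co_1"],
    "shell_co_1": ["financier_b", "suspect_a"],
    "financier_b": ["shell_co_1"],
}

def find_connection(graph, start, goal):
    """Breadth-first search: returns the shortest chain of links
    between two entities, or None if no connection exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_connection(links, "suspect_a", "financier_b"))
# → ['suspect_a', 'shell_co_1', 'financier_b']
```

This also illustrates the "more data" trade-off: adding nodes and edges increases the chance such a chain exists, but also the number of spurious chains an analyst must rule out.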
- We generally know a lot about how our software is used at most of our installations, and almost always actively take steps to prevent abuse (auditing etc). Most government organizations will not let us see things like what their analysts exactly searched for (because that is obviously sensitive information) but we do a lot of work to help organizations prevent abuse and insider threats at a higher level. As mentioned above, we have taken action in the past in situations where we suspected abuse. It's on a case-by-case basis, of course -- imagine if there is e.g. one analyst abusing the system, then that analyst getting fired/reprimanded would probably be sufficient action (obviously this is not in our responsibility), but if we suspect systematic abuse or we see signs of repeated abuse without repercussions, then we might pull out completely.
"Ultimately trust in the government and that law enforcement ... is a force for good and prevents many bad things from happening every single day, is at the heart of the palantir philosophy."
Trust is built on transparency, accountability, responsibility.
Can I make FOIA requests to Palantir?
In some of the more liberal countries the things we do with the government are even part of public record and you can find contracts and services we provide them etc documented online without having to do any kind of request.
Of course palantir has no input on that in any way. Certain government agencies and activities are excepted from FOIA (and for very good reasons), but if you think it should not be that way (or that the rules should be different) then you should take political action.
Palantir is software deployed for specific customers. The customers then use Palantir against their data and publicly available data.
- FOIAs cannot be used to obtain specific information about persons, this kind of information is protected. In the EU under the GDPR however, you can get some of this information (about yourself only) -- but I think law enforcement agencies are probably exempt from this (not sure, but would make sense.)
- Our software does not make decisions on who is or is not a criminal in an automated fashion (or indeed any fashion). In fact, law enforcement cannot make such a decision at all, ever -- only a judge in a court can deem you a criminal. Law enforcement can merely decide that you are a suspect. (And again, this is also not something our software (or any software anywhere) decides in an automated fashion.) So there isn't really a 'chain of reasoning' here.
- When you actually ARE accused of a crime (this would happen in a court, and only if there is substantial evidence against you), you will be told what the crime is and why you are suspected of the crime. Thus the 'chain of reasoning' will be presented to you, and be available for verification by anyone participating in the court case (including, depending on country, e.g. a jury)
In another, similar case, someone who was issued a citation for speeding by an automated camera system requested the source code for the firmware running on that system. The court agreed this was a reasonable request and issued a court order demanding the police department turn it over. The police went to the company they got the device from and found that they had no right to request the source. They had no right to know how the system they'd purchased and were using to charge people with infractions even operated. I expect the same would happen with Palantir.
NO! The ends don't justify the means. If we give up our freedoms, LE can prevent a lot of bad things. But it isn't worth it.
Abuse of power is one reason we have checks and balances. If I understand correctly, checks and balances and due process are thrown out the window. And the justification is simply that LE prevented some bad things... But at what cost?
This is disingenuous at best. This is a tool for surveillance, and its misuse in public life is more obvious than the advantages it might bring. People will naturally distrust a government that spies on them constantly, and that has nothing to do with good or bad government.
There is an episode of Black Mirror about a mother who surveils her daughter constantly. If the mother is good, why do we feel disgusted by her attitude?
I applauded Apple for publicly denying the government a backdoor into their phones.
Everyone knows the various forms of law enforcement perform some measure of (and have to, to do their job) surveillance. That ranges from e.g. ALPR cameras to patrolling the streets of some city. Most people agree that this is a perfectly acceptable thing. Do you think instead the police should be abolished? Or stripped of their investigative or enforcing power? I don't think most people would actually enjoy living in whatever society would result from this.
Of course the exact boundaries of what is acceptable and what is not should be fiercely debated.
What makes most people uneasy, especially those of us who work in tech, is that many police departments and three-letter agencies have routinely abused the trust that people put in them. If you want the citizenry to have trust and faith in law enforcement, it has to demonstrate a willingness not to break constitutional rights whenever it deems fit.
I genuinely don't understand why these agencies wouldn't try to build a good relationship with the citizenry they are supposed to be protecting. It often seems like we live in a very dangerous society and everything we say or do is being monitored by someone and may be used maliciously. That itself is a very terrible thing to live with.
Every one of those agencies has had, in its history, at least some cases of individual abuse (e.g. one particular individual abusing their power). This is unavoidable (because these agencies employ humans, and humans are assholes sometimes), but as long as the situation is appropriately handled at the organizational level (the responsible party is reprimanded in an appropriate way, e.g. let go, tried, ..., and practices and protocols are put into place to deal with and prevent these kinds of situations), this is OK, because the agency's operation is still a net win for humanity.
Most people who are paranoid about these kinds of things are so because the media loves to pick up these stories, hype them up, and spread fear -- but I don't think it's that much of a problem.
The thing that's really a worry (and that media usually does not report on) is when abuse happens in a systemic fashion, e.g. agencies bending the interpretation of the law to extend their reach. Incidentally, this is exactly what Palantir's software is designed to help with, because it can actually enforce these rules.
Perhaps a systemic fear of Government overreach is embedded in the DNA of the American people. I would argue that a healthy criticism is very important; but paranoia can be dangerous and unproductive.
> Or stripped of their investigative or enforcing power?
So it's either all the power or none of the power? This is a false dichotomy; you are creating a duality for something that does not exist. The police are not the same as your software, as you implied. I'm against the government spying on its citizens even with good intentions. See the Black Mirror episode, since it demonstrates where the problem lies: it's not in good intentions, it's in the control.
Does this software help the government to spy on their citizens? Yes.
Should the citizens have a say in whether they want this software to be used on their data? Yes. Why don't they have a say?
Can we live without this software? Yes, like we've done so far.
Is this software helpful in a police state? I would say it's the most valuable software for a police state.
Let that sink in for a moment, folks. Then go back and read this other part.
> trust in the government ... is at the heart of the palantir philosophy
So, at Palantir they trust the government that they know is trying to use their product unlawfully. With all due respect, I think many of us are a little more sparing with our trust.
That doesn't contradict the belief that law enforcement is necessary and good.
why is that a bad thing?
If the crime is victimless (such as any drug usage "crime"), then the issue isn't with the police, but with the state defining what "crime" is, and that definition doesn't align with the people's definition.
It's like billboards vs. targeted ads. Both have the same effect (pushing you to buy stuff), but the latter cuts out all the noise.
What I can do is use surveillance tools to find a way to effectively entrap you, to gather evidence. Think of it like a tip-off that you're not allowed to have.
Palantir specialises in providing tools to analyze and aggregate big data. If law enforcement has an interest in it, it's probably for dragnet-style surveillance. The problem is that the 4th amendment means there must be reasonable suspicion before the investigation can begin.
Most dragnet-style surveillance is incompatible with the 4th amendment. So law enforcement has to construct a reason for investigating the case that they should never have been investigating.
I too have a problem with business models that invade our privacy. But I think it’s a bit disingenuous to conflate “private information” with information that you voluntarily posted on the internet.
Whatever happened to educating people that what they post on the internet is permanent? Dragnets over public data are a symptom of the real problem, which is lack of user education and understanding of what data they generate and share.
What if somebody is arrested and happens to have a very convenient data set on a lot of people, such as data used for marketing purposes. Is it okay if those people are stopped and searched on the basis that somebody else held data about them and the police have supposed that 10% of the clients are suspected of criminality?
Normally there is supposed to be evidence that the crime took place before an investigation can begin.
If law enforcement starts with the evidence and then look for the crime that fits it, that's something different.
Palantir goes to the agency and says, "Give us data and we'll give you 'actionable intelligence'."
You know, back in the 90s and early 2000s, the term "data mining" was pejorative. Financial operators would look through their mass of data for correlations and sell them to people who would discover that they didn't actually work.
1) yes, the LE do already have the data.
2) agencies don't 'give' data to palantir, law enforcement data almost always lives in air-gapped environments it cannot leave in any way (or only in highly restricted ways)
3) we don't 'give' actionable intelligence, we just provide tools for analysts to generate such data. Palantir is a company of software engineers, not of intelligence analysts.
Are you also cool with companies in the past hiring Pinkertons to kill people? Because that's where your argument is leading.
Corporations take an interest in tracking people only to stratify them into groups suited to specific ad strategies. The worst their errors will do is pepper your browser and email with bad ads.
Not so with Palantir when their tracking data can misguide police departments, credit agencies, and even your employer into making your life a living hell. Surely any company with that much power must be held MUCH more accountable than they are now.
Facebook on the other hand is actively collecting the data. Without Facebook that database does not exist. In my mind that makes them far worse.
I think that any company that works for the government to any degree is considered part of the government and is bound by all rules a government agency/employee would be.
Government pushes lots of compliance down onto contractors and state governments, but tends not to push down transparency requirements.
Ergo, can we all refocus this discussion of universal privacy rights violation on my partisan social group interests?
>The 1920s was the last time one could feel “genuinely optimistic” about American democracy, he said; since then, “the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”
"I had hoped my essay on the limits of politics would provoke reactions, and I was not disappointed. But the most intense response has been aimed not at cyberspace, seasteading, or libertarian politics, but at a commonplace statistical observation about voting patterns that is often called the gender gap.
It would be absurd to suggest that women’s votes will be taken away or that this would solve the political problems that vex us. While I don’t think any class of people should be disenfranchised, I have little hope that voting will make things better.
Voting is not under siege in America, but many other rights are. In America, people are imprisoned for using even very mild drugs, tortured by our own government, and forced to bail out reckless financial companies.
I believe that politics is way too intense. That’s why I’m a libertarian. Politics gets people angry, destroys relationships, and polarizes peoples’ vision: the world is us versus them; good people versus the other. Politics is about interfering with other people’s lives without their consent. That’s probably why, in the past, libertarians have made little progress in the political sphere. Thus, I advocate focusing energy elsewhere, onto peaceful projects that some consider utopian."
Historically, this ideology has had some very bad consequences.
Having worked in this field, I can say that this is hilarious overkill. The vast majority (in dollar terms) of Medicare/Medicaid fraud is drug companies misreporting their Average Manufacturer Price (AMP), the critical variable in how much they get paid. In 2012, Glaxo settled(!) a small bundle of cases, some of which I worked on, for three billion dollars.
You don't need police-state surveillance of individuals, you need visibility into company transaction records.
>Going even further back, in 1998, a Senate investigation into Medicare fraud found $6 million in payments to a “business” whose fake address would have been smack in the middle of the Miami Airport.
>The report released today shows no improvement, but investigators say it is not that hard to fix the problem.
>ABC News went to one of the locations listed in the report that was on Chicago’s Southside next to a porn shop, with no doctor’s office evident, where Medicare sent nearly $600,000 using an ineligible mailbox shop location as a billing address until 2013.
Seems like exactly the kind of problem Palantir can solve to the advantage of taxpayers.
That's a huge part of the multi billion dollar fraud that Medicare has.
I assume being able to identify physicians with outlier billing patterns would be helpful in figuring out where to look closer.
The second largest cases are typically against hospital chains that encourage (read: basically force) their coding teams to use particular billing codes. It's easy enough to model this from claims data (and we did). When a chain buys a new hospital, suddenly patients that present with the same diagnosis codes are all getting extensive exams that ostensibly require 45+ minutes of a physician's time and cost 4-10x more.
Again - the large operators (hospital chains) absolutely dwarf individuals (doctors). DOJ barely has the resources to prosecute the largest offenders, and have faced repeated budget cuts. Wasting time on individual doctors or private practices (beyond maybe making examples of a few egregious actors) would be an inefficient allocation of their resources.
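The outlier-detection idea above ("model this from claims data") can be as simple as a z-score cutoff over per-physician average claim amounts for a given billing code. A toy sketch with invented numbers; real fraud screening is far more involved than this:

```python
from statistics import mean, stdev

# Hypothetical per-physician average claim amounts for one billing code.
claims = {
    "dr_a": 120.0, "dr_b": 115.0, "dr_c": 130.0,
    "dr_d": 125.0, "dr_e": 640.0,  # ~5x peers: worth a closer look
}

def flag_outliers(amounts, z_cutoff=1.5):
    """Flag billers whose amounts deviate strongly from the peer mean.
    Note: with one huge outlier in a small sample, z-scores are bounded,
    so the cutoff here is deliberately modest."""
    mu = mean(amounts.values())
    sigma = stdev(amounts.values())
    return [name for name, amt in amounts.items()
            if abs(amt - mu) / sigma > z_cutoff]

print(flag_outliers(claims))  # → ['dr_e']
```

A result like this only says where to look closer, echoing the point above: the heavy lifting is prosecuting the large operators, not flagging them.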
This is pretty much everything I was afraid of when it comes to this field - big data "prediction" being used to predispose people into crime, and by extension, algorithmic racism.
At the very least, this should give them the impression that the level of policing is much higher than it actually is. Sounds like a good deterrent to me.
The data can be wrong. This could be easily fixed if law enforcement agencies had to actually verify that the data was correct, but in practice this seems to rarely occur.
When people are told by a group that they are a thing, repeatedly, they eventually accept that they are that thing. If the cops keep harassing you for being a gangbanger when you have never been one your whole life, and you just happen to be associated with a few individuals who are, you eventually just move towards that group. Why not? The powers that be already think you are one and they treat you like one, which causes other people to treat you like one, so why not just be one?
You also need to consider the idea of enforcing against crime that hasn't happened yet. That defeats the whole point of law enforcement. It is meant as a reactive force, not an offensive one. Policing someone who has the potential for crime assumes that they will do it, which means you're essentially treating them as guilty for something they haven't done yet. From my perspective, that seems to defeat the purpose of our legal system.
Anyway, I think I will end my rant here...
The data from JPMorganChase isn't visible to Palantir as an organization, but to the bank.
This headline is weird and as accurate as “Microsoft knows everything about you” because the bank uses SQL Server. Or “the Python developers” because python and pandas is used to link a bunch of data together.
Since this is Bloomberg, I would expect them to know the difference between software and data services. Google gets your data when you use them (except for enterprise), most other products don’t.
Cynically, this seems like oppo PR against Thiel. How could you prove this suspicion?
Cloud same, but they host and promise not to do anything. Like any other enterprise cloud product.
The only data they gave us was either open sets like weather or census or stuff anyone can buy like Quintiles. So there was no “magic data set” we could mix in or something.
If you’re really paranoid, then all non-OSS software.
So again, why is Palantir more worrisome than Salesforce?
>The military success led to federal contracts on the civilian side. The U.S. Department of Health and Human Services uses Palantir to detect Medicare fraud. The FBI uses it in criminal probes. The Department of Homeland Security deploys it to screen air travelers and keep tabs on immigrants.
Where in the article is that stated?
In fact, we can probably also assume that people's Facebook data has already been inputted in Palantir's system as just another data source.
Sure that's a lot of assumptions and probabilities, but on the other hand: Peter Thiel.
My apologies in advance to offtopic haters. But this is too good to overlook. Did anybody else notice how this question relates to the origin of the name?
I can say with 100% certainty that they did not cross-reference our data with other customers, because the software not sending any data back to Palantir was a primary stipulation of our purchase. I was always under the impression that no data was shared across customers, just code, connectors, and best practices. Our installation of Palantir was 100% local, none of the infrastructure sat in a vendor cloud environment or on Palantirs home servers.
One of the primary screens in Palantir is a simple search page designed to work (and look) like Google, to search upon your entire database however you'd like. You can enhance the search function via the various attributes built into your installation or using advanced search tools. So I guess a "fishing expedition" in the traditional sense is very easy, as it's limited by the data you have, not the system itself.
And if so, is that a common practice, or is it more typical to search through all available data for anything potentially useful and weed it out later?
This term popping up again made me curious. To my surprise, that's not just some overloaded nickname (what's cooler than styling yourself with a military or martial arts term, right?), that's what they are actually called:
"forward-deployed" software engineer - works at Palantir's client's facilities
They sound cool and, to me, signal that their work is more meaningful.
The problem is that "wrong" can be anything arbitrary, and what's not "wrong" today, it can be "wrong" tomorrow, however arbitrary it is. Somewhere it is wrong (illegal) to call someone a "n*zi", even online.
Also anyone who tells me that they have nothing to hide, I ask for their password. For some reason they don't give me their password. Interesting, huh?
Mind you, I was speaking in general about people who claim they don't care about their privacy, or about these companies collecting their personal data, or having a history of their private messages, etc.
How can you not care about them saving your conversations in plain text, but then suddenly care about it when I ask you to show it to me? The only difference seems to be that I'm actively, personally, directly asking you for it, but ultimately it would be the same outcome.
We are talking in a thread about NSA having your data vs other companies having your data.
(Un)fortunately you can't be prosecuted for crimes which were not illegal at the time of the offence.
Of course this doesn't take into account activities retroactively deemed "wrong" by The Great Twitter Mob which can have some serious effect on your livelihood because you made a Richard Gere gerbil joke some 20 years ago.
As someone who recently left a very abusive, toxic workplace, I see this as a reality.
The abuser of the child, a parent, apparently harassed the agency that received and forwarded the report until they gave up that "a medical practitioner had filed the report", which narrowed it down to a single practice.
At which point then the abuser was able to call the practice, get a record of who had seen the child, and even get their work schedule.
Even tiny leaks are a major problem in these cases.
Since when is the NSA supposed to spy on their own citizens by default? They should only target individuals that represent risk anyway. (if they were respecting the constitution)
If they were respecting the constitution, they should only target individuals for which they have a warrant (assuming we're talking about US citizens). Even then, it's my understanding that this is the job of the police, rather than the military.
Why are you comfortable with casual lawlessness?
The standard for "enemy" could also be as low as they choose it to be, which is also weird that people think the government wouldn't go after them because they are "innocent", as if these people get to decide who is innocent and who isn't.
No, it's the government that does that. Courts may have the final say in the end (unless you're indefinitely detained for multiple years first), but until then the government can do a lot of damage to your life if it comes after you. Even putting you through years of lawsuits, stress and money spent on lawyers would be bad enough. You may win in the end, but it will be a Pyrrhic victory.
And that's if you don't take their plea deal first when they scare you with multiple bogus charges that you think shouldn't even apply to you but you're too scared of the 50 years in prison sentence they're threatening you with.
It seems the press of today doesn't really care about solutions, only about highlighting scandals. Am I wrong? What would one do to get them to write a story about actual solutions like solid.mit.edu or ours:
Press kit? News conference? Some kinda weird publicity stunt?
Oh, the irony!
Is it time to consider whether companies whose primary business is the collection, summation, synthesis, and repurposing of information should be de facto illegal?
It's not that having the government do this is meaningfully better, but I'm convinced it is a better overall path. The public seems more afraid of government and more willing to engage in a serious discussion over ethics. At the very least, it would enable the public to have more oversight over how such data is collected, stored, embargoed (or not), and used. The level of abstraction and fuzzing that comes from letting a non-governmental entity like Palantir or Facebook hold the data (whether as collector or aggregator) while governments are only a user seems too difficult to regulate, or even to discuss transparently, because claims of national security and secrecy are nested within claims of trade secret and intellectual property in ways that create a moat around effective oversight.
So seriously...if we made this business model illegal, what do we lose? What exceptions would need to exist?
You mean IBM/SAP/Oracle?
I've long thought that the "secret sauce" of Palantir is just that signing a Palantir contract gives executives a reason and excuse to pull together data that had never before been indexed against each other.
I mean, if you take any old dumb BI platform and give it access to data that people have not previously connected, you're going to get new insights.
Someone once described Palantir as a data mapping consultancy that marketed itself as a software platform.
I also expect this watchdog to test for data leakage from one site/company to another (e.g. by taking on the role of customer).
They should do this on a frequent basis.
Without this, companies seem to be free to do whatever they want.
This is how the fucking government gets around the Bill of Rights, primarily the 4th. When it's outsourced, you have no rights. Parallel construction needs to be the next addition to the 4th or its own amendment.
Thanks in advance!
We need to start getting ahead of worst-case scenarios and to hell with people that think that approach is paranoid.
It seems common in history that during wartime, a country aggressively pursues its own population looking for spies or enemy sympathizers. It also seems common that the country goes overboard so as not to let anyone slip through the cracks, and a lot of innocent people get swept up in the process.
Here’s a handy formula next time you ask yourself a similar question:
Why does the media X?
— Because people watch / listen to / read X.
Why do people read about companies collecting data? Because it sounds scary, and it blames someone who is not them. Fear + scapegoat + victimisation of people = read read read read.
Don’t think of it as any more complex than this or you will drown in irrelevant details. They don’t matter, and they never will. Read some 19th-century newspapers to see this effect take hold with comical clarity.
Currently, because of Facebook, there is a spike of "Company Y knows everything about you" type stories, as well as a spike of "Facebook is evil" stories. These things permeate out and interest fades over time.
They love to write about scandals and problems. But if you have an actual solution to the problem, they aren’t going to publish it.
You can also see articles on court cases about some scandalous things, but can hardly find out how the case eventually turned out. And so on.
Journalism highlights problems not solutions because that’s what gets the most “outrage reshares”. The exception is some “cool new technology” that can get a story of its own, but not as a SOLUTION associated with a lot of PROBLEM stories.
They particularly love to write about scandals and problems among their rivals. That's the real change here: journalists and bloggers have gone from thinking of Facebook (and other social media) as potential allies to thinking of them as rivals. Negativity sells, as you say, but negativity that reinforces one's view of rivals as evil is especially appealing. People whose own Google Ads revenue is declining because the eyeballs are elsewhere are very highly motivated, both consciously and unconsciously, to write about why those eyeballs are elsewhere when they clearly shouldn't be.
Did you miss the whole Facebook and election thing?
Example from another field: aircraft passengers have breathed compressor bleed air since jet engines were introduced in the 60s; everyone in the industry knows this is how it works, and yet it wasn't until last year, when major publishers such as Fortune, The Guardian, Bloomberg, The Telegraph, etc. happened to simultaneously run this issue, that a lot of the general public learned about it for the first time, most of them having assumed that cabin air was taken directly from the atmosphere.
How can I expect anything of them when I don't even know that the company exists, let alone what it does?
> Why are people surprised that Cambridge Analytica, a political consultancy, did political research and affected a voter outreach strategy? None of this requires any real background knowledge, and yet the outcry seems extremely disproportionate to the revelations at hand. It seems like the Snowden reports all over again, where the world was up in arms that an intelligence agency was collecting intelligence.
This is simplifying the discussion a lot. The general assumption about both intelligence and research was probably that they stick to rules and are concerned with their own fields.
I'd guess the average Joe would (before Snowden at least) think that an intelligence agency is concerned with foreign diplomats and military strategists, and that a political consultancy does opinion polls.
It's something different if said agencies collect intimate data on random citizens without their consent. At least in the public mind, that was something the Stasi or similar organisations did - but certainly not their own side.
The answer says quite a lot, I think.
Arguably, what happens when people are aware of surveillance is one of the worst parts of it. Following the Snowden revelations, there were noticeable chilling effects:
Says Bloomberg, and includes Facebook code on their pages
That guy is a nice piece of work.
Germany supposedly has some of the best privacy laws in the world. Their government freely abuses their people with domestic espionage. They'll continue to work with technology businesses to that end as it fits their purposes.
There's a lot of outrage about Facebook. Meanwhile, absolutely nothing has changed, or is going to change, about the NSA spying. Nothing got fixed. More or less the Snowden revelations were entirely meaningless in terms of changing the status quo. The NSA is spying on EU and US citizens just as they were five years ago, just as they will be five years from now. Where's the outrage today about the NSA? Almost everyone gave up, that's why Facebook is the new target, the people couldn't do anything about the government spying.
GDPR creates haves and have-nots when it comes to access to abuse privacy. That's all it will change. If you have the political access (which Palantir does and will), you're going to get the security keys to abuse privacy (the alternative is for EU governments to entirely shut down all domestic espionage, and that will never happen).
The EU could stand up to the US if they felt it mattered.
Other than defence partnerships the EU is no more dependent on the US than the US is on the EU.
It's a two way street.
Unless you think the US would be happy to walk away from the largest single market in the world (by GDP).
So the EU has teeth if the US pushed it too far.
It's hard to get the EU to agree on a lot of things but something sufficient to make the EU consider punishing the US would likely be important enough they would agree.
> As Thiel’s wealth has grown, he’s gotten more strident. In a 2009 essay for the Cato Institute, he railed against taxes, government, women, poor people, and society’s acquiescence to the inevitability of death. (Thiel doesn’t accept death as inexorable.) He wrote that he’d reached some radical conclusions: “Most importantly, I no longer believe that freedom and democracy are compatible.” The 1920s was the last time one could feel “genuinely optimistic” about American democracy, he said; since then, “the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”
Here's an interesting answer I found when I became curious about this question a few years ago:
2. Money is a value, so your assertion is a paradox.
edit: libertarianism is by its nature less democratic because it reduces the "kratos" part of democracy, but it is not fascist because it is not authoritarian. Anarcho-capitalism is its conclusive endgame.
Clarify "inherently". Libertarianism can be a lot of things given its primary focus on economics and contract law.
As an example of a non-democratic libertarian governance system, consider minarchism - https://en.wikipedia.org/wiki/Minarchism.