
You raise several good points, but a few clarifications here:

- In the software we deploy to police departments, the origin of every piece of information is generally tracked, along with when it was entered and by whom. The intention is to prevent this (and other) kinds of abuse. Malicious users could still attempt parallel construction, e.g. outside of our system, or in a way that makes the two actions (finding inadmissible data and finding admissible data) seem unrelated, but at that point they would be spending quite a bit of time and criminal effort on it. I don't think this happens commonly, for a variety of practical reasons, but ultimately there is no way to prevent it completely.
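Palantir's actual data model is of course not public, but the idea of per-record provenance described above can be sketched as follows; every name and field here is invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Provenance:
    """Origin of a piece of information: source, who entered it, and when."""
    source: str       # e.g. a report or document reference
    entered_by: str   # user ID of the analyst who entered it
    entered_at: datetime

@dataclass
class Record:
    content: str
    provenance: Provenance

# Every record carries its origin, so a reviewer can later ask:
# "where did this fact come from, and who put it into the system?"
rec = Record(
    content="Vehicle seen near the scene",
    provenance=Provenance(
        source="patrol report 2017-118",
        entered_by="analyst_07",
        entered_at=datetime(2017, 3, 2, 14, 30, tzinfo=timezone.utc),
    ),
)
```

Because provenance is immutable (`frozen=True`) and attached at entry time, any later claim about a record can be traced back to a specific source and user.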

Ultimately, trust that government and law enforcement (in the various shapes and forms they come in) are a force for good and prevent many bad things from happening every single day is at the heart of the Palantir philosophy. If you fundamentally think the government is evil, Palantir would probably not be the workplace for you. But once you see all of the bad things LE can prevent thanks to our software, it becomes pretty easy to believe in, even though individual bad actors exist (and always will): racist cops, cops who abuse their power, etc.

- If the organization is conducting warrantless surveillance, then this generally means there is a legislative basis for it to acquire the data in the first place. What this looks like varies from country to country; some countries regulate much more strictly than others. For instance, in some countries it is not allowed to pull information from a suspect's public Facebook profile, whereas in others this is considered A-OK. Especially in Europe, these things have recently become much stricter with the GDPR (and similar rules that apply to government entities). You'd think that would be a downside for Palantir (less data = less value?), but actually it's a big business opportunity, because our software is the only thing available in this space that comes even remotely close to having enough access-control finesse to let organizations comply with these laws. So this is an area (and competitive advantage) we are investing a lot in -- and typically organizations go from completely non-compliant when we arrive (e.g. never deleting data like license plates that are supposed to be deleted after 3 months, not audit-logging investigators' searches, not limiting search scope, ...) to being fairly or even fully compliant.
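Retention rules like "license-plate reads must be deleted after 3 months" are mechanically simple to enforce once every record carries a type and a collection timestamp. This is a toy sketch of that idea, not Palantir's implementation; the retention period and field names are invented:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention rule: the exact period varies by jurisdiction.
RETENTION = {"license_plate_read": timedelta(days=90)}

def purge_expired(records, now):
    """Keep only records still inside their retention window.
    Record kinds with no rule are retained indefinitely."""
    kept = []
    for rec in records:
        limit = RETENTION.get(rec["kind"])
        if limit is None or now - rec["collected_at"] <= limit:
            kept.append(rec)
    return kept

now = datetime(2017, 6, 1, tzinfo=timezone.utc)
records = [
    {"kind": "license_plate_read", "collected_at": now - timedelta(days=120)},
    {"kind": "license_plate_read", "collected_at": now - timedelta(days=10)},
]
remaining = purge_expired(records, now)  # the 120-day-old read is purged
```

Run periodically (and audit-logged), a job like this is the difference between "supposed to be deleted after 3 months" and actually deleted.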

Now, my personal opinion is that everyone should be able to enjoy strong privacy and control over their PII, just as citizens of the EU do. So if you live in the US or another country where the law is lax, you should consider taking political action to change the situation.

Of course we never endorse or support workflows that are in any way unconstitutional, and we have terminated relationships with very big government agencies before when we had doubts about whether our tools were being used for lawful purposes.

- The "big data" thing is a bit of a misunderstanding I think -- we have "big data" tools too, but usually these are not of interest to local law enforcement. If you check out our youtube channel you will find some (atrociously old!) videos of the tool that's popular in LLE (https://www.youtube.com/watch?v=yMv3TBxulu4 for instance). This tool is in fact often used with just hand-entered information -- analysts create objects (e.g. persons, links between them) in their investigation. So the data really isn't that big and mostly hand-curated, and the tool works fine without mass data collection (better than with, perhaps). Most of what local law enforcement does with 'big data' tools is to generate reports like crime statistics.

"Police departments don't just have random chunks of data lying around containing large numbers of connections between large numbers of people" -- actually, they really do! Any police department of any size that has existed for a while will have a database with millions of convictions, suspects, court-cases etc in them. Usually on some crufty mainframe or in some crufty old SQL database that contains a lot of terribly inconsistent data (dead links, data duplication, etc.) Also sometimes some of this data is shared between countries, states etc. You can find some videos on youtube of our CEO talking about the challenges of data integration and such.

Ultimately, of course, there is a general statistical trend (belief?) that 'more data is better': with more, and more complete, data you have a bigger chance of finding the connection between, say, a terrorist and a billionaire who might be funding said terrorist. This holds at least up to a point (beyond which the data probably becomes too noisy and hard to deal with, because each individual piece of information carries very little meaning; the NSA seems to subscribe to this belief according to its public statements, and if anyone knows about this stuff, it's them), and it assumes you have the necessary CPU power and talent (data scientists) to actually do something useful with the data. Local law enforcement orgs like police departments generally lack the latter. In police departments, most users are really only sophisticated enough to run searches for things like names, SSNs, number plates, etc.

- We generally know a lot about how our software is used at most of our installations, and we almost always actively take steps to prevent abuse (auditing etc.). Most government organizations will not let us see things like what exactly their analysts searched for (because that is obviously sensitive information), but we do a lot of work to help organizations prevent abuse and insider threats at a higher level. As mentioned above, we have taken action in the past in situations where we suspected abuse. It's handled case by case, of course -- if there is, say, one analyst abusing the system, then that analyst being fired or reprimanded would probably be sufficient action (obviously that is not our responsibility), but if we suspect systematic abuse, or we see signs of repeated abuse without repercussions, then we might pull out completely.

Thank you for participating here.

"Ultimately trust in the government and that law enforcement ... is a force for good and prevents many bad things from happening every single day, is at the heart of the palantir philosophy."

Trust is built on transparency, accountability, responsibility.

Can I make FOIA requests to Palantir?

FOIA requests are not made to companies; they are made to government entities. But yes, of course you can make FOIA requests to the government entities we work with, about the work we do with them (the law guarantees you this right) -- reporters do this all the time.

In some of the more liberal countries, the things we do with the government are even part of the public record, and you can find the contracts and services we provide documented online without having to file any kind of request.

Yep, and they get squashed for "national security" reasons because learning how gangbangers are tracked will also let the $current_enemy_in_the_war_on_terror know how they are being selected for drone strikes.

People successfully FOIA Palantir-related contracts all the time, especially in local law enforcement. If you try to FOIA certain other government institutions, then yes, you might get that answer.

Of course Palantir has no input on that in any way. Certain government agencies and activities are exempt from FOIA (and for very good reasons), but if you think it should not be that way (or that the rules should be different), then you should take political action.

How do I determine everything Palantir knows about me, and all the ways they've used and shared that data?

That is like asking what Oracle or PostgreSQL knows about you. The question doesn't make sense.

Palantir is software deployed for specific customers. The customers then use Palantir on their own data and on publicly available data.

Thank you for the clarification. I had always assumed Palantir, like their competitors (Seisint, ChoicePoint, etc.), did both services and software.

I too would like to know the answer to this question. Is it being selectively ignored?

When I send a FOIA request to [insert law enforcement agency here] and ask them to provide me with the source code and data used to classify me as a criminal so that I can determine whether the chain of reasoning holds up, and the law enforcement agency then needs to turn to Palantir to request that source code and data, what response will they get?

I think there are a couple of misconceptions about FOIAs, our software and the law in general here...

- FOIA requests cannot be used to obtain specific information about persons; this kind of information is protected. In the EU, under the GDPR, you can get some of this information (about yourself only) -- but I think law enforcement agencies are probably exempt from this (not sure, but it would make sense).

- Our software does not make decisions about who is or is not a criminal in an automated fashion (or indeed any fashion). In fact, law enforcement cannot make such a decision at all, ever -- only a judge in a court can deem you a criminal. Law enforcement can merely decide that you are a suspect. (And again, this is also not something our software, or any software anywhere, decides in an automated fashion.) So there isn't really a 'chain of reasoning' here.

- When you actually ARE accused of a crime (this would happen in a court, and only if there is substantial evidence against you), you will be told what the crime is and why you are suspected of it. Thus the 'chain of reasoning' will be presented to you and be available for verification by anyone participating in the court case (including, depending on the country, a jury).

There have been court cases already wherein, for example, a person was stopped for a DUI and subjected to a breathalyzer test. The defense lawyer in that case requested access to the source code of the firmware running on the breathalyzer device, in order to determine whether it was even a reliable source of accusatory evidence. It is under that sort of situation that I imagine a law enforcement body being led to turn to Palantir to request the full source code of the Palantir system, a description of its algorithms, the data it uses, etc. In the breathalyzer case, the source code was actually turned over, and they discovered it to be of deplorable quality, with "features" such as returning an illegally-drunk reading on any internal failure, without otherwise indicating a fault.

In another similar case, someone issued a citation for speeding by an automated camera system requested the source code of the firmware running on that system. The court agreed this was a reasonable request and issued an order demanding the police department turn it over. The police went to the company they had bought the device from and discovered that they had no right to request the source. They had no right to know how the system they'd purchased, and were using to charge people with infractions, even operated. I expect this is what would happen with Palantir.

Here in Belgium, every speeding fine you get mentions the number of the calibration report of the camera. Supposedly you can request this report, and if it's absent, incorrect, or outdated, you don't pay the fine.

> But once you see all of the bad things LE can prevent thanks to our software, it becomes pretty easy to believe in, even if individual bad actors exist (and always will), like racist cops, cops that abuse their power, etc

NO! The ends don't justify the means. If we give up our freedoms, LE can prevent a lot of bad things -- but it isn't worth it. Abuse of power is one reason we have checks and balances. If I understand correctly, here checks and balances and due process are thrown out the window, and the justification is simply that LE prevented some bad things... but at what cost?

Thank you for this honest and straight-forward response, it was eye-opening to hear the perspective from the "inside". It helped me to have a more balanced view of the issues under discussion.

> Ultimately trust in the government and that law enforcement (in the various shapes and forms it comes in) is a force for good and prevents many bad things from happening every single day, is at the heart of the palantir philosophy.

This is disingenuous at best. This is a tool for surveillance, and its misuse in public life is more obvious than the advantages it might bring. People will naturally distrust a government that spies on them constantly, and that has nothing to do with good or bad government.

There is an episode of Black Mirror about a mother who surveils her daughter constantly. If the mother is good, why do we feel disgusted by her attitude?

I applauded Apple for publicly denying the government a backdoor into their phones.

It is unclear to me what you are saying here. It is a well-known fact (among the intelligence community) that our software has been used to save tens of thousands of lives, prevent hundreds (thousands?) of terrorist attacks, save billions of dollars in fraud, etc. Among the kinds of projects governments undertake in an attempt to prevent {crime,fraud,terrorism,...}, measured by how much bang you get for your buck (where 'buck' might be how much data collection you have to do), I think you won't find many alternatives that compare favourably. (But then, I might be biased, of course.) In fact, as I described, in many agencies the situation actually improves drastically once they start using Palantir software, since they already had all the data all along but weren't properly using it as the law intends them to.

Everyone knows that the various forms of law enforcement perform some measure of surveillance (and have to, to do their job), ranging from e.g. ALPR cameras to patrolling the streets of a city. Most people agree that this is perfectly acceptable. Do you think instead that the police should be abolished? Or stripped of their investigative or enforcement powers? I don't think most people would actually enjoy living in whatever society would result from that.

Of course the exact boundaries of what is acceptable and what is not should be fiercely debated.

> Of course the exact boundaries of what is acceptable and what is not should be fiercely debated.

What makes most people uneasy, especially those of us who work in tech, is that many police departments and three-letter agencies have routinely abused the trust that people put in them. If you want the citizenry to have trust and faith in law enforcement, those agencies have to demonstrate a willingness not to break constitutional rights whenever they deem fit.

I genuinely don't understand why these agencies wouldn't try to build a good relationship with the citizenry they are supposed to be protecting. It often seems like we live in a very dangerous society where everything we say or do is being monitored by someone and may be used maliciously. That is itself a terrible thing to live with.

I think it's important to distinguish between individual abuse and systemic abuse.

Every one of those agencies has had, in its history, at least some cases of individual abuse (e.g. one particular individual abusing their power). This is unavoidable, because these agencies employ humans, and humans are sometimes assholes. But as long as the situation is handled appropriately at the organizational level (the responsible party is reprimanded appropriately -- let go, tried, ... -- and practices and protocols are put in place to deal with and prevent these kinds of situations), this is OK, because the agency operating is still a net win for humanity.

Most people who are paranoid about these kinds of things are so because the media loves to pick up these stories, hype them up, and spread fear -- but I don't think it's that much of a problem.

The thing that's really a worry (and that the media usually does not report on) is abuse that happens in a systemic fashion, e.g. agencies bending the interpretation of the law to extend their reach. Incidentally, this is exactly what Palantir's software is designed to help with, because it can actually enforce these rules.

That is an excellent point. It's much more sensational when you have a few bad apples, but exposing rot within an organization isn't as "newsworthy". Besides, if the corruption spreads far enough, it might be able to actively silence its critics (e.g. Nixon's shenanigans).

Perhaps a systemic fear of Government overreach is embedded in the DNA of the American people. I would argue that a healthy criticism is very important; but paranoia can be dangerous and unproductive.

> our software has been used to save tens of thousands of lives, prevent hundreds of terrorist attacks...

This is exactly what I mean by advantages vs. disadvantages: these look like stapled platitudes from the marketing team. It looks like you are conflating war-zone scenarios with civil society, a bit like promoting an M1 Abrams for the New York police because of its effectiveness in Iraq. If you aren't, then I'm baffled by how many terrorist attacks were supposedly thwarted, and incredulous. There haven't been thousands of terrorist attacks in civil society since before the software was created.

> Or stripped of their investigative or enforcing power?

So it's either all the power or none of the power? This is a logical fallacy: you are creating a duality that does not exist. The police are not the same thing as your software, as you implied. I'm against the government spying on its citizens, even with good intentions. See the Black Mirror episode, since it demonstrates where the problem lies: not in good intentions, but in the control.

Does this software help the government spy on its citizens? Yes. Should citizens have a say in whether this software is used on their data? Yes -- so why don't they have a say? Can we live without this software? Yes, as we have so far. Is this software helpful to a police state? I would say it's the most valuable software a police state could have.

No, most aware people do not agree with your anti-privacy view.

> we have terminated relationships before with very big government agencies when we had doubt about whether our tools would be used for unlawful purposes.

Let that sink in for a moment, folks. Then go back and read this other part.

> trust in the government ... is at the heart of the palantir philosophy

So, at Palantir they trust the government that they know is trying to use their product unlawfully. With all due respect, I think many of us are a little more sparing with our trust.

Not every government in every country is of the same quality in terms of e.g. corruptibility and willingness to improve. That's just how it is. It's not too different from other companies that are not expanding into certain regions because they don't consider the region politically stable or developed enough, for instance.

That doesn't contradict the belief that law enforcement is necessary and good.
