
I'm much less concerned with Palantir being used internally by corporations to detect information leaks (although in practice this also means it will be used to quash whistleblowers), since that's at least a viable use for their technology. And it's good that Palantir is pivoting towards less invasive applications of their technology such as supply chain tracking.

But the article touches on the biggest issue with Palantir, which is that they seem to be completely fine with allowing law enforcement agencies to use their software for parallel construction, which is in my opinion a pretty egregious violation of the 4th amendment. Although it may be constitutional for police departments to purchase data from e.g. Facebook or some data broker, even local police departments are known to employ dragnet-style surveillance through things like Stingray devices. Of course we have a number of three-letter agencies, also Palantir customers, who don't even need to pretend that they're following the 4th amendment.

Since Palantir is essentially enabling the construction of all the tools required for a complete surveillance state, and actively using their tools for that purpose for financial gain, I think that makes them one of the least ethical companies in tech. I certainly wouldn't be able to live with myself if I knew I was writing software with the explicit purpose of being able to track down rebels in Yemen, catch illegal immigrants, and violate the constitutional rights of millions of Americans.




Thiel is an exercise in contradiction. "In his view, the best way to avoid such scenarios in the future would be to provide the government the most cutting-edge technology possible and build in policing systems to make sure investigators use it lawfully."

Prevent the police state by building the police state.


Is it really? I think what he's trying to say is that we should have tools powerful enough to let the Government find/track people efficiently and correctly as long as there is a lawful reason for doing so. It's similar to the weak crypto arguments, where Govt. agencies argue they should be able to break crypto protocols via a backdoor when there is a lawful reason to do so.

That being said: the PATRIOT Act and the NSA's PRISM program have shown just how blatantly the US Govt. can abuse individual privacy and rights, so I don't think it's a good idea. Weak crypto using backdoors is just a terrible idea, period.

I guess what I'm trying to say is that the logic makes sense in principle even though in practice it wouldn't work that way.


Perhaps what he's trying to say is, "I like money and power."

"The logic" in this case is a proposition about how beings behave. That is, "We can build the largest prison system in history, give law enforcement unlimited surveillance powers and not expect massive injustices." That is simply wrong. Human beings are not like that.


For someone as deeply entrenched in tech as Thiel, you would think the "in practice" part would be quite clear to him.


Honestly I'd really like Thiel or someone else to explain this. He strikes me as too smart to be that dumb.


And then hire Palantir to monitor the police as well.


Contradiction? You’re too kind, he’s a straight up hypocrite. He used “privacy” to kill Gawker, in part because he didn’t like their coverage of his ventures.


> I certainly wouldn't be able to live with myself if I knew I was writing software with the explicit purpose of being able to track down rebels in Yemen, catch illegal immigrants, and violate the constitutional rights of millions of Americans.

Perhaps open-source licenses should explicitly exclude use in certain applications. Even if not legally enforceable, it would serve as a constant reminder to developers who DO work in these fields of what they are doing.


Something along the lines of "The Software shall be used for Good, not Evil"? ;) https://tanguy.ortolo.eu/blog/article46/json-license


When I was at IBM the story about IBM legal asking for permission to use JSLint for evil was a classic. The original post appears to be dead, but https://news.ycombinator.com/item?id=5138866 discusses it.


The Internet Archive has an archived copy of the blog post, which links to the original source at

https://www.youtube.com/watch?v=-C-JoyNuQJs&feature=player_d...


Wow 2002 was so long ago. We had no idea.

What am I saying? Most of us still have no idea.


s/shall/should rather/

This raises some good points, but that was to be expected as licenses are a tricky business.

I do think our profession needs more awareness of ethics, and a usage suggestion (rather than an order) in the license of software may be a good start.


Reminds me of how Google bans its employees from using code released under certain licenses: https://opensource.google.com/docs/thirdparty/licenses/#bann...

I wonder if Palantir has any similar policies?


I find it interesting that Google seems to allow GPL (though they ban AGPL). I know at many companies the GPL is an anathema that would be the first entry on a list like this one.


IANAL, but my understanding is that the GPL only requires you to release source code if you "distribute" the resulting binary. As a web company, most of Google's code would not be considered distributed, only the output of that code. The AGPL was created to close that gap for web companies, which is probably the reason Google bans it.


They certainly are legally enforceable, and some software does this already. I can't recall exactly which software it was, but I know I have encountered, multiple times, software licenses which forbid the software's use by military organizations or for military purposes, or which forbid government use. Licenses can contain practically anything you like. And thus far, courts haven't been restrictive at all in terms of what can be included.


Can you expand on exactly how Palantir is used for parallel construction?


Law enforcement uses dragnet warrantless surveillance to gather data (such as who is calling whom, who is in physical proximity with whom). That data is loaded into Palantir's software.

Law enforcement uses that data to acquire admissible evidence of the perpetrators' crimes. This is the crucial step, because although the original evidence couldn't be admissible in court due to its collection without a warrant, it can be used to acquire more evidence. If the warrantless surveillance includes information such as the time and place of where you plan to commit a crime, law enforcement can stop you for a random check on the way there, giving them actual evidence for prosecution. Now that evidence can be used in court, even though the only way law enforcement were able to acquire it was through unconstitutional practices.

The thing about Palantir is that it needs big data to work. Police departments don't just have random chunks of data lying around containing large numbers of connections between large numbers of people - they might have something like that for suspected criminal gangs, but that wouldn't be enough data to make Palantir's software worthwhile. So acquiring the data necessary to use Palantir's software requires mass data collection. Some of their mass data collection techniques may not require a warrant, such as license plate captures, but some do, such as records of who you're calling.

So does Palantir perform warrantless surveillance themselves? To my knowledge, no. But when they sell their software to law enforcement agencies, they must know that the only way their customer could get bang for their buck is through mass surveillance. In my opinion, the person willingly and knowingly selling tools for oppressive purposes holds a lot of blame for the actual oppression that occurs as a result.


You raise several good points, but a few clarifications here:

- In our software that we deploy to police departments, the origin of any piece of information is generally tracked, as well as when it was entered and by whom. The intention is to prevent this kind of abuse (and others). Malicious users can still abuse the system and do parallel construction, e.g. outside of our system, or in a way that makes the two actions (finding non-admissible data and finding admissible data) seem unrelated, but at that point they would be spending quite a bit of time and criminal energy on it. I don't think this happens commonly, for a variety of practical reasons (but ultimately there's no way to completely prevent it).
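
To make that concrete, here's a toy sketch of what per-record provenance might look like (my own illustration for this comment, not our actual data model):

    # Toy illustration of per-record provenance: what a fact is, where it came
    # from, who entered it, and when. Not an actual product schema.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class ProvenanceRecord:
        value: str            # the piece of information itself
        source: str           # origin, e.g. "case file 2017-113" or "ALPR feed"
        entered_by: str       # analyst or system account that entered it
        entered_at: datetime  # when it was entered

    r = ProvenanceRecord(
        value="plate ABC-1234 observed on Main St",
        source="ALPR camera 7",
        entered_by="analyst.jdoe",
        entered_at=datetime(2018, 4, 20, 14, 32),
    )

With something like this attached to every datum, a conclusion drawn inside the system can at least be traced back to who entered which fact, and when.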

Ultimately, trust that the government and law enforcement (in the various shapes and forms they come in) are a force for good and prevent many bad things from happening every single day is at the heart of the Palantir philosophy. If you fundamentally think the government is evil, Palantir would probably not be the workplace for you. But once you see all of the bad things LE can prevent thanks to our software, it becomes pretty easy to believe in, even if individual bad actors exist (and always will), like racist cops, cops that abuse their power, etc.

- If the organization is acquiring data through warrantless surveillance, then this generally means there is a legislative basis for them to acquire that data in the first place. What this looks like varies from country to country; some countries are much stricter with the regulations than others. So for instance in some countries it is not allowed to get information from a suspect's public Facebook profile, whereas in other countries this is considered A-OK. Especially in Europe these things have recently become much more strict with GDPR (and similar rules that apply to government entities). You'd think that would be a downside for Palantir (less data = less value?), but actually it's a big business opportunity, because our software is the only thing available in this space that is even remotely close to having enough access control 'finesse' to enable organizations to be compliant with these laws. So this is an area (and competitive advantage) we are investing a lot into -- and typically organizations go from 'completely non-compliant' when we arrive (e.g. never deleting data like license plates that are supposed to be deleted after 3 months, no audit-logging of the searches investigators run, no limits on search scope, ...) to being fairly or even fully compliant.
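
As an illustration only (assumed names and thresholds, not a description of our product), a retention rule like "plate reads deleted after 3 months" plus audit-logged searches boils down to something like:

    # Illustration only: a 90-day retention rule and an audit-logged search wrapper.
    import logging
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=90)  # e.g. plate reads must be deleted after 3 months

    def purge_expired(plate_reads, now=None):
        """Keep only reads still inside the retention window."""
        now = now or datetime.utcnow()
        return [r for r in plate_reads if now - r["captured_at"] <= RETENTION]

    audit_log = logging.getLogger("search_audit")

    def run_search(user, query, search_fn):
        """Record who searched for what before running the search."""
        audit_log.info("user=%s query=%r at=%s", user, query, datetime.utcnow().isoformat())
        return search_fn(query)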

Now my personal opinion is that everyone should be able to enjoy great privacy and control over their PII just like citizens of the EU do, so if you live e.g. in the US or another country where the law is lax, you should consider taking political action to change the situation.

Of course we never endorse or support any workflows that are in any way unconstitutional, and we have terminated relationships before with very big government agencies when we had doubt about whether our tools would be used for unlawful purposes.

- The "big data" thing is a bit of a misunderstanding I think -- we have "big data" tools too, but usually these are not of interest to local law enforcement. If you check out our youtube channel you will find some (atrociously old!) videos of the tool that's popular in LLE (https://www.youtube.com/watch?v=yMv3TBxulu4 for instance). This tool is in fact often used with just hand-entered information -- analysts create objects (e.g. persons, links between them) in their investigation. So the data really isn't that big and mostly hand-curated, and the tool works fine without mass data collection (better than with, perhaps). Most of what local law enforcement does with 'big data' tools is to generate reports like crime statistics.

"Police departments don't just have random chunks of data lying around containing large numbers of connections between large numbers of people" -- actually, they really do! Any police department of any size that has existed for a while will have a database with millions of convictions, suspects, court-cases etc in them. Usually on some crufty mainframe or in some crufty old SQL database that contains a lot of terribly inconsistent data (dead links, data duplication, etc.) Also sometimes some of this data is shared between countries, states etc. You can find some videos on youtube of our CEO talking about the challenges of data integration and such.

Ultimately of course there is a general statistical trend (belief?) that 'more data is better', because if there is more and more complete data, you have a bigger chance of finding that connection between e.g. a terrorist and some billionaire who might be funding said terrorist. That holds at least up until a point (at which the data probably becomes too noisy and hard to deal with, because every individual piece of information carries very little meaning), and assuming you have the necessary CPU power and talent (data scientists) to actually do something useful with the data. The NSA seems to subscribe to this belief according to their public statements, and I'd think if anyone knows about this stuff, it's them. Local law enforcement orgs like police generally lack the latter. In police departments, most users are only sophisticated enough to run searches for things like names, SSNs, number plates etc, really.

- We generally know a lot about how our software is used at most of our installations, and almost always actively take steps to prevent abuse (auditing etc). Most government organizations will not let us see things like what exactly their analysts searched for (because that is obviously sensitive information), but we do a lot of work to help organizations prevent abuse and insider threats at a higher level. As mentioned above, we have taken action in the past in situations where we suspected abuse. It's on a case-by-case basis, of course -- imagine there is e.g. one analyst abusing the system; then that analyst getting fired/reprimanded would probably be sufficient action (obviously this is not our responsibility), but if we suspect systematic abuse, or we see signs of repeated abuse without repercussions, then we might pull out completely.


Thank you for participating here.

"Ultimately trust in the government and that law enforcement ... is a force for good and prevents many bad things from happening every single day, is at the heart of the palantir philosophy."

Trust is built on transparency, accountability, responsibility.

Can I make FOIA requests to Palantir?


FOIAs are not made to companies, they are made to government entities. But yes, of course you can make FOIA requests to government entities we work with (the law guarantees you this privilege), and about the work we do with them -- reporters do this all the time.

In some of the more liberal countries the things we do with the government are even part of public record and you can find contracts and services we provide them etc documented online without having to do any kind of request.


Yep, and they get squashed for "national security" reasons because learning how gangbangers are tracked will also let the $current_enemy_in_the_war_on_terror know how they are being selected for drone strikes.


People successfully FOIA Palantir-related contracts all the time, esp. in local law enforcement. If you try to FOIA certain other government institutions, then yes, you might get that answer.

Of course Palantir has no input on that in any way. Certain government agencies and activities are exempt from FOIA (and for very good reasons), but if you think it should not be that way (or that the rules should be different), then you should take political action.


How do I determine everything Palantir knows about me, and all the ways they've used and shared that data?


That is like asking what Oracle or PostgreSQL knows about you. It doesn't make sense.

Palantir is software deployed for specific customers. The customers then use Palantir against their data and publicly available data.


Thank you for the clarification. I had always assumed Palantir, like their competitors (Seisint, ChoicePoint, etc), did both services and software.


I too would like to know the answer to this question. Is it being selectively ignored?


When I send a FOIA request to [insert law enforcement agency here] and ask them to provide me with the source code and data used to classify me as a criminal so that I can determine whether the chain of reasoning holds up, and the law enforcement agency then needs to turn to Palantir to request that source code and data, what response will they get?


I think there are a couple of misconceptions about FOIAs, our software and the law in general here...

- FOIAs cannot be used to obtain specific information about persons, this kind of information is protected. In the EU under the GDPR however, you can get some of this information (about yourself only) -- but I think law enforcement agencies are probably exempt from this (not sure, but would make sense.)

- Our software does not make decisions on who is or is not a criminal in an automated fashion (or indeed any fashion). In fact, law enforcement cannot make such a decision at all, ever -- only a judge in a court can deem you a criminal. Law enforcement can merely decide that you are a suspect. (And again, this is also not something our software (or any software anywhere) decides in an automated fashion.) So there isn't really a 'chain of reasoning' here.

- When you actually ARE accused of a crime (this would happen in a court, and only if there is substantial evidence against you), you will be told what the crime is and why you are suspected of the crime. Thus the 'chain of reasoning' will be presented to you, and be available for verification by anyone participating in the court case (including, depending on country, e.g. a jury)


There have been court cases already wherein, for example, a person was stopped for a DUI and subjected to a breathalyzer test. The defense lawyer in that case requested access to the source code of the firmware running on the breathalyzer device in order to determine whether it was even a reliable source of accusatory evidence. It is under that sort of situation that I imagine a law enforcement body being led to turn to Palantir to request the full source code of the Palantir system, a description of its algorithms, the data it uses, etc. In the breathalyzer case, the source code was actually turned over, and they discovered it to be of deplorable quality, with behavior such as returning an illegally-drunk answer on any internal failure, without otherwise indicating a fault.
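
The reported failure mode amounts to roughly the fail-unsafe pattern sketched below (my paraphrase in Python for illustration; the actual firmware obviously wasn't written like this):

    # Paraphrase of the reported failure mode: on any internal error,
    # report an over-the-limit reading with no fault indication.
    LEGAL_LIMIT = 0.08

    def report_bac(sensor):
        try:
            return sensor.sample()       # normal path: return the measured value
        except Exception:
            return LEGAL_LIMIT + 0.01    # internal failure: silently report "illegally drunk"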

In another similar case, someone issued a citation for speeding by an automated camera system requested the source code for the firmware running on that system. The court agreed this was a reasonable request and issued a court order demanding the police department turn it over. The police went to the company they got the device from and found that they had no right to request the source. They had no right to know how the system they'd purchased and were using to charge people with infractions even operated. I expect this is what would happen with Palantir.


Here in Belgium, every speeding fine you get mentions the number of the calibration report of the camera and supposedly you can request this report and if it's absent, incorrect or outdated you don't pay the fine.


> But once you see all of the bad things LE can prevent thanks to our software, it becomes pretty easy to believe in, even if individual bad actors exist (and always will), like racist cops, cops that abuse their power, etc

NO! The ends don't justify the means. If we give up our freedoms, LE can prevent a lot of bad things. But it isn't worth it. Abuse of power is one reason we have checks and balances. If I understand correctly, checks and balances and due process are thrown out the window. And the justification is simply that LE prevented some bad things... But at what cost?


Thank you for this honest and straight-forward response, it was eye-opening to hear the perspective from the "inside". It helped me to have a more balanced view of the issues under discussion.


> Ultimately, trust that the government and law enforcement (in the various shapes and forms they come in) are a force for good and prevent many bad things from happening every single day is at the heart of the Palantir philosophy.

This is disingenuous at best. This is a tool for surveillance, and its misuse in public life is more obvious than the advantages it might bring. People will naturally distrust a government that spies on them constantly, and that has nothing to do with good or bad government.

There is an episode of Black Mirror about a mother who surveils her daughter constantly. If the mother is good, why do we feel disgusted by her attitude?

I applauded Apple for publicly denying the government a backdoor into their phones.


It is unclear to me what you are saying here. It is a well-known fact (amongst the intelligence community) that our software has been used to save tens of thousands of lives, prevent hundreds (thousands?) of terrorist attacks, save billions of dollars in fraud, etc. In terms of most of the kinds of projects governments do in an attempt to prevent {crime,fraud,terrorism,...}, on a scale of how much bang you get for your buck (where 'buck' might be how much data collection you have to do), I think you won't find many other alternatives that compare favourably. (But then, I might be biased, of course.) In fact, as I described, in many agencies the situation actually drastically improves once they start using Palantir software, since they already had all the data all along, but weren't properly using it as the law intends them to.

Everyone knows the various forms of law enforcement perform some measure of (and have to, to do their job) surveillance. That ranges from e.g. ALPR cameras to patrolling the streets of some city. Most people agree that this is a perfectly acceptable thing. Do you think instead the police should be abolished? Or stripped of their investigative or enforcing power? I don't think most people would actually enjoy living in whatever society would result from this.

Of course the exact boundaries of what is acceptable and what is not should be fiercely debated.


> Of course the exact boundaries of what is acceptable and what is not should be fiercely debated.

What makes most people uneasy, and especially those of us who work in tech, is that many police departments and three-letter agencies have routinely abused the trust that people put in them. If you want the citizenry to have trust and faith in law enforcement, it has to demonstrate that it won't violate constitutional rights whenever it sees fit.

I genuinely don't understand why these agencies wouldn't try to build a good relationship with the citizenry they are supposed to be protecting. It often seems like we live in a very dangerous society and everything we say or do is being monitored by someone and may be used maliciously. That itself is a very terrible thing to live with.


I think it's important to distinguish between individual abuse and systemic abuse.

Every one of those agencies has had, in its history, at least some cases of individual abuse (e.g. one particular individual abusing their power). This is unavoidable (because these agencies employ humans, and humans are assholes sometimes), but as long as the situation is appropriately handled at the organizational level (the responsible party is reprimanded in an appropriate way, e.g. let go, tried, ..., and practices and protocols are put into place to deal with and prevent these kinds of situations), this is OK, because the agency's continued operation is still a net win for humanity.

Most people who are paranoid about these kinds of things are so because the media loves to pick up these stories, hype them up, and spread fear -- but I don't think it's that much of a problem.

The thing that's really a worry (and that the media usually does not report on) is when abuse happens in a systemic fashion, e.g. agencies bending the interpretation of the law to extend their reach. Incidentally, this is exactly what Palantir's software is designed to help with, because it can actually enforce these rules.


That is an excellent point. It's much more sensational when you have a few bad apples, but exposing a rot within an organization isn't as "newsworthy". Besides, if the corruption spreads far enough, it might be able to actively silence its critics (e.g. Nixon's shenanigans).

Perhaps a systemic fear of Government overreach is embedded in the DNA of the American people. I would argue that a healthy criticism is very important; but paranoia can be dangerous and unproductive.


> our software has been used to save tens of thousands of lives, prevent hundreds of terrorist attacks...

This is exactly what I mean by advantages vs disadvantages: these look like stapled platitudes from the marketing team. It looks like you are conflating war-zone scenarios with civil society, a bit like promoting an M1 Abrams for the New York police due to its effectiveness in Iraq. If you aren't, then I'm baffled by how many terrorist attacks we've thwarted, and incredulous. There haven't been thousands of terrorist attacks in civil society before the software was created.

> Or stripped of their investigative or enforcing power?

So it's either all the power or none of the power? This is a logical fallacy: you are creating a duality for something that does not exist. The police is not the same as your software, like you implied. I'm against the government spying on its citizens even with good intentions. See the Black Mirror episode, since it demonstrates where the problem lies: it's not in good intentions, it's in the control.

Does this software help the government spy on its citizens? Yes. Should the citizens have a say in whether they want this software to be used on their data? Yes, so why don't they have a say? Can we live without this software? Yes, like we've done so far. Is this software helpful in a police state? I would say it's the most valuable software for a police state.


No, most aware people do not agree with your anti-privacy view.


> we have terminated relationships before with very big government agencies when we had doubt about whether our tools would be used for unlawful purposes.

Let that sink in for a moment, folks. Then go back and read this other part.

> trust that the government ... is at the heart of the Palantir philosophy

So, at Palantir they trust the government that they know is trying to use their product unlawfully. With all due respect, I think many of us are a little more sparing with our trust.


Not every government in every country is of the same quality in terms of e.g. corruptibility and willingness to improve. That's just how it is. It's not too different from other companies that are not expanding into certain regions because they don't consider the region politically stable or developed enough, for instance.

That doesn't contradict the belief that law enforcement is necessary and good.



> If the warrantless surveillance includes information such as the time and place of where you plan to commit a crime, law enforcement can stop you for a random check on the way there, giving them actual evidence for prosecution.

Why is that a bad thing?

If the crime is victimless (such as any drug usage "crime"), then the issue isn't with the police, but with the state defining what "crime" is, and that definition doesn't align with the people's definition.


Selective enforcement. Practice surveillance of the population. Select political opponent, as desired. Consult surveillance data. By random chance, sometimes you will find something. Parallel construct a case. Proceed to blackmail/extort/convict. Maintain political power. Repeat.


There was no probable cause for you to be under surveillance prior to the "random" stop.


Some people just get more randomly selected than others, a lot more.


Any interaction with the police is a potentially violent interaction with the State. Even if that were not the case, limiting infringement on your rights as much as possible is a good thing, and the government has limitations placed on it for this reason. Stopping people for 'random' checks may violate those limitations.


I wouldn't blame the tools here. When law enforcement needs something done, they'll get it done, justice be damned. Before Palantir, law enforcement would have just spot checked every single car passing through an area, under the guise of a Ride Program.

It's like billboards vs. targeted ads. Both have the same effect (pushing you to buy stuff), but the latter cuts out all the noise.


I see other responses explaining how surveillance data, itself inadmissible in court, can lead people to evidence that is admissible. That's great, but it still makes it seem like only criminals are at risk. Far from it. It's important to remember that this same parallel construction can be used to strengthen false allegations or convictions, allegations or convictions under an unjust or unreasonable law (e.g. marijuana prohibitions), or discriminatory application of a proper law. Got a political opponent? Use Palantir to discover some little peccadillo, then engage the police/court machinery either to get an actual conviction or - almost as good - a whole bunch of negative PR. It's not just criminals. This kind of thing endangers everyone.


Let's say I'm law enforcement and I have some information obtained in a way which wouldn't stand up even in the most farcical of courtrooms.

What I can do is use surveillance tools to find a way to effectively entrap you, to gather evidence. Think of it like a tip-off that you're not allowed to have.

Palantir specialises in providing tools to analyze and aggregate big data. If law enforcement has an interest in it, it's probably for dragnet-style surveillance. The problem is that the 4th amendment means there must be reasonable suspicion before the investigation can begin.

Most dragnet-style surveillance is incompatible with the 4th amendment. So law enforcement has to construct a reason for investigating the case that they should never have been investigating.


If the data was sourced from public information that the target willingly posted online, does law enforcement even need to use parallel construction? Couldn’t they use the palantir data to obtain a warrant for more thorough searches?

I too have a problem with business models that invade our privacy. But I think it’s a bit disingenuous to conflate “private information” with information that you voluntarily posted on the internet.

Whatever happened to educating people that what they post on the internet is permanent? Dragnets over public data are a symptom of the real problem, which is lack of user education and understanding of what data they generate and share.


I wasn't even thinking about public data. There's a lot of data that law enforcement already collects. What about license plate data, data from IMSI catchers, and leaked data? Add in the publicly available online data and you can supposedly build up quite a profile about someone.

What if somebody is arrested and happens to have a very convenient data set on a lot of people, such as data used for marketing purposes. Is it okay if those people are stopped and searched on the basis that somebody else held data about them and the police have supposed that 10% of the clients are suspected of criminality?

Normally there is supposed to be evidence that the crime took place before an investigation can begin.

If law enforcement starts with the evidence and then look for the crime that fits it, that's something different.


You bring up a good point. Palantir is not just mining public data, but also organizing data that law enforcement provides to them (like license plate readings). But if law enforcement already has that data, is the problem that Palantir organizes it, or that law enforcement collects it in the first place?


I think you may have causality reversed. Without Palantir, law enforcement would not only have no use for the data, it probably wouldn't be able to even collect it in a meaningful sense, in terms of technical and financial resources.

Palantir goes to the agency and says, "Give us data and we'll give you 'actionable intelligence'."

You know, back in the 90s and early 2000s, the term "data mining" was pejorative. Financial operators would look through their mass of data for correlations and sell them to people who would discover that they didn't actually work.


This is not in the least accurate...

1) Yes, LE do already have the data.

2) Agencies don't 'give' data to Palantir; law enforcement data almost always lives in air-gapped environments it cannot leave in any way (or only in highly restricted ways).

3) We don't 'give' actionable intelligence; we just provide tools for analysts to generate such data. Palantir is a company of software engineers, not of intelligence analysts.


You seem to be quite defensive of your choice to work for a company that builds spy software and sells it to anyone with the money to buy it. Are you trying to convince us your employer is good, or yourself? It's most certainly an ethical grey area, but you try hard to make it seem benignly white. These engineers know what the software is being used for; they can act innocent all they want, but what happens when this tool is used for bad? Can you guarantee that it won't be? It seems like your teams are turning a blind eye in exchange for large paychecks and doing some fine mental gymnastics to justify it all.


I would argue that law enforcement would have no use for collecting such vast amounts of data without a service like Palantir. So the blame rests on both Palantir and law enforcement.


No, not really. We don’t expect all of our actions to be monitored and collected 24/7 throughout the course of our lives when going about our business offline and then used to entrap us. We should have a similar standard when it comes to our civil rights in the digital realm.


As an aside, education does not scale with the ever increasing number of ways that personal information can “get online”. Additionally, it assumes some degree of privilege as more and more of everyday life moves online. Not everyone can afford to care for their own privacy, and those that do must pay the time, money, and convenience costs associated with it.


New Orleans PD... there was an article about it here recently. That is PT/Palantir's vision of the future. It is going to be pretty damn hard to stop once it gets going. Hopefully Facebook being on the front page of MSN for a couple of days will be a reckoning, because the things the average person does not realise would REALLY upset them.


On a positive note, New Orleans decided to not renew their contract with Palantir. I am not sure whether this is the result of that article or not.


The mayor is leaving office.


They dig up data telling the police where to look. The police then look in just the right place and "legitimately find" the evidence that they would never have stumbled on and couldn't have legally known was there.


Let me get this right: you are cool with "corporations" doing this with zero oversight, but not with law enforcement and TLAs who have at least some legal oversight.

Are you also cool with companies in the past hiring Pinkertons to kill people? Because that's where your argument is leading.


The difference is that corporations closely monitor only their own employees while Palantir closely monitors EVERYBODY.

Corporations take an interest in tracking people only to stratify them into groups suited to specific ad strategies. The worst their errors will do is pepper your browser and email with bad ads.

Not so with Palantir when their tracking data can misguide police departments, credit agencies, and even your employer into making your life a living hell. Surely any company with that much power must be held MUCH more accountable than they are now.


Palantir doesn't really _do_ anything. They don't collect the data, just save time/man-hours spent processing it. If that's your stance, Oracle/IBM are just as evil because they also sell to police departments to enable analysis.

Facebook on the other hand is actively collecting the data. Without Facebook that database does not exist. In my mind that makes them far worse.


I think it's not a fringe opinion that Oracle/IBM are just as evil. I don't have a strong opinion on this issue at this point, but just saying.


>But the article touches on the biggest issue with Palantir, which is that they seem to be completely fine with allowing law enforcement agencies to use their software for parallel construction

I think that any company that works for the government to any degree is considered part of the government and is bound by all rules a government agency/employee would be.


Yes and no.

The government pushes lots of compliance requirements down to contractors and state governments, but tends not to push down transparency requirements.


Thiel's genuine commitment to freedom is questionable. Also, he's a misogynist and jerk.


I feel these issues are orthogonal. There are plenty of nice people who have no commitment to freedom or privacy, and there are plenty of assholes (and even assholes-to-women!) who are also pretty passionate about both.


If they are orthogonal, then the conjunction is maximally informative...


Excellent point, now that you mention it, let's bring Trump and deep learning and 90's grunge into the discussion to form a maximal eigenbasis of discussion


Except most of those have no bearing on the conduct of Palantir, obviously, while the character of Thiel very much does.


That's not obvious. In fact, it's quite likely that my reading Trump's tweets and news along with my internet history from the early 2000's indicating an interest in Nirvana may have a weak correlative effect in a deep learning system that indicates I am slightly more susceptible to committing certain crimes or personality affects.

Ergo, can we all refocus this discussion of universal privacy rights violation on my partisan social group interests?


I do question his commitment to freedom. But a misogynist?


I base my assertion on sentiments he has expressed, one of which was quoted in the Bloomberg article:

>The 1920s was the last time one could feel “genuinely optimistic” about American democracy, he said; since then, “the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”


Some context from his response to the article: https://www.cato-unbound.org/2009/04/13/peter-thiel/educatio...

"I had hoped my essay on the limits of politics would provoke reactions, and I was not disappointed. But the most intense response has been aimed not at cyberspace, seasteading, or libertarian politics, but at a commonplace statistical observation about voting patterns that is often called the gender gap.

It would be absurd to suggest that women’s votes will be taken away or that this would solve the political problems that vex us. While I don’t think any class of people should be disenfranchised, I have little hope that voting will make things better.

Voting is not under siege in America, but many other rights are. In America, people are imprisoned for using even very mild drugs, tortured by our own government, and forced to bail out reckless financial companies.

I believe that politics is way too intense. That’s why I’m a libertarian. Politics gets people angry, destroys relationships, and polarizes peoples’ vision: the world is us versus them; good people versus the other. Politics is about interfering with other people’s lives without their consent. That’s probably why, in the past, libertarians have made little progress in the political sphere. Thus, I advocate focusing energy elsewhere, onto peaceful projects that some consider utopian."


That's definitely an eyebrow-raising quote, but I'm pretty sure he meant it the opposite way it's being taken. He's fine with the franchise being extended; he wants those groups to have more libertarians. It's the same thing as Democrats trying to make it easier to vote: people who vote less or have a harder time voting tend to vote Democrat.


Yes, he's observing a trend and comparing its results to the ideal “capitalist democracy” that Thiel wishes existed. Women, in general, aren't very libertarian, especially on fiscal issues. He's not saying their rights should be taken away. I'll admit, it isn't the best-worded statement in the world.


He isn't saying that they ought to never have been granted?


No, he isn't saying that.


Of course not. He's just invoking all of those elements in a provocative sounding statement for no particular reason.


He's saying that women tend not to be libertarians, and they are a large enough bloc of people that it's very difficult to implement libertarian policies into law. There's no implication that women shouldn't be allowed to vote or that women's suffrage was a mistake; it's just a really awkwardly worded statement.


Describing your “ideal society” and then depicting one group, especially one who was politically and socially disempowered at the time, as an obstacle to it is called scapegoating.

Historically, this ideology has had some very bad consequences.


Why is this being down-voted? It's the truth.


Thank you for bringing your sexism agenda into a discussion on Palantir


You are most welcome.


Do you agree that a company's culture starts from the top?


I take it that your comment is agenda-free?


Palantir's repeated forays into politics are a bit troubling as well, particularly when combined with the issues you mention.


What are their repeated forays into politics?



