from 5 years ago: https://www.popsci.com/technology/article/2013-06/mysterious...
Palantir ‘wields as much real-world power as Google, Facebook, Amazon, Microsoft and Apple, but unlike them, Palantir operates so far under the radar, it is special ops.’ https://channels.theinnovationenterprise.com/articles/is-pal...
Yes, that's the same Thiel who bankrolled the lawsuits against Gawker for outing him, while developing predictive analytics that disproportionately target minorities.
See: Peter Thiel’s Palantir wins $876 million U.S. Army contract https://www.bloomberg.com/news/articles/2018-03-09/peter-thi...
Not just Facebook: Silicon Valley in general has a total disregard for privacy. You can't hate on Facebook from another corner of the swamp. We should also be thinking about what's ahead in IoT (turning IoT sensor data into behavioral insights): https://www.sentiance.com/
Not to mention the gazillion IoT devices with poor factory-reset implementations, the hardware equivalent of a delete-account function that only disables a user's login but retains the data.
apologies for my emotional tone, this obviously has hit a nerve.
> It is essentially an interface that sits on top of existing data sets and displays data to users for analysis, helping to identify connections otherwise impossible to find. Users do not have to use SQL queries or employ engineers to write strings in order to search petabytes of data. Instead, natural language is used to query data and results are returned in real-time. It is not designed to do any single thing, its main strength is that it is flexible and powerful enough to accommodate the requirements of any organization that needs to process large amounts of both personal and abstract data. This makes it more useful for managing HUMINT, or intelligence from human sources, than SIGINT, or intelligence from signals.
The problem with Palantir is that everything it is said to be doing is so vague that it could just be colorful dashboards based on not particularly great or new data. For example, this is how the Bloomberg story you linked to describes it:
> Founded in 2004, Palantir is used by dozens of federal, state and local law enforcement agencies to aggregate far-flung data, find patterns and present results in colorful, easy-to-interpret graphics. Its use by police in Los Angeles, Chicago, New Orleans and elsewhere has raised ethical concerns about the potential for unfairly targeting minorities.
I'm not arguing whether Palantir is living up to its contracts. Just that it doesn't seem to produce or have the unique and expansive kind of data that Facebook, Google, and Amazon have. Though I guess there's nothing that prevented them from doing a massive data pull from FB's API, in the same way CA managed to do it.
FWIW, not everything about Palantir is necessarily a secret. You can see some descriptions of what they offer and at what price via various FOIA requests that have been done: https://www.muckrock.com/foi/list/?q=palantir&status=done&us...
These restrictions don’t apply to digital eavesdropping, which is why it became a major point of debate once technology made it possible on a large scale.
Another example is license plate scanners: that data has always been there, and anyone could legally write down all the license plates they saw. But add image recognition and a database, and you’ve created a monster.
The tech community usually turns to technology to fight such technology: encryption for communication, Bitcoin to undermine (pun intended) what they see as the failings of the Fed, BitTorrent for their qualms with copyright enforcement.
But laws and the court of public opinion are arguably our first line of defense. Underground printers didn’t stop the Nazis or the Soviets, and it’s not clear that technology has significantly moved power to the people in China, Turkey, or North Korea.
So we need better privacy laws. We need politicians to be scared before they use the services of Cambridge Analytica. And we need to convince our peers that they will have joined the dark side if they accept a job at Palantir. This sort of action has the added benefit of respecting the processes of a civil society ruled by law, and not a techno-jungle where might is right.
But this is possible because we let them use this data. Thankfully, in the EU the GDPR makes this almost impossible.
On a side note, I'm working on a platform similar to Sentiance in data acquisition, but for a totally different purpose and with zero effect on privacy, due to transparent anonymization.
But it sure as shit won't let you run a Chinese message board where you allow that sort of thing.
A right-to-be-forgotten demand has to show that I hold data on that person. Without that, the claim is impossible. No invasion.
"Anonymizing" data is a myth; stripped or hashed columns can usually be recovered by correlating the remaining data with other data sets.
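To make the correlation point concrete, here is a minimal sketch with entirely made-up records (the names, ZIP codes, and fields are hypothetical). The "anonymized" table has names stripped but keeps quasi-identifiers (ZIP code plus birth date); joining against any public roster that shares those fields restores the identity:

```python
# Toy linkage attack: made-up data, illustration only.
# "Anonymized" records with the name column stripped:
anonymized = [
    {"zip": "02139", "dob": "1954-07-31", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1990-01-15", "diagnosis": "asthma"},
]

# A hypothetical public roster (voter rolls, a leaked customer list, ...):
public_roster = [
    {"zip": "02139", "dob": "1954-07-31", "name": "W. Weld"},
    {"zip": "02144", "dob": "1982-03-02", "name": "J. Doe"},
]

# Index the public data by the quasi-identifier pair, then join.
by_qid = {(p["zip"], p["dob"]): p["name"] for p in public_roster}
reidentified = [
    {**row, "name": by_qid[(row["zip"], row["dob"])]}
    for row in anonymized
    if (row["zip"], row["dob"]) in by_qid
]
print(reidentified)
# [{'zip': '02139', 'dob': '1954-07-31', 'diagnosis': 'hypertension', 'name': 'W. Weld'}]
```

The join is a one-liner; the hard part for an attacker is only obtaining a second data set that shares a few columns, which is exactly what grows easier as more data leaks.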
> until someone's effort to pin point to any personal data becomes economically unfeasible.
Which admits that the data probably is recoverable, just "economically infeasible". Do you have proof of that claim? Infeasible for whom? In general, the difficulty of re-correlating data goes down as the amount of data grows.
> I do data scrambling, where the data as in values is not important for the engine.
DJB once described hashing as "magic crypto pixie dust" that "takes personally identifiable information and makes it incomprehensible to the marketing department".
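The pixie dust washes off easily when the input space is small. A minimal sketch (with a made-up 7-digit number) of why hashing low-entropy identifiers is not anonymization: the attacker just hashes every candidate and inverts the table.

```python
import hashlib

def pseudonymize(phone: str) -> str:
    # "Anonymization" by hashing: deterministic, so anyone can replay it.
    return hashlib.sha256(phone.encode()).hexdigest()

# The token an analyst might receive instead of the raw number.
token = pseudonymize("5551234")

# Attacker: enumerate the candidate space and build a reverse lookup.
rainbow = {pseudonymize(f"{n:07d}"): f"{n:07d}"
           for n in range(5550000, 5560000)}

print(rainbow[token])  # recovers the original number
```

Ten thousand SHA-256 hashes take milliseconds; even the full space of real phone numbers is enumerable on a laptop, which is why a keyed construction (or dropping the column) is needed, not a bare hash.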
> A demand to forget has to show that I have data on that person.
So you're trying to launder data to circumvent the letter of the law. This kind of scofflaw, antisocial attitude is how you attract reactionary, heavy handed regulations.
There is no other data set to correlate with. Simply put, none of the external context data is recorded anywhere, not even in logs. Unless some agency is going to hack a network driver to pick up the TCP/IP source address, etc., I don't see how the data could be associated with a datetime and location. This is why I was talking about "economically infeasible".
Data scrambling doesn't mean hashing. Sorry, here you are wrong. It's an on-the-fly frequency/time-domain scrambling, meaning someone would again have to physically access a server and pull the algorithm from memory. And no marketing department, all research here!
The other stuff I won't reply to, but let me assure you, there is no circumventing of the law. We are open, and if someone can pinpoint some personal data, there is no issue removing it.
Lovely. They are also devoid of vision and ethics about the likely results of their actions. In short, they fail to consider the saying:
"The road to hell is paved with good intentions".
Have they even considered the question: Which of their targets would EVER sign-up for their service?
Who are their customers, and why would they pay for the service? The only plausible reason to pay Sentiance is to understand a target's behavior at a fine-grained level in order to insert a stimulus that gets them to do something they would not otherwise do willingly (or, for a stalker, to find and assault them).
So, they are making a wonderfully powerful tool to enable strangers to change a target's behavior without permission. Yet they are not bright enough to avoid putting a "sign up" popup on their website in a way that interrupts their own video.
They will enable someone to cause serious damage to our world. Please get a message to them that they need to stop and shut down.
If they want to build something REALLY powerful, they should pivot to building something to allow us to DETECT & PREVENT other software on our phones/computers from doing what they are now trying to do.
I'd pay for that, and I'm not the only one.
It doesn't really matter how nice, smart, talented, excited, curious, or well-meaning a person is. Judge them by their actions. What they build and what it is used for.
If you could pass a message to them, please tell them to get a grip and realize how insanely creepy what they are doing is, and that they should probably not do it, even if it makes them a bunch of money.