Palantir's S-1 has a paragraph warning its AI technology may do unethical things (nitter.net)
49 points by DyslexicAtheist on Sept 1, 2020 | 27 comments



A major portion of an S-1 is outlining everything you think could possibly go wrong, so that someone can't sue you later for something going wrong that you didn't warn them about.

Given that there's no incentive to calibrate these risks to real-world probability (and frankly, given the dual purpose of such a document, there shouldn't be), S-1 filings include a lot of pretty wild possibilities. Although, just for fun, I skimmed Uber's, and they failed to acknowledge that an airborne illness resulting in a global pandemic could dramatically reduce their bookings and cause serious long-term structural harm to their business. Maybe I should sue...


The incentive to go into all these wild possibilities is pretty strong -- companies must list everything that could ever conceivably happen, so that nobody can point a finger at them later, say they forgot to include something, and sue them for millions.

This is the same reason corporate PR statements sound the way they do, and it's the same reason politicians speak the way they do. People say they want openness and honesty, but then they punish businesses and politicians for even the slightest offense. Because of those mixed and contradictory incentives, this is the kind of communication we end up with.


I'm curious: why use Nitter instead of Twitter for the links? It seems the content is replicated from Twitter: https://nitter.net/tsimonite/status/1298711894241366019 / https://twitter.com/tsimonite/status/1298711894241366019


I think the main reason is probably just how utterly irritating Twitter’s website has gotten. If I try to visit any Twitter link on my phone, I currently see “This is not available to you” (not paraphrased). Before this started happening, I frequently hit errors that required refreshing the page once. And even when it does work, the site is irritating, requires JS, and frequently messes with my scroll position.

Nitter’s about page also provides some reasoning to use it: https://nitter.net/about


Twitter almost always tells me I'm "rate limited" (can't remember the exact wording), even though it's usually the first and only time I've clicked on a Twitter link in weeks or months.

edit: And FWIW, I'm not someone who blocks scripts/otherwise cripples my browser either. I don't even run an ad blocker.


Your account may not be able to perform this action.


> Before this started happening, I frequently hit errors that required refreshing the page once.

Hey, I face that too. I sometimes have to refresh the page to get it to load.

Do you block/hide cookies?


I don't do anything specific to block cookies. I see these symptoms at least on Safari/iOS with AdGuard and NextDNS, but even if I disable every privacy feature I can think of, it still behaves the same for me.


I suppose because "Nitter is a free and open source alternative Twitter front-end focused on privacy." So no tracking, I'd hope.

https://nitter.net/about


Well, I snooped around, and it turns out there's no JS on the front-end that one would count as 'tracking'. Even Wappalyzer reports it can't detect any specific third-party tech on it, which I found pretty awesome.
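A minimal way to reproduce that check, if anyone's curious (a hypothetical sketch; assumes nitter.net is reachable and still serves the same markup):

    import urllib.request

    # Fetch the page and count <script> tags; zero suggests no client-side JS.
    html = urllib.request.urlopen("https://nitter.net/about").read().decode()
    print(html.lower().count("<script"))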


Yes. It only has images and CSS. https://i.imgur.com/MDzq4MB.png


That's cool. I didn't go as far as snooping around, but I noticed there was also no cookie banner.


You mean that you can't possibly see why OP has used it instead of Twitter? Like, at all?


It just says that "some scenarios present ethical issues", and follows by saying that their business practices are designed to mitigate those risks so as not to experience brand or reputational harm.

I feel like the title is just clickbait...


I came to the same conclusion. The line itself was common sense to me. Palantir was never held in such high regard in the first place.


I would be fairly surprised if any current AI had an ethical framework for evaluating its own actions, or even any formalized rule system that comes close.

So they warn that decisions by AI should be reviewed; they write "Some AI scenarios present ethical issues". Maybe they're just stating the obvious to protect themselves.

Looking into the future, I could imagine misbehavior being blamed on AI. Having a handler in such cases might be a good idea; otherwise we might see a lot of cases where people try to shed responsibility.


Why AI in particular? Wouldn't any non-AI-based system that has 'a direct impact on human rights, privacy, employment or other social issues' tread the same waters?

Besides, 'ethics" is a fairly fluent concept that has been reified very differently in different societies around the world, despite Silicon Valley seemingly denying this diversity and projecting their own instance as a global norm.


It might help to think of Palantir itself, and all the other tech giants, as artificial intelligences.


Do Facebook's or Google's S-1s have similar caveats? I mean, people have live-streamed shootings on those platforms, and it frequently takes Facebook hours to take down (to use the same language) unethical content.


Doesn’t that go without saying? It’s AI. It does not have ethics.


A more poetic S-1 might have asked whether this unit has a soul.


Being some form of "intelligence" implies, for some people, having ethics, I suppose.


Even if it did, all software has bugs...


It'd be more correct, of course, to say that nearly all non-trivial software has bugs; not all software does. I only think that distinction is worth making because of the number of encounters I've had where someone takes "all software has bugs" as a literal, universal guarantee.


And with enough outrage, every bug can be viewed as an intentional effort to discriminate. Probably just lawyer-speak to be prepared for the next wave of outrage.


> Though our business practices are designed to mitigate many of these risks, if we enable or offer AI solutions that are controversial because of their purported or real impact on human rights, privacy, employment or other social issues, we may experience brand or reputational harm.

As an example of the kinds of things that AI does when you don't know what you're doing: classifying black people as criminals because your law enforcement system polices black neighbourhoods more heavily, is more likely to arrest black people, is more likely to convict black people, and all your training data came from sampling populations in jails. (A toy sketch of this sampling bias follows at the end of this comment.)

As an example of the kinds of things that you are careful not to do yourself but you have no control over when a customer applies your tools to their problems: criminals ram-raiding supermarkets using your cars, or overzealous federal departments using AI to assist in mass surveillance of citizens.

As an example of the kinds of things you are careful to not disclose but feel okay to do anyway: IBM helping the Nazi Party target Jews using census data.

So it's up to you as an investor to figure out whether Palantir is warning investors of future hazards, or advising them that there are actually human rights violations happening right now that Palantir is well aware of, and that it's only a matter of time before the cat is let out of the bag.
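For the first example, here's a toy sketch (purely illustrative, not Palantir's system; assumes numpy and scikit-learn, with made-up numbers) of how a classifier trained on biased arrest records learns the policing bias rather than any real behavioural difference:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 100_000
    group = rng.integers(0, 2, n)       # two groups, identical behaviour
    offends = rng.random(n) < 0.05      # same 5% offence rate in both

    # Biased policing: per offence, group 1 is three times as likely to be
    # arrested, so the arrest records used as training labels oversample it.
    arrest_prob = np.where(group == 1, 0.9, 0.3)
    arrested = offends & (rng.random(n) < arrest_prob)

    model = LogisticRegression().fit(group.reshape(-1, 1), arrested)

    # Predicted "criminality" per group: roughly 3x higher for group 1,
    # purely an artifact of how the labels were collected.
    print(model.predict_proba([[0], [1]])[:, 1])

The model faithfully reproduces the sampling bias baked into the labels, which is exactly the failure mode described above.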


Even if their AI can do unethical things, it knows how to organize society and what justice requires.



