
Palantir's S-1 has a paragraph warning its AI technology may do unethical things - DyslexicAtheist
https://nitter.net/tsimonite/status/1298711894241366019#m
======
dbt00
A major portion of an S-1 is devoted to outlining everything you think could
possibly go wrong, so that someone can't sue you later over a risk you didn't
warn them about.

Given that there's no incentive to calibrate these risks to real-world
probability (and sincerely, given the dual purpose of such a document, there
shouldn't be), there are a lot of pretty wild possibilities included in S-1
filings. That said, just for fun, I skimmed Uber's, and they failed to
acknowledge that an airborne illness resulting in a global pandemic could
dramatically reduce their bookings and cause serious long-term structural harm
to their business. Maybe I should sue...

~~~
twblalock
The incentive to go into all these wild possibilities is pretty strong --
companies must list everything that might ever conceivably happen, so that
nobody can point a finger at them later, say they forgot to include something,
and sue them for millions.

This is the same reason corporate PR statements sound the way they do, and
it's the same reason politicians speak the way they do. People say they want
openness and honesty, but then they punish businesses and politicians for even
the slightest offense. Because of those mixed and contradictory incentives,
this is the kind of communication we end up with.

------
syrusakbary
I'm curious: why use Nitter instead of Twitter for the links? The content
seems to be replicated from Twitter:
[https://nitter.net/tsimonite/status/1298711894241366019](https://nitter.net/tsimonite/status/1298711894241366019)
/
[https://twitter.com/tsimonite/status/1298711894241366019](https://twitter.com/tsimonite/status/1298711894241366019)

~~~
jchw
I think the main reason is probably just how utterly irritating Twitter’s
website has gotten. If I try to visit any Twitter link on my phone I currently
see “This is not available to you” (not paraphrased). Before this started
happening, I frequently hit errors that required refreshing the page once. And
even when the site does work, it's irritating: it requires JS and frequently
messes with my scroll position.

Nitter’s about page also provides some reasoning to use it:
[https://nitter.net/about](https://nitter.net/about)

~~~
creato
Twitter almost always tells me I'm "rate limited" (can't remember the exact
wording), even though it's usually the first and only time I've clicked on a
twitter link in weeks or months.

edit: And FWIW, I'm not someone who blocks scripts/otherwise cripples my
browser either. I don't even run an ad blocker.

~~~
ffpip
Your account may not be able to perform this action.

------
pascah7
It just says that "some scenarios present ethical issues", and follows up by
saying that they want to mitigate those risks so as not to experience brand or
reputational harm.

I feel like the title is just clickbait...

~~~
ralphstodomingo
I came to the same conclusion. The line itself was common sense to me.
Palantir was never held in such high regard in the first place.

------
raxxorrax
I would be fairly surprised if any current AI had an ethical framework for
evaluating its own actions, or even any formalized rule system that comes
close.

So they warn that decisions by AI should be reviewed; they write "Some AI
scenarios present ethical issues". Maybe they are just stating the obvious to
protect themselves.

Looking to the future, I could imagine misbehavior being blamed on AI. Having
a human handler in such cases might be a good idea; otherwise we might see a
lot of cases where people try to shed responsibility.

------
PeterStuer
Why AI in particular? Wouldn't any non-AI system that has 'a direct impact on
human rights, privacy, employment or other social issues' tread the same
waters?

Besides, 'ethics' is a fairly fluid concept that has been reified very
differently in different societies around the world, despite Silicon Valley
seemingly denying this diversity and projecting its own instance as a global
norm.

------
zxcb1
It might help to think of Palantir itself, and all the other tech giants as
artificial intelligences.

------
Cthulhu_
Does Facebook's or Google's S-1 have similar caveats? I mean, people have
live-streamed their shootings on those platforms, and it frequently takes
Facebook hours to take down (to use the same language) unethical content.

------
lawnchair_larry
Doesn’t that go without saying? It’s AI. It does not have ethics.

~~~
dbt00
Even if it did, all software has bugs...

~~~
smolder
It'd be more correct, of course, to say that nearly all non-trivial software
will have bugs; not all software has bugs. I think that distinction is worth
making, if only because of the number of encounters I've had where someone
takes "all software has bugs" as a literal, universal guarantee.

------
manicdee
> Though our business practices are designed to mitigate many of these risks,
> if we enable or offer AI solutions that are controversial because of their
> purported or real impact on human rights, privacy, employment or other
> social issues, we may experience brand or reputational harm.

As an example of the kinds of things that AI does when you don't know what
you're doing: classifying black people as criminals because your law
enforcement system polices black neighbourhoods more heavily, is more likely
to arrest black people and is more likely to convict black people, and all
your training data came from sampling populations in jails.
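The sampling-bias failure mode above can be sketched with a toy simulation
(hypothetical Python; the neighbourhood names, rates, and patrol intensities
are all made up and have nothing to do with Palantir's actual systems):

```python
import random

random.seed(42)

# Two neighbourhoods with the *same* true offense rate, but very
# different policing intensity (the chance an offense leads to an arrest).
TRUE_OFFENSE_RATE = 0.05
PATROL_INTENSITY = {"A": 0.9, "B": 0.1}
N_PER_HOOD = 50_000

arrests = {"A": 0, "B": 0}
for hood in ("A", "B"):
    for _ in range(N_PER_HOOD):
        offended = random.random() < TRUE_OFFENSE_RATE
        caught = offended and random.random() < PATROL_INTENSITY[hood]
        if caught:
            arrests[hood] += 1

# A naive model trained on arrest records "learns" per-neighbourhood
# criminality from arrest counts -- a gap that reflects policing
# intensity, not behaviour.
rate = {h: arrests[h] / N_PER_HOOD for h in arrests}
print(rate)
```

With these numbers the "learned" rates come out near 0.045 for A and 0.005
for B, even though the true offense rates are identical by construction; any
classifier trained on that data inherits the bias.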

As an example of the kinds of things that you are careful not to do yourself
but you have no control over when a customer applies your tools to their
problems: criminals ram-raiding supermarkets using your cars, or overzealous
federal departments using AI to assist in mass surveillance of citizens.

As an example of the kinds of things you are careful to not disclose but feel
okay to do anyway: IBM helping the Nazi Party target Jews using census data.

So it's up to you as an investor to figure out whether Palantir is warning
investors of future hazards, or advising investors that there are actually
human rights violations happening right now that Palantir are well aware of,
and it's only a matter of time before the cat is let out of the bag.

------
gandutraveler
Even if their AI can do unethical things, it knows how to organize society and
what justice requires.

