A major portion of an S-1 is outlining every possible thing you think could go wrong, so that no one can sue you later over something going wrong that you didn't warn them about.
Given that there's no incentive to calibrate these risks to real-world probability (and honestly, given the dual purpose of such a document, there shouldn't be), there are a lot of pretty wild possibilities included in S-1 filings. That said, just for fun, I skimmed Uber's, and they failed to acknowledge that an airborne illness resulting in a global pandemic could dramatically reduce their bookings and cause serious long-term structural harm to their business. Maybe I should sue...
The incentive to go into all these wild possibilities is pretty strong -- companies must list everything that might ever conceivably happen so that nobody can point a finger at them later, say they forgot to include something, and sue them for millions.
This is the same reason corporate PR statements sound the way they do, and it's the same reason politicians speak the way they do. People say they want openness and honesty, but then they punish businesses and politicians for even the slightest offense. Because of those mixed and contradictory incentives, this is the kind of communication we end up with.
I think the main reason is probably just how utterly irritating Twitter’s website has gotten. If I try to visit any Twitter link on my phone I currently see “This is not available to you” (not paraphrased). Before this started happening, I frequently hit errors that required refreshing the page once. And even when it does work, the site is irritating, requires JS, and frequently messes with my scroll position.
Twitter almost always tells me I'm "rate limited" (can't remember the exact wording), even though it's usually the first and only time I've clicked on a Twitter link in weeks or months.
edit: And FWIW, I'm not someone who blocks scripts/otherwise cripples my browser either. I don't even run an ad blocker.
I don't do anything specific to block cookies. I'm seeing these symptoms at least on Safari/iOS with AdGuard and NextDNS, but even if I disable every privacy feature I can think of, it still behaves the same for me.
Well, I snooped around, and it turns out there's no JS on the front end that one would count as 'tracking'. Even Wappalyzer reports it can't detect any specific third-party tech on it, which I found pretty awesome.
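If anyone wants to do the same kind of snooping, here's a rough Python sketch of the idea (my own illustration, not anything the site or the parent commenter uses; it only catches statically declared `<script src>` tags, not scripts injected at runtime):

```python
# Rough sketch: fetch a page and list the hosts its <script src> tags pull
# from, as a quick first pass at spotting third-party/tracking JS.
# The URL at the bottom is just a placeholder.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class ScriptSrcCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)


def third_party_script_hosts(url: str) -> set[str]:
    page_host = urlparse(url).netloc
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = ScriptSrcCollector()
    parser.feed(html)
    hosts = {urlparse(src).netloc for src in parser.srcs}
    # Keep only scripts served from somewhere other than the page itself.
    return {h for h in hosts if h and h != page_host}


if __name__ == "__main__":
    print(third_party_script_hosts("https://example.com"))
```

Anything that shows up in that set and isn't a CDN you recognize is a decent candidate for a closer look.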
It just says that "some scenarios present ethical issues", and follows up by saying that their business practices are designed to mitigate those risks so as not to experience brand or reputational harm.
I would be fairly surprised if any current AI had an ethical framework to evaluate its own actions, or even any formalized rule system that comes close.
So they warn that decisions by AI should be reviewed, and they write "Some AI scenarios present ethical issues". Maybe, just to protect themselves, they're stating the obvious.
Looking into the future, I could imagine misbehavior being blamed on AI. Having a handler in such cases might be a good idea; otherwise we might see a lot of cases where people try to shed responsibility.
Why AI in particular? Wouldn't any non-AI-based system that has 'a direct impact on human rights, privacy, employment or other social issues' tread the same waters?
Besides, 'ethics' is a fairly fluid concept that has been reified very differently in different societies around the world, despite Silicon Valley seemingly denying this diversity and projecting its own instance as a global norm.
Does Facebook's or Google's S-1 have similar caveats? I mean, people have live-streamed their shootings on the platform, and it frequently takes Facebook hours to take down (to use the same language) unethical content.
It'd be more correct, of course, to say that nearly all non-trivial software will have bugs; not all software has bugs. I think the distinction is worth making only because of the number of encounters I've had where someone takes "all software has bugs" as a literal, universal guarantee.
And with enough outrage, every bug can be viewed as an intentional effort to discriminate. Probably just lawyer-speak to be prepared for the next wave of outrage.
> Though our business practices are designed to mitigate many of these risks, if we enable or offer AI solutions that are controversial because of their purported or real impact on human rights, privacy, employment or other social issues, we may experience brand or reputational harm.
As an example of the kinds of things that AI does when you don't know what you're doing: classifying black people as criminals because your law enforcement system polices black neighbourhoods more heavily, is more likely to arrest black people, is more likely to convict black people, and all your training data came from sampling populations in jails (see the toy sketch below).
As an example of the kinds of things that you are careful not to do yourself but you have no control over when a customer applies your tools to their problems: criminals ram-raiding supermarkets using your cars, or overzealous federal departments using AI to assist in mass surveillance of citizens.
As an example of the kinds of things you are careful to not disclose but feel okay to do anyway: IBM helping the Nazi Party target Jews using census data.
So it's up to you as an investor to figure out whether Palantir is warning investors of future hazards, or advising investors that there are actually human rights violations happening right now that Palantir is well aware of, and it's only a matter of time before the cat is let out of the bag.
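To make the first example above concrete, here's a toy Python sketch (my own illustration, with entirely made-up numbers): two neighbourhoods with identical underlying offense rates, where the one that's patrolled twice as heavily ends up looking twice as "risky" to a model trained only on arrest records.

```python
# Toy sketch: two neighbourhoods with the SAME underlying offense rate,
# but neighbourhood B is policed twice as heavily. A "risk model" fit only
# on arrest records concludes B is about twice as risky. Numbers are made up.
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                   # identical for both neighbourhoods
PATROL_INTENSITY = {"A": 0.2, "B": 0.4}    # chance an offense leads to an arrest


def simulate_arrests(neighbourhood: str, population: int) -> int:
    arrests = 0
    for _ in range(population):
        offended = random.random() < TRUE_OFFENSE_RATE
        caught = random.random() < PATROL_INTENSITY[neighbourhood]
        if offended and caught:
            arrests += 1
    return arrests


population = 100_000
for n in ("A", "B"):
    arrests = simulate_arrests(n, population)
    # "Risk" as a naive model trained on arrest data would learn it:
    # arrests per resident, which bakes in the patrol intensity.
    print(f"Neighbourhood {n}: learned risk = {arrests / population:.4f}, "
          f"true offense rate = {TRUE_OFFENSE_RATE}")
```

The point isn't the specific numbers; it's that the gap between the two "learned risks" comes entirely from where the samples were taken, not from any difference in behaviour.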