Palantir CEO Rejects Calls to Pause AI Development (newsnotfound.com)
93 points by newsnotfound on June 10, 2023 | 191 comments



Palantir is a criminal organization that should have their corporate charter dissolved. Re-posting what their CEO says is enabling them. And there's especially no reason to listen to them on the topic of machine learning.

I agree there's no reason to "pause" anything, but this is just a broken clock being right twice a day.


Palantir is a solution provider for my company [Airbus]. We are reasonably satisfied with the product they provide us. Not being politically involved, and, as a European, not really aware of Palantir's actions in the US, could you elaborate on the « nastiness » of that Palantir?


Airbus uses Palantir Foundry, which I think is not that controversial.

The controversial things everyone talks about are their other products like Gotham which is for the police/military.

In Germany it is unconstitutional to use.

Here is a German article about it: https://www.hessenschau.de/politik/warum-die-polizei-softwar...


Good to know, thanks. [yes, we mostly use Foundry]


FWIW Foundry isn't that much different than their government products. It's just a platform for data management and analysis.


So you're saying selling any sort of data analytics software to the government is a criminal act?


If it breaks the laws of the land you can call it illegal. And it's without doubt immoral if it's used for mass control and surveillance of populations.


Immorality is a tricky subject but the original commenter was specifically talking about illegality. Whatever your feelings about Palantir's business (and the defense contracting business as a whole) I don't think there's any convincing argument that Palantir is engaged in illegal or illicit business.


The hacked HBGary emails show that Palantir was committing criminal acts in order to attack WikiLeaks and its associates. They're not law enforcement, but they do things that normally only the police/FBI could get away with (post-2018 law changes allowing the FBI to hack anyone). They're so entwined with the US security state they might as well be part of it, except they have no accountability, and because they're not government they can do many things government agencies are restricted from doing. It's basically a loophole corporate shell game. If it weren't done on behalf of the federal government itself, the feds would definitely have charged them and shut them down.

As for the parts of Palantir that do business with other arms of the security state, like Boeing - well, that's why they incorporate: to avoid legal and social accountability for their criminal actions by abstracting them away, even within their own organization.


It was also a Palantir engineer who helped Cambridge Analytica with the analysis of the Kogan Facebook psychological profiles that they used to psyop Trump into the White House and Johnson into 10 Downing Street. Karp handily threw that employee under the bus and 'absolved the company of guilt'.


Palantir makes tools that are used to police people in extremely invasive ways, and coordinate intelligence in ways that can violate human rights.

And Germany has bought and used those same tools; it's not a US-specific thing.

At some point it comes down to your own personal ethics on this, but to me you can't say "that Palantir"... it's all Palantir.


It should be considered part of the US military-industrial complex, and people largely object to it for the same reasons they would object to Raytheon or Lockheed Martin being involved with something. Palantir obviously operates at a different layer, but it still provides support for the military operations of the US.

Of course that should not be controversial for Airbus, which itself operates in the military industry, cooperates directly with firms like Raytheon, and competes with Lockheed and the like. Working with Palantir is just another drop.


In my opinion there's nothing wrong with being part of the military-industrial complex. As recent developments have shown, a defence industry is a fundamental need for any modern society, including democratic ones. We're voting to increase funding for military personnel and hardware (part of which gets bought from the US), and getting volunteers from all fields of technology involved in applying hardware and software to military needs (e.g. adapting consumer drones for dropping grenades, working on software tools for artillery coordination, etc), thus effectively becoming part of the military-industrial complex ourselves. I consider that not only morally permissible but morally imperative: supporting the resistance against the Russian invasion of Ukraine not only with words and humanitarian aid, but with actual military-industrial effort. Building weapons of war is not evil, because resisting evil requires them, and blocking the building of weapons does not make you a good person - it makes you a bystander, and being a bystander to evil is not neutral, but harmful.


Seeing actual footage from Ukraine has made me extremely uncomfortable working for a company which produces equipment for that conflict. But maybe I am one of those insane people who doesn't want to see people slaughter each other. It seems like once there was "the right" enemy, people actually became very much in favor of foreign military adventures and war. Pretty pathetic.


I understand that attitudes towards overseas "military adventures" can reasonably be quite different, but coming from that region, I feel that it's morally good (not even just acceptable/neutral) to volunteer and join the army, risking your life to defend your people with force of arms, which may involve killing enemy soldiers. And if that is morally okay for the person who actually pulls the trigger, then there's nothing wrong either for those manufacturing a more effective gun/vehicle/drone/etc for that soldier.

There is valid criticism of some of these military-industrial companies lobbying for various horrible historical acts to improve their business, and that obviously is immoral - but not because they make weapons; because of those specific acts, which put them in the same position as the United Fruit Company, which fomented violent conflicts to benefit its banana business.


Ah okay. I mistook you for an American who just flip-flops on whatever is currently being pushed, but I see that you are just a nationalist. Which I think is far more acceptable, provided you're not a hypocrite about it - like being a nationalist for a country you don't live in or don't even have citizenship of.


> But maybe I am one of those insane people who doesn't want to see people slaughter each other.

I share your ideal, but realistically it's a useless stance as long as there is anyone in the world who doesn't share it and doesn't mind slaughtering others.


Lots of very fine reasoning when it comes to “provid[ing] support for the military operations of the US.”

Microsoft, Google, and Amazon all have substantial sales to the DoD. Part of the military industrial complex?

What about Uber, Hilton, AB InBev, and Coca-Cola? What about Harvard and Yale? What about White House staffers?


>Part of the military industrial complex?

I would say absolutely yes. All these companies make US military operations possible.

>What about Uber, Hilton, AB InBev, and Coca-Cola?

Not really. If they are used at all, it is only incidentally. Certainly the military does not need them.

>What about Harvard and Yale?

Difficult question. I tend towards yes.

>What about White House staffers?

Depends I think.




"Comments should get more thoughtful and substantive, not less, as a topic gets more divisive." and "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."


The fact that Palantir is a criminal organization is plainly true and certainly substantive.


>The fact that Palantir is a criminal organization is plainly true and certainly substantive.

Perhaps it's plainly true in some trivial sense (all large orgs have committed some crimes), but not in any legally-relevant sense. Otherwise people have to explain the obvious incongruity between this "plain truth" and the obvious fact that Palantir is not facing the sort of legal problems that would be expected of a "criminal organization" (and no, saying "the US judiciary is just very very corrupt" is not a "plain truth" either.)


It's not true in a trivial sense; it's true in a deep and legally relevant sense. The company is a major accomplice in the blatantly illegal mass surveillance of Americans.

This isn't some big conspiracy theory; people have won Pulitzer Prizes reporting on it.

It's a completely reasonable inference that this person, advocating for fewer restrictions on a powerful new technology, is hoping to use it to conduct more sophisticated mass surveillance of Americans.

As I said, nothing could be more substantive to this thread.


It sounds like you're arguing that these government programs are "criminal enterprises", which would be a much deeper problem if the legal system can't weed them out.

Even were that the case, it's unclear to me that merely providing material aid to a criminal government organization would make a company a "criminal enterprise".

Perhaps uncharitably, this seems akin to arguments that would condemn a large share of contractors involved in military / police / intelligence work as "criminal".


Yes members of government who routinely break the law and those who further those efforts are engaged in criminal enterprise.

Yes of course the inability of the legal system to deal with it is a deep problem.

My argument is that people who break the law are criminals. It’s not a subtle point I’m making here.

The only point of confusion seems to be your surprise to discover that people with power commit crimes openly and with impunity.

Relevant: https://www.youtube.com/watch?v=ZtL6kbYiP-w


A lot of people similarly claim that the 2020 election was stolen, aided and abetted by a criminal conspiracy. I'm going to go out on a limb and guess that you don't agree with those claims, and it wouldn't stem from a skepticism of the base notion that "people with power commit crimes openly and with impunity".

In any case, even when illegal government programs exist it's again unclear to me what sort of due diligence a contractor would have to make to avoid being labeled as a "criminal organization". Similarly, a lot of people consider soldiers who fought in Iraq to be murderers, and I see the point but it would come off as confusing in a normal conversation about a particular person to just casually label them as a murderer because of this.


Yes some people claim things that aren’t true. Others, conversely, make true statements.

I am pleased that all in this discussion have now demonstrated such a rigorous grasp of the obvious.

To your more specific point, my first recommendation for government contractors who wish to avoid being labeled as a “criminal organization” would be to avoid repeatedly and blatantly committing crimes.


Yes, well, I still can't help but find it odd that such "blatant" crimes are apparently beyond the reach of the legal system. My prior under the circumstances is that the claims are overblown, e.g. "soldiers who fought in Iraq are murderers" or "all contractors that aided the US intelligence apparatus belong to criminal organizations".


This conversation is really confusing.

Has there ever been an era in American history where there aren’t private entities with high government officials as allies who engage in organized criminal activity to further our international trade interests while avoiding consequences?

Not sure if you’re a history buff, but…


Yeah, but if the insinuation is that the whole military-industrial complex is largely a "criminal enterprise" it becomes a much less interesting claim.


>legally relevant sense

>blatantly illegal mass surveillance

"Treason doth never prosper, what's the reason? For if it prosper, none dare call it treason."

It may be illegal, but this is not legally relevant at the moment.


Between this and your "well actually AI Armageddon is a more urgent danger than addressing the climate crisis" comment, I'm starting to think you're Peter Thiel's burner account.


I could make similar comments about your account being more than 10 years old, yet apparently you don't realise there is a preference for informed discussion on this site.

By all means criticise away, by constructing an argument and preferably providing sources.

Saying "X is bad" is cheap and does not further discussion or teach anyone. In fact, that kind of behaviour, once normalised, is what makes a forum most susceptible to brigading and bot farms.


oh i'm sorry, was i supposed to treat "the AI doom is more serious and immediate than the climate crisis" like informed discussion ? because that's honestly the dumbest fuckin' thing i've read on this site in a long time. and as you note, i've been here a long time.

no person on earth -- who is informed by anything aside from safeguarding or expanding their own capital -- would make that argument.


I don't know how to say it any more clearly: by all means criticise Palantir, criticise AI maximalism; it's not about the content of your beliefs.

Comments such as:

> Yeah, no shit. That's what I'm saying, Peter

are in contradiction with the HN guidelines. What makes this site valuable, in my opinion, is informed discussion. There are multiple forums where sarcasm and low-effort comments are the norm; HN should be more than that.


oh brother. your account is 2 months old. God only knows what disgusting filth was spilling out of your mouth to get your other accounts banned. Please keep sending me patronizing lectures about decorum in HN comment sections. You're literally concern trolling, bud.


Yes, you got me. I'm actually Peter Thiel and I'm really an AI Doomer but I'm hiding that in my public pronouncements because I'm willing to shave years off my life expectancy to ensure that I die while I'm as rich as possible.


Yeah, no shit. That's what I'm saying, Peter.


Can you explain in what way Palantir is a criminal organization?


Lol, criminal.

I do find that Karp sounds like a huckster, but criminal … sad that this got so many upvotes.


The irony of you advocating for Ukraine and then calling one of its main intelligence partners, whose product has helped save countless Ukrainian lives, a criminal organization.


So by your logic, if someone supports entity A, and A is supported by entity B, there are no grounds to ever criticize entity B without also undermining support for entity A?

This seems like the kind of team sports mentality that has destroyed politics.


I would also argue that describing an organization as "criminal" based on allegations that concern only a small share of its overall scope of impact is itself poor rhetoric.


the mob helped multiple governments during WW2


The people scared of AI doom are so funny to me, because the idea is highly speculative, while climate change is happening under our noses and nobody says we should stop anything...


We should stop climate change. We should also be careful about developing the capabilities of new technology that will put us in similarly precarious situations going forward.


The two problems aren't unrelated either, as we arrived at climate change (in part) because of decades of lies, propaganda, and dissimulation. Before, fossil fuel companies and their ilk had to pay quite a bit of money to accomplish this. The price has now fallen dramatically.


> nobody says we should stop anything...

About climate change..? It doesn't serve your argument to so blatantly misrepresent reality.


Maybe better rephrased: I don't often see an overlap between those who see AI as a realistic existential threat and those who see climate change as one.


Plenty of people are happy to say we should do this or do that.

When it comes time to vote though people tend to go with the person telling them they'll have more.


Not necessarily, it can also be the person telling them that others will have less.


People have reasonable concerns about AI blowback. That doesn't mean they are doomsayers, or "scared".


Yud is most definitely a doomsayer.


It's the general case that I was (very obviously) referring to. Not outliers / clickbaiters like Yud.


Unfortunately the general case seems to be influenced quite strongly by Yud. It's a trickle-down effect. So you're right, and it's a good point. But it was also fair for them to point to Yud specifically, as one of the original sources of what some might call hysteria and others might call reasonable concerns.


Exactly. The median person with AI concerns is nowhere near Yud, but I have seen him influence the discourse so much that to me, AI concerns = Yud.


That's what Yud wants you to believe, of course. Whether you dance along with his shtick is up to you.


I don't care either way. I have the same opinion as Yann.


Who's Yud?


Eliezer Yudkowsky


There are two conflicting requirements: solve climate change, without anybody losing any money. Given that climate change is strongly correlated with consumption, that's difficult.


Also we have to solve climate change without using any fancy-shmancy geoengineering, because as sinners we're obligated to suffer rather than have convenient solutions for our problems.


Others downvoted you, but I think you're right to link Puritanism to a specific sliver of the climate change activist community. There are some zealots who think we must end modernity as we know it, now!, or all is lost. These people existed before climate change; they just used other arguments to back their agenda. It was overpopulation, or before that a fear of factories, industry, progress in pretty much any form. The steam engine was a great fear to many. These are the "return to nature and live off the land" types and authoritarian personalities who are bothered by freedom. Ted Kaczynski was one of these. It's hard to sort through such people's thoughts to extract much that's valuable. It's like they don't understand that history moves forward.


Yeah, I believe in climate change, but I think that short-term climate doomerism is a pretty clear stalking horse for the sort of anti-capitalist/anti-industrial sliver that you mention. If I were being charitable, I'd allow that I could be reversing the causality here - that climate doomerism simply causes anti-capitalist beliefs rather than the other way around - but I would bet that this isn't the case, and the reflexive attacks on "easy" solutions to climate change reflect it.


Or maybe the only kind of geoengineering that is feasible comes with unknowns that people are reasonably suspicious of, especially when it would likely be carried out by profit-making entities.

If you want to suck CO2 out of the air, no one will have an issue with it; you will just run out of money without making a dent.


And trying to go 100% carbon-free (or die trying) in the next decade or two doesn't carry risks? Decarbonizing won't be carbon-neutral, that's for sure.


Deindustrialization also comes with plenty of unknowns that people are reasonably suspicious of.


We could go with untested, still-inefficient fancy-shmancy geoengineering, which may cause even greater disasters; or we could re-engineer our energy systems with solutions that have been used for decades, if not centuries, while maybe also rethinking a bit the way we live and consume. But no, where's the fun if we don't destroy an ecosystem or two?


If a sober-minded cost-benefit analysis concludes that geo-engineering isn't viable, fine. But I don't think this is the modal way that geoengineering is being dismissed.

Also, these are similar arguments to those that have been used to criticize nuclear energy for decades, which has obviously been a disaster on net.


Palantir's AI calculated this comment to be the ideal way to derail this conversation.

1. This article has nothing to do with climate change, so why is such a comment the top reply?

2. "nobody says we should stop anything [to address climate change]" is a flagrant lie. You guys are being baited.


1. Because it made people think and reply.

2. If it was not obvious, my comment was sarcastic. Of course there are people who care about both; what surprises me is all this fanfare from CEOs about something so speculative that it is complete science fiction right now, when the existential threat is here and now, and very few point at some urgent solution like, IDK, stopping the use of fossil fuels within 5 years. We have about a decade to fix this [1] and the progress so far has been, at best, lacklustre [2].

[1] https://www.ipcc.ch/report/ar6/syr/resources/spm-headline-st... [2] https://climateactiontracker.org/


https://news.ycombinator.com/newsguidelines.html

Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.


It would be cool if we could just focus on climate change, though. We didn't need another man-made crisis to deal with.

Imagine if a real natural disaster occurs within the next 5 years, like another Chernobyl.

It will be pretty tragic for our species.


The thing is, there are several such disasters happening every year because of climate change, and given that climate change is man-made, we can infer that those disasters are too. Pretty tragic for our species indeed, but even more tragic for the other species that live on this planet.


Sorry, I do know what you mean. I meant a natural disaster unrelated to climate change, such as a huge earthquake.


Those disasters are not happening here, though.


Hey now, plenty of CEOs care about the climate. As long as it doesn't affect profit margins or the "remote work" thing doesn't get popular. In that case, screw the climate. Make everyone drive cars to work!


To the contrary: Sam Altman invested $375m in Helion.


I consider nuclear fusion for energy production highly speculative too, meanwhile our houses burn.


The point isn't whether you agree with the approach, it's that it's not intellectually honest to claim nobody who considers AI to have existential risks thinks we should do anything about climate change and not eg. mention that Altman's biggest investment is to fight climate change.

Good discourse only happens when people take efforts to honestly represent the state of affairs.


I still haven't heard anybody who says we should stop developing AI also say we should stop, say, driving cars, or shut down all datacenters. Yes, both are drastic, maybe even silly, but one is a response to a real threat affecting the planet here and now; the other is the result of a vivid fantasy.

Happy to be given counter examples.


Not comparable at all. I'm not afraid of AI; I think it's a lot of marketing on OpenAI's part. But climate change is a trend, not something you don't see one day and then the next day everything's different.


Yep, this is key. With climate change maybe we should be alarmed now, but few people argue that human extinction within our lifetimes is a possible outcome of inaction.

Unfortunately, with AI things are different. By the time humans notice a real problem, we may be months if not hours away from death.


How do you envision this AI caused extinction happening?


Here's an influential article on the subject: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

The basic idea is that once AGI hits an unknown capability threshold it will likely recursively self-improve into something very dangerous and difficult to control, and will likely be able to come up with an effective plan to remove any obstacles to its intended desires, ie. humans.

People have varying degrees of confidence in this scenario, but even if you peg it at like a 10% likelihood you're basically conceding that it's the single most important policy issue in the current political climate.


What if I peg the likelihood of an AI extinction event at 0.01%?

What if the likelihood of AI "solving" climate change is 10% and the alternative is human extinction?

Can we agree that coming up with sci-fi scenarios and assigning arbitrary likelihood figures to them is pointless?


Anything to distract from the climate crisis, huh, Mac?


Just like the climate crisis is a distraction from the "no 2nd season of Firefly" crisis, sure.


I've met exactly 0 AI doomers who aren't also highly concerned about and involved with fighting climate change.


This is missing a key argument from the AI-cautious, which is that we shouldn't "pause AI development" but rather "pause public deployment". Pausing only public deployment solves the "China problem", because we could continue development. On top of that, it would be harder for China to fast-follow if there isn't any public deployment of what is being researched and built.

edit: I'm realizing there may be different classes of people who want to regulate AI in some capacity. I'm basing this on the train of thought from the Center for Humane Technology: https://www.youtube.com/watch?v=xoVJKj8lcNQ


I think most AI doomers (I don't mean the term as an insult) think that would be worse than the status quo. The people who think that the biggest risk of AI is that it won't hire black people or something would probably love the idea, though.


I don't think you are accurately representing the views of the 'doomers'; for them it is much more about development.


Maybe we're not talking about the same people, then. I am repeating almost verbatim what the folks at the Centre for Humane Technology say here: https://www.youtube.com/watch?v=xoVJKj8lcNQ


Yeah, briefly perusing the founder's wiki page, I perceive this as the standard east-coast chattering-class non-profit take on social media (fake news, it's hurting our kids), but the AI-pause people are coming from a considerably different perspective.


Isn't pausing public deployment essentially the same as pausing development, given that it cuts off the company's revenue from their research? I'm sure some will continue, but way less.


No one who advocates "pausing AI development" sees Palantir as much of a threat anyway.


Palantir makes one of the AIs more likely to go into kill drones, etc., so in that sense it is a threat.


Wrong company. That's Anduril.

Palantir is mostly in the analytics, ingest, and business intelligence space. That said I think they are making an AI Platform of sorts, but it's not that impressive.


Anduril is hiring, for anybody looking: there are lots of positions all over the globe, all skillsets - https://jobs.lever.co/anduril/

edit:

There were some interesting responses, but it seems HN has pruned the entire comment chain.

I'd like to encourage others to consider whether censorship is necessary in adult spaces, especially HN - I did not ask to be insulated from criticism. Critical comments are a chance to have a healthy discourse.

For what it's worth, to answer the person who asked me what it feels like to be a murderer (since their comment didn't deserve to be censored): I don't feel like a murderer. This is my first time even hearing about Anduril, but I did notice that there is a whole spectrum of jobs open on their page, and it felt right to pass that on.

Given that we're all adults here, capable of moderating our own emotions, I'm not impressed that the comment chain was removed, and am hoping to interpret this in the most charitable way possible: that an automated measure may have been responsible.


I find their posted salaries really odd.

They offer $112,000 - $198,000 a year for this position, which requires some pretty wild math competency, electrical engineering, simulation experience, and a security clearance:

https://jobs.lever.co/anduril/b78e5e78-f09f-4be8-995c-71ead3...

Or you can write React and TypeScript for $160,000 - $240,000 a year

https://jobs.lever.co/anduril/610cb495-bfd9-4a0c-8c55-a2756a...


The first one is for Software Engineer, and the second one is for Senior Software Engineer. The difference is in level.


Embedded devs make less money than front end devs. Might not be reasonable or fair but it isn’t unique to this company.


Absolutely bewildering to me.


[flagged]


I don't know, but I imagine - for the right target in the right situation - it would feel pretty good.

George Orwell put it best: People sleep peaceably in their beds at night only because rough men stand ready to do violence on their behalf.


The real world isn't pretty. Societies maintain a monopoly on violence, so all politics is inherently violent. Every political opinion you hold would, or does, enforce state violence against some group of people.

This comment is childish; it's like saying "why can't we all just get along, man".


Palantir does intelligence analysis, meaning their code might not go directly into a drone, but their code makes or influences the decision of when the drone will fire.


Wait. We don't know if the anti-AI crowd is funded by Western adversaries...


It's safe to assume that it is; that doesn't mean much.

AI risks include, but are not limited to, "if you get this wrong everyone dies" and also "if you get this right, your idea of 'good' is indelibly imprinted on the future light cone, and we don't fully agree with your idea of good".


Alex Karp and Peter Thiel... the famously trustworthy individuals guiding our society.


Palantir is Infosys for governments. This remark is a psyop by Karp to make them seem like some frontier CS place.


The call to "Pause AI dev!" reminds me of the COVID-era call to "flatten the curve!".

It sounds reasonable, and if it was possible to do, might be effective.

But it's absolutely not possible after a certain point, and that point is past.

The people who don't realize that just look silly, or they have an agenda.


At least that had some obvious problem it was trying to solve. Here there's no proposal as to what would happen in those 6 months, what's at stake if we don't, why 6 months specifically, how you could even plausibly get cooperation when it's about competition rather than viral spread, no attempt to get cooperation from other countries (e.g. China), who would even come up with whatever restrictions in those 6 months and what they would look like, etc.

The reality is the cat is out of the bag in so many ways; in particular, if you're a bad actor, the LLaMA models are available to you. It's already possible to create infinite propaganda/spam now. What would 6 months possibly solve?


Only those with “no products” want a pause.

I presume he means among producers of AI products. But factoring in others (like researchers, lawmakers, concerned citizens, me, Sarah Connor) you could flip that around to 'only those with products don't want a pause'. Which is a bit like meth producers not wanting to surrender their labs.

But something tells me Alex Karp isn't concerned about ethics.


Friendly FYI that Palantir is considered an AI stock - unfortunately, neither the BBC nor newsnotfound felt the need to mention it.


Who considers it an AI stock and why was this compulsory to mention?


OK, it's only in the iShares Exponential Technologies ETF, as the 3rd entry.

There has been press hype linking Palantir with AI however:

FT: AI has given Palantir its mystique back

The Register: Former NHS AI leader joins US spy-tech firm Palantir

It’s fairer to say it probably aspires to be seen as an AI stock.

They’re financially incentivised to lobby for unregulated AI to realise “upside”.



Somewhat off-topic, but this puzzles me to no end: How come Peter Thiel, an outspoken libertarian, becomes a government contractor, doing data collection and analysis for spying and surveillance on the masses? Like isn’t that what libertarians warn against all the time?


Libertarians love government contracting. The less the government does itself, and the more it contracts out to corporations instead, the happier libertarians are. Many libertarians even want to abolish the US Postal Service so that even that work can be contracted out to corporations.

These kind of libertarians are all about "economic liberty", e.g. unfettering and deregulating corporations. They care little for individual liberties; a surveillance police state administered by corporations is right up their alley. In their view, the problem with the Stasi is the Stasi wasn't a for-profit corporation, and was consequently inefficient at what they did.


Libertarianism is a spectrum that people from all sides of politics fall on. He's a small-l libertarian and leans that way on a ton of social issues, but small-l libertarian != big-L Libertarian, so most with that title are not ancaps.


[flagged]


Another manifestation of "everything I dislike is fascism"


There are plenty of things I dislike that are not fascism: soup as a main course, jogging, different flavours of authoritarian regimes.

Where I live no one pretends "libertarianism" is a real ideology. This has some upsides. For example, we can call a spade a spade. This doesn't, however, stop Peter Thiel's company from receiving enormous sums of money from the government to set up illegal, massive surveillance operations over its citizens.

What should we call those who use their massive wealth to undermine democracy and relax legislation so they can do as they please, if not fascists? Is "libertarian" already a clear enough term?


I’m sure you’d agree that the libertarian side of all parties, including the left, would count as fascism then?


No, that wouldn't make any sense. I would count them as anarchists.


To Palantir's credit, at least they're objecting publicly.

I assume many parties will keep pushing the envelope on LLM training (and sometimes deployment) regardless of regulations/pledges/treaties.


I find it much more likely we will be wiped out by weaponized biology long before AI becomes a threat. Though I'm pretty surprised we haven't already killed ourselves from it.


The easiest thing we can do to destroy ourselves is to continue on as we are without changing our use of fossil fuels, concrete, farming practices, transportation, etc.

The mass extinction, combined with a 2°C+ world, will kill enough of us fast enough that most of these imagined problems probably won't be a threat.

Who will care about killer robots when feeding ourselves, getting clean water, and finding clean air will be a constant challenge?


Easiest? Perhaps. Certainly the default if nothing else were to happen. But between PV, vat-grown meat, the fact that other edibles like Quorn and yeast and Spirulina are old-news vat-grown foods, etc., I'm focussing most of my attention on the killer robots that want to maximise shareholder value by ${insert any of a bajillion examples of how almost any goal expressed in natural language has a literal interpretation that's catastrophically bad}.


The interlocutor here in this scenario isn’t “AI,” it is, as always, unfettered capitalism.

I'm all down for regulation and for forcing AI companies, like the others, to pay for the externalities they impose on us.

It isn't a rogue language model that is suddenly going to brute-force its way to sentience and kill us all; it's billionaire tech companies with profit motivations. That they can point the finger at a computer and say, "AI did it and you didn't listen to us when we tried to warn you," is some fabulously weird post-capitalism.


> The interlocutor here in this scenario isn’t “AI,” it is, as always, unfettered capitalism.

One of them is.

AI also enables every conversation to be transcribed and indexed in a way that would make the Stasi envious.

This is because evil isn't limited to fat-cats twirling moustaches — such villains are just the easy-to-spot examples in the modern world because the big scary Communists fell on their faces decades ago.

> it’s billionaire tech companies with profit motivations

Could be. Could also be the next Pol Pot.

Did the PRC's Great Leap Forward cause such death on purpose, or by accident? Were Stalin and Sir Robert Peel malicious or incompetent with the Holodomor and the Great Famine of Ireland, respectively?


AI is likely to help various groups to weaponise biology. Amongst other things.


I think what will ultimately do us in is reckless chemical pollution on a grand scale. Slow and steady.


Like bio weapons? Wouldn’t AI make that problem much, much worse?


Absolutely. My point was that it seems more likely we'd do it without AI before AI gets a chance.

Once AI gets to that level we will also have AI helping to defend against it though.


I find these arguments to pause AI development laughable. It feels similar to Western countries calling for a pause on fossil fuel use after they destroyed the effing world.


It's much worse than that, because AI could increase productivity by 10x or more with little environmental impact.


If anything, we’ll hopefully get a definition of AI out of all of this. Worst case we’re going to ban matrix multiplication or the GPU acceleration thereof.


Pausing AI is like pausing the economy... Countries that don't pause it will be at an advantage.


If anyone wants to see how this will go: https://horizon.fandom.com/wiki/Faro_Plague


I share the sentiment that many calls to pause AI development come from fear of market disruption and fear of competition, not from genuine concerns over safety.

Especially coming from the people who created attention machines that have changed the course of elections, had a major impact in spreading misinformation during a global pandemic, and are contributing to a mental health crisis. And yet we don't hear them call for more regulations for their own markets...


Palantir can stick their agenda, their company, and their CEO, right up their own jacksie.

Quote me.


If Palantir is rejecting it then we definitely should


Palantir CEO Alex Karp has rejected calls to pause the development of artificial intelligence (AI), stating that it is only those with “no products” who want a pause.

Karp believes that the West currently holds key commercial and military advantages in AI and should not relinquish them.


Somehow reading "the West" made me think their "enemy" is the Russians, and now I'm thinking of RussianGPT. Version 1 is a depressed literary figure who likes to talk about heavy stuff. Version 2 is basically Baghdad Bob declaring victory in the face of losses.

Meanwhile version 2 has also been found in many places, like Mar-a-Lago and 10 Downing St, London.


It is only those whom nobody is talking about who do not want a pause. They declare "no pause" so that people may talk about them. Hence.


See also - those who do want a pause. Some of them, at least.


> Karp believes that the West currently holds key commercial and military advantages in AI and should not relinquish them.

Delusional take, considering the East holds the AI advantage.


Yeah, I'm sure you have better insight and way more knowledge about the state of AI than Karp. What does he know anyway? It's not like Palantir has any special access to AI or the US military.


I'm sure the CEO of a US company whose clients include the US government would tell the absolute truth in a public forum, and not platitudes that would paint his company and clients in the best possible light.


He has a vested interest in saying what is necessary to prevent regulation.


Very curious to hear your reasoning!


I'm mostly trolling to stir the pot and see HN's response, and you don't disappoint. :)

But since we're all speculating here, if I had to place my money on who will come out on top with AI, it would be the East. And by East, I specifically mean China.

While the West is scrambling to limit the pace of AI development, no such regulation exists in China. There have certainly been regulations to limit the power of Chinese companies in the past few years, but ultimately the shot caller is the CCP. An autocracy can direct its resources in a single direction much more efficiently than any democratic government.

Then there's the manufacturing side. The West does still control the most advanced chip fabrication technologies, but China is quickly ramping up development with results that rival those of TSMC/ASML[1]. If any country can be self-sufficient with chip fabrication, it's China. There's only so much harm US sanctions can do to a self-sufficient country.

[1]: https://thediplomat.com/2022/08/chinas-semiconductor-breakth...


Thanks for the honesty! That is of course against hacker news guidelines, but oh well. Specifically “Eschew flamebait.”

Regarding the topic at hand: that’s plausible. I wouldn’t describe the west as “scrambling to limit AI development” though, more “establishing conventions and rules and accountability and and and”. I guess that might slow down research, but I don’t see it as that simplistic, like a blue bar ticking up in Civ; AI will need some major paradigm shifts before it gets really crazy, and I think a healthier and more sustainable field helps that rather than hindering it.

Plus there’s the fact that “the west” (kinda hate these terms tbh) has ~66% of all CS research - https://m-cacm.acm.org/magazines/2022/4/259407-trends-in-com...


In my defense, I wasn't flamebaiting, as I honestly believe what I said. It's just a controversial topic, and I expected the negative response.

> I wouldn’t describe the west as “scrambling to limit AI development” though, more “establishing conventions and rules and accountability and and and”.

Well, the discussions here are about pausing AI development. There's growing public concern about its capabilities, and democratic governments must, in theory, answer to their citizens. An autocratic government can impose its will on the people, and even though China has passed AI regulation recently[1], that regulation ultimately must serve the goals of the CCP. The Chinese government has much more power to direct its resources toward ensuring it ends up on top in this race, while Western countries will be bogged down by bureaucracy, red tape, and more stringent regulation. Not to mention that the CCP doesn't share the same moral and ethical concerns. So any lead the West has at the moment will not remain in a few years.

But this is entirely an opinion from a layperson, and I have no expertise in the matter. Cheers for the civil discussion!

[1]: https://www.reuters.com/technology/china-releases-draft-meas...


The fastest train still has to stay on the tracks. I think OP's point is that the potential for devastating knock-on effects from a mismanaged AI effort can cause overall slowdowns. The red tape and bureaucracy may slow things down from some theoretical maximum, but a train can't fly, so "as the crow flies" is only a theoretical route for a train.


Are there any regulations on AI in the US? I’m not aware of any.

> An autocracy has more power and influence to direct its resources in a single direction much more efficiently than any democratic government.

Fortunately, because it’s an autocracy, there is no way to know if it is the right direction, until it fails.


> Are there any regulations on AI in the US? I’m not aware of any.

I worded that poorly, but what I meant is that the current concerns with AI, and the discussions about pausing AI development, are happening entirely in Western countries. China will regulate its companies to comply with the goals of the CCP, but they don't have the same ethical and moral concerns as the West.

> Fortunately, because it’s an autocracy, there is no way to know if it is the right direction, until it fails.

They can also afford to fail fast. It's the "move fast and break things" approach to government. :)


I was going to say! The US is infamous for treating corporations with kid gloves and letting them basically get away with whatever they do. If there is a place in "The West" that's going to regulate AI, it's not going to be the US.


“Are there any regulations on AI in the US? I’m not aware of any”

There are, yes. Environmental regulations that impact hardware. Personal-information and privacy regulations that impact the training data. Business regulations that impact what businesses and business owners can do with labor, financial reporting, etc. We also have a fourth estate, so if people like Yudkowsky get enough traction in the media, the government will be forced to act. That is worse than regulation by government; it's regulation by the whim of the media owners.

These are not AI regulation per se, but they are de facto restrictions that aren’t present to the same degree in communist China.


I would scarcely call it 'reasoning'; it's closer to 'wild guessing'.

I think 'the East' has working multi-language LLMs, but the West's versions don't perform quite as well across multiple languages.


How so?


Two words: Nvidia, TSMC


NVIDIA is based in Santa Clara. TSMC is hosted by a strongly west-aligned country.


It doesn't matter where it is based. Nvidia relies on TSMC's 7nm (N7) process for its GPUs, and TSMC is based in Taiwan. Both are at the mercy of China, from trade routes running through the South China Sea to raw materials sourced from China.

> TSMC is hosted by a strongly west-aligned country.

Which is right next to an increasingly hostile neighbour who keeps threatening to invade it. Unless the West is willing to go to war with China over TSMC, I don't see how long it can maintain the advantage.

Even Apple and Qualcomm depend heavily on TSMC. Let's face it: AI is primarily driven by GPUs sourced from Nvidia, which depends on TSMC. China has been eyeing Taiwan for a long time now, and we never know when it might decide to finally invade. There is no clear roadmap from the West on how it will defend Taiwan (except for suggestions that the US should blow up TSMC, or that TSMC should implement a self-destruct mechanism in the event of a Chinese invasion [1]).

[1]: https://theprint.in/world/us-scholars-advice-for-taiwans-top...


Nvidia doesn't NEED TSMC; they actually proved as much with the success of Ampere on Samsung chips.


Samsung isn't really safe either. It is headquartered in South Korea, with two really hostile neighbours. After Taiwan, South Korea could very well be China's next target.


led by UC Berkeley engineers...


Not least because the East manufactures effectively every semiconductor in use for training machine learning models, or for inference.


By the East, you mean Samsung (South Korea) and TSMC (Taiwan), which are U.S. allies, using designs from American firms (NVIDIA/Google)?


CEO of what again?


Palantir


A weird military contractor with one of the most peculiar internet cult followings of any company (seriously - if you thought Tesla fans were a bit much, check out the Palantir crowd), who believe it's the next big thing in tech despite it consistently performing ... okay. Big overlap with the WSB-like social media investment community.


The F-35 performs … okay. That doesn’t mean I would want to go up against it in a fight


It's SAP-style software with a slightly prettier UI, not a cool jet fighter. Your analogy was a perfect example, though.


I wouldn't want to go up against a Spitfire, but donating some to Ukraine would probably be considered rude rather than helpful.

(Although, given how Russia is doing, I wouldn't be surprised if a Spitfire won a few battles…)


Nobody pretends Lockheed Martin is the next Tesla or Nvidia. Palantir fans do.


I am in general unconcerned with AI risks. However, AI should be permanently banned from use in war. There is no good that can come from this.


Good luck enforcing that.

With most high tech it's rather the other way around: the technology gets used first in the military, and after some delay the general public receives a tamed version to play with.


You mean to pull a trigger, correct? There are plenty of other areas where AI can shine like (excuse the layperson terms, I don't know what these are officially called) threat identification, intelligence gathering, logistics, population control.


https://en.wikipedia.org/wiki/Command_and_control#Derivative...

Useful list here for some of what you're looking for.


How would you enforce that? And how would you fight an enemy that doesn't fight with their hands tied behind their backs and uses any means available to them to kill you? Including AI of course.

IMHO, there's nothing humane about indiscriminately lobbing large amounts of explosives at each other just because you don't have the technology to aim properly and strike more surgically, which are two things you might be able to do with AI. The Ukraine conflict is a stark reminder of how dehumanizing old school war can be. Scorched earth, lots of lives being lost, entire cities that get leveled to the ground.


Is it possible to successfully ban AI research by militaries?

Isn’t that asking them to surrender instantly to any military who keeps the research going?


I don't understand HN sometimes. Why is your comment being downvoted? As other comments point out, it is practically very hard to ban anything when national governments are the ones doing the very thing much of humanity would like to ban - AI in warfare, in this case.

Does that mean we can't talk about it or raise questions? After all, it is hard to prevent nations from acquiring nuclear weapons too, but a tremendous amount of effort is spent trying, isn't it?


What do you think is going to happen with AI in war that is so deplorable? When I think of AI in warfare, I think about how to strategize against an enemy. Like how to deploy troops and resources in the most efficient way possible. Or how to relay communications between different parts of the military that don’t normally talk to each other.


Perhaps hunter drones that wait for their specific target, anticipate their moves/escape routes, and ultimately, utterly destroy them.

Or perhaps AI can accurately determine the number of civilian casualties that can be inflicted while staying below the threshold of public outrage.

Just spitballing here.


I think the more horrific idea of a "hunter" isn't really any different from what a human soldier is capable of. What an automated solution can do better, though, is enforce rules of engagement and restrict war crimes. Computers do exactly what you tell them to, while humans always have discretion.


Good luck, we tried it already with nukes!


It's probably been used in war for decades now.


Intelligence agencies have certainly been using it, but I could imagine the military was a bit more cautious with it given how much more can go wrong and how much worse the PR would be.



