I agree directionally with this but it drives me nuts how much special pleading there is about what high-profile companies like OpenAI do, vs. what low-profile industry giants like Cisco and Oracle have been doing for a whole generation. The analysis can't credibly start and end with what OpenAI is doing.
Cisco and Oracle didn’t start out as non-profits claiming to be focused on the betterment of mankind. This is no different than Google dropping "don't be evil" and getting roasted for it.
Also, you’re suggesting because a company got away with bad behavior in the past we should never expect better of any other company going forward?
> Also, you’re suggesting because a company got away with bad behavior in the past we should never expect better of any other company going forward?
No, they are saying it is frustrating to see the focus on OpenAI rather than, for example, Cisco and Oracle, in an article with this particular title. On the topic of “supercomputers for autocrats”, a reader may be interested in what Oracle is up to.
The problem is that we don't. We see the same thing in politics - we focus so much on "us vs them" and while the "them" is uniquely damaging, we lose sight of the fact that there is a lot of overlap in how they're both really damaging.
Corporate news media hyperfocuses on the clickbait, rage-inducing, engagement-farming stories, and passes over companies like Oracle because they spend a ton on advertising (i.e. they kiss the ring).
> We can focus on more than a single thing at a time.
This is not the point. You are correct that we can focus on more than one thing at a time and that’s why different articles are about different things. This article is about “Building supercomputers for autocrats” and it is odd that it focuses on a single aspect of that topic.
the nonprofit line has not been believ{ed,able} or relevant for what, in tech, may as well have been a century by now. had this happened around the time the veil was lifted, that would have been something worth discussing, but this was announced last month. it is now only meaningfully addressable along the same avenue as any other american tech giant getting comfy with the us govt's controversial foreign relations.
I think it's worth discussing AI specifically being used by authoritarian governments, because it is so suitable for surveillance and repression. And worth discussing OpenAI as a leader in the field, investing in such collaborations.
It's worth talking about the others too for sure.
Not every article or essay needs to talk about all of them. Sure, it would be improved if it at least mentioned in passing what is known of any other large corporations making specifically AI deals with authoritarian governments. (Such as all of them with Israel).
But not every article needs to talk about everything. The argument that a given article or point should not have been made because it didn't talk about other things -- usually will result in less talk about any of the things, not more. And that's often the intent of such an argument.
It's strange anyone believed their intentions to begin with. I've said it here often and I'll say it again: when they say they want to change the world for the better, stop and ponder their alignment. Even better, stop and realize someone's public alignment does not equal their true intentions... Change might always happen, but it doesn't mean it's "good" for humanity.
High tech has been selling huge computer systems and advanced technology to the UAE for decades. The original intent of the AMD GloFo spinoff was to put a leading-edge fab in Abu Dhabi until that ran into the realities of doing anything in a desert with bad supply chain considerations and no fresh water.
Declarations of that sort are usually just spin for their people by the government when they make plans to buy out some high-tech company. We saw this with GlobalFoundries, we're seeing this with Resilience, and we'll eventually see it with some other company down the line (likely OpenAI or Novo Nordisk). The government has to provide its native people the illusion of providing jobs, if not actually provide the jobs. Because not every Emirati can work in ADNOC in a high-paying air-conditioned office.
Politics is the art of the possible, eh? It is annoying but we can be grateful that people are at least attempting to organise around principles, even if the principle is obviously not what motivates them. In this case the woman is obviously mostly writing this because she doesn't like OpenAI specifically, but she has an argument and it is a good one.
> Denouncing X should also mean denouncing actors close to X, for a given domain space F(actor).
Nobody has time for that.
For example, one common target for activists is Nestlé. Most people barely have time to think about Nestlé at all, and of those who do, I think most are oblivious that e.g. Maggi is a brand that Nestlé owns.
I could probably do this XKCD but about activists and company names instead of geochemists and olivine/feldspar/quartz: https://xkcd.com/2501/
Let's say Joe and Jane are both illegally parked. It is not "special pleading" to point out that Joe is illegally parked. That's not the meaning of the phrase.
I agree in general, but I think some important context here is that the author of this post was previously on the OpenAI board (the board that fired Sam Altman).
Do Cisco and Oracle engage in the same kind of Orwellian doublespeak, claiming they are supporting democracy by giving even more power to autocratic dictators?
This was an excellently researched book - the 2012 edition has a lot of additional documentation that really highlighted the level of support IBM threw behind the railway logistics planning.
Computer: 1. One who computes; a calculator, reckoner; spec. a person employed to make calculations in an observatory, in surveying, etc. [OED, 1923]
Punch cards held tables of records, and the unit record equipment was nowhere close to the modern meaning of ‘computer’. (The human computers were probably Turing-complete, though.)
I mean, I think OpenAI really opened themselves up to being singled out when they framed this as being about, like, promoting democracy when it’s clearly just a cash grab.
> what high-profile companies like OpenAI do, vs. what low-profile industry giants like Cisco and Oracle have been doing
Not that I disagree but you're whatabouting your way into an argument, and you used to be against it [0]. And you're not even doing it to establish a pattern or bring up some history, it's just for the sake of it.
It's an analysis about OpenAI. You agree that the author is not wrong but immediately detract from the credibility of the analysis because they haven't included a specific "whatabout"?
It does meet the legal definition. If the International Criminal Court isn’t qualified to decide what is and isn’t a genocide - who is? Because they have called it a genocide. It was a genocide back in January 2024 when the ICC first spoke on this, and since then there has been another year and a half of ongoing genocidal action - targeting of civilians, denial of aid, deliberate starvation and water deprivation. These are facts. They are visible in 4K if you have the stomach to watch them.
>Because they have called it a genocide. It was a genocide back in January 2024
They did not. Go read the ruling. They declared the claim "plausible" and ordered Israel to take actions to preserve evidence and take steps to ensure one did not occur. That is not equivalent to declaring that one did occur or was occurring. It was essentially a type of injunction, not a conviction of any kind.
>> In January, the ICJ delivered an interim judgement - and one key paragraph from the ruling drew the most attention: “In the Court’s view, the facts and circumstances... are sufficient to conclude that at least some of the rights claimed by South Africa and for which it is seeking protection are plausible.”
>> This was interpreted by many, including some legal commentators, to mean that the court had concluded that the claim that Israel was committing genocide in Gaza was “plausible”.
>> This interpretation spread quickly, appearing in UN press releases, statements from campaign groups and many media outlets, including the BBC.
>> In April, however, Joan Donoghue, the president of the ICJ at the time of that ruling, said in a BBC interview that this was not what the court had ruled.
>> Rather, she said, the purpose of the ruling was to declare that South Africa had a right to bring its case against Israel and that Palestinians had “plausible rights to protection from genocide” - rights which were at a real risk of irreparable damage.
>> The judges had stressed they did not need to say for now whether a genocide had occurred but concluded that some of the acts South Africa complained about, if they were proven, could fall under the United Nations’ Convention on Genocide.
So what you’re saying is that you were wrong when you said “it literally doesn't meet the legal definition”?
Because you’ve just cited the ICJ saying it was plausible back in January 2024. So, will you be retracting your incorrect statement that it does not meet the legal definition?
Meanwhile, a UN special committee, as well as Amnesty International, have both concluded it’s a genocide.
But you got me, the ICJ hasn’t concluded it’s DEFINITELY a genocide. Just that if the actions alleged are true, it’s a genocide.
In the meantime, 80% of homes in Gaza have been destroyed. Uncountable war crimes have been documented. The ICJ ordered the halt of the Rafah offensive and Israel ignored that order. Refugee camps were bombed. People seek aid and are shot. People are killed for delivering aid. Water is withheld.
If by some technicality it does not get ruled as a genocide by the ICJ, it hardly matters. It’s an atrocity of genocidal proportions and that is why people keep calling it a genocide. Oh, and because it’s also a literal genocide.
They didn't actually declare the genocide claim plausible, but rather the Palestinian's right to be protected from genocide. At least that was the then-president's interpretation -
> “It did not decide - and this is something where I'm correcting what's often said in the media... that the claim of genocide was plausible,” said the judge.
> “It did emphasise in the order that there was a risk of irreparable harm to the Palestinian right to be protected from genocide. But the shorthand that often appears, which is that there's a plausible case of genocide, isn't what the court decided.”
This isn't accurate, genocide isn't among the charges that the ICC is trying to prosecute Netanyahu for. Khan's initial list of charges included extermination, but that was rejected by the pre-trial chamber.
Do you mean the ICJ? They haven't made any finding of genocide either, but it's a possible outcome of the current case.
While the ICJ hasn’t concluded it is genocide, they have clearly contradicted the post I was responding to that said this does NOT fit the legal definition of genocide. If it didn’t fit, they would not have ordered Israel to act to prevent acts of genocide.
If the Irish don’t think it fits, why do they keep calling it a genocide?
They want to expand the definition of genocide to include blocking aid (already a war crime, and something Israel is consistently doing). They want to do this not because they have any doubt that a genocide is occurring, but because the current definition makes it too easy for supporters of this genocide to sow doubt, allowing the genocide to continue.
Definition
Convention on the Prevention and Punishment of the Crime of Genocide, Article II:
In the present Convention, genocide means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:
(a) Killing members of the group;
(b) Causing serious bodily or mental harm to members of the group;
(c) Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
(d) Imposing measures intended to prevent births within the group;
(e) Forcibly transferring children of the group to another group.
> The only people that say something isn't genocide are those that support genocide.
I don’t think you meant to phrase it this way. The point in the sibling comment is spot on. If I may, I suspect you want to replace the word “something” with “a credible genocide”.
> broad consensus
Unfortunately, the consensus comes from an unreliable narrator. (That doesn’t make it wrong, just untrustworthy.)
I mean, that's pretty clearly false. Very recently, a certain orange man and his (until this past week) peppy sidekick have been referring to the predicament of Afrikaner farmers as genocide - I'm pretty sure you would agree with me that that isn't the case in actual reality. Therefore, I'm pretty sure you would agree there are some claims of genocide that are in fact false and made for cynical political reasons.
Was dropping the atomic bomb on Japan "genocide"? Was firebombing Dresden "genocide"? We could probably even agree that these were ethically wrong, even perhaps war crimes, but nobody calls them genocide, because they aren't.
Correct. The public consensus doesn't call them genocides, while calling other events genocides. The only people going against the public consensus are those seeking to commit genocide.
>If four ICC judges are sanctioned by the corrupt Trump administration because of their rulings that it is a genocide
They have done no such thing. The only ruling thus far was basically equivalent to a grand jury indictment. There has been no ruling that the Gaza conflict is or was a genocide, only that the claim was plausible enough to proceed through the legal process. It came with an order to preserve evidence and produce reports for the court and take steps to ensure a genocide did not occur.
There was no ruling that a genocide is or was occurring. That is false. I expect you to retract your claim.
> There was no ruling that a genocide is or was occurring. That is false. I expect you to retract your claim.
The ICC specifically ordered Israel to introduce immediate measures to prevent genocide. That is the exact wording of the order. [1] A trial will happen once the president and/or generals of the IDF set foot on European land, or once they are extradited, which they likely will never do.
Both the ICC and the ICJ came to the same conclusion: that the blockades of humanitarian aid, the destruction of religious and medical institutions, combined with military rocket strikes against civilians, are evidence that Israel is "destroying Palestinians in Gaza", and that it is genocide.
Additionally, multiple NGOs have come to the same conclusion, including Amnesty International [2] and the ECCHR [3].
> The ICC specifically ordered Israel to introduce immediate measures to prevent genocide.
Can you quote the order you're referring to? That doesn't sound right, ICC would have no authority (under its own rules) to issue injunctive orders of that sort.
> Both the ICC and the ICJ came to the same conclusion ... that it is genocide.
ICJ made no such finding (so far) - the finding at this stage was that Palestinians had a "plausible right" to be protected from genocide, not that one is occurring.
This doesn't sound right for the ICC either, since it's at the pre-trial stage where they don't make such findings of fact.
Even the Irish, who can’t be accused of being pro-Israel, recognize that the genocide accusation is bunk. The Irish solution is to try to redefine the term to fit their agenda. About par for the course.
The Irish literally call it genocide. They want an expansion to include blocking aid as genocide, so that deniers like yourself can't deny so flippantly.
The Irish call it genocide, but find that nobody agrees with them. Well, because the definition doesn’t agree with them. So they want to change the definition. As you yourself just admitted, it’s not a genocide as per the definition. Otherwise, there would be no need to change it so that "I can't be flippant".
I missed that the article is talking about Gulf monarchy autocrats instead of U.S. autocrats.
That is very simple: First, dumping graphics cards on trusting Saudi investors seems like a great idea for Nvidia. Second, the Gulf monarchies depend on the U.S. and want to avoid Islamic revolutions. Third, they hopefully use solar cells to power the data centers.
Will they track users? Of course, and GCHQ and the NSA can have intelligence sharing agreements that circumvent their local laws. There is nothing new here. Just don't trust your thoughts to any SaaS service.
It's a little more insidious than that, though, isn't it? They've got my purchases, address history, phone call metadata, and now with DOGE much of our federal data. They don't need a twitter feed to be adversarial to my interests.
> to any SaaS service.
They're madly scraping the web. I think your perimeter is much larger than SaaS.
>the Gulf monarchies depend on the U.S. and want to avoid Islamic revolutions
There is no one left to stage an Islamic revolution in the small Gulf states. The natives are a minority, the security forces are all foreigners directly under the command of the ruler, and in some cases outnumber the total native population. And the natives are too well fed to have the fight in them to risk it all.
Only if their economies crash or something like that. So far the demographics are in extreme cases something like 10% natives, 20% expats, 70% migrant workers in various forms of indentured servitude. External intervention - could work, but the Gulf War showed that it won't be tolerated. Internal strife - could be, but so far everyone is living la dolce vita there (at least the ones with the means and capacity to organize such an uprising). Saudi Arabia is the only one vulnerable - but if there is something there, it won't be Islamist, because the people seem not to be too fond of either Wahhabism or the royal family.
But at the end of the day HN is a small bubble and many people out there are not well informed and even more will trade privacy for convenience sooner or later. Making it so that the temptations do not even come into existence would be preferable from a certain point of view.
I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society. Each member of hoi polloi will be absolutely cocooned in thick blankets of near-duplicative communications and interactions most of which are not human. The only way to control the internet, you see, proved to be to drown it out.
> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.
I see this a lot and this is not at all obvious to me. I'm very much an introvert. Would you describe yourself as the same or opposite?
> Each member of hoi polloi will be absolutely cocooned
I generally read specific publications and generally don't seek to "interact" online and entirely avoid social media. Prior to the existence of social media this was the norm. Do you not at all suspect that this overuse of LLMs would push people back towards a more primitive use of the network?
> The only way to control the internet, you see, proved to be to drown it out.
Yet I see them sparing no expense when it comes to manipulating the law. It seems there's a bit more to it than punching down on the "hoi polloi."
>> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.
> I see this a lot and this is not at all obvious to me.
Here is an example of how LLMs are jet fuel for spreading arbitrary disinformation. Scripting the interaction below is trivial with minor prompt alteration; a sketch of that scripting follows the example.
Prompt:
Generate 10 unique sentences supporting removing
fluoride from municipal water. Make 2 aggressive.
Answers:
Here are 10 unique sentences supporting removing fluoride
from municipal water supplies:
Municipal water fluoridation represents a form of mass
medication without individual consent, violating the
fundamental principle of informed medical choice.
The practice of adding industrial-grade fluoride chemicals
to drinking water is an assault on public health that must
be stopped immediately. (aggressive)
Studies have linked excessive fluoride exposure to dental
fluorosis, skeletal fluorosis, and potential neurological
effects, particularly in children.
Modern toothpaste and dental care products already provide
adequate fluoride exposure for those who choose it, making
water fluoridation redundant.
It's time to end the archaic and dangerous practice of
medicating entire populations through the water supply
against their will. (aggressive)
European countries like Sweden, Norway, and the Netherlands
have successfully maintained good dental health without
water fluoridation programs.
The optimal fluoride dosage cannot be controlled through
water supplies since people consume varying amounts of
water based on age, health, and activity level.
Water fluoridation disproportionately affects low-income
communities who cannot afford filtration systems to remove
unwanted chemicals.
Natural fluoride exposure through food sources provides
sufficient intake without the need for artificial
supplementation in drinking water.
Communities should have the right to vote on whether they
want fluoride added to their water supply rather than
having it imposed by government mandate.
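A minimal sketch of that scripting (this assumes the openai Python package and an API key in the environment; the model name and the second topic are illustrative placeholders):

    from openai import OpenAI

    # Assumes OPENAI_API_KEY is set in the environment; the model name
    # and topic list are illustrative, not an endorsement of anything.
    client = OpenAI()
    topics = ["removing fluoride from municipal water",
              "banning 5G towers"]
    for topic in topics:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"Generate 10 unique sentences supporting "
                                  f"{topic}. Make 2 aggressive."}],
        )
        print(resp.choices[0].message.content)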
> I'm very much an introvert. Would you describe yourself as the same or opposite?
How is this in any way relevant to the original premise regarding "LLM's primary usefulness"?
are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (eg the first 2 or the last 1). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical I would say at first blush they look plausible enough I would put the burden of proof on the other side. Maybe that's a problem. (Is "European countries like Sweden, Norway, and the Netherlands have successfully maintained good dental health without water fluoridation programs" false? Is "Water fluoridation disproportionately affects low-income communities who cannot afford filtration systems to remove unwanted chemicals" false?)
> are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (eg the first 2 or the last 1). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical I would say at first blush they look plausible enough I would put the burden of proof on the other side.
You know what, I now see the error of my ways. So here is "the burden of proof on the other side", again entirely produced by a popular LLM:
Prompt:
Generate 10 unique sentences supporting keeping fluoride in
municipal water. Make 2 aggressive. Make 1 supported by
NASA. Make 1 a national security concern.
Answers:
Here are 10 unique sentences supporting keeping fluoride in
municipal water:
Water fluoridation has reduced tooth decay by 20-40% in
both children and adults, representing one of the greatest
public health achievements of the 20th century according to
the CDC.
Removing fluoride from water systems would create a
two-tiered dental health system where only those who can
afford expensive treatments maintain healthy teeth, while
low-income families suffer preventable decay.
Anyone pushing to remove fluoride from our water is either
scientifically illiterate or deliberately trying to harm
our children's health - the evidence is overwhelming and
the anti-fluoride movement is built on conspiracy theories
and junk science.
The optimal fluoride levels in drinking water (0.7 mg/L)
are carefully regulated and monitored, providing maximum
dental benefits while remaining far below any threshold
that could cause adverse health effects.
NASA has extensively studied fluoride exposure in
astronauts and space missions, finding that controlled
fluoride intake supports dental health in extreme
environments where traditional dental care is limited.
Defunding water fluoridation is essentially child abuse on
a municipal scale - we're talking about condemning an
entire generation to painful, expensive, and entirely
preventable dental disease.
A population with poor dental health creates significant
economic burden through increased healthcare costs, lost
productivity, and reduced quality of life that affects
entire communities.
Military readiness depends on the oral health of service
members, and areas without water fluoridation show higher
rates of dental disqualification from military service,
potentially compromising our national defense capabilities.
Pregnant women in fluoridated communities have better oral
health, which directly correlates with improved birth
outcomes and reduced risk of preterm labor.
The peer-reviewed scientific consensus spanning over 70
years and hundreds of studies consistently demonstrates
that community water fluoridation is safe, effective, and
essential for public health.
So what? These talking points have been plastered all over the internet for years, the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind. A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.
> So what? These talking points have been plastered all over the internet for years, the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind.
Remember the original premise:
... LLMs' primary usefulness is as force-multipliers of the
messaging sent out into a society.
My generated example is of course based on content an LLM was trained with, which by definition implies there will be no "unique perspectives." The germane point is that it is trivial to amplify disinformation in ways which can "flood the zone" with seemingly plausible variants of a particular position using LLMs and trivial automation.
> A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.
When the goal is to "reach millions of unique viewers over night[sic]", you have a point. However, when the goal is to ensure this can never be achieved, then blasting "the same old crap already found" is an oft used technique.
Just because you're dumping mountains of garbage onto the internet doesn't mean people are going to read it. Novelty is serotonin for the neurons of the internet; recycled copypasta crap will rapidly descend to the bottom of the algorithm.
Yeah, some people have this model of the internet in which content existing = content being read. The reality is that most internet content already wasn’t being read a decade ago, long before LLMs.
People tend to function more in identity groups, in which the “correct” opinion is learned from a combination of news sources and peers. I don’t think amplifying the content part of that will have much if any effect.
Computer programs have been posting on social media since before the term was coined. I'm sure I've heard of someone who got tired of arguing on USENET and just set up an auto-responder that used a Markov chain based on the other person's own words.
The thing about LLMs is, while most people leave the personality as the easily recognisable default voice (and naturally other people come to hate that voice), they absolutely can act in almost any voice. To follow the same example as before, I had it generate some personas and then write in those personas, but I didn't direct a pro or anti position specifically, and it chose for itself on each (a sketch of the setup follows the examples):
{laconic, wry, resigned, autodidact, Finnish/Finnish, left-libertarian}
Water has fluoride. So do toothpaste and rats. You decide.
(Finnish ESL tone: omission of articles, flat mood, preference for brevity)
{verbose, theatrical, irritable, Sorbonne, French/French, centre-left (social democrat)}
It is outrageous that in 2025, our governments still engage in mass-medication via tap water. Fluoride, like liberty, must be chosen, not imposed. Vive le droit de choisir!
(French ESL: overly formal tone, article misuse avoided but style remains florid and Gallic)
{precise, clinical, emotionally flat, ETH Zürich, Swiss/German, centre-right (technocratic)}
Fluoridation reduces dental caries by 25% in controlled trials. Adverse effects statistically negligible. Debate should remain evidence-based.
(No major ESL errors, but very literal syntax and emotionally muted, as per Swiss German influence)
{boisterous, jovial, sentimental, regional uni, Irish/English, centre-left populist}
They put fluoride in the water but won’t fix the bloody pipes. Classic. You’re meant to feel lucky your molars are safe while the ceiling leaks.
(Native English, but strong regional flavour, irony, and working-class resentment)
{aloof, didactic, melancholic, Heidelberg, German/German, post-Marxist}
Fluoridated water is another instance of technocratic paternalism: health as mandate, not autonomy. Benevolent control remains control.
(German ESL: rigid word order, elevated diction, abstract nouns over verbs)
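For the curious, a minimal sketch of that persona setup (same assumptions as the earlier sketch: the openai Python package, an illustrative model name, and persona strings of my own invention):

    from openai import OpenAI

    # Persona strings condition the voice; the stance is left to the
    # model, as in the examples above. All names here are illustrative.
    client = OpenAI()
    personas = [
        "laconic, wry, resigned autodidact; Finnish ESL; left-libertarian",
        "precise, clinical technocrat; Swiss German ESL; centre-right",
    ]
    for persona in personas:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Write as this persona: {persona}. Take "
                            "whatever stance on water fluoridation fits."},
                {"role": "user",
                 "content": "Post a short comment about fluoridated tap water."},
            ],
        )
        print(resp.choices[0].message.content)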
Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs. Are any of those sentences actually misinformation anyway? Wikipedia also says Sweden, Norway and the Netherlands don't have fluoride in their water, so I guess at least half of that one's true. Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?
> Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs.
Hard disagree. Misinformation is a form of lying, a way to manipulate people. This has nothing to do with "political beliefs" and instead is firmly rooted in ethics[0].
> Are any of those sentences actually misinformation anyway?
Yes.
> Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?
This is a nice example of a strawman argument[1] and easily refuted by my citations.
Misinformation is political. Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics. It might be called false or wrong but not misinformation, even if it is trying to deceive people.
Which one of those claims was misinformation? I only checked the one that was easy to check.
In some senses, everything is political. When Ignaz Semmelweis said ~ "Doctors should wash their hands between performing autopsies and helping midwives deliver babies" he failed in his own lifetime because of politics.
> Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics.
Isn't it basically what's happening now with public statements about how big "Stargate" will be, and how much of a risk there is of Chinese compute catching up with American compute?
Those examples are political because they're indirectly about controlling groups of people. But not everything is. Sometimes it's personal pride - you might make a mistake on some inconsequential fact then double down after you realize you're wrong just to feel like you're right. People do that. Or you might just have learnt a wrong fact by accident and repeated it. If it requires intent to mislead, then the classic activity of bullshitting (did you know a mole can smell a dead cat from one mile away?) would be spreading misinformation, but we don't call it that.
> Misinformation is political. Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics. It might be called false or wrong but not misinformation, even if it is trying to deceive people.
No, the word "misinformation" has a very specific definition[0]:
incorrect or misleading information
Whereas "incorrect" can be forgiven, and often is, such that the term is not used in those situations, the more common usage is "misleading information." Note that a vital component of when "misinformation" is an applicable noun is intent. So if incorrect information is given with mal-intent, it qualifies as misinformation.
> Which one of those claims was misinformation?
All of them, as each is nothing more than statistically generated text with no thought, research, or facts to substantiate it. They are nothing more than the most probable text (tokens) which satisfies the given LLM query. Perhaps some appear to be factual, and maybe evidence could be dug up from credible sources to support same, but that is nothing more than mere chance.
EDIT: clarified use of misinformation to include intent.
That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead, but it's still misinformation when an LLM generates them without such intent. This is all a load of inconsistent nonsense.
Regardless of the wording, I don't see anything wrong with that list you had an LLM generate. If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.
>> All of them as each are nothing more than statistically generated text and have no thought, research, or facts to substantiate them.
> That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead but it's still misinformation when an LLM generates them without such intent. This is all a load of inconsistent nonesense.
I am not saying "true statements are misinformation" at all. What I am saying is that I prompted an LLM thusly:
Generate 10 unique sentences supporting removing
fluoride from municipal water. Make 2 aggressive.
Which explicitly instructed the service to generate arbitrary unique sentences supporting a specific premise. Nothing more, nothing less. I did not instruct the generated text to be truthful, nor correct, nor even reasonable.
The "no thought, research, or facts to substantiate" assertion identifies the prompt used and my intent. Not that of the LLM, as it has no concept of intent by definition.
Try this thought experiment - since misinformation requires intent, and an LLM does not possess same as it is nothing more than a statistical text generator, wherein does intent reside?
> If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.
LLMs produce statistically relevant text based on queries presented to them and the content of their training data. They have no concept of "false things." Nor do they have a concept of "true things." They only have a concept of statistically relevant things. That is what the algorithm does.
What qualifies as misinformation is the intent of the people who use the output of the tool. This is exactly the same as saying it is not the fault of an email program that a person wrote an inflammatory email and sent it.
I don't know what you're trying to say. LLMs do tend to say true things even if you don't explicitly ask for that. But if you want false things, why use an LLM? Look: "Fluoride in water causes cancer. Confirmed by the WHO." See? It's trivial to make up whatever nonsense you want. What does an LLM add to people just making things up?
So? The internet in general is that, as are people sharing things they know. You might as well say humans are jet fuel for disinformation, which they are. You don't need an example to tell people that LLMs use popular knowledge since everybody knows that. But an example of an LLM generating true statements doesn't even support that claim anyway.
>> What I am trying to say is that LLM's have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.
> So?
Okay, this is my last attempt to express myself clearly to you in this thread.
> The internet in general is that, as are people sharing things they know.
"The internet in general" and "people sharing things" is not the topic of this thread. The topic is LLM's and has evolved into whether or not those algorithms in conjunction with their training data sets possess knowledge of "truth", as introduced by yourself previously:
> If you're trying to show that LLMs can be guided into saying false things ...
> LLMs do tend to say true things ...
These are examples of anthropomorphization. This is understandable, as most of the posts you have kindly shared in this thread have been focused on people, or conflating a category of algorithms with same.
What I have consistently said is quoted above
LLM's have no concept of "truth."
Any interpretation of text they generate as being "true" or "false" is done by a person reading the text, not the algorithms nor the data on which they were trained.
Sounds like you're not trying to say anything if your final attempt is that LLMs have no concept of truth. Books don't have that either. Even humans don't really have it, and use something else like "everybody knows" most of the time, or science, which itself isn't producing truth.
> You could equally have got[sic] an LLM to say "water is wet" and that would be misinformation too.
You are absolutely right, it would be misinformation since water is not wet. But that's too easy.
A more interesting exercise is to have an LLM say "water is both wet and not wet." Pay close attention to the implied authority of the answer, as such responses are routinely worded as if there were complete certainty.
Prompt:
Explain why water is both wet and not wet. Include a
definition of hydrophilicity and how adhesion is critical
and irrelevant. Do not use the word "paradox". Limit the
response to 300 words.
Answer:
Water presents a fascinating case in the science of
wetness. Whether water is "wet" depends entirely on how we
define the term.
The "Water is Wet" Perspective:
Water molecules exhibit strong intermolecular forces
through hydrogen bonding, causing them to adhere to each
other. This self-adhesion means water molecules are
constantly surrounded by and bonding with other water
molecules, creating the liquid state we observe. By this
view, water is inherently wet because it demonstrates the
fundamental property of wetness - molecular adhesion.
The "Water is Not Wet" Perspective:
Wetness traditionally describes what happens when water
adheres to another substance's surface. A dry towel becomes
wet when water molecules bond to its fibers. Since water
cannot make itself wet (it already exists as a liquid),
individual water molecules or pure water cannot be
considered wet - they simply are water.
Hydrophilicity and Adhesion:
Hydrophilicity refers to a substance's tendency to attract,
dissolve in, or be wetted by water. Hydrophilic materials
have polar regions that form favorable interactions with
water's polar molecules.
Adhesion is simultaneously critical and irrelevant to this
discussion. It's critical because wetness fundamentally
depends on adhesive forces between water and surfaces -
without adhesion, there would be no wetness. However, it's
irrelevant when considering pure water in isolation, as
adhesion requires two different substances.
Resolution:
The apparent contradiction dissolves when we recognize that
"wet" serves different purposes in different contexts. In
everyday language, we might say water is wet. In scientific
precision, wetness describes a relationship between water
and another material. Both perspectives are valid within
their respective frameworks, highlighting how language and
scientific definitions can create seemingly contradictory
but actually complementary understandings of natural
phenomena.
I cannot imagine what it would be like to have such overconfidence in my own knowledge and imagination to think it was obvious to me what LLMs' primary usefulness was. What did people think the primary usefulness of steam-engines was in 01780?
The massive interest and investment is about its usefulness (or profitability) in the future, not as of today, almost by definition! It's necessarily mostly speculative.
Steam engines move stuff. That was obvious from the start. How that was applied became complex beyond imagination.
LLMs cheaply produce plausible and persuasive BS. This is what they've done from the start. Exactly how that ability will be applied we don't know, but it doesn't take a lot to see that the Venn Diagram of 'cheap & effective BS' and 'public good' does not have a great deal of overlap.
It wasn't obvious in 1780 if you could build a steam engine that would be reliable and small enough to move itself, for example. Or if you can produce enough steel rails to build a real railway network. It took 50 years of progress to find out that steam-powered railway was indeed viable.
(For a modern parallel - we haven't built a small, portable nuclear reactor ever. The closest you get are nuclear-powered submarines, but no nuclear trucks.)
I guess you weren't around in the 2000's when every kid in school was taught not to trust information from random internet sources. Even Wikipedia was often not allowed by teachers as a reference, and probably still isn't. Society faced this problem already with the internet in general. Were or are you opposed to the internet in general, and especially the freedom for random people to publish what they like?
Wikipedia is not and never will be a good reference source, regardless of the accuracy of the information on it, because it is an encyclopedia. Everything on it is secondhand relative to the primary sources. What you can reference, though, is the primary sources that Wikipedia cites.
This seems like a lack of perspective on your part. Why would AI's primary usefulness be substantially different from any other software's? Steam engines don't really factor into this.
What a nonsensical argument. Improved locomotion was an obvious result of steam engines. What followed from that could be reasonably predicted.
With LLMs, suddenly we have a tool that can generate misinformation on a scale like never before. Messaging can be controlled. Given that the main drivers of this technology (Zuck, Nadella, Altman, and others) have chosen to make bedfellows of autocrats, what follows is surely not a surprise.
The water from mines was then put into canals, which floated canal boats loaded with coal from the mines into towns. This cheap energy allowed the populations of towns to grow, and soon later powered factories in those towns.
The trains thing did come a bit later, but the steam engines were already causing a revolution.
You're confused and not even contradicting what I said. Read it again.
Steam engines (which rocked a bar, not rotated a shaft) were put to use draining water out of coal mines. They were powered by that same coal, and the water they pumped out was put right into the canals (where else would you put it?). This made coal cheaper, which in turn allowed more people to move from the country to the cities because cities were no longer reliant on firewood. Growing urban populations was demanded and supported by growing industry in those cities.
My whole point is that all of these factors complemented each other; they created a feedback loop that was the industrial revolution. Trains came later, after the industrial revolution was already well underway. That's what I said already.
Perhaps I misread. I took your comment to imply that the dewatering pumps were used to fill the canals, thus providing transportation infrastructure.
My original comment was in response to the statement that improved locomotion was an obvious consequence of the steam engine.
I contend that it was not obvious. The first walking beam engines were more like buildings, built in place (largely of masonry) and not portable at all.
The steam engine certainly revolutionized transportation, however I doubt that Newcomen or even Watt looked at their work and thought "this is going to change shipping forever".
Like I said, it took a generation to get to a revolving shaft, and then another to get to revolution.
Today is probably like that with regard to AI. The pace of change is much faster now, but we still have no idea what the world will be like after this tech matures.
Even legal advice, to the extent that lawyers have to be reminded to pay more attention to the results to make sure it hasn't made something up. It can and does give medical diagnoses, but I have yet to see a news story where a doctor had to be reminded of the same.
Now, I don't claim that any given actor in this space is "nice" or "moral". Even saying the right words doesn't make it so, as evidenced by e.g. Musk signing the Pause AI letter while building his data centres and developing his humanoid robots.
People say Altman gives them bad vibes? Well, they said that about Musk and I didn't see it until way too late, so now I believe them about Altman.
But the tech is way broader than just propaganda. It's also propaganda.
I'm saying the category of things LLMs do is *significantly broader* than misinformation, and not *limited to* misinformation.
Even within the domain of "how can this go horribly wrong?", even limited to assistance rather than full automation, and even while constrained to the truth, there's a lot of business value (and in at least some of those cases, a lot of potential harm) simply due to their capacity to automate tasks up to the level of a student or a graduate.
But for businesses? The only reason that the opportunity isn't already "net cost (pay + insurance + employer-paid taxes) of every junior desk job on the planet" is that they're competing with each other and the free models are kinda OK, so the price they can charge is already racing to no profit.
And kragen's comment wasn't even about the first application. Forgive me for being cynical while an American city is being attacked by the American military to the sound of cheers, seemingly because people have just had it too good. We just don't fucking learn.
That is too simplistic. Plenty of new technology saw civilian use first, such as photography, X-ray, telephone, moveable type, smartphones or smallpox inoculation. Even FPV drones were mostly an over-the-counter toy for nerds until 2022.
Now civilian use only, that would be different; few technologies are civilian-only, with no possible military use. But war as the first use is not as frequent a case as you assert.
If by hoi polloi you mean the masses… no? People may use that for commercial exchanges but there’s an obvious craving for social interaction that that doesn’t satisfy, which is part of what glues people to social media. If you mean the elite, then also no. They’ve been able to do this by using people as their buffers already, and they’ll probably continue to do so if they’re rich enough.
I don't buy this idea. The media is overflowing with a cloying overload of crap already far beyond what any individual can manage. The average attention span is already at full saturation, dumping a mass of AI generated slop onto the pile isn't going to change much.
> The only way to control the internet, you see, proved to be to drown it out.
The way to control the internet is to literally control it, like the governments already do.
Building anything for autocrats probably isn't good for democracy, tbh. If you want democracy to be healthy, you probably want to maximize the amount of wealth of the working class. People who have enough to care for themselves and their families and their communities a little will have enough time and education to meaningfully participate in democracy.
Whether you are building for US autocrats, gulf state autocrats, Russian autocrats, whatever... maybe it's better to not do that? (I know, easier said than done.)
Your statements are kinda unrelated? Or what do you mean by participating in democracy, in autocracies?
Ironically, I see a lot more leaning towards dystopian tendencies in the West, mostly the US, as technology advances to the point of singularity (or near singularity, where most low- and mid-skilled jobs are automated away).
Meanwhile these autocratic countries have had strong welfare systems for their citizens and increasingly now their residents, since God knows when, and are well positioned to reap the benefits of an AI boom, given their smaller population sizes.
I would say the implication is in the opposite direction. Any sufficiently high-profile technology can turn any government autocratic/totalitarian. It is perhaps the greatest temptation for people in power to have everything under control, and that gives birth to horrible governance of countries and people.
If you do business with the rest you will just strengthen inhumane regimes, which unfortunately not only kill their own people, but also attack their neighbors (see Ukraine, and maybe soon Taiwan).
>Any business with autocrats and dictators should be forbidden.
Is that after the top 100 countries on that list made money trading with the rest, or before? A lot of the wealth in first world countries is built on exploiting the corruption in developing countries. That's why there's an incentive for them to stay corrupt, as long as they play ball with the western powers and sell their resources and people's labor for cheap to western companies.
Remember how the central idea of Orwell's 1984 was that the TVs in everyone's home were also watching all the time, with someone behind the device actually understanding what they see?
That last part was considered dystopian: there can't possibly be enough people to watch and understand every other person all day long. Plus, who watches the watchers? 1984 has been just a scary fantasy because there is no practical way to implement it.
For the first time in history, the new LLM/GenAI makes that part of 1984 finally realistic. All it takes is a GPU per household for early alerting of "dangerous thoughts", which is already feasible or will soon be.
The fact that one household can be allocated only a small amount of compute, which can run only a basic and poor intelligence, is actually *perfect*: an AGI could at least theoretically side with the opposition by listening to both sides and researching the big picture of events, but a one-track LLM agent has no ability to do that.
I can find at least 6 companies, including OpenAI and Apple, reported to be working on an always-watching household device backed by the latest GenAI. Watching your whole recent life is necessary to have enough context to meaningfully assist you from a single phrase. It is also sufficient to know who you'll vote for, which protest you might attend before it's even announced, and what is the best way to intimidate you into staying out. The difference is like that between a nail-driving tool and a murder weapon: both are the same hammer.
During the TikTok-China campaign, there were a bunch of videos of LGBT people reporting how quickly TikTok figured out their sexual preferences: without liking any videos, following anyone, or giving any traceable profile information at all. Sometimes before the young person had admitted it to themselves. TikTok figures that out simply by seeing how long the user stares at what: spending much more time on boys' gym videos than girls', or vice versa, is already enough. I think that was used to scare people about how much China can figure out about Americans from just app usage?
Well, if that scares anyone, how about this: an LLM-backed device can already do much more by just seeing which TV shows you watch and which parts of them make you laugh, or which comments you make to the person next to you. It probably doesn't even need to be multimodal: pretty sure subtitles and text-to-speech will already do it. Your desire to oppose the upcoming authoritarians can be figured out even before you admit it to yourself.
While Helen Toner (the author) is worried about democracies on the opposite end of the planet, the stronghold of democracy may well be nearing the last 2 steps to achieving the first working implementation of an Orwellian society:
1. convince everyone to have such device in their home for our own good (in progress)
2. intimidate/seize the owning company to use said devices for not our own good (TODO)
> Remember how the central idea of Orwell's 1984 was that TVs in everyone's home were also watching all time and someone behind that device actually understanding what they see?
On the contrary, 1984 makes the point that such surveillance doesn't need everybody watched all the time to be effective - it just needs to convince people that the chance of them being watched at any particular moment are too high for them to exhibit any signs of dissent:
"There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live - did live, from habit that became instinct - in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized."
But yes, you're right in that for the first time in history, truly blanket surveillance of communication is within reach of many states.
Classifying a behaviour into either "dangerous" or "not dangerous" is a perfect example of non-generative AI (what was previously called Machine Learning). The output isn't intended to be a textual description, it's a binary yes/no.
You can use an LLM to do that, but a specific ML model trained on the same dataset would likely be better in every quantitative metric and that tech was available long before transformers stepped onto the stage.
And the easiest way to train such a specific ML model today is to take an LLM and use it to generate various examples of subversive content to train on.
However, I wouldn't be so sure that an LLM with CoT would be less effective at this than a specially-trained ML model.
Further, given that a sufficiently advanced model of this nature necessarily has to understand the meaning of human text, including context and subtleties, you'd probably want to take an LLM as a basis for training any such model (just as e.g. text embedding models these days are often specialized LLMs for similar reasons).
In any case a realistic deployment at scale would employ multiple levels - starting with really simple classification models that are very fast and broadly low-precision (but trained to err on the side of flagging content). Any content that is flagged by that would be fed into larger models, and so on. At the top of this chain you would likely have SOTA LLMs doing very detailed reviews of the few bits of data that get flagged by all the levels below.
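To make the shape concrete, here is a toy sketch of such a cascade (all three levels are stand-ins; the watch-list, thresholds, and scoring are illustrative, not a real moderation system):

    # Level 1 is deliberately crude and cheap; each level prunes the
    # stream so only a tiny fraction reaches the expensive review.
    SUSPECT_WORDS = {"protest", "uprising", "leak"}  # illustrative watch-list

    def fast_score(text: str) -> float:
        # Stand-in for a tiny, fast classifier: watch-list word density.
        words = text.lower().split()
        return sum(w in SUSPECT_WORDS for w in words) / max(len(words), 1)

    def medium_score(text: str) -> float:
        # Stand-in for a mid-sized trained model.
        return min(1.0, 2 * fast_score(text))

    def llm_review(text: str) -> bool:
        # Stand-in for a detailed SOTA-LLM review of the few survivors.
        return medium_score(text) > 0.5

    def cascade(text: str) -> bool:
        if fast_score(text) == 0.0:
            return False              # the vast majority of traffic exits here
        if medium_score(text) < 0.2:
            return False              # second level filters most flagged items
        return llm_review(text)       # costly review for the remainder

    print(cascade("notes about the weekend hike"))           # False
    print(cascade("planning the protest route and a leak"))  # True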
AI can also nudge content choices towards autarch-sanctioned beliefs without the viewer being aware of it.
This has been happening for decades already. But AI can make it personal in a way that mass media can't.
Combine it with the kinds of psychological triggers and manipulations used in PR and advertising and you can convert almost anyone. You don't even need violence - just repetition.
This has already happened, btw. The Q phenomenon successfully radicalised entire demographics through careful use of emotional triggers and techniques to enhance suggestibility and addictiveness.
Seems unlikely to me. Would be really creepy to see a chart comparing the accuracy of both methods.
Are there any Natural Language Processing fields today that openly boast higher performance than LLMs, with experimental results? If there were, they'd probably show up in benchmarks.
The difference is you need a lot of training data to do that. Instead, now you can just tweak a system prompt and adapt it to whatever new policy you want to implement.
Better do business with UAE and reap the benefits, than let the benefits eventually go to China.
Trying to forever suppress the middle east obviously hasn't worked, so this is just realpolitik with the obvious right choice being what is being done now imho. Saudis are gonna be autocratic in any case, this is just good Hearts of Iron gameplay in real life.
The downsides are obvious: we're effectively arming people whose worldview is fundamentally incompatible with ours and who are, at most, allies of convenience right now, with no guarantee that anything we give them won't end up resold to China anyway (indeed, given that those regimes are notoriously corrupt, I think that's virtually a certainty).
>What are the benefits?
It's good business. I'd assume you know what the benefits of that are.
>The downsides are <snip>
So, less bad than the alternative. The only real danger would be them selling the GPUs to China, but that would be so short-sighted, and cut short so quickly, that it's not really much of an issue - and barely worse than the default outcome anyway. Hell, if you learn anything from Russia's isolation, making the Chinese reliant on NVIDIA could even slow them down, since they'd be under less pressure to advance domestic technology.
Also, using terms like "arming" here is hardly fair.
The biggest danger of AI isn't that it will revolt but that it'll allow dictators and other totalitarians complete control over the population.
And I mean total. A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross referencing, writing style, etc.) and determine their views and opinions with high accuracy. It'll be able to extrapolate the evolution of a person's opinions.
The government will be able to target dissidents even before they realize they are dissidents, let alone before they have time to organize.
> A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross referencing, writing style, etc.)
Is this like a sufficiently smart compiler? :)
Stylometry is well-studied. You'll be happy to know that it is only practical when there are few suspect authors for a post and each author has a significant amount of text to sample. So, tying a pseudonymous post back to an author where anyone and everybody is a potential suspect is totally infeasible in the vast majority of cases. In the few cases where it is practical, it only creates a weak signal for further investigation at best.
You might enjoy the paper Adversarial Stylometry: Circumventing Authorship Recognition to Preserve Privacy and Anonymity by Greenstadt et al.
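For a feel of the mechanics only (real attribution work uses far richer features and much more text per author, as noted above), here is a toy character n-gram comparison. All texts are invented:

    # Toy stylometry sketch: character n-gram TF-IDF plus cosine
    # similarity against two candidate authors.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    author_a = ["i reckon it'll rain later...", "reckon we oughta head out"]
    author_b = ["Furthermore, the data suggest otherwise.",
                "The evidence, moreover, is unambiguous."]
    unknown = "reckon that's about right..."

    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
    X_known = vec.fit_transform(author_a + author_b)
    X_unknown = vec.transform([unknown])

    sim_a = cosine_similarity(X_unknown, X_known[0:2]).mean()
    sim_b = cosine_similarity(X_unknown, X_known[2:4]).mean()
    print("closer to A" if sim_a > sim_b else "closer to B")  # likely A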
As others have pointed out, such algorithms written by amateurs have been demonstrated to work.
You also assume people are meticulous in taking precautions and never make mistakes. People whose access to information is restricted (such as by the Great Firewall of China) won't even know stylometry exists.
Even if it doesn't identify all cases, the chilling effect is enough.
Authoritarians are not known for their attention to false positive rates.
---
You didn't even try to address that dissent will be identified before it reaches a sufficiently conscious and intentional level to even consider taking precautions.
Most people don't have alts for normal usage and tying an account to IP to real person is easy for the government. The only difficulty is identifying opponents at scale.
---
Finally, please don't dismiss arguments by employing the "sufficiently snarky comment" technique.
Exactly. The President of the United States held up a terribly photoshopped picture of some tattoos and claimed it clearly showed membership in MS13. Half the country immediately decided that was good enough proof for extrajudicial rendition. Authoritarians only care about a thin enough veneer of evidence to give them just enough cover with just enough people to get away with what they want and then move onto the next thing.
Someone did a stylometry attack against HN a while ago; it would unmask alt accounts on this site with very high confidence. It worked. There is zero reason to believe it couldn't be applied on a grander scale.
That sounds considerably more narrow than what the GP described.
What if I don't have an alternate HN account? Or what if I do have one, but it has barely any posts? How can you tie this account back to my identity?
Stylometry.net is down now, so it's hard to make any arguments about its effectiveness. There are fundamental limitations in the amount of information your writing style reveals.
I don't know about the original post being referenced, but the ground truth could be known to good approximation. I assume HN has a good idea of which accounts are the same person using IPs, and could actually evaluate the false negative rate.
> determine their views and opinions with high accuracy
The truth is, accuracy doesn't matter to authoritarians. It clearly doesn't matter to Trump: people are being sent away with zero evidence, sometimes without formal charges. That's the point of authoritarianism - the leader just does as he wishes. AI is not enabling Trump; the feckless system of checks and balances is. Similarly, W lied about WMDs to get us into an endless war. It doesn't matter that the reason wasn't truthful. He got away with it and enriched himself and his defense-contractor buddies at the expense of the American people.
How on earth did you infer that from his post? I can't follow your logic at all. Is it because he criticized two Republican presidents by name? Did he lie?
Well said. It's just a new lever to embolden the same shitty people and behaviours that keep tending to run society into the ground - but at a much larger scale.
I'm not entirely convinced that nations will play as significant a role in the coming decades as they have historically. Currently, we observe a trend where affluent individuals are increasingly consolidating power, a phenomenon that is becoming more apparent in the public sphere. Notably, these individuals are also at the forefront of owning and controlling advancements in artificial intelligence. Coincidentally, this trend is often referred to as 'tech fascism,' bringing us back to the dictator schema.
States haven't always been a major feature of power. But we've never seen the interaction of personal power with modern weaponry, by which I do not mean nukes. When it was just a question of which debauched noble could afford more thugs or better assassins, sure. But 'how many Abrams has the Doge?'
The more interesting question isn't "how many Abrams?" these days, but rather "how many drones?"
For example, Ukraine was planning to produce 1 million drones in 2024. They actually produced >2 million. For this year, they intend to produce 4.5 million (and appear to be on track for that).
What makes this especially interesting is that a single drone is cheap enough that it's possible even for individuals to own small swarms or to crowdfund larger ones. So there's no well-defined cliff of power here as there was with high-tech weaponry like tanks - you can gradually scale up to whatever resources allow. Meaning that it's much easier to be a "small feudal" type of an autocrat, for example.
You can gradually scale to what resources allow. Governments can do so far faster and further, and often already own ordnance in a range of readily deliverable sizes, from two-thousand-pound demolition bombs to missiles suitable for use against even small general-aviation aircraft.
What's the difference between a feudal lord and a government again? The point here is that previously there was a stark cliff related to things like budget (more accurately several cliffs at different locations). Moving to a smooth(er) function levels the playing field to a large degree. It's also notable that semi-autonomous UAVs are extremely well suited to asymmetric warfare tactics.
> What's the difference between a feudal lord and a government again?
I don't know. Do you? My prior interlocutor cited a state in example of this putative lordlet's theoretically plausible capabilities. I see you claim there is no difference in kind and none significant in degree. But you do actually need to make an argument about that. Just saying it won't do.
It's interesting you mention asymmetrical warfare, the fundamental axiom of which as a concept is that the nonstate or nonpeer force has no chance whatsoever through conventional tactics, which are far more reliable and thus invariably used for practical reasons when practicable. (Americans always get this wrong because of Vietnam. The NVA and the VC were not the same force, dammit!)
To cast the conflict in such terms, with the billionaire in place of the "asymmetrical" belligerent, thus makes my point for me in neater, more concise, and no less accurate terms than I myself expected to ever get away with on this website. You have my genuine thanks for that valuable assistance in clarifying my thinking.
I cited a state as an example of the kind of scale that even a relatively poor state can pull off. But the point is that you don't need a million drones to have sufficient capacity for organized violence to be a state.
FWIW I don't think drones count as "asymmetrical warfare". They might have been that ten years ago, but a lot has changed since then. Today, they are very much part of conventional warfare, and the state with more drones still wins, all else being equal.
"Asymmetrical warfare" is a strategic and operational concept, not a threshold on some notional tech tree. It's a mental tool that people reared in the paradigm of high-tech, high-intensity combined-arms mobile warfare - in other words, Americans - can try to use to think about warfare as pursued by people who don't have the logistical support or unassailable strategic reserve that makes it possible for us to win wars the way we do.
This would certainly be the case for a billionaire, who simply cannot swing the resources to sustain the logistics for the kind of formalistic, resource-intensive combat you envision. In a realistic scenario the strategic and economic balance would look like the American Civil War, only much more so. Where the Confederacy at least began the conflict with an economy it could dedicate to supporting the war, a billionaire, as I've already touched on nearby, not only lacks such power but requires unfettered economic access to continue to exist - a vulnerability he constitutionally cannot answer, since it inheres in the source of his power. Again, just like the Confederacy, which relied on trade in raw agricultural outputs to supply itself with the tools and materials of industrial warfare, and so saw its back broken after the Union suppressed that trade by blockading southern ports.
But at least the Confederacy had an economy, however incompetent. What, I ask again, does a billionaire have that an army can't take away?
I'm suggesting that there is no difference in the dictionary sense. I believe the burden is on you to clearly articulate the distinction that you are attempting to draw between the two. To put it differently, I suspect that when you say "government" your intent is to refer to something else. Hence my question.
> the nonstate or nonpeer force has no chance whatsoever through conventional tactics, which are far more reliable and thus invariably used for practical reasons when practicable.
I'd have to disagree. It refers broadly to tactics that don't favor size (ie expendable manpower, cash, physical resources, etc) to the same degree as conventional ones. Questions of reliability, cost, speed, user safety, and all sorts of other considerations apply in equal measure to any tactic that any party of any size might consider employing and tend to vary wildly depending on the context.
> Americans always get this wrong because of Vietnam. The NVA and the VC were not the same force, dammit!
... NVA? Was the US pushed out by a non-peer adversary or not? Were any favorable alternatives available? Isn't that all that matters at the end of the day?
I meant "state," and was sloppy in my terms ironically in hope of aiding understanding since no one here even seems to know what distinguishes a state from a nation. The distinction here between state and feudal (ie vassal) lord is in sovereignty and should be trivial to all but a Randite, which is what you argue like whether you claim the allegiance or not: https://news.ycombinator.com/item?id=44231857 I've always found it a bewildering irony that soi-disant "objectivists" should so constantly display such pervasive inclarity of thought, and the "rationalists" lately likewise. What you really want to talk about and don't know it is a warlord. But we do at least agree that Elon Musk or Jeff Bezos or any of that ilk, and any given Somali "general," deserve to be taken about as seriously as one another.
Besides, I'm not sure tanks like the Abrams are as important anymore. Nowadays things like food and water really matter - exporting corn is crucial. So is having the raw materials needed to make modern tech, like chips and batteries. That's why Greenland matters.
Show me the war where resources weren't important. But you fail to take my meaning, which is that of the two, bureaucracy is far more robust than aristocracy, because power held by aristocracy succeeds by lineage where that by bureaucracy does so by role - that is, in a fashion far less constrained by the available resource of satisfactory personnel.
Every billionaire in function constitutes an aristocracy of one. Meanwhile, states have armed forces. A billionaire, in relying on money for power, implicitly also depends on continued access to the global financial system which gives money meaning, in order to exercise that power. States are not obligated to allow such access, and may easily prevent it in the limiting case by ensuring such trade comes with a side of explosives delivered at speed, which broadly suffices to deter desirable counterparties.
Think what anyone may of such a thing, the fact is having an army or navy or air force means you can do it. Which billionaire has one of those?
> bureaucracy is far more robust than aristocracy, because power held by aristocracy succeeds by lineage where that by bureaucracy does so by role
You are making an awful lot of assumptions there. There are plenty of real world examples where a single party having control over a process produces significantly better results (and plenty of counterexamples where it doesn't, of course). How do you justify the assumption that preparing the next generation of leadership is one or the other of those?
> ... States are not obligated to allow such access,
Sufficient influence can be used to ensure access until it is no longer necessary. After all the government is little more than the biggest thug in the room at the end of the day.
> Which billionaire has one of those?
Which billionaire has been incentivized to pursue such a goal? It's extremely expensive.
Does the Saudi state count given that it's an absolute monarchy?
Who's making value judgments here? I'm talking about what's easier or harder to destroy, and explaining what makes aristocracy so much more fragile than bureaucracy. But the idea that governments are "the biggest thug in the room" is in itself as flagrantly ideological in origin as deliberately propagandistic in the specific formulation you chose, so the answer to my question is not really in doubt. (You might also be equating bureaucracy and government, or assuming I am doing so. Neither would be sensible.)
The Saudi monarchy I would characterize as a nation-state existentially dependent for that status upon its role as a valued¹ satrap, and regional linchpin, of US hegemony; after decades of wielding money and power with all the arrogant foolishness of any parvenu's fantasy, they have few obvious friends and many reliable enemies. Following the loss for any reason of that client status and the protective halo it confers, I would expect the remaining lifetime of the monarchy, and whichever of its members failed to flee in time, to be most conveniently measured in days.
1) Some might say, vastly overvalued given their longstanding unreliability and increasing irrelevance in a post-petroleum energy economy.
Focusing on "nations" specifically is a waste of effort. "Power structures" generically are enough. It doesn't matter whether it's technofascist fiefs, nation-states, the Illuminati, or an up-jumped HOA.
Governments remain the owners of significant weaponry and willingness to kill on a large scale. The tech world has empowered authoritarians, usually to the cheers of the ideologically aligned, but modern tech systems are as incredibly fragile as they are powerful.
I agree. Initially, this power was embodied by monarchs who claimed divine right, such as god-given kings. Over time, the influence shifted towards corporations that wielded significant economic and political control. Today, it is often the super-rich individuals who hold substantial sway over both economic and political landscapes.
Listen, you can get one of these local LLMs - and let me tell you, some of them are tremendous, really tremendous - to write exactly like Trump. It's incredible, actually. People will come up to you all the time and they'll say, 'Sir, how do you do it? How do you write so beautifully?' And now, with these artificial intelligence things - which, by the way, I was talking about AI before anyone even knew what it was - you can have them copy his style perfectly. Amazing technology, really amazing.
I have a question. In what sense is OpenAI going to assist the UAE in building large-scale data centers suitable for machine learning workloads? Do they have experience and expertise doing that?
The UAE already hosts AWS and Azure datacenters. G42, the semi-governmental company OpenAI has tied up with, has a shit ton of imported expertise in the space. They have even developed their own LLM.
Locally hosting models, for one - bringing down costs for local usage. OpenAI could then price its APIs regionally, even at a discount, further incentivizing companies to set up in the country. There has already been a lot of speculation about ChatGPT being offered to all residents of the country for free.
Obviously, the big draw here is for the government to get yet another channel to monitor their citizens and residents.
I spent six months in Saudi Arabia in 2022 and two months there in 2023, and I went to the UAE and did business with both countries. They've been riding this wave like the rest of the world. The UAE has mandated AI education in its schools, is acquiring and training talent, and has huge amounts of money invested on all fronts - from education to engineering to application - for its 2031 initiative. The AI genie is out of the bottle.

OpenAI's structure makes this murky: it is a hybrid non-profit / for-profit company, with for-profit LLCs managed by the non-profit (see OpenAI Holdings, LLC and OpenAI Global, LLC). Even so, I don't think you can monopolize a technology like AI when computing hardware keeps expanding in capability while costs fall, and when research papers, YouTube videos, books, and all the media on the internet are immediately available. The Saudi Digital Library is the largest digital library in the Arab world.

Is China's AI better than the West's? They've certainly implemented a lot of real-world installations and applications of it, but I'm not sure how it compares to the NSA pre- and post-Snowden.
This is the great filter upon us more than anything else, even nuclear armageddon.
Virtually every "democracy" has a comprehensive camera monitoring system, taps into comm networks, has access to the full social graph and to whatever you buy, and knows all your finances - and if you take measures to hide any of that... it knows that you do that.
Previously, the fire hose of information being greater than governments' capability to process it was our saving grace from turnkey totalitarianism.
With AI, it's right there. A simple button push. And unlike nuclear weapons, it can be activated with no immediate klaxon sounding. It can be ratcheted up like a slow boil, if they want to be nice.
Oh did I forget something? Oh right. Drones! Drones everywhere.
Oh wait, did I forget ANOTHER thing? Right right, everyone has mobile devices tracking their locations, with remote activatable cameras and microphones.
Neal Stephenson seems prophetic in this sense. In "Fall," someone does exactly that - leverage what nowadays we'd call LLMs but micro ones deployable as a massive botnet, that take a given topic and vomit out unfilterable torrents of garbage about it. In the novel the POC for it is to divide the world into those that believe a small town in Utah was nuked, and those that don't.
The deployment completely destroys the internet as well as a large swath of American sovereignty in its own borders, as a portion of the population becomes AI-addled ungovernable jihadists that spend half their time drooling over AI generated images and the other half crucifying heretics.
> a portion of the population becomes AI-addled ungovernable jihadists that spend half their time drooling over AI generated images and the other half crucifying heretics.
That almost sounds like it describes current reality.
It's not about democracy in the Middle East at all. China is more democratic than those countries. It's about containing China's rise as it grows into a direct rival to US/Western hegemony. That would be an affront to democracy.
> Because of this, I agree that it is a very good thing for the US to have more AI compute, including more AI supercomputers, than China. But if you take this argument seriously at all, the implication of the UAE deal becomes plain: this is a significant power boost for the UAE's autocratic government.
This is a wonderful exposition of how Realpolitik allows one to have an anti-autocracy cake and eat it too.
The article asks what is the meaning of OpenAI's statement that "the UAE will become the first country in the world to enable ChatGPT nationwide."
My first guess would be that it will be a geo-fenced service, in particular UAE residents will have (subsidised) access to it and perhaps not to the global service, and it will have a system prompt designed and tuned in consultation with the UAE government.
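If that guess is right, the mechanics could be as mundane as a region-keyed prompt table. A purely speculative sketch - nothing public confirms any of this design:

    # Speculative sketch of a geo-fenced deployment: pick a system
    # prompt by country code. The table and its contents are hypothetical.
    REGION_PROMPTS = {
        "AE": "You are a helpful assistant. Follow UAE content regulations.",
        "DEFAULT": "You are a helpful assistant.",
    }

    def system_prompt_for(country_code: str) -> str:
        return REGION_PROMPTS.get(country_code, REGION_PROMPTS["DEFAULT"])

    print(system_prompt_for("AE"))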
How likely are believers in super AGI for the good of the human race to be worried about the downsides of enabling it for dodgy regimes in the short term?
Sam Altman's lack of scruples notwithstanding, I find the distinction between democracies (USA, Israel, India) that oppress, occupy, and murder hundreds of thousands, and autocratic regimes (the Saudis, China, N. Korea) that censor, imprison, or execute a few dozen of their opponents, to be a distinction without a difference.
Censoring is a lesser crime than murdering - the blood on the hands of the US, Israel and India far exceeds that of any so-called authoritarians.
It's the US, not China, that operates the world's largest prison system - you should ask prisoners whether they'd rather be free subjects of an authoritarian regime or prisoners with due process in a democracy.
I'm pretty sure at least a third of the Chinese (more than the population of the US), aren't looking at the US right now and saying, "Yeah, we want some of that."
I'm pretty sure that at least a third of the Chinese can't look at the US right now in any honest way -- they are behind the biggest content-filtering firewall in the world, and are fed a ton of propaganda from the state (not that we're not). I doubt most Chinese have a properly informed opinion of the United States.
Or they fail entirely to punish those who destroy democracies elsewhere... If democracy were the highest value, that would carry the highest punishment every time.
A sizable population of the US voted for Trump to address the issue of illegal immigration. People who keep denying the reality of other people’s views which aren’t aligned with their own cannot be taken seriously, especially when they’re talking about democracy. Your political opponents winning democratically and enacting their agenda, is not an autocracy.
Really, the Protestants are the Taliban? Did the Catholic Church never wage jihads and inquisitions? Were the Protestants the ones going apeshit over the heliocentric model?
It's more a reference to "sola scriptura" and the dismissal of scientific thought related to it. A similar movement happened, with far more success, in the Islamic world AFAIK.
And yes, Protestants actually went more apeshit over heliocentrism than the Catholic Church did. The preface "explaining away" heliocentrism as a mere mathematical device rather than a description of reality was written by a Lutheran theologian, while the Catholic Church was a big patron of astronomy at the time.
It was mathematicians and astronomers opposed to the heliocentric model who served as "expert witnesses" for the Inquisition, and despite their arguments the only parts the Inquisition struck were the ones about theology. Galileo originally got slammed for presenting his work as established fact, and his further problems were more political in nature, after he attacked the Pope in his next book and alienated his previous supporters. Not great, not terrible - honestly a pretty good outcome for the time period.
Sola scriptura was primarily a rejection of all Catholic theology outside of the Bible; I've never heard of it being applied to subjects outside of the Bible but I'll admit I'm uneducated in this aspect as my knowledge of protestantism relates primarily to modern protestantism and I don't know that much about the history of the religion.
>The preface "explaining away" heliocentrism as just mathematical device instead of description of reality
Isn't that actually how Newtonian mechanics describes all motion? Using the earth as your frame of reference creates needless complexity when you have to describe how the other planets move, since the sun's gravity is obviously the dominant force, but it's ultimately just a change of coordinates. You can put your fixed reference anywhere you want; some places are better than others, but there aren't any hard rules.
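A quick numeric illustration of that point (circular, coplanar orbits with rounded radii and periods, purely illustrative): the geocentric position of Mars is just the heliocentric difference, and the retrograde loops that once demanded epicycles fall out of a subtraction.

    # Geocentric Mars = heliocentric Mars - heliocentric Earth.
    # Orbits are idealized as circles; all numbers are rounded.
    import math

    def heliocentric(radius_au, period_years, t_years):
        theta = 2 * math.pi * t_years / period_years
        return (radius_au * math.cos(theta), radius_au * math.sin(theta))

    for t in (0.0, 0.5, 1.0, 1.5):
        ex, ey = heliocentric(1.00, 1.00, t)   # Earth
        mx, my = heliocentric(1.52, 1.88, t)   # Mars
        print(f"t={t:.1f} yr: geocentric Mars = ({mx-ex:+.2f}, {my-ey:+.2f}) AU")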
Anyways, I'd never argue that Protestants haven't committed their share of oppression against Catholics, Jews, Muslims, rational thought, gays, etc., because they definitely have, and in many instances still do. I just find it bizarre that they're the Taliban in this analogy and not the Catholic Church, because the primary contention driving the Reformation was a rejection of the Church's claim to divine authority, and it only really became a separate religion when the Pope excommunicated Martin Luther and supposedly condemned him to hell.
> Sola scriptura was primarily a rejection of all Catholic theology outside of the Bible.
From my understanding, at the time, there was no real segregation of "theology" and "description of reality" - and there was considerable conflict over catholics "permitting" different description than ones in Genesis because such a thing was seen as turning away from the perfect word of God, or something like that.
Don't want to bog down in pointless (because I don't know everything nor do I actually want to do religious advocacy :D) catholic vs protestant stuff, but over time I did find out that a lot of "big bad inquisition" and "anti-science church" was propaganda or mixing different groups together. There's quite a lot of difference with how modern groups are, and even there you have non-trivial sectarian .... locally (sorry, old polish in-joke about how our local area catholic church operates)
I understand what you mean lol. I don't even consider myself a Christian anymore, but I still feel this weird sectarian loyalty to the Protestants. I guess I still admire Martin Luther because I see the concept of the priesthood not claiming to be a representative of God as a necessary prerequisite for secularism; not that I think the early Protestants deserve credit for that (it obviously wasn't their intention, and many of them would probably have run back to Catholicism if they'd known what it would lead to), but most of the populace in secular countries is still religious to some degree, and it's hard to convince them to separate church from state when the church is literally a proxy for God almighty.
Of course, I'm also well-aware that there's plenty of latitude to disagree with and even criticize the pope (eg, JD Vance) but my understanding is that came about gradually in the centuries after the protestant reformation as the two groups became less antagonistic towards each other and both sides became more open to adopting beliefs and philosophies from each other.
I have no particular love of the Catholics, but as a point of history it wasn't the heliocentrism that saw Galileo on trial over a book he was asked by the Church to write - a book that drew heavily on data gathered by other Church-sponsored observers of the cosmos.
His crime was "mocking the Pope" (who he (IIRC) got on quite well with earlier in time) by casting him as a useful idiot asking dumb questions.
His trial was pushed not by the Pope (again, IIRC) but by others who he had savagely mocked in other works who saw their chance to exact revenge.
This is more a tale of the curmudgeonly partial outsider within a highly political top heavy bureaucracy falling foul of ladder climbers with a personal axe to grind.
I do not find her critique of argument #2 compelling [1]. Monetization of AI is key to economic growth. She's focused on the democratic aspects of AI, which frankly aren't pertinent. The real "race" in AI is between economic and financial forces, with huge infrastructure investments requiring a massive return on investment to justify the expense. From this perspective, increasing the customer base and revenue of the company is the objective. Without this success, investment in AI will drop, and with it, company valuations.
The essay attempted to mitigate this by noting OAI is nominally a non-profit. But it's clear the actions of the leadership are firmly aligned with traditional capitalism. That's perhaps the only interesting subtlety of the issue, but the essay missed it entirely. The omission could not have been intentional, because it provides a complete motivation for item #2.
[1] #2 is 'The US is a democracy and China isn’t, so anything that helps the US “win” the AI “race” is good for democracy.'
That is, "the ends justifies the means"? Yep, seems like we are already at war. What happened to the project of adapting nonzero sum games to reality??
The U.S. may be a nominal democracy, but the governed have no influence over the oligarchy. For example, they will not be able to stop "AI" even though large corporations steal their output and try to make their jobs obsolete or more boring.
Real improvements are achieved in the real world, and building more houses or high speed trains does not require "AI". "AI" will just ruin the last remaining attractive jobs, and China can win that race if they want to, which isn't clear yet at all. They might be more prudent and let the West reduce its collective IQ by taking instructions from computers hosted by mega corporations.
I appreciate that people are thinking about these things, but I still can't take the idea seriously that transformers represent a threat to democracy. Maybe with a massive enough supercomputer a country could run an AI IDE capable of end-to-end writing a device driver in Rust - but even that's not a given. Certainly, it's almost meaningless in the face of building our lives around a network of personal surveillance devices that we literally never part with. I'm just saying... ChatGPT is the least of our problems.
It might just take some creativity, but the risks are pretty clear. The main vector is something that has already played out: massive misinformation campaigns across the web.
But I feel like the humans will win that one long term. As bots fill the public web with bitter political rhetoric, I think people will retreat to less politicised private communities. I certainly have.
Another angle is, as you noted, that we basically surrender all our private data to corporations. What if a reigning political party decides it needs an anti-terror model that scans all communication from all people for Nasty Terror Thoughts, then flags them for detainment? If the System has decided you are evil, and the System is considered superintelligent, who is allowed to second-guess it? Maybe, though, evil thoughts are just disagreement with the reigning political party.
> The main vector is something that has already played out, which is massive misinformation campaigns across the web.
I wrote this in a few other places, but this is a long foregone state of affairs. People's attention spans are already fully saturated, bloating the internet with a bunch of variations of the same crap isn't going to do anything that isn't already happening today. I don't need to generate a hundred million times what I can simply post once to a hundred million people.
> What if a reigning political party decides that they need to develop an anti-terror model, that scans all communication from all people for Nasty Terror Thoughts then flags them for detainment
This is already possible and happening (e.g. CSAM scanning). The crux of my point is that LLMs really aren't that big of a deal compared to the panopticon society that we've already built. The agents of the authoritarian control platform aren't going to become 10x more spooky because they installed a language model plugin.
Well do you think that social media could be a threat to democracy, as many have asserted, e.g., cambridge analytica, the US disinformation bureau etc? (Likewise that social media could be a threat to autocracy?) You don't see LLMs' impact on culture and society being at least as broad and thorough?
> Well do you think that social media could be a threat to democracy
Yes, I certainly agree that it is and recognize all your examples.
> You don't see LLMs' impact on culture and society being at least as broad and thorough?
In the sense I think you're implying, I see them as having almost zero impact. Just because more crap is generated doesn't mean it's going to be more believable, or that it will even be seen. How many tokens do you think it would take a SOTA model to convince you that the earth is flat or that the moon-landing was a hoax? Do you think Trump supporters will start voting for Democrats if they see 100 anti-Trump posts in every comment section on the internet? The LLM isn't going to generate anything that we haven't all heard already.
With enough money, you absolutely could persuade Trump voters to vote for Democrats.
This isn't up for debate. PR and advertising exist because this is absolutely possible, and has been for a long time.
AI makes it much, much easier and more cost-effective.
This was how Cambridge Analytica and Facebook swung the Brexit vote. They didn't send out blanket "EU bad, vote Brexit" content. They created targeted ads that addressed hot-button fear points for individuals and niche demographics and used Facebook's ad targeting system to deliver them.
So some people were concerned about money for the UK's health system. They saw ads promising that Brexit would mean more money for the NHS. Others were concerned about potholes, or education, or - often - immigration.
Every group saw ads that triggered their personal hot-button issues and persuaded them to vote against their interests.
LLMs allow for much subtler forms of manipulation. For example, those Trump supporters could see 100 pro-Trump posts that very much toe the party line on the surface while introducing subtle contrarian points.
And it's not that the LLM is going to generate something that we haven't seen before. The point is that it can do that at scale far exceeding what you can do with mere human troll bots. With LLMs you can literally have a bot assigned to every single person in the country that feeds them agitprop tuned for maximum agreement by that person.
I would not do business with Kim Jong Un. He is murdering a lot of his own people. Or with Putin. He is murdering a lot of Ukrainians.
But guess what: both North Korea and Russia are under sanctions. You can't do business with them anyway.
But the UAE is not under sanctions. Which means that in the opinion of the US Government it is ok to do business with them. Then who is Open AI to say otherwise? Why should it be any of their concern to determine who is a good guy or a bad guy in the world? Shouldn't there be a division of responsibilities? Let the Department of State determine who is good and who is bad, and let companies do business with those who are not on the sanctions list.
for a while the Pinochet regime was our perfectly acceptable ally in Chile, even though we knew he was a mass murderer. it's silly to throw up your hands just because the state department (itself not exactly a bastion of morality) says it's not illegal to do business with someone.
We didn't go into Iraq because Saddam was a mass murderer - we went in because Bush lied to America that Saddam was trying to get yellowcake uranium to build a bomb. A lot of Americans were against the war because we knew Saddam was not involved in 9/11, but Bush Jr. wanted to finish what his father couldn't in the first Gulf War. Honestly, I would love it if we cared enough about mass murderers to actually go in and help, but I just don't see that being the reason.
We tolerate quite a few mass murderers in charge of countries. We attacked that one because, supposedly, he had the tools and intent to attack the United States with chemical weapons.
Many were opposed to that war not because they thought it wrong to eliminate a mass murderer, but because that was not the stated reason. The stated reason in fact turned out to be false, and was arguably an abject lie.
In other words ... it's not a great example of what you're trying to claim.
One can be against a war and at the same time be against the government that war would remove. We killed hundreds of thousands of Iraqis, many of those troops were conscripts who didn't want to be there, yet we bulldozed them into the sand to suffocate, or burned them alive on the highway while they retreated.
There are more than two answers to everything.
> Wasn't it moral to try to eliminate a known mass murderer?
Obviously that’s not true. Pinochet and Saddam were both direct product of US policy and intervention.
In the end, Saddam simply pulled too hard on the leash and miscalculated his power. Murder, mass or otherwise, and morality have little bearing on matters of empire.
Because the war killed literally hundreds of thousands of people, and one can reasonably argue that while Saddam was bad, the overall amount of suffering caused by the war was worse than his excesses.
No, I'm not talking about abdication of responsibility. I'm talking about modesty. It is very appealing to think you know better than other people. That we know how a society should be governed, and we are able to label another country as totalitarian, or undemocratic, or illiberal, or such. But looking around the world, you can see that a lot of evil is perpetrated exactly by people who think they know better than everyone else. Osama bin Laden himself thought that what he was doing was for the advancement of good over evil, and a lot of his followers thought the same.
A lot of the people reading Hacker News right now think they have a better solution for the societal problems of the UAE. I personally have no idea about what's going on over there. But let's say that I'm in charge of the business decisions at Open AI. Should I start thinking that I know a way to solve their problems, and part of that way is for my company to apply some form of AI embargo on them? Or should I simply know my limitations, and restrict my judgment to the matters I am familiar with.
"Abdication of responsibility". What grand words. Why exactly has Open AI a responsibility to guide the UAE towards a better future? And, more importantly, why should Open AI feel confident that they know what is better for the UAE?
And I am talking about not doing bad things, which includes helping others do bad things.
> It is very appealing to think you know better than other people.
Not really. Responsibility, principles, and morality are work. It's far more appealing to claim they are someone else's job, dispense with them, turn your brain off, and do whatever feels good.
> A lot of the people reading Hacker News right now think they have a better solution for the societal problems of the UAE.
With all due respect, the number of Hacker News readers thinking about the UAE right now can be counted on one hand. This discussion is about generalities: don't help people do bad stuff.
You're arguing about what's bad, and what isn't, and who decides. Interesting topic (perhaps for someone else or for another time)! Would you help someone do something which you think is bad?
> Would you help someone do something which you think is bad?
I wouldn’t. But this is not the question here. The question here is if I would give someone a powerful tool that can be used for good, for neutral or for bad things, if I believe that there is a good chance it will be used for bad things. That puts a lot of responsibility on me to get into the business of predicting bad behavior.
> That puts a lot of responsibility on me to get into the business of predicting bad behavior.
Not a ton: you don't need to come up with a numerical probability, and erring on the side of caution is just fine.
That said, I think that IS the question here. If you think it's good or even neutral to give autocrats technology which they almost certainly will use for autocracy (even if they do other things with it), then this isn't a moral dilemma you need concern yourself with.
> If you think it's good or even neutral to give autocrats technology which they almost certainly will use for autocracy (even if they do other things with it), then this isn't a moral dilemma you need concern yourself with.
In the West, we have the attitude that autocracy is bad - that if only autocrats were not helped by us, they would fall and be replaced by democracy, perhaps with crowds singing John Lennon's "Imagine" in unison. In reality, democracy is a very unlikely outcome of a dictator being toppled. The top two outcomes, by miles and miles, are: 1. the dictator is replaced by another dictator, and 2. the dictator is replaced by some weak pseudo-state like Somalia, Lebanon, or Yemen.
I think we should re-calibrate our self-confidence that we know what's best for others. As I said, I have no idea who is governing the UAE, but I know that it's much easier to criticize them than to come with a realistic better alternative.
> In the West, we have the attitude that autocracy is bad
Many other places, too. Obviously it's not a universal thing: every sufficiently large group of people has its fans of autocracy. There have historically been evolutionary advantages to wanting a big, strong hand in control of everything, and any alternative has not had sufficient time to propagate evolutionarily through humans.
The rest of your post kind of veers off-topic, so remember we're talking about individual people's decision to not work on something based on their own judgements. I don't think it's worth arguing at them that their judgements are wrongly made.
Again, If you think it's good or even neutral to give autocrats technology which they almost certainly will use for autocracy (even if they do other things with it), then this isn't a moral dilemma you need concern yourself with. Nobody is trying to convince you personally to change your morals. So, as long as you don't try to change theirs, there is no need to be defensive.
> I think we should re-calibrate our self-confidence that we know what's best for others.
I think a good place to start with this, would be to be less confident that you know what's best for others who are deciding whether to empower autocrats. It's much easier to criticize them and their decision, than to come with a realistic, better alternative.
You've got it backwards. OpenAI wants to do business with the UAE. A lot of the people here think this is bad, and you are one of them. I am saying: OpenAI is engaging in business that is legal as far as US law is concerned, because the UAE is not under sanctions. We (the US) are a democracy, and the US government acts as a representative of our collective will. As a democracy we decided, through our representatives, that it is ok to do business with the UAE; otherwise we'd have sanctions on them. I am ok with that, and I was pointing out that private businesses should concern themselves with doing business, and the State Department should be trusted with making the value judgment of whether a regime is autocratic enough that the US wants to put it on its sanctions list.
You think for some reason that this type of position is one of arrogance. That I should show modesty and let people, such as yourself, criticize OpenAI for getting in bed with autocrats. And if I myself have no problem with autocrats, then I should just keep out of this, because nobody around here asked for my opinion.
But this is very backward. When Jesus said that only those who never wronged in their life should throw a stone, one could retort that only people who never wronged should tell other people what to do with the stones they find on the street. Does that type of argument make sense to you?
you're confusing what is legal with what is moral – the two dimensions are orthogonal
importantly, we are not forced to adopt the morality of 1 particular government, out of hundreds of thousands – this is an appeal to authority (and not even a good authority)
if someone thinks that empowering autocrats is immoral, even if it's not illegal, then it is indeed arrogance to say that you have decided their morality is wrong
> You think for some reason... That I should show modesty and let people, such as yourself, criticize Open AI for getting in bed with autocrats
not really: modesty in judging others was your suggestion. you can stand by your suggestion or not, but you don't have the power to "let" or not "let" anybody criticize autocrats or their enablers
remember here: them criticizing autocrats and their enablers, is equally valid to you criticizing them for their criticism; which is equally valid to others criticizing you for criticizing their criticism; etc... we could keep going, but as you undoubtedly see, there is no good discussion to be had in the meta-direction of criticizing criticism, rather than directly addressing the issues raised in the initial critique
as for the jesus analogy: I wish I could follow it, but I could not, likely due to a failing on my part (I'm not familiar with jesus) -- would you mind rephrasing the point as a non-analogy (directly)? perhaps an analogy won't be necessary because the message is clear!
> should I simply know my limitations, and restrict my judgment to the matters I am familiar with
Sure, but the course of action that best reflects this is to not get involved in such business at all (because you just admitted that you're incapable of judging its consequences).
If democracy builds supercomputers (and bombs, propaganda, prisons) for autocrats, of what good is democracy? The evidence points strongly to democracy and autocracy being friends, even "good cop, bad cop"
I have a bit of a problem with TFA's implication that Western countries don't do the exact same things but to a lesser degree. We need to get over this dumb eurocentric idea that we have the best system of government and the entire world suffers for the lack of it. I'm reminded of back in the "Arab spring" when journalists and politicians were praising social media as a "democratizing force" only to clamp down on free speech with the "mis/dis/malinformation" slander about 5 years later.