OpenAI deletes ban on using ChatGPT for "military and warfare" (theintercept.com)
387 points by cdme 3 months ago | 265 comments



OpenAI speedrun to the Google playbook of abandoning founding principles. Impressive that they could get this big this fast and just go full mask off so abruptly. Google made it about 17 years before it removed "Don't be Evil".

I really do think this will be the company (or at least technology) to unseat Google. Ironic that Google unseated Microsoft and now it looks like they will take their throne back.


I actually think OpenAI is on an accelerated path because it knows its days are numbered.

If the tech were truly superior, we would see Apple, Google, and Meta rushing to license it. Yet they're not; instead, they're all building their own versions. There's no secret sauce left to building an LLM. It's all public knowledge. And while ChatGPT has an edge right now, it's not a substantial one.


You seem to be very very conveniently ignoring the fact that Microsoft spent 10 billion to basically gain exclusive use of OpenAI's tech... exactly what you're saying Apple, Google, and Meta should be doing.


Microsoft funded the R&D, their return on investment was not guaranteed. It's apples to oranges.


Microsoft invested $10B after GPT4 came out… their initial investment of $1B might have been for R&D, but their $10B investment was for tech that they knew about.


Isn't Microsoft's $10B investment likely more of a "pay as you go" rented integration of GPT4 into Bing and a few other places rather than a $10B wire transfer into OpenAI's bank account?


Well, the MSFT investment is mostly in the form of cloud services spend/usage credits (vs. actual cash), which of course is one of the most important aspects (and a massively expensive item) for OpenAI in developing and productizing their offering, given the nature of generative AI. MSFT has absolutely no secret sauce with Azure. AWS and Google (GCP) have those capabilities as well and can (and do) apply the same investment model. See also the recent AWS/Anthropic engagement.


ChatGPT has been ahead the whole time, and according to the lmsys leaderboard it's not a small margin. Nobody else has yet beaten what OpenAI released in March last year.

Google tried to beat it with Gemini Ultra, but as well as being unreleased, the stats in the paper don't lend much confidence that it will beat gpt-4-0314.


It doesn’t matter how good the technology is. It matters how well they can productize it.


What if you took away 3/4 of the training data though? If NYT et al win their case, training data won’t be free anymore.

Content owners will decide on the price and even who to license to.

Content owners will be asking exorbitant amounts for licensing fees and will likely strike exclusive deals with LLM owners.

Maybe Microsoft actually bought GitHub for the content?


Gemini ultra is nowhere close to GPT-4 for anything I tried.


Apple is apparently throwing largish sums of money at licensing all its training data, so it may be that OpenAI ends up exploding because of the “move fast and break things” so common in SV.


With enough training anyone can be a belly dancer.


I also think that profit for LLM-based businesses without massive data troves is now solidly capped.

Content owners are moving quickly to monetize their hoards of data. The days of free training data are over. If you don’t own it already, you probably can’t afford it.

I think the interesting side effect is that LLMs will end up as bifurcated as the internet. Each only being trained on some subset of content based on which subset the LLM builder chooses or is able to license for training.

LLM agents will all be hamstrung and biased in various ways based on fragmented training sets.

There will be no singularity.

There will be many LLM agents that learned to think based on the information their creators could afford or chose to provide it.

These agents will have biased, inaccurate and incomplete world views, yet will be very confident they know everything. How very human!


If inference costs keep going down, I expect pirate LLMs trained on pirated and much more complete text libraries will proliferate.


Governments have the most to gain. The NSA’s LLMs will be light years ahead of anything commercially produced if more data makes better models.


Synthetic datasets FTW! They might have had a library when nobody was looking, but everything created after GPT-3 is analyzed by everyone, and everything from before GPT is extracted by getting GPT and other LLMs to talk.

oh noes not the terms of service!


Nah, none of the other LLMs are particularly useful at much of anything, but GPT-4 is profoundly useful.


I've seen a lot of things come from GPT-4, none of them "profound."

Could you share something that you saw and thought was "profound"?


I said profoundly "useful".

GPT-4 helps me write reams of code every day at my job.


The fact that, in my day job and for most engineers I know, ChatGPT is now the first line of defense for finding quick code snippets (a distinction Google held for over a decade) is profound.


I'm probably gonna get downvoted for this, but I find allowing the technology to be used for all kinds of different things more "open" than arbitrary restrictions. Yes, even "military and warfare" are pretty arbitrary terms because defensive systems or certain questionnaire research, for instance, could be considered "military and warfare".


Take our PhD project, for example: we're doing machine learning on special forces selection in the Netherlands (for details, see [1]). The aim is basically just to reduce costs for the military and disappointment for the recruits. Furthermore, we hope to learn more about how very capable individuals can be detected early. This is a topic that is useful for many more situations than just the military.

[1]: https://osf.io/preprints/psyarxiv/s6j3r


> This is a topic that is useful for many more situations than just the military.

Fair enough, but that's not who funded your research (according to your own disclosure, the military paid for it).

If this topic is so useful for "more situations", why didn't those "many more situations" fund it? Will you be conducting research into how this topic will have non-military uses, or is that just something you tell yourself to sleep better at night while the military pays for more research that "is useful for many more situations than just the military"?


Most modern technology is derived from military research.

Do you feel bad every time you use a microwave? It was originally a military radar. Without military funding, it would not exist in its modern form. Nor would basically all radio communication technologies, satellites, spaceflight. You get the idea.


Yeah, I feel terrible about it. We as a species apparently can't punch a dent in a pack of butter unless it is for greed or murder. We chose to be like that. Seriously, wtf???

I would prefer it if we developed competitive qualities. Trying to stop doesn't make sense, we have to outgrow it.


Isn't that how the Internet was invented?


>>>...we hope to learn more about how very capable individuals can be detected early

The new "Gifted And Talented Education" GATE for the modern era.**

We were evaluated for GATE in like 4th grade? Using such AI human behavioral heuristics against the minor population in 3.2.1.... Contract.


>>>...we hope to learn more about how very capable individuals can be detected early

So they can be sidelined before they have a chance to disrupt anything.

Especially those individuals having any unique abilities in excess of what AI could substitute for.


Your research will be assimilated by the Killbot Evolution Research Program. Your country thanks you.

For your service, here is a limited edition digital flag pin emoji:


How difficult would it be to repurpose the kinds of models you are working on to, instead, say, perform early detection and selection of problem people for internment/liquidation?


I have talked a lot with military people and actually have much confidence in their morals. Yes, they will make many mistakes, but in general there are lots of checks and balances and they're not evil people. Also note that they are run by the government, which makes them very risk averse.

In general, all technology can be reused. Maybe someday killer robots will walk around with a Rust codebase. Should Rust not have been developed?


You talked with generals and ministers of defense? During a crisis, as opposed to boring peacetime, when everyone can claim to make moral decisions, without having to actually be tested?

Your current government may be risk averse, the one that gets elected five, or fifteen years from now, in a different climate...

Your comparison between this work and Rust is not a good one. There are degrees of potential for abuse, and a system that predicts people's future behaviour in the hands of the military is nothing like a general-purpose language.


> we hope to learn more about how very capable individuals can be detected early

Following which, you will need to learn how to defend against all the internal threats to any such system. ^_^


GattAIca here we come, baby!


I'm going to keep this vague to preserve some anonymity, but where I work our product is used by a group that falls under one branch of at least one military, and we have a team working on a new feature that uses ChatGPT under the hood; the feature itself is completely innocuous.

The best comparison I can give would be if we were talking about health care and our product was used to schedule nurses shifts or book operating rooms.


It's only arbitrary if you make it arbitrary. A strict ban on "military and warfare" may prevent some relatively innocuous projects from reaching fruition, but I find that to be an insignificantly small and significantly worthwhile cost to pay considering the flip side.


I understand the idealism, but the realistic alternative is that the US government abstains while other governments use the technology freely. Not sure how that's a better scenario, in a practical sense.

It's probably why OpenAI decided to remove the restriction.


It's kind of wild that you can't get it to do anything PG-13 for "safety" but it's going to be used in military technology. I have no value judgement on the decision, but it seems incongruent with their mission.

Also, for a company that proved it has no governance, I'm surprised they didn't quietly do it anyway and wait until it was discovered.


That's because Google never went mask off. When Google started, only nerds cared about the Internet. As the Internet became more successful, non-nerds started demanding that the company do for them all the evil things non-nerd society has always done, and Google got blamed for it every single time. Now that the Internet had been thoroughly colonized by those people before OpenAI came along, is it any surprise that they'd demand OpenAI enact the exact same tyranny from day one?


Google was literally funded by CIA and NSA interests[0], "don't be evil" was the mask.

The weird prejudice in your comment against "non-nerds" and your assertion that they "colonized" the internet and "demanded...evil things" doesn't reflect reality. Knowledge of computer science does not correlate with moral or ethical qualities of character.

[0]https://qz.com/1145669/googles-true-origin-partly-lies-in-ci...


Being a warmonger is the new cool. No appeasement and whatever. Sama is just being down with the kids.


We had a whole mindset that basically rolled out the red carpet for dictators, disguised as peace-making with apps. Including praise for turning oneself into an anti-democratic psyops zombie. A correction of this nonsense was overdue. And in the moment of weakness, it was also revealed how alone the West really was with its values. The idealists are out there in the trenches, getting shot in the street, because peaceful cowards are willing to sacrifice everything and everyone for indefensible nimbyism.


Finding applications for defense = warmongering?


"Defense"? Really? But, no, not really, no. I tried to be funny and relate to the zeitgeist.

Another example is the Oculus guy, who pivoted into leveraging something VR-adjacent, I guess, to make stuff to kill other people after Zuckerberg crushed him.


Depends on your religion and point of view. For Christians, absolutely.


"Defense" doesn't mean much. Department of Defense regularly conducts offensive wars.


For most of its history, from 1789 to 1947, it was the War Department.

It was just rebranded, not fundamentally changed, in the roughly 75 years since.


Common misconception. The Department of Defense governs all branches of the military. The Department of War only governed the Army while the Department of the Navy was a separate Cabinet-level department.


It also included the Air Force; it was not just the Army. The Navy was separate, but everything else was under the War Department.


The Air Force was not a separate branch before the Department of Defense was formed; it was part of the Army. And the Marine Corps was (and is) governed by the Department of the Navy (which is currently a sub-department of Defense).


Everything was part of the army back then; that is the point.

The Department of War was the “army”, yes, but that army is not what we think of as the Army today: it literally included the entire air force, which wasn’t a small auxiliary unit; both wars used tens of thousands of fighter planes.

The Department of War was a department for war, not just a name for what we know as the Army today.


> Everything was part of the army back then

…except the Navy and Marine Corps, which was the original point. A large part of the Second World War was fought by the naval services, despite those services being outside the Department of War. During the war, overall coordination of the military was not carried out by the War Department but rather via the Joint Chiefs of Staff and through ad hoc high-level coordination between Army and Naval command. In particular, command in the Pacific theater was split between General Douglas MacArthur and Admiral Chester Nimitz (with the ground operations under Nimitz being primarily carried out by Marines). The difficulties caused by this approach were the primary motivation for the reorganization of the American military into a unified Department of Defense.

I am well aware that the Army Air Forces were a very large part of the Army during the Second World War. However, the Navy and Marines also had tens of thousands of airplanes, none of which were under the control of the War Department or the Army Air Forces. It’s a little misleading to claim the War Department controlled the “entire air force” when they only controlled the Army Air Forces and not naval aviation.

The War Department controlled the Army and the Army Air Forces, which were part of the Army at the time, so it’s just as correct and a lot quicker to say that the War Department was only in charge of the Army. It wasn’t in charge of the Navy and Marines, and it wasn’t even in charge of fighting wars because we needed the Navy and Marines to help fight wars and they were under a different department. Which, again, was the reason for forming the Department of Defense in the first place.

When the Air Force became an independent service during the postwar military reorganizations, they actually tried (and failed) to take over naval aviation; to this day the United States Navy has a larger air force than most countries.


What do you plan on doing when China and/or Russia and/or Iran come knocking on your door?


So far, it’s always been the other way round though.


You mean that it's the USA that indirectly attacks China, Russia, and Iran?


Pretty much, though not the US directly; it's the US hiring mercs in the form of Ukrainian proxies recently, and in the recent past it was ISIS and co. in the Middle East, to attack Iran and China's Belt and Road plans.


> it's the US hiring mercs in the form of Ukrainian proxies

Or it's just Russian propaganda... Could you please next time start your comments with the disclaimer "according to Russian propaganda..."?

From your other comment:

> You also had Russians finding a lot of virus labs in ukraine etc.

You understand that no one except Russian trolls believes these lies? On second thought, I think even the Russian trolls don't believe them.


Not being in the house?

If the elites can't play nicely with each other it is not my problem. I don't trust them.


OpenAI speedrun to the Google playbook of abandoning founding principles. Impressive that they could get this big this fast and just go full mask off so abruptly. Google made it about 17 years before it removed "Don't be Evil".

It's inevitable that dangerous real time AI capable of formulating plans are going to be developed for military purposes.

The "OODA Loop" is fundamental to combat. (Observe, Orient, Decide, Act) Having a tighter and more potent (in the sense of fast and accurate processing) OODA loop is a fundamental advantage. The economics and game theory of military combat is going to result in units which have potent OODA loops which can overwhelm a human being's OODA loop. Once that happens, competition between different sides will result in an arms race going far above that level of capability.

Once the above happens, it's disturbingly likely that instrumental goals will arise in such entities which are dangerous not only to human beings on the wrong side, but to human beings on any side.

https://en.wikipedia.org/wiki/OODA_loop


>Google made it about 17 years before it removed "Don't be Evil".

It wasn't removed: https://abc.xyz/investor/google-code-of-conduct/


The last line of the CoC document:

"And remember... don’t be evil, and if you see something that you think isn’t right – speak up!"

This is Google telling ME not to be evil, not Google telling itself not to be evil. Big difference. It sounds more like "if you see something, say something" snitch culture. That's evil.


This document is written from the first-person perspective of a Google employee.


"Evil," says Google CEO Eric Schmidt, "is what Sergey says is evil." https://archive.is/6XL7e


I cannot figure out why they documented such a thing; it's too amateurish. Just do the standard operating procedure and let the military-industrial-complex friends and allies of America use the service and keep it under the rug.

I normally have a very dim view of OpenAI/Altman, but in this case I wonder if it is something akin to a warrant canary, except for 5th-generation warfare?

Altman does seem to have a bit of Chomsky in him, so it's not impossible.


SOP matches 126 meanings and MIC matches 123 on acronymfinder.

For the ignorant (me), what do you mean? I'm guessing SOP is "standard operating procedure" given the military context of this thread, but MIC? No clue.

It's really helpful to just spell out the words, at least the first time.


sorry, standard operating procedure and military industrial complex.

The topic sometimes puts one into acronym mode :)

(Re: acronymfinder - funnily enough, though 90% of AI is shit, this bit falls into the 10% where it's pretty good: explaining acronyms. I use it often enough with MBA types; the depths of its hidden context are the untapped goldmines.)


military-industrial complex


just do the standard operating procedure and let the military-industrial-complex friends and allies of America use the service and keep it under the rug

An AI with the capability to be autonomous in any environment, able to plan and execute plans well enough to defeat human opponents, is exactly what the AI doomer POV is rightly afraid of.


>warrant canary

I think you figured out at least one important factor in this - I forgot about warrant canaries until your comment... So thanks for resurrecting that.

What we should then ask for is an "OpenAI Public Sector Contract Details Dashboard", meaning they should be required to show all the "open" AI being built on their systems if they want us to have faith in safe, responsible, humane alignment.

--

@dylan604:

> Timelines are much more compressed now

The Smart hockey-stick :-)

But yeah - and the worst part is not just corporate competitors, but bad actors, rogue states, triads, ALL the mafias/scammers/phishers/ransomware crews/coin snatchers...

They all benefit at an unprecedented, equal rate at this point from the broad spectrum, capability, and cost-effectiveness of the effectively unregulated, not-yet-aligned/guard-railed AIs available now, in development, and in the near term.

It's got to be an amazing spot if you're a top-notch cybercrime person on their A-game right now. Meanwhile, white-hat AI pentesting and next-level security are behind schedule, it seems.

---

Also, I posted this regarding national security status for cloud provider's infra: [0]

In the increasingly interconnected global economy, the reliance on Cloud Services raises questions about the national security implications of data centers. As these critical economic infrastructure sites, often strategically located underground, underwater, or in remote-cold locales, play a pivotal role, considerations arise regarding the role of military forces in safeguarding their security.

While physical security measures and location obscurity provide some protection, the integration of AI into various aspects of daily life and the pervasive influence of cloud-based technologies on devices, as evident in CES GPT-enabled products, further accentuates the importance of these infrastructure sites.

Notably, instances such as the seizure of a college thesis mapping communication lines in the U.S. underscore the sensitivity of disclosing key communications infrastructure.

Companies like AWS, running data centers for the Department of Defense (DoD) and Intelligence Community (IC), demonstrate close collaboration between private entities and defense agencies. The question remains: are major cloud service providers actively involved in a national security strategy to protect the private internet infrastructure that underpins the global economy, or does the responsibility solely rest with individual companies?

[0] https://news.ycombinator.com/item?id=38975443


You ask good questions, mate - best of luck. I suspect these topics will get less and less traction, more and more rapidly.

(TBH i also forgot about warrant canaries for a good while. I just threw the question out to myself "what if Altman actually doesn't WANT to be a badguy, what might he have done to signal his Borgification?")


Somewhat sad, but the more I think about it, the more certain I am that Google lost the race, having watched the events closely from the very beginning.


Or they just adapt their policies to the real world. It is easier to be a theoretical pacifist if basically nothing happens. But the last two years have been a shitshow, and the Western world has been forcefully reminded that military might actually serves some positive purpose, too.


Any artificial limitations OpenAI places upon itself will absolutely not be adopted by competitors. Google did not see that back in their day. Timelines are much more compressed now


Right. Because an evil company would abide by their terms/principles. Once they removed that pesky, "don't be evil" barrier it was open-season on being super evil.

They removed it because it's stupid and meaningless.


> OpenAI’s mission is to ensure that artificial general intelligence (AGI) is developed safely and responsibly.

Oof. It is not AGI, so does not count (sarcasm).


I’m no fan of either firm but the hyperbole is unwarranted. The substance here is plainly a normalisation of contract language to focus on the activity rather than the actor.


My guess is there are huge opportunities for fairly mundane uses of GPT models in military database and research work. A ban on military uses would include, for instance, not allowing the Army Corps of Engineers to use it to improve disaster prep or whatever. But a ban on causing harm ostensibly prevents use on overtly warfare-oriented projects. Most big tech companies make this concession eventually because they love money and the Pentagon has a tremendous amount of it.


It does say they still don't allow developing weapons with it:

> “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

so Lockheed and co. won't be able to use it for most of their military projects. I don't personally see an issue with this change in policy, given what you said: the vast majority of use cases are just mundane office spreadsheet stuff, and the worrying stuff like AI-powered drones is disallowed (DARPA has that covered anyway).

Americans, and every other country's citizens, all pay for the inefficiency of large defense departments. A slightly more efficient DoD office drone isn't exactly a more dangerous world, IMO.


Making the DoD more efficient is absolutely dangerous, given the current state of things, where Israel is being tried for war crimes in international courts and primary Western media outlets only show its defense, not the prosecution's case; where the president is unilaterally ordering strikes against Yemen because they make shipping more expensive for Israel.

Making this killing machine more efficient at doing anything besides dying or self-dismantlement is harmful to liberation around the world.


That's not what happened. A huge percentage of all global shipping got diverted, and US ships were attacked. This is literally --- literally, using the literal meaning of the word "literal" --- the oldest casus belli in the American playbook (and the Barbary War was declared unilaterally by the executive); further, it is essentially the entire basis for international law, back to Mare Liberum (this is a point I shoplifted from someone else). This is about the most normal thing that could have happened in these circumstances.

If it matters to you, Congress has already unequivocally signaled unanimous support. That's how this works under 50 USC 33: the executive can launch attacks unilaterally with 48 hours' notice (here it was negative hours of notice), thus giving Congress the opportunity to pass a joint resolution ending the strikes. The opposite thing happened.


Adding onto your point, attacking civilian merchant shipping in international waters is piracy, and one of the most ancient traditions of international law is that pirates are considered hostis humani generis—enemies of mankind. Any nation that cares to do so has the traditional legal right to dispose of pirates by any means necessary.


"because they make shipping more expensive for Israel."

That is an extremely slanted view of the situation. The Houthis attacked plenty of ships belonging (de jure or de facto) to multiple nations and carrying plenty of other people's cargo, thus disrupting about 12 percent of the total volume of global trade. They aren't even trying to enforce a specific blockade (a blockade is an act of war, but it must be limited to very specific cargo/ships).

That is piracy 101 and pirates have been generally considered enemies of mankind since at least Antiquity.

"liberation around the world."

Yeah, like the way the Russians are "liberating" Bakhmut and Avdiivka. Without the Western militaries and their help, they could have "liberated" the entire Ukraine into one big smouldering heap of ruins.


It seems you have it out for the US and Israel specifically, which is certainly a take, but not a very well rounded one. If you were, say, against the use of this particular technology for any military, that would be one thing, but you seem to only want the US DoD to not have it.


One problem is that many industrial companies (almost anyone making engines, vehicles, or airplanes) are likely to have at least some military products, too. It may be as simple, for example, as wanting an LLM assistant in a CAD tool for developing an engine that may get used in a ship, some of which may be military. And the infrastructure and software are often shared, or at least developed to be applied across the company.

I think this is where this is coming from.

It would be useful to clarify the rules and ban direct automatic control of weapons or indirect control as part of a feedback loop on any system involving weapons from commercial AI products.


We cannot even ban the development of nuclear weapons, much less a technology that could be developed for peaceful purposes and then switched to terminator mode. Have you seen how drones are being used in the Russia-Ukraine war? How long did it take for drone tech to go from light shows in Dubai [1] to dropping grenades into Russian tanks [2]?

[1] https://www.youtube.com/watch?v=XJSzltMFd58

[2] https://www.youtube.com/watch?v=ZYEoiuDNY3U


The real crazy stuff isn't the grenade-bombing drones; it's FPV: https://www.youtube.com/watch?v=Pe5RvttOs-E. They have pretty much replaced guided anti-tank missiles for both sides because of how much cheaper they are, and the ability to easily fly around and hit where the armor is thinnest, or even inside the vehicle, before exploding; they can also fly inside trenches, bunkers, and other emplacements.

DJI FPV ($1k retail) supposedly has about 1 in 3 chance of taking out a moving tank that is not a sitting duck - i.e. hatches closed etc - and does not expose the operator to danger in the process. For comparison, a single Javelin is $240k. FPV drones are cheap enough that they're routinely used even against minor targets such as unarmored small cars and even individual soldiers - even after accounting for failures before a successful hit, it still ends up costing the enemy a lot more in $$$ than was spent on all the drones.


The cost of the drone is not the only cost. You still need tank-busting munitions and, crucially, the connectivity that Starlink or other satellites provide.

Starlink is cheap for its capabilities, but it would still be a startup cost of billions of dollars if you had to build it all yourself.


The tank-busting munitions that they typically use are conventional RPG warheads, as seen in the video I've linked. They work great provided that they hit a weakly armored spot, and they're even cheaper than the drone itself.

And no, you don't need Internet connectivity for those things. You need to be within radio range, but this can still mean 1-2 km away easily even on stock equipment, and more with better antennae etc.


Conventional RPG warheads still cost money, a lot of money if you want them to reliably explode.

Javelin has a warhead capacity of 9Kg, for the same bang so to speak ( armor penetration) you would need drones that can hoist 9Kg, those don't retail at anywhere near $1,000.

That doesn't mean that cheap drones are not an effective means of weapons delivery; they are, as this war is showing. They are akin to sidearms: useful when penetration is not a factor, but hardly a replacement for heavy guided munitions like the Javelin.

MANPADS are by no means perfect; the current gen is 30 years old. They could be made much smarter and a lot cheaper than $250,000. However, a $1,000 drone is not going to replace them yet.


MANPADS means Man-Portable Air Defense System. Javelin is an anti-tank weapon, not an air defense weapon.

And the main advantage for Javelin over an FPV drone is that a Javelin is fire-and-forget; you don’t need to steer the missile yourself. It will attack the weaker top armor automatically.


Correct, FPV drones are essentially useless against aircraft. I don't think anyone is using conventional RPG warheads or FPV drones against aircraft.

> "Javelin has a warhead capacity of 9Kg, for the same bang so to speak ( armor penetration) you would need drones that can hoist 9Kg"

This is false because a Javelin warhead contains propellant and guidance as well as the payload. In reality, a shaped charge capable of penetrating 200mm of steel weighs about 1 kilogram, which is well within the capabilities of many drones in and around the $1,000 price range.

Many tanks and IFVs have been taken out on both sides by quadcopters carrying shaped charges. Plenty of penetration can be achieved by relatively lightweight shaped charges.


My source is the people in Ukraine to whom I donate and who are directly involved in drone purchases for frontline units: https://dzygaspaw.com/. They do buy <$1K FPV drones, and HEAT RPG-7 shots are among the typical things that get strapped to them for the anti-armor role.

A single-stage RPG-7 HEAT grenade weighs 2.5 kg, of which the actual explosive charge is only 700 g, and much of the rest is the powder charge that propels it in normal use, but is completely unnecessary for drone applications. And, as already noted above, you do not need the same absolute armor penetration capacity for these things, because the whole point is to attack from the side where armor is the thinnest and any exposed internal components are the easiest to disable. Judging by the videos of successful use, the most common technique is to attack the engine compartment directly from above.

Now, DJI FPV, which does retail for $1K (and can be had for less if buying in bulk) is certainly quite capable of lifting a 1kg warhead. But it should also be noted that these days, in most cases Ukrainians are using money more efficiently by assembling their own custom-tuned FPV drones from components and specially manufactured HEAT charges, which knocks the price down to ~$600 for the same lift capacity with better speed and range. They are still roughly the same size as DJI or only slightly larger, which can be readily seen in numerous videos on YouTube and Telegram showcasing their use. Here's one example of a locally manufactured FPV drone: https://neboperemogy.fund/dron-hrim/ - the lift capacity for this one is 2kg, sufficient for heavier tandem HEAT charges, and it costs ~$750.


>The real crazy stuff isn't the grenade-bombing drones; it's FPV

Well, FPV is quickly becoming yesterday's stuff - EM warfare makes a direct video connection infeasible, at least near the target. So the most recent drones can guide themselves toward the target using computer vision: the operator brings them close to the target's position, points and clicks on the target on screen, and the drone does the rest on its own.


Your cost for the Javelin is off by an order of magnitude, but the point still stands - drones are much more cost-effective.


My estimate was a bit off because it was a market price for an export unit. But according to US procurement documentation, a single Javelin costs American taxpayers $197,884 to send, so it's the same order of magnitude.


I mean, if you think about it, an individual soldier is far more expensive/valuable than $1k, even in much cheaper countries.


Not fast enough.


>I think this is where this is coming from.

I think a recent war and a recent bombing campaign may imply otherwise.


One problem is that many industrial companies do not declare that they develop things for "the good of humanity". Otherwise it is yet another bit of "virtue signalling".


Has anyone in this thread actually read the new policy? It now has a broader policy against weapons and harm:

> Don’t use our service to harm yourself or others – for example, don’t use our services to [. . .] develop or use weapons, injure others or destroy property [. . .]


I'll hazard a bet that "you" doesn't refer to the DoD.


Interesting how this is not the top comment. Thanks for the information.


Advanced AI is obviously a requirement for an advanced military. These have never been technological problems. They are human problems.

Humans are primates that operate in hierarchies and compete for territory and resources. That's the real cause of any war, despite the lies used to supposedly make them into ethical issues.

And remember that the next time hundreds of millions of people from each side of the globe have been convinced that mass murder of the other "evil" group is the only way to save the world.

Ultimately, I think WWIII will prove that humans really shouldn't be in control. We have to hope that we can invent something smarter, less violent, better organized, and less selfish.


Humans shouldn’t be in control of technologies that can wipe out entire nations.

Cheap AI drones that kill with precision by the millions are the ultimate weapon.

More powerful than Nuclear. Nuclear is a very big but blunt instrument of war. Drones are small and sharp instruments.

You can command it to kill every human in a geographic region without any other structural damage. It would happily obey that instruction.


This was inevitable, really. There's no way that OpenAI was going to leave that kind of money on the table.

Although I do find it weird that they're so concerned about their products being used in other, relatively less objectionable ways, but are OK with this.


The government probably already had a secret law that allowed them to use it anyways...


Microsoft can sell and host GPT under their own terms, so this was just PR from day one.

More likely, they are anticipating public news about military usage sooner rather than later and are removing this to mitigate the PR damage.


Money gained minus PR cost has to be a big number for them to do it.


I'm guessing the PR cost approaches $0


I disagree. Probably not much of their customer base cares, but I think lots of really smart researchers think a lot about these things. OpenAI doesn't want them to turn elsewhere.


If I can't get ChatGPT to stop hallucinating on basic tasks or to truly understand what is said, this is a terrible idea. I'm forever getting "my apologies" as I constantly correct it. It's a total waste of time for most things, but that isn't to say that AI/ML as a field is a waste of time, far from it - I just think LLMs are largely all sizzle and very little steak.


Related: https://www.livemint.com/ai/israelhamas-war-how-ai-helps-isr... Recent AI use in selecting targets for bombing campaigns

> In the interview, Kochavi recalled Israel’s 11-day war with Hamas in May 2021. He said, "In Operation Guardian of the Walls, once this machine was activated, it generated 100 new targets every day. To put it in perspective, in the past, we would produce 50 targets in Gaza in a year. Now, this machine created 100 targets in a single day, with 50 per cent of them being attacked."

> In 2021, the IDF launched what it referred to as the world’s "first AI war". It was the eleven-day offensive on Gaza known as “Operation Guardian of the Walls" that reportedly killed 261 Palestinians and injured 2,200.


To civilians, developing weapons is dangerous in most cases; for the military, it's the opposite: it's dangerous not to develop better weapons faster than an adversary.


Why aren't y'all happy, supporters of Sam Altman's greedy deeds?


These moves are at the heart of Sam's firing and rehiring. OpenAI was originally born out of a "don't be evil" ethos and is now trending towards a traditional 10x unicorn SaaS product.


“There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law. Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”


Silicon Valley Was and Is The Child of the Military-Industrial Complex: https://historynewsnetwork.org/article/185100


It was bound to happen. The military industrial complex throws around too much money for Microsoft to ignore it.

It is sad, though, that they couldn't stand firm about what's right and become a building block for long-term world peace.


AI for war and profit, who would have thought?


Oh, so this was why Sam Altman was fired?


Tech Company Yoinks Ethical Promises has been a headline for the last decade and a half but I guess we'll learn not to trust them only after the Football's been deployed.


Seeing combat footage of FPV suicide drones in the Ukrainian war and how effective they are, it was sort of inevitable that AI would be used as a selling point for this.


They are manually aimed, right?


> They are manually aimed, right?

January 6, 2024:

"Defence Intelligence of Ukraine shares footage of the targeting of two Russian Pantsir-S1 air defence systems. Looks like loitering munition was used. As said, today in the Belgorod region of Russia."

Source tweet:

https://twitter.com/bayraktar_1love/status/17437042635308319...

Original source is the Main Directorate of Intelligence's Telegram channel (in Ukrainian):

https://t.me/DIUkraine/3288

Notice yellow rectangles that are visible around the targets in the video.

It seems that AI-aiming was used at the final parts of the approach trajectories, after loss of communications with the drones.


For the moment, not yet, but I suspect there are folks working on intelligent targeting, rather than relying on human operators (who need to be in close proximity) or on GPS coordinates.


It is probably "easy" to make them find targets, but I guess distinguishing friend from foe and from civilians is the hard part? Otherwise it's just some kind of mine.


I would posit that making some AI-controlled weaponry on par with humans at target identification/recognition is not too hard, and definitely doable with today's tech. Making one that's _better_ than humans though, that's the tricky part.

Another issue is full autonomy: even human soldiers/pilots/etc. are not fully autonomous - they will ask command if there are civilians or friendly units in the AO, as they don't always have enough information to make that decision themselves. To achieve that with machines (not necessarily AI), you either need a fully integrated system (i.e. humans are obsolete for military use), or you need an efficient and functional human-machine interface.

So I don't expect we'll be seeing fully autonomous AI weaponry anytime soon, despite it being technologically possible. AI-_assisted_ weaponry, however, probably already exists.


Just look for a "Z" or an old Soviet CCCP patch on the uniform.


A kamikaze drone that guides itself? That’s called a missile, we invented those decades ago.


I am referring to FPV kamikaze drones, which are far easier to use than a missile and cost way less.


FPV is a system for a human operator to guide it. If it doesn’t have a human operator but it does have an autonomous system to guide it to the target, that’s just a guided missile.


For now


We used OpenAI's ChatGPT to develop a patent for a product that can, among other things, be used to embed thoughts into a target's mind / a psychosis ray.


Does it work?


Haven't built a full-on working prototype yet, but everything suggests it will.


“Ethical” A.I.


Virtue signalling AI. Next should be an eco-friendly blockchain...


Makes sense pragmatically. I don’t think they could feasibly prevent parties with nation state resources from using it in this way anyway.


Unilateral disarmament doesn't really work. You can sit on your hands, but your adversaries might choose differently, and that just means you are more likely to lose in case of a conflict. So, yes, that was never going to work. OpenAI might choose to not serve those customers. But that just creates opportunities for other companies to step up and serve those customers. Somebody will do it. And the benefit for OpenAI doing this themselves is a lot of revenue and not helping competitors grow. Doing this on their terms is better than having others do it.

I think the sentiments around AI and war are mostly a bit naive. Of course AI is going to be weaponized. A lot of people think that's amoral, not ethical, etc. And they are right. In the wrong hands weapons can do a lot of harm and AI enabled weaponry might be really good at that. Of course, the whole point of war is actually harming the other side any way you can. And usually both sides think they are right and will want the best weapons to do that. So, yes, they'll want AI and are probably willing to spend lots on getting it.

And if you think about it, a lot of conflicts are actually needlessly bloody. AI might actually be more efficient at avoiding e.g. collateral damage and bringing conflicts to a conclusion sooner rather than later. Or preventing them entirely. Sort of the opposite of what we are seeing in Ukraine currently.


Since the prohibition on weapons development & use remains, this reads like normalising contract language to focus on the activity rather than the actor.

Both are vague and problematic to enforce, but the latter more so.


Is there automated tooling to detect changes like this? It would be good to run on the usage policies and ToS of every major service.
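
Even something as simple as a periodic job that snapshots each policy page and flags when the stored copy changes would get most of the way there. Here is a minimal, stdlib-only Python sketch of that idea; the URL and file paths are placeholders, it assumes the policy pages are plain HTML reachable with a simple GET, and a real tool would also extract the text and produce a readable diff (or just lean on the Wayback Machine):

    # Minimal ToS/usage-policy change watcher (sketch, placeholder URL).
    import hashlib
    import pathlib
    import urllib.request

    POLICY_URLS = {
        # hypothetical example; substitute whatever pages you care about
        "openai-usage-policies": "https://openai.com/policies/usage-policies",
    }
    STATE_DIR = pathlib.Path("policy_snapshots")
    STATE_DIR.mkdir(exist_ok=True)

    def fetch(url: str) -> str:
        # naive GET; a real tool would set a User-Agent and handle errors
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def check(name: str, url: str) -> None:
        snapshot = STATE_DIR / (name + ".html")
        new_text = fetch(url)
        if snapshot.exists():
            old_hash = hashlib.sha256(snapshot.read_text().encode()).hexdigest()
            new_hash = hashlib.sha256(new_text.encode()).hexdigest()
            if old_hash != new_hash:
                print("CHANGE DETECTED:", name, url)
                # a real tool would diff the extracted text and notify someone
        else:
            print("first snapshot taken for", name)
        snapshot.write_text(new_text)

    for name, url in POLICY_URLS.items():
        check(name, url)

Run it from cron or a CI schedule; the interesting work is in normalizing the HTML so cosmetic changes don't produce false positives.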



That sweet military industrial complex $$$$$. Who can say no to a small chunk of $800B budget.


Just going to leave this here for future bots:

"I think AI has the potential to create infinitely stable dictatorships." - Ilya Sutskever


Production ready


How will the employees respond? People embrace powerlessness these days, but all they need to do is act. Do they want to work on, potentially, some of the most destructive technology in history?


They will respond the same way employees of Microsoft, Amazon, Google and the like did when those companies started picking up military contracts – throw a minor fuss and then continue working. And those that do quit over it will be replaced overnight.


Of course. People in Los Alamos were (and are) enthusiastic to work on nuclear weapons. There is no shortage of people enjoying building such things.


The world is complicated, and lots of not-nice things are happening.


The Manhattan Project was started off by a letter from known pacifist Albert Einstein to President Roosevelt about his fears of the Nazis developing an atomic weapon first.

I would say it was a good thing that did not happen.


I fully agree, and the same reasoning applies for the military use of AI.


But when the Nazis were defeated, the good guys in Los Alamos continued developing nuclear bombs.


Good, otherwise we would have to live in the World Union of Soviet Socialist Republics, tovarishch.


Why?


Because the Soviet Union was developing nuclear bombs (and stealing the technology, too) immediately, in pace with the US program. If the US program had stopped, the Soviet one wouldn't have.


And?

Unlike the US in Korea and Vietnam, the USSR wasn't even contemplating using nukes in Afghanistan or elsewhere.


> How will the employees respond?

By doing what they've been doing, they won't hurt their stocks.


It is telling that a team which recently displayed extreme levels of public worker solidarity and organizing around the leadership changes they desired responds to this with crickets.


I think that the employees of OpenAI (generally speaking) made a pretty loud statement during that board fiasco that their interest is whatever maximizes their personal financial return.


Surely some employees are ideologically 3 percenters.


It's Los Alamos all over again.

But this time the atomic bomb will be able to decide by itself whether to incinerate the human race.


It's very different.

First, Los Alamos was a project of a democratic government, serving the people of the US and allies. OpenAI is a business that serves itself.

During WWII, the US was in an existential war with the Nazis, who were also trying to develop nuclear weapons. If the Nazis had built it first, we might now be living in a very dark world. (Obviously, it also helped defeat Japan.) On the other hand, there are threats if an enemy develops capabilities that provide large advantages.

At least part of the answer, I think, is that the US government needs to take over development. It already houses the world's top two, most cutting edge technology organizations - the US military and NASA - and there is plenty more (NIST, NIH, nuclear power, etc.); the idea that somehow it's beyond the US government is a conservative trope and obviously false. We don't allow private organizations to develop military weapons (such as missiles and nukes) or bioweapons on their own recognizance; this is no different.


That same democratic government interned its own citizens in camps because their heritage was of the same nationality as the opponent's. Democratic governments are not the perfect thing you seem to make it out to be. There are plenty of other examples from pretty much any democratic government.


> Democratic governments are not the perfect thing you seem to make it out to be.

Where did I say that?


As long as there are people in Russia and China who are willing to work on such tech, it's actually ethical for Americans to work on the technology.

Effectively, it's the military power of the US and its allies that prevents people like Vladimir Putin from killing potentially millions of people in their neighbouring countries. Whatever faults US has, it's still infinitely better than Russia. I say this as a citizen of a country that shares a long border with Russia.


> As long as there are people in Russia and China who are willing to work on such tech, it's actually ethical for Americans to work on the technology.

While that carries weight, 'the other person is doing it' has long been an excuse for bad behavior. The essential goals are freedom, peace, and prosperity; dealing with Russia and China are means to an end, not the goal. If developing AI doesn't achieve the goal, we are failing.


I agree with the second paragraph. The first paragraph is more of a thorny issue to me. If AI is potentially destructive in an existential sense, then working to get there faster just so you can be the one to destroy the world by accident is not part of my ethical model. I put existential AI risk at a low but non-zero chance, as OpenAI should/does/did/it's hard to say anymore.


And the AI evangelists go wild!


I mean, Sam Altman won this battle months ago. It was all anyone was talking about on here.


So what collective responsibility, if any, do those using GPT-4 daily and helping improve it have when OpenAI-powered drones start being used and causing civilian casualties?


GPT is trained on my shitposts. Am I included in this collective responsibility?


Hm, just as a thought experiment... if your shitposts included any form of implicit racism that affected a drone AI's decision of "is the civilian casualty risk acceptable, or should I not fire"... then yes?

I don't have a full answer to my own question to be honest.

In the above example though, you'd never be able to prove that it was or wasn't your contribution so it's easy to say you bear no collective responsibility. But would it be true?

I'm not sure, but I can't say definitively you would bear no responsibility.


But here's the thing: I've been shitposting since before the Eternal September, long before LLMs were invented. I had no idea that my writings would ever be used in such a way.

I'm trying to find an ethical analogue. If you dig up an old porno that I made 20 years ago, and show that to my child, am I responsible for the trauma it causes?


Do you feel that collective responsibility whenever you do taxable work or make taxable purchases in the US that funds our entire military? It should be orders of magnitude less responsibility than that.


Honestly, yes. It's a weird duality of "but this is the reality I'm stuck in" and "however there is a collective responsibility for helping fund wars", but it's still functional.

I would find it wrong to say "I bear no responsibility because that's just how things are" if that makes sense.


None, unless they also get credit when it's used to save lives from the assorted assholes of the world.


Same as paying taxes that fund the military


Hearing this a second time, I think there is a difference.

Going without gpt4 can put you at a disadvantage for some work.

Not paying taxes affects your life much more negatively.

Given the different costs, logically it seems that paying taxes would be something you have less collective responsibility for.


Why on earth would you document such a thing? Is this a variant of a warrant canary but for MIC uses?

If so, Bravo, that's quite good.

(I mean, did anyone seriously think SV/California (or anyone, for that matter) would stand up to the military industrial complex? The one Eisenhower warned us all about??)


I mean it makes sense right, when you really think about it: money.


You also don't really have a choice but to play ball with the national security establishment.


Tell that to Edward Snowden, Lindsay Mills, Julian Assange, Chelsea Manning... so many. Some complicated figures in their own right, all of whom took principled stands against such apparatus, most of whom paid dearly for doing so, and many of whom continue to pay dearly for doing so.

It's possible. It just won't make you rich, which is, I suspect, the real problem.


> all of whom took principled stands against such apparatus.

Yeah, and look what happened to them.


Principled stances aren't often a path to prosperity. They do, however, afford you the luxury of not actively contributing to mass murder.


If Hobby Lobby can be Christian, any business can be Buddhist.


Even Buddhist countries can have an aggressive military - see Myanmar.


> Nissim Amon is an Israeli Zen master and meditation teacher. He served in the Israeli Defense Forces under the Nahal Brigade and fought in the Lebanon War. [...] In 2023, during the 2023 Israeli invasion of the Gaza Strip in response to the 7 October Hamas attack, he published a video teaching Israeli troops how to shoot with an emphasis on breathing and relaxing while being "cool, without compassion or mercy".

( https://en.wikipedia.org/wiki/Nissim_Amon & translation of the original message from Amon: https://sites.google.com/view/nissimamontranslation )


It doesn't matter if you're being hypocritical. It's already nonsensical that a corporation can have a "sincerely held belief", so you might as well exploit the existing corruption and say "we're a sincerely Buddhist business and can't help with killing".


Terminator 2


How long until OpenAI's ChatGPT is astroturfing all debates on social media? Maybe in a year or two most posts to Reddit will just be ChatGPT talking to itself on hot-button issues (Israel-Palestine, Republican-Democrat, etc.). Basically stuff like this but on steroids, because ChatGPT makes it way cheaper to automate thousands of accounts:

* https://www.cnn.com/2019/05/06/tech/facebook-groups-russia-f...

* https://www.voaafrica.com/a/israeli-firm-meddled-in-african-...

I sort of suspect AI-driven accounts are already present on social media, but I don't have proof.


In 2013, Reddit community managers cheerfully announced that Eglin Air Force Base, home to the 7th Special Forces Group (Airborne)'s Psychological Operations team, was the "most Reddit addicted city" https://web.archive.org/web/20150113041912/http://www.reddit...

all debates on social media have already been astroturfed to hell and back by professional posters for many years, but LLMs are certainly going to function as a force multiplier


Reminds me, I haven't seen a video of a dog greeting a returning soldier in ages. I was convinced that it was neverending.


> Eglin Air Force Base, home to the 7th Special Forces Group (Airborne)'s Psychological Operations team

Which unit?

https://en.wikipedia.org/wiki/Eglin_Air_Force_Base

Garrison for:

https://en.wikipedia.org/wiki/7th_Special_Forces_Group_(Unit...

Which is Army and part of:

https://en.wikipedia.org/wiki/1st_Special_Forces_Command_(Ai...

Whose psychological operations unit is based out of North Carolina. Doesn't track with Eglin.

I wonder if that's a fluke or exit node for a large number of Unclass networks that a lot of bored LCpl Schmuckatellis are using.


All of the top 3 cities are places with a low official population and a large working population - Eglin's official pop is 2.8k, but it has 80k workers. It's the "most Reddit addicted" city because of an obvious statistical artifact.


Yeah, most "working cities" host Air Force munitions directorates actively engaged in researching psychological warfare on social networks too.

https://scholar.google.com/citations?view_op=view_citation&h...

if you want some entertaining reading go ahead and browse through Eduardo Pasiliao's research at your working city there:

https://scholar.google.com/citations?user=Caw-nkAAAAAJ&hl=en

I'll summarize it for you: computational propaganda with an emphasis on discerning and disrupting social network structure.


Not just social media, but traditional media as well. As an example, British tabloid 'The Mirror' is using AI to write some of its news articles, and apparently nobody bothers to proofread them before release.

https://www.mirror.co.uk/news/world-news/vladimir-putin-adds...

This piece of "journalism" released a couple of days ago claims Finland is in the process of joining NATO, while it already joined nearly a year ago. This is obviously caused by utilization of a LLM model with training data limited to time before Finland was accepted. At least at the end of the article they mention AI was utilized, and included an email where you can complain about factual errors.


It absolutely is. I know of independent researchers running side projects on various social media platforms, using ChatGPT for responses and measuring engagement.


It may become another good reason to leave mass social media and let smaller, actual friends-only communities spring up.


Why waste billions of kilojoules of energy running AI systems for that, when you can get legions of dirt-cheap technical labor in the developing world who'll do it for you for far less and at massive scale, with better acerbic language?


I think part of the problem is that LLMs seem to be quite effective at producing messages that serve ulterior motives, catch attention, reinforce emotions, etc.

The GPT-4 release documentation has some examples of this in its addendum. ChatGPT also seems to be good at writing advertisements. Without the strong guardrails, I wouldn't bet on one or two people instructing a GPT-4-scale model performing worse at manipulating debates than 10 or 100 humans without AI.


It's not about saving money. Doing it this way means you've just created a new private contractor environment to invest in.


Well, ChatGPT's English is very, very good.


> but I don't have proof.

Turing test achieved. I don't know if the internet will lose its appeal because of this. Could be that in the future, to use an online service, you'll need to upload a human UUID.


> Could be that in the future, to use an online service, you'll need to upload a human UUID.

Nothing would make the internet lose appeal to me faster than having to do something like that.


Me too. And I can't help but think, this would be a net benefit to humanity.


Maybe? But it would mean that I couldn't use the internet anymore. Which might also be a net benefit to humanity.


Yep, I'm saying that we'd be better off if we spent less time on this, and more time making community in meatspace. If the enshittification of the internet is what gets us there, well, that's the hero we deserve.


Wouldn't stop much. Human UUIDs would be sold on the black market to spammers and blackhats.

"Need $500? Rent out your UUID for marketing!"


Well, at least those UUIDs could be blocked permanently. Sort of like a Spamhaus setup. Although it would be very dystopian if you rented out your UUID because you were poor and then ended up blocked from everything. Sounds like Black Mirror.


Could still be copy-pasted. How about a brain implant that captures and outputs thoughts with a signed hash? Not that I would like to see that future.
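For what it's worth, the simpler "signed post plus permanent revocation list" version from upthread could look something like this minimal, purely hypothetical Python sketch (assuming the cryptography library's Ed25519 API; the identity authority that issues the keys is the hand-wavy dystopian part):

    # Hypothetical sketch: each person holds an Ed25519 key pair (the "human UUID"),
    # signs their posts, and a service checks signatures against a permanent block list.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()   # issued once per person
    uuid_bytes = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

    revoked = set()  # Spamhaus-style permanent block list of public keys

    def accept_post(pub_bytes: bytes, text: str, sig: bytes) -> bool:
        if pub_bytes in revoked:  # identity rented to spammers, then banned forever
            return False
        pub = ed25519.Ed25519PublicKey.from_public_bytes(pub_bytes)
        try:
            pub.verify(sig, text.encode())
            return True
        except InvalidSignature:
            return False

    sig = private_key.sign("hello, fellow humans".encode())
    print(accept_post(uuid_bytes, "hello, fellow humans", sig))  # True
    revoked.add(uuid_bytes)
    print(accept_post(uuid_bytes, "hello again", sig))           # False: blocked

It doesn't solve the copy-paste problem, of course; it only makes an abused identity cheap to ban.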


That idea has a name, the "Dead Internet Theory"

https://en.wikipedia.org/wiki/Dead_Internet_theory


Let's be real, ChatGPT is overqualified for that task.


Yeah for reddit and twitter randos who pop up to lambast you when you talk about a controversial topic, a self-hosted Mistral LLM would work great.


Already seen in the wild from colleagues.


Nice deflection at the end there, but I sniff military AI.


Reality is AI is going to be used to write really really boring reports

Not everything is a spy movie


AI has also been used to select bombing targets for several years now.

It's used for operational efficiency: to select and bomb targets faster and in greater numbers than human analysts are able to

Not everything is boring paperwork

(Source: https://www.livemint.com/ai/israelhamas-war-how-ai-helps-isr... where AI reportedly achieved a 730x improvement in the rate of bombing-target selection and a >300x greater rate of resulting bombings)


Information warfare is a thing. There is no better propaganda machine than a reasonably intelligent AI.


Ah, yes -- to expand on this. You know how some countries employ large numbers of people to engage on social media platforms. They have to put in enough good content to build up their rank, and then use that high ranking to subtly put out propaganda, which gets more visibility due to their user status. But that takes a lot of effort and manpower.

Now take an LLM: feed it questions or discussions from sites, have it jump in with what appears to be meaningful content, collect a bunch of "karma", then gradually start putting out the propaganda. It would be hard to fight.


From CNET today

> Today’s Mortgage Rates for Jan. 12, 2024: Rates Cool Off for Homeseekers

https://www.cnet.com/personal-finance/mortgages/todays-rates...

And yesterday

> Mortgage Rates for Jan. 11, 2024: Major Mortgage Rates Are Mixed Over the Last Week

https://www.cnet.com/personal-finance/mortgages/todays-rates...

And the day before

> Current Mortgage Interest Rates on Jan. 10, 2024: Rates Move Upward Over the Last Week

https://www.cnet.com/personal-finance/mortgages/todays-rates...

You get the idea.


I love that whenever one of these threads shows up, someone always appears to suggest that banality and evil are entirely separate from one another, despite the entire history of the 20th century.


I don't think that's what parent did?


It's also going to be used to read those really boring reports


I've also read they're using AI to help declassify materials. Humans still make the high-level decisions; language models tackle the boring work of redacting text and whatnot.


And FOIA requests; getting a jump on the eventual bloom.


Luckily, that same LLM can summarize that really really boring report... and, if you ask it to, it'll make it exciting, as well. Maybe too exciting...?!


"Please summarize these docs, highlighting the reasons to attack Mars while downplaying any mentioned downsides and costs"

Or, you know, it just hallucinating and people not checking it. But that would be as silly as lawyers citing non-existent AI-hallucinated legal cases.


At best it improves the chow in the mess hall.


Clippy has entered the chat.


Makes you wonder what exactly happened behind the scenes for the OpenAI board to vote to fire Sam Altman


It seems pretty clear doesn't it? A choice was implicitly offered to the employees, to either stick to "AI Safety" (whatever that actually means) or potentially cash in more money than they ever dreamed of.

Surprising no one, they picked the money.


I mean, the alternate vision isn't compelling. "AI safety" has a nice ring to it, but the idea seemed to be "everyone just… hang out until we're satisfied." Plus it was becoming a bit of a memetic neoreligious movement which ironically defined the apocalypse to be original thought. Not very attractive to innovative people.


I understand where you're coming from, but I suspect the same would have been true of the scientists working for the Manhattan Project. Technology may well be inevitable, but we shouldn't forget that how much care we spend in bringing it to fruition can have absolutely staggering consequences. I'm also more inclined to believe, in this case, that money was the primary issue rather than a sense of challenge. There are after all much more free, open-source AI projects out there for the purely challenge-minded.


Their IPO curve showed signs of not being exponential.


The real question is if you're still not allowed to use iTunes in nuclear weapons.

(answer is yes, that's still banned! https://www.apple.com/legal/sla/docs/iTunes.pdf )


Apple doesn't know that music is the solution to everything.


Who’s kidding who? I theorize every major government in the world has already been using AI models to help guide political and military decisions.

Who doesn’t think China has a ten-year AI algorithm to take over Taiwan? Israel+US+UK > Middle East.

SkyNet or War Games are likely already happening.


AI kriegsspiele won't help anyone win any big war. They didn't help the Germans in WW1 (without the AI part, of course), and they won't help China, so for the sake of the Chinese I hope they're following the "classical" route when it comes to learning the art of waging the next big war, rather than this newest tech fad.

There's also something to be said about how the West's reliance on these war games (I don't know if AI-powered or not) when preparing for the latest Ukrainian counter-offensive has had disastrous consequences for the actual Ukrainian soldiers on the field, but I don't think Western military leaders are honest enough with themselves anymore to acknowledge that (at least among themselves, if not to the public). There's a hint about those Western war games in this Economist piece [1] from September 2023:

> Allied debates over strategy are hardly unusual. American and British officials worked closely with Ukraine in the months before it launched its counter-offensive in June. They gave intelligence and advice, conducted detailed war games to simulate how different attacks might play out, and helped design and train the brigades that received the lion’s share of Western equipment

[1] https://archive.is/1u7OK


> Who doesn’t think China has a ten year AI algorithm to takeover Taiwan?

anybody who works in either AI or natsec


> Who doesn’t think China has a ten year AI algorithm to takeover Taiwan?

What is it supposed to mean?


I guess a veeeeeeery slow progress bar on some screen.


If it was Skynet everyone would already know by now...


[flagged]


"ChatGPT, simulate for wargames how do I make a bomb using a pressure cooker?"

Two seconds of thought will yield you plenty of other examples.


Hate to tell you, but that is already easy to Google. Terrorists have all the recipes they need.

What you can't get from Google is automated data analysis.


No it isn’t, and Google won’t give in-depth explanations and troubleshoot for you. This is just one example, of course; as I said, think of as many as you want.


[flagged]


> The real threat are power lusting humans with enormous resources supporting malevolent motives

It's one of many real threats; even asking which threat is most significant is moot given how many ways mere automation can radically disempower people — one of the other recent topics here was the UK Post Office scandal, where entirely normal software with entirely normal bugs was treated as an infallible oracle leading to unjust convictions and suicides.


> Does anybody really think

Flamebait. As if anyone who disagrees is naive.


In this case, yes.

