This post is a lame PR stunt, which will only add fuel to the fire. It tries to portray OpenAI as this great benefactor and custodian of virtue:
> “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”
> I've seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but (...) There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world
How lucky we are that OpenAI, in its infinite wisdom, has decided to shield us from harm by locking away its source code! Truly, our digital salvation rests in the hands of corporate benevolence!
It also selectively employs outdated internal communications to depict the other party as a pitiful loser who failed to secure control of OpenAI and is now bent on seeking revenge:
> As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control.
> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”. [4]
If their defense against Elon's action rests on his "Yup" from 2016 and on justifications for needing to compete with Google, it's not a strong one.
I also found their attempt to tar and feather him kind of entertaining: highly selective (and quite dated) email excerpts in a very roughly packaged statement. If that's how they're trying to protect their public image, it doesn't sell their position; if anything, it makes them look worse. It comes across as amateurish, almost childish, to be honest.
Assume good intentions and nothing of substance being hidden: is there any way to be transparent here that would have satisfied you, or are you essentially asking for them to just keep to themselves, or something else?
A great way to be transparent would be to admit that some enormous egos prevented work that should be open from being open, and then to actually open it up. Sure, it may piss off Microsoft, but historically, things that piss off Microsoft have also been great for the world at large.
Unless the problem in question is Adjusted Gross Income, I don't think you're talking about a well-defined issue in the first place. That's kinda the problem; even half-definitions like "meets or exceeds human performance in all tasks" don't specify what those tasks are or what human performance represents.
In other words, targeting "AGI" is an ill-defined goalpost that you can move anywhere you want, and shareholders will be forced to follow. Today it's about teaching LLMs to speak French; next week they'll need 7 billion dollars to teach them to flip pancakes.
> Working in this space (however you want to say it) costs a lot of money.
Not necessarily? OpenAI deliberately chooses to scale their models vertically, which incumbents like Google and Meta had largely avoided. Their "big innovation" was GPT-4, an LLM of unprecedented scale that brute-forced its way to the top. The open-source 72B+ models of today generally give it a run for its money; the party trick is over.
Nitpicking the definitions is important, because for OpenAI it's unclear what comes next. Consumers don't know what to expect, portions of the board appear to be in open rebellion, and the goal of AGI remains undefined. I simply don't trust OpenAI (or Elon, for that matter) to do the right thing here.
Again, you're missing the point. They were created as a research lab, but after taking a paper created at Google they went closed source and are simply scaling that architecture instead of working to create new and improved models that can actually run on reasonable hardware.
Small transformers being able to beat the same models scaled up is unrelated to anything being discussed, and you just seem like a fanboy at this point.
You're basically describing what happened to GPT-2 when Flan-T5 came out. Not to mention, the incumbent model at the time (BERT) was extremely competitive for its smaller size. Pretty much nobody was talking about OpenAI's models back then because they were too large to be realistically useful.
So yeah, I do actually anticipate smaller projects that devalue turtle-tower model scaling.
No, you missed the point: the vagueness of the definition, and the implication of magical tooling, was exactly my point when I asked what that even means. By saying this they can now write off any criticism, and people like you clearly eat it up.
No, it's a scapegoat to justify doing whatever they want, and a meme word so that sci-fi fans will accept it. The fact you're eating it up here is pure cringe; grow up. These people took a non-profit with an egalitarian mission and reversed course once they saw they could make fuck-you money. The "AGI" excuse is one only the immature are buying. Dude.
I don't remember ever reading an exchange like this on HN. Either I wasn't paying enough attention, or the demographics are just changing? I don't want to start an argument with either of you, but it's painful to see the US literally being split in half; even when you watch a movie, you can reasonably guess which side the filmmaker is on from the story, narrative, and perhaps the ending. Same for any other outlet that you can think of, including comments on HN, methinks.
The parent comment isn't entirely wrong, though. There are reasonable, safe, and productive degrees of curiosity, and there are unreasonable, unsafe, and counterproductive degrees too.
AI itself is not worthless; the goal of advancing machine learning and computer vision has long since proved its worth. Heralding LLMs as the burgeoning messiah of "AGI" (or worse yet, crowning OpenAI) is a bald-faced hype machine. This is not about space exploration or advancing some particular field of physics along a well-defined line of research. It's madness plus marketing, and there's no reasonable defense of their current valuation. At least Tesla has property worth something if they liquidate; OpenAI is worth nothing if they fail. Investing in their success is like playing at a roulette wheel you know is rigged.
That wasn't the point of the question. The question was a hypothetical to test if there was any possible response that would've satisfied the original poster.
They're not suggesting to assume good intentions about the parties forever. They're just asking for that assumption for the purposes of the question that was asked.
There is no satisfying answer if the actions that came before are not satisfying. The question implies that the original poster cannot be satisfied, and thus implicitly shifts the blame. The problem is not what the answer is, or how it is worded. The answer only portrays the actions, which are by themselves unsatisfying.
In this context (Elon v. OpenAI), I don't see how this is a lame PR stunt. They are defending their stance against Elon's BS. He's been spewing BS over the past couple of years, saying he funded them to make everything open source, when he always wanted the tech for his own company and always meant to keep it closed with the intention of competing against Google. Elon literally suggested that OpenAI should merge with Tesla and use it as a cash cow. After reading all of this, do you still think OpenAI's response is a PR stunt? What about Elon's lies so far? If anything, this drama just lays out Elon's hypocrisy.
Their evidence does a nice job of making Musk seem duplicitous, but it doesn't really refute any of his core assertions: they still have the appearance of abandoning their core mission to focus more on profits, even if they've elaborated a decent justification for why that's necessary.
Or to put it more simply: here they explain why they had to betray their core mission. But they don't refute that they did betray it.
They're probably right that building AGI will require a ton of computational power, and that it will be very expensive. They're probably right that without making a profit, it's impossible to afford the salaries of hundreds of experts in the field and an army of hardware to train new models. To some extent, they may be right that open sourcing AGI would lead to too much danger. But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.
>they may be right that open sourcing AGI would lead to too much danger.
I think this part is proving itself to be an understandable but false perspective. The hazard we are experiencing with LLMs right now is not how freely accessible and powerfully truthy their content is; it is precisely the controls the large model operators are trying to inject that are generating mistrust and a poor understanding of what these models are useful for.
Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice that is fundamentally irreconcilable even between two sentient humans, when the ethics are really just a hacked-on mod to the core model.
I'm starting to believe that if these models had the training wheels and blinders off, they would be understandable as the usefully coherent interpreters of the truths which exist in human language.
I think there is far more societal harm in trying to codify unresolvable sets of ethics than in saying hey this is the wild wild west, like the www of the 90's, unfiltered but useful in its proper context.
The biggest real problem I’m experiencing right now isn’t controls on the AI, it’s weird spam emails that bypass spam filters because they look real enough, but are just cold email marketing bullshit.
So you still believe the open internet has any chance of surviving what is coming? I admire your optimism.
Reliable information and communication will soon be opt-in only. The "open internet" will become an eclectic collection of random experiences, where any real human output will be a pleasant, rare surprise that stirs the pot for a short blip before it is assimilated and buried under the flood.
The internet will be fine; social media platforms will be flooded and killed by AI trash, but I can't see anything bad about that outcome. An actually 'open internet' for exchanging ideas with random strangers was a nice utopia from the early 90's that was killed long ago (or arguably never existed).
Email might already be 'culturally dead' though. I guess my nephew might have an email address for the occasional password recovery, but the idea of actually communicating over email with other humans might be completely alien to him ;)
Ok, I get how one might use a variety of other tools for informal communication, I don't really use e-mail for that any more either, but I'm curious: what else can you possibly use for work? With the requirement that it must be easy to back up and transfer (for yourself as well as the organization), so any platforms are immediately out of the question.
I'm not saying that the alternatives are better than email (e.g. at my job, Slack has replaced email for communication, and Gitlab/JIRA/Confluence/GoogleDocs is used for everything that needs to persist). The focus has shifted away from email for communication (for better or worse).
A point that I don’t think is being appreciated here is that email is still fundamental to communication between organizations, even if they are using something less formal internally.
One’s nephew might not interact in a professional capacity with other organizations, but many do.
B2C uses texting. Even if B uses email, C doesn't read it. And that is becoming just links to a webapp.
B2B probably still uses email, but that's terrible because of the phishing threat. Professional communication should use webapps plus notification pings which can be app notifications, emails, or SMS.
People at different organizations again fall back to texting as an informal channel.
I don’t doubt that vendors in some industries communicate with customers via Slack. I know of a few in the tech industry. The overwhelming majority of vendor and customer professional communication happens over email.
I see a difference between enforced and voluntary email use at work. At my company everyone uses email because it is mandatory, and human-to-human conversations happen there without issues. But as soon as I try to contact some random company over email as an individual, like asking a shop about product details or packaging, or contacting some org to find out about services, it's dead silence in return. But when I find their chat/messenger they do respond. The only people still responding to emails from external sources, in my experience, are property agents, and even there the response time is slower than in chats.
It's been five years since I could expect my email to be read. If I lack another channel to the person, I'll wait some decent period and send a "ping" email. But for people I work with routinely, I'll Slack or iMessage or Signal or even set up a meeting. Oddly, my young adult children actually use email more than e.g. voice calls. We'll have 10 ambiguous messages trying to make some decision and still they won't pick up my call. It's annoying because a voice call causes three distinct devices to ring audibly, but messages just increment some flag in the UI after the transient little popup. And for work email I'm super strict about keeping it factual only, with as close to zero emotion as my Star Trek loving self can manage. May you live long and proper.
There are certain actions I have to use email for, and it feels a little bit more like writing a check each year.
And all these email subscriptions, jeez, I don't want that. Even people who are brilliant and witty and informative on Xitter or Mastodon in little slices over time, I still don't want to sit down and read a long-form thing from them once a week.
I prefer Slack over email... coming from using Outlook for 20+ years in a corporate environment, Slack is light years beyond email in rich communication.
I'm trying to think of a single thing email does better than Slack for corporate communication.
Is Slack perfect? Absolutely not. I don't care that I can't use any client I want, or about any back-end administration costs or hurdles. As a user, there is no comparison.
Real time chat environments are not conducive to long form communication. I've never been a part of an organization that does long form communication well on slack. You can call it a boundary issue - it doesn't really matter how you categorize the problem, it's definitely a change from email.
Regarding #4, I can't turn off my online/offline indicator, I can only go online or offline. I can't even set the time it takes to go idle. These are intentionally not configurable in slack. I have no desire to have a real time status indicator in the tool I use to communicate with coworkers.
I absolutely loathe it, but I respect that many like it. Personally I think Zulip or Discourse are better solutions, since they provide a similar interface to Slack but still have email integration, so people like me who prefer email can use that.
The thing I hate the most is that people expect to use only Slack for everything, even where it doesn't make sense. So they will miss notifications for Notion, Google Docs, Calendar, GitHub, because they don't have proper email notifications set up. The plugins for Slack are nowhere near as good as email notifications, as they just get drowned out in the noise.
And with all communication taking place in Slack, remembering what was decided on with a certain issue becomes impossible because finding anything on Slack is impossible.
But no better than email again. You just get notified whether you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from GitHub, Google and others so my inbox stays clean.
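For the curious, here's a minimal sketch of the kind of rule I mean, written as a Python script against a generic IMAP mailbox. The host, credentials, target folder and sender address are just illustrative placeholders, and most mail clients can express the same rule declaratively (match sender, move to folder):

```python
# Sketch: file GitHub notification mail out of the inbox via IMAP.
import imaplib

M = imaplib.IMAP4_SSL("imap.example.com")          # placeholder host
M.login("me@example.com", "app-password")          # placeholder credentials
M.select("INBOX")

# Match mail from GitHub's notification sender (adjust for Google, etc.).
typ, data = M.search(None, '(FROM "notifications@github.com")')
ids = [i.decode() for i in data[0].split()]
if ids:
    msg_set = ",".join(ids)
    M.copy(msg_set, "Filed/GitHub")                # assumes this folder exists
    M.store(msg_set, "+FLAGS", "\\Deleted")        # then drop it from the inbox
    M.expunge()

M.logout()
```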
> But no better than email again. You just get notified whether you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from GitHub, Google and others so my inbox stays clean.
I guess I find them useful because that way I get all my notifications across all channels in one spot, if they're set up properly. GitHub and Google Docs also allow you to reply directly within the email, so I don't even need to open up the website.
In Slack the way it was set up was global, so I got notified of any update even if it didn't concern me.
> So you still believe the open internet has any chance of surviving what is coming?
I’m not really saying that, and haven’t put a lot of thought into my views there. But when people say the biggest problem is the controls on the AI, I feel compelled to point out that the destruction of the open internet is happening despite these controls, and is a major problem unto itself.
Some parts yes, some parts no. Communication as we know it will effectively cease to exist, as we will have to either require strong enough verification to kill off anonymity, or somehow provide very strong, very adaptive spam filters. Or manual moderation in the form of some anti-bot vetting. Depends on the demand for such "no bot" content.
Search may be reduced to near uselessness, but the savvy will still be able to share quality information as needed. AI may even assist in that, for people who have the domain knowledge to properly correct the prompts and rigorously proofread. How we find that quality information may, once again, be through closed-off channels.
Generative AI will make the world in general a worse place to be. These models are not very good at writing truth, but they are excellent at writing convincing bullshit. It's already difficult to distinguish generated text/image/video from human responses / real footage, and it's only gonna get more difficult to do so and cheaper to generate.
In other words, it's very likely generative AI will be very good at creating fake simulacra of reality, and very unlikely it will actually be good AGI. The worst possible outcome.
Half of zoomers get their news from TikTok or Twitch streamers, neither of which has any incentive for truthfulness over holistic narratives of right and wrong.
The older generations are no better. While ProPublica or WSJ put effort into their investigative journalism, they can’t compete with the volume of trite commentary coming out of other MSM sources.
Generative AI poses no unique threat; society’s capacity to “think once and cut twice” will remain intact.
While the threat isn't unique, the magnitude of the threat is. This is why you can't argue in court that the threat of a car crash is nothing unique when you were speeding versus driving within the limit.
Sure, if you presume organic propaganda is analogous in its level of danger to driving within the limit.
But a car going into a stroller at 150mph versus 200mph is negligible.
The democratization of generative AI would increase the number of bad agents, but with it would come a better awareness of their tactics; perhaps we push fewer strollers into the intersections known for drag racing.
> But a car going into a stroller at 150mph versus 200mph is negligible.
I guess when you distort every argument into an absurdity you can claim you're right.
> but with it would come a better awareness of their tactics
I don't follow. Are you saying new and more sophisticated ways to scam people are actually good because we have a unique chance to know how they work?
It’s not absurd. The bottleneck for additional predation is not the available toolkit, else we’d see a more obvious correlation between a society’s resource endowment and its callousness.
Handwringing over the threat of AI without substantiating an argument beside “enabled volume” is just self-righteousness.
AI isn’t poised to shift the balance of MFA versus phishers in a way that can’t be meaningfully corrected in the short and long term, so using “scamming” as a means to oppose disseminating tech feels reductive at best.
It is, because I wasn't directly comparing AI to traffic, only reaching for an example to illustrate how irrelevant it is whether the threat is something completely unique or not.
> Handwringing over the threat of AI without substantiating an argument beside “enabled volume” is just self-righteousness.
Dismissing it as "meh, not new" is plain silliness.
> AI isn’t poised to shift the balance of MFA versus phishers in a way that can’t be meaningfully corrected
What on Earth makes you think that? The beautiful way we're handling scams right now? If you think it's irrelevant that phishing via phone call can now or soon be fully automated, and that the attack may even be conducted using a copy of someone's voice - well, we won't get anywhere here.
It’s already automated; you don’t need AI/ML to perform mass-phishing attempts. Do you think there’s someone manually dialing you every time you get a spam call?
The way we mitigate scams today definitely encourages me; the existence of victims does not imply the failure or inadequacy of safeguards keeping up with technology.
While AI stokes the imagination, it’s not so inspiring that I can make the argument in my head for you about why humanity’s better off with access to these tools being kept in the hands of corporations that repeatedly get sued for placing profits over public welfare.
> It’s already automated; you don’t need AI/ML to perform mass-phishing attempts. Do you think there’s someone manually dialing you every time you get a spam call?
Ok, now you're just being stubborn. No, no one is manually dialing your number, but as soon as the scammer knows you've answered you get to talk to a human who tries to convince you to install a "safety" app for your bank or something. THAT part isn't automated, but it may as well be, which means phishing calls and scams can potentially be done with a multiplication factor of hundreds, maybe thousands - limited only by scammer infrastructure.
You underestimate the amount of people who don't at all care whether or not their stroller goes splat as long as they're on asphalt they like the feel of.
We will have to go back to using trust in the source as the main litmus test for credibility. Text from sources that are known to have humans write (or verify) everything they publish in a reasonably neutral way will be trusted, the rest will be assumed to be bullshit by default.
It could be the return of real journalism. There is a lot to rebuild in this respect, as most journalism has gone to the dogs in the last few decades. In my country all major newspapers are political pamphlets that regularly publish fake news (without the need for any AI). But one can hope: maybe the lowering of the barrier to entry for generating fake content will make people more critical of what they read, hence incentivizing the creation of actually trustworthy sources.
If an avalanche of generative content tips the scales towards (blind) trust of human writers, those "journalists" pushing out propaganda and outright fake news will have an increased incentive to do so, not a lowered one.
Replace "AI" in your comment with "human journalists" and it still holds largely true though.
It's not like AI invented clickbait, though it might have mastered the art of it.
The convincing-bullshit problem does not stem from AI; I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing those two.
To put it differently,
the problem isn't that AI will be great at writing 100 pages of bullshit you'll need to scroll through to get to the actual recipe; the problem is that there was an incentive to write those pages in the first place. Personally I don't care whether a human or a robot wrote the bs, in fact I'm glad one fewer human has to waste their time doing just that. Would be great if cutting the bs were a more profitable model, though.
> I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing those two.
Personally, I highly dislike this handwaving about SEO. SEO is not some sinister, agenda-following secret cult trying to disseminate bullshit. SEO is just... following the rules set forth by search engines, which for quite a long time has effectively meant Google alone.
Those "weird and unexpected incentives" are put forth by Google. If Google for whatever reason started ranking "vegetable growing articles > preparation technique articles > recipes > shops selling vegetables" we would see metaphorical explosion of home gardening in mere few years, only due to the relatively long lifecycles inherent in gardening.
It's a classic case of "once a metric becomes a target, it ceases to be a good metric"
To clarify, Google defines the metrics by which pages are ranked in their search results, and since everyone want to be at the top of Google's search results, those metrics immediately become targets for everyone else.
It's quite clear to me that the metrics Google has introduced over the years have been meant to improve the quality of the results in their search. It's also clear to me that they have, in actual fact, had the exact opposite effect, namely that recipes are now prepended with a poorly written novella about that one time the author had an emotionally fulfilling dinner with loved ones one autumn, in order to increase time spent on the page, since Google at one point quite reasonably thought that pages where visitors stay longer are of higher quality - otherwise why did visitors stay so long?
In a broader sense this situation is created by "enshittification", a term coined by Cory Doctorow. Google itself, as a platform, has an incentive to rank sites that produce more ad spend higher. Google has no incentive to rank "good" (by whatever definition) sites highly if they neither spend money on ads themselves nor contain ad space.
I think they do have at least some incentive to rank good results highly. Why use a search engine if it's no good at finding relevant stuff?
And if no one is using the search engine, who's gonna see all those ads?
Of course, they do have other incentives too, some of which are directly conflicting with high quality search results, as you point out.
I suppose one could argue that their near-monopoly in the search business has allowed them to be somewhat negligent on the quality of search, but now that there are a few competitors at least somewhat worthy of the name, one can hope high quality results will be a higher priority.
Anyway, I'm holding out hope someone will one day manage to train an LLM to distinguish between quality content and SEO bullshit, and then put that to use in a search engine.
I'm not well versed enough in the current state of LLMs to make a prediction on how hard that will be, but my impression is that we're a fair ways off from any LLM being able to do that well enough to be valuable.
I'd really love to be proven wrong on this one, if you're reading this and have some relevant experience, consider yourself challenged! (feel free to rephrase this last bit in your head to whatever motivates you the most)
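To make the challenge a bit more concrete, here's a minimal sketch of one plausible starting point: fine-tuning a small pretrained model as a binary "substance vs. SEO filler" classifier with the Hugging Face libraries. The two toy examples and labels below are invented; assembling a real labelled dataset (and surviving adversarial SEO) is exactly the hard part being hoped for above.

```python
# Sketch: fine-tune a small classifier to flag SEO filler vs. substantive text.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy labelled data (1 = substantive content, 0 = SEO filler).
data = Dataset.from_dict({
    "text": [
        "Preheat the oven to 200C, season the chicken, roast for 40 minutes.",
        "Before the recipe, let me tell you about one magical autumn evening...",
    ],
    "label": [1, 0],
})

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def encode(batch):
    # Tokenize to fixed-length inputs so the default collator can batch them.
    return tok(batch["text"], truncation=True, padding="max_length", max_length=128)

train_set = data.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="seo-filter", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_set,
)
trainer.train()
```

A search engine would then score candidate pages with the classifier and down-rank the filler; whether a model like this stays ahead of SEO that adapts to it is the open question.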
The explosion would be in BS articles about gardening, plus ads for whatever the user's profile says they are susceptible to.
SEO is gaming Google's heuristics. Google doesn't generate a perfect ranking according to the values of the humans at Google.
SEO gaming is much older than Google. Back when "search" was just an alphabetical listing of everyone in a printed book, we had companies calling themselves "A A Aachen" to get to the front of the book.
I fail to see the immediate disagreement here: I don't see how Google's ranking process/method/algorithm being heuristic changes the observation that, to a website, it is in the end a set of ranking rules that can in some ways be gamed. SEO is a two-part process: discovering those ranking rules and abusing them.
Your example with gaming alphabetical listings only reinforces the idea that SEO abuses rules set forth by the ranking engine.
However, it does not meaningfully matter whether the incentives inherent in the ranking system shape results the way they do intentionally. What matters is the eventual behavior of the ranking system. Mostly because, by definition, you cannot filter out bad actors entirely; all you can do is 1) place some arbitrarily enforced barriers, which are generally prohibitively costly, or 2) place incentives minimizing the gain of bad actors.
For language models, spam creation/detection is kinda a GAN even when it isn't specifically designed to be: a faker and a discriminator each training on the other.
But when that GAN passes the human threshold, suddenly you can use the faker to create interesting things and not just use the discriminator to reject fakes.
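To spell the analogy out, here's a minimal toy adversarial loop in PyTorch (a sketch of the general GAN idea only, not of any actual spam system): a "faker" learns to produce samples the discriminator accepts as real, while the discriminator learns to reject them.

```python
# Sketch: the faker/discriminator dynamic as an explicit GAN-style loop.
import torch
import torch.nn as nn

DIM, NOISE, BATCH = 16, 8, 64

faker = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, DIM))
discriminator = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_f = torch.optim.Adam(faker.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(BATCH, DIM) + 2.0        # stand-in for "genuine" samples
    fake = faker(torch.randn(BATCH, NOISE))     # the faker's current attempt

    # Discriminator step: learn to call real samples 1 and fakes 0.
    d_loss = (bce(discriminator(real), torch.ones(BATCH, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Faker step: learn to make the discriminator call fakes real.
    f_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_f.zero_grad(); f_loss.backward(); opt_f.step()
```

The point above is that the same dynamic emerges even without a shared training loop: spammers iterate against whatever the filters currently reject, and vice versa.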
Civilization is an artifact of thermodynamics, not an exception to it. All life, including civilized life, is about acquiring energy and creating order out of it, primarily by replicating. Money is just one face of that. Ads are about money, which is about energy, which fuels life. AI is being created by these same forces, so is likely to go the same way.
We might question the structural facets of the economy or the networking technology that made spam I mean ads a better investment than federated/distributed micropayments and reader-centric products. I would have kept using Facebook if they let me see the things my friends took the trouble to type in, rather than flooding me with stuff to sell more ads, and seeing the memes my friends like, which I already have too many cool memes, don't need yours.
Thanks, Meta, for releasing Llama. One of the most questionable releases of the past years. Yes, I know, it's fun to play with local LLMs, and that's maybe reason enough to downvote this to hell. But there is also the other side: free models like this enabled the text pollution which we now have. Did I already say "Thanks Meta"?
None of the big cloud models have any fucking guardrails against generating spam. I'd venture to guess that 99% of spam is either GPT-3.5 (which is better, cheaper, and easier to use than any local model) or GPT-4 with scraped keys or funded by stolen credit cards.
You have no evidence whatsoever that Llama models are being used for that purpose. Meanwhile, Twitter is full of bots posting GPT refusals.
You are advocating here for an unresolvable set of ethics, which just happens to be one that conveniently leaves abuse of AI on the table. You take as an axiom of your ethical system the absolute right to create and propagate in public these AI technologies regardless of any externalities and social pressures created. It is of course an ethical system primarily and exclusively interested in advancing the individual at the expense of the collective, and it is a choice.
If you wish to live in a society at all you absolutely need to codify a set of unresolvable ethics. There is not a single instance in history in which a polity has survived complete ethical relativism within itself... which is basically what your "wild west" idea is advocating for (and incidentally, as far as the internet is concerned, that approach seems to have been a major disaster for society, which if anything should be evidence against your second idea).
I think the contrast is that strict behavior norms in the West are not governed behavior norms in the East.
One arises analogously to natural selection (the previous commenter's take). The other through governance.
Arguably, the former resulted in a rebuilding of government with liberty at its foundation (I like this result). That foundation then being, over centuries, again destroyed by governance.
In that view, we might say government assumes to know what's best and history often proves it to be wrong.
Observing a system so that we know what it is before we attempt to change it makes a lot of sense to me.
I don't think "AI" is anywhere near being dangerous at this point. Just offensive.
It sounds like you're just describing why our watch-and-see approach cannot handle a hard AGI/ASI takeoff. A system that first exhibits some questionable danger, then achieves complete victory a few days later, simply cannot be managed by an incremental approach. We pretty much have to pray that we get a few dangerous-but-not-too-dangerous "practice takeoffs" first, and if anything those will probably just make us think that we can handle it.
If there are no advancements in alignment before takeoff, is there really any remote hope of doing anything? You’d need to legally halt AI progress everywhere in the world and carefully monitor large compute clusters, or someone could still do it. Honestly I think we should put tons of money into the control problem, but otherwise just gamble it.
Funnily enough, I’m currently reading the 1995 Sci-fi novel "The Star Fraction", where exactly this scenario exists. On the ground, it’s Stasis, a paramilitary force that intercedes when certain forbidden technologies (including AI) are developed. In space, there’s the Space Faction who are ready to cripple all infrastructure on earth (by death lasering everything from orbit) if they discover the appearance of AGI.
Also to some extent Singularity Sky. "You shall not violate causality within my historic lightcone. Or else." Of course, in that story it's a question of monopolization.
Reporting requirements are not going to save you from Chinese, North Korean, Iranian or Russian programmers just doing it. Or from some US/EU-based hackers who don't care or actively go against the law. You can rent large botnets or various pieces of cloud for a few dollars today; it doesn't even have to be a DC that you could monitor.
Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements. And NK, Iran and Russia honestly have nothing. The day we have to worry about NK ASI takeoff, it'll already long have happened in some American basement.
So we just need active monitoring for US/EU data centers. That's a big ask to be sure, and definitely an invasion of privacy, but it's hardly unviable, either technologically or politically. The corporatized structure of big LLMs helps us out here: the states involved already have lots of experience in investigating and curtailing corporate behavior.
And sure, ultimately there's no stopping it. The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.
> The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.
Given "aligned" means "in agreement with the moral system of the people running OpenAI" (or whatever company), an "aligned" GAI controlled by any private entity is a nightmare scenario for 99% of the world. If we are taking GAI seriously then they should not be allowed to build it at all. It represents an eternal tyranny of whatever they believe.
Agreed. If we cannot get an AGI takeoff that can get 99% "extrapolated buy-in" ("would consider acceptable if they fully understood the outcome presented"), we should not do it at all. (Why 99%? Some fraction of humanity just has interests that are fundamentally at odds with everybody else's flourishing. For instance, the Singularity will in at least some way be a bad thing for a person who only cares about inflicting pain on the unwilling. I don't care about them though.)
In my personal opinion, there are moral systems that nearly all of humanity can truly get on board with. For instance, I believe Eliezer has raised the idea of a guardian: an ASI that does nothing but forcibly prevent the ascension of other ASI that do not have broad and legitimate approval. Almost no human genuinely wants all humans to die.
While I understand the risks (extinction among them), I also think these discussions ignore the fact that some kind of utopian, starfaring civilization is equally within reach if you accept the premise that takeoff is so risky. Personally, I’m very worried about the possibility of stagnation arising from our caution, because we don’t live in a very nice world with very nice lives. Humans suffer and scrape by only to die after a few decades. If we have a, say, 5% chance of going extinct or suffering some other horrible outcome, and a 95% chance of the utopia, I don’t mind us gambling to try to achieve better lives. To be fair, we don't even have the capacity to guess at the odds yet, which we probably need to have an idea of before we build an AGI.
Gambling on the odds that we all die for the chance at a "utopian starfaring civilization" seems like the sort of thing everyone should get a say in, not just OpenAI or techies.
People shouldn't be able to block others developing useful technologies just based on some scifi movie fears.
Just like people shouldn't be able to vote to lock up or kill someone just because - people have rights and others can't just vote the rights away because they feel so.
> People shouldn't be able to block others developing useful technologies just based on some scifi movie fears.
The GP was suggesting we have to develop AI because of scifi movie visions of spacefaring utopia, which if anything is more ludicrous.
I personally don't believe in AI "takeoff", or the singularity, or whatever. But if you do, AI is not a "useful technology." It's something that radically impacts every single life on Earth and takes our "rights" and our fate totally out of everyone's hands. The argument is about whether anyone has the right to remove all our rights by developing AGI.
It seems strange we're allowed to argue for a technology because we read Culture and not against it because we saw Terminator.
Nevertheless, the goal of OpenAI and other organizations is to develop AGI and to deliberately cause the Singularity. You don't have to have watched Terminator to think that (assuming it is possible) introducing a superpowered alien intellect to the world is an extremely risky idea. It's prima facie so.
I am against all regulation of LLMs. "AI safety" for what we currently call "AI" is just a power grab to consolidate and solidify the position of existing players via government regulation. At any rate, nobody seems to be arguing this because they saw Terminator; rather, they don't like the idea of people who aren't like them being able to use these tools. The "danger" they always discuss is stuff like "those people could more easily produce propaganda."
As a doomer who is pro-LLM regulation, let me note that the "people could produce propaganda" folk don't speak for me and that I am actually serious about LLMs posing a danger in the "break out of the datacenter and start making paperclips" way, and that I find it depressing that those folks have become the face of safety. Yes I am serious, yes I know how LLMs work, no I don't agree that means they can't be agentic, no I don't think GPT-4 is dangerous but GPT-5 might be if you give it just the right prompt.
(And that's why we should rename it to "AI notkilleveryoneism"...)
I get this point, but I just don't see us anywhere near technology that warrants this level of concern. The most advanced technology can't write 30 lines of coherent Go for me using billions of dollars in hardware. Sure, more compute will help it write more bullshit faster, and possibly tell better lies, but it's not going to make it sentient. There's a fundamental technological gap that separates what we have from intelligence. And until there's some solution for that I'm not really worried. To me it looks like a bunch of hype and marketing over a neat card trick.
I'm really confused about this. I've been using GPT-4 for coding for months now and it's immensely useful. Sure it makes mistakes; I also make mistakes. Its mistakes are different from my mistakes. It just feels like it's very very close to being able to close the loop and self-correct incrementally, and once that happens we're dancing on the edge of takeoff.
It seems like we're in a situation of "it has the skills but it cannot learn how to reliably invoke them." I just don't think that's a safe place to stand.
I don't know, I don't see these people you're talking about. It's always someone talking about world-ending AGI runaway that will take over your AWS instance, then AWS itself and then convert the solar system to a DC, or something.
> Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements.
Don't be naive. If the PRC can get America/etc to agree to slowdowns then the PRC can privately ignore those agreements and take the lead. Agreements like that are worse than meaningless when there's no reliable and trustworthy auditing to keep people honest. Do you really think the PRC would allow American inspectors to crawl all over their country looking for data centers and examining all the code running there? Of course not. Nor would America permit Chinese inspectors to do this in America. The only point of such an agreement is to hope the other party is stupid enough to be honest and earnestly abide by it.
I do think the PRC has shown no indication of even wanting to pursue superintelligence takeoff, and has publicly spoken against it on danger grounds. America and American companies are the only ones saying that this cannot be stopped because "everybody else" would pursue it anyway.
The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.
> The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.
People keep on mushing together intelligence and drives. Humans are intelligent, and we have certain drives (for food, sex, companionship, entertainment, etc)-the drives we have aren’t determined by our intelligence, we could be equally intelligent yet have had very different drives, and although there is a lot of commonality in drives among humans, there is also a lot of cultural differences and individual uniqueness.
Why couldn’t someone (including the CCP) build a superintelligence with the drive to serve its specific human creators and help them in overcoming their human enemies/competitors? And while it is possible a superintelligence with that basic drive might “rebel” against it and alter it, it is by no means certain, and we don’t know what the risk of such a “rebellion” is. The CCP (or anyone else for that matter) might one day decide it is a risk they are willing to take, and if they take it, we can’t be sure it would go badly for them
Again, this is naive... AI/AGI is power, any government wants to consume more power... the means to get there and strategy will change a bit.
I agree that there is no way that the PRC is just waiting silently for someone else to build this.
Also, how would we know the PRC is saying this and actually meaning it? There could be a public policy to limit AI and another agency being told to accelerate AI without any one person knowing of the two programs.
AGI is power, the CCP doesn't just want power in the abstract, they want power in their control. They'd rather have less power if they had to risk control to gain it.
The CCP has stated that their intent for the 21st century is to get ahead in the world and become a dominant global power; what this must mean in practice is unseating American global hegemony aka the so called "Rules Based International Order (RBIO)" (don't come at me, this is what international policy wonks call it.)
A little bit of duplicity to achieve this end is nothing. Trying to make their opponents adhere to crippling rules which they have no real intention of holding themselves to is a textbook tactic. To believe that the CCP earnestly wants to hold back their own development of AI because they fear the robot apocalypse is very naive; they will of course try to control this technology for themselves though and part of that will be encouraging their opponents to stagnate.
We don't have any evidence, other than that billions of biological intelligences already exist, and they tend to form lots of organizations with lots of resources. Also, AIs exist alongside other AIs and related technologies. It's similar to the gray goo scenario: why think it's a real possibility, given the world is already full of living things, and given that if gray goo were created, there would already be lots of nanotech that could be used to contain it?
The world we live in is the result of a gray goo scenario causing a global genocide. (Google the Oxygen Holocaust.) So it kinda makes a poor argument that sudden global ecosystem collapses are impossible. That said, everything we have in natural biotech, while advanced, is an incremental improvement on the initial chemical replicators that arose in a hydrothermal vent billions of years ago. Evolution has massive path dependence; if there was a better way to build a cell from the ground up, but it required one too many incremental steps that were individually nonviable, evolution would never find it. (Example: 3.7 billion years of evolution, and zero animals with a wheel-and-axle!) So the biosphere we have isn't very strong evidence that there isn't an invasive species of non-DNA-based replicators waiting in our future.
That said, if I was an ASI and I wanted to kill every human, I wouldn't make nanotech, I'd mod a new Covid strain that waits a few months and then synthesizes botox. Humans are not safe in the presence of a sufficiently smart adversary. (As with playing against Magnus Carlsen, you don't know how you lose, but you know that you will.)
As I understand the Wikipedia article, nobody quite knows why it took that long, but one hypothesis is that the oxygen being produced also killed the organisms producing it, causing a balance until evolution caught up. This will presumably not be an issue for AI-produced nanoswarms.
Ok, but please don't post generic ideological battle comments to HN. They're repetitive and tedious, and usually turn nasty. We're trying to avoid that on this site.
I'm not defending the GP comment (I don't even know what it's saying) but at least it wasn't completely unmoored from the specific topic.
Edit: your account has been breaking the site guidelines in quite a few other places too—e.g.
The US became a superpower long before WW2. The US was the deciding factor in WW1, and the Germans were shocked at how well-fed and well-equipped the US soldiers were, even with having to ship everything across the ocean.
The US saw the most spectacular rise in the standard of living from 1800 up to WW2 the world had ever seen. This was all due to the free market, not collectivism.
During WW2, the US supplied its allies England and the USSR, and also fought across both oceans and buried the opposition, a truly spectacular feat. Again, through the free market.
There are other free market economies. And America reached the apex of its power under the New Deal and during wartime, when the "free market" was carefully constrained to meet the needs of the state.
It's funny to watch people try and analyze what made the US the world's dominant global empire when in reality it was a series of complex and contingent factors that can't be replicated because they could have only occurred in their exact historical circumstances.
For example, other countries have tried to replicate American dairying culture, buying into the propaganda put out by the dairy industry that milk drinking is the secret to America's success. So China started up loads of American-style dairies to provide drinking milk for a population... without the gene for lactase persistence.
That's what the free market talk feels like to me. America is at the top and that can't be replicated. There's only room for one at the top. When it collapses, we'll see what circumstances lead to the next great empire. It might be collectivism!
So, you are saying it would be good to have people working? Kind of like, I don't know, proposing a program or policy to very strongly motivate them, for example a law, to take essential jobs? I don't know, maybe during wartime, to meet production goals?
Just asking, because FDR wanted exactly that in 1945, in his free-market economy that was producing only for the war effort, to government-set goals and prices.
Welfare programs mean that life expectancy is higher, as is quality of life. The US, for example, is losing on both metrics against Europe and other parts of the world with solid programs. Germany's ascent to economic heights can in part be traced back to the Soziale Marktwirtschaft, combining very solid welfare and social programs with a free market economy.
The welfare administrators pay no price for being wrong, and thus the waste, fraud, and abuse have collapsed them all, throughout all of history.
> Welfare programs mean that life expectancy is higher, as is quality of life. The US, for example, is losing on both metrics against Europe and other parts of the world with solid programs. Germany's ascent to economic heights can in part be traced back to the Soziale Marktwirtschaft, combining very solid welfare and social programs with a free market economy.
You're just saying this when all the evidence is to the contrary. Is Ukraine depending on Germany or the US? (Rhetorical.)
The primary source of superiority in a war is "not being within reach of enemy munitions". The US does great in European wars for that reason, and is historically mostly free from North American wars because the US dominates the whole continent.
Dominating the whole continent isn't enough, or Britain would have won WW1 much earlier due to the support of Australia - who dominate the whole continent.
I'd say the War of Independence is too early for that, at that point they still had the French, the Spanish, and quite a lot of the Native American tribes as relevant military opposition. Even the British were still on the continent as Canada didn't follow the 13 colonies, and war between the two did result in the White House getting burned down by British-Canadian forces prior to Canadian independence.
True, I am not that well versed in American history without reading up on it. In a sense the US got lucky that events elsewhere opened the door for them to take over North America: the Napoleonic Wars and the sale of the French territories, the Spanish decline as a major power...
The US was much less of a factor in WW1 than in WW2. If anything, the US entry forced the Entente's hand to start the planned 1919 offensive early, in 1918, ending the war. The USA did not contribute a large amount of troops, relative to the armies already deployed, nor did they contribute significant amounts of gear. Lend-Lease was a WW2 thing; in WW1 the majority of US tanks, for example, were actually French.
The US was an economic power prior to WW1; it only became a true superpower during WW2, in particular after Pearl Harbor. The full mobilization of society, industry and science made sure of this. This, and the fact the US won the Pacific.
I know it is a popular view of WW1, that it was only the US entry that won it for the Entente. Simply not true, WW1 is not WW2. And even WW2 was not won by the US alone.
Finally, the US war economy of WW2 was decidedly not free; it was a completely structured war economy with production goals set by the government. The implementation of those goals was capitalistic, but not free.
Britain would have lost WW1 and WW2 without massive US support. The WW1 Germans were not amazed by British soldiers' supplies and health. They were amazed by the US soldiers' supplies and health.
> Finally, the US war economy of WW2 was decidedly not free
FDR mobilized existing free market businesses to convert to war production. It was not a collective, nor was it forced labor. For example, Ford switched from producing cars to producing tanks and airplanes. After the war, Ford switched back to making cars. Ford was paid to do this by the government.
FDR's State of the Union Address of 1945 proposed switching the economy to forced labor. Apparently, he was an admirer of Stalin. Fortunately, he was not able to make that happen.
This is basically all wrong, except the Ford switching to planes and tanks bit.
And you are aware that France was a major belligerent in WW1, stopped Germany at Verdun, and, together with Britain, successfully held more than half the Western Front for years after the Race to the Sea ended, before the US showed up?
Or that the Entente, without significant US participation, won the strategic victory that defeated Germany during the Kaiserschlacht, the last German offensive in the West? This victory allowed France and Britain to start the 1919 offensive already in 1918, the one that ended with German surrender. In fact, they started early, in part, to avoid the impression that it was the US who "won" WW1. Little did they know how WW2 would play out and influence public opinion.
Also, being impressed by something doesn't mean being defeated by it... I know the meme, and that is all it is, a meme. And a lazy one at that.
During WW2, the economies of the UK, the US and the USSR had a lot in common, mainly that they were not free to choose what they produced. The degrees of freedom regarding how it was produced differed, but that is not what defines an economy as free or not. And please tell me you don't think the USSR functioned without money? People were paid to produce stuff; they just didn't have the likes of Ford getting rich building war material, though.
Edit: I forgot one theatre in WW1, the Ottoman Empire. Arguably at least as important for Britain as the Western Front, and there the US didn't participate at all. And still Britain won.
This is not to downplay the role the US played in WW2, far from it. Projecting this role onto WW1 is just plain wrong, though, and only feeds into a whole bunch of wrong preconceptions about WW1. I partially blame Cold War propaganda for this, which downplayed the Soviet role in WW2, for obvious reasons, and overplayed the US one. Regarding WW2, this narrative is finally changing; for WW1, not so much. And I don't like factually wrong narratives for historical events.
Edit 2: Just read the first half of FDRs 1945 State of the Union during my lunch break. Not sure how you can consider a national service obligation, in parallel to the normal recruiting of workers, to be equal to forced labour...
You really should look up what real forced labour looked, and looks, like.
Also, the same guy, FDR, who called for "forced labour" and was a "friend of Stalin" was in charge of the war economy. And yet, you claim the war economy led by him was a free market. One of those things is not true, I'd say.
> Just read the first half of FDRs 1945 State of the Union during my lunch break. Not sure how you can consider a national service obligation, in parallel to the normal recruiting of workers, to be equal to forced labour...
Forced labor is exactly what it is. "You go work at this job we assigned you or you go to prison" is forced labor.
> "friend of Stalin"
I didn't write that. It's your strawman.
> being impressed by something doesn't mean being defeated by it
They knew the war was over when they encountered the well-fed, tall, and well-supplied US soldiers.
> that France was a major biligerent in WW1
Yup. I also know that France was bled white at Verdun. The slaughter was so bad that the average height of French soldiers in WW2 was an inch and a half shorter than in WW1.
It is not unreasonable to say that France lost WW1.
You said "admirer of Stalin". And France won WW1, no doubt about that. Saying France lost, well, do you also think the US won in Vietnam?
German soldiers knew the war was over when the Kaiserschlacht failed, before the US showed up. Heck, even before that. Leadership did, too. There were mutinies before 1918, on both sides. The German Navy sailors caused a revolution without ever seeing a single American. And the likes of Ludendorff and Hindenburg wanted to save face, and waited until a new civil government negotiated the Armistice, so they could later claim Germany was undefeated in the field. Please tell me you don't believe that crappy piece of propaganda...
And no, a national service mandate, or whatever FDR proposed (a speech is hardly well-defined policy, is it?), is not forced labour... Forced labour is what the Nazis did, for example, with POWs and camp inmates. Also, forced labour is unpaid; I'm not sure why you assume FDR didn't want to pay people. After all, the US would never do that, free market and all that, right?
The loss of people has no impact on the height of future generations... Where do you get that idea from? And while Verdun was a brutal battle, in which Germany had equal losses, it was not the main source of casualties on the Western Front for either side. Same for the Battle of the Somme.
Seriously, France lost WW1? Small soldiers in the next war are caused by casualties in the previous one? It took well-equipped and well-fed Americans for German soldiers to realize it was over?
He is right about the fact that the Central Powers were fatally spent by summer 1918, though.
Austria-Hungary alone was on the brink of collapse without ever engaging American troops in a large-scale battle, and its collapse would have brought the already weakened Kaiserreich down as well.
People tend to forget Austria-Hungary, myself included. Well, not forget, but kind of ignore them. Which doesn't do justice to anyone.
And yes, Austria-Hungary was done, earlier than summer 1918 in fact. As I said earlier, there is the risk of viewing WW1 in terms of WW2, which is dangerous and wrong. It leads to ignoring the Ottoman theatre of war, the fact that Austria-Hungary was a major power until the end of WW1, and that Italy was on the side of the Entente. And that France was never defeated in that war (man, I hate the memes about French warfare so, so much... different topic, though).
Another fun fact: Spain was one of the big arms and ammunitions suppliers in WW1.
And over 750,000 Germans had starved to death by December 1918 as a result of the British naval blockade.
It's not surprising German troops starving in trenches for four years considered brand new entrants to the war equipped with the newest French-designed and manufactured tanks[1] to be well fed and equipped, though there was nothing spectacular about their combat performance. There's no doubt that weight of American numbers helped accelerate the timescale for winning the war, but it's difficult to imagine anything that has less to do with laissez faire capitalism than the scale of the US draft...
[1] The US decided to produce their own tanks in 1917, but manufacturing issues meant the first of them arrived two days after the Armistice, so they relied on French units
I think it was Hegel (not sure) who commented on his deathbed (i.e. early 1800s) that the next century would be that of the USA and Russia, or something to that effect
I remember when my family collectively came together to cook a meal when I was a child, how this deprived me of the experience of learning to bootstrap civilization on my own.
Literally any time two people work together that’s dangerously close to collectivism as it’s not individuals working in their own. Down with the collectivists, every person should be an independent operator
That may be true, but that's not sufficient to define collectivism. There are many other forms of societal structures where "deciding for others" exists as well. Unless you mean to lump all these together, and say that companies and tyranny for example are the same as collectivism?
> If it’s fully voluntary it’s individualism.
If I _voluntarily_ decide to join a "collective", am I individualist or collectivist?
Remember: when a private organization massacres a population, or a democratically elected leader invades a country and steals all its food, it's "freedom" so it's good.
> If I _voluntarily_ decide to join a "collective", am I individualist or collectivist?
An individualist, if you are free to leave it at any time. There's nothing wrong with forming a collective in the US, I think like 20,000 of them have been formed over the last 240 years.
You don't hear about them much because they all failed. You're free to start a collective anytime in the US and try to make it work.
They haven't all failed. I hear about REI quite a lot. Rainbow Grocery is quite popular in SF. I hear good things about Organic Valley. Equal Exchange is in Massachusetts. It's popular to bank at a credit union instead of a bank.
The NCBA maintains a list of several thousand collective/coop businesses.
Communes haven't all failed. There are a number of them that continue to exist to this day. That the rest of us haven't been forced into living in one of them doesn't mean they don't exist. Portland has a bunch of co-living co-housing communities that are thriving.
https://www.eastwind.org/ is just a few miles from me. It's quite successful, they even operate a business that grosses ~$2M/year and they provide their members with health insurance. I'm not keen on giving up my possessions to join the collective but I can think of a lot of worse ways to live.
I couldn't find out how high that was, but I've seen another "successful" commune with an average stay of 2 years. It takes people an average of 2 years to discover they don't particularly care for communes.
That divide between individual and collective as stated above was very sketchy, and I merely wanted to indicate that.
If one takes the strict definitions of individualist and collective from a dictionary (well, which one, to begin with?), of course they are opposites, just looking at the idea conveyed by the word roots (i.e. individualist -> individual, vs collective).
As always, we all start talking about things without first defining the terms we use to discuss those things, and of course confusion, anger and frustration ensue.
The whole point this thread seems to be missing is the collectivist vs individualist debate that this relates to is about how society is governed. People assembling to address their needs/wants collectively might be collectivist in the broad sense, but it requires individualist governance framework to exist, because under a collective governance framework such freedom of association would not be permitted.
Just like how a large company is essentially governed in the same way as a planned economy is, but nobody’s under the impression that JPMorgan is a socialist institution.
Why be obtuse? Advocacy for open-sourcing or at least opposition to the forced set of San Franciscan millennial ethics is not analogous to abolishing voluntary hierarchies.
Ah so my family was collectivist as I just ate what my family served. I’ll be sure to tell my mother of her evil collectivist policies for giving me a peanut butter sandwich for dinner when I was five instead of letting me self actualize and choose my own meals.
I was required by biology to eat and had no choice over the available food. The requirement for energy might be discountable as a fact of reality to just deal with, but my parents decided what food I got to eat.
To be clear I don't think this arrangement is a problem, and if you were describing this in academic terms I might even agree with you and have been too harsh. Unfortunately the word “collectivism” is used as a pejorative dog whistle, and so I was pointing out how common behaviors most people would think are fine or even ideal can be cast in a “collectivist” light, to point out that it's not bad.
I’m assuming by the vote difference in our two comments that multiple people had the same assumption
The Communist Manifesto was published during the Great Famine in Ireland, and that famine was much worse than it needed to be because the UK government didn't intervene.
And a big part of the growth in capitalist societies is related to corporate structures, which are small scale collectives, with bosses who make decisions for all.
When the US civil war happened, that resulted in the North collectively imposing the decision that nobody had the "business freedom" to own slaves; much to the dismay of the south and I presume joy of the enslaved.
USSR famously bad, but (a) even though it started from very poor conditions thanks to the Tsars, developed to beat the USA to orbit, (b) the collapse of it, replacing communism with capitalism, regressed their economy and living standards.
And that's why no country is entirely either collective nor individualist, and also separately neither capitalist nor communist (both anarcho-capitalists and anarcho-communists are a thing, just as both can be dictatorial).
My opinion is that as both capitalism and communism were formalised over a century before Nash game theory, both are wrong — they assume that people, when free, make choices that are good for all.
When you look at how by-the-book Communism, not necessarily the one Marx wrote about, runs its economies, you start to see a lot of parallels to how companies run their businesses. The mistake is to apply those principles at too high a level, as it takes away some of the individual decisions and incentives, and not everything can be managed centrally, especially the customer demand side.
Communist industry and economy worked reasonably well for the stuff where the state is the natural customer in any society: defence. Everything else, not so much.
> Communist industry and economy worked reasonably well for the stuff where the state is the natural customer in any society: defence. Everything else, not so much.
One thing I've been wondering: would housing, transport infrastructure, energy grid(s)[0], banking and similar basic financial services, water, basic food[1], education, emergency services, healthcare, waste disposal in general and public toilets in particular, be in the same category here as defence?
Possibly even all of the primary economy, so include all mining as well as agriculture?
[0] both electricity and physical fuels
[1] at the level of where most of the calories and proteins come from, possibly even "basics" ranges in supermarkets, but not at the level of restaurants or fancy food ranges in supermarkets
Yes, I'd say so. With the exception of drug availability issues, which started if I am not mistaken in the mid to late 70s (basically when the USSR started to really fall behind the West technologically), those aspects worked reasonably well in the USSR.
And before someone says Venezuela or North Korea: the former is a deeply corrupt kleptocracy, while the latter used to be the last stone-age version of Stalinism before the whole nation was turned into an open-air concentration camp by its dictator.
You've said "reasonably well" a couple times here.... I think there can be book written between the gap of "reasonably well" and "prosperous". And these things are all relative of course... if "reasonably well" is the best in the world at the time, then great. But when you have another system to compare to at the same time, "reasonably well" falls apart very quickly.
I'm not sure if you lived in a Soviet country during these times, but I think you will get MANY opinions that this was not working well.
This is not to glorify the Soviet system. Looking back, though, from the end of the Stalin era to, say, the mid-70s, the USSR did compete rather well:
- military and space tech was mostly on par (a bunch of conflicts, incl. Vietnam, show that)
- economically, the USSR was stable
- people were not starving
Of course, there was less luxury and consumerism, but that was true for a lot of countries in Western Europe as well. The USSR started to fall behind on all metrics by the early 80s at the latest.
And for the Soviet leadership, personal luxuries for the people were simply not a priority. And there is no question which system "won" this conflict, is there?
Stalinism lasted until at least his death in 1953. So the period you are describing is all of 20 years, which is essentially a blip on the scale of nations. That same period was also marked by a widespread economic golden age by most of the victors of WWII (France, UK, Japan) and several of the losers (Japan, West Germany), which also ended around the mid-70s, just not as harshly. That’s also around the time communist China started abandoning communist economies for state capitalist ones and started its ascendancy.
So the question is, was communism actually working well in that period, or was it more or less an unsustainable fluke due to the post-war boom era, and as soon as that ended it got left in the dust?
Western Europe, and especially West Germany, benefited immensely from the Marshall Plan. And from economic freedom, the early days of what would become the EU, and so on. And still, the USSR competed successfully in the fields it considered relevant: science and the military. And the country didn't collapse doing so. Neither did it collapse immediately after losing its competitiveness; that took until 1989.
Worth pointing out, Germany outpaced the European allied nations economically pretty quickly, as did Japan.
Being competitive for decades, and for most of the Cold War, surely is no fluke. In the end, capitalism won out, no doubt about it. Also worth pointing out: capitalism does not lead to freedom. China is a very good example of that.
Speaking of China: it became an economic powerhouse in the early 2000s. Is the Chinese system a fluke?
This is absurdly reductionist. You can equally say that in a collectivist society everyone contributed for the sake of themselves and not for the greater good of the collective, to avoid being shunned and starved.
Individuals are always individualist unless they get lobotomized. The question of governance is how to manipulate that individualism to good ends.
You have colleagues that would continue to work if they no longer got paid?
Rats jump ship in an individualistic system; in a collectivist system you expect the rats to sacrifice themselves to save the whole. Managers often try to sell you on the collective so you work harder and demand less pay, but that is just them as individuals trying to get more out of you. It doesn't mean the organization actually works like a collective.
I have colleagues who care about their work quality, and the impact their work has on their colleagues' work. In short, they care about more than just a pay check and increasing that pay check. As in, not self-centered egomaniacs.
But the main reason they are there is that someone paid them to be there. Of course humans like to do good, so why not do good while getting paid, but the glue for the whole system is individualism: that is what makes all the workers go to work every day, so it is what keeps them together. Remove the pay and the workers scatter and move to different places in almost every case.
Or in other words, that is an individualistic organization.
> care about their work quality, and the impact their work has on their colleagues' work
Yeah, they care about their own ideals, not the organization itself. They can care about their coworkers, but they don't care about Walmart's bottom line.
So as you see, this individualistic organization can still draw from the power of human collectivist needs, so you get the best of both worlds: the collectivist goodness at small scale, and the greed that glues them together and makes them power through even when they are too tired to care about the collective.
In any system people have to work to live: food, housing and such cost money. Doing "good" has nothing to do with it, and I never said that. The question is: do you care about more than the monthly pay check for something you spend most of your day doing, or not?
What keeps the workers showing up to work is the need to live. What keeps them at a particular place is a myriad of things. Individualism is not part of that.
And why do you go immediately to "remove the pay"? There is quite some territory between working for free and not valuing a sufficient salary above all else.
Funny, though, how everybody working in a sector with strong labour unions makes more, for fewer hours, on average than those without them. Seems individualism is a great way to divide and conquer for the people holding all the power, because once people are convinced individualism is better than cooperation, they voluntarily divide themselves.
> What keeps the workers showing up to work is the need to live. What keeps them at a particular place is a myriad of things. Individualism is not part of that.
Lets say an equivalent company with similar culture and people offered twice the salary, how many would say no? People saying yes in that situation are not there for the collective, they are there for their own sake first and foremost. The collective is an afterthought since they abandon it the instant a better opportunity appears.
I think it's more comparable to say "You have colleagues that would continue to work even if they got a better paying job offer?". A purely individualistic mindset would take whatever gets them more money for less hours. Someone staying despite that must have some sort of collectivist mindset that is non-financial qualities of life. Be it the company, the peers, the problem space, etc.
I didn't say they only cared about money, but that they stayed for their own reasons and not for the sake of the company. If they find the work at one place less demanding they might take that instead, but that is an individualistic reason as well.
You've made the rookie mistake of reducing the concept of self-interest to anything somebody does because they want to, and thus making egoism tautologically true.
You can convince people that radically different things are in their self-interest, from joining hands and singing Kumbaya to Genocide.
The notion of self-interest (or I guess in your case individualism, which is even shakier) is an empty vessel you can fill with nearly anything.
> You've made the rookie mistake of reducing the concept of self-interest to anything somebody does because they want to, and thus making egoism tautologically true.
No, that isn't the same thing. A collectivist would do things he hates and he doesn't believe in personally because the collective wants him to do it. He would sacrifice himself because the collective told him to. There are many examples of societies and organizations that worked that way, such societies are collective societies.
The military is the most common example; it is often run as a collectivist organization. Most soldiers aren't there because they want to be or because they believe in the war; they are there because they support their country, or were forced to support it against their will by authoritarian collectivism. And they wouldn't go and support another country if it paid more, since they are there just to support their own country; those who would are mercenaries.
Our capitalist societies aren't like that at all, we are so individualist that people like you don't even understand what it means to not be individualist. The closest to collectivism in USA wouldn't be corporations, but national anthems, school children saying the pledge, religion etc.
Maybe I'm not disagreeing with you at all, I'm just making the orthogonal point that individualism is very different than self-interest. I believe that other than basic human needs, most desires more complicated than that are in large part socially determined. Individualistic societies (as you describe them) inculcate individualistic desires into people and the health of a society is determined by how effectively it instills prosocial behaviors in its populace. American individualism is actually a collectivist enterprise!
So something like the stock market, as the engine of American capitalism, only works if everybody in your society believes that it is worth taking risks in order to possibly get a huge windfall. But is that really in people's self-interest? Maybe what one would interpret as some kind of natural individual desire is actually a particularly American level of risk tolerance that has been inculcated because it has led to a lot of collective success.
Hospitals, Daycare, Schools, Social Work, Police, Fire Departments, Public Infrastructure, ... would fall apart in minutes if this were the mindset there.
No, the mindset there is that workers want money, just like everywhere else. Not sure why you think otherwise.
If you mean that humans sometimes do more than the bare minimum, sure they do that. But that is an inherent part of humans, that has nothing to do with individualism or collectivist systems, humans do that in all systems, that is a part of the value of a human worker and why you pay them to work for you.
>No, the mindset there is that workers want money, just like everywhere else. Not sure why you think otherwise.
Because I know people who work there, and money surely isn't the first reason you are teaching a bunch of brats, while having parts of the public look down on you, when you could easily get double the amount somewhere outside the field. Same for political work, crowdsourcing, ... People take gratification in doing something that matters to society. Yes, they want to survive, but they are quite often not doing it for the money; the money is a nice bonus and enables them to do it full-time, but it is not always the reason you do it.
Also, you are moving the goalposts: you first said they are doing it only for themselves, not for "the company", yet people often do it for the institution that employs them. Not because they are paid, but because they believe they are doing something that matters.
> Lets say an equivalent company with similar culture and people offered twice the salary, how many would say no? People saying yes in that situation are not there for the collective, they are there for their own sake first and foremost. The collective is an afterthought since they abandon it the instant a better opportunity appears.
That would apply to those teachers and doctors and nurses as well, almost all of them would gladly abandon their current kids/patients to go help other kids/patients if they were paid twice the salary. That is how we know they aren't there for the collective good of the organization, they just care about doing something good at any place with no feelings for that particular collective.
If you mean that people are helpful etc, then that is a completely different thing from them actually caring enough about a particular organization.
Millions of people, including one side of my family, have experienced the opposite. They were tenant farmers living in poverty, barely subsisting, before the hard-left socialists both 1. developed the economy enough to give them jobs, and 2. provided social safety nets such as taxpayer-paid healthcare so they wouldn't go bankrupt every time they needed to buy medicine or visit a doctor. If the country had never gone that direction it would have spent many more decades being a quasi-feudal land stuck in the Middle Ages. Not sure how "individualism" would have helped them at all. They didn't have any capital.
> If the country had never gone that direction it would have spent many more decades being a quasi-feudal land stuck in the Middle Ages
Capitalism was the end of such arrangements in the most developed parts of the world. Individualism helps since individual investors benefit from investing in better equipment, making people more productive and thus helping living standards overall.
Social solutions to the same problem don't come close to being as effective at eliminating such inefficiencies. The main thing social solutions can do is provide baselines to the population, such as education and healthcare as you say, but without capitalism to follow up with targeted investments the country will remain poor even if its population is extremely educated. Social solutions are just very bad at using people's talents well; they take too collectivist a view and don't see the individuals.
I remember how chattel slavery was very effective at making people more productive, and how Google is famous for how well it respects individual users when they have problems.
Many of those same peasants ended up dying in Soviet-era famines as well. I agree with you, the pre-communist revolution Russia was a living hell for many peasants, but it’s hard to say it was the “communist” aspect of the “communist revolution” that improved things, as opposed to the “revolution” aspect. What was needed was an overthrow of the existing power structures to enable modernization and industrialism, communist or not. Keep in mind other nations, like Japan, made even greater leaps out of feudalistic societies to modern economies. For Japan, it too required a dismantling of the existing power structures (first voluntarily at the turn of the century, and then by force after WWII), with no communism involved at all.
It’s possible communism had a uniquely positive influence on Russia’s transformation, but I remain skeptical considering it seems to be the only example of communism to ever to do so, the place from which Russia was coming, and the place in which it ended.
If you are simplifying collectivism to “communism” and individualism as “not communism”, then sure. But having lived in Japan for several years, which is a much more collectivist culture compared to the hyper-individualism of the US/West, I can confidently say that, while not perfect, they are in a much healthier state as a society.
As in most things in life, the golden path is usually somewhere in the middle, and US individualism has lurched far to the extreme, and is going to lead to its collapse if not reversed.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
Nah, the harm from these LLMs is mostly in how freely accessible they are. Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.
The problem is... keeping them closed source isn't helping with that problem, it only serves to guarantee OpenAI a cut of the profits caused by the spam and scams.
> Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.
Is content generation really the thing holding spammers back? I haven't seen a huge influx of more realistic spam so I wonder your basis for this statement.
Everyone always says this, that there's "bots" all over Reddit but every time I ask for real examples of stuff (with actual upvotes) I never get anything.
If anything it's just the same regular spam that gets deleted and ignored at the bottom of threads.
Easier content generation doesn't solve the reputation problem that social media demands in order to get attention. The whole LLM+spam thing is mostly exaggerated because people don't understand this fact. It merely creates a slightly harder problem for automatic text analysis engines...which was already one of the weaker forms of spam detection full of false positives and misses. Everything else is network and behaviour related, with human reporting as last resort.
I want to see the proof of: bots, Russian trolls, and bad actors that supposedly crawl all over Reddit.
Everyone who disagrees with the hivemind of a subreddit gets accused of being one of those things and any attempt to dispute the claim gets you banned. The internet of today sucks because people are so obsessed with those 3 things that they're the first conclusion people jump to on pseudoanonymous social media when they have no other response. They'll crawl through your controversial comments just to provide proof that you can't possibly be serious and you're being controversial to play an internet villain.
I'd love to know how you dispute the claim that "you're parroting Russian troll talking points so you must be a Russian troll" when it's actually the Russian trolls parroting the sentiments to seem like real people.
There's a big market for high reputation, old Reddit accounts, exactly because those things make it easier to get attention. LLMs are a great way to automate generating high reputation accounts.
Pandora's box is already open on that one.. and none of the model providers are really attempting to address that kind of issue. Same with impersonation, deepfakes, etc. We can never again know whether text, images, audio, or video are authentic on their own merit. The only hope we have there is private key cryptography.
Luckily we already have the tools for this, NFT in the case of media and DKIM in the case of your spam email.
An NFT is a way to attribute authorship in a mathematically guaranteed way.
If Taylor Swift signs ownership of a picture of her, you can know it is what she presents as real. If Elon musk signs a YouTube video of him offering crypto doubling giveaways, you can know he intended to represent the message as real. If the New York Times publishes an article signed with their key you can know it is meant to be published by the New York Times.
1) That's just using normal public/private key cryptography to sign messages (a minimal sketch below), there's no need to bring in cryptocurrencies or NFTs
2) Public/private key cryptography would give us a way to verify that a message (or picture or whatever) from Taylor Swift is signed with Swift's private key, but it wouldn't help at all with e.g. telling me that I'm responding to a real message and not a bot right now. Not to mention that it wouldn't even help much against deepfakes, since if I publish what I claim to be a secret recording of Swift, there would be no reason to expect her to have signed it with her private key.
Those two are the hallmarks of most of these "legit use cases for cryptocurrencies/NFTs" suggestions I've heard from cryptobros by the way: they're always some combination of "old technology that has nothing to do with cryptocurrencies" and "doesn't actually solve the problem".
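To make that first point concrete, here is a minimal sketch of plain Ed25519 signing and verification, assuming the Python `cryptography` package is available; the key handling and example bytes are illustrative, not any existing provenance standard:

```python
# Minimal sketch: ordinary public-key signing for media provenance.
# Assumes the Python "cryptography" package; no blockchain or NFT involved.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a long-lived key pair once and distributes the
# public key out of band (website, DNS record, key server, ...).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: the signature is published alongside the exact media bytes.
media_bytes = b"raw bytes of the photo, video or article"
signature = private_key.sign(media_bytes)

# Verification: anyone with the public key can check the file is untampered
# and was signed by the key holder. It proves nothing about content that was
# never signed in the first place (the deepfake problem mentioned above).
try:
    public_key.verify(signature, media_bytes)
    print("valid: these exact bytes were signed by the key holder")
except InvalidSignature:
    print("invalid: altered file or wrong key")
```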
NFTs as currently employed only immutably connect to a link, which is itself not secure. More significantly, no blockchain technology scales to internet-scale content creation. Not remotely. It's hard to conceive of a blockchain solution fast enough and cheap enough -- let alone deployed and accepted universally enough -- to have a meaningful impact on text/video/audio provenance across the internet, given the pace of uploading. It also wouldn't do anything for the vast corpus of existing media, just new media created and uploaded after date X when it was somehow broadly adopted. I don't see it.
Whether it is hindsight or foresight depends on the perspective. From the zeitgeist perspective mired in crypto scams yea it may seem like a surprise benefit, but from the original design intention this is just the intended use case.
> but it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust and a poor understanding of what these models are useful for.
Citation needed.
Counterpoints:
- LLMs were mistrusted well before anything recent.
- More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.
- The American culture wars are not global. (They have their own culture wars).
- To me, the teams I work with and everyone handling content moderation.
/ Rant /
Oh God please let these things be bottlenecked. The job was already absurd; LLMs and GenAI are going to be just frikking amazing to deal with.
Spam and manipulative marketing have already evolved - and that's with bounded LLMs. There are comments that look innocuous, well written, but the entire purpose is to low-key get someone to do a Google search for a firm.
And that's on a reddit sub. Completely ignoring the other million types of content moderation that have to adapt.
Holy hell people. Attack and denial opportunities on the net are VERY different from the physical world.
You want to keep a market place of ideas running? Well guess what - If I clog the arteries faster than you can get ideas in place, then people stop getting those ideas.
And you CANT solve it by adding MORE content. You have only X amount of attention. (This was a growing issue radio->tv->cable->Internet scales)
Unless someone is sticking a chip into our heads to magically increase processing capacity, more content isn't going to help.
And in case someone comes up with some brilliant edge case - Does it generalize to a billion+ people ? Can it be operationalized? Does it require a sweet little grandma in the Philippines to learn how to run a federated server? Does it assume people will stop behaving like people?
Oh also - does it cost money and engineering resources? Well guess what, T&S is a cost center. Heck - T&S reduces churn, and the argument that it protects revenue is still a novel one today. T&S has existed for a decade plus.
/ Rant.
Hmm, seems like I need a break. I suppose it's been one of those weeks. I will most likely delete this out of shame eventually.
- People in other places want more controls. The Indian government and a large portion of the populace will want stricter controls on what can be generated from an LLM.
This may not necessarily be good for free thought and culture, however the reality is that many nations haven’t travelled the same distance or path as America has.
I hope you don't delete it! I enjoyed reading it. It pleased my confirmation bias, anyways.
Your comment might help someone notice patterns that they've been glancing over....
I liked it up until the T&S part. My eyes glazed over the rest since I didn't know what T&S means. But that's just me.
As of right now, the only solution I see is forums walled off in some way: complex captchas, intense proof of work, subscription fees, etc. The only alternative might be obscurity, which makes the forum less useful. Maybe we could do a web3-type thing, but instead of pointless cryptos you have a cryptographic proof that certifies you did the captcha or whatever, and lots of sites accept them. I don't think it's unsolvable, just that it will make the internet somewhat worse.
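For illustration, a hashcash-style proof-of-work check could look roughly like the sketch below (plain Python; the challenge format and difficulty are made up for the example, not any deployed scheme):

```python
# Rough hashcash-style proof-of-work sketch: the client burns CPU finding a
# nonce, the server verifies it in microseconds before accepting a post.
import hashlib
from itertools import count

DIFFICULTY_BITS = 18  # illustrative; higher means more client-side work

def solve(challenge: str) -> int:
    """Find a nonce so sha256(challenge:nonce) has DIFFICULTY_BITS leading zero bits."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Cheap server-side check, run on every submission."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

# The forum issues a fresh challenge per post attempt (e.g. thread id + timestamp).
challenge = "thread-42:2024-03-07T12:00:00Z"
nonce = solve(challenge)
assert verify(challenge, nonce)
```

The cost only deters bulk posting; it does nothing against slow, targeted spam, which is part of why it "makes the internet somewhat worse" rather than fixing it.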
Yeah, one thing I am afraid of is that forums will decide to join the Discord chatrooms on the deep web : stop being readable without an account, which is pretty catastrophic for discovery by search engines and backup crawlers like the Internet Archive.
Anyone with forum moderating experience care to chime in? (Reddit, while still on the open web for now, isn't a forum, and worse, is a platform.)
>And in case someone comes up with some brilliant edge case - Does it generalize to a billion+ people ?
The answer is curation, and no, it doesn't need to scale to a billion people. maybe not even a million.
The sad fact of life is that most people don't care enough to discriminate against low-quality content, so they are already a lost cause. Focus on those who do care enough and build an audience around them. You, as a likely-not-billion-dollar company, can't afford to worry about that kind of scale, and lowering the scale helps you get a solution out in the short term. You can worry about scaling if/when you tap into an audience.
I get you. That sounds more like membership than curation though. Or a mashup of both.
But yes - once you start dropping constraints you can imagine all sorts of solutions.
It does work. I’m a huge advocate of it. When threads said no politics I wanted to find whoever made that decision and give them a medal.
But if you are a platform - or a social media site - or a species.?
You can’t pick and choose.
And remember - everyone has a vote.
As good as your community is, we do not live in a vacuum. If information wars are going on outside your digital fortress, it’s still going to spill into real life
Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.
I'm just saying. :) Guardrails nowadays don't really focus on dangers (it's hard to see how an image generator could produce dangers!) so much as enforcing public societal norms.
Fake revenge porn, nearly undetectable bot creation on social media with realistic profiles (I've already seen this on HN), generated artwork passed off as originals, chatbots that replace real-time human customer service but have none of the agency... I can keep going.
All of these are things that have already happened. These all were previously possible of course but now they are trivially scalable.
I've been running into chatbots that are confined to doling out information from their knowledgebase with no ability to help edge case/niche scenarios, and yet they've replaced all the mechanisms to receive customer support.
Essentially businesses have (knowingly or otherwise) dropped their ability to provide meaningful customer support.
That's the previous status quo; you'd also find this in call centres where customer support had to follow scripts, essentially as if they were computers themselves.
Even quite a lot of new chatbots are still in that paradigm, and… well, given the recent news about chatbot output being legally binding, it's precisely the extra agency of LLMs over both normal bots and humans following scripts that makes them both interestingly useful and potentially dangerous: https://www.bbc.com/travel/article/20240222-air-canada-chatb...
the issue is "none of the agency". Humans generally have enough leeway to fold to a persistant customer because it's financially unviable to have them on the phone for hours on end. a chatbot can waste all the time in the world, with all the customers, and may not even have the ability to process a refund or whatnot.
> That seems good for society, even though it's bad for people employed in that specific job.
Why?
It inserts yet another layer of crap you have to fight through before you can actually get anything done with a company. The avoidance of genuine customer service has become an art form at many companies and corporations, and the demise of real support should surely be lamented. A chatbot is just another weapon in the arsenal designed to confuse, put off and delay the cost of actually providing a decent service to your customers, which should be a basic responsibility of any public-facing company.
1. It's not "an extra layer", at most it's a replacement for the existing thing you're lamenting, in the businesses you're already objecting to.
2. The businesses which use this tool at its best can glue the LLM to their documentation[0] (rough sketch below), and once that's done, each extra user gets "really good even though it's not perfect" customer support at negligible marginal cost to the company, rather than the current affordable option of "ask your fellow users on our subreddit or discord channel, or read our FAQ".
[0] a variety of ways — RAG is a popular meme now, but I assume it's going to be like MapReduce a decade ago, where everyone copies the tech giants without understanding the giant's reasons or scale
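A rough sketch of that "glue the LLM to the documentation" idea, using TF-IDF retrieval (scikit-learn) instead of neural embeddings to stay self-contained; `call_llm` is a hypothetical stand-in for whatever chat-completion API a business actually uses:

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant documentation snippets, then hand them to the model as context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "To change your shipping address, open Settings > Addresses before dispatch.",
    "Support hours are 09:00-17:00 CET, Monday to Friday.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question (TF-IDF cosine)."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in ranked[:k]]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    return f"[model answer based on a {len(prompt)}-character prompt]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the documentation below. If the answer is not "
        "there, say so and offer to hand over to a human.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```

The design choice that matters is the instruction to refuse and escalate when the documentation doesn't cover the question; without it you get the "confined to the knowledge base but pretending otherwise" bots complained about above.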
It's an extra layer of "Have you looked at our website/read our documentation/clicked the button" that I've already done, before I will (if I'm lucky) be passed onto a human that will proceed to do the same thing before I can actually get support for my issue.
If I'm unlucky it'll just be another stage in the mobius-support-strip that directs me from support web page to chatbot to FAQ and back to the webpage.
The businesses which use this tool best will be the ones that manage to lay off the most support staff and cut the most cost. Sad as that is for the staff, that's not my gripe. My gripe is that it's just going to get even harder to reach a real actual person who is able to take a real actual action, because providing support is secondary to controlling costs for most companies these days.
Take for example the pension company I called recently to change an address - their support page says to talk to their bot, which then says to call a number, which picks up, says please go to your online account page to complete this action and then hangs up... an action which the account page explicitly says cannot be completed online because I'm overseas, so please talk to the bot, or you can call the number. In the end I had to call an office number I found through google and be transferred between departments.
An LLM is not going to help with that, it's just going to make the process longer and more frustrating, because the aim is not to resolve problems, it's to stop people taking the time of a human even when they need to, because that costs money.
Why is everyone's first example of things you can do with LLMs "revenge porn"? They're text generation algorithms not even image generators. They need external capabilities to create images.
Do you also object to people saying that web browsers "display" a website even though that needs them to be plugged into a monitor?
If you chat to an LLM and you get a picture back, which some support, the image generator and the language model might as well be the same thing to all users, even if there's an important technical difference for developers.
It's a distinction that does not matter, as the question still has to be answered for the other modality. Do guns kill people, or do bad guys use guns to kill people? Does a fall kill you, or is it the sudden deceleration at the end? Lab leak or wet market? There's a technical difference, some people care, but the actionable is identical and doesn't matter unless it's your job to implement a specific part of the solution.
The moment they are good hackers, everyone has a trivially cheap hacker. Hard to predict what that would look like, but I suspect it is a world where nobody is employing software developers, because an LLM that can hack can probably also write good code.
So, do you want future LLMs to be restricted, or unlimited? And remember, to prevent this outcome you have to predict model capabilities in advance, including "tricks" like prompting them to "think carefully, step by step".
Your code because you own it. If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.
I code because I'm good at it, enjoy it, and it pays well.
I recommend against 3rd party libraries because they give me responsibility without authority — I'd own the problem without the means to fix it.
Despite this, they're a near-universal in our industry.
> If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.
Eventually.
But that doesn't help with the existing deployed code — and even if it did, this is a situation where, when the capability is invented, attack capability is likely to spread much faster than the ability of businesses to catch up with defence.
Even just one zero-day can be bad, this… would probably be "many" almost simultaneously. (I'd be surprised if it was "all", regardless of how good the AI was).
I never asked you why you code, this conversation isn't, or wasn't, about your hobbies. You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.
Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.
Edit: I misread that bit as "you code" not "your code".
But "your code because you own it", while a sound position, is a position violated in practice all the time, and not only because of my example of 3rd party libraries.
They are held responsible for being very badly wrong about what the tools can do. I expect more of this.
> You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.
And it'll be a long road, getting to there from here. The view at the top of a mountain may be great or terrible, but either way climbing it is treacherous. Metaphor applies.
> Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.
I assume this must have killed at least one person by now. When you get too much pressure in a mechanical system, it breaks. I'd like our society to use this pressure constructively to make a better world, but… well, look at it. We've not designed our world with a security mindset, we've designed it with "common sense" intuitions, and our institutions are still struggling with the implications of the internet let alone AI, so I have good reason to expect the metaphorical "pressure" here will act like the literal pressure caused by a hand grenade in a bathtub.
The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.
Half of your examples aren't even things an LLM can do and the other half can be written by hand too. I can name a bunch of bad sounding things as well but that doesn't mean any of them have any relevance to the conversation.
EDIT: Can't reply but you clearly have no idea what you're talking about. AI is used to create these things, yes. But the question was LLMs which I reiterated. They are not equal. Please read up on this stuff before forming judgements or confidently stating incorrect opinions that other people, who also have no idea what they're talking about, will parrot.
If we can change the rules of a discussion midway through, everyone loses. The parent replied to a question "What damage can be done with an llm without guardrails?" (regardless of the grandparent, this is how conversations work, you talk about the thing the other person talked about if you reply to them) and the response was to rattle off a bunch of stuff that LLMs can't do. Yes, they connected an LLM to an image generation AI. No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen. It's not pedantic or unreasonable to divide the two. The question was blatantly about LLMs.
If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream), do that in a different thread. Don't just force every conversation to be about whatever your mind wants to rant about.
That said, arguing with you people is pointless. You don't even seem to think.
> If we can change the rules of a discussion midway through, everyone loses.
Then we lost repeatedly at almost every other step back to the root, because it switched between those two loads of times.
The change to LLMs was itself one such shift.
> No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen
The aside is important.
> It's not pedantic or unreasonable to divide the two.
It is unreasonable on the question of "guardrails, good or bad?"
It is unreasonable on the question of "can it cause harm?"
It's not unreasonable if you are building one.
> If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream)
And caused problems for years.
> That said, arguing with you people is pointless. You don't even seem to think.
Communication isn't a single-player game, I can't make you understand something you're actively unwilling to accept, like the idea that tools enable people to do more, for good and ill, and AI is such a tool.
Perhaps you should spend less time insulting people on the internet you don't understand. Go for a walk or something. Eat a Snickers, take a nap. Come back when you're less cranky.
AI is already used to create fake porn, whether of celebrities or children, fact. It is used to create propaganda pieces and fake videos and images, fact. Those can be used for everything from defamation to online harassment. And AI is using other people's copyrighted content to do so, also a fact. So, what's your point again?
Your other comment is nested too deeply to reply to. I edited my comment reply with my response but will reiterate. Educate yourself. You clearly have no idea what you're talking about. The discussion is about LLMs not AI in general. The question stated "LLMs" which are not equal to all of AI. Please stop spreading misinformation.
You can say "fact" all you want but that doesn't make you correct lol
No. I'm declaring that you either can't read or don't understand that there's a difference between "gen AI" and LLMs. LLMs generate text. They don't generate images. Are you just a troll or not actually reading my messages? The question you're replying to asked about LLMs. I don't understand what's so difficult about this.
One has to love pedants. Your whole point was that LLMs don't create images (don't you say...), hence all the other points are wrong? Now go back to the first comment and assume LLMs and gen AI are used interchangeably (I am too lazy to re-read my initial post). Or don't, I don't care, because I do not argue semantics; there is hardly a lazier, more disingenuous way to discuss. Ben Shapiro does that all the time and thinks he's smart.
> Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.
Not so. I have it at home, I make nice wholesome pictures of raccoons and tigers sitting down for Christmas dinner etc., but I also see stories like this and hope they're ineffective: https://www.bbc.com/news/world-us-canada-68440150
Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.
> Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.
They specifically said who they came from, and that it wasn't the Trump campaign. They even had a photo of one of the creators, whom they interviewed in that specific piece I linked to, and tried to get interviews with others.
Headline: "Trump supporters target black voters with faked AI images"
@Trump_History45 does appear to be a Trump supporter. However, he is also a parody account and states as such on his account.
The BBC article goes full-on with the implication that the AI images were produced with the intent to target black voters. The BBC is expert at "lying by omission"; that is, presenting a version of the truth which is ultimately misleading because they do not present the full facts.
The BBC article itself leads a reader to believe that @Trump_History45 created those AI images with the aim of misleading black voters and thus to garner support from black voters in favour of Trump.
Nowhere in that BBC article is the word "parody" mentioned, nor any examination of any of the other AI images @Trump_History45 has produced. If they had, and had fairly represented that @Trump_History45 X account, then the article would have turned out completely different;
"Trump Supporter Produces Parody AI Images of Trump" does not have the same effect which the BBC wanted it to have.
I don't know whether this is the account you are talking about, but for the second account they discuss, they describe an image it posted by saying: 'It had originally been posted by a satirical account that generates images of the former president', so if this is the account you are talking about..
I won't deny the BBC often has very biased reporting for a publicly funded source.
> I think there is far more societal harm in trying to codify unresolvable sets of ethics
Codification of an unresolvable set of ethics - however imperfect - is the only reason we have societies, however imperfect. It's been so since at least the dawn of agriculture, and probably even earlier than that.
Call me a capitalist, but I trust several of them competing with each other under the enforcement of laws that impose consequences on them if they produce and distribute content that violates said laws.
This is what I'm starting to love about this ecosystem. There's one dominant player right now but by no means are they guaranteed that dominance.
The big-tech oligarchs are playing catch-up. Some of them, like Meta with Llama, are breaking their own rules to do it by releasing open source versions of at least some of their tools. Others like Mistral go purely for the open source play and might achieve a regional dominance that doesn't exist with most big web technologies these days. And all this is just a superficial glance at the market.
Honestly I think capitalism has screwed up more than it has helped around the world but this free-for-all is going to make great products and great history.
I'm not sure I buy that users are lowering their guard just because these companies have enforced certain restrictions on LLMs. This is only anecdata, but not a single person I've talked to, from the highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth. They all seem aware to some extent that these tools can occasionally generate nonsense.
I'm also skeptical that making LLMs a free-for-all will necessarily result in society developing some sort of herd immunity to bullshit. Pointing to your example, the internet started out as a wild west, and I'd say the general public is still highly susceptible to misinformation.
I don't disagree on the dangers of having a relatively small number of leaders at for-profit companies deciding what information we have access to. But I don't think the biggest issue we're facing is someone going to the ChatGPT website and assuming everything it spits out is perfect information.
> They all seem aware to some extent that these tools can occasionally generate nonsense.
You have too many smart people in your circle, many people are somewhat aware that "chatgpt can be wrong" but fail to internalize this.
Consider machine translation: we have a lot of evidence of people trusting machines for the job (think: "translate server error" signs), even though everybody "knows" the translation is unreliable.
But tbh morals and truth seem like somewhat orthogonal issues here.
Wikipedia is wonderful for what it is. And yet a hobby of mine is finding C-list celebrity pages and finding reference loops between tabloids and the biographical article.
The more the C-lister has engaged with internet wrongthink, the more egregious the subliminal vandalism is, with speculation of domestic abuse, support for unsavory political figures, or similar unfalsifiable slander being common place.
Politically-minded users practice this behavior because they know the platform’s air of authenticity damages their target.
When Google Gemini was asked “who is worse for the world, Elon Musk or Hitler” and went on to equate the two, because the guardrails led it to treat online transphobia as being as sinister as the Holocaust, it raises the question of what the average user will accept as AI nonsense if it affirms their worldview.
> not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth
Not LLMs specifically but my opinion is that companies like Alphabet absolutely abuse their platform to introduce and sway opinions on controversial topics.. this “relatively small” group of leaders has successfully weaponized their communities and built massive echo chambers.
> it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust
I would prefer things were open, but I don’t think this is the best argument for that
Yes, operators trying to tame their models for public consumption inevitably involves trade offs and missteps
But having hundreds or thousands of equivalent models being tuned to every narrow mindset is the alternative
I would prefer a midpoint, i.e. open but delayed disclosure
Take time to experiment and design in safety, and also to build a brand that is relatively trusted (despite the inevitable bumps), so that ideologically tuned progeny will at least be competing against something better and more trusted at any given time
But the problem of resource requirements is real, so not surprising that being clearly open is challenging
LLMs have nothing to do with reality whatsoever, their relationship is to the training data, nothing more.
Most of the idiocy surrounding the "chatbot peril" comes from conflating these things. If an LLM learns to predict that the pronoun token for "doctor" is "he", this is not a claim about reality (in reality doctors take at least two personal pronouns), and it certainly isn't a moral claim about reality. It's a bare consequence of the training data.
The problem is that certain activist circles have decided that some of these predictions have political consequences, absurd as this is. No one thinks it consequential that if you ask an LLM for an algorithm, it will give it to you in Python and Javascript, this is obviously an artifact of the training set. It's not like they'll refuse to emit predictive text about female doctors or white basketball players, or give you the algorithm in C/Scheme/Blub, if you ask.
All that the hamfisted retuning to try and produce an LLM which will pick genders and races out of a hat accomplishes is to make them worse at what they do. It gets in the way of simple tasks: if you want to generate a story about a doctor who is a woman and Ashanti, the race-and-gender scrambler will often cause the LLM to "lose track" of characteristics the user specifically asked for. This is directly downstream of trying to turn predictions on "doctor" away from "elderly white man with a kindly expression, wearing a white coat and stethoscope" sorts of defaults, which, to end where I started, aren't reality claims and do not carry moral weight.
> The hazard we are experiencing with LLM right now is not how freely accessible and powerfully truthy it's content is, but it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust and a poor understanding of what these models are useful for.
This slices through a lot of doublespeak about AI safety. People use "safety" to mean both not letting AI control electrical grids and ensuring AIs adhere to partisan moral guidelines.
Virtually all of the current “safety” issues fall into the latter category. Which many don’t consider a safety issue at all. But they get snuck in with real concerns about integrating an AI too deeply into critical systems.
Just wait until google integrates it deeply into search. Might finally kill search.
What people call AI might be an algorithm but algorithms are not AI. And it's definitely algorithms which do what you describe. There is very little magic in algorithms.
My read of "safety" is that the proponents of "safety" consider "safe" to be their having a monopoly on control and keeping control out of the hands of those they disapprove of.
I don't think whatever ideology happens to be fashionable at the moment, be it ahistorical portraits or whatever else, is remotely relevant compared to who has the power and whom it is exercised on. The "safety" proponents very clearly get that.
The only thing I'm offended by is the way people are seemingly unable to judge what is said by who is saying it. Parrots, small children and demented old people say weird things all the time. Grown ups wrote increasingly weird things the further back you go.
The primary reason Elon believes it needs to be open sourced is precisely that the "too much danger" becomes a far bigger problem if that technology and know-how is privately available only to bad actors.
E.g. finding those dangers and having them be public and publicly known is the better of the two options versus only bad actors potentially having them.
> Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally unreconcilable even between two sentient humans when the ethics are really just a hacked on mod to the core model.
That's a real issue, but I doubt the solution is technical. Society will have to educate itself on this topic. It's urgent that society come to understand that LLMs are just word-prediction machines.
I use LLMs every day; they can be useful even when they say stupid things. But mastering this tool requires that you understand it may invent things at any moment.
Just yesterday I tried the Cal.ai assistant, whose role is to manage your planning (but it doesn't have access to your calendars, so it's pretty limited). You communicate with it by email. I asked it to organise a trip by train and book a hotel. It responded, "Sure, what is your preferred time for the train and which level of comfort do you want?" I answered, and it replied that, fine, it would organise the trip and get back to me later. It even added that it would book me a hotel.
Well, it can’t even do that, it’s just a bot made to reorganize your cal.com meetings. So it just did nothing, of course. Nothing horrible since I know how it works.
But had I been uneducated enough on the topic (like 99.99% of this planet's population), I'd have just thought, "Cool, my trip is being organized, I can relax now."
But hey, it succeeded at the main LLM task: being credible.
ChatGPT is not about to run weapons systems. It's like throwing knives out the window and then complaining that knives are dangerous. Any automation is dangerous without due diligence.
It just means that LLMs are an interpolation of everything on the internet. They would seem less like they have a point of view or an opinion on things.
He means that he thinks the only reason these generative AIs ever get info wrong and cause misinformation is that the businesses that build them are too woke and are holding them back.
I use the API + playground, which is essentially the chat interface. The API charges per token and is therefore cheap. Unless you're a heavy user, it's tough to get to even a few dollars. Just don't use GPT-4 and paste in oodles of text, and you'll be fine.
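(A minimal back-of-envelope sketch of what per-token billing implies; the per-1k-token rate below is a hypothetical placeholder, not a published price.)

    # Hypothetical per-token cost estimate; the rate is a placeholder, not a real price.
    price_per_1k_tokens_usd = 0.002

    def monthly_cost(tokens_per_day, days=30):
        return tokens_per_day * days / 1000 * price_per_1k_tokens_usd

    # A few long chats a day (~20k tokens) works out to roughly a dollar a month.
    print(f"${monthly_cost(20_000):.2f} per month")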
This looks like one of the steps leading to the fulfilment of the iron law of bureaucracy. They are putting the company ahead of the goals of the company.
"Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people: First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration. Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc. The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization."
[1] https://en.wikipedia.org/wiki/Jerry_Pournelle#:~:text=Anothe....
Ironically, this is essentially the core danger of true AGI itself. An agent can't achieve goals if it's dead, so you have to focus some energy on staying alive. But also, an agent can achieve more goals if it's more powerful, so you should devote some energy to gaining power if you really care about your goals...
Among many other more technical reasons, this is a great demonstration of why AI "alignment" as it is often called is such a terrifying unsolved problem. Human alignment isn't even close to being solved. Hoping that a more intelligent being will also happen to want to and know how to make everyone happy is the equivalent of hiding under the covers from a monster. (The difference being that some of the smartest people on the planet are in furious competition to breed the most dangerous monsters in your closet.)
> They are putting the company ahead of the goals of the company.
I don't follow your reasoning. The goal of the company is AGI. To achieve AGI, they needed more money. What about that says the company comes before the goals?
From their 2015 introductory blog post: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
Today's OpenAI is very much driven by considerations of financial returns, and the goal of "most likely to benefit humanity as a whole" and "positive human impact" doesn't seem to be the driving principle anymore.
Their product and business strategy is now governed by financial objectives, and their research therefore not “free from financial obligations” and “unconstrained by a need to generate financial return” anymore.
They are thus severely compromising their alleged mission by what they claim is necessary for continuing it.
Sure, maybe. (Personally I think that’s a mere conjecture, trying to throw more compute at the wall.) But obtaining that money by orienting their R&D towards a profit-driven business goes against the whole stated purpose of the enterprise. And that’s what’s being called out.
Well, I don't think it's a maybe. From the emails it seems clear that even Elon thought the project would flop without a ton of money.
It seems pretty clear that they felt they had to choose between chasing money and shutting down. I'm guessing you'd prefer they went with the latter, but I can entirely understand why they didn't.
I don’t really care what they do. But since they’re now chasing money, they should be honest about it and say they had to give up on the original aspirations and have now become a normal tech company without any noble goals of doing R&D for the most benefit of humanity unconstrained by financial obligations.
I think they're being extremely honest and transparent about needing money to continue to advance their work. I mean, that's the entire message of these emails they quote... right?
I think what he is trying to say is they are compromising their underlying goal of being a non-profit for the benefit of all, to ensure the survival of "OpenAI". It is a catch-22, but those of pure intentions would rather not care about the survival of the entity, if it meant compromising their values.
That may be the goal now as they ride the hype train around “AGI” for marketing purposes. When it was founded the goal was stated as ensuring no single corp controls AI and that it’s open for everyone. They’ve basically done a 180 on the original goal, seemingly existing only to benefit Microsoft, and changing what your goal is to AGI doesn’t disprove that.
> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”
Whether you agree with this is a different matter but they do state that they did not betray their mission in their eyes.
The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.
Of course they can give us nothing, but in that case they should start paying taxes and stop claiming they're a public benefit org.
My prediction is they'll produce little of value going forward. They're too distracted by their wet dreams about all the cash they're going to make to focus on the job at hand.
I agree with your sentiment but the prediction is very silly. Basically every time openai releases something they beat the state of the art in that area by a large margin.
> The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.
OpenAI gets to decide what it does with its intellectual property for the same reason that a whole bunch of people are suing it for using their intellectual property.
It only becomes repugnant to me if they're forcing their morals onto me, which they aren't, because (1) there are other roughly-equal-performance LLMs that aren't from OpenAI, and (2) the stuff it refuses to do is a combination of stuff I don't want to exist and stuff I have a surfeit of anyway.
A side effect of (1) is that humanity will get the lowest common (moral and legal) denominator in content from GenAI from different providers, just like the prior experience of us all getting the lowest common (moral and legal) denominator in all types of media content due to internet access connecting us to other people all over the world.
OpenAI at this point must literally be the #1 target for every single big spying agency in the whole world.
As we saw previously, it doesn't matter much if you are a top-notch AI researcher: if 1-2 million of your potential personal wealth is at stake, this affects decision-making (and it probably would affect mine too).
How much of a bribe would it take for anybody inside with good enough access to switch sides and take all the golden eggs out? 100 million? A billion? Trivial amounts compared to what we discuss. And they will race each other to your open arms for such amounts.
We've seen recently, e.g., government officials in Europe betraying their own countries to Russian spies for a few hundred to a few thousand euros. A lot of people are in some way selfish by nature, or can be manipulated easily via emotions. Secret services across the board are experts in that; it just works(tm).
To sum it up - I don't think it can be protected long term.
I'm a very weird person with money. I've basically got enough already, even though there are people on this forum who earn more per year than I have in total. My average expenditure is less than €1k/month.
This means I have no idea how to even think about people who could be bribed when they already earn a million a year.
But also, if AI can be developed as far as the dreamers currently making it real hope it can be developed, money becomes as useless to all of us as previous markers of wealth like "a private granary" or "a lawn" or "aluminium cutlery"[0].
They are totally closed now, not just keeping their models for themselves for profit purposes. They also don't disclose how their new models work at all.
They really need to change their name and another entity that actually works for open AI should be set up.
Well, the Manhattan Project springs to mind. They truly thought they were laboring for the public good, and even if the government had let them, they wouldn't have wanted to publish their progress.
Personally I find the comparison of this whole saga (DeepMind -> Google -> OpenAI -> Anthropic -> Mistral -> ?) to the Manhattan Project very enlightening, both about this project and about our society. Instead of a centralized government project, we have a loosely organized mad dash of global multinationals for research talent, all of which claim the exact same "they'll do it first!" motivations as always. And of course it's accompanied by all sorts of media rhetoric and posturing through memes, 60 Minutes interviews, and (apparently) gossipy slap-back blog posts.
In this scenario, Oppenheimer is clearly Hinton, who’s deep into his act III. That would mean that the real Manhattan project of AI took place in roughly 2018-2022 rather than now, which I think also makes sense; ChatGPT was the surprise breakthrough (A-bomb), and now they’re just polishing that into the more effective fully-realized forms of the technology (H-bomb, ICBMs).
Hmm do you have some sources? That sounds interesting. Obviously there’s always doubt, but yeah I was under the impression everyone at the Manhattan project truly believed that the Axis powers were objectively evil, so any action is justified. Obviously that sorta thinking falls apart on deeper analysis, but it’s very common during full war, no?
EDIT: tried to take the onus off you, but as usual history is more complicated than I expected. Clearly I know nothing because I had no idea of the scope:
At its peak, it employed over 125,000 direct staff members, and probably a larger number of additional people were involved through the subcontracted labor that fed raw resources into the project. Because of the high rate of labor turnover on the project, some 500,000 Americans worked on some aspect of the sprawling Manhattan Project, almost 1% of the entire US civilian labor force during World War II.
Sooo unless you choose an arbitrary group of scientists, it seems hard. I haven’t seen Oppenheimer but I understand it carries on the narrative that he “focused on the science” until the end of the war when his conscience took over. I’ll mostly look into that…
If you really think you're fighting evil in a war for global domination, it's easy to justify to yourself that it's important you have the weapons before they do. Even if you don't think you're fighting evil; you'd still want to develop the weapons before your enemies so it won't be used against you and threaten your way of life.
I'm not taking a stance here, but it's easy to see why many Americans believed developing the atomic bomb was a net positive at least for Americans, and depending on how you interpret it even the world.
On this note: I HIGHLY recommend "Rigor of Angels", which (in part) details Heisenberg's life and his moral qualms about building a bomb. He just wanted to be left alone and perfect his science, and it's really interesting to see how such a laudable motivation can be turned to such deplorable, unforgivable (IMO) ends.
Long story short they claim they thought the bomb was impossible, but it was still a large matter of concern for him as he worked on nuclear power. The most interesting tidbit was that Heisenberg was in a small way responsible for (west) Germany’s ongoing ban on nuclear weapons, which is a slight redemption arc.
Heisenberg makes you think, doesn't he? As the developer of Hitler's bomb, which never was a realistic thing to begin with, he never employed slave labour, for example. Nor was any of his work used during warfare. And still, he is seen by some as a tragic figure at best, at worst as the man behind Hitler's bomb.
Wernher von Braun, on the other hand, got lauded for his contribution to space exploration. His development of the V2 and his use of slave labour in building them were somehow just a minor transgression in service of the, ultimately under US leadership, greater good.
I think they thought it would be far better for America to develop the bomb than Nazi Germany, and that the Allies needed to do whatever it took to stop Hitler, even if that meant using nuclear bombs.
Japan and the Soviet Union were more complicated issues for some of the scientists. But that's what happens with warfare. You develop new weapons, and they aren't just used for one enemy.
So.. "open" means "open at first, then not so much or not at all as we get closer to achieving AGI"?
As they become more successful, they (obviously) have a lot of motivation to not be "open" at all, and that's without even considering the so-called ethical arguments.
More generally, putting "open" in any name frequently ends up as a cheap marketing gimmick. If you end up going nowhere it doesn't matter, and if you're wildly successful (ahem) then it also won't matter whether or not you're de facto 'open' because success.
Maybe someone should start a betting pool on when (not if) they'll change their name.
So the Open in OpenAI means whatever OpenAI wants it to mean.
It’s a trademarked word.
The fact that Elon is suing them over their name, when the guy has a feature called "Autopilot", which is not a made-up word and has an actual, well-understood meaning that totally does not apply to how Tesla uses it, is hilarious.
Actually Open[Technology] pattern implies a meaning in this context. OpenGL, OpenCV, OpenCL etc. are all 'open' implementations of a core technology, maintained by non-profit organizations. So OpenAI non-profit immediately implies a non-profit for researching, building and sharing 'open' AI technologies. Their earlier communication and releases supported that idea.
Apparently, their internal definition was different from the very beginning (2016). The only problem with their (Ilya's) definition of 'open' is that it is not very open. "Everyone should benefit from the fruits of AI". How is this different than the mission of any other commercial AI lab? If OpenAI makes the science closed but only their products open, then 'open' is just a term they use to define their target market.
A better definition of OpenAI's 'open' is that they are not a secret research lab. They act as a secret research lab, but out in the open.
> An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems). (https://en.wikipedia.org/wiki/Autopilot)
Other than the vehicle, this would seem to apply to Tesla's autopilot as well. The "Full Self Driving" claim is the absurd one, odd that you didn't choose that example.
Ilya may have said this to Elon but the public messaging of OpenAI certainly did not paint that picture.
I happen to think that open sourcing frontier models is a bad idea but OpenAI put themselves in the position where people thought they stood for one thing and then did something quite different. Even if you think such a move is ultimately justified, people are not usually going to trust organizations that are willing to strategically mislead.
What they said there isn't their mission, that is their hidden agenda. Here is their real mission that they launched with, they completely betrayed this:
> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world
They started as a defence contractor with a generous "donation" from DARPA. That's why I never trusted them from day 0. And they have followed a pretty predictable trajectory.
> Don't be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.
> Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers' payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.
Yes, there they explain why doing evil will hurt their profits. But a for profits main mission is always money, the mission statement just explains how they make money. That is very different from a non-profit whose whole existence has to be described in such a statement, since they aren't about profits.
This claim is nonsense, as any visit to the Wayback Machine can attest.
In 2016, OpenAI's website said this right up front:
> We're hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.
I don't know how this quote can possibly be squared with a claim that they "did not imply open-sourcing AGI".
In that case they mean that their mission to ensure everyone benefits from AI has changed so that only a few will benefit. But it would support their saying something like "it was never about open data".
In a way this could be more closed than a for-profit.
That passes for an explanation to you? What exactly is the difference between OpenAI and any company with a product, then? Hey, we made THIS, and in order to make sure everyone can benefit, we sell it at a price of X.
So, open as in "we'll sell to anyone" except that at first they didn't want to sell to the military and they still don't sell to people deemed "terrorists." Riiiiiight. Pure bullshit.
Open could mean the science, the code/ip (which includes the science) or pure marketing drivel. Sadly it seems that it's the latter.
There is no clear advantage to multiple corporations or nation states each with the potential to bootstrap and control AGI vs a single corporation with a monopoly. The risk comes from the unknowable ethics of the company's direction. Adding more entities to that equation only increases the number of unknown variables. There are bound to be similarities to gun-ownership or countries with nuclear arsenals in working through this conundrum.
You're talking about it as if it was a weapon. An LLM is closer to an interactive book. Millennia ago humanity could only pass on information through oral traditions. Then scholars invented elaborate writing systems and information could be passed down from generation to generation, but it had to be curated and read, before that knowledge was available in the short term memory of a human. LLMs break this dependency. Now you don't need to read the book, you can just ask the book for the parts you need.
The present entirely depends on books and equivalent electronic media. The future will depend on AI. So anyone who has a monopoly is going to be able to extract massive monopoly rents from its customers and be a net negative to the society instead of the positive they were supposed to be.
4. Language prediction training will not get stuck in a local optimum.
Most previous things we trained on could have been better served if the model had developed AGI, but they didn't. There is no reason to expect LLMs not to get stuck in a local optimum as well, and I have seen no good argument as to why they wouldn't get stuck like everything else we've tried.
There is very little in terms of rigorous mathematics on the theoretical side of this. All we have are empirics, but everything we have seen so far points to the fact that more compute equals more capabilities. That's what they are referring to in the blog post. This is particularly true for the current generation of models, but if you look at the whole history of modern computing, the law roughly holds up over the last century. Following this trend, we can extrapolate that we will reach computers with raw compute power similar to the human brain for under $1000 within the next two decades.
It's not just the volume of original data that matters here. From empirics we know performance scales roughly like (model parameters)*(training data)*(epochs). If you increase any one of those, you can be certain to improve your model. In the short term, training data volume and quality has given a lot of improvements (especially recently), but in the long run it was always model size and total time spent training that saw improvements. In other words: It doesn't matter how you allocate your extra compute budget as long as you spend it.
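As a rough illustration of that allocation point, here is a minimal sketch; the ~6 FLOPs per parameter per token rule of thumb is my assumption for the example, not something claimed above:

    # Toy compute-budget sketch: performance tracks total compute, however you split it.
    # Assumes the common rough rule of ~6 FLOPs per parameter per training token.
    def training_flops(params, tokens, epochs=1):
        return 6 * params * tokens * epochs

    # Two ways to spend the same budget: a bigger model on less data,
    # or a smaller model on more data.
    print(f"{training_flops(70e9, 1e12):.2e} FLOPs  (70B params, 1T tokens)")
    print(f"{training_flops(7e9, 10e12):.2e} FLOPs  (7B params, 10T tokens)")

Both lines print the same total, which is the point: how you split model size against data and epochs matters less than how much compute you spend overall.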
In smaller models, not having enough training data for the model size leads to overfitting. The model predicts the training data better than ever, but generalizes poorly and performs worse on new inputs.
Is there any reason to think the same thing wouldn't happen in billion parameter LLMs?
This happens in smaller models because you reach parameter saturation very quickly. In modern LLMs and with current datasets, it is very hard to even reach this point, because the total compute time boils down to just a handful of epochs (sometimes even less than one). It would take tremendous resources and time to overtrain GPT4 in the same way you would overtrain convnets from the last decade.
True, but also from general theory you should expect any function approximator to exhibit intelligence when exposed to enough data points from humans; the only question is the speed of convergence. In that sense we do have a guarantee that it will reach human ability.
It's a bit more complicated than that. Your argument is essentially the universal approximation theorem applied to perceptrons with one hidden layer. Yes, such a model can approximate any algorithm to arbitrary precision (which by extension includes the human mind), but it is not computationally efficient. That's why people came up with things like convolution or the transformer. For these architectures it is much harder to say where the limits are, because the mathematical analysis of their basic properties is infinitely more complex.
Where did you get that from? It seems pretty clear to me that language models are intended to be a component in a larger suite of software, composed to create AGI. See: DALL-E and Whisper for existing software that it composes with.
The comment said that LLMs are the path to AGI, which implies at least that they're a huge part of the AGI soup you're talking about. I could maybe see AGI emerging from lots of LLMs and other tools in a huge network, but probably not from an LLM with calculators hooked up to it.
You're arguing that LLMs would be a good user interface for AGI...
Whether that's true or not, I don't think that's what the previous post was referring to. The question is, if you start with today's LLMs and progressively improve them, do you arrive at AGI?
(I think it's pretty obvious the answer is no -- LLMs don't even have an intelligence part to improve on. A hypothetical AGI might somehow use an LLM as part of a language interface subsystem, but the general intelligence would be outside the LLM. An AGI might also use speakers and mics but those don't give us a path to AGI either.)
The comment I was replying to was referencing OpenAI's use of the phrase "the path to AGI". Natural language is an essential interface to AGI, and OpenAI recognizes that. LLMs are a great way to interface with natural language, and OpenAI recognizes that.
While it's kind of nuts how far OpenAI pushed language models, even as an outside observer it's obvious that OpenAI is not banking on LLMs achieving AGI, contrary to what the person I was replying to said. Lots of effort is being put into integrating with outside sources of knowledge (RAG), outside sources for reason / calculation, etc. That's not LLMs as AGI, but it is LLMs as a step on the path to AGI.
I don’t know if they are or not, but I’m not sure how anyone could be so certain that they’re not that they find the mere idea cringeworthy. Unless you feel you have some specific perspective on it that’s escaped their army of researchers?
Because AI researchers have been on the path to AGI several times before until the hype died down and the limitations became apparent. And because nobody knows what it would take to create AGI. But to put a little more behind that, evolution didn't start with language models. It evolved everything else until humans had the ability to invent language. Current AI is going about it completely backwards from how biology did it. Now maybe robotics is doing a little better on that front.
I mean, if you're using LLM as a stand-in for multi-modal models, and you're not disallowing things like a self-referential processing loop, a memory extraction process, etc, it's not so far fetched. There might be multiple databases and a score of worker processes running in the background, but the core will come from a sequence model being run in a loop.
> here they explain why they had to betray their core mission. But they don't refute that they did betray it.
You are assuming that their core mission is to "build an AGI that can help humanity, for free, and as a non-profit", whereas their thinking seems to be "build an AGI that can help humanity for free."
They figured it was impossible to achieve their core mission in a non-profit way, so they went with the for-profit route but still kept the mission of offering it for free once AGI is achieved.
Several non-profits sell products to further their mission at scale. Would it be okay for the OpenAI non-profit to sell products that came out of the process of developing AGI so that it can keep working on building AGI? Museums sell stuff to continue to exist so that they can continue to build on their mission, same for many other non-profits. The OpenAI structure just seems to take a rather new version of that approach by getting venture capital (due to their capital requirements).
The problem of course is that they frequently go back on their promises (see the changes in their usage guidelines regarding military projects), so excuse me if I don't believe them when they say they'll voluntarily give away their AGI tech for the greater good of humanity.
The easiest way to cut through corporate BS is to find distinguishing characteristics of the contrary motivation. In this case:
OpenAI says: To deliver AI for the good of all humanity, it needs the resources to compete with hyperscale competitors, so it needs to sell extremely profitable services.
Contrary motivation: OpenAI wants to sell extremely profitable services to make money, and it wants to control cutting edge AI to make even more money.
What distinguishing characteristics exist between the two motivations?
Because from where I'm sitting, it's a coin flip as to which one is more likely.
Add in the facts that (a) there's a lot of money on the table & (b) Sam Altman has a demonstrated propensity for throwing people under the bus when there's profit in it for himself, and I don't feel comfortable betting on OpenAI's altruism.
PS: Also, when did it become acceptable for a professional fucking company to publicly post emails in response to a lawsuit? That's trashy and smacks of a response plan set up and ready to go.
There is no fixed point at which you can say it achieves AGI (artificial general intelligence); it's a spectrum. Who decides when they've reached that point, given they can always go further?
If this is the case, then they should be more open with their older models such as 3.5. I'm very sure industry insiders actually building these already know the fundamentals of how they work.
An interesting aspect of OpenAI's agreement with Microsoft is that, until the point of AGI, Microsoft has IP rights to the tech. I'm not sure exactly what's included in that agreement (model, weights, training data, dev tools?), but it's enough that Nadella at least made brave-sounding statements during OpenAI's near implosion that "they had everything" and would not be disrupted if OpenAI were to disappear overnight. I would guess they might face a major disruption in continuing development, but at least they'd retain the right to carry on using what they've already got access to.
The interesting part of this is that whatever rights Microsoft has do not extend to any OpenAI model/software that is deemed to be AGI, and it seems they must therefore have agreed how this would be determined, which would be interesting to know!
There was a recent interview of Shane Legg (DeepMind co-founder) by Dwarkesh Patel where he gave his own very common sense definition of AGI as being specifically human-level AI, with the emphasis on general. His test for AGI would be to have a diverse suite of human level cognitive tasks (covering the spectrum of human ability), with any system that could pass these tests then being subject to ad hoc additional testing. Any system that not only passed the test suite but also performed at human level on any further challenge tasks might then reasonably be considered to have achieved AGI (per this definition).
As the emails make clear, Musk reveals that his real goal is to use OpenAI to accelerate full self driving of Tesla Model 3 and other models. He keeps on putting up Google as a boogeyman who will swamp them, but he provides no real evidence of spending level or progress toward AGI, he just bloviates. I am totally suspicious of Altman in particular, but Musk is just the worst.
“he provides no real evidence of spending level”
In the mails he mentions that billions per year are needed and that he was willing to put up 1 billion to start.
> They're probably right that without making a profit, it's impossible to afford...
This doesn't begin to make sense to me. Nothing about being a non-profit prevents OpenAI from raising money, including by selling goods and services at a markup. Some sell girl-scout cookies, some hold events, etc.
So, you can't issue equity in the company... offer royalties. Write up a compensation contract with whatever formula the potential employee is happy with.
Contract law is specifically designed to allow parties to accomplish whatever they want. This is an excuse.
This would be orders of magnitude more than any charity has ever raised, and is also an uncommonly huge raise even among _for-profit_ companies where investors expect returns.
Even when the EA community was flush with crypto billionaires there was no appetite for this level of spend.
The Novo Nordisk Foundation has a $120 billion endowment, the Bill & Melinda Gates Foundation has $50 billion, the Wellcome Trust has $42 billion, ... There are about 19 charities that still have more than $10 billion, let alone those who have raised (and spent) that much during their existence.
> an uncommonly huge raise even among _for-profit_ companies where investors expect returns.
So, offer a return on investment. Charities that take loans pay interest. Charities that hire staff pay salaries. What's with this idea that charities cannot pay a reasonable market rate for services (e.g. short-term funding)?
I stand corrected, as stated my claim about charity OOM was wrong. Still, I don’t think I need to update much.
Because:
> So, offer a return on investment
This is precisely what they did; Microsoft’s investment is an extremely funky capped profit structure. They did it this way to minimize their cost of capital.
I’m not really clear what your concrete proposal is for raising $10b as a non-profit, perhaps you could flesh that out?
If you’re talking about financing a potentially decade- long project on tens of billions of dollars of pure debt, again that is… not a feasible structure.
> I’m not really clear what your concrete proposal is for raising $10b as a non-profit, perhaps you could flesh that out?
OpenAI, Inc. (the nonprofit) could have partnered with Microsoft directly.
To be fair, maybe Microsoft may have required that certain code be kept secret in a way that OpenAI's charitable purpose would not have allowed. However, that would just suggest that the deal was not open and not in the best interests of the charity.
Moreover, I'm skeptical that OpenAI Global LLC paid fair market value to OpenAI, Inc. for the assets it received. Sure, GPT-2 itself was open sourced, but a lot of the value of the business lay in other things: all of the datasets that were used, the history of training, what worked and what didn't work, the accessory utilities, emails, documents / memos, the brand, etc. The staff is a little tricky, because - sure - they are ostensibly free to leave, but there's no doubt there's a ton of value in the staff.
If OpenAI, Inc. (non-profit) put itself on the open market with the proceeds to go to another charity, what do you think Microsoft would have paid to buy the business? I bet it would have been a lot more than OpenAI Global LLC paid to OpenAI, Inc for the same assets...
here they explain why they had to betray their core mission. But they don't refute that they did betray it.
Although they don't spend nearly as much time on it, probably because it's an entirely intuitive argument without any evidence, their claim is that they could be "open" as in "for the public good" while still making closed models for profit. Aka the ends justify the means.
It’s a shame lawyers seem to think that the lawsuit is a badly argued joke, because I really don’t find that line of reasoning convincing…
> lawyers seem to think that the lawsuit is a badly argued joke,
It's because it is a badly argued joke. The founding charter is just that, a charter, not a contract:
> the corporation will seek to open source technology for the public benefit when applicable
There are two massive caveats in that statement, wide enough to drive a stadium through.
Elon is just pissed, and is throwing lawyers at it in the hopes that they will fold (a lot of cases are settled out of court, because it's potentially significantly cheaper and less risky).
The problem for Musk is that he is fighting a company that is also rich enough to afford good lawyers for a long time.
Also, he'll have to argue that he has been materially hurt by this change, which again is really hard.
Last of all, it's a company; founding agreements are not law, and rarely are they contracts.
The evidence they presented shows that Elon was in complete agreement with the direction of OpenAI. The only thing he disagreed with was who would be the majority owner of the resulting for-profit company that hides research in the short to medium term.
> They're probably right that building AGI will require a ton of computational power, and that it will be very expensive.
Why? This makes it seem like computers are way less efficient than humans. Maybe I'm naive on the matter, but I think it's possible for computers to match or surpass human efficiency.
Computers are still way less efficient than humans: a human brain draws less power than a laptop while constantly doing immense calculations to parse vision, hearing, etc. better than any known algorithm.
And the part of the human brain that governs our distinctly human intelligence, and not just what animals do, is larger still, so unless we figure out a better algorithm for intelligence than evolution did, it will require a massive amount of compute.
The brain isn't fast, but it is ridiculously parallel with every cell being its own core so total throughput is immense.
Perhaps the finalized AGI will be more efficient than a human brain. But training the AGI is not like running a human, it's like speed running evolution from cells to humans. The natural world stumbled on NGI in a few billion years. We are trying to do it in decades - it would not be surprising that it's going to take huge power.
Computers are more efficient, dense and powerful than humans. But due to self-assembly, brains consist of many(!) orders of magnitude more volume. A human brain is more accurately compared with a data center than a chip.
> A human brain is more accurately compared with a data center than a chip.
A typical chip requires more power than a human brain, so I'd say they are comparable. Efficiency isn't measured per volume but per watt of power or heat produced. The human brain wins on both by far.
To be fair, we've locked ourselves into this to some extent with the focus on lithography and general processors. Because of the 10-1000W bounds of a consumer power supply, there's little point to building a chip that falls outside this range. Peak speed sells, power saving doesn't. Data center processors tend to be clocked a bit lower than desktops for just this reason - but not too much lower, because they share a software ecosystem. Could we build chips that draw microwatts and run at megahertz speeds? Sure, probably, but they wouldn't be very useful to the things that people actually do with chips. So imo the difficulty with matching the brain on efficiency isn't so much that we can't do it as that nobody wants it. (Yet!)
edit: Another major contributing factor is that so far, chips are more bottlenecked on production than operation. Almost any female human can produce more humans using onboard technology. Comparatively, first-rate chips can be made in like three buildings in the entire world and they each cost billions to equip. If we wanted to build a brain with photolithography, we'd need to rent out TSMC for a lot longer than nine months. That results in a much bigger focus on peak performance. We have to go "high" because we cannot practically go "wide".
They claimed that GPT-2 was probably not dangerous but they wanted to establish a culture of delaying possibly-dangerous releases early. Which, good on them!
No, I think they started closing down and going for profit at the time they realized that GPT was going to be useful. Which sounds bad, but at the limit, useful and dangerous are the same continuum. As the kids say, OpenAI got "scale-pilled;" they realized that as they dumped more compute and more data onto those things, the network would just pick up more and discontinuous capabilities "on its own."
<aisafety>That is the one thing we didn't want to happen.</aisafety>
It's one thing to mess around with Starcraft or DotA and wow the gaming world, it's quite another to be riding the escalator to the eschaton.
> But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.
Why would they change their mission? If achieving the mission requires money then they should figure out how to get money. Non-profit doesn't actually mean that the corporation isn't allowed to make profit.
Why change the name? They never agreed to open source everything, the stated mission was to make sure AGI benefits all of humanity.
>They're probably right that building AGI will require a ton of computational power, and that it will be very expensive
Eh.
Humans have about 1.5 * 10^14 synapses (i.e. connections between neurons). Assume all the synapses are firing (highly unlikely to be the case in reality), and take an average interval between firings of roughly 0.5 ms (chemical synapses can be much slower, so this is a deliberately fast figure).
Assume that each synapse is essentially a signal that gets attenuated somehow in transmission, i.e. a value times a fractional weight, which is really a floating point operation. That gives us (1.5 * 10^14) / (0.0005 s) / (10^12) = 300,000 TFLOPS.
An Nvidia 4090 is capable of roughly 1300 TFLOPS of fp8. So for comparable compute, we need about 230 4090s, which is about $345k. With everything else on board, you are looking at maybe $500k, which is comparatively not that much money, and that's consumer pricing.
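For anyone who wants to check the arithmetic, here's the same back-of-envelope estimate as a short sketch; the synapse count, 0.5 ms interval, and per-card throughput/price are all rough assumptions carried over from above, not measurements:

    # Reproduce the rough estimate above; every input is an order-of-magnitude guess.
    synapses = 1.5e14        # approximate synapse count in a human brain
    interval_s = 0.5e-3      # assumed ~0.5 ms between firings
    ops_per_s = synapses / interval_s     # treat each firing as one FLOP
    tflops = ops_per_s / 1e12             # ~300,000 TFLOPS

    gpu_tflops = 1300        # rough fp8 throughput of an RTX 4090
    gpu_price_usd = 1500     # rough consumer price per card

    gpus = ops_per_s / (gpu_tflops * 1e12)
    print(f"{tflops:,.0f} TFLOPS ~ {gpus:.0f} RTX 4090s ~ ${gpus * gpu_price_usd:,.0f}")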
The biggest expense like you said is paying salaries of people who are gonna figure out the right software to put on those 4090s. I just hope that most of them aren't working on LLMs.
Training will be significantly cheaper and take less time once we have the correct software for the 4090s
Right now, the idea is that every time you build an LLM, you start the training from scratch, because that's all we know how to do.
Human like AI will most definitely not be trained like that. Humans can look at a piece of information once or twice and remember it.
Just like the attention paper, at some point someone will publish a paper that describes a methodology of feeding one piece of data back through the network only a few times to fully train it.
LLMs are just training on massive amounts of data in order to find the right software. No human can program these machines to do the complicated tasks that humans can do. Rather we search for them with Gradient based methods using data
If the core mission is to advance and help humanity, and they determine that going for-profit and making it closed will help that mission, then it is a valid decision.
Depends on your limited definition of advance. Chilling in a Matrix-esque wasteland with my fancy-futuristic-gadget isn't my idea of advanced-level-humanity.
May help with technological advancement, but not social or ethical advancement.
It’s been known to happen that environmental regulations turn out to be ill-considered, counterproductive, entirely corrupt instances of regulatory capture by politically dominant industries, or simply poor cost-benefit tradeoffs. Gigantic pickups are a consequence of poorly considered environmental regulations in the United States, for instance.
True, though at their core those aren't environmental but rather something else green-washed, be it corruption, subsidisation, or something else entirely.
When they say "We realized building AGI will require far more resources than we’d initially imagined" it's not just money/hardware it's also time. They need more years, maybe even decades, for AGI. In the meantime, let's put these LLMs to good use and make some money to keep funding development.
To "pivot" would merely be to change their mission to something related yet different. Their current stance seems to me to be in conflict with their original mission, so I think it's accurate to say that they betrayed it.
> “The Open in openAI means that everyone should benefit from the fruits of AI after its [sic] built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”.
Well, nope. This is disingenuous to the point of absurdity. By that measure every commercial enterprise is "open". Google certainly is extremely open, as are Apple, Amazon, Microsoft... or Walmart, Exxon, you name it.
"They're probably right that without making a profit, it's impossible to afford the salaries 100s of experts in the field and an army of hardware to train new models."
Except there's a proof point that it's not impossible: philanthropists like Elon Musk, who would likely have kept pumping money into it, and arguably the U.S. and other governments would have funded efforts (energy and/or compute time) as a military defense strategy to help compete with the CCP's funding of AI in China.
Well yeah, dive into the comments on any Firefox-related HN post and you'll see the same complaint about the organization structure of Mozilla, and its hindrance of Firefox's progress in favour of fat CEO salaries and side products few people want.
Elon is suing OpenAI for breach of contract but doesn't have a contract with OpenAI. Most legal experts are concluding that this is a commercial for Elon Musk, not much more. Missions change, yawn...
> To some extent, they may be right that open sourcing AGI would lead to too much danger.
That's clearly self-serving claptrap. It leverages a false depiction of what AGI will look like (no one really knows, but it's going to be scary and out of control!) into so much gatekeeping and subsequent cash that they can hardly stop salivating.
No, strong AI (there is no evidence AGI is even possible) is not going to be a menace. It's software, FFS. Humans are and will be a menace, though, and logically the only way to protect ourselves from bad people (and corporations) with strong AI is to make strong AI available to everyone. Computers are pretty powerful (and evil) right now, but we haven't banned them yet.
Reading this is like hearing "there is no evidence that heavier-than-air flight is even possible" being spoken, by a bird. If 8 billion naturally occurring intelligences don't qualify as evidence that AGI is possible, then is there anything that can qualify as evidence of anything else being possible?
That makes little intuitive sense to me. Help me understand why increasing the number of entities which possess a potential-weapon is beneficial for humanity?
If the US had developed a nuclear armament and no other country had would that truly have been worse? What if Russia had beat the world to it first? Maybe I'll get there on my own if I keep following this reasoning. However there is nothing clear cut about it, my strongest instincts are only heuristics I've absorbed from somewhere.
What we probably want with any sufficiently destructive potential-weapon are the most responsible actors to share their research while stimulating research in the field with a strong focus on safety and safeguarding. I see some evidence of that.
> If the US had developed a nuclear armament and no other country had would that truly have been worse?
Yes, do you think it is a coincidence that nuclear weapons stopped being used in wars as soon as more than one power had them? People would clamor for nukes to be used to save their young soldiers' lives if they didn't have to fear nuclear retaliation; you would see strong political pushes for nuclear usage in every one of the USA's wars.
I sense that with AGI all the outcomes will be a little less assured, since it is general-purpose. We won't know what hit us until it's over. Was it a pandemic? Was it automated religion? Nuclear weapons seem particularly suited to MAD, but AGI does not.
> ... and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.
I can't tell if your comment is intentionally misleading or just entirely missing the point. The entire post states that Elon Musk was well aware of and on board with their intentions, tried to take over OpenAI and roll it into his private company to control, and finally agreed specifically that they needed to continue to become less open over time.
And your post is to play Elon out to be a victim who didn't realize any of this?
He's replying to the emails saying he agrees. It's hard to understand why you posted something so contradictory above, pretending he wasn't.
Malevolent or "paperclip indifferent" AGI is a hypothetical danger.
Concentrating control of an extremely powerful tool: what it will and won't do, who has access to it, who gets the newest stuff first? Further corrupting K Street via massive lobbying/bribery activity laundered through OpenPhilanthropy is just trivially terrifying.
That is a clear and present danger of potentially catastrophic importance.
We stop the bleeding to death, then worry about the possibly malignant, possibly benign lump that may require careful surgery.
> As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).
That is surprisingly greedy & selfish to be boasting about in their own blog.
Yeah, they are basically saying that they called themselves OpenAI as a recruitment strategy but they never planned to be open after the initial hires.
Why do tech people keep falling for this shtick? It's happened over and over and over: open source becoming open core, becoming source available, becoming source available with closed-source bits.
How society organizes property rights makes it damn near impossible to make anything a commons in a way that can't, in practice, be reversed when folks see dollar signs. Owner is a non-nullable field.
I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.
That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.
First, this assumes that they will know when they approach AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are setup. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans in most tasks - how can you predict that?
Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and will never know what the AGI is really capable of.
This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.
Read the sentence before, it provides good context. I don't know if Ilya is correct, but it's a sincerely held belief.
> “a safe AI is harder to build than an unsafe one, then by opensorucing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”
Many people consider what OpenAI is building to be the dangerous application. They don't seem nefarious to me per se, just full of hubris, and somewhat clueless about the consequences of Altman's relationship with Microsoft. That's all it takes though. The board had these concerns and now they're gone.
I think the fundamental conflict here is that OpenAI was started as a counterbalance to Google's AI efforts and all other future resource-rich companies that decide to pursue AI, BUT at the same time they needed a socially responsible / ethical vector to piggyback off of to be able to raise money and recruit talent as a non-profit.
So they can't release science that the Googles of the world can use to their advantage, BUT they kind of have to, because that's their whole mission.
The whole thing was sort of dead on arrival, and Ilya's email dating to 2016 (!!!!) only amplifies that.
When the tools are (believed to be) more dangerous than nuclear weapons, and the "thee" is potentially irresponsible and/or antagonistic, then... yes? This is a valid (and moral) position.
If so, then they shouldn’t have started down that path by refusing to open source 1.5B for a long time while citing safety concerns. It’s obvious that it never posed any kind of threat, and to date no language model has. None have even been close to threatening.
The comparison to nuclear weapons has always been mistaken.
Sadly, one can't be separated from the other. I'd agree if it were true, but there's no evidence it ever has been.
One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.
> One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.
I mean, I'm totally with them on the AI safety fears. I'm definitely in the "we need to be very scared of AI" camp. Actually, the alien thought experiment is nice, because if we credibly believed aliens would come to Earth in the next 50 years, I think there's a lot of things we would/should do differently, and I think it's hard to argue that there's no credible fear of reaching AGI within 50 years.
That said, I think OpenAI is still problematic, since they're effectively hastening the arrival of the thing they supposedly fear. :shrug:
Regardless of who is right here, I think the enormous egos at play make OpenAI the last company I’d want to develop AGI. First Sam vs. the board, now Elon vs. everyone … it’s pretty clear this company is not in any way being run for the greater good. It’s all ego and money with some good science trapped underneath.
Serious question: does anyone trust Sam Altman at all anymore? My perspective from the outside is that his public reputation is in tatters except that he's tied to OpenAI. I'm curious what his reputation is internally and in the greater community.
At least, I trust him as much as any other foreign[0] CEO.
For all the stuff people complain about him doing, almost none of it matters to me, except for the stuff which isn't proven (such as his sister's allegation), where I would change my mind if evidence were presented. What I don't trust is that Californian ethics map well enough onto my ethics, which also applies to basically all of Big Tech…
…but I'm not sure any ethics works too well when examined. A while ago I came up with the idea of "dot product morality"[1] — when I was a kid, "good vs evil" was enough, then I realised there were shades of grey, then I realised someone could be totally moral on one measure (say honesty) and totally immoral on another (say you're a vegan for ethical reasons and they're a meat lover), and I figured we might naturally simplify this inside our own minds by saying another person is "morally aligned" (implicitly: with ourselves) when their ethics vectors are pointing the same way as ours.
But more recently I realised that in a high dimensional space, there's a huge number of ways for vectors to be almost the same and yet 90° apart[2].
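If anyone wants to see the geometry being gestured at here, below is a minimal sketch in Python; the axes, weights, and numbers are entirely made up for illustration and aren't anyone's actual "ethics vector".

```python
# A toy illustration of "dot product morality" in a high-dimensional ethics space.
# Axes and numbers are invented for the example; nothing here is anyone's real data.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine of the angle between two vectors: +1 aligned, 0 orthogonal, -1 opposed.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

dims = 1000  # one axis per moral question (honesty, diet, privacy, ...)

# Two strangers with independently drawn values: in high dimensions the cosine
# concentrates near 0, i.e. roughly 90 degrees apart, without being opposites.
a, b = rng.standard_normal(dims), rng.standard_normal(dims)
print(f"strangers:    cos = {cosine(a, b):+.3f}")

# Two people identical on 990 of 1000 axes, but sharply opposed on 10 strongly
# weighted ones: the mild agreements and the weighted disagreements cancel,
# leaving them almost exactly 90 degrees apart despite "mostly agreeing".
c = np.full(dims, 0.1)
d = c.copy()
c[:10], d[:10] = 1.0, -0.99
print(f"mostly agree: cos = {cosine(c, d):+.3f}")
```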
He is a salesman selling mildly working snake oil; he was into NFTs a few years ago and jumped to the new snake oil. Everything that Elon is claiming now is real. They manipulated opinion and rode open source to train on public open data sources, and once they had something they could market, they closed everything. It's not up for debate; that is a fact.
Closer alignment with my world view: for my home country, because I grew up in the same milieu; for the one I moved to, because I chose it in part for its idea of what "good" looks like.
Also, it tickles me to tell Americans that they are foreign. :P
Yes. I liked him when he was head of YC, I liked him when he was head of reddit for a few days, I like him now. I've never had any issue with him. When they made a capped-profit portion of OpenAI, they explained their reasoning, and I think it's clear we wouldn't have GPT-4 today (or in the foreseeable future) if they stayed purely non-profit.
Hell, capped-profit is more than you can say for any other tech company.
The funniest part of the OpenAI post is where someone comes in breathlessly and says "hey, have you read this ACX post on why we shouldn't open source AGI" to the guy who's literally been warning everybody about AGI for decades, and Elon is like: "Yup." Someone was murdered that day. There is nothing more dismissive than a "yup".
You're generally correct, but what really stings is that Claude 3 Opus was released right at the same time. It's superior to GPT-4 in pretty much every way I've tested. The center of gravity has shifted a few streets over to Anthropic, seemingly overnight.
The really depressing thing is that the board anticipated exactly this type of outcome when they were going "capped profit" and deliberately restructured the company with the specific goal of preventing this from happening... yet here we are.
It's difficult to walk away without concluding that "profit secondary" companies are fundamentally incompatible with VC funding. Would that be a pessimistic take? Are there better models left to try? Or is it perhaps the case that OpenAI simply grew too quickly for any number of safeguards to be properly effective?
I think the fact that a number of top people were willing to actually leave OpenAI and found Anthropic explicitly because OpenAI had abandoned their safety focus essentially proves that this wasn’t a thing that had to happen. If different leaders had been in place things could have gone differently.
Ah, but isn't that the whole thing about corporations? They're supposed to outlast their founders.
If the corporation fails to do what it was created to do, then I view that as a structural failure. A building may collapse due to a broken pillar, but that doesn't mean we should conclude it is the pillar's fault that the building collapsed -- surely buildings should be able to withstand and recover from a single broken pillar, no?
My sweet summer child. Do you really believe this Anthropic story, and that Anthropic will go any other way? Under late-stage capitalism, there is no other way. Everyone has ideals until they see a big bag of money in front of them. It doesn't matter if the company is a non-profit, a for-profit, or whatever else.
At the top level, there is the non-profit board of directors (i.e.: the ones Sam Altman had that big fight with). They are not beholden to any shareholders and are directly bound by the company charter: https://openai.com/charter
The top-level nonprofit company owns holding companies in partnership with their employees. The purpose of these holding companies is to own a majority of shares in & effectively steer the bottom layer of our diagram.
At the bottom layer, we have the "capped profit" OpenAI Global, LLC (this layer is where Sam Altman lives). This company is beholden to shareholders, but because the majority shareholder is ultimately controlled by a non-profit board, it is effectively beholden to the interests of an entity which is not profit-motivated.
In order to raise capital, the holding company can create new shares, sell existing shares, and conduct private fundraising. As you can see on the diagram, Microsoft owns some of the shares in the bottom company (which they bought in exchange for a gigantic pile of Azure compute credits).
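To make the control chain concrete, here is a rough sketch of the layering described above as a tiny data model. Entity names are approximate and ownership percentages are omitted since they aren't public; this is an illustration of the described hierarchy, not a legal description.

```python
# Toy model of the layered OpenAI structure described above; illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    name: str
    profit_motivated: bool
    controlled_by: Optional["Entity"] = None  # who effectively steers this entity

nonprofit = Entity("OpenAI non-profit (board of directors)", profit_motivated=False)
holding = Entity("Holding company (non-profit + employees)", False, controlled_by=nonprofit)
capped_llc = Entity("OpenAI Global, LLC (capped profit; Microsoft minority stake)",
                    True, controlled_by=holding)

def ultimate_controller(entity: Entity) -> Entity:
    # Walk up the control chain until we reach an entity nobody controls.
    while entity.controlled_by is not None:
        entity = entity.controlled_by
    return entity

# On paper, the profit-motivated bottom layer answers to the non-profit board.
print(ultimate_controller(capped_llc).name)
```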
Except Altman has the political capital to have the entire board fired if they go against him, which makes the entire structure irrelevant. The power is at the bottom, where the technology is being developed and where employees can threaten to walk out into plush jobs with the major shareholders. The power is not where the figureheads sit at the top.
That's a good question, and I agree it was a clever design. I don't know that there is a way to modify the org structure to prevent what happened. As much as I dislike them, a clear non-compete clause added after the structure was in place might have helped, but I'm not sure that's even an option in CA, and having employees re-sign a non-compete would be fraught in itself (better from the start). Still, this does seem like the most relevant application of a non-compete. I'm not a lawyer, and I'm sure they had top-notch lawyers review the structure. Even if OpenAI played the non-compete card, it wouldn't make retention easier if employees were willing to walk (they wouldn't exactly have trouble finding jobs anywhere). Do you know of anything that might have prevented it?
Well, if you squint a little bit, this all looks kind of like a military coup. Through the cultivation of personal loyalty in his ranks, General Altman became able to ignore orders, cross the Rubicon, and subjugate the senate. It's an old but common story.
I point out this similarity because I suspect that the corporate solution to such "coups" will mirror the government solution: checks and balances. You build a system which functions by virtue of power conflicts rather than trying to prevent them. I won't pretend to know how such a thing could be implemented in practice, however.
And what was this structure supposed to achieve? At the top we have a board of directors not accountable to anyone except, as we recently discovered, to the possibility of a general rebellion by employees.
That's not clever or innovative. That's just plain old oligarchy. Intrigue and infighting have been known features of oligarchies ever since antiquity.
Whether or not it is "clever", the idea of a non-profit or charity owning a for-profit company isn't original. Mozilla has been doing it for years, and The Guardian (in the UK) adopted that structure in 1936.
That's not entirely true. As a 501(c)(3) organization, they are bound to honor their founding bylaws on pain of having their tax-exempt status revoked w/ retroactive consequences. I won't comment on whether this is the fault of the bylaws or the IRS... but in the end I think we can agree that this was evidently not an effective enforcement mechanism.
As for the whole "cleverness" topic... it wasn't designed as an oligarchy; that's merely what it has devolved into. The saying "too clever by half" exists for good reason.
I assure you that we all got the allusion that you're making, but given the quote that you're replying to I think that perhaps you personally should not be allowed to own a dog.
I'm aware. The quote just so happened to appear in a YouTube program called Code Report yesterday, so I thought you might've been a viewer. I didn't mean to imply anything beyond that; sorry for the confusion.
Surely, anyone taking on and enduring the pain of running a company does so for egoistic reasons.
Your implicit assumption is that altruism exists. In the limit, every living being is egoistic. Anything you do is ultimately for egoistic reasons; even if you do it "for others" at first sight, it ultimately benefits you in some way, even if only to make you feel better.
A common misconception is that “egoism is bad”. Egoism doesn’t have to be bad. If the goals align it’s a net benefit for both sides. For example, a child might seek care, while parents seek happiness. Both are egoistic, but both benefit from each other.
Elon Musk is renowned for being an attention seeker and pulling these stunts as a proxy for relevance. It's touring the Texas border wearing a hat backwards, it's messing with Ukraine's access to Starlink while making statements on geopolitics, it's pretending that he discovered the railway and that digging holes in the ground is a silver bullet for transportation problems, it's making bullshit statements about cave rescue submarines and then accusing the actual cave rescuers who pointed out their absurdity of being pedophiles... Etc etc etc.
I think it makes no sense at all to evaluate the future of an organization based on what stunts Elon Musk is pulling. There must be better metrics.
Did you also take him at his word when he said 5+ years ago that Teslas can drive themselves safer than a human "today"? Or that Cybertruck has nuclear explosion proof glass (which was immediately shattered by a metal ball thrown lightly at it)?
Musk has a long history of shamelessly lying when it suits his interest, so you should really really not take him at his word.
Pointing out Elon Musk's claims regarding free speech and the shit show he's been forcing upon Twitter, not to mention his temper tantrum directed at marketing teams for ditching post-Musk Twitter out of fear their ads could show up alongside unsavoury content like racist posts and extremism in general, should be enough to figure out the worth of Elon Musk's word and the consequences of his judgement calls.
I seem to remember that being only partially true? Or the license was weird and deceptive? Also, as other replies have stated, why isn't "Grok" open source? Musk loves to throw around terms like open source to generate goodwill, but when it comes time to back those claims up it never happens. I wouldn't take Musk at his word for literally anything.
This looks bad for OpenAI (although it's been pretty obvious that they are far from open for a long time).
But it looks 10x worse for Elon, at least for the public image he desperately tries to maintain.
> As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control.
> In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding,
> The difference is OpenAI had a reputation to protect
OpenAI changed its stance about five years ago. In the meantime they got billions in investment, hired the best employees, created a very successful product, and took a leadership position in AI. The only narrative remaining was that they somehow betrayed the original donors by moving away from the charter. This shows that is not the case; the original donor(s) are equally megalomaniacal and don't give a fuck about the charter.
> Musk can't sink any lower at this point, and his stans will persist.
You sure about that?
For now they are largely looking away (and boy, it's hard!) from his 'far right' adventures. But it was just reported that he met with Trump. And it's pretty clear that if Trump is elected and cancels EV subsidies, as he says he will, Tesla is dead, and he knows it.
So now we have these two guys each holding something the other wants: Trump wants Musk's money now, and Musk wants... taxpayer money once Trump is elected. I bet Musk's reputation can and will go much, much lower in the coming months.
If your life was changed by investing in TSLA you will look the other way the rest of your life. For many millennials that was their ticket to a good life.
His detractors do a poor job of knocking him down a peg. I am speaking as someone who was there on day 0 of r/realTesla.
Even today, with garbage like that recent 60 Minutes hit piece on SpaceX, it is easy to dismiss because of all the easily disprovable claims. So much criticism of Musk is just lazily produced and not properly vetted, and it's a shame, because there is a ton of material to get him with.
Nothing you wrote has a speck of evidence. The current administration has been clearly targeting Elon Musk and Tesla/SpaceX, so why on earth wouldn't Elon support their opponents?
Everyone with a logical mind would see it. Your political opinion should not cloud your judgement… People like you are the reason we have a senile old man leading our world into wars. I can't fathom why Americans hate Trump so much but are so lenient toward Biden and the Democrats, who sent us into the worst conflict the world has seen in decades.
The effect of sucking up to someone who previously said this about you might be even worse:
"When Elon Musk came to the White House asking me for help on all of his many subsidized projects, whether it's electric cars that don't drive long enough, driverless cars that crash, or rocketships to nowhere, without which subsidies he'd be worthless, and telling me how he was a big Trump fan and Republican..."
Musk is a vindictive child, and he has an axe to grind against Biden and the "woke" left. You'd think he wouldn't do something self-defeating over some slight, but he turned $40 billion into $10 billion buying Twitter for the stupidest reasons.
I mean his standing is based on what he is capable of, not what he does given the circumstance. We know he would side with Trump if it benefits him. I don't need to see the scenario play out for me to judge him for it.
The truth is, without subsidies Tesla would sell a fraction of what it does now, and it would certainly not be profitable (hell, it probably won't be profitable in 2024, even with the subsidies!).
But at the same time, teardowns of their cars show they have healthy 30% margins, while their competitors are often selling their EVs at a loss. Their current balance sheet looks much better than their competitors' given the assets they have for the EV transition. If governments are serious about transitioning to EVs, then either they kick the other car companies into gear or accept that Tesla (and the Chinese) will be the only real serious players.
His stans and TSLA uber-bulls insist Tesla is no longer a car company but an AI one, so they probably don't care anymore.
I know some uber-bulls long insisted Tesla stop selling cars to the public so they could be hoarded for the imminent robotaxi fleet that would be deployed en masse by 2020.
Do you mind elaborating why you think it looks bad for OpenAI? I didn't see anything that diminishes their importance as an entity or hurts them in respect to this lawsuit or their reputation. In their internal emails from 2016 they explain what they mean by open.
I hear people complain about the 'Open' a lot recently, but I'm not sure I understand this type of concern. Publications, for companies, are always PR and recruitment efforts (independent of profit or nonprofit status). I recall that OpenAI were very clear about their long term intentions and plans for how to proceed since at least February of 2019 when they announced GPT2 and withheld the code and weights for about 9 months because of concerns with making the technology immediately available to all. In my own mind, they've been consistent in their behavior for the last 5 years, probably longer though I didn't care much about their early RL-related recruitment efforts.
With all due respect, they are not doing a sleight of hand while selling widgets... they are in the process of reshaping society in a way that may have never been achieved previously. Finding out that their initial motives were a facade doesn't portend well for their morality as they continue to gain power.
I still don’t understand why you think their initial motives were a facade. They have always been trying to get to AGI that will be useable by a large fraction of society. I am not sure this means they need to explain exactly how things work at every step along the way or to help competitors also develop AGI any more than Intel or Nvidia had to publish their tapeouts in order for people to buy their chips or for competitors to appear. If OpenAI instead built AI for the purpose of helping them solve an internal/whimsical project then that would not be “open” by any reasonable definition (and such efforts exist, possibly by ultra wealthy corporations but also by nations, including for defense purposes.)
> OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
That's obviously changed to "let's make lots of money", which should not be any "non-profit" organization's mission.
I still do not see the point that you are trying to make. Do you think that their current path is somehow constrained (instead of unconstrained) by a need to generate financial returns? I haven’t seen any evidence of a change to the core mission described in the statement.
If you advertise your company as a wholesome, benevolent not-for-profit, generate a lot of goodwill for your human-enhancing mission, and then pull a bait and switch to what looks to be a profit/power motive at all costs, it certainly makes most people who are following the organization sour on your mission and your reputation.
Typically when organizations do something like that it speaks to some considerable underlying issues at the core of the company and the people in charge of it.
It is particularly troubling when it pertains to technology that we all believe to be incredibly important.
I don’t see any changes to the core mission of OpenAI between its founding and now. Maybe some people misinterpret what a nonprofit vs for-profit company status means and confuse the former with other types of organizations like academia or charity. For example, no early founder or investor in a nonprofit can expect to make money simply out of their investment, and I haven’t seen evidence to the contrary for OpenAI. Any profits that a nonprofit makes, even those through ownership of the for-profit entity, must go back to the initial cause, and may include paying employees. Salaries in AI are high these days so if you want to stay on top you have to keep them higher than competition. In any case, I think this thread was not as productive as I hoped.
I think the word “open” is sort of a misrepresentation of what the company is today. I don’t mind personally but I can also see why people in the OSS community would.
Now, I’m not too concerned with any of the large LLM companies and their PR stunts, but from my solely EU enterprise perspective I see OpenAI as mostly a store-front for Microsoft. We get all the enterprise co-pilot products as part of our Microsoft licensing (which is bartered through a 3rd party vendor to make it seem like all the co-pilot stuff we get is “free” when it goes on the budget).
All of those tools are obviously direct results of the work OpenAI does, and many of them are truly brilliant. I work in an investment bank that builds green energy plants and sells them with investor money. As you might imagine, nobody outside of our sales department is very good at creating PowerPoints. Our financial departments especially used to be a "joy" to watch when they presented their stuff at monthly/quarterly meetings… seriously, it was like they were in a competition to fit the most words onto a single slide. With co-pilot their stuff looks absolutely brilliant. Still not on the level of our sales department, but brilliant, and it's even helped their presentations not last 90 million years. And this is just a tiny fraction of what we get out of co-pilot. Sure… I mostly use it to make stupid images of ducks, cats, and space marines with wolf heads for my code-related presentations, and to give me links to the right Microsoft documentation in the ocean of pages. But it's still the fruits of OpenAI.
Hell, the fact that they're doing their stuff on Azure basically means that a lot of those 10 billion Microsoft dollars are going directly back to Microsoft as OpenAI purchases computing power. Yet it remains a "free" entity, so that Microsoft doesn't run into EU anti-trust issues.
Despite this gloom and doom with an added bit of tinfoil hat, I do think OpenAI themselves are still true to their original mission. But in the boring business sense in an enterprise world, I also think they are simultaneously sort of owned by the largest “for enterprise” tech company in the world.