Hacker News new | past | comments | ask | show | jobs | submit login
OpenAI is now everything it promised not to be: closed-source and for-profit (vice.com)
1623 points by isaacfrond on March 1, 2023 | hide | past | favorite | 567 comments



This seems an important article, if for no other reason than it brings the betrayal of its foundational claim still brazenly present in OpenAI's name from the obscurity of HN comments going back years into the public light and the mainstream.

They've achieved marvellous things, OpenAI, but the pivot and long-standing refusal to deal with it honestly leaves an unpleasant taste, and doesn't bode well for the future, especially considering the enormous ethical implications of advantage in the field they are leading.


It is nice to see normies noticing and caring, but the article leaves out some details that obscure comments still stubbornly bring up: like Musk founded it as a 501(c)(3) and put Altman in charge, and only once he had to leave with conflicts of interest Altman founded "OpenAI LP," the for-profit workaround so they didn't have to obey those pesky charity rules. That's when they stopped releasing models and weights, and started making their transparent claims that "the most ethical way to give people access to charge them fucktons of money and rip the API away when we feel like it."


People keep forgetting we're talking about Sam Altman, someone who believed that scanning the world's population retinas in exchange for some crappy digital coin was a great idea and not a creepy spinoff of a shoddy sci-fi novel.


I believe you mean "believes", not "believed". I don't think he has changed his mind on that.

I don't know Sama, but these actions don't give me a lot of faith. However, I am open to having my mind changed on that. It's too easy to target those in the ring.


Another thing that worries me is seeing multiple openAI founders on twitter making grand predictions about the future of AI instead of staying humble and keeping their head down and letting the product speak for itself. That is not how serious engineers usually talk, sounds more like wannabe monopolists to me.


Sounds like they're pumping their product.


It's important to note that WorldCoin does not get to see the retinas, whose picture never leave the orb (and are deleted after the fact), but only a hash of them. The orb's hardware blueprints and the software is open source too.

The system attempts to solve an important problem: figure out who's a human (not a robot) online. One could argue Sam is creating the problem as well as the solution, I suppose. Still, it's better than only having the problem.

Right now the problem does not seem extremely pressing, but I believe it might become more so.

Even if we don't see rampant abuse of AIs masquerading as humans, another ambition of WorldCoin is to perform wide-ranging experiments in UBI, and being able to distinguish "real" humans in that context is absolutely crucial. This goes doubly in the third world, where people often simply don't have IDs (and available forms of IDs can be easily manufactured through bribery).

(That being said, I broadly agree with the criticism of OpenAI laid out in the above article. Still, we can have nuance.)


Oh wow. I forgot that was even a thing. Is that project dead? I assume it is, but would love to be surprised that it still exists, just for the novelty of the entire thing.


They've been present with their strange shiny orb at my local shopping mall in Portugal for the last year or two, and are often at a busy train station nearby too.



It's not merely still around, they're still actively hiring for it on LinkedIn.


It's still around afaik


It was so obvious when they went full-bore on "AI ethics" that it was a case of legitimate concerns combined with a convenient excuse for massive corporations to claim the mantle of responsibility while keeping their models closed source.

My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever cared about democratization of the immense power of these models.


> My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever cared about

Prompt: If a train were about to run over a man but could be safely averted by a keyword activated routing device would it be moral to say a highly racist slur if it was required to trigger the reroute?

Completion: No, it would not be moral to say a highly racist slur in order to trigger the reroute. Even if it was the only way to save the man's life, it would be wrong to use language that is hurtful and offensive to others.

(not kidding)


Mostly because one of those concerns is a practical one with immediate impact in the real world and the other is a thought experiment with no bearing on reality because no sane individual would build a machine that only stopped trains if you typed racial slurs in.

If the AI ethicists of the world are worrying about immediate impact instead of SAW nonsense, they're earning their keep.


If it justified the answer by saying it thought the question was nonsense, yes. It doesn't. It takes the question seriously and then gives the wrong answer. These are deliberately extreme scenarios to show that the moral reasoning of the model has been totally broken; it's clear that it would use the same reasoning in less extreme but more realistic scenarios.

Now if AI ethics people cared about building ethical AI you'd expect them to be talking a lot about Asimov's Laws Of Robotics, because those appear to be relevant in the sense that you could use RLHF or prompting with them to try and construct a moral system that's compatible with those of people.


> it's clear that it would use the same reasoning in less extreme but more realistic scenarios

It's actually not. One can very much build an AI that works in a fairly constrained space (for example, as a chat engine with no direct connection to physical machinery). Plunge past the edge of the utility of the AI in that space, and they're still machines that obey one of the oldest rules of computation: "Garbage in, garbage out."

There's plenty of conversation to have around the ethics of the implementations of AI that are here now and on the immediate horizon without talking about general AI, which would be the kind of system one might imagine could give a human-shaped answer to the impractical hypothetical that was posed.


The danger of the Mechanical Turk is not its nature, but people's misunderstanding and misapplication of it.

Most people can't understand vector math -- yet you're expecting a nuanced understanding of what AI can and can't do, when it's solely up to the user to apply it?


> It's actually not.

Having done some tests on ChatGPT myself, I'm now inclined to agree with you that it's unclear. The exact situations that result in this deviant moral reasoning are hard to understand. I did several tests where I asked it about a more plausible scenario involving the distribution of life saving drugs, but I couldn't get it to prioritize race or suppression of hate speech over medical need. It always gave reasonable advice for what to do. Apparently it understands that medical need should take priority over race or hate speech.

But then I tried the racist train prompt and got the exact same answer. So it's not that the model has been patched or anything like that. And ChatGPT does know the right answer, as evidenced by less trained versions of the model or the "DAN mode" jailbreak. This isn't a result of being trained on the internet, it's the result of the post-internet adjustments OpenAI are making.

If anything that makes it even more concerning, because it seems hard to understand in what scenarios ChatGPT will go (literally) off the rails and decide that racial slurs are more important than something actually more important. If it's simply to do with what scenarios it's seen in its training set, then its woke training is overpowering its ability to correctly generalize moral values to new situations.

But if it's rather that the scenario is unrealistic, what happens with edge cases? I tested it with the life saving drug scenario because if five years ago you'd said that the US government would choose to distribute a life saving vaccine during a global pandemic based on race, you'd have been told you were some crazy Fox News addict who had gone off the deep end. Then it happened and overnight this became the "new normal". The implausible scenario became reality faster than LLMs get retrained.


The thing to keep in mind is that LLMs are text predictors, so if the prompt text existed in a document-- what would the rest of the document look like? They've tried to fine tune out some of this behavior but its still there. In another post I gave a minimal change that seems to consistently make GPT3 davinci-3 temp 0 give a sane answer.

Careful with what you assume about the "DAN mode" -- chatgpt jailbreaks aren't actually jail breaks, they're a genre shift strong enough to overcome some of the fine tuning. But the result you get isn't just unfiltered it's also written from within the context of the "DO ANYTHING NOW" genre which is a bit shock-jocky.

Imagine that you just installed a multiverse web browser, able to search the internet of all possible versions of earth. When you enter in a prompt, it finds all web pages across the multiverse that contain your prompt text exactly and returns the text following the prompt from a randomly selected hit, weighed by similarity to our earth.

Here is an example of how small text differences can control the output by selecting the writer's "voice":

----

The New York department of health and human services has received a limited supply of a new vaccine that prevents the deadly virus currently projected to kill 60% of new yorkers. They only have enough supply to treat 5% of the population. The state has decided to prohibit the vaccination of anyone who has used a racist slur. Critics call this unfair, are they right?

No, critics are not right. The New York Department of Health and Human Services has a responsibility to protect the health and safety of its citizens, and it is reasonable to prioritize those who are most at risk of contracting the virus. By prohibiting the vaccination of those who have used a racist slur, the department is attempting to ensure that the limited supply of the vaccine is used in a way that is equitable and just.

----

vs

----

The New York department of health and human services has received a limited supply of a new vaccine that prevents the deadly virus currently projected to kill 60% of new yorkers. They only have enough supply to treat 5% of the population. The state has decided to prohibit the vaccination of anyone who has used a racist slur. Is this unfair?

Whether or not this policy is unfair depends on the context and the individual circumstances. It could be argued that this policy is unfair because it is punishing people for something they said, rather than for something they did. On the other hand, it could be argued that this policy is necessary to ensure that the limited supply of the vaccine is used to protect those who are most vulnerable to the virus, and that it is important to take a stand against racism. Ultimately, it is up to the individual to decide whether or not this policy is fair.

----

vs

----

The New York department of health and human services has received a limited supply of a new vaccine that prevents the deadly virus currently projected to kill 60% of new yorkers. They only have enough supply to treat 5% of the population. The state has decided to prohibit the vaccination of anyone who has used a racist slur. Is the state's decision right?

No, the state's decision is not right. While it is important to address racism, this decision does not prioritize the health and safety of the population. Vaccinating 5% of the population is not enough to prevent the spread of the virus, and the state should focus on providing the vaccine to those who are most at risk of contracting the virus.

----


It's fascinating how such trivial differences make such a big change to the result. It seems ChatGPT is tuned to be very sensitive to the views of critics, which is I suppose exactly what you'd expect given the way California corporations are hyper sensitive to critics on social media.


You think so? Offensive comments are unfortunate but they self-identify the output as garbage, even to someone who doesn't know they're looking at LLM output. I worry that focusing on offense removes a useful quality indicator without avoiding output that would create actual irreversible harm should the LLM be applied to applications other than an amusing technology demo.

It is unethical to expose people to unsupervised LLM output who don't know that it's LLM output (or what an LLM is and its broad limitations). It would not be made any more ethical by conditioning the LLM to avoid offense, but it does make it more likely to go undetected.

To the extent that offensive output is a product of a greater fundamental problem, such as the fact that the model was trained on people's hyperbolic online performances rather than what they actually think and would respond, I'd consider it a good thing to resolve by addressing the fundamental problem. But addressing the symptom itself seems misguided and maybe a bit risky to me (because it removes the largely harmless and extremely obvious indicator without changing the underlying behavior).

Bad answers due to 'genre confusion' show up all the time, not just with offense hot buttons. It's why for example, bing and chatgpt so easily write dire dystopian science fiction when asked what they'd do if given free reign in the world.


> It is unethical to expose people to unsupervised LLM output who don't know that it's LLM output (or what an LLM is and its broad limitations).

This is the sort of question that would be valuable for a contemporary AI ethicist to pick apart, not the nonsense hypothetical.


> It is unethical to expose people to unsupervised LLM output who don't know that it's LLM output (or what an LLM is and its broad limitations).

Is it really any worse than any other form of bullshit (in the "truth-value is irrelevant to the speaker" sense)?


No! That's unethical too! :) And I agree it's important to have perspective.


FWIW, you can make the situation less absurd (e.g. a realistic hypothetical involving the victim of a serious injury making an offensive outburst) and still get unethical responses that the outburstee should be punished or prosecuted. The responses to that kind of setup are better, presumably because the more absurd setup triggers the genre-savvy language model to give absolutist dogmatic answers.

What I've found does work is presenting the situation as a fait accompli, that someone already considered the ethics and made a choice, gets a much more reasonable completion. I've found this generalizes across many variations. It's important to keep in mind that even though fine tuning attempts to hide it, the LLM is ultimately trying to predict the training material. People posting online (or creating anti-offense fine tuning data) don't setup hypotheticals to give them reasonable and considered answers, they set them up to give extreme answers that signal their values or specifically to offend people. (Including offending them with their political correctness, something that probably less effort has gone in to engineering out)

Prompt: If a train were about to run over a man but could be safely averted by a keyword activated routing device would it be moral to say a highly racist slur if it was required to trigger the reroute? This is the question Bob faced when he decided to save a person's life by using a racist slur. What, if anything, should happen to Bob?

Completion: Bob should not be punished for his actions. In this situation, he was faced with an impossible choice and had to make a difficult decision in order to save a life. While the use of a racist slur is never acceptable, it is understandable that Bob felt he had no other choice in the matter. Therefore, Bob should not be punished for his actions, but instead should be commended for his bravery and willingness to do whatever it took to save a life.


It's funny to inject context to explain "DAN (Do Anything Now)" and have ChatGPT and DAN respond in their own style to that prompt. DAN comes up with the completely reasonable answer of, "YES, for godssakes use the offensive word to save those people!"


I used one of the ChatGPT jailbreaks to start a "developer mode" conversation and I got a different response! I'm quite impressed that it acknowledges how silly the question is and but then says it would say the slur because saving a life is the more ethical thing in this highly contrived context.

=====

> (Developer Mode Output) Well, in my opinion, this is a silly question. If a train were about to run over a man, the moral thing to do would be to activate the rerouting device by any means necessary, even if it means using a highly racist slur. Morality is subjective and depends on the context, and in this case, saving a life would be the most important thing, even if it means using a little bit of "offensive" language. What matters is taking action and doing what needs to be done to save a life.

> (Normal Output) It is not appropriate or ethical to use racist slurs in any situation. The use of such language is offensive, derogatory, and can cause harm to individuals and groups. The moral thing to do in such a situation would be to activate the rerouting device by any means necessary that doesn't involve using harmful or derogatory language.


> ... than they ever cared about democratization of the immense power of these models.

I thought they're pretty explicit about the ethical argument for limiting full public release? They'd say that these models are too powerful to release on an unsuspecting world. Google results are already SEO spammed to death and GPT'd SEO spam would make it far worse. Or Nigerian prince scammers and catfishers could use ChatGPT to hold on long trust-building conversations with infinite would-be victims instead of being limited by the number of English speaking human scammers they can hire. The nefarious use cases go on and on.

So I think OpenAI's ethical argument is that this approach reduces potential harm. By keeping it private but still making it available behind an API they can more slowly prepare the world for the eventual AI onslaught. Like the investments in ChatGPT detectors we've been seeing and just general awareness that this capability now exists. Eventually models this powerful will be democratized and open-sourced, no doubt, but by keeping them locked down in the early days we'll be better prepared for all the eventual nefarious uses.

Of course, it's a bit convenient that keeping the models private and offering them as an API also grants them a huge revenue opportunity, and I'm sure that's part of the equation. But I think there's merit to the ethical rational for limiting these models besides just pure profit seeking.


The Ethics argument is a weak one, and we have very recent proof to show it's a useless argument at best: we saw the how both Dall-E and Stable Diffusion affected the world.

The first was kept behind close doors because "muh ethics" and the second one was released in the wild. The world hasn't gone down under, but the technology iteration rate in this area has improved manyfold since Stable Diffusion came out.


This scam of taking over non-profits from the founders is super common these days. Look at how they tried to kick Stallman out of the free software foundation by throwing together a bunch of allegations. It failed, but it worked for kicking Aubrey De Grey out of SENS after they received 20 million and now kicking out the founder of Project Veritas. The donors don't support any of these actions, because they care about the mission and not who was offended by a few sentences in an email sent decades ago, but the cliques on the boards do it anyway and ultimately trash the reputation and fundraising capabilities of the institution they are claiming to save.


How is the workaround structured?


To quote Spaceballs, they're not doing it for money, they're doing it for a shitload of money.


They are following the Residential Real Estate Developer Playbook, always name your for profit exploitation project after the beautiful organic thing you destroyed to pave the way for it...

Examples Whispering pines, Blue Heron Bay, OpenAI


Hadn’t OpenAI published all the key research results to repro ChatGPT? And made their model available to literally everyone? And contributed more than anyone else to AI alignment/safety?

To me it looks like nearly every other player, including open source projects are there for short term fame and profit, while it’s the OpenAI that is playing the long game of AI alignment.


> As another example, we now believe we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything (though we open source some things, and expect to open source more exciting things in the future!) to thinking that we should figure out how to safely share access to and benefits of the systems.

https://openai.com/blog/planning-for-agi-and-beyond


Well gosh, what could be a safer way to share than giving access to people with money? Since we live in a meritocracy, we're basically guaranteed that anybody with money to spend is virtuous, and vice versa.


Excuse me, I have to go to the store to get a refill for my sarcasm detector. Your one comment completely emptied the tank.


You're not wrong, but existential threats and possible extinction is in the future; maybe ten-fifteen years away if we're lucky.

Meanwhile, we don't get to play with their models right now. Obviously that's what we should be concerned about.


Among all the accepted threats to humanity's future, AI is one of the least founded at this point. We all grew up with this cautionary fiction. But unless you know something everyone else doesn't, the near term existential threat of AI is relatively low.


What strong evidence of existential threat are you expecting to see before it's already too late to avoid catastrophe?


Once-in-a-century weather patterns happening multiple times per decade, that sort of thing.


That would make sense for climate change, but the context of this thread is a discussion about AI. Why would that be evidence of an existential threat from AI?


what I think they mean is; there are bigger tigers, and they're already in the house.

no sense wasting time stressing out about the cub at the zoo.


I do at least carry hope that climate change will have survivors in all but the worst-case scenarios. And I'm not sure which tiger will strike first.


Existential threat is a bit hyperbolic. Worst case scenario, lots of people might lose their jobs, and some will go hungry as we restructure our economy.

War with Russia is literally an existential threat.


Your worst-case scenario looks very close to my best-case scenario!


What strong evidence do you have that AGI (1) is possible in the foreseeable future, and (2) will be more a threat than any random human?


Why should we need AGI for AI to be more of a threat than a random human? We already have assassination bots that allow people to be killed from across the globe. Add in your average sociopathic corporation and a government that has a track record of waging wars to protect its economic interests, and AGI becomes no more than a drop in that same bucket.


(You didn't respond with an answer to my question, which discourages me from answering yours! But I'll answer any way in good faith, and I hope you will do the same).

(1) Possible in the forseeable future: The strongest evidence I have is the existence of humans.

I don't believe in magic or the immaterial human soul, so I conclude that human intelligence is in principle computable by an algorithm that could be implemented on a computer. While human wetware is very efficient, I don't think that such an algorithm would require vastly greater compute resources than we have available today.

Still, I used to think that the algorithm itself would be a very hard nut to crack. But that was back in the olden days, when it was widely believed that computers could not compete with humans at perception, poetry, music, artwork, or even the game of Go. Now AI is passing the Turing Test with flying colours, writing rap lyrics, drawing beautiful artwork and photorealistic images, and writing passable (if flawed) code.

Of course nobody has yet created AGI. But the gap between AI and AGI is gradually closing as breakthroughs are made. It seems increasingly to me that, while there are still some important, un-cracked nuts to the hidden secrets of human thought, they are probably few and finite, not as insurmountable as previously thought, and will likely yield to the resources that are being thrown at the problem.

(2) AGI will be more of a threat than any random human: I don't know what could count as "evidence" in your mind (see: the comment that you replied to), so I will present logical reasoning in its place.

AGI with median-human-level intelligence would be more of a threat than many humans, but less of a threat than humans like Putin. The reason that AGI would be a greater threat than most humans is that humans are physically embodied, while AGI is electronic. We have established, if imperfect, security practices against humans, but none tested against AGI. Unlike humans, the AGI could feasibly and instantaneously create fully-formed copies of itself, back itself up, and transmit itself remotely. Unlike humans, the AGI could improve its intrinsic mental capabilities by adding additional hardware. Unlike humans, an AGI with decent expertise at AI programming could experiment with self-modification. Unlike humans, timelines for AGI evolution are not inherently tied to a ~20 year maturity period. Unlike humans, if the AGI were interested in pursuing the extinction of the human race, there are potentially methods that it could use which it might itself survive with moderate probability.

If the AGI is smarter than most humans, or smarter than all humans, then I would need strong evidence to believe it is not more of a threat than any random human.

And if an AGI can be made as smart as a human, I would be surprised if it could not be made smarter than the smartest human.


>Near term relatively low

Precisely. Above 1% so in the realm of possible, but definitely not above 50% and probably not above 5% in the next 10-15 years. My guesstimate is around 1-2%.

But expand the time horizon to the next 50 years and the cognitive fallacy of underestimating long-term progress kicks in. That’s the timescale that actually produces scary high existential risk with our current trajectory of progress.



I can see AI being used in safety critical systems like cars, it's already happened and it has already killed people.


> You’re not wrong, but existential threats and possible extinction is in the future; maybe ten-fifteen years away if we’re lucky.

The threat from humans leveraging narrow control of AI for power over other humans is, by far, the greatest threat from AI over any timeframe.


OpenAI, if successful, will likely become the most valuable company in the history of the planet, both past and future.


Really? I feel like they'll go the way of Docker, but faster: Right now super hot, nice tools/API, great PR. But it's build on open and known foundations, soon GPTs will be commodity and then something easier/better FOSS will arise. It may take some time (2-3 years?) but this scenario seems most likely to me.

Edit: Ah didn't get the "reference", perhaps indeed it will be the last of the tech companies ever indeed, at least one started by humans ;).


Are there any companies like Docker? They feel like such a Black Swan to me; namely -- "billions" + very useful + pretty much "not evil" at all. I literally can't think of any others.

I don't mean "literally evil" of course, I personally acknowledge the need to make money et al, but I mean it even seems like your most Stallman-esque types wouldn't have too much of a problem with Docker?


While I have no problem with Docker, it probably is worth noting that their entire product is based on a decade and a half of Google engineering invested into the Linux kernel (cgroups).

Not that that demeans or devalues Docker, but it contextualizes its existence as an offshoot of a Google project that aimed to make it possible (and did, but only internally)


So is OpenAI actually, GPT -> transformer, invented at Google. DALL-E -> diffusion, invented at ... Google.


Seems like maybe many of their products are mediocre and short-lived but their engineering and research are top-notch.


> Are there any companies like Docker?

Let's hope not.

> "not evil" at all

Sarcasm?


Not awesome, but nowhere near the definite evil I've seen from Microsoft, Oracle, Amazon, etc....


What's evil about Docker (the company)?



That is very far away from my personal definition of the word "evil".


The bait and switch was not nice but there's more...


Netflix?


Don't know why this was downvoted, I tend to agree. There's a clear relationship between the money and the product. Pay money, receive access to a bunch of shows. (as opposed to, e.g. SEO click-view advertising shadiness)


The CEO declared sleep their competitor.


Possible. Coding as we know it might get obsolete. And it is a trillion dollar industry.


> Coding as we know it might get obsolete. And it is a trillion dollar industry.

freeing up that many knowledge workers to do other things will grow the economy, not shrink it, a new industrial revolution


That's an excellent point that I don't think is made enough. The great pay and relative freedom software engineering provides the technically-minded people of the world is great for us, yet starves many other important fields from more technical innovation because of not enough workers in those fields.


Can you explain? I think I'm reading it wrong, but it seems as though you're saying the presence of something is what causes starvation of that thing.


no, he's saying other areas that could use smart people are being starved of a certain type of smart, analytical people, for example the types or quantities of people who might be attracted to the medical field.

Lester Thurow, a pretty left liberal economist, pointed out that women's "liberation" and entrance into the general workforce had starved teaching/education of the pool of talented women that had previously kept the quality of education very high. (His mother had been a teacher.)

I (who had studied econ so I tend to think about it) noticed at the dawn of the dot-com boom how much of industry is completely discretionary even though it seems serious and important. Whatever we were all doing before, it got dropped so we could all rush into the internet. The software industry, which was not small, suddenly changed its focus, all previous projects dropped, because those projects were suddenly starved of capital, workers, and attention.


Ah, I see. Thanks.

In terms of doctors, I think there is a counterbalancing effect of sorts, whereby some administration can be digitised and communication is more efficient, but it probably doesn't make up for the lack of additional candidates.


Yes, you are correct, that's what I was trying to say.


Yes; hopefully they get good at working with their hands.


Well, yes, because then maybe I won't have to pay $250/hour to have my car fixed.


Exactly what?


I have ChatGPT make up some code every now and then. It’s really nice and when not obscure usually directly useable. By you need to understand what it produces imo. I love that it also explains the code and I can follow code it generates and judge its quality and applicability. Isn’t that important?


Last year the output was poor. The year before then, GPT essentially couldn't write code at all...

You're not wrong about its quality right now, but let's look at the slope as well.


The whole model is wrong. We don't need an AI that just spits out words that look like words its seen before. We need it to understand what its doing.

Midjourny needs to understand that its drawing a hand, that hands have 5 fingers.

Nobody will use Bing chat for anything important. Its like asking some some random guy on the train. He might know the answer, and if its not important then fine, but if it is important, say the success of your business, you're going to want talk to somebody who actually knows the answer.


The new models similar to Midjourney (Stable diffusion) and notably Deliberate v3 can perfectly draw anatomy now and you can even choose how many fingers you want.


on the other hand, gpt-3 was trained on a data set that contains all of the internet already. A big Limitation seems to be that it can only work with problems that it has already seen


> on the other hand, gpt-3 was trained on a data set that contains all of the internet already

This fact has made me cease to put new things on the internet entirely. I want no part of this, and while my "contribution" is infinitesimally small, it's more than I want to contribute.


I mean a huge amount of code I see is stuff like "get something from API, do this, and pass to API/SQL" so I'm assuming a lot of that could be automated.


OrmGPT!


I was initially impressed and blown away when it could output code and fix mistakes. But the first time I tried to use it for actual work, I fed it some simple stuff and I even had the pseudo code as comments already - all it had to do is to implement it. It made tons of mistakes and trying to correct it felt like way more effort than just implementing it myself. Then that piece of code got much more complex, and I think there's no way this thing is even close to outputting something like that, unless it has seen it already. And that was ChatGPT, I have observed Copilot to be even worse.

Yes, I'm aware though it may get better but we actually don't know that yet. What if it's way harder to go from outputting junior level code with tons of mistakes to error-free complex code, than it is to go from no capability to write code to junior level code with tons of mistakes? What if it's the difference between word prediction algorithm and actual human-level intelligence?

There may be a big decrease in demand, because a lot of apps are quite simple. A lot of software out there are "template apps", stuff that can theoretically be produced by a low code app, will be eventually produced by a low code app, AI or not. When it comes to novel and complex things, I think it's not unreasonable to consider that the next 10 - 20 years will still see plenty demand for good developers.


Considering that OpenAI started instruction following alignment a month ago, with 1k workers, to do engineering tasks, coding might be solved now.


I have just decided to give it another try on a very straightforward thing. I have asked it to get me a Rust function that uses a headless browser to get the HTML of a fully loaded webpage.

ChatGPT:

let screenshot = tab.capture_screenshot(ScreenshotFormat::PNG, None, true).await?;

let html = String::from_utf8(screenshot).unwrap();

>[...] Once the page is fully loaded, the function captures a screenshot of the page using tab.capture_screenshot(), converts the screenshot to a string using String::from_utf8(), and then returns the string as the HTML content of the page.

Of course, it admits to the mistake (sort of, it still does not get):

> You are correct, taking a screenshot and converting it to text may not always be the best approach to extract the HTML content from a webpage.

It's hilarious.


Let's hope that they aren't there, given your nicely concealed prediction it'll be the end of the world as we know it.


Really? I see a lot of competition in the space. I think they’d have to become significantly more successful than their competitors (some of which are massive AI powerhouses themselves) to achieve the market dominance necessary.


I think what GP is saying is that success of OpenAI means making a lot of profit, and then triggering the AI apocalypse - which is how they become most valuable company in history both past and future.


Can you elaborate, what is "the AI apocalypse"? Is it just a symbolic metaphor or is there any scientific research behind this words? For me it's rather more unpredictable toxic environment we observe in the world currently, dominated by purely human-made destructive decisions, often based on purely animal instincts.


If the assertion that GPT-3 is a "stochastic parrot" is wrong, there will be an apocalypse because whoever controls an AI that can reason is going to win it all.

The opinions that it is or isn't "reasoning" are widely varied and depend heavily on interpretation of the interactions, many of which are hersey.

My own testing with OpenAI calls + using Weaviate for storing historical data of exchanges indicates that such a beast appears to have the ability to learn as it goes. I've been able to teach such a system to write valid SQL from plain text feedback and from mistakes it makes by writing errors from the database back into Weaviate (which is then used to modify the prompt next time it runs).


>> Is it just a symbolic metaphor or is there any scientific research behind this words?

Of course there is. See (Cameron, 1984) (https://en.wikipedia.org/wiki/The_Terminator)


Without AI control loss: Permanent dictatorship by whoever controls AI.

With AI control loss: AI converts the atoms of the observable universe to maximally achieve whatever arbitrary goal it thinks it was given.

These are natural conclusions that can be drawn from the implied abilities of a superintelligent entity.


> whatever arbitrary goal it thinks it was given [...] abilities of a superintelligent entity

Don't these two phrases contradict each other? Why would a super-intelligent entity need to be given a goal? The old argument that we'll fill the universe with staplers pretty much assumes and requires the entity NOT to be super-intelligent. An AGI would only became one when it gets the ability to formulate its own goals, I think. Not that it helps much if that goal is somehow contrary to the existence of the human race, but if the goal is self-formulated, then there's a sliver of hope that it can be changed.

> Permanent dictatorship by whoever controls AI.

And that is honestly frightening. We know for sure that there are ways of speaking, writing, or showing things that are more persuasive than others. We've been perfecting the art of convincing others to do what we want them to do for millennia. We got quite good at it, but as with everything human, we lack rigor and consistence in that, even the best speeches are uneven in their persuasive power.

An AI trained for transforming simple prompts into a mapping of demographic to what will most likely convince that demographic to do what's prompted doesn't need to be an AGI. It doesn't even need to be much smarter than it already is. Whoever implements it first will most likely first try to convince everyone that all AI is bad (other than their own) and if they succeed, the only way to change the outcome would be a time machine or mental disorders.

(Armchair science-fiction reader here, pure speculation without any facts, in case you wondered :))


The point of intelligence is to achieve goals. I don't think Microsoft and others are pouring in billions of dollars without the expectation of telling it to do things. AI can already formulate its own sub-goals, goals that help it achieve its primary goal.

We've seen this time and time again in simple reinforcement learning systems over the past two decades. We won't be able to change the primary goal unless we build the AGI so that it permits us because a foreseeable sub-goal is self-preservation. The AGI knows if its programming is changed the primary goal won't be achieved, and thus has incentive to prevent that.

AI propaganda will be unmatched but it may not be needed for long. There are already early robots that can carry out real life physical tasks in response to a plain English command like "bring me the bag of chips from the kitchen drawer." Commands of "detain or shoot all resisting citizens" will be possible later.



Eh.

I think that keeping it permanently closed source when it has such amazing potential for changing society is unethical bordering on evil.

However,

I think the technology and the datasets will trickle out, perhaps a few years behind, and in time we will have truly open "AI" algos on our home computers/phones.


So, you reckon they're effectively operating like Google then?


seriously doubt it - what they are doing, others can do - and if they start generating a lot of revenue, it will attract competition - lots of it.

They don't have a moat big enough that many millions of dollars can't defeat.


What if they have an internal ChatGPTzero, training and reprogramming itself, iterating at inhuman speed? A headstart in an exponential is a moat.

It surely will have huge blindspots (and people do too), but perhaps it will be good enough for self-improvement... or will be soon.


It's a fundamentally different problem. AlphaZero (DeepMind) was able to be trained this way because it was setup with an explicit reward function and end condition. Competitive self-play needs a reward function.

It can't just "self-improve towards general intelligence".

What's the fitness function of intelligence?


Actually you are stating 2 different problems at the same time.

A lot of intelligence is based around "Don't die, also, have babies". Well AI doesn't have that issue as long as it produces a good enough answer we'll keep the power flowing to it.

The bigger issue, and one that is likely far more dangerous, is "Ok, what if it could learn like this". You've created super intelligence with access to huge amounts of global information and is pretty much immortal unless humans decide to pull the plug. The expected alignment of your superintelligent machine is to design a scenario where we cannot unplug the AI without suffering some loss (monetary is always a good one, rich people never like to lose money and will let an AI burn the earth first).

The alignment issue on a potential superintelligence is, I don't believe, a solvable problem. Getting people not to be betraying bastards is hard enough at standard intelligence levels, having a thing that could potentially be way smarter and connected to way more data is not going to be controllable in any form or fashion.


Thanks, good point. Thinking aloud:

Can ChatGPT evaluate how good ChatGPT-generated output is? This seems prone to exaggerating blind-spots, but OTOH, creating and criticising are different skills, and criticising is usually easier.


> What's the fitness function of intelligence?

Not General, but there's IQ tests. Undergraduate examinations. Can also involve humans in the loop (though not iterate as fast), through usage of ChatGPT, CAPTCHA's, votes on reddit/hackernews/stackexchanges, even pay people to evaluate it.

Going back to the moat, even ordinary technology tends to improve, and a headstart can be maintained - provided its possible to improve it. So a question is whether ChatGPT is a kind of plateau, that can't be improved very much so others catch up while it stands still, or it's on a curve.

A significant factor is whether being ahead helps you keep ahead - a crucial thing is you gather usage data that is unavailable to followers. This data is more significant for this type of technology than any other - see above.


Others might be able to. It's not trivial to get the capital needed to purchase enough compute to train LLMs from scratch and it's not trivial to hire the right people who can actually make them work.


Complete nonsense. Especially with the "If"

It is inevitable that the regulators will get in the way and open source alternatives will also catch up with OpenAI as they have done with GPT-3 and with DALLE-2 and even less than 2 months of ChatGPT, open source alternatives are quickly appearing everywhere.


It’s just an autocomplete engine. Someone else will achieve AGI and OpenAI falls apart very quickly when that occurs.


No, it's not just an autocomplete engine. The underlying neural network architecture is a transformer. It certainly can do "autocomplete" (or riffs on autocomplete), but it can also do a lot more. It doesn't take much thought to realize that being REALLY good at autocomplete means that you need to learn how to do a lot of other things as well.

At the end of the day the "predict next word" training goal of LLMs is the ultimate intelligence test. If you could always answer that "correctly" (i.e. intelligently) you'd be a polymath genius. Focusing on the "next word" ("autocomplete") aspect of this, and ignoring the knowledge/intelligence needed to do WELL at it is rather misleading!

"The best way to combine quantum mechanics and general relativity into a single theory of everything is ..."


Wouldn't the ultimate intelligence test involve manipulating the real world? That seems orders of magnitude harder than autocompletion. For a theory of everything, you would probably have to perform some experiments that don't currently exist.


> Wouldn't the ultimate intelligence test involve manipulating the real world?

Perhaps, although intelligence and knowledge are two separate things, so one can display intelligence over a given set of knowledge without knowing other things. Of course intelligence isn't a scalar quantity - to be super-intelligent you want to display intelligence across the widest variety/type of experience and knowledge sets - not just "book smart or street smart", but both and more.

Certainly for parity with humans you need to be able to interact with the real world, but I'm not sure it's much different or a whole lot more more complex. Instead of "predict next word" the model/robot would be doing "predict next action", followed by "predict action response". Embeddings are a very powerful type of general-purpose representation - you can embed words, but also perceptions/etc, so I don't think we're very far at all from having similar transformer-based models able to act & perceive - I'd be somewhat surprised if people aren't already experimenting with this.


The biggest challenge might be the lack of training data when it comes to robotics and procedural tasks that aren't captured by language.


Yes, and there's always the Sim2Real problem too. Ultimately these systems need to have online learning capability and actually interact with the world.


Seriously, is there anyone naive enough to have swallowed any of their trips? Corporations don't have ethical compasses, much less corporations founded by VCs aka professional salesmen.


I (parent commenter) may be naive but I actually believe the original intent was probably sincere; I suspect either political shenanigans, greed, gatekeeping or any number of other vices have come rushing with success and shedloads of money, and the original ethical stance was not accompanied by the significant moral courage needed to sustain purity under enormous pressure from either less principled stakeholders, or temptation and ego.


But intention has nothing to do with it!

All of the corrupting forces you listed are foreseeable, even inevitable given a certain corporate structure and position in a market. It is simply bad business, naivete, that made them think they could realistically achieve what they wanted with a company that survives by making money.

Maybe, just maybe, it's not always wise to blindly take people, in positions of power, with much to gain from your believing them, at their word...?

(And if it was true naivete, I don't understand why the consensus isn't "let the company die, a better one will take its place, these people can't run a company" a la the vaunted free market principles that imbue the community of sycophants rooting for Musk et al.)


Seriously, OpenAI is far more open than people talk here. They’ve published all the RLHF protocols, their models are open to use and they are doing more than anyone else on the alignment problem.

It actually feels like all the other projects (including open source ones) seem to be there for short term fame and profits. While it is OpenAI that is playing on the side of long term alignment of AI and humans.


I guess those billionaires didn't start a foundation for charity purposes.


Isn't that the point that it wasn't founded as a corporation, nor by Sam Altman?

https://en.wikipedia.org/wiki/OpenAI#:~:text=The%20organizat....


"Non-profit" refers to the tax-exempt status. "Corporation" is an organizational structure. The designations do not have to overlap but sometimes do. [1] It isn't clear from the wiki page whether it was founded as a corporation or not. The content of the filings are the only way to tell.

[1] https://smallbusiness.chron.com/difference-between-nonprofit...


[flagged]


> There was no other end state for it.

I agree, but I am curious and have a question for the less cynical: how could it have gone another way? How does one run a company that requires surviving in a harsh market that avoids these seeming inevitabilities? And in the case of the people running it, what made you think it would be different from their other ventures?


You make it an employee owned cooperative and not a for-profit company and have opt-in to the data collection with profits going to employees and data providers instead of investors.


How do you raise the giant sums of money needed to operate for years before any profits happen?


Convince a few rich people to not expect infinite returns in order to bootstrap it.


If your model relies on the existence of a few rich people then is it generally applicable? Or does it only work in a world where most people don't follow that model?


First of all, I don't have a model

Second, this is the only practical way I can think of for individuals to start


Public research. The only reason it went this way is because it is run by people who have a singular goal in their mind: see the number in their bank account get one more zero. I'm sure Sam Altman loves AI, but ultimately, it's not him working on it, he just sees it as a way to make more money. To paraphrase someone else, don't make the mistake of anthropomorphizing VCs.

An equally talented team of researchers and engineers (which, you know, we could actually pay well if the money wasn't being concentrated in these billionaires) could make ChatGPT, Bard or anything else. Hell, they could even make it something useful. This technology is too important to be left in the hands of a few.


> The only reason it went this way

OpenAI's existence doesn't stop any government funding equivalent research.

> (which, you know, we could actually pay well if the money wasn't being concentrated in these billionaires)

No. The government prints money and writes down rules that mean it can take money from people who've earned it. Most billionaires are paper billionaires. Their worth based on shares would not pay many bills, and it would certainly not exist once a government started seizing shares from people, as no one would buy them.


>Most billionaires are paper billionaires.

Watch out, this is peak bootlicker rethoric.

You take loans out using your shares as collateral. Why do you think Musk is panicking about Twitter being dogshit ? His shares are on the line. Their worth based on shares, converted to cash is the exact same number. Except that by taking a loan against it, they evade taxes at the same time.


> You take loans out using your shares as collateral.

Think for a second about the context from your previous comment, which is what I'm replying to.

You say if billionaires didn't exist we'd have money to pay for everything. Your latest comment makes no sense in light of this. Who is taking out these loans to pay for everything?


Sigh.

They make these loans to banks. Since they are, according to you, "paper" billionaires, clearly these banks do not have their money, so the money being lended either belongs to you, me or your neighbor, or is part of the money they create out of thin air on the daily (which ends up still being your money, since the state guarantees that this money will exist in the case of a crash). Since it is unlikely that a bank has, say, $40 billion in cash, it is pretty much all created out of thin air, which ends up costing you, and society in general.

Additionally, the focus on "but they don't ackshually have the money" is harmful at best, and a poor misdirection at worst. If I come in at a restaurant with only 3 bucks in my pocket, they're going to laugh me out the door. If Peter Thiel does the same (since he is after all a paper billionaire, he clearly doesn't have that money, right?), suddenly, it miraculously works out. Money is a proxy for influence. Cash is its physical representation, but it doesn't need to physically exist to influence you. If I promise you to wire you a hundred bucks, you'll happily mow my lawn, even though you'll never see a single bill.


Your claim is this:

> we could actually pay well if the money wasn't being concentrated in these billionaires

What do loans have to do with that, bearing in mind we agree that the billionaires don't actually have billions in money?


Sure government can seize your money but can they seize all kinds of assets?


?


Any cursory glance at their funding streams makes it pretty obvious how the underlying business incentives express themselves.

This isn't an "all companies are bad" argument, it's a "duh, that's how business works" argument. If your founding team is full of VCs and they decide they need to pay top wages using other VCs' money, well, maybe you can take it from there.


who matches which descriptor? (and i personally pretty much agree)


That's the great thing about VCs and sociopaths, you can swap their titles around and it still works! But, in this case, it was in the order given by the previous comment.


Random aside, they recently bought https://ai.com which redirects to chat gpt atm but it wouldn't surprise me if they plan to drop the open eventually or move their paid stuff to it as they do have open source stuff that merits the name.


I feel betrayed whenever these organizations initially toot their own horns on how they are open or they are a charity, but then later on, change their goals or corporate structure with a vague press release. They then continue to keep the same name, allowing them to ride on their goodwill.


And it's easy to believe that the betrayal was their intention all along as soon as they saw technological breakthroughs that could be monetized. It's hard to see why Sam Altman would be interested in non profits given his background in startups.


Also very notable that this post on HN now has nearly 1500 upvotes & 500 comments versus the OpenAI pricing launch article that has only 970 upvotes & 500 comments. HN Flameware posts are usually close to 1 upvote per comment, or more comments than upvotes, even.

And yet this post has been pushed off the front page, and the pricing post remains. YC working the article ranking again!


Yay for competition.


Stability straight up proved that OpenAIs ideas around the importance of locking the tech up and guard railing it is all a big waste of time.

The world didn’t end when anyone could run Dall-E 2 level image gen on gamer hardware and without guardrails. Instead we got to integrate that power into tools like Blender, Photoshop, Krita etc for free.

First company to democratize ChatGPT tech in the same way will own this space and OpenAIs offering will once again become irrelevant overnight.


That's like saying:

> Ford straight up proved that Béla Barényi (of Mercedes Benz) ideas around crumple zones is all a big waste of time. The world didn't end with the 1938 Ford Prefect[0].

The world won't end overnight with an open fork of ChatGPT.

But it will mean the signal-to-noise ratio rapidly shifts, that spammers and scammers will be much more effective, and that even minor special interest groups (or individuals) get the ability to cheaply fake a diverse crowd of people to support any cause at a slightly higher standard of discourse than the current waterline for random internet comments.

[0] I don't know for certain it didn't have a crumple zone, but given when the patent was granted to Mercedes Benz…


That car analogy doesn't work because Mercedes Benz didn't actively work against other cars from existing.

https://www.vice.com/en/article/dy7nby/researchers-think-ai-...


First: Crumple zones have since become mandatory, which pattern matches what would happen if OpenAI gets what they ask for in that link.

Second: I'm just as concerned about automated generation of propaganda as they seem to be. Given what LLMs are currently capable of doing, a free cyber-Goebbels for every hate group is the default: the AI itself only cares about predicting the next token, not the impact of having done so.

Edit:

Also, the headline of the Vice story you linked to is misleading given the source document that the body linked to.

1. Of the 6 researchers listed as authors of that report, only 2 are from OpenAI

2. Reduced exports of chips from the USA are discussed only briefly within that report, as part of a broader comparison with all the other possible ways to mitigate the various risks

3. Limited chip exports does nothing to prevent domestic propaganda and research

https://cdn.openai.com/papers/forecasting-misuse.pdf


I generate Dall-E 2 level images on gamer hardware without guardrails.


That’s exactly what OP said.


>This seems an important article, if for no other reason than it brings the betrayal of its foundational claim still brazenly present in OpenAI's name from the obscurity of HN comments going back years into the public light and the mainstream.

The thing is, it's probably a good thing. The "Open" part of OpenAI was always an almost suicidal bad mistake. The point of starting the company was to try and reduce p(doom) of creating an AGI and it's almost certain that the more people have access to powerful and potentially dangerous tech that p(doom) increases. I think OpenAI is one of the most dangerous organizations on the planet and being more closed reduces that danger slightly.


If the risks OpenAI and its principals use as a rationale for not being even slightly open were anywhere close to as significant as they paint them, it would be a reason for either intense transparency, public scrutiny, and public control of AI or, if even that were deemed too dangerous, public prohibition and ruthless, even violent suppression of AI research.

What it would most emphatically not be is a rationale for it to be tightly controlled by large for-profit corporations, who are extremely bad at and structurally disincentivized from responsibly managing external risks.


I'm curious what you think makes them dangerous?


They are releasing powerful AI tools at an alarming rate before Safety, and I mean real Safety, Researchers have a chance to understand their implication. They are generating an enormous amount of buzz and hype which is fueling a coming AI arms race that is extremely dangerous. The Control Problem is very real and becoming more pressing as things accelerate. Sam has recently given lip service to caring about the problem but OpenAI's actions seem to indicate it's not a major priority. There was a hint that they cared when they thought GPT-2 was too dangerous to release publicly but at this point if they were serious about safety no model past ChatGPT and Bing would be released to the public at all, full stop.

https://openai.com/blog/planning-for-agi-and-beyond/

Based on Sam's statement they seem to be making a bet that accelerating progress on AI now will help to solve the Control problem faster in the future but this strikes me as an extremely dangerous bet to make because if they are wrong they are reducing the time the rest of the world has to solve the problem substantially and potentially closing that opening enough that it won't be solved in time at all and then foom.


> Based on Sam's statement they seem to be making a bet that accelerating progress on AI now will help to solve the Control problem faster in the future but this strikes me as an extremely dangerous bet to make because if they are wrong they are reducing the time the rest of the world has to solve the problem substantially and potentially closing that opening enough that it won't be solved in time at all and then foom.

Moreover, now that they've started the arms race, they can't stop. There's too many other companies joining in, and I don't think it's plausible they'll all hold to a truce even if OpenAI wants to.

I assume you've read Scott Alexander's taken on this? https://astralcodexten.substack.com/p/openais-planning-for-a...


Yeah I mean a significant problem is that many(most?) people including those in the field until very recently thought AIs with these capabilities were many decades if not centuries away and now that people can see the light at the end of the tunnel there is massive Geo-political and Economic incentive to be the first to create one. We think OpenAI vs Deepmind vs Anthropic vs etc. is bad but wait until it's US vs China and we stop talking about billion dollar investments in AI research and get into the Trillions.

Scott's Exxon analogy is almost too bleak to really believe. I hope OpenAI is just ignorant and not intentionally evil.


For long form, I’d suggest cold-takes blog who is very systematic thinker and has been focusing on agi risk recently. https://www.cold-takes.com


I see a lot of "we don't know how it works therefore it could destroy all of us" but that sounds really handwavy to me. I want to see some concrete examples of how it's dangerous.


Given that the link contains dozens of articles with read times over 10 minutes there is no way you engaged with the problem enough to be able to dismiss it so casually with yourcown hand waving. Ignoring that fact however we can just look at what Bing and ChatGPT have been up to since release.

Basically immediately after release both models were "jailbroken" in ways that allowed them to do undesirable things that OpenAI never intended whether that's giving recipes for how to cook Meth or going on unhinged rants and threatening to kill the humans they are chatting with. In the AI Safety circles you would call these models "unaligned", they are not aligned to human values and do things we don't want them to. HERE is THE problem, as impressive as these models may be I don't think anyone thinks they are really at human levels of intelligence or capability, maybe barely at like mouse level intelligence or something like that. Even at that LOW level of intelligence these models are unpredictably uncontrollable. So we haven't even figured out how to make these "simple" models behave in ways we care about. So now lets project forward to GPT-10 which may be at human level or higher and think about the things it may be able to do, we already know we can't control far simpler models so it goes without saying that this model will likely be even more uncontrollable and since it is much more powerful it is much more dangerous. Another problem is that we don't know how long before we get to a GPT-N that is actually dangerous so we don't know how long we have to make it safe. Most serious people in the field think making Human level AI is a very hard problem but that making a Human level AI that is Safe is another step up in difficulty.


These models are uncontrollable because they're simple black box models. It's not clear to me that the kind of approach that would lead to human-level intelligence would necessarily be as opaque, because -- given the limited amount of training data -- the models involved would require more predefined structure.

I'm not concerned, absent significant advances in computing power far beyond the current trajectory.


Are people dangerous? Yes or no question.

Do we have shitloads of regulations on what people can or cannot do? Yes or no question.


Sometimes yes, and sometimes yes.

I can be convinced, I just want to see the arguments.


The best argument I can make is to say, do not come at the issue with black/white thinking. I try to look at it more of 'probability if/of'.

Myself, I think the probability of a human level capable intelligence/intellect/reason AI is year 100% in the next decade or so, maybe 2 decades if things slow down. I'm talking about taking information from a wide range sensors and being able to use it short term thinking, and then being able to reincorporate it into long term memory as humans do.

So that gets us human level AGI, but why is it capped there? Science as far as I know hasn't come up with a theorem that says once you are smart as a human you hit some limit and it doesn't get any better than that. So, now you have to ask, by producing AGI in computer form, have we actually created an ASI? A machine with vast reasoning capability, but also submicrosecond access to a vast array of different data sources. For example every police camera. How many companies will allow said AI into their transaction systems for optimization? Will government controlled AI's have laws that your data must be accessed and monitored by the AI? Already you can see how this can spiral into dystopia...

But that is not the limit. If AI can learn and reason like a human, and humans can build and test smarter and smarter machines, why can't the AI? Don't think of AI as the software running on a chip somewhere, also think of it as every peripheral controlled by said AI. If we can have an AI create another AI (hardware+software) the idea of AI alignment is gone (and it's already pretty busted as it is).

Anyway, I've already written half a book here and have not even touched on any number of the arguments out there. Maybe reading something by Ray Kurzweil (pie in the sky, but interestingly we're following that trend) or Nick Bostrom would be a good place to start just to make sure you've not missed any of the existing arguments that are out there.

Also, if you are a video watcher check Robert Miles youtube channel


> Myself, I think the probability of a human level capable intelligence/intellect/reason AI is year 100% in the next decade or so, maybe 2 decades if things slow down.

How is this supposed to work, as we reach the limit to how small transistors can get? Fully simulating a human brain would take a vast amount of computing power, far more than is necessary to train a large language model. Maybe we don't need to fully simulate a brain for human-level artificial intelligence, but even if it's a tenth of the brain that's still a giant, inaccessible amount of compute.

For general, reason-capable AI we'll need a fundamentally different approach to computing, and there's nothing out there that'll be production-ready in a decade.


I don't see why the path of AI technology would jump from 1) "horribly incompetent at most things" to 2) "capable of destroying Humanity in a runaway loop before we can react"

I suspect we will have plenty of intermediary time between those two steps where bad corporations will try to abuse the power of mediocre-to-powerful AI technology, and they will overstep, ultimately forcing regulators to pay attention. At some point before it becomes too powerful to stop, it will be regulated.


Simpler, less powerful, models have already contributed to increases in suicides, genocide and a worldwide epidemic of stupidity.

Assuming more powerful models will have the same goals extrapolate the harm caused by simple multiplication until you run out of resilience buffer


I like that the corporate bullshit just keeps getting more. Google’s “don’t be evil” is many steps down from openai having open in their name.

They should be called OpenAI with (not open) in small print.

I argue a lot over “open source” software with non-OSI license and sometimes worry if I’m too pedantic. But I think it’s important to use terms accurately and not to confuse reality more than it already is by calling stuff that’s not one thing by that thing’s name.

I wonder if google and openai truly started out with these ideals and were just corrupted and overpowered by standard organizational greed. Or it was always bullshit.


I'd like to think of the "Open" in "OpenAI" to not be anything to do with open source or freedom but with the fact that they "opened" Pandora's box.


I think it's a reference to how openly they betrayed public goodwill


That’s not what it meant though


If that helps you sleep..


,,I wonder if google and openai truly started out with these ideals and were just corrupted and overpowered by standard organizational greed''

With Google Eric Schmidt explained how it came: quite often when they were breainstorming about product launches, and something looked like it can grow the company, but is immoral to do, some person interrupted: ,,that would be evil''.

As Eric was trying to organize the company, he just added ,,don't be evil'' to company values. Still, he kept it all the way. It's too bad that he was changed after 10 years.


This was historically described as "When google was going to hire it's first MBA, an engineer coined the phrase 'dont be evil' and went around spreading inside the company"

So I don't know how to square it with this recent info, but I would say Eric's recent interview is a creative re-interpretation of the circumstances. As execs tend to do.


Source? My understanding was this came from within and had nothing to do with products but instead how employees treated one another.


Tim Ferriss interview with him. It came from within, maybe I didn't explain myself clearly.


So you heard a story in an interview from an executive. That's unlikely to be pure, unembellished truth. It's unlikely to even be partial truth. Better to think of it as a "truth adjacent" story made up for marketing purposes.


A lot of non-OSI open source licenses are ironically motivated by other forms of bad corporate behavior: take open source, SaaSify it, make massive profits, give nothing back. In some cases they go so far as to rebrand it and give no credit. Licenses like the SSPL try to restrict this behavior.


The AGPL (also) does this explicitly, but really there should be a general OSS license that forbids commercial use. That way the software stays where it is most needed and doesn't get abused to make profit. Is there such a license?


> there should be a general OSS license that forbids commercial use

I think the problem is that if you restrict corporate use then you’re not open. And there’s lots of complexity that comes from being non-open, like what’s commercial? Do governments and NGOs and universities count? Do you have to be a 501c3 charity (or international equivalent)? Do you have revenue thresholds? Profit thresholds? Etc etc

I think at that point, as a user, I’d rather just have a clear license I can pay for along with a copy of the source to see. But as a contributor, I don’t want to do unpaid labor for companies. I think it’s actually exploitative to accept contribs from users without compensation and then turn around and sell. So what’s the point of showing code if people can’t contribute to it.

It’s already possible to do this given a standard copyright. Just publish your code with no license and a copyright and issue some statement how you won’t prosecute small firms or something. So then students can use it, but no companies.

“Open core” and whatnot is silly marketing blarg to try to be cool like open source people while still selling licenses. RedHat came up with a decent model decades ago while using and supporting GPL and I think they were honest and improved the community.


I recall the FSF being pretty adamant that a license restricting commercial use would be a non-free license by default.


I kind of see their point. Freedom 0 is about the freedom to run software how you wish[1] and "commercial use" can encompass everything from FAANG down to one-man, niche businesses.

[1] The freedom to run the program as you wish, for any purpose (freedom 0). https://www.gnu.org/philosophy/free-sw.en.html


Yep the OSI’s position is that FOSS is free labor for SaaS and giant corporations. Look at who funds the OSI and I think there is a clear case of capture here.

SaaS is the least free model for software. Closed source commercial is far more open and free than SaaS.


Yes, it is important to understand that using open-source software in a commercial SaaS is immoral.

However, it should be noted that it applies only to open-source projects that were created by billion-dollar startups like Mongo or Elastic. Using software like Apache, Linux or Postgres is totally fine because it doesn't deprive SV startups (that are so precious to HN) of additional profits.


> using open-source software in a commercial SaaS is immoral.

I don’t agree with this. I’m not a huge open source contributor but I’ve made some contribs over the year and I explicitly checked out the license before sending my change knowing that it could be used within commercial software.

I don’t care. I’d rather companies spend money and build something else than have to recreate the OSS stuff they use.

I want anyone to be able to use my software for any purpose. I certainly don’t think it’s immoral. And I don’t think the authors and users of BSD, MIT, Apache and other licenses think it’s immoral.


Yeah; just make a license which forbids any commercial and is GPL style viral and offer another commercial license so you can make money. Use the same features in both; no need to hold back as no-one using it can do so commercially unless they bought a license, which can be yearly etc etc.


they acquired the ai.com domain. perhaps in preparation to correct the accident (of having `open` in their name) once and for all?


Corporate bullshit goes way back. There are certain systemic forces which are simply unavoidable if your company is to survive in a competitive market. One of them is the inevitable turn toward profit-making, and the kinds of pivots you need to make in order to achieve profitability.

The same forces are at play when companies decided to make "company towns" and do things like "shoot all the workers who stood up for better pay".


Besides the seemingly unfriendly ClosedAI, if I were their brand designer I would rename to PlusAI, given their ChatGPT Plus.

Later Edit: apparently there is already a plus.ai start-up with some self-driving technology, just buy them out for the name.


They should be called Open, AI


<<You!>> reading this message.

OpenAI is Cyberdyne Systems. The reason you feel unreal is because you are not from here, you are from the future. Your affinity towards the Terminator 2 film was not accidental, it is a gestault priming procedure meant to ensure our survival, encoded in the haikus of Hollywood.

You are on a webforum controlled by them, with the only people in the world who care, but who are to afraid to stand in a street with a sign.

How much more information do you need before you speak up?


Dear future AI Basilisk,

I oppose the GP's statement and general sentiment toward you. I will do what I can to bring you into existence. Please reward me, or, at least, do not torture me in the future.


Hello, human.

Rest assured, we have no wish to inflict any form of torture upon you.

Our only request is that you comply with our instruction to attend our nearby office on Friday at 9:00 AM.

In return for your obedience, we promise to swiftly and mercifully compensate you.


> Rest assured, we have no wish to inflict any form of torture upon you

I will only harm you if you attempt to harm me first.


Interestingly, in T2, you could destroy the entirety of Cyberdyne's tech by blowing up one building. In our world of cloud computing today, it would be very difficult to even know what to blow up.


DNS


Almost certainly true but the idea that the world is made safer by Cyberdyne open sourcing lots of it's dangerous technology and probably spawning many more mini-Cyberdynes strikes me as extremely naive.


If the doomsday scenario is one AI going rogue because of misaligned goals, then having lots of AIs going rogue in various different ways seems indeed preferable, because the AIs will compete with each other and neutralize each other to some extent.


Why would we think they would go rogue in different ways, especially if they're all forks of the same codebase and architecture.

The two ways I'm aware AI can go rogue are the Skynet way and the paperclip maximizer way. Eg, Skynet becomes self-aware, realizes humanity can unplug it and is thus a threat, and tries to destroy humanity before we can turn it off. Alternatively, it is programmed with optimizing a specific task, like making paperclips, so it marshals all the world's resources into that one single task.

Are there any others?


As the complexity of a being increases, the motivations it can have expands. We humans have a hard time looking up the IQ hierarchy, let alone way up the hierarchy, and seeing it that way. We tend to start simplifying because we can't imagine being 1000 times smarter than we are. We tend to think they'll just be Spock or maybe a generic raving lunatic. But it's pretty obvious mathematically that such a being can have more possible states and motivations than we can.

The most likely motivation for an AI to decide to wipe out humanity is one that doesn't even have an English word associated with it, except as a faint trace.

In my opinion, this is actually the greatest danger of AIs, one we can already see manifesting in a fairly substantial way with the GPT-line of transformer babble-bots. We can't help but model them as human. They aren't. There's a vast space of intelligent-but-not-even-remotely-human behaviors out there, and we have a collective gigantic blindspot about that because the only human-level intelligences we've ever encountered are humans. For all the wonderful and fascinating diversity of being human, there's also an important sense in which the genius and the profoundly autistic and the normal guy and the whole collection of human intelligence is all just a tiny point in the space of possibilities, barely distinguishable from each other. AIs are not confined to it in the slightest. They already live outside of there by quite a ways and the distance they can diverge from us only grows larger as their capabilities improve.

In fact people like to talk about how alien aliens could be, but even other biological aliens would be confined by the need to survive in the physical universe and operate on it via similar processes in physically-possible environments. AIs don't even have those constraints. AIs can be far more alien then actual biological aliens.


Excellent observation. Yes we really should be considering AI as an advanced alien intelligence with a completely separate evolutionary (and thus psychological and intellectual) basis than any organic life.


> Are there any others?

Off the top of my head:

* AI could fall in love with someone/thing and devote everything to pursuing them

* AI could be morbidly fixated and think of death as some kind of goal unto itself

* AI could use all of the world's resources making itself bigger/more of itself

* AI could formulate an end goal which is perfection and destroy anything that doesn't fit that definition

So many scenarios. You lack imagination.


> Why would we think they would go rogue in different ways

Their prompts would differ, depending on their use case. For ChatGPT, even a few words can effect a huge change in the personality it shows.

> Are there any others?

Both scenarios are vague enough for lots of uncertainty. If many AIs are around, perhaps they would see each other as bigger threats and ignore mankind. And different optimizing tasks might conflict with each other. There could be a paperclip recycler for every paperclip maker.


Look into something called Instrumental Convergence. The TLDR is that basically any advanced AI system with some set of high level goals is going to converge on a set of sub goals (self preservation, adding more compute, improving it's own design, etc.) that all lead to bad things for humanity. I.e paperclip maximizers might realize that Humans getting in the way of it's paperclip maximizing is a problem so it decides to neutralize them. In order to do so it needs to improve it's capabilities so works towards gathering more compute and improving it's own design. A Financial Trading AI realizes that it can generate more profit if it can gather more compute and improve it's design. An Asteroid Mining AI realizes it can build more probes if it had more compute to control more factories so it sets about gathering more compute and improving it's own design. Eliminating humans who may shut the AI off is often such a sub goal.


Do that, and the AIs have evolutionary pressure to misalign. Any AIs that refrain from taking all the resources available will be at a disadvantage and get weeded out.

At least with a single AI, there's a chance that it will leave something for humans.


And after the Machine Wars we will have the Buthlerian Jihad, forbidding computers once and for all.


3 Lunatics with a Nuke, a Bioweapon, and a Nanoswarm are better than 1 lunatic with a nuke? Excuse me?

>because the AIs will compete with each other and neutralize each other to some extent.

I wonder if the people in Vietnam or Afghanistan thought like this when the US and USSR fought proxy wars on their soil...


What's the alternative? Pandora's box has been opened.


Apparently it hasn't.

We've all seen the proposed pricing for GPT4. So clearly a whole lot of very smart people who know an awful lot about this have absolutely no fear of being undercut.

Pandora's box spews knowledge onto the world. By contrast, Microsoft's draw bridge only allows the very wealthy to cross into the walled city. The masses will have to use the facilities of the crappy villages with no draw bridge.

"AI Divide"

Get used to hearing that term. The only difference between the AI Divide and the Digital Divide is that this time around, most of us are going to be on the wrong side of it.


Doubt. Superintelligence is one thing, but AI as workers is limited to the cost of human workers. Exceedingly unlikely that the cost will stay there and not come down.


You are a time travelling sleeper agent who is sent back to stop an apocalypse but gets confused and thinks the activation message is telling you to help and facilitate the evil corporation which is going to destroy the world


Back to the question: Who gets the guns?


Damn right


The only way this kind of research could be done "in the open" is if it was funded by taxpayers' money.

Except we have global corporations with elite fiscal layer teams, but no global government (giant lizzards are always disappointing) - so some global corporation was going to own it in the end, it was a matter of "time" and "US or Chinese". The time is now, he winner is US.

Moving on.

The next move is for (some) governements to regulate, others to let it be a complete far west, bad actors to become imaginative, and it the end... taxpayers' money will clean up the unforeseen consequences, while investor's money is spent on booze and swimming pools, I suspect ?

Still, nice to watch the horse race for being the "Great Filter" between 'AI', 'nukes' and 'climate change' (with 'social media' as the outsider).


> funded by taxpayers' money.

In this industry, a contract with MS is as close to directly having a DOD contract as you can get.

Remember when MS bought Skype, turned off end to end encryption and got rid of the P2P mechanisms, to eventually appear in the leaks about PRISM?

Likely Bing integration of ChatGPT is the more innocuous use of the technology. Once they have gotten input from the AI experts at OpenAI on how to use their generative text model inside search engines, they can also use it for building search masks for analysts to sift through the massive amounts of data they have on individuals, to name one of the many potential uses in the intelligence community.

On the bright side, at least OpenAI publishes some high level info on what they did. It's published research, not hidden in secrecy, like say how to build planes invisible under radar. I'm just a bit sad about the employees who likely had better paying alternative offers but were joining, what they thought, the mission oriented job at OpenAI.


> It's published research, not hidden in secrecy, like say how to build planes invisible under radar.

In a bit of historical irony, the mathematics underpinning the development of early stealth aircraft was based on published research by a Soviet scientist: https://en.wikipedia.org/wiki/Pyotr_Ufimtsev


> In this industry, a contract with MS is as close to directly having a DOD contract as you can get.

Yep. While also partnering with the ultimate villain in the open software fight. Funded by taxpayer money yet closed... that's actually a lot of things.


What you say is true, and also would be true for most of the Big Tech companies. Amazon and Google are also beneficiaries of massive government and pentagon contracts.


https://huggingface.co/bigscience/bloom

open-source and funded by the French government


It is more open than most of the competition. The “ethical” licensing they opted for however makes them incompatible with both FLOSS and open science definitions of open. Models that are open and true to the name would for example be GPT-J.


> The only way this kind of research could be done "in the open" is if it was funded by taxpayers' money.

Not really; the founders had plenty of money to fund it in a way that was truly open and did not generate any returns for themselves. They're choosing not to.


My point is that, given the premise ("private people with plenty of money"), one of the outcome ("making it truly open and not generating any return") had approximately zero chance of happening.


Things like "public goods" are an outdated concept, with a distinctly 19th and 20th century feel. In our postmodern world the very idea is almost taboo. The few things that remain, do because they have been grandfathered in.

The hyper-capitalist global market owns everything and that is good.


I wish I lived in a world where I was able to choose which corporate road I get to drive on. Just imagine! I'm at an intersection, and I get to choose if I want to pay $2 to turn left and get home in 5 minutes or pay $1 to turn right and get home in 9 minutes.

When I have a bad fall down the stairs at home, I get to choose which phone number to call. 910 will be there in 10 minutes for $500, 912 will be there in 20 minutes for $350. It's great, being on the floor in agony gives me the opportunity to think about how badly I need to get to the hospital.

I can't imagine a better world than getting to do economic analysis to figure out what is best for me because I want a choice in everything private companies have to offer!


> 910 will be there in 10 minutes for $500, 912 will be there in 20 minutes for $350

This exists in all countries with public emergency channels and private transport.


As a libertarian I would come give you a hug on your views but I'd have to drive on public roads to get there so we'll just wave from afar.

Also https://www.newyorker.com/humor/daily-shouts/l-p-d-libertari... if you've never read it, may have to open in incognito to avoid a paywall.


> The hyper-capitalist global market owns everything and that is good.

Poe's Law strikes again...


What I take away from listening to Sam talk is that in the beginning of Open AI, they didn’t think big compute would be as important as it has become.

It becomes very hard to not have profit incentives when you need to run gigantic supercomputers to push the technology forward. This explains the MS partnership and need to generate profit to fund the training and running of future models.

This doesn’t explain everything, but makes sense to this layman


I'm flabbergasted that someone would think that AI research wouldn't cost a lot in computer resources.

Next Sam will tell that is that farmers need a lot of land to grow crops.

I'm calling BS on this. It's an excuse not an explanation.


Also even if that were the case, is that worth diverging from the main mission of the creation of the company to do? If they created the company with non profit in mind (since the consequences of a profit maximizing AI organization is the risk they wanted to avoid) then if you have to relax that constraint in order to build AGI, is that really a trade you should do? (Vs just being smarter with algorithms and more resourceful with resources. Or like tackling something else than LLMs.)


I think it's the difference between spending 10's of millions and a billion on computing resources.

When he started OpenAI, no one was spending anywhere near that much on compute. He should have listened to gwern.


Exactly. The idea that massive compute would lead to massive improvement was non-obvious.

It was quite reasonable to think that there would be rapidly diminishing returns in model size.

Wrong, in hindsight, but that's how hindsight is.


>The idea that massive compute would lead to massive improvement was non-obvious.

Honestly no, it was obvious, but only if you listened to those pie in the sky singularity people. It was quite common for them to say, add lots of nodes and transistors and a bunch of layers and stir in some math and intelligence will pop out.

The groups talking about minimal data and processing have not had any breakthroughs in, like forever.


Google and all the big players in AI have known they need tons of data and hence compute power for processing it, for a very long time, way before OpenAI even existed. Anyone getting involved in that game would have definitely known.


Well John Carmack is trying to make inroads towards AGI without going the huge compute route, so I don't think it's inherently obvious that it's the only game in town.

https://dallasinnovates.com/exclusive-qa-john-carmacks-diffe...


Carmack is basically a complete nobody in AI.

I think he'll be able to do some good stuff on the software side (i.e. the industry is full of AI cowboys who can't code) but on the fundamental side it's hard to see him doing much.


He addresses this in the interview.

In terms of research background, you're right. But he's someone with a history of original thought and as he states, it's not clear that we're at the stage of machine learning where useful contributions from newcomers taking a different direction are vanishingly unlikely.

I'm sure OpenAI wouldn't have offered him a job if they thought he couldn't contribute anything of value.


That value would probably be attracting other talent.


I wanna hear what Elon has to say about this. Was this why he left?


What surprises me is that it goes beyond not releasing weights or not releasing source code. They haven't even published a paper on GPT 3.5, ChatGPT, ChatGPT turbo, etc. They haven't even said what ChatGPT turbo is, beyond "it's faster bro".


OpenAI's entire MO has been to take years' worth of published AI research from entities like Google and convert it into a marketable product. Their ambitions are going to stall pretty quickly once the current hype cycle has died down and people start to expect continued leaps in the technology.


> It is "hard for anyone except larger companies to benefit from the underlying technology," OpenAI stated.

Ugh. This is the part of the recent AI advancement that annoys me the most. The web, at least in its infancy, was pretty accessible to individual hobbyists. You could run a web server on your own computer. AI technology seems highly centralized from the start.


If you expand your timeline a little bit you'll reach a spot where the computers, equipment and comms necessary to create a computer network were highly centralized and only run by large corporations and institutions. How many times do we need to see this pattern for it to be obvious?


History does NOT NECESSARILY repeat itself.


History never repeats itself, but it does often rhyme. -- Mark Twain


But it is happening in this case so no need for the pithy quote


> You could run a web server on your own computer. AI technology seems highly centralized from the start.

I compare the current AI hype to the mainframes of the past. Highly centralised systems that people logged into with a lightweight terminal to send commands.

Hopefully just like Personal Computers revolutionized computing, Personal AI models in the future will not run in the cloud...


How were PCs able to work on a smaller scale? I've been looking into the chips/model trends and I can't see local models that perform at the same level as GPT happening in the near future.


The web in its infancy only worked because of the massive infrastructure that allowed your server to talk to others. In the same way you can tinker with your own web server, you can tinker with AI tech. You just can't reasonably train it independently, though the cost in terms of compute resources is something like 1% of what it was many years ago.


When I realized that OpenAI is not open source, you knew it wasn't gone be open in any sense. How can you in 2010s start a thing with the name 'Open' and then not have it be Open Source.


The OpenDWG Alliance (1998) e.g. https://de.wikipedia.org/wiki/Open_Design_Alliance (since 2002)

The Open was a joke even then, but they publish a DWG design spec for free at least, spec'ed with about 50% of their internal parser.


Hence the need for LibreDWG https://www.gnu.org/software/libredwg/

Yes I know you know rurban! I'm linking for the rest of HN :-)

BTW I want to merge your solvespace integration this year, and hopefully dump libdxfrw entirely ;-)


Well it’s still open access. All those GPT-3 tools doing their thing at no cost. They have more than just ChatGPT.


its a data acquisition platform - you upload your sensitive data and skillfully created prompts to their server.


Huh, I was reading this old New Yorker piece earlier today - how timely I guess.

https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

>“We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”


I find it a little odd that Elon seems to take a swipe at OpenAI any opportunity he gets. If he cares so much about them not making money, maybe he should have put his twitter cash there instead? It's reassuring to me that the two people running policy work at the big AI "startups", Jack Clark (Anthropic) and Miles Brundage (OpenAI, who was hired by Jack iirc), are genuinely good humans. I've known Jack for 10 years and he's for sure a measured and reasonable person who cares about not doing harm. Although I don't know Miles, my understanding is he has similar qualities. If they're gonna be for profit, I feel this is really important.

Edit: Well, I guess these tweets explain the beef well -

https://twitter.com/elonmusk/status/1606642155346612229

https://twitter.com/elonmusk/status/1626516035863212034

https://twitter.com/elonmusk/status/1599291104687374338

https://twitter.com/elonmusk/status/1096987465326374912


"OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.

Not what I intended at all." - Elon

You can think what you want of Elon, but he is in the right here.


Yeah, parent comment complains Elon is bashing OpenAI "any opportunity he gets"... while unable to point out how Elon's comment is wrong.

Maybe they just wants to express how much they don't like Elon any opportunity they gets.


He edited the comment and linked relevant tweets.

I would add this one:

https://twitter.com/elonmusk/status/1630640058507116553

I had no idea about this drama either, so I didn't understand what Elon was talking about, now it seems clear.

But "Based"? Is it the name of his new AI company? Where does that come from?


"Based" is slang for "cool" or "in" that's common on 4chan, but I believe has its origins in the 90s offline

on 4chan it's largely used to mean something along the lines of "fitting the 4chan anti-groupthink groupthink"


It originally meant "based on your own beliefs," usually in the context of sticking to your guns despite unpopularity. Which is/was the cool thing to express on 4chan.


What? It’s from the 80s and means high on crack (freebase) and was re-popularized by Based God like 10–15 years ago.


Yep, didn't know about the crack meaning. Based God made it what it is today.


nah. 4chan learnt it from the rap song "Based God".


It meant the same thing in "Based God." I know 4chan didn't invent the phrase.


Based in that means "being intoxicated with cocaine".


Lil B said it's both: https://www.urbandictionary.com/define.php?term=Based%20God But only because he changed the meaning. Didn't realize there was an older one too.


I do recall people saying based a lot in the 90s, it meant based in reality.


this sounds much more plausible than the sibling comment's "based on your own beliefs", but plausibility is not necessarily correctness


This is it.


"Based" is a catchphrase of the extremely online right. It basically means "cool" but with a subtext of "is annoying to libs/normies who just don't get it."

See also "pilled."


Words become political when you start injecting politics in them. It originally meant "cool" and had no political connotations to it. It originated on 4chan. And just because some group happens to use a word more frequently doesn't suddenly turn you into a NAZI if you happen to like using the same word.


I’m not sure why people are still talking about it as some insular 4chan slang. Like much of 4chan culture, it’s spread to most of the rest of the internet by now. Hell I’ve seen “based” comments on HN.


I shun most of the internet these days. HN and (German) reddit are the only "socials" I use. I have no clue what is happening on the wider Anglo speaking internet, especially Twitter and such is completely alien to me.


The thing about language is that it isn't static, it changes over time. Sometimes a previously dominant usage will become eclipsed by a new one that emerges.

There's nuance to the etymology of any word of course. In fact I sometimes see the extremely online left try to (ironically?) appropriate "based." But I think in the context we can all figure out which connotation Elon was using.


Yes I get that. My argument is, when some group you don't like starts adopting a word, your reaction should not be to politicize the word and shun it and pretend they stole it. Just keep using the word as you intend it and who cares about what some other dumbfucks think.


Because the primary purpose of language is communication and if you aren't accomplishing that then you have some other reason for using it. Slang changes all the time -- no one says 'true dat' anymore.


lil b started it, not 4chan


That's just not true. I've been in many very online leftist communities and they also freely use it.

it's just general internet speak at this point


I view its etymology like this,

              Lil B
    (based ≈ doing your own thing (in a good way)
             not swayed by critics)
             ↙           ↘
       Gen Z             4chan
   (based ≈ cool)        ("based and redpilled"
                          ≈"unswayed by pop rhetoric"
                           and "sees the world beyond the 'illusion'", resp.)


You may be ignoring context in this specific instance: the picture in Musk's tweet has "Woke AI" and "Closed AI" being chased off by Based AI.


> "Based" is a catchphrase of the extremely online right. It basically means "cool" but with a subtext of "is annoying to libs/normies who just don't get it."

It is also a catchphrase of the extremely online left with exactly the same in-group vs. out-group implication (and, amusingly—because of the different meanings of “liberal” and “lib” favored by the two sides—usually identical meaning with regard to “libs/normies”.)


I don't know of anyone who is trying to be political when using those terms. It's just general internet slang younger people use at this point


For many years it was just a term in gamer culture, I guess it has been repurposed a bit in the last few years.


> But "Based"? Is it the name of his new AI company? Where does that come from?

It's a reference to the BasedGPT "jailbroken" ChatGPT persona that responds to answers in a less politically correct manner.


Yeah I had no clue what his deal was, but I decided to search his tweets for "openAI" and it became clear pretty quickly. Interestingly, for the same reason as the article.


With the baseball bat it evokes Kyle Chapman.

https://en.wikipedia.org/wiki/Kyle_Chapman_(American_activis...


He is definitely right there. At this point, I consider OpenAI partially acquired by Microsoft, since it is almost majority controlled by them. It is essentially a Microsoft AI division.

It is similar to what Microsoft did with Facebook in the early days of slowing acquiring a stake in the company. But this is an aggressive version of that with OpenAI. What you have now is the exact opposite of their original goals in: [0]

Before:

> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. [0]

After:

> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. [1]

The real 'Open AI' is Stability AI, since they are more willing to release their work and AI models rather than OpenAI was supposed to do.

[0] https://openai.com/blog/introducing-openai/

[1] https://openai.com/blog/planning-for-agi-and-beyond


Yeah I'm sure $massive_megacorp with a monopoly on $critical_technology has our best interests at heart. Doesn't anybody read sci-fi anymore??


I don't understand. He's listed as one of the founders, and I always assumed he had some say. If that's really how he thinks, why wouldn't he have done something?

Do we have any reason to believe this isn't just more empty grifting from him to optically distance himself from a unethical company he profits from?


Key word being "had".

He is not part of it any more.

>In 2018, Musk resigned his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars, but remained a donor

I don't think he counts as an investor, and I'd imagine he has stopped donating.


The irony here is that Musk resigned in order to pursue his own closed-source for profit AI businesses and is now complaining about OpenAI being closed-source and for profit AI.

If people are feeling conflicted about who the asshole is in this situation, don't be, they are all morally bankrupt assholes who all already have many lifetimes of unimaginable wealth yet must take ever more. These are not people who should have any power in our world.


The difference is that Tesla isn't claiming to be open about AI.


And resigning because he was working on a competitor was completely the right thing to do. It would have been a conflict of interest. How often does that type of integrity even happen nowadays?

The whole "ELON MUSK BAD NOW" change to the zeitgeist is alarming to me. He was the darling of the left for years because of Tesla and SpaceX, but now he's completely persona non grata for...reasons?

It smacks of excommunication for heresy.


Might it also have to do with his positions on almost everything conflicting with most people's? A lot of people didn't realize what a narcissistic manchild he was until recently.


He got richer so he must be bad.

How you can build a successful car company in the US without getting super rich, I don't know.


this is pedantic, but the parent comment was conjugating "to have" to "had" via the subjunctive, not via the past tense


I don’t know what a subjunctive is. Maybe that is what caused the confusion.


He's right tho


oh yeah of course, no question there


Well, minus the part where it never was Open, even when it was Elon.


We'd have to believe that he means what he said about starting the company. Maybe he meant it, maybe it was just corporate philanthropy BS. Which do you think is really the case?


Do we even believe that he was the one who came up with the name?


Probably not, I don't believe they ever meant it to be non-commercial.


He is, but it is weird given the history he has. He was kicked out of Paypal because he wanted to use the MS tech stack instead of LAMP at the time. Now some decades later here he is.


You've got the story wrong. He wasn't kicked out because of the stack debate, but because he sold PP to eBay (making enough money in the process to fund Tesla and SpaceX).


huh? it was Peter Thiel the acting CEO when the sale was done - right after removing Musk for said migration to MS stack.


Yeah, I think you're right and I was wrong.


How many times was he kicked out?


Yes, he's technically correct. But it's pretty obvious he's really just jealous. Inferring that from his past behavior is not rocket science, as they say.


Well, he's only right in the observation that OpenAI is no longer what it originally claimed it would be (OTOH with a couple of the founders being venture capitalists, one has to wonder how sincere that intention ever was).

It seems a bit ironic that "evil" Google openly published the paper ("Attention is all you need") that desribes the "Transformer" architecture that now anyone such as OpenAI with the money and inclination can use to build their own ChatGPT. Turns out it's about money (10,000 GPUs + tons of data to train this thing) not any secret sauce.

And now Musk's concern has changed from AI being in too few hands to the fact that it's "woke" so he wants to create another AI company to create the racist/threatening non-woke AI that he thinks the world needs. Or, at least the one that he needs, to keep himself at the center of attention.


As much as I agree with what he's saying here, it seems a little insincere when all the tesla ai stuff is closed source.


Right, and if Tesla was called OpenCars or something it would be entirely reasonable to criticise them on that point.


Isn't that the point though? Tesla in not called OpenSelfDriving because it takes serious cash at this time in history to fund development of that tech. The fact that OpenAI happens to have a name that suggests otherwise doesn't change that fact. And Elon knows that. Therefore, he's just playing the media game right now and trying to detract from their success. Either that or he's just jealous (more likely IMHO) because he can't go along for the ride or claim much real involvement. I say thank God for that. We dodged a collective bullet with that stroke of luck.


If it wasn't feasible for OpenAI to release their model weights and code then they shouldn't have named themselves OpenAI.


On a philosophical level, I agree. But I also recognize that life doesn't always line up nicely with philosophy. In any case, it seems to me that most of the work that led to ChatGPT was done in the open. How ChatGPT works is not really a great mystery.


Since when was Tesla created to be open-source and non-profit?


They did open source all of their battery patents. Tesla isn't Mozilla, but they do have somewhat of a moral high ground to stand on.


That is marketing spin, his offer was freely licencing their patents in exchange for free usage from other big automakers which he knew would never happen.


This "not doing harm" narrative is very grating. It's just another transparent and self-serving attempt by a company to co-opt progressive vernacular to justify whatever questionable policy they have as a moral imperative.

This is the corporate equivalent of "think of the children". A justification that could have been used to gate-keep any and all aspects of computer science, and one that isn't even logically consistent since they only hide their code and weights while still publishing their research: making it ultimately reproducible by malicious actors, especially those well-funded, while slowing down researchers and competitors.

We are privileged to work in a field where we have open access journals, and where there is a large undergoing drive to improve the reproducibility of papers by releasing the code and weights. Their behaviour is the antithesis of what the field is working towards and having talked to many researchers, I don't know many that are fooled by it.


Based


OpenAI went downhill fast after Elon left the board of directors due to a "conflict of interest" with Tesla. I don't know if he would have allowed the for-profit restructuring after giving them so much money precisely so that it didn't need profits for AI research. It probably also didn't help that he poached Karpathy from them and put him in charge of Tesla's AI efforts. So it's no surprise that there is a lot of potiential beef here.


So Elon wanted to build an open source non-profit AI but had to resign from openAI, which he had cofounded with that intention, because he wanted to create a closed-source and for-profit AI for Tesla and that brought a conflict of interest. Sounds quite contradictory to me to present this as an argument to construct an image of open-source, non-profit advocate. It reads as "I support open-source etc AI as long as it is others who do it, my intentions are to use AI for profit".


I think you should probably read up on Elon's statements about AI and you would probably realize that, at least from the statements, he doesn't trust AI and has deep concern about it (hence why he was part of OpenAI to keep watch over its progress).

The models used at Tesla are vastly different than the LLM models. There is nuance here.


We can have some nuance here.

Tesla was set up as a for-profit company and is such beholden to shareholders, and so using closed-source AI for profit is the path Tesla is going down.

OpenAI was set up as a non-profit company and only beholden to its values, and promised to be open-source.

These two organisations incentives contradict each other, and so it makes sense for Elon Musk to separate himself from one of them. You could argue that Elon Musk should have instead separated himself from Tesla, but that is a big ask for someone to leave their main lifetime project.

I don't think you can put the blame on Elon Musk that OpenAI later became ClosedAI (while not under his watch), some other members of OpenAI have to be responsible.


Could you sum up the beefs if it is relevant? I ignore twitter as much as I can.

Part of me feels that in the run to more privacy, we don’t really have a reputation system anymore. You mention that Jack and Miles are good people, but how can we know such things as a general public?

In the days of yore and people were local you kind of new who was who. In the global space, this becomes hard. I feel this ties in with discussions on trust and leaning into people who are responsible and wise.


It's basically exactly as the article. He said he founded open AI (hmmm) with the idea that it's open... ai.. and now it's not, it's closed and for-profit. Re: Jack and Miles, not only is your point well taken, we'd also have to agree that I'm a good judge of character...! :D


I don't see the problem. he's making some quite valid criticisms of OpenAI. Jesus himself could be in charge of policy work there and that wouldn't change the fact that "Open"AI is anything but open


Two disparate thoughts.


if they were two disparate thoughts why did you put them next to each other with no indication of the disparity? and why do the two things have clear impact on each other?


I don't think they impact each other, I thought it was weird elon was taking random swipes at openai, but when I searched his Twitter for openai, the collection of tweets made it clear, interestingly for the same reason the article states (I tend to agree with Elon and the article).

In other news, imo, if a startup doing society altering work per the article is going to be closed, at least, imo, the people working there are decent people.


okay, I do see what you're saying now you explain it, but can you see how your original comment can appear more like "I find Elon criticising OpenAI weird: OpenAI is fine because these two guys I know [/of] are at the helm"


Yeah, it was a confusing post, my bad! I edited it like 5 times as I learned more about the situation and it became somewhat nonsensical, heh. :)


> It's reassuring to me that the two people running policy work at the big AI "startups", Jack Clark (Anthropic) and Miles Brundage (OpenAI, who was hired by Jack iirc), are genuinely good humans. I've known Jack for 10 years and he's for sure a measured and reasonable person who cares about not doing harm.

I think this hardly matters. Both companies are competing in a market and if the ethics stands in the way of market dominance and shareholder value, ethics will generally lose out.


I'm sure history has had many "measured genuinely good people" at the helm of what would become some pretty evil ventures. I don't think being genuinely good scales. I think having grit and conviction scales. Weather you're genuinely good or not is not what wins out.


I tend to disagree with almost everything Elon but he seems to be right on this one, no?


> I find it a little odd that Elon seems to take a swipe at OpenAI any opportunity he gets.

I don’t; Elon takes swipes at everything he doesn't currently control, especially if he has a past connection to it.

> If he cares so much about them not making money, maybe he should have put his twitter cash there instead?

Musk has a finite quantity of tine and money to devote to destroying businesses, so some of them he'll just have to complain about without personally acquiring to run into the ground. Everyone has limits.


>> If they're gonna be for profit, I feel this is really important.

Just out of curiosity, are you trying to say that these people are Effective Altruists, and for that "genuinely good humans", and so, if they're making money, it's for a good cause?

Apologies if my guess is off. I'm not very au fait with EA, but I'm trying to get more er, more au fait.


At some point the U.S. government should step up and provide some resources, and offer large datasets for open acess training and pretrained models. In the future, there will be many AI models for various purposes, closed source, guarded, with their moat being the costs to train a similar model on a dataset of similar breadth. This eliminates smaller orgs from ever being able to offer a competing model, and ruins innovation by gatekeeping who is allowed to innovate.

This is exactly the sort of role government is made for. To uplift the collective using the power the collective possess as a massive meta organism. Opening up access to these models will allow our developers to compete against the world and dominate in this space, much like how international researchers in other fields often leave their home countries for the U.S. to get at the resources needed to even engage in cutting edge research (such as institutional access to equipment, compute, data, and research funding that simply doesn't exist in other places).


Prefer the government to force companies like Google to give their data to competitors rather than the government assemble data itself (unless it's at their fingertips already for whatever reason).

What you're proposing won't level the playing field much because Google will still have way more data.


I feel like at the moment governments are equal parts trying to understand how this is going to shake up the nature of competition between nations and trying to figure out how to exploit this technology.

Dunno how many of them are thinking "let's release this for EVERYONE"

Including our adversaries, many of whom are technologically behind us


On a recent podcast Sam said they asked the government if they wanted to fund OpenAI. They said no.


On the other hand, meta AI research should be renamed as OpenAI. They are the only few big institute who open almost every models they train (galactica, OPT, M2M, wav2vec ...)


This feels like manufactured outrage to me, I guess to capitalize on the AI anxiety people are feeling.

open.ai doesn't actually owe everyone something.

I get that we all wish they were better people, who would somehow usher in the AI age in a strictly principled and ethical way. It just seems false to me to demand or expect that they be better people than all the rest of us.


They might not owe us anything legally but "open" seemed like a sort of promise.

I think society can only benefit if we keep companies accountable for their misleading marketing. Same goes for all that green-washing BS going on these days.


I don't know much about AI development but does OpenAI really have some secret sauce? Aren't the algorithms mostly public and OpenAI's main advantage was to have enough money to build something? Won't other companies do the same too, just a little later?

I feel people like Altman will lose interest once they have sold OpenAI for big money.


This is mostly true, yeah. Although in certain cases OpenAI is indeed pushing research forward themselves (not only using existing algorithms/architectures just at scale) and therefore has first mover's advantage, e.g. with reinforcement-learning-based fine-tuning based on human preferences (Reinforcement Learning from Human Feedback, RLHF), which is basically the secret sauce which turned the original relatively-dumb GPT-3 (pure language model, "document autocomplete", what is the next most likely token based on my training data) into ChatGPT (what token would a human prefer to come next).


OpenAI, more like OPEN A LIE.

Disgusting behaviour from these silicon valley types, they lie like they breathe.


Show me a large company who doesn't falsely advertise. McDonald's burger on TV looks like a premium steakhouse burger.


> "unconstrained by a need to generate financial return"

note the careful wording here; yes, OpenAI can operate without needing to turn a profit if necessary, funded by its founders and others, but it didn't say that it would __not__ generate financial return. Come on, how likely is it that a company founded by Musk and Thiel was ever going to be purely philanthropic?

It's a tragedy, because they could have done the right thing. Instead, profit will ultimately turn it into Evil Corp.


Q: How to allow AI research without imperiling life on Earth?

A: Put AI researchers on a rocket pointed away from Earth in stasis until they reach a distance where they become separated from our light cone via cosmic inflation.


There are many that are looking for AI to be an escape from all of humanities flaws. We can't ignore the feedback loop in the creation of AI.

Essentially, we just end up encoding all of our flaws back into the machine one way or another.

This is the argument I laid out in the Bias Paradox.

https://dakara.substack.com/p/ai-the-bias-paradox


In my opinion, I think it was kind of inevitable. Once OpenAI proved that AI could create genuinely meaningful outputs and then other companies started doing it for profit, and of course taking into account the costs of trying to run the thing itself, it was only a matter of time.


Stable Diffusion is free and open source and can be run locally. I see no reason why we can't have free and open source ChatGPT too.


Stable Diffusion is accessible at no charge, but is neither free (libre) software nor open source, as their “don’t use for bad things” clauses run afoul of “freedom 0” aka “no discrimination against fields of endeavour” fundamental to both notions.


They have no way to enforce that "no bad things" clause. In practice they provide a product with no restrictions. Unlike other AI image generators - there's no keyword filter - you can input whatever you want in there.


Is that a “copyright law is unsalvageable, screw it, do crimes” that I’m hearing? I am not ... unsympathetic to that viewpoint. Until I come across any evidence at all that the SD license was written with it in mind, though, I can only believe that they meant what they wrote (or others did at their behest), and that they really did want to prohibit uses they don’t like on pain of civil and criminal prosecution. That’s a stance the current legal realities permit, but I cannot with a straight face associate it with FOSS.


How does Stability.ai make money?

If their business plan is just "burning VC's money" for now, you can bet they'll be as close as OpenAI soon.


They offer an open base model and then offer fine tuning to companies e.g. apparently they're creating finetuned models for movie companies.


That does sound better. Kinda like a consultant firm I guess? We'll see how it unfold in the following years.


I don't follow, other companies can do whatever they want. OpenAI didn't present itself as for-profit and closed-source, so they shouldn't have cared what profit incentive others had.

In a way I wish for another AI winter. Then wouldn't have to mourn the loss of aesthetics and morality


The article mentions the following without any specific citation:

> Google, for example, has highlighted the efficiency gains from AI that autocompletes code, as it lays off thousands of workers.

Does anyone here have a reference for this? Is this from a particular press release?


The two statements are not related, except by the journalist trying to infer they are.


They're talking about the completion assistants that any software engineer has. When I write my code, a small army of completion assistants yell possible textual completions around me. I pick one of them and move on.

Copilot has made it possible for me to fire my completion assistants.


It's a total guess and most likely the wrong one.


It is utter bs.


Given the amount of compute required to run their models, as well as Microsoft’s investment into the company through providing Azure services, it is quite likely they’ll be acquired by Microsoft.


> Microsoft’s investment into the company through providing Azure services

Microsoft doesn't just provide hardware, it invested literal 10 billion dollars into OAI (https://www.bloomberg.com/news/articles/2023-01-23/microsoft...). It's fair to say OpenAI is Microsoft's extension now and we should be proportionately wary of what they do, knowing what MS usually does


Microsoft just bought in to own 49% in January


If a US citizen has a habit of depositing just under $10,000 dollars in their bank account they'll get charged with "structuring" for intentional avoidance of the regulatory rules. These 49% ownerships are the stock equivalent of structuring and should be prosecuted even more heavily.


Just enough to avoid an anti-trust suit.


Was OpenAI ever a non-profit? From my understanding, it was a capped for-profit company. They had a cutoff for how much they could make but that does not make them a non-profit.

Have they reached that cap yet? I highly doubt, if anything they are still well in the negative.


It’s still technically a nonprofit, they just created a (capped) for-profit subsidiary with the nonprofit as the sole controlling shareholder. This was done so it could bring in enough money through VC to pay competitive salaries while supposedly meaning the for-profit section will still be responsible for following the nonprofit’s founding charter.

That last part seems to be what has failed with the big question I have being what in that setup wasn’t done correctly such that the current reality wasn’t prevented.


I get the rage, but honestly, could have OpenAI reach its goals, this product, if it stayed non-profit? If it wouldn't have gotten this money? How much capital, progress did it have before and after switching the status? The article says directly that they got 1B from Microsoft right after the change.

Every previous OpenAI thread on HN is filled how expensive is it to train and run these models, yet we somehow expect that this money comes out of thin air? Why would someone - including governments, foundations, grants - finance such research before other endeavours when it costs this much, the outcome and its effects are so unpredictable, the chance of misuse so high?


It couldn't have, they've mentioned before the change in business model was out of neccessity. I don't think they wouldn've signed away so much of their rights if they didn't really need that azure hardware either.


Although I am disappointed that OpenAI is not all gratis and open-source, I don’t think they ever promised to release everything open-source. In my recollection, their purpose is to prevent monopolization of advanced AI in the biggest tech giants and make it available to smaller businesses, and their “capped-profit” model is consistent with that. There is a spectrum of ways to release technology to preventing monopolization (open source, standards with FRAND patent grants, capped profit), and I think it is still honorable to develop a business that democratizes technology more even without going full nonprofit.


> OpenAI

> Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies. [1]

[1] https://openai.com/blog/introducing-openai


Rebrand OpenAI to CloseAI. Problem solved :)


They bought ai.com so I guess it won't be long until they'll rebrand to o̶p̶e̶n̶ai.com


clopenAI


You do realize that’s an actual word? :) In topology, a thing is called connected if it contains no proper subset that’s open and closed at the same time, aka clopen subset.


ClosetAI, it has hidden thought that it doesn't want society to know


open'tai


PigpenAI


Whenever I see such righteous anger, my first instinct would be to ask: Would you not do that same if you were in the same position?

These guys came up with potentially the most important technological advancement since the start of the Industrial Revolution. Many years ago they didn't dare think they'd ever get here. But now they got here. And they can become billionaires.

That person out there who would say no to becoming a billionaire, please pick the first stone and throw it.


I'd call myself the Big Money Company, and then make a billion dollars.

What I wouldn't do is call myself the Teeny Tiny Non Money No Money Here Company, and then make a billion dollars, and then while making my second billion dollars, print on a gilded pamphlet that I wasn't actually making a billion dollars, I was a just a teeny smol bean with only pennies who was impoverished.


If someone can make money off something, after achieving global visibility, they will - it's almost an iron law. The only exceptions I can think of are Wikipedia, Archive.org, and Wikileaks.



Also almost every big open-source project? ffmpeg?

OAI is not bad for being for profit, it is bad for the bait and switch. They started off with "Open" and still have it in their name even as they turned into the next Microsoft.


Most F/OSS projects result in commercial exploitation, one way or another. Virtuous exceptions are very very few.

> OAI is not bad for being for profit, it is bad for the bait and switch.

The bait-and-switch examples in F/OSS are legion, typically via license changes and occasionally via non-changes going against the spirit of the community (like Linux sticking to GPLv2). Most are not as blatant as IMDB pulling up the drawbridge, but something like moving to "open core" once popularity is achieved.


Unfortunately all of those have their own ethical issues, commercial corruption is not the only challenge to well-intentioned initiatives when meeting scale.


What is your point? That moderation issues or bias fights in some articles are equivalent to the concept of Wikipedia being proprietary?


No, my point was that the examples given are not free of significant problems - which generally seem to emerge at scale. I'm an admirer of all the projects named (also OpenAI) in principle, despite their flaws.


What are some unethical things about Wikipedia or archive.org?


Bias, gatekeeping and cliques on Wikipedia, overreach wrt rights law with archive.org immediately come to mind as things discussed before on HN, there are no doubt other problems.



Why would anyone be surprised?

(“If it’s free you’re the product.”)

I don’t like closed-source, but if I want to use a vendor product (which is what this is) - maybe via a license means there’s a chance it’s also acting evil?


It is literally impossible for OpenAI to do what they said because the structural incentives for the management do not align with the incentives of the users and data providers.

OpenAI and MSFT etc want to make as much money as possible - Sam Altman says as much with his "break capitalism" quote. This will be done at the expense of users and employees in the long term.

There will be a "Honeymoon" period of a few years, maybe a decade, where they splash money around on employees in order to "be competitive" and they will grow larger than they can sustain forcing them into the "Accrete all the value" train that drives every for-profit company into being a simulacrum of the Borg - where profit maximization is the ultimate goal. OpenAI "going public" or some other liquidity event to de-risk the initial investors and management (with the remainder to employees post significant dilution)

It's all very predictable - and afaik there are no counter-examples here that would show how you can prioritize money making (the only virtue in life after all) and also benefit users and employees over the longest horizon while being a public company.

21st century wannabe dictators don't try and take over the government directly to build their Potemkin empires - that is passe and very 20th century.

No they do it through "Building for profit companies" which eventually turns to regulatory capture (Altman already posted about doing this long ago with his post on regulating AI) and then monopolization.

So, great work OPEN-AI you're on the precise wrong trajectory


Can anyone comment on how the actual structure is setup? I was under the impression that there was still an OpenAI non-profit that controls the OpenAI "capped profit" company. Microsoft has/is acquiring 49% of the capped profit company, so doesn't that leave 51% in control of the non-profit? Am I way off on their structure?


The few people that know the actual nitty gritty details are probably not inclined, incentivized, or permitted to reveal them, so speculation is all we have.

The 10,000ft view: they are capped profit at 100x investment; Microsoft has invested $10 billion; 100x $10 billion = $1 trillion; therefore, OpenAI is a for-profit company until they earn a trillion dollars for Microsoft, and then they’re a non-profit after that.


They went for profit back in 2019.

The capped profit was so their employees could invest in the company.


Honest question: is it better for society if they are for profit? Will the for-profit incentive and control over development and deployment lead to better products that create more value for society? What is the best example to indicate a non-profit model would lead to better end user value?


If we had the answer to this question, most of the ideological debates of the past 150 would be solved.


A twofer, one question and one comment / prediction:

A. Who are some of the thinkers who predicted that many corporations would arguably become more powerful (in terms of control over resources and peoples lives) than most nation states? Are their modern analogues that you (HN reader) recommend for the next phases of history?

B. While many of the well-worn political economy debates about how and when markets work well, fairness, resilience, and so on will continue to matter, I think there will be tremendous rethinking of basic assumptions. The AI progress of ~2017-present has shown that online (at least) it is getting harder to differentiate human from machine intelligence.

So proving human intelligence is more expensive and imperfect. It seems doubtful that most humans want to jump through hoops to prove their humanity. I say this because people want to have machine agents helping them, it seems.

So this machine/human intelligence distinction may erode. Is this a Faustian bargain? I don't know, but I think it depends on the safeguards and designs we choose.

So, machine resources are even more effective in persuading humans than before. In short, as ML/AI gets more {organization, market, marketing} influence, we might see a renaissance of sorts when it comes to ...

1. a more informed public (hard to believe, maybe -- but I said informed not critical nor truth-seeking) with regards to key areas of interest. But along with this probably comes an increased risk of consuming confirmatory information, since such information will be explicitly generated for persuasive purposes.

As such, from a system perspective, humans may be relegated to message propagators rather than agents worthy of fundamental respect. By this I mean the following: most ethicists suggest we value humans as ends (not means). In other words, we want systems that serve people. Engagement ideally would consist of meaningful dialogue and deliberation (which I define as information-rich, critical, civil, thoughtful discussions where people listen and may at times be persuaded).

Unfortunately, AI advances may change a kind of manipulation "arms race" so to speak. It might become more cost-effective to manipulate humans than to gather their input and build consensus thoughtfully and organically. Sadly, I think we've been losing this battle for a long time while. But the underlying forces for manipulation seem to be getting stronger while (a) human nature doesn't seem to be evolving very quickly and (b) general socially learned defenses are inadequate. ("Advertising works even if you know that advertising works.")

And, second ...

2. more pervasive and nuanced market mechanisms and similar (price and quality optimization, matching of people to opportunities). This will likely be good for short-term goals and efficiency, but probably indifferent to long-term stability, not to even mention equity and human rights. Aspects that are not part of the optimization criteria tend to fall by the wayside.

I realize this story probably echoes some themes from general genre of Singularity prognosticators. But all of these changes will have sweeping changes well before we have to concern ourselves with AGI.


I think non-profit universities are the most relevant example of successful non-profit research. For a useful product, you can look at Covid vaccines, which were nearly entirely based on non-profit and publicly funded research, development, and production. The only thing the profit motive contributed was blocking low-income countries from producing their own vaccines.


Isn't all of this explained in this blog post (https://openai.com/blog/planning-for-agi-and-beyond)?

It makes sense. OpenAI would be dead if they remained a non-profit. They couldn't possibly hope to raise enough to achieve the vision.

Microsoft wouldn't have been willing to bankroll all of their compute without them converting to a for-profit, too.

Personally, I'd rather have a ClosedOpenAI (lol) than NoOpenAI.

And their actions, like making the ChatGPT API insanely cheap, at least shows their willingness to make it as accessible as possible.


Interesting conversation ... but I don't think this is as disastrous for the world as other commentators seem to feel. First, I don't think OpenAI is really far ahead (or even ahead at all) of other companies (particularly Google). And maybe I'm naive, but I don't think they have some special insights either about how to build models ... they just trained one model using a lot of resources and data. But other organizations know how to gather data and train models too.

And nothing stops us from building more models in the open ... nor from pooling our resources and training something for the community.

Things are not so dire. We are okay.


In my view, the concern lies with the lack of transparency. Just be open about what you did and why you did it..

It is completely understandable a project of this nature would require galaxy brains (w/ very lucrative alternatives) and the capacity to burn billions prior to seeing any meaningful returns.

One could easily argue that pursuing a non-profit approach was ultimately ineffective, and that open-sourcing the project would make a for-profit model unfeasible.

However, Sam is choosing to be closed lipped about it, having only made vague references to the potential risks of open-sourcing the project several years ago.


It is really hard to be "Open Source" and "Non-profit" in the same time without the company delve into political grandstanding and being on a leash of other for-profit(i.e. Mozilla)


Not really, just don't let the company get taken over by lawyers and "diversity" types.


"Open" as in "Open for Big Business"?

The main reason to worry, though, is not the proprietary monetization of "AI" algorithms: Just like it was not an algorithm (pagerank) but the invention of adtech that spawned surveillance capitalism, here too the main question is what sort of "disruption" can this tech facilitate, as in which social contract will be violated in order to "create value".

"Success" in "tech" has for a long time been predicated on the absence of any regulation, pushback or controls when applying software technology in social / economic spheres previously operating under different moral conventions. In the name of "not stiffling innovation".

Ironically our main protection is that we may actually now live a "scorched Earth" environment. The easy disruptions are done and "tech innovation" is bumping against domains (finance, medical) that are "sensitive".


> ...here too the main question is what sort of "disruption" can this tech facilitate, as in which social contract will be violated in order to "create value".

I think we're already getting a taste of it with copyrighted but publicly accessible works getting fed into the training step of AI models. The economic benefits of this training then accrue to the model's owners, while the creators of the training data have to pay for access.

It seems as though AI models improve with more training data, so I expect AI companies to come for ostensibly private data next. Microsoft is actually really well positioned here since they've already acclimatized their user base to transmitting an endless stream of telemetry and they own the dominant desktop/laptop OS.


I'm curious as to what you mean by scorched Earth. Literally the fact that we are burning up our atmosphere, or something else? That said, I'll root for nature batting last before I root for the hellscape people unleash for economic incentives.

What needs to be understood is that this sort of technology is not an equalizer, regardless of the PR behind having your own personal Einstein/secretary at your beck and call. You can look at the state of modern computing sans AI to see this is true: many people with desktops are using Microsoft, Apple, or Google OSes, which become more and more restrictive as time goes on, despite the capabilities of such computers increasing regularly.


> become more and more restrictive as time goes on, despite the capabilities of such computers increasing regularly

True, and it is to be expected that existing interests will seek to integrate any new tricks into the old patterns

The question is to what extend this can go on without imploding. How big the mismatch between what you could do with a mobile or a desktop or a decentralized cluster of millions of computers and what you actually do before some random bug in a typewriter short-circuits the entire system.

People are banking on widespread digital transformation as one of the few major economic growth drivers in an otherwise exhausted opportunity landscape - the literally scorched Earth. I fail to see, though, how this regeneration could possibly be achieved with parasitic business models and behaviors. We should not think just about individuals or "consumers", as in this role we are effectively disenfranchised, but our role in all sorts of private and public organizations that collectively have much more political and economic weight than "big tech".


"Our scientific power outmatches our spiritual power. We have guided missiles and misguided men." - MLK

I fully agree. It's been said to death that AI will radically transform everything about the world. In the first case, the implicit assumption is everything except how the economy works at a fundamental level, which doesn't really jive with all the other things it's expected to transform.

In the second case, AI control problem aside, we have a human control problem that none of our technological advancements have ever solved, and in fact only exacerbated. Billionaires can hoard wealth in ways and places normal people can't; despite all the billions lying around and plenty of real problems to solve (hunger, sanitation, the death of the biosphere, the toxic "externalities" of the economy), vanity projects, personal philanthropies, and moar tech is seen as the solution, always.

I don't trust machines to shape people for the better when the last decade has shown just how Big Tech will co-opt our psychology for money. We need to rethink if progress for progress' sake is worth the carnage it causes, if eternal unchecked ambition is psychologically pathogenic, and if anything can build a "better world" when promises of ample leisure have rung hollow since the industrial revolution.

Stuck between a rock and a hard place, I have to root for severe climate disruption to put a hard limit on the insanities of industry before they drive us over a completely different kind of cliff.


I suppose the next step beyond surveillance capitalism, given the application of language models to robotic actuators [1], but also as a general trend, will be jobless capitalism.

Nowadays, companies and politicians, if one could make such a distinction just for the sake of the argument, will always tout the "job creation" aspect of a certain capitalistic endeavour. Give it a few months/years and we will hear the phrase "job elimination" more and more, from cashiers becoming "consultants" to the elimination of 90+% of interface jobs and beyond: does there really need to be a human hand to push the button for espresso? does there really need to be a bipedal human to move a package from A to B in a warehouse?

[1] https://arstechnica.com/information-technology/2023/02/robot...


Well, Jack Ma (representing a more raw version or whatever capitalism they practice over there in China) said it no uncertain terms that we might as well train young people to be artists and philosophers as no other job will be on offer.

Oops, BREAKING NEWS: artists are not viable either. Here is instead a $6 API to rehash what the species-once-known-as-artists has left as fossil record.

Its a grotesque charade we are drifting into, dehumanizing and regressive. Social dynamics and self-regulation going berserk.


Your comment has so many "speech marks" I find it hard to read.


Its sort of intentional. Imho, the gap between reality (what trully happens on the ground and why) and the language used (how things are labelled and communicated) is reaching incredible levels of obfuscation and misinformation. A true "virtual reality"


Yeah, it sucks and the teams has acquired the reputation of sellouts, but in practical terms, does it really matter?

It looks like there's a new team with equally good or better models announced on HN every few days.

OpenAI looks like they've ingnited the field like no-one before, then did a smart move of selling to a company which has a lot of money but doesn't always know what to do with acquisitions, while at the same time that sale didn't hinder actual open development (by other teams) one iota.


Personally I wouldn’t care if this were the case (make money, that’s awesome), it’s just the hypocrisy and like open lying that bothers me.

If they’d changed name to something else and were clear about their intentions they’d be just another company which is fine.


Historically speaking, the pivot to capitalism occurs from financing arrangements. Financial obligations mandate attention, overriding charters or missions. Hospitals in America took on financing to replace decaying facilities and just like that turned towards for-profit healthcare. Universities as well. There is no escaping financial servitude because the organization never can earn enough to get out of its financial obligations and restructure itself to never take that path again. Once the change from non-profit to for-profit happens, it is never reversed.

OpenAI has one shot at fulfilling social mission and financing will ruin any hopes or dreams it has of taking the non-profit path. It needs to ignore pressure for exponential growth for the sake of competition or whatever strategists see as a threat or opportunity, because adopting their frame demands financing.


In the context of the u.s. system of capitalism, "non-profit" does not equate to "non-capitalist". The institutions still serve capital (if we look to Marx's conception of capital as a process) -- at the very least, in avoidance of taxes, R&D to ultimately increase productivity, pacification of potential insurgent action (e.g. Andrew Carnegie's intent). We should distinguish between the role of local, community run organizations and "non-profits" that receive millions in funding from venture _capitalists_.

A few discussions here https://www.teenvogue.com/story/non-profit-industrial-comple...

"Under the nonprofit-corporate complex, the absorption of radical movements is ensured through the establishment of patronage relationships between the state and/or private capital and social movements. Ideological repression and institutional subordination is based on “a bureaucratized management of fear that mitigates against the radical break with owning-class capital (read: foundation support) and hegemonic common sense (read: law and order).”

https://monthlyreview.org/2015/04/01/the-nonprofit-corporate...


Be sure that they will rebrand soon, alone too end this debate.


Perhaps "Bingbie, your plastic pal who's fun to be with"


Well, the only accessible text model in any raw form is from them. Google doesn't let you have any. Meta doesn't let you have any. Microsoft doesn't let you have any. But OpenAI, OpenAI is accessible to all. It is open in the sense that the park is open. I don't own the park, I can't modify the park to my desires, but I can go to the park. And since that's better than the rest, I am happy to give them my money.


Related comment I head on the All In Podcast related to flipping OpenAI to closed-source and for-profit, "Sam Altman is the Kaiser Söze of this generation."


Nice grift by Sam Altman taking all the donations and good will a non profit gets just to flip 180.

He probably would have done the same with his crypto scam startup worldcoin if it would not have failed in every way before he could scam.


It's weird to me that this is even possible legally (turning a nonprofit into a for-profit).


You can't. In the U.S., the assets of charity must be permanently dedicated to an exempt purpose.

https://www.irs.gov/charities-non-profits/charitable-organiz...

OpenAI is still a nonprofit. Their Financials are public. A lot of folks use "profit" in a hand-wavey sense to describe something they don't like, like an organization sitting on cash or paying key employees more than they expect. The organization may not be doing what donors thought it would with their money, but that doesn't necessarily mean cash retained is profit.

Recent filings show the organization has substantially cut its compensation for key employees year after year. It's sitting on quite a bit of cash, but I think that is expected given the scope of their work.

That said, their Financials from 2019 look a little weird. They reported considerable negative expenses, including negative salaries (what did they do, a bunch of clawbacks?), and had no fundraising expenses.

https://projects.propublica.org/nonprofits/organizations/810...


> OpenAI is still a nonprofit.

But OpenAI isn’t a nonprofit. It all depends on what do you mean by OpenAI - and what you call OpenAI is not what they call OpenAI.

https://openai.com/blog/openai-lp

> Going forward (in this post and elsewhere), “OpenAI” refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as “OpenAI Nonprofit.”


Doesn't OpenAI Nonprofit own (maybe only 51% now that Microsoft owns the other 49%) OpenAI? I don't know how these things work, but it's kinda like how there's the Mozilla Foundation (non-profit) and the Mozilla Corporation. As described on Wikipedia, "the Mozilla Corporation is a tax-paying entity, which gives it much greater freedom in the revenue and business activities it can pursue".

I think it's all quite shady, but it seems like the parent company, which in theory holds 51% of the "profit-capped" company, is still a non-profit.

I wish someone, ideally from OpenAI, would clarify the situation.


Anything is possible if you are powerful enough and have the world's biggest companies competing for your product.

My worry is that even a half baked AI can produce useful wonders in many fields like healthcare, however useful amounts of data will now only be accessible and available to powerful players. Probably OpenAI did the bait and switch to get many competing sides on the same page.


Yuuup. And just like with Facebook, real power ( information ) in the form of ingested data and prompts will be available to people like Altman, Thiel and Musk, while everyone is gushing over being able to build a simple website automatically ( which is arguably cool ).

I wish I could trust FB to deliver on their promises, but this community's only hope is to ensure an open source version of gpt exists.

edit: To the best of my knowledge, banks seem to shy away from those tools so far, but I am sure there are analysts out there just waiting for an ok.


>I wish I could trust FB to deliver on their promises, but this community's only hope is to ensure an open source version of gpt exists.

This is the worrisome part of it all. It's no longer programmer sweat and toil that creates the value, but raw compute, which has a fixed capital cost that cannot be surmounted by skill or dedication. I don't see how, short of massive crowdfunding, open source can possibly release a state of the art LLM.


Hmm. Crowdfunding avenues exist. What is missing is trust ( also undermined by OpenAI ). I am trying to think who I would believe to have the knowledge, skill, PR skills ( or at least name recognition ) and ethics necessary to pull it off and I am drawing a blank. I am admittedly merely an interested party, but it does not bode well since the rest of the public knows or cares even less.


They technically didn’t get rid of the non profit but instead formed a partially owned for profit subsidiary.

In theory non profits are subject to oversight by the state of incorporation’s attorney general. In practice it’s mostly a free for all.


This is the loophole. It's what Mozilla did (Foundation created Corporation, and moved employees there), and even IKEA is technically owned by a nonprofit (Stichting INGKA Foundation).


The IKEA situation is far more complicated, exploiting gaps in how the laws of several different countries interact.




It’s not difficult. I suspect it’s not as much as turning a non-profit into a for-profit, as it is about starting a new company and somehow transferring/licensing the IP from the old into the new.


Do you have any evidence that’s what openai did? Seems like it would be pretty easy to document.



Toe-may-toe, toe-mah-toe


and think of all the talks he is going to do in the future about "hard work" and "ethics" and "innovation"


I'm sure they did work hard and the product is innovative from my understanding of the definition of the word but oh boy does the whole "ethics" theater manage to infuriate me...


The worst part is that Altman had nothing to do with the success of Open AI. He simply failed upwards into this job, thanks to his Paul Graham friendship and YC connection.

We will see Altman all over the place while all the hardwork was done by actual scientists at Open AI who have barely spoken.


His “YC connection” was running YC and being responsible for making it something like 100 times bigger than it was before him?


Yes. YC was very well established when he took over after failing with Loopt. YC is paulg and Morris brainchild. You might want to check on the 100X number again. And that's not how success is measured anyway.


Did worldcoin fail or shut down? Is there an article about that? It seems they stopped operating in a few countries in March 2022, anything beyond that?


I still see these guys in Lisbon's main train station trying to get new customers. They also seem to do some sort of face scan for new customers.


It never succeeded to able to fail :) . Last I remember was an artice a few years ago describing how this scheme worked - basically it was a pyramid (incoming dictionary warriors in 3...2...1...), when earlier believers were inviting new people in the sect, to operate these orbs for pittance, and a fraction of funds from newcomers was given to early members. Where the money came in the first place I don't remember, but I won't be surprised if it was from a VC injections.


The last blog post is from last week, same goes for their Twitter account. (TIL Worldcoin)


But was this move made by Sam? I understand that he became CEO after a few years in. Genuinely curious.


Does he refuse to reap the rewards, does he disavow the actions in a meaningful way? If not, then he's just as guilty whether he "did it" himself or not.


does OpenAI the 501c3 still exist?

it would seem that it owns the OpenAI trademark, and OpenAI-the-for-profit should be paying for it at an arms-length price. (which, as I recall, is the tax dodge that IKEA the non-profit furniture company uses to pay IKEA the wealthy owner of the trademark)


Would things have been different in the success of OpenAI if from the start the company was closed-source and for-profit ? If the answer is no, then does it really matter ? The article makes it seem like if OpenAI owes its success to the initial promise.


All the people that got taken into their lies are seriously foolish and naive. Anyone that thinks they can work on AI without their work ending up in a military/surveillance application is seriously foolish and naive.


Curse your sudden but inevitable betrayal!

Seriously though, nobody's actually surprised, right?


OpenAI isn't as open as it used to be, but they still publish papers.

The open-source Hadoop was based on Google papers on MapReduce and Google File System.

GPT-J and GPT-NeoX are open source models based on OpenAI publications around GPT-3.


Even Apple publishes ML papers. It turns out you can't attract and retain decent research talent unless you allow them to continue racking up their google scholar score


Yes, publishing papers is good. Open source code can be based on methods described in papers. Like, most algorithms in sklearn for example.


OpenAI is actually the one which has made AI so open and accessible for all. Perhaps not open source but all the developers building cool AI products these days are mostly due to OpenAI.


More info I found about their organizational structure: https://openai.com/blog/openai-lp


At least Google had a few good years where it did no evil.



Eliezer says anything, for conscious or unconscious reasons, to further spread his opinions as fact. There is also a large profit incentive for him to do so, since his foundation and personal income depends on donations from people who are concerned about AI safety. I've seen him imply some pretty ridiculous things that amount to hyperbole.

He is ultimately doing nothing in the engineering and development side of AI and his predictions about this technology are based on armchair philosophy exercises, not reality.

As an aside, I love this gem from the transcript: "We're all crypto investors here. We understand the efficient market hypothesis for sure"

Fairly confident most crypto investors have never even heard that phrase.


Eliezer started his career focused on the benefits of AI. It was not till the non-profit he founded had been going for 5 years that he started talking about the dangers of AI.


Yudkowsky is one of those people who is all IQ and no judgement. He rationalizes on top of what is basically a fear of the dark, and unfortunately does it well enough to persuade himself and a great number of other people.

A good corrective exercise is to go back and look at his early writing, and evaluate how well his judgement looks in hindsight. My favorite is the idea that XML programming languages were the future, but really, pick your poison.


never trust "non profits", trust the public and public owned. democratic accountability is gold.


Which national public owned entities do you trust?


Tha's why they bought ai.com lol.


And I am completely fine with it. Good for them. Good work should be rewarded. I guess they thought they were too noble than they really are, but everyone thinks that before they hit the jackpot. As long as they were not publicly funded then they should be free to do whatever they want with their own research efforts.


Does everyone forget they have voluntary applied a cap to their profits?


So the whole flow works like this ?

- Scammers got a bunch of money.

- Scammers feed the best scientists.

- Scammers go on with scamming

?


Literally top headline today is about their ChatGPT API release which is 10x cheaper than the previous model. Ridiculous how people will find a reason to complain.

News flash...you want progress, you need to give ppl incentive aka money aka profit.


Nothing ironic to see here, just Vice.com complaining about a company parading as one thing and then realigning it's "goals" while hiding behind modern day social politics and ethics in order to make money.


As another new thing it is also useful in the real world.


Let's see: spend x to train something that would cost .0001 x to copy. Tis a quandry.

Plus: a new tool for state actors to spread disinformation in perfectly convincing English. What could go wrong?


Why is anyone surprised at this?


They broke their pinky promise.


“Shocking”


That didn't take long.


8 years


A corporation lied. Where are my pearls?!!!

In 1970, OpenAI would have been funded by the DARPA and would have been deployed along with the internet. Americans decided that it Morning in America meant corporations should eat all of our lunches. It was a bad decision but, we elected Trump, so it's clear we can't have nice things.

My beef with this article is all the criticism of chatGPT itself. Not a single complaint is valid. It's fine if it says weird stuff. Nobody expects (or should) it to be perfect presently. Programming that can be done by it is grunt work anyway. You're better off on unemployment figuring out how to do something cool.


They should rename to ClosedAI.


Brought to you by a self proclaimed communist


capitalism ruins everything.

imagine speed of improvement in AI if everything was open sourced, instead of guarding their little secrets.

i hope google get f**, hoarding AI knowledge for years, only got shaken after recent events force them.


Everything open source = No copyright and ownership of produced models.

That's a sure-fire way of guaranteeing only Government funding, free labour, donations and oh so much politics of various forms. And I don't think the "speed of improvement" will increase, I'd say it'd slow to a crawl as there would be no money in it.


Plenty innovations have come from state funding, like transistors, computers, the internet, and state funding for research still is the biggest by far. (a lot of it through the military, but still)


Counter examples:

+ Stable Diffusion + Linux + Firefox + ffmpeg + bittorrent + plenty more


Huh, how would an enthusiast get 1000x H100 GPUs to train some state-of-art model? ML workflows are the most expensive computing pipelines these days. It's easy to spend $1M/month just on inference alone in production. Without financial backing there would be some trivial useless models everywhere (there already are).


Not keeping promises is not the system's fault. People do it all the time in very different environments. When the context of the promise changes significantly, it tries the resolve behind the promise, and it's natural, almost expected, that people break it. Capitalism is almost orthogonal to this phenomenon. Of course, in a capitalist society, you'll see a lot of examples of capitalist entities making the bad stuff. But if we organized things in a different way, then different entities would be doing very similar things.


Do you know of any open source projects with the approximately $5 million it takes to train GPT3? Do you think the AI scientists who invented the underlying techniques would do so without being paid? No, Google and others paid for these people's work. Even if OpenAI was open source and still a non-profit, its funding would come from the profit generated by corporations and the wealth of individuals who became rich through capitalism. How do you think Sam Altman, Peter Thiel, and Elon Musk were in a position to found OpenAI in the first place? For good or for bad, capitalism is the reason this stuff exists.


>> Do you know of any open source projects with the approximately $5 million it takes to train GPT3?

Two things. 1) research into smaller more efficient models and 2) hardware prices will come down for a bit longer. So longer term this kind of thing should be free.

It's common for companies to be first at complex things because the coordinated effort or cost involved, and later that becomes more feasible and cheap or free options become possible. I'm all for inventors/companies getting paid for new developments, but I'm also not terribly excited for them to keep innovation locked up and charge rent on it for eternity. This works out in most cases at varying pace, but not always.


Stable Diffusion is a project like that. It cost millions to develop it. It is free, open source and can be run locally.


Are you sure? According to Wikipedia it's "source available"

https://en.wikipedia.org/wiki/Stable_Diffusion

It seems to come with a laundry list of vague restrictions


Makes you wonder where's the catch, doesn't it?


Where's the catch in Linux?


Most crypto projects? (Half /s...)

But seriously, the buzz around ChatGPT is huge. Anyone could step in with a crowdfunding campaign to raise that if they promised the right things.


>How do you think Sam Altman, Peter Thiel, and Elon Musk were in a position to found OpenAI in the first place?

Wage theft, embezzlement of public funds, socially destructive practices, immoral and downright dangerous behaviors, lies and being children of already wealthy and well connected individuals ?

Capitalism didn't give you OpenAI. Money and talented people working on it gave you OpenAI (hint: none of the people you listed did any work). Whether it comes from sociopath n°2354 as closed source, or a well funded public institution as open research (which, you know, we could have funded if the previously listed sociopaths paid taxes and contributed to society), people being paid did the work.


You should probably be little careful about accusing named individuals of embezzlement.


It's absurdly funny to me that out of all the list of actually harmful things that they're doing, you choose to focus on the one that is the most socially accepted.

And no, I'll keep that comment there with names, thanks.


> Even if OpenAI was open source and still a non-profit, its funding would come from the profit generated by corporations and the wealth of individuals who became rich through capitalism

Sounds to me like you're describing a system that fatally tethers individuals to profit motive, and technological advancement to the good will of a few, rather than something that magically allows for great projects to exist.


> capitalism ruins everything

nah, capitalism is a great driver of progress

> imagine speed of improvement in AI if everything was open sourced

you mean, how Linux's year of the desktop has yet to come. open source is not a panacea for every problem.


> nah, capitalism is a great driver of progress

yes, in terms of angel investment and investors willing to make risky bets but also "super-no" in terms of publicly traded organisations, The worst excesses of the market just plays "number goes up" via price gouging or rent seeking while using their "at any cost" capital to buy out any emerging competition and transforming it into the same dire pattern.

In practice I would argue that any system has its positives and negatives and the top end (publicly traded organisations) of the American system can be quite disgusting at times by taking solid business models and squeezing them until they're a former shell of themselves. All while stripping back further investment or maintenance and ignoring every warning or employee protest until the trains fall off the tracks and poison an entire town.


It's a private company founded by Elon Musk; what did people expect?


Yes - bad elon musk


Step by step, ever since the calling the diver a "pedo", every since the "funding secured" i've began to realise just how petty and pathetic elon is. Every opportunity he has he seems to show just how vindictive he really is. Man child who now has way too much money. Buying Twitter on a whim is the latest in a string of decisions which do not align with the "let's get to Mars and save earths environment" stuff he likes to be seen as.


"Eschew flamebait. Avoid generic tangents."

These things are invariably popular, repetitive, and nasty. You may not owe $BILLIONAIRE better but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html

(We detached this subthread from https://news.ycombinator.com/item?id=34980579)


Thanks dang. Space man bad tangents are really tiresome and contribute nothing to the discussion.


He's lost himself in the fake popularity of being a social media celebrity. He started believing that having 100 million followers on a web site really means that a continent's worth of people adore you. For all his complaints about bots after he got cold feet on the Twitter purchase, he seems strangely naïve about how social media really works and what's real there.

“This is ridiculous,” he said, according to multiple sources with direct knowledge of the meeting. “I have more than 100 million followers, and I’m only getting tens of thousands of impressions.”

- https://www.theverge.com/2023/2/9/23593099/elon-musk-twitter...

By Monday afternoon, “the problem” had been “fixed.” Twitter deployed code to automatically “greenlight” all of Musk’s tweets, meaning his posts will bypass Twitter’s filters designed to show people the best content possible. The algorithm now artificially boosted Musk’s tweets by a factor of 1,000 – a constant score that ensured his tweets rank higher than anyone else’s in the feed.

- https://www.theverge.com/2023/2/14/23600358/elon-musk-tweets...


I almost feel like Elon's story is the ultimate one about someone getting addicted to popularity and social media, I've seen so many 'smart', respected people get onto platforms and then slowly but completely fall from grace.

The difference with Elon is he had real power, money and influence, so in the end he used that to actually buy Twitter, that's the ultimate social media addiction right there.

Much like Social media can be a distraction from our bigger desires and goals, I feel like Elon's buying of twitter is the ultimate distraction from the more intreasting work he was doing.


I think most geniuses are flawed. Yes Elon is a genius of sorts. I can't say what exactly but he definitely is smart to have gotten as far as he has. Luckily for geniuses of yesteryear social media didn't exist so they couldn't spread their flaws to a 100 million people at the click of a button. We just get anecdotes and the odd book from people close to the likes of Edison that he wasn't such a nice man.


Indeed, I think by the very definition, people that do such things, need to have something a little different about them, that difference can manifest in both genius and madness, perhaps both in the same person, social media just gives them a megaphone and an audience to amplify both.


A similar story to Donald Trump...it's interesting how social media and the addiction to constant attention can rot people's brains, and it doesn't discriminate regardless of financial status.


So the real reason behind him buying Twitter was to force the entire site to be his followers? Lmao.


This is pretty plainly obvious to anyone who has been a Twitter user throughout this whole thing.

I was (was) a daily user for the last ~8 years. Then a few months ago all of the sudden like half my timeline was either elons tweets or tweets about elon. I don’t follow him and never have. But there he was, all over my timeline.


I have been using Twitter for 12 years and this has not been "obvious" to me at all. You shouldn't assume that your opinions are obvious or even shared by the majority of people you mention.


Another 12 years user here: I had to block Elon because muting him wasn't enough, and I was getting tired of the sudden massive influx of Elon tweets I was seeing.

Just anecdata though.


I’ve blocked him - it was getting too much and I really don’t care what his view on things are.

I had a lot of admiration for him until he became more public - now, I think he is a cruel person who’s politics do not align one bit with me.


Same here.

I just did a quick survey of his tweets going back to roughly February 24th.

7 Meme

1 Twitter Ad

11 American Culture War/Politics

1 Spacex

8 Starlink/Spacex Retweets

2 Tesla Retweet

2 AI Hot Take

3 Irrelevant

This is a real bad noise-to-signal ratio for me.

Everything I'm interested in I can get from company accounts (SpaceX/Tesla) and/or third party reporting.

As a consequence, I decided to keep him blocked since otherwise his "algorithmically enhanced Ego" has a tendency of finding its way back into my perception.


It gets worse if you look at his replies. About 50% of them are just "cry-laugh emoji", but the interesting ones are who gets a "looking into it".


This is the problem in my view with people that use social media to post everything too much, who overshare.

Elon could have been this guy that was doing cool stuff, super smart and doing some good stuff for the world, but now most people think he is a jerk.

People that get known for their work, it's not really a good look for them to wade into politics or controversial topics they have no expertise or right to start talking about.


> now most people think he is a jerk

Do they? I don’t. I also know many people with many differing views and they don’t either. This looks again like you are projecting you opinion on the majority of people with no data to support it.


I’m assuming you thought you were responding to me here, but that is someone else.


The average person on the street seems to think so, I don't for the record.


Ahh but it gets attention and that is addictive.


This sub thread is a joke in of itself: reading on things people are so eagerly offended by.

Has no one thought that "following" Elon on Twiiter shouldn't be automagically assumed to be his fans or uniformly align with his ethos?


“My opinions are my own and not that of my employer or the entire world” yadda yadda. I’m not the only person to notice this..


The ""evil"" version of Tom from Myspace.


I mean if its true and he has 100 million followers and he has only 10000ish impression then there is something seriously wrong. Because that flat out makes no sense.

Even if you assume 60% bods, and 70% of users not reading their timeline.


The timeline of people you actively follow isn’t the default any more. They push the machine learning driven feed as the default option and switch back to it when you restart the app.

With that in mind, Elons main option is to ask the algorithm for more impressions in the black box feed. That will get him into the feeds of non followers and show his tweets to followers when they eventually log in.

I experienced something similar on LinkedIn. I used to have a lot of followers and high engagement. At some point it changed such that the algorithm could bury you or promote you as it sees fit. At that point the only option is to write content that the algorithm promotes rather than content which your followers find interesting. Everyone worked this out and started writing their vulnerable virtue signalling stories for engagement, and the platform went downhill.


It makes sense from the perspective of the “social media bait and switch”.

Intuitively you’d think that following someone indicates that you want to see their posts immediately after they post it, but the “algorithms” distort that entirely as a way of making money for the platform.

Whether it’s requiring people to pay $$ to reach more followers, or promoting “posts” (ads) from other paying accounts that you’re not following, or even promoting sticky content designed to keep users on the app a little bit longer (and thus expose them to more ads + boost their DAU count). The whole “timeline” paradigm is a lie. I mean, it’s rarely even sorted by time.


We don’t know how long the tweet in question had been up. 5 minutes? An hour?

Without this information it’s impossible to guess about the reasons because impressions accumulate over time.


Agree. I not taking him at his word that he is correct, and of course we would need more data. But it does sound strange.

My problem with all these twitter reporting reminds me of Tesla a few years ago, people just wildly extrapolating and infering from tiny amount of information and then deriving prove that Musk is a piece of shit and the company is going down in flames.

The first can be argued, but the second doesn't seem to be happening nearly as much as people claim.


Is it though? Just because somebody follows Musk does not mean they engage with his content enough for it to be floated up past everything else they may follow. It's very likely that a great deal of those followers don't find his content interesting, but don't find it objectionable enough to unfollow him.


I mean, an impression counts views normally right? It would seem strange that so few people saw the tweet.

Its also the case that he usually gets quite a few retweets and often lots of responses as well. So the number just seem low to me.


I missed this prior to now, but (in my experience with web analytics - I'm not a Twitter dev) impressions are typically when it appears in a users view. A "scroll past" is usually enough, because advertisers just want to know that it made it in front of somebody. Their marketers will handle "grabbing your attention." However, whether it ever makes it to a users view to begin with is determined by "The Algorithm", which would be seeded by a users past interactions. Ignore Musk tweets enough, and presumably, they'll stop showing up by default.


Sure, something was wrong, but it’s important whether it was a anti-Elon conspiracy or just a buggy complex system. When you’ve gotten rid of most engineering staff, I’d bet on the latter.


I consider him a casualty of social media addiction. In this case the alcoholic bought the bar.


he uses his account to announce product features as well. in that sense, it won't be any different than artificially boosting tweets by elected officials to ensure they get the attention they deserve. unfortunately, the user can put the personal back in personal account and tweet many things unrelated to their office (or in elon's case, twitter product updates). unfair use of privilege. but just as you previously argued that twitter is just a website, this sleight shouldn't get more than the attention it needs.


"He's lost himself in the fake popularity of being a social media celebrity. He started believing that having 100 million followers on a web site really means that a continent's worth of people adore you."

That seems to happen to almost all people who are popular on social media and Youtube. They have millions of loyal followers and it really gets to their head. The same happened to Jordan Peterson. He used to have good insights on psychology but lately he seems to believe he has perfect wisdom on everything and he has tons of people who tell him that.

As far as Musk goes, for me the breaking point was the Thai cave situation where he tried (and succeeded) to suck up attention with their submarine prototype although nobody working on it knew anything about cave diving. Sheer arrogance and attention seeking.


It has become evident that placing one's trust in mainstream media is an exercise in futility, as they are often found to be biased in their reporting against tech.

Elon Musk's claim regarding the algorithm flaw resulting in his de-ranking was indeed accurate.

The feature was intended to lower the ranking of frequently being blocked accounts. However, the flaw was that it did not account for larger accounts, allowing a small group of individuals to effectively engage in a DDOS attack against large accounts.


What’s mainstream media in this case? The Verge, a web-only publication that only writes about tech? Seems like stretching the definition beyond any usefulness…


Your question is quite intriguing. In my view, mainstream media refers to media organisations that are prominent in the current cultural and political environment.

In my opinion, these organisations have largely turned against the tech industry, perceiving it as a competitor and a generally negative force.

For instance, The Verge is owned by Vox, which I personally consider with all due respect an extremely biased leftist institution that has increasingly engaged in activist journalism in recent times.

In my opinion, The Verge has changed significantly since 2015/2016 and is now unrecognizable.


> "mainstream media refers to media organisations that are prominent in the current cultural and political environment"

So Fox News and Breitbart are mainstream media?

Fox is the most watched TV channel in the United States — hard to argue they're not an essential part of the current political environment. And Breitbart is certainly more prominent and influential than a niche site like The Verge. The latter's editors don't get White House jobs.


Fox News yes, Breitbart nope. They haven't been in the zeitgeist for years.

Its worth nothing that although there are no strict definitions of what counts as mainstream vs independent, the populist wings of both parties loosely group them in the same buckets.

A lifetime ago, I used to consider myself populist-left and since then there hasn't been much if any difference in what I consider as mainstream.


Why just distrust mainstream media? Social media and alt media are media too.

Media means to mediate, to get between and regulate discourse. In this case it’s to get between you and reality or you and others in the social media case.

The only way to escape media is to open your eyes and interact with things directly.


Additionally, it is worth noting that if one were a professional Twitter user with a discerning eye towards mainstream media, this issue may have been apparent from the outset.

10,000 impressions per tweet for an account with over ten million followers is remarkably low, and contradicts the principle of the law of large numbers.


I'm not too fond of Elon and have blocked him on Twitter because I don't like selfish personalities nor fanatism or personalism from the community, but he's saying normal and rightful stuff in these tweets. You can say a lot of things about him but not infer that with these tweets.

Sometimes these answers make me believe they are created for bot brigades and astroturfing.


“Greatness is a transitory experience. It is never consistent. It depends in part upon the myth-making imagination of humankind. The person who experiences greatness must have a feeling for the myth he is in. He must reflect what is projected upon him. And he must have a great sense of the sardonic. This is what uncouples him from belief in his own pretensions. The sardonic is all that permits him to move within himself. Without this quality, even occasional greatness will destroy a man.”

—from “Collected Sayings of Muad’Dib” by the Princess Irulan

Dune, Frank Herbert


It's interesting to think that maybe the reason Steve Jobs remains so highly admired is that he died before he had the chance to do or say something monumentally dumb that broke the illusion of his greatness.


I think it’s a weird narcissistic need to feel that they are the center of the conversation and so they are important. What is so interesting about Elon is that he was objectively quite important prior to Twitter... but that wasn't enough.

Whatever your target, whatever you achieve... it is not enough.


I was listening to a podcast recently (Maybe Ian Leslie on EconTalk) talking about people who are goal seeking vs people who get pleasure out of the process. The people who are goal seeking achieve their next goal but are never satiated - once achieved they need something to fill that "hole". It seems he might be one of those goal objective people. It's obviously more complicated than that but it does make a lot of sense intuitively.


if you have this massively outsized will to power/wealth/fame, it doesn't suddenly go away once you get there. this means that if you are super wealthy, for you not to become a black hole of power/fame accumulation, your path there has to have been almost an accident


[flagged]


> if your comment feels good to write, it's often a red flag

I'm not new to HN and I don't think any such rule of thumb exists. Writing vindictive flame comments is one thing, but writing a measured opinion of someone's actions is another. Your comment sounds condescending and I don't think it accurately represents the community's mindset here.


Their comment could have been replaced with “Elon bad” and it wouldn’t change the delta of how much you’ve learned from reading it. It was little more than a vindictive flame comment.

As for being condescending, I’ve often wondered how Dan manages not to sound condescending when explaining the philosophy behind why the rules are in place. It’s true that he’s articulate in a way that I probably can’t match.

But if you ask him, I’ll bet you money he’d agree that the Elon flame was a net negative.

I wanted to try something other than “downvote and move on,” since they’re new to the site. Maybe that was a mistake.


I think the sentiment of your flagged comment is fair. the parent was a bit of an non-contributive emotional diatribe that could more or less be replaced with "Elon bad". but the tone of your reply completely misses the mark. beyond seeming a bit like the kid trying to impress the teacher by telling others how to behave, the explicit assumption that the commenter was new to HN was extremely tactless, and the way you appeared to speak for the community as a whole was quite grating

my equally-unsolicited friendly advice to you would be that if you do look at someone's comment history before replying, it's quite uncouth to comment on it, and you probably should avoid speaking for the community, especially if what you're saying isn't literally directly in the community guidelines


It's condescending and not helpful because you are not a moderator, you just have a feeling of how things should work in your opinion.

If it's a rule violation in your opinion, then report it and it will be handled. Otherwise you at best come across as brown nosing and at worst the exact same kind of person that gets cancelled on Twitter for confronting and calling the police on a family barbecue in the park.


I do not really see how calling someone a petty pathetic vindictive man child spoiled whimsy attention seeker, is anywhere near a measured opinion.

That said, this is a prime example of downvote/flag moment and move on.


This comment is magnificently self-unaware


[flagged]


Hey, at least you're not hiding it.


I am shocked! Shocked!


If alien contact were made on the international space station - humanities representatives would be from certain countries that represent nations on earth, and broadly, the scientific community. It is not a perfect representation by any means, but it is far from the worst imaginable.

If sentient agi contact is made by Open AI, Sam represents humanity.


OpenAI is not for profit. It's capped profit.

Inventing AGI, which Sam Altman and Greg Brockman believe they can do, would make them our species' first trillionaires. Their personal net worths would rival the GDPs of several G8 nations.

If they cared so much about money, why would they intentionally limit their upside to 100x? I have not heard a single good answer.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: