
Wow. Screw non-profit, we want to get rich.

Sorry guys, but before this you were probably able to attract talent that isn't (primarily) motivated by money. Now you're just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally fine to be an AI startup. You just shouldn't pretend to be a non-profit then...




I agree, this sounds disappointing to me as well. My issue is how they're positioning themselves: basically a hyper-growth startup where you can get rich (but only 100x richer, because we're not greedy like other startups), but we're also a non-profit here to benefit humanity, so don't tax us like those evil corporations. What really bothers me, though, is that I don't know whether they honestly believe what they're saying or it's just a marketing ploy, because it's so much worse if they're deluding themselves.


Any returns from OpenAI LP are subject to taxes!


First of all, thank you for taking the time to respond to people criticizing this announcement.

I just feel like you're trying to have the best of both worlds. You want the hypergrowth startup that attracts talent and investors, but you also want the mission statement for people who aren't motivated by money. I suspect trying to maintain this middle ground will be an incredibly damaging factor moving forward, as the people who are purely profit-driven will look elsewhere and the people who are truly mission-driven will also look elsewhere.

I fully appreciate how challenging these things can be, that these decisions aren't trivial, and that you're obviously trying to do something different... but I really think you're taking the wrong step here. And because you'll see short-term gains from all the extra initial investment, you won't realize it for years, until it's too late and the culture has permanently shifted.

That said, while my trust has somewhat eroded (also because you wouldn't release details of your model recently), I still wish you luck in your mission.


Thanks for the thoughtful reply, it's much appreciated.

> you also want the mission statement for people that aren't motivated by money

I wouldn't agree with this — we want people who are motivated by AGI going well, and don't want to capture all of its unprecedentedly large value for themselves. We think it's a strong point that OpenAI LP aligns individuals' success with success of the mission (and if the two conflict, the mission wins).

We also think it's very important not to release technology we think might be harmful, as we wrote in the Charter: https://openai.com/charter/#cooperativeorientation. There was a polarized response, but I'd rather err on the side of caution.

Would love people who think like that to apply: https://openai.com/jobs


(I work at OpenAI.)

I think this tweet from one of our employees sums it up well:

https://twitter.com/Miles_Brundage/status/110519043405200588...

Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and we needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.

If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.


I believe you. I also believe there are now going to be outside parties with strong financial incentives in OpenAI who are not altruistic. I also believe this new structure will attract employees with less altruistic goals, which could slowly change the culture of OpenAI. I also believe there's nothing stopping anyone from changing the OpenAI mission further over time, other than the culture, which is now more susceptible to change.


Something something money and power corrupts?

We can just look at Google and see that "don't be evil" does not work when you've got billions of dollars and reach into everyone's private lives.


Thanks for your reply, and I appreciate you sharing your reasoning here.

However, this still sounds incredibly entitled and arrogant to me. Nobody doubts that there are many very smart and capable people working for OpenAI. But are you really expecting to beat the returns of the most successful startups to date by orders of magnitude and to be THE company developing the first AGI? (And even in this, for me, extremely unlikely case, the cap most likely wouldn't matter, since if a company developed an AGI worth trillions, the government/UN would have to tax/license/regulate it.)

Come on, you are deceiving yourself (and apparently your employees as well; the tweet you quoted is a good example). This is a non-profit pivoting into a normal startup.

Edit: Additionally, it's almost ironic that "Open"AI now takes money from Mr. Khosla, who is especially known for his attitude towards "Open". Sorry if I sound bitter, but I was really rooting for you and the approach in general, and I am absolutely sure that OpenAI has become something entirely different now :/..


> If we succeed, the return will exceed the cap by orders of magnitude.

Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).
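
(For what it's worth, here's a back-of-envelope sketch of where a figure like $10 trillion comes from. Both inputs are assumptions, not anything OpenAI has stated: the cap level is taken from the "low hundreds of billions" mentioned further down this thread, and "orders of magnitude" is read as at least two.)

    # Back-of-envelope sketch; illustrative numbers only.
    cap_return = 100e9           # ~$100B: roughly where the 100x cap is exhausted
    orders_of_magnitude = 2      # "exceed the cap by orders of magnitude"
    required_value = cap_return * 10 ** orders_of_magnitude
    print(f"${required_value / 1e12:.0f} trillion")  # -> $10 trillion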

For example, if there really is an "AGI algorithm", then what's to stop your competitors from implementing it too? Trends in ML research have shown that for most advances, there have been several groups working on similar projects independently at the same time, so other groups would likely be able to implement your AGI algorithm pretty easily even if you don't publish the details. And these competitors will drive down the profit you can expect from the AGI algorithm.

If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.


AGI == software capable of any economically relevant task.

If you have AGI, it is very clear you could very quickly displace the entire economy, especially as inference is much cheaper than training, which implies there will be plenty of hardware available at the time AGI is created.


People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists, but unforeseen factors like hardware costs make it much more expensive than optimists expected.

These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)


>These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)

I don't think I disagree very much with you then.

>People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists, but unforeseen factors like hardware costs make it much more expensive than optimists expected.

In worlds where this is true, OpenAI does not matter. So I don't really mind if they make a profit.

Or to put it another way, comparative advantage dulls the claws of capitalism such that it tends to make most people better off. Comparative advantage is much, much more powerful than most people think. But nonetheless, in a world where software can do all economically relevant tasks, comparative advantage breaks, at least for human workers, and the Luddite fallacy becomes a non-fallacy. At this point, we have to start looking at evolutionary dynamics instead of economic ones. An unaligned AI is likely to win in such a situation. Let's call this scenario B.

OpenAI has defined the point at which they become redistributive in the low hundreds of billions. In worlds where they are worth less than hundreds of billions (scenario A, which is broadly what you describe above), they are not threatening, so I don't care - they will help the world as most capitalist enterprises tend to, and most likely offer more good than the externalities they impose. And since, as in scenario A, they will not have created cheap software capable of replacing all human capital, comparative advantage will work its magic.

In worlds where they are worth more (scenario B), they have defined a plan to become explicitly redistributive and compensate everyone else, who is exposed to the extreme potential risks of AI, with a fair share of the extreme potential upsides. This seems very, very nice of them.

And should the extreme potential prove unrealizable, then no problem.

This scheme allows them to leverage scenario A in order to subsidize safety research for scenario B. This seems to me like a really good thing, as it will allow them to compete with organizations, such as Facebook and Baidu, that are run by people who think alignment research is unimportant.


The lowest estimate of the human brain's compute capacity is 30 TFLOPS; a more reasonable estimate is 1 PFLOPS. Achieving this level of compute would cost a few thousand dollars per hour, while a human costs $7-$500 per hour. On top of that, the energy bills would be outrageous. With Moore's law slowing down, the price drops by about 10x every 15 years. I also think it's fair to assume that the human brain, sculpted by a million years of evolution with extreme frugality, is fairly optimal as far as hardware requirements are concerned. So I tend to think "affordable AGI" might be around 45 years away.
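
(A rough sketch of that arithmetic; the dollar figures and the 10x-per-15-years rate are just the assumptions stated above, not measured values:)

    import math

    # Assumptions taken from the comment above (illustrative only):
    pflops_cost_now = 3000.0   # $/hour to rent ~1 PFLOPS today ("a few thousand dollars")
    human_cost = 7.0           # $/hour, the low end of the $7-$500 range
    improvement = 10.0         # compute prices improve ~10x ...
    period_years = 15.0        # ... every ~15 years (slowed Moore's law)

    periods = math.log(pflops_cost_now / human_cost, improvement)
    years = periods * period_years
    print(f"~{years:.0f} years until 1 PFLOPS costs about ${human_cost:.0f}/hour")
    # Prints ~39 years; with a human cost of a few dollars/hour it stretches
    # toward the ~45-year figure above.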


We have to raise a lot of money to get a lot of compute, so we've created the best structure possible that will allow us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.


Why not open this compute up to the greater scientific community? We could use it, not just for AI.


What makes you think AGI is even possible? Most current 'AI' is pattern recognition/pattern generation. I'm skeptical about the claims of AGI being possible, but I am confident that pattern recognition will be tremendously useful.


What makes you so sure that what you're doing isn't pattern recognition?

When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? 10 different people would probably give you 10 different answers, and few of them would mention that the way you love your apple is pretty distinct from the way you love your spouse. And yet, even though they failed to mention it, they wouldn't misunderstand you when you did mention loving an apple!

And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated, hard-to-describe patterns, for sure, but the RNN learns that it can't say "the ball run" in just the same way you learn to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
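
(To make that concrete, here's a minimal character-level RNN sketch, PyTorch assumed; the corpus, model size, and training length are all made up for illustration. The only signal the model ever gets is next-character prediction, yet the agreement-respecting string typically ends up more probable:)

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Tiny made-up corpus; the model is never told any grammar rules.
    text = "the ball runs. the dog runs. the balls run. the dogs run. "
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.RNN(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x):                 # x: (batch, time) of char ids
            h, _ = self.rnn(self.embed(x))
            return self.head(h)               # logits for the next character

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    ids = torch.tensor([[stoi[c] for c in text]])

    for step in range(500):                   # tiny corpus, so it overfits quickly
        logits = model(ids[:, :-1])
        loss = F.cross_entropy(logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    def avg_logprob(s):
        """Average log-probability per next character under the trained model."""
        x = torch.tensor([[stoi[c] for c in s]])
        logp = F.log_softmax(model(x[:, :-1]), dim=-1)
        return logp.gather(-1, x[:, 1:].unsqueeze(-1)).mean().item()

    # The grammatical string should typically score higher, purely because
    # strings like it appeared in training, not because of any stated rule.
    print(avg_logprob("the ball runs."), avg_logprob("the ball run."))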

If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There's tactics, strategy(!), surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?

What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...


You have pointed to examples where the tasks are pattern recognition. I certainly agree that many tasks humans perform are pattern recognition. But my point is that not ALL tasks are: intelligence involves pattern recognition, but not all of intelligence is pattern recognition.

Pattern recognition works when there is a pattern (repetitive structure). But in the case of outliers, there is no repetitive structure and hence there is no pattern. For example, what is the pattern when a kid first learns 1+1=2? Or why must 'B' come after 'A'? It is taught as a rule (or axiom, or abstraction) on which higher-level patterns can be built. So I believe that while pattern recognition is useful for intelligence, it is not all there is to intelligence.


What I'm trying to point out is that if you had asked someone whether any of those examples were "pattern matching" prior to the discovery that neural networks were so good at them, very reasonable and knowledgeable people would have said no. They would have said that generating sentences which make sense is more than any system _which simply predicted the next character in a sequence of characters_ could do.

Given this track record, I have learned to be suspicious of that part of my brain which reflexively says "no, I'm doing something more than pattern matching".

It sure feels like there's something more. It feels like what I do when I program or think about solutions to climate change is more than pattern matching. But I don't understand how you can be so sure that it isn't.


Aren't axioms just training data that you feed to the model?


> And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns.

The shapes of the resulting word strings do indeed form patterns. However, matching a pattern is, in fact, different from being able to knowledgeably generate those patterns so that they make sense in the context of a human conversation. It has been said that mathematics is so successful because it is contentless. This is a problem for areas that cannot be treated this way.

Go can be described in a contentless (mathematical) way, so success there is not surprising (though maybe to some it was).

It is on things that cannot be described in this manner that 'AGI' (Edit: 'AGI' based on current DL) will consistently fall down. You can see it in the datasets... try to imagine creating a dataset for the machine to 'feel angry'. What are you going to do, show it pictures of pissed-off people? This may seem like a silly argument at first, but try to think of other things characteristic of 'GI' for which it would be difficult to envision creating a training set.


Anyone who argues AGI is possible intrinsically believes the universe is finite and discretized.

I have found quantum ideas and observations too unnerving to accept a finite and discretized universe.

Edit: this is in response to Go, StarCraft, or anything that is boxed off. These AIs will eventually outperform humans on a grand scale, but the existence of 'constants', or being in a sandbox, immediately precludes the results from speaking to AI's generalizability.


I'm not sure what you're saying here.

Your arguments seem to also apply to humans, and clearly humans have figured out how to be intelligent in this universe.

Or maybe you're saying that brains are taking advantage of something at the quantum level? Computers are unable to efficiently simulate quantum effects, so AGI is too difficult to be feasible?

I admit that's possible, but it's a strong claim and I don't see why it's more likely than the idea that brains are very well structured neural networks which we're slowly making better and better approximations of.


Unless you assume some magic/soul/etc., the human brain is proof that there exists a non-impossible algorithm that learns to be a General Intelligence, and that it can run on non-impossible hardware.


Yes, I assume a magic/soul/etc., and I believe that the human brain is not stand-alone in creating intelligence. Check out this exciting video for a discussion of how 'thinking' can happen outside the brain. https://neurips.cc/Conferences/2018/Schedule?showEvent=12487


What makes you believe that you'll get there first?


I share the sentiment. The majority of the 'research' from OpenAI has been scaling up known algorithms, and almost all of their models have been built on top of research from outside OpenAI. My assessment is that OpenAI is currently not the leader in the field, but they want to get there by attracting talent through PR and money, which IMHO is a fine strategy.


I don't see the problem. If they get AGI, it will create value much larger than $100 billion. Much larger than trillions, to be honest. If they fail to create AGI, then who cares?


> (AGI) — which we define as automated systems that outperform humans at most economically valuable work — [0]

I don't doubt that OpenAI will be doing absolutely first-class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much: just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So the 'A' is a pointless qualification, in my opinion.

For me, the important part is how you define 'GI', and I don't like the given definition. What we will have is world-class task automation, which is going to be insanely profitable (congrats). But I would prefer not to confuse that idea with HLI (human-level intelligence). See [1] for a good discussion.

They will fail to create AGI, mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter: who will stop whom from using nukes, or AGI, or a superbug? Only political systems and worldwide cooperation can effectively deal with this... not a startup... not now... not ever. Period.

[0] https://blog.gregbrockman.com/the-openai-mission [1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...



