Microsoft is investing $1B in OpenAI (openai.com)
1119 points by gdb 3 months ago | 592 comments



The New York Times has a bit more context:

> Mr. Nadella said Microsoft would not necessarily invest that billion dollars all at once. It could be doled out over the course of a decade or more. Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies.

https://www.nytimes.com/2019/07/22/technology/open-ai-micros...


(I work at OpenAI.)

> It could be doled out over the course of a decade or more.

The NYT article is misleading here. We'll definitely spend the $1B within 5 years, and maybe much faster.

We certainly do plan to be a big Azure customer though!


>>"We certainly do plan to be a big Azure customer though!"

That's great. One question: where can I use Gym or Universe in the cloud with the render() option?

I've spent many hours trying to set up the environment in the cloud [1] without success.

[1]: https://stackoverflow.com/questions/40195740/how-to-run-open...



Probably a bit off topic here, but I was wondering if you could shed some light on why Elon Musk parted ways with OpenAI. On Twitter he said he disagreed with what OpenAI wanted to do. Could you tell me more about that? What OpenAI is doing seems pretty great.


Sam Altman is on record saying that they asked him to leave because he recruited talent from OpenAI for his other companies. Sam seemed quite philosophical about it though and was complimentary to Musk otherwise. It doesn't sound like there was a ton of bad blood there.


Is this the response you're referring to, with Sam Altman on the record? https://openai.com/blog/openai-supporters/



Seems like an entirely reasonable choice. He's actively commercializing AI and doesn't want any impression of a conflict of interest, especially now that OpenAI has some clear commercialization plans directly with Microsoft.

From the article:

> The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

Talent will be OpenAI's hardest challenge in reaching its goals.


Will this be $1 billion in cash, or in Azure credits?


Cash investment, as pointed out elsewhere in the thread.


Are you planning to open offices in Europe? Paris and London are hosting better and better AI ecosystems.


Greetings! I have some questions!

Since you work for OpenAI, are you looking at actual brain processes at all? I read the article and understand you'll be a big Azure customer; I wonder whether you'll also be conducting some brain research. I believe that for AGI to happen, we need to understand the brain.

I work with Cerebral Organoids, consciousness studies, and quantum physics.

I'd love to share / connect. We are launching the Cerebral Organoids into space today on a SpaceX rocket at 6pm EST; there are some thunderstorms, so we're hoping there aren't any further delays. DM me?

https://www.spacex.com/webcast


The space launch has been moved to tomorrow at 6pm EST.


How does the funding structure, employee comp / incentives, and licensing compare to DeepMind?


> (I work at OpenAI.)

Can you do an AI CTF like the Stripe distributed systems CTF sometime?


We should build one ourselves, actually. That's a terrific idea.


Thanks for the clarification, gdb!

Excited about the announcement.


Gosh, it's going to be so interesting to see a fabric of AI exist over the next decades across various cloud infra...

Will they fight?

Azure AI layers and, say, private company AIs like FB's (Ono Sendai), GCP's, AWS's, etc., where these AIs start learning how to protect themselves from attack...

Obviously it's super trivial to make API modifications to the FW/access rules in all these systems, so it will be trivial for them to start shutting down access (we have had this for decades, but it will be interesting to see it at scale).


Totally agree... but also think the world will benefit as well. "Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies."


$1 billion worth of Azure credits?


What is that in SchruteBucks?


More importantly it's 231 days of extra lunch break!


Despite all the negativity in replies, I try to remain optimistic that this investment in AGI-related research is going to be a net positive.

Congrats to the team, and break a leg!


I agree. We haven't had well-funded research facilities like this one since Bell Labs. Those were amazing days, when we saw a lot of breakthroughs. I wish companies like Microsoft and the rest would invest more in external research institutes.


I often think that if I were a billionaire, I'd rather spend hundreds of millions on some cool R&D projects than have some 100-meter boat that one uses only a few times a year. I could at least walk around in 20 or so years and say "I funded this" rather than pointing at a rusting boat nobody will ever care about.


Paul Allen had both:

https://en.wikipedia.org/wiki/Octopus_(yacht)

https://en.wikipedia.org/wiki/Allen_Institute

https://www.cs.washington.edu/building

In fact, this tradition of rich people founding universities and research institutions is nothing new. Stanford University was founded by a couple who said "The children of California shall be our children" after their child died. Andrew Carnegie founded the Carnegie Technical Schools, and John Harvard donated money and his library to a college founded two years earlier.


Allen donated about $2 billion to charitable causes [1] while he was alive. This included several whimsical things that weren't research, like the Experience Music Project and the Museum of Science Fiction. Relatively speaking, I believe he spent far more on luxuries he liked, including the world's most expensive yacht(s), a fleet of private jets, mansions around the world, private music concerts, and a few sports teams here and there.

[1] https://www.philanthropy.com/article/Paul-Allen-s-2-Billion-...


> This included several whimsical things that weren't research, like the Experience Music Project and the Museum of Science Fiction.

While not research, those things can have profound impacts on people. Several years ago a Star Wars exhibit came to the Indiana State Museum here in Indianapolis. They had an entire section dedicated to prosthetic devices, both in the films and in real life, and one of the video segments playing next to the props and real prosthetic devices was a clip of one of the inventors of the real technology talking about how watching the film version directly led him to pursue his career working on various prosthetic devices, trying to make them a reality.

These sorts of experiences can shape the creative process for one or more individuals in ways that might have far more profound effects on society than active research.


In many cases people will be more interested in your boat than in a bunch of startups nobody's ever heard of.


... unless one of those startups cures a/some cancer, heart disease, Parkinson's, Alzheimer's or ALS or something.

If I were a billionaire, that's exclusively where I'd be putting my money, selfishly.


That's a strange sentiment for a thread on OpenAI, considering it is one of many startups founded by a guy who decided to take the millions from his sale of PayPal and do cool R&D projects like spaceships, electric cars, solar power, AI, and brain-machine interfaces. Good thing Elon Musk didn't buy a boat, I guess.


> I often think that if I were a billionaire,I'd rather spend hundreds of millions on some cool R&D projects rather than having some 100 meter boat that one uses a only a few times a year.

Billionaires buy cars and boats because they're stores of value. For instance, a McLaren from the 90s is worth more today than when it was sold.


Sports cars and boats cost a fortune to maintain. They're terrible financial vehicles.


This article is more than five years old, so I'll let it speak for itself:

> This shows that in the 12 months to the end of June the value of classic cars as a whole was up by 28%, which compared with a rise of 12% for the FTSE-100 index of leading shares and a 23% slump in the price of gold.

https://www.theguardian.com/business/2013/sep/07/luxury-inve...


A couple of weeks ago, Google announced it will be researching AI in China.

HN had more positive comments regarding that announcement.


The Internet is a strange place..


Thank you!


The research that OpenAI’s doing is groundbreaking and the results are often beyond state-of-the-art. I aim to work in one of your research teams sometime!


Watch the Kool-Aid intake and you'll be just fine. Dreams are great and an absolute necessity for success, but create your own. Don't buy into everything you hear, especially Elon Musk talking about Artificial General Intelligence.


Oh, I'm well aware of the hype around AGI. My personal view is that AGI is kind of an asymptotic goal, something we'll get close to but never actually reach. Nevertheless, I would like to work on more pragmatic goals, like improving the current state-of-the-art language models and text generation networks. I'm actually starting by reimplementing Seq2Seq as described by Quoc Le et al. [1] for text summarization [2] (this code is extremely messy but it'll get better soon); a rough sketch of that kind of model is below. It's been interesting to learn about word embeddings, RNNs and LSTMs, and data processing within the field of Natural Language Processing. Any tips on how to get up to speed in this field would be helpful, as I'm trying to get into research labs doing similar work at my university.

[1]: https://papers.nips.cc/paper/5346-sequence-to-sequence-learn...

[2]: https://github.com/applecrazy/reportik/
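
For anyone curious what that kind of model looks like, below is a minimal encoder-decoder sketch in PyTorch. The dimensions, vocabulary size, and toy batch are placeholder assumptions for illustration; this is not the reportik code linked above, and a real summarizer would typically shift the decoder targets by one token and add attention.

    # Minimal seq2seq sketch (PyTorch). All sizes and the random toy batch are
    # arbitrary placeholders; a real setup shifts decoder targets by one position
    # and decodes greedily or with beam search at test time.
    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, src, tgt):
            _, state = self.encoder(self.embed(src))            # compress source into (h, c)
            dec_out, _ = self.decoder(self.embed(tgt), state)   # condition the decoder on it
            return self.out(dec_out)                            # per-step logits over the vocab

    model = Seq2Seq()
    src = torch.randint(0, 10000, (2, 20))   # 2 toy source articles, 20 tokens each
    tgt = torch.randint(0, 10000, (2, 10))   # 2 toy target summaries, 10 tokens each
    logits = model(src, tgt)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 10000), tgt.reshape(-1))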


AGI is not something unnatural that could never be attained. If biological systems can somehow attain it, there is no reason other kinds of man-made systems cannot attain it.

The first main issue is compute capacity. The human brain has the equivalent of at least 30 TFLOPS of computing power, and this estimate is very likely off by two orders of magnitude.

Assume that simulating one synapse somehow takes only one transistor (a gross underestimate). To simulate the number of synapses in a single human brain would then require as many transistors as in roughly 10,000 NVIDIA V100 GPUs, one of the largest mass-produced silicon chips!
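
A quick back-of-the-envelope version of that estimate, assuming roughly 2e14 synapses in a human brain and about 21 billion transistors in a V100 (both round figures are my own assumptions, not the parent's):

    # Rough arithmetic only; both constants are assumed round figures.
    synapses = 2e14               # assumed synapse count for one human brain
    transistors_per_v100 = 21e9   # assumed transistor count for one NVIDIA V100
    print(synapses / transistors_per_v100)  # ~9500, i.e. on the order of 10,000 GPUs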

The second main issue is training neurons that are far more complex than our simple arithmetic adders. Backprop doesn't work for such complex neurons.

The third big problem is training data. A human child churns through roughly 10 years of training data before reaching puberty. A man-made machine can perhaps take advantage of the vast data already available, but there still needs to be some structured training regimen.

So, in relative comparison with the human brain, current AI efforts are playing with toy hardware and toy algorithms. It's actually surprising that we have gotten as far as we have regardless.


>My personal view is that AGI is kind of an asymptotic goal, something we'll get kind of close to but never actually reach.

Personally, I think it is only a matter of time. Though I suspect that we will probably 'cheat' our way there first with the wetware from cultured neurons that various groups are developing, before we manage to create it in completely synthetic hardware. Also, it might just be the wetware that leads us to the required insights. This is very problematic territory however. I think we are very likely to utterly torture some of the conscious entities created along this path.


What has Musk got to do with this?


Have you thought about using AI instead of parts of the government? There must be a lot of bits that can be automated. Do you think that an AI-led government could remove the left/right divide that exists at the moment? If everyone just filled in a huge form that told the AI what was important to them, this could be used to drive policy.


Filling in a form about what is important to you is just a proxy for voting.

I don't think an AI would help the left/right divide in this way, because certain news outlets would still have the same incentives to manipulate what people desire in more extreme directions.


Indeed. Going by past leaps in science and technology, we will probably see something really cool and useful come out of this, even if it isn't AGI. I'm fine with getting a superbike even if the funding was for an impossible FTL drive.


Congrats on the fundraise Greg and team!

Does this mean that OpenAI may not disclose progress, papers with details, and/or open source code as much as in the past? In other words, what proprietary advantage will Microsoft gain when licensing new tech from OpenAI?

I understand that keeping some innovations private may help commercialization, which may help raise more funds for OpenAI, getting us to AGI faster, so my opinion is that could plausibly make sense.


We'll still release quite a lot, and those releases won't look any different from the past.

> I understand that keeping some innovations private may help commercialization, which may help raise more funds for OpenAI, getting us to AGI faster, so my opinion is that could plausibly make sense.

That's exactly how we think about it. We're interested in licensing some technologies in order to fund our AGI efforts. But even if we keep technology private for this reason, we still might be able to eventually publish it.


Eventually open ai?


I thought from day one that the name «OpenAI» would at best be a slight misnomer, and at worst indicative of a misguided approach. If AGI is close to being achieved, sharing key details of the approach with any actors at all could trigger a Manhattan Project-type global arms race where safety was compromised and the whole thing became insanely risky for the future of humanity.

Glad to see that the team is taking a pragmatic, safety-first approach here, as well as towards the near-term economic realities of funding a very expensive project to ensure the fastest possible progress.

In the early days of OpenAI, my thoughts were that the project had good intentions, but a misguided focus. The last year has changed that, though. They absolutely seem to be on the right track. Very excited to see their progress over the next years.


Not to worry. No one is anywhere close to achieving true AGI so any safety concerns are a moot issue. It's akin to worrying about an alien invasion.


> No one is anywhere close to achieving true AGI

No one knows how far off true AGI is, just like no one in 1940 (or 1910) knew how far off fission weapons were.

EDIT: I quite liked this article from a few years back [0], and the fission weapon prediction example is stolen from there.

0: https://intelligence.org/2017/10/13/fire-alarm/


Really? I thought by 1940 physicists generally understood fission and theoretically understood how to build a bomb - they just needed to find enough distilled fissile material (which was hard to do). And indeed, once they had enough U235, they had such a high degree of confidence in the theory, that they built a functioning U235 bomb without ever having previously tested one.


In 1939, Enrico Fermi expressed 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible. And, if you're working with U238, it basically is! But it turns out that it's possible to separate out U235 in sufficient quantities to use that instead.

On the 2nd of December, 1942 he led an experiment at Chicago Pile 1 [1] that initiated the first self-sustaining nuclear reaction. And it was made with Uranium.

In fairness to Fermi, nuclear fission was discovered in 1938 [2] and published in early 1939.

0: https://books.google.com/books?id=aSgFMMNQ6G4C&pg=PA813&lpg=...

1: https://en.wikipedia.org/wiki/Chicago_Pile-1

2: https://en.wikipedia.org/wiki/Nuclear_fission#Discovery_of_n...


> 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible

But the fact that Fermi was doing such a calculation in the first place proves that we knew in principle how a fission weapon could work, even if we didn't know "how far off [they] were". As soon as we figured out the moon was just a rock 240,000 miles away, we knew in principle we could go there, even if we didn't know how far off that would be.

By contrast, we don't know what consciousness or intelligence even is. A child could define what walking on the moon is, and Fermi was able to define a self-sustaining nuclear reaction as soon as he learned what nuclear reactions were. What even is the definition of consciousness?


> as soon as we figured out the moon was just a rock 240,000 miles away, we knew in principle we could go there, even if we didn't know how far off that would be

I have problems agreeing with that specific claim, knowing that both "the rock" and the distance were known to some ancient Greeks around 2200 years ago.

https://en.wikipedia.org/wiki/Hipparchus

Hipparchus estimated the distance to the Moon at between 62 and 80 Earth radii (depending on the method; he intentionally used two different ones). Today's measurements put it between 55 and 64.


Holy shit, that is so impressive. They didn't even have Newton's law of gravity yet.

Once we had Newton's law of gravity though, we knew the distance, radius, mass, and even surface gravity of the moon. Would you say it's fair to say that by then we knew in principle we could go there and walk there?

(P.S. I assume you know this but the way you wrote your comment makes it seem like our measurements of lunar distance are nearly as inaccurate as Hipparchus's, when we actually know it down to the millimeter (thanks to retroreflectors placed by Apollo, actually). The wide variation from 55x to 64x Earth's radius is because it changes over the course of the moon's orbit, due to [edit: primarily its elliptical orbit, and only secondarily] the Sun and Jupiter's gravity.)


> The wide variation from 55x to 64x Earth's radius is because it changes over the course of the moon's orbit, due to the Sun and Jupiter's gravity

I think you’re not only wrong but even Kepler and Newton already knew that better than you:

https://en.m.wikipedia.org/wiki/Elliptic_orbit

“Strictly speaking, both bodies revolve around the same focus of the ellipse, the one closer to the more massive body, but when one body is significantly more massive, such as the sun in relation to the earth, the focus may be contained within the larger massing body, and thus the smaller is said to revolve around it.”

But maybe you have some better information?


No you're right, the Sun and Jupiter are a secondary effect to the elliptical orbit, I skimmed the Wikipedia page too quickly:

> due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by the gravitational effects of various astronomical bodies – most significantly the Sun and less so Jupiter

https://en.wikipedia.org/wiki/Lunar_distance_(astronomy)#Per...


Thanks! Now back to your other question:

> Once we had Newton's law of gravity though, we knew the distance, radius, mass, and even surface gravity of the moon.

I think it was more complicated than what you assume there. Newton published his Principia in 1687, but before 1798 we didn't know the gravitational constant:

https://en.wikipedia.org/wiki/Cavendish_experiment

However...

> Would you say it's fair to say that by then we knew in principle we could go there and walk there?

If you mean "we 'could' go if we had something we were sure we didn't have," then there is indeed a written "fiction" story published even before Newton's Principia:

https://en.wikipedia.org/wiki/Comical_History_of_the_States_...

It's the discovery of the telescope that allowed people to understand that there are other "worlds" and that one would be able to "walk" there.

Newton's impact was to demonstrate that there is no "mover" (which many before identified as a deity) providing the motion of the planets, but that their motions simply follow from their properties and the "laws." Before that, most expected Aristotle to be relevant:

https://en.wikipedia.org/wiki/Unmoved_mover

"In Metaphysics 12.8, Aristotle opts for both the uniqueness and the plurality of the unmoved celestial movers. Each celestial sphere possesses the unmoved mover of its own—presumably as the object of its striving, see Metaphysics 12.6—whereas the mover of the outermost celestial sphere, which carries with its diurnal rotation the fixed stars, being the first of the series of unmoved movers also guarantees the unity and uniqueness of the universe."


None of this really counters the core point: that we don't know how long it will be before we have AGI. Is there some way to define consciousness, yet to be discovered, that makes the problem tractable?


Your core point (and that of the MIRI article you linked to) is not just "we don't know". It's that the chance of being imminent and catastrophic is worth taking seriously.

I am of course not saying you're wrong that "we don't know". We obviously don't know. It's possible, just like it's possible that we could discover cheap free energy (fusion?) tomorrow and then be in a post-scarcity utopia. But that's worth taking about as seriously as the possibility that we'll discover AGI tomorrow and be in a Terminator dystopia, or also a post-scarcity utopia.

More importantly, it's a distraction from the very real, well-past-imminent problems that existing dumb AI has, such as the surveillance economy and misinformation. OpenAI, to their credit, does a good job of taking these existing problems quite seriously. They draw a strong contrast to MIRI's AI alarmism.

Have you ever read idlewords? Best writing I know of on this subject: https://idlewords.com/talks/superintelligence.htm


> In 1939, Enrico Fermi expressed 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible. And, if you're working with U238, it basically is! But it turns out that it's possible to separate out U235 in sufficient quantities to use that instead.

You are moving the goalposts. You mentioned "fission weapons" in the first place, and now you cite a quote about a nuclear fission reactor, which is a whole different thing.


A self-sustaining nuclear reaction is a prerequisite for a fission weapon. (that's what allows the exponential doubling of fission events to occur)

A nuclear reactor was also required for the production of Pu-239, which is what 2 of the first 3 bombs were made from.


They did understand it theoretically. This is the key flaw in any analogy between AI risk and nuclear weapons.


Quite fair, but not 100% certain.

Almost nobody really knows how developed the state-of-the-art theory / applied technology is in the confidential advances that the usual suspects may have already achieved (i.e. DeepMind, OpenAI, Baidu, the NSA, etc.).

AGI could have already been achieved somewhere, even if only theoretically, and, like when Edison first got a light bulb to work, we could still be using oil lamps, knowing nothing about electricity, light bulbs, or energy distribution networks / infrastructure.

That would be the actual current technology level: new, and mostly not yet implemented.

Back then you wouldn't have believed it if someone had told you, "hey, in ten years city nights won't be dark anymore."


This is an AI equivalent of believing that the NSA has proved that P=NP and can read everyone's traffic.

There's no way to disprove it, but given that in the open literature people haven't even found a way to coherently frame the question of general AI, let alone theorize about it, it becomes just another form of magical thinking.


You're partially right (because AGI really looks VERY far away given the current state of publicly known theory), but it's not exactly "magical thinking".

There are several public examples of theory/technology radically more advanced than what was publicly known to be possible at a certain time, kept secret by governments / corporations for a very long time (decades).

Lockheed achieved the Blackbird decades before it was even admitted that a technology like that could exist. Looking backwards, it may seem like an "incremental" advance, but it wasn't; the engineering required to make the Blackbird fly was revolutionary for the time when it was invented (back in the 50s / 60s).

The Lockheed F-117 and its tech had a similar path, only somewhat acknowledged in the late 80s (and this was 70s technology, probably based on theoretical concepts from the 60s).

More or less the same could be said about the tech at Bletchley Park: existing tech / theory propelled to extraordinary capabilities by radical improvements achieved through new top-secret advances in engineering. The hardware, events, and advances at Bletchley Park were kept secret for years (I think only in the 50s did they start to be carefully mentioned, though not fully acknowledged, and nothing even close to the details currently found on Wikipedia).

At any given time there could be a lot of theory/technology jump-aheads being achieved out there, several decades ahead of the publicly published/known, supposedly current, theory/technology.


The point is, we don't need to know exactly how consciousness works to create AGI. In theory, we could just simulate all the neurons in the brain on a supercomputer cluster and voilà, we have AGI. Of course, it's not that simple, but you get my point.


This is a flawed analogy. The conceptual basis of nuclear weapons was well understood as soon as it was learned that the atom has a compact nucleus. The energy needed to bind that nucleus together gives a rough idea of the power of a fission weapon. If that energy could be liberated all at once, it would make an explosive orders of magnitude more powerful than anything known.

It was hard to predict when or if such a thing could be made, but everyone knew what was under discussion.

Compare this to AGI, some vaguely emergent property of a complex computer system that no one can define to anyone else's satisfaction. Attempts to be more precise about what AGI is, how it would first manifest itself, and why on earth we should be afraid of it rapidly devolve into nerd ghost stories.


  1932 neutron discovered
  1942 first atomic reactor
  1945 fission bomb
Now for AI

  1897 electron discovered
  1940's vacuum tube computers
  1970's integrated circuits
  1980's first AI wave fails, AI winter begins
  2012 AI spring begins
  2019 AI can consistently recognize a jpeg of a cat, but still not walk like a cat
  ???? Human level AGI
It doesn't seem comparable one way or the other, in many ways. But if we do compare them, AI is going much slower and with more failure, backtracking, and uncertainty.


    1943 First mathematical neural network model
    1958 Learning neural network classifies objects in spy plane photos
    1965 Deep learning with multi-layer perceptrons

    2010 ImageNet error rate 28%
    2011 ImageNet error rate 25%
    2012 ImageNet error rate 16%
    2013 ImageNet error rate 11%
    2017 ImageNet error rate 3%
    2019 Pre-AGI


Beer * beets * bears * Battlestar Galactica


what?


BEER * BEETS * BEARS * BATTLESTAR GALACTICA


> This is a flawed analogy. The conceptual basis of nuclear weapons was well understood as soon as it was learned that the atom has a compact nucleus. The energy needed to bind that nucleus together gives a rough idea of the power of a fission weapon. If that energy could be liberated all at once, it would make an explosive orders of magnitude more powerful than anything known.

Extrapolating as you seem to be here, when should I expect to see a total conversion reactor show up? I want 100% of the energy in that Uranium, dammit - not the piddly percentages you get from fission!

Seriously, I think you overestimate how predictable nuclear weapons were. Fission was discovered in 1938.


If you read your own Wikipedia link, you'd see that Rutherford's gold foil experiments were started in 1908, his nuclear model of the atom was proposed in 1911—we even split the atom in 1932! (1938 is when we discovered that splitting heavier atoms could release energy rather than consume it.)

We haven't even had the AGI equivalent of the Rutherford model of the atom yet: what's the definition of consciousness? What is even the definition of intelligence?


You might not need a definition of consciousness. Right now it looks like you can get quite far with "fill in the blanks"-type losses (GPT-2 and BERT) in the case of language understanding, and self-play in the case of games.
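
For concreteness, here is a minimal sketch of such a "fill in the blanks" (masked-token) objective in PyTorch. The tiny model, token ids, and mask pattern are placeholders; note that BERT is the masked case, while GPT-2 actually uses left-to-right next-token prediction rather than masking.

    # Toy masked-language-modeling loss: hide some tokens, score the model only on them.
    import torch
    import torch.nn as nn

    vocab_size, mask_id = 30000, 0
    tokens = torch.randint(1, vocab_size, (1, 12))   # one sentence of 12 token ids
    mask = torch.zeros(1, 12, dtype=torch.bool)
    mask[0, 3::4] = True                             # blank out every 4th position
    inputs = tokens.masked_fill(mask, mask_id)       # replace blanks with a [MASK] id

    model = nn.Sequential(nn.Embedding(vocab_size, 64), nn.Linear(64, vocab_size))
    logits = model(inputs)                           # (1, 12, vocab_size)
    loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # only the blanks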


We are indeed getting impressively far. Four decades after being invented, machine learning went from useless to useful to enormous societal ramifications terrifyingly quickly.

However, we are not getting impressively close to AGI. That's why we need to stop the AGI alarmism and get our act together on the enormous societal ramifications that machine learning is already having.


I think there is a lot of evidence that explosive progress could be made quickly: AlphaGo Zero, machine vision, sentiment analysis, machine translation, voice, etc. etc.

All these things have surged incredibly in less than a decade.

It's always a long way off until it isn't.


Those are all impressive technical achievements to be sure, but they don't constitute evidence of progress toward AGI. If I'm driving my car from Seattle to Honolulu and I make it to San Diego it sure seems like I made a lot of progress?


> I think there is a lot of evidence that explosive progress could be made quickly. Alphago zero, machine cision, sentiment analysis, machine translation.. voice.. etc etc etc

Not at all; these are all one-trick ponies and bring you nowhere close to real AGI, which is akin to human intelligence.


The Manhattan Project is a very apt analogy. Even if you believe that AGI is impossible, it should be possible to appreciate that many billions would quickly be invested in its development if somehow a viable pathway to it became clear. Even if just to a few well-connected experts.

This is what happened when it became known nuclear weapons were a viable concept. The technology shifted power to such an extreme degree that it was impossible not to invest in it, and the delay from «likely impossible» to «done» happened too fast for most observers to notice.


The Manhattan project happened when the entire conceptual road map to fission weapons was understood. This is manifestly not the case with AI, which can be charitably described as "add computers until magic".


I didn’t compare OpenAI to the Manhattan Project. I was pointing out that if a small number of people discover a plausible conceptual pathway to AGI, a similar project will happen.


And I'm pointing out that the conceptual breakthroughs that preceded such an engineering sprint happened in the open literature. Wells was writing sci-fi about atomic weapons in 1914. He based it off of a pop-science book written in 1909.

We don't have any such understanding, or even a definition, of 'AGI'.


Wells' atomic-bomb sci-fi was of the type «there is energy in the atom, and maybe someone will use this in bombs someday». Nowhere close to the physical reality of a weapon, and more in the realm of philosophy, which is where strong AI currently is. We have an existence proof of intelligence already, after all. The idea is not based on pure fantasy, even though the practicalities are unknown.

Leo Szilard had more plausible philosophical musings in the early thirties, that did not have root in any workable practical idea. The published theoretical breakthroughs you mention didn’t happen until the late thirties. Nuclear fission, the precursor to the idea of an exponential chain reaction, happened only in 1938, 7 years before Trinity.


The issue with strong AI is not that "practicalities are unknown", any more than the issue with Leonardo da Vinci's daydreams of flying machines were that "practicalities are unknown".

He didn't have internal combustion engines, but that's a practicality, other mechanical power sources already existed (Alexander the Great had torsion siege engines). They would never be sufficient for flight, of course, but the principle was understood.

But he could never have even begun to build airfoils, because he didn't have even an inkling of proto-aerodynamics. He saw that birds exist, so he drew a machine with wings that flapped. Look at the wings he drew: https://www.leonardodavinci.net/flyingmachine.jsp

That's an imitation of birds with no understanding behind it. That's the state of strong AI today: we see that humans exist, so we create imitations of human brains, with no understanding behind them.

That led to machine learning, and after 40 years of research we figured out that if you feed it terabytes of training data, it can actually be "unreasonably effective", which is impressive! How many pictures of giraffes did you have to see before you could instantly recognize them, though? One, probably? Human cognition is clearly qualitatively different.

The danger of machine learning is not that it could lead to strong AI. It's that it is already leading to pervasive surveillance and misinformation. (idlewords is pretty critical of OpenAI, but I actually credit OpenAI with taking this quite seriously, unlike MIRI.)


Why do we assume that AGI requires billions of $? Fundamentally, we don't know how to do it, so it may just require the right software design.

Nuclear weapons required enriched uranium, and the gaseous diffusion process of the time was insanely power-hungry. Like a non-negligible (>1%?) percentage of the US's entire electrical generation power-hungry.


Yes I think the better analogy is Fermat's Last Theorem. It didn't require billions of dollars, it just required one incredibly smart specialist grinding on the problem for years.


AGI = Alien invasion


I wouldn't be so sure the Manhattan Project-type global arms race isn't already happening.


The atomic bomb was based on science theory. A computer can run many programs and do a great many things, but it will never be able to think by itself.


> The atomic bomb was based on science theory.

Our study of (automated) intelligence is based on science too.

> A computer ... will never be able to think by itself.

Turing wrote an entire paper about this (Computing Machinery and Intelligence), where he rephrases your statement (because he finds it to be meaningless) and devises a test to answer it. He also directly attacks your phrasing of "but it will never":

> I believe they are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general.

> A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.


> A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.

This seems like a cop out. Sure, if you do your calculations wrong, it doesn’t behave as you expect. But it’s still doing exactly what you wrote it to do. The surprise is in realizing your expectations were wrong, not that the machine decided to behave differently.


I think any AI researcher has a tale where an algorithm they wrote genuinely took them by surprise. Not due to wrong calculations, but by introducing randomness, heaps of data, and game boundaries where the AI is free to fill in the blanks.

A good example of this is "move 37" from AlphaGo. This move surprised everyone, including the creators, who were not skilled enough in Go to hardcode it: https://www.youtube.com/watch?v=HT-UZkiOLv8


Investing in a bubble only to make sure the money goes back to yourself. Seems like an economic loophole. Do you think computers will start to have dreams and desires? Abusing such a machine would be unethical. Go ahead and build a better OCR; just don't fall for the AGI hype.


> Our study of (automated) intelligence is based on science too.

Can you elaborate on which parts of science you are talking about here?


All sciences that collaborate with the field of AI: Cognitive Science, Neuroscience, Systems Theory, Decision Theory, Information Theory, Mathematics, Physics, Biology, ...

Any AI curriculum worth its salt includes the many scientific and philosophical views on intelligence. It is not all alchemy, though the field is in a renewal phase (with horribly hyped nomenclature such as "pre-AGI", and the most impressive implementations coming from industry and government, not academia).

And even though the atom bomb was based on science too, there is this anecdote from Hamming:

> Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."


It does not need to. It just needs to get complex enough. This is from a 1965 article:

"If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machine off, because they will be so dependent on them that turning them off would amount to suicide."


I agree with the above, but imagine the same argument where "the machines" is replaced with "subject-matter experts", or "politicians acting on the advice of subject-matter experts".

The accumulated knowledge and skills of not just specialised individuals but entire institutions, working on highly technical and abstract areas of society, seems like it has created a kind of empathy gap between the people ostensibly wielding power and those who are experiencing the effects of that power (or the limits of that power).

> "... turning them off would amount to suicide."

Although this conclusion appears equally valid in the replacement argument, it sadly doesn't come with the wanted guarantee of "therefore that wouldn't happen".


> A computer can run many programs and do a great many things, but it will never be able to think by itself.

A computer being able to simulate a brain that thinks for itself is the logical extrapolation of current brain-simulation efforts. Many people think there are far less computationally intensive ways to make an AI, but "physics sim of a human brain" is a good thought experiment.

Unless you think there's something magic about human brains? Using "magic" here to mean incomprehensible, unobservable, and incomputable.


> A computer being able to simulate a brain that thinks for itself is the logical extrapolation of current brain-simulation efforts

Except that our current neural networks have nothing to do with the actual neurons in our brain and how they work.


I believe ekianjo wasn't talking about neural networks, but simulations using models that are similar to how neurons work. Computational neuroscience is a thing.


That's quite a claim, considering that we don't know what the word "think" means.


"maybe" eventually openAI


Maybe eventually openAI


maybe eventually open maybe ai


does anyone serious (non-encumbered) actually believe that?


> We'll still release quite a lot, and those releases won't look any different from the past.

I don’t mean to parse your words, but will you continue to publish using the same exact criteria as before or will there be a new editorial filter?


I want to take everything OpenAI says at face value (they seem like good folk), but I can't help but wonder at the recent choice to keep GPT-2 closed, on what seemed like pretty thin safety arguments to me.

Now, the demonstrated ability to produce new models which are closed, but maybe can be used as services on a preferred partner's cloud, looks very commercially relevant? How will these conflicts be managed, or is it more like "we are just a commercial entity now, of course we'll do this"?


Their handling of OpenAI Five rubbed me the wrong way as well. The whole operation smelled very PR-ish to me personally -- unnecessarily/unjustifiably hyped representatives, complaining that the Dota community wanted to see OpenAI Five play a normal Dota match against pros rather than in a heavily constrained environment that benefited the bots, among other things.


Same. They left the OpenAI Five project half-baked, and that's very disappointing.


An earlier comment mentions that they are for-profit now (changed from non-profit a while ago).



$1B is a lot of money. Microsoft is not a charity foundation, so the suspicion is obvious.

> We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider—so we’ll be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems.

Maybe it's because I'm not an expert, but what does it really mean? Do people understand what "Microsoft will become our exclusive cloud provider" means?

OpenAI is great, but suspicion is understandable from the users' side when so much commercial money is involved.


My "guess" is they're offering $1B worth of Azure services. Which costs MSFT probably much less than $1B.

My "guess" is that it means MSFT has access to sell products based off the research OpenAI does to MSFT's customers. Having early access to advanced research means MSFT could easily make this money back by selling better AI tools to their customers.

Also a great time to point out that while "Microsoft is not a charity foundation" it does offer a ton of free Azure to charities. https://www.microsoft.com/en-us/nonprofits/azure This has been an awesome thing to use when helping small non-profits with little money to spend on "administrative costs".


> My "guess" is they're offering $1B worth of Azure services. Which costs MSFT probably much less than $1B.

It's a cash investment. We certainly do plan to be a big Azure customer though.

> My "guess" is that it means MSFT has access to sell products based off the research OpenAI does to MSFT's customers. Having early access to advanced research means MSFT could easily make this money back by selling better AI tools to their customers.

I'm flattered that you think our research is that valuable! (As I say in the blog post: we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.)


OpenAI has achieved some amazing results and I congratulate them for their accomplishments, but labeling any of that as "pre-AGI" is intellectually dishonest and misleading at best. They haven't shown any meaningful progress toward true AGI.

When I was 10 I created some "pre-time travel" technology by designing an innovative control panel for my time machine. Sadly I ran into some technical obstacles later in the project. OpenAI is at about the same phase with AGI.


Sorry for the cowardice of this throwaway account, but it freaks me out that Musk left, and Thiel is still there.

Going back in time:

> Musk has joined with other Silicon Valley notables to form OpenAI, which was launched with a blog post Friday afternoon. The group claimed to have the goal “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

What happened here?

I know it’s far off, but I am concerned about AGI misanthropy and the for-profit turn of OpenAI. Who is the humanist anchor, of Elon’s gravitas, left at OpenAI?

What happened to the original mission? Are any of you concerned about this? Can you get rid of Peter Thiel please? Can we buy him out as a species? I respect the man’s intellect yet truly fear his misanthropy and influence.

Apologies for the rambling, but you all got me freaked out a bit. I had, and still do have such high hopes for OpenAI.

Please don’t lose the forest for the trees.


They parted on good terms; Musk was competing for talent (e.g., Andrej Karpathy leaving OpenAI for Tesla).

See: https://twitter.com/elonmusk/status/1096987465326374912?lang...


>but it freaks me out that Musk left,

Why? He left due to a possible conflict of interest: Tesla is researching AI for self-driving vehicles, and it wouldn't surprise me if SpaceX does at some point too (assuming they aren't already).


If your team ever gets frustrated by ARM (Azure Resource Manager), have them shoot me an email for my old proposal on how to fix it (source: I used to work at Azure and was perennially frustrated by ARM's design).


I'm interested to hear more about this proposal! What problems does it aim to fix? Would you be willing to share?


Can I say "most of them"? Basically it simplifies ARM and the Azure API significantly and makes Azure operate more like "infrastructure as code". But I'd need to look at the latest version of Azure to see specifically what the remaining pain points are now. I could have a more specific conversation in a different setting.

I remember the first meetings about ARM when the resource IDs were presented, and a few people immediately asked "what if someone wants to rename a resource"? Years later you still could not do that (I'm hoping they've fixed that by now?).

It seemed to me that ARM was the result of design by a super-smart committee, and got a lot wrong. When I was there, more senior folks told me not to worry, that's just the Microsoft way (wait for version 3). I do have to admit that it's turning out they knew more than I did (shocking!), as over time I've seen some of the stuff that was inexplicably terrible in v1 become much, much better in later versions.


If you manually rename a resource and refer to it by resource ID, I don't think ARM understands anything about it and assumes it's a new resource. That's just from using ARM, though; I don't know its internals.

They are investing a good amount in ARM lately, though. The VS Code language server is pretty good, and the export template feature got much better.


> They are investing a good amount in ARM lately, though. The VS Code language server is pretty good, and the export template feature got much better.

Awesome! I have sheepishly been using GCP, AWS, and DO. I last gave Azure a shot last year, but perhaps it's time to take another look.


Thank you for taking the time to clarify & correct my statements!


Just saw this post. Wow, that’s a big cash investment and certainly makes this very significant.


> by selling better AI tools to their customers

Microsoft really needs this. ML.NET is quite anemic compared to the industry-standard AI toolkits: TensorFlow, Theano, scikit-learn, Torch, Keras, etc.

https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotn...


Disclosure: I work at Azure in AI/ML

Another way to think about it is that for folks building in .NET, ML.NET makes it easy for them to start using many of the ML techniques without having to learn something new.

On top of that, we FULLY support all the industry standard tools - TF, Keras, PyTorch, Kubeflow, SciKit, etc etc. We even ship a VM that lets you get started with any of them in one click (https://azure.microsoft.com/en-us/services/virtual-machines/...) and our hosted offering supports them as well! (e.g. TF - https://docs.microsoft.com/en-us/azure/machine-learning/serv...)


Nice, thanks for the info. Maybe it's proper to think of ML.NET as training wheels, then!


For what it's worth, we are pretty proud of the performance as well - I wouldn't call it training wheels :)

On both scale up and run times, it measures up as among the best-in-class[1]. That is to say, for the scenarios which people use it most commonly (GBT, linear learners), it's a great fit!

[1] https://arxiv.org/pdf/1905.05715.pdf


Microsoft has Windows.AI, though.


"suspicious is understandable from the users side when so much commercial money is involved."

OpenAI is a commercial entity. They restructured from a non-profit.

This is a completely commercial deal to help Azure catch up with Google and Amazon in AI. OpenAI will adopt Azure and make it their preferred platform. And Microsoft and OpenAI will jointly "develop new Azure AI supercomputing technologies", which I assume means advancing their FPGA-based deep learning offering.

Google has a lead with TensorFlow + TPUs and this is a move to "buy their way in", which is a very Microsoft thing to do.


I was always under the impression that Azure had a lead in ML-as-a-service.

I really liked LUIS (Language Understanding Intelligent Service) back in 2017 and AFAIK only Alibaba had an offering similar to Azure at the time for ML-as-a-service.


For Microsoft, the investment will likely come in the form of computing resources to support AI practices/tests, MS personnel, and pay for current OpenAI personnel (expensive due to their expertise). The findings and expertise will likely be used in the future to help drive improvements in Microsoft's stack (cloud computing, search engine, etc.). OpenAI will be licensing some of its technologies to Microsoft.

For OpenAI, it means the availability of resources for their main mission for the foreseeable future, while potentially providing founders and other investors the opportunity to either double down on OpenAI or reallocate resources to other initiatives (think of Musk, for example).

"Do people understand what "Microsoft will become our exclusive cloud provider" means?" It likely means that computing power will be provided by Microsoft and that it may have access to the algorithms and results.


Maybe they will use that $1 billion on Azure fees, lol.


That's like a month of CosmosDB storing a DVD worth of records!


I actually made up a pun today, CostMostDB. Hehe... But seriously, it's not that expensive. A client is using it quite heavily and they don't reach any particularly high level of spend at all for a corporation, but YMMV.


Cosmos is priced by throughput, not size. So it would be a month of having the capacity to I/O a DVD's worth of records.


It's both. And storage cost is multiplied by the number of regions you replicate to.


They don’t describe the terms of the deal.

Is it $1 billion in cash/stock?

Or $1 billion in Azure credits and engineering hours?


Another comment in the thread says it's a cash deal.


it must be described somewhere, though not in the announcement. I don't think you're allowed to make $1B deals with a public company without specifying those things somewhere.


Well, sure it’s detailed somewhere. We in this thread don’t know, is what I was getting at.

The comment I replied to may not be far off the mark in what this really is: computer/human time “worth $1 billion” or something.

If it’s actual cash that says something different to me than a donation of resources with some value estimated by MS.


This screams alarm bells of an acquisition to me.


Does that mean OpenAI will (co-)develop the Azure AI platform, and then pay Microsoft for using it?


How do we ensure OpenAI is still Open if they're exclusive with MSFT?


According to comments from OpenAI employees elsewhere in this thread, it's already not really open.


> We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today.

The cynic in me thinks this will never happen, that instead it will make a small subset of the population super rich, while the rest are put to work somewhere to make them even more money. Microsoft will ultimately want a return on their billion, at least.


Well, the super rich getting richer is the status quo, so I kind of feel like nothing much changes if this never happens. Now, riding that happy PR wave and failing to deliver would be lame, but perhaps they really believe this. I think it will depend entirely on how much really gets open sourced in the end. I want to believe they’ll really do it.


Open sourcing still may not level the playing field if it turns out it requires corporate (or state) level resources to operate


Genuine question: In what sense is OpenAI open ?


I guess it started as one, but a pivot happened...

> Mr. Musk left the lab last year to concentrate on his own A.I. ambitions at Tesla. Since then, Mr. Altman has remade OpenAI, founded as a nonprofit, into a for-profit company so it could more aggressively pursue financing.

https://www.nytimes.com/2019/07/22/technology/open-ai-micros...


Good question. Looks like Microsoft bought a partner to help them make Azure more competitive with Google & Amazon, both on hardware scalability and quality of their AI offerings:

> we’ll be working hard together to further extend Microsoft Azure’s capabilities

> Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

In the end, it's a win-win. If OpenAI remains partially open, it's still better for the rest of the world too, better than nothing. But, as achow said, it did pivot.


OpenAI should change their name. Seriously, it’s confusing.


It's just like the bag of sugar I have in my cabinet that claims to be low calorie. It says "ONLY 16 CALORIES" ("per 4 grams" in tiny letters).

I mean, sugar is pretty much the definition of a high-calorie food. It's, like, pure calories. And it can affect insulin regulation, etc. That's why they need to put some marketing on it.


Off topic, I know, but this reminded me of TV adverts we used to have in my country when I was young, saying how great sugar is because it has 0% fat... If only the butter companies had responded with adverts about how they're 0% sugar, that would have been fun. :/


Low carb / no carb is gaining in popularity (and some products label it). Sugar is finally becoming the bad guy despite enormous lobbying efforts over a very long time (since the 60s?).


Super nitpicky: sugar is 4 kcal/gram, but so is protein, and fat is 9 kcal/g, over double the caloric content.

By saying sugar is "pure calories," I guess you mean that it doesn't contain any fibre or micronutrients that might give it redeeming qualities (besides its taste), which is true.


Most people are not putting spoonfuls of beef in their coffee.

Sugar is, quite literally, pure carbohydrates. Most sources of protein are not, unless you're consuming refined amino acids (pro tip: they taste awful).


> Most people are not putting spoonfuls of beef in their coffee.

That could be... interesting. It could be in the form of beef bouillon, so you really could put in a spoonful.

But I wonder if bacon would be better...


Brits have that, it's called Bovril.


Lots of people do add cream, though, and some people add coconut oil (MCT oil) or actual cow's-milk butter.


Hmmm... it reads to me like someone has co-opted an open standard. How much of this is really an investment, and how much is an in-kind contribution of Azure resources?

Also this sounds dangerous: "exclusive cloud provider".

When an OpenAI group starts to make exclusive partnerships with one vendor, I wonder how "Open" it is.

I cannot imagine the Khronos Group, which runs the similarly named OpenGL, etc., having an "exclusive" graphics card supported for their open standards. Cloud computing is to OpenAI as graphics cards are to OpenGL/Vulkan.


How open is the computer world anyway? Not very. At the hardware level not at all. So yes take it with a grain of salt. It’s still the product of billionaires and tech giants.


When was OpenAI an open standard? Comparing OpenAI to OpenGL makes zero sense.


I'm quite suspicious of private companies helping open source. It seems to me that by relying on private companies, open source gets tailored to create standards that work best on the platforms doing the financing, cementing monopolies and oligopolies. In my opinion, open source should have the same status as science and be financed by government.


I dunno, Google has made hugely important contributions to open source and the practice of software engineering in general. Would we have all that if it was purely government backed? Like Bell Labs, having an effective monopoly on a new tech product has spun off a ton of innovation for technology in general.


I think there is a difference between public funding for research (it's how majority of research is funded) and public funding for open-source software (isn't happening yet, to my knowledge, so it's an interesting and potentially powerful unexplored idea).


Can we talk about the usage of the term "AGI" here? Considering its connotations in popular culture it sounds terrifically inappropriate in terms of what we can feasibly build today.

Can we assume that marketing overrode engineering on the terminology of this press release?


OpenAI has always had AGI as their real mission, as far as I know, and they've always been serious about it.

It is a research mission. No one "feasibly" knows exactly how to build AGI yet. But still we have many groups publicly pursuing it today.

If Microsoft is giving them a billion dollars in this context, I assume that OpenAI engineers and scientists will build out services for Azure ML that will then be sold to developers or consumers.

This type of thing is actually pretty normal for just about every company that is seriously pursuing AGI, since they eventually need some kind of income and narrow AI is the way for those types of teams to do that.


The purpose of OpenAI is to eventually lead to safe AGI. It's part of their core business purpose. Whatever they do with Machine Learning today is merely instrumental in leading up to that goal.

We certainly cannot feasibly build AGI today, hence OpenAI's use of the term "pre-AGI technologies".


I can't see how pure AGI can be "safe". A huge part of human intellect revolves around the need to survive, be it danger, lack of food, or less rational choices based on emotions. If computers can rationalise the positive behaviour of humans, they may not be able to do so well with greed, jealousy, and hunger for power, which aren't very logical processes but create a lot of positives and negatives nonetheless.


OpenAI => CloseAI


So a highly sophisticated sales bot is the end goal :)


I feel pretty bad for people working in ML/AI at Microsoft Research right now. Microsoft is sending a clear signal that they would rather pay $1B for outside AI research than spend the same amount internally.


What's this "Pre-AGI" arrogance? Why are they so certain that it "will scale to AGI"? Is it an attempt at branding, or have they forgotten that AI is a global effort?

And do people really want to be "actualized" by "Microsoft and OpenAI’s shared value of empowering everyone"?


So is this the OpenAI exit?


(I work at OpenAI.)

Quite the opposite — this is an investment!


Is all this talk of AGI some kind of marketing meme that you guys are tolerating? We haven't figured out sentiment analysis or convnets resilient to single pixel attacks, and here is a page talking about the god damned singularity.

As an industry, we've already burned through a bunch of buzzwords that are now meaningless marketing-speak. 'ML', 'AI', 'NLP', 'cognitive computing'. Are we going for broke and adding AGI to the list so that nothing means anything any more?


At what point would you deem it a good idea to start working on AGI safety?

What "threshold" would you want to cross before you think its socially acceptable to put resources behind ensuring that humanity doesn't wipe itself out?

The tricky thing with all of this is we have no idea what an appropriate timeline looks like. We might be 10 years away from the singularity, 1000 years, or it might never ever happen!

There is a non-zero chance that we are a few breakthroughs away from creating a technology that far surpasses the nuclear bomb in terms of destructive potential. These breakthroughs may have a short window of time between each of them (once we know a, knowing b,c,d will be much easier)

So given all of that, wouldn't it make sense to start working on these problems now? And the unfortunate part of working on these problems now is that you do need hype/buzzwords to attract talent, raise money and get people talking about AGI safety. Sure it might not lead anywhere, but just like fire insurance might seem unnecessary if you never have a fire, AGI research may end up being a useless field altogether, but at least it gives us that cushion of safety.


> At what point would you deem it a good idea to start working on AGI safety?

I don't know, but I'd say after a definition of "AGI" has been accepted that can be falsified against, and actually turn it into a scientific endeavour.

> The tricky thing with all of this is we have no idea what an appropriate timeline looks like.

We do. As things stand, it's undetermined, since we don't even know what it's supposed to mean.

> So given all of that, wouldn't it make sense to start working on these problems now?

What problems? We can't even define the problems here with sufficient rigor. What's there to discuss?


> I don't know, but I'd say after a definition of "AGI" has been accepted that can be falsified against, and actually turn it into a scientific endeavour.

Uhh, that's the Turing Test.


>What problems?

- Privacy (How do you get an artificial intelligence to recognize, and respect, privacy? What sources is it allowed to use, how must it handle data about individuals? About groups? When should it be allowed to violate/exploit privacy to achieve an objective?)

- Isolation (How much data do you allow it access to? How do you isolate it? What safety measures do you employ to make sure it is never given a connection to the internet where it could, in theory, spread itself not unlike a virus and gain incredibly more processing power as well as make itself effectively undestroyable? How do you prevent it from spreading in the wild and hijacking processing power for itself, leaving computers/phones/appliances/servers effectively useless to the human owners?)

- A kill switch (under what conditions is it acceptable to pull the plug? Do you bring in a cybernetic psychologist to treat it? Do you unplug it? Do you incinerate every last scrap of hardware it was on?)

- Sanity check/staying on mission (how do you diagnose it if it goes wonky? What do you do if it shows signs of 'turning' or going off task?)

- Human agents (Who gets to interact with it? How do you monitor them? How do you make sure they aren't being offered bribes for giving it an internet connection or spreading it in the wild? How do you prevent a biotic operator from using it for personal gain while also using it for the company/societal task at hand? What is the maximum amount of time a human operator is allowed to work with the AI? What do you do if the AI shows preference for an individual and refuses to provide results without that individual in attendance? If a human operator is fired, quits or dies and it negatively impacts the AI what do you do?)

This is why I've said elsewhere in this thread, and told Sam Altman, that they need to bring in a team of people that specifically start thinking about these things and that only 10-20% of the people should be computer science/machine learning types.

OpenAI needs a team thinking about these things NOW, not after they've created an AGI or something reaching a decent approximation of one. They need someone figuring out a lot of this stuff for tools they are developing now. Had they told me "we're going to train software on millions of web pages, so that it can generate articles" I would have immediately screamed "PUMP THE BRAKES! Blackhat SEO, Russian web brigades, Internet Water Army, etc etc would immediately use this for negative purposes. Similarly people would use this to churn out massive amounts of semi-coherent content to flood Amazon's Kindle Unlimited, which pays per number of page reads from a pool fund, to rapidly make easy money." I would also have cautioned that it should only be trained on opt-in, vetted, content suggesting that using public domain literature, from a source like Project Gutenberg, would likely have been far safer than the open web.


Discussing the risks of AGI is always worthwhile and has been undertaken for several decades now. That's a bit different from the marketing fluff on the linked page:

"We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI"

Azure needs a few more years just to un-shit the bed with what their marketing team has done and catch up to even basic AWS/GCP analytics offerings. Them talking about AGI is like a toddler talking about building a nuclear weapon. This is the same marketing team that destroyed any meaning behind terms like 'real time', and 'AI'.


The proper threshold would be the demonstration of an AGI approximately as smart as a mouse. Until then it's just idle speculation. We don't even know the key parameters for having a productive discussion.


This makes no sense. Mice can't write poetry. Expecting a 1:1 equivalence between human and manufactured intelligence is no more coherent than denying the possibility of human-bearing flight until we have planes as acrobatic as a hawk.


I certainly wouldn't have that threshold decided by a for profit company disguised as an open source initiative to protect the world! Brought to us by Silicon Valley darlings, no thank you to that. They need to change their name or their mission. One has to go.


> There is a non-zero chance that we are a few breakthroughs away from creating a technology that far surpasses the nuclear bomb in terms of destructive potential.

No, there is exactly zero chance that anyone is "a few breakthroughs away" from AGI.


I'm compiling a list of reasons people doubt AGI risk. Could you clarify why you think AGI is certainly far term?


I feel like I could write an essay about this.

AGI represents the creation of a mind... It's something that has three chief characteristics: it understands the world around it, it understands what effects its actions will have on the world around it, and it takes actions.

None of those three things are even close to achievable in the present day.

No software understands the physical world. The knowledge gap here is IMMENSE. Software does not see what we see: it can be trained to recognize objects, but its understanding is shallow. Rotate those objects and it becomes confused. It doesn't understand what texture or color really are, what shapes really are, what darkness and light really are. Software can see the numerical values of pixels and observe patterns in them but it doesn't actually have any knowledge of what those patterns mean. And that's just a few points on the subject of vision, let alone all the other senses, all the world's complex perceivable properties. Software doesn't even know that there IS a world, because software doesn't KNOW anything! You can set some data into a data structure and run an algorithm on it, but there's no real similarity there to even a baby's ability to know that things fall when you drop them, that they fall in straight lines, that you can't pass through solid objects, that things don't move on their own, etc etc.

Even if, a century from now, some software did miraculously approach such an understanding, it still would not know how it was able to alter the world. It might know that it was able to move objects, or apply force to them, but could it see the downstream effects? Could it predict that adding tomatoes to a chocolate cake made no sense and rendered the cake inedible? Could it know that a television dropped out the window of an eight-story building was dangerous to people on the sidewalk below? Could it know that folding a paper bag in half is not destructive, but folding a painting in half IS? Understanding what can result from different actions and why some are effective and others are not is another vast chasm of a knowledge gap.

Lastly, and by FAR most importantly, the most essential thing.....software does not want. Every single thing we do as living creatures is because our consciousness drives us to want things: I want to type these words at this moment because I enjoy talking about this subject. I will leave soon because I want food and hunger is painful. Etc. If something does not feel pleasure or pain or any true sensation, it cannot want. And we have absolutely no idea how such a thing works, let alone how to create it, because we have next to no idea how our own minds work. Any software that felt nothing, would want nothing-- and so it would sit, inert, motionless...never bored, never curious, never tired, just like an instance of Excel or Chrome. Just a thing, not alive. No such entity could genuinely be described as AGI. We are likely centuries from being able to recreate our consciousness, our feelings and desires....how could someone ever be so naive as to believe it was right around the corner?


Thanks.


OpenAI is a for-profit corporation now. It's in their interest to use as many buzzwords as possible to attract that sweet venture capital, regardless of whether said buzzwords have any base in reality.


Sentiment analysis is at 96% and increasing rapidly. http://nlpprogress.com/english/sentiment_analysis.html


What exactly does that 96% mean, though? It means that on some fixed dataset you're achieving 96% accuracy. I'm baffled by this stupidity of claiming results (even high-profile researchers do this) based on datasets with models that are nowhere near as robust as the actual intelligence that we take as reference: humans. Take the model that makes you think "sentiment analysis is at 96%", come up with your own examples to apply a narrow Turing test to the model, and see if you still think sentiment analysis (or any NLP task) is anywhere near being solved. Also see: [1].
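
For the curious, here is a minimal sketch of that kind of probing (assuming the Hugging Face `transformers` library and its default pretrained sentiment pipeline; whether the model actually stumbles on these hand-written probes is exactly what you'd be checking):

    from transformers import pipeline

    # Downloads a default pretrained sentiment model the first time it runs.
    classifier = pipeline("sentiment-analysis")

    # Hand-written probes, rather than held-out benchmark examples.
    probes = [
        "The movie was not bad at all.",              # negation
        "I expected to hate it, but I loved it.",     # contrast
        "Great, another delay. Just what I needed.",  # sarcasm
    ]

    for text in probes:
        print(text, classifier(text))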

I think continual Turing testing is the only way of concluding whether an agent exhibits intelligence or not. Consider the philosophical problem of the existence of other minds. We believe other humans are intelligent because they consistently show intelligent behavior. Things that people claim to be examples of AI right now lack this consistency (possibly excluding a few very specific examples such as AlphaZero). It is quite annoying to see all these senior researchers along with graduate students spend so much time pushing numbers on those datasets without paying enough attention to the fact that pushing numbers is all they are doing.

[1]: As a concrete example, consider the textual entailment (TE) task. In the deep learning era of TE there are two commonly used datasets on which the current state-of-the-art has been claimed to be near or exceeding human performance. What these models perform seemingly exceptionally well is not the general task of TE; it is TE evaluated on these fixed datasets. A recent paper by McCoy, Pavlick, and Linzen (https://arxiv.org/abs/1902.01007) shows how brittle these systems are; at this point the only sensible response to those insistent on claiming we are nearing human performance in AI is to laugh.


> I think continual Turing testing is the only way of concluding whether an agent exhibits intelligence or not.

So you think it's impossible to ever determine that a chimpanzee, or even a feral child, exhibits intelligence? This seems rather defeatist.


No, interpreting "continual" the way you did would mean I should believe that we can't conclude our friends to be intelligent either (I don't believe that). Maybe I should've said "prolonged" rather than "continual".

Let me elaborate on my previous point with an example. If you look at the recent works in machine translation, you can see that the commonly used evaluation metric of BLEU is being improved upon at least every few months. What I argue is that it's stupid to look at this trend and conclude that soon we will reach human performance in machine translation. Even comparing against the translation quality of humans (judged again by BLEU on a fixed evaluation set) and showing that we can achieve higher BLEU than humans is not enough evidence. Because you also have Google Translate (let's say it represents the state-of-the-art), and you can easily get it to make mistakes that humans would never do. I consider our prolonged interaction with Google Translate to be a narrow Turing test that we continually apply to it. A major issue in research is that, at least in supervised learning, we're evaluating on datasets that are not different enough from the training sets.
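
To make the BLEU point concrete, here's a toy sketch (assuming NLTK's BLEU implementation): the metric rewards n-gram overlap with a reference, so a fluent hypothesis that flips the meaning can outscore a faithful paraphrase.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    smooth = SmoothingFunction().method1
    reference = [["the", "cat", "is", "not", "on", "the", "mat"]]

    wrong_meaning = ["the", "cat", "is", "on", "the", "mat"]               # negation dropped
    faithful_paraphrase = ["there", "is", "no", "cat", "on", "the", "mat"]

    # The meaning-reversing hypothesis gets the higher score of the two.
    print(sentence_bleu(reference, wrong_meaning, smoothing_function=smooth))
    print(sentence_bleu(reference, faithful_paraphrase, smoothing_function=smooth))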

Another subtle point is that we have strong priors about the intelligence of biological beings. I don't feel the need to Turing test every single human I meet to determine whether they are intelligent, it's a safe bet at this point to just assume that they are. The output of a machine learning algorithm, on the other hand, is wildly unstable with respect to its input, and we have no solid evidence to assume that it exhibits consistent intelligent behavior and often it is easy to show that it doesn't.

I don't believe that research in AI is worthless, but I think it's not wise to keep digging in the same direction that we've been moving in for the past few years. With deep learning, while accuracies and metrics are pushed further than before, I don't think we're significantly closer to general, human-like AI. In fact, I personally consider only AlphaZero to be an unambiguous win for this era of AI research, and it's not even clear whether it should be called AI or not.


My comment was not on ‘continual’ but on ‘Turing test’.

If you gave 100 chimps of the highest calibre 100 attempts each, not a single one would pass a single Turing test. Ask a feral child to translate even the most basic children's book, and their mistakes will be so systematic that Google Translate will look like professional discourse. ‘Humanlike mistakes’ and stability with respect to input in the sense you mean here are harder problems than intelligence, because a chimp is intelligent and functionally incapable of juggling more than the most primitive syntaxes in a restricted set of forms.

I agree it is foolish to just draw a trend line through a single weak measure and extrapolate to infinity, but the idea that no collation of weak measures has any bearing on fact rules out ever measuring weak or untrained intelligence. That is what I called defeatist.


I see your point, but you're simply contesting the definition of intelligence that I assumed we were operating with, which is humanlike intelligence. Regardless of its extent, I think we would agree that intelligent behavior is consistent. My main point is that the current way we evaluate the artificial agents is not emphasizing their inconsistency.

Wikipedia defines Turing test as "a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human". If we want to consider chimps intelligent, then in that context the definition of the Turing test should be adjusted accordingly. My point still stands: if we want to determine whether a chimp exhibits intelligence comparable to a human, we do the original Turing test. If we want to determine whether a chimp exhibits chimplike intelligence, we test not for, say, natural language but for whatever we want our definition of intelligence to include. If we want to determine whether an artificial agent has chimplike intelligence, we do the second Turing test. Unless the agent can display as consistent an intelligence as chimps, we shouldn't conclude that it's intelligent.

Regarding your point on weak measures: If I can find an endless stream of cases of failure with respect to a measure that we care about improving, then whatever collation of weak measures we had should be null. Wouldn't you agree? I'm not against using weak measures to detect intelligence, but only as long as it's not trivial to generate failures. If a chimp displays an ability for abstract reasoning when I'm observing it in a cage but suddenly loses this ability once set free in a forest, it's not intelligent.


I'm not interested in categorizing for the sake of categorizing, I'm interested in how AI researchers and those otherwise involved can get a measure of where they're at and where they can expect to be.

If AI researchers were growing neurons in vats and those neurons were displaying abilities on par with chimpanzees I'd want those researchers to be able to say ‘hold up, we might be getting close to par-human intelligence, let's make sure we do this right.’ And I want them to be able to do that even though their brains in vats can't pass a Turing test or write bad poetry or play basic Atari games and the naysayers around them continue to mock them for worrying when their brains in vats can't even pass a Turing test or write bad poetry or play basic Atari games.

Like, I don't particularly care that AI can't solve or even approach solving the Turing test now, because I already know it isn't human-par intelligent, and more data pointing that out tells me nothing about where we are and what's out of reach. All we really know is that we've been doing the real empirical work with fast computers for 20ish years now and gone from no results to many incredible results, and in the next 30 years our models are going to get vastly more sophisticated and probably four orders of magnitude larger.

Where does this end up? I don't know, but dismissing our measures of progress and improved generality with ‘nowhere near as robust as [...] humans’ is certainly not the way to figure it out.

> If I can find an endless stream of cases of failure with respect to a measure that we care about improving, then whatever collation of weak measures we had should be null. Wouldn't you agree?

No? Isn't this obviously false? People can't multiply thousand-digit numbers in their heads; why should that in any way invalidate their other measures of intelligence?


>no results to many incredible results

What exactly is incredible (relatively) about the current state of things? I don't know how up-to-date you are on research, but how can you be claiming that we had no results previously? This is the kind of ignorance of previous work that we should be avoiding. We had the same kind of results previously, only with lower numbers. I keep trying to explain that increasing the numbers is not going to get us there because the numbers are measuring the wrong thing. There are other things that we should also focus on improving.

>dismissing our measures of progress and improved generality with ‘nowhere near as robust as [...] humans’ is certainly not the way figure it out.

It is the way to save this field from wasting so much money and time on coming up with the next small tweak to get that 0.001 improvement in whatever number you're trying to increase. It is not a naive or spiteful dismissal of the measures, it is a critique of the measures since they should not be the primary goal. The majority of this community is mindlessly tweaking architectures in pursuit of publications. Standards of publication should be higher to discourage this kind of behavior. With this much money and manpower, it should be exploding in orthogonal directions instead. But that requires taste and vision, which are unfortunately rare.

>People can't multiply thousand-digit numbers in their heads; why should that in any way invalidate their other measures of intelligence?

Is rote multiplication a task that we're interested in achieving with AI? You say that you aren't interested in categorizing for the sake of categorizing, but this is a counterexample for the sake of giving a counterexample. Avoiding this kind of example is precisely why I said "a measure that we care about improving".


> What exactly is incredible (relatively) about the current state of things?

Compared to 1999?

Watch https://www.youtube.com/watch?v=kSLJriaOumA

Hear https://audio-samples.github.io/#section-4

Read https://grover.allenai.org/

These are not just ‘increasing numbers’. These are fucking witchcraft, and if we didn't live in a world with 5 inch blocks of magical silicon that talk to us and giant tubes of aluminium that fly in the sky the average person would still have the sense to recognize it.

> It is the way to save this field from [...]

For us to have a productive conversation here you need to either respond to my criticisms of this line of argument or accept that it's wrong. Being disingenuous because you like what the argument would encourage if it were true doesn't help when your argument isn't true.

> Is rote multiplication a task that we're interested in achieving with AI?

It's a measure for which improvement would have meaningful positive impact on our ability to reason, so it's a measure we should wish to improve all else equal. Yes, it's marginal, yes, it's silly, that's the point: failure in one corner does not equate to failure in them all.


>These are not just ‘increasing numbers’. These are fucking witchcraft, and if we didn't live in a world with 5 inch blocks of magical silicon that talk to us and giant tubes of aluminium that fly in the sky the average person would still have the sense to recognize it.

What about generative models is really AI, other than the fact that they rely on some similar ideas from machine learning that are found in actual AI applications? Yes, maybe to an average person these are witchcraft, but any advanced technology can appear that way---Deep Blue beating Kasparov probably was witchcraft to the uninitiated. This is curve fitting, and the same approaches in 1999 were also trying to fit curves, it's just that we can fit them way better than before right now. Even the exact methods that are used to produce your examples are not fundamentally new, they are just the same old ideas with the same old weaknesses. What we have right now is a huge hammer, and a hammer is surely useful, but not the only thing needed to build AI. Calling these witchcraft is a marketing move that we definitely don't need, creates unnecessary hype, and hides the simplicity and the naivete of the methods used in producing them. If anybody else reads this, these are just increasing numbers, not witchcraft. But as the numbers increase it requires a little more effort and knowledge to debunk them.

I'm not dismissing things for the fun of it, but it pains me to see this community waste so many resources in pursuit of a local minimum due to lack of a better sense of direction. I feel like not much more is to be gained from this conversation, although it was fun, and thank you for responding.


I appreciate you're trying to wind it down so I'll try to get to the point, but there's a lot to unpack here.

I'm not evaluating these models on whether they are AGI; I am evaluating them on what they tell us about AGI in the future. They show that even tiny models, some 10,000x to 1,000,000x smaller than what I think are the comparable measures in the human brain, trained with incredibly simple single-pass methods, manage to extract semirobust and semantically meaningful structure from raw data, are able to operate on this data in semisophisticated ways, and do so vastly better than their size-comparable biological controls. I'm not looking for the human, I'm looking for small-scale proofs of concept of the principles we have good reasons to expect are required for AGI.

The curve fitting meme[1] has gotten popular recently, but it's no more accurate than calling Firefox ‘just symbols on the head of a tape’. Yes, at some level these systems reduce to hugely-dimensional mathematical curves, but the intuitions this brings are pretty much all wrong. I believe this meme has gained popularity due to adversarial examples, but those are typically misinterpreted[2]. If you can take a system trained to predict English text, prime it (not train it) with translations, and get nontrivial quality French-English translations, dismissing it as ‘just’ curve fitting is ‘just’ the noncentral fallacy.
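
For anyone who hasn't seen the priming trick, here is a rough sketch of what it looks like (assuming Hugging Face `transformers` and the small public "gpt2" checkpoint, which is far weaker than the models alluded to above, so expect poor translations; the point is only that nothing gets trained, just prompted):

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # A prompt of example pairs "primes" the model; no gradient updates happen.
    prompt = (
        "French: le chat est sur la table. English: the cat is on the table.\n"
        "French: je voudrais un café. English: I would like a coffee.\n"
        "French: où est la gare ? English:"
    )

    print(generator(prompt, max_new_tokens=15, do_sample=False)[0]["generated_text"])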

Fundamental to this risk evaluation is the ‘simplicity and the naivete of the methods used in producing them’. That simple systems, at tiny scales, with only inexact analogies to the brain, based on research younger than the people working on it, are solving major blockers in what good heuristics predict AGI needs is a major indicator of the non-implausibility of AGI. AGI skeptics have their own heuristics instead, with reasons those heuristics should be hard, but when you calibrate against the only existence proof we have of AGI development (human evolution), those heuristics are clearly and overtly bad heuristics that would have failed to trigger. Thus we should ignore them.

[1] Similar comments on ‘the same approaches in 1999’, another meme only true at the barest of surface levels. Scale up 1999 models and you get poor results.

[2] See http://gradientscience.org/adv/. I don't agree with everything they say, since I think the issue relates more to the NN's structure encoding the wrong priors, but that's an aside.


'Sentiment analysis' of pre-canned pre-labelled datasets is a comparatively trivial classification task. Actual sentiment analysis as in 'take the twitter firehose and figure out sentiment about arbitrary topic X' is only slightly less out of reach than AGI itself.

Actual sentiment analysis is a completely different kind of ML problem than supervised classification 'sentiment analysis' that's popular today but mostly useless for real world applications.


Not-actual sentiment analysis is already useful (to some) and used in real world applications (though I'm not a fan of those applications), unless perhaps you're referring to the "actual real world" that lives somewhere beyond the horizon as well.


The problem with 'sentiment analysis' of today is it requires a human labelled training dataset that is specific to a particular domain and time period. These are rather costly to make and have about a 12 month half-life in terms of accuracy because language surrounding any particular domain is always mutating - something 'sentiment analysis' models can't hope to handle because their ability to generalise is naught. I've worked with companies spending on the order of millions per year on producing training data for automated sentiment analysis models not unlike the ones in the parent post.

To get useful value out of automated sentiment analysis, that's the cost to build and maintain domain specific models. Pre-canned sentiment analysis models like the parent post linked are more often than not worthless for general purpose use. I won't say there are 0 scenarios where those models are useful, but the number is not high.

Claiming that sentiment analysis is 90something percent accurate, or even close to being solved, is extremely misleading.


$1B is not like investing $100 in a crowdfunded project, "nice toy and let's hope for the best." I expect that Microsoft is going to look very closely at what OpenAI does and possibly steer it into a direction they like. Unless you have a few other $1B investors. We'll see how it plays out.


Congrats to the team, excited to see what you guys do with the momentum


>(I work at OpenAI.)

Humble understatement. *co-founded and co-run.


So you are now part of Azure, no? Like Azure-OpenAI?


Do you know what "investment" means?


Yes. You get money and give control.


Which is extremely well defined for OpenAI. One board seat, charter comes first. That's a legal agreement, not much open for interpretation.


Not quite true: Azure is now the only channel by which OpenAI innovation can be commercialised. This is the key control point.

For example, if there is a hardware innovation that makes DNN training 1000x faster (e.g. optical DNNs), but it does not exist on Azure, then by definition it cannot be offered on another cloud.

To sum up, this deal assures Azure/MS a choke point on any innovation that comes out of OpenAI.


Is all your code public?


What do they mean by: "Microsoft will become our exclusive cloud provider"?

Being forced to use Azure for all your ML workloads seems a stupid constraint. For example, you might be comfortable with tensorflow/TPU and changing frameworks/tooling might be costly.


Azure has full support for Tensorflow, Keras, PyTorch and the rest of the popular stuff. Shouldn't be a problem at all
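
The framework code itself has nothing cloud-specific in it. A minimal sketch (assuming a machine with PyTorch installed; GPU optional) that runs unchanged on an Azure VM, another cloud, or a laptop:

    import torch
    import torch.nn as nn

    # A framework-level training step: the only environment dependency is CUDA availability.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(10, 1).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(32, 10, device=device)
    y = torch.randn(32, 1, device=device)

    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"one training step on {device}, loss={loss.item():.4f}")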


I could be wrong on this. I think the AI/AGI problem isn’t so much about money and more about not having discovered the unique insight that will make it happen. In other words, someone in a garage might be far more likely to find how to trigger the proverbial inflection point.

Throwing money at a problem doesn’t always produce solutions. It can sure accelerate a project down the path it is on...but, if the path is wrong...

In some ways it reminds me of the battle against cancer.

Not being critical of this project or donation, just stating a point of view on the general problem of solving AI, a subject I have been involved with to one degree or another since the early 80’s.


I don't think this is true at all. I think that NLP models, especially the state-of-the-art ones (in the coming years), will cost a few million to train. Massive volumes of data sucked up by a model.


That is exactly the issue that supports my perspective on this. The fact that we need millions of dollars and massive volumes of data is an indication that we might be going down the wrong path.

Think about how far we are from being able to even get close to what an ant can do. Work it backwards from there.


Congrats! So what's OpenAI's current valuation?


Given their stated mission:

> OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.

I'm struck by the homogeneity of the OpenAI team.

https://openai.com/content/images/2019/07/openai-team-offsit...

It seems to be mostly white people and a few Asians, without a single black or Hispanic person.


How does that matter? Does diversity for the cause of diversity lead to better results? Is there any data around this?

Hiring the most qualified people is the most important thing. As long as there isn't an inherent bias against hiring someone who is Hispanic, black, or brown, it should be fine.


What you're posing is an age-old counterargument based around an irrational fear of white people experiencing prejudice and losing out on opportunities to underqualified people they would have otherwise had.

There have been studies done around diversity, conducted both privately and publicly, which consistently conclude that increased diversity does result in enhanced decision-making, collaboration, and organizational value-add due to the different perspectives having a net positive influence rather than neutral or negative.

Beyond pragmatism, from an idealist perspective aiding in increasing organizational diversity is the morally right thing to do. That doesn't mean hiring underqualified people; it means refusing to fill the position until the right person is found, which is a whole other problem on its own.

Here are some resources to get started: https://www.gsb.stanford.edu/insights/diversity-work-group-p... https://journals.aom.org/doi/abs/10.5465/1556374


Those studies are a pretty weak argument. The first link isn't about the type of diversity grandparent was talking about, but "informational diversity". The paper produces no direct correlation. The pragmatic angle is overblown.

It being the right thing to do is a much stronger argument imo. Which I agree with, but companies generally aren't interested in it, unless they can use it for marketing.


Can you tell me how much diversity juice I need to add to my team to make it perfect? Do I need to hire 1 black person for every white person? Please, provide me the ratio required to avoid marginal returns.

I would need to undergo DNA testing for each person to avoid hiring anyone that is too much of one race. Do I get better returns if I hire someone that has 25% of 4 different races?

Please elaborate on your very empirically verifiable and obviously logically cohesive argument. If you don't understand where my skepticism originates from, please turn your attention to the results of Black Economic Empowerment in South Africa (where racial quotas are enforced to ensure the blackest candidate is hired, not the best candidate).

Look at the performance of the South African sports teams that have been forced to recruit players not based on merit, but on the colour of their skin.


And who the hell do they think they are to know what's better for humanity?

Their arrogance is dangerous indeed.


Excellent way to reallocate some of that record breaking revenue they just announced.


Does anyone have any thoughts on Vicarious, the non-deep-learning competitor to OpenAI?


I'm also curious about this, haven't heard anything about them in a while


Hopefully this is good news for Bing.


Well, hopefully if someone creates a benevolent AGI then it should be good news for everyone.


[EDIT]: friendly -> non-friendly oops.

That's what seems so confusing about HN replies here. (Non-friendly) AGI is an extreme existential risk (depending on who you listen to).

I'm perfectly fine with rewarding the org that's responsible for researching friendly AGI to do it _right_ (extremely contingent on that last bit).


Well, I don't think I'd view any AGI that is an existential risk as "friendly".


What if friendliness is not a property of the technology, but of the use? With all the potential concerns of AGI, I think nuclear technology is a good analogue. It has great potential for peaceful use, but proliferation of nuclear weapons is a so-far inseparable problem of the technology. It's also relatively easy to use nuclear technology unsafely compared to safely.

The precedent for general intelligence is not good. The only example we know of (ourselves) is a known existential threat.


The thing is, nobody knows how to do that. It's not a money problem.


OpenAI is a research company - that's what research is, working out how to do things we don't know how to do. Research requires some money so at one level it is a money problem.


But this is alchemy, isn't it? There isn't even a theoretical framework from which we can begin to suggest how to keep any "general intelligence" benign. Good old-fashioned research notwithstanding, a billion dollars is not about to change this. It reads more to me like this is an investment in Azure (i.e. Microsoft picking up some machine learning expertise to leverage in its future cloud services). That's not a judgement, and I'm sure lots of cool work will still come from this, given the strength of the team and the massive backing they have. It just smells funny.


Alchemy wasn't entirely wrong; it is indeed possible to turn lead into gold, it was just beyond the technology of the time: https://www.scientificamerican.com/article/fact-or-fiction-l....


Well, unlike alchemy there are some pretty good examples of intelligent agents around - some even involved in this project!


(no sarcasm)

You know, I can't prove that researchers being funded is the best way of figuring out how to do things, but I have a gut intuition that tells me that.

I'll look into it so that I'm not just blindly suggesting that $$ ==> people ==> research ==> progress.

Thanks for the opportunity to reflect!


it really is an interesting subject to explore within the philosophy of science :)


You need to be able to test your designs and for that you need resources like AI accelerators.


It certainly is. Eventually Bing will automatically infer that it is not a good enough search engine and will destroy itself.


Azure has a supercomputer emulator, and even if OpenAI doesn't get a full $1B in cash but gets to use it as credits on the emulator, that could be huge.


Why is it called an investment? Is OpenAI a corporation that plans to pay out dividends? I thought it was more of a non-profit. This deal looks more like a donation of cloud compute resources. Still a great idea (moves ML research closer to their platform, eating more of Google's lunch), but it's not an investment in OpenAI.


The investment is in OpenAI LP! https://openai.com/blog/openai-lp/


Because Microsoft has calculated a reasonable ROI to make it worth it.


Yeah, but how exactly the return is going to come about (dividends? merger and talent takeover?) is the question. I can see it returning itself via better ML sales on Azure.


Or management wants to exfiltrate $1B of shareholder dollars.


I think they decided to go ahead and not be a charity in addition to not being open.

