
Microsoft is investing $1B in OpenAI - gdb
http://openai.com/blog/microsoft
======
tosh
New York Times has a bit more context

> Mr. Nadella said Microsoft would not necessarily invest that billion dollars
> all at once. It could be doled out over the course of a decade or more.
> Microsoft is investing dollars that will be fed back into its own business,
> as OpenAI purchases computing power from the software giant, and the
> collaboration between the two companies could yield a wide array of
> technologies.

[https://www.nytimes.com/2019/07/22/technology/open-ai-microsoft.html](https://www.nytimes.com/2019/07/22/technology/open-ai-microsoft.html)

~~~
gdb
(I work at OpenAI.)

> It could be doled out over the course of a decade or more.

The NYT article is misleading here. We'll definitely spend the $1B within 5
years, and maybe much faster.

We certainly do plan to be a big Azure customer though!

~~~
lelima
> We certainly do plan to be a big Azure customer though!

That's great. One question: where can I use Gym or Universe in the cloud with
the render() option?

I've spent many hours trying to set up the environment in the cloud [1]
without success.

[1]: [https://stackoverflow.com/questions/40195740/how-to-run-openai-gym-render-over-a-server/48237220](https://stackoverflow.com/questions/40195740/how-to-run-openai-gym-render-over-a-server/48237220)

~~~
pastafarianist
Check out this example:
[https://nbviewer.jupyter.org/github/yandexdataschool/Practic...](https://nbviewer.jupyter.org/github/yandexdataschool/Practical_RL/blob/81981aa/week01_intro/seminar_gym_interface.ipynb)
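If you need render() itself on a headless box, one common pattern (a sketch
using the `rgb_array` render mode from the Gym API of this era; the stub env
below just stands in for something like `gym.make("CartPole-v1")`) is to
collect frames as numpy arrays instead of opening a window:

```python
import numpy as np

def collect_frames(env, n_steps):
    """Run a random policy and return rendered frames as arrays.

    Works with any Gym-style env whose render() supports mode="rgb_array",
    which returns a numpy image instead of opening an X window, so it
    works on a headless cloud server with no display attached.
    """
    frames = []
    env.reset()
    for _ in range(n_steps):
        frames.append(env.render(mode="rgb_array"))
        _, _, done, _ = env.step(env.action_space.sample())
        if done:
            env.reset()
    return frames

# Stub env standing in for a real Gym env, so the sketch is self-contained:
class _StubSpace:
    def sample(self):
        return 0

class _StubEnv:
    action_space = _StubSpace()
    def reset(self):
        return np.zeros(4)
    def step(self, action):
        return np.zeros(4), 0.0, False, {}
    def render(self, mode="rgb_array"):
        return np.zeros((64, 64, 3), dtype=np.uint8)

frames = collect_frames(_StubEnv(), 5)
print(len(frames), frames[0].shape)
```

With a real env you'd then write the frames out with something like imageio
or matplotlib; this sidesteps the need for a display entirely.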

------
codelemur
Despite all the negativity in replies, I try to remain optimistic that this
investment in AGI-related research is going to be a net positive.

Congrats to the team, and break a leg!

~~~
gdb
Thank you!

~~~
applecrazy
The research that OpenAI’s doing is groundbreaking and the results are often
beyond state-of-the-art. I aim to work in one of your research teams sometime!

~~~
chrshawkes
Watch the Kool-Aid intake and you'll be just fine. Dreams are great and an
absolute necessity for success but create your own. Don't buy into everything
you hear, especially Elon Musk talking about Artificial General Intelligence.

~~~
applecrazy
Oh, I'm well aware of the hype around AGI. My personal view is that AGI is
kind of an asymptotic goal, something we'll get kind of close to but never
actually reach. Nevertheless, I would like to work on more pragmatic goals,
like improving the current state-of-the-art language models and text
generation networks. I'm actually starting by reimplementing Seq2Seq as
described by Quoc Le et al.[1] for text summarization[2] (this code is
extremely messy but it'll get better soon). It's been interesting to learn
about word embeddings, RNNs and LSTMs, and data processing within the field of
Natural Language Processing. Any tips on how to get up to speed within this
field would be helpful, as I'm trying to get into research labs doing similar
work at my university.

[1]: [https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf)

[2]: [https://github.com/applecrazy/reportik/](https://github.com/applecrazy/reportik/)
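The encoder half of that seq2seq setup can be sketched in a few lines of
numpy (illustrative only: the weights here are random, where a real
implementation would learn them in a framework like TensorFlow or PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; in a real seq2seq model these come from the vocabulary and config.
vocab_size, embed_dim, hidden_dim = 10, 4, 8

# Embedding table and vanilla-RNN encoder weights (random here, learned in practice).
E = rng.normal(size=(vocab_size, embed_dim))
W_xh = rng.normal(size=(embed_dim, hidden_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))

def encode(token_ids):
    """Run a vanilla RNN over embedded tokens; the final hidden state is the
    fixed-size context vector the decoder would condition on."""
    h = np.zeros(hidden_dim)
    for t in token_ids:
        x = E[t]                      # word embedding lookup
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h

context = encode([3, 1, 4, 1, 5])
print(context.shape)  # (8,)
```

The Sutskever/Vinyals/Le paper in [1] uses stacked LSTMs rather than this
vanilla RNN cell, but the embed-then-recur-then-hand-off-a-context-vector
shape of the computation is the same.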

~~~
sytelus
AGI is not something unnatural that could never be attained. If biological
systems can somehow attain it, there is no reason other kinds of man-made
systems cannot attain it too.

The first main issue is compute capacity. The human brain has the equivalent
of at least 30 TFLOPS of computing power, and this estimate is very likely off
by two orders of magnitude.

Assume that simulating one synapse somehow takes only one transistor (a gross
underestimate). Simulating the number of synapses in a single human brain
would then require as many transistors as 10,000 NVIDIA V100 GPUs, one of the
largest mass-produced silicon chips!
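That back-of-the-envelope arithmetic can be checked directly (the ~1e14
synapse count and the V100's ~21.1 billion transistors are assumed round
figures, not numbers from the post):

```python
# Assumed figures: neuroscience estimates put the human brain at roughly
# 1e14 to 1e15 synapses; an NVIDIA V100 has about 2.11e10 transistors.
synapses = 1e14
v100_transistors = 2.11e10

# At one transistor per synapse (the gross underestimate above), the number
# of GPUs needed just to match the synapse count:
gpus_needed = synapses / v100_transistors
print(round(gpus_needed))  # 4739
```

At the higher 1e15 synapse estimate this lands around 47,000 GPUs, so the
10,000-GPU figure is the right order of magnitude.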

The second main issue is training neurons that are far more complex than our
simple arithmetic adders. Backprop doesn't work for such complex neurons.

The third big problem is training data. A human child churns through roughly
ten years of training data before reaching puberty. A man-made machine can
perhaps take advantage of the vast data already available, but there still
needs to be some structured training regimen.

So, in comparison with the human brain, current AI efforts are playing with
toy hardware and toy algorithms. It should be surprising that we have gotten
as far as we have regardless.

------
adamsmith
Congrats on the fundraise Greg and team!

Does this mean that OpenAI may not disclose progress, papers with details,
and/or open source code as much as in the past? In other words, what
proprietary advantage will Microsoft gain when licensing new tech from OpenAI?

I understand that keeping some innovations private may help commercialization,
which may help raise more funds for OpenAI, getting us to AGI faster, so in my
opinion that could plausibly make sense.

~~~
gdb
We'll still release quite a lot, and those releases won't look any different
from the past.

> I understand that keeping some innovations private may help
> commercialization, which may help raise more funds for OpenAI, getting us to
> AGI faster, so in my opinion that could plausibly make sense.

That's exactly how we think about it. We're interested in licensing some
technologies in order to fund our AGI efforts. But even if we keep technology
private for this reason, we still might be able to _eventually_ publish it.

~~~
DSingularity
Eventually open ai?

~~~
marvin
I thought from day one that the name «OpenAI» would at best be a slight
misnomer, and at worst indicative of a misguided approach. If AGI is close to
being achieved, sharing key details of the approach with any actors at all
could trigger a Manhattan Project-style global arms race in which safety is
compromised and the whole thing becomes insanely risky for the future of
humanity.

Glad to see that the team is taking a pragmatic safety-first approach here, as
well as towards the near-term economic realities of funding a very expensive
project, to ensure the fastest possible progress.

In the early days of OpenAI, my thoughts were that the project had good
intentions, but a misguided focus. The last year has changed that, though.
They absolutely seem to be on the right track. Very excited to see their
progress over the next years.

~~~
z3t4
The atomic bomb was based on scientific theory. A computer can run many
programs and do a great many things, but it will never be able to think by
itself.

~~~
throwawaywego
> The atomic bomb was based on scientific theory.

Our study of (automated) intelligence is based on science too.

> A computer ... will never be able to think by itself.

Turing wrote an entire paper about this (Computing Machinery and
Intelligence), where he rephrases your statement (because he finds it to be
meaningless) and devises a test to answer it. He also directly attacks your
phrasing of "but it will never":

> I believe they are mostly founded on the principle of scientific induction.
> A man has seen thousands of machines in his lifetime. From what he sees of
> them he draws a number of general conclusions. They are ugly, each is
> designed for a very limited purpose, when required for a minutely different
> purpose they are useless, the variety of behaviour of any one of them is
> very small, etc., etc. Naturally he concludes that these are necessary
> properties of machines in general.

> A better variant of the objection says that a machine can never "take us by
> surprise." This statement is a more direct challenge and can be met
> directly. Machines take me by surprise with great frequency. This is largely
> because I do not do sufficient calculation to decide what to expect them to
> do, or rather because, although I do a calculation, I do it in a hurried,
> slipshod fashion, taking risks.

~~~
MetalGuru
> A better variant of the objection says that a machine can never "take us by
> surprise." This statement is a more direct challenge and can be met
> directly. Machines take me by surprise with great frequency. This is largely
> because I do not do sufficient calculation to decide what to expect them to
> do, or rather because, although I do a calculation, I do it in a hurried,
> slipshod fashion, taking risks.

This seems like a cop out. Sure, if you do your calculations wrong, it doesn’t
behave as you expect. But it’s still doing exactly what you wrote it to do.
The surprise is in realizing your expectations were wrong, not that the
machine decided to behave differently.

~~~
throwawaywego
I think any AI researcher has a tale where an algorithm they wrote genuinely
took them by surprise. Not due to wrong calculations, but because of
randomness, heaps of data, and game boundaries within which the AI is free to
fill in the blanks.

A good example of this is "move 37" from AlphaGo. This move surprised
everyone, including the creators, who were not skilled enough in Go to
hardcode it: [https://www.youtube.com/watch?v=HT-UZkiOLv8](https://www.youtube.com/watch?v=HT-UZkiOLv8)

------
feral
I want to take everything OpenAI says at face value (seem like good folk), but
I can't help but wonder at the recent choice to keep GPT2 closed, on what
seemed like pretty thin safety arguments to me.

Now, the demonstrated ability to produce new models which are closed, but
which can perhaps be used as services on a preferred partner's cloud, looks
very commercially relevant. How will these conflicts be managed, or is it more
like "we are just a commercial entity now, of course we'll do this"?

~~~
chibg10
Their handling of OpenAI Five rubbed me the wrong way as well. The whole
operation smelled very PR-ish to me personally: unnecessary and unjustifiable
hype from representatives, complaints when the Dota community wanted to see
OpenAI Five play a normal Dota match against pros rather than in a heavily
constrained environment that benefited the bots, among other things.

~~~
rishav_sharan
Same. They left the project half-baked, and that's very disappointing.

------
neiman
$1B is a lot of money. Microsoft is not a charity foundation, so the
suspicion is obvious.

> We’re partnering to develop a hardware and software platform within
> Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI
> supercomputing technologies, and Microsoft will become our exclusive cloud
> provider—so we’ll be working hard together to further extend Microsoft
> Azure’s capabilities in large-scale AI systems.

Maybe it's because I'm not an expert, but what does it really mean? Do people
understand what "Microsoft will become our exclusive cloud provider" means?

OpenAI is great, but suspicion is understandable from the users' side when so
much commercial money is involved.

~~~
mattferderer
My "guess" is they're offering $1B worth of Azure services. Which costs MSFT
probably much less than $1B.

My "guess" is that it means MSFT has access to sell products based off the
research OpenAI does to MSFT's customers. Having early access to advanced
research means MSFT could easily make this money back by selling better AI
tools to their customers.

Also a great time to point out that while "Microsoft is not a charity
foundation" it does offer a ton of free Azure to charities.
[https://www.microsoft.com/en-us/nonprofits/azure](https://www.microsoft.com/en-us/nonprofits/azure)
This has been an awesome thing to use when helping small non-profits with
little money to spend on "administrative costs".

~~~
gdb
> My "guess" is they're offering $1B worth of Azure services. Which costs MSFT
> probably much less than $1B.

It's a cash investment. We certainly do plan to be a big Azure customer
though.

> My "guess" is that it means MSFT has access to sell products based off the
> research OpenAI does to MSFT's customers. Having early access to advanced
> research means MSFT could easily make this money back by selling better AI
> tools to their customers.

I'm flattered that you think our research is that valuable! (As I say in the
blog post: we intend to license some of our pre-AGI technologies, with
Microsoft becoming our preferred partner for commercializing them.)

~~~
twayyallscareme
Sorry for the cowardice of this throwaway account, but it freaks me out that
Musk left, and Thiel is still there.

Going back in time:

> Musk has joined with other Silicon Valley notables to form OpenAI, which was
> launched with a blog post Friday afternoon. The group claimed to have the
> goal “to advance digital intelligence in the way that is most likely to
> benefit humanity as a whole, unconstrained by a need to generate financial
> return.”

What happened here?

I know it’s far off, but I am concerned about AGI misanthropy and the for-
profit turn of OpenAI. Who is the humanist anchor, of Elon’s gravitas, left at
OpenAI?

What happened to the original mission? Are any of you concerned about this?
Can you get rid of Peter Thiel please? Can we buy him out as a species? I
respect the man’s intellect yet truly fear his misanthropy and influence.

Apologies for the rambling, but you all got me freaked out a bit. I had, and
still do have such high hopes for OpenAI.

Please don’t lose the forest for the trees.

~~~
halflings
He left on good terms; Musk was competing with OpenAI for talent (e.g. Andrej
Karpathy leaving OpenAI for Tesla).

See:
[https://twitter.com/elonmusk/status/1096987465326374912?lang...](https://twitter.com/elonmusk/status/1096987465326374912?lang=en)

------
dkersten
> We think its impact should be to give everyone economic freedom to pursue
> what they find most fulfilling, creating new opportunities for all of our
> lives that are unimaginable today.

The cynic in me thinks this will never happen, that instead it will make a
small subset of the population super rich, while the rest are put to work
somewhere to make them even more money. Microsoft will ultimately want a
return on their billion, at least.

~~~
TaylorAlexander
Well, the super rich getting richer is the status quo, so I kind of feel like
nothing much changes if this never happens. Now, riding that happy PR wave and
failing to deliver would be lame, but perhaps they really believe this. I
think it will depend entirely on how much really gets open sourced in the end.
I want to believe they’ll really do it.

~~~
stevens32
Open sourcing still may not level the playing field if it turns out to
require corporate (or state) level resources to operate.

------
option_greek
Genuine question: In what sense is OpenAI open ?

~~~
ilaksh
It's just like the bag of sugar I have in my cabinet that claims to be low
calorie. It says "ONLY 16 CALORIES" (per 4 grams, in tiny letters).

I mean, sugar is pretty much the definition of a high-calorie food. It's,
like, pure calories. And it can affect insulin regulation, etc. That's why
they need to put some marketing on it.

~~~
dkersten
Off topic, I know, but this reminded me of TV adverts we used to have in my
country when I was young, saying how great sugar is because it has 0% fat...
If only the butter companies had responded with adverts about how they're 0%
sugar; that would have been fun. :/

~~~
rapind
Low carb / no carb is gaining in popularity (and some products label it).
Sugar is finally becoming the bad guy despite enormous lobbying efforts over a
very long time (since the 60s?).

------
bhouston
Hmmm... it reads to me like someone has co-opted an open standard. How much
of this is really an investment, and how much is an in-kind contribution of
Azure resources?

Also, this sounds dangerous: "exclusive cloud provider".

When an open AI group starts making exclusive partnerships with one vendor, I
wonder how "open" it is.

I cannot imagine the Khronos Group, which runs the similarly named OpenGL,
etc., having an "exclusive" graphics card vendor for its open standards.
Cloud computing is to OpenAI as graphics cards are to OpenGL/Vulkan.

~~~
Synaesthesia
How open is the computer world anyway? Not very. At the hardware level not at
all. So yes take it with a grain of salt. It’s still the product of
billionaires and tech giants.

------
lukaa
I'm quite suspicious of private companies helping open source. It seems to me
that by relying on private companies, open source is tailored to create
standards that work best on the platforms doing the financing, cementing
monopolies and oligopolies. In my opinion, open source should have the same
status as science and be financed by government.

~~~
schnable
I dunno, Google has made hugely important contributions to open source and the
practice of software engineering in general. Would we have all that if it was
purely government backed? Like Bell Labs, having an effective monopoly on a
new tech product has spun off a ton of innovation for technology in general.

~~~
pms
I think there is a difference between public funding for research (which is
how the majority of research is funded) and public funding for open-source
software (which isn't happening yet, to my knowledge, so it's an interesting
and potentially powerful unexplored idea).

------
Quarrelsome
Can we talk about the usage of the term "AGI" here? Considering its
connotations in popular culture it sounds terrifically inappropriate in terms
of what we can feasibly build today.

Can we assume that marketing overrode engineering on the terminology of this
press release?

~~~
pure-awesome
The purpose of OpenAI is to eventually lead to safe AGI. It's part of their
core business purpose. Whatever they do with Machine Learning today is merely
instrumental in leading up to that goal.

We certainly cannot feasibly build AGI today, hence OpenAI's use of the term
"pre-AGI technologies".

~~~
cosmodisk
I can't see how pure AGI can be "safe". A huge part of human intellect
revolves around the need to survive, be it danger, lack of food, or less
rational choices based on emotions. If computers can rationalise the positive
behaviour of humans, they may not be able to do so well with greed, jealousy,
and hunger for power, which aren't very logical processes but create a lot of
positives and negatives nonetheless.

------
albybisy
OpenAI => CloseAI

~~~
tiborsaas
So a highly sophisticated sales bot is the end goal :)

------
ipsum2
I feel pretty bad for people working in ML/AI at Microsoft Research right now.
Microsoft is sending a clear signal that they would rather pay $1B for outside
AI research than spend the same amount internally.

------
doctorstupid
What's this "Pre-AGI" arrogance? Why are they so certain that it "will scale
to AGI"? Is it an attempt at branding, or have they forgotten that AI is a
global effort?

And do people really want to be "actualized" by "Microsoft and OpenAI’s shared
value of empowering everyone"?

------
streetcat1
So is this the open-ai exit?

~~~
gdb
(I work at OpenAI.)

Quite the opposite — this is an investment!

~~~
kuzehanka
Is all this talk of AGI some kind of marketing meme that you guys are
tolerating? We haven't figured out sentiment analysis or convnets resilient to
single pixel attacks, and here is a page talking about the god damned
singularity.

As an industry, we've already burned through a bunch of buzzwords that are now
meaningless marketing-speak. 'ML', 'AI', 'NLP', 'cognitive computing'. Are we
going for broke and adding AGI to the list so that nothing means anything any
more?

~~~
andreilys
At what point would you deem it a good idea to start working on AGI safety?

What "threshold" would you want to cross before you think it's socially
acceptable to put resources behind ensuring that humanity doesn't wipe itself
out?

The tricky thing with all of this is we have no idea what an appropriate
timeline looks like. We might be 10 years away from the singularity, 1000
years, or it might never ever happen!

There is a non-zero chance that we are a few breakthroughs away from creating
a technology that far surpasses the nuclear bomb in destructive potential.
These breakthroughs may come within a short window of each other (once we know
a, knowing b, c, and d will be much easier).

So given all of that, wouldn't it make sense to start working on these
problems now? The unfortunate part of working on them now is that you do need
hype and buzzwords to attract talent, raise money, and get people talking
about AGI safety. Sure, it might not lead anywhere; but just as fire insurance
might seem unnecessary if you never have a fire, AGI safety research may end
up being a useless field altogether, yet at least it gives us that cushion of
safety.

~~~
craigsmansion
> At what point would you deem it a good idea to start working on AGI safety?

I don't know, but I'd say _after_ a definition of "AGI" has been accepted that
can be falsified against, actually turning it into a scientific endeavour.

> The tricky thing with all of this is we have no idea what an appropriate
> timeline looks like.

We do. As things stand, it's undetermined, since we don't even know what it's
supposed to mean.

> So given all of that, wouldn't it make sense to start working on these
> problems now?

What problems? We can't even define the problems here with sufficient rigor.
What's there to discuss?

~~~
Veedrac
> I don't know, but I'd say after a definition of "AGI" has been accepted
> that can be falsified against, actually turning it into a scientific
> endeavour.

Uhh, that's the Turing Test.

------
pinouchon
What do they mean by: "Microsoft will become our exclusive cloud provider"?

Being forced to use Azure for all your ML workloads seems a stupid constraint.
For example, you might be comfortable with tensorflow/TPU and changing
frameworks/tooling might be costly.

~~~
chirau
Azure has full support for TensorFlow, Keras, PyTorch, and the rest of the
popular stuff. It shouldn't be a problem at all.

------
robomartin
I could be wrong on this. I think the AI/AGI problem isn’t so much about money
and more about not having discovered the unique insight that will make it
happen. In other words, someone in a garage might be far more likely to find
how to trigger the proverbial inflection point.

Throwing money at a problem doesn’t always produce solutions. It can sure
accelerate a project down the path it is on...but, if the path is wrong...

In some ways it reminds me of the battle against cancer.

Not being critical of this project or donation, just stating a point of view
on the general problem of solving AI, a subject I have been involved with to
one degree or another since the early 80’s.

~~~
codingslave
I don't think this is true at all. I think that NLP models, especially the
state-of-the-art ones (in the coming years), will cost a few million dollars
to train: massive volumes of data sucked up by a model.

~~~
robomartin
That is exactly the issue that supports my perspective on this. The fact that
we need millions of dollars and massive volumes of data is an indication that
we might be going down the wrong path.

Think about how far we are from being able to even get close to what an ant
can do. Work it backwards from there.

------
Tenoke
Congrats! So what's OpenAI's current valuation?

------
carapace
Given their stated mission:

> OpenAI’s mission is to ensure that artificial general intelligence benefits
> all of humanity.

I'm struck by the homogeneity of the OpenAI team.

[https://openai.com/content/images/2019/07/openai-team-offsite-2019.jpg](https://openai.com/content/images/2019/07/openai-team-offsite-2019.jpg)

It seems to be mostly white people and a few Asians, without a single black or
Hispanic person.

~~~
distdev89
How does that matter? Does diversity for the sake of diversity lead to better
results? Is there any data on this?

Hiring the most qualified people is the most important thing. As long as
there isn't an inherent bias against hiring someone who is Hispanic, black,
or brown, it should be fine.

~~~
warent
What you're posing is an age-old counterargument based on an irrational fear
that white people will experience prejudice and lose opportunities they would
otherwise have had to underqualified people.

There have been studies done around diversity, conducted both privately and
publicly, which consistently conclude that increased diversity does result in
enhanced decision-making, collaboration, and organizational value-add due to
the different perspectives having a net positive influence rather than neutral
or negative.

Beyond pragmatism, from an idealist perspective aiding in increasing
organizational diversity is the morally right thing to do. That doesn't mean
hiring underqualified people; it means refusing to fill the position until the
right person is found, which is a whole other problem on its own.

Here are some resources to get started:
[https://www.gsb.stanford.edu/insights/diversity-work-group-performance](https://www.gsb.stanford.edu/insights/diversity-work-group-performance)
[https://journals.aom.org/doi/abs/10.5465/1556374](https://journals.aom.org/doi/abs/10.5465/1556374)

~~~
depr
Those studies are a pretty weak argument. The first link isn't about the type
of diversity grandparent was talking about, but "informational diversity". The
paper produces no direct correlation. The pragmatic angle is overblown.

It being the right thing to do is a much stronger argument imo. Which I agree
with, but companies generally aren't interested in it, unless they can use it
for marketing.

------
Pandabob
Excellent way to reallocate some of that record breaking revenue they just
announced.

------
georgewsinger
Does anyone have any thoughts on Vicarious, the non-deep-learning competitor
to OpenAI?

~~~
atlasunshrugged
I'm also curious about this, haven't heard anything about them in a while

------
logicchains
Hopefully this is good news for Bing.

~~~
arethuza
Well, hopefully if someone creates a benevolent AGI then it should be good
news for everyone.

~~~
codelemur
[EDIT]: friendly -> non-friendly oops.

That's what seems so confusing about HN replies here. (Non-friendly) AGI is an
extreme existential risk (depending on who you listen to).

I'm perfectly fine with rewarding the org that's responsible for researching
friendly AGI to do it _right_ (extremely contingent on that last bit).

~~~
modzu
the thing is, nobody knows how to do that. it's not a money problem.

~~~
arethuza
OpenAI is a research company - that's what research is, working out how to do
things we don't know how to do. Research requires _some_ money so at one level
it is a money problem.

~~~
modzu
but this is alchemy, isn't it? there isn't even a theoretical framework from
which we can begin to suggest how to keep any "general intelligence" benign.
good old fashioned research notwithstanding, a billion dollars is not about
to change this. it reads more to me like an investment in azure (ie microsoft
picking up some machine learning expertise to leverage in its future cloud
services). that's not a judgement, and i'm sure lots of cool work will still
come from this, given the strength of the team and the massive backing they
have. it just smells funny.

~~~
logicchains
Alchemy wasn't entirely wrong; it is indeed possible to turn lead into gold,
it was just beyond the technology of the time:
[https://www.scientificamerican.com/article/fact-or-fiction-lead-can-be-turned-into-gold/](https://www.scientificamerican.com/article/fact-or-fiction-lead-can-be-turned-into-gold/).

------
kirillzubovsky
Azure has a supercomputer emulator, and even if OpenAI doesn't get the full
$1B in cash but instead gets to use it as credits on the emulator, that could
be huge.

------
H8crilA
Why is it called an investment? Is OpenAI a corporation that plans to pay out
dividends? I thought it was more of a non-profit. This deal looks more like a
donation of cloud compute resources. Still a great idea (it moves ML research
closer to their platform, eating more of Google's lunch), but it's not an
investment in OpenAI.

~~~
BigChiefSmokem
Because Microsoft has calculated a reasonable ROI to make it worth it.

~~~
H8crilA
Yeah but how's the return exactly going to come about (dividends? merger and
talent takeover?), that's the question. I can see it returning itself via
better ML sales on Azure.

------
Eduardo3rd
Exciting announcement for the OpenAI team.

The wording in the press release reminds me of a question I haven't been able
to answer for a while now. Can anyone point me to the moment in time when
general purpose artificial intelligence was re-branded to artificial general
intelligence? Is GAI that much worse of an acronym than AGI? What's the deal
here?

------
RIMR
Can we actually expect OpenAI to remain "open" with investments like this
getting dumped into the project?

I'm still waiting for the 1.5B-parameter GPT-2 model to get released, but
they're still going with that "too dangerous for society" BS that they're
using to get journalists' attention...

~~~
anchpop
They need to prove it's safe for society before releasing it. If they don't
believe it is, then not releasing it is of course the smart thing to do.
Furthermore, anyone else who in the future creates something they think might
be dangerous now has a better argument, because they can point to OpenAI
playing it safe and say "I'm just doing the same thing".

~~~
RIMR
I think you're making a silly mistake by actually giving this "too dangerous
for society" line any validity.

I understand that this tech could be used for nefarious purposes, but this
isn't world-ending tech. This is just hard to differentiate from human
writing...

The choice to keep their "too powerful model" unreleased is more an attempt to
stoke sensationalism out of journalists eager to report on "The AI too
dangerous to release" than it is actually an earnest attempt at protecting
society.

The dangerous rogue-AI is a Hollywood trope. We don't live in the Ghost In the
Shell universe, we live in reality, and a text-generating algorithm isn't
particularly dangerous when you think about it.

~~~
anchpop
> I understand that this tech could be used for nefarious purposes, but this
> isn't world-ending tech. This is just hard to differentiate from human
> writing...

When, in any OpenAI communication, have they ever implied GPT2 is world ending
tech?

> We don't live in the Ghost In the Shell universe, we live in reality, and a
> text-generating algorithm isn't particularly dangerous when you think about
> it.

Are you sure it won't be used to automatically post fake news or create
artificial group cohesion by bad state actors? We can already do stuff like
that, but this allows you to do it faster and cheaper. Don't you think that's
at least a little scary?

~~~
RIMR
Again, I already made it clear that this tech can be used for nefarious
purposes.

But there is zero utility in keeping the larger dataset secret. Society is not
safer as a result.

The only reason they ever framed it the way they did was to get media
attention.

"This AI is so dangerous, the creators are refusing to release it to the
public!" is going to get way more clicks than "This AI writes English
sentences that almost appear coherent."

If you want more funding and attention towards your project, you're going to
say stuff that gets journalists' attention.

------
laxatives
> As the waitress approached the table, Sam Altman held up his phone. That
> made it easier to see the dollar amount typed into an investment contract he
> had spent the last 30 days negotiating with Microsoft.

> “$1,000,000,000,” it read.

Wow, Sam Altman sounds like an asshole.

------
ryanmercer
OpenAI, to for-profit OpenAI, to a billion-dollar partnership with Microsoft:
this doesn't give me the warm and fuzzy feelings I had when I first heard of
OpenAI. I saw it as "we're going to save the world by building an AGI before
someone builds SkyNet"; today it is "we've gotten into bed with a company
that had one of the most famous anti-trust cases in the United States and one
of the most famous anti-competition cases in the EU".

And of course going after high school student Mike Rowe for registering
MikeRoweSoft.com (Seriously Microsoft, exactly no one thought he was you).

While Microsoft isn't inherently evil (one could even argue that, via Windows,
Microsoft is largely responsible for the widespread adoption of computers), it
definitely makes me feel slightly uneasy.

I'd rather see OpenAI continue to be funded by donation, and eventually
royalties/licensing of technologies it develops, not partnerships with
companies like 'IBM 2: Electric Boogaloo'.

But what do I know, I'm a Morlock not an Eloi.

~~~
SahAssar
The whole MikeRoweSoft thing was 15 years ago. I don't think it's fair to
judge the company now based on that.

~~~
ryanmercer
If it was the only instance, I'd say sure, but:

\- Lindows

\- Microwindows

\- wxWindows

\- Windows Commander

\- Suing Amish Shah

\- MikeRoweSoft

Then all of the various anti-trust/anti-competition lawsuits against them by
both companies and government entities (Be Inc, Netscape, the EU, Spain, the US
Government, Caldera, Opera Software. Also individuals in numerous class
actions).

Plenty of reason to feel slightly concerned.

~~~
SahAssar
Are any of those recent? I agree that Microsoft has been pretty dickish and
I'm sure parts of it continue to be (see
[http://bonkersworld.net/organizational-charts](http://bonkersworld.net/organizational-charts)
as an example of why), but as a whole I feel that they are better with OSS now
than they have ever been.

------
pron
Congrats on the investment, but this release reads like a parody.

I believe that building a _beneficial_ warp-drive engine will be the most
important technological development in human history, with the potential to
shape the trajectory of humanity. The aliens we're sure to encounter will be
capable of mastering more fields than any one human — like a tool which
combines the skills of Curie, Turing, and Bach. An alien working on a problem
would be able to see connections across disciplines that no human could. But
even though I'm known as the warp-drive-guy, I don't actually know how to
build a warp drive, so in the meantime I am building increasingly powerful
transportation technology in the hope this would lead to a warp drive one day
soon, and have decided to focus on bicycles. They're really good bikes,
though, and unlike others who make bicycles, I like to consider those I build
to merely be _pre_ -warp, a necessary step towards warp technology. So when
you buy my bikes [1], you are literally helping me change the trajectory of
human history and meet aliens (did I mention Curie, Turing and Bach?)

This is truly a fine specimen of Silicon Valley prose. It's got something for
everyone: human history, a wild-eyed dream of a bright future, a connection to
the arts, name-dropping, the trajectory of humanity, and, of course, lots of
money in cloud services (integrated platform). They even showed some restraint
in stopping short of ending all war and curing all disease. "Making the world
a better place" is really too mundane.

[1]: The Warp Drive Corporation®'s Pre-Warp Bike™️ is now on sale on Amazon.

~~~
gdb
(I work at OpenAI.)

It comes down to whether you believe AGI is achievable.

We've talked about why we think it might be:
[https://medium.com/syncedreview/openai-founder-short-term-
ag...](https://medium.com/syncedreview/openai-founder-short-term-agi-is-a-
serious-possibility-368424f7462f),
[https://www.youtube.com/watch?v=YHCSNsLKHfM](https://www.youtube.com/watch?v=YHCSNsLKHfM)

And we certainly have more of a plan for building it than warp drives :).

EDIT: I personally think the case for near-term AGI is strong enough that it'd
be hard for me to work on any other problem — and I find it important to put in
place guardrails like [https://openai.com/blog/openai-
lp/](https://openai.com/blog/openai-lp/) and
[https://openai.com/charter/](https://openai.com/charter/).

Even if AGI turns out to be out of reach, we'll still be creating increasingly
powerful AI technologies — which I think pretty clearly have the potential to
alter society and require special care and thought.

~~~
pron
> It comes down to whether you believe AGI is achievable.

No, it does not. I very much believe AI (or AGI, as you call it) is
achievable, but may I remind you that some years after the invention of neural
networks, Norbert Wiener, one of the greatest minds of his generation, said
that the secret of intelligence would be unlocked within five years, and Alan
Turing -- a component of your very own post-pre-AGI era's AGI -- another great
believer in AI, scoffed and said that it would take at least five _decades_.
That was seven decades ago, and we are not even close to achieving insect-
level intelligence. Maybe we'll achieve AI in ten years and maybe in one
hundred, but you don't know which of those is more likely, and you certainly
don't know whether any of our pre-AGI technology even gets us on the right
path to achieving AGI. There have been other paths towards AI explored in the
past that have largely been abandoned.

 _OpenAI is not actually building AGI_. Maybe it _hopes_ that the things it
_is_ working on _could_ be the path to an eventual AGI. OpenAI knows this, as
does Microsoft.

This does not mean that what OpenAI does is not valuable and possibly useful,
but it does make _calling_ it "pre-AGI" pretentious to the level of delusion.
Now I know there were (maybe still are) some AI cults around SV (I think a
famous one even called themselves "The Rationalists" or something), but what
makes for a nerdy, fanciful discussion in some dark but quirky corner of the
internet looks jarring in a press release.

> If you believe AGI might be achievable any time soon, it becomes hard to
> work on any other problem — and it's also very important to put in place
> guardrails like [https://openai.com/blog/openai-
> lp/](https://openai.com/blog/openai-lp/) and
> [https://openai.com/charter/](https://openai.com/charter/)

I can't tell if you're serious, but assuming you are, the problem is that
there are many _other_ things that, if you thought they could be achievable any
time soon, would make it hard to work on any other problem, as well as make it
important to put guardrails in place. The difference is that no one actually
knows how to put guardrails on AGI. We are doing a pretty bad job putting
guardrails on the statistical clustering algorithms that some call (pre-AGI?)
AI and that we already use.

~~~
fossuser
If AGI is achievable (seems likely given brains are all over the place in
nature) and achieving it will have consequences that dwarf everything else
then doesn't it make sense to focus on it?

Yes, historically people were way too optimistic and generally went down AI
rabbit holes that went nowhere, but two years before the Wright flyer flew,
the Wright brothers themselves said it was 50 years out (and others were still
publishing articles about human flight being impossible after it was already
flying).

People are bad at predictions, in the Wright brothers case since they were the
people that ultimately ended up doing it two years later they were likely the
best to make the prediction and were still off.

Given that AGI is possible and given the extreme nature of the consequences,
doesn't it make sense to work on alignment and safety? Why would it make sense
to wait? If you accidentally end up with AGI and haven't figured out how to
align its goals then that's it, the game is probably over.

Maybe OpenAI is on the right path, maybe not - but I think you're way too
confident to be as sure as you are that they are not.

~~~
pron
First of all, I was talking about the language, not the work. It makes sense to
study AI as it does many other subjects, but we _don't know_ that it will
"have consequences that dwarf everything else" because we don't know what it
will be able to do and when (we think that it _could_ but so could, say, a
supervirus, or climate change, or the return of fascism). People hang all sort
of dreams on AI precisely because of that. That cult I mentioned, The
Rationalists, basically imagined AI to be a god of sorts, and then you can say
"wouldn't you want to build a god?" But we don't know if AI could be a god.
Maybe an intelligent being that thinks faster than humans goes crazy? Of
course, we don't know that, but my point is that the main reason we think so
much of AI is that at this time, we don't know what it is and what it could
do.

> Why would it make sense to wait?

Again, that's a separate discussion, but if we don't know what something is or
when it could arrive, it may make more sense to think about things we know
more about and are already here or known to be imminent. Anyway, anyone is
free to work on what they like, but OpenAI does not know that they're
"building artificial general intelligence."

> I think you're way too confident to be as sure as you are that they are
> not.

I don't know that they're not, but they don't know that they are, and that
means they're not "building AGI."

~~~
fossuser
I can understand your point about the language, but I guess I think it's
reasonable to set the goal for what you actually want and work towards it. It
may turn out to be unattainable, but I think generally you need to at least
set it as the goal. It also seems less clear to me that they are close or far
from it (I don't think it's on the same level as warp drive).

I don't know about the god thing you mention and the rationalist stuff I've
read hasn't been about that. The main argument as I understand it is:

1\. AGI is possible

2\. Given AGI is possible, if it's created without the ability to align its
goals with human goals, we will lose control of it.

3\. If we lose control of it, it will have unknown outcomes which are more
likely to be bad than benign or good.

Therefore we should try and figure out a way to make it safe before AGI
exists.

Maybe humans just happen to be an intelligence upper bound and anything
operating at a higher level goes crazy? That seems unlikely to me given that
humans have a lot of biological constraints (heads have to fit out of birth
canals, have to be able to run on energy from food, selective pressure for
other things besides just intelligence). You could be right, but I'd bet on
the other side.

The last bit is if we can solve this in a way that aligns the goals with human
goals (open question since humans themselves are not really aligned) we could
solve most problems we need to solve.

~~~
pron
I think discussions of AI safety at this stage -- when we're already having
problems with what passes for AI these days that we're not handling well at
all -- are a bit silly, but I don't have anything particularly intelligent to
say on the matter, and neither, it seems, does anyone else, except maybe for
this article that shows that the AGI paranoia (as opposed to the real threats
from "AI" we're already facing, like YouTube's recommendation engine) may be a
result of a point of view peculiar to Silicon Valley culture:
[https://www.buzzfeednews.com/article/tedchiang/the-real-
dang...](https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-
civilization-isnt-ai-its-runaway)

~~~
fossuser
I agree with you in a way, if AGI ends up being 300yrs out then work on safety
now is likely not that important since whatever technology is developed in
that time will probably end up being critical to solving the problem.

My main issue personally is that I'm not confident if it's really far out or
not and people seem bad at predicting this on both sides. Given that, it
probably makes sense to start the work now since goal alignment is a hard
problem and it's unknown when it'll become relevant.

I read the BuzzFeed article, and I think the main issue with it is that he
assumes an AGI will be goal-aligned by the very nature of being an AGI:

"In psychology, the term “insight” is used to describe a recognition of one’s
own condition, such as when a person with mental illness is aware of their
illness. More broadly, it describes the ability to recognize patterns in one’s
own behavior. It’s an example of metacognition, or thinking about one’s own
thinking, and it’s something most humans are capable of but animals are not.
And I believe the best test of whether an AI is really engaging in human-level
cognition would be for it to demonstrate insight of this kind."

Humans have general preferences and goals built in that have been selected for
over thousands of years. An AGI won't have those by default. I think people
often think that something intelligent will be like human intelligence, but
the entire point of the strawberry example is that an intelligence with
different goals that's very good at general problem solving will not have
'insight' that tells it what humans think is good (that's the reason for
trying to solve the goal alignment problem - you don't get this for free).

He kind of argues for the importance of AGI goal alignment which he calls
'insight', but doesn't realize he's doing so?

The comparison to Silicon Valley being blinded by the economies of their own
behavior is just weak politics that's missing the point.

~~~
pron
We don't know that "goal alignment" (to use the techno-cult name) is a hard
problem; we don't know that it's an important problem; we don't even know what
the problem is. We don't know that intelligence is "general problem solving."
In fact, we can be pretty sure it isn't, because humans aren't very good at
solving _general_ problems, just at solving _human_ problems.

------
intrasight
I'm skeptical that AGI will exist on our planet in my lifetime. I've no doubt
that it exists elsewhere in the galaxy. If an alien species does come to visit
some day, I think it more likely than not that it'll be an AGI.

~~~
Jach
You piqued my interest, what makes you think there's any intelligence,
artificial or otherwise, anywhere else but here in the Milky Way?

~~~
edm0nd
I think that because the universe is so vast, the math suggests there would be
others out there. We just don't have the technology yet to travel fast or far
enough, and we also can't communicate with or detect them.

~~~
Jach
The Milky Way is not infinite and not even that vast. As for "not having the
technology", why not postulate an equal likelihood for heavenly angels? We
just don't have the technology to perceive heaven, after all...

------
quocble
Everyone: AI will one day destroy the human race.

Microsoft: Let's invest $1 Billion in it.

------
auvi
Anybody know what happened to Elon Musk's pledge of $1B to OpenAI?

~~~
gdb
OpenAI Nonprofit’s initial funding commitment was to be consumed over many
years!

When we started, we didn't know just how fast things would be moving (e.g.
[https://openai.com/blog/ai-and-compute/](https://openai.com/blog/ai-and-
compute/)), and we've needed to scale much faster than planned when starting
OpenAI.

------
mikedh
Here's to hoping some of that goes to acquiring and open sourcing Mujoco, or
to switching OpenAI Gym's default physics to something open source.

------
block_square
I am interested in working at OpenAI and similar companies. What background
would you recommend?

What skills should graduate students focus in to be competitive?

~~~
FartyMcFarter
CS and/or ML and/or Neuroscience and/or Maths (including statistics).

------
JabavuAdams
I think I can develop AGI for less, in a shorter time-frame. I doubt anyone
believes me enough to give me $$$.

~~~
nradov
Self fund your project and build a prototype. If you can demonstrate even some
limited progress the investor community will drown you in funding.

~~~
JabavuAdams
Once it becomes clear to non-specialists that AGI is possible, and possibly
imminent, that greatly changes the equation, though. This isn't a technology
like any other.

Why wouldn't governments and other groups just seize the prototype? You'd have
a hot-potato on your hands and figuring out how to survive might be your
biggest concern. Like imagine you suddenly came into possession of a trillion
dollars in bearer bonds. If that leaks out, people will come after you, not
just by legal means.

All of this skews the calculus towards immediate public disclosure, rather
than trying to gain advantage by delaying release.

EDIT> Or going first, and attempting to neutralize all possible competitors.
This is a terrible calculus.

~~~
nradov
We're discussing reality here, not paranoid fantasies and conspiracy theories.

------
uvictor
Azure is the worst... OpenAI better take that in cash.

------
crudbug
How much of this is Azure Computing Credit ? :)

------
danielcampos93
Any plan for a Seattle/Redmond office?

------
nutanc
How much of the $1B is in Azure credits?

------
al2o3cr
"It looks like you're trying to build an army of robots to extinguish
humanity, would you like help?"

------
thrax
Revenge of Microsoft Bob!

------
phonebucket
great news! can OpenAI please open an office in London now :)

------
adammenges
This is excellent, happy for both sides

------
terrycody
Musk + Gates

world is end, welcome to the apocalypse

------
appstorelottery
I'm surprised to see the amount of downvotes that follow negativity pointed at
Microsoft. What's going on here?

EDIT: Oh. I see. I wasn't aware that OpenAI was a YC thing. I've been a member
of Hacker News through various accounts for countless years, however this is
the first time I've seen moderation pushing towards an obvious YC agenda. Very
interesting...

EDIT2: Actually, after reading the comments - I find it more likely that
Microsoft/OpenAI stakeholders are participating heavily. Over 800 upvotes for
this post makes it quite remarkable... I'll leave my tinfoil hat over there...

~~~
dang
I can't really tell what you're saying, but if you're insinuating foul play, I
haven't seen evidence of that in this thread. In any case, the site guidelines
ask you to email hn@ycombinator.com with concerns instead of posting comments
about them.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
appstorelottery
My apologies - you are correct.

>Please don't make insinuations about astroturfing. It degrades discussion and
is usually mistaken. If you're worried, email us and we'll look at the data.

------
mtgx
I imagine this comes with a ton of strings attached, written or unwritten.
This probably gives Microsoft control of the project's roadmap.

~~~
gdb
The OpenAI board remains in charge of OpenAI’s AGI-relevant decisions.
Microsoft has the right to appoint one board seat, which they have not yet
exercised.

~~~
est31
I've written up my opinion on the deal that OpenAI gives to investors here:
[https://www.reddit.com/r/MachineLearning/comments/azvbmn/n_o...](https://www.reddit.com/r/MachineLearning/comments/azvbmn/n_openai_lp/eig4fj2/)

TLDR: either the public is being conned or Microsoft is. And since Microsoft
presumably used top lawyers to close the deal, I doubt it's them.

~~~
vasilipupkin
sorry, your opinion appears to be nonsensical.

a cap of 100x on returns and limited voting power certainly changes the price
investors are willing to pay for the shares, but doesn't invalidate anything
or render shares worthless or anything of the sort.

~~~
est31
The difference between shares and a loan is that shares:

a) have no cap on the returns

b) give you control over the company

c) give you the ability to sue if the company acts against shareholder
interests

d) have no due date

Differences a, b and c seem to be gone here. Would you give someone a loan
where you have no ability to sue them if they never give you the money back? I
wouldn't unless that person is very important to me or the money is
insignificant. Neither is the case here.

------
ptah
this is still microsoft. are they going to drop the "open" from their name?

------
howard941
Here's hoping Microsoft applies some of these funds to inhibiting the source-
IP-veiled bitcoin sextortions facilitated by its outlook.com offering.

------
throwaway713
Well, this is about the surest confirmation that AGI is going to happen within
the next few years. Why? Because whenever HN responds with a bunch of negative
sentiment, whatever was sharply criticized ends up doing really well: see the
post announcing Dropbox or the post announcing Instagram’s acquisition.

------
benl
If you strip out the AGI hype then this just sounds like OpenAI is now moving
to monetizing their tech. This makes sense for them but probably not for the
philanthropists who originally backed them.

Sadly for them, AGI is metaphysically impossible - this will be realized
eventually but a lot of waste and possibly harm will happen first.

We are not just super sophisticated machines, so the fact that we can think
doesn’t tell us anything about what’s possible for machines. But philosophy
does - and it tells us you can’t get mind from matter, no matter what
configuration you put it in.

~~~
noir_lord
> But philosophy does - and it tells us you can’t get mind from matter, no
> matter what configuration you put it in.

Curious - do you think humans have minds? Because if so, we are very much
matter; and if not, well, that's an interesting thought as well.

~~~
benl
That’s right - we have minds therefore we must be more than just matter.

I used to think the opposite, but reading the philosophy on the subject
changed my mind. There are a lot of different takes on the topic, but what
most added up for me was the philosophy of Aristotle and Aquinas. There are
many great expositions of their work out there.

~~~
tim333
AGI in the sense of robots that can do the jobs people can, design better
robots and so on would be a game changer in itself. You can leave it to
philosophers to argue over whether they have true feelings and so on.

------
chc4
Well, gotta wonder how well their charter[1] will fare against Microsoft
pressure...Microsoft isn't exactly well-known for their benevolence and
cautious approach.

1: [https://openai.com/charter/](https://openai.com/charter/)

------
houzertuch
I don’t understand how there can be no comments regarding the fatal flaw of
AGI which is that it will completely ruin the economics of the world. The
world is the way it is now because humans are the only source of intelligent
signal processing. That’s the only reason why humans enjoy the limited rights
and privileges that they do. That’s the only reason why life has gotten better
and better with advancing medicine and so on. This is a fundamental principle
that cannot be escaped. It doesn’t matter how you slice it. But people defer
everything to “UBI will work out somehow” or “nah, humans will never be
replaced.” Bringing god-like super-intelligent beings online is a
fundamentally stupid thing to do. And preventing their development, relative
to how disastrous their development would be, is very easy.

I have made many predictions here on HN and they all have outlined that cloud
computing would be the substrate from which AGI will spring. Now we see this
announcement. There is a reason why OpenAI is making a deal like this with a
very large cloud compute vendor: it’s because I’m right. And that means I’m
probably also right in saying that we can stop this if we want to. You can’t
just build a computer in your back yard. And the internet is very fragile. Some
simple regulation and global awareness and initiative could control what comes
out of fabs and shut down the infrastructure necessary for cloud computing. It
would be very easy relative to the size of the problem.

~~~
richardwhiuk
Is your alternative "Don't invent general AI?"?

If so that seems unbelievably naive. Things generally can't be un-invented,
and it's unimaginably hard to prevent people inventing things, especially with
such a large economic upside for inventing it.

~~~
houzertuch
Prove me wrong if it is so obvious.

As the people at OpenAI have rightly said, AGI is a compute-gated problem. It
is a problem that can only be solved with very, very large amounts of compute.

The world has some total amount of computing power in terms of silicon based
computation. For AGI to happen, there are two requirements: that this total be
equal to or greater than some theoretical threshold value for AGI, and that the
computing power is consolidated. So in layman’s terms, you have to have a lot
of computers and they have to be connected in such a way as to efficiently
share their compute. AGI will never come about if every individual computer
were used to do research by separate entities but if all of those computers
were connected into a single virtual computer, AGI might be discovered with
them.

So clearly in order to prevent AGI, the best thing to do would be to address
these two aspects. Prevent the total computing power of the world from growing
and prevent computers from forming virtual meta-computers. Both of these tasks
are _in principle_ extremely easy.

Chip fabs are huge and expensive. Nobody is fabricating chips in their garage.
This is just a hard fact. There aren’t that many fabs in the world and they
are all highly susceptible to regulation. This isn’t prohibition of alcohol so
please don’t confuse yourself. Nobody will be brewing chips in their cellar.

Let’s imagine that you could not regulate cloud computing. Let’s say the only
way to prevent computers from offering their compute on a virtual market was
to shut down the internet. This by default is the hardest way to solve the
second aspect of the AGI problem and even it is very easy. This is because the
internet is a large fragile collection of infrastructure that depends heavily
on government cooperation. Nobody is going to string a fiber backbone for
black market internet. ISPs cannot exist without regulatory approval.

If there were political awareness and motivation, and it was a global
phenomenon, yes, it would be extremely easy to do what I’ve described. And
since AGI is to the detriment of literally all people, it is not a far fetched
scenario. And unlike alcohol in the United States, bootlegging would not be a
problem. People in the USA think that banning anything whatsoever doesn’t
work. It’s just fuzzy thinking, I can assure you.

~~~
richardwhiuk
Nothing in that sounds easy.

If there are N governments in the world, and they all agree not to create
general AI, then it's strongly in each of their interests to betray the
others, create general AI, and capture the economic growth.

Even if general AI is impossible, it's in their interests to develop huge
computing capacity, because that's demonstrably economically useful.

You are hypothesizing that it's easy to get 7 billion people to all agree to
co-operate in a game of prisoner's dilemma, when if a small fraction of them
choose to betray, they have the potential to capture massive value.

And you want to do this under the premise that AGI _might_ be a problem.

~~~
houzertuch
Like I have said countless times, it’s easy compared to what we get in return.
It’s easy to understand in principle. It doesn’t require sophisticated
mathematics.

So you think that we should let all countries have whatever weapons they want
under your logic. They will develop nukes and chemical weapons regardless of
any international agreement that is established, so why even try? The obvious
answer is to advance our own nuke technology as fast as possible so that we,
the good guys, will lead where the arms race goes.

And AGI is a far greater existential threat than nuclear weapons. It is a
greater existential threat than anything else, including global warming. It’s
the biggest Pandora’s box in history. The idea of controlling or guiding its
impact by having “the good guys” develop AGI first is the precipice of
naivety. We lose nothing by _trying_ to stop it. And we stand to gain more
than we have ever gained from any coordinated effort. How easy or hard it
might be is irrelevant, although it is much easier than basically anyone
appreciates.

~~~
richardwhiuk
Nuclear weapons provably kill people. AGI doesn't. Be careful about your
hyperbole.

------
hootan
I'm not a genius. But it was 2015 when Microsoft announced that MS <3 Linux!!!
And said its own Linux distribution was on the way.

I never heard about its release.

At that time I said MS would move towards open source, because it realized
that it's a winning and correct strategy for software distribution.

It didn't take long for MS to acquire GitHub for $7.5B.

Its Team Foundation was a failure, so it tried to get a good one.

But it also took control over all the source code, and its history. It didn't
seem dangerous in my opinion, until I realized that due to one-sided US
sanctions, repos of some nationalities (like Iranians) got deactivated!!!

That's not the definition of open source, as far as I know...

MS is a corporation, hence it has to obey the government.

But open source belongs to no one. These kinds of investments might be
intended to bind the potential open source communities!!!

It is obvious that accepting this sort of money without an open-access
agreement is a horrible mistake. (We should learn from the story of GitHub.)

In my opinion, when it's not clear what kinds of rights these investments
bring for enterprises, people should stop contributing to them.

Maybe it's a good idea to ask what Linus and Richard think of these moves!

~~~
aliswe
MS Linux distro... Are you talking about this:

[https://wccftech.com/microsoft-20-linux-based-
distrowindows-...](https://wccftech.com/microsoft-20-linux-based-
distrowindows-10-1809/)

Or this

[https://www.omgubuntu.co.uk/2018/04/microsoft-linux-
custom-k...](https://www.omgubuntu.co.uk/2018/04/microsoft-linux-custom-
kernel-azure-sphere)

?

