
Demo of an OpenAI language model applied to code generation [video] - cjlovett
https://twitter.com/i/broadcasts/1OyKAYWPRrWKb
======
neil_s
I had trouble accessing the relevant video snippet even after going through
the conference registration, so here's a summary.

You can view the demo at
[https://twitter.com/i/broadcasts/1OyKAYWPRrWKb](https://twitter.com/i/broadcasts/1OyKAYWPRrWKb)
starting around 29:00.

It's Sam Altman demoing a massive OpenAI model that was trained on GitHub OSS
repos using a Microsoft supercomputer. It's not IntelliCode, but the host says
that they're working on compressing the models to a size that would be
feasible in IntelliCode. The code model uses English-language comments, or
simply function signatures, to generate entire functions. Pretty cool.

~~~
sama
Thanks, but it's Sam McCandlish doing the demo (and the project).

~~~
KhoomeiK
I'm confused. Is that not you doing the OpenAI demo around 29:00?

~~~
abdelm
Altman introduced the video at 29:00, but a different person is narrating the
demo.

~~~
KhoomeiK
Ah, makes sense. Thanks!

------
YeGoblynQueenne
So that's basically program synthesis from natural language (ish)
specifications (i.e. the comments).

I can see this being a useful tool [1]. However, I don't expect any capacity
for innovation. At best this is like having an exceptionally smart
autocomplete function that can look up code snippets on SO for you (provided
those code snippets are no longer than one line).

That's not to say that it can't write _new_ code that nobody has quite
written before in the same way. But for a tool like this to be useful,
it must stick as close as possible to what is expected, or it will slow
development down rather than helping it. Which means it can only do what has
already been done before.

For instance, don't expect this to come up with a new sorting algorithm out
of the blue, or to write good code for a certain problem when the majority of
code solving that problem on GitHub happens to be pretty bad.

In other words: everyone can relax. This will not take your job. Or mine.

____________

[1] I apologise to the people who know me and who will now be falling off
their chairs. OK down there?

~~~
gwern
I think you are underselling the potential of a model which deeply understands
programming. Imagine combining such a model with something like AutoML-Zero:
[https://arxiv.org/abs/2003.03384](https://arxiv.org/abs/2003.03384) It may
not be 'creative', but used as tab-completion it's not being rewarded,
incentivized, or used in any way that would expose its ability to create a new
sorting algorithm.

~~~
raghavgoyal14
I agree on the tab-completion part. Something like Gmail's smart-compose could
have potentially huge benefits here.

But I'm not sure about the "deeply understand programming" part. Language
modelling and "AI", in their current form, uncover only statistical
correlations and barely scratch the surface of what "understanding" is. This
has restricted the deployment of the majority of academic research into the
real world, and this, I believe, is no different and will work only in
constrained settings.

Edit: typo

~~~
um_ya
It would be nice to have an AI that could write unit tests, or look over your
code and understand and explain where you might have bugs.

~~~
raghavgoyal14
> look over your code and understand and explain where you might have bugs.

This would certainly be interesting. I'm not aware of active research going on
in this area (any pointers would be helpful!).

This would require an agent to have a thorough understanding of the logic
you're trying to implement, and to locate the piece of code where it silently
fails. For this you'd again need a training dataset where the input is a piece
of code and the supervision signal (the output) is the location of the bug. I
could imagine some sort of self-supervision to tackle this initially, where
you'd intentionally introduce bugs in your code to generate training data. But
I'm not sure how far this can go!
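A minimal sketch of that self-supervision idea, assuming Python 3.9+ for `ast.unparse` (the mutation operator and the example snippet are my own choices, not anything from the demo): flip one comparison operator and keep the mutated line number as the label.

```python
import ast
import random

# Operator flips that turn correct comparisons into plausible bugs.
SWAPS = {ast.Lt: ast.Gt, ast.Gt: ast.Lt, ast.LtE: ast.GtE,
         ast.GtE: ast.LtE, ast.Eq: ast.NotEq, ast.NotEq: ast.Eq}

def inject_bug(source, seed=0):
    """Flip one comparison operator in `source` to create a
    (buggy_code, bug_line) training pair; the line number is the
    supervision signal. Returns None if nothing can be mutated."""
    tree = ast.parse(source)
    targets = [n for n in ast.walk(tree)
               if isinstance(n, ast.Compare)
               and all(type(op) in SWAPS for op in n.ops)]
    if not targets:
        return None
    node = random.Random(seed).choice(targets)
    node.ops = [SWAPS[type(op)]() for op in node.ops]
    return ast.unparse(tree), node.lineno

# `age >= 18` becomes `age <= 18`, labeled with line 2.
buggy, line = inject_bug("def is_adult(age):\n    return age >= 18\n")
```

Each such pair is one training example: the buggy program as input, the planted location as target.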

~~~
westurner
1. Generate test cases from function/class/method definitions.

2. Generate test cases from fuzz results.

3. Run tests and walk outward from symbols around relevant stacktrace frames
(line numbers).

4. Mutate and run the tests again.

...

Model-based Testing (MBT):
[https://en.wikipedia.org/wiki/Model-based_testing](https://en.wikipedia.org/wiki/Model-based_testing)

> _Models can also be constructed from completed systems_
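Steps 2 and 3 above can be sketched as a loop that records, for each crashing input, the line of the deepest stack frame (a toy illustration; `buggy_reciprocal` is a made-up example function, not from the demo):

```python
import random
import traceback

def fuzz(fn, n_cases=100, seed=0):
    """Throw random integer inputs at `fn`; for every crash, record
    (input, line number of the deepest frame) so a later step can
    walk outward from that frame and mutate the test."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_cases):
        x = rng.randint(-5, 5)
        try:
            fn(x)
        except Exception as exc:
            frame = traceback.extract_tb(exc.__traceback__)[-1]
            failures.append((x, frame.lineno))
    return failures

def buggy_reciprocal(x):
    return 1 / x  # crashes when x == 0
```

Running `fuzz(buggy_reciprocal)` surfaces the `x == 0` failure along with the offending line, which step 3's stacktrace walk would start from.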

------
tanilama
I mean, it is cool.

But here's the thing: the natural-language description of a function is not
always that unambiguous.

When you tell a function to 'compute XYZ', what you actually mean is 'check
whether X.a exists; if so, execute branch 1), else branch 2)'.

If the logic gets really complicated, then describing it accurately in human
language isn't necessarily faster than writing the code directly. Otherwise
we wouldn't need to invent programming languages at all; we could just write
compilers that interpret and execute human languages.
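That gap in code, using a hypothetical discount function of my own invention: the one-line English description hides a tree of branches that someone still has to specify.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    is_member: bool

@dataclass
class Order:
    customer: Customer
    total: float

# The comment says: "Compute the discount for this order."
# The spec actually means:
def compute_discount(order):
    if order.customer.is_member:               # branch 1
        rate = 0.15 if order.total >= 100 else 0.10
    else:                                      # branch 2
        rate = 0.05 if order.total >= 100 else 0.0
    return order.total * rate
```

The qualifying tiers, how they stack, and the boundary at 100 are all invisible in the comment; writing them in English is no shorter than the code.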

I'm also curious whether the model itself is conditioned on the type
constraints of the class. It's neat that they picked Python in this case. But
if it were Java or another statically typed language, would this system
condition its generation not only on the natural text but also on the
resulting type system? My bet, given my understanding of the language-modeling
approach they use, is that they are not, due to the very high complexity and
cost of training and domain adaptation.

Overall, this is again an interesting demo. But I think that for code
generation from human language to be useful, it really needs to be about 99%
accurate to be remotely practical.

~~~
MiroF
I agree that code generation of complex functions is hard.

But I think the example given of _unit testing_ (i.e. a natural-language
description of specific function behavior -> code) is extremely useful.

~~~
tanilama
Unit testing is a good use case.

But that would require conditioning on the type system, meaning the code-gen
needs to understand the object's interface; that's not impossible with current
techniques, but it's hard due to computational complexity.

Again, I don't dispute that this tool is interesting. But claiming it is
groundbreaking or game-changing is simply not right.

The majority of a programmer's time is not spent typing code. It's spent
looking at the comment/description, thinking about it, editing some code, then
rethinking and editing again.

This tool has the potential to save some typing time, but it's not going to
change things fundamentally.

------
IdiocyInAction
How does this do compared to other models? Is this a totally cutting edge
result? On the surface, it seems quite impressive, but sans an environment to
try it out with, I cannot be entirely sure. Still, this does make me question
whether I chose a safe career, haha.

The thing is, I'd really need to see a live demo to see how good this is.
Making mistakes is actually kind of a big issue; as most people know,
debugging code is harder than writing it. And a lot of the language models
which can write impressive-seeming text also generate masses of garbage.
There's no way to know whether this was cherrypicked or not.

The mere fact that it can extract meaning from text like this is already
really impressive though.

~~~
bglazer
I've read a fair number of papers on neural program synthesis lately. To me,
these seemed to be obviously cherry picked examples, so you can't really
evaluate the whole system based on them.

However, this is fairly impressive for a couple reasons. First, the system
constructs programs from natural language descriptions, rather than examples
of input-output pairs or a formal specification, which are the most common
settings for program synthesis. Second, they're generating full blown python,
not a smaller, domain specific language.

Finally, and this is pretty mind-blowing, is the seamless, idiomatic use of
loops, branches, and function calls. I haven't seen previous program synthesis
tools able to generate such complex code. They're typically limited to simple
linear programs with less than about 100 lines. Complex control flow and
function calls are still beyond their reach for the most part.

I'm not an active researcher in neural program synthesis, so my statements may
not reflect the current state of the art.

I honestly thought that the most promising route forward for program synthesis
would be a model that incorporated knowledge of the syntax and semantics of
code. Most likely, a model that manipulated, or at least had some view of, the
program's AST. This seems to be just throwing a giant Transformer model at
github.

Fine tuning a vanilla language model on a giant corpus of code feels like a
dead end for the field, long-term. It seems obvious to me that humans are
doing something more than just statistical pattern recognition and generation
when we write and reason about code.

Then again, it's hard to argue with results. I'm sure lots of pre-neural
network voice recognition researchers were in love with the elegance of their
hidden markov models.

Edit: Also, everyone should go try the FlashFill feature in Microsoft Excel.
As far as I know, it's the only example of program synthesis shipped in a
consumer-facing production system, and it works shockingly well.

~~~
IdiocyInAction
> Fine tuning a vanilla language model on a giant corpus of code feels like a
> dead end for the field, long-term. It seems obvious to me that humans are
> doing something more than just statistical pattern recognition and
> generation when we write and reason about code.

Yeah, this is the main reason why I would be interested in more examples. But
if this thing was trained on all of GitHub, I could imagine it coming up
with decent-looking code for a lot of examples; a beefy, smarter Google with
some rudimentary contextual understanding, if you will. Still, the presence of
any mistakes is a no-go, and I'd be really interested in how it reacts to more
realistic, specific requirements.

But yeah, I'd figure a model for code generation would have to have some kind
of knowledge of syntax and semantics, rather than doing pure statistical
pattern matching, to be of any real use. It would not only have to generate,
but also to debug its code (I wonder whether you could do that purely with
statistical pattern recognition). I might be wrong, of course, but I would be
surprised if that is enough to write complex code.

~~~
MauranKilom
Five years ago we were already here:
[https://karpathy.github.io/2015/05/21/rnn-effectiveness/](https://karpathy.github.io/2015/05/21/rnn-effectiveness/)

Calling the field "statistical pattern matching" might be underselling it a
bit, even if technically accurate on some level. I mean, syntax/semantics are
clearly not the problem; those are the easiest to learn (see the post above).
If anything, I'm scared of it writing syntactically correct nonsense (or even
worse, subtly-flawed-but-correct-looking code).

------
parksy
I've thought about this before, but I can see that logical errors get
introduced which must be manually tested and reviewed anyway. So what if a
more reliable approach could be achieved by training these models on test
cases alongside passing code?

That way developers just write unit tests or functional tests, and the AI
generates code and retrains itself until the code passes all tests. This
could happen silently in the background as the developer defines the tests.

A number of natural language test frameworks exist. Behat, for example, lets
you define tests such as:

    Feature: Multiple site support
    
      Background:
        Given a global administrator named "Greg"
        And a blog named "Greg's anti-tax rants"
        And a customer named "Wilson"
        And a blog named "Expensive Therapy" owned by "Wilson"
    
      Scenario: Wilson posts to his own blog
        Given I am logged in as Wilson
        When I try to post to "Expensive Therapy"
        Then I should see "Your article was published."
    
      Scenario: Greg posts to a client's blog
        Given I am logged in as Greg
        When I try to post to "Expensive Therapy"
        Then I should see "Your article was published."


It could still fit the dream of describing to a computer what kind of program
you want and having it figure out the plumbing.

Anyway interesting work. Very interesting. I remember a few colleagues laughed
at me no more than 5 years ago when I suggested that AI would eventually write
code. And here it is, in an early version, flawed surely but only set to
improve.

Edit to add: This subject, while insanely interesting to me, is well out of my
wheelhouse. I'm guessing there's possibly semantic structure to the above that
the type of model used in the demo can't deal with? This one use case has to
coexist in an entire ecosystem of dependencies and related entities... Could
the model cope with that, or is it just calculating the likelihood of the next
character like other models I've seen, but with insane accuracy when it comes
to code?

~~~
BaronSamedi
Instead of Test Driven Development, Test Only Development? I like that idea.
This reminds me of an article I read a while ago on co-evolutionary training
in genetic programming: one algorithm evolving to do something, with another
evolving to break it.

~~~
parksy
Yeah that's a good way of putting it. Also has a catchy name, "TOD".

Ultimately we don't care what the code looks like; if it passes all tests,
then it "works". You probably don't even need to generate the code in a
high-level language if people aren't ever going to read it.

You'd probably need tests designed to ensure the code executes quickly enough,
and automatically generated edge-case test data, so you don't end up with a
blog where you can only post articles with the titles in the exact test data,
heh.

The future seems interesting for us developer types anyway. If a product
designer could express their requirements in plain language developers would
only really need to be around for cases where the models failed and more
training data was needed to improve them.

------
Voloskaya
I'm a bit confused: is this built by OpenAI or Microsoft? Microsoft released
the paper "IntelliCode Compose: Code Generation Using Transformer" [1] 4 days
ago, and there is no attribution to anyone from OpenAI in it.

Are those two entirely separate and yet strikingly similar initiatives?

[1]: [https://arxiv.org/abs/2005.08025v1](https://arxiv.org/abs/2005.08025v1)

~~~
p1esk
_IntelliCode Compose is built around a multi-layer generative pretrained
transformer model for code (GPT-C), which is a variant of the GPT-2_

GPT-2 is built by OpenAI.

~~~
Voloskaya
I am aware of this. I am referring to the video, where Sam Altman (CEO of
OpenAI) presents the demo and says "we have built", while Kevin Scott (CTO of
MSFT) says it's the first time he has seen it. So this is clearly marketed as
OpenAI's work, not just as a model based on their work.

------
grensley
Wow, this could be a total gamechanger. You have to be really observant about
the bugs, though; I would have totally missed the one with the price discount
without executing the code.

~~~
netsec_burn
With this lowering the barrier to entry for programming even further, I wonder
if we'll see more bugs (like the price discount one) as a result?

~~~
bufferoverflow
You still need a programmer to find the bugs. I think it's actually harder to
spot and fix a bug than to write a simple method that contains one.

~~~
neatze
Programming skill is directly correlated with a programmer's ability to debug.
I'll go as far as stating that programming is not about writing code but about
the ability to find bugs and figure out how to resolve them.

~~~
grensley
I've always found it easier to debug code I wrote, mostly because I find it
easier to read my own code, since I understand the author's intent.

------
swalsh
These are just baby steps, but holy shit is that impressive. It kind of feels
like working with offshore devs, but it's in real time.

~~~
nnq
...that's mildly insulting

~~~
LAMike
Only if you assume what shore he's referring to.

------
corbins
Mirror:
[https://twitter.com/i/broadcasts/1OyKAYWPRrWKb](https://twitter.com/i/broadcasts/1OyKAYWPRrWKb)

~~~
dang
Ok, we've changed to that from
[https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-ba...](https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-baed-3bc207bc5bec?source=sessions).
Thanks!

If anyone knows a way to link to the start of the demo at 28m30s, or
thereabouts, we can modify it again. (Edit: maybe
[https://blog.twitter.com/en_us/topics/product/2018/video-tim...](https://blog.twitter.com/en_us/topics/product/2018/video-timestamps.html)
can be used to make that work?)

~~~
modeless
This sort of works:
[https://www.pscp.tv/Microsoft/1OyKAYWPRrWKb?t=29m19s](https://www.pscp.tv/Microsoft/1OyKAYWPRrWKb?t=29m19s)

It's not obvious that it works until you hit the play button and it starts at
the right time. Seems like the only way to get it to play automatically is to
embed it in a tweet like this:
[https://twitter.com/modeless/status/1263222139840167936](https://twitter.com/modeless/status/1263222139840167936)

------
gradys
I worked on a project very much like this last summer: a transformer language
model applied to code completion.

You'd be surprised how easy it is to get a model that performs as well as what
you see in the video. And it's even easier now that people have built great
libraries for fine-tuning generative language models.

I encourage you to try it yourself! There are many interesting extensions for
people to explore:

- Use bidirectional context (vanilla GPT-2 only sees backward context).

- Integrate with semantic analysis tools.

- Experiment with different context representations. You condition the model
on an arbitrary sequence of N tokens, and it's not necessarily the case that
you should spend that whole budget on the N tokens that came immediately
before. What about including the imports at the top of the file? What about
the docstrings for functions that were just used? What about the filepath of
the current file?
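One way to sketch that last idea, assuming whitespace tokenization for simplicity (a real system would use the model's own tokenizer, and all names here are my own): spend part of the budget on file-level signals and fill the rest with the most recent code.

```python
def build_context(filepath, imports, docstrings, preceding_tokens,
                  budget=1024):
    """Pack the conditioning window: file path, imports, and recent
    docstrings first, then as many of the most recent code tokens
    as still fit in the budget."""
    header = [f"# file: {filepath}"] + imports + docstrings
    header_tokens = " ".join(header).split()
    remaining = budget - len(header_tokens)
    # Keep only the most recent code tokens that still fit.
    tail = preceding_tokens[-remaining:] if remaining > 0 else []
    return header_tokens + tail
```

The interesting design question is exactly the one raised above: how much of the budget the header deserves versus the immediately preceding tokens.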

Don't look at something like this as though watching your job be automated
away. Look at it as a tool that you can master and use to move up the stack.

~~~
arielroth
Did you explore all of these things? What were your results?

------
mring33621
Amazing!

So the developer's role will shift to:

1) writing good enough descriptions of the code to be generated by the AI
model

2) fixing any little issues in the generated code

~~~
detay
That would be supervised AI training, until 3) programmers become obsolete.

~~~
ttul
I don't think programmers will become obsolete. They will just waste less time
on boilerplate and reading Stackoverflow articles to figure out how to do XYZ.
Why not have an AI do that for you so that you can focus on the creative
stuff? Programming tools help us work more productively, which leads to larger
and more complex systems in less time - a win for everyone.

~~~
mring33621
I agree. I think this would make programming more fun and more productive.

------
simonhughes22
This is really cool. However, I doubt it can write more than very simple
functions. That may be enough to be useful however. It would be nice if they
created a demo page where we could try this out. This use case is a little
different than the auto-complete one.

------
jfoster
I wonder if this could be trained on just bug fix commits from GitHub in order
to produce a model that could suggest bug fixes for an existing code base.
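A sketch of the mining step that idea would need, split so the heuristic is testable without a repository (the keyword list is my own guess at what marks a fix, and the wrapper assumes `git` is on PATH):

```python
import subprocess

# Crude keyword heuristic; a real pipeline would need something smarter.
BUGFIX_WORDS = ("fix", "bug", "patch", "repair")

def looks_like_bugfix(message):
    """True when a commit message looks like a bug fix."""
    return any(word in message.lower() for word in BUGFIX_WORDS)

def bugfix_shas(repo_path):
    """List SHAs of likely bug-fix commits; diffing each against its
    parent would yield (buggy, fixed) training pairs."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    shas = []
    for line in log.splitlines():
        sha, _, message = line.partition(" ")
        if looks_like_bugfix(message):
            shas.append(sha)
    return shas
```

The hard part jfoster's idea leaves open is alignment: deciding which hunk of the diff is "the fix" rather than incidental refactoring.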

------
symplee
Can this freaky A.I. also generate the corresponding unit tests?

Or, for TDD, generate the unit tests _first_ based on the function name and
description. Then, if the dev updates any of those tests, or adds more tests,
use that information in auto generating the appropriate code.

~~~
simonhughes22
Towards the end of that section, he mentions they have also used it to
generate unit tests. I doubt it's doing full TDD, but it seems they are part
of the way there.

~~~
swiley
That actually sounds bad.

------
Jach
I don't see it replacing (or even much augmenting) professional programming
any time soon... My predicted use case for this is mostly with non-
programmers. They'll be instructed to write in English what they want to be
done, and behind the scenes this will attempt to generate code, execute it,
and give the results. A fun demo would be writing "Download the recipe on this
webpage (paste link) and order the ingredients from Safeway". If it could
generate its own billing and shipping storage to remember indefinitely after
getting it from the user, then generate the relevant web scraping / web
driving or API code for various websites, that'd be pretty sweet.

------
cjlovett
Hey, now we have a reason to write proper unambiguous code comments :-)

~~~
tanilama
I'd rather take code in that case.

------
f47il
Relevant section [https://youtu.be/fZSFNUT6iY8](https://youtu.be/fZSFNUT6iY8)

------
rpiguy
Donald Knuth would be proud! (it appears proper commenting is very important
to the AI's ability to generate code)

------
chrisco255
Is this a demo of their AI 'autocomplete' tech that they've built into Visual
Studio and VS Code?

~~~
nickysielicki
It includes a segment with Sam Altman doing Python code generation from
nothing other than signatures and comment strings. Pretty incredible --
assuming the demo isn't entirely smoke and mirrors.

------
imranq
Where this would be most useful is automated testing suites: just specify
what you are testing for. A product manager looking to test portions of a
system that absolutely need to work could specify code comments and generate
thousands of tests this way.

This is a gamechanger for ensuring the reliability of software. Many more
people can be involved in the software development process, and inject their
domain knowledge into it.

Are there any plans to open source the model? I would love to play around with
it.

------
Debonnys
Glad to see it learned to use spaces instead of tabs.

In all seriousness, the demo really looks amazing. I'm curious to see more
elaborate, real world examples though.

------
raghavgoyal14
Imagine all the Stack Overflow accepted answers funneled into your code just
because those answers appeared repeatedly in the training data.

------
AJRF
Very cool work.

However; I fear this moves software engineering closer to the role of
something like plumbing.

I've despaired at the state of most software I've used since as far back as I
can remember, except when it comes to tools that have the maturity of
something like linux, git, emacs, vim and the unix tools.

For software to get good, it needs to be deeply understood by at least one
person working on it. If you train an army of warrior drones who get full-line
autocompletion, first they'll start forgetting what types a method takes as
its parameters, then they'll be less likely to explore codebases, instead
plugging in the first autocompletion that comes to their editor.

Their bosses will of course want this in the name of "Getting Shit Done". We
already have this sort of divide between developers: those who lean heavily on
their tools and those who use minimal editor help. Once you're forced to learn
because your tool isn't spoon-feeding you, you have a chance to reason better
from first principles using the code you have available. I don't think it's a
shock that a very high percentage of the very best developers use emacs or vim
with minimal tooling.

I am aware that this whole comment has subtle tones of superiority and
elitism, and I am genuinely sorry for that, but in my experience it's just
true that people who lean really hard on their IDEs to do everything for them
are less able to develop creative solutions, and you can tell from having
conversations with them that they don't really understand what they are doing.

------
random32840
Is there an example of something like this, but trained on the actual abstract
syntax tree manipulations that are going on behind the scenes?

That seems like it would be considerably more effective, because you're
removing the noise/overhead of parsing the text and giving a much clearer
model of what's being manipulated to the AI.

------
yeldarb
I was very surprised how well it did mimicking the StackOverflow archives when
I trained GPT-2 on them last year:
[https://stackroboflow.com](https://stackroboflow.com) (Only the 345M weights
were released back then; now I'm curious how much better 1.5B would do.)

------
Avi-D-coder
GPT-2 is known to be unable to track and bind variables; scaling purely
associative models beyond trivial examples is going to be difficult, or more
likely impossible.

This will end up being a better TabNine. Models like GPT-2 are still just
approximating intelligence; they are not rationally cognizing.

~~~
gradys
Curious what analysis you're referencing here.

While I don't doubt people have shown that various transformer models have
certain limitations, I'm pretty bullish on transformer models in general.

Here's a post exploring the application of transformers to symbolic
mathematics, for instance:
[https://medium.com/analytics-vidhya/solving-differential-equ...](https://medium.com/analytics-vidhya/solving-differential-equations-with-transformers-21648d3a1695)

------
unixhero
Uhm. What if you could use this to produce code to improve ML libraries?
Quite recursive, or what?

~~~
spookyuser

      def build_agi(CEV):
        """Build an AGI with the specified CEV"""

------
brenden2
I can't even imagine what it's like to have so much money that you can spend
time working on things like this which are so incredibly unlikely to ever
become useful. Congrats and I hope you guys discover a great product some day.

~~~
ignoranceprior
> incredibly unlikely to ever become useful

Want to bet on that?

~~~
brenden2
I don't have any money left for gambling.

------
neatze
Not going to build my hopes up, but looking forward for automated tests
generation.

~~~
adeledeweylopez
Honestly I think that will be harder for this kind of system than writing this
sort of code. It takes a certain kind of creativity to think of ways that code
could potentially fail, and it also requires fairly deep understanding of the
intent behind a feature.

------
Bjorkbat
Looks cool. If you want to temper your expectations though, play some AI
Dungeon.

------
woile
Can someone explain to me how these kinds of models are shared? Would I need
to train it again, or are the trained models usually provided?

Is this one in particular open source?

------
monkeydust
As a product person, I'm wondering how much more productive this will make my
engineers. On the surface it looks impressive.

~~~
MauranKilom
I assume you have read a few scientific papers before.

This tool is the programming equivalent of an AI writing scientific papers
based on an abstract. It can follow all the formalities really well. It can
write beautiful English sentences. It might write formulas or produce graphs.
It will dot the i's and cross the t's.

But it's unclear whether what it says in the main part of the paper (and not
just the introduction) is actually correct and logically coherent, or just
pleasing-sounding nonsense.

It can certainly help make engineers _look_ more productive, just like it
could help someone write papers at record speed. Whether the results can/will
have any deeper value is yet to be determined. Maybe it will just be used for
the "boring" tasks - like the paper introduction.

My personal fear is that it will be very good at writing code that _looks_ ok,
even though there is a serious flaw. Essentially, programmers tend to become
good at spotting irregularities in the code corresponding to common human
errors. The mistakes of this AI might be much harder to spot because they
don't stand out in the same way.

------
sabujp
I think having something autogenerate tests would be a good first step.

------
boolcow
When is OpenAI planning to actually solve a hard problem? They have spent a
huge amount of money and time creating useless demos so far.

Creating flashy AI demos is relatively easy. Creating important AI products
that actually operate in the real world is the hard part.

~~~
forgotmyhnacc
Does it matter? OpenAI is run as a research lab, not a startup. If they run
out of money, the investors will eat the loss.

------
debbiedowner
How can I try it? And what is the compute cost?

------
mirekrusin
Comment driven development, nice.

------
master_yoda_1
the title is misleading

------
pdeligia
This is super cool!

------
bobly_today
So are we all going to be out of a job?

~~~
ben_w
When AI can reliably convert business-speak into efficient bug-free code,
everyone will be out of a job, because the business owners will ask it to
write them another AI to replace every other task their business does.

~~~
hendzen
Until the AI starts generating business-speak...

------
darepublic
I don't want to believe

------
testeur
def is_even(x):

------
rauf11
find odd numbers from list

------
alpb
I tried signing in with my Microsoft account as well; nope, they definitely
want you to fill out a registration form for the Build conference
[https://register.build.microsoft.com/](https://register.build.microsoft.com/).
Not gonna happen. I hope they learn not to paywall conferences of this kind;
their competition just puts them out live on YouTube.

~~~
dang
We've since changed the URL from
[https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-ba...](https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-baed-3bc207bc5bec?source=sessions)
to one that anyone can view.

------
consultutah
Is there any way to get to the video? I'm registered for Build, but the page
is all but empty...

~~~
dang
We've switched the URL from
[https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-ba...](https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-baed-3bc207bc5bec?source=sessions)
to a video that anyone can view.

------
cjlovett
Kevin Scott demos a new AI that is writing code in collaboration with a
developer... very cool!

~~~
dang
Please post these in a form that doesn't require jumping through complex hoops
to watch or read. If that means waiting until they're uploaded to a more
accessible place, that's fine. On HN there's no harm in waiting.
[https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...](https://hn.algolia.com/?dateRange=all&page=0&prefix=false&query=by%3Adang%20%22no%20harm%20in%20waiting%22&sort=byDate&type=comment)

Also, please don't rewrite titles to make them baity. That's against the site
guidelines:
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

Edit: other users have helpfully posted a link to the video that anyone can
view, so we've switched to that from
[https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-ba...](https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-baed-3bc207bc5bec?source=sessions)
and restored the submission.

------
datlife
I can't see anything

~~~
spery
The video should be there, but their site is frustrating, both for visibility
and reliability.

~~~
alchemyromcom
Maybe their AI can whip up something better soon.

~~~
cjlovett
ha ha, maybe we'll have a.i. hackathons soon :-)

------
ipsum2
Is there a way to watch this without an account?

~~~
dang
We've changed from
[https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-ba...](https://mybuild.microsoft.com/sessions/6c6ecd46-c39c-49d8-baed-3bc207bc5bec?source=sessions)
to a URL that anyone can view.

~~~
ipsum2
Thanks dang!

------
Vysero
I would much rather have an AI that is capable of interpreting what I say as
code. So if I say:

Build me a class which computes the larger of two integers.

The AI is smart enough to write it.

~~~
reducesuffering
Did you watch the video? The AI is interpreting the comments as instructions
on what code to generate. That's 95% of the solution, since the comments are
just English, and there already exists an abundance of NLU models in things
like Alexa, Google, etc. that take speech input and produce English output,
like the code comments.

