
Google Research: Looking Back at 2019, and Forward to 2020 and Beyond - theafh
https://ai.googleblog.com/2020/01/google-research-looking-back-at-2019.html
======
pm90
Holy shit. I may be really dumb to fall for the marketing, but ... holy shit.
This is some incredibly exciting stuff. I'm not a fan of the company, but I
will always be grateful to Google for funding this kind of stuff. Usually, one
would expect the research departments of universities to do stuff like this. I'm
glad that Google Research is allowing all these really smart scientists to
build all this stuff. Spellbinding.

~~~
rogerkirkness
Can't have one without the other, in almost all cases.

------
mpoteat
After skimming through this, it seems fair to say that a lot of the research to
come out of this group has been relatively "soft", i.e. ethics and
interpretability rather than, say, the newest and most powerful models or
innovations like gauge CNNs.

Not bad, just surprising.

~~~
obmelvin
As a sibling said, I think this is a focus of the article.

At the very least, Google has released multiple variations of one SOTA model
this year (a rough usage sketch follows the list):

* original BERT paper: [https://arxiv.org/abs/1810.04805](https://arxiv.org/abs/1810.04805)

* ALBERT (smaller model): [https://arxiv.org/abs/1909.11942](https://arxiv.org/abs/1909.11942)
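
For concreteness, here's a minimal sketch of what running these looks like,
assuming the Hugging Face transformers library and its stock checkpoint names
(my choice for illustration; the papers above don't prescribe any tooling):

    # Fill-in-the-blank (masked LM) inference, the pretraining task BERT
    # and ALBERT share. Assumes: pip install transformers torch
    from transformers import pipeline

    # ALBERT is a drop-in swap here; just change the checkpoint name.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    # fill_mask = pipeline("fill-mask", model="albert-base-v2")

    for candidate in fill_mask("Google released [MASK] new models this year."):
        print(candidate["token_str"], round(candidate["score"], 3))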

~~~
gradys
T5 is another architecture from Google that set the SOTA on GLUE and SuperGLUE
this year:
[https://arxiv.org/abs/1910.10683](https://arxiv.org/abs/1910.10683)

It uses a Transformer and a BERT-style masked language modeling loss, but is
fairly different in other respects. For instance, it formulates every problem
as text-to-text, which lets it work on just about any NLP problem.
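
To give a feel for the text-to-text framing, here's a minimal sketch, assuming
the Hugging Face transformers port of T5 and its "t5-small" checkpoint (my
choice for illustration, not something the paper mandates); the task prefixes
are the ones the paper itself uses:

    # Every task is plain text in -> plain text out, selected by a prefix.
    # Assumes: pip install transformers sentencepiece torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    prompts = [
        "translate English to German: The house is wonderful.",
        "summarize: Google Research published a review of its 2019 work, "
        "covering NLP, quantum computing, health, and on-device learning.",
    ]
    for prompt in prompts:
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids
        output_ids = model.generate(input_ids, max_length=40)
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))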

~~~
obmelvin
Very interesting, thank you! This may actually be useful for a project I had
in mind given that it can be used on more problems :)

------
unlinked_dll
I think the ultimate discovery of the next few years to come out of their AI
research will be formal methods for the specification and design of ML
systems/algorithms. There's some interesting work unifying various fields
toward this end, and I think the pieces of the puzzle will fit together very
soon.

$5 on it being some monumental piece of research in a PhD thesis, à la Otto
Brune, who did the same thing almost a century ago for passive filter networks.
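
To make that concrete, here's one hypothetical flavor of a machine-checkable
specification for an ML component, written as a property-based test with the
Hypothesis library (my example; the formal-methods work I mean may look
nothing like this):

    # A "specification" as an executable property: for any non-degenerate
    # input, a feature-scaling step must keep outputs in [0, 1].
    # Assumes: pip install hypothesis
    from hypothesis import given, strategies as st

    def min_max_scale(xs):
        """Toy preprocessing step: rescale features into [0, 1]."""
        lo, hi = min(xs), max(xs)
        if hi == lo:
            return [0.0 for _ in xs]
        return [(x - lo) / (hi - lo) for x in xs]

    @given(st.lists(st.floats(min_value=-1e6, max_value=1e6,
                              allow_nan=False), min_size=1))
    def test_outputs_stay_in_unit_interval(xs):
        assert all(0.0 <= y <= 1.0 for y in min_max_scale(xs))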

------
throwaway29303
> How can we build machine learning systems that can handle millions of tasks,
and that can learn to successfully accomplish new tasks automatically?

Personally, I believe answering this question is going to take a very long
time, at least from a reliability point of view. And it's probably not going
to be achievable through classical computation alone, but rather through a mix
of classical and quantum computation.

But I wouldn't be surprised if, given enough progress in the field, the
"final" version ended up grounded solely in quantum computation.

My reasoning behind this belief: since quantum computation is able to simulate
physical reality, it follows that you could simulate something that resembles
a human brain.

But before any of you grab your pitchforks and light your torches: I'm
perfectly aware that there are still a lot of hard challenges ahead before any
of this materializes, error correction being one of them.

Anyway, the future looks cool at least in this regard.

------
groovybits
If the author of the OP is reading this, the first link in the article:

> The goal of Google Research [...]

points to [http://research.google](http://research.google) instead of
[https://research.google](https://research.google) :)

