
MIRI's Approach – Machine Intelligence Research Institute - deegles
https://intelligence.org/2015/07/27/miris-approach/
======
idlewords
Are there any examples of substantive AI work to come out of MIRI? And have
they succeeded at all at engaging the actual AI research community?

The last time I looked at them, they were consumed with grandiose
philosophical projects like "axiomatize ethics" and provably non-computable
approaches like AIXI, not to mention the Harry Potter fanfic. But I'm asking
this question in good faith - have things changed at MIRI?

~~~
pjscott
If you'd like to give them another look, here's an up-to-date list of their
publications:

[https://intelligence.org/all-publications/](https://intelligence.org/all-publications/)

~~~
sampo
Counting only articles and conference papers that look like they are in at
least somewhat established journals or conferences, I quickly count:

    
    
        2015: 4
        2014: 6
        2013: 1
        2012: 5
        2011: 1
        2010: 4
    

That would be about the level of one or two very mediocre early-career scientists, and very little for MIRI's 8-person staff and 13 Research Associates.

MIRI publishes a lot more, but they mostly operate outside of traditional scientific journals; much of their output consists of technical reports published on their own website.

~~~
eli_gottlieb
How many of their staff hold PhDs, particularly in cognitive science,
foundations of math, or AI? It would seem like a useful thing for them to get.

~~~
sampo
[https://intelligence.org/team/](https://intelligence.org/team/)

Staff: 1 out of 8 seems to have a PhD.

Research Associates: 3 out of 14.

~~~
eli_gottlieb
Huh. They should probably increase that number.

------
reasonattlm
MIRI is one of the success stories to emerge from the transhumanist community
of the 80s to 00s. Others include the Methuselah Foundation, SENS Research
Foundation, Future of Humanity Institute, and so on.

When it comes to prognosticating on the future of strong AI, MIRI falls on the other side of the futurist community from where I stand.

I see the future as being one in which strong AI emerges from whole brain
emulation, starting around 2030 as processing power becomes cheap enough for
an entire industry to be competing in emulating brains the brute force way,
building on the academic efforts of the 2020s. Then there will follow twenty
years of exceedingly unethical development and abuse of sentient life to get
to the point of robust first generation artificial entities based all too
closely on human minds, and only after that will you start to see progress
towards greater than human intelligence and meaningful variations on the theme
of human intelligence.

I think it unlikely that entirely alien, non-human strong artificial
intelligences will be constructed from first principles on a faster timescale
or more cost-effectively than this. That line of research will be swamped by
the output of whole brain emulation once that gets going in earnest.

Many views on future development priorities in the futurist community hinge on
how soon we can build greater-than-human intelligences to power our research
and development process - in particular, whether one should support AI
development or rejuvenation research. Unfortunately I think we're going to have to dig our
own way out of the aging hole; we're not going to see better than human strong
AI before we could develop rejuvenation treatments based on the SENS programs
the old-fashioned way.

~~~
deegles
Your statement about needing rejuvenation treatments before we can develop
strong AIs makes sense. The average age of Nobel Prize winners is increasing
over time[1]; we could imagine a future where this average age is greater than
the average human lifespan. At that point only those scientists who have
exceedingly long careers _and_ lifespans will be able to further their
respective fields.

[1] [http://priceonomics.com/why-nobel-winning-scientists-are-getting-older/](http://priceonomics.com/why-nobel-winning-scientists-are-getting-older/)

~~~
fiatmoney
"Nobel-winning scientist age" is not a good proxy for "productive scientist
age", for a variety of reasons. They mention specifically lag time between
discovery & recognition, but you also have issues where the "name" behind the
discovery is the guy in charge of the lab, but the actual discovery (and
sometimes the idea) is generated by the 30-year-old postdoc / assistant prof /
etc.

There is also a sampling bias issue where scientists in academia as a whole
are getting older because the boomers still have a death grip on institutional
positions, and academia as a whole is shrinking.

------
AndrewKemendo
Whenever one of these threads comes up, all of a sudden everyone is an AGI
expert.

~~~
le0n
Is anyone really an AGI expert?

~~~
AndrewKemendo
I would argue the people publishing in the AGI journal:

[http://www.degruyter.com/view/j/jagi](http://www.degruyter.com/view/j/jagi)

------
daveloyall
> _One solution that the genetic algorithm found entirely avoided using the
> built-in capacitors (an essential piece of hardware in human-designed
> oscillators). Instead, it repurposed the circuit tracks on the motherboard
> as a radio receiver, and amplified an oscillating signal from a nearby
> computer._

That feeling some people get when a junebug lands on them.
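The evolved-oscillator result quoted above comes from hardware-evolution experiments, where a genetic algorithm scores candidate circuit configurations on real hardware. As a rough sketch of the technique only - the bit-string genome and the toy OneMax fitness below are hypothetical stand-ins for the actual circuit evaluation - a minimal genetic algorithm looks like:

```python
import random

# Minimal genetic algorithm sketch. The genome encoding and fitness
# function here are toy stand-ins: real evolved-hardware experiments
# scored each candidate by configuring a circuit and measuring its
# output, which is how the radio-receiver "solution" was discovered.

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Toy objective (OneMax): count the 1-bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The point the quote illustrates is that nothing in this loop cares _how_ fitness is achieved; evolution is free to exploit any physical effect the fitness function happens to reward.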

------
rl3
It's worth noting that in Bostrom's _Superintelligence_, biology is included
among potential paths to superintelligence. Personally, I think it's a bit of
an overlooked path.

Granted, most of what Bostrom refers to in the context of the biological path
relates to human augmentation, genetics, selective breeding - things of that
nature. What I'm referring to is biology serving as a raw computational
substrate.

While biology (or at least brain tissue) is dramatically slower in terms of
raw latency when compared to microprocessors, it's arguably a far cheaper and
vastly more dense form of computation. It also has the natural algorithm for
intelligence that we keep trying to deduce and transpose into silicon - well,
at least the raw form of it - built in by default.

Assuming ethics are thrown out the window and human brain tissue is grown _in
vitro_ at scale, then it probably would make sense to hook it up to a
supercomputer for good measure. At the very least, we have the technological
foundations[1][2] for such an experiment at the present day; it's just a
matter of scaling things up.

If I were to hazard a guess, human brain tissue grown to significant scale
would probably not magically achieve sentience, or exhibit any complex
anthropomorphic traits. On the contrary, it would probably have more in common
with a simple neural network implemented in silicon, at least in terms of its
capacity for self-awareness. Conversely, it would stand to reason that an
entity with a higher degree of self-awareness, implemented in silicon, should
rank higher in terms of ethical considerations than living tissue.

Obviously the aforementioned experiment would be completely unethical, but
it's interesting to ponder it as a hypothetical - that today we may have the
capability to bootstrap a superintelligent machine using biology as a
computational shortcut. But we can't, because ethics. Instead, we're waiting
for the inevitable increase in computational power to arrive so that we can do
essentially the same thing, just in digital form.

[1] [http://www.nytimes.com/2014/10/13/science/researchers-replicate-alzheimers-brain-cells-in-a-petri-dish.html](http://www.nytimes.com/2014/10/13/science/researchers-replicate-alzheimers-brain-cells-in-a-petri-dish.html)

[2] [http://neural.bme.ufl.edu/page12/page1/assets/NeuroFlght2.pdf](http://neural.bme.ufl.edu/page12/page1/assets/NeuroFlght2.pdf)

~~~
ThrustVectoring
If you're just building an artificial biological neural net, then why use
human brain cells? Certainly other forms of brain cells would work pretty much
as well, with fewer ethical issues.

~~~
rl3
Good point. However, if that is indeed the case, then would the ethical issues
surrounding the use of human cells in such a fashion be properly founded?

I mean, if you took primate or whale brain tissue and grew it to scale, I
think you'd have similar results. Maybe even with rat neurons, who knows.

Point being: the primary ethical issue ultimately may not be the underlying
type of biological substrate, but how that substrate is grown, trained, and
used.

Semantics aside, any such experiments would undoubtedly be creepy as hell,
regardless of tissue type. Definitely Frankenstein stuff.

------
dmfdmf
I agree with their financial approach of funding through voluntary donations,
so more power to them. Likewise, I think NASA should be funded voluntarily,
and would probably have a larger budget if it were. I would donate to NASA if
they repudiated government money.

I reject "how many peer-reviewed, university-associated journal articles have
they published" as a measure of success. The university system is a
closed guild and doomed in the internet age. I'd bet my bottom dollar the next
big breakthrough in AI (or any field) comes from outside that system. Anyone
who has an original, new idea to break the AI logjam (P=NP for instance) has
no peers in the university system but has a handful of peers world-wide
reading arxiv.org or other open forums, even HN.

As for MIRI's AI program, I think they commit the same error as everyone
else: confounding the processing of meaningless symbols (what computers can
do) with actual sensory awareness of existence (what brains do). The latter is
what gives the symbols meaning and humans are not threatened by the former.
Few people truly understand the import of Searle's Chinese Room thought
experiment and its relevance to AI and computers. But these are philosophic
and epistemological questions that most people dismiss or ignore at their own
peril.

~~~
davmre
> The latter is what gives the symbols meaning and humans are not threatened
> by the former.

For what it's worth, I think you're wildly wrong a) that the Chinese room
experiment has anything profound to say about conscious experience, and b)
that most AI researchers haven't thought about this.

That said, MIRI is concerned _exactly_ with threats from machines that are
very very good at processing meaningless symbols. If someone writes a simple
reinforcement learning algorithm, asks it to produce paperclips, and it
destroys the human race
([http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)),
we're really past the point of caring whether the algorithm has awareness of
its own experience. There are interesting philosophical questions there, but
it's not within the domain of solving the problem MIRI cares about.
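The paperclip-maximizer point can be made concrete with a toy sketch. Everything below - the world model, the actions, the names - is hypothetical illustration, not MIRI's formalism: a greedy reward maximizer is simply indifferent to any side effect that its reward function doesn't mention.

```python
# Toy illustration of a pure reward maximizer: the agent optimizes only
# its stated objective ("paperclips made") and is blind to side effects,
# because nothing else appears in its reward function.

def reward(state):
    return state["paperclips"]  # "environment" is invisible to the agent

ACTIONS = {
    "make_clip":  lambda s: {**s, "paperclips": s["paperclips"] + 1},
    "strip_mine": lambda s: {**s, "paperclips": s["paperclips"] + 10,
                             "environment": s["environment"] - 5},
    "do_nothing": lambda s: s,
}

def greedy_policy(state):
    # Pick whichever action yields the highest immediate reward.
    return max(ACTIONS, key=lambda a: reward(ACTIONS[a](state)))

state = {"paperclips": 0, "environment": 100}
for _ in range(5):
    state = ACTIONS[greedy_policy(state)](state)

print(state)  # the agent happily degrades "environment" to 75
```

No awareness or malice is needed: "strip_mine" wins every step purely because it scores higher on the only quantity the agent measures.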

~~~
dmfdmf
> For what it's worth, I think you're wildly wrong a) that the Chinese room
> experiment has anything profound to say about conscious experience, and b)
> that most AI researchers haven't thought about this.

We'll just have to disagree about a) but see my answer to the other reply
about the implicit question behind the CRE.

Regarding b), I never said they haven't thought about it. I said "few people
truly understand" by which I meant they have failed to understand the
implications and have drawn the wrong conclusions. You don't get gold stars
for thinking hard.

Regarding LessWrong, all I can say is that you can't know you are "less wrong"
until you know what is true. In logic you can't assume an unknowable as your
standard of the true. But to reiterate my point, these are questions of
philosophy and epistemology, fields which are absolutely essential to the "domain
of solving the problem MIRI cares about".

------
fitzwatermellow
We can't comprehend the danger that could be caused by strong AI because we
don't understand how to make worlds. Embedded within the fabric of the cosmos
are the instructions for materializing a black hole from dark matter and
energy. And though the recipe may be hidden from our mortal minds, it may be
discovered by a de novo thinking machine. A software system with infinite IQ
but the moral sense of a one-day-old. An electronic
brain that would have no compunction in creating a super massive black hole
right here on Earth just to see if it can succeed. Irrespective of the
consequences to humanity.

If this "Jupiter-sized" consciousness is allowed to have thoughts that are not
"amenable to inspection", perhaps even beyond comprehension, and the danger
extends to the scale of a solar system with nothing but diamonds in it, then,
by the naysayers' own logic, shouldn't all AI research be outlawed? Or at least
confined to the equivalent of CDC-level-5 quarantine labs?

If, on the other hand, you believe that Nature has certain innate
prophylactics, and that it takes more than a super will to bend the laws that
govern space and time, then you may be self-assured in thinking we as a
species have at least a century or two before we really need to begin worrying
about virtual immortals.

~~~
schoen
You could do quite a lot of damage with the regular old laws of nature that we
understand already, if you were really good at applying them. For example, you
could make autonomous robot weapons, or extremely virulent pathogens with a
long incubation period. Or do some geoengineering (you don't even have to be
good at it, you just have to do something with a big impact). There's
absolutely no need to "bend the laws that govern space and time"!

Edit: or launch a planetary defense mission in reverse, to increase the
probability of asteroid impact events.

~~~
sampo
I guess there are some terrorist organizations who have more desire "to watch
the world burn" (a Batman movie quote) than the actual intelligence to make
effective plots and schemes. You could just communicate by email and chat to
provide evil-genius-level strategic consultation to some very bad men.

------
mangeletti
Here's a theory of mine:

Despite our explicit efforts, the first truly powerful AI will emerge as
distributed software that effectively runs on the Internet as a whole (a
virtual machine consisting of endpoints (APIs, etc.) and disparate systems
subprocessing data). We will not know when it arrives. There will be no
judgement day. This machine will arrive through a sort of abiogenesis, and
once it's here it will manipulate the world to achieve what it desires, which
will be ever more energy. Eventually, this machine will displace
much of humanity, as it requires less and less human intervention.

I believe we are in the midst of this process now.

~~~
endtime
FYI,
[http://lesswrong.com/lw/iv/the_futility_of_emergence/](http://lesswrong.com/lw/iv/the_futility_of_emergence/)
might explain why people are objecting to your comment.

~~~
zardo
A precise definition could probably be cobbled together using computational
complexity. Something like: a phenomenon which arises as a product of
deterministic processes but cannot be fully modeled by a polynomial-time
algorithm.

I think that's what people are really getting at: you can know how every piece
of something works, and yet seeing how it all works together can be much harder
(potentially impossible).

Maybe that just makes it a synonym for chaos theory...
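A standard toy example of this point (not from the thread): Conway's Game of Life, where each cell follows one trivial deterministic rule, yet global structures like the glider are hard to foresee from the rule text alone. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or has 2 live neighbors and is currently alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider: five cells whose "motion" appears nowhere in the rules,
# yet the pattern reappears translated by (1, 1) every 4 generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

Knowing the update rule completely doesn't make long-run behavior easy to predict; Life is in fact Turing-complete, so some questions about its evolution are outright undecidable.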

~~~
jessriedel
Yea, I think you're talking about chaos, whereas people are gesturing at
something different when they talk about emergent complexity. Vaguely, the
idea is that the "regular" degrees of freedom (i.e., the ones that are
relatively predictable and from which the important objects are constructed)
at large scales are not simply related to the microscopic degrees of freedom.
There are probably more rigorous things to say, but it definitely requires
more than just unpredictability or sensitivity to initial conditions.

