

Cracks in Reality: How our Systems Fool Themselves - NotEvenNothing
http://ieet.org/index.php/IEET/more/eubanks20120611
Third installment in the series of articles that ask if intelligence is self-limiting because motivation ultimately undermines survival.
======
sashahart
What is the actual use case for a "self-modifying singular intelligence"?
Define the problem you are trying to solve (as an actual task, not nebulous
stuff about beliefs) and I reckon you will find more success.

There is one nice way around the intractable problems of maintaining a huge
model of the world: don't. Rodney Brooks' famous saying: "the world is its own
best representation." And it isn't necessary or usually even helpful to
translate all incoming data into some propositional model. Again: what is your
use case? If you are just trying to satisfy some essentialist intuition of
what it means to be intelligent and what 'must be' inside minds, then you will
be lucky to ever get anything meaningful done.

If you want to write an agent to do something without ongoing guidance, and
you have not thought out the task from the beginning to get a specific
algorithm, do not start by making a monolithic program that makes lots of
vague high-level decisions like "how much reality" based on abstruse
constructions of data. Start with the data which is always available,
processed minimally, and see how little you can do. Implement walking before
you implement steering and implement steering before you implement path-
finding.

------
DanielBMarkham
_In extreme cases, state-adopted nominal realities may be purely driven by
motivations, and have little or no connection to reality._

This essay was just a little too unfocused for my tastes. As an observer of
many types of organizations of people, the predilection to spin is endemic.
It's not just "in extreme cases."

I scanned the article and kept trying to find some meat, some kind of clear
and forceful statement that the author was supporting. Instead it seemed like
he was saying a lot of good things, but lacked the intestinal fortitude or
insight necessary to actually get anywhere. I was left with a version of "it
may be a property of monolithic systems that their motivations are at odds
with an accurate perception of reality." That's almost useful, but not quite.
There are a lot of interesting places to go _from_ that statement, but we
didn't go there. Hell, we could have started deconstructing Aristotle and
talking about how the tension between the usefulness of categorization and
the leakiness of all abstractions creates holes in what is knowable. Or we could have
moved forward to sociology and discussed the role of the individual in large
groups. Or we could have talked AI. Lots of cool places to take this train of
thought.

That's a shame. It's obvious the author is well-educated and has thought this
through. Sure wish he would have taken us somewhere a bit more interesting.

~~~
NotEvenNothing
This is something I struggle with in writing what amounts to a long argument
in discrete chunks. I was already 1000 words over my limit, and had to find an
artificial stopping place. But partly where this is headed is the thesis that
our systems of government are not very similar to ones that would be designed
from scratch for survival as an individual intelligence apparatus, and that by
comparing the theory to practice we can (possibly) improve them. The flip side
of it is that if we aren't going to be serious about survival, we should create
bureaucracies and economies so that they fail gracefully. Or else we should
adopt an evolutionary, rather than singular, strategy: colonize other planets.

------
marcusrobbins
This is the group of problems we need to solve in order to design effective
government and stable markets.

~~~
anamax
> we need to solve in order to design effective government

Is there actually any demand for "effective govt"? (Yes, lots of people claim
to want it, but do you really think that there's large scale agreement on what
that means?)

~~~
marcusrobbins
Isn't effective government a system which can maximise the following function?

G = SUM(Fi(W)) for i = 1 to WorldPopulation

Where W is the physical configuration of matter in the world. Fi is the
function associated with citizen i that defines his notion of 'goodness'.

Bad government is one which tries to maximise an alternative function:

G = SUM(Fi(W) * Di) for i = 1 to WorldPopulation, where Di is some factor for
each individual and where some Di are much greater than others. I.e., some
individuals have much greater say over the configuration of the world which is
chosen...
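To make the two scoring rules above concrete, here is a minimal sketch in Python. The citizen "goodness" functions, the one-dimensional world, and the weight values are illustrative assumptions of mine, not anything specified in the thread:

```python
# Sketch of the two aggregation rules described above.
# Each citizen i has a goodness function F_i over a world configuration W.

def equal_weight_score(world, goodness_fns):
    """G = SUM(Fi(W)): every citizen's preference counts equally."""
    return sum(f(world) for f in goodness_fns)

def weighted_score(world, goodness_fns, weights):
    """G = SUM(Fi(W) * Di): some citizens' preferences dominate."""
    return sum(f(world) * d for f, d in zip(goodness_fns, weights))

# Toy example: the "world" is a single number, and each of three citizens
# prefers it to sit near a particular value (goodness falls off with distance).
citizens = [lambda w: -abs(w - 0),
            lambda w: -abs(w - 10),
            lambda w: -abs(w - 10)]

# Equal weighting: the chosen world maximises total goodness, so the
# majority preference (10) wins.
best_equal = max(range(11), key=lambda w: equal_weight_score(w, citizens))

# Skewed weighting: citizen 0 gets 100x the say, so the world is dragged
# toward that one citizen's preference (0) despite the majority.
best_skewed = max(range(11), key=lambda w: weighted_score(w, citizens, [100, 1, 1]))
```

Under this toy setup the equal-weight rule picks the majority's preferred world while the skewed rule picks the heavily weighted individual's, which is exactly the distinction marcusrobbins draws between "effective" and "bad" government.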

~~~
anamax
> Isn't effective government a system which can maximise the following function?

Almost certainly not.

For example, many people think that "effective govt" involves some notion of
"justice" and/or "fairness".

One concrete example is Obama's position wrt capital gains taxes. He wants
higher rates even if that results in less revenue. (Higher rates with less
revenue means that there's less capital gain, which means less wealth
produced, aka less total stuff. Since there's less tax revenue, there's less
govt spending.)

A significant number think that "effective govt" propagates certain
values/behaviors and discourages others.

Yes, there is disagreement on what "justice" and "fairness" mean and there's
also disagreement as to the values/behaviors to be encouraged/discouraged.

