
IA or AI? - rudenoise
https://vanemden.wordpress.com/2015/10/09/ia-or-ai-2/
======
dsr_
The IA advance which is most obvious to me (yet somehow not yet a reality) is
the nomenclator.

In Rome, a nomenclator was a slave who remembered people's names for you, and
as they approached would whisper to you that this is Gaius Tullius Castor, his
wife is Flaminia, his eldest boy is Marcus, and he owns beanfields.

A Google Glass camera on your eyeglasses and a speaker in your ear, hooked up
to Facebook's face recognition and social web, can tell you a quick precis of
who you see across the room before they get to you. Add a touch sensor in your
pocket or on a ring for unobtrusive control, and a mic to pick up your
annotations or commands, and you've got a product that should be a major hit
by the second generation.
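A sketch of the core matching loop might look like this (every function and field name here is a hypothetical placeholder; neither Glass nor Facebook exposes such an API):

```python
# Sketch of a "nomenclator" core: given a face encoding from the camera,
# find the best-matching contact and compose the whispered precis.
# Every function and field below is a hypothetical placeholder.

def similarity(a, b):
    # Placeholder metric: a real system would compare face embeddings.
    return 1.0 if a == b else 0.0

def match_face(encoding, contacts, threshold=0.6):
    """Return the best-matching contact above the threshold, else None."""
    best, best_score = None, threshold
    for person in contacts:
        score = similarity(encoding, person["encoding"])
        if score > best_score:
            best, best_score = person, score
    return best

def precis(person):
    """The short summary whispered in your ear as someone approaches."""
    return f"{person['name']}; spouse {person['spouse']}; {person['note']}"
```

So spotting Castor across the room would whisper something like `Gaius Tullius Castor; spouse Flaminia; owns beanfields`.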

~~~
lnanek2
Google famously banned face recognition on Glass after I and some other
developers made demo apps and APIs for using it:
[https://www.youtube.com/watch?v=E1aeMJY1AO0](https://www.youtube.com/watch?v=E1aeMJY1AO0)

Even now that Glass has been canceled for all but commercial use, the
guidelines still forbid face recognition:
[https://developers.google.com/glass/policies?hl=en](https://developers.google.com/glass/policies?hl=en)

> "Don't use the camera or microphone to cross-reference and immediately
> present personal information identifying anyone other than the user,
> including use cases such as facial recognition and voice print. Glassware
> that do this will not be approved at this time."

Amusingly, my nursing-notes demo was me trying to be politically correct.
People were more interested in things like cross-referencing most-wanted
lists and sex offender lists.

~~~
protomyth
That ban made me uninterested in Glass. Face recognition would have been such
a help to elderly folks. Add object recognition and GPS and you could have had
an assistant to help the elderly through their day.

------
otoburb
_" [...] good notation is worth a whopping increment in IQ points. Except that
the really good ones allow one to have thoughts that are impossible without."_

I posit that this post tangentially explains the nagging feeling that many
parents[1] experience when their children struggle with mathematics. The
benefits of basic language literacy are clear, but follow-on analogies like
the one above imply that failing to attain mathematical fluency excludes the
next generation from any of the implied augmented-intelligence benefits.

The extrapolated message would be that mathematically disinclined adults will
then be completely unable to comprehend certain important thoughts in [insert
arcane, highly-specialized technical field].

Regarding the question posed by the title and last sentence in the blog post,
I'm not sure why the thrust is framed as an XOR, and not as an AND. It's not
like we can't focus on both IA and AI at the same time.

[1] Anecdata warning: I am a parent. I have this nagging feeling.

~~~
rntz
> an inability to attain mathematical fluency excludes the next generation
> from any implied augmented intelligence benefits.

Well, only in some ways. I don't have to understand how a refrigerator works
in order to use it. Improvements in quality of life produced by use of
augmented intelligence ought to be accessible even to those without it.

~~~
smegger001
The problem only arises when no one bothers to learn how something works.
Look at all those big-iron systems out there that few people know how to
program; there's a reason COBOL and Fortran programmers still make good money.

Oh, and refrigeration is simple: it's just an application of the ideal gas law
(PV = nRT) and a pump. Refrigerant is compressed, then cooled through a
heat sink, then pumped into the refrigerator and allowed to expand, where it
absorbs thermal energy before being pumped out again, and the cycle repeats.
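Strictly, household refrigerators rely on phase change rather than ideal-gas behavior, but the ideal-gas picture already shows why the expansion step cools. A back-of-the-envelope sketch, assuming reversible adiabatic expansion of a diatomic gas (gamma = 1.4):

```python
# Why letting a compressed gas expand cools it: for reversible adiabatic
# expansion of an ideal gas, T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma).

def adiabatic_expansion_temp(t1_kelvin, p1, p2, gamma=1.4):
    """Temperature after expanding from pressure p1 to p2 (same units)."""
    return t1_kelvin * (p2 / p1) ** ((gamma - 1) / gamma)

# Room-temperature gas at 8 atm expanding to 1 atm:
t2 = adiabatic_expansion_temp(293.0, 8.0, 1.0)  # roughly 160 K
```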

~~~
TrevorJ
I'm not sure it's a problem. It's encapsulation. Systems _should_ be designed
so that there is a difference between the knowledge required to operate the
device and the knowledge required to design/service it.

------
Practicality
It seems obvious to me that IA is where the tremendous benefits to society
occur. Imagine a world where everyone has the equivalent of a genius IQ today.
A lot of problems suddenly disappear.

AI, on the other hand, while very useful, doesn't change people. And frankly,
most problems we have are because people lack understanding. I don't know
about you, but I don't actually want to replace mankind with something else; I
just want to make us all better.

Of course, what "better" is--is highly debatable, so that definitely gives
pause as well.

~~~
mziel
> A lot of problems suddenly disappear.

Not to be negative but citation needed.

Also (I guess we'll cross "isolation" off the list):
[https://en.wikipedia.org/wiki/Intellectual_giftedness#Social...](https://en.wikipedia.org/wiki/Intellectual_giftedness#Social_and_emotional_issues)

~~~
Practicality
An interesting point. I think your citation adds to the point though.

In observing my "normal" peers, honestly, they do a lot of very strange things
just to be considered normal.

I mean, it's pretty expensive just to keep up with the current trends in
sunglasses size or sock length, just to be seen as normal.

Not to mention that you have to hold your hands a certain way and talk
incoherently.

There is a lot of "normalizing" behavior that becomes unnecessary when
everyone has the capacity to see how inane and impractical such behavior
really is.

~~~
ycosynot
A lot of smart men have been passionate about the proportion of columns, or
the proportion of numbers, or even the aesthetics of curly braces. So why is
it inane to care about the proportion of clothing, or the angle of the hand? I
think people play to their strengths. Also, the trend has to change as the
world changes, because aesthetics is about the whole. So the fact that it's
ever-changing doesn't make it arbitrary. The more one can afford not to
care about it, the more impractical it is, but I wouldn't say it is
impractical to society. It is architecture for the person.

------
ATLobotomy
EWD387 [0] (which doesn't have the NN0 or NN1 pseudonyms either) seems to be
pretty clear about what the "anti-intellectualism" comment was about.

>The undisguised appeal to anti-intellectualism and anti-individualism was
frightening. He was talking about his "augmented knowledge workshop" and I was
constantly reminded of Manny Lehman's vigorous complaint about the American
educational system that is extremely "knowledge oriented", failing to do
justice to the fact that one of the main objects of education is the insight
that makes quite a lot of knowledge superfluous.

I wish the author had gone into more detail on why now may be different from
Kay/Engelbart's time.

[0]
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD387.html)

~~~
rasz_pl
There is this "Tim van Gelder on Douglas Engelbart, Intelligence Amplification
and Argument Mapping"

[https://www.youtube.com/watch?v=P77FvUy-NGA](https://www.youtube.com/watch?v=P77FvUy-NGA)

------
akkartik
The EWD by Dijkstra now actually mentions Engelbart by name:
[https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD387.PDF](https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD387.PDF).

~~~
tlb
Props to Dijkstra for choosing the right topics to have strong opinions about,
which is the hardest part in making an intellectual contribution -- far harder
than being right or wrong about a given topic. The writeup is like code that's
correct but for a sign error.

------
yang140
John Markoff's new book "Machines of Loving Grace" is a great one about this
AI vs IA topic.
[http://www.amazon.com/Machines-Loving-Grace-Common-Between/dp/0062266683](http://www.amazon.com/Machines-Loving-Grace-Common-Between/dp/0062266683)

------
delish
> The point I am making here is that Engelbart and Kay were unrealistic in
> expecting that their technologies would give quick results in the way of
> Tools for Thought. They had no appreciation for the vast and rich culture
> that produced the tools for thought enabled by the traditional technologies
> of writing and printing. They did not realize that a similar culture needs
> to arise around a new technology with augmentation potential.

I am guilty of deifying Engelbart and Kay, and castigating "our society" for
failing them. After my honeymoon period with the "tool of thought" people,
I've calmed down.

Here's my radical belief: portability is for people who can't write their own
programs. (copped from a Torvalds witticism)

Consider writing and literacy: If you really grow up in a literate culture,
you can start with a blank page and end with a bespoke document that suits
your needs. If you don't grow up in that, you have to modify others'
documents. This limits you. Hallmark cards are for people who can't write
poetically (no judgment intended).

So too for programming. Today we rely on hundreds of millions of lines of
other people's code that we can't even realistically modify. But I think the
future resembles Forth: in less than a hundred lines of code, you write
something that suits your needs[0]. You can't do this yet because computers suck.

I'm talking loosely and at a high-level.

[0] I think Forth is a powerful vision for the future: no operating system, no
types, no compatibility, no syntax. An executable English language.
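As a toy illustration of that ethos, here is a Forth-style stack interpreter in a few lines of Python (just a sketch; a real Forth would also let you define new words):

```python
# A toy Forth-style interpreter: whitespace-separated words operating
# on one shared stack; anything that isn't a known word is a number.

def forth(source, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "-":    lambda s: s.append(-s.pop() + s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "drop": lambda s: s.pop(),
    }
    for token in source.split():
        if token in words:
            words[token](stack)   # execute a known word on the stack
        else:
            stack.append(int(token))  # everything else is a literal
    return stack
```

For example, `forth("2 3 + 4 *")` leaves `[20]` on the stack.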

------
bradneuberg
Great piece. I got a chance to work with Douglas Engelbart several years ago
and wrote up some responses in reply to Maarten's IA or AI post:
[http://codinginparadise.org/ebooks/html/blog/ia_vs__ai.html](http://codinginparadise.org/ebooks/html/blog/ia_vs__ai.html)

------
bytesandbots
That reinforces my belief that the influx of new programming languages will
continue for some years yet, and that it will only get better.

------
musha68k
Nothing is more high-tech than culture; it's _everything_, even if we tend to
work over the seemingly faceless Internet these days. It's people all the way
down.

------
maxander
The idea of "notation as intelligence augmentation" is the reason (or one of
them) that Haskell programmers are so enthusiastic about things like functors
and monads; type theory is its own branch of mathematics that could be
appended to the list alongside calculus and vector analysis [1], and might
bring the same kind of new levels of thought and abstraction.

[1] Disclaimer: I am not a mathematician.
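A tiny Python analogue of Haskell's Maybe monad (just a sketch of the idea, with `None` standing in for `Nothing`) shows how the notation lets failure propagate without explicit checks at every step:

```python
# Sketch of Haskell's Maybe in Python: None plays the role of Nothing,
# and bind short-circuits, so failure propagates with no explicit ifs
# at the call site.

def bind(value, fn):
    """Apply fn unless the computation has already failed (None)."""
    return None if value is None else fn(value)

def safe_div(x, y):
    return None if y == 0 else x / y

def safe_sqrt(x):
    return None if x < 0 else x ** 0.5

# (8 / 2) / 2, then sqrt -- any failing step makes the whole chain None.
result = bind(bind(safe_div(8, 2), lambda v: safe_div(v, 2)), safe_sqrt)
```

In Haskell the `>>=` operator and do-notation make this chaining invisible, which is exactly the "notation as augmentation" point.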

------
MaysonL
When considering intelligence amplification, the book that comes to mind is
_Psychohistorical Crisis_ , by Donald Kingsbury. Computer-to-brain interfaces
may go a long way in the next few thousand years.

[https://en.wikipedia.org/wiki/Psychohistorical_Crisis](https://en.wikipedia.org/wiki/Psychohistorical_Crisis)

~~~
arethuza
Also Vernor Vinge's _Rainbows End_, which has both IA and AI in a fairly
plausible near future scenario:

[https://en.wikipedia.org/wiki/Rainbows_End](https://en.wikipedia.org/wiki/Rainbows_End)

------
hyperpallium
oblig. [https://xkcd.com/903/](https://xkcd.com/903/)

We already have amplified memory (see also: books, mnemonics), and Google
amplifies _retrieval_.

But what is "intelligence", that we might amplify it? For me, limited short-
term working memory is an obstacle (EWD's "limited size of skull"). As
complexity is added, earlier parts drop out.

There is the "technology" of hierarchical decomposition and the psychological
instinct of chunking, but every problem has some irreducible complexity... if
this is greater than my working memory, I cannot grasp it.

Artificially enhanced working memory may help here, but I suspect the limit is
due not so much to short-term memory itself as to its having associations
throughout all of long-term memory. That is, it's less a cache limit than a
bandwidth limit, interconnecting with the entire mind. We aren't Von Neumann
architectures.

PS: there's an argument that we might not be able to grasp intelligence
itself, if the irreducible complexity of intelligence and its components is
greater than any person's working memory - even if we formalize a correct
model, we mightn't grasp it ourselves. Thus, IA may be essential for AI. Or,
AI is essential for AI.

