
Unthinking Machines: Artificial intelligence needs a reboot, say experts - pldpld
http://www.technologyreview.com/computing/37525/?p1=A2&a=f
======
tansey
Overall, I'll say that the general sentiment of this panel is half true.

On the one hand, machine learning research has grown to such a large field
that the signal to noise ratio has dropped dramatically. Lots of people try to
squeak out another ICML or AAAI paper by making an incremental improvement
that gets 94% accuracy instead of 92% on some set of benchmark tasks. This
phenomenon is true across almost all academic disciplines, however, and is
more an indictment of the "publish or perish" environment than anything else.

On the other hand, some of the things these (famous) researchers are noting
are complete FUD:

>The answer is that there was a lot of progress in the 1960s and 1970s. Then
something went wrong.

Yep, things got hard. People early on thought that the difficulty of picking
fruit would increase linearly over time. If they could pick all this low-
hanging fruit in such a short span of time then surely in X years we'd be at
point Y! Unfortunately, it turned out that the landscape was much steeper and
fraught with local optima.

As a machine learning researcher, I do try to focus on high-level problems
that haven't been tackled before. My startup[1] is an example of that, and the
extensions to it that I'm researching are as well. But does it really count as
revolutionary in an academic sense? Probably not.

The fact is that at this point, everyone has thought of something closely
related to whatever you want to work on. Even if you've found an institution
that enables you to explore freely, big impacts are really hard to come by
these days. And when they do, older academics like the ones on this panel
don't want to give credit because it's just another incremental improvement in
their eyes.

I suppose it's just frustrating to hear these guys sit at a panel and complain
that AI researchers need to get to work-- they're AI researchers! Why aren't
they doing anything? They have tenure and all the free time in the world. But
they don't want to do that. They want to sit back and judge people while
pointing to contributions they made forty years ago as proof that they can
judge.

Being critical of others' work while not producing anything of value is just
mean-spirited. Put up or shut up.

[1] <http://effectcheck.com>

~~~
Groxx
> _Being critical of others' work while not producing anything of value is
> just mean-spirited. Put up or shut up._

That's awfully close to "it takes one to know one", which is a fallacy at
_best_. I can tell good tea from bad tea, but I've never grown it. Closer to
the programming realm, any theoretical computer scientist will lament the huge
delay between theory and practice - does this mean they don't have a valid
complaint, simply because they aren't fixing _everything_?

Specific to the piece:

> _Chomsky derided researchers in machine learning who use purely statistical
> methods to produce behavior that mimics something in the world, but who
> don't try to understand the meaning of that behavior. Chomsky compared such
> researchers to scientists who might study the dance made by a bee returning
> to the hive, and who could produce a statistically based simulation of such
> a dance without attempting to understand why the bee behaved that way._

A valid complaint from a researcher, I think, and Chomsky has hardly "not
produced anything of value". The first round of AI work in the 60s and 70s was
a lot of theoretical work getting, as you mentioned, the low-hanging fruit in
a new field. And then it turned out to be harder than the initial extreme
success implied, and took more computational power. In more recent years,
there have been business reasons to use relatively simple AI models to do
fairly basic tasks - predicting your tastes in movies, for example. I think
they're complaining about _that_ focus, taking a known technique and just
tweaking it until it works without doing any real foundational work, and
they're not claiming that everybody in the field today is lazy.

It's a complaint you see in every field that gains attention. The original
researchers see a decreased focus on research, and little progress for how
many people are in the field, and lament for the good old days when people
_thought_ instead of _made money_. A rose-tinted view, to be sure, but the
viewers aren't wholly value-less for doing so.

~~~
tansey
>I can tell good tea from bad tea, but I've never grown it.

That's a straw man argument. If you used to grow tea, got famous for some
awesome blend, started writing books about tea, stopped growing it, then
joined a panel with a bunch of other former tea growers and said people need
to start growing good tea again... then that'd be closer. Even better would be
if in the time since you last grew tea, the plant had become endangered and
complicated processes had to be undertaken by current manufacturers to extract
sufficient quantities for production.

You want better tea? You're in luck! You happen to have been an expert tea-
grower in the past-- welcome back! Get to work.

>Chomsky has hardly "not produced anything of value"

Quite the opposite. In fact, all of these guys are brilliant researchers who
have made huge impacts on their fields. My point was that they are not
actively contributing anything, and haven't been for the last 30+ years. When
was the last time Chomsky did any serious investigation that wasn't about how
evil governments are?

If Geoffrey Hinton, David Fogel, Jonathan Schaeffer, or any of the other
"second-wave" pioneers[1] who have remained active in the last 20 years want
to form a panel and complain about people not focusing on high-level research
topics, then sure. Because I know when they do so, they'll point to their own
research being published in the last few years as an example.

[1] The guys who got us out of the AI winter.

~~~
woodson
> When was the last time Chomsky did any serious investigation that wasn't
> about how evil governments are?

Well, he is still working on generative syntactic theory in linguistics. In
his 2005 paper "On phases" he (again...) overhauled his minimalist program.
So, yes, he is actively working (or at least has been until recently), but
perhaps not in a field you are interested in.

------
onan_barbarian
Patrick Winston, director of the AI Lab, rounded up the usual suspects in this
article: early attempts to make money off AI, not getting scads of defense
megabucks, and 'balkanization' into well-defined subspecialties such as neural
networks or genetic algorithms.

He didn't hit on the fundamental problem (to quote (allegedly) Brian Reid from
his AI qualifier): "AI is bogus".

If only we could resume pouring defense dollars into the money pit of Strong
AI; each strong AI researcher could be given a metric butt-ton of money for
vaguely defined projects like those pushed by Winston. From the article:

"Winston said he believes researchers should instead focus on those things
that make humans distinct from other primates, or even what made them distinct
from Neanderthals. Once researchers think they have identified the things that
make humans unique, he said, they should develop computational models of these
properties, implementing them in real systems so they can discover the gaps in
their models, and refine them as needed. Winston speculated that the magic
ingredient that makes humans unique is our ability to create and understand
stories using the faculties that support language: "Once you have stories, you
have the kind of creativity that makes the species different to any other."

With clear-cut and sensible goals like this, success cannot be far away now,
can it?

~~~
jimbokun
"Winston said he believes researchers should instead focus on those things
that make humans distinct from other primates, or even what made them distinct
from Neanderthals."

Seems like an arbitrary focus. There are still a lot of things that non-human
primates and other animals can do that computers cannot. Why not focus on
those things? Many aspects of human intelligence are obviously shared with
other animals. Heck, there are probably aspects of the intelligence of some
animals that exceed human intelligence.

Also, why does he think computers can understand stories if they are incapable
of, say, spatial reasoning, computer vision, moving around, manipulating their
environment, non-verbal communication, using tools, planning, etc. etc. etc.
etc.

Even if understanding and telling stories is the ultimate goal, it doesn't
necessarily mean that's the best immediate goal, given where we are now.

------
fleitz
Pure AI doesn't need a reboot; researchers just need to start solving
practical problems. If you look at what Google does, it's essentially AI. The
problem with AI as such is that it over-promises and under-delivers, while
there is tremendous benefit possible to society from the research that has
already been conducted. The essential problem is that there are few vertically
integrated companies that can turn AI R&D into commercially successful
products, which can further fuel more AI R&D.

AI needs to move out of subsidized R&D and into productization, much as Bell
Labs did. I actually think this is a much bigger problem that extends
to most sciences. There is a lot of scientific research out there that is
being poorly monetized.

~~~
Maro
According to the panelists, what Google does (statistical inference) is not
pure AI.

------
jallmann
"AI" nowadays is a mess. I could write a book on why I feel this way, but a
lot of it has to do with the prevalence of narrowly defined, domain-specific
algorithms that need to be heavily tuned to fit your usage parameters. Even
then, you can't always be sure they'll work well.

AI is not an easy problem, otherwise we'd have made more progress by now. And
unfortunately, barring a major breakthrough, there won't ever be a "one size
fits all" approach to AI (or at least a less fractured algorithm landscape).

It's all pretty disillusioning, especially if you started out as a bushy-
tailed CS undergrad with visions of a grand unified theory of artificial
intelligence.

------
CurtHagenlocher
So, a bunch of symbolic logic vets are bitter that statistical techniques are
producing better results than they were ever able to?

~~~
_delirium
I don't see them as opposed to statistical techniques, at least in this
interview, just to a focus on methods as ends in themselves, rather than as
tools to be used.

Plenty of statistical-AI people themselves hold similar views on that subject,
and almost every machine-learning or AI conference has a panel or keynote that
expresses dismay at what the field looks like (balkanized, too focused on
methods rather than problems, too focused on incremental algorithm tweaks).

For example, Leslie Pack Kaelbling's AAAI-2010 keynote had the thesis that we
need to stop focusing on incremental algorithm improvements in tiny sub-areas
of AI, tested on implausibly pure toy benchmark problems with a focus on
optimality proofs, and start focusing more on putting things together into
integrated systems that handle real-world problems, which will probably
require more hybrid methods (like statistical-AI + logical-AI). Though she
expressed it a bit more diplomatically, with the tone that we're not
fulfilling the potential of the field with the current approach, and has
actually done some concrete work following her own proposal. (Slides:
<http://people.csail.mit.edu/lpk/AAAI10LPK.pdf>)

~~~
jimbokun
One thing I find disconcerting is the rise of more and more complex models
that "work" in the sense of predicting what will happen, but are too complex
to give humans insight into what is happening. Does that mean that the
universe is just inherently too complicated for humans to understand, or do we
need models that better lend themselves to explaining the phenomena they
predict?

~~~
_delirium
That's been one disconnect between machine learning and some other applied
areas as well. For example, vanilla decision trees aren't really seen as a
state-of-the-art ML technique anymore, but they still get a ton of use in
medicine, environmental science, and the social sciences, because they're seen
as more interpretable than black-box predictors. There is a bit of
work on interpretable ML aiming to bridge that gap, though.
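
To illustrate what "interpretable" buys those fields (a minimal sketch using
scikit-learn; the dataset and depth are just illustrative, not anything from
the thread): a shallow tree can be dumped as rules a domain expert can read,
which a black-box model doesn't give you.

    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(data.data, data.target)

    # Prints nested if/else threshold rules that a clinician can inspect and
    # argue with, unlike the opaque weights of an ensemble or neural net.
    print(export_text(tree, feature_names=list(data.feature_names)))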

------
brendano
Everyone on the panel is quite senior -- the comments have the flavor of "darn
young researchers these days." It might be interesting to hear from AI
researchers under 40 about this.

Second, on the question of statistics and language: there's an excellent
Fernando Pereira essay which addresses, among other things, Chomsky's old
opposition to statistical theories:
<http://www.cis.upenn.edu/~pereira/papers/rsoc.pdf>

~~~
_delirium
That's an excellent paper; thanks for the pointer. I think it correctly
notices that the two sides have largely been talking past each other lately,
because the old 50s/60s oppositions aren't quite there anymore. In some ways,
Chomsky's critique is obsolete, but that's in part precisely because
Chomskyian ideas have now been taken up in the statistical community, in ideas
like grammar induction. ML is now willing to study grammar models higher up
the Chomsky hierarchy than the simple Markov-chain models they investigated in
the 50s, which does owe something to Chomsky's studies in formal grammar (like
the existence of the Chomsky hierarchy), even if Chomsky himself has been slow
to notice that they have indeed taken up some of his ideas.

------
davidhollander
> _Winston said he believes researchers should instead focus on those things
> that make humans distinct from other primates, or even what made them
> distinct from Neanderthals._

According to cognitive science, this is the capacity for analogical reasoning.
Compared to the other self-aware, social, tool-using animals exhibiting
emotion, such as dolphins, elephants, and the great apes, what sets humans
apart is our massive capacity for analogical reasoning, which leaves dolphins
a distant second.

In other words, humans can think not only in terms of relations, but in terms
of how relations relate to one another, far more easily than any other animal.

~~~
lars
And for automatic analogical reasoning, we've had the Structure Mapping Engine
[1] since the 80's.

I don't think focusing on what makes humans unique is necessarily the way to
go; a better focus would be on what makes animals in general unique, compared
to our computers. Being conscious and being able to move around in the
environment are two things that your average cat is still way better at than
any computer.

[1] <http://en.wikipedia.org/wiki/Structure_mapping_engine>

~~~
jimbokun
My cat also has a much better mental model of me and how I will respond to her
stimuli than any computer does.

------
Detrus
There was a similar theme two years ago
<http://www.popsci.com/technology/article/2009-12/scientists-collaborate-rebuild-artificial-intelligence-ground>

I think the main problem is that computers are still too slow, so it's
difficult for individual researchers to experiment. I saw a paper a year ago
about deep belief networks on GPUs; it seems like the field is not even taking
advantage of current hardware. You need several modern GPUs to run the
equivalent of a bee brain in reasonable time.

~~~
lars
> I think the main problem is computers are still too slow

I don't think this is right. Depending on your neuron model, you'd probably be
able to simulate as many neurons as are in the human brain if you threw it
against, say, Google's server farm. The brain is modular, so you should be
able to parallelize it relatively easily. The reason no one has done that yet
is that we don't yet know enough about how to connect those neurons the way
they are connected in the brain. (The Blue Brain project aims to have this
figured out by 2019, though.)

~~~
Detrus
Oh yeah, Google's server farm has just about the guesstimated computational
power of a human brain, 20 petaflops. Supercomputers can simulate a rat or bee
brain with a few teraflops.

I was talking about regular personal computers that individual researchers can
use to experiment. Supercomputer time is expensive and there are lots of
annoying technical problems. They also use a lot of poorly optimized software.

I figure another 5 years, a few doublings in GPU speed, and new OpenCL/CUDA-
based neural net software, and we'll fit a bee brain in one PC. Memristors are
expected by 2015
(<http://spectrum.ieee.org/robotics/artificial-intelligence/moneta-a-mind-made-from-memristors/0>);
they'll help run such brains in real time.

Also these old charts <http://www.transhumanist.com/volume1/moravec.htm>
comparing brains to computers need to be adjusted because glial cells also
participate in signal processing. So we need a few teraflops of padding.

------
rsaarelm
Not sure just how important the humans being different from other animals
thing is. We've got a billion years of evolution of multicellular life,
working up to the brain of the not-quite-human primates, and something like
two million years in which humans developed their unique traits. Wouldn't be
my first guess that all the heavy lifting happened in the last couple million
years instead of the other 998 million.

~~~
lincolnq
Here, read this:

<http://yudkowsky.net/singularity/power>

I agree with you for the most part. But somehow, what makes us different from
apes was some kind of keystone. Whatever evolved in the last two million years
in our brains made a HUGE difference in our effectiveness. Perhaps if we
understand it we'll understand something fundamental about intelligence.

~~~
rsaarelm
My take on that is that mammalian intelligence was already a massively
powerful thing, and just needed to stumble into some sort of trick to make it
more, I don't know, recursive? to start the emerging human dominance. I'd bet
money that going from a genuinely wolf-level AI we understand from first
principles to a human-level AI is much easier than going from AIBO to the
level of an actual wolf.

The problem with intuiting about this is that assessing animal intelligence is
a lot harder for us than assessing human intelligence. It's Moravec's paradox
all over again: the thing that seems so simple to us that we used to think of
it as an absence of skill rather than a skill, behaving like an animal, might
be the hardest thing in making an AI.

------
karolisd
To create anything resembling human intelligence, you'd have to create it the
way human intelligence was created.

Imagine you are God and you have the tools to create a universe and its laws.
And you want humans, but you can't just make them.

The universe you create will need to have the right laws of physics such that
a self-replicating molecule (essentially a program, right?) will arise. And
that the molecules can mix with each other and new combinations can arise. And
you need molecules that can form bubbles so that cells can happen. And you
need a bunch of elements for all sorts of things.

Then the world needs to apply the right selective forces for the whole
evolutionary journey to happen.

And if you do it right, you'll end up with human-like intelligence, an
intelligence motivated by something: survival and reproduction.

------
hooande
Is anyone on hn working on technology that is similar to a human (or even a
rat) in its ability to learn and form hypotheses? I've only known one or two
people who actually tried it, and it usually didn't last long.

Personally I feel that most of the benefits that come from "strong ai" can be
duplicated with basic statistical analysis. In the spirit of Peter Norvig's
"more data beats better algorithms", I think we might all be better served by
making an effort to gather and structure as much data about the world as
possible. It's not as sexy as creating an artificial sentient being, but over
time I think the results would be similar.

~~~
lincolnq
Two from the last couple years:

<http://en.wikipedia.org/wiki/Adam_%28robot%29>

<http://creativemachines.cornell.edu/eureqa>

The Singularity Institute is working on making such a thing safe:
<http://singinst.org>

------
6ren
> Chomsky derided researchers in machine learning who use purely statistical
> methods to produce behavior that mimics something in the world, but who
> don't try to understand the meaning of that behavior

[He's not deriding these, but] statistical methods can be used to infer
models: you have a series of models, and you measure how well each one models
the data, and you include a measure of the complexity of the model (e.g. the
choices (information) needed to specify that model). The model requiring the
least information wins (related to Occam's Razor).
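
To make that concrete, here's a minimal sketch of the idea (my own
illustration, not something from the article): fit candidate models of
increasing complexity and score each by goodness of fit plus a penalty for the
information needed to specify it, here a BIC-style penalty on least-squares
polynomial fits.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # toy data

    def bic(y, y_hat, k):
        # fit term (log residual variance) plus complexity term (k parameters)
        n = y.size
        rss = np.sum((y - y_hat) ** 2)
        return n * np.log(rss / n) + k * np.log(n)

    for degree in range(1, 10):
        coeffs = np.polyfit(x, y, degree)
        print(degree, round(bic(y, np.polyval(coeffs, x), degree + 1), 1))
    # The degree with the lowest score wins: accuracy traded off against the
    # information needed to specify the model, i.e. Occam's Razor.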

~~~
woodson
Yeah, but a relevant measure of complexity is not that easy to come by. And by
what measure do you judge which is more 'valid'? For example, the choice among
AIC, BIC, etc. may have a strong effect on which model 'wins'.

You are right of course, but I'm wary of making claims based on inferences
from model comparisons without stating such limitations.
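
For what it's worth, the standard definitions (not anything from the thread)
show where that sensitivity comes from. With k fitted parameters and maximized
likelihood L on n data points:

    AIC = 2k - 2 ln(L)
    BIC = k ln(n) - 2 ln(L)

The fit term is identical; only the per-parameter charge differs, and since
ln(n) > 2 once n is more than about 7, BIC penalizes extra parameters more
heavily and can crown a different 'winner' on the same data.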

~~~
6ren
Yes, the measure of model complexity is a problem.

One solution is to use a program for a Turing machine as the model. The length
of the program represents the complexity (which measures the choices that form
the program). There are different formulations of a Turing machine (different
languages for programming one), but since any Turing machine can be simulated
by any other Turing machine with a program, the length of that simulator
program bounds the difference in complexity introduced by your selection of
some specific Turing machine. If you are comparing models using the same
Turing machine, then this constant difference doesn't matter: a - b =
(a + x) - (b + x).

Another practical point is that more data swamps minor differences between the
specification languages used to describe models. More data is better.
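
As a toy version of the two-part idea (my own sketch in the
minimum-description-length spirit, not literal Turing machines): a model's
score is the bits needed to state the model plus the bits needed to encode the
data given the model.

    import math

    def residual_bits(residuals, precision=0.01):
        # crude surrogate coding cost: bigger errors take more bits to write down
        return sum(math.log2(abs(r) / precision + 2) for r in residuals)

    data = [2 * i + 1 for i in range(20)]  # secretly generated by y = 2x + 1

    candidates = {
        "constant 20": (1, [d - 20 for d in data]),                    # 1 parameter
        "line 2x + 1": (2, [d - (2 * i + 1) for i, d in enumerate(data)]),
    }

    for name, (n_params, residuals) in candidates.items():
        model_bits = 32 * n_params  # say 32 bits to state each parameter
        print(name, round(model_bits + residual_bits(residuals), 1), "bits")
    # The line costs more bits to state but fits exactly, so its total
    # description length is far shorter; the constant pays dearly for its errors.

The shared coding conventions (32 bits per parameter, the residual precision)
play the role of the fixed Turing machine above: they shift every candidate's
total by the same kind of constant, so the comparison comes out the same.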

------
olalonde
Weird, I submitted the same article yesterday and got only 5 up votes:
<http://news.ycombinator.com/item?id=2525463>.

~~~
ColinWright
Getting an item noticed on HN is now largely pot luck. It is correlated with
quality, but it's also correlated with time of day, day of the week, number of
current readers, and it's _still_ got a huge component of luck. If you don't
get four or five upvotes in the first 30 to 90 minutes, it sinks without
trace.

In part this is because of the huge volume. In part the volume is because the
focus is as wide as it always was, but the audience has increased enormously.

Many potential solutions have been discussed, but in general, no one has
produced a convincing analysis accompanied by concrete solutions that will
clearly work.

Currently it's a case of - live with it.

<http://news.ycombinator.com/item?id=2022547>

------
empiricus
In a couple of years we will start seeing a lot of computer vision
applications - and robots will be one of them. This is because computing
power has just reached the needed level. Once you have vision, you can really
start applying the other old AI stuff like planning, etc. Of course this is
not Artificial General Intelligence, but it is a step forward and it will
greatly improve the visibility of current AI.

------
knowtheory
You can tell this article was written by someone who doesn't follow artificial
intelligence and neural networks.

How? Because people in the field of neural networks and AI would never claim
that Minsky "pioneered neural networks". To the contrary (and as Minsky's
wikipedia article – i'm sure the source of this claim – obliquely notes),
Minsky's pessimism about the abilities of neural network computing led to the
abandonment of artificial neural networks as a major research topic.

That alone should make one skeptical about this author's depth of knowledge
about artificial intelligence.

Beyond that, this article and the quotes therein are just flat-out incorrect.
There are people who are attempting to analyze behavior, model it, and build
systems that mimic this behavior. They're called cognitive scientists. This
approach is taken by linguists, psychologists and philosophers all.

But this stuff is incredibly difficult to analyze, let alone model correctly.
It annoys me to hear the opinions of the panelists reduced to "oh gee, why
isn't anyone doing more holistic research".

When i read the actual quotes by Minsky, Partee and Chomsky, i hear the three
things i expected to hear, and that each academic has been saying for years.

1) Chomsky, an old school linguist, doesn't like systems that we can't
introspect and verify as correctly modeling human behavior.

2) Partee, who is responsible for recognizing the power and importance of
Montague Semantics and linguistic pragmatics, states that AI requires
world/state modeling that is equivalent in complexity to that required for
robust natural language processing (a position i agree with).

3) Minsky thinks nobody is trying hard enough, and that the constraints put on
researchers by actual implementation have led us down a blind alley.

Lastly, Sydney Brenner complains that neuroscientists can't see the forest for
the trees. I guess he's not familiar with all the research in cognitive
psychology, trying to model cognitive faculties like memory, language use,
decision making, attention switching and more.

That we haven't "solved" AI or made thinking machines is a misleading claim
that is contrary to all of the awesome stuff that humans have built in the
past 10 years. Look at all of the stuff that Google has built and tell me that
we don't have thinking machines that can understand (or if you'd like to be
more circumspect, predict) what we want. Tell me that Watson wasn't a marvel
of not just engineering but modeling intelligence.

The major editorial thrust of this article is an incorrect platitude, which
isn't supported by reality or by the assertions and claims made by the
panelists (each of whom i respect for the work they have contributed to the
broader field of cognitive science), and it annoys me that this claptrap
pastiche is being passed off as journalism.

We have made progress, and we will continue to make progress.

~~~
jarekr
"How? Because people in the field of neural networks and AI would never claim
that Minsky "pioneered neural networks". To the contrary (and as Minsky's
wikipedia article – i'm sure the source of this claim – obliquely notes),
Minsky's pessimism about the abilities of neural network computing led to the
abandonment of artificial neural networks as a major research topic."

That is a very confused description of what happened. Minsky is in fact a very
important contributor to early neural network theory, and what you refer to as
his "pessimism" is in fact his proof that a neural network cannot be trained
in any way to "learn" the exclusive-or logical function (among other things).
This is one of the fundamental results in NN theory.

See: <http://en.wikipedia.org/wiki/Perceptrons_%28book%29>

~~~
kleiba
_his proof that a neural network can not be trained in any way to "learn" the
exclusive-or logical function_

What you mean is a neural network without hidden layers.

------
iwwr
As an aside, is there progress toward a formal definition of the English
language? Have other natural languages been formalized yet?

~~~
woodson
The problem lies with the question 'What is the English language?'

Is it what we speak? write? what we find on the web? in what social context?
Given some constraints, then perhaps yes, you could write a formal grammar and
have a parser accepting this language.

Maybe my answer doesn't seem helpful, but my point is that language isn't a
static, fixed and closed set.

~~~
gruseom
_Is it what we speak? write? what we find on the web? in what social context?
Given some constraints, then perhaps yes, you could write a formal grammar and
have a parser_

How about defining it as what the two of us speak while having a beer in San
Francisco? Is that constrained enough? If not, what constraints should be
added until it becomes possible to write a parser?

------
stralep
_Winston speculated that the magic ingredient that makes humans unique is our
ability to create and understand stories using the faculties that support
language: "Once you have stories, you have the kind of creativity that makes
the species different to any other."_

Any idea where this is coming from? Any related articles?

------
forensic
If the status quo is a problem at all, then it's a problem with all of modern
academia.

The problem will only be solved by better ways of selecting and supporting
academics. Fix how stuff is funded and you fix the issue.

------
indrax
"Read the Sequences."

~~~
keefe
so many words... this is probably a better place to start
<http://singinst.org/singularityfaq>

------
zwischenzug
I notice Hofstadter wasn't there. Maybe we could have a whip-round and send
them Fluid Analogies?

~~~
alnayyir
>Maybe we could

No.

------
dmfdmf
AI is an epistemological problem. What stalled AI is the lack of a
comprehensive theory of concepts and theory of induction. All the traits cited
in the article that distinguish us from the animals are derivatives of reason.
Whenever these supposed intellectuals get around to realizing this fact they
can all eat crow and thank Ayn Rand for solving the problem of concepts and
leaving significant clues to the solution to the problem of induction.

~~~
moultano
>thank Ayn Rand for solving the problem of concepts and leaving significant
clues to the solution to the problem of induction.

Please explain.

~~~
dmfdmf
There are two fundamental epistemological issues with respect to reason. i)
how do we form concepts and what (in reality) do they refer to and ii) how do
we reason from specific observations to general conclusions and validate our
conclusions (i.e. not deduction but induction).

Both of these problems were known (and unsolved) by the Greeks. Ayn Rand (in
my judgment) solved the first problem, the problem of universals. How do we
form concepts (classes) and how are they connected to the actual things
subsumed by the concept. Her answer was that concepts are formed on the
principle of measurement omission, which means the objects that form a class
posses some trait(s) which is present but unspecified. This is what she called
the "some but any" principle. She said this is the same principle behind
algebra in that a variable "x" represents _some_ quantity but may have _any_
quantity but remains unspecified.

She explains all of it in her "Introduction to Objectivist Epistemology"

As far as induction goes, she never tackled it, but she did say that she
thought the basic principles of induction are probably buried in higher
(calculus or above) mathematics. She was being tutored in mathematics at the
time of her death, hunting for the principle of induction. In my view, the
principles of induction can be found in the work of E.T. Jaynes on (Bayesian)
probability theory and the logic of science.

My own view is that AI is stuck because we don't yet have the basic principles
of how reason works, so we can't program a computer to do it.

N.B. Most people know of Ayn Rand because of her advocacy of selfishness in
ethics and capitalism in politics and economics. What they don't realize is
that her views in these higher level branches follow from her deep thinking in
the more fundamental levels of epistemology and metaphysics.

~~~
Kim1776
E.T. Jaynes' work on induction entails a subtle fallacy: logic is subordinated
to math, effacing logical units and qualitative concepts as a category.

See David Harriman's book "The Logical Leap: Induction in Physics" for the
latest on Objectivist induction theory. He explains how it is possible to form
a valid generalization from a single instance under certain circumstances,
something the frequentist Bayesian approach can't do.

~~~
dmfdmf
I have not read Harriman's book and don't know Jaynes' work well enough to say
whether he inverted the hierarchy, but it wouldn't surprise me if he did, since
he was not an Objectivist nor was he familiar with AR's theory of concepts and
other important work. However, I suspect that even if he made such an error,
it would not invalidate much of what he did achieve.

I'm a bit confused by your use of the term "frequentist Bayesian"... Perhaps
you meant "something [both] the frequentist [and] Bayesian approach [to
statistics] can't do." The Bayesians are opposed to the frequentists and
Jaynes was no exception. I think a more important unsolved issue, even for the
Bayesians, is how do you get out of probability (as a measure of belief) to
100% certainty (i.e. to truth, 99.9999% probability is not good enough). This
is the big kahuna of induction and once we solve that we are off to the races
on such things as AI.

~~~
Kim1776
Jaynes' work is valid within limits as a concept of method for some areas of
science and engineering. However, his error was in effect trying to replace
the logical law of identity with the mathematical law of equivalence, "A is A"
becoming only "A = A".

In his book, Jaynes says that Aristotelian logic is a special case of
probability theory, applicable in a situation where 100% certainty is
achieved. This makes all of logic subordinate to a mathematical process, and
as such this modeling approach is anti-conceptual.

What I meant by "frequentist Bayesian" was the property common to both the
frequentist and Bayesian approaches with respect to induction, that being the
idea that certainty is in general mathematically established by counting and
ratios. That may be the best way to bet sometimes, but it is a fallacy to
define logical certainty as such in mathematical terms.

~~~
dmfdmf
Re P1: Jaynes presented his work as a method for science and engineering but
throughout his work stated that the method was general. AR said that
mathematics is an abstracted form of conceptualization and logic which is why
she was studying higher math to understand induction. Jaynes was not an
epistemologist (nor an Objectivist) so the failure to distinguish or explain
the exact relation between "A is A" and "A = A" is understandable, especially
in light of his explicit goal of targeting science and engineering. I did not
claim that Jaynes solved the problem of induction nor that his theories were
perfect, just that I think he was on the right track and made some significant
contributions.

Re P2: Jaynes' claim was that the laws of probability reduce to Aristotelian
rules of deductive logic when Pr = 1. I do not agree that this makes logic
subordinate to a mathematical process -- no more than AR's theory of concepts
makes concept formation subordinate to algebra. In any case, the claim is
mathematically true (given Jaynes' system which I admit may need revision) but
it is also a fact that deduction is dependent on induction which is
consistent. So to clarify the exact relations (without violating identity or
inverting hierarchy, etc) requires AR's theory of concepts which Jaynes did
not have. Moreover, to define the principles of when it is valid (without
enumerating all cases) to go from Pr<1 to Pr=1 IS the problem of induction
which Jaynes did not specifically address.
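
(To spell out that limiting case in my own words, not Jaynes' exact
presentation: if Pr(A) = 1 and Pr(B|A) = 1, then

    Pr(B) >= Pr(A and B) = Pr(B|A) Pr(A) = 1, so Pr(B) = 1

i.e. modus ponens drops out of the product rule once everything is certain.)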

Re P3: Jaynes completely rejects the counting and ratio arguments for
induction. Unfortunately he flips over to the subjectivist and vague "degrees
of belief" view of probability and is unable to define it (which is why he
needed a theory of concepts). This failure is hardly surprising given that
every field today is split on intrinsic versus subjectivist premises.

