It is obviously not a waste when we use computers to do syntax. Chess-playing computers can now beat the best international grandmasters; this is an example of computers doing syntax.
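To make "computers doing syntax" concrete, here is a minimal sketch of minimax search, the kind of mechanical rule-following at the core of a chess engine. Everything in it is illustrative (a toy Nim-like game stands in for chess move generation); the point is that the search is pure symbol manipulation plus a numeric score, and nothing in it "understands" the game:

  def moves(state):
      # Legal moves in a toy Nim-like game: take 1, 2, or 3 stones.
      return [n for n in (1, 2, 3) if n <= state]

  def apply_move(state, move):
      return state - move

  def score(state):
      # Terminal evaluation: the player to move with no stones has lost.
      return -1

  def negamax(state):
      # Mechanical recursion over symbols; no meaning anywhere.
      if not moves(state):
          return score(state), None
      best_value, best_move = float("-inf"), None
      for m in moves(state):
          value = -negamax(apply_move(state, m))[0]
          if value > best_value:
              best_value, best_move = value, m
      return best_value, best_move

  print(negamax(5))  # -> (1, 1): taking 1 stone leaves a losing pile of 4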

For every task which is syntax only, it is a fantastic idea to make computers do the work, and relieve humans from the drudgery. I'm all in favor of that.

But for any task which requires semantics -- real human intelligence -- it is foolish to attempt to replace humans. It cannot be done.

What is required is the wisdom to know the difference between the two.




> But for any task which requires semantics -- real human intelligence -- it is foolish to attempt to replace humans. It cannot be done.

Well, we don't know this for sure, do we? "It's an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, ... it is an open empirical question whether any such processes are involved in the working of the human brain." [1]

[1] http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Ph...
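To unpack what "simulation by a Turing machine" means there: a Turing machine is just a finite rule table driving a head over an unbounded tape, and simulating one takes only a few lines. The machine below is a made-up example (a binary incrementer), purely to show how little machinery the thesis is about:

  def run(rules, tape, state, head=0, accept="done"):
      # Sparse tape as a dict; unwritten cells read as the blank "_".
      tape = dict(enumerate(tape))
      while state != accept:
          symbol = tape.get(head, "_")
          state, write, move = rules[(state, symbol)]
          tape[head] = write
          head += move
      return "".join(tape[i] for i in sorted(tape)).strip("_")

  # Rule table: (state, symbol read) -> (next state, symbol written, head move).
  increment = {
      ("right", "0"): ("right", "0", +1),  # scan right to the end of the number
      ("right", "1"): ("right", "1", +1),
      ("right", "_"): ("carry", "_", -1),  # then carry leftward
      ("carry", "1"): ("carry", "0", -1),
      ("carry", "0"): ("done",  "1",  0),
      ("carry", "_"): ("done",  "1",  0),
  }

  print(run(increment, "1011", "right"))  # -> "1100" (11 + 1 = 12 in binary)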


If you accept materialism as true, then by the Church-Turing hypothesis it is a necessary and foregone conclusion that computers will achieve full human intelligence, and even more, because they do not get tired and are not distracted.

But the hypothesis of materialism is what is in question here, both by my citation of the difference between syntax and semantics, and the obvious (to anybody who understands what a computer is and does) conclusion that computers do only syntax, while humans clearly do semantics also, and also by Searle's and Nagle's work.

I think that the evidence (which most people want to deny) is very clear that materialism is false. Most people deal with this evidence by ignoring it, or by denying it exists in the first place. They never address it.

You can prove me wrong. You can prove that materialism is correct. Just produce a real AI which is every bit as intelligent and capable as a human. Produce an AI which can really do semantics. Produce an AI which clearly convinces everybody that it is really intelligent in the way humans are, without any parlor tricks (like modelling an idiot savant).

It is much harder to prove that materialism is false, but that is what Nagle has done in his recent book. Have you read it? If he has not convinced you, please critique his arguments.


> and the obvious (to anybody who understands what a computer is and does) conclusion that computers do only syntax, while humans clearly do semantics also,

You keep claiming this is "obvious". But it can only be "obvious" if you first accept that the materialistic hypothesis is false, or that there is some reasonable definition of "computer" in a purely materialistic universe that does not include a brain and that there is no possible alternative structure that can meet a reasonable definition of "doing semantics".

To me, there's no reasonable way of claiming that the answer to this is "obvious". Firstly, the materialistic hypothesis is my default assumption in the absence of any evidence whatsoever that it does not hold; secondly, in the absence of evidence against the materialistic hypothesis, it is my default assumption that the brain is a computer.

Thirdly, while I accept that we could define a category of "sentient brains" and intentionally exclude it from the category of "computers" for the sake of argument, and while I concede that under such definitions it might be possible that there is no alternative means of physically structuring computers that could give the same outcome as the structure of a brain, even then I don't see any justification for why it would be obvious.

Your arguments in this thread, when not circular, rest on a whole cloud of hand-waving away controversial issues behind claims of "obviousness".


If we humans and other animals have some nonmaterialist magic sauce in us that enables us to actually think when nothing else in the universe can, then a few questions follow quickly from there:

1) Why have we never observed the magic sauce directly in an experiment?

2) Why does the magic sauce only ever explain the otherwise-not-yet-explained instead of making novel predictions? How can the magic sauce fit in with the "AI Effect", in which AI detractors continually move the goalposts for "intelligence" the instant an algorithm can solve any particular problem intelligently?

3) In a matter related to the "AI Effect", how can we use the magic sauce to take over the world and kill all humans? Since this is the current standard for well and truly forever defeating the "AI Effect" and getting detractors to admit (possibly from beyond the grave) that your software really was intelligent, a magic sauce of human intelligence should be able to accomplish the same goal.

4) How does the magic sauce causally interact with the material world to generate our thoughts and consciousness?

5) Where does the magic sauce come from?

6) How can we make more of the magic sauce?

7) What other nonmaterial, irreducible phenomena does the magic sauce exist alongside, and how does it interact with those other phenomena?

If you really believe in nonmaterialist magic sauce, and aren't just engaging in a "dualism of the gaps" argument, you should be able to at least propose scientific avenues for investigating these seven questions.


I am not an expert in whatever field "materialism" belongs to, so I will defer to accepted knowledge (or lack thereof) as professed by layperson-friendly sources such as Wikipedia, at least till the time I really get interested in these fine differences.

May I point out that the tone you use in these discussions is not, in general, of a nature which encourages a lay person to even read what you say, much less to follow up on your ideas? People really do not like being looked down upon, and your writing comes across as quite condescending (among other things). Here are a couple of things from the parent comment to illustrate this:

1. "the obvious (to anybody who understands what a computer is and does) conclusion that computers do only syntax, while ..."

Do you see what's wrong with a statement of the form "What I want to prove is obvious to anyone who knows even a little something", and how it might come across as (i) unacceptable hand-waving even in a research paper in the relevant field, and (ii) highly condescending in public discourse?

2. The whole paragraph starting with "You can prove me wrong. ..." comes across as childish. Who am I, anyway, and why should I go to all that trouble to prove you wrong, even if I were somehow capable of doing it? As an analogy, suppose in a discussion on life outside earth I quote current expert consensus as found on Wikipedia as saying that there is potentially life on Europa [1]. And someone retorts saying "No. Prove me wrong. Just build a spaceship which can go bring some life from Europa." Do you see something wrong with such a response? In particular, do you see something wrong with the use of the word "just" here?

3. Your repeated insistence that everyone should read some work and critique it before countering your arguments comes across as obnoxious behaviour.

4. A minor point: it is Nagel, not Nagle. I mention this because I see you making the same mistake in multiple comments.

You seem to have interesting points to make. It would be good if you made them in a way which people find enjoyable to read.

[1] I made this up on the spot. I don't know what the current expert consensus is, so check Wikipedia before taking this as true :).

(Edit: Formatting)


Thank you for correcting me on Nagel's name. I'm embarrassed that I misspelled it, cuz his book is right in front of me.


If a bright-line distinction between syntax and semantics requires a rejection of materialism, to me that is a pretty compelling reason to not draw a distinction between syntax and semantics.

Maybe "most people" ignore the evidence, but I think anigbrowl gave a pretty good response and I'd reiterate his suggestion that Hofstadter and Dennett have given adequate replies to Searle. I would go so far as to say Hofstadter's Godel Escher Bach is the most important popular-audience book ever written about AI.


There is clearly a distinction between the two, and we don't know how to use symbols to represent meaning [1]. But the fact that we don't know how to do so yet obviously does not mean that there is no way, or that our brains are not doing it right now.

[1] http://en.wikipedia.org/wiki/Symbol_grounding
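The linked symbol grounding problem is easy to see in miniature. Here is a toy sketch (the dictionary entries are invented for the example): every symbol is defined only by other symbols in the same dictionary, so chasing definitions never bottoms out in anything the symbols are about:

  # Every definition points only at other symbols; nothing refers outward.
  definitions = {
      "zebra": ["horse", "stripes"],
      "horse": ["animal", "rideable"],
      "stripes": ["pattern"],
      "animal": ["zebra", "horse"],  # and we are back where we started
      "rideable": ["horse"],
      "pattern": ["stripes"],
  }

  def chase(symbol, depth):
      # Expand a symbol through its definitions, `depth` levels deep.
      if depth == 0:
          return symbol
      expanded = ", ".join(chase(p, depth - 1) for p in definitions[symbol])
      return symbol + "(" + expanded + ")"

  # However deep we expand, we only ever reach more ungrounded symbols.
  print(chase("zebra", 3))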



