I don't think philosophy is keeping up with the progress in AI, neuroscience and game theory. If it did, there would be less discussion about the 'Chinese room', qualia and the 'hard problem' of consciousness. Instead, we'd hear about embodiment, rewards, agents vs environments, learning, representation, the exploration-exploitation tradeoff, and such. There are theories of language and meaning in philosophy, but in AI there are actual high-dimensional representations for text and images, with their affordances and limitations. It's like comparing a drawing of a car with an actual working car, with its good and bad parts. Philosophy seems not to be keeping up with the progress of the last 5 years, and it has been one hell of a 5 years, with amazing discoveries.
I am not a philosopher, but I would respectfully suggest the problem may be more that you are not keeping up with philosophy. Your timescales also seem a bit strange.
I was last in an academic context with philosophers of mind in the mid-90s, at which point Searle's Chinese room was already considered a tired argument (it was seen as an argument from incredulity). The hard problem of consciousness and qualia seem odd to present as worth 'leaving behind': they are surely even more pressing as synthetic thought becomes more capable. More than one of the other areas you suggest was certainly active research in the 90s. (Though I was on the periphery, looking at evolutionary engineering, the research group that I was part of contained both scientists and philosophers of mind and was focused on cognition and affect, most definitely including rewards, agents, learning and representation.) So I wonder whether the issue is what you hear about rather than what philosophy is being done.
It's been an incredible, fate-changing 50 years. The last 5 years? Really good targeted advertising, computer vision successes passing for "machine learning" and some tentative new medical treatments.
Also: the replication crisis throwing much of science into doubt. If anything we're in a time of stagnation punctuated by vaporware commercial hype.
I agree partially. It is true that philosophy cannot entirely keep up with scientific and technical developments, which is not surprising given the amount of resources that flow into philosophy compared to the money spent on STEM fields. That being said, philosophers hardly talk about the Chinese room and qualia these days. There is even a tiny move away from the hard problem of consciousness. See: https://philpapers.org/archive/CHATMO-32.pdf
Also, there is me. I have a PhD in philosophy and work towards an MPhil in CS to cover the areas you mention. Some philosophers are on it, trying to keep up.
Please point me to a paper treating consciousness from the point of view of agent, environment and rewards (reinforcement learning). I have searched and found very few papers taking this angle. I concluded that philosophers don't know about it.
In my view even using the word 'consciousness' is a bad thing because it is not well defined and has too much baggage (some people relate consciousness to spirit or soul, for example).
Instead we should use different concepts, the ones I enumerate above. They cover sensing, prediction of future rewards (emotion), acting and learning. All are concrete and well defined, with possible implementations in AI.
The question about which concepts you should use and why (and the arguments supporting them) might be a decent philosophical one.
"Too much baggage" isn't particularly well defined, though, nor is it a strong argument to say that some people relate consciousness to spirit or soul (some people also use terms like "momentum" or "norm" in imprecise ways but it doesn't keep practitioners for whom those terms have precise well-defined meanings from deploying them effectively).
"we should use different concepts... All are concrete and well defined"
Or perhaps we should use concepts from clockwork engineering or mill machinery. Very concrete and well defined.
I'm not sure what you're getting at here. What you say sounds like a perfectly interesting starting point for thinking about consciousness (although I'm not sure how it's supposed to fit with the fact that we're not computers, and have a distinct cognitive makeup, that mainstream psychology - 'cognitive psychology' - has committed itself to studying for several decades, with great profit). You would, in any case, have to specify what you mean by 'agent, environment, reward' - that sounds like a great deal of psychology.
But what I took objection to was the suggestion that philosophy is or should be primarily interested in clearing up some of the conceptual problems attendant to contemporary computer science and engineering.
I'm sure that's one interesting area of philosophy, but it is only one, modest area.
There is the philosophy of aesthetics and literature, of ethics and politics, of religion and science, of epistemology and knowledge, of the philosophy of history and the history of philosophy, and so on.
From what you said about existing philosophy ('Chinese room', qualia and 'hard problem' of consciousness) and the philosophy you would like to see (as doing the same thing as AI but worse), you completely discarded what the vast majority of what philosophers - now, and most certainly in the past - have actually done and been interested in.
That is why I said you had a parochial view of philosophy.
All that aside: I would be interested to hear how you think AI should change the philosophy of language and meaning.
The notions of agent, environment and reward are encountered in Reinforcement Learning which is a sub-field of ML but also relevant to biological agents. There is a great RL course on YouTube by David Silver (one of the creators of AlphaGo, which is probably the most famous application of RL so far).
The course lays out the RL perspective properly and can shed new light on the philosophy of mind, if you take what it says and then extrapolate to humans and other agents.
What I find fascinating about RL is that it can be defined concretely. Consciousness can only be defined by reference to other words, in a less exact and concrete way. RL can also explain how meaning appears, based on future reward prediction. The rich sensations we have can be explained by an encoding-decoding architecture based on reconstruction error. Many difficulties in RL map back to difficulties humans have in choosing how to act - the exploration-exploitation tradeoff, instinct vs reason (two different ways to perform RL, one based purely on rewards and the other based on an internal model of the world and rewards). Some of the problems related to multi-agent RL are also covered in Game Theory, such as the prisoner's dilemma.
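For readers unfamiliar with the RL vocabulary used here, a minimal sketch of the agent/environment/reward loop, with the exploration-exploitation tradeoff handled by an epsilon-greedy rule on a two-armed bandit (the payout probabilities below are made up for illustration):

```python
import random

payout_prob = [0.3, 0.7]          # environment: hidden reward probability per arm
q = [0.0, 0.0]                    # agent: value estimates for each arm
counts = [0, 0]
epsilon = 0.1                     # exploration rate

random.seed(0)
for step in range(10000):
    # exploration-exploitation tradeoff: mostly exploit, sometimes explore
    if random.random() < epsilon:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: q[a])
    # environment returns a reward; agent updates its estimate
    reward = 1.0 if random.random() < payout_prob[arm] else 0.0
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]   # incremental mean update

print(q)  # estimates approach the true payout probabilities
```

Everything in full RL (states, policies, value functions) elaborates on this loop; the point is only that every term in it is concrete and measurable.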
Regarding philosophy of language: we have today numerical representations of the meaning of words and phrases. They are usually represented as vectors in a high-dimensional space, or sets of vectors. For example, it is customary to use 300-1000 dimensions for representing words. These vectors have a nice property: the closer two words are in meaning, the smaller the angle between their vectors. They are derived by trying to predict a word from its context, or vice versa, on a corpus of text several gigabytes in size.
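The geometry described above can be shown with a toy example. The 4-dimensional vectors below are invented for illustration; real embeddings (word2vec, GloVe, etc.) use hundreds of dimensions and are learned from large corpora, but the angle-as-similarity property is the same:

```python
import math

# invented toy vectors; similar words point in similar directions
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.7, 0.2, 0.1],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

def cosine(u, v):
    # cosine of the angle between u and v: 1 = same direction, 0 = orthogonal
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

print(cosine(vectors["king"], vectors["queen"]))  # close to 1: similar meaning
print(cosine(vectors["king"], vectors["apple"]))  # close to 0: dissimilar
```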
Many ideas from the philosophy of language, such as the meaning of words being related to the 'game' being played (activity with a purpose), emerge naturally from successful AI models. I'd say that where philosophy had a glimpse, AI has a testable implementation that can solve real world problems. Where philosophy uses mere words, AI uses probability distributions and datasets to define such models. The brain is probably doing something similar.
Other things that AI has managed to do so far: to encode images into latent representations and back, to synthesise images. Same for speech - we have speech recognition and text to speech. Some modules used in neural networks are analogous to imagination, attention, memory, emotion, intuition and many other aspects of the mind. On narrow domains computers can already best humans at perception.
The piece that is missing in AI to match human level is the prior knowledge encoded in our genes. We have been optimised to learn and function well in our environment by evolution. That means our verbal areas in the brain have a notion of invariance to time translation, visual areas have invariance to space translation, and conceptual areas have an invariance to permutation. There might be more invariances but we just don't know yet and that's why AI models are not yet up to par with humans. But we can still learn a lot about ourselves by analogy with AI agents. And that's where I think philosophy should listen.
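One of the invariances mentioned above can be sketched in a few lines: a convolutional filter responds to a pattern regardless of where it occurs in the input (strictly, convolution is translation-equivariant; taking the max over positions makes the detector invariant). The kernel and signals below are toy values chosen for illustration:

```python
def conv1d(signal, kernel):
    # slide the kernel along the signal and record the response at each offset
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

kernel = [1, -1, 1]               # toy pattern detector
a = [0, 0, 1, -1, 1, 0, 0, 0]     # pattern near the start
b = [0, 0, 0, 0, 0, 1, -1, 1]     # same pattern, shifted right

# the maximum filter response is identical regardless of position
print(max(conv1d(a, kernel)), max(conv1d(b, kernel)))
```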
Recent progress in AI hasn't made a dent in questions related to consciousness and intentionality. And all those things you mentioned such as embodiment, agents, environments, etc. have been explored in psychology, cognitive science and philosophy (also computer science) prior to the last 5 years. They're not new terms.
IMO, progress in philosophy leads to science. First you think and imagine (philosophy) then you try and know and have made progress (science).
From the article: Yes (with qualification) and yes. Already in Republic (Plato again!) we have an argument—a clear and compelling rational argument—that even the highest political office should be open to women. The argument? List what it takes to be a good leader of the state, then note the conditions that distinguish the sexes. There just is zero overlap between the two lists.
IMO there is not much of interest in philosophy regarding AI (except for the thinking and imagination of AI engineers). AI is still just computations as far as we know.
From the article: Almost all believe in consciousness and most don’t have a clue how to explain it, which is wisdom.
Hubert Dreyfus claimed that AI was a test of the Cartesian theory of mind.
Dreyfus was also pretty much right about everything AI-related from the 1960s on. It will take nothing short of a true self-driving car to refute something he said.
The essential blocks of Dreyfus are in Martin Heidegger (which I also haven't studied in depth). Helping you obtain a partially understood, utilitarian version of my partially understood, utilitarian version of Dreyfus's partially understood, utilitarian version of Heidegger would be... enabling.
You really need to "spend time with the philosophy of other people" if you want to move ahead with the notions explored in those two links.
I have spent some time answering my own questions. But I have neither the motivation nor the time, and maybe not even the intelligence, to learn and to understand the ideas of most philosophers.
I'd suggest that's not a good characterisation of philosophy, because according to that characterisation, everybody is a philosopher and doing philosophy all the time.
That definition is not specific enough to do the question discussed in the article justice.
The article defends (whether successfully or not I'll leave open) philosophy against the common charge of not having progressed much recently. Whitehead's famous remark that "the safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato" is perhaps the most popular form this criticism takes (whether that does Whitehead justice is another question I'd like to leave open here).
Here is a definition of philosophy that is much closer to the concept of philosophy the article defends: philosophy is what academics who work in philosophy departments do.
philosophy leads to science
I make a strong counterclaim: all progress in philosophy comes from progress in the hard sciences. As examples for my claim I put forward: quantum mechanics, general relativity, non-Euclidean geometry, set theory and type theory as foundations of mathematics, Gödel's incompleteness theorems, the theory of (economic) games (leading to the first substantial progress in moral philosophy since Kant), AI/ML.
Answering "what is philosophy?" is philosophy. And thus, IMO, I am right.
> Philosophy is what academics who work in philosophy departments do.
IMO this is too restrictive. You reduce philosophy to a job title or diploma title. I guess, many philosophers would not agree with this definition.
> I make a strong counterclaim: all progress in philosophy comes from progress in the hard sciences.
I agree that new knowledge and new abilities lead to new ideas and questions and concerns.
But philosophy is not limited to physics or chemistry; IMO the only hard sciences.
Physics and chemistry are hard sections within an open-ended spectrum:
- The beginning (1D view), or lower-level layer (3D view), is unknown. Is string theory correct? What are strings made of? Is mental consciousness the lowest layer? Is it all a simulation?
- The end (1D view), or upper layer (3D view), is the soft sciences, like biology, sociology, psychology, economics.
> IMO this is too restrictive. You reduce philosophy to a job title or diploma title. I guess, many philosophers would not agree with this definition.
I'd say that it is usually what we treat philosophy as when these discussions come up. As you said, it's probably too restrictive, and the truth is we're all philosophers to some degree, and all working off of certain philosophical underpinnings. But the article (and many philosophers) end up treating philosophy as the above poster describes ("what academics who work in philosophy departments do").
If we use a broad definition of philosophy then of course it is useful, and of course it's changed over time (most would consider it progress, some wouldn't). But this doesn't seem to say anything about whether or not the small subsection of philosophers who work as academics in philosophy departments have made useful contributions recently. The fact that they struggle so much with this question suggests that they might not have.
Whether or not these topics fall under the domain of philosophy, let me assure you that people in philosophy departments are covering them. Especially philosophers working in the philosophy of mind and cognitive science have a lot to say about issues such as embodiment, the relation between agents and environment (consider the extended mind debate or the debate about affordances), representation (e.g. Fodor), and many more.
In my experience when philosophers say something about those things, they say it in natural language (typically English), and that is not precise enough for measurement. All problems can be handwaved away with ad-hoc modification.
The difference between philosophers 'doing' AI and what e.g. DeepMind do is that the latter are precise enough (indeed as precise as possible -- pace the Church-Turing thesis) about their research hypotheses that they can measure and confirm/refute their hypotheses, unlike the former.
Whence all progress in AI since Turing, Shannon, Zuse et al has come from programmers and not philosophers.
Which philosopher has laid down the foundations of the field? One good starting point for AI is Leibniz's Calculemus!, and Leibniz was a mathematician/programmer and not a philosopher in the sense that the original article by Tim Maudlin seems to defend. Leibniz even built automata, formalised propositional logic, etc.!
I've had this discussion on HN several times before. As soon as you start pointing out the contributions of philosophy to various fields, people start denying that the people in question were philosophers. So you really can't win. By this logic, any philosopher who made a contribution to mathematics or science was ipso facto a scientist or mathematician and not a philosopher.
I completely agree with you. It's a difficult subject.
I have proposed the following two definitions:

1. Philosophers in the original article: best understood as academic philosophers.

2. Progress in AI/maths/hard science: comes from those who actually "do the maths/implementation/repeatable measurement" as opposed to using natural language only for discussing their ideas.

In my opinion the purpose of all science is truth, and truth (pace Socrates and the slave boy) must -- among other things -- be reproducible by others, ideally by every human. Technology for truth has improved over time, with mathematisation (and, in edge cases, programming and execution on a computer) as the current state of the art in reproducibility. When Frege succeeded in formalising first-order logic, the sacred heart of rationality, informal methods became second-class. All substantial progress in subjects formerly restricted to informal methods has since come from formalisation and empirical experiment.

If you don't agree with my (1, 2) above, then that's fine; we are talking about (slightly) different things.
You seem to be assuming that philosophers are somehow restricted to using natural language only, but
* the formalization and regimentation of natural language has always been a fairly central concern in philosophy (that's where formal logic comes from);
* mathematics can be, and used to be, done in largely natural language.
What was a good definition of philosopher then is not what is a good definition now. Meaning evolves!
I invite you to think historically, and in terms of ongoing differentiation of science: the drive towards formalising/axiomatising mathematics which was started in earnest at the end of the 19th, beginning of the 20th century, has been accelerating. These days mathematics is partly verified in interactive theorem provers like Isabelle/HOL, Coq, Agda and Lean. A Fields medallist (Voevodsky) dedicated his post-Fields career towards more mechanisation of Mathematics. I predict that in 100 years from now, mathematics that is not formalised in a mechanical tool will not be publishable in reputable venues.
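For a flavour of what such mechanisation looks like, here is a trivial machine-checked statement in Lean 4 (chosen only as an illustration; the provers named above differ in syntax):

```lean
-- A trivial theorem, checked by the Lean kernel rather than by a referee.
theorem two_plus_two : 2 + 2 = 4 := rfl
```

The point is not the triviality of the example but the workflow: the proof either type-checks or it doesn't, with no informal judgment involved.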
Philosophy is also much more formal than it was 1000 years ago (e.g. compare [1] to [2]). Indeed, the formalization of mathematics was driven by philosophers trying to put mathematical reasoning on an adequate foundation.
> The difference between philosophers 'doing' AI and what e.g. DeepMind do is that the latter are precise enough (indeed as precise as possible -- pace the Church-Turing thesis) about their research hypotheses that they can measure and confirm/refute their hypotheses, unlike the former.
They still remain in a framework of axioms we made. This gains nothing, and what's more, many scientists used to know this. Everything you measure, you measure according to a ruler you or someone else ultimately made. Yes, numbers are more precise, but more importantly, they're just numbers. And like what Douglas Adams said about money... it's very odd how much revolves around numbers, seeing how it's not the numbers that are unhappy, guilty, and so on. Never bought into that, and always preferred the company that puts me in.
> And so in its actual procedure physics studies not these inscrutable qualities, but pointer-readings which we can observe. The readings, it is true, reflect the fluctuations of the world-qualities; but our exact knowledge is of the readings, not of the qualities. The former have as much resemblance to the latter as a telephone number has to a subscriber.
— Arthur Stanley Eddington, The Domain of Physical Science (1925)
> The danger of computers becoming like humans is not as great as the danger of humans becoming like computers.
-- Konrad Zuse
> But the moral good of a moral act inheres in the act itself. That is why an act can itself ennoble or corrupt the person who performs it. The victory of instrumental reason in our time has brought about the virtual disappearance of this insight and thus perforce the delegitimation of the very idea of nobility.
-- Joseph Weizenbaum
How would you measure something like nobility? Do things you cannot measure exist? Can things you cannot prove mathematically be true? Can they be right? Should a person who doesn't love wisdom, or people for that matter, even be allowed to program machines that decide over the lives of others?
In game theory (such as the prisoner's dilemma) there is a concept of cooperation and betrayal. When an agent interacts with another agent, she has to decide whether it is in her best interest to cooperate or to exploit the other. Depending on the social environment and the existence of future interactions with the same agent, the choice can change. A noble human would be one who does not betray the larger good for her own limited gain. Thus nobility emerges from the cooperation/betrayal strategy in a multi-agent game.
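A minimal sketch of that dynamic, using the textbook iterated prisoner's dilemma payoffs (the strategy names and payoff values are the standard illustrative ones): tit-for-tat sustains cooperation against itself, while betrayal pays once and then locks both players into a poor outcome.

```python
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_moves):   # cooperate first, then mirror the opponent
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # one-shot gain, then mutual loss
```

Whether repeated interaction makes cooperation the equilibrium depends on the payoffs and the horizon, which is exactly the dependence on "social environment and future interactions" described above.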
Only if you think philosophy of mind is the same thing as computer science. I would consider neuroscience and psychology to be more informative for questions about the mind.
I am in awe of the empirical work in neuroscience. The last few years have seen a "Cambrian explosion" of new measurements. We can now measure live neurons at scale! I do think this work is also much more interesting than armchair thinking about the brain, consciousness, embodied cognition etc. However, as a working programmer/logician/foundations-of-maths person, I'm in a much better position to compare and contrast formal work in my field with philosophers' contributions than I am in neuroscience.
How do you see the influence of the Heideggerian critique of cognitivism, via Hubert Dreyfus, on the "Heideggerian AI" movement which preceded the shift away from classical symbolic AI towards connectionism and embodied learning?
Here's from the introduction to his paper Why Heideggerian AI failed and how fixing it would require making it more Heideggerian:
> When I was teaching at MIT in the early sixties, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: “You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand how the mind works. We in the AI Lab have taken over and are succeeding where you philosophers have failed. We are now programming computers to exhibit human intelligence: to solve problems, to understand natural language, to perceive, and to learn.” In 1968 Marvin Minsky, head of the AI lab, proclaimed: “Within a generation we will have intelligent computers like HAL in the film, 2001.”
> [...] As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly from the philosophers. They had taken over Hobbes’ claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of a “universal characteristic” – a set of primitives in which all knowledge could be expressed, — Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Russell’s postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.
Dreyfus agrees with you, in a way, although where you criticize philosophers doing AI, he criticizes the philosophical prejudices of AI practitioners, who often hold beliefs derived from Cartesian views on the mind. He especially criticized the grand claims of early AI researchers, but I think the criticism is still easily applicable.
Here, for example, from his book Being-in-the-world:
> Having to program computers keeps one honest. There is no room for the armchair rationalist's speculations. Thus AI research has called the Cartesian cognitivist's bluff. It is easy to say that to account for the equipmental nexus one need simply add more and more function predicates and rules describing what is to be done in typical situations, but actual difficulties in AI—its inability to make progress with what is called the commonsense knowledge problem, on the one hand, and its inability to define the current situation, sometimes called the frame problem, on the other—suggest that Heidegger is right. It looks like one cannot build up the phenomenon of world out of meaningless elements.
I'm not familiar with the Heideggerian critique of cognitivism, or of Hubert Dreyfus' work, but some of your quotes sound agreeable. I am not convinced however that the frame problem and related issues are unsolvable. The way forward is to program, measure and improve.
This is how philosophy is, in relation to stuff outside of philosophy.
For example, there's the discipline of epistemology, the study of knowledge. Then there are actual working versions of science that deal with all the hairy compromises and complications of reality. Car vs. picture of a car.
Still, the contribution stands: philosophers had a major role in creating the scientific method (e.g. Karl Popper).
I take your point. But actually, not so. 'Scientists' from the ancient Greeks until the 19th C were called natural philosophers, studying natural philosophy. The Greek philosophers all give their theories of the universe, they were the scientists of the day - maybe Socrates was the first to concern himself with just the human world. The word 'scientist' was coined only in 1834.
"Modern meanings of the terms science and scientists date only to the 19th century. Before that, science was a synonym for knowledge or study, in keeping with its Latin origin. The term gained its modern meaning when experimental science and the scientific method became a specialized branch of study apart from natural philosophy.
From the mid-19th century, when it became increasingly unusual for scientists to contribute to both physics and chemistry, "natural philosophy" came to mean just physics... chairs of Natural Philosophy established long ago at the oldest universities are nowadays occupied mainly by physics professors. Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867)."
Yes. But by the 19th century the scientific process was already doing quite well at advancing our understanding of the universe. The philosophers of science of that era mostly wrote down and formalized what was already happening. It’s not like scientists suddenly started doing their experiments differently because of the advent of a formal philosophy of science. There were some improvements, but they were incremental not transformative.
Sure. I was merely pedantically trying to say that, although, yes, it's ridiculous to claim that Popper had any kind of role in creating the scientific method, strictly speaking it was philosophers that created it - natural philosophers. The separation of science and philosophy came later.
I guess I bother making the point, because that history was surprising to me when I first learnt about it.
Describing, defining & creating are very closely related, if you're creating a system of thought/debate. It's an institution.
Karl Popper obviously didn't create the method out of whole cloth, but the ideas he formalized & championed became a real platform. Nearly any scientist/science advocate (deGrasse Tyson, Dawkins...) dealing with an "is this science" question paraphrases Popper.
The reason why philosophy is not keeping up is that there are 'misintegrating' frameworks within it that take it down the wrong track, so it can no longer integrate new concepts.
But you have that problem in science too, like quantum mechanics, string theory...