Douglas Adams was right–knowledge without understanding is meaningless (theguardian.com)
192 points by pseudolus 54 days ago | 69 comments



Counterpoint: The mechanism of action of a large proportion of very important drugs is a stone-cold mystery. If you have major surgery, you'll probably be anaesthetised with a drug that induces unconsciousness for no known reason. We know that the drug will render you unconscious, we know that the overwhelming majority of patients will remain unconscious for the duration of the procedure, we know that the overwhelming majority of patients will regain consciousness with no ill effects after the procedure, but we only have vague educated guesses as to why.

We don't know why acetaminophen relieves pain, we don't know why lithium stabilises mood in people with bipolar disorder, we don't know how antidepressants actually relieve depression. Would we like to know how these drugs work? Certainly, as it might help us to develop better drugs. Do we need to know how these drugs work? Absolutely not - given a choice between a mystery drug and having surgery without anaesthesia, I'm choosing the mystery drug every time.

Human beings are black-box algorithms; we can concoct plausible explanations for our behaviour, but there is abundant evidence that we don't actually know our own minds. I think our discomfort with black-box algorithms is essentially a reflection of our discomfort at the unknowability of the human mind. These algorithms work, but we don't really know what's happening on the inside, just like literally everyone you've ever met.


You write: “Human beings are black-box algorithms; we can concoct plausible explanations for our behaviour, but there is abundant evidence that we don't actually know our own minds. I think our discomfort with black-box algorithms is essentially a reflection of our discomfort at the unknowability of the human mind.”

I tend to agree, and from a species perspective there are boundaries to what other animals can do. For instance, an ant can only do so much; it can do basic thinking and processing, but as far as we know it lacks the complex physiological structures and reasoning abilities needed to concoct plausible explanations for its behavior. Humans can come up with explanations, but it's still hard for us to dig deeper and ask those why questions.

On a scale of reasoning complexity across species, I'm curious whether there are other animals with the capacity to engage with nuanced why questions. Any zoologists out there?


I think that question inherently requires us to pin down some subjectively fuzzy definitions of what counts as a "why question".

It would be similar to how we have been adjusting our definition of what it means to feel pain (particularly in the context of claims that entire animal species do not feel it, so there are no qualms about preparing them as food in ways that would be considered inhumanely cruel for other species - see lobsters and crabs).

Chimps recognize themselves in the mirror. So they have some theory of mind.

Dogs don't, but unlike chimps, can follow pointing directions. So they have some theory of OUR mind.

My dog is stubborn and doesn't follow some directions right away. I will sometimes see her pause, consider, and ignore the command. If I then remind her in a generic way that I am serious (like a stern generic "hey"), she will then come back and do the command. So it's more complex than a direct cause and effect.

That also sounds like a "why" question that has gone through her head...


Good to think about, thanks.


I don't think that reasoning is straightforwardly true or false; the main problem with the argument is that it compares apples and oranges. Human beings construct beliefs about each other's mental processes and provide reasons for actions which are far from scientifically accurate but which are crucial for us to operate together as a species - and that ability to operate together has been key to our success, despite its many limits.

Which is to say, we don't experience each other as black boxes (regardless of whether we really are). We experience each other as partial unknowns: people often do unexpected things, but we usually construct plausible after-the-fact explanations, and those who consistently do things without any explanation make us nervous.


Would we like to know how these drugs work? Certainly, as it might help us to develop better drugs. Do we need to know how these drugs work? Absolutely not - given a choice between a mystery drug and having surgery without anaesthesia, I'm choosing the mystery drug every time.

Pinning down the need for understanding is a bit tricky. Certainly a person can live with, indeed must live with, some knowledge-without-understanding in their life. But a proliferation of things beyond our understanding in many ways undermines our position, our feeling of power in the world.

It seems like the broad progress of humanity, since the age of ... reason, has come through an increase in both raw data and broad understanding. And while understanding is a tricky thing to describe, the rise of AI is actually helpful in describing what it isn't. Understanding is a kind of knowledge that humans can take into many domains and apply in many ways - mathematics is the prototypical example: the laws of physics, put in mathematical form, can be applied in a multitude of ways. In contrast, trends extracted from raw data may give us successful predictions without telling us the reason.

Suppose we have a psychiatric drug that is a pure "black box": we know that consuming it changes a person's behavior in a somewhat predictable fashion. But that doesn't tell us whether it returns someone to "normal" or simply suppresses the person's other capacities - if we're only treating symptoms, we learn nothing of mental illness (given that our own minds remain black boxes too).

Moreover, it's not true that we deal with other people as black boxes. Rather, "folk psychology", our implicit understanding of each other's motivations, is a key part of our interpersonal relations [1]. The process may not be correct in scientific terms, but that doesn't mean it can be dismissed as a key component of our experience.

And overall, the proliferation of knowledge-without-understanding leaves us without broad control of the world - without, so to speak, a well-organised tool chest.

Edit: Ironically, "tool box reasoning" - the ability to have a group of tools available and use them as appropriate and in combination - is a pretty uniquely human ability, one that gives a person a wide range of options in the world. It is itself very much a black box at the meta-level, but we know quite well that it exists (so I suppose that's a useful piece of general black-box knowledge).

[1] https://en.wikipedia.org/wiki/Folk_psychology


In looking at technological mechanisms, I've come up with a fairly coherent fundamental list of nine. Two of those are what I call process knowledge (generally, expertise in some technical domain) and structured knowledge (generally, scientific expertise).

I've arrived at those terms after spending a lot of time thinking and studying just what "technology" and "science" are. One of the most influential definitions I've come across is John Stuart Mill's: technology is the study of means, while science is the study of causes.

That's ... not fully bulletproof, but it's a very good start. Technology (from the Greek techne) is an art, skill, or craft - a way of achieving some end. Science is knowledge, especially of fundamental causes, principles, mechanisms, or dynamics.

Technology tells you what to do (and/or use, or apply, or manipulate). You don't have to know why it works - as in the anaesthesia example above - only that it works. It's knowledge, but a thinner kind of knowledge than science provides.

Science tells you why a thing works. You may still not be able to influence it (we understand plate tectonics and stellar fusion, generally, but control neither), but the understanding gives predictability around such phenomena. Plate tectonics tells us where earth movements may occur, with what frequency and severity, and the like. Understanding stellar fusion makes sense of the brightness, colours, frequency distributions, masses, and other properties of stars, as well as of the events in their lifecycles.

Jonathan Zittrain recently wrote of a characteristic of AI models that has been bothering him: they provide solutions without explanation. The thing about an AI classifier is that nobody can tell how the classifier itself works. We can throw test data at it and evaluate the outputs, but there's no apparent causal link between input and output. AI is non-explanatory knowledge.

Whether that means it's technology rather than science, or some new domain that's neither technology nor science (insofar as semantic distinctions have meaning), I'm not sure. Though it's a question I'd started asking a couple of years ago myself.

TFA describes this phenomenon in two aspects: first, the causal functioning of the proteins themselves isn't understood, and may be beyond our modelling capabilities; second, the AI-driven prediction of folding topologies gives accurate answers but no causal mechanism.

Another space this resembles is inferential statistics, of which AI is in many regards an outgrowth. Correlation is useful information, but it is not causation. Multi-stage gradient-descent AI is to a large extent just more complex statistics ... but is that all it is, or is it something more, yet still short of a causal or explanatory mechanism? An emergent property of complex statistics themselves?


Once upon a time, I worked in an area of mathematics called Algebraic Combinatorics, which dealt constantly with questions of 'how' and 'why.'

The combinatorics provided a 'how': Given a problem in algebra, we could make up some combinatorics that describe the system, and then prove some theorems that tell you precisely how to manipulate the system to get the answer you want. But this 'how' doesn't necessarily say much about 'why' such a solution must exist.

On the other end, we had representation theory, which gave a sort of algebraic reason for the solution to exist, but would give you no help in how to actually construct the solution.

It was extremely common for interesting problems to have difficulty on one side or the other. You've got a 'how' and spend dozens of grad-student-years (GSYs) trying to discover the 'why', or the reverse.

Likewise, ML is giving us relatively cheap answers to the 'how' question. How do proteins fold? Well, now we have better answers. And those answers, carefully studied, should help with the 'why'. Now, instead of having just a start structure and an end structure, you /should/ be getting an explicit sequence of moves from the neural network. Studying /why/ some moves are better than others should yield progress on the overall why question.

Along the way, getting answers to 'why' should help constrain the search space for AlphaFold, and allow it to come up with better answers faster.

The basic question going forward is 'how do we distill crystallized knowledge from this black-box algorithm?' It's going to be an important question in a range of sciences - basically anywhere that data >> knowledge. Finding answers won't be easy, but, hey, no one said good science has to be easy...


Reminds me of something I've heard often about history. That it's not enough to know that things happened. It's equally important to know why things happened.

I apply this to a lot of fields, and it brings a better level of understanding.

For example, knowing that storing text in MySQL as utf8mb4 is better than storing it as utf8 will get you a job. But knowing why storing text as utf8mb4 is better than utf8 will build you a career.
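
(For the curious, here's a minimal sketch of that "why", in Python purely for illustration: MySQL's utf8 is really utf8mb3, capped at three bytes per character, while real UTF-8 needs up to four, so four-byte code points like emoji simply don't fit.)

  # MySQL's "utf8" (really utf8mb3) stores at most 3 bytes per character;
  # utf8mb4 allows the full 4 bytes that UTF-8 can require.
  for ch in ["a", "é", "€", "😀"]:
      encoded = ch.encode("utf-8")
      print(f"{ch!r}: {len(encoded)} bytes, fits MySQL utf8: {len(encoded) <= 3}")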


You cannot compare something like understanding history with an implementation detail of a DBMS - and an easy one at that, namely whether to handle just the BMP or the whole of standard Unicode. (MySQL made the mistake here of calling "utf8" something that doesn't handle characters outside the BMP.)


You cannot compare something like understanding history with an implementation detail of a DBMS

Sure, I can. I just did. Look at the comment you replied to. It does exactly that.


There's very little you can't compare. Douglas Hofstadter (and others) made the argument that analogy is the foundation of cognition. Our brains are capable of tying together almost any two concepts. We sometimes teach ourselves to avoid certain connections, but the capability is there.

Physics-wise, there will be some degree of resonance between any two circuits that aren’t an exact mirror of each other.

And mirror circuits can be compared most easily of all.


'Why' presupposes there is an answer that a human can understand. Why do self-driving cars work? Because there exists some function that maps raw inputs to outputs that correctly interact with reality. Why? Because the neural network fit the right curves.


Well, you're still not going to get to hand-wave away the "why" question. I've been working on recommender systems for the past five or so years, and I spend a lot of my time working backwards to explain why the recommender recommended something that seemed counterintuitive (sometimes because of a bug, sometimes because of the model, sometimes for perfectly good reasons). If you work on self-driving cars, you'll similarly be researching and explaining "why" one drove into a lake that one time.


Yeah, that's a reasonable counterpoint; you're right. I guess I used a poor example to make a more general point: there will be a time, and it may already be here, when the black boxes we use exceed our ability to understand their causal mechanisms.


I think Feynman’s was the best answer to the “why” question: https://www.lesswrong.com/posts/W9rJv26sxs4g2B9bL/transcript....


I think that's different, first because the neural network has an indirect relationship with the physics, and second because Feynman was operating under a handicap: he was asked to give an explanation in terms the listener (who didn't know any physics) would understand.

In the case of folding, when we ask "why", we might be asking why it folds one way and not another. Causation has consequences: for example, we might want to know whether a small change to one part of the protein will significantly change how it folds. We might say some amino acids cause the fold and others don't.

If you understand the mechanism behind something, you can control it better.

Or, if you're interested in machine learning, you could ask what changes to the neural network make prediction better or worse and which parts of it matter the most, when it might make mistakes, and so on.


It's true that we will never know in ultimate detail how each weight of each neuron must be set and why. But that is simply because the system is too intricate to understand in full.

Just as we try to understand human behaviour not by understanding individual neurons but by reasoning at an abstract level, which we call psychology.

To take your example: building a self-driving car will teach us which aspects are the most complex, and which solutions work better than others for those aspects. Comparing those solutions will allow us to see what the working ones have in common, and give us theories that we can then test.


That's a fair response, but I don't think it's universal. Or at least I should say it presupposes all complex phenomena have a parsimonious abstraction layer a human brain can grok. Which, to be fair, does seem to hold true a surprising amount of the time. But I don't think it's required.


> Why do self-driving cars work?

Unless there has been some breakthrough that I'm not aware of, this isn't true.


They've worked conditionally for a long time now.

It's not level 5 open world "works in all conditions" self-driving - but neither is it level 0 no automation whatsoever.

Nor can humans drive unconditionally.


A three year old can 'conditionally' drive a car. That doesn't mean it works.


Which raises a whole lot of other questions:

* What is understanding?

* How do I know that I understand?

Sooner or later you realise that all theories of knowledge have one fundamental flaw - the problem of the criterion[1]. And if you so choose, you can dismantle any argument with the Münchhausen trilemma[2]. Socrates' favourite trick.

I like Feynman's answer best: "What I cannot create, I do not understand", because in a roundabout way it lands us squarely at the Turing test.

Could we ever know what Consciousness is unless we create it?

1. https://www.iep.utm.edu/criterio/

2. https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma


Well, you can understand a lot of things without being able to create them.

Though the ability to "act upon it" might be a slightly better criterion.

Knowledge is knowing there are icebergs in the water, comprehension is knowing you should slow down


Without resorting to actually playing the silly philosophical word-games, here is how one debate might play out:

Me: Why is "there is an iceberg in the water" knowledge?

You: "because it corresponds to reality".

Me: That makes it a fact, not knowledge.


This subject has been thoroughly discussed by generations of philosophers, and there is a large body of literature about it. If you want to dig into it, you could start at: https://plato.stanford.edu/entries/knowledge-analysis/#KnowJ...


I am well aware and require no introduction. That it has been debated for thousands of years is a fact.

That it is yet unsettled as of 2019 is also a fact.

Even worse than the Gettier problem: the proposition "Tomorrow I may or may not die" satisfies JTB (justified true belief), rendering it useless.

Let's just say that I don't know what knowledge is, but if it's not useful - it's not knowledge.

All Philosophical debates are a form of Kobayashi Maru[1]. There is no "right" answer by design. The purpose is to make you conceptually understand the problem.

1. https://en.wikipedia.org/wiki/Kobayashi_Maru


Perhaps you should use the term "useful knowledge" instead of re-using a term that most people think means something else.

Auden used it:

  And when he occupies a college,
  Truth is replaced by Useful Knowledge;


I calibrate my language in real time to that of my interlocutors.

Some times drawing the distinction is necessary, some times it's not.

Conceptually (for one's own intellectual benefit), recognising the distinction between know-what, know-why and know-how is important. https://en.wikipedia.org/wiki/Know-how


Douglas Adams?

This is the millennia-old distinction between knowledge and understanding (or "wisdom"). I'm pretty sure you could already find it in Homer, the Bible, the Mahabharata, and the Epic of Gilgamesh...


None of the other examples involved a computer generating an answer for you, so they're a slightly worse fit for the analogy here.


This [1] is what Socrates had to say of writing, at least as written by Plato:

"He who thinks, then, that he has left behind him any art in writing, and he who receives it in the belief that anything in writing will be clear and certain, would be an utterly simple person, and in truth ignorant of the prophecy of Ammon, if he thinks written words are of any use except to remind him who knows the matter about which they are written.

Writing, Phaedrus, has this strange quality, and is very like painting; for the creatures of painting stand like living beings, but if one asks them a question, they preserve a solemn silence. And so it is with written words; you might think they spoke as if they had intelligence, but if you question them, wishing to know about their sayings, they always say only one and the same thing. And every word, when once it is written, is bandied about, alike among those who understand and those who have no interest in it, and it knows not to whom to speak or not to speak; when ill-treated or unjustly reviled it always needs its father to help it; for it has no power to protect or help itself."

The irony of my being able to present this only due to the nature of writing cannot be overstated. At the same time it's the exact scenario described here. The nature of understanding vs the nature of knowing, simply replacing 'computer' with 'one who genuinely knows.' The writings of Socrates/Plato really are quite remarkable.

[1] - http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%...


Haven't we had countless movies, books, and episodes of Star Trek that say basically just this?

It's like the tech industry likes sci-fi stories, but doesn't learn from any of their morals.


Isn't "understanding", at its root, an efficient compression of knowledge?

We have an understanding of gravity: Newton's laws explain the behaviour of everything that falls towards anything else. That's compression: you don't need a long list of measurements of falling things, you just need a much shorter list of starting conditions. Newton's laws will then decompress the long list of measurements from it.
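
A toy sketch of that decompression (Python, with made-up numbers): store just the law and the starting conditions, and regenerate the measurement table on demand.

  # "Understanding as compression": the law plus initial conditions
  # regenerate an arbitrarily long table of measurements.
  G = 9.81  # m/s^2, approximate acceleration due to gravity

  def height(y0, v0, t):
      # Height at time t, given initial height y0 and upward velocity v0.
      return y0 + v0 * t - 0.5 * G * t * t

  # "Decompress" ten measurements from two numbers and the law.
  print([height(100.0, 0.0, t / 10) for t in range(10)])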

So what does that mean for protein folding? Do we need more compression, so the knowledge becomes understandable on a human scale? Or does AI need less compression, so it can reason in a way better suited to this problem? Isn't that just a more abstract way of saying AI is smarter than we are?


There is definitely more to understanding than compression of knowledge. Here is one thing: knowledge is often represented as a graph, where the links represent facts that are related in some way - perhaps causally, perhaps because they are analogues in two different scenarios, etc. Understanding would include your ability to quickly traverse this graph, so you can point out connections between facts that are more than one link apart.
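
A minimal sketch of that graph idea (Python; the facts and links are made up for illustration): knowledge is the adjacency structure, and understanding shows up as finding chains more than one link long.

  from collections import deque

  # Facts as nodes, relations as edges (a toy knowledge graph).
  facts = {
      "gravity": ["falling objects", "orbits"],
      "orbits": ["planets", "satellites"],
      "falling objects": ["acceleration"],
  }

  def connect(start, goal):
      # Breadth-first search for a chain of facts linking start to goal.
      queue, seen = deque([[start]]), {start}
      while queue:
          path = queue.popleft()
          if path[-1] == goal:
              return path
          for nxt in facts.get(path[-1], []):
              if nxt not in seen:
                  seen.add(nxt)
                  queue.append(path + [nxt])
      return None

  print(connect("gravity", "satellites"))  # ['gravity', 'orbits', 'satellites']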


I can think of multiple graphs representing the same knowledge, some smaller than others as irrelevant nodes are pruned, paths are shortened, etc. The smaller ones encode that same knowledge with better understanding.


Let's consider understanding a mathematical theory - say, linear algebra. In principle, the entirety of the knowledge of LA is contained in its axioms and the standard rules of propositional logic (i.e. it can be compressed into them). If you know just those, you can recover any theorem or statement of LA. But this is not how human understanding operates at all.

Usually one understands LA when one has memorized many different theorems of LA, has perhaps memorized how to derive some from others, or even the same theorem in several different ways. That doesn't even begin to cover establishing links between LA and other branches of mathematics, all of which a proficient mathematician memorizes individually, along with the connections between them. The mathematician's mental knowledge graph is compressed, but not that much.


That's a fair point and an interesting view.

I would consider the axioms as necessary but incomplete knowledge. You need more knowledge to apply these axioms.

A bit like the axioms being the parts of a car: you can use the car schematic and the parts to build a car, or you can generate the missing knowledge of the schematic by spending lots of time combining random parts.

I am a bit out of my depth here, but maths can be reformulated - I presume like Minkowski did with Einstein's relativity, or Riemann with integration. The result of the reorganized knowledge is deeper understanding.

There is both compression and correction going on, and even correction can be seen as compression by removal of special cases.


Wouldn’t more compressed graphs generally be faster to traverse?


Depends on how you compress them. Some representations are good for storage, others good for different kinds of processing.


Seems to match real-life thinking, too. For an extreme case, see memorization: you learn a set of nodes in a graph, probably highly compressed through whatever tricks you use to quickly memorize and recall information. But by default you learn almost no connections between those nodes, so you can't really reason from the material you memorized. Building those connections requires further mental effort.


It is. But it's also more: if I give you a list of all unix commands and you don't understand them, you won't go far.


Understanding as compression would mean that you look at a function taking (unix command, input file) as arguments and returning an output file. The size of this function's implementation measures your understanding, smaller being better.

If you have no understanding at all, the only possible implementation is an infinite lookup table. As you start understanding commands better, you can implement parts of it more compactly.
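
A toy contrast in Python (my example, not the parent's; `wc -l` chosen arbitrarily): no understanding is an ever-growing lookup table, while understanding is a short rule that covers every input.

  # "No understanding": memorised (input -> output) pairs; grows without bound.
  wc_l_table = {
      "a\nb\n": "2",
      "hello\n": "1",
  }

  # "Understanding": a tiny implementation covering every possible input.
  def wc_l(text):
      return str(text.count("\n"))

  assert all(wc_l(k) == v for k, v in wc_l_table.items())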


While this isn’t at all what the article is about, I think phrases like “knowledge without understanding is meaningless” are a bit dangerous out of context, because they imply that rote learning is bad.

Furnishing your brain with a library of facts on a subject means it now has more items to make connections between, which is what allows understanding to develop.

These days, when I’m learning something new, I’ll try to accumulate as many basic facts as possible. Subjects that seem impenetrable can often be conquered in this way. The more connections you can make, the faster you can develop an understanding, and the more creative you can eventually become.


Please edit the title to "right - knowledge" rather than "right-knowledge". Maybe I'm still waking up, but I read "right-knowledge" as one word.


Fans of Douglas Adams’s Hitchhiker’s Guide to the Galaxy treasure the bit where a group of hyper-dimensional beings demand that a supercomputer tells them the secret to life, the universe and everything. The machine, which has been constructed specifically for this purpose, takes 7.5m years to compute the answer, which famously comes out as 42.


Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?

http://www4.westminster.edu/staff/brennie/wisdoms/eliot1.htm


Knowledge without understanding is, well, just information.


Let’s not let this fact lower how much we value information. You can have knowledge without understanding, but you cannot have understanding without knowledge.


You can, however, get understanding from much less knowledge than many would have you believe. This is basically the sales pitch for every data-driven company out there - if you're not collecting and analysing every last bit of data you can (through our tools, natch), you don't really understand your users, you have zero understanding of the impact of your marketing...


Indeed. From the dictionary: "Knowledge: facts, information, and skills acquired through experience or education." You can learn things by your own experience, or by someone (or something) teaching you.


Understanding and knowledge are, to me, the what and the why.

I understand the US president must stand down after two terms. I know this happens because of my knowledge of the US constitution.

I also understand the US constitution is a set of rules. And I know the vague historical reasons why they were formed.

Understanding and knowledge are, to me, in a constant cat-and-mouse chase: "okay, this happened - but why? Ah, it's because this happened - but why did that happen?" Ad infinitum.

It's cause and effect until you lose interest and are happy saying 'just because it's like that'.


What is "right-knowledge"?

Edit: Can the editor tweak the title? It's not just a little confusing to me, others here have mentioned the same issue.


'Meaningless' does not mean 'useless.' You do not, for example, "really understand" math if you are not well-versed in category theory. And much of high-school and engineering math is taught without any attempt to explain why things are the way they are. You may not realize this, but people learn mostly by getting used to things.


Not disagreeing with you. But two complementary points:

The other ill of much high-school and some engineering math teaching is inadequate focus on "how to apply" the math in modeling real-world problems. That, in turn, hampers the later stages of learning the deeper "why" questions.

Besides, "getting used to" part is apparently inescapable, at least at some point. After all, as great a mathematician as John von Neumann said "Young man, in mathematics you don't understand things. You just get used to them."


> “really understand” math if you are not well-versed in category theory.

I bet I could find a fields medalist who barely knows what a category is.


Isn’t category theory only from like, the 1940s? We’ve had plenty of great mathematicians before then.


Slightly off topic but this goes to show how poorly hyphens/dashes are implemented in modern typography.

Idk if it's because I've been reading the word "right-wing" a lot or what, but when I saw "right-knowledge" I thought it was a specific term, when it seems like they're using it as an em dash even though the Unicode character is an en dash.

I get that it's harder to type a dash, but c'mon - for an official online publication, their publishing software should have some easy shortcut for it.
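
(For reference, the characters at issue, inspected with a few lines of Python:)

  import unicodedata

  # Hyphen-minus vs en dash vs em dash, by code point.
  for ch in ["-", "\u2013", "\u2014"]:
      print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")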


The original article uses a spaced en dash: "Douglas Adams was right – knowledge without understanding is meaningless"

Not as distinguishable as an em dash but much better than an en dash with no spaces.


They could have used a colon instead to avoid the confusion.


Semicolon, rather.


In this case, either one would work. A semi-colon separates two independent clauses, while a colon can be used to define or introduce something. I think it is more natural to use a colon (“:”) here because you’re defining what Douglas Adams was right about, but either one would make the title more clear.


Sure, but you'd have to capitalise the 'knowledge' after the colon. :) If you're willing to make two edits rather than one, then yeah, either would work.


True, and this places a fundamental limit on the extent and types of problems which can effectively be tackled by machine learning and AI, generally.


Could you elaborate on which problems you believe are forever out of reach of AI?


I am thinking of high-dimensional, complex systems subject to significant noise and operating over many time scales, e.g. real-world economic and financial systems. Also, I'm talking about real-world, practical AI that can be implemented to generate its output over a practical time frame (as opposed to a theoretical AI given infinite data and time). I have no evidence to back this up, just my educated guess.


Isn’t gathering data (knowledge) literally how science works, and develops theories (understanding)?


Billions of neurons, and we think we can reduce the nuances of such a complex system to words like "knowledge" and "understanding". To me, those terms are so vague they don't mean anything.


And they're both dangerous without wisdom!



