Teach Yourself Logic: A Study Guide [pdf] (logicmatters.net)
392 points by furcyd 89 days ago | hide | past | web | favorite | 44 comments

FYI, there are many correspondences between logic and type theory / functional programming. I have found that, as a programmer, many ideas are more accessible to me in type theory than in logic. Languages like Haskell are great for exploring these relationships.

This is also known as computational trinitarianism - https://ncatlab.org/nlab/show/computational+trinitarianism

Robert Harper, the computer scientist who coined "computational trinitarianism," has a series of lectures on the foundations of type theory, including some commentary on these correspondences - https://youtu.be/9SnefrwBIDc.

> many ideas are more accessible to me in type theory as opposed to logic

But how do you understand type theory without some basic understanding of logic?

It's possible, but I guess I should have specified "logic outside the standard curriculum of computer science studies." For example, I think understanding bits of modal logic through how monads behave is much more fruitful for a CS person. I also want to add that the correspondence between proofs and programs is known as the Curry–Howard correspondence; this is the more general term under which these relationships between type theory and logic are discussed. The term "computational trinitarianism" adds a further correspondence to categories (as shown in the link above).
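The Curry–Howard reading can be made concrete in a few lines of Haskell. The names below (andComm, implTrans, orElim) are my own illustrative choices, not standard library functions:

```haskell
-- Propositions-as-types sketch: conjunction A ∧ B is the pair type (a, b),
-- implication A → B is the function type a -> b, and disjunction A ∨ B is
-- Either a b. A program of a type is a proof of the proposition it encodes.

-- A proof that conjunction commutes: (A ∧ B) → (B ∧ A).
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A proof that implication is transitive: (A → B) → (B → C) → (A → C).
-- The proof term is just function composition.
implTrans :: (a -> b) -> (b -> c) -> (a -> c)
implTrans f g = g . f

-- Disjunction elimination (case analysis): (A → C) → (B → C) → (A ∨ B → C).
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f _ (Left x)  = f x
orElim _ g (Right y) = g y
```

The point is that the type checker is acting as a proof checker: a term of the stated type exists exactly when the corresponding proposition is (intuitionistically) provable.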

Also, "Curry–Howard–Lambek correspondence" is the real name for "computational trinitarianism." I like the latter better because I don't like naming mathematical things after people, since the names have no bearing on the actual meaning.

Ok, but students need to understand at least ¬, ∧, ∨, →, and there are other useful equivalences, such as between sets and (characteristic) functions, etc.
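The sets-as-characteristic-functions equivalence is easy to sketch in Haskell. These are my own toy definitions (not Data.Set): a subset of a type is represented by its characteristic function, and the connectives become pointwise operations on predicates:

```haskell
-- A set of values of type a, represented by its characteristic function.
type Set a = a -> Bool

member :: a -> Set a -> Bool
member x s = s x

union, intersection :: Set a -> Set a -> Set a
union s t x        = s x || t x   -- ∨ becomes set union
intersection s t x = s x && t x   -- ∧ becomes set intersection

complement :: Set a -> Set a
complement s x = not (s x)        -- ¬ becomes set complement

-- Example set: the even integers.
evens :: Set Int
evens = even
```

For example, `member 4 (intersection evens (\n -> n > 0))` evaluates to `True`: membership in an intersection is just conjunction of the two characteristic functions.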

They probably don't. It's similar to how everyone who plays poker is somehow an expert at game theory.

I'd like to recommend Logic 2010[1], an application with which you can do logic exercises to learn the natural deduction system from "Logic: Techniques of Formal Reasoning" by Kalish, Montague, and Mar.[2]

The program is officially available only for macOS and Windows, but with a little work it's possible to get it running on Linux, since it's just Java.

The exercises are fun, and the program will not only check your work to make sure it's right, but also point out where and sometimes why it's wrong.

[1] - https://logiclx.humnet.ucla.edu/Logic/Download

[2] - https://www.amazon.com/Logic-Techniques-Reasoning-Donald-Kal...

Oh man, brings back memories. I used an earlier version of this program as a philosophy undergrad at UCLA in the early 2000s. I LOVED doing the homework in my logic classes!

I'm assuming that program is more approachable than something like Coq or others? I used ACL2 a bit in college, and could imagine people being frustrated trying to learn it on their own.

It's infinitely more approachable than Coq/PVS or any other theorem prover, since it was designed for teaching, not as a research tool or for actually verifying properties of complex systems.

Studying logic taught me that one of the more important distinctions in life is identifying when things are always true vs. sometimes true. We are taught to group things that are mostly true into the always-true category. This is effectively lossy compression, and it can introduce serious error into your line of reasoning.

That doesn't make sense to me: in real life, most of our reasoning comes from syllogisms for practical reasons. And if something is consistently demonstrated to be true, chances are it will continue to be so, so we take our chances with that. It isn't lossy anything; it is life.

There is mathematical logic, which is deduced from axioms and is considered universally true unless the axioms are false.

Our everyday "folk" logic is always contextual and ambiguous. Its deductions are probabilistic rather than deterministic. They are sometimes true.

There’s a framework for that: https://en.m.wikipedia.org/wiki/Fuzzy_logic
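As a rough sketch of that framework, here is fuzzy conjunction/disjunction/negation using Zadeh's min/max operators (one common choice among several; the names are my own):

```haskell
-- Fuzzy-logic sketch: truth values range over [0, 1] instead of {False, True}.
-- In the Zadeh semantics, AND is min, OR is max, and NOT is subtraction from 1.
type Fuzzy = Double

fAnd, fOr :: Fuzzy -> Fuzzy -> Fuzzy
fAnd = min
fOr  = max

fNot :: Fuzzy -> Fuzzy
fNot x = 1 - x
```

So `fAnd 0.7 0.4` is `0.4`: conjoining a "mostly true" claim with a "somewhat true" one yields something only "somewhat true", which is exactly the always-true vs. sometimes-true distinction the thread is circling.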

It is life, but it is also still lossy.

I don't understand your confusion: you're both right. Everyday decisions are logically chosen from our perspective but still lossy in absolute terms. That manifests when we are frequently wrong about our presumptions... every damn day, more or less.

Would you describe JPEG as "making practical generalizations and assumptions based on scarce information"? Because that is what we do in epistemology. We don't lose information; we never had it in the first place. If I compared a way of deduction with a pool of complete information, assuming such a thing can exist, I would be comparing apples with oranges.

What's wrong with comparing fruit with fruit?

Would you discuss what type of hammer to use to break a banana? Apples and oranges is an analogy for the irrelevance of properties when comparing two different things. Is that clear to you?

And I was using lossy compression to abstract up a level to where objects in the same category share attributes. Is that clear to you?

Are you saying that epistemology starts from absolute knowledge and whittles it down to a mangled approximation in order to show it on a web page?

Are you conflating compression with image compression?

The abstract notion of compression is reducing the size of something by trading off some of its other properties.

You can compress a gas, data, an image, or audio; rocks can be compressed; brains can be compressed.

I'm glad we are on the same page.

As a side note, it's always kind of funny to see hackers/computer types try to frame discussions in terms of things which are familiar to them, and these comments are often highly upvoted because of the explanation. In this instance, "lossy compression". But few people ever seem to question whether the domain of such categorisations is at all applicable to real human situations, or even whether it's possible to make such categorisations. Is the method of the natural sciences or mathematics applicable identically to the social sciences? Hackers often assume this to be the case.

I think there's also an element that hackers want to feel as though the concepts they've developed, which are very powerful in computer science, should therefore be key to understanding the world as a whole. It's partly eagerness to apply one's knowledge and partly hubris and an unwillingness to defer to philosophy, sociology, or critical theory, which they view as less rigorous (and therefore bad).

Is this phenomenon uniquely pronounced among hackers/computer types in your experience? In a sense, your comment could be summarized as "people often try to apply already-available concepts and reasoning to new problem areas". But it's uniquely funny when hackers do it.

On the flip side, how many philosophy, sociology, and critical theory types reach for concepts from biology and physics? And how well do you think they grasp and employ those concepts compared to specialists in those fields?

I think any specialist who believes their specialty will necessarily magically generalize is presuming too much.

FWIW, as a formerly deeply-invested philosophy/critical theory type for over a decade... I have to say I had to begrudgingly admit that math and the hard sciences are unequivocally more rigorous than the soft ones. Each is suited to its purpose, but it doesn't help to pretend that Zizek is as rigorous as Knuth.

I think that philosophy offers much more general mental models and abstractions than computer science, which isn't to say they're better or worse, but applicable differently. To that end, I also feel as though many of the concepts hackers use to describe the world are actually more concrete instances of the ones philosophers might use. There's also a difference between analogy (what I have seen philosophers use more often) and trying to fit the world to the model (what I have seen hackers do more often). I also think it's one thing for a philosopher to talk about quantum mechanics (perhaps not understanding mathematical intricacies or even the ideas themselves) and another thing for a physicist to do the same with a philosophical concept. I think the physicist will have more success than the philosopher, just because the philosophical case can be made more abstractly in most circumstances.

My issue is not the application of the concept, but the apparent lack of consideration for if the concept is valid for the domain to begin with, and secondly if so, to what extent it models the world or ought to model the world. Are the levels of abstraction matched? Is it better to think about this with dialectical logic? Do the concepts use represent any particular ideologies? What sort of instrumental reason do they employ? The complete disregard for these is what makes them "funny" to me. When I saw the mention of lossy compression, I (maybe unjustifiably) cracked a smile. It's not just a mismatch in ways of thinking, it's a total mismatch in the content of the concepts.

So I would say it is more pronounced among hackers, and I feel that there is a strong current (perhaps also due to the very industry-orientedness of the field) to eschew what philosophy has to offer. This isn't only in explaining the world, but also questions of how the world should be (several other commenters over the years here have noted the general want to ignore issues of ethics in computer science). Maybe I just think it's funny because I see it very often and I only see it on Hacker News and Reddit, usually leading with "This <complex real-world sociological phenomenon> can be thought of as a...", which is a trend I've seen in some branches of philosophy itself which tries to excise the "mystical" content of "bygone" theories using new techniques, which significantly weakens the original idea's applicability in favor of putting it into a dogmatic (perhaps ideology-laden) analytical framework.

And on the last point on rigor, I agree, though I do want to salvage some dignity in my view of the world when I say that rigor as formalised through equations has its uses in some views of the world, and descriptions of ideology, power, the state, metaphysics etc. have their uses in other parts. There's also a difference between an argument (which I think must be rigorous) and a critique (to which I don't consider the concept of rigor to apply to in many cases, if we want to retain the normative force of them).

The phenomenon of people pulling in and trying to explain problems from outside their specialty occurs in every discipline and is a part of normal scientific work. Usually the overwhelming majority of these attempts fail, but they are useful nonetheless, because in failing they help to define the limits of the specialty. Ironically, your complaint that they do this without considering "if the concept is valid for the domain" is unanswerable until an attempt is made and its claims critiqued.

That aside, you raised good questions. I think it would be useful if you turned your list of critical questions into a template for criticising instances of analogous reasoning anywhere. It would help greatly in evaluating these attempts.

Why is it funny that people understand things in terms of their experience? The original comment about "lossy compression" essentially stated that humans don't have perfect thinking/memory/processing of probabilities. Do you disagree? Does it matter whether or not the wording of this idea uses computer science terms?

Sure, technical people might try to apply methods from technical disciplines to social ones. Sometimes it works, sometimes it doesn't. As a rule, however, I don't think anyone assumes mathematics will completely replace social studies. We might use statistics to better understand social studies, but the social context is still distinct from pure math, and always will be.

Some of the best hackers are the ones who DO borrow ideas from other disciplines. And all other disciplines can often benefit from applying increasingly complex statistical methods, aided by computer and data scientists.

Critical theory certainly is less rigorous when it is used to dismiss a work rather than engage it seriously on its own terms.

Are you saying that categorising things into categories familiar to devs might be lossy?

The universe is logic and life is engineering.

Side-side note: all people frame and discuss things in terms of things familiar to them, at least to some degree. Metaphor is a fundamental part of the operating system of our cognition. We are forever mapping useful concepts from one domain to another. Indeed, human languages are filled with rich and sometimes subtle metaphors. Heck, you have likely at some point used the carpentry metaphor "you've hit the nail on the head" to communicate that someone has explained something in a way you think is "on the money", circularly explaining one metaphor with another.

Not only is it useful to attempt to map powerful concepts from math and computer science onto the social and other sciences, it is also useful to map powerful concepts from the social and other sciences onto computer science! In the search for meaningful and useful abstractions you might just hit upon something useful among the junk, and it is a necessary part of the search.

Making assumptions is useful, and sometimes it leads you astray. There are good analogies, analogies that are ok but where something feels off, and bad analogies. As the domains get further apart and the cost to acquire accurate models in each domain increases, the search cost of answering whether the mapping applies also increases. I would argue the search cost is near infinite. In practical terms it is too high for most people as it relates to most things. The time and other resources we would use to compute the applicability of any mapping are finite, so there is an economics to it. Metaphor/analogy is a shortcut, trading a lower search cost for useful abstractions against a greater possibility that the models are inaccurate and/or lower resolution. Guess and check is often a reasonable strategy. Sometimes it's the only one.

All people are engaged in hubris to some degree as their confidence in their conclusions is almost never proportional to the completeness of their models.

I find it likely that a vast multitude of domains contain concepts useful for understanding the world as a whole, so it really doesn't surprise me that anyone in any domain who possesses one of those concepts feels it is not "the key" but "a key" to understanding the world as a whole. It likely actually is such a key.

The search cost for any one person to be in possession of all the keys at once is greater than anyone can afford. So to find it amusing that one group of people economize their understanding of the world by leveraging the tools in their possession, in lieu of gaining a prohibitively expensive deep understanding of all domains of knowledge, is to fail to realize how impossibly expensive an accurate view of the world is to attain.

The hubris is in thinking it's possible to compute whether something is applicable or not working from a desperately incomplete set of models and expending very few resources in the computation.

Everything is mostly true except truth about truth.

I've been working my way through Kleene's "Mathematical Logic". It's a great read so far, working through the details and proofs carefully. I'm looking forward to the chapters on completeness and the like. I was mostly inspired to get his book because of the Kleene star, who knows why.

Also, though very different, I just got GEB for Christmas, so I guess I'll be reading that finally now too.

A much shorter guide by the same author in 2 pages instead of 90: https://www.logicmatters.net/tyl/shorter-tyl/

Off topic: I'm honestly kinda surprised this kind of comment isn't built into HN by this point. There's often so many good comments from old articles we rely on adventurous users to dig up.

It is. Ctrl + F “past” and you’ll see it between hide and web up top.

I've been following this guide for several months. It's made some excellent recommendations so far. Prospective type theorists, students of linear logic, etc. will have to venture beyond the reading list, but having a good grasp of the modern fundamentals of logic has made a big difference to me in how well I can follow texts like HoTT.

And for leisure, www.logicomix.com (a surprisingly good graphic-novel "biopic" of Russell and some episodes of the foundations saga).

This guide covers many different topics and specialties related to logic so it may help to introduce a taxonomy so we don't get intimidated by the breadth of Dr. Smith's scholarship. Let's break it down into 5 topics:

1. Using a well-known and widely accepted system of logic to produce proofs and solve problems. A working man's logic is propositional logic (AND, OR, NOT, and the material conditional) plus first-order predicate logic (the quantifiers "there exists" and "for all" over bound variables), natural deduction, and ZFC set theory.

2. "Non-standard" logic systems which are somehow "better" or "more expressive": second-order logics, modal logics, many-valued logics, etc.

3. Meta-mathematical investigations into the properties of different formal logic systems: model theory, proof theory, forcing, consistency, completeness, etc.

4. The philosophy of logic: Semantics, sense/reference, truth, nominalism/realism, theory of descriptions, etc.

5. Logic and computability: lambda calculus, recursive functions, Turing machines, complexity classes, etc. This is somewhat tangential to logic proper, but there are connections, and Dr. Smith has a section on it, so it's worth enumerating.

While everyone will need to know (1) as a working man's logic, the other topics are more or less independent of one another and can be approached in any order.
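As a tiny taste of topic (1), the propositional fragment can be given a toy evaluator in a few lines of Haskell (my own encoding, not something from the guide):

```haskell
-- A minimal propositional-logic evaluator: formulas over named variables,
-- evaluated against a valuation (an assignment of Bool to each variable name).
data Prop
  = Var String
  | Not Prop
  | And Prop Prop
  | Or  Prop Prop
  | Imp Prop Prop   -- the material conditional

eval :: (String -> Bool) -> Prop -> Bool
eval v (Var s)   = v s
eval v (Not p)   = not (eval v p)
eval v (And p q) = eval v p && eval v q
eval v (Or  p q) = eval v p || eval v q
eval v (Imp p q) = not (eval v p) || eval v q  -- false only when p is true and q is false
```

For instance, `eval (const False) (Imp (Var "p") (Var "q"))` is `True`: the material conditional is vacuously true when its antecedent is false, which trips up almost every beginner.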

For people just starting out and finding this study guide pretty overwhelming, let me re-emphasize two of Dr. Smith's suggestions for starting points: Smullyan's First Order Logic and Halmos's Naive Set Theory. These two books will get you to the same depth of understanding that 95% of working mathematicians need or apply in their day-to-day work, and it's a fact that 95% of modern mathematical theories are built on (formalized within) ZFC set theory and first order logic.



I also suggest the online game The Incredible Proof Machine. It's extremely helpful to have a computer formally check your proofs when you're just starting out in logic, because it's incredibly easy to "cheat" and skip steps or use rules that "make sense" but aren't part of the formal system you're using. TIPM lets you do that without learning any specialized syntax.


On pages 43-44, the author mentions a member of my school's faculty, G. Tourlakis. The criticisms are valid and his textbook is convoluted. I appreciate the recognition of its unnecessary formalism and complexity.

For a 1000-level logic course that Software Engineering, Computer Engineering, and Computer Science students must take, it is extreme.

Overall, interesting to see!

The author is referring to a different Tourlakis text than the one used in the 1000-level logic course for compsci students.

This is a pretty epic map.

Does anyone know of any others for other domains of knowledge?

Is there an ePub version of the guide available anywhere?

