
Propositions Are Not Types (2018) - shalabhc
https://medium.com/@rplevy/propositions-are-not-types-naturalizing-information-content-in-computing-e51b94ad6674
======
fauigerzigerk
Initially I thought that this could be one of those automatically generated,
nonsensical but scientific-sounding texts
(https://pdos.csail.mit.edu/archive/scigen/)

But upon closer reading I now think it's actually the reverse. The author
apparently tried very hard to obfuscate the fact that he does have something
meaningful to say by using utterly nonsensical-sounding language.

~~~
jrsala
I think the author simply uses the vocabulary of his field, and he tries to
communicate effectively with it. As a consequence, his language is terse and
full of references to concepts that are foreign to a run-of-the-mill SWE like
me ("naturalization", "symbol grounding", "intentional-referential behavior").

Effectiveness of communication is relative, of course, and what is appropriate
for an audience may not be appropriate for another. A parallel with
programming languages can be drawn here: mainstream language programmers often
have a knee-jerk reaction to being told "look, in Haskell, this program that
took you 80 lines in your favorite OOP language is just 4 lines of almost only
special characters that look like I banged my head on the keyboard!" but it's
just a matter of easing into a different language and different concepts.

~~~
fauigerzigerk
I don't disagree with you in principle, but I read a lot of papers from fields
I'm not sufficiently familiar with and I know what that is like.

I also know from my own field that some people like to write for effect rather
than effectiveness.

~~~
jrsala
I see where you're coming from. I sometimes get that feeling too. The way I
approach this is by asking myself if the author could have expressed their
ideas with simpler words or if some meaning would have been lost had they
tried to simplify. Could I have reformulated anything without loss and without
paraphrasing or resorting to examples?

It's like Dijkstra's famous quote: "The purpose of abstraction is not to be
vague, but to create a new semantic level in which one can be absolutely
precise". Did the author need to invoke these abstract concepts and
complicated wordings to be absolutely precise or did they throw around
abstractions in over-engineered sentences to make themselves look intelligent?

Since I can't claim to understand the topic and the author's message fully, I
can't be certain it's one or the other. All I have is a subjective feeling,
but I think the author was trying to be precise and that each word had its
genuine place at least in the author's point of view.

For example, when I read the last sentence of the introduction "following
Hutto & Myin’s solution that real-world content depends on shared scaffolded
practices that augment basic agents, we explore socially situated computing as
an approach to naturalizing information content in computing", my internal
bullshit alarm was ringing pretty loud, but it turns out each of the concepts
gets expanded upon in the rest of the text.

I think overall this text was a challenging read but one benefit of it beyond
the author's message is that I learned of the existence of this topic and line
of thought in the first place, and it certainly piqued my curiosity.

~~~
fauigerzigerk
It's absolutely possible that you are right and I'm not doing him justice.

------
evdev
The content of this whole wing of thinking is "you have to have a brain-like
system to have a brain-like thing". Which, fair. But what's crazy-making about
it is that people have decided this sort of insight says something about
mathematics and metaphysics, which it _does not_.

For example, this:

 _This observation which they refer to as the “hard problem of content” or the
“covariance-is-not-content principle” is that systems acting on covariance
information, while acting on information, do not constitute content-bearing
systems, because to bear content is to embody claims about how things stand,
when in fact they merely embody capacities to affect the world._

is just complete nonsense, and to extract the charitable reading I put in
quotes above, you have to read closely for paragraph after paragraph to see
that what's going on is that the word "content" is reserved to mean "things
brain-like things do in a brain-like way to other brain-like things in a
context built for brain-like things."

Again, okay! But: it's _wildly_ misleading to frame this as being about
mathematical logic or the metaphysics of symbols, syntax and semantics.

------
uryga
the title seems to be an unfortunate pun. from a quick look at the article,
it's mostly cogsci stuff, where "proposition" has a very different meaning
from the one "Propositions as types" normally refers to:

 _"The proposition is a concept borrowed by cognitive psychologists from
linguists and logicians. The proposition is the most basic unit of meaning in
a [mental] representation."_ [1]

the article seems to mostly be talking about AI and the problems of making it
"actually refer to the real world". so i think the title could be paraphrased
as something like "Information about the world ([mental] propositions) is hard
to represent with symbolic, digital things (types)".

[1] http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Dictionary/contents/P/proposition.html
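(for readers who only know the PL/logic sense of the phrase: the usual "propositions as types" reading, i.e. Curry-Howard, treats a type as a proposition and a program of that type as its proof. a minimal sketch in TypeScript, which here just stands in for a real proof language; the names are illustrative:)

```typescript
// Under Curry-Howard, a total function A -> B is a proof of "A implies B",
// and the pair type [A, B] plays the role of the conjunction "A and B".
// This function is then a proof that conjunction commutes: A ∧ B → B ∧ A.
function swap<A, B>(pair: [A, B]): [B, A] {
  const [a, b] = pair;
  return [b, a];
}

// Producing a term that inhabits the type is "proving the proposition".
console.log(swap([1, "proof"]));
```

this is the sense the article's title is punning against; the cogsci "proposition" quoted above is a different beast entirely.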

------
TheAsprngHacker
I'm very confused about what this blog post is trying to argue. Most of what
it discusses isn't related to the Curry-Howard correspondence at all?

At the risk of sounding anti-intellectual, perhaps Orwell's "Politics and the
English Language" [0] is relevant here?

[0] http://www.orwell.ru/library/essays/politics/english/e_polit

~~~
uryga
yeah, Ctrl+F "type" gives 4 (non-title) occurrences, all of them in the first
two paragraphs. the rest seems to be related to AI and the problems of making
it "actually refer to the real world". weird!

EDIT2: i think the author is actually talking about cogsci "mental
propositions". (see my other comment)

------
mcguire
On what grounds is it argued that humanity operates on "content" grounds? If
the stuff in my head operates purely by syntactic, formal, biological,
chemical, physical methods, then the introduction of semantics or "content"
is a red herring, even if you argue that multiple humans are the magic
factor, it seems to me.

------
auggierose
Sorry. This text is absolutely useless to me. What is it trying to say? Don't
get it, although I agree that propositions are not necessarily types.

~~~
uryga
> propositions are not necessarily types

could you expand on this?

------
lupire
This nonsensically-titled article is somewhere between a rehash of "the map is
not the territory / one cannot pass from the informal to the formal by purely
formal means" and Sokal-esque gobbledygook.

------
otikik
I couldn't follow this. I'm usually capable of following technical papers. The
jargon was as thick as a jungle; it seems auto-generated. Use plainer language
next time, please.

------
ppod
Has the author read Brandom?

------
nathias
bad title, good article

