
I'm using the term "mathematics" in a deliberately hand-wavy fashion because people keep wanting to poke at this from various angles and even though the underlying theory is the same, it can appear that you are receiving different answers simply because you are asking the question from different epistemological frameworks. There's a reason Wittgenstein was the man and I'm not.

If you like, we can assume that different numbers have a concrete, discrete meaning. We can then add commutativity and associativity, maybe induction. At some point, however far down the rabbit hole we want to go, these numbers in a table represent some physical thing: six apples or whatever. These diagrams and other visuals represent some tangible, agreed-upon thing: a diagram of an enzyme, perhaps.

The mapping of various calculi to either physical things or commonly-held concepts is not on the table here. The point is that the concepts themselves can have subtly different meanings for any two practitioners or observers. A doctor in the Middle Ages may check your humours and a modern nurse may check your blood pressure. Imagine for a second that both operations look similar: even though you may have the same number, you have a completely different understanding of what those numbers mean.

So whatever formal rules of mathematics you'd like to have, and whatever visuals or measurements you'd like to take or produce, the end goal is analysis or language creation around a shared interest. If I hold up a coconut and want to trade it, and you make some odd sound, it means nothing. But if you then hold up two bananas, we may be beginning to converse and exchange meaning. (In this case by way of commerce, but this is just one example of thousands.) The visuals enable higher-bandwidth conversations. Do I then think that 1=2, since I had one coconut and you had two bananas? I might. I might not. If we're from completely different cultures we have a lot more work to do.

That's an obvious example; the deeper and much more profound truth is that the same thing can happen with a vapor trail in a particle accelerator. I'm reminded of John Wheeler's idea that maybe there's only one electron in the entire universe. Once the web of meaning reaches some degree of complexity, the human brain shuts down and stops evaluating all the possible alternative meaning paths, and we are not aware of this. In our minds we've thought through everything and are sitting on top of thousands of years of received wisdom. It has always been like this.

So yes, visuals and diagrams are much, much more important than text or formulas, but they're more important because they assist us in the drive for common language creation. Frankly, many times they do a much better job of it than text or formulas do. But the job in both cases is the same.

I will make another stupid analogy. Ever see the scintillating grid optical illusion?[1] It looks like you can see a black dot at every intersection except the one you're looking at directly. Meaning is similar: whatever the concept under observation, it appears as if some analysis can work out the problems. The other concepts, farther away, don't need any work; they're all set in stone. But then, given time, when you look at those, you realize that no, actually there are some problems over here. That area you were looking at before? That's all fixed now.

The lesson here is that there's a cognitive limit to the number of things whose meaning, relationships, and relevance to any one situation you can ponder simultaneously. Past that limit, it's all just "common knowledge" or received wisdom. It has to be this way (no time to go into why). But then you realize that it's all a web, that we're each living in our own constructed grid that's different, and that the goal is to align both the concepts under observation and the "given" concepts among several of us such that we can erect a language scaffold sturdy enough to make progress towards some common goal.

I know that sucks, but that's the best I've got in ten minutes. Hope it helped. There's an entirely other conversation about how we construct these grids, how we share them, and more importantly why this is a feature of sentience and not just a stupid primate trick. No time for that.

1. https://www.illusionsindex.org/i/scintillating-grid




Apparently the ideas and mental concepts in one person's brain are roughly similar to those in another's, at least similar enough that common sets of symbols, semantics, and syntax can be agreed upon such that communication is possible. The fact that various observers get the same impression of the scintillating grid is evidence for some commonality, although I think that in this case it arises from processing in the retina.[1]

I don't see how this obviates the possibility that individuals can have a private language that is in principle not understandable by others, per Wittgenstein. Consider an oenophile with a most sensitive palate, who can describe a wine using a whole set of adjectives that are meaningless to most and which must only vaguely represent the actual sensations being enjoyed by the expert. The expert may have a whole internal vocabulary, and due to the imprecision of terms, one expert's internal vocabulary may differ from all others'. You could say the same for perfumers, cheesemongers, color experts, and others who have extraordinary powers of sensation, which might well be unique to the individual. And the internal language of one individual may not be intelligible to any other, because their olfactory or retinal apparatus may not be exactly alike.

I also suspect that this applies to conceptualization as well as sensation. Quite possibly Einstein's internal language was unique to him.

[1] There was an excellent MOOC on the visual system: "Light, Spike, & Sight: The Neuroscience of Vision" - https://courses.edx.org/courses/MITx/9.01x/3T2014/course/


The universe, fortunately, has a plethora of similarities, such that some basic edge detection and a bit of neural-net work in the retinal layers, combined with a bit of proprioception correlation, provides an enormous amount of shared reality at a 90%+ confidence level, or at least a good enough fake. Just not 100%. GANs are doing a great job of showing us that not even all of that is required to begin the "faking-out" process.
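For concreteness, here's a rough sketch of the kind of "basic edge detection" I have in mind: a standard Sobel-style convolution. The Python/scipy details and the toy image are purely my illustration, not a claim about how retinal circuitry actually computes it.

    # Sketch: edge detection as convolution with small gradient kernels,
    # loosely analogous to center-surround processing in the retina.
    import numpy as np
    from scipy.signal import convolve2d

    def sobel_edges(image):
        """Return a rough edge-strength map for a 2D grayscale array."""
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
        ky = kx.T                                  # vertical gradient kernel
        gx = convolve2d(image, kx, mode="same", boundary="symm")
        gy = convolve2d(image, ky, mode="same", boundary="symm")
        return np.hypot(gx, gy)                    # gradient magnitude

    # Toy example: a bright square on a dark background; the strong responses
    # trace the square's border, i.e. the structure any observer would share.
    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0
    print(sobel_edges(img).round(1))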

But the illusion here is the same: that because we begin processing this natural input before birth, these shared concepts extend to completely invented terms. Most folks never look there; they never wonder why a car is called a car, and there's no downside at all. It's a pernicious concept and a wickedly difficult thing to eventually realize.

I don't see where we disagree. The only thing I'd add is that whether or not you have a completely private language is not important in terms of problem-solving/goal-seeking. For non-formal, non-tech things, using common words and gestures provides the quickest way forward. Once you start creating a self-consistent system of symbols representing state and behavior, though, you might actually be better off if everybody has completely different private languages. The illusion of common understanding where there is none is more dangerous than misunderstanding. There are no red lights or sirens that go off when human communication failures happen. It's all silence. It could be no other way.



