Hacker News

I experienced the inner monologue you describe for several years. It haunted my dreams when working with Ruby. But one day a rhetorical thought suddenly dawned on me that has since changed my perspective quite dramatically...

"...if I'm being incessantly bothered by what I perceive as the nagging inefficiencies of some programming language's implementation, maybe I'm not thinking about or relating to programming languages (in the large) in the way I should be..."

If all programming languages are merely tools for communicating instructions to a computer, then why doesn't everyone view human language as merely a means to an end as well? Surely most would agree that language is more than simply a means to an end, and that it does far more than transmit information between parties. If efficiency, lack of ambiguity, and so on were the paramount goals of human language, then formal logic, or perhaps even a programming language for interpersonal communication, would surely be more fitting than natural language!

So why do we insist on communicating with each other through what is often such an abstract, ambiguity-filled medium?

I'll let Wikipedia elaborate on my behalf: http://en.wikipedia.org/wiki/Pragmatics, http://en.wikipedia.org/wiki/Deixis, and http://en.wikipedia.org/wiki/Literature

tl;dr: it is trivial, even natural, for a literate individual with the proper context to understand concepts in language that seemingly transcend the words themselves. These notions would be (and are) exceedingly difficult to formalize, and any formal expression of them would cause the output to grow exponentially.

Ever try explaining a joke to someone who didn't "get it"? It takes a lot more "space" to convey the same sentiment than it does with someone who "got it".

So what does this crazy rant have to do with anything? Well, aside from revealing that I am a complete nerd, it speaks to my approach to software engineering today.

We have to let go of the machine if we ever want to really move the state of the art forward.

There is an infinitude of expressible ideas, but lacking the proper medium to abstract the expression of these ideas formally (the way natural language and our brains do, well, naturally), we will never get a chance to find out what we don't know!

"We're doing it wrong" is not exactly the sentiment I'm trying to express, but it's sorta that. Maybe.

Hope this comment made some sense. :) It's 4 AM, after all.




I'm afraid I couldn't hang on for the ride... perhaps I'm like the one who doesn't get the joke and needs the much lengthier explanation!

It sounds like you're saying that programming languages are constrained on two ends: on one end by being too tied to the underlying microarchitecture, and on the other end by being interpreted by our minds which think about programming in terms of language features rather than Platonic ideals.

Assuming I've come at least close to understanding your point, you're saying that by thinking too closely about what I'm trying to do at a low level, I'm negatively affecting my ability to write idiomatic Ruby code that does useful things?

This is probably true; it's one of the curses of being a kernel developer. I think you want the people who care deeply about how bits are laid out in memory to be the ones writing your operating system.


I wasn't trying to insult you or anything; my comment was just my (extremely) sleepy attempt to express an idea that's been shuffling around in my mind for a while now.

At times there's nothing I want to do more than solder components onto a circuit board and make a radio or something. It's really, really gratifying to make something work that's so "magical" (from a certain point of view, radio is pretty magical to me) and completely understand how everything works from start (bare materials) to finish (a working radio!).

I guess what I was trying to say was that if I ever wanted to make a CPU comparable to, say, what Intel produces today, I'd have to give up my soldering gun and any notion of manufacturing the CPU with a discrete process (like soldering individual transistors) and instead adopt an entirely new approach, like maybe electroplating. In any event, it's an approach that allows me to make incredibly powerful things at the expense of being able to "use my hands".

Experts are always going to need to know (and I mean really KNOW) the underlying fundamentals of their field regardless of how "high level" their work becomes - see theoretical physics, et al. With that in mind, I think people who care deeply about how bits are laid out in memory are exactly the same people who will always be at the forefront of computer science and software engineering - even if 99% of their practical output in life is at a level much higher than bits. :)


> It is trivial for a literate individual with the proper context to understand concepts in language that seemingly transcend the words themselves. These notions are exceedingly difficult to formalize, and any formal expression of these ideas would cause exponential growth of the output.

My interpretation: "I find formal logic's ineptitude at resolving ambiguity disappointing. But humans resolve ambiguity without breaking a sweat. Is it possible to generalize logic to encompass ambiguity?" I think the answer you seek lies in Probability Theory.

> a word to the wise is sufficient. [1]

How does a brain quickly derive intended meaning from an ambiguous lexicon like the English language? Realize that an infinitude of nuanced interpretations are equally possible, but not equally probable. Suppose Alice says to Bob "The sea/c". If the topic was Marine Biology, Bob will expect (assign a high probability to the hypothesis) that Alice meant the ocean. If the topic was Typography, then Bob will expect that Alice meant the glyph. Similarly, computers that deal with ambiguity (e.g. speech interpreters, facial recognition) assign higher probabilities to some interpretations than others.
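To make the "equally possible, but not equally probable" point concrete, here's a minimal Bayesian sketch of the Alice-and-Bob example. All the probability numbers are invented for illustration; they aren't drawn from any real corpus:

```python
# Toy Bayesian disambiguation of Alice's "sea"/"c" utterance.
# All probabilities below are made up for illustration.

PRIOR = {"ocean": 0.7, "glyph": 0.3}  # P(interpretation) before any context
LIKELIHOOD = {                        # P(topic | interpretation)
    "marine biology": {"ocean": 0.90, "glyph": 0.01},
    "typography":     {"ocean": 0.05, "glyph": 0.80},
}

def interpret(topic):
    """Return the most probable interpretation given the topic, via
    Bayes' rule: P(meaning | topic) ~ P(topic | meaning) * P(meaning)."""
    posterior = {m: LIKELIHOOD[topic][m] * PRIOR[m] for m in PRIOR}
    total = sum(posterior.values())
    posterior = {m: p / total for m, p in posterior.items()}  # normalize
    return max(posterior, key=posterior.get)

print(interpret("marine biology"))  # ocean
print(interpret("typography"))      # glyph
```

Same word, different posterior: the topic shifts which hypothesis wins, which is all "context resolves ambiguity" means here.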

> If efficiency, lack of ambiguity, etc., were the paramount goals of human language, surely formal logic, or perhaps even a programming language for interpersonal communication would be more fitting than natural language!

Computers can communicate practically instantly, but humans are bottlenecked by how quickly we can move our lips. Therefore, I would expect spoken languages to be optimized toward articulating as little as possible. One technique is an overloaded vocabulary. I think humanity prizes the ability to compress information down to a single word. This unfortunately comes at the cost of computer-level clarity. But I mean, "one-liners" do make for great movies, don't they?

> Ever try explaining a joke to someone who didn't "get it"? It takes a lot more "space" to convey the same sentiment than to someone who "got it".

As far as I know, humor is one of those things that scientists don't fully understand yet, though some have a rough idea. I'm convinced music and humor are related in the sense that both set up an ambiguous expectation/motif/theme/context, and then play on that expectation.

Music is defined by tension and resolution: tension being ambiguity and resolution being validation. Google an analysis of Beethoven's 5th, and it will say that the opening intervals create tension because the key is uncertain to the listener. Google music theory, and you'll learn that tonal harmony is built around the tension between the dominant and the tonic. Occasionally, rather than deliver the punchline, a composer will leave his or her listeners hanging on a suspended chord or a leading tone. To experience this cliffhanger, listen to a track with a bass drop, but turn it off right before the actual drop.

Similarly, humor revolves around setting up an ambiguous expectation and resolving it. The proposed neural mechanisms vary, but jokes always seem to involve a setup and a punchline that is unexpected, yet satisfying. And I think this is because the context is resolved. pg shared a related idea in one of his essays about ideas: "That's what a metaphor is: a function applied to an argument of the wrong type." [2]

With the above in mind, I believe it's possible for today's computers to predict whether a human will find something funny or not. But unless they're taught which topics humans consider relevant (i.e. deixis), computers will find themselves at a significant disadvantage.

> We have to let go of the machine if we ever want to really move the state of the art forward.

Probability theory is already used in AI. That's really cool, but I don't think the art as a whole needs to move forward. Though both are Turing-complete, speech and programming languages are optimized very differently. I've already pointed out the different constraints. But also notice that while programming primarily aims at conveying instructions, human speech encompasses a wider spectrum of goals. Meticulous clarity has a higher impact on instructions like "automate this task" than on declarations like "broccoli tastes weird".

> There are an infinitude of expressible ideas, but lacking the proper medium to abstract the expression of these ideas formally (like natural language and our brains do, well, naturally) we will never get a chance to find out what we don't know!

I'm not sure exactly what this is getting at. Incidentally you may enjoy learning about Solomonoff Induction. [3]
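For the curious, the core idea behind Solomonoff Induction (weight every hypothesis consistent with the data by 2 to the power of minus its description length, so shorter explanations dominate) can be shown with a toy model. This is a drastic simplification of my own devising: real Solomonoff Induction ranges over all programs for a universal machine and is uncomputable, whereas here the "programs" are just repeating bit patterns:

```python
# Toy simplicity prior in the spirit of Solomonoff Induction.
# Hypotheses are repeating bit patterns; prior weight is 2**-length.
from itertools import product

def consistent(pattern, observed):
    """True if repeating `pattern` reproduces the observed prefix."""
    reps = len(observed) // len(pattern) + 1
    return (pattern * reps)[:len(observed)] == observed

def posterior(observed, max_len=4):
    """Give each repeating bit pattern up to max_len the prior 2**-len,
    discard patterns inconsistent with the data, and renormalize."""
    weights = {}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            pattern = "".join(bits)
            if consistent(pattern, observed):
                weights[pattern] = 2.0 ** -n
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

post = posterior("010101")
best = max(post, key=post.get)
print(best)  # 01  (the shortest pattern that explains the data wins)
```

Both "01" and "0101" reproduce the observed string, but the shorter hypothesis gets exponentially more prior weight, which is the Occam's-razor flavor of the idea.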

[1] http://paulgraham.com/word.html

[2] http://paulgraham.com/ideas.html

[3] http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solo...



