
This recent 'callbacks as goto' discussion is so utterly mundane that I've all but failed to convince my wrists to so much as twitch sufficiently to click on any of the links; just the title is enough to drive me to drink.

Callbacks are in no way "this generation's goto", they do not in any way inhibit the ability to analyse or prove correct a program, and all alternatives to callbacks amount to incredibly fancy syntax sugar for hiding what is really happening behind the scenes anyway.

A callback is a function call, or put another way, a specification of an exact set of arguments grouped in a formal specification that is sent to a deterministic little machine (the function) to execute immediately. On completion the machine will provably produce the result it promised to return in its owner's manual (i.e. the declared return code).

None of this is "goto" in any way. In goto land, without knowing the exact set of inputs a program receives and whole-system simulation, it is utterly impossible to make assumptions about even the most minimal pieces of state in the program at any given moment.

Contrast that with a function call. A function call is a fixed guarantee about the state of the system at the point a machine begins executing. Furthermore, a function call has a vastly restricted set of valid behaviours: at some point it must terminate, and prior to termination, update the system state (the stack) to include its return value. And upon termination, it causes the calling machine to resume operation.
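
To make that concrete, here's a minimal sketch in plain JavaScript (names are illustrative): the callback is just an ordinary function call, and control provably returns to the caller once it finishes.

    // A callback is nothing more than a function passed as an argument
    // and then called like any other function.
    function withResult(x, y, callback) {
      var result = x + y; // deterministic work
      callback(result);   // an ordinary call: runs to completion...
      return 'done';      // ...then control resumes right here
    }

    var seen;
    withResult(1, 2, function (r) { seen = r; });
    console.log(seen); // 3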

All this syntax sugar shit is whack. Great, you typed 24 characters instead of 3 lines to declare a lambda. Hyper productive, go you. How does this progress toward the point where I can say "computer, make me a coffee!"?

If you're genuinely interested in what this generation's Goto might look like, take 30 minutes to watch http://vimeo.com/71278954 .. our UIs are utterly trapped in pre-1970s thinking, our communication networks are so utterly idiotic that we STILL have to write custom code to link disparate chunks of data and logic together, we're STILL writing goddamned function calls in a freaking text editor (something that was solved LONG ago). All the things like this.

I can't ask my computer to make me a cup of coffee, and it responds. I can't describe in restricted English a simple query problem and have the millions of idle machines around the world coordinate meaningfully to answer my question (and our best chances of this.. freebase.com.. got bought up by Google and left to rot as some vestigial appendage of their antispam dept no doubt).

Computing is in a sad state today, but not because of fucking callbacks. It's in a sad state today because we're still thinking about problems on this level at all.

Node.js wasn't an innovation. It was shit that was solved 60 years ago. I wish more people would understand this. Nothing improves if we all start typing 'await' instead of defining callbacks.

Innovation in our industry has been glacial since at least the early 90s.

Edit: and as a final note, I wholeheartedly welcome you to nitpick the hell out of my rant, wax verbose on the definitions of provability, the concept of a stack, the use of British versus American English, why Vim is awesome, and in the process once more demonstrate the amoeba-like mentality 90% of our industry is trapped in.

Go and build a programming language your Mom can use. Preferably by talking to her computer. Please, just don't waste it on any more of this insanity.




>Callbacks are in no way "this generation's goto"

They are very much analogous to GOTO -- or even COMEFROM.

They have a similar effect on control flow, and a similar adverse effect on the understandability of the program.

And Dijkstra's core argument against GOTO applies 100%:

"""My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible."""

>all alternatives to callbacks amount to incredibly fancy syntax sugar for hiding what is really happening behind the scenes anyway.

If, for, while, foreach -- heck, even map, reduce, and co. -- are also syntax sugar for GOTO. Your point? Or does adding the weasel word "fancy" somehow make this particular syntax sugar bad?

Not to mention that there's nothing "incredibly fancy" about await, async, promises et co.

And if I wanted to know "what's really happening behind the scenes" I wouldn't program in JavaScript in the first place.
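
For comparison, a minimal sketch in JavaScript (delay and delayP are stand-ins for any async operation, and the async/await half assumes a modern runtime, which this thread predates): both versions do the same thing, but one reads top to bottom.

    // Callback style: textual order stops matching temporal order
    // as soon as you chain a second step.
    function delay(ms, value, cb) {
      setTimeout(function () { cb(null, value); }, ms);
    }

    delay(10, 'a', function (err, a) {
      delay(10, a + 'b', function (err, ab) {
        console.log(ab); // "ab"
      });
    });

    // await style: same behaviour, but the text reads in execution order.
    function delayP(ms, value) {
      return new Promise(function (resolve) { setTimeout(resolve, ms, value); });
    }

    async function main() {
      var a = await delayP(10, 'a');
      var ab = await delayP(10, a + 'b');
      console.log(ab); // "ab"
    }
    main();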


You're not the first person I've read mentioning "if" as syntax sugar for "goto". It is not. In fact, "goto" needs "if" to replicate looping behavior.

Unless you mean that "goto" can branch conditionally (e.g. je/jne/jg from ASM), but that's not "goto": "goto" means an unconditional jump.


I thought of that Dijkstra quote too. It sheds light on what's wrong with the OP's proposal: in return for less nesting, it worsens the conceptual gap between the program (in text) and the process (in time). A poor trade.

I've found that nesting to indicate asynchronous control flow can be quite a good device for keeping the program text closely related to its dynamic execution. (It's true that such code is hard to read in JavaScript, but I don't think that's because of nesting per se.) It allows you to lay out synchronous logic along one dimension (vertically) and asynchronous along another (horizontally). I hypothesize that there's a quite good notation waiting to be brought out there; unfortunately, such experimentation tends to be done only by language designers in the domain of language design, when it really ought to be done in the context of working systems—and that's too hard to be worth the trouble unless you're programming in something like a Lisp.
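
A tiny sketch of what I mean (illustrative JavaScript; step stands in for any asynchronous operation): synchronous logic stacks vertically, and each asynchronous hop pushes the text one level to the right.

    // step(name, cb) stands in for any asynchronous operation.
    function step(name, cb) {
      setTimeout(function () { cb(name); }, 10);
    }

    step('connect', function (a) {    // async hop: one level to the right
      var greeting = 'hello ' + a;    // synchronous logic stays vertical
      console.log(greeting);
      step('query', function (b) {    // next async hop
        var report = greeting + ', ' + b;
        console.log(report);
      });
    });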


>I thought of that Dijkstra quote too. It sheds light on what's wrong with the OP's proposal: in return for less nesting, it worsens the conceptual gap between the program (in text) and the process (in time). A poor trade.

That's like saying that "by using FOR instead of GOTO we worsen the conceptual gap between the program (in text) and the process (in time)".

Only we don't worsen it -- we just abstract it to a level in which we can reason about it better.

Higher level is not worse -- except if you consider "more distance from what is actually happening at the CPU" as worse. Which it is not: historically, the greater the distance, the more programmers got done.

We don't need to keep track of the actual low level process in time -- which with callbacks we're forced to. We just need to know how the steps of the process relate. Which await and co offer in a far better form than callbacks.


We're not talking about distance from the CPU. At least I'm not, and I'd be shocked if Dijkstra were. No, the issue is that a program has at least two different logical structures: its lexical organization in text, and its dynamic organization in time. You want these two to be as closely related as possible, because humans rely mostly on the program text to understand it.

"Time" here doesn't mean CPU time, it means logical time--the temporal (as distinct from textual) order of operations in the program. To put it another way, the "time" that matters for program clarity is not when stuff happens at lower levels (e.g. when does the OS evict my thread) but what happens when in the source code itself. This is not so far from your phrase, "how the steps in the process relate", so I don't see the disagreement.

I certainly don't agree, though, that the OP's design recommendations lead to higher-level or easier-to-reason-about code. Do you really think they do?


> ... the issue is that a program has at least two different logical structures: its lexical organization in text, and its dynamic organization in time.

That's a concise way to put it and I'll certainly remember and reuse it! Did you come up with it or did you come across it somewhere?


It's just a paraphrase of the Dijkstra quote. The same idea is implicit in the distinction between lexical and dynamic scope, which confused me for years before I got it... probably because "dynamic scope" is something of an oxymoron.


>they do not in any way inhibit the ability to analyse or prove correct a program, and all alternatives to callbacks amount to incredibly fancy syntax sugar for hiding what is really happening behind the scenes anyway.

1) EVERYTHING is 'syntax sugar'.

2) There's nothing intrinsically wrong with gotos, they just aren't very well suited for human brains. Computers can execute goto statements very efficiently.

Callback-hell smells very much like gotos to me. It's very easy to do the wrong thing and easy to create very hard to read, hard to understand, and hard to maintain code.


> 1) EVERYTHING is 'syntax sugar'.

The OP's counterargument is against treating some syntactic sugar as special and novel when it is in fact not. Even voice commands are syntactic sugar, but they're way better for the lay user.

Although I dunno what he's talking about with "I can't talk to my computer." My phone is learning a lot of tricks, really fast.

> Callback-hell smells very much like gotos to me. It's very easy to do the wrong thing and easy to create very hard to read, hard to understand, and hard to maintain code.

This is mostly because people try to treat higher order functional code as in fact just a fancy syntax. Writing higher-order functional code (code that creates, consumes, and modifies functions) requires a set of alternative disciplines that most people in our industry not only don't learn, but actively despise and in many cases mock.
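
As a small illustration of what "creates, consumes, and modifies functions" means in practice (a sketch of mine, not something from the parent comment): a higher-order wrapper that takes any function and returns a logged version of it.

    // A higher-order function: takes a function, returns a new function.
    function withLogging(fn, label) {
      return function () {
        var args = Array.prototype.slice.call(arguments);
        console.log(label + ' called with', args);
        return fn.apply(null, args);
      };
    }

    function add(a, b) { return a + b; }

    var loggedAdd = withLogging(add, 'add');
    console.log(loggedAdd(2, 3)); // logs the call, then prints 5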

Even otherwise brilliant, smarter-than-me people I look up to do this and then declare the entire functional concept bankrupt. It drives me nuts. Once upon a time, people saw OO code, turned up their noses, and said, "You shouldn't need to define hierarchies and ontologies just to reason about your code! How stupid is that!" But then proven models came out and the industry half-assed adopting them (remember Taligent? No? Look it up).

So many developers these days are whining and grousing about how static type systems inhibit their freedom and higher-order functions are whacky propeller-head notions that only nerds take seriously. And yet they wonder why the bulk of the industry moves at a glacial pace. I'm happy that Erlang's nearly-30-year-old proposition and some of the implications of Needham's duality are finally reaching mainstream computing.


> This is mostly because people try to treat higher order functional code as in fact just a fancy syntax. Writing higher-order functional code (code that creates, consumes, and modifies functions) requires a set of alternative disciplines that most people in our industry not only don't learn, but actively despise and in many cases mock.

Well it's a hell of a lot harder to write a good verb than it is to write a good noun. Similarly, writing good adverbs is nearly impossible (how many books on Lisp macros do you know of?). If I had to teach my Mom to code, I wouldn't teach her how to zip, fold, map & reduce lists-of-lists on the fly, I'd teach her the FullName noun.

Callbacks are just gotos that return. They can also pass along non-global context, like error information. It's a subtle distinction, and to be up in arms about it is indeed strange.

The industry moves slowly because they can afford to. A few million dollars can feed a hundred developers. The codebases get so large, the teams so big, that lowest-common-denominator kind of code will always prevail. Remember what I was going to teach my mom? Not lisp macros, no. Simple nouns, simple mechanisms.


Prepare for a solid fisking.

> Well it's a hell of a lot harder to write a good verb than it is to write a good noun.

See, that's your indoctrination talking. Really both are about equally hard. The actual definition of zip is pretty simple, assuming you have trained yourself to think about it the right way. This is no different from OO. The idea that imperative programming is "natural" is sort of a myth.
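
For instance, here is roughly all there is to zip (a sketch in JavaScript, since that's what the thread is about):

    // zip: pair up the elements of two lists, stopping at the shorter one.
    function zip(xs, ys) {
      var n = Math.min(xs.length, ys.length);
      var out = [];
      for (var i = 0; i < n; i++) out.push([xs[i], ys[i]]);
      return out;
    }

    console.log(zip([1, 2, 3], ['a', 'b'])); // [[1, 'a'], [2, 'b']]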

> (how many books on Lisp macros do you know of?)

Quite a few, actually! But I'm not sure why this matters; Lisp macros have nothing to do with this conversation, or even with this entire family of abstractions. Macros bear no resemblance to what we're talking about.

> If I had to teach my Mom to code, I wouldn't teach her how to zip, fold, map & reduce lists-of-lists on the fly, I'd teach her the FullName noun.

Why? People think verb-first all the time, describe things in verb-first ways, and act in verb-first ways. They do it all the time, and it's not unnatural.

> Callbacks are just gotos that return.

Not really.

> They can also pass along non-global context, like error information.

If they are implemented with continuations, they do a lot more. But see also coroutines.

> The industry moves slowly because they can afford to.

I submit that the resurgence of the small software shop and the incredible successes that small software startups have been seeing is a counter-argument to this. As backwards as the average Node.js shop is, they're still light-years ahead of the befuddled, ossified monstrosities that they compete with.

> A few million dollars can feed a hundred developers.

You should be ashamed of this remark.

> The codebases get so large, the teams so big, that lowest-common-denominator kind of code will always prevail.

Bridges are not constructed this way.

> Remember what I was going to teach my mom? Not lisp macros, no. Simple nouns, simple mechanisms.

Stop patronizing people. You're pretty smug for someone who doesn't know lisp. I thought being smug was my job as a lisp hacker!


> The idea that imperative programming is "natural" is sort of a myth.

Yes yes yes. I definitely agree. It's "sort of a myth," but it's also sort of true. Zip is indeed quite simple, but it wouldn't make any sense at all unless you knew what a list was. Case in point: zipping two lazy (infinite) lists like they're eager won't work at all; each version of a list would have its own zip. The verb will be more or less derived from the noun. A verbless noun makes sense, but a nounless verb? I think there is some dependency.
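
To make the eager-versus-lazy point concrete, a sketch using JavaScript generators (a feature this thread didn't have yet, so treat it as illustrative): an eager zip can't even take an infinite list as input, so the lazy version needs its own zip that produces pairs on demand.

    // An infinite, lazy "list" of naturals.
    function* naturals() { for (var n = 0; ; n++) yield n; }

    // A lazy zip: yields pairs on demand instead of walking whole lists.
    function* zipLazy(xs, ys) {
      var a = xs[Symbol.iterator](), b = ys[Symbol.iterator]();
      while (true) {
        var x = a.next(), y = b.next();
        if (x.done || y.done) return;
        yield [x.value, y.value];
      }
    }

    // Take the first few pairs; an eager zip would never get this far.
    var pairs = [];
    for (var p of zipLazy(naturals(), ['a', 'b', 'c'])) pairs.push(p);
    console.log(pairs); // [[0, 'a'], [1, 'b'], [2, 'c']]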

Ask some programmers if they learned function calls before they learned variable assignment. I'm obviously betting they didn't, but I'd be curious if I were wrong.

I really don't know what's "natural". I'd like to know, but I don't. I do play guitar. (poorly.) Playing a good chord is a lot harder (for a beginner) than playing a good note; I've seen plenty struggle, including myself. Now though, both are about equally hard. I have no preference. The actual structure of a -7 chord is pretty simple once you start to think in the right way. For some reason, though, beginners seem to like playing major chords and single notes. Similar story: when I was a child, I learned how to write the letters before I learned how to write the words. When I was a slightly older child, I learned the chess pieces before I learned the chess openings. It all goes hand in hand, but something's got to come first. I figure it's probably got something to do with how the brain acquires new patterns, but I know even less about neurology than I do programming.

I intended to relate adverbs to lisp macros, but I don't have a rock solid thesis on the matter. Macros can arbitrarily change the nature of functions just like adverbs change the meaning of nouns. In either case, overuse leads to crappy writing. I'd argue they're more awkward to write, but not because of any indoctrination. Just a personal thought.

I'd even bet cash that even a room of grad students could brainstorm/generate new nouns much faster than they can generate adverbs. This might not prove anything.

> You should be ashamed of this remark.

> Bridges are not constructed this way.

> Stop patronizing people.

I got confused by all this. I didn't intend to patronize anyone. Sorry.


> The verb will be more or less derived from the noun. A verbless noun makes sense, but a nounless verb? I think there is some dependency.

Linguistically inaccurate. But it also seems irrelevant. "Zipping things" is a great example of a placeholder noun -- anything zippable, right? People do this all the time. "Driving" implies that you have a thing to drive, but the act of driving is clear and distinct in people's heads despite the fact that it can represent a lot of different actions.

> Ask some programmers if they learned function calls before they learned variable assignment. I'm obviously betting they didn't, but I'd be curious if I were wrong.

The answer to this question is irrelevant, but also hard to understand. Variables and functions are deeply intertwined ideas because most function calls take variables.

SICP taught functions first, and it was widely acclaimed.

> Macros can arbitrarily change the nature of functions just like adverbs change the meaning of nouns.

I do not see an interpretation of macros that is concordant with this metaphor. Macros let you define new parts of speech entirely, hand-tooled to let you perfectly express what you want the way you find most natural.

> I'd even bet cash that even a room of grad students could brainstorm/generate new nouns much faster than they can generate adverbs. This might not prove anything.

I do not think this is relevant. But if you'd like to see an example of how complicated this is, look at any note from one partner to another. Mine go like this: "Dave, please pick up these items on your way home:" and then a list. Which is a function (in IO, so monadic since it has side effects). But that is a verb THEN a list of nouns.


> There's nothing intrinsically wrong with gotos... Computers can execute goto statements very efficiently.

That's because the problem is analyzing the goto statements, not executing them. And computers suck at it -- less than humans do, but they still suck.


You snipped out the part where he said exactly that.


>Go and build a programming language your Mom can use. Preferably by talking to her computer.

People who think this is doable with current technology usually don't have much of a clue about how to build a programming language.


https://ifttt.com/

"Computer, create a rule for my car."

"OK. Which car? Do you mean Red Toyota Purchased 1997?"

"Yes."

"OK."

"Computer, when car is at location House, then boil kettle."

"Do you mean Kettle in Kitchen of 48 My Road?"

"Yes."

"OK. Is that all?"

"No. Computer, this rule is only valid on weekdays."

"OK. So when car Red Toyota Purchased 1997 is at location 48 My Road on Monday, Tuesday, Wednesday, Thursday, Friday, boil Kettle in Kitchen of 48 My Road?"

"Yes, but only if my iPhone was not inside the car."

"Do you mean Jemima's Apple iPhone 4Gs?"

"Yes."

"OK."

...


This just demonstrates that the problem, for laypeople, isn't so much the programming as the debugging. What does "your mom" do when she realizes the kettle is on for nearly 18 hours every day because that's how long her car is there?

Being able to communicate effectively, in a bidirectional manner, with a computer is a skill. It's a skill people will probably always have to learn, one way or another. It's more likely that more people will learn these skills than that we'll devise a way for computers to be the smart ones in the conversation any time soon.


The distinction between "is at" and "arrives at" is going to result in a high power or house gas bill. :)


I'm a PL researcher who has also worked a bit on dialogue systems. Getting those rules encoded for even a small domain would be a huge task, primarily because machine learning hasn't helped us in this area yet.

We will get there, but maybe in 5 or 10 years.


Exactly. I think OP's argument was that we should devote more resources to these sorts of problems instead of bickering about low-level stuff so much (if they aren't directly helping that goal).

I don't see how that view implies ignorance of current technological limitations.


The OP wrote:

>Computing is in a sad state today, but not because of fucking callbacks. It's in a sad state today because we're still thinking about problems on this level at all.

He seems to imply that it's sad that today is today and not 20 years from now. So what?

People have dreamed about replacing "programming" with "dialogue systems" since at least the 60s; they are like flying cars, always 10 or 20 years away. We might be closer now, but it's not like we were not dreaming about this since before I was born.

In the meantime, we have code to write, maybe it's worth doing something about this callback thingy problem just in case the singularity is delayed a bit.


There are not that many researchers (or other people) working on 'less code'. And the response from the community is always: don't try it, it's been done and it won't work (NoFlo's Kickstarter responses here on HN, for instance).

Instead I see these new languages/frameworks which have quite steep learning curves and replace the original implementation with the same amount of code or even more.

As long as 99% of 'real coders' keep saying that (more) code is the way to go, we're not on the right track imho. I have no clue what exactly would happen if you throw Google's core AI team, IBM Watson's core AI team and a few forward thinking programming language researchers (Kay, Edwards, Katayama, ...) in a room for a few years, but I assume we would have something working.

Even if nothing working would come out, we need the research to be done. With the current state, we are stuck in this loop of rewriting things into other things, each maybe marginally better for the author of the framework/lib/lang and a handful of fans, resulting in millions of lines of not very reusable code. Only 'reusable' if you add more glue code than there was original code in the first place, which only adds to the problem.


Nothing was new with NoFlo, it was just trying the same thing that failed in the past without any new invention or innovation. The outcome will be the same.

Trust me, the PHBs would love to get rid of coders; we are hard to hire and retain. This was at the very heart of the "5th gen computing movement" in the 80s, and its massive failure in large part led to the second "AI winter" that followed.

> I have no clue what exactly would happen if you throw Google's core AI team, IBM Watson's core AI team and a few forward thinking programming language researchers (Kay, Edwards, Katayama, ...) in a room for a few years, but I assume we would have something working.

What do you think these teams work on? Toys? My colleague in MSRA is on the bleeding edge of using DNNs in speech recognition, we discuss this all the time (if you want to put me in the forward thinking PL bucket) over lunch...almost daily in fact. There are many more steps between here and there, as with most research.

So you are unhappy with the current state of PL research, I am too, but going back and trying the same big leap that the Japanese tried to do in the 80s is not the answer. There are many cool PL fields we can develop before we get to fully intelligent dialogue systems and singularity. But if you think otherwise, go straight to the "deep learning" field and ignore the PL stuff, if your hunch is right, we won't be relevant anyways. But bring a jacket just in case it gets cold again.


I agree with you on NoFlo, and criticism like yours is good if it's well founded. I just see a bit too much unfounded shouting that without tons of code we cannot and can never write anything but trivial software. The NoFlo example was more about the rage I feel coming from coders when you touch their precious code. Just screaming "it's impossible" doesn't cut it, and thus I'm happy with NoFlo trying even if it's doomed to fail; it might bring some people to at least consider different options.

> What do you think these teams work on? Toys? My colleague in MSRA is on the bleeding edge of using DNNs in speech recognition, we discuss this all the time (if you want to put me in the forward thinking PL bucket) over lunch...almost daily in fact. There are many more steps between here and there, as with most research.

No definitely not toys :) But I am/was not aware of them doing software development (language) research. Happy to know people are discussing it during lunch; wish I had lunch companions like that. Also I should've added MS; great PL research there (rise4fun.com).

I wasn't suggesting a big leap; I'm suggesting considerably more research should be put into it. Software, its code, its development, and its bugs are huge issues of our time, and I would think it important to put quite a bit more effort into them.

That said; any papers/authors of more cutting edge work in this field?


My colleague and I talk about this at lunch with an eye on doing a research project given promising ideas, but I think I (the pl guy) am much more optimistic than he (the ML/SR guy) is. I should mention he used to work on dialogue systems full time, and has a better grasp on the area than I do. I've basically decided to take the tool approach first: let's just get a Siri-like domain done for our IDE, we aren't writing code but at least we can access secondary dev functions via a secondary interface (voice, conversation). The main problem getting started is that tools for encoding domains for dialogue systems are very primitive (note that even Apple's Siri isn't open to new domains).

The last person to take a serious shot at this problem was Hugo Liu at MIT. Alexander Repinning has been looking at conversational programming as a way to improve visual programming experiences; this doesn't include natural language conversation, but the mechanisms are similar.


I would think that this is why PL research is very relevant here: until we have an 'intelligence' advanced enough to distill an intended piece of software from our chaotic talking about it, I see a PL augmented with different AI techniques as the way to explain, in a formal structure, how a program should behave -- with the AI allowing for a much larger amount of fuzziness than we have now in coding, i.e. having the AI fix the syntax/semantic errors based on what it can infer about your intent, after which, preferably at a higher level of the running software, you can indicate whether this was correct or not.

With some instant feedback, a bit like [1], this at least feels feasible.

[1] http://blog.stephenwolfram.com/2010/11/programming-with-natu...


> We might be closer now, but it's not like we were not dreaming about this since before I was born.

It's not about dreaming. It's about action and attitude. Continuing down the current path of iterating on the existing SW/HW paradigm is necessary, but in that 20 years, it's not going to lead to strong AI. Our narrow-minded focus on the von Neumann architecture permeates academia. When I was in college I had a strong background in biology. Even though my CS professor literally wrote a book on AI, he seemed to have a disdain for any biologically inspired techniques.

Recently, I've seen a spark of hope with projects like Stanford's Brains in Silicon and IBM's TrueNorth. If I was back in school, this is where I'd want to be.

http://www.technologyreview.com/news/517876/ibm-scientists-s...

http://www.stanford.edu/group/brainsinsilicon/


Sounds familiar!

http://en.wikipedia.org/wiki/Fifth_generation_computer

1982 was 30 years ago, BTW.


Thanks for the link. After 60 years of promising to make computers think, even the fastest machines on the planet with access to Google's entire database still have trouble telling a cat from a dog, so I have to agree with the article that yes, it was "Ahead of its time...In the early 21st century, many flavors of parallel computing began to proliferate, including multi-core architectures at the low-end and massively parallel processing at the high end."

As the 5th generation project shows, small pockets of researchers haven't forgotten that evolution has given each of us a model to follow for making intelligent machines. I hope they continue down this road, because faster calculators aren't going to get us there in my lifetime. You feel differently?

As the article mentions, "CPU performance quickly pushed through the "obvious" barriers that experts perceived in the 1980s, and the value of parallel computing quickly dropped", whereas 30 years later, single-threaded CPU performance has gone from doubling every 2 years to improving 5-10% per year since ~2007. Combine that with the needs of big data, and the time is right to reconsider some of these "failures".

Parallel programming is the biggest challenge facing programmers this decade, which is why we get posts like this on callback hell. Isn't it possible that part of the problem lies in decisions made 50 years ago?

I'm not saying we need to start from scratch, but with these new problems we're facing, maybe it's time reconsider some of our assumptions.

"It is the mark of an educated mind to be able to entertain an idea without accepting it." -Aristotle


The problem is that we've been down this route before and it failed spectacularly (we've gone through two AI winters already!). It does not mean that such research is doomed to fail, but we have to proceed a bit more cautiously, and in any case, we can't neglect improving what works today.

The schemes that have been successful since then, like MapReduce or GPU computing, have been very pragmatic. It wasn't until very recently that a connection was made between deep learning (Hinton's DNNs) and parallel computing.


Yes, I did say continuing down the current path was necessary. From the languages to the libraries, today's tools allow us to work miracles. As Go has shown with CSP, sometimes these initial failures are just ahead of their time. Neuromorphic computing and neuroscience have come a long ways since the last AI winter.

The hierarchical model in Hinton's DNNs is a promising approach. My biggest concern is that all of the examples I've seen are built with perceptrons, whose simplicity makes them easy to model but leaves them with almost nothing in common with their biological counterparts.


Funny, a cryptographer was just explaining why you should never question your field's fundamental assumptions.

http://www.extremetech.com/extreme/164050-discovery-may-make...


That's more of an orthogonal point than a counter point though. Just because we haven't accomplished something yet doesn't mean we shouldn't invest any more effort into it.

I was responding to your insinuation that OP must not be knowledgeable about language implementation to hold the views he/she does. In this context, technological limitations are irrelevant because that was not the point; the point was an opinion about the relative effort/attention these problems receive.

I'm admittedly not really invested in this area myself so I don't really care, but it's disingenuous to try and discredit OP's views like that. At least this response is more of a direct counter-opinion.


Indeed!

Intelligent machine learning is 5 to 10 years off, just as it has been since the early 1980s.


Well put. I wasn't even born then, but things have changed so little that the first thing that comes to mind is that I really need to see Tron someday. I doubt computers will ever learn to talk human; rather, as we all see more and more every day, humans are going to have to learn to talk computer.


I was born in 1973, and by the 1980s a lot of people were convinced intelligent machines and nearly general-purpose AI were just around the corner. There was tons of university and military funding going into projects to achieve this.

Of course, now the idea of that sort of blue-sky research is almost universally dead and the most advanced machine learning will likely be developed at Google as a side effect of much effort put into placing more relevant ads in front of us, which is kind of hilarious.


We will get there. It is just taking a lot longer than initially anticipated. I for one am not looking forward to trans-humanity, but I see it as inevitable.


If most humans eventually learn to talk computer, what's the difference between "human" and "computer" language?


The primary challenge in the stated example is speech and NLP. It's not building the decision trees. We can do that today with off-the-shelf technology. Hell, it's the kind of thing people love to hack together with their Arduinos.

Microsoft even has a pretty slick thing called on{X} that does some amazing tricks with rules reacting to network events and geolocation tools.

Although I confess that commercializing a system that lets you asynchronously start potentially unmonitored exothermic reactions in your home? Probably hard. :D
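
For what it's worth, the rule itself is just data, and evaluating it is the easy part; here's a sketch (all names hypothetical) of the kind of structure the dialogue upthread would be building, with the hard problems living entirely in the speech and NLP layer that produces it.

    // A hypothetical encoding of the kettle rule from the dialogue above.
    var rule = {
      trigger: { device: 'Red Toyota Purchased 1997', event: 'at', location: '48 My Road' },
      conditions: [
        { type: 'dayOfWeek', oneOf: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri'] },
        { type: 'not', phoneInCar: "Jemima's Apple iPhone 4Gs" }
      ],
      action: { device: 'Kettle in Kitchen of 48 My Road', command: 'boil' }
    };

    // Checking the trigger is trivial; the conditions would be checked the same way.
    function triggered(rule, event) {
      return event.device === rule.trigger.device &&
             event.event === rule.trigger.event &&
             event.location === rule.trigger.location;
    }

    console.log(triggered(rule, {
      device: 'Red Toyota Purchased 1997', event: 'at', location: '48 My Road'
    })); // true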


There are these wonderful inventions called automatic electric kettles, you should look into them. They seem to be commonplace in Europe, not so much in the US. But seriously, all it takes is a bimetallic strip to turn the kettle off when it reaches the desired temperature.

Eliza ran in 16KB of RAM. The logic flow is easy, all you need is a sufficiently large dictionary.


People who think interactive decision tree building is hard generally haven't done even basic research in the most elementary AI techniques.

It is not hard.

The real barrier to something like that is in finding the right way to infer context for speech recognition and natural language parsing. You could easily bolt a system like that onto IFTTT and Twilio and get good results right out of the gate.

Free idea for you, there. Go make a startup.


> I can't ask my computer to make me a cup of coffee, and it responds.

Question: what exactly would you like your computer--that beige box under your desk--to do, if you ask (or rather, command) it to do that? This sounds more like an embodied intelligence/robotics problem than a problem with constraint-solving goal-directed AI, per se.

Unless you want it to use Mechanical Turk to fire off a task to have a human make you a coffee. I could see that working, but it's probably not what you'd expect to happen.


If you truly believe that we're all missing the point, how do you propose it be fixed? How far down the stack do we need to start disassembling parts and rebuilding? And who, if anyone, is doing that now?


> [Callbacks] do not in any way inhibit the ability to analyse or prove correct a program

It's obvious to me that they do: humans are not that good at keeping track of these fragmented chains of flow control. They're just more balls in the air for the juggler.

In building systems, we are using many layers of abstractions all the time (or "syntax sugar" as you say).


The answer to that is obvious. At least a handful of people are expert jugglers - in fact, these people are so good at it, they've never actually been good at handling one thing at a time - they constantly juggle everything, and have been doing so for decades. They've just been dying for the world to make juggling mandatory, because now they can truly shine.

All the other 99% of us need to do is take juggling classes for the next several years and reorient all of our work around juggling. Just quit doing things the way you've done (successfully) for the past several years. Problem solved.


    Computing is in a sad state today, but not because of
    fucking callbacks. It's in a sad state today because 
    we're still thinking about problems on this level at all.
The unfortunate part about thinking about problems is that problems are relative - i.e. they are problems only relative to a context. Nobody, even in the tech world, has a clue how solving a simple problem just to satisfy some intellectual curiosity can affect life on the planet. What meaning does Wiles' proof of Fermat's last theorem have for the part of the world that's sleeping hungry every night?

The stuff we're talking about here has irked some pretty decent brains enough for them to go out and make ... another textual programming language.

Folks like Musk are an inspiration indeed for boldly thinking at the high level they do and seeing it through. However, that's exactly what's easy for the press to write up. Stuff like the moon speech is what's easy to communicate. It is also simple to attribute some great deed to one of these great "visions". But behind each such vision lies the incremental work of millions - every bit of it, I say, worth every penny - unsung heroes, all of them.

On a related point, not every time we've heard a call to greater action have we seen in those folks the ability to imagine how it might come about.

edit: typos.


> The unfortunate part about thinking about problems is that problems are relative - i.e. they are problems only relative to a context. Nobody, even in the tech world, has a clue how solving a simple problem just to satisfy some intellectual curiosity can affect life on the planet. What meaning does Wiles' proof of Fermat's last theorem have for the part of the world that's sleeping hungry every night?

Like the city you live in right now?

> Folks like Musk are an inspiration indeed for boldly thinking at the high level they do and seeing it through.

"Folks like Musk" are smart enough to know that the progress of humanity, the great and meaningful leaps of technology to strive for, are accomplished by people solving those hard problems under a common banner.

A consequence you've evidently yet to learn, or are afraid to realize.


> "Folks like Musk" are smart enough to know that the progress of humanity, the great and meaningful leaps of technology to strive for, are accomplished by people solving those hard problems under a common banner.

That's an interesting aspect of their work you point out. It made me think about describing my "banner" and add it to my HN profile.

It would be cool if we can all identify such "banners" under which we work in our HN profiles. Note that the banner doesn't have to be a technological breakthrough. In my case, for example, I wish for cultural impact.


> It would be cool if we can all identify such "banners"

These days these "banners" are LLCs or C-corps. :)


Though true, the "banners" aren't always obvious. Some folks may have a complementary mission. The golang authors are perhaps not directly working to "organize the world's information", for instance, and their banner would read "making it easy to build simple, reliable, and efficient software".

It is still a nice exercise to do, though ... and I might turn mine into an LLC anyway :)


> Furthermore a function call has a vastly restricted set of valid behaviours: at some point it must terminate, and prior to termination, update the system state (stack) to include its return value.

Hooray! We've solved the halting problem. Without formal analysis, it's not possible to show (in general) that a function will terminate.

More seriously, I've seen function behavior that's just as bad as other types of flow control (including gotos).


Most structured programming ignores this problem, since unless you assume some best effort from the programmer, all programming is futile.

You don't exercise your Big-O thinking here, you exercise your Big-Theta.


You're looking in the wrong places. Nobody thinks we're close to building a computer your mom can talk to, but we are getting closer to understanding what computer languages are and their properties. On that end, research is blistering and exciting.

But the industry has no need for that stuff.


My Mom prefers to do her development in C, because she needs to make sure her search algorithms are very high performance. So it seems she already has a pretty good language for her use cases?


>incredibly fancy syntax sugar

In JavaScript, yes, but the way Python handles async stuff is not 'fancy syntax sugar'; it is actually very clever. I'm not saying JavaScript should do things the way Python does - they are fundamentally different, and I actually quite like callbacks in JavaScript - but I think you should be clear that your comments are made in relation to JavaScript (assuming they were).


I agree with you.

It occurs to me that if you're using a whole bunch of anonymous functions with nested lexical closures, then the shared state between them starts looking a lot like global variables.

And globals are evil, but relative to the other nightmares that Gotos could produce, they're pretty mild.
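
A small sketch of what I mean (illustrative JavaScript): every nested callback closes over the same count variable, so within that pyramid it behaves exactly like a global.

    function startJobs(done) {
      var count = 0; // shared by every nested callback below, like a mini-global

      setTimeout(function first() {
        count++;                       // mutation visible to all the others
        setTimeout(function second() {
          count++;
          setTimeout(function third() {
            count++;
            done(count);               // 3 -- but any level could have clobbered it
          }, 10);
        }, 10);
      }, 10);
    }

    startJobs(function (n) { console.log('jobs finished:', n); });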


That's not callbacks' fault, but closures'. Callbacks need not have shared state.


So true: `Node.js wasn't an innovation. It was shit that was solved 60 years ago.`

We're just reinventing the same thing because some sh*tty stuff got momentum. Take WebSockets: how is a webserver now better than an IRCD hacked together with an implementation of FastCGI?

PS: vim is awesome :)



I don't like callbacks but I don't have anything to add to the discussion except this irony:

Do you want callbacks? Why, instead of writing

    var a = 1 + 2;

don't you like to write:

    var a;
    sum(1, 2, function (result) { a = result; }); // with sum being some hypothetical callback-style adder


Thank you for providing some sanity to this callbacks inquisition. I am getting so tired of all the "callbacks are evil, js is evil" crusade. Sorry your pet language isn't supported by every browser ever.


> a freaking text editor (something that was solved LONG ago)

It wasn't solved, it was replaced by a different, and worse, set of problems. Other people having minds organized differently from yours isn't evidence of a usability problem.

The problem with programming isn't the syntax. The problem is teaching a Martian to smoke a cigarette: Giving instructions to something which shares no cultural background with you and has no common sense.

(Of course, the classic 'teach a Martian to smoke a cigarette' problem is rigged, because the students are never told what instructions the Martian can follow; therefore, anything they say will be misinterpreted in various creative ways depending on how funny the teacher is. On the other hand, playing the game fairly by giving the students a list of things the Martian knows how to do would reduce the thought experiment to a tedious engineering problem.)

> Go and build a programming language your Mom can use.

This sexism is worse than anything else in your barely-thought-through rant of a post. The blatant, unexamined sexism in this statement keeps fully half of the potential programmers from picking up a damn keyboard and trying.


> This sexism is worse than anything else in your barely-thought-through rant of a post. The blatant, unexamined sexism in this statement keeps fully half of the potential programmers from picking up a damn keyboard and trying.

I'd throw in 'ageism' in there as well, being someone who is now 'of a certain age'.


I think you both are overreacting. "Something your mom could use" could easily be translated into "something a typical person could use." I don't think any girls or retired folk were dissuaded from programming by this comment.


Dissuaded by that single comment, probably not, but a constant flood of low-level sexual bias does send a message to girls that they don't belong. If you follow some of the gender-flipping reactions to pop culture, it begins to strike you just how pervasive these messages are.


The alternative "build a programming language that Dad can use" would have been much worse. I hope you see that. Unless gender bias is removed completely from our language (build a PL that your parent can use), the OP actually selected the lesser of two evils.


Wasn't aware that it was a strict dichotomy. There are plenty of ways to get the point across without gendering it. And I'm not trying to bash the OP at all. It's just that I do think it's important that we all hold each other accountable. But thanks for engaging, a lot of people don't even recognize there being a problem in the first place.


"Something your dad / grandfather / uncle" can use is just as apt. The point isn't "durr hurr you need a Y chromosome to do turing complete mathematics" its "99.99% of the population can't program. Write a language they can use"


That would offend even more as it strongly implies that "mom could never program of course, but maybe we could build a language where dad could?" If we want to be all PC about it, we should use gender neutral terms; like "something your parent figure could write programs with."

Crazy.


This is so absurd. There is absolutely nothing sexist about this statement unless you WANT it to be sexist. If a woman said this, would it be sexist? No. But a man saying it would make it so. What if a female programmer said "something your Dad can use"? Would she be sexist and ageist also?

Probably not. I'm a left-of-center guy, but I'm so sick of this politically correct bullshit coming out of left field. The complete inability to understand that not everyone is thinking of the broad, social contexts of oppression every time they utter a statement.

You KNOW WHAT HE MEANT when he said that statement, but in the typical, annoying trait unique to yuppie white assholes, you seek to distance yourself from a heritage of being an oppressor by constantly pointing out racism/sexism/ageism. It's a game, and it doesn't matter whether what you are pointing at is REAL, AND AFFECTS SOMEONE, it just matters that you score your points to prove to the professional victim class that you're not one of the bad guys, even though you look like one.

Please, feel free to protest the JIF peanut butter slogan: "Choosy moms choose JIF". But no, there isn't an organized professional victims organization around single dads, so nobody will be protesting that, because why would you? There are no points to score.

Newsflash: Your parents and grandparents were probably racist bigots like every other cracker in this country. Your game does nothing to help. A woman who is being denied a promotion at work because of her gender will get zero help from your bullshit game. She will instead be hurt by it, because of the "crying wolf" that idiots like you do for your game that causes eye-rolling in 95% of the population.


> but in the typical, annoying trait unique to yuppie white assholes, you seek to distance yourself from a heritage of being an oppressor by constantly pointing out racism/sexism/ageism

It seems to be a common tactic to accuse someone of white-guilt to quiet them down.


Fuck you and anyone who uses the concept of social justice as a hammer to try and silence people they disagree with. That is another serious barrier to actually solving any of these problems.


Social justice as a hammer is exactly the idiocy I was pointing at. Nitpicking a comment taken out of context as an affront to an oppressed group is exactly that.



