
It's a real shame to have a language this high level, yet still have to go through this much crap just to get things done. Manual memory management is easier than this. But while including GC in the runtime has its drawbacks, there is no reason a language can't just handle task switching for you (like Go does, for example).
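For what it's worth, here is a minimal sketch of what "the language handles task switching for you" looks like in Go (the URLs are just placeholders): each blocking call runs in its own goroutine and the runtime does the scheduling, no callbacks required.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    // fetchStatus blocks on the HTTP call, but the Go runtime parks this
    // goroutine and runs others in the meantime; no callbacks needed.
    func fetchStatus(url string, wg *sync.WaitGroup) {
        defer wg.Done()
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println(url, "error:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, resp.Status)
    }

    func main() {
        urls := []string{"https://example.com", "https://example.org"}
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go fetchStatus(u, &wg) // scheduling is the runtime's job
        }
        wg.Wait()
    }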

IMHO the event model / callback spaghetti / what-have-you is tricky precisely because it operates at a high level of abstraction.

Memory management, by comparison, is conceptually simpler because... well... the concept is simple. Allocating and de-allocating resources, while tricky at scale, is something for which everyone (including non-programmers) probably has an existing mental model.

Event-driven programming, on the other hand, is a slippery high-level concept. There are relatively few analogues for it in the "real world", so it requires additional mental gymnastics to internalize and understand.

Essentially, we need to 1) follow the execution pattern of event-driven code (annoying), while at the same time 2) "visualize" or conceptualize a fairly unnatural manner of abstraction.
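To make the contrast concrete, here is a rough sketch (in Go, since it came up above; loadUserAsync and loadUser are made-up names) of the same lookup written callback-style versus straight-line style:

    package main

    import "fmt"

    // Callback style: "what happens next" is passed in as an argument,
    // so reading the control flow means jumping between handlers.
    func loadUserAsync(id int, done func(name string, err error)) {
        go func() {
            done(fmt.Sprintf("user-%d", id), nil) // stand-in for an async lookup
        }()
    }

    // Straight-line style: the same logic reads top to bottom and the
    // runtime handles the waiting.
    func loadUser(id int) (string, error) {
        return fmt.Sprintf("user-%d", id), nil
    }

    func main() {
        finished := make(chan struct{})

        // Event-driven version: the continuation lives inside the callback.
        loadUserAsync(1, func(name string, err error) {
            if err == nil {
                fmt.Println("callback got", name)
            }
            close(finished)
        })
        <-finished

        // Sequential version: no inversion of control to follow.
        if name, err := loadUser(2); err == nil {
            fmt.Println("sequential got", name)
        }
    }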


You do event-driven programming every time an alarm wakes you up or you set an egg timer.


And you do neural networks every time you think and solve differential equations every time you catch a baseball.

That doesn't make it any less difficult.

