Hacker News
The Birth and Death of JavaScript [video] (destroyallsoftware.com)
635 points by gary_bernhardt on Apr 17, 2014 | 227 comments



For those unfamiliar, Gary Bernhardt is the same guy who did the famous "Wat" talk on JavaScript:

https://www.destroyallsoftware.com/talks/wat


Brendan Eich covered this subject at O'Reilly Fluent conference in 2012:

http://youtu.be/Rj49rmc01Hs?t=5m7s


If by cover you mean he essentially shrugs and says "it was the 90s" and moves on to ES6.


Which seems a pretty appropriate reaction, no? :-)


Seems he took the Wat talk a little personally, but I'm not sure why he defends {} + [] by saying the first { is a statement... wat?


It's because they wanted to allow both:

    if (a) b;
and

    if (a) { b; c; }
which tends to make you think of curly braces as a syntactic feature that can appear anywhere, grouping many statements into one. If you think that way then these should possibly also be valid:

    {b; c}
    {b}
    {}
but, since JS scope is function-oriented and in other places (e.g. functions) the braces are ultimately needed anyway, even for one-liners, it seems like this was a stupid choice and we should have just rejected the form:

    if (a) b;
and then the reuse of {} for lightweight (if non-robust) hashmaps would perhaps be unambiguous again.
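The ambiguity is easy to poke at directly; `eval` of a bare string gives statement-position parsing (a quick sketch, runnable in Node or a browser console):

```javascript
// At statement position, `{` opens a block, not an object literal.
console.log(eval("{}"));           // undefined: an empty block produces no value
console.log(eval("{ foo: 1 }"));   // 1: a block containing the labeled statement `foo: 1`
console.log(eval("({ foo: 1 })")); // { foo: 1 }: parentheses force expression position
```

Note the middle case: `foo:` parses as a label, not an object key, which is exactly the reuse problem described above.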


Which is exactly what Perl did, but to alleviate the clumsiness of single-statement conditionals they added a postfix if of the form STATEMENT if CONDITION; (which has the benefit of being how some people express simple conditionals in real life: "Go left if you see the blue house.")


It's automatic semicolon insertion. The browser translates {} + [] into {}; + [], so + [] === 0 too. {}; is undefined.


Being pedantic: it's NOT semicolon insertion. Your actual point is correct: the {} is an empty block statement, and the +[] is a separate expression statement. It's equivalent to "{} 0".

However, semicolon insertion is only triggered when there's a newline at a position where there would otherwise be a syntax error. Here, neither is the case: blocks don't have to be terminated by semicolon (so no syntax error), and there's no newline in the source code!
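For contrast, here is a case where ASI actually fires, next to the block case where it doesn't (sketch; `eval` returns the completion value of the last statement):

```javascript
// Real ASI: a newline at a point where the grammar would otherwise break
// causes a ';' to be inserted (here, after the 1).
console.log(eval("let a = 1\nlet b = 2; a + b")); // 3

// No ASI involved here: `{}` is already a complete block statement, so `+[]`
// simply starts a new expression statement, with no newline needed.
console.log(eval("{} +[]")); // 0: same as the two statements `{} 0`
```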


Brendan Eich is a homophobe and whoever links to his videos are complicit in his bigotry.

All his opinions should be discarded.


Nobody's opinions should be discarded based on their behavior. Otherwise, we'd have a scant few scientists, artists, and thinkers in history actually worth discussing. (Also, people who link to him are complicit? Are you serious?)


You have no idea whether he's a homophobe or not, and clearly when the subject is Javascript his opinions should be considered very carefully.


Classic video, though it's wrong at times. For instance, the audience member who corrected him was right.


couldn't quite make it out, what did he correct?


The second JavaScript example, when he told someone in the audience, "No, that's just an object." It was a string.


I knew that it was a string. If you listen closely, you'll hear that he asked "is that an array of object?" He probably asked that because it's in square brackets. I said "No, it's just an object".

I've probably seen twenty people call this "wrong", which frustrates me. It's not wrong. It was a stringified object! I didn't say "stringified" because it wasn't relevant to the question of whether the object was in an array!

There are other things in Wat that are genuinely wrong, though, like the fencepost error about "16 commas", which mistake will haunt me forever.


It is wrong because it is the toString of the object, because the + operator wants to do string concatenation. You were misleading the audience, both by using a shell which doesn't show strings with quotes, and by saying that the toString of the object is 'just an object'.

And that is not the only thing that is misleading, as you clearly said that {} was an object. Yes, the syntax in JS is weird as it looks like an object, but it isn't. Again, a better shell would not let you do this.


As Gary explained, he wasn't wrong in his response to the question, because the audience member was asking whether [object Object] means the object is in an array. It doesn't. The string point is moot.

I do agree the {} + [] example has always felt a bit unfair to me (for the reason that {} is a block), but whatever, it's a light-hearted talk.
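For the record, what the console is actually showing in that moment (a quick check in a Node console):

```javascript
const result = [] + {};     // `+` converts both operands to strings here
console.log(typeof result); // "string"
console.log(result);        // "[object Object]"
console.log(String({}));    // "[object Object]": a stringified object, not an array
```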


Maybe worth releasing a transcript, at this point?


It's only 15 commas man, 15 commas. That extra comma could kill someone.

Speaking of which, why does WAT do different things on node.js?
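The fencepost itself is easy to verify in any console: `Array(16)` has 16 empty slots, so joining them uses only 15 separators:

```javascript
console.log(String(Array(16)));        // ",,,,,,,,,,,,,,,": 15 commas, not 16
console.log(String(Array(16)).length); // 15
// The separator "wat" - 1 coerces to the string "NaN", repeated 15 times:
console.log(Array(16).join("wat" - 1) + " Batman!");
```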


"That extra comma could kill someone."

Only in old versions of Internet Explorer.


> why does WAT do different things on node.js?

Wrapping the same input in parentheses (e.g., '({} + [])') yields different results.
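Concretely (hedged: which form a given REPL prints depends on whether it wraps input starting with `{` in parentheses before evaluating, which Node's REPL does and older browser consoles did not):

```javascript
console.log(eval("{} + []"));   // 0: `{}` parses as a block, leaving the expression `+[]`
console.log(eval("({} + [])")); // "[object Object]": inside parens, `{}` is an object literal
```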


The video seems to be down now? Anyone have a mirror?

edit: nevermind, I clicked the download link. But I'm still wondering why the video's unplayable on the site.


Works in Chrome but not Firefox. Yay! 2014 ;)


Funniest tech video I've seen! Actually maybe even minus the "tech".


That was really funny. First time I've laughed all day. Thanks for sharing.


Brilliant - everybody working with JavaScript should watch this!


> Javascript is a bad choice

Javascript is great once used in a "good" way. It's flexible & it's almost everywhere. Spending all your time complaining about how "bad" Javascript is seems kind of pointless. If you use Javascript, learn how to use it well.

Master sushi chefs don't sit there complaining how bad knives are because the knives need to be constantly sharpened.

If you are cooking spaghetti, learn how to strain the noodles, instead of using silly examples assuming people are incompetent at learning a tool.

https://www.youtube.com/watch?v=rbA9KAc5gZs


With this logic you can defend anything. PHP's great if used in a good way: Zuckerberg's a billionaire. Right? Why is anyone even bothering with PL these days?

Sushi knives don't decide to cut you because the rice came from a different origin. And I'd guess that most craftsmen, outside of ritual and tradition, would love for their tools to have fewer disadvantages.

If you want to draw an analogy to spaghetti (?), it'd be like complaining that the only kind of spaghetti you can buy locally cooks only at a specific temperature, and even a bit more turns it to mush. And the reason is because the local government passed a bylaw with only a few hours of consultation that ended up banning imports of better kinds of spaghetti.

While it might be "pointless" to spend all your time complaining, there's certainly value in asking "wat" and pointing out absurdity.


The following perfectly describes my sentiments about all the complaints people have about JavaScript, PHP, <name your favourite hated language>

https://www.youtube.com/watch?v=uEY58fiSK8E


In that video, Louis is talking about himself in the third person: he's the one complaining on the plane. It's not about some group of "others" who are "bad" and don't appreciate the world; it's about our nature as humans.


> he's the one complaining on the plane.

Hmm, "the guy next to me goes 'pfft, this is bullsh*t'".

> it's about our nature as humans

Well, it's about our current generation of Americans (maybe Westerners). This complaining seems like unnecessary stress to me. I understand, because I used to do it.


I read (or watched) Louis say that he was the guy on the plane, but this was a couple of years ago and I'm failing to google it now.

I'm not convinced that this behavior is specific to Americans or Westerners; it may just be that we're most attuned to our own ways of expressing it. I'm also not convinced that it's a general property of the species, though; it would be arrogant for me to claim that kind of fundamental knowledge of how the human mind works. That kind of arrogance is the bread and butter of Hacker News, of course, so this is now necessarily a bad HN comment. ;)

I should've said something more like "it's about the way that we all act towards technology".


So we should just give up completely on trying to make better things?

I don't think that was the point of the video (some amount of gratitude for the things we have) -- but then maybe I'm misunderstanding your point.


On the contrary. We should strive to correct all the "wats" that obviously exist in all these languages, but most of what I see is just complaints, most of them ignoring the amazing things that can be done with these technologies. Over 30 years I've been programming in more languages than I care to count, and I don't remember at any point having a specific language stop me from achieving my goal because it has some traps or design flaws. I always made sure to know about them and to make use of the language's strong points instead of concentrating on the weak.

And we both understood the point of the video. The fact that there is much to be grateful for does not mean that we shouldn't improve on what needs improving. But for heaven's sake, if you're not going to improve on it, stop whining about it and be grateful for the amazing things it does enable.


"I don't remember at any point having a specific language stop me from achieving my goal because it has some traps or design flaws."

I have to admit, a language has never stopped me personally. But it most assuredly has hurt me when trying to program with other people, who do not have a direct psychic hotline into my brain that tells them what preconditions must hold before my code will work properly, and what things they can and can not do with a certain library, and most importantly, why they can and can not do those things. Languages that allow me to encode more of those things into the program itself, instead of the ambient documentation-that-nobody-ever-reads-even-when-I've-put-tons-of-work-into-it, work better.

And as my memory isn't all that great, it turns out that if I'm away from my own code for long enough, I become one of those people who don't have a direct hotline to my-brain-in-the-past.


> But it most assuredly has hurt me when trying to program with other people, who do not have a direct psychic hotline into my brain that tells them what preconditions must hold before my code will work properly, and what things they can and can not do with a certain library, and most importantly, why they can and can not do those things

Programming is hard. It's an ongoing process of mastery. This is true with any programming language. There is no silver bullet.

> Languages that allow me to encode more of those things into the program itself

There are plenty of tools that almost every language provides for you. It's an architectural concern to ensure that there is as little mapping as possible between the domain and the code.

I personally find Javascript to be flexible, which allows me to architect my software in a way that is communicative of the domain, without many restrictions.

> I become one of those people who don't have a direct hotline to my-brain-in-the-past

A story is a great way to communicate information. Automated functional (black box) testing is also good. Also, try to reduce the mapping between the domain and the software. Ideally, the software (naming) should have a 1-1 map to the domain.

Also, keep the structures flat, as this idiom tends to reduce complexity.

Keep consistent & iterate on architectural idioms between projects.

These are some ways to improve communicability of the codebase & to have insight into the business domain logic.


"business domain logic"

Ah, you see, there's the problem... this wasn't business logic. To put it in Haskell terms, I had code that was not in IO, but I couldn't actually encode that restriction in the language.

Most of your post amounts to "program better", which is vacuous advice. We've spent decades telling each other to "program better". We've proved to my satisfaction that's not enough. Have you used languages not from the same tradition as Javascript? It is possible, even likely, that you are not aware of the options that are available out there, even today.


> Ah, you see, there's the problem... this wasn't business logic.

What is "this"?

> Most of your post amounts to "program better", which is vacuous advice

No it's not. It's certainly better than dwelling on some edge case shortcomings and limiting your growth by blaming the tools.

No tool is perfect. Learn to use it better. Master it. Improve it. If you want to use a different tool, then use a different tool. There's no need to spread negativity.

There has been plenty of progress in Javascript idioms & programming idioms in the past few decades. You can accomplish many things with Javascript and the environment will only continue to improve. Programmers will continue to get better from the ecosystem & practices that have been learned over time.

Even your mighty Haskell is not perfect. Time to accept non-perfection & evolve :-)

> Have you used languages not from the same tradition as Javascript?

Yes, I have. I also draw inspiration from other languages & environments.

> It is possible, even likely, that you are not aware of the options that are available out there, even today.

Yes, I'm aware. When they prove themselves, I'll consider using them. In the meantime (and always), I'm happily mastering my craft free of unnecessary angst.


tl;dr: you can't necessarily change the troublesome technology, so you might have to leave. But in order to have a viable alternative to that "bad" technology, you need other people (case in point: mindshare of JS). In order to get more people to "your side", you might need to point out what is wrong with the original technology.

> But for heaven's sake, if you're not going to improve on it, stop whining about it and be grateful for the amazing things it does enable.

Sometimes you're not in a position to even be able to change something, even if you wanted to. The ideas you have in mind for a technology might fly in the face of how the community around that technology, or the guardians/maintainers of it, thinks of it - introducing these changes might break too much stuff that is dependent on it, the changes might fly in the face of the culture around that technology.

So if you have some technology that you think is flawed - subjectively, or even somewhat objectively if you have enough conviction - and you cannot do anything about it, you only have two choices: embrace it and try to work with it despite its flaws, or abandon ship.

But if you want to abandon ship, you probably want to find a safe harbor, eventually. ie a place where you can develop or utilize some other technology. But that place might be sparsely populated, because everyone else is working with that other technology. So what do you do? You suggest that others jump ship. :)

Assuming that there is actually some kind of objective merit to complain about a specific technology, it might be wise to complain to others about that technology. That way they can hopefully use that info to make an informed choice, and perhaps abandon their current technology for another technology. In time, you might even get enough people to come over to this other technology that that community is big enough to support that technology as a valid alternative to the "bad" technology. But what if everyone just stfu'ed about what their "negative" thoughts are on a technology? Would that other technology be able to get enough "acolytes" in order to be a viable alternative? Probably not, because everyone was too "positive" and polite to point out how that technology might be better than the old technology.

Would JS even be so controversial if it wasn't for that it is so entrenched in Web development? Is that not a great example of how important mindshare can be?


The "wat" is a good first step in identifying problems. However, what I see is people getting stuck on "wat" and not moving forward. People would rather win an argument than advance knowledge & the practice. Lots of ego, programmers have.

In the meantime, one can learn to appreciate & use JavaScript's strengths. It can be quite fun, liberating, & useful. Anecdotally, I have not run into these crazy issues, and I program in JavaScript every day. I also have a large app, and the framework I built is custom.

I liken this to using C++, Unix, & bash as a base. Yes, you could say these tools suck and spend time creating, marketing, & community-building for a new tool. Or you can iterate & improve upon these existing tools. There's no wrong answer. What do you want to accomplish?

> Sushi knives don't decide to cut you because the rice came from a different origin.

That analogy seems like a stretch. Care to explain? Javascript works with different locales. There are many international websites that use javascript.

Also, javascript does not "decide" to create a bug in your program. You create that bug by misusing the tool. You will get further if you take some responsibility and improve your practice.

> And I'd guess that most craftsmen, outside of a ritual and tradition would love for their tools to have less disadvantages.

I agree with that. Usually the improvements are iterative. One could use a laser cutter (which does not need sharpening) to cut sushi, but that would also burn it. Here's a good talk (Clojure: Programming with Hand Tools).

https://www.youtube.com/watch?v=ShEez0JkOFw&safe=active

Ritual & tradition is a social tool to propagate knowledge, idioms, & practices across generations. It makes sense to challenge ritual & tradition so they improve over time. It does not make sense to whine about it without doing anything.

> it'd be like complaining that the only kind of spaghetti you can buy locally cooks only at a specific temperature, and even a bit more turns it to mush

Not getting your analogy. This seems like a stretch, similar to the person who cannot strain the spaghetti noodles in the video. Care to explain?


The fact that something is "almost everywhere" doesn't make it good. It makes it useful at most.

And your examples make sense; javascript (in some cases) doesn't, so it's very appropriate to complain.


> so it's very appropriate to complain

In that case, it's appropriate to complain about gravity & being restricted to the speed of light?

No, it's better to learn about and use these properties to your advantage.


Are you seriously equating the laws of physics and human artefacts? That would be ludicrous. While the laws of physics are set in stone, human artefacts can be remade.

That changes everything.

I'm sure you have the skills required to, say, write a preprocessor for whatever language you are using, and add some special constructs in it. Missing feature? Done in a few days. So…

If the laws of physics suck, suck it up.

If your tools suck, change them.


> Are you seriously equating the laws of physics and human artefacts?

Yes I am. The property that they share is they will not be changed or avoided in the near future.

Another property is that despite certain limitations, you can still accomplish many things. If you focus on these limitations, you will accomplish less.

> If the laws of physics suck, suck it up.

Not in all cases. Physics is just a model of our understanding of physical existence. Einstein demonstrated that.

> If your tools suck, change them.

I guess if it's worth it to spend that much effort, then go ahead. Just know that the frequent examples of javascript's "problems" are easily surmountable, that is if you don't dwell on these "problems". Javascript has some great attributes to it.

Indeed, it does not "suck". That's like saying the human body sucks because we have this ridiculous tail bone and wisdom teeth. No accounting for taste, I suppose.

I choose to focus on that and progress in mastery of my craft. If you want to complain and/or change your tools, go ahead. I don't judge you.


> Yes I am. The property that they share is they will not be changed or avoided in the near future.

You vastly overestimate the effort it takes to change your tools. When I was talking of a few days to add a feature to a language, that was a conservative estimate. With proper knowledge it's more like hours. And I'm not even assuming access to the implementation of the language. Source-to-source transformations are generally more than enough.

Heck, I have done it to Haskell and Lua. And it wasn't a simple language feature, it was Parsing Expression Grammars (the full business). I used no special tools. I just bootstrapped from MetaII (I wrote the first version by hand, then wrote about 30 compilers to the nearly final version). (For Haskell, I took a more direct route by using the Parsec library.)

Granted, writing a full compiler to asm.js is a fairly large undertaking. But fixing bits and pieces of the language is easy. Real easy.
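A toy illustration of that kind of source-to-source fix (everything here is hypothetical: a made-up `|>` pipe operator, desugared with a regex that only handles a single identifier-to-identifier pipe; a real transform would use a parser):

```javascript
// Desugar `value |> fn` into `fn(value)` before handing the source to the engine.
// This is the "few hours" sketch, not a robust language extension.
function desugarPipe(src) {
  return src.replace(/([\w$]+)\s*\|>\s*([\w$]+)/g, "$2($1)");
}

console.log(desugarPipe("result |> print")); // "print(result)"
```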

> Not in all cases. Physics is just a model of our understanding of physical existence.

Oh, come on, don't play dumb. You know I was talking about the way the universe really works, not the way we think it works.

> I choose to focus on that and progress in mastery of my craft. If you want to complain and/or change your tools, go ahead. I don't judge you.

I'm not sure what you're saying. It sounds like you want to focus on particular programming languages. This would be a mistake, pure and simple. You want to master the underlying principles of programming languages. It can let you pick up the next big thing in a few days. It can let you perceive design flaws (such as dynamic scoping). It can let you manipulate your tools, instead of just using them.

Your way leads to obsolescence.

---

My advice to you: if you haven't already, go learn a language from a paradigm you don't know. I suggest Haskell. Also write an interpreter for a toy language. Trust me, that's time well spent. For instance, knowing Ocaml made me a better C++ programmer.


First, I'd like to point out that your tone is attacking & condescending. Why?

> You vastly overestimate the effort it takes to change your tools.

Cool! If you don't mind the asset overhead, having to recreate the existing javascript ecosystem, & the abstraction mapping, & the other unknown unknowns, then it's all good. Are there any well-known production sites that use such techniques? I don't doubt there will be, but are such techniques "ready for prime time"?

I personally have not experienced enough pain to be motivated to all that.

> Oh, come on, don't play dumb. You know I was talking about the way the universe really works, not the way we think it works.

The thing about existence is we don't know about it in its entirety. Even if we know the rules, there are many mysteries to explore. It's wonderful :-)

> It sounds like you want to focus on particular programming languages. This would be a mistake, pure and simple. You want to master the underlying principles of programming languages.

I am mastering the underlying principles of programming languages.

I want to focus on getting better, faster, & smarter. For the web, it's nice to have everything in one language. Lots of sharing of logic. Keeping DRY. Being efficient with time. Smaller team sizes. More stuff getting done.

Maybe compiling to javascript will help for other languages.

I'm a fan of dynamic languages. There's more than one way to master the craft. Asserting your one true way is a failure of imagination.

> Your way leads to obsolescence.

I doubt it. You vastly underestimate my ability to adapt & evolve ;-)

> My advice to you: if you haven't already, go learn a language from a paradigm you don't know. I suggest Haskell.

Maybe one day. In the meantime, I'm focusing on becoming a more fully rounded thinker. That means subjects outside of programming. Learning yet another language has diminishing returns.

I'm humble enough to not give you unsolicited advice, which would only serve my ego.

Ooh, and I agree. OCaml, Erlang, & Lisp are fun languages. Javascript is also fun.


First, I very much love the material of the talk, and the idea of Metal. It's fascinating, really makes me think about the future.

However, I also want to rave a bit about his presentation in general! It was very nicely delivered, for many reasons. His commitment to the story of programming from the perspective of 2035 was excellent and in many cases subtle. His deadpan delivery really added to the humor; the fact that he didn't even smile during any of the moments when the audience was laughing just made it all the more engaging.

Fantastic talk, I totally loved it!


Also, Java-YavaScript


It sounds so natural that I immediately started thinking I had actually been saying it wrong all these years.


I think I'm going to adopt this new pronounciation.


pronunciation*

(it's one of the few weird words that change spelling when you add a suffix, such as fridge/refrigerator)


This is actually how you say it in Russian. Try Google Translate.


fwiw, it's the way everyone in Russia pronounces it


Yup. I thought it was part of the joke (that in a few generations, we might pronounce old language names differently).


How do you usually pronounce it?


Many people from Europe do that. It does sound cooler.


Since JavaScript and Java have almost nothing in common, I think that's a very reasonable pronunciation. The words look similar, but have very different functional meaning.


I was lucky enough to hear Gary give this talk in January at CUSEC, and it was even better in person. Everyone in the room was clearly hanging on his every word, the actual technical content was pretty insightful, and his humour was spot on.


The reason why METAL doesn't exist now is that you can't turn the memory protection stuff off in modern CPUs.

For some weird reason (I'm not an OS/CPU developer), switching to long mode on an x86 CPU also turns on the MMU stuff. You just can't have one without the other.

There's a whole bunch of research on software-managed operating systems built on VMs, from back when VMs started becoming really good. Microsoft's Singularity OS was the hippest, I think.[0]

Perhaps ARM CPUs don't have this restriction, and we'll benefit from ARM's upward march sometime?

[0] http://research.microsoft.com/en-us/projects/singularity/


I didn't want to go into this level of detail in the talk, but... I think you still want the MMU enabled, just not used for process isolation. With virtual memory totally disabled, a 1 GB malloc takes 1 GB physical memory even if it's not touched, you can't have swap at all, memory fragmentation kills you dead, etc. It still has a lot of utility outside of isolation.

I don't have a good sense of how the performance cost of hardware isolation breaks down into {virtual memory enabled,TLB thrashing,protection ring switching}. That's one of the reasons that I reduced the speed-up from "25-33%" in the MSR paper down to 20% in METAL. Maybe the speed-up would be less than that if virtual memory were still enabled.

Unfortunately, that distinction may have been blurred in the talk. That is, I may have implied that METAL would turn the MMU off entirely. If so, it was an oversight. I've done the talk end-to-end at least fifty times, which is how I smooth my execution out. Occasionally it can "smooth" the ideas out a bit too, leading to small inaccuracies. It's sort of like playing the telephone game with yourself (which is a very strange experience).

The MSR paper that I quote came from the Singularity team, so your reference is right on. Reading "Deconstructing Process Isolation" in fall of 2012 was probably the germ of the core narrative of the talk.


On Linux, system calls don't result in a TLB flush - kernel data structures and code are in a different portion of the virtual address space (starting from the top of VM memory, if I remember right) that is tagged as not being available from ring 3. So system calls are quite fast.

EDIT:

Kernel memory begins at PAGE_OFFSET, see here: https://www.kernel.org/doc/gorman/html/understand/understand...

Kernel memory lacks the flag _PAGE_USER so that it isn't accessible from userspace: https://www.kernel.org/doc/gorman/html/understand/understand...


I didn't know that! It certainly makes sense. Context switches still thrash the TLB, though. The performance cost of that has gotten better as time has gone on, but I wonder how many transistors (and how much power) CPUs are burning for that mitigation. The "how computers actually work" digression originally had a section on context switches, but I removed it early on because I felt like that section was dragging.

To try to paint a very rough picture of the larger thoughts from which this talk was taken: I think that microkernels and the actor model are both the right thing (most of the time). When implemented naively, they both happen to take a big penalty from context switch cost. But Erlang can host a million processes in its VM, and we're using VMs for almost everything now anyway.

The obvious (to me) solution is to move both the VM and an Erlang-style, single-address-space scheduler into the kernel. Then you can have a microkernel and a million native processes without the huge overhead of naive implementations. There are surely many huge practical hurdles to overcome with that, and maybe some that can't be overcome at all, but it sure sounds right when written in two paragraphs. ;)


What you seem to be missing with re: asm.js is that, while the JIT to native code gets you your super-fast integer operations, it's still critically incomplete with regard to memory access. Every single individual memory access has to be bounds checked or pushed through some other indirection inside the runtime. Google demonstrated similar ideas with NaCl, which achieved safety with a similarly restricted native code and a just-in-time verification step. Even if these memory accesses could be made as efficient as those performed by the CPUs access protection, you're still not gaining anything you don't already have.
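The shape in question, roughly: an asm.js-style module funnels every memory access through one typed array over one buffer, and it is those indexed loads/stores the engine has to make safe, whether by explicit checks or by tricks like guard pages (sketch only; names are illustrative and I haven't run it through the real asm.js validator):

```javascript
// Sketch of the asm.js memory model: the "heap" is one big typed array.
function AsmStyleModule(stdlib, foreign, heapBuffer) {
  "use asm";
  var HEAP32 = new stdlib.Int32Array(heapBuffer);
  function load(byteOffset) {
    byteOffset = byteOffset | 0;
    // Every access like this must be kept in bounds by the engine;
    // there is no raw pointer arithmetic to escape the buffer.
    return HEAP32[byteOffset >> 2] | 0;
  }
  function store(byteOffset, value) {
    byteOffset = byteOffset | 0;
    value = value | 0;
    HEAP32[byteOffset >> 2] = value;
  }
  return { load: load, store: store };
}

const m = AsmStyleModule(globalThis, null, new ArrayBuffer(0x10000));
m.store(8, 42);
console.log(m.load(8)); // 42
console.log(m.load(0)); // 0: untouched heap reads back as zero
```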

Regarding context switches: A full CPU context switch on x86 (not to ring1 but between two arbitrary points within a single userland address space) takes a few dozen instructions and about 40-80 cycles. A single cache-line miss resulting in a load from main memory on the other hand takes at least twice that (~200 cycles). Again, hits from jumping around in memory will dominate.

How significant is a 20% overhead from virtual memory? Probably about the same as getting 1% more of your memory accesses back in to high level caches.


On x64 in Firefox, at least, there are no bounds checks; the index is a uint32; the entire accessible 4GB range is mapped PROT_NONE with only the accessible region mapped PROT_READ|PROT_WRITE; out-of-bounds accesses thus reliably turn into SIGSEGVs which are handled safely after which execution resumes. Thus, bounds checking is effectively performed by the MMU.
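At the language level, plain typed-array accesses are already memory-safe, which is the behavior the PROT_NONE guard-region trick implements cheaply (quick check in Node):

```javascript
const heap = new Int32Array(1024);
console.log(heap[1 << 20]);  // undefined: out-of-bounds reads never touch raw memory
heap[1 << 20] = 7;           // out-of-bounds writes are silently dropped
console.log(heap[1 << 20]);  // still undefined
console.log(heap.length);    // still 1024
```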


Interesting approach, might have to look at the code. Nonetheless it highlights how useful the MMU is and how none of this is free.


I agree, and I think the whole premise of the performance gain is based on a "have your cake and eat it too" fallacy. Sure, the virtualized syscall to the virtualized OS will be free, but the painting of the font on the screen or the reading of the socket data will be done by the actual bare-metal OS, which the VM will invoke to get the actual job done.

So as long as we are talking about interprocess communication there will be a gain, but not for the actual hardware facing operation.

Then again, you are trading hardware-enforced isolation, which is simple and proven, for isolation enforced by a complex and fragile VM.


You know about http://erlangonxen.org/ ? Also something like http://corp.galois.com/halvm . That's probably still not quite low enough level to turn off the MMU, but it's getting there.


There's a new CPU architecture in the pipe (it should have good silicon within five years, if their projections can be trusted), and it has a very good design around system calls and virtual memory.

It has a single 64-bit address space and only protection contexts, and due to the general design of the system it doesn't require any register push (or registers at all, at least in the traditional sense). In addition, it has primitives which would allow programs to call directly into drivers and kernel services without a context switch.

Anyway, I don't mean to sound like an advertisement, and we've yet to see any silicon, so the jury's out.

Aside: Starting process address spaces at 0 is not really a convenience as far as I know (other than offering consistent addresses for jumping to static symbols); it's a way to enable PAE on 32-bit machines so that single contexts (typically processes) can use the whole address space.


You forgot to provide a link: http://millcomputing.com/docs/


There's an even bigger revolution in CPU design: Ivan Sutherland's Fleet, which does away with the clock and sequential execution of instructions. Instead, the programming model is based on messaging and traffic control: you direct signals to the units in the chip which perform the computations you want, asynchronously by default. If you need synchronicity, you program your own models for it.

While these probably won't be available in the next 5 years, and probably won't be acknowledged by existing programmers for decades - I think these ideas will take over.

http://arc.cecs.pdx.edu/publications https://www.youtube.com/watch?v=jR9pAaQlVRc


After the Operating Systems and Computer Organization courses in my first year of university I became a little obsessed with the idea of software-managed operating systems. Cool to hear Singularity inspired you as well.

Now, I didn't read the paper, but I think the 20% is purely the MMU; the protection-ring switching is much less significant. So if you leave the MMU on, I think that 20% profit is still very optimistic.

Now, if you forget about compiling C (which defeats the purpose of your talk) and just compile managed languages like regular JS, the garbage collector can build a great model of memory usage. Therefore I think it could be much better to let the garbage collector manage both the isolation, and the swapping. So everything in software.

The swapping process would suffer some performance, but that's just CPU cycles, as everyone knows persistent data access isn't even in the same league as CPU memory access.

So yeah, that would mean that you would have to run all untrusted code in managed mode. And with untrusted I would mean code you can't trust with full physical memory access.


In a way that is no different from the older Xerox PARC systems or the Oberon based ones at ETHZ.

All of them are based on the concept of using memory safe languages for coding while leaving the runtime the OS role.

Except for C and C++, language standard libraries tend to be very rich and to a certain extent also offer the same features one would expect from OS services.

As such, bypassing what we know as standard OS with direct hardware integration, coupled with language memory safety, could be an interesting design as well.

That is why I follow the Mirage, Erlang on Xen and HaLVM research.


Looks like Erlang is already getting one step closer to the metal:

http://erlangonxen.org/ http://kerlnel.org/

Also there is another project that can be related to that goal:

"Our aim is to remove the bloated layer that sits between hardware and the running application, such as CouchDB or Node.js"

http://www.returninfinity.com/


I guess this is in a way a response to Bret Victor's "The Future of Programming"?

https://vimeo.com/71278954


It is in a sense. I had an early form of the idea that became this talk in the spring or early summer of 2013. Bret's talk (which I loved!) was released shortly after. That made me think "I have to do this future talk now in case the past/future conceit gets beaten into the ground."


Thanks for link. Liked the 70s vibe and humour.

From about 14:40 he gets animated, basically conducting! Would love to know a programmer's explanation for the function or purpose of arm waving and hand signals in a presentation. Not knocking, just curious!


Well, he's just trying to reinforce that we have two symmetrical interconnected systems that still have to figure out how to talk to each other, mirroring what he has on the screen.


It's not far off my predictions: https://news.ycombinator.com/item?id=6923758

Though I'm far less funny about it.


Coincidentally, I just released a podcast interview with Gary right after he gave this talk at NDC London in December 2013: http://herdingcode.com/herding-code-189-gary-bernhardt-on-th...

It's an 18 minute interview, and the show notes are detailed and timestamped. I especially liked the references to the Singularity project.


I'm missing some obvious joke...but why is he pronouncing it yava-script.


He's in character of it being 2035 and the pronunciation was lost/changed.


I think you're probably right -- he almost slips up at one point, but corrects himself before pronouncing the "va".


I was hoping he'd drop in some reference that would explain it, like the take over of world government by Norway after the war (sort of like Poul Anderson's Tau Zero http://en.wikipedia.org/wiki/Tau_Zero). But I guess he just wanted it to be inscrutable.


I thought it was supposed to be some future pronunciation thing, imagining the way languages evolve. I've seen sci-fi movies where, in the future, English is heavily influenced by Spanish.


I thought it was supposed to be a callback to this scene in Anchorman, but I'm not sure. https://www.youtube.com/watch?v=N-LnP3uraDo


No hard 'J's in many languages (like Slavic languages). It's pronounced 'y'. Anyone have a list?


The sound ʤ seems to occur in most Slavic languages [1], I guess the primary reason why "Java" is read as "Yava" is because people tend to apply local pronunciation rules to commonly used foreign words, either because of lack of knowledge of native pronunciation or because native pronunciation sounds just silly.

[1] http://en.wikipedia.org/wiki/Voiced_palato-alveolar_affricat...


YavaScript is a very common pronunciation in Germany, the dj sound only appears in "loannames" like Jennifer. My grandfather always told me to find a nice yob :)


Ask a Hispanic friend.


Rather, ask a Scandinavian friend.


My scandinavian friends call it yay-va-script.


I'm Icelandic and we use yava-script, I find it hard to figure out how yay-va sounds.


Perhaps it would help to see that pronunciation rendered in IPA for English [0]: /ˈjeɪ.və.ˌskrɪpt/

Note particularly the phoneme eɪ, which corresponds to the "long A" sound in English, e.g., the 'a' in 'face'. I'm unfamiliar with the Icelandic language, but according to the English equivalents listed on Wikipedia's page on IPA for Icelandic [1], the corresponding phoneme in that language appears to be ei.

[0]: http://en.wikipedia.org/wiki/Help:IPA_for_English

[1]: http://en.wikipedia.org/wiki/Help:IPA_for_Icelandic


YAY (as opposed to nay), VUH (as in vagina), SCRIPT (pronounced normally).


I'm from Scandinavia and have never heard anyone pronounce it like that.


gotcha.


At the 8:00 mark, he accidentally pronounces it correctly for a moment, and then "corrects himself" by mispronouncing it :-)


I'm assuming the original pronounciation was lost in the war.


How else would you pronounce it?


For context, this was one of the most enjoyed talks at PyCon this year.


JavaScript at PyCon?


Yep. It was on the schedule.


I think you'll find that the python ecosystem is very large and varied.


apropos, Bokeh.


Well, you need JS for the client side even if you use python for server one (eg. flask or django)


I was fortunate enough to get to see this at CUSEC (the Canadian University Software Engineering Conference) and would similarly agree that this was one of the most enjoyed talks there, too.


> xs = ['10', '10', '10']

> xs.map(parseInt)

[10, NaN, 2]

Javascript is beautiful.


There are so many good WTFs in JS, but this is not one. parseInt expects 2 arguments and Array.prototype.map provides 3 to the callback it is given. Both of these facts are very well documented and known.

    var mappableParseInt = function(str){
        return parseInt(str, 10);
    };

    ['10', '10', '10'].map(mappableParseInt);
I'd suspect this snippet is more a snipe at people who don't know JS very well and expect parseInt to be base-10 only.


I don't think `parseInt` accepting an optional second argument is the surprising behavior there. The real WTF is `map` passing more than one argument, and the loose behavior of JS regarding argument passing overall.


It's only WTF because it's not the same as other implementations of map. Once you can internalize the map implementation, it's no longer WTF & actually makes sense.


It makes sense but it still strongly violates the principle of least surprise. No other language I know of does this, nor do I think this would be a particularly desirable feature.


That's the weird part. Why does array provide three arguments? But I agree, that's something you can learn. I guess.

But it's WTF anyway. I have a function that takes either one or two arguments, I provide three, and everyone seems to be OK with that.
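For what it's worth, the three-argument callback is easy to observe directly; a quick sketch of what map actually passes:

```javascript
// map calls its callback with three arguments: (element, index, array).
const argCounts = [];
['a', 'b'].map(function () {
  argCounts.push(arguments.length); // 3 on every iteration
  return null;
});

// So xs.map(parseInt) is really:
//   [parseInt('10', 0), parseInt('10', 1), parseInt('10', 2)]
const result = ['10', '10', '10'].map(parseInt); // [10, NaN, 2]
```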


Well that decision is pretty necessary when you realize that JS has no syntax to indicate a function is variadic (we use the arguments magic variable, but use of it does not necessarily indicate that a function is variadic) and that implementation supplied functions are not required to have their arity exposed via Function.prototype.length (http://es5.github.io/#x15.3.5.1).

There's no way to know, even at run-time, whether a function is being called with too few or too many arguments, since that's equivalent to the halting problem. So the sensible alternative is just to default everything to undefined, and silently ignore extraneous arguments.

But yes, if JS was strict with how it handled argument definition lists and had support for indicating infinite arity, I'd agree, this would be a WTF, or at least strange. But I think it makes a lot of sense, all things considered.
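A small sketch of that looseness, showing that declared arity is informational rather than enforced:

```javascript
// Missing arguments become undefined, extras are silently ignored,
// and Function.prototype.length only reports the declared count.
function pair(a, b) { return [a, b]; }

const short = pair(1);       // [1, undefined]: missing arg defaults to undefined
const long = pair(1, 2, 3);  // [1, 2]: the 3 is dropped silently
const arity = pair.length;   // 2: declared, not enforced
```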


So much for abstraction if you need to understand the implementation of every function you'll ever use in JavaScript.


> understand the implementation of every function

Rather: remember three things that make up the majority of Array iterators' callback functions' signatures:

Element, Index, Array.

Shared by: .map, .every, .forEach, .filter, and probably some that I am forgetting. The exception I think is just .reduce[Right], which by definition requires its previous return value, so you have (retVal, elem, i, arr).

Quite literally, if you remember .map callback, you remember .every callback :)

Javascript deserves flak for its truly bad parts (with, arguments, ...) and some missing parts, but .map and its friends aren't it.
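Spelled out, the shared signature and reduce's extra accumulator look like this:

```javascript
// The shared (element, index, array) callback signature:
const seen = [];
[10, 20].forEach((elem, i, arr) => seen.push([elem, i, arr.length]));
// seen is now [[10, 0, 2], [20, 1, 2]]

// reduce prepends the accumulator to the same trio: (acc, elem, i, arr)
const sum = [1, 2, 3].reduce((acc, elem, i, arr) => acc + elem, 0); // 6
```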


You could say it's a snipe at the weak type system that Javascript has.

I dunno, as someone without much experience with Javascript, it is a little odd that arrays return the index alongside the value by default.


It has little to do with the type system. Variadic arguments can be typed given a type system that supports it.


arrays don't do that, but map() does. Normally in js you can just ignore arguments you don't care about, but it does lead to surprises like this one.


Alternatively, if a function expects two arguments, the language could take exception at the fact that three were handed in. Quietly accepting arbitrary arguments could be considered breaking contract. It does have a wtf-ey whiff.


A bit less annoying with ES6:

  >>> ['10', '10', '10'].map(x => parseInt(x, 10))
  [10, 10, 10]


With a function named "parseInt" anyone would expect the input to be base 10; otherwise it should be called "parseHex" for base 16, or at least "parseBytes(input, base)".

The programmers are not the ones to blame for that; this is really a bad contract between the language and the programmer.

It's the equivalent of a function named "getStone()" returning you a "Paper{}" :)


"int" doesn't imply anything about the base.


It does for human beings. We use base-10 for basically everything. This is true even for most programmers in most situations. Human beings aren't computers, and we aren't abstract math processing units who by default consider numbers abstracted (e.g. as elements of Rings). This goes double for string representations of numbers--in the majority of cases a number represented by a string is a number meant for human consumption in a normal human context; not some machine running in base-2 (or -8 or -16). It is certainly reasonable to expect "parseInt" to parse an integer out of a string in base-10 by default, and entirely unreasonable to expect to be required to provide a base as anything except an optional argument, and certainly it is unreasonable to expect that that second optional argument is treated as not optional in a composition operation.


> It is certainly reasonable to expect "parseInt" to parse an integer out of a string in base-10 by default

It does.

> and entirely unreasonable to expect to be required to provide a base as anything except an optional argument

It is optional.

> and certainly it is unreasonable to expect that that second optional argument is treated as not optional in a composition operation

I don't understand what you're saying here. It's never treated as required, it's just that map supplies a parameter in that position, so it gets used. That's how optional parameters work.

The wat (if there is one) is that map provides extra arguments.


>> ...parse an integer out of a string in base-10 by default

> It does.

Not quite: in some browsers (IE), a string starting with '0' gets interpreted as octal. So parseInt('041') === 33 in IE.

Guess how I found out about that.


I agree with your conclusions. However, the poster I was responding to was suggesting that the category of "int" necessarily excludes non-decimal representations in the same sense that the category of "stone" excludes "paper".

I think in this case it's not parseInt that's at fault, it's the fact that map optionally passes additional arguments.


It's due to parseInt having an optional second parameter, the radix, and map passing the index as the second parameter, hence:

  xs = [
    parseInt('10', 0),
    parseInt('10', 1),
    parseInt('10', 2)
  ]


And since 0 is falsy we get 10 for base 0.


It's not exactly checking for falsy values. Although all falsy values will lead to a radix of 10 being applied.

parseInt internally uses the ToInt32 abstract operation on the radix parameter. Once it has that value, it explicitly looks to see if the value is 0. If it is, it uses a radix of 10.

https://people.mozilla.org/~jorendorff/es6-draft.html#sec-pa...

Edit: I hope that doesn't come off as pedantic. My point wasn't to disagree as much as it was to just add some further explanation.
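To make the radix behavior concrete, here are the cases side by side; anything that ToInt32s to 0 (0, undefined, NaN, null, '') falls back to base 10:

```javascript
const base0 = parseInt('10', 0);         // 10: radix 0 falls back to 10
const baseU = parseInt('10', undefined); // 10: same fallback
const base1 = parseInt('10', 1);         // NaN: 1 is not a valid radix
const base2 = parseInt('10', 2);         // 2: '10' read as binary
```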


This is HN, there's no such thing as being pedantic. :)


Although keep in mind that excessive pedantry is frowned upon.


Or rather, there is but it's thoroughly welcome.


Which is odd. I wonder why they checked for falsiness and not the argument being undefined.


Thank you. I read many comments to find an explanation.


It's not optional if you lint your code :)


Just like with any language, as long as you read the docs of the stuff you use, you don't get this problem (you might get others with automatic type conversion and missing arguments like the speaker says, but not this)... this is just stupid.

Try this:

    int subtract(int b, int a) { return a - b; }

    int test = subtract(5, 3); // != 2, just read the damn docs

Oh, C sucks now!

The talk is quite fun and interesting to watch though.

And the end is pretty cool.


The thing is, good language design means you don't have to read the docs.

The number one thing taught in user interaction / usability courses is that users don't read the documentation. Or they skim it and go directly to the one or two parts they want to check (sure, some bizarro outliers do read it all).

Besides, a golden rule from the UNIX era is the "principle of least surprise". Don't define stupid behavior as default, as in this case (both for parseInt and Map).


> good language design means you don't have to read the docs

Let's assume JavaScript is a poorly-designed language, and Clojure is a well-designed language. In the first month of language use, the user of Clojure will have looked at the docs many more times.


That's because:

1) Clojure has a larger API -- Javascript doesn't have 1/10 that.

2) Javascript has a familiar (to many) Algol-derived braced syntax and lots of common C/C++/Java/etc keywords. Clojure is only familiar to Lisp/Scheme users.

If those things were equal, Clojure would win the "don't have to look caveats up" contest, because its design is more coherent, and doesn't give you unexpected results and undefined behavior like Javascript does.

Obviously, you first need to know that "parseInt" is called "parseInt()" and not "atoi()", for example. But I wasn't implying never reading anything, including the function reference. Just being able to code without needing to study and/or memorize lots of arcane edge cases.


But the user of javascript will have more bugs.


Both behaviors make sense in isolation. It's not always so easy.


Juggling these interactions is exactly what makes good language design so difficult and time consuming. In the talk, I mention JS' ten-day design time several times for exactly this reason. Language design is hard and ten days just isn't enough time to carefully consider how everything will fit together. Try to imagine a programmer, even a brilliant one, noticing the map/parseInt interaction ten days after starting to learn JS, especially in 1995 when these high level languages were far less common. Seems unlikely!


Arguably the issue is not with the behavior but rather a deeper design problem within the language itself. Notice that in Obj-C no one would ever get confused regarding the second parameter of parseInt:withRadix:.


By "this problem" do you mean "the this problem"?

And by "this is just stupid" do you mean "this === just stupid"?

If you think that's bad, you should see this!


I just wrote this line about two hours ago and my tests weren't thorough enough to catch the bug it introduced. Just when I thought I knew JavaScript. Thanks for saving me some time.


Always give the base to parseInt in Javascript.

Always. The moment you don't, all kinds of bugs follow.

I have to go cry now at the number of times this has bitten me.
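One class of bug that follows from omitting the radix, even on modern engines, is hex-prefix detection:

```javascript
// With no radix, parseInt still honors a 0x prefix as hexadecimal;
// with an explicit radix of 10 it stops at the first non-digit.
const hex = parseInt('0x10');     // 16: interpreted as hex
const dec = parseInt('0x10', 10); // 0: reads the leading '0', stops at 'x'
```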


Not just JavaScript; a whole bunch of languages' parseInt implementations will interpret the base from a leading zero etc.


Are there any other problems aside from octal?


I'm sure you know this by now but you can keep the syntax and use Number instead

xs.map(Number)
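For reference, Number and parseInt differ in how much of the string they consume, which is worth knowing before swapping one for the other:

```javascript
// Number takes one argument, so it's safe to pass straight to map,
// and it converts the entire string (or yields NaN if it can't).
const viaNumber = ['10', '10', '10'].map(Number); // [10, 10, 10]

const strict = Number('10px');      // NaN: the whole string must be numeric
const loose = parseInt('10px', 10); // 10: stops at the first non-digit
const empty = Number('');           // 0: a Number quirk worth knowing
```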


I think xs.map(Math.floor) takes the cake if I recall speed tests properly


Always remember - it's not a language but a loosely parsable texty thing.


A useful function

    function overValues(f) { return function(x) { return f(x); } }
Then you can do

    ['10', '10', '10'].map(overValues(parseInt));
However, usually you're going to want to do the equivalent of this

    ['0101', '032'].map(function(s) { return parseInt(s, 10); })
Because Javascript interprets a leading zero as an indicator of base.


I think you accidentally a word...

> as an indicator of base.

As an indicator of base 8 (octal).
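Worth noting: ES5 removed parseInt's automatic octal interpretation, but some older engines did read a leading zero as octal, so an explicit radix is the only deterministic option. A quick check on a modern engine:

```javascript
// With an explicit radix there is no ambiguity, old engine or new.
const explicit = parseInt('032', 10); // 32 on every engine
const implicit = parseInt('032');     // 32 on ES5+ engines (26 on some older ones)
```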


You need:

['10', '10', '10'].map(Number);


  var xs = ['10', '10', '10'];
  xs.map(function (str) {
    return parseInt(str, 10);
  });
  
  > [10, 10, 10]
Fixed that for you. Why?

  map callback params: (value, index, originalArray)
  parseInt params: (string, radix)
Your code is passing the map array index to parseInt's radix.


Where people use parseInt, they usually should use Number: ['10', '10', '10'].map(Number); // [10, 10, 10]

;)


I suspect Nashorn, the just released edition of JavaScript for the JVM, will be heavily promoted by Oracle and become heavily used for quick and dirties manipulating and testing Java classes, putting a dent into use of Groovy and Xtend in Java shops. After all, people who learn and work in Java will want to learn JavaScript for the same sort of reasons.


Very impressive to have been recorded "April 2014" and released "April 2013." Seriously, though, great presentation.


Agreed! I too was wondering isn't the discovery of time travel the bigger story here? /s


No, at the end of the day, the discovery of time travel ended up being a really trivial achievement because of paradox. Now, the scientific knowledge we picked up en route was monumental, but that's something else.


He says several times that JavaScript succeeded in spite of being a bad language because it was the only choice. How come we're not all writing Java applets or Flash apps?


Well, about ten or fifteen years ago, "we all were" would have been the answer. Except that back then, there were multiple choices -- plug-ins meant you could choose Java, or Flash, or ActiveX (Visual Basic 6, anyone?), or VRML for that matter.

The number of security issues that plug-ins have had in the last two decades makes most of them non-starters nowadays, although there are still plenty of sites that use them extensively (say, children's game websites like Neopets and Nick Jr.'s website) depending on the target audience.


Also, apparently internet banking and ecommerce in South Korea relies heavily on ActiveX

http://www.washingtonpost.com/world/asia_pacific/due-to-secu...


Because Java, Flash, etc. couldn't easily manipulate the DOM.

Javascript won for this reason.


There were other advantages. To write JS you just need a text editor, and it's easy to pick up. To write Flash requires spending several hundred dollars. To write Java requires the JDK and to learn Java.


Flash and Java also required a compile.

Javascript just required that you click refresh.

Especially on 1995 technology, that mattered. Compiling Java took a while. I didn't use Flash enough to retain an impression of speed, but it sure wasn't instantaneous.


It's also the reason why Flash was so prevalent until recently and is still installed in 90-something % of desktop computers: it's faster. Significantly faster, and very specially so in the 90s and early 2000s.


Used to do a good amount of Flash development - you could actually do it with just a text editor and a compiler (which was free). There were also quite nice free IDEs, like FlashDevelop.


We're talking late 90s/early 2000s here. If anything like that existed during Flash's heyday, I certainly wasn't aware of it.


If I had to pinpoint it, I'd say Flash's primetime was around 2005-2008 perhaps, and FlashDevelop was available then. Guess we probably define its prime differently haha, I'm thinking more of when it matured: AS3 as a language, lots of tooling choices, etc.


I wasn't ever anything close to a professional Flash developer, I'll take your word for it if you say that was the best time to be developing for it.

I was thinking about the days of Homestar Runner, Weebl and Bob, Newgrounds, and so on, when flash cartoons and games were (for kids, at least) a huge part of internet culture, and everyone wanted to be a Flash animator. Youtube kinda killed the Flash cartoon medium, sadly. Sure, videos are simpler and don't rely on a proprietary binary blob, but there's nothing like loading up a Strong Bad email and clicking random things (or, uh, holding down tab) trying to find secrets.


Ah, don't give me too much credit haha, was more of a side-project thing for me, definitely wasn't a professional, especially at the animation side of things (as opposed to the programming side). I was also more involved with the games side of Flash, which Flash became much stronger at as ActionScript 3 came out which coincided with much better Flash performance. Flash advertising and simple animations were probably stronger earlier.

I'm just interested in the topic because it's kind of neat to look back at the internet and observe its history and the changes its gone through. Just did a little wikipediaing for fun - here are when a few different websites / notable games were released:

Newgrounds: 1995

Homestarrunner: 2000

Miniclip: 2001

Armor Games: 2005

Kongregate: 2006

Fancy Pants Adventures: 2006

Desktop Tower Defense: 2007


or VBScript for that matter... I think there's some confusion about why JS won. JS couldn't easily manipulate the DOM either until jQuery in 2005-2006.

The fact that Java, ActiveX etc.. had full control of the system and causes problems ensuring security was an issue, but it is not the reason why JS beat them all.

Don't discount the power of 1) free and 2) easy to use software that is 3) not controlled by a single corporation. JS is the only web programming language that is all of these.

Yea, maybe Python or Clojure in the browser would be cool. I would argue Clojure is absolutely more difficult for a novice to learn, and Python provides what additional benefit? JS was there first.

The only reasons why plugins existed is you couldn't do these kinds of things in the DOM. JQ, and the subsequent advances in browser technology, HTML, CSS, JS - made it so you can. Also, other things being equal - programmers will choose elegance over bloat, less layers of abstraction over more. Plugin architecture became just an unnecessary layer between the programmer and the browser, after HTML/JS/CSS caught up.

JS did not become ubiquitous by accident, or because it was the only choice. There were many choices (all being pimped by big well-funded companies). JS won because it was better than the alternatives.


The DOM at that time wasn't as fancy as it is now. The real reason is security.


While security is the main answer, it was also that Java and Flash aren't necessarily available. That is, getting them to run on another machine was frequently a huge issue, especially if you tried to put in any kind of complexity.

Javascript, on the other hand, was omnipresent and comparatively accessible. It was the least bad option by a wide, wide margin. For a different comparison, I switched from Java applets to PHP in the early 2000s. I didn't really get into Javascript until many, many years later around 2009: before that, Javascript was mostly a way to make Flash work properly.


Oh, yeah, especially after Microsoft stopped shipping Java.

There was also the version issue to worry about. "Pardon me, Mr./Ms Customer/User -- would you mind terribly going and downloading and installing a 20 MB Java update on your 14.4k dialup connection before using this page?"

Nightmarish, it was.


I always found it a bit hilarious how Sun, after getting Microsoft rather onboard the Java train, albeit with their necessary native extensions, decides to sue them and put an end to it. And promptly kills off Java distribution and adoption by the largest software developer in the world.

Even stranger is how Sun, a hardware/platform company, decided making a popular platform that's hardware and platform independent would help their business. Sometimes I wonder if there was a really well thought-out plan, or people were just doing things.


The "necessity" of those extensions is debatable, and they meant that code wouldn't be portable to Sun's implementation. There was real cause for concern, and there weren't a lot of other options for fixing it.

Sun probably also realized that they weren't about to compete directly with mighty Microsoft on platform lockin of all things, so they played a different game.


Flash still powers Youtube for most users, Silverlight for Netflix and Unity's plugin is required for most 3D games on Chrome's Marketplace (not sure where else to look for successful HTML5 games).


because a lot of bad programmers used it to write page effects, like alert("log in required"), not apps.


Stellar stuff. Hugely enjoyable. Very interesting thought experiment. I won't spoil it for any of you, just go and watch! Mr. Bernhardt, you have outdone yourself sir :)


Consider the relationship between Chromebooks and METAL.

(I'm typing this from my Pixel...)


Bernhardt later tweeted:

"I gave The Birth & Death of JavaScript seven times and no one ever asked why METAL wasn't written in Rust."

https://twitter.com/garybernhardt/status/456875300580651009


It was assumed because that was/will be a foregone conclusion.


Extraordinarily entertaining and well presented.


Where did you get the footage of Epic Citadel used in the talk?

http://unrealengine.com/html5 seems to have been purged from the internet (possibly due to this year's UE4 announcements?) and I can't find any mirrors anywhere.

Which is a shame, because that demo was how I used to prove to people that asm.js and the like were a Real Thing.



Not Sure If Serious, but this doesn't work at all in any browser I've tried it in. I don't think archive.org especially knows how to mirror a giant weird experimental single page app.


I have a question, because this video confused me. I don't have background to follow through all the assertions Gary Bernhardt did, but I'll try to watch it again, since it was fun.

I want to become a full stack developer. I can program and write tests in ruby, I can write applications using Sinatra and now I am learning rails. I bought a book to start learning JavaScript because it's the most popular language and basically will allow me to write modern applications. After I'm done with JS I'll probably jump into something else (rust, go, C, C++, Java, whatever helps me do the stuff I want).

But watching this video, I'm confused: I avoided CoffeeScript because I read in their documentation that in order to debug the code you have to actually know JavaScript, so I figured that the best thing to do is learn JS and then use an abstraction (i.e. CoffeeScript) and tools like AngularJS and Node.js... Is my approach wrong? :-/


You can get around it to some extent with source maps and so on - just make sure you're generating them with whatever build process you use.

In practice, however, all that lovely CoffeeScript syntax can easily trip you up; often something will compile successfully, but not to the 'right' JavaScript. I wouldn't recommend CS until you get your head around the fundamentals of JS. In particular, CS does some very 'clever' things with the JavaScript this object; I have certainly lost my scope at unexpected points in CS programs (often in loops). When you're optimising code, furthermore, you definitely need a strong sense of what JS code you'll get out the other end.

I'd recommend Reginald Braithwaite's Javascript Allonge as an overview of JS semantics - the material on scopes and environments is very useful, given that JS behaviour on that score is ... idiosyncratic. https://leanpub.com/javascript-allonge/read


I guess I don't really get the point here. This video walks a line between comedy and fact where I'm not really satisfied in either.

I can't always tell what's a joke. Does he actually believe people would write software that compiles to asm.js instead of JavaScript because there are a few WTFs in JS's "hashmaps"? More likely a newer version will come out before 2035. Or was that a joke?

I also feel like poking fun at "yavascript" at a python conference is cheap and plays to an audience's basest desires.

Really I see a mixture of the following:

- Predictions about the future, some of which are just clearly jokes (e.g. the 5-year war)

- Insulting JavaScript, preferring Clojure

- Talking about weird shit you could, but never would, do with asm.js

- Talking about a library that allegedly runs native code 4% faster in some benchmarks, with a simplistic explanation about ring0-to-ring3 overhead


I'm not sure I understand the claims toward the end of the talk about there no longer being binaries and debuggers and linkers, etc. with METAL.

I mean, instead of machine code "binaries", don't we now have asm blobs instead? What happens when I need to debug some opaque asm blob that I don't have the source to? Wouldn't I use something not so unlike gdb?

Or what happens when one asm blob wants to reuse code from another asm blob -- won't there have to be something fairly analogous to a linker to match them up and put names from both into the VM's namespace?


Nice, nice. Ultimately, languages don't die unless they are closed source and used for a single purpose (AS3). In 2035, people will still be writing JavaScript. I wonder what the language will look like, though. Will it get type hinting like PHP? Or type coercion? Will it enforce strict encapsulation and message passing like Ruby? Will I be able to create ad-hoc functions just by implementing call/apply on an object? Or subclass Array? Anyway, I guess we'll still be writing a lot of ES5 in the five years to come.


Ad-hoc functions can be written using an ES6 Proxy [1].

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
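A minimal sketch of what that looks like - note that the proxy target must itself be callable for the apply trap to fire (the names here are made up):

```javascript
// An ad-hoc "callable object" via an ES6 Proxy: the apply trap
// intercepts calls, the get trap intercepts property access.
var callable = new Proxy(function () {}, {
  apply: function (target, thisArg, args) {
    return 'called with ' + args.length + ' argument(s)';
  },
  get: function (target, prop) {
    return prop === 'kind' ? 'callable proxy' : undefined;
  }
});

console.log(callable(1, 2, 3)); // "called with 3 argument(s)"
console.log(callable.kind);     // "callable proxy"
```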


I think there is a good chance that an alpha version of ES6 will be tentatively rolled out by 2035, 2036 at the latest.


AS3 is not dead, and it is now open source


Dead for all practical purposes.


source?



I like that he mentions "integer". It's still pretty incredible how well JavaScript can work without an integer type. Or threads and shared memory. Or bells and whistles.


"JavaScript can work well" - depends on what is understood by 'well'. Some craftsmen are capable of building cars from junk.


Yes, as a high-level scripting language it has many uses. You're never gonna write a kernel in it, admittedly.


And it would work better with integers. Or are we claiming now it was a good decision to force all numbers to be floats because look how awesome node is?
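To make the point concrete: every JS number is an IEEE-754 double, so exact integers run out at 2^53, and asm.js-style code recovers int32 semantics with bitwise coercion (a small sketch; `asInt32` is a made-up helper):

```javascript
// Exact integer precision runs out at 2^53:
console.log(9007199254740992 === 9007199254740993); // true (!)

// The asm.js idiom `x | 0` truncates to a 32-bit signed integer,
// which is how asm.js code gets real integer arithmetic back:
function asInt32(x) {
  return x | 0;
}
console.log(asInt32(3.9));        // 3
console.log(asInt32(2147483648)); // -2147483648 (wrapped around)
```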


Modern browsers support Web Workers, so you get "threads", but still no shared memory.


I wish some of those talks were available for purchase on their own and not only in the season packages. There are definitely a few I'd buy, since I liked this talk and the demo on the site.

Guy has good vim skills for sure.


The pricing model used to be different -- $8/mo. He changed it when he stopped producing the series. I agree the current pricing doesn't make sense. I feel slighted for having subscribed for several months, but would now have to pay _more_ for content that I used to have access to. Ah well, too bad. That said, the material was compelling enough to buy at the time!


A bit OT but what is the problem with omitting function arguments?


Not necessarily anything as such, but it's the sort of thing that can easily lead to bugs if you don't know what you're doing. It's the only way to overload a function with multiple signatures, though, so most libraries and frameworks make heavy use of it.


They'll have the value undefined, which will do God knows what after some implicit type coercion.
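Both points are easy to demonstrate (hypothetical functions, for illustration only):

```javascript
// An omitted parameter arrives as `undefined`, and coercion takes over:
function add(a, b) {
  return a + b;
}
console.log(add(1, 2)); // 3
console.log(add(1));    // NaN, because 1 + undefined is NaN

// Overloading by arity, the way many libraries do it:
function greet(name, greeting) {
  if (arguments.length < 2) greeting = 'Hello';
  return greeting + ', ' + name;
}
console.log(greet('Ada'));       // "Hello, Ada"
console.log(greet('Ada', 'Hi')); // "Hi, Ada"
```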


I absolutely loved this.


I always enjoy Gary's talks.


I want a C interpreter


LLVM ships with lli, which can interpret LLVM bitcode generated by clang, a C compiler.



Why not put that in browsers?


To make a complete platform, you also need APIs, so there's more to it than just picking a language. You also need to figure out how to sandbox it; CINT appears to give programmers access to unrestricted pointers. You also want to get multiple browser vendors to agree, so some kind of specification is desired; CINT targets its own unstandardized subset of C. And you ideally want it to go fast, but CINT appears to be pretty slow:

http://benchmarksgame.alioth.debian.org/u32/compare.php?lang...

So there'd be some work to do. You could also compile the code, but complete C compilers are not fast, in browser terms.


ha-zum yavascript


Video tl;dw:

Gary Bernhardt (rightly) says that JavaScript is shit (with some other insights).

HN comments tl;dr:

50%: "Waahhh, JavaScript is awesome and Node.js is wonderful, shut up Gary Bernhardt."

25%: Smug twats talking about how they're too busy changing the world with JavaScript to even bother to comment.

25%: Pedants and know-it-alls having sub-debates within sub-debates.

Pretty standard turnout. See you tomorrow.


Thanks for the summary, but I didn't exactly see that he was saying JavaScript is shit so much as that it was imperfect (10 days, etc.) but that didn't even matter.


It's been kind of fun watching JS developers reinventing good chunks of computer science and operating systems research while developing node.

This talk has convinced me that their next step will be attempting to reinvent computer engineering itself.

It's a pretty cool time to be alive.


"I get back to the DOM"


somebody tell this to the node.js crowd


Can so many lemmings be wrong?


This is actually not a bad lecture. Very interesting, a nice idea and surprising.


"It's not pro- or anti-JavaScript;"

OK


Did you watch it?



