Hacker News
C++ on the web: emscripten, nacl, flascc (slideshare.net)
122 points by BruceM 1694 days ago | 42 comments

I can't wait for the time when we will use C++ in the browser and JS on the backend. Oh the irony.

Turns out we need performance on the front-end, and ease-of-development on the backend, not the other way 'round.

Really, though, the strong motivating consideration is that while backend code scales horizontally (i.e. you can throw money at an app and get more concurrency), the user only has one computer, so you have to optimize frontend code for performance. If we could just pay money to make users' computers faster I guarantee we wouldn't be worrying about browser JITs. :)

I mean, I understand the reasons, and more than anything I'm excited. But it's still the kind of thing that, if you time-traveled to the '90s and told some developers about the future, would get you locked up in a mental asylum.

If you traveled back to the 90's, it would be a good fictional story about computing dystopia.

In fact, it is still a good story about computing dystopia. But it's no longer fiction.

Actually if you are using C++ on the server there would be an argument for using it in a sandbox like NaCl. After all, you don't want to get pwned on the server any more than the user wants to get pwned by a web game.

C and C++ are already being heavily used on basically all servers.

Linux, the BSDs, the various commercial UNIX systems, OS X, and Windows all make very heavy use of C, and also C++ in some cases.

Apache, nginx, lighttpd, and many other web servers are written in C or C++.

The same goes for the major database systems commonly used today.

The main interpreters for languages like PHP, Python, Perl, and Ruby are written in C. Even if you're using a Java implementation of those languages, or some other language that targets the JVM, the runtime you're using is most likely written using C and C++.

So I find it kind of silly to hear Ruby and PHP advocates say how it's "dangerous" to use C or C++ for server-side development. Their preferred stacks are already essentially all C and C++! The amount of C and C++ code powering their applications dwarfs the amount of Ruby or PHP code they might have written.

Well, it would be dangerous to let those developers write application code in C++.

True, but the other languages don't necessarily help with that, either. PHP doesn't prevent programmers from writing code susceptible to SQL injection attacks. Ruby doesn't prevent programmers from writing poorly-performing web apps that are brought down or rendered unusable by even a minimal level of traffic. It's dangerous to let those kinds of developers write any code.
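To make the SQL-injection point concrete, here's a minimal sketch in JavaScript (the same mistake is just as easy in PHP or Ruby). `buildNaive` and `buildSafe` are hypothetical helpers for illustration, not any real driver's API:

```javascript
// Dangerous: user input is spliced straight into the query string.
function buildNaive(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

// Safer: keep the query text and the values separate, the way
// parameterized-query APIs do; the driver binds the values later.
function buildSafe(username) {
  return { text: "SELECT * FROM users WHERE name = $1", values: [username] };
}

const evil = "x' OR '1'='1";
console.log(buildNaive(evil)); // the injected OR clause now matches every row
console.log(buildSafe(evil));  // the payload stays an inert string value
```

No language forces you into the second form; it's a discipline thing, which is the point being made above.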

You can make C safe, it just takes lots of effort, as you see in the software you mentioned. Other languages may require less effort to write safe programmes in.

I gave a similar presentation at GDC this year. Same conclusion, slightly different supporting details. If you're interested: http://www.slideshare.net/chadaustin/multiplatform-c-on-the-...

Thanks. Both sets of slides were very interesting. While upon seeing presentation slides I inevitably wish I could be privy to the talk, both were informative enough to convey substance via slides alone.

The OP had a slide dedicated to platform-dependent components of building a 3D game engine (memory management, threading, game loop, audio, etc.). While much of the action happened "off camera", is the conclusion essentially that [emscripten -> [ asm.js + webGL] etc.] makes this process simpler (if still tricky to debug)?

Floh's website is at http://www.flohofwoe.com/ and he's got links to demos of using Emscripten there as well as some other interesting stuff.

Impressive stuff: http://www.flohofwoe.com/demos/dsomapviewer_asmjs.html

But the most amazing thing is that they basically just compile their existing engine, with years of development behind it, to the browser using Emscripten, and it works. That's a bit of a death blow to the new (and still very limited) JS 3D engines, but of course their ease of use is a big benefit, and they still make sense for smaller/web-integrated projects.

C++ is awesome, but I kinda went the other way with my stuff for more flexibility. JavaScript as the main programming language and C++ only for the performance and driver access bottlenecks. (More info: http://multiplay.io)

One of my first languages was C++. I cut my teeth on Bjarne's book.

These days, I'd much prefer seeing C fill such a perf-specific role. I can't stand C++, and I've read through some massive, and what most people would call "well written" code bases in C++. Every one is its own world of WTFs with abuse of templates and operator overloading and weird RAII and scoping strategies. I can never read code in C++ and really understand what's going on until I've made a cursory pass, identified everything that semantically changes the meaning of the code and internalized that. That's incredibly frustrating considering readability is paramount in really any project, but especially when the code base is tens of thousands of lines. And given that I've never seen a real world example where these features have actually provided any benefit other than terseness, I can't understand why anyone would ever use them.

Outside of the IOCCC, I haven't had trouble reading C except things that abuse the preprocessor, and that's frowned upon. So why not C?

I never really understood why C gets such a bad rep for readability. It's a very compact language, just having a couple of keywords and a few very fundamental concepts. There's no "guessing" what a line does, no "obfuscation by design", etc.

I've mostly seen the anti-C-syntax attitude come from two communities. The first is made up of those who prefer Pascal and Pascal-derived languages. The second is the Ruby crowd.

The Pascal community was most vocal in the 1980s and 1990s, but has since dwindled in size. Today, it's often the Ruby community that's most vocal, and they are often very loud, indeed.

Curiously, we don't really see this attitude from the Python community nearly as much as one would expect. I'm not completely sure why this is, but it may be due to many Python developers having extensive knowledge of C and the various languages influenced syntactically by C (C++, Java, C#, Objective-C, and so forth). Due to their breadth of knowledge, different syntaxes just aren't an issue worth getting upset about.

Nobody in this thread has claimed that C was unreadable, and now these non-present people are already being divided into groups. And whereas the Ruby crowd is loud, Python programmers have extensive knowledge of other languages. Can we please not form artificial factions?

We aren't talking about people in this specific thread of discussion. You do realize that, right? We are talking about people we work with, people we have worked with in the past, people we meet at conferences, people we deal with at other online venues, people we read articles written by, and so forth.

Some of those people are the ones who are responsible for (unjustifiably, in my opinion) giving the C-style syntax a bad reputation by expressing their dislike for it.

And, yes, the Pascal, Ruby and Python communities, among many others, do exist. They do have very different traits and attitudes. You're free to pretend that they don't exist, but the reality is that they do.

(You forgot the LISP/s-expr people.)

They've always been a pretty small community, though. They're much less vocal today than in the 1980s, for example, so their impact is negligible. This differs from the Ruby community, which is much larger, and whose messages loudly reach a much more significant crowd.

"JavaScript as the main programming language"

Using node (or the v8 shell directly), JavaScript is competitive with Python or Perl for most common scripting tasks.

The thing I've always found weird is how JavaScript didn't catch on for Windows sysadmin scripts instead of batch files, given that it's been integrated into Windows since W2K. People spend a lot of time learning PowerShell, while the cscript engine exposes the same level of ability in JavaScript.

I did all of my scripting in Windows in JS. You had full access to COM objects, which means you could do a lot of stuff out of the box. One thing I hated was that it wasn't even fully ES3 conformant (it's jscript), and it seemed not to be maintained. I think they threw out cscript considerations when they built powershell. For me, a simple bash shell or even a jscript REPL would've been preferable.

I think my favorite least-known thing about writing simple tools for Windows during my time as a sys admin was HTAs. You had full jscript COM access with a web page presentation that's packaged into something that you can run without opening a browser explicitly. It was awesome building full web applications that could do pretty much any administrative task on Windows.

HTAs were! I believe the user accounts management screen in Windows XP was actually one.

In my experience, one of the big problems is that it's so hard and inconvenient to call other programs and capture/process (or print to stdout, as required) their output from WSH (JavaScript or VBScript). Also, if cscript had been the default WSH handler (rather than wscript), that would've made it much more like batch files. It's in the details - I tried for years to switch from batch files to WSH, but each time it was those little annoying things that made me switch back.
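For reference, here's a rough sketch of what capturing a child process's output looks like in WSH JScript, via `WScript.Shell`'s `Exec`. It's guarded so that outside of cscript/wscript (e.g. under Node) it only defines the function; the command string is just an example:

```javascript
// Run a command and collect its stdout, WSH-style. ActiveXObject and
// WScript only exist under the Windows Script Host, hence the guard below.
function runAndCapture(command) {
  var shell = new ActiveXObject("WScript.Shell");
  var exec = shell.Exec(command);
  var output = "";
  while (!exec.StdOut.AtEndOfStream) {
    output += exec.StdOut.ReadLine() + "\n";
  }
  return output;
}

if (typeof WScript !== "undefined") {
  WScript.Echo(runAndCapture("cmd /c echo hello"));
}
```

Compare that with a batch file, where capturing output is a one-liner (`for /f ... in ('command') do ...`), and the friction the comment above describes is easy to see.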

I think many sys admins thought of js as a web technology and back during w2k, systems and the Web were considered totally unrelated and many systems guys were dismissive of the Web. Things are much different today.

"javascript is competitive with python or perl"

Well they are all three terribly designed languages, but JS is way faster and probably more secure: It's battle tested in environments where the assumption is that the attacker controls the code being executed, which is normally not the case on the server side. On the other hand, JS does not have anything like perl's tainted strings, which are a great security feature IMO.

That's really interesting, actually. Is JavaScript still considered a scripting language, or given its modern web development applications, is it now a fully fledged programming language?

Define "fully fledged"

JavaScript is a scripting language like Python, Ruby, or PHP. It just lacks a spec for some things, modules for instance; Node has its own module system.

But the node.js modules and package manager are really nice.

emscripten is absolutely awesome. Some time ago, I used it to port libmp3lame to JavaScript to encode MP3 files in the browser: https://github.com/akrennmair/speech-to-server

Wasn't there a rumor that Sony would use the full native OpenGL for the PS4? Was that true?

> Sony is building its GPU on what it's calling an extended DirectX 11.1+ feature set, including extra debugging support that is not available on PC platforms. This system will also give developers more direct access to the shader pipeline than they had on the PS3 or through DirectX itself. "This is access you're not used to getting on the PC, and as a result you can do a lot more cool things and have a lot more access to the power of the system," Norden said. A low-level API will also let coders talk directly with the hardware in a way that's "much lower-level than DirectX and OpenGL," but still not quite at the driver level.

source: http://arstechnica.com/gaming/2013/03/sony-dives-deep-into-t...

edit: that said, I wouldn't be surprised if there was an OpenGL stack available for ps4, especially if they want to attract indie-devs.

Is this presentation available for download outside of slideshare?

I'm just not getting it. I understand the whole "let's make JavaScript the assembly language of the web", but "real" games (not 1987-ish 2D scrollers) seem to have an uphill battle in the browser. IE probably won't ever implement WebGL, and even if it did, what's the point? I can point and click on Steam and download 20+ GB for some MMO. How's that supposed to work in the browser?

Can we take baby steps, and get really rich, hardware-accelerated UIs (maybe canvas based), WebSockets, a nice debugging experience (source maps), etc., before we're trying to play Crysis 3 in the browser?

> IE probably won't ever implement WebGL

Au contraire, http://www.theverge.com/2013/3/30/4165204/microsoft-bringing...

You beat me by 30 seconds. Congrats!

The new version of IE supports WebGL: http://www.theverge.com/2013/3/30/4165204/microsoft-bringing...

*I totally nerded out when I found out by the way.

It might not be the best thing for AAA multi-gigabyte games like Crysis 3 (yet), but it makes sense for a lot of smaller-scale projects, especially online games (typically free to play) where the barrier to entry is low.

Long term, it also makes sense to think of games loading on demand while you play vs. downloading everything to disk and installing it somewhere. It also solves a lot of the industry's piracy problems.

Today there are game engines like Unity3D that already offer Native Client deployment and have their own browser plugin for IE or anything else unsupported. Add to that an HTML5 deployment option through Emscripten and asm.js, and the games will basically run anywhere, requiring a plugin only in edge cases.

Today's games make you wait while they download 20 gigs of content up front, but there's no good reason for it. It's already been announced that the PS4 will let you start playing games before they've completely downloaded, and Web games will work the same way. You only need the first part of the first level to start playing.
