MathBox 2 (acko.net)
657 points by teamonkey on Aug 14, 2014 | 88 comments



I gave up trying to understand it and just clicked through for the eye candy. Cool stuff as usual from Steven Wittens.


If I understand it correctly, a lot of the geometry effects build on the ability to do texture reads in the vertex shader (OpenGL calls it "vertex texture fetch"), a little-noticed but incredibly powerful feature of modern WebGL implementations. The reason it is so powerful is that one texture can be used as a write target for the fragment shader and as a read target for the vertex shader, essentially creating a feedback loop that lives entirely on the GPU.

Not all browsers support the feature though (check the MAX_VERTEX_TEXTURE_IMAGE_UNITS constant). Mobile devices could be problematic too since most (if not all) OpenGL ES 2.0-era devices don't support it in hardware.
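For anyone who wants to check at runtime, a minimal sketch in plain WebGL (nothing MathBox-specific; the shader and the names below are only illustrative) looks roughly like this:

    // Probe for vertex texture fetch support.
    var gl = document.createElement('canvas').getContext('webgl');
    var vtfUnits = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
    if (vtfUnits === 0) {
      console.warn('No vertex texture fetch: GPU feedback tricks unavailable');
    }

    // Illustrative vertex shader: read positions out of a texture that an
    // earlier fragment-shader pass rendered into (the feedback loop above).
    var vertexShaderSource = [
      'attribute vec2 dataCoord;        // which texel holds this vertex',
      'uniform sampler2D positionData;  // written by the previous pass',
      'uniform mat4 projection, modelView;',
      'void main() {',
      '  vec3 pos = texture2D(positionData, dataCoord).xyz;',
      '  gl_Position = projection * modelView * vec4(pos, 1.0);',
      '}'
    ].join('\n');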

Still this is one of the most impressive WebGL demos I've seen. Fantastic stuff.


The major weakness in using textures as intermediate targets is the loss of precision, both in the texture formats and in the intermediate values. OpenGL ES 2 (and thus WebGL) does not require a full 32-bit floating point pipeline, so the results may vary if you run on mobile devices (that are not the latest generation GL ES 3.x devices).
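For context, even float textures are optional extensions on WebGL 1, so precision-sensitive code has to probe for them. A sketch (assumes a `canvas` element is already in scope):

    var gl = canvas.getContext('webgl');
    var floatTextures = gl.getExtension('OES_texture_float');
    var floatRenderTargets = gl.getExtension('WEBGL_color_buffer_float');
    if (!floatTextures || !floatRenderTargets) {
      // Fall back to half floats, or pack values into RGBA8 texels,
      // and accept the loss of precision described above.
      console.warn('32-bit float render targets unavailable');
    }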

In proper OpenGL, you'd be able to use transform feedback to write into buffers with no loss of precision. And using buffers is less limited than texture fetches in the vertex pipeline.

For applications where precision matters (i.e. everything scientific), WebGL on GLES2 devices is a no-go. WebGL standardization should pick up the pace to better match the development of OpenGL.


It's a bit of a shame that WebGL settled for the lowest common denominator (i.e. OpenGL ES 2.0 capabilities).

This was probably to enable WebGL on mobile devices that would have otherwise been locked out, but it heavily restricted things on the desktop which for the most part would have OpenGL 4 capable GPUs these days.

However, given that WebGL on mobile still mostly sucks anyway, I'm not sure going for the lowest common denominator was the right decision.


WebGL 1.0 is almost 4 years old. OpenGL ES 2.0 was then the latest and greatest.

And WebGL 1 has taken this long just to reach mostly-working implementations; it would probably have died in the crib if it had targeted the nascent GLES 3 feature set.

Running GLES shaders safely and reasonably fast in a sandbox (on top of insecure and crash-prone drivers) is high wizardry.


> WebGL 1.0 is almost 4 years old. OpenGL ES 2.0 was then the latest and greatest.

Latest and greatest for mobile, yes, but the desktop world was already on OpenGL 4 at that point.

My whole point was that they could have just ignored mobile and delivered a much more powerful WebGL based on OpenGL 4 instead.


> WebGL standardization should pick up the pace to better match the development of OpenGL

Your wish is granted: WebGL 2 draft supports transform feedback. http://www.khronos.org/registry/webgl/specs/latest/2.0/#3.5
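For the curious, the draft's transform feedback path looks roughly like this. A sketch only: `program`, `vertexCount`, and `byteLength` are assumed to be set up elsewhere, and `outPosition` is a vertex shader output you'd have declared yourself.

    var gl = canvas.getContext('webgl2');

    // Declare which vertex shader outputs to capture, before linking.
    gl.transformFeedbackVaryings(program, ['outPosition'], gl.SEPARATE_ATTRIBS);
    gl.linkProgram(program);
    gl.useProgram(program);

    // Bind a buffer to receive the captured vertices at full precision.
    var tf = gl.createTransformFeedback();
    var outBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, outBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, byteLength, gl.DYNAMIC_COPY);

    gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
    gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, outBuffer);
    gl.beginTransformFeedback(gl.POINTS);
    gl.drawArrays(gl.POINTS, 0, vertexCount);  // outputs land in outBuffer
    gl.endTransformFeedback();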

(See my other reply about problems tracking latest GLES tightly)


Same :)


> Please view in Chrome or Firefox. Chrome is glitchy, Firefox is stuttery.

I really want to get behind WebGL, but when is it going to have decent performance/compatibility? I tried this out in both FF and Chrome on a powerful desktop computer (i5-4670K, GTX760, 16GB RAM) and it was glitchy/stuttery as described. Firefox rendered some scenes at what seemed like 2-3 FPS. Chrome was much smoother, but I couldn't tell what parts were glitches. For example, the "classic demoscene water effect" looked completely different in Chrome. But neither FF nor Chrome produced an effect remotely resembling water.

Although this looks like a great library, personally I prefer to stick with OpenGL programming until WebGL's quirks are sorted out.


... when the GPU drivers fix their bugs, when all the undefined behaviour gets defined, when the security holes get patched, and triple-A gaming studios with intimate connections to the vendors aren't the only ones driving the tech. Any day now.

WebGL devs are playing the longest game of chicken seen on the web yet.


Runs OK on my Surface Pro 2. Core i5, Intel GMA, 4GB of shared RAM.

There were some places where the framerate dropped but they were the more complex demos. The fan kicked in almost immediately though. In general, even if the framerate was low it was stable.

Interestingly IE11 seemed to render almost as well as Chrome and I didn't notice a speed difference. I guess that's what happens if you offload work to the GPU.


I viewed this whole presentation on a MacBook Air plugged into a 32" monitor, and while 1 or 2 of the slides would pause here and there, overall it was amazingly smooth. Mind blown.


We've come a long way from cheesing a little extra performance out of the DOM by applying CSS 3D transforms, that's for sure ;)


This is still emerging tech but it's stabilizing rapidly — both Firefox and Chrome play back at 60fps on a MacBook Air. On a sub-$1K laptop it's noticeably slower because things are disabled due to Intel HD 3000 driver bugs but those bugs are now getting a lot more attention since they're so visible. I'd expect the situation to be a lot more viable in a year or two, so now seems to be a good time to start learning the technology and thinking about how to use it.


I was expecting another disappointment but no, on this somewhat dated system it runs great, and I was really impressed with several of the demos, even manipulating/rotating them in real time with the mouse, all running quite smoothly: Chromium Version 36.0.1985.125 Ubuntu 14.04 (283153), i5 2.8 GHz, 6 GB RAM


Take a look at chrome://gpu in Chrome. It should tell you a bit about what your GPU is capable of.


I have your exact hardware configuration and all his slides got a solid 60FPS. I'm curious what your setup was. What version of Firefox? What version of Windows? Latest NVidia drivers? Any weird extensions installed, screen/desktop capture apps running?


Not really great specs.. works great on my lappy(4GB RAM, i5, Radeon HD 7670M)


On my macbook pro (2011), things were much smoother in FF31 than Canary. The complex stuff did show some stutter though.


On my Radeon 6850, under FF, everything is super smooth. Intel i7 quad-core.


Works great in IE11 on an X1 Carbon.


I'm pretty excited about it. I think there are three impressive things about it.

First is that you can write vertex shaders in a reactive DOM. That makes it much easier to get pictures up on the screen. If any of you have ever messed around with vertex shaders, it can be a bit of a nuisance.

Second is that while the reactive DOM doesn't really exist as XML, it can be expressed as such, and would be easily diffable. This is important for collaboration.

Lastly, because it's making the GPU do all the work, data visualizations can be done by pushing large amounts of data to it. We should be able to see more patterns from data as a result.


This is one of the most beautiful things I've seen for some time. And to think this is all in a browser, usable from JavaScript. I feel like there could be so many applications for this, for more complex, interdependent visualizations, yet easier than D3 and the like. Also, in the end it's described as Reactive DOM. So, now I want to see TodoMVC redone with this. It must be the fastest yet (I'm only half joking!).

I wonder what it needs to handle text presentation and input. HTML overlays are mentioned. Perhaps there are already WebGL text renderers that could be integrated. Of course visualizations this complex make my Macbook scream, but that's all right since I'm seeing something new (in a browser) and delightful. I have a few million data points that could benefit from vantage point like this, which need complex dependencies and controls.


To handle HTML overlays, I basically need to add read back capabilities to find the final on-screen positions of points, so I can sync with CSS 3D matrices. GL text is a rabbit hole I'd prefer to avoid, especially since I often need math notation. It would turn into HTML/CSS-for-GL right away.
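For reference, the read-back idea in plain three.js terms (a sketch, not MathBox code) is just projecting a world-space point and positioning an absolutely-placed HTML element on top of the canvas:

    // point is a THREE.Vector3 in world space; returns CSS pixel coordinates.
    function toScreenPosition(point, camera, canvas) {
      var projected = point.clone().project(camera);  // NDC, each axis in [-1, 1]
      return {
        x: (projected.x + 1) / 2 * canvas.clientWidth,
        y: (1 - projected.y) / 2 * canvas.clientHeight
      };
    }

    // Per frame: label.style.transform = 'translate(' + x + 'px,' + y + 'px)';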


Are you the unconed from TermKit? What's its status?


Right now, very dead. It was more of an idea than a real thing, badly architected, but with some good ideas waiting to be reimplemented on a non 0.x stack. Still perpetually disappointed every new "neo terminal" is monospace tho.


> Still perpetually disappointed every new "neo terminal" is monospace tho.

Have you thought of ways around the path dependence[1] on monospace imposed by existing bodies of textmode UIs (and source code)? It seems unlikely that a new terminal-esque tool would succeed without some kind of legacy support. The best concept I've come up with so far is to build in affordances which handle legacy vs. new-world user interaction and app I/O models.

Related, I continue to hold out (vain) hope that elastic tabstops[2] will someday gain traction.

[1] https://en.wikipedia.org/wiki/Path_dependence

[2] http://nickgravgaard.com/elastictabstops/


Formatting legacy stuff was always part of the deal, but at the same time, I was never interested in being able to host vim. Some people disagreed rather vocally.

One of the things I discovered was just how much legacy cruft is really around. Not just things like ANSI colors, but e.g. grotty syntax. It made no sense until I realized it was created for teletype printers... it underlines things by backspacing after every character and printing a "_". It bolds by backspacing and repeating the character. I had to parse this to support man pages, and I assume the default TTY still does too.
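For the curious, the overstrike convention is simple enough to parse. A minimal sketch (not TermKit's actual code):

    // "_\bX" means underlined X; "X\bX" means bold X (teletype overstriking).
    function parseOverstrike(text) {
      var out = [];
      for (var i = 0; i < text.length; i++) {
        if (text[i + 1] === '\b') {
          var first = text[i], second = text[i + 2];
          if (first === '_')         out.push({ ch: second, underline: true });
          else if (first === second) out.push({ ch: second, bold: true });
          else                       out.push({ ch: second });  // unknown overstrike: keep last glyph
          i += 2;
        } else {
          out.push({ ch: text[i] });
        }
      }
      return out;
    }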

The other thing was that so much of Unix workflow really only works by accident. The fact that you can ssh + sudo + ssh + ... is because the pipes are too dumb to fuck it up. Take for example SSH escape sequences... [1] they only work on the first hop. The proper solution is out-of-band signaling.

From an architecture point of view, the whole termcaps / stdio thing is crazy. The Unix principle is supposed to be about simple agnostic composition, and yet most tools have to sniff out their environment in order to maintain this illusion. Text files are for people, not machines. And if you want to see a never ending discussion, just ask a bunch of greybeards how to write a shell script that can handle files with spaces in their name.

[1] http://lonesysadmin.net/2011/11/08/ssh-escape-sequences-aka-...


Sort of unrelated, but when I saw TermKit I couldn't help noticing the surface similarities with PowerShell. In fact, I believe what you had there could almost work as a PowerShell host, although things like http://poshconsole.codeplex.com/ share some of the same ideas.


> I had to parse this to support man pages, and I assume the default TTY still does too.

Actually, it's less (the pager) that interprets this and converts it into the appropriate terminal formatting, not the TTY itself.


"Education is the art of conveying a sense of truth by telling a series of decreasing lies."

Nice.


And here I am, learning to do 2d visualizations with d3.js.


Looks great, seems like this takes advantage of implicit calculations a lot. For example, there are two ways to draw a graph:

Calculate just the points to be drawn, then draw them (explicit generation).

Calculate the entire surface/volume, and draw values where they exist (or based on magnitude or whatever properties are used) (implicit generation).

The second method is in some circumstances less efficient, especially if the graph is very simple and takes up little screen space, but overall much easier to work with. It's similar to the difference between ray casting and rasterization, in a way.
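A toy illustration of the two approaches for a curve y = f(x) (a sketch, nothing library-specific):

    function f(x) { return Math.sin(Math.PI * x); }

    // Explicit: sample only the curve itself, then draw the points as a line strip.
    var points = [];
    for (var x = -2; x <= 2; x += 0.01) {
      points.push([x, f(x)]);
    }

    // Implicit: evaluate the whole plane (e.g. per pixel) and keep samples near
    // the curve; wasteful for a simple graph, but it generalizes trivially to
    // surfaces, volumes, and fields.
    function nearCurve(x, y, eps) {
      return Math.abs(y - f(x)) < (eps || 0.02);
    }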


Yes and no, everything is still sampled on grids. But the intermediate calculations can be doing tons of implicit look ups. So it's more like lazy evaluation, though there's no auto-memoizing (because the memory/time tradeoff is highly context dependent).

So if you wanted to render an implicit surface this way, you could do e.g. marching cubes or tetrahedra on a grid, and only feed in a scalar 3D field, either as an array or as a procedural function. Or you could do a <raymarch> operator for raymarching a distance field. On the inside, this could be a dumb per-pixel loop, or do recursive quad-tree subdivision. You shouldn't need to care.
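For reference, the "dumb per-pixel loop" version of raymarching a distance field is only a few lines of GLSL (an illustrative sphere field, not the proposed <raymarch> operator):

    // Signed distance to a unit sphere; swap in any field here.
    float sdf(vec3 p) {
      return length(p) - 1.0;
    }

    // March along the ray, stepping by the distance to the nearest surface.
    float raymarch(vec3 origin, vec3 dir) {
      float t = 0.0;
      for (int i = 0; i < 64; i++) {
        float d = sdf(origin + t * dir);
        if (d < 0.001) return t;   // close enough: hit
        t += d;                    // sphere tracing step
      }
      return -1.0;                 // no hit within 64 steps
    }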

It's all vaporware right now, but it's just a matter of fitting it in neatly.


wow, just wow. amazing stuff Steven. I've been working on a framework for very easy data structure creation and instance management in a 3D environment. I was building it in 3D Flash first, and have tried to build exactly that kind of curved arrow, though everything was calculated on the CPU. I've also been wanting to get to generating geometries from a static set of properties/datatypes for a while, and I was wondering to what degree I'm gonna have to get my 'hands dirty' and learn new things to do that. So wow, am I glad there's people like you building libraries like these!

I'm just about ready with rewriting the underlying semantic web framework to TypeScript and will soon be plugging it into either Away3D TS or Three.js. Since I already know Away3D and it's itself written in TypeScript, I thought I might try that first, but seeing this... and knowing how much more tested three.js is... I think I'm gonna go with Three.js.

I really can't wait to play with it once you release it. I hope you can find some time for good documentation though, 'cause at the moment I know just too little of the concepts involved to understand everything you explain in the slides.

Thank you already for this amazing presentation


Wow. Simply stunning.

It's eye candy AND it's interesting at its core... wow. Beautiful work.

I just can't articulate a better thing than "wow". Really. This is incredible.


His website [1] is one of the most impressive websites on the internet. Famo.us got nothin' on him!

First time I heard about Steven was when I saw this [2] post last year.. the best part is that he leaves many easter eggs or "achievements" around for you to discover :)

[1] acko.net [2] https://news.ycombinator.com/item?id=6268610


While we (at famo.us) have had our fair share of neckbeard faux pas, we and Steven are both fighting for the same future of the web; one where more sane low level primitives, such as a proper scene graph, are exposed to developers as a foundation upon which we can build better libraries and frameworks like MathBox 2. [0][1]

There is no need to make this into a pissing contest or a rivalry. We're fans of Steven's work and incredibly impressed with how he has pushed the state of the art on the web forward. Anyone who works on the bleeding edge like this helps build a brighter future for the web and creates more knowledge upon which others may build. Anyways, please keep the discussion focused on what Steven has achieved here instead of trolling.

Steven, many kudos for this. Extraordinary work.

[0] http://acko.net/blog/shadow-dom/

[1] http://extensiblewebmanifesto.org/


Seems to work pretty well in Safari 8, with occasional mild stuttering.


Finally, something useful with WebGL. So far we've seen lots of tech demos, but with WebGL so far behind the state of the art, it's like watching tech demos from 10-15 years ago, just in the browser, with glitches.

But this is something I really want to see. WebGL and GPU acceleration being put to use in the Web proper. Not just a box of 3d graphics inside a web page. Plotting neat 3d graphs with nice shading, fast and smooth rotate and zoom, etc. While you could probably do this using Canvas or SVG, you probably couldn't match the performance.

Now I'd like to see this technology being used outside of tech demos. Some real world data plotted this way.


It's driven by code though, it's not a graphical UI.

I hope someone builds the latter on top of it, since the flow-based paradigm is so effective in these contexts. Excellent presentation.


I'd be interested in making a Flowhub [1] runtime for ShaderGraph 2 graphs. (Flowhub runtimes talk a protocol [2] to define what nodes are available, code new nodes, and build up graphs.)

1. http://flowhub.io/ 2. http://noflojs.org/documentation/protocol/


Oh, this would be right up my street! I like your meemoo website too. I'm busy the rest of the day but I'll be in touch over the w/e after I've had time to take a closer look.


I'd really, really like an in-depth post on how these callback capabilities have been implemented. This is quite a big accomplishment for GPU code.


Just pretend it's C and imagine how you might merge by hand a couple of .c + .h files into a single .c file that compiles. That's basically how it works.
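A rough GLSL illustration of that merge idea (hypothetical snippets, not actual ShaderGraph output):

    // Snippet A only declares the callback it needs, like including a .h:
    vec3 warp(vec3 p);
    vec3 getPosition(vec3 p) { return 2.0 * warp(p); }

    // Snippet B provides the definition; concatenating both (and renaming
    // symbols to avoid collisions) yields a single source that compiles:
    vec3 warp(vec3 p) { return vec3(p.x, sin(p.y), p.z); }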


Ah, so you do inlining of your script code? I see, that's the most straightforward way. I don't know WebGL well, but in CUDA it's a little easier nowadays: you can call kernels within kernels and link kernel code together.


What is a BOF in the context of a conference? I've seen this in several places but haven't seen a definition.


http://en.wikipedia.org/wiki/Birds_of_a_feather_(computing)

A discussion group, sometimes informal, interested in a particular topic.

Conferences often refer to their themed tracks as "BoF" sessions.


Wow that's impressive. Could make for a cool visualization of DNA replication.



Scroll down, and see if you can find these buttons: http://i.imgur.com/7k0u2FH.png


Right arrow key. For those without arrow keys, I have no idea. I assume swipe left.


The visual representation of calculus, speed, velocity, and acceleration taught me more in 60 seconds than 4 hours' worth of lectures would. Fantastic! (Makes my laptop catch on fire though.)


Any plans for Oculus support? I'm building a code analysis framework with a visualization tool and if MathBox were to support the Rift it would be a no-brainer over using raw SVG or D3.


MathBox 2 is built on top of threestrap (don't google it, you get shoes), to enable exactly this kind of extensibility without me having to do it all myself. Just by following a few basic conventions (e.g. binding the VR headset to three.camera), it should just work. Haven't tried it yet, too much to do, but I do know these guys who have a mocap studio being repurposed for free-walking VR experiments using a wireless headset. I sat on the couch in the Unreal Engine 4 apartment two weeks ago, and picked up a mug from the coffee table (i.e. real couch/table + real mug + mocap balls attached to the mug and the headset). Magic. Would be even better with a mathbox chandelier.

Now I hate 2D screens even more. So yes.

https://github.com/unconed/threestrap http://thesawmill.ca/ http://wavesine.com/


I've used Mathbox some, and am still trying to get more familiar with it. Is MathBox 2 architecturally separate, or will the original Mathbox become a subset of MB2? I'll just keep plugging away at MB (the 1st).

The comparison to D3js seems apt. MathBox is -- somewhat -- a 3D version of what D3 does. But D3 takes a bring-your-own-data approach, whereas mathbox is more directly about defining the mathematical structures. Both are fairly low-level. Mathbox is more opinionated, maybe. Vega might be a more direct comparison [1].

[1]: https://github.com/trifacta/vega/wiki/Vega-and-D3


MB2 is completely separate from v1. I replaced the tQuery dependency with Threestrap, which is much less opinionated and the opposite of monolithic. The shaders are now compiled in so it runs over file://. The API works mostly the same, only now you can nest views.

I could provide a best-effort v1 compatibility API if there is a demand for it, so you'd only need to replace your initialization code and e.g. call mathbox.v1() to get the old API. I don't know many people using MB1 though.

With regards to D3, I actually see it as quite complementary to MathBox 2. Take away all the DOM/SVG wrangling and you are left with tons of useful components, like all the geospatial stuff, for which MathBox can be the output layer. You don't actually have to use live expressions or GLSL transforms, you can just pass in a float array or a regular array of numbers, even a nested one.


Thanks for the comments, and for the immense amount of work you've put into building this. I wouldn't care much if it didn't support MB1, just wanted to understand that relationship.

As I thought about it more after posting, I imagined what you describe -- feeding in data sets (via an internal REST interface, say) and figured that would be simple enough.

I'm most interested in the multi-viewport idea, which I imagine is related to nested views. Presumably it lets you define linked representations of the same structures? Linked in the sense of brushing-and-linking [1]. I'm curious to try building some linked representations of real- and phase-space diagrams.

[1]: http://bl.ocks.org/mbostock/4063663


> By adding only three operators: RTT, compose and remap, MathBox has suddenly turned into Winamp AVS or Milkdrop.

I have waited so long for a good hardware-accelerated 3D screensaver in my browser! ;)


Site crashes both Safari and Chrome on my iPad. What is it about?


It's an amazing implementation of 1995 desktop 3D graphics in a constrained and buggy browser environment.


Amazing as usual!

I don't get why he says vertex shaders aren't doable in WebGL though. Don't the various shadertoy-type sites let you write vertex shaders right now?


I believe he was referring to geometry shaders, not vertex shaders.


Understood, thanks.


This runs surprisingly well in chrome/android.


Pretty, but I don't need a single tab consistently eating up 50--75% of the overall (quad-core) CPU capacity on my laptop.


Please do another talk on this! This library looks amazing, I can't wait to test out some data viz on this.


I feel overwhelmed. It is really beautiful but the math is inaccessible to me.


People are smarter than me.


Wow, after viewing the examples, almost 10% battery was consumed.


... but it's still more efficient than pure JavaScript, asm or not. And so much faster than MathBox 1.


Some probably helpful debugging questions to ask:

Laptop or mobile? How much time did you spend watching the examples? What is your battery's storage capacity (milliamp-hours, usage hours, etc.)? Does it have a GPU? CPU usage? Is that battery drain consistent w/ other things that use that amount of CPU? etc.


I'm using a late 2013 rMBP 13-inch.


Totally agree. Under Chrome / OS X, the fans went crazy on my rMBP 15 and the iGPU and CPU hit 80°C just from watching a few slides of that site. I closed the tab to stop the overheating.


Huh. I use Safari / OS X on a 4-year-old MacBook Pro. I keep it locked on the integrated GPU and it worked fine, barely increasing the system load. Only one or two of the effects near the end made any noticeable difference.

Of course you're pushing 4x as many pixels as me.

Try Safari (you have to enable WebGL in the Develop menu). I wonder if it's a Chrome issue.


Safari on my rMBP 13" did the same thing, heh. Still worth it. Amazing work.


Ran completely fine on my 13" air.


Well, it's about 3D graphics. What do you expect?


This is great (really, it is), but showing off a "classic demoscene water effect" that was classic in 1996 serves as much to highlight how far WebGL has to go as much as what can be done with it.


You're kind of missing the point. Doing a single classic demoscene effect is indeed trivial. Doing arbitrary multi-stage, multi-frame video feedback effects is not, and you'd need to write dozens of lines of unique GL API code for each stage. Avoiding that work is what this is about.

The fact that computer graphics from 1996 are still taught as if it was 1996 should be greater cause for concern. Or that math from the 19th century is taught as if it's the 19th century.

See: http://acko.net/blog/how-to-fold-a-julia-fractal/


Fair point, and I should have made my comment clearly about WebGL, rather than the work of the author in creating this post - that is very impressive.

What I really wanted to say is that I still find it disappointing that after so long WebGL seems to made so little progress when compared to any game running on the same underlying hardware. I'm happy that the graphics can be constructed more elegantly, but I wish they didn't stutter, stumble, and drive my computer fan to max.


Some of this is because I'm wrapping it inside a ghetto CSS 3D presentation framework I've been reusing for almost 2 years, built when this stuff was buggy as hell. Mea culpa.

But compared to any game running on the same underlying hardware... Remember all the aimbots, wallhacks and more that people have been hacking in for years? How many crashes you've experienced? "Please install the latest driver". "You must restart the game to apply this setting". How about the fact that every game pretty much freezes the UI while it's first loading? You don't want web sites to work like that. WebGL has fundamentally different priorities, but they're not all bad.

GPU drivers have favored performance over stability for years. Modern games are a giant pile of hacks, but devs can afford the massive QA operation required to hide this fact. Heck, Nvidia turned game engine hacking into a feature, allowing you to add modern effects into old engines through their drivers.

See for example if you can figure out which vendor is which in this Valve developer's tell-all:

http://richg42.blogspot.co.uk/2014/05/the-truth-on-opengl-dr...


A classic demoscene water effect in quite possibly 1% of the lines of code that would have been required in 1996.


I almost thought this was related to CandyBox 2


Amazing. Seems to work great in IE11 also.


genius!


best site ever


What an infuriating site to use. My god.



