Not all browsers support the feature though (check the MAX_VERTEX_TEXTURE_IMAGE_UNITS constant). Mobile devices could be problematic too since most (if not all) OpenGL ES 2.0-era devices don't support it in hardware.
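The capability check is a one-liner against the context. A minimal sketch (the helper name `supportsVertexTextures` is my own; the enum `MAX_VERTEX_TEXTURE_IMAGE_UNITS` is the real WebGL parameter):

```javascript
// Check whether the GPU exposes vertex texture units, i.e. whether
// texture fetches are allowed in the vertex shader. A value of 0
// means vertex texture fetch is unsupported in hardware.
function supportsVertexTextures(gl) {
  return gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) > 0;
}
```

You'd call it with the context from `canvas.getContext('webgl')`; GLES 2.0 only mandates a minimum of 0, which is exactly the mobile problem described above.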
Still this is one of the most impressive WebGL demos I've seen. Fantastic stuff.
In proper OpenGL, you'd be able to use transform feedback to write into buffers with no loss of precision. And using buffers is less limited than texture fetches in the vertex pipeline.
For applications where precision matters (i.e. everything scientific), WebGL on GLES2 devices is a no-go. WebGL standardization should pick up the pace to better match the development of OpenGL.
This was probably to enable WebGL on mobile devices that would otherwise have been locked out, but it heavily restricted things on the desktop, which for the most part would have OpenGL 4-capable GPUs these days.
However given that WebGL on mobile still mostly sucks anyway, not sure if going for the lowest common denominator was the right decision.
And WebGL 1 has taken this long to reach mostly-working implementations; it would probably have died in the crib if it had targeted the nascent GLES 3 feature set.
Running GLES shaders safely and reasonably fast in a sandbox (on top of insecure & crash prone drivers) is high wizardry.
Latest and greatest for mobile yes but the desktop world was already on OpenGL 4 at that point.
My whole point was that they could have just ignored mobile and delivered a much more powerful WebGL based on OpenGL 4 instead.
Your wish is granted: WebGL 2 draft supports transform feedback. http://www.khronos.org/registry/webgl/specs/latest/2.0/#3.5
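For anyone curious what that looks like in practice, here is a sketch of a single capture pass using the WebGL 2 transform feedback API. The function name `capturePass` and the varying name `outValue` are my own; it assumes the program was linked after calling `gl.transformFeedbackVaryings(program, ['outValue'], gl.SEPARATE_ATTRIBS)`:

```javascript
// One transform-feedback pass: run the vertex shader over `vertexCount`
// points and capture its declared output varying into `outBuffer`,
// skipping rasterization entirely.
function capturePass(gl, program, outBuffer, vertexCount) {
  const tf = gl.createTransformFeedback();
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, outBuffer);
  gl.enable(gl.RASTERIZER_DISCARD);        // no fragments needed
  gl.useProgram(program);
  gl.beginTransformFeedback(gl.POINTS);
  gl.drawArrays(gl.POINTS, 0, vertexCount);
  gl.endTransformFeedback();
  gl.disable(gl.RASTERIZER_DISCARD);
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
  return tf;
}
```

The captured buffer can then feed a later draw call as a regular vertex attribute, which is exactly the full-precision GPGPU path the texture-fetch workaround approximates.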
(See my other reply about problems tracking latest GLES tightly)
I really want to get behind WebGL, but when is it going to have decent performance/compatibility? I tried this out in both FF and Chrome on a powerful desktop computer (i5-4670K, GTX760, 16GB RAM) and it was glitchy/stuttery as described. Firefox rendered some scenes at what seemed like 2-3 FPS. Chrome was much smoother, but I couldn't tell what parts were glitches. For example, the "classic demoscene water effect" looked completely different in Chrome. But neither FF nor Chrome produced an effect remotely resembling water.
Although this looks like a great library, personally I prefer to stick with OpenGL programming until WebGL's quirks are sorted out.
WebGL devs are playing the longest game of chicken seen on the web yet.
There were some places where the framerate dropped but they were the more complex demos. The fan kicked in almost immediately though. In general, even if the framerate was low it was stable.
Interestingly IE11 seemed to render almost as well as Chrome and I didn't notice a speed difference. I guess that's what happens if you offload work to the GPU.
First is that you can write vertex shaders in a reactive DOM. That makes it much easier to get pictures up on the screen. If any of you have ever messed around with vertex shaders, it can be a bit of a nuisance.
Second is that while the reactive DOM doesn't really exist as XML, it can be expressed as such, and would be easily diffable. This is important for collaboration.
Lastly, because it's making the GPU do all the work, data visualizations can be done by pushing large amounts of data to it. We should be able to see more patterns from data as a result.
I wonder what it needs to handle text presentation and input. HTML overlays are mentioned. Perhaps there are already WebGL text renderers that could be integrated. Of course visualizations this complex make my Macbook scream, but that's all right since I'm seeing something new (in a browser) and delightful. I have a few million data points that could benefit from vantage point like this, which need complex dependencies and controls.
Have you thought of ways around the path dependence on monospace imposed by existing bodies of textmode UIs (and source code)? It seems unlikely that a new terminal-esque tool would succeed without some kind of legacy support. The best concept I've come up with so far is to build in affordances which handle legacy vs. new-world user interaction and app I/O models.
Related, I continue to hold out (vain) hope that elastic tabstops will someday gain traction.
One of the things I discovered was just how much legacy cruft is really around. Not just things like ANSI colors, but e.g. grotty syntax. It made no sense until I realized it was created for teletype printers... it underlines things by backspacing after every character and printing a "_". It bolds by backspacing and repeating the character. I had to parse this to support man pages, and I assume the default TTY still does too.
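A parser for those overstrike sequences really is that literal. A minimal sketch (function name and output shape are my own; it handles both `_` before and after the backspace):

```javascript
// Decode grotty-style overstrikes: "c\b_" or "_\bc" means underlined c,
// "c\bc" means bold c. Returns one {ch, bold, underline} entry per glyph.
function parseOverstrikes(s) {
  const out = [];
  for (let i = 0; i < s.length; i++) {
    const a = s[i];
    if (s[i + 1] === '\b' && i + 2 < s.length) {
      const b = s[i + 2];
      if (a === '_') { out.push({ ch: b, bold: false, underline: true }); i += 2; continue; }
      if (b === '_') { out.push({ ch: a, bold: false, underline: true }); i += 2; continue; }
      if (a === b)   { out.push({ ch: a, bold: true, underline: false }); i += 2; continue; }
    }
    out.push({ ch: a, bold: false, underline: false });
  }
  return out;
}
```

From there it's trivial to re-emit ANSI escapes, or whatever your renderer wants.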
The other thing was that so much of Unix workflow really only works by accident. The fact that you can ssh + sudo + ssh + ... is because the pipes are too dumb to fuck it up. Take for example SSH escape sequences...  they only work on the first hop. The proper solution is out-of-band signaling.
From an architecture point of view, the whole termcaps / stdio thing is crazy. The Unix principle is supposed to be about simple agnostic composition, and yet most tools have to sniff out their environment in order to maintain this illusion. Text files are for people, not machines. And if you want to see a never ending discussion, just ask a bunch of greybeards how to write a shell script that can handle files with spaces in their name.
Actually, it's "less" (the pager) that interprets this and converts it into the appropriate terminal formatting, not the TTY itself.
Calculate just the points to be drawn, then draw them (explicit generation).
Calculate the entire surface/volume, and draw values where they exist (or based on magnitude or whatever properties are used) (implicit generation).
The second method is in some circumstances less efficient, especially if the graph is very simple and takes up little screen space, but overall much easier to work with. It's similar to the difference between ray casting and rasterization, in a way.
So if you wanted to render an implicit surface this way, you could do e.g. marching cubes or tetrahedra on a grid, and only feed in a scalar 3D field, either as an array or as a procedural function. Or you could do a <raymarch> operator for raymarching a distance field. On the inside, this could be a dumb per-pixel loop, or do recursive quad-tree subdivision. You shouldn't need to care.
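The "dumb per-pixel loop" version of raymarching a distance field is only a few lines. A sketch under my own assumptions (the sphere SDF and the step/epsilon constants are illustrative, not from the slides):

```javascript
// Signed distance to a unit sphere at the origin.
const sphere = p => Math.hypot(p[0], p[1], p[2]) - 1.0;

// March a ray through a distance field: step forward by the distance
// the SDF guarantees is free of surface, until we hit (d < eps) or
// give up. Returns the hit distance t, or -1 on a miss.
function raymarch(origin, dir, sdf, maxSteps = 64, eps = 1e-3, maxDist = 10) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    const d = sdf(p);
    if (d < eps) return t;   // close enough: treat as a surface hit
    t += d;                  // sphere-tracing step
    if (t > maxDist) break;
  }
  return -1;
}
```

Run per pixel (or per fragment, once translated to GLSL) this is the inner loop a hypothetical &lt;raymarch&gt; operator would hide; the quad-tree subdivision variant would just amortize it over coherent pixel blocks.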
It's all vaporware right now, but it's just a matter of fitting it in neatly.
I'm just about done rewriting the underlying semantic web framework in TypeScript and will soon plug it into either Away3D TS or Three.js. Since I already know Away3D and it's itself written in TypeScript, I thought I might try that first, but seeing this... and knowing how much more tested Three.js is... I think I'm going to go with Three.js.
I really can't wait to play with it once you release it. I hope you can find some time for good documentation though, because at the moment I know just too little of the concepts involved to understand everything you explain in the slides.
Thank you already for this amazing presentation
It's eye candy AND it's interesting at its core... wow. Beautiful work.
I just can't articulate a better thing than "wow". Really. This is incredible.
The first time I heard about Steven was when I saw this post last year... the best part is that he leaves many easter eggs or "achievements" around for you to discover :)
There is no need to make this into a pissing contest or a rivalry. We're fans of Steven's work and incredibly impressed with how he has pushed the state of the art on the web forward. Anyone who works on the bleeding edge like this helps build a brighter future for the web and creates more knowledge upon which others may build. Anyway, please keep the discussion focused on what Steven has achieved here instead of trolling.
Steven, many kudos for this. Extraordinary work.
But this is something I really want to see. WebGL and GPU acceleration being put to use in the Web proper. Not just a box of 3d graphics inside a web page. Plotting neat 3d graphs with nice shading, fast and smooth rotate and zoom, etc. While you could probably do this using Canvas or SVG, you probably couldn't match the performance.
Now I'd like to see this technology being used outside of tech demos. Some real world data plotted this way.
I hope someone builds the latter on top of it, since the flow-based paradigm is so effective in these contexts. Excellent presentation.
A discussion group, sometimes informal, interested in a particular topic.
Conferences often refer to their themed tracks as "BoF" sessions.
Now I hate 2D screens even more. So yes.
The comparison to D3.js seems apt. MathBox is -- somewhat -- a 3D version of what D3 does. But D3 takes a bring-your-own-data approach, whereas MathBox is more directly about defining the mathematical structures. Both are fairly low-level. MathBox is more opinionated, maybe. Vega might be a more direct comparison.
I could provide a best-effort v1 compatibility API if there is a demand for it, so you'd only need to replace your initialization code and e.g. call mathbox.v1() to get the old API. I don't know many people using MB1 though.
With regards to D3, I actually see it as quite complementary to MathBox 2. Take away all the DOM/SVG wrangling and you are left with tons of useful components, like all the geospatial stuff, for which MathBox can be the output layer. You don't actually have to use live expressions or GLSL transforms, you can just pass in a float array or a regular array of numbers, even a nested one.
As I thought about it more after posting, I imagined what you describe -- feeding in data sets (via an internal REST interface, say) and figured that would be simple enough.
I'm most interested in the multi-viewport idea, which I imagine is related to nested views. Presumably it lets you define linked representations of the same structures? Linked in the sense of brushing-and-linking. I'm curious to try building some linked representations of real- and phase-space diagrams.
I have waited so long for a good hardware-accelerated 3D screensaver in my browser! ;)
I don't get why he says vertex shaders aren't doable in WebGL though. Don't the various Shadertoy-type sites let you write vertex shaders right now?
Laptop or mobile? How much time did you spend watching the examples? What is your battery's storage capacity (milliamp-hours, usage hours, etc.)? Does it have a GPU? CPU usage? Is that battery drain consistent w/ other things that use that amount of CPU? etc.
Of course you're pushing 4x as many pixels as me.
Try Safari (you have to enable WebGL in the Develop menu). I wonder if it's a Chrome issue.
The fact that computer graphics from 1996 are still taught as if it were 1996 should be a greater cause for concern. Or that math from the 19th century is taught as if it's the 19th century.
What I really wanted to say is that I still find it disappointing that after so long WebGL seems to have made so little progress when compared to any game running on the same underlying hardware. I'm happy that the graphics can be constructed more elegantly, but I wish they didn't stutter, stumble, and drive my computer fan to max.
But compared to any game running on the same underlying hardware... Remember all the aimbots, wallhacks and more that people have been hacking in for years? How many crashes you've experienced? "Please install the latest driver". "You must restart the game to apply this setting". How about the fact that every game pretty much freezes the UI while it's first loading? You don't want web sites to work like that. WebGL has fundamentally different priorities, but they're not all bad.
GPU drivers have favored performance over stability for years. Modern games are a giant pile of hacks, but devs can afford the massive QA operation required to hide this fact. Heck, Nvidia turned game engine hacking into a feature, allowing you to add modern effects into old engines through their drivers.
See for example if you can figure out which vendor is which in this Valve developer's tell-all: