
MathBox 2 - teamonkey
http://acko.net/files/pres/siggraph-2014-bof/online.html
======
zabcik
I gave up trying to understand it and just clicked through for the eye candy.
Cool stuff as usual from Steven Wittens.

~~~
rsp1984
If I understand it correctly a lot of the geometry effects build on the
ability to do texture reads in the vertex shader (OpenGL calls it "vertex
texture fetch"), a not much-noticed, but incredibly powerful feature of modern
WebGL implementations. The reason it is so powerful is because one texture can
be used as a write target for the fragment shader and as a read target for the
vertex shader, essentially creating a feedback loop that lives entirely on the
GPU.

Not all browsers support the feature though (check the
MAX_VERTEX_TEXTURE_IMAGE_UNITS constant). Mobile devices could be problematic
too since most (if not all) OpenGL ES 2.0-era devices don't support it in
hardware.
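A quick way to probe for this at runtime, given any WebGL context (the context-creation boilerplate is omitted and the helper name is mine, not part of any library):

```javascript
// Returns true if the context supports texture reads in the vertex
// shader (vertex texture fetch). The queried limit is 0 on hardware
// without the feature, which is common on GL ES 2.0-era mobile GPUs.
function supportsVertexTextures(gl) {
  return gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) > 0;
}
```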

Still this is one of the most impressive WebGL demos I've seen. Fantastic
stuff.

~~~
exDM69
The major weakness in using textures as intermediate targets is the loss of
precision from the texture formats as well as the intermediate values. OpenGL
ES 2 (and thus WebGL) does not require a full 32 bit floating point pipeline,
so the results may vary if you run on mobile devices (that are not the latest
generation GL ES 3.x devices).
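To make the precision point concrete, a tiny sketch (illustrative only): round-tripping an intermediate value through an 8-bit texture channel snaps it to one of 256 levels, and in a GPU feedback loop that error compounds on every pass.

```javascript
// Simulate storing a normalized intermediate value in an 8-bit
// texture channel: only 256 distinct levels survive the round trip.
const quantize8 = v => Math.round(v * 255) / 255;

// A single pass already introduces up to 1/510 of absolute error.
const x = 0.123456;
const error = Math.abs(quantize8(x) - x);
```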

In proper OpenGL, you'd be able to use transform feedback to write into
buffers with no loss of precision. And using buffers is less limited than
texture fetches in the vertex pipeline.

For applications where precision matters (ie. everything scientific), WebGL on
GLES2 devices is a no-go. WebGL standardization should pick up the pace to
better match the development of OpenGL.

~~~
rsp1984
It's a bit of a shame that WebGL settled for the lowest common denominator
(i.e. OpenGL ES 2.0 capabilities).

This was probably to enable WebGL on mobile devices that would have otherwise
been locked out, but it heavily restricted things on the desktop which for the
most part would have OpenGL 4 capable GPUs these days.

However given that WebGL on mobile still mostly sucks anyway, not sure if
going for the lowest common denominator was the right decision.

~~~
zurn
WebGL 1.0 is almost 4 years old. OpenGL ES 2.0 was then latest and greatest.

And WebGL 1 has taken this long to reach mostly-working implementations; it
would probably have died in the crib if it had targeted the nascent GLES 3
feature set.

Running GLES shaders safely and reasonably fast in a sandbox (on top of
insecure & crash-prone drivers) is high wizardry.

~~~
rsp1984
> WebGL 1.0 is almost 4 years old. OpenGL ES 2.0 was then latest and greatest.

Latest and greatest for mobile yes but the desktop world was already on OpenGL
4 at that point.

My whole point was that they could have just ignored mobile and delivered a
much more powerful WebGL based on OpenGL 4 instead.

------
reedlaw
> Please view in Chrome or Firefox. Chrome is glitchy, Firefox is stuttery.

I really want to get behind WebGL, but when is it going to have decent
performance/compatibility? I tried this out in both FF and Chrome on a
powerful desktop computer (i5-4670K, GTX760, 16GB RAM) and it was
glitchy/stuttery as described. Firefox rendered some scenes at what seemed
like 2-3 FPS. Chrome was much smoother, but I couldn't tell what parts were
glitches. For example, the "classic demoscene water effect" looked completely
different in Chrome. But neither FF nor Chrome produced an effect remotely
resembling water.

Although this looks like a great library, personally I prefer to stick with
OpenGL programming until WebGL's quirks are sorted out.

~~~
elsigh
I viewed this whole presentation on a MacBook Air plugged into a 32" monitor,
and while 1 or 2 of the slides would pause here and there, overall it was
amazingly smooth. Mind blown.

~~~
cdata
We've come a long way from cheesing a little extra performance out of the DOM
by applying CSS 3D transforms, that's for sure ;)

------
iamwil
I'm pretty excited about it. I think there are three impressive things about
it.

First is that you can write vertex shaders in a reactive DOM. That makes it
much easier to get pictures up on the screen. If any of you have ever messed
around with vertex shaders, it can be a bit of a nuisance.

Second is that while the reactive DOM doesn't really exist as XML, it can be
expressed as such, and would be easily diffable. This is important for
collaboration.

Lastly, because it's making the GPU do all the work, data visualizations can
be done by pushing large amounts of data to it. We should be able to see more
patterns from data as a result.

------
jarpineh
This is one of the most beautiful things I've seen for some time. And to think
this is all in a browser, usable from JavaScript. I feel like there could be
so many applications for this, for more complex, interdependent
visualizations, yet easier than D3 and the like. Also, in the end it's
described as Reactive DOM. So, now I want to see TodoMVC redone with this. It
must be the fastest yet (I'm only half joking!).

I wonder what it needs to handle text presentation and input. HTML overlays
are mentioned. Perhaps there are already WebGL text renderers that could be
integrated. Of course visualizations this complex make my Macbook scream, but
that's all right since I'm seeing something new (in a browser) and delightful.
I have a few million data points that could benefit from vantage point like
this, which need complex dependencies and controls.

~~~
unconed
To handle HTML overlays, I basically need to add read back capabilities to
find the final on-screen positions of points, so I can sync with CSS 3D
matrices. GL text is a rabbit hole I'd prefer to avoid, especially since I
often need math notation. It would turn into HTML/CSS-for-GL right away.

~~~
judk
Are you the unconed from TermKit? What's its status?

~~~
unconed
Right now, very dead. It was more of an idea than a real thing, badly
architected, but with some good ideas waiting to be reimplemented on a non 0.x
stack. Still perpetually disappointed every new "neo terminal" is monospace
tho.

~~~
saidajigumi
> Still perpetually disappointed every new "neo terminal" is monospace tho.

Have you thought of ways around the path dependence[1] on monospace imposed by
existing bodies of textmode UIs (and source code)? It seems unlikely that a
new terminal-esque tool would succeed without some kind of legacy support. The
best concept I've come up with so far is to build in affordances which handle
legacy vs. new-world user interaction and app I/O models.

Related, I continue to hold out (vain) hope that elastic tabstops[2] will
someday gain traction.

[1]
[https://en.wikipedia.org/wiki/Path_dependence](https://en.wikipedia.org/wiki/Path_dependence)

[2]
[http://nickgravgaard.com/elastictabstops/](http://nickgravgaard.com/elastictabstops/)

~~~
unconed
Formatting legacy stuff was always part of the deal, but at the same time, I
was never interested in being able to host vim. Some people disagreed rather
vocally.

One of the things I discovered was just how much legacy cruft is really
around. Not just things like ANSI colors, but e.g. grotty syntax. It made no
sense until I realized it was created for teletype printers... it underlines
things by backspacing after every character and printing a "_". It bolds by
backspacing and repeating the character. I had to parse this to support man
pages, and I assume the default TTY still does too.
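The overstrike convention is simple enough to undo in a few lines. A sketch (a hypothetical helper, not TermKit's actual parser) that strips grotty-style underline (`_`, backspace, char) and bold (char, backspace, same char) back to plain text:

```javascript
// Strip teletype overstrikes as emitted by grotty:
//   "_\bc" (underscore, backspace, char) -> underlined c
//   "c\bc" (char, backspace, same char)  -> bold c
// Deleting every "printable char + backspace" pair leaves only the
// final overstruck character.
function stripOverstrikes(s) {
  return s.replace(/[^\x08]\x08/g, "");
}
```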

The other thing was that so much of Unix workflow really only works by
accident. The fact that you can ssh + sudo + ssh + ... is because the pipes
are too dumb to fuck it up. Take for example SSH escape sequences... [1] they
only work on the first hop. The proper solution is out-of-band signaling.

From an architecture point of view, the whole termcaps / stdio thing is crazy.
The Unix principle is supposed to be about simple agnostic composition, and
yet most tools have to sniff out their environment in order to maintain this
illusion. Text files are for people, not machines. And if you want to see a
never ending discussion, just ask a bunch of greybeards how to write a shell
script that can handle files with spaces in their name.

[1] [http://lonesysadmin.net/2011/11/08/ssh-escape-sequences-aka-kill-dead-ssh-sessions/](http://lonesysadmin.net/2011/11/08/ssh-escape-sequences-aka-kill-dead-ssh-sessions/)

~~~
ygra
Sort of unrelated, but when I saw TermKit I couldn't help noticing the surface
similarities to PowerShell. In fact, I believe what you had there could
almost work as a PowerShell host, although things like
[http://poshconsole.codeplex.com/](http://poshconsole.codeplex.com/) share
some of the same ideas.

------
gravity13
"Education is the art of conveying a sense of truth by telling a series of
decreasing lies."

Nice.

------
thebokehwokeh2
And here I am, learning to do 2d visualizations with d3.js.

------
gavanwoolery
Looks great, seems like this takes advantage of implicit calculations a lot.
For example, there are two ways to draw a graph:

Calculate just the points to be drawn, then draw them (explicit generation).

Calculate the entire surface/volume, and draw values where they exist (or
based on magnitude or whatever properties are used) (implicit generation).

The second method is in some circumstances less efficient, especially if the
graph is very simple and takes up little screen space, but overall much easier
to work with. It's similar to the difference between ray casting and
rasterization, in a way.

~~~
unconed
Yes and no, everything is still sampled on grids. But the intermediate
calculations can be doing tons of implicit lookups. So it's more like lazy
evaluation, though there's no auto-memoizing (because the memory/time tradeoff
is highly context dependent).

So if you wanted to render an implicit surface this way, you could do e.g.
marching cubes or tetrahedra on a grid, and only feed in a scalar 3D field,
either as an array or as a procedural function. Or you could do a <raymarch>
operator for raymarching a distance field. On the inside, this could be a dumb
per-pixel loop, or do recursive quad-tree subdivision. You shouldn't need to
care.
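As a concrete (and hedged) sketch of the "dumb per-pixel loop" variant: sphere tracing against a signed distance field, written here on the CPU for clarity. In MathBox this would live in a shader; all names are illustrative.

```javascript
// Sphere tracing: advance along the ray by the SDF value, which is a
// safe lower bound on the distance to the nearest surface.
function raymarch(sdf, origin, dir, maxSteps = 64, epsilon = 1e-4) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    const d = sdf(p);
    if (d < epsilon) return t; // hit: ray parameter at the surface
    t += d;                    // safe step, can't overshoot the surface
  }
  return null; // miss
}

// A scalar 3D field fed in as a procedural function: a unit sphere.
const sphereSDF = p => Math.hypot(p[0], p[1], p[2]) - 1;
```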

It's all vaporware right now, but it's just a matter of fitting it in neatly.

------
rpwverheij
wow, just wow. amazing stuff Steven. I've been working on a framework for very
easy data structure creation and instance management in a 3D environment. I
was building it in 3D Flash first, and have tried to build exactly those kinds
of curved arrows, though everything was calculated on the CPU. I've also been
wanting to get to generating geometries from a static set of
properties/datatypes for a while, and I was wondering to what degree I'm gonna
have to get my 'hands dirty' and learn new things to do that. So wow, am I
glad there's people like you building libraries like these!

I'm just about ready with rewriting the underlying semantic web framework to
typescript and will soon be plugging it in to either Away3D TS or Three.js.
Since I already know Away3D and it's itself written in TypeScript, I thought I
might try that first, but seeing this ... and knowing how much more tested
three.js is... I think I'm gonna go with Three.js

I really can't wait to play with it once you release it. I hope you can find
some time for good documentation though. Cause at the moment I know just too
little of the concepts involved to understand everything you explain in the
slides.

Thank you already for this amazing presentation

------
saganus
Wow. Simply stunning.

It's eye candy AND it's interesting at its core... wow. Beautiful work.

I just can't articulate a better thing than "wow". Really. This is incredible.

------
mck-
His website [1] is one of the most impressive websites on the internet.
Famo.us got nothin' on him!

First time I heard about Steven was when I saw this [2] post last year.. the
best part is that he leaves many easter eggs or "achievements" around for you
to discover :)

[1] acko.net

[2] [https://news.ycombinator.com/item?id=6268610](https://news.ycombinator.com/item?id=6268610)

~~~
malandrew
While we (at famo.us) have had our fair share of neckbeard faux pas, we and
Steven are both fighting for the same future of the web; one where more sane
low level primitives, such as a proper scene graph, are exposed to developers
as a foundation upon which we can build better libraries and frameworks like
MathBox 2. [0][1]

There is no need to make this into a pissing contest or a rivalry. We're fans
of Steven's work and incredibly impressed with how he has pushed the state of
the art on the web forward. Anyone who works on the bleeding edge like this
helps build a brighter future for the web and creates more knowledge upon
which others may build. Anyways, please keep the discussion focused on what
Steven has achieved here instead of trolling.

Steven, many kudos for this. Extraordinary work.

[0] [http://acko.net/blog/shadow-dom/](http://acko.net/blog/shadow-dom/)

[1] [http://extensiblewebmanifesto.org/](http://extensiblewebmanifesto.org/)

------
MaysonL
Seems to work pretty well in Safari 8, with occasional mild stuttering.

------
exDM69
Finally, something useful with WebGL. So far we've seen lots of tech demos,
but with WebGL being so far behind the state of the art, it's like watching
tech demos from 10-15 years ago, just in the browser, with glitches.

But this is something I really want to see. WebGL and GPU acceleration being
put to use in the Web proper. Not just a box of 3d graphics inside a web page.
Plotting neat 3d graphs with nice shading, fast and smooth rotate and zoom,
etc. While you could probably do this using Canvas or SVG, you probably
couldn't match the performance.

Now I'd like to see this technology being used outside of tech demos. Some
real world data plotted this way.

------
anigbrowl
_It's driven by code though, it's not a graphical UI._

I hope someone builds the latter on top of it, since the flow-based paradigm
is so effective in these contexts. Excellent presentation.

~~~
meemoo
I'd be interested in making a Flowhub [1] runtime for ShaderGraph 2 graphs.
(Flowhub runtimes talk a protocol [2] to define what nodes are available, code
new nodes, and build up graphs.)

1\. [http://flowhub.io/](http://flowhub.io/)

2\. [http://noflojs.org/documentation/protocol/](http://noflojs.org/documentation/protocol/)

~~~
anigbrowl
Oh, this would be right up my street! I like your meemoo website too. I'm busy
the rest of the day but I'll be in touch over the w/e after I've had time to
take a closer look.

------
m_mueller
I'd really really like some in depth post on how these callback capabilities
have been implemented. This is quite a big accomplishment for GPU code.

~~~
unconed
Just pretend it's C and imagine how you might merge by hand a couple of .c +
.h files into a single .c file that compiles. That's basically how it works.

~~~
m_mueller
Ah, so you do inlining of your script code? I see, that's the most
straightforward way. I don't know WebGL well, but in CUDA it's a little bit
easier nowadays, you can call kernels within kernels and you can link kernel
code together.

------
KerrickStaley
What is a BOF in the context of a conference? I've seen this in several places
but haven't seen a definition.

~~~
stephendicato
[http://en.wikipedia.org/wiki/Birds_of_a_feather_(computing)](http://en.wikipedia.org/wiki/Birds_of_a_feather_\(computing\))

A discussion group, sometimes informal, interested in a particular topic.

Conferences often refer to their themed tracks as "BoF" sessions.

------
chatmasta
Wow that's impressive. Could make for a cool visualization of DNA replication.

------
lukasm
I see this and I'm clueless what to do.

[http://test.co.s3.amazonaws.com/Screen%20Shot%202014-08-15%2...](http://test.co.s3.amazonaws.com/Screen%20Shot%202014-08-15%20at%2001.07.11.png)

~~~
Shamiq
Scroll down, and see if you can find these buttons:
[http://i.imgur.com/7k0u2FH.png](http://i.imgur.com/7k0u2FH.png)

------
MattyRad
The visual representation of calculus, speed, velocity, and acceleration
taught me more in 60 seconds than 4 hours' worth of lectures would.
Fantastic! (Makes my laptop catch on fire though)

------
jschrf
Any plans for Oculus support? I'm building a code analysis framework with a
visualization tool and if MathBox were to support the Rift it would be a no-
brainer over using raw SVG or D3.

~~~
unconed
MathBox 2 is built on top of threestrap (don't google it, you get shoes), to
enable exactly this kind of extensibility without me having to do it all
myself. Just by following a few basic conventions (e.g. binding the VR headset
to three.camera), it should just work. Haven't tried it yet, too much to do,
but I do know these guys who have a mocap studio being repurposed for free-
walking VR experiments using a wireless headset. I sat on the couch in the
Unreal Engine 4 apartment two weeks ago, and picked up a mug from the coffee
table (i.e. real couch/table + real mug + mocap balls attached to the mug and
the headset). Magic. Would be even better with a mathbox chandelier.

Now I hate 2D screens even more. So yes.

[https://github.com/unconed/threestrap](https://github.com/unconed/threestrap)
[http://thesawmill.ca/](http://thesawmill.ca/)
[http://wavesine.com/](http://wavesine.com/)

~~~
clebio
I've used Mathbox some, and am still trying to get more familiar with it. Is
MathBox 2 architecturally separate, or will the original Mathbox become a
subset of MB2? I'll just keep plugging away at MB (the 1st).

The comparison to D3js seems apt. MathBox is -- somewhat -- a 3D version of
what D3 does. But D3 takes a bring-your-own-data approach, whereas mathbox is
more directly about defining the mathematical structures. Both are fairly low-
level. Mathbox is more opinionated, maybe. Vega might be a more direct
comparison [1].

[1]: [https://github.com/trifacta/vega/wiki/Vega-and-D3](https://github.com/trifacta/vega/wiki/Vega-and-D3)

~~~
unconed
MB2 is completely separate from v1. I replaced the tQuery dependency with
Threestrap, which is much less opinionated and the opposite of monolithic. The
shaders are now compiled in so it runs over file://. The API works mostly the
same, only now you can nest views.

I could provide a best-effort v1 compatibility API if there is a demand for
it, so you'd only need to replace your initialization code and e.g. call
mathbox.v1() to get the old API. I don't know many people using MB1 though.

With regards to D3, I actually see it as quite complementary to MathBox 2.
Take away all the DOM/SVG wrangling and you are left with tons of useful
components, like all the geospatial stuff, for which MathBox can be the output
layer. You don't actually have to use live expressions or GLSL transforms, you
can just pass in a float array or a regular array of numbers, even a nested
one.

~~~
clebio
Thanks for the comments, and for the immense amount of work you've put into
building this. I wouldn't care much if it didn't support MB1, just wanted to
understand that relationship.

As I thought about it more after posting, I imagined what you describe --
feeding in data sets (via an internal REST interface, say) and figured that
would be simple enough.

I'm most interested in the multi-viewport idea, which I imagine is related to
nested views. Presumably it lets you define linked representations of the same
structures? Linked in the sense of brushing-and-linking [1]. I'm curious to
try building some linked representations of real- and phase-space diagrams.

[1]:
[http://bl.ocks.org/mbostock/4063663](http://bl.ocks.org/mbostock/4063663)

------
dj-wonk
> By adding only three operators: RTT, compose and remap, MathBox has suddenly
> turned into Winamp AVS or Milkdrop.

I have waited so long for a good hardware-accelerated 3D screensaver in my
browser! ;)

------
abroncs
Site crashes both Safari and Chrome on my iPad. What is it about?

~~~
judk
It's an amazing implementation of 1995 desktop 3D graphics in a constrained
and buggy browser environment.

------
socialist_coder
Amazing as usual!

I don't get why he says vertex shaders aren't doable in WebGL though. Don't
the various shadertoy-type sites let you write vertex shaders right now?

~~~
nightski
I believe he was referring to geometry shaders, not vertex shaders.

~~~
socialist_coder
Understood, thanks.

------
bla2
This runs surprisingly well in chrome/android.

------
jackmaney
Pretty, but I don't need a single tab consistently eating up 50--75% of the
overall (quad-core) CPU capacity on my laptop.

------
joeblau
Please do another talk on this! This library looks amazing, I can't wait to
test out some data viz on this.

------
cessor
I feel overwhelmed. It is really beautiful but the math is inaccessible to me.

------
starterblock
People are smarter than me.

------
helpbygrace
Wow, after viewing the examples, almost 10% battery was consumed.

~~~
z-e-r-o
Totally agree, under Chrome / OS X the fans went crazy on my rMBP 15, and the
iGPU and CPU rose to 80 °C just from watching a few slides of that site. I
closed the tab to stop the overheating.

~~~
MBCook
Huh. I use Safari / OS X on a 4-year-old MacBook Pro. I keep it locked on the
integrated GPU and it worked fine, barely increasing the system load. Only one
or two of the effects near the end made any noticeable difference.

Of course you're pushing 4x as many pixels as me.

Try Safari (you have to enable WebGL in the Develop menu). I wonder if it's a
Chrome issue.

~~~
girvo
Safari on my rMBP 13" did the same thing, heh. Still worth it. Amazing work.

------
vosper
This is great (really, it is), but showing off a "classic demoscene water
effect" that was classic in 1996 serves as much to highlight how far WebGL has
to go as what can be done with it.

~~~
unconed
You're kind of missing the point. Doing a single classic demoscene effect is
indeed trivial. Doing arbitrary multi-stage, multi-frame video feedback
effects is not, and you'd need to write dozens of lines of unique GL API code
for each stage. Avoiding that work is what this is about.

The fact that computer graphics from 1996 are still taught as if it was 1996
should be greater cause for concern. Or that math from the 19th century is
taught as if it's the 19th century.

See: [http://acko.net/blog/how-to-fold-a-julia-fractal/](http://acko.net/blog/how-to-fold-a-julia-fractal/)

~~~
vosper
Fair point, and I should have made my comment clearly about WebGL, rather than
the work of the author in creating this post - that is very impressive.

What I really wanted to say is that I still find it disappointing that after
so long WebGL seems to have made so little progress when compared to any game
running on the same underlying hardware. I'm happy that the graphics can be
constructed more elegantly, but I wish they didn't stutter, stumble, and drive
my computer fan to max.

~~~
unconed
Some of this is because I'm wrapping it inside a ghetto CSS 3D presentation
framework I've been reusing for almost 2 years, built when this stuff was
buggy as hell. Mea culpa.

But compared to any game running on the same underlying hardware... Remember
all the aimbots, wallhacks and more that people have been hacking in for
years? How many crashes you've experienced? "Please install the latest
driver". "You must restart the game to apply this setting". How about the fact
that every game pretty much freezes the UI while it's first loading? You don't
want web sites to work like that. WebGL has fundamentally different
priorities, but they're not all bad.

GPU drivers have favored performance over stability for years. Modern games
are a giant pile of hacks, but devs can afford the massive QA operation
required to hide this fact. Heck, Nvidia turned game engine hacking into a
_feature_, allowing you to add modern effects into old engines through their
drivers.

See for example if you can figure out which vendor is which in this Valve
developer's tell-all:

[http://richg42.blogspot.co.uk/2014/05/the-truth-on-opengl-driver-quality.html](http://richg42.blogspot.co.uk/2014/05/the-truth-on-opengl-driver-quality.html)

------
chandrew
I almost thought this was related to CandyBox 2

------
shanselman
Amazing. Seems to work great in IE11 also.

------
jypepin
genius!

------
maurizzzio
best site ever

------
forrestthewoods
What an infuriating site to use. My god.

