Speaking purely from a selfish standpoint, it would be awesome to SEE what back-propagation "looks like" with different neuronal activation functions, or what feature learning by restricted Boltzmann machines "looks like," or how dropout causes networks to generalize better -- to name just a few possibilities.
If anyone can visualize neural network algorithms in a way that is intuitive and beautiful, it's you!
It basically lets you do exactly what you're looking for. I'm going to eventually add support for visualizing each layer of a DNN during back-propagation training, but that's several releases away.
In this toy problem, you can get more of a "feel" for the representational capacity of a network (by trying different clusterings of classes, etc.). Unfortunately, I hard-coded it to have 2 hidden units. In retrospect, it would have been better to make the number of hidden units tunable as well, so that one could visualize how a network with more non-linearities can draw increasingly complex decision boundaries.
(And by “book”, I’m including online publications like Mark Pilgrim’s excellent /Dive into HTML5/. I’d almost certainly publish online given the interactive nature of the subject.)
In any case, thank you for all your work. It's an inspiration.
thanks for all the time/effort you've put into d3. my side project (machete.io) certainly wouldn't be possible without it.
I love these sorts of interactive, visual essays and would love to find ways to lower creation times.
Also, there were a few things in my talk that I wanted to fix, like replacing the rainbow color scale with something more effective.
The examples themselves I worked on intermittently, typically for an hour or two in the evenings. I got interested in mazes as an analogy for design process when I gave a talk at OpenVis earlier in the year.
If it makes you feel better, the time was well spent. I found the prose lucid yet concise, and I know it takes a lot of refinement to get to that.
One nitpick with the maze visualizations, though. I found that the maze color flood animations have the same issue that you mentioned sorting has: animations are frustrating to watch because you have to wait and then rely on memory to recognize patterns. Specifically, I found the color scale rotation was much too fast to see large patterns, and even small patterns were too dense to be able to trace backwards after the maze had been fully colored.
I have an idea for an alternate visualization: Only show the fully colored maze (no intermediates), but vary the color rotation length over time from frame to frame. You'd be able to see color rippling through the maze and be able to follow the ripples over both large scale and small scale features.
Varying the rotation length sounds interesting and, like you suggested, could be great for seeing both micro and macro features. There was a bug previously where the Prim’s visualization rotated twice as fast, and it looked quite different!
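For what it's worth, the varying-rotation idea could be sketched roughly like this (the parameterization below is purely illustrative, not from the actual visualization):

```javascript
// Color a maze cell by its flood-fill distance d, with a rotation period
// that itself oscillates slowly over animation time t. Short periods reveal
// fine structure; long periods reveal large-scale ripples.
function hueAt(d, t, minPeriod, maxPeriod) {
  // Oscillate the rotation period between minPeriod and maxPeriod.
  var period = minPeriod + (maxPeriod - minPeriod) * (0.5 + 0.5 * Math.sin(t));
  // Map distance to a hue in [0, 360), rotating once per period.
  return (d % period) / period * 360;
}

// At a fixed t, hue cycles through the color wheel as d grows;
// as t advances, the ripple wavelength stretches and shrinks.
var h0 = hueAt(0, 0, 16, 256);   // 0
var h1 = hueAt(68, 0, 16, 256);  // halfway through one rotation
```

Each frame would then recolor the fully flooded maze with the new period, so the viewer follows moving ripples instead of waiting for the flood to finish.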
1. Recording the missteps to perfection
In future, could you record the number of edits required to come up with your finished essays? I've always found that it takes a lot of steps to make something that elegantly looks as though no missteps were taken. Some form of screen-recording-style evolution of the essay over time would be fascinating.
I know Paul Graham had a live essay session recording.
2. Archiving live and interactive works.
I guess your piece will archive okay as it is the sum of static files. But I do hope that your essay plays well with archive.org so that future people can enjoy your article.
3. The future of the interactive essay
Relating to 1., I can't imagine that preparing an interactive essay is an easy affair. A lot of effort goes into a static essay; how much more work is it to make it interactive? Do you feel it will become more widespread as an essay form, or remain restricted to a select bunch of interactivists?
E-books and smartphones really make interactive essays possible to disseminate.
We have a similar concern with published graphics on The New York Times. It’s funny and sad now how so many animations on University course websites are practically unviewable because of waning support for Java; I expect it will be similarly awkward to run Flash plugins in ten years. On the other hand, content written to web standards seems to have a longer shelf-life, as the standards are widely supported by many organizations, not just one. So my hope is that standards-based graphics will both archive well and continue to run on evolving browsers.
3. Yes, it’s already the case that graphics (and further interactive graphics) are increasingly integrated with prose, rather than being relegated exclusively to standalone content. That’s not to say standalone graphics are bad — there are many viable forms for graphics, and sometimes you want it to be standalone — but that we’re figuring out ways to integrate “multimedia” more elegantly and less gimmick-ly.
I still think the hard part is expressing the ideas & communicating effectively rather than the technology. Designing interaction is hard because there are so many ways to do it, and you don’t always know what will be intuitive to readers.
- any plans on trying to make a canvas-based d3 adapter/library? Also thinking about WebGL here, although I believe x3dom works well enough
- have you played with other programming languages to evaluate their support for data vis expressiveness? If so, what would you recommend to try out?
That said, DOM/SVG performance is still a problem in mobile browsers. We tried to build a D3-based visualization tool that would work well on mobile. Especially with panning and zooming, which are natural gestures on mobile and often useful because of the limited screen real estate, you hit rendering performance problems easily.
With proper use of CSS transforms, culling of data points, and intelligent redrawing, you can get the performance to adequate levels on iOS Safari, but you have to throw away the pure data-driven-documents approach and start to think of it more from the "rendering pipeline" angle.
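The culling step could look something like this minimal sketch (function and field names here are illustrative, not from any particular library):

```javascript
// Before redrawing on pan/zoom, filter the data down to the points that
// fall inside the current viewport, so the canvas only draws what is
// actually visible.
function cullToViewport(points, viewport) {
  return points.filter(function(p) {
    return p.x >= viewport.x0 && p.x <= viewport.x1 &&
           p.y >= viewport.y0 && p.y <= viewport.y1;
  });
}

var points = [{x: 5, y: 5}, {x: 50, y: 50}, {x: 500, y: 500}];
var visible = cullToViewport(points, {x0: 0, y0: 0, x1: 100, y1: 100});
// visible contains only the first two points
```

In a real pipeline you would run this (or a spatial index) on every pan/zoom event, then hand only the survivors to the canvas draw loop.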
Well, actually, I find that when displaying more than a few hundred polygons (or even simpler elements), SVG often becomes problematic (especially on Firefox).
Thinking about it, maybe a good solution (to have both a detailed map and tooltips or other contextual things) would be a canvas with precise borders, overlaid with simplified transparent SVG polygons (say, a Voronoi diagram from each polygon centroid, or something).
Take a look at OpenLayers 3. They have a few examples they use as benchmarks for canvas rendering: http://ol3js.org/en/master/examples/synthetic-lines.html http://ol3js.org/en/master/examples/synthetic-points.html
Granted, these absolutely crush Firefox (Chrome handles them fantastically; IE is about average), but they're still great examples of how performant the canvas can be. As far as interactivity goes, all you need is a little extra attention to your events and rendering, and it works just as well as, if not better than, SVG in many cases. Look at http://ol3js.org/en/master/examples/draw-and-modify-features... for a good example.
And you're right regarding SVG overlays on a canvas - it actually works quite well: http://ol3js.org/en/master/examples/d3.html
Of course, OL3 is both highly specialized to mapping applications (you know, being a mapping library and all) and highly optimized for canvas rendering, but it does serve to show how flexible canvas can be.
You can make the hidden canvas 2x larger than the visible canvas to get good precision on the borders of regions. It's admittedly hacky, but has good performance characteristics since it only uses two canvas elements.
http://bl.ocks.org/syntagmatic/6645345 (hover only activates on mousemove, so the selected region can slide off the cursor)
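For anyone curious, the hidden-canvas picking idea that example demonstrates boils down to encoding each region's index as a unique color; here's a rough sketch of the encoding (helper names are hypothetical):

```javascript
// Each region is drawn on an offscreen canvas in a unique flat color that
// encodes its index. On mousemove, you read the pixel under the cursor
// from the hidden canvas and decode it back to a region index.
function indexToColor(i) {
  // Pack a region index into 24 bits of RGB.
  return [(i >> 16) & 0xff, (i >> 8) & 0xff, i & 0xff];
}

function colorToIndex(rgb) {
  return (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
}

// In the browser you would fill region i with "rgb(r,g,b)" on the hidden
// canvas (here, 2x the visible size for border precision), then on
// mousemove decode ctx.getImageData(2 * x, 2 * y, 1, 1).data.

var rgb = indexToColor(123456);
var back = colorToIndex(rgb);  // back === 123456
```

The 2x scale means a one-pixel read on the hidden canvas resolves borders at sub-pixel precision relative to the visible canvas, which is what makes the trick work well along thin region boundaries.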
However, I believe that the bit about light being a "continuous" signal in the first paragraph invites conflicting thoughts of wave / particle duality, which distracts from and is not at all relevant to your point.
The eye samples light because it connects to a machine with a fixed number of inputs. This setup would also require sampling if the signal were not continuous at all, but instead consisted of a much larger number of discrete parts than the "sensing" equipment could handle.
Is this just a stylistic choice to signal the uses for those variables? (Though I confess I still don't know what that signal would be, anyway.)
Any illumination would be appreciated; and should MBostock still be following this, thanks so much for your work! So inspiring.
The maze is defined as a rectangular grid of cells, where each cell is a bit field specifying whether you can navigate from that cell to each of its four neighbors: the cell above (N), the cell below (S), the cell to the left (W), and the cell to the right (E). The bit masks are powers of two (N = 1 << 0 = 1, S = 1 << 1 = 2, W = 1 << 2 = 4, E = 1 << 3 = 8) so that each of the four directions gets its own unique bit.
For example, say that you have a cell that’s open to the north and the south, as part of a vertical passage. The bit field therefore is 0011. To check whether you can go south from the cell, you use the bitwise AND (&) operator: 0011 & S = 0011 & 0010 = 0010 = truthy. To likewise check whether you can go east from the cell: 0011 & E = 0011 & 1000 = 0000 = falsey.
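The scheme above, sketched in JavaScript with the same masks:

```javascript
// Direction bit masks: each direction occupies a unique bit.
var N = 1 << 0,  // 1: open to the cell above
    S = 1 << 1,  // 2: open to the cell below
    W = 1 << 2,  // 4: open to the cell to the left
    E = 1 << 3;  // 8: open to the cell to the right

// A cell in a vertical passage, open to the north and the south:
var cell = N | S;  // 0b0011 = 3

// Bitwise AND tests whether a direction's bit is set.
var canGoSouth = !!(cell & S);  // true:  0011 & 0010 = 0010
var canGoEast  = !!(cell & E);  // false: 0011 & 1000 = 0000
```

Carving a passage between two cells then just means OR-ing the appropriate bit into each cell (e.g. `cell |= S` on one side, `neighbor |= N` on the other).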
What about visualizing randomness?
Like in this article?
But, on this blog, there's TONS of dynamic code running and not a peep.
Found a ton more interesting examples here: http://bl.ocks.org/mbostock
Especially the maze turning into a spanning tree. That one was truly mind-blowing.
Design and implementation of the UW Illustrated compiler
by Andrews, Henry, and Yamamoto
The University of Washington illustrating compiler
by Henry, Whaley, and Forstall
In the piece of code corresponding to the Fisher-Yates algorithm, I've seen this snippet: "n-- | 0". Does the "| 0" have any importance?
I know this has come up before, but I haven't found those threads.
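For reference, a bitwise OR with zero coerces its operand to a 32-bit integer, truncating toward zero, so `Math.random() * n-- | 0` is a fast way to pick a random integer index in [0, n). A minimal Fisher-Yates sketch using the idiom:

```javascript
// Shuffle an array in place with the Fisher-Yates algorithm.
// "| 0" truncates the float from Math.random() * n to an integer,
// a fast equivalent of Math.floor for non-negative numbers.
function shuffle(array) {
  var n = array.length, i, t;
  while (n) {
    i = Math.random() * n-- | 0;  // random index in [0, n), then decrement n
    t = array[n];
    array[n] = array[i];
    array[i] = t;
  }
  return array;
}

var result = shuffle([1, 2, 3, 4, 5]);
// result is some permutation of [1, 2, 3, 4, 5]
```

Note that `| 0` also caps values at 32 bits, so the idiom is only safe for indices below 2^31, which is plenty for array shuffling.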