
> that the future of interface is essentially data visualisation

Does that imply that the ideal framework would be a data visualization layer? Something that transforms a generic representation of data into markup with style? Could we build an interface out of a descriptive layout of domains with associated actions? (Here's some photos, here's an action to create a new one, now build an interface. These actions apply to a user account.)

Maybe something along those lines, but what I envisage is more like (caveat: what follows is highly conceptual; I am describing things I have seen in my imagination...) a reduction in the distance between an interface and the data it generates or manipulates.

Just as we've seen layers of metaphor removed in the journey from mouse and pointer to the touch of a finger, I imagine interfaces that leap a similar gap: interfaces that more directly represent the underlying data structures they are built upon.

Imagine a volume control, for instance, but without a silly virtual silver knob controlling it. Instead: a visual representation of the value the volume is set to, much like a graphic equalizer, except you control the graphic directly, without a proxy like a pointer and mouse, or a proxy on top of a proxy: a pointer and mouse controlling a virtual knob that then controls the volume.
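To make the knobless control concrete, here's a minimal sketch (all names invented for illustration): the "widget" is nothing but the value itself, and a pointer position along the bar maps directly to gain with no intermediate knob.

```typescript
// Sketch: volume as directly-manipulated data, no knob proxy.
// A pointer position along the bar (0..1) *is* the new gain.

interface VolumeState {
  gain: number;     // current level
  maxGain: number;  // ceiling the bar represents
}

function setFromPointer(state: VolumeState, position: number): VolumeState {
  const clamped = Math.min(Math.max(position, 0), 1); // keep inside the bar
  return { ...state, gain: clamped * state.maxGain };
}

const v = setFromPointer({ gain: 0.2, maxGain: 1 }, 0.75);
console.log(v.gain); // 0.75
```

The point of the sketch is that rendering the bar and setting the volume read and write the same number; there is no separate knob state to keep in sync.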

That's what I mean about layers of metaphor: they have built up like cruft around the modern interface. That's why it's right to look at skeuomorphism with renewed skepticism, but wrong to simply lurch off in the opposite direction.

A lot of time is spent taking the data resulting from a great number of interfaces, quantifying it, and then presenting it in some way, often visually... I am saying that those visualisations and those interfaces needn't live on either side of an invisible fence; they should commune to form something intuitive, elegant, and modern.

Right; in your EQ example it may be an envelope displayed as an appropriate graph with a particle engine (or 2D representation) which, when manipulated, modifies the audio output. The analog version is gone, and it was only a crude mapping into "bands" of frequency anyway, when we can more closely approximate a continuous system.
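A continuous envelope can be modeled as control points on a curve rather than fixed bands; the gain at any frequency is read straight off the curve. A hypothetical sketch (point layout and names are invented, and linear interpolation stands in for whatever smoothing a real shaper would use):

```typescript
// Sketch: an EQ envelope as a continuous curve, not discrete bands.
// Dragging a control point *is* editing the audio: the curve is the data.

type Point = { freq: number; gain: number };

function gainAt(envelope: Point[], freq: number): number {
  const pts = [...envelope].sort((a, b) => a.freq - b.freq);
  if (freq <= pts[0].freq) return pts[0].gain;               // below the curve
  if (freq >= pts[pts.length - 1].freq) return pts[pts.length - 1].gain;
  for (let i = 1; i < pts.length; i++) {
    if (freq <= pts[i].freq) {
      // linear interpolation between the two surrounding control points
      const t = (freq - pts[i - 1].freq) / (pts[i].freq - pts[i - 1].freq);
      return pts[i - 1].gain + t * (pts[i].gain - pts[i - 1].gain);
    }
  }
  return pts[pts.length - 1].gain;
}

const curve: Point[] = [
  { freq: 100, gain: 1.0 },
  { freq: 1000, gain: 0.5 },
  { freq: 10000, gain: 0.8 },
];
console.log(gainAt(curve, 550)); // midway between 100 and 1000 -> 0.75
```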

As for the general case, I see something like the shadow DOM and web components growing into a set of observers implemented at the native level, instead of built on top like Ember.js. HTML gets the ability to do spreadsheet-like formulas, where the value of one element can depend on another element, expressed in JavaScript or a CSS-like "data markup" language. It would support data bindings directly to JSON/plain old objects.

Maybe there could still be a symbolic design language that approximates real-world objects, but it would be crafted on top of a purer data layer, without CSS hiding components or custom display values turning one thing into another.


  * video
  * playbackControl
  * audioShaper -> GL/ES CL waveform processing -> audioOutput

  <c:slider c:boundTo="playbackControl1" c:dataRole="seek" />
  <c:volume id="v1" c:boundTo="audioShaper1" c:dataRole="gain" />

  <span observes="jsval:percentage([#audioShaper1].gain/[#audioShaper1].maxGain)"></span>

  dss: #v1 {volume-maximum: 90%}

  css: /* turn red if we reach 70% of maximum */
  #v1 [volume::=jsval:percent(audioShaper1.gain/audioShaper1.maxGain)>=0.7] {color: red}

Or something like that.
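The data-conditional style rule in that sketch (turn red at 70% of maximum) reduces to a predicate over bound data. A minimal sketch of the evaluation, with no real CSS engine and all names invented:

```typescript
// Sketch: a style rule as a formula over bound data.
// "#v1 [gain/maxGain >= 0.7] { color: red }" as predicate + style.

interface Shaper {
  gain: number;
  maxGain: number;
}

function styleFor(shaper: Shaper): { color: string } {
  const percent = shaper.gain / shaper.maxGain;
  return { color: percent >= 0.7 ? "red" : "inherit" };
}

console.log(styleFor({ gain: 0.65, maxGain: 1 }).color); // "inherit"
console.log(styleFor({ gain: 0.72, maxGain: 1 }).color); // "red"
```

The selector never inspects rendered pixels or widget state; it matches purely on the underlying values, which is the "pure data layer" point above.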
