
Right, in your EQ example it might be an envelope displayed as an appropriate graph, driven by a particle engine (or a 2D representation), which, when manipulated, modifies the audio output. The analog version is gone; its discrete frequency "bands" were a bad mapping anyway, now that we can more closely approximate a continuous system.

As for the general case, I see something like the shadow DOM and web components growing into a set of observers implemented at the native level, rather than built on top the way Ember.js is. HTML would gain spreadsheet-like formulas, where the value of one element can depend on the value of another, expressed either in JavaScript or in a CSS-like "data markup" language. It would support data bindings directly to JSON/plain old objects.
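
The spreadsheet-like dependency idea can be sketched in plain JavaScript. This is only an illustration of the concept, not a real API; `cell` and `derive` are invented names, and the gain/maxGain values are stand-ins borrowed from the example below.

    // A "cell" holds a value and notifies subscribers on change,
    // like a spreadsheet cell.
    function cell(initial) {
      const subscribers = new Set();
      let value = initial;
      return {
        get: () => value,
        set(next) {
          value = next;
          subscribers.forEach(fn => fn(value));
        },
        subscribe(fn) { subscribers.add(fn); },
      };
    }

    // A derived cell is a formula over another cell, recomputed
    // automatically when the source changes: =percent(gain / maxGain)
    function derive(source, formula) {
      const out = cell(formula(source.get()));
      source.subscribe(v => out.set(formula(v)));
      return out;
    }

    const maxGain = 100;
    const gain = cell(45);
    const percent = derive(gain, g => Math.round((g / maxGain) * 100));

    console.log(percent.get()); // 45
    gain.set(72);
    console.log(percent.get()); // 72 -- updated without manual wiring

The point is that the dependency graph, not imperative event handlers, carries the update, which is what a native observer layer would give you for free.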

Maybe there could still be a symbolic design language that approximates real-world objects, but it would be crafted on top of a purer data layer, without CSS hiding of components or custom display values turning one thing into another.


  * video

  * playbackControl

  * audioShaper -> GL/ES CL waveform processing -> audioOutput
  <c:slider c:boundTo="playbackControl1" c:dataRole="seek" />
  <c:volume id="v1" c:boundTo="audioShaper1" c:dataRole="gain" />

  <span observes="jsval:percentage([#audioShaper1].gain/[#audioShaper1].maxGain)"></span>

  dss: #v1 {volume-maximum: 90%}

  css: /* turn red if we reach 70% of maximum */
  #v1 [volume::=jsval:percent(audioShaper1.gain/audioShaper1.maxGain)>=0.7] {color: red}

Or something like that.
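
To make the pseudo-rules concrete, here is roughly what they would mean as plain JavaScript observers. The 70% threshold and 90% maximum come from the rules above; `audioShaper`, `clampedGain`, and `volumeColor` are invented stand-ins, not a proposed API.

    // Stand-in for the bound #audioShaper1 object.
    const audioShaper = { gain: 50, maxGain: 100 };

    // dss: #v1 {volume-maximum: 90%} -- clamp the control's effective range.
    function clampedGain(shaper, maxPercent = 0.9) {
      return Math.min(shaper.gain, shaper.maxGain * maxPercent);
    }

    // css rule: turn red once gain reaches 70% of maximum.
    function volumeColor(shaper, threshold = 0.7) {
      return shaper.gain / shaper.maxGain >= threshold ? 'red' : 'inherit';
    }

    console.log(volumeColor(audioShaper)); // 'inherit'
    audioShaper.gain = 95;
    console.log(clampedGain(audioShaper)); // 90 -- capped at 90% of maxGain
    console.log(volumeColor(audioShaper)); // 'red'

Today this logic lives in application JavaScript; the suggestion is that a declarative "data markup" layer could express it directly against the bound objects.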
