Synthesizers have long had the capability shown in the video, but you had to reach to the side and twirl a wheel to get the same effect, which detracts from the playability.
Initially, this is not suited for traditional classical music (it's probably better for jazz), but who's to say someone couldn't write new classical music based on this instrument (although goodness knows what the musical notation would be)?
But it also reminds me of one reason why analogue (acoustic) instruments can be so incredibly powerful: they are ultimately extensible! You don't need to explicitly describe how the thing will react to input, and it's not ignorant (though maybe not very responsive) of any physical input. Throughout history, people have been continually discovering new techniques on so many different instruments, even though some of those instruments' designs have changed very little. This is, of course, all predicated on a good initial instrument design, but it makes me appreciate the wonders of, say, a guitar or piano even more.
Also "That 1 guy" "Haaken continuum", Wessel's touchpads (a better Kaossilator, if you will, and Fluid Piaano
(I don't have any nonstandard instruments (except maybe fretless guitar), but I've thought about this as an amateur player of piano/keys, guitar, strings (cello) and woodwinds, and as a former mallet percussionist: how you could combine mouthpiece and foot controllers with a keyboard, how to do glisses, tremolo, vibrato, etc., i.e. shape the attack/decay, timbre and pitch (continuously, or as in Don Ellis' quarter-tone music) like you can with cello and clarinet.)
There is another such "keyboard" which is more like a horizontal bass/cello fretboard than a keyboard. I played around on it when it was being demoed at Guitar Center in San Jose. As a former trombone player it felt similar, in that the 'feel' of the intonation was there rather than explicitly hitting a particular key. And there was the ability to 'slide up' or 'slide down' into the correct note if you were close but not quite there. That's something I've never been able to do on a keyboard (although I have heard folks do it).
One of my instruments is an Arrick synth, which has a 1V/octave keyboard (and input) that is handy for prototyping unusual types of input.
There are two separate tracks being played back at the same time with the video cutting between the two, the bass (black room) and synth (white room). Now it feels kind of obvious to point out, but on the other hand I wasn't really conscious of it the first time I watched it. I think that's all; I'm not sure if there was anything else you noticed to indicate that some notes don't "start or end with movements".
So you noticed that but you didn't notice that there is a track playing, and the performer lays another on top of it?
When I had a Yamaha DX-11 for a while I also had the Yamaha breath controller (trying to recapture my trombone days, I guess :-) ), but I was never really satisfied with it. Later I really wanted to try a Morrison Digital Trumpet, but I had a really hard time spending $4K on something I might not use more than a couple of times.
A quick bit of synthesiser history:
Traditional electronic keyboards sense how hard you hit the key with two switches, set a small distance apart vertically. Measure the time between the two switches closing and you have the t in v = d/t. Simple, cheap and all you need for controlling an emulation of a piano.
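A rough sketch of that timing scheme (the contact gap and speed range here are made-up illustrative numbers, not taken from any real keybed):

```python
# Hypothetical two-switch velocity sensing: the key crosses two contacts
# a fixed distance apart; speed = distance / time between closures.
CONTACT_GAP_MM = 3.0  # assumed vertical gap between the two switches

def key_velocity(t_first_contact, t_second_contact):
    """Return key speed in mm/s from the two switch-closure timestamps (seconds)."""
    dt = t_second_contact - t_first_contact
    if dt <= 0:
        raise ValueError("second contact must close after the first")
    return CONTACT_GAP_MM / dt

def to_midi_velocity(speed_mm_s, max_speed=3000.0):
    """Map speed onto the 1-127 MIDI velocity range (a crude linear curve)."""
    return max(1, min(127, round(127 * speed_mm_s / max_speed)))
```

Real keyboards use non-linear velocity curves, but the principle is just this division.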
That was improved by adding aftertouch, a simple pressure strip underneath the keybed which provided an additional axis of control. That approach evolved quite quickly, peaking in 1976 with Yamaha's CS-80 synthesiser, which could sense both vertical and horizontal pressure on each key polyphonically. This was a hugely elaborate and expensive instrument but was extremely expressive, as best heard on Vangelis' soundtrack to Blade Runner.
Progress in this field came to a grinding halt in 1983, with the release of MIDI, an interoperable standard for electronic music instruments that became completely ubiquitous. MIDI was a tremendous breakthrough and made all sorts of previously very difficult things quite easy, but it had all of the usual failings of a successful standard.
MIDI has left us stuck with design decisions from 1983, the worst of these being the incredibly low data-rate. MIDI is an 8-bit protocol, operating at 31.25kbaud. The standard doesn't include any means of transmitting expression data other than velocity on a per-note basis. You can bend the pitch of all the notes currently sounding, but not one note out of a chord. If you try and send too much controller information over a channel, the timing goes to pot as you run out of bandwidth.
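The bandwidth complaint is easy to sanity-check with back-of-envelope arithmetic; a sketch, assuming the standard serial framing of 10 bits per byte (1 start + 8 data + 1 stop):

```python
# Back-of-envelope check of classic DIN MIDI bandwidth.
BAUD = 31250
BITS_PER_BYTE = 10  # 1 start bit + 8 data bits + 1 stop bit

bytes_per_second = BAUD / BITS_PER_BYTE    # 3125 bytes/s
note_on_per_second = bytes_per_second / 3  # a note-on is 3 bytes: ~1042 msgs/s

# Pitch bend is also 3 bytes, and applies to a whole channel rather than
# a single note out of a chord:
def pitch_bend(channel, value_14bit):
    """Build a 3-byte pitch-bend message (status 0xE0 | channel, LSB, MSB)."""
    return bytes([0xE0 | (channel & 0x0F),
                  value_14bit & 0x7F,
                  (value_14bit >> 7) & 0x7F])
```

About a thousand three-byte messages per second sounds like a lot until you try to stream continuous controller data for ten fingers at once.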
There's an obvious problem of platform lock-in, which nobody seems able to break - a controller of this type can't usefully control any existing sound source. There are numerous extant controllers of similar sophistication to the Seaboard, but they've all failed commercially because of their inability to usefully control existing sound generators.
The hoped-for solution to this problem is the Open Sound Control protocol, but this faces numerous problems. The most obvious is that MIDI is deeply entrenched, to the point that its shortcomings are rendered non-obvious to most musicians - our sense of the boundaries of electronic music is often inseparable from the limits of MIDI, so we don't often think about the sounds that we're unable to make with current technology. The other big issue is that of all reforms of old standards - an inability to manage scope and complexity.
The developers of OSC are obviously fearful of repeating the problems of MIDI, so they've gone in completely the opposite direction and designed a totally open-ended protocol. This has massively limited adoption of OSC by musicians, because it's extremely difficult to understand. A MIDI message is just a single byte and it's not too difficult to memorise the entire protocol - until the development of graphical computer-based sequencers, it was quite common to tidy up a recorded sequence of MIDI messages in a hex editor - any given MIDI message was just a single octet. OSC is designed to deal with every possible edge case, which of course makes it needlessly complex for the most common use-cases.
Controllers like these are doomed to niche appeal unless the manufacturers focus on the real problem - how to use the control data they generate in a manner which is both musically useful, and comprehensible to the musician.
The last major attempt was the Eigenharp, which used a bespoke software suite with a number of software instruments specifically designed for the instrument. It garnered a great deal of attention in the popular press, but nobody has made any worthwhile music with it yet.
False. That is but one transport option for MIDI data. USB-MIDI is another, much higher bit-rate protocol.
The standard doesn't include any means of transmitting expression data other than velocity on a per-note basis.
False. MIDI provides polyphonic aftertouch as a standard (i.e. not NRPN or SysEx) message type.
If you try and send too much controller information over a channel, the timing goes to pot as you run out of bandwidth.
Again false, if you're using a more modern transport such as USB-MIDI.
A MIDI message is just a single byte
False; this tells me you haven't read the MIDI spec. This is not true of either the messages themselves (which are generally multi-byte) or the message types (which are a single nibble for the basic types, 7 bits for the controller types, and 14 bits for each of RPN and NRPN messages).
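To make the structure concrete, a small sketch of how a channel-voice status byte breaks down (the note-on example is illustrative, not taken from any particular device):

```python
# Channel-voice MIDI messages are multi-byte: the status byte's high nibble
# is the message type, the low nibble is the channel (0-15 on the wire).
def decode_status(status):
    msg_type = status & 0xF0   # e.g. 0x90 = note on, 0xB0 = control change
    channel = status & 0x0F
    return msg_type, channel

# A note-on is three bytes: status, key number (7 bits), velocity (7 bits).
note_on = bytes([0x90, 60, 100])   # middle C at velocity 100, first channel
```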
Your counterpoints are correct, but MIDI is a better choice over OSC only due to its widespread compatibility. However, it's pointless to stick to USB-MIDI in the long run. It couldn't support the more complex DIY controller requirements as far back as 6 years ago, so I can only imagine the chasm increasing.
A new interoperable standard that surpasses MIDI and OSC was envisioned back in 2009, called ioFlow, but unfortunately it hasn't taken off yet.
P.s.: The website the story refers to is returning a 403, so my comment is here only to support the claim that starting out with MIDI in 2013 is not the best idea.
Edit: Now that the site is online, I can respond to the product itself. It's probably a good fit for keyboard musicians, but I'd recommend the true hacker to pick up this indie marvel: the Madrona SoundPlane-A, which has been in development for approximately 4 years. The concept is very, very similar. But it's not bound to the linear piano scale like the Roli. You can play it like guitar frets or, better yet, explore 2-dimensional note layouts, such as the ones invented by Euler, popularized by the Kaossilator and now Ableton Push. Also, it runs on OSC, so you can enjoy the low latency, and build your own paradigms by transforming the signals in Cycling '74 Max or other OSC-capable programming environments.
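For anyone curious what OSC traffic actually looks like on the wire, here is a minimal hand-rolled encoder sketch following the OSC 1.0 framing rules (the `/touch/1/pressure` address is a hypothetical example, not the Soundplane's actual namespace):

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, *floats) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments:
    padded address string, padded type-tag string, big-endian payload."""
    tags = "," + "f" * len(floats)
    return (osc_pad(address.encode()) +
            osc_pad(tags.encode()) +
            b"".join(struct.pack(">f", f) for f in floats))

# e.g. a hypothetical per-touch pressure update:
packet = osc_message("/touch/1/pressure", 0.5)
```

The open-ended address space is exactly what makes OSC flexible for controllers like this, and also what makes it harder to memorise than MIDI's fixed message table.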
Can't agree enough. I picked up a Launchpad since it was cheap and looked fun to hack on. I wrote a Python module to talk to it and then watched it chugging along while writing a sequencer on it; the bandwidth just isn't there to do intelligent updating of the pads. It is a fun little device, though, and I can't wait to get my Push to continue the work (the protocol seems to be more sane, given what Ableton is doing with it)
I've read the standard and this sounds very wrong to me. If it is in the spec then I know a few synths that are broken.
(Edit: whoops, I'm bad at converting bin->hex.)
FYI, I wasn't worried about synths generating the message but rather receiving it. It's not too hard to tell whether a synth has a concept of per-note aftertouch, but I guess if it's two distinct messages it's okay to be selective about which you support.
In case you're interested, I e-mailed a copy of the message formats from the MIDI spec to the address on your GitHub. I don't know where I found it (it's not freely available), or else I'd have linked it.
Some keyboards can detect a change in pressure on each key, while they are held by the player; these keyboards can report "key pressure" over MIDI. Some keyboards may sense the overall pressure on the device, such as with a weight sensor beneath the entire keyboard. These devices can report "channel pressure."
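The two message types described above can be sketched as simple byte builders:

```python
# The two MIDI pressure messages distinguished above:
def poly_pressure(channel, key, pressure):
    """Polyphonic (per-key) aftertouch: status 0xA0, three bytes."""
    return bytes([0xA0 | (channel & 0x0F), key & 0x7F, pressure & 0x7F])

def channel_pressure(channel, pressure):
    """Channel (whole-keyboard) aftertouch: status 0xD0, two bytes."""
    return bytes([0xD0 | (channel & 0x0F), pressure & 0x7F])
```

Since they are distinct status bytes, a synth can legitimately support one and ignore the other.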
Sounds pretty nice.
I know the rolling page layouts are popular these days, but in this case it just looks bad. Lots of transparency around text, which makes some of it hard to read or focus on.
The navigation at the top left should always display all the sections (I shouldn't have to mouse over the little cubes to reveal what they are).
When the nav text for "technology" passes over the blue background text quote area, it nearly disappears.
The grey block area with "what the press are saying" is some kind of twisted attempt at straining my eyes by putting tiny grey text on a grey block in a larger grey area. Why would anybody do that to text?
The "We are ROLI" block near the bottom is a transparency nightmare for text that is meant to be read.
The form at the bottom at least pops out a bit, but still suffers from a heavy abuse of transparency with text on it.
I've been playing piano for... over two decades now, and additional ways of modulating synthesized sound are welcomed with open arms. There are songs I play where I wish so hard that I had polyphonic aftertouch on my keyboard, but alas, it has monophonic aftertouch only. I play a lot of classical, and I even want modulation there.
Your concerns... they are carbon copies of the same complaints people had about the introduction of the piano in the early 18th century!
Nowadays, the idea that you'd play "The Well-Tempered Clavier" on anything BUT a piano relegates you to a niche in classical (or rather baroque) music, despite the fact that the songs were written for the harpsichord. The assumption that the piano will be how we play Beethoven 50 years from now -- well, I'm sure the piano will still be alive and well in 2063...
Instruments come and go, it's the music that lives on.
My concerns may be similar to those of 18th-century people, but the changes that we're experiencing now are not similar to the changes they experienced. We don't have the technology to replicate the acoustic sound of a piano. And I quite honestly don't see this being used to perform classical music. I'm not talking about all the stuff that's being "composed" these days (mind you, I'm not very partial to calling it music), I'm talking about the music up to the 1950s.
I'd like to touch on another aspect of your post: you say that you want modulation and polyphonic aftertouch when you play the piano, and you say that it's the music that lives on. For classical music, the music is the composer's; s/he composed it within the limitations of his/her era, and re-interpreting it with new technologies in ways they didn't even imagine is not making their music live on, as far as I'm concerned.
Basically my point is that, considering that I only play classical music, I don't see a use for this. It's good to read about it but I don't think that this will ever be used for classical music performance. And no, I don't mean the odd youtube videos here and there, I mean used for performance by concert pianists.
I believe I'm entitled to my opinion about this. It's a cool piece of tech but it's just that. The fact that Jordan Rudess from DT endorses this doesn't mean anything to me. He's not a classical music performer (although he has been educated as one) and this may be good for his uses. I'll be amazed if Martha Argerich or Maurizio Pollini say that they will use this product.
And just a little note, and I know this can sound like I'm attacking you, but I'm not; I'm just trying to share a bit of information. The pieces in The Well-Tempered Clavier are not "songs" per se, they are individual pieces. The song is another form in classical music, and it involves the human voice.
You're drawing a line in the sand, and saying that technological changes are okay for classical music as long as they don't cross that line, but I'm not sure you realize exactly where that line is drawn. Have you ever played Bach or Beethoven? You might be shocked to learn just how different the modern piano is from the devices that these composers worked with.
Bach composed within the limitations of the harpsichord: harpsichords lack modulation of timbre and volume, except perhaps with an una corda pedal or by use of a separate manual, both of which are extremely crude methods. It is neither practical nor desirable to emulate this on the piano: the piano is capable of dynamics, and so we play Bach's pieces by inventing dynamics for them. (I'm not going to discuss trills, talk to a musicologist if you like.)
Beethoven composed within the limitations of the piano, as it was around the year 1800. You might find such an instrument for sale somewhere, but I doubt it. The piano action has not changed, but the instrument has still evolved considerably from a musical standpoint. I am speaking, of course, of the sustain pedal. Sustain pedal technique is an essential part of classical pianists' training, but it is not historically accurate for classical pieces. Old pianos did not have nearly as much sustain as even cheap modern pianos, and it turns out that pianists in the day would just hold the sustain pedal down. Imagine what that would sound like on a modern piano: a muddy mess of notes.
Just as keyboard dynamics were not part of the music of Bach's era, sustain pedal technique was not part of the music of Beethoven's era. You'll find similar discrepancies with other instruments, such as the enormous difference between modern violin bows, which are of Italian descent, and baroque German violin bows.
Then there's the question for some keyboard pieces of what instrument they were actually written for. There are theories that certain organ pieces were actually clavichord pieces, for example.
Footnote: Yes, Beethoven and Bach composed for other instruments too.
> We don't have the technology to replicate the acoustic sound of a piano.
That's simply incorrect: the keyboard instruments are the easiest to replicate. Go listen to some samples from Synthogy's website, for example. The problem of "how do we make a computer sound like a piano" has been solved for quite some time now.
And I worded that wrong. Technology is and should be a part of classical music performances. I just don't see the relevance of this product from a classical music standpoint.
And I did look at Synthogy's website. They have a good product, but if you're saying that that product replicates the sound of a real grand piano, we have to agree to disagree. They have solved some good problems, like half-pedaling, and the harmonic resonance modeling is impressive. But I can't say that these replicate the sound of a true acoustic 100%.
Small note: Dynamics were part of Bach's era. Bach himself was a very talented organ player and there are dynamics in organs. Piano is a descendant of harpsichord, true, but it's also a descendant of organ.
And as I said in my other comments, this is getting pretty off-topic and I don't want to derail the thread. I'll be more than happy to discuss this with you klodolph through mail or whatever.
I understand how my comment has been misunderstood and I didn't want to say that this was useless, just that it was useless for classical music. But your comment is by far the most... hmm... interesting so far.
This is getting out of hand though. If all of you guys want to discuss and throw shit at me and try to convince me that this is the best thing ever, go ahead and create a new submission about technology and classical music or whatever. I have no wish to derail the submission.
I agree with the poster who said "this isn't for you." Then again, it's not for many people. Electronic instruments have only changed the basic design slightly because they are tied down by a short-sighted standard. It's why mod wheels have been around forever but these things haven't. See the post at the top about the MIDI spec and such.
When I wrote the first comment, I couldn't understand what the hype was about this and now, on top of that, I can't understand the way people acted over the comment.
> Seen this one a couple days ago and I honestly can't understand what the hype is all about.
> Change for the sake of change is pointless.
You are saying MUCH MORE than just "I don't see this used for classical music." You were giving an actual criticism to a product for which you are not the target audience. This sort of criticism makes zero sense, which is why I recommended (more gently than others I might add) to just move along.
The thing that's really great about all our acoustic musical instruments is that when you push on them, they push back. I don't mean this simply in the normal-force sense. Consider a guitar. You've got a string which you press over a fret with your left hand. This can be naively emulated by a switch. But with the guitar, the timbre and pitch change depending on how you fret the note, on the finger pressure and position and motion. With your other hand, you might be plucking the string in any of a number of different places, with your finger or with a pick. The sound is affected by your attack angle, how hard you pick, the pick's composition, and so on.
A guitar string is clearly a complicated system. There are lots of variables at play. But more importantly, it's a coherent system. It makes sense to us as a physical object that can be manipulated. When you pluck the string, you can feel it vibrate in your fretting hand. When you bend the string, its tension increases. If you amplify it, you get the sense that you are physically touching the sound.
(This is, incidentally, why audio latency absolutely KILLS when doing amp simulation.)
The experience playing a wind instrument is similar. While a saxophone may appear to be something you blow into that has keys, things are really far more complicated than that.
Keyboard-based instruments are a little different. Unlike most any symphonic instrument, the piano actually has relatively few parameters per key. There's note velocity... and that's about it. The various sustain pedals also apply. The piano's design trades single note expressivity for the ability to play ten of them at once.
It should be noted that computer synthesis (procedural or sampled) of keyboard-based instruments is very convincing. The same cannot be said for any other instrument.
Now, what about new kinds of control systems? Most of them tend to fall into two categories. One tries to improve on the piano harmonically, by coming up with a better arrangement of where the notes go. Here's an overview of some: http://sequence15.blogspot.jp/2010/03/alternative-keyboards..... They try to fix the fact that it's hard to play in different keys on a piano. Whereas on the guitar you can learn a single scale or chord and move it up and down the neck to transpose, things change radically on a piano keyboard.
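The guitar-transposition point can be made concrete: because pitch on a string is a linear function of fret position, adding a constant to every fret transposes the whole shape. A small sketch (the chord shape below is an arbitrary example):

```python
# Why guitar shapes transpose: pitch is a linear function of position,
# so sliding a shape along the neck adds a constant to every note.
STANDARD_TUNING = [40, 45, 50, 55, 59, 64]  # MIDI notes E2 A2 D3 G3 B3 E4

def fretted_note(string, fret):
    return STANDARD_TUNING[string] + fret

# The same shape moved up two frets is the same chord, transposed:
shape = [(0, 0), (1, 2), (2, 2)]  # an assumed partial E-shape
original = [fretted_note(s, f) for s, f in shape]
transposed = [fretted_note(s, f + 2) for s, f in shape]
assert [b - a for a, b in zip(original, transposed)] == [2, 2, 2]
```

On a piano, by contrast, the black/white geometry means every key signature demands a different fingering, which is exactly what the isomorphic layouts try to fix.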
The second category is those like the Seaboard, which try to add new dimensions of control to a regular piano keyboard. Another example is the Continuum (http://www.hakenaudio.com/Continuum/). It's very common now to have both velocity and continuous pressure sensitivity (aftertouch) on a regular keyboard, as well as various side controllers for dealing with pitch or an abstract "modulation" parameter.
These controllers nearly always buy into the separation of control from synthesis. It makes perfect technical sense. But most of the instruments we would consider to be "expressive" don't work that way! In fact piano-style instruments are pretty much the only ones that do.
Which leads me to my point: A control mechanism should be considered together with the instrument it controls. It's fantastic that this new keyboard has all these new dimensions that you can map to sound, but what is it REALLY good for? What is the instrument that wants to be controlled in this way? The spiffy new control surfaces nearly always leave this problem unsolved and thus remain little more than novelty items.
The classical pipe organ, often referred to as "the king of instruments", is a synthesiser. The organ console is an electrical or pneumatic controller, with no direct connection to the pipes. Most large organs have considerable "latency", due to the distance from the console to the pipe room. Some pipes may be as much as a hundred feet away from the player, so the sound will take nearly 90ms to reach the player - far more delay than a typical modern computer system introduces.
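The distance figure is easy to sanity-check; a sketch assuming sound travels at roughly 343 m/s in air:

```python
# Rough check: sound from a pipe 100 feet away reaches the player
# after distance / speed_of_sound.
SPEED_OF_SOUND_M_S = 343.0   # dry air at ~20 degrees C
FEET_TO_M = 0.3048

def acoustic_latency_ms(distance_feet):
    return distance_feet * FEET_TO_M / SPEED_OF_SOUND_M_S * 1000
```

For 100 feet this works out to just under 90 ms, which organists compensate for by ear and by habit.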
The organ is still regarded as a highly expressive instrument, in spite of the relatively modest control a player has beyond simple pitch and duration. Although most organs have many stops which provide similar timbres to existing instruments, it is understood that the organ is an instrument in its own right and should be played as such, and is not merely a tool for emulating other instruments.
The idea that an electronic instrument should imitate acoustic instruments is simply a poverty of imagination. Electronic instruments, used in a manner that is sympathetic to their natural properties, can be absolutely as expressive as any acoustic instrument. The theremin has perhaps the worst user-interface of any musical instrument as the player has no physical contact with the instrument whatsoever, but is utterly beautiful when played by a master.
The challenge for electronic musicians is that they are often both performer and instrument-maker. A monosynth of any quality can be configured in a near-infinite variety of ways, many of which were completely unforeseen by the designer. Musicians working with modular systems or DSP programming environments have a blank canvas. We do not as yet have a good theoretical framework for this task, but electronic music is extremely young - no more than ninety years old at most. We are only just beginning to scratch the surface of what is possible.
Additional points of interest which do not affect your actual argument, but which are good for clarification: the most responsive pipe organs use tracker action, which, being mechanical, allows much more control than the more "modern" but less responsive electropneumatic actions necessary for the very largest instruments. It makes the action from pressing the key to opening the pipe faster - much of the latency in the organ is often in the pneumatics - but of course it cannot change the distance issue.
Also, attack makes a huge difference to the sound on a well-voiced instrument. Subtleties in duration are of course as vast on the organ as they are on the piano and other instruments.
You don't even need to go as far as organs to see latency.
Low-pitched stringed instruments such as fretless bass guitars and upright basses have latency due to the mechanics of the strings themselves; e.g. jazz bass players have to account for this when playing. Most good jazz bass players do this unconsciously.
Polyphonic reed instruments (e.g. harmonicas, melodicas, accordions) all have this issue as well. Reeds of significant mass (i.e. lower-pitched reeds) can take tenths of a second to sound (more on older instruments).
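A rough rule of thumb behind both examples: a note can't establish itself much faster than a few cycles of its fundamental, so low pitches are inherently slow to speak. A sketch (the three-cycle figure is an assumption for illustration, not a measured constant):

```python
# A note needs at least a few cycles of its fundamental to establish,
# so low notes are inherently slow to speak. Period = 1 / frequency.
def onset_floor_ms(freq_hz, cycles=3):
    """A crude lower bound on note onset time, assuming ~3 cycles to speak."""
    return cycles / freq_hz * 1000

# Low E on a bass (~41 Hz) needs roughly 73 ms by this estimate;
# middle C (~262 Hz) only about 11 ms.
```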
Also: if you can feel the string in your fretting hand vibrate on a fretted instrument, you're doing it wrong. (The string does not vibrate past the fret -- that's the point of frets! If it does, you're not pressing firmly enough and you get fret buzz. Either that or your finger's on the wrong side of the fret and you're muting the sound.)
Generally the entire body of an acoustic guitar vibrates with the sound of a plucked string - the fretting hand (being the only hand currently attached to the guitar) would likely feel the vibration carrying from the body, through the neck of the guitar. It's faint, but nevertheless perceivable.
> The organ console is an electrical or pneumatic controller, with no direct connection to the pipes
I've no criticism of your theremin example though.
When I depress the fingerboard of a Continuum I can feel it vibrate in my ears, which is quite enough feedback to manipulate a sound expressively. Sure, playing the guitar is a beautiful, unique, rich sensory experience (which I love), but it does not follow that the lack of the "guitar experience" leads to a lack of musicality. Each instrument has its own mode of interaction, from the guitar to the piano, and talented people seem to find ways to be expressive with all of them.
> These controllers nearly always buy into the separation of control from synthesis. It makes perfect technical sense. But most of the instruments we would consider to be "expressive" don't work that way!
So? Why does what already exists matter? There are plenty of people willing to experiment with a new input surface to find out what it's good for. You may not be one of them, but why do you need to be "suspicious?" These experimenters don't take away your ability to play a guitar.
> but what is it REALLY good for?
I can't imagine this being played on any other instrument:
> What is the instrument that wants to be controlled in this way?
You could also ask, "what is the music that wants to be made by a stringed instrument?" People have been exploring that question for thousands of years, and we're still finding out new answers. Electronic instruments are very, very new compared to that, and there's been comparatively very little time to learn about them. I say let's go wild and create myriad new instruments and find out what works.
Personally, I think the decoupling of input surfaces and sound generators is one of the all-time best developments in music. As a brass musician, it's nontrivial for me to produce the sound of an oboe. However, given a very expressive input surface I can produce an extraordinary range of timbres without dedicating another decade to practicing each individual instrument.
This is exactly my point! In the abstract, the Continuum doesn't have much to say, musically. Paired with this sound source and played by Jordan Rudess, it works. (Who also has something to do with this new company, it seems.)
Pat Metheny's approach to the guitar synth speaks to this, I think:
Unlike many guitar synth users, Metheny limits himself to a very small number of sounds. In interviews, he has argued that each of the timbres achievable through guitar synthesis should be treated as a separate instrument, and that he has tried to master each of these "instruments" instead of using it for incidental color. One of the "patches" that Pat used often is on Roland's JV-80 "Vintage Synth" expansion card, titled "Pat's GR-300".
> You may not be one of them, but why do you need to be "suspicious?" These experimenters don't take away your ability to play a guitar.
Point taken, I'm probably not one of them. But I used to be, and there's a lot of snake oil out there. My experience with newfangled instruments is as follows:
- Korg Padkontrol: This was my only real contact with an MPC-style interface. I wanted to use it to sequence drums in real time, and it was decidedly mediocre for that. When using it for other things, part of the musical task turned to the configuration of the controller. It's creativity, but a different kind to be sure. It blurs the line between performance and composition.
- Zendrum: Seeing that people were able to play live on these things somewhat convincingly led me to try it. It never really clicked for me. I spent too much time configuring and never enough actually practicing the instrument. There are many reasons this could be my fault, not the least of which is that I'm not a drummer.
- Chapman Stick knockoff: I was never able to get beyond just piddling around on this thing. My imagination was captured by a video I saw on the web at some point, and I guess the ad copy closed the sale. But some weeks after I got it, I was left with a distinct feeling of "now what?" It was an instrument without any useful context. I've been told that the real thing is far more compelling than the knockoffs; perhaps I'll try one of those some day.
This is far more likely a commentary on myself than on these three instruments. For me, searching for the perfect instrument was something like creating a new programming language before writing your program. It's a never-ending task that inevitably fizzles out. I have since been better served by my Telecaster.
I think the video does a reasonable job showing how it can be used creatively. I get the point you're trying to make about harmony between the input device and sound source. However, I would argue that these days, with really incredible synthesizers available in both software and hardware (I own one of these: http://www.studioelectronics.com/products/synths/omega8/), the inverse is more common.
Input devices do a poor job of exposing the features of the instruments.
This is the first device I've seen that seems to actually care about the performability and feel without making you look like a douchebag. That's what makes it interesting to me.
There's actually a recurring pattern for anyone who cares to notice. Most new instruments are developed to emulate something else and they do it badly. Pipe organs were developed to emulate choirs. Now we love pipe organs precisely because of their limitations. They have a "distinctive sound."
The same thing happened with the Fender Rhodes piano, the Mellotron, the Moog, the Fairlight Synthesizer, and (believe it or not) MIDI. They started as cheap substitutes (masked by a "wow" factor), then people bemoaned their limitations relative to what they emulated and ditched them, then they were resurrected for the uniqueness of their sound.
Instruments are defined by their limitations. No matter how much we claim to hate those limitations, we end up loving them for them.
I agree that keyboard instruments are the most convincing, and certainly the easiest virtual instruments to play (since playing a digital piano is the same as playing an acoustic piano), but there are some pretty expressive and convincing virtual instruments out there:
Bass guitar (acoustic and electric): http://www.spectrasonics.net/products/trilian-audio.php
Acoustic guitar: http://www.youtube.com/watch?feature=player_embedded&v=q...
Electric guitar: http://www.vir2.com/instruments/electri6ity
Orchestra (perhaps not the best solo instruments, but pretty great results for a whole symphony): http://www.vsl.co.at/en/67/702/703/413.htm and http://www.soundsonline.com/Symphonic-Orchestra and http://www.garritan.com/products/personal-orchestra-4/
And, just to mention it, the best piano virtual instrument I've heard so far: http://www.synthogy.com/demos/grandpiano.html
I'll have to agree that it might just have to remain a novelty for now.
How exactly do you expect to get this to work with your existing tools? It's obviously not usefully MIDI-compatible.
If the instrument could really stand on its own, then compatibility isn't a problem.
While I would like to have this kind of control, I would much rather have the flexibility that a standard MIDI keyboard with aftertouch affords with respect to synthesis.
My plugin/VST host doesn't know about this either, so I would need to assign 10 different MIDI channels and all 10 CCs every time I want to change instruments. And for those instruments without a per-note bend, I would need to create 10 instances as well.
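The channel-juggling workaround described here can be sketched as a simple round-robin allocator: give each sounding note its own MIDI channel so that per-channel pitch bend effectively becomes per-note pitch bend (essentially the approach later standardized as MPE):

```python
# Sketch of a channel-rotation allocator for per-note pitch bend:
# each new note grabs a free channel; releasing the note frees it.
class ChannelRotator:
    def __init__(self, channels=range(1, 11)):   # e.g. 10 channels reserved
        self.free = list(channels)
        self.assigned = {}                        # note -> channel

    def note_on(self, note):
        ch = self.free.pop(0)
        self.assigned[note] = ch
        return ch

    def note_off(self, note):
        ch = self.assigned.pop(note)
        self.free.append(ch)
        return ch
```

The pain point the comment describes is real: every instrument in the host then has to be instantiated once per channel, which is exactly the kind of configuration burden that keeps these controllers niche.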
So nice to touch. A high-resolution, super-responsive controller... of wood! Maybe I'm just not enough of a cyborg, but the lack of a wood surface made the Haken Continuum (wetsuit material) far less compelling. I worry about the same thing with the Seaboard.