
Virtualsity: Inventing the instruments of the future - prostoalex
http://harpers.org/blog/2014/08/virtualsity/
======
anigbrowl
If you're taken with the Opal's hexagonal note grid but aren't picky about it
being made out of walnut burl, you can get an Axis controller built on the
same principle for a fraction of the price: [http://www.c-thru-
music.com/cgi/?page=home](http://www.c-thru-music.com/cgi/?page=home)

------
ahaefner
It's interesting that a lot of these instruments are electronic recreations of
pre-existing instruments. As a saxophone player who has played an EWI, I can
say the electronic versions take time to learn and often end up feeling less
expressive.

------
unwind
I think the title would have been better if it had said _musical_ instruments.
My first thought was that this was about _measuring_ instruments of some kind.
Oh well.

------
bane
Making new instruments is pretty hard stuff. One thing I've noticed is that
lots of new instruments seem to flake on making the instrument really,
_truly_ expressive. At a simple level, most new instrument ideas I've seen are
basically a trigger sound on/off, if you're lucky some kind of volume control,
and if you're _really_ lucky a pitch bend -- all tied into some kind of synth.
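That whole expressive vocabulary -- a note trigger, a velocity-scaled volume,
maybe a pitch bend -- fits in a few lines. A minimal Python sketch (the names
and MIDI-style ranges are illustrative, not any real synth's API):

```python
import math

SAMPLE_RATE = 44100

def render_note(freq_hz, velocity, bend_semitones=0.0, seconds=1.0):
    """Everything a basic trigger-style synth voice lets you control:
    pitch (plus an optional bend), velocity mapped to volume, and a
    duration. Every other aspect of the timbre is fixed."""
    freq = freq_hz * 2 ** (bend_semitones / 12.0)  # bend in equal temperament
    amp = velocity / 127.0                         # MIDI-style velocity -> volume
    n = int(SAMPLE_RATE * seconds)
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

note = render_note(440.0, velocity=100, bend_semitones=0.5)
```

Three scalar inputs, one fixed waveform -- compare that to the list of
violin nuances below.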

But most real instruments offer a tremendous amount of rich flexibility in
performance. I grew up playing the violin and one of the reasons you don't
really hear good synth approximations of a string instrument is because on a
real instrument there are thousands of subtly different ways of playing a
given note. Subtle differences in attack, bowing, bow position, string
position, pressure of your fingers on the strings, incredibly slight
intonation changes, and on and on and on. I can say with some confidence that
no two notes played on a violin will ever be exactly the same.

The example of the Continuum instrument is probably the most advanced I've
ever seen, and even then there are all kinds of flaws in the sound, bits that
sound noticeably synthy -- and I can think of a few performance issues that
this instrument can't easily replicate. That's not to say some other kind of
sound wouldn't be amazing (I was more impressed with the out and out synth
sound than the string sound). They've tackled an impressive number of
challenges.

This makes learning these kinds of instruments very hard, but it also means
that they can be performed at incredibly advanced virtuoso levels.

This article does a good job of highlighting some of the major efforts to
produce expressive instruments. It's pretty exciting to me that technology has
started to reach a point where really expressive instruments, capable of
virtuoso-level live performance, have started to show up. My personal favorite
is the Eigenharp, but it also has a little too much of a one-man-band aspect
to it, what with the mouthpiece and all.

One point the article makes, however, is that these instruments are still
remarkably rare to see. I think one of the problems is that they are, for the
most part, hideously expensive. There's no entry-level cheapo version for a
middle-school music student to bang around on the bus, like when you're
learning an acoustic instrument. I think my first violin cost $150, and I got
$100 of that back when I traded it in for a better instrument.

Musicians, being notoriously poor, often simply can't afford one of these
things and thus can't build mastery on them from a very young age.

Some examples:

Continuum - $3400 - $5300

Eigenharp - $4000 - $8260 (pico version is $800)

Seaboard - $2000 - $9000

LinnStrument - not sure, but looks like between $1,000 and $2,000

Still, as a musician, I find all of these far more interesting than something
like Imogen Heap's MIT Media Lab-style wearable instrument tech (I've disliked
everything I've ever seen coming from there).

~~~
abruzzi
Part of this is that the complexity of the interface (i.e. piano keyboard vs.
violin, etc.) needs to be matched by complexity in controllable modulation in
the synthesis engine. The history of musical synthesis was a piling on of
complexity up to about the late '90s; then there was stagnation (or to a
degree even reversion). Part of this was simply whether users could reasonably
build sounds with some of the more complex synth engines, but I think a lot of
instruments (at the time mostly sample-playback subtractive synthesis)
couldn't build up the complex nuance of sound you get from a natural system.

In the mid '90s Yamaha came out with a synth called the VL1. It was hugely
expensive and used physical-modeling waveguide synthesis. It modeled a
vibrating column of air in a wind instrument, and to program it you needed
special software to design the construction and shape of the instrument (this
was radical because all synthesis at the time was about controlling the
sound). It was actually possible to create instruments that couldn't play in
tune, or wouldn't even make a sound. This instrument responded really well to
complex inputs and was especially sought after by EWI wind players (the
original was a keyboard instrument, but required a breath controller to play
most sounds, since the sounds were designed to require constant energy going
into the system). But it turned out to be exceptionally difficult to make
musically useful sounds, and unlike most pro synths of the time, you couldn't
program the models until, some years later, Yamaha released its internal
sound-design software.
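To get a feel for what physical modeling means in its very simplest form,
here's a Python sketch of Karplus-Strong plucked-string synthesis -- a far
simpler relative of the VL1's waveguide model. The point is the same: the
pitch and decay emerge from a feedback structure fed with energy, rather than
from stored waveforms. (Constants are illustrative.)

```python
import random
from collections import deque

SAMPLE_RATE = 44100

def pluck(freq_hz, seconds=1.0, damping=0.996):
    """Karplus-Strong: a delay line one period long, seeded with noise
    (the 'pluck'), fed back through a two-point average. The averaging
    filter makes high frequencies die faster, as on a real string;
    the pitch comes from the delay-line length."""
    period = int(SAMPLE_RATE / freq_hz)
    line = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    out = []
    for _ in range(int(SAMPLE_RATE * seconds)):
        s = line.popleft()
        out.append(s)
        # feedback: average the oldest two samples and recirculate
        line.append(damping * 0.5 * (s + line[0]))
    return out
```

Even this toy model shows the VL1's property: pick a bad delay length or
damping and the "instrument" plays out of tune or barely sounds at all.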

Since then most commercial synths have been either multipurpose
sample-playback subtractive, or virtual analogs that simulate the old analog
synths from the '70s.

~~~
bane
Yeah, looking at the Continuum demo, it looks like thousands of hours went
into trying to build up a virtual string instrument sound. _Somebody_ has to
splice the samples and build up all the velocity levels and all that.

I'm definitely not disparaging it -- it's a ton of work. But I think it also
gives us some ideas about the limitations of our models of how sound is
actually made. Along the way we've ended up producing some truly unique and
interesting sound models, which have given birth to any number of entire
musical genres.

I think part of the problem is also building some kind of control interface
that can even detect the elements of a nuanced performance. Take something as
straightforward as an electronic drum head. You've got velocity, an
edge-to-middle playback area, different parts of the stick or hand, the side
of the drum (not just the head), different stick-head materials. You can stop
a ringing drum or cymbal, or disrupt the vibration in subtle ways. For
something that just seems like "onImpactPlaybackSample()", it's not anywhere
near that simple.
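One way to see the gap: a trigger pad collapses all of this into one callback,
while an expressive controller has to resolve each axis of the hit to a
different sound. A hypothetical Python sketch (the field names and the
sample-naming scheme are made up for illustration, not any real e-drum
protocol):

```python
from dataclasses import dataclass

@dataclass
class DrumHit:
    velocity: float   # 0..1, how hard the hit lands
    radius: float     # 0..1, center of the head (0) out to the rim (1)
    implement: str    # "stick_tip", "stick_shoulder", "hand", ...
    zone: str         # "head", "rim", "shell"

def choose_sample(hit: DrumHit) -> str:
    """Resolve the hit's axes to one of many pre-recorded samples --
    already far beyond a single onImpactPlaybackSample()."""
    position = "edge" if hit.radius > 0.7 else "center"
    layer = min(int(hit.velocity * 4), 3)  # four velocity layers
    return f"{hit.zone}_{position}_{hit.implement}_v{layer}.wav"
```

And this still ignores the continuous gestures the comment mentions, like
choking a ringing cymbal, which need ongoing sensing rather than a one-shot
event.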

~~~
abruzzi
I think that's why the most successful electronic controllers have
essentially been patterned after acoustic controllers. It's easier to start
with a basic model of a saxophone and then add wind pressure, embouchure, and
other real techniques. That way you know people will take to them, because
all the wind players already know these techniques. History has a lot of
unique musical interfaces that never got past being curiosities because most
performers couldn't adapt to them (the theremin...).

