Although the comments seem against it, I'm for it - I've played piano (and organ) for some 45 years now. You have to remember that the piano is mainly a percussion instrument, and there are few options to temper the sound, outside of sustain and such.
Synthesizers have had the same capability as shown in the video, but you had to reach to the side and twirl a wheel to get the same effect, which detracts from the playability.
Initially, this is not suited for traditional classical (probably better for jazz), but who's to say someone couldn't write new classical music based on this instrument (although goodness knows what the musical notation would be).
I love the idea of creating new instruments that are sensitive to new and different parameters. I think the Seaboard is pretty cool, and it looks like there are some things you can do with it that you can't do with traditional MIDI keyboards (velocity modulation after the initial keypress - I think that's called aftertouch in MIDI land) and key-specific pitch bending.
But it also reminds me of one reason why analogue (acoustic) instruments can be so incredibly powerful: they are ultimately extensible! You don't need to explicitly describe how the thing will react to input, and it's not ignorant (though maybe not very responsive) of any physical input. Throughout history, people have been continually discovering new techniques on so many different instruments, even though some of those instruments' designs have changed very little. This is, of course, all predicated on a good initial instrumental design, but it makes me appreciate the wonders of, say, a guitar or piano even more.
I agree that a great analogue instrument is still greater, but here, given an electric predicate, this works with what keyboardists already can do, and makes it not just a percussion instrument (piano, harpsichord), or a timed note (synth), or a timed note with initial attack control (tracker action pipe organ). In other words, practically the same interface provides much more control of more variables much more easily. Whether great playing, music making and improvising can and does ensue is another issue, but I hope so.
People might want to check out Roger Linn, Ed Goldfarb, Geert Bevin (I'd really like to have a continuum and eigenharp), and actuated instruments site.
(I don't have any nonstandard instruments (except maybe a fretless guitar), but I've thought about this as an amateur player of piano/keys, guitar, strings (cello) and woodwinds, and as a former mallet percussionist: how you could combine mouthpiece and foot controllers with a keyboard, how to do glisses, tremolo, vibrato, etc. - i.e. shape the attack/decay, timbre and pitch (continuously, or as in Don Ellis' quarter-tone music) like you can with cello and clarinet.)
I assume it's the way it was shot that not all of the notes that are being played start or end with movements and/or touches of the keyboard. (not being snarky, its just noticeable is all) I think it's great that people are doing alternative instruments like this. For a while we've had synthesizers providing the tone generator for recreated inputs (like the laser harp), but this gives us more input options.
There is another such "keyboard" which is more like a horizontal bass/cello fretboard than a keyboard. I played around on it when it was being demoed at Guitar Center in San Jose. As a former trombone player, it felt similar in that the 'feel' of the intonation was there, rather than explicitly hitting a particular key. And there was the ability to 'slide up' or 'slide down' into the correct note if you were close but not quite there. Something I've never been able to do on a keyboard (although I have heard folks do that).
One of my instruments is an Arrick synth [1] which has a 1V/octave keyboard (and input) which is handy for prototyping unusual types of input.
> I assume it's the way it was shot that not all of the notes that are being played start or end with movements and/or touches of the keyboard.
There are two separate tracks being played back at the same time with the video cutting between the two, the bass (black room) and synth (white room). Now it feels kind of obvious to point out, but on the other hand I wasn't really conscious of it the first time I watched it. I think that's all; I'm not sure if there was anything else you noticed to indicate that some notes don't "start or end with movements".
>I assume it's the way it was shot that not all of the notes that are being played start or end with movements and/or touches of the keyboard. (not being snarky, its just noticeable is all)
So you noticed that but you didn't notice that there is a track playing, and the performer lays another on top of it?
No, I did not notice that they were in the process of laying down additional tracks. Here on Sunday the site seems to be down, but the YouTube video (http://www.youtube.com/watch?v=8n-bEy9ISpM) is still there. Listening to the upper register track (the 'white' room), I still feel the mismatch between my ear and my eyes (ignoring the bass line). It's not a big deal, I just noted it. There has been a lot of 'fakery' in the synth market, and so that twitch is perhaps stronger than it would be for other things.
When I had a Yamaha DX-11 for a while I also had the Yamaha breath controller (trying to recapture my trombone days I guess :-) but was never really satisfied with it. Later I really wanted to try a Morrison Digital Trumpet [1] but really really had a hard time spending $4K on something I might not use more than a couple of times.
They've got excellent PR, but they're not doing anything particularly novel. They're advertising a solution to a long-solved part of the puzzle, but saying nothing about the really difficult bit of electronic music control - interoperability.
A quick bit of synthesiser history:
Traditional electronic keyboards sense how hard you hit the key with two switches, set a distance apart vertically. Measure the time between each switch closing and you have one side of v=d/t. Simple, cheap and all you need for controlling an emulation of a piano.
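To make the v=d/t idea concrete, here's a minimal sketch of how that two-switch measurement works. The 3 mm contact gap is a made-up illustrative value; real keybeds vary.

```python
# Hypothetical two-switch velocity sensing: two contacts a fixed
# distance apart close one after the other as the key travels down.
CONTACT_GAP_MM = 3.0  # assumed vertical gap between the two switches

def key_velocity(t_first_contact_s, t_second_contact_s):
    """Estimate key velocity (mm/s) from the two switch-closure times."""
    dt = t_second_contact_s - t_first_contact_s
    if dt <= 0:
        raise ValueError("second contact must close after the first")
    return CONTACT_GAP_MM / dt  # v = d / t

# A hard strike crosses the gap quickly, so it reads as a high velocity:
fast = key_velocity(0.000, 0.002)   # 2 ms between contacts -> 1500 mm/s
slow = key_velocity(0.000, 0.020)   # 20 ms between contacts -> 150 mm/s
```

The synth then maps that one number onto loudness (and often timbre), which is exactly why it's "all you need" for a piano emulation but nothing more.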
That was improved by adding aftertouch, a simple pressure strip underneath the keybed which provided an additional axis of control. That approach evolved quite quickly, peaking in 1976 with Yamaha's CS-80 synthesiser, which could sense both vertical and horizontal pressure on each key polyphonically. This was a hugely elaborate and expensive instrument but was extremely expressive, as best heard on Vangelis' soundtrack to Blade Runner.
Progress in this field came to a grinding halt in 1983, with the release of MIDI, an interoperable standard for electronic music instruments that became completely ubiquitous. MIDI was a tremendous breakthrough and made all sorts of previously very difficult things quite easy, but it had all of the usual failings of a successful standard.
MIDI has left us stuck with design decisions from 1983, the worst of these being the incredibly low data-rate. MIDI is an 8-bit protocol, operating at 31.25kbaud. The standard doesn't include any means of transmitting expression data other than velocity on a per-note basis. You can bend the pitch of all the notes currently sounding, but not one note out of a chord. If you try and send too much controller information over a channel, the timing goes to pot as you run out of bandwidth.
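The bandwidth ceiling is easy to quantify. A back-of-envelope sketch for the classic DIN transport (serial framing is 1 start bit + 8 data bits + 1 stop bit per byte):

```python
# Rough throughput of a 31.25 kbaud DIN MIDI link.
BAUD = 31250
BITS_PER_BYTE = 10   # start bit + 8 data bits + stop bit
NOTE_ON_BYTES = 3    # status + note number + velocity

bytes_per_second = BAUD / BITS_PER_BYTE           # 3125 bytes/s
messages_per_second = bytes_per_second / NOTE_ON_BYTES
ms_per_message = 1000 / messages_per_second       # ~0.96 ms each
```

Roughly a thousand three-byte messages per second, at about a millisecond apiece. That's plenty for note events, but a stream of continuous per-note expression data eats through it quickly, which is the timing problem described above.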
There's an obvious problem of platform lock-in, which nobody seems able to break - a controller of this type can't usefully control any existing sound source. There are numerous extant controllers of similar sophistication to the Seaboard, but they've all failed commercially because of their inability to usefully control existing sound generators.
The hoped-for solution to this problem is the Open Sound Control protocol, but this faces numerous problems. The most obvious is that MIDI is deeply entrenched, to the point that its shortcomings are rendered non-obvious to most musicians - our sense of the boundaries of electronic music is often inseparable from the limits of MIDI, so we don't often think about the sounds that we're unable to make with current technology. The other big issue is that of all reforms of old standards - an inability to manage scope and complexity.
The developers of OSC are obviously fearful of repeating the problems of MIDI, so they've gone in completely the opposite direction and designed a totally open-ended protocol. This has massively limited adoption of OSC by musicians, because it's extremely difficult to understand. A MIDI message is just a single byte and it's not too difficult to memorise the entire protocol - until the development of graphical computer-based sequencers, it was quite common to tidy up a recorded sequence of MIDI messages in a hex editor - any given MIDI message was just a single octet. OSC is designed to deal with every possible edge case, which of course makes it needlessly complex for the most common use-cases.
Controllers like these are doomed to niche appeal unless the manufacturers focus on the real problem - how to use the control data they generate in a manner which is both musically useful, and comprehensible to the musician.
The last major attempt was the Eigenharp, which used a bespoke software suite with a number of software instruments specifically designed for the instrument. It garnered a great deal of attention in the popular press, but nobody has made any worthwhile music with it yet.
MIDI is an 8-bit protocol, operating at 31.25kbaud.
False. That is but one transport option for MIDI data. USB-MIDI is another, much higher bit-rate protocol.
The standard doesn't include any means of transmitting expression data other than velocity on a per-note basis.
False. MIDI provides polyphonic aftertouch as a standard (i.e. not NRPN or SysEx) message type.
If you try and send too much controller information over a channel, the timing goes to pot as you run out of bandwidth.
Again false, if you're using a more modern transport such as USB-MIDI.
A MIDI message is just a single byte
False; this tells me you haven't read the MIDI spec. This is true neither of the messages themselves (which are generally multi-byte messages) nor of the message types (which are a single nibble for the basic types, 7 bits for the controller types, and 14 bits for each of RPN and NRPN messages).
Many OSC based applications for the monome[1] were ported to work with the copy-paste-avalanche of grid controllers (Launchpad, APC). The USB-MIDI data rate limitations are clearly visible when trying to change the state of 64 keypads at once. 64 individual signals with 8ms latency completely break the flow[2], compared to a single optimized OSC message.
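For a sense of scale, here's a rough sketch of the worst case the grid controllers hit: repainting all 64 pads, with each pad change sent as a separate 3-byte message over a 31250-baud DIN link (USB-MIDI is faster, but the per-message overhead remains).

```python
# Time to serially update a full 64-pad grid over classic DIN MIDI.
PADS = 64
BYTES_PER_MSG = 3    # e.g. a note-on per pad: status + note + velocity
BITS_PER_BYTE = 10   # start bit + 8 data bits + stop bit
BAUD = 31250

total_ms = PADS * BYTES_PER_MSG * BITS_PER_BYTE * 1000 / BAUD
# ~61 ms to repaint the grid one message at a time, versus a single
# OSC datagram that can carry all 64 pad states at once.
```

The last pad lags the first by tens of milliseconds, which is exactly the "broken flow" effect described above.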
Your counterpoints are correct, but MIDI is a better choice over OSC only due to its widespread compatibility. However, it's pointless to stick to USB-MIDI in the long run. It couldn't support the more complex DIY controller requirements as long as 6 years ago, so I can only imagine the chasm increasing.
A new interoperable standard that surpasses MIDI and OSC, called ioFlow[3], was envisioned back in 2009, but unfortunately it hasn't taken off yet.
P.s.: The website the story refers to is returning a 403, so my comment is here only to support the claim that starting out with MIDI in 2013 is not the best idea.
Edit: Now that the site is online, I can respond to the product itself. It's probably a good fit for keyboard musicians, but I'd recommend the true hacker pick up this indie marvel: Madrona SoundPlane-A[4], which has been in development for approximately 4 years. The concept is very, very similar, but it's not bound to the linear piano scale like the Roli. You can play it like guitar frets or, better yet, explore 2-dimensional note layouts, such as the ones invented by Euler[5], popularized by the Kaossilator and now Ableton Push. Also, it runs on OSC, so you can enjoy the low latency and build your own paradigms by transforming the signals in Cycling '74 Max or other OSC-capable programming environments.
> The USB-MIDI data rate limitations are clearly visible when trying to change the state of 64 keypads at once. 64 individual signals with 8ms latency completely break the flow[2], compared to a single optimized OSC message.
Can't agree enough. I picked up a Launchpad since it was cheap and looked fun to hack on. I wrote a Python module to talk to it and then watched it chugging along while writing a sequencer on it; the bandwidth just isn't there to do intelligent updating of the pads. It is a fun little device, though, and I can't wait to get my Push to continue the work (the protocol seems to be more sane, given what Ableton is doing with it).
I was going to ask about the message details but I figured it would make me sound like an asshole, so thanks for that. I've looked for this kind of message specifically and not found it in the spec. Does it specify a note number or do something odd?
FYI, I wasn't worried about synths generating the message but rather receiving it. It's not too hard to tell whether a synth has a concept of per-note aftertouch, but I guess if it's two distinct messages it's okay to be selective about which you support.
Yes, the message format of polyphonic aftertouch is similar to that of a note on event (= note # + pressure). Channel aftertouch includes only the pressure.
In case you're interested, I e-mailed a copy of the message formats from the MIDI spec to the address on your GitHub. I don't know where I found it (it's not freely available) or else I'd have linked it.
This page: http://www.srm.com/qtma/davidsmidispec.html (which is quite obviously highly unofficial) mentions both 0xAx and 0xDx, calling them "Key Pressure" and "Channel Pressure", respectively. The explanation says:
Some keyboards can detect a change in pressure on each key, while they are held by the player; these keyboards can report "key pressure" over MIDI. Some keyboards may sense the overall pressure on the device, such as with a weight sensor beneath the entire keyboard. These devices can report "channel pressure."
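The two message types quoted above can be told apart by the high nibble of the status byte (0xA = key pressure, 0xD = channel pressure). A minimal sketch of a parser for just these two types:

```python
# Distinguish MIDI polyphonic key pressure from channel pressure
# by the status nibble, per the message formats discussed above.
def parse_pressure(msg):
    status = msg[0] & 0xF0   # high nibble: message type
    channel = msg[0] & 0x0F  # low nibble: MIDI channel (0-15)
    if status == 0xA0:
        # Polyphonic: one message per key, carries note number + pressure
        return ("key_pressure", channel, msg[1], msg[2])
    if status == 0xD0:
        # Channel: one pressure value for the whole keyboard, no note number
        return ("channel_pressure", channel, msg[1])
    return None

poly = parse_pressure([0xA0, 60, 100])  # middle C on channel 1, pressure 100
chan = parse_pressure([0xD3, 64])       # channel 4, overall pressure 64
```

Note the asymmetry: the polyphonic form is a three-byte message because it must name the key, while the channel form needs only two bytes. That extra byte per key is part of why polyphonic aftertouch is so bandwidth-hungry on the DIN transport.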
Nitpick, but surely you can still try to send too much data over USB-MIDI and have similar issues. And given the nature of USB, it might even be harder to estimate where that limit lies.
I know the rolling page layouts are popular these days, but in this case it just looks bad. Lots of transparency around text, which makes some of it hard to read or focus on.
The navigation at the top left should always display all the sections (I shouldn't have to mouse over the little cubes to reveal what they are).
When the nav text for "technology" passes over the blue background text quote area, it nearly disappears.
The grey block area with "what the press are saying" is some kind of twisted attempt at straining my eyes by putting tiny grey text on a grey block in a larger grey area. Why would anybody do that to text?
The we are ROLI block near the bottom is a transparent nightmare for the text meant to be read.
The form at the bottom at least pops out a bit, but still suffers from a heavy abuse of transparency with text on it.
I was astonished by the loading times. Click an article in the "News" section and it takes about 5-10 seconds (while looking at the huge spinner) to load. That's ages for one paragraph of text and a 900x600 .jpg. For a quick reader, the page takes longer to load than to read. That's not good.
A professor I had in college created something very similar back in the 90's. He would have them in the lab occasionally and they were pretty interesting to play. http://www.hakenaudio.com/Continuum/
Seen this one a couple days ago and I honestly can't understand what the hype is all about. I've been playing the piano for 11 years now and I can't see myself or any classical pianist using this product. I'm aware that this is most likely not intended for classical musicians but still, I fail to see the reason to change the design of an instrument that's been around for a very long time in one form or another. Change for the sake of change is pointless.
I've been playing piano for... over two decades now, and additional ways of modulating synthesized sound are welcomed with open arms. There are songs I play where I wish so hard that I had polyphonic aftertouch on my keyboard, but alas, it has monophonic aftertouch only. I play a lot of classical, and I even want modulation there.
Your concerns... they are carbon copies of the same complaints people had about the introduction of the piano in the early 18th century!
Nowadays, the idea that you'd play "The Well-Tempered Clavier" on anything BUT a piano relegates you to a niche in classical (or rather baroque) music, despite the fact that the songs were written for the harpsichord. The assumption that the piano will be how we play Beethoven 50 years from now -- well, I'm sure the piano will still be alive and well in 2063...
Instruments come and go, it's the music that lives on.
The jump from the harpsichord to the piano was a huge one, but the key profile didn't change as much as this. Integrating new technologies with instruments is all fine and dandy, but we're now fabricating the sounds with computers. We're changing the way these instruments function and we're changing the way we interact with them.
My concerns may be similar to those of 18th-century people, but the changes that we're experiencing now are not similar to the changes they experienced. We don't have the technology to replicate the acoustic sound of a piano. And I quite honestly don't see this being used to perform classical music. I'm not talking about all the stuff (mind you, I'm hesitant to call it music) that's being "composed" these days, I'm talking about the music up to the 1950s.
I'd like to touch on another aspect of your post: you say that you want modulation and polyphonic aftertouch when you play the piano, and you say that it's the music that lives on. For classical music, the music is the composer's; s/he composed it with the limitations of his/her era, and re-interpreting it with new technologies, in ways they didn't even imagine, is not making their music live on as far as I'm concerned.
Basically my point is that, considering that I only play classical music, I don't see a use for this. It's good to read about it but I don't think that this will ever be used for classical music performance. And no, I don't mean the odd youtube videos here and there, I mean used for performance by concert pianists.
I believe I'm entitled to my opinion about this. It's a cool piece of tech but it's just that. The fact that Jordan Rudess from DT endorses this doesn't mean anything to me. He's not a classical music performer (although he has been educated as one) and this may be good for his uses. I'll be amazed if Martha Argerich or Maurizio Pollini say that they will use this product.
And just a little note, and I know this can sound like I'm attacking you but I'm not, I'm just trying to share a bit of information. The pieces in The Well-Tempered Clavier are not "songs" per se, they are individual pieces. Song is another form in classical music and employs the use of human voice.
I'm not a musician, and I don't like going around the internet calling people names, but I'm sorry, you come off as stodgy. From reading your comments and watching a couple of Jordan Rudess videos I'm pretty sure that I'd rather watch him playing than you. Just saying.
That was far from my intent to be honest. Between me and Jordan Rudess, I'd rather watch him as well. But between Jordan Rudess and Martha Argerich I'd watch her playing. This is just another form of labelling stuff. Just because a well known and talented performer is backing something doesn't mean it's going to be useful. Then again my comment was only concerned with how this relates to classical music.
> For classical music, the music is the composer's, s/he composed the music with the limitations of his/her era and re-interpreting their music with new technologies in ways they didn't even imagine. This is not making their music live on as far as I'm concerned.
You're drawing a line in the sand, and saying that technological changes are okay for classical music as long as they don't cross that line, but I'm not sure you realize exactly where that line is drawn. Have you ever played Bach or Beethoven? You might be shocked to learn just how different the modern piano is from the devices that these composers worked with.
Bach composed within the limitations of the harpsichord: harpsichords lack modulation of timbre and volume, except perhaps with an una corda pedal or by use of a separate manual, both of which are extremely crude methods. It is neither practical nor desirable to emulate this on the piano: the piano is capable of dynamics, and so we play Bach's pieces by inventing dynamics for them. (I'm not going to discuss trills, talk to a musicologist if you like.)
Beethoven composed within the limitations of the piano, as it was around the year 1800. You might find such an instrument for sale somewhere, but I doubt it. The piano action has not changed, but the instrument has still evolved considerably from a musical standpoint. I am speaking, of course, of the sustain pedal. Sustain pedal technique is an essential part of classical pianists' training, but it is not historically accurate for classical pieces. Old pianos did not have nearly as much sustain as even cheap modern pianos, and it turns out that pianists in the day would just hold the sustain pedal down. Imagine what that would sound like on a modern piano: a muddy mess of notes.
Just as keyboard dynamics were not part of the music of Bach's era, sustain pedal technique was not part of the music of Beethoven's era. You'll find similar discrepancies with other instruments, such as the enormous difference between modern violin bows, which are of Italian descent, and baroque German violin bows.
Then there's the question, for some keyboard pieces, of what instrument they were actually written for. There are theories that certain organ pieces were actually clavichord pieces, for example.
Footnote: Yes, Beethoven and Bach composed for other instruments too.
> We don't have the technology to replicate the acoustic sound of a piano.
That's simply incorrect: the keyboard instruments are the easiest to replicate. Go listen to some samples from Synthogy's website, for example. The problem of "how do we make a computer sound like a piano" has been solved for quite some time now.
As a matter of fact I have indeed played Bach and Beethoven among others. Currently, I'm working on the 21st piano sonata of Beethoven for a performance. The example you have given is very accurate, in the 14th piano sonata, Moonlight Sonata as it's affectionately known, Beethoven instructs the performer to hold down the pedal for the whole duration of the first movement. That simply won't work on modern pianos. What do we do now? We try to replicate the sounds Beethoven himself would have gotten from his own piano. There are books written about pedal techniques.
And I worded that wrong. Technology is and should be a part of classical music performances. I just don't see the relevance of this product from a classical music standpoint.
And I did look at Synthogy's website. They have a good product, but if you're saying that that product replicates the sound of a real grand piano, we have to agree to disagree. They have solved some good problems, like half-pedaling. Harmonic resonance modeling is impressive. But I can't say that these replicate the sound of a true acoustic 100%.
Small note: Dynamics were part of Bach's era. Bach himself was a very talented organ player and there are dynamics in organs. Piano is a descendant of harpsichord, true, but it's also a descendant of organ.
And as I said in my other comments, this is getting pretty off-topic and I don't want to derail the thread. I'll be more than happy to discuss this with you, klodolph, through mail or whatever.
I'm mainly a guitar player but spend most of my time on piano lately, and this instrument reminds me a bit of both. You can slide and vibrate like a guitar, but with the clear musical vision of piano.
OMG, are you taking the piss with this comment? As if "classical" music, whatever that is, is a perfected form, the terminal point of all musical development. Ironically, it is terminal, as in culturally fading, passed by by the unstoppable rivers of human creativity.
The current musical trends still have a lot of ground to cover to catch up with that "terminal point of all musical development". I don't consider classical music to be the terminal point of all musical development by the way. Those were your words, not mine. You could say that classical music is getting less culturally relevant but you could say that about sculpture or painting.
I understand how my comment has been misunderstood and I didn't want to say that this was useless, just that it was useless for classical music. But your comment is by far the most... hmm... interesting so far.
This is getting out of hand though. If all of you guys want to discuss and throw shit at me and try to convince me that this is the best thing ever, go ahead and create a new submission about technology and classical music or whatever. I have no wish to derail the submission.
It has already been changed. Pianos don't have mod wheels, for example. I could list a hundred or so other changes but the good ones are mostly beneath the surface. But these things aren't really pianos, they just use a piano keyboard.
I agree with the poster who said "this isn't for you." Then again, it's not for many people. Electronic instruments have only changed the basic design slightly because they are tied down by a short-sighted standard. It's why mod wheels have been around forever but these things haven't. See the post at the top about the MIDI spec and such.
And just as a closing note for my own comment, I'm amazed by how hostile people can get over a comment. The fact that I don't see this used for classical music (the music that I enjoy and perform) should be free to express here. What should I have said? Oh great, cool stuff, this is the future of music? I just don't see it that way. It will be useful to some, and will have no effect on others.
When I wrote the first comment, I couldn't understand what the hype was about this and now, on top of that, I can't understand the way people acted over the comment.
> Seen this one a couple days ago and I honestly can't understand what the hype is all about.
> Change for the sake of change is pointless.
You are saying MUCH MORE than just "I don't see this used for classical music." You were giving an actual criticism to a product for which you are not the target audience. This sort of criticism makes zero sense, which is why I recommended (more gently than others I might add) to just move along.
The thing that's really great about all our acoustic musical instruments is that when you push on them, they push back. I don't mean this simply in the normal force sense. Consider a guitar. You've got a string which you press over a fret with your left hand. This can be naively emulated by a switch. But with the guitar, the timbre and pitch change depending on how you fret the note, on the finger pressure and position and motion. With your other hand, you might be plucking the string in any of a number of different places, with your finger or with a pick. The sound is affected by your attack angle, how hard you pick, the pick's composition, and so on.
A guitar string is clearly a complicated system. There are lots of variables at play. But more importantly, it's a coherent system. It makes sense to us as a physical object that can be manipulated. When you pluck the string, you can feel it vibrate in your fretting hand. When you bend the string, its tension increases. If you amplify it, you get the sense that you are physically touching the sound.
(This is incidentally, why audio latency absolutely KILLS when doing amp simulation)
The experience playing a wind instrument is similar. While a saxophone may appear to be something you blow into that has keys, things are really far more complicated than that.
Keyboard-based instruments are a little different. Unlike most any symphonic instrument, the piano actually has relatively few parameters per key. There's note velocity... and that's about it. The various sustain pedals also apply. The piano's design trades single note expressivity for the ability to play ten of them at once.
It should be noted that computer synthesis (procedural or sampled) of keyboard-based instruments is very convincing. The same cannot be said for any other instrument.
Now, what about new kinds of control systems? Most of them tend to fall into two categories. One tries to improve on the piano harmonically, by coming up with a better arrangement of where the notes go. Here's an overview of some: http://sequence15.blogspot.jp/2010/03/alternative-keyboards..... They try to fix the fact that it's hard to play in different keys on a piano. Whereas on the guitar you can learn a single scale or chord and move it up and down the neck to transpose, things change radically on a piano keyboard.
The second category is those like the Seaboard, which try to add new dimensions of control to a regular piano keyboard. Another example is the Continuum (http://www.hakenaudio.com/Continuum/). It's very common now to have both velocity and continuous pressure sensitivity (aftertouch) on a regular keyboard, as well as various side controllers for dealing with pitch or an abstract "modulation" parameter.
These controllers nearly always buy into the separation of control from synthesis. It makes perfect technical sense. But most of the instruments we would consider to be "expressive" don't work that way! In fact piano-style instruments are pretty much the only ones that do.
Which leads me to my point: a control mechanism should be considered together with the instrument it controls. It's fantastic that this new keyboard has all these new dimensions that you can map to sound, but what is it REALLY good for? What is the instrument that wants to be controlled in this way? The spiffy new control surfaces nearly always leave this problem unsolved and thus remain little more than novelty items.
The classical pipe organ, often referred to as "the king of instruments", is a synthesiser. The organ console is an electrical or pneumatic controller, with no direct connection to the pipes. Most large organs have considerable "latency", due to the distance from the console to the pipe room. Some pipes may be as much as a hundred feet away from the player, so the sound will take nearly 90ms to reach them - an order of magnitude more delay than a modern computer system.
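As a sanity check on the distances involved, here's a rough sketch of that acoustic delay (assuming sound travels at ~343 m/s in room-temperature air):

```python
# Acoustic delay from a distant organ pipe back to the player's ears.
SPEED_OF_SOUND_M_S = 343.0  # approximate, in air at room temperature
FEET_TO_METRES = 0.3048

def acoustic_delay_ms(distance_feet):
    """Time (ms) for sound to travel the given distance in air."""
    return distance_feet * FEET_TO_METRES / SPEED_OF_SOUND_M_S * 1000

delay = acoustic_delay_ms(100)  # roughly 89 ms for a 100-foot pipe
```

Organists learn to play "ahead" of what they hear, which is a useful reminder that latency alone doesn't preclude expressive performance.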
The organ is still regarded as a highly expressive instrument, in spite of the relatively modest control a player has beyond simple pitch and duration. Although most organs have many stops which provide similar timbres to existing instruments, it is understood that the organ is an instrument in its own right and should be played as such, and is not merely a tool for emulating other instruments.
The idea that an electronic instrument should imitate acoustic instruments is simply a poverty of imagination. Electronic instruments, used in a manner that is sympathetic to their natural properties, can be absolutely as expressive as any acoustic instrument. The theremin has perhaps the worst user-interface of any musical instrument as the player has no physical contact with the instrument whatsoever, but is utterly beautiful when played by a master.
The challenge for electronic musicians is that they are often both performer and instrument-maker. A monosynth of any quality can be configured in a near-infinite variety of ways, many of which were completely unforeseen by the designer. Musicians working with modular systems or DSP programming environments have a blank canvas. We do not as yet have a good theoretical framework for this task, but electronic music is extremely young - no more than ninety years old at most. We are only just beginning to scratch the surface of what is possible.
I don't think your argument is inconsistent with the argument you're countering. There is room for both emulation-enhancement of analogue instruments - a very useful function, and one which the makers of this seaboard seem to be leaning towards - and the creation of brand new instruments through synth (and new interface) technology. Indeed, these two functions come closer together as we create new interface-noise style hybrids.
Additional points of interest which do not affect your actual argument, but which are good for clarification: the most responsive pipe organs use tracker action, which, being mechanical, allows much more control than the more "modern" but less responsive electropneumatic actions necessary for the very largest instruments. It makes the action from pressing the key to opening the pipe faster - much of the latency in the organ is often in the pneumatics - but of course cannot change the distance issues.
Also, attack makes a huge difference to the sound on a well-voiced instrument. Subtleties in duration are of course as vast on the organ as they are with the piano and other instruments.
You don't even need to go as far as organs to see latency.
Low-pitched stringed instruments such as fretless bass guitars and upright basses have latency due to the mechanics of the strings themselves; jazz bass players, for example, have to account for this when playing. Most good jazz bass players do it unconsciously.
Polyphonic reed instruments (e.g. harmonicas, melodicas, accordions) all have this issue as well. Reeds of significant mass (i.e. lower-pitched reeds) can take tenths of a second to sound (more on older instruments).
Also: if you can feel the string in your fretting hand vibrate on a fretted instrument, you're doing it wrong. (The string does not vibrate past the fret -- that's the point of frets! If it does, you're not pressing firmly enough and you get fret buzz. Either that or your finger's on the wrong side of the fret and you're muting the sound.)
"if you can feel the string in your fretting hand vibrate on a fretted instrument, you're doing it wrong."
Generally the entire body of an acoustic guitar vibrates with the sound of a plucked string - the fretting hand (being the only hand currently attached to the guitar) would likely feel the vibration carrying from the body, through the neck of the guitar. It's faint, but nevertheless perceivable.
I'd never thought of that, but yes, the lowest of the high-pressure reed pipes do sound later on the organ, and you do have to allow for that. And at the bottom of the cello. Well, remarkable the things I'd not consciously noticed in all these years of playing!
> The organ console is an electrical or pneumatic controller
When asked about the manual (keyboard) on an organ, many experienced organists will make fond noises about the mechanical action windchest. With this, the fingers get feedback from the windbox. Modern systems tend to control the wind supply through electronics, which means no feedback for the organist.
I've no criticism of your theremin example though.
I think the problem is that whenever a state-of-the-art synthesiser comes close to an expressive acoustic instrument, it lives in the uncanny valley more often than not (see some YouTube videos of the Eigenharp, for example). It will need a lot more sensors or better software to overcome that.
> When you pluck the string, you can feel it vibrate in your fretting hand.
When I depress the fingerboard of a Continuum I can feel it vibrate in my ears, which is quite enough feedback to manipulate a sound expressively. Sure, playing the guitar is a beautiful, unique, rich sensory experience (which I love), but it does not follow that the lack of the "guitar experience" leads to a lack of musicality. Each instrument has its own mode of interaction, from the guitar to the piano, and talented people seem to find ways to be expressive with all of them.
> These controllers nearly always buy into the separation of control from synthesis. It makes perfect technical sense. But most of the instruments we would consider to be "expressive" don't work that way!
So? Why does what already exists matter? There are plenty of people willing to experiment with a new input surface to find out what it's good for. You may not be one of them, but why do you need to be "suspicious?" These experimenters don't take away your ability to play a guitar.
> but what is it REALLY good for?
I can't imagine this being played on any other instrument:
> What is the instrument that wants to be controlled in this way?
You could also ask, "what is the music that wants to be made by a stringed instrument?" People have been exploring that question for thousands of years, and we're still finding out new answers. Electronic instruments are very, very new compared to that, and there's been comparatively very little time to learn about them. I say let's go wild and create myriad new instruments and find out what works.
Personally, I think the decoupling of input surfaces and sound generators is one of the all-time best developments in music. As a brass musician, it's nontrivial for me to produce the sound of an oboe. However, given a very expressive input surface, I can produce an extraordinary range of timbres without dedicating another decade to practicing each individual instrument.
> I can't imagine this being played on any other instrument
This is exactly my point! In the abstract, the Continuum doesn't have much to say, musically. Paired with this sound source and played by Jordan Rudess, it works. (Who also has something to do with this new company, it seems.)
Pat Metheny's approach to the guitar synth speaks to this, I think:
> Unlike many guitar synth users, Metheny limits himself to a very small number of sounds. In interviews, he has argued that each of the timbres achievable through guitar synthesis should be treated as a separate instrument, and that he has tried to master each of these "instruments" instead of using it for incidental color. One of the "patches" that Pat used often is on Roland's JV-80 "Vintage Synth" expansion card, titled "Pat's GR-300". [1]
I'm not trying to argue against new ways of controlling sound. However, I do think we should ask far more of the makers than "think of the possibilities." Part of the control surface design process should be to think deeply about and experiment with the way the additional dimensions can be used, and the fruits of that process should be passed on to the person who buys it. (The Continuum may be a good example of this, as it looks like they ship it with an internal synth.)
> You may not be one of them, but why do you need to be "suspicious?" These experimenters don't take away your ability to play a guitar.
Point taken, I'm probably not one of them. But I used to be, and there's a lot of snake oil out there. My experience with newfangled instruments is as follows:
- Korg Padkontrol: This was my only real contact with an MPC-style interface. I wanted to use it to sequence drums in real time, and it was decidedly mediocre for that. When using it for other things, part of the musical task turned to the configuration of the controller. It's creativity, but a different kind to be sure. It blurs the line between performance and composition.
- Zendrum: Seeing that people were able to play live on these things somewhat convincingly led me to try it. It never really clicked for me. I spent too much time configuring and never enough actually practicing the instrument. There are many reasons this could be my fault, not the least of which is that I'm not a drummer.
- Chapman Stick knockoff: I was never able to get beyond just piddling around on this thing. My imagination was captured by a video I saw on the web at some point, and I guess the ad copy closed the sale. But some weeks after I got it, I was left with a distinct feeling of "now what?" It was an instrument without any useful context. I've been told that the real thing is far more compelling than the knockoffs; perhaps I'll try one of those some day.
This is far more likely a commentary on myself than on these three instruments. For me, searching for the perfect instrument was something like creating a new programming language before writing your program. It's a never-ending task that inevitably fizzles out. I have since been better served by my Telecaster.
How about a synthesizer? I can immediately think of many useful ways to use a synthesizer with this controller. For example, touch pressure to filter cut-off or LFO amount. Finger vibrato to OSC pitch, OSC detune, filter, and so on.
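As a sketch of what such routing might look like in software (all parameter names and ranges here are invented for illustration, not taken from any particular synth):

```python
# Hypothetical mapping layer from control-surface readings to synth
# parameters. Inputs are normalised 0..1; outputs are illustrative ranges.

def scale(value, lo, hi):
    """Map a normalised 0..1 control value into a parameter range."""
    return lo + value * (hi - lo)

def map_controls(pressure, vibrato):
    """pressure and vibrato are 0..1 readings from the control surface."""
    return {
        "filter_cutoff_hz": scale(pressure, 200.0, 8000.0),
        "lfo_amount":       scale(pressure, 0.0, 0.5),
        "osc_detune_cents": scale(vibrato, 0.0, 25.0),
    }

# Half pressure opens the filter halfway; light finger vibrato adds detune.
params = map_controls(pressure=0.5, vibrato=0.2)
```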
I think the video does a reasonable job of showing how it can be used creatively. I get the point you're trying to make about harmony between the input device and the sound source. However, I would argue that these days, with really incredible synthesizers both software and hardware (I own one of these: http://www.studioelectronics.com/products/synths/omega8/), the inverse is more common.
Input devices do a poor job of exposing the features of the instruments.
This is the first device I've seen that seems to actually care about the performability and feel without making you look like a douchebag. That's what makes it interesting to me.
There's actually a recurring pattern for anyone who cares to notice. Most new instruments are developed to emulate something else and they do it badly. Pipe organs were developed to emulate choirs. Now we love pipe organs precisely because of their limitations. They have a "distinctive sound."
The same thing happened with the Fender Rhodes piano, the Mellotron, the Moog, the Fairlight Synthesizer, and (believe it or not) MIDI. They started as cheap substitutes (masked by a "wow" factor), then people bemoaned their limitations relative to what they emulated and ditched them, then they were resurrected for the uniqueness of their sound.
Instruments are defined by their limitations. No matter how much we claim to hate those limitations, we end up loving them for them.
Up to a point. Most of your argument I love, but the pipe organ was developed to make sounds and music of its own, and particularly to accompany choirs, not emulate them. It wasn't really until the Victorian era that the orchestral organ was a thing, with stops deliberately created to emulate and even replace orchestral sounds (some fairly realistically, especially with the clarinet and flute families, for obvious reasons). Pipe organs are excellent at both analogue simulation and their own sounds and performance! And no two of them are the same, most remarkably.
A refinement on your point: music does not exist independent of sound. A musician is responsible for creating that sound, whether the expressiveness of the instrument is great or limited. A mistake (IMO) I hear a lot is a musician playing an electronic keyboard as if it were a piano, ignoring the fact that the sound they are generating is bad, simply because they are actually ignoring the sound - just pushing the keys down as if they were playing a piano. For an electronic keyboard, the instrument is the control interface, sound generator, amplifier, and resonating body (speaker). Anything that makes sound can be used to make wonderful music, if there is a true listening process.
> It should be noted that computer synthesis (procedural or sampled) of keyboard-based instruments is very convincing. The same cannot be said for any other instrument.
I agree that keyboard instruments are the most convincing, and certainly the easiest virtual instruments to play (since playing a digital piano is the same as playing an acoustic piano), but there are some pretty expressive and convincing virtual instruments out there:
Those are samplers, not synthesizers. (Yes, technically they are synths, but not what's usually meant by the term.) They're just many many GB of samples of real musicians playing real instruments.
While there probably aren't any existing acoustic instruments that would benefit from being played on this keyboard, there is tons of potential for interesting synthesized sounds that can take advantage of both the range of a keyboard and the expressiveness of an analog instrument.
The reason their product hasn't been dominating the market for the past 20 years is not because they were the first to dream it up (they weren't, though there appear to be some incremental improvements here), it's because you can't get it to talk to the other $50k+ worth of equipment in your studio.
That doesn't add up, unless you're counting everyone with a trial version of fruity loops on their system as having a studio. But even then, I think it's a stretch (everyone I know who does any kind of music production has at least $1k in non-computer hardware.)
How exactly do you expect to get this to work with your existing tools? It's obviously not usefully MIDI-compatible.
Sure, but when you depend on synthesis, compatibility is a big deal. History seems to support that, with respect to other novel electronic instruments. I was just now responding to a claim that this will work with existing synthesizers.
While I would like to have this kind of control, I would much rather have the flexibility that a standard MIDI keyboard with aftertouch affords with respect to synthesis.
Which is utterly useless in a 1:1 channel:instrument configuration. Most of my MIDI hardware expects to listen on a single channel, as does most of the hardware I've come across.
My plugin/VST host doesn't know about this either, so I would need to assign 10 different MIDI channels and all 10 CCs every time I want to change instruments. And for those instruments without a per-note bend, I would need to create 10 instances as well.
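A minimal sketch of that channel-per-note workaround (hypothetical message tuples rather than a real MIDI library): each new note claims its own channel, so pitch bend and CC messages sent on that channel affect only that note.

```python
# Sketch of channel rotation for per-note pitch bend over plain MIDI.
# Messages are represented as simple tuples for illustration only.

class ChannelRotator:
    def __init__(self, channels=range(1, 11)):  # e.g. 10 member channels
        self.free = list(channels)
        self.assigned = {}  # MIDI note number -> channel

    def note_on(self, note):
        ch = self.free.pop(0)          # claim the next free channel
        self.assigned[note] = ch
        return ("note_on", ch, note)

    def bend(self, note, amount):
        # amount is a 14-bit pitch-bend value; 8192 means no bend
        return ("pitch_bend", self.assigned[note], amount)

    def note_off(self, note):
        ch = self.assigned.pop(note)   # release the channel for reuse
        self.free.append(ch)
        return ("note_off", ch, note)

r = ChannelRotator()
r.note_on(60)            # middle C lands on channel 1
r.note_on(64)            # E lands on channel 2
msg = r.bend(60, 9000)   # bends only the C, leaving the E untouched
```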
I feel like most of the stuff that surrounds us was created THIS way because we just didn't have the technology to make it better yet. That's what I see with those new pianos, and also with the 48fps movies https://www.facebook.com/notes/peter-jackson/48-frames-per-s...
As far as interesting musical interfaces go, the Axis 64 is the most amazing thing I've ever seen from a conceptual standpoint. It's like someone just laid down the shape of all of these relationships that make music work and put it into something your hands can push on. What's weird is that even though it's just an /interface/, it makes the /concepts/ easy to grasp, manipulate, etc.
http://madronalabs.com/ makes something called the Soundplane - it feels like a present someone sent me from the future.
So nice to touch. A high-resolution, super-responsive controller... of wood! Maybe I'm just not enough of a cyborg, but compared to the wood surface, the Haken Continuum (wetsuit material) is far less compelling. I worry about the same thing with the Seaboard.
I'm not musical – I can't play an instrument and I don't analyse the music I listen to. But I like listening to music, and to me, this sounds worse than a normal keyboard/piano. It looks (and sounds) really difficult to produce a precise note with, so I wonder if it will take off at all.
The cached version worked but the site kept giving me 403 errors.
Unfortunately I know so little of music/keyboards that I couldn't really tell the difference between this and any other keyboard.