
Apple iOS 4.2 to Support MIDI - J3L2404
http://www.tuaw.com/2010/11/05/ios-4-2-to-support-midi/
======
travisjeffery
The only difference, it seems, is that there will be MIDI support through a
USB connection; MIDI support already exists in tons of apps over WiFi and,
unlike the article says, is being used very seriously by quite a few top DJs I
know of: Deadmau5, Michael Woods, Laidback Luke, and Richie Hawtin at least,
the last of whom actually got involved in writing Griid.

So it's already very serious. Griid, BeatMaker, MidiPad, etc. do exactly what
the author of the article says he wants: they're basically a grid full of
clips that you can quickly launch from the iPad, and most of them let you
switch the interface from clip launching to controlling effects and other
parameters of your DAW over MIDI. Not only that, but some, like TouchOSC,
also support OSC, which is light years better than MIDI.

One good thing at least will be the reliability of having a hard connection
rather than sending your MIDI over WIFI.

In terms of professional use you're still going to need at least a sound card
and a mixer as well, so the computer is not going away. So all in all, it's
not really as exciting as they're making it out to be.

~~~
rayboyd
With TouchOSC you have to use a bridge to convert the OSC messages sent over
the network to MIDI for the DAW to pick up. (I use OSCulator with Ableton).

Two-way messaging can be a pain: it can be laggy, and mapping controls is
time-consuming and unintuitive compared to the plug-and-play nature of a
normal hardware MIDI controller. This should alleviate that by removing the
bridge and the need for OSC, and adding the ability to send MIDI directly
over USB to the DAW.
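
For anyone curious what the bridge actually has to translate, here's a
minimal sketch in plain Python (stdlib only; the address path and port are
made up for illustration) of the bytes an OSC client such as TouchOSC puts on
the wire, which OSCulator then has to decode and re-emit as MIDI:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address pattern,
    padded type-tag string (",f"), then a big-endian float32 argument."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# A fader move, roughly as a touch controller might send it:
packet = osc_message("/1/fader1", 0.75)

# The bridge listens on UDP, decodes this, and emits MIDI to the DAW:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("127.0.0.1", 8000))
```

Every field is padded to 4-byte alignment, which is why a 9-character address
plus a 2-character type tag plus one float comes out to a 20-byte packet.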

~~~
travisjeffery
Yeah I know how it works.

MIDI is just one of those things that you're glad happened, and it works, but
you really wish everyone would switch over to OSC all at once, as soon as
possible.

"This should alleviate that by removing the bridge, the need for OSC and the
ability to directly send MIDI over USB to the DAW."

But no, it won't: the TouchOSC and OSCulator guys chose to use OSC, and the
only way they're going to send MIDI directly is if they build it into the
app, which they could already do now using WiFi.

~~~
rayboyd
Can you clarify this, please? This is something I am reading up on at the
moment (the Max Python API, the OSC protocol, etc.).

"This should alleviate that by removing the bridge, the need for OSC and the
ability to directly send MIDI over USB to the DAW."

I was not referring directly to TouchOSC; I was using it as an example
beforehand. Currently, if I write my own controller, I need to implement the
OSC protocol to send messages over the network. These messages then need to
be converted to MIDI for the DAW to utilize. With direct MIDI support, I can
send MIDI straight out of my application over USB for the DAW to pick up. Is
that essentially correct?

EDIT. Just noticed your Soundcloud profile. Nice work.

~~~
travisjeffery
Yep, MIDI Out. A lot of controllers also sync with the DAW, so they have MIDI
In as well. I.e., say I have a fader on my controller mapped to channel A's
volume: if I change the volume on either the DAW or the controller, they sync
so that both display the correct volume level automatically. If I change the
volume through the controller, it sends MIDI Out to the DAW; if I change the
volume on the DAW directly, with my mouse say, the DAW sends MIDI In to my
controller.

Basically you're cutting out the need for a middle-man.
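
For concreteness, the message going each way is the same three bytes; a
sketch (CC 7 is the standard Channel Volume controller in the MIDI spec; the
channel and value here are just illustrative):

```python
def cc_message(channel: int, controller: int, value: int) -> bytes:
    """Build a 3-byte MIDI Control Change message.
    channel 0-15, controller 0-127, value 0-127."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# Fader mapped to channel A's volume (CC 7 = Channel Volume):
out_msg = cc_message(0, 7, 100)  # controller -> DAW (MIDI Out)
in_msg = cc_message(0, 7, 100)   # DAW -> controller (MIDI In), same format
```

Because both directions use the identical wire format, keeping the fader and
the on-screen volume in sync is just each side echoing the last value it saw.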

And thanks. :D

------
andybak
Damn shame that Android MIDI and audio support is lagging as then there would
be scope for companies to innovate with Android based specialist hardware.

------
elblanco
This is great: all of the modern iOS devices are powerful enough to run as
soft synths. Combined with a multitouch interface, there's no reason I can't
have a bank of 303s and 808s, with knobs and sliders and stuff I can move
around just as on a real piece of hardware. If I set up a few iPads, I can
interface with all of that stuff at the same time.

~~~
sudont
Isn't MIDI somewhat hacked for theater use, to coordinate stage events?

If so, interesting uses of soft controls to mix together banks of lights in
realtime, in addition to ambient sounds and other stage controls could result.

~~~
icegreentea
Yeah, most of them run off a version of MIDI.

Though to be honest, I can't really imagine this resulting in anything more
'interesting' than what we have so far. The 'hard' controls in use are pretty
optimized. Whatever clunkiness is there is kind of hard to get around, due to
the sheer level of complexity involved. I mean, what's the interface on the
iPad going to be? A bunch of sliders. What are the interfaces in real life?
Sliders. Granted, an iPad might be able to do more fun things than equipment
of comparable price, but top-of-the-line equipment (which has motors to move
the sliders/knobs to the presets you saved!) is pretty much as flexible and
powerful as it can be.

~~~
anigbrowl
_I mean, what's the interface on the iPad going to be? A bunch of sliders._

Sure, sliders are convenient. But there are accelerometers, image processors,
DSP on the microphone input...and multitouch lets you do some things that
don't make sense with a mouse, like pinching to alter the ratio of two two-
dimensional values at once (or more). The CPU in the phone or pad is fast
enough to do some fun things with complex numbers or whatever and spit out the
results via MIDI to a heavy duty DSP. That would be quite useful for
simulating vocal tracts or physical modeling things like woodwind reeds or
brass tonguing (the different sounds a trumpet player can make by how they put
their lips/tongue to the trumpet mouthpiece). There is gear to do this sort of
thing already, but most hardware has only multiple 1-dimensional controllers
or a joystick, effectively requiring a whole new set of playing techniques.
Smartphones make it easier to adapt the input device to the specifications of
the player rather than the other way around.

~~~
sudont
That's pretty clever. I hadn't thought about muxing sound and controls
together through DSP. The QuickTime layer in Cocoa would probably make it
fairly easy to use a video stream as data input.

Another thing the iPad has over a computer is simultaneity of interface
elements, i.e. two people working a dashboard, vs one on a computer.

------
zandorg
I once played a song at a gig in 2000 (amongst other songs), which was just me
pressing keys to trigger samples on my Psion Series 5 handheld. I had wired
the speaker to a 3.5mm socket to go directly to the mixer!

I'm sure it was a first.

~~~
illumin8
Probably not. I'd be willing to bet people were playing live performances on
Palm Pilots in the late 90s, or even Nintendo Gameboys years before that.

~~~
meatsock
computer music has been alive and well since the 50's and 60's =)

------
jbarham
Somewhat tangential, but Jaron Lanier has some interesting comments/criticisms
of MIDI in his book "You Are Not a Gadget". I'd quote him, but it would go
against the spirit of the book. ;) Well worth reading in full.

~~~
anigbrowl
I haven't read the book, but if my synthesis of 10 serious
commentaries/reviews is accurate then I think he is dead wrong about MIDI.

Crash course: MIDI is a serial protocol for encoding musical control signals
at 31,250 baud, for an _effective_ timing resolution of about 1 millisecond
per message. Most values are 7-bit, yielding 128 possible note or control
_values_ , freely distributable across 16 _channels_. There are 120 freely
assignable control _assignments_ ; the remaining 8 (CC 120-127) are reserved
for channel-mode and meta-control messages. Separate mechanisms allow private
realtime parameters with up to 14-bit resolution (RPN/NRPN), and
bidirectional non-realtime block data transfer (SysEx) for firmware updates
and configuration backup.
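
The roughly 1 ms figure falls straight out of the wire math (assuming the
standard 8-N-1 serial framing, i.e. 10 bits per byte on the wire):

```python
BAUD = 31250        # MIDI 1.0 serial rate, fixed by the spec
BITS_PER_BYTE = 10  # 8 data bits + start bit + stop bit

byte_time = BITS_PER_BYTE / BAUD  # seconds per byte on the wire
msg_time = 3 * byte_time          # a Note On or Control Change is 3 bytes

print(f"{byte_time * 1e6:.0f} us/byte, {msg_time * 1e3:.2f} ms per message")
# -> 320 us/byte, 0.96 ms per 3-byte message
```

So even with no tricks like running status, a full three-byte message clears
the wire in just under a millisecond.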

About 2/3 of the control assignments are standardized, and any MIDI device is
supposed to offer a 'general MIDI' configuration with standardized sounds
(trumpet, cello, clarinet etc.) controlled by standardized control assignments
(cc#7 is always Volume, cc#74 is brightness aka treble, etc.). With these you
can specify pretty much anything that could be written in standardized music
notation in much the same manner as a music box or player piano. It is
supposed to deliver a reasonable approximation of all the instruments in an
orchestra and allow control of the way they're actually played, giving you up
to 16 'virtual musicians' playing up to 24 notes at once. Of course, this is
_way_ too good to be true.

The hideous sounds found buried in the control panel or on Geocities-type web
pages are coming out of the cheap-ass General MIDI 'synthesizer' implemented
in windows and/or on your soundcard. Nobody in the real world uses this except
for those awful 'home keyboards' - usually cheap, although there are also
vastly overpriced versions aimed at rural churches and community centers,
where people who live far away from a music store learned to play on a cheap
Casio and want something that sounds better but works the same way - Bank 0,
program 47 will always give you a harp sound, changing controller # 74 will
always let you 'damp' the strings etc. This is a _huge_ market which
subsidizes the development of more interesting instruments. There is a
parallel sub-market devoted to painstaking digital reproductions of acoustic
and electric organs & pianos for more serious musicians, but which fit into a
car instead of a truck.

Now, General MIDI sounds awful for three reasons: poorly sampled source
sounds, crudely implemented modulation of same, and lazy transcription of
musical data. The sounds are typically short (50 ms) recordings originally
compressed to fit on a 16k EPROM. The modulations are usually bad
approximations of inappropriately linear DSP transforms designed to run on
some 8-bit uController. The transcriptions are generally bare minimum with no
attention paid to articulation or the ebb and flow of musical performance.
After all, it's going to sound pretty awful no matter how much you fine tune
the control data, so why polish a turd?

In fairness, MIDI has been around as a protocol since 1983 and General MIDI
was defined in 1991. The specification came before there were any commercial
products capable of implementing it, hence the cut corners described above in
a rush to look standards-compliant. Although the technology of audio
reproduction has improved enormously and allows effectively perfect fidelity
since then, the 'bare minimum' ethic required for standards compliance has
never gone away and you can buy quite expensive keyboards that make awful-
sounding GM 'music,' identical to that of 10+ years ago. It is this ugliness
and sterility that Lanier objects to: far from being a highest common factor,
General MIDI instead became a lowest common denominator, and is only ever
heard cranking out robotic versions of jaded pop hits.

Where Lanier goes wrong is in assuming that this is a limitation of the
_protocol_. And with only 7 bits for any given musical gesture, it _is_ quite
limited. But the main reason 'MIDI sounds so bad' is the ubiquity of the
General MIDI _standard_. You might as well say a guitar has 21 frets and 6
strings, so it can't produce more than 126 sounds, or that your computer
keyboard's 104 keys impose an unacceptable limitation on what you can write.

The reality is that while you'll never get a pleasing rendition of 16
musicians playing 24 notes from a single MIDI device, You can get
astonishingly good renditions of a single musician's performance. Pretty much
any real-world instrument can be beautifully synthesized and/or sampled now,
and the audible changes from musical modulation accurately reproduced. Though
they may use a grand piano on stage, most pro musicians will (secretly) admit
they are perfectly happy with the digital version they have at home, and its
multi-gigabyte sample library of conservatory pianos that are not for sale at
any price. The latency and timing limitations of a 31,250-baud serial
protocol are so minute that they are below the threshold of perception for
most people; they
are orders of magnitude below the fastest muscular repetition rates (note that
the much higher sensitivity of acoustic discrimination is for audio rather
than control signals).

Latency and inconsistencies can be further reduced in importance both
technically (on-the-fly reserialization yields ~100 uSec accuracy) and
through musically aware error correction: for example, quickly strumming a
guitar
necessarily limits the variations in force with which each string is plucked,
such that the 7-bit data for this parameter can be scaled to an appropriate
range to yield a much finer expressive resolution than if it were limited to
absolute values. More importantly, a quality electronic instrument allows
incoming MIDI parameter data to be freely filtered, assigned and scaled to DSP
parametric data running at audio speeds, from where the sky is pretty much the
limit, whether your goal is the accurate reproduction of natural sounds or the
creation of entirely new expressive possibilities. The main barrier to more
musical expressivity is not the protocol but the physical hardware of piano
keys, knobs, pedals and so on; most devices require everything to be done via
the fingers or tapping one's foot, and the market for more exotic sensors is
small
enough that they've been expensive. So MIDI on one's phone offers various new
(-ly affordable) possibilities, such as waving it through the air like a
conductor's baton and sending the orientation and acceleration data out via
MIDI, or using the camera to track changes and reparameterizing them as MIDI.
This is not to say that acoustic instruments and musicianship are any less
interesting than they ever were, but to dispute the idea that music made with
(or even by) computers is necessarily mechanical or musically restricted.
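
The range-scaling trick described above is essentially a one-liner; a sketch
with invented ranges, just to show where the resolution gain comes from:

```python
def scale_cc(value: int, lo: float, hi: float) -> float:
    """Map a 7-bit MIDI value (0-127) onto an arbitrary parameter range.
    Narrowing the range raises effective resolution: the same 128 steps
    cover a small window instead of the parameter's full sweep."""
    return lo + (value / 127.0) * (hi - lo)

# Full sweep: 128 steps over 0..1  -> step size ~0.0079
# If strumming force realistically spans only 0.6..0.9,
# the same 7 bits give a step size of ~0.0024:
full = scale_cc(64, 0.0, 1.0)
narrow = scale_cc(64, 0.6, 0.9)
```

The instrument's DSP side would then smooth or interpolate these stepped
values at audio rate, which is the "sky's the limit" part above.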

Now there are other standards for digital notation and reproduction, from
CSound (infinite accuracy, utterly unmusical) to OSC (Open Sound Control -
great system, but more for people who like synthesizer construction than for
performance, and hence ignored by most instrument manufacturers). But the
expressive possibilities of MIDI are so great that nearly 30 years after its
inception it remains the standard protocol. If you don't need things like the
'standard library' of General MIDI - and most serious musicians don't - then
version 1.0 of the standard is still the norm, and considered perfectly
acceptable even on very expensive gear (indeed, the more expensive the more
likely it is to have the base protocol implementation than any
'improvements').

Incidentally, JL is also an expert on ancient instruments, and skilled in
playing them. In this video, he introduces (among other things) the _Khaen_ ,
an organ-like instrument from Laos and provides a neat explanation of why
organs are the direct intellectual forebears of computers:
[http://www.youtube.com/watch?v=XW1BBbvrEYA&feature=relat...](http://www.youtube.com/watch?v=XW1BBbvrEYA&feature=related)

I imagine Don Knuth's fetish for pipe organs is motivated by a similar
perception.

------
edkennedy
A victory for musicians and visualists alike. Musicians have been using touch
screens ever since the several-thousand-dollar Lemur[1]; now access will be
available to everyone with an iPhone.

[1]<http://www.jazzmutant.com/lemur_overview.php>

------
tibbon
I'm just excited to have a nice MIDI 16-step sequencer on it. Too bad I'll
still need a MIDI-to-CV converter for most modular stuff.

~~~
anigbrowl
You're kinda stuck there, because the capacitors on the audio output of your
phone/soundcard/digital synth filter out the DC that your CV inputs want. I
know people who have removed the caps on keyboard outputs and gotten working
audio -> CV, but it's pretty drastic unless you're very electronics-savvy.

One cheap workaround you might want to investigate, depending on what analog
modules you have, is to send a fixed, high-frequency oscillator out from your
digital gear to the input of an analog envelope follower, and modulate the
digital oscillator with a digital LFO (using MIDI to play with the LFO
frequency). This is likely to yield 'interesting' sounds but unless you have
precision gear and can do log-linear scaling (e.g. frequency -> pitch) it'll be hard
to do anything useful.
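
The log-linear scaling in question is just equal-temperament math against the
1V/octave convention; a sketch (the 0 V reference note is a convention that
varies between modular systems, C4 = 0 V here):

```python
def note_to_freq(note: int) -> float:
    """Equal-tempered frequency for a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def note_to_cv(note: int, zero_volt_note: int = 60) -> float:
    """Volts for a 1V/octave CV input; the reference note is a convention,
    not a standard."""
    return (note - zero_volt_note) / 12.0

# Frequency doubles per octave while CV rises linearly, 1 V per octave:
# note 69 (A4) -> 440.0 Hz, 0.75 V
# note 81 (A5) -> 880.0 Hz, 1.75 V
```

Frequency is exponential in pitch while CV is linear in pitch, which is
exactly why a naive envelope-follower hack needs that log-linear conversion
stage before it tracks anything musically.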

PAIA has a fairly cheap converter in module rack form factor, just in case you
didn't know: <http://www.paia.com/midi2cv.asp>

~~~
meatsock
a less cheap workaround is <http://www.motu.com/products/software/volta>

it'll work with anything with DC-coupled TRS outputs, i.e. any decent quality
soundcard.

very glad to see more music tech threads on HN. i think the iphone is gonna
revolutionize computer music, the problem from the beginning has been the
interface -- WIMP doesn't make a lot of sense for the way music works in the
brain, imo

------
smackfu
Just like a real computer!

------
derefr
Warning, completely tangential: I misread this headline as "OSX 10.7 will
support MDI." Oddly, that makes about as much sense, given 10.7's fullscreen-
app focus, since there haven't been any indications of what happens if you try
to fullscreen an app with palette windows.

~~~
sudont
I believe that fullscreen mode is a separate view that the application can use
if the developer creates one.

And the palette windows will most likely be in the style of iPhoto, so Apple
will most likely publish classes for these, a la BWToolkit:
<http://brandonwalkin.com/blog/images/transparent3.png>

