Student Builds Daphne Oram’s Unfinished ‘Mini-Oramics’ (gold.ac.uk)
66 points by teh_klev on June 29, 2016 | 22 comments



Tom Richards, who built the machine, has a very short blog here with a couple of bits of extra info:

http://minioramics.blogspot.co.uk/

Unfortunately, there's not much info about how the lines on the film map to the synthesizer controls - I found it very odd to watch the film go by, seeing busy sections with short discrete notes, sections with long interesting curves, and almost empty sections, but hearing no discernible difference in the audio.


I think the disconnect between the audio and the visuals is amplified by the fact that the read head(s) sit a couple of inches in from the left edge, under the cover, so there's always a perceived delay between where you're looking and what you hear. It's worse if, for example, you're watching the right edge of the view while the machine is playing material a couple of inches past the left, and less so if you can project the last "bar" beyond the left edge in your mind.

This device would be better I think if the read head could be moved directly to the left edge or even perhaps placed in the center of the view. But I have a feeling it needs to be in darkness to read best.

Interesting idea nonetheless.


The Radio 3 documentary on Daphne Oram, "Wee Have Also Sound-Houses":

http://www.bbc.co.uk/programmes/b00ct1y1

- is, predictably, not available to listen to on their site, but you can find it on YouTube:

https://www.youtube.com/watch?v=NNaqvAH7R34


Another documentary that anyone interested in this period of the Radiophonic Workshop might like is one on a colleague of Oram's: Delia Derbyshire, famous for her electronic realisation of the Doctor Who theme.

https://www.youtube.com/watch?v=nXnmSgaeGAI


The music which it plays in the demo video is a meaningless mix of scribbled sounds.

Whatever the musical potential of the instrument might be, it is not shown in the video.


Well, it was designed in the 1970s, at a time when experimental musicians were thinking about what came next after the sort of post-Dada deconstruction of music, and how computers might fit into that.

This is roughly the time when Laurie Anderson has analog tape affixed to the bow of her violin and is bowing the tape, and is just about to go bat-shit crazy with MIDI for things like "O Superman" in the early 1980s.

So, drawing is cool. Avant-garde musical experimentation is cool. It may not have a beat, but it's possible you could dance to it, with the right perspective. :)


I'm pretty sure that was some Schoenberg actually.


I respect if others discover meaning in this music. To me it sounds like a child's scribble.


I often find children's art much more unaffected, creative, and interesting than that of most adult artists. In fact, so do some adult artists, who admire and even seek to emulate it themselves.


I almost heard Flight of the Bumblebee at one point (0:50 in the video). I think something like this could be useful as a music transcribing device -- listen to a piece and move your pen up/down as the pitch rises/falls while the medium scrolls by. Then rewind and play.


Could someone explain to me what the creative advantage of this is? Particularly over a computer? I don't have a background in electronic music, so maybe I am missing something?


In the 60s/70s, composing on a computer wasn't the same experience as today. Incidentally, today it's still pretty frustrating.

The creative advantage is in being able to express pitch (and presumably generic control voltages) intuitively as continuous lines, which is a natural fit for analog synthesizers. The alternative is sending discrete events, which is how MIDI works - and MIDI wasn't invented until later. There is a school of thought that to limit synthesizers to the discrete world of "notes", with quantised durations and pitch values as a hangover from our ideas of keyboard instruments, is to waste their potential.
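
To make the continuous-vs-discrete point concrete, here's a minimal sketch in Python (my own illustration, nothing to do with the actual Oramics hardware): the same melodic gesture rendered once from a drawn, continuous pitch curve and once after being forced onto a semitone grid, the way a note-based sequencer sees it.

    # Sketch only: one oscillator driven by a continuous pitch curve vs. the
    # same curve quantised to 12-TET notes. Assumes numpy; writing a WAV file
    # is left out for brevity.
    import numpy as np

    SR = 44100                                   # sample rate in Hz
    DUR = 2.0                                    # seconds
    t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)

    # A "drawn line": a slow sweep from A3 to A4 with a little vibrato.
    continuous_hz = 220.0 * 2.0 ** (t / DUR + 0.02 * np.sin(2 * np.pi * 5 * t))

    # The same gesture snapped to the nearest semitone, note-sequencer style.
    midi_notes = np.round(69 + 12 * np.log2(continuous_hz / 440.0))
    quantised_hz = 440.0 * 2.0 ** ((midi_notes - 69) / 12.0)

    def render(freq_curve):
        # Integrate instantaneous frequency to phase, then take a sine.
        phase = 2 * np.pi * np.cumsum(freq_curve) / SR
        return 0.5 * np.sin(phase)

    smooth = render(continuous_hz)   # glides and vibrato preserved
    stepped = render(quantised_hz)   # audible stair-steps at every semitone

The second version is what you get when everything has to pass through note events; the first is closer to what a drawn line or a ribbon controller gives you.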

Ribbon controllers were popular in that period for expressing continuous pitch, and I suspect the Oramics was designed as a way of recording these sorts of expressions, like a kind of analog punch card.

Also consider that the main compositional method of the radiophonic workshop at that time was splicing magnetic tape with razor blades and sellotape.


Thanks - to me, the article wasn't clear on whether the machine was built for historical reasons or whether it has modern uses.

I agree that it seems restrictive to limit synths to discrete notes (like playing guitar without bends), although I probably wouldn't use the word mellifluous to describe the sounds made in the video.


I agree that quantized pitches are a great waste of a synth's potential. William Sethares did some very interesting research on consonance perception, which can be used to construct microtonal tuning systems without the dissonance usually associated with them. Here's a good demonstration of what you can do with arbitrary pitches:

http://sethares.engr.wisc.edu/mp3s/three_ears.html

"As each new note sounds, its pitch (and that of all currently sounding notes) is adjusted microtonally (based on its spectrum) to maximize consonance. The adaptation causes interesting glides and microtonal pitch adjustments in a perceptually sensible fashion. "


Wow, that makes me feel seasick. Looking forward to playing it at the tail end of parties.


There are ideas where the concept was far ahead of the available technology. This is definitely one of those.

It's pretty trivial to do this kind of thing with automation curves in modern sequencers. In fact you can draw dozens of curves in parallel.

The sound you get depends on which softsynths you use and how flexible their modulation routing is.

Most people think of sequencers as discrete notes + modulation, not as curves or functions. So the good thing about this idea is that it can break people out of that way of working - even though there's no real need to turn this device into a commercial product now.

But generally "drawing" music doesn't work all that well. The information in music needs much finer control - either with conventional sequencing, or (more interestingly now) with generative code.
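
For what it's worth, "generative code" can mean something as small as this (a toy Python sketch, not tied to any DAW): a constrained random walk over a scale, emitted as note events you could pipe anywhere.

    # Toy generative sketch: random-walk melody over a minor pentatonic scale,
    # produced as (start_beat, midi_note, duration_beats) tuples.
    import random

    SCALE = [0, 3, 5, 7, 10]      # minor pentatonic, semitones above the root
    ROOT = 57                     # A3 as a MIDI note number

    def melody(bars=4, beats_per_bar=4, seed=1):
        rng = random.Random(seed)
        degree, beat, events = 0, 0.0, []
        while beat < bars * beats_per_bar:
            degree = max(0, min(2 * len(SCALE) - 1, degree + rng.choice([-2, -1, 1, 2])))
            octave, step = divmod(degree, len(SCALE))
            dur = rng.choice([0.5, 1.0, 1.0, 2.0])     # favour quarter notes
            events.append((beat, ROOT + 12 * octave + SCALE[step], dur))
            beat += dur
        return events

    for start, note, dur in melody():
        print(f"beat {start:4.1f}  note {note}  length {dur}")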


Those automation curves actually get sent to the synth as discrete events, with the rate depending on the output buffer length: there may be tens of milliseconds between updates. Plugins have to implement their own smoothing functions to get natural-sounding results. This is one good reason to run your DAW at a high sample rate.
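
To illustrate, here's the generic kind of de-zippering a plugin might do - a one-pole ramp from block-rate automation up to audio rate (an illustration only, not any particular plugin's code):

    # Block-rate automation (one value per buffer) smoothed to audio rate with
    # a one-pole lowpass - the usual way plugins de-zipper parameter changes.
    import numpy as np

    SR = 44100
    BLOCK = 512                  # one automation value per ~11.6 ms buffer

    def smooth_automation(block_values, time_ms=10.0):
        coeff = np.exp(-1.0 / (SR * time_ms / 1000.0))
        out = np.empty(len(block_values) * BLOCK)
        state = block_values[0]
        i = 0
        for target in block_values:              # stair-stepped input...
            for _ in range(BLOCK):
                state = target + coeff * (state - target)   # ...eased toward target
                out[i] = state
                i += 1
        return out

    # e.g. a filter-cutoff automation that jumps between values every 20 blocks:
    steps = np.repeat([200.0, 2000.0, 800.0], 20)
    cutoff_per_sample = smooth_automation(steps)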

The difference from the analog domain, where you can continuously modulate any parameter at audio rate or higher with zero latency, is obvious.

Of course, softsynths' own built-in modulation sources will be oversampled these days, along with everything else they do, which is why they sound pretty good now.

The Oramics machine is presumably doing some kind of filtering on the output of those photodiodes, or whatever it has, for similar reasons, but the point remains.


I agree. It did say that local artists were given "a few days" to play with the thing, which sounds frustrating.

Like all primitive synthesizer music, it will probably sound wicked with a generous helping of tape delay and spring reverb.


The original Oramics machine was designed in the early-to-mid 1960s.

Computers were something few people had.


Previous Daphne Oram on HN: https://news.ycombinator.com/item?id=10993961.


Someone made an iPhone app which simulates one of these machines, if anyone is interested in playing around with one:

https://itunes.apple.com/us/app/oramics/id454505541?mt=8


Sounds interesting, but the article doesn't show up for me.

Ad-block shows it's blocking some font files, but everything else is getting through. I hope it's not a new trend to not display anything if custom fonts don't load.



