
Oh wow. I am days away from finishing a PhD thesis on sketching sound and music [1], and Daphne Oram figures prominently in my background research. A friend of mine built an iPhone app that emulates Oram's system [2].

Another notable figure in the world of drawing music is Norman McLaren, who literally painted or drew directly on the soundtrack portion of his animated films. The soundtrack for his film Neighbours [3], which won an Academy Award in 1952, was produced this way. There's a short documentary on his process here [4].

On the more avant-garde side, Iannis Xenakis designed a computer-based system called the UPIC in the late 1970s, in which you could directly draw waveforms and then control their frequencies over time by drawing on a graphics tablet. He used it to compose his piece Mycenae Alpha [5].

This page [6] has a really great run-down of optical synthesis in general, which includes a number of individuals and systems involved in directly drawing sound and music. Some of the visual sound designs used in these systems are quite striking, for instance the variophone [7] or Yankovsky's painted soundtracks [8].

[1] Demo of my research software here: https://www.youtube.com/watch?v=Tdj5e82nPHQ

[2] https://itunes.apple.com/us/app/oramics/id454505541?mt=8

[3] https://www.youtube.com/watch?v=P-o9dYwro_Q

[4] https://www.youtube.com/watch?v=Q0vgZv_JWfM

[5] https://www.youtube.com/watch?v=yztoaNakKok

[6] http://www.umatic.nl/tonewheels_historical.html

[7] http://www.umatic.nl/tonewheels/historical/vario3.jpg

[8] http://www.umatic.nl/tonewheels/historical/painted_soundtrac...




In the McLaren video you linked, the uploader edited out his music and put in some other track. Here's the link to his film with the original score:

https://www.youtube.com/watch?v=e_aSowDUUaY


Thanks, good catch. It's really worth watching the whole thing.


This is a very interesting way to draw music. Lately I have done some experiments with something akin to the Russian ANS synthesizer, a photoelectronic instrument where the sound spectrum was sketched onto glass disks. In my program everything is driven by visuals produced by a GPU fragment shader; you can use a webcam as a texture to replicate something like the ANS workflow, where you draw the sound spectrum, although my program is much less limited. You could also capture a painting program and feed the drawing in in real time.

The synthesizer is available here: https://www.fsynth.com
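For anyone curious how this ANS-style image-to-sound mapping works in principle, here's a minimal additive-synthesis sketch in Python with NumPy. This is my own illustration, not fsynth's actual implementation: the function name, band range, and column duration are arbitrary choices. Each image column becomes a slice of time, each row a sine oscillator whose amplitude follows pixel brightness.

```python
import numpy as np

def image_column_to_audio(spectrum, sr=44100, col_dur=0.05,
                          f_lo=55.0, f_hi=8800.0):
    """Additive synthesis from a 2D grayscale 'spectrogram' image.

    spectrum: array of shape (n_bands, n_cols), values in [0, 1];
    row 0 is the lowest band. Each column becomes col_dur seconds
    of audio, summing one sine per band weighted by brightness.
    """
    n_bands, n_cols = spectrum.shape
    # Log-spaced band frequencies, roughly like the ANS's optical discs
    freqs = np.geomspace(f_lo, f_hi, n_bands)
    t = np.arange(int(sr * col_dur * n_cols)) / sr
    # Which image column is active at each sample
    cols = np.minimum((t / col_dur).astype(int), n_cols - 1)
    out = np.zeros_like(t)
    for band, f in enumerate(freqs):
        amps = spectrum[band, cols]          # brightness envelope
        out += amps * np.sin(2 * np.pi * f * t)
    # Normalize to avoid clipping
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```

A single bright row yields a steady sine at that band's frequency; a diagonal stroke yields a stepped glissando. A real implementation would smooth the column-to-column amplitude jumps to avoid clicks.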


Whoa, very cool. I like the integrated live coding aspect.


Neat links, thanks for the references. I would be very interested in knowing more about your thesis when it's something you're able to share.

One wonders what Daphne would have thought of tools such as yours, and of course of things like U&I's MetaSynth:

http://www.uisoftware.com/MetaSynth/


For sure. It's a long-ish video, but I defended a few months ago. The recording is online and hits all of the major points: https://www.youtube.com/watch?v=B4xZQxVVnJA

Maybe I'll post a Show HN here when I publicly release the app over the summer.


Your app looks amazing. I love how you can leave handwritten notes next to the audio control nodes. I know these kinds of hybrid sketching/programming environments have been an HCI dream for a long time now, but I haven't seen one look as good as in your demo.


Thanks for the links, your PhD sounds promising.

Not sure which machine came first, but around the time of the UPIC system you mentioned, the Fairlight CMI was released, which (amongst its other functions) allowed users to draw out waveforms using a light pen:

https://en.wikipedia.org/wiki/Fairlight_CMI#CMI_Series_I

This video shows Peter Gabriel playing around with a Fairlight CMI. From what I remember it doesn't show the waveform drawing, but it gives some idea of what the machine was capable of:

https://www.youtube.com/watch?v=ON8lVgJxMQA


Love your work, Spencer. One of these days I'll get around to forking/pushing changes I've made to ChucK to support DirectX 8 and later while still allowing ChucK to build with the older DX5....


Thank you, awesome, please do! Though hopefully some day ChucK will have proper WASAPI and ASIO.


I have a hard time visualizing FM algorithms and Auraglyph looks like it would be really useful. When will it be available?


Awesome. Releasing it publicly later this summer!


Very interesting list of links. This is such a good topic for popularization because of its visual and concrete aspects. (e.g. Papert and Turkle's idea of the "revaluation of the concrete" http://www.papert.org/articles/EpistemologicalPluralism.html ). (Drawing is always concrete, I guess, not just because it's a hand-eye coordination thing, but because of its particularity.)

FWIW The most general wikipedia article and category that comes close to covering what you describe as "optical synthesis" is

https://en.wikipedia.org/wiki/Graphical_sound

https://en.wikipedia.org/wiki/Category:Graphical_sound

The page emphasises the optical film soundtrack (on celluloid) as a medium for experimentation, which is something I'm fascinated by.

The gender aspect is interesting too, maybe related to ideas like Papert and Turkle's. There are at least two female film artists who have done interesting critical work with the optical film track:

Ute Aurand http://www.uteaurand.de/filme/paul_celan_liest.php

Lis Rhodes https://lux.org.uk/work/dresden-dynamo

also, in a different way,

Aura Satz https://www.thewire.co.uk/video/p=15036

whose work has, AFAIK, included providing replica tapes for the Oramics machine when it was displayed at the Science Museum.

There's also of course the very famous and female Delia Derbyshire, working in a similar milieu to Oram, but without pursuing the graphical aspect.

I'm sure you're aware of this, but for the record, there's a physical archive of Oram material at Goldsmiths in London: http://www.gold.ac.uk/ems/oram/ The man in charge is Mick Grierson, an old Dorkbot London hand.

Your software looks excellent. Very intuitive version of what others try to do textually e.g. http://functional-art.org/2013/hudak.pdf

On slide 16 of that set, Hudak uses the dreaded Haskell monads (well, arrows) to compose a synthesis setup (a model of a flute); it's an excellent example of something that could be done much less opaquely in a graphical environment like yours.


Awesome, really interesting thoughts here and a lot of material for future study. I've not been to the archive at Goldsmiths, though I once gave Mick a haircut as "performance art" :)



