This really needs to be properly EQ'd though. Watch these tutorials, and you will be able to make the thing sound 10x better with ~30 minutes of work. It doesn't matter that the tutorials are for guitar music and use specific EQ plugins; the advice is universal. Once your spectrum is less saturated, you'll be able to dial back the limiting as well.
I've done about 4 years of research on sonification, which is using non-speech audio to represent patterns in data (http://mags.acm.org/interactions/20120102/?pg=37#pg37 for some specifics). This article is a subset of sonification in some ways, since we're representing some quantitative data using auditory parameters.
There's an entire class of scenarios where conventional HCIs can't represent data for analysis: where people have an occupied visual sense (doctors during surgery), where people are mobile, where people are overloaded by visual data (stock analysts), where the visual sense isn't suited for extracting data from noise (during the Voyager 2 mission), etc. We tend to rely only on our visual sense for communicating data, and as we start using computers for data display in more places, we're reaching the limitations of conventional HCI.
My research was on proving the viability of sonification - looking at the accuracy of comprehension, the cognitive and physiological processes, demonstrating shared mental processes with visual graph comprehension, etc. It's still something I'd love to revisit and commercialize someday.
Years ago, a buddy hosted corporate email, DNS, etc. Racks of servers. He added ambient audio to everything he monitored. Nature sounds, weather, birds, insects, etc. The volume, samples, and tempo would change dynamically. Happy, soothing sounds when all was well. Disruptive sounds when bad stuff happened.
(I don't know if you'd classify that as sonification.)
Walking around, visiting with guests, talking on the phone, his crew always knew the health of their systems.
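A setup like that can be sketched as a simple mapping from health metrics to playback parameters. Everything here is hypothetical (the metric names, thresholds, and sample files are my own assumptions, not details from that system) - just one way the dynamic volume/sample/tempo selection might work:

```python
def health_to_soundscape(cpu_load, error_rate):
    """Map system health metrics (each 0.0-1.0) to ambient playback parameters.

    Thresholds and sample names are illustrative, not from the original setup.
    """
    severity = max(cpu_load, error_rate)
    if severity < 0.5:
        sample = "birdsong.ogg"         # calm: all is well
    elif severity < 0.8:
        sample = "distant_thunder.ogg"  # warning: something is degrading
    else:
        sample = "storm.ogg"            # alert: bad stuff happening
    volume = 0.3 + 0.7 * severity       # louder as things get worse
    tempo = 1.0 + severity              # faster playback under stress
    return {"sample": sample, "volume": round(volume, 2), "tempo": round(tempo, 2)}
```

A monitoring loop would poll its metrics, call something like this, and crossfade the ambient track accordingly.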
Yep, exactly - applications like that are just cool, useful and intuitive all at once.
What's interesting is that for visual displays, we have decades of detailed research into visual perception: we know from experience how to design a graph so that patterns can be quickly extracted and understood. We don't yet have that same level of understanding for sonification, but once we do, applications like this will be even better.
Can't play the audio today because of the snow (I'm on satellite internet), so I can't comment on the music, sorry. I trade, though, and I also compose, so this project got my attention. What I'd like to add is this: my focus in trading at the moment is penny stocks. From my own experience and what others say, Apple and other blue chips usually don't move as "dramatically" as much cheaper penny stocks. It's in the math. If the current price is in the $500 range, like Apple's, a move of a few dollars represents less than 1%, whereas in penny or sub-penny stocks, a single trade can move the price a few cents or less, which can easily amount to a 10%, 30% or bigger change. Movements like that, going back and forth within a very short period, sometimes resemble needle-sharp alien teeth; it's funny to watch. Percentage price changes are that much crazier, and likewise the volumes when a stock suddenly gets attention. I'd imagine that music generated from that kind of data would offer a different kind of compositional experience.
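The arithmetic behind that difference is easy to check. A quick sketch (the specific prices and moves are made-up examples, not real quotes):

```python
def pct_change(price, move):
    """Percentage change that an absolute price move represents."""
    return move / price * 100

# A $3 move on a $500 blue chip like Apple:
blue_chip = pct_change(500.00, 3.00)   # under 1%
# A 2-cent move on a 5-cent penny stock:
penny = pct_change(0.05, 0.02)         # a double-digit percentage swing
```

That two-orders-of-magnitude gap in relative volatility is what would make penny-stock sonifications sound so different.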
We at Adcloud.com (a technology provider for online advertising exchange) did something similar with our real-time data (clicks, conversions, retargeting, impressions, etc.) in our last hackathon. WIP, though ;)
lol You've succeeded. Well it sounds awful. Just the way dubstep does.
For your next project, try using Paulstretch to create a serene ambient track. It's software that lets you stretch audio tracks to anywhere from 101% to 800% of their original length and beyond. Here's Justin Bieber's song "U Smile" slowed down 800%: http://www.youtube.com/watch?v=QspuCt1FM9M
You could use all sorts of data to make ambient music. Heck, call it Ambient Data and release numerous tracks. Make dark themed tracks with rain in the background and ambient music created using how many wars have happened in human history and when.
> lol You've succeeded. Well it sounds awful. Just the way dubstep does.
In defense of dubstep, wobble bass is not a defining characteristic of the genre.
It was popularized by later, more club-friendly strands of dubstep (a precursor to the current 'brostep' trend), and is largely absent from the older, most critically acclaimed dubstep productions (e.g. Burial's self-titled album and Untrue).
Don't see how this is an instance of the genetic fallacy. When a sound evolves significantly beyond that of an existing genre, it makes more sense to create a new genre for it than expand the existing genre to encompass it.
My comment was intended simply to lament the fact that people a) consider offensive wobble bass the distinguishing feature of dubstep, and b) dismiss the entire genre as 'sounding awful', seemingly on the basis of this association with wobble bass.
I imagine it can be automated. Sources can be any moving data (weather, markets, stats, transcoded visuals, medical signals, etc.) and can be output to any medium for different purposes. Speed can be variable and programmable, along with timbre, ADSR envelopes, spectra and so on, à la some of the sound-shaping programs on the market. The knowledge got2surf presented can be applied.
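As a minimal sketch of that kind of automation, here is one way to map an arbitrary numeric series onto pitch by linear scaling (the MIDI note range and the rounding rule are my own assumptions; the same idea could drive tempo, timbre, or ADSR parameters instead):

```python
def sonify(series, low_note=48, high_note=84):
    """Map a numeric series to MIDI note numbers by linear scaling.

    low_note/high_note bound the pitch range (C3 to C6 here, an assumption).
    Any data source - weather, market prices, medical signals - works, as
    long as it yields numbers.
    """
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1  # avoid division by zero on flat data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in series]
```

The lowest value in the series lands on the bottom of the range and the highest on the top, so dramatic penny-stock-style swings would produce correspondingly wide melodic leaps.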