I wonder if software exists that is aware of the overtones of each particular note of a given instrument. Theoretically, this would be a more precise way to pull a chord out of a recording without having to manually compensate for the overtones later. Standard Fourier analysis is based on sine waves, which aren't the best model for complex tones like those of a piano or a 12-string guitar.
Mathematically, the best approach I'm aware of would be to characterize the harmonic spectrum of the exact instrument as closely as possible, and then take inner products against the spectrum of the mystery sound being analyzed. There's no guarantee that the note-vectors would be orthogonal (they probably aren't), so the problem becomes more interesting: you might want to maximize sparsity in the resulting output -- that is, look for the fewest notes that match the recording within epsilon error in sample (or spectrum) space. Exact sparsity (L0) minimization is NP-hard, so a standard approximation is to perform L1-minimization using linear programming instead. Another approach might be to look for a balance between an absolute-error term and a sparsity term.
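As a toy sketch of that L1 idea (everything here is made up for illustration: the "dictionary" columns are random stand-ins, where a real one would hold measured per-note spectra of the instrument). With nonnegative amplitudes, minimizing the L1 norm subject to matching the spectrum is a plain linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical dictionary: each column is the magnitude spectrum of one
# note of the instrument (64 frequency bins x 12 notes, random stand-ins).
D = np.abs(rng.normal(size=(64, 12)))

# The "mystery recording" is a sparse mix of notes 3 and 7.
x_true = np.zeros(12)
x_true[[3, 7]] = [1.0, 0.5]
b = D @ x_true

# Basis pursuit: min ||x||_1  s.t.  D x = b,  x >= 0.
# With x >= 0, ||x||_1 is just sum(x), so the objective is linear.
res = linprog(c=np.ones(12), A_eq=D, b_eq=b, bounds=[(0, None)] * 12)
x_hat = res.x
print(np.flatnonzero(x_hat > 1e-6))  # indices of detected notes
```

In a realistic setting there would be far more candidate notes than frequency bins (an underdetermined system), which is exactly where the L1 objective earns its keep by picking the sparse explanation.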
Yes, software like this exists. Using multiple overtones to reconstruct a single note is pretty basic in this field; if you search for "wave to midi" software (for example) you'll find a few that do this. The most advanced right now might be Melodyne (used in the article), which also lets you pitch-shift the detected notes (and all their partials) without affecting the rest of the sound. It's quite proprietary -- I don't think the details of their algorithms are available -- but it does a good job on a variety of instruments. Polyphonic detection can never be perfect, of course; especially with layered organic sources such as voice and winds, it becomes impossible to determine which note a particular partial belongs to.
Agreed -- just looking for individual spikes in the harmonic spectrum is naive. But even if you have a good idea of which instrument it is, other factors can make matching a generic instrument model difficult (the player, the playing space, instrument variations, etc.). For instance, with the French horn (it's what I know) there are sometimes drastic differences between players and instruments -- the starkest being, perhaps, the Vienna horn vs. the usual double horn.
If you know the number of notes being played, you can solve the problem in polynomial time. You could run it a few times with different assumed note counts.
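To illustrate the point (with made-up random "spectra" again standing in for real per-note measurements): for a fixed assumed note count k, brute-forcing the C(n, k) subsets with a least-squares fit on each is polynomial in the number of candidate notes n:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
D = np.abs(rng.normal(size=(64, 12)))   # hypothetical per-note spectra
x_true = np.zeros(12)
x_true[[3, 7]] = [1.0, 0.5]
b = D @ x_true                          # mystery mix of notes 3 and 7

def best_k_notes(D, b, k):
    """Least-squares fit every k-subset of notes; keep the best fit.
    C(n, k) subsets, each solved in polynomial time -> polynomial for fixed k."""
    best, best_err = None, np.inf
    for subset in combinations(range(D.shape[1]), k):
        cols = D[:, list(subset)]
        coef, *_ = np.linalg.lstsq(cols, b, rcond=None)
        err = np.linalg.norm(cols @ coef - b)
        if err < best_err:
            best, best_err = subset, err
    return best

print(best_k_notes(D, b, 2))  # -> (3, 7)
```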
Getting the harmonic spectrum of each instrument will be extremely difficult, though: each note can have a different waveform, and it will be affected by unknown non-linear effects in the instrument itself as well as in mixing (e.g. compression or distortion).
Not really. First, you can emphasize different overtones by where you strike or pluck the guitar. Second, we don't know the model of the tube amps they were using. Tube amps do some really funky mixing, which can trip up FFT analysis and give weird/wrong results.
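As a crude illustration of that mixing (tanh here is only a stand-in for a real tube transfer curve, which I'm not modeling): push two pure tones through an odd-symmetric saturator and the spectrum sprouts intermodulation products at frequencies neither input contains:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                                   # one second of signal
clean = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*550*t)    # two pure tones
dirty = np.tanh(3 * clean)                               # crude tube-style saturation

def peaks_hz(x, thresh=0.01):
    # With a 1-second window, rfft bin index equals frequency in Hz.
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return set(np.flatnonzero(spec > thresh))

print(sorted(peaks_hz(clean)))   # [440, 550]
print(sorted(peaks_hz(dirty)))   # adds intermodulation peaks, e.g. 330 (= 2*440 - 550)
```

A naive pitch detector that trusts spectral peaks would now "see" notes at 330 Hz and above that nobody played.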
I used to work at Medium on the product science team, before this tool was created. I didn't work on it myself, but I can appreciate how well it works in practice. The ease of use of this tool, both as a data provider and as a visualization consumer, is incredible.
I can imagine some HN readers thinking they want more detailed graph control, but the few options + the defaults on this tool cover a surprising range of use cases. I think you have to try it on your own data to appreciate the subtle decisions that went into the design.
This is a neat idea, but I think it's a bit too minimal. What I like about it is it tries to just Do The Right Thing for your chart setup. Unfortunately, a lot of that assumes that the data is already prepared in The Right Way. Therein lies the crux.
Playing around with it briefly, it looks like your data already needs to be filtered down to only the columns you want displayed, and presumably pre-aggregated. The question is: should your tool help you do that? That's where BI comes in, but the tradeoff with all the BI tools out there is that you lose the simplicity (e.g. can you explain a join to your marketing manager? How about the difference between a measure and a dimension?).
At DataHero.com we're building the best of both worlds: a tool that can automatically do The Right Thing in terms of your chart setup, but that also has enough intelligence under the hood to prepare the data The Right Way without the user needing any technical background. BI and visualization are still a technical person's playground... once a tool can automatically understand what's in your data AND the best way of putting it all together, that's how you make "Data" accessible to the larger market.
This is a great step toward making data as accessible as possible, but it's missing the part about going from raw data to prepared data (and making that easy/intelligent/automatic enough that you don't need to understand a join to do it).
Hi, I'm on the team at Medium that launched Charted today. No, Charted wasn't built to be used in blog posts. It was built as an internal tool to, among other things, share simple data query results and create ad-hoc dashboards (as part of the general data analysis our Product Science team does).
This interface is solid. So many easy-to-use graphing tools have poor defaults. This is perfect for the use case of a quick visualization for queried data or (even better) live data you'd like to dashboard.
The thought experiment with infinite hats is interesting.
It sounds funny to ever say "the AoC is wrong," though, because axioms are things you get to choose as true or false. They are essentially subjective, and in practice only as "socially true" as they are commonly used. I take the author to be arguing that the more intuitively satisfying set of axioms excludes AoC.
I personally disagree, although I admit there's still room for debate because of compelling arguments on either side (most of which are appeals to intuition by the nature of the problem).
Here's a small counter-argument: since we're appealing to intuition, consider that we're asking a countably infinite number of people to (a) see, and reason based on seeing, an infinite sequence in front of them; and (b) all memorize a specific function, essentially a map from all the reals in [0,1] to a subset of [0,1], which we may think of as binary numbers with infinite expansions. (The intuitive point is clear without having to worry about 0.01111... being the same value as 0.1.) This function is insanely weird and hard to describe succinctly; basically, it must be memorized. My intuition tells me that as soon as we expect agents to perform these feats, they are no longer humans and no longer Turing machines. Since this memorized function is so weird, they are even beyond some kinds of infinite-time Turing machines with finite memories (or infinite-memory TMs with finite time). I think they are even beyond infinite-time Turing machines with countably infinite memories, although I haven't proven this carefully.
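For reference, here is my paraphrase of the standard AC construction being debated (a sketch only; the notation is mine, not the article's):

```latex
% Sketch of the usual AC hat strategy.
% Identify an assignment of hats with a sequence $x \in \{0,1\}^{\mathbb{N}}$,
% and define an equivalence:
%   $x \sim y \iff x_n = y_n$ for all but finitely many $n$.
% By the Axiom of Choice, fix one representative $r_C$ in each class $C$.
% Person $n$ sees the tail $(x_{n+1}, x_{n+2}, \dots)$, which already
% determines the class $[x]$, so everyone computes the same representative
% $r = r_{[x]}$ and guesses $r_n$ for their own hat.
% Since $x \sim r$, only finitely many people guess wrong.
```

The "insanely weird function" above is exactly this choice of representatives: it exists by AC but admits no explicit description.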
In the long run, I suspect an alternative axiomatization (or possibly an elegant classification of set-theory statements) will shed light on these problems, somewhat analogous to the way we now consider non-Euclidean geometry interesting and "true" in its own universe, which we mentally separate from the Euclidean plane.
[ps Not all comments about axioms are appeals to intuition. E.g., you can prove some are independent or dependent. But arguments along the lines of this post are appeals to intuition when used for/against AoC.]
What these expressions of formalism usually miss is that claims of provable inconsistency/dependence/independence and all that mess are themselves mathematical claims, and you're asserting that they are true or false. There are real attempts to get around that problem, but things get complicated (http://plato.stanford.edu/entries/formalism-mathematics/).
Actually, that is where the argument goes wrong. You cannot have a function that maps the infinitely many numbers in [0,1] to a finite subset (which is the 'finite initial sequence'). So the argument collapses to the conclusion: after an infinite number of incorrect guesses, everyone guesses correctly -- which is to say there is no point at which everyone starts to guess correctly.
I recently published an open-source Win/Mac wrapper library in C. It is not well tested and only covers the specific subset of functionality I needed for my current project, but it may still be useful for others learning which pieces of each OS handle which common tasks -- things like CoreGraphics vs. GDI, file-system quirks, how to play audio, etc.
gcc and clang can handle mixed Objective-C and C++ in a single source file (some call it Objective-C++). The only tricky part is linking, since C++ and ObjC++ mangle symbol names differently than C/ObjC do; but you can still link between them by putting extern "C" declarations in the right places.