tylerneylon's comments

It sounds like you're arguing that people wouldn't understand new results if they were more open. Most people I know well are educated/intelligent and could understand leading-edge research results within their own field, so I disagree. (And I don't work at a university!)

Even if only a minority of the public understood publicly-available research, having all results open could change the way the world works. It was hard to see how the information-sharing of the internet would change the world, so I can understand how this is hard to see as well.

You also hint that science deserves more widespread respect, and that the world would be better off if people were better educated and tended to be more scientifically minded. I agree with that!

-----


I was arguing that we live in a post-literate, post-numerate world where most people have not learned the basics of science. Even people with a scientific or engineering education don't know much about other disciplines. Most modern scientific research does not play well to the public because most people lack the tools and knowledge necessary to understand it.

I would like to see science and technology made more accessible, but that means the public needs to learn more science and mathematics. And it means that some people will need to abandon beliefs of theirs that are not supported by evidence.

-----


Many mathematicians seem to have an attitude that a well peer-reviewed proof is simply correct and etched in stone for good. This seems to me like publishing programs - some of them thousands of lines long - and expecting them to be essentially bug-free without ever running them.

Von Neumann is not exactly making the same point, but he is arguing against the sense that math is a kind of purely abstract discovery that has escaped from the messiness of day-to-day reality.

One could ask: will we ever get to a point when all mathematical proofs are rigorously checked, so that we know they're correct? I hope we do get to a point when program-checked proofs are standard, but even then there's the possibility that a proof-checker itself is buggy. So, philosophically, it seems we may never justifiably feel certain beyond all doubt that any given proof is correct.
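
For readers who haven't seen one, here's a toy example of what a machine-checked proof looks like, written in Lean 4 (chosen arbitrarily; the comment above isn't about any specific checker):

    -- The checker verifies this proof term against the stated claim;
    -- no human referee needs to re-read the underlying argument.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b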

But we can have greater confidence in a proof verified by both expert humans and by a trusted program than in a proof simply reviewed by humans. So the effort is meaningful and worthwhile - all in the context of seeing math as a field as fallible as any other.

-----


> Many mathematicians seem to have an attitude that a well peer-reviewed proof is simply correct and etched in stone for good. This seems to me like publishing programs - some of them thousands of lines long - and expecting them to be essentially bug-free without ever running them.

I think that the important distinction is 'peer-reviewed'. A random programme published, even by a very good programmer, is indeed likely to have errors; but I think that one can have more confidence in a peer-reviewed programme (which is essentially why we trust open source over closed source, after all).

Of course there are distinctions. Software is automatically tested when used, whereas mathematics is not automatically tested when read, so one might reasonably have more confidence in software purely on the basis of its longevity than one does in mathematics. On the other hand, although individual pieces of reasoning can be extremely arcane, 'flow control' in mathematical reasoning tends to be much more elementary than in software, so one is unlikely to uncover the mathematical analogue of an infrequently visited branch in a code path going untested despite heavy use of the software.

-----


You might be interested in this article [1]. Basically, even if many proofs are incorrect, their results are almost always still true. Also, there have been many large changes in what counts as 'rigorous' over time. I agree with you that computer-checked proofs seem like a logical next step in the rigor of mathematics.

[1] http://www.gwern.net/The%20Existential%20Risk%20of%20Mathema...

-----


Felix Hausdorff committed suicide in 1942, after having put in quite some effort to escape Nazi Germany.

-----


This was enjoyable.

I wonder if software exists that is aware of the overtones of each particular note of a given instrument. Theoretically, this would be a more precise way to pull out a chord from a recording without having to manually compensate for the overtones later. Standard Fourier analysis is based on sine waves, which aren't the best model for complex tones like those of a piano or a 12-string guitar.

Mathematically, the best approach I'm aware of in the long run would be to study the harmonic spectrum of the exact instrument as closely as possible, and then perform inner products against the spectrum of the mystery sound being analyzed. There's no guarantee that the note-vectors would be orthogonal (probably not), so the problem becomes more interesting: you might want to maximize sparsity in the resulting output, that is, look for the fewest notes capable of matching the recording within epsilon error in sample (or spectrum) space. Exact sparsity maximization (L0 minimization) is NP-hard, so a good way to approximate it is to perform L1 minimization using linear programming. Another approach might be to look for a balance between the absolute error term and the sparsity term.
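
As a rough illustration of that pipeline, here's a minimal Python sketch. Everything in it is hypothetical: the per-note "spectra" are toy harmonic combs rather than measured instrument data, and scikit-learn's Lasso stands in for the L1 step (an L1-penalized least squares rather than a linear program):

    import numpy as np
    from sklearn.linear_model import Lasso

    # Toy "harmonic comb" spectrum for a note: energy at the fundamental
    # and its overtones, decaying as 1/k. Real columns would be measured
    # from the actual instrument.
    def note_spectrum(f0, n_bins=2048, n_partials=8):
        s = np.zeros(n_bins)
        for k in range(1, n_partials + 1):
            b = int(round(k * f0))
            if b < n_bins:
                s[b] = 1.0 / k
        return s

    fundamentals = 27.5 * 2.0 ** (np.arange(88) / 12.0)  # 88 piano notes, Hz
    D = np.stack([note_spectrum(f) for f in fundamentals], axis=1)

    # Mystery recording: a C major triad (C4, E4, G4) plus a little noise.
    rng = np.random.default_rng(0)
    y = D[:, [39, 43, 46]].sum(axis=1) + 0.01 * rng.normal(size=D.shape[0])

    # L1-regularized least squares trades reconstruction error against the
    # number of active notes; alpha tunes that balance.
    model = Lasso(alpha=1e-4, positive=True, fit_intercept=False,
                  max_iter=10000)
    model.fit(D, y)
    print(np.flatnonzero(model.coef_ > 0.05))  # expect [39 43 46]

The inner products against known note spectra are implicit in the least-squares term; alpha plays the role of the balance between the error term and the sparsity term.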

-----


Yes, software like this exists. Using multiple overtones to reconstruct a single note is pretty basic in this field; if you search for "wave to midi" software (for example) you'll find a few that do this. The most advanced right now might be Melodyne [1] (used in the article), which also lets you pitch-shift the detected notes (and all their partials) without affecting the rest of the sound. It's quite proprietary, and I don't think the details of their algorithms are available, but it does a good job on a variety of instruments. The polyphonic detection can never be perfect, of course: with layered organic instruments such as voice and winds, it becomes impossible to determine which note a particular partial belongs to.

[1]: http://en.wikipedia.org/wiki/Celemony

-----


Agreed -- just looking for individual spikes in the harmonic spectrum is naive. However, I think even if you have a good idea of which instrument it is, there are other factors involved that could make matching a generic instrument model more difficult (e.g. the player, the playing space, instrument variations, etc.). For instance, with the French horn (it's what I know), there are sometimes drastic differences between players and instruments -- possibly the most stark being the Vienna horn vs. the usual double horn.

-----


If you know the number of notes being played, you can solve the problem in polynomial time (a quick sketch of why is below). You could run it with a few variations in the assumed number of notes.
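
With k notes assumed, brute force over all k-subsets of an n-note dictionary is only O(n^k) least-squares fits, which is polynomial in n for fixed k. A hypothetical sketch, with D and y as in the dictionary example upthread:

    import numpy as np
    from itertools import combinations

    # Try every k-note subset and keep the one with the smallest residual.
    # Slow in practice, but polynomial in the number of candidate notes.
    def best_k_notes(D, y, k):
        best, best_err = None, np.inf
        for subset in combinations(range(D.shape[1]), k):
            coef, *_ = np.linalg.lstsq(D[:, subset], y, rcond=None)
            err = np.sum((D[:, subset] @ coef - y) ** 2)
            if err < best_err:
                best, best_err = subset, err
        return best

    # e.g. best_k_notes(D, y, 3) should return the planted triad.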

The harmonic spectrum of each instrument will be extremely difficult to get, though, as each note can have a different waveform and will be affected by unknown non-linear effects in the instrument itself as well as in mixing (e.g. compression or distortion).

-----


Not really. First, you can simulate overtones by where you strike a guitar. Secondly, we don't know the model of the tube amps they were using. Tube amps do some really funky mixing, which can confound FFT-based analysis by giving weird/wrong results.
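
A small illustration of that mixing, using a memoryless cubic nonlinearity as a crude stand-in for tube saturation (real tube behavior is far more complicated; this is only a sketch):

    import numpy as np

    # Two clean sines through a soft-clipping cubic: the output spectrum
    # gains intermodulation products that neither input note contains.
    sr = 44100
    t = np.arange(sr) / sr
    clean = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)
    driven = clean - 0.3 * clean ** 3

    spectrum = np.abs(np.fft.rfft(driven))
    bins = np.fft.rfftfreq(len(driven), 1 / sr)
    print(bins[spectrum > 0.01 * spectrum.max()])
    # not just 440 and 660: also 220, 880, 1320, 1540, 1880, 1980

A naive peak-picker would happily report those extra components as notes.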

-----


>First, you can simulate overtones by where you strike a guitar.

If you know the make of the guitar and the strings and where it was picked each time and the location of each of the tone and volume knobs and the position of the pickup switches.

>Tube amps do some really funky mixing, which can confound FFT-based analysis by giving weird/wrong results.

>unknown non-linear effects...distortion

-----


I used to work at Medium on the product science team, before this tool was created. I didn't work on it myself, but I can appreciate how well it works in practice. The ease of use of this tool, both as a data provider and as a visualization consumer, is incredible.

I can imagine some HN readers thinking they want more detailed graph control, but the few options + the defaults on this tool cover a surprising range of use cases. I think you have to try it on your own data to appreciate the subtle decisions that went into the design.

-----


This is a neat idea, but I think it's a bit too minimal. What I like about it is that it tries to just Do The Right Thing for your chart setup. Unfortunately, a lot of that assumes the data is already prepared in The Right Way. Therein lies the crux.

Playing around with it briefly, it looks like your data already needs to be filtered to only the columns you want displayed, and presumably pre-aggregated. The question is whether your tool should help you do that. That's where BI comes in, but with all the BI tools out there, the tradeoff is that you lose the simplicity (e.g., can you explain a join to your marketing manager? How about the difference between a measure and a dimension?).

At DataHero.com we're building the best of both worlds: a tool that can automatically do The Right Thing in terms of your chart setup, but that also has enough intelligence under the hood to prepare the data The Right Way without requiring any technical background. BI and visualization are still a technical person's playground... once a tool can automatically understand what's in your data AND automatically figure out the best way of putting it all together, that's how you make "Data" accessible to the larger market.

This is a great tool for making data as accessible as possible, but it's missing the step of going from raw data to prepared data (and making that easy/intelligent/automatic enough that you don't need to understand a join to do it).

-----


So is this supposed to be an easy way for people to write blog posts that have data attached to them?

-----


Hi, I'm on the team at Medium that launched Charted today. No, Charted wasn't built to be used in blog posts. It was built as an internal tool to, among other things, share simple data query results and create ad-hoc dashboards (as part of the general data analysis our Product Science team does).

-----


Considering what Medium is -- yes, precisely.

-----


This interface is solid. So many easy-to-use graphing tools have poor defaults. This is perfect for the use case of a quick visualization for queried data or (even better) live data you'd like to dashboard.

-----


The Fourier transform is the map between sound waves and music as written on a staff.

That's an informal statement with some hand-waving since, for example, music notation is discrete and sound waves aren't, but that's the main idea.
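
A quick sketch of that correspondence, in Python with hypothetical values: synthesize an A major triad, and the Fourier spectrum's peaks land at the written notes' frequencies.

    import numpy as np

    # One second of an A major triad (A4, C#5, E5) at 44.1 kHz.
    sr = 44100
    t = np.arange(sr) / sr
    wave = sum(np.sin(2 * np.pi * f * t) for f in [440.0, 554.37, 659.25])

    # The spectrum's three biggest peaks sit at the three written pitches.
    spectrum = np.abs(np.fft.rfft(wave))
    bins = np.fft.rfftfreq(len(wave), 1 / sr)
    print(sorted(bins[np.argsort(spectrum)[-3:]]))  # ~ [440, 554, 659]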

-----


Saw this after posting my comment. I agree this is the simplest way to think of an FFT.

-----


Thank you!

-----


The thought experiment with infinite hats is interesting.

It sounds funny to ever say "the AoC is wrong," though, because axioms are things you get to choose as true or false. They are essentially subjective, and in practice they are only as "socially true" as they are commonly used. I guess the author is arguing that the more intuitively satisfying set of axioms excludes AoC.

I personally disagree, although I admit there's still room for debate because of compelling arguments on either side (most of which are appeals to intuition by the nature of the problem).

Here's a small counter-argument: Since we're appealing to intuition, consider that we're asking a countably infinite number of people to (a) see, and reason based on seeing, an infinite sequence in front of them; and (b) all memorize a specific function, essentially from all the reals in [0,1] to a subset of [0,1], where we think of reals as binary numbers with infinite expansions. (The intuitive point is clear without having to worry about 0.01111... being the same value as 0.1.) This function is insanely weird and hard to describe succinctly; basically, it must be memorized. My intuition tells me that as soon as we expect agents to perform these feats, they are no longer humans and no longer Turing machines. Since this memorized function is so weird, they are even beyond some kind of infinite-time Turing machine with finite memory (or infinite-memory TM with finite time). I think they are even beyond infinite-time Turing machines with countably-infinite memories, although I haven't proven this carefully.
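
For concreteness, here is the standard form of the construction I have in mind, in LaTeX (a sketch; details vary by presentation):

    Identify hat sequences with elements of $\{0,1\}^{\mathbb{N}}$, and say
    $x \sim y$ iff $\exists N \, \forall n \ge N : x_n = y_n$ (equal from
    some point on). AoC supplies a selector
    $r : \{0,1\}^{\mathbb{N}}/{\sim} \to \{0,1\}^{\mathbb{N}}$ picking one
    representative per class. Person $n$ sees enough of the true sequence
    $x$ to identify its class $[x]$ and guesses $r([x])_n$; since
    $r([x]) \sim x$, all but finitely many guesses are correct. The function
    to be ``memorized'' is $r$, which admits no explicit description.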

In the long run, I suspect an alternative axiomatization (or possibly an elegant classification of set theory statements) will shed light on these problems, somewhat analogous to the way we now consider non-Euclidean geometry as interesting and "true" in its own universe that we mentally separate from the Euclidean plane.

[ps Not all comments about axioms are appeals to intuition. E.g., you can prove some are independent or dependent. But arguments along the lines of this post are appeals to intuition when used for/against AoC.]

-----


What these expressions of formalism usually miss is that "provably inconsistent," "dependent," "independent," and all that mess are themselves mathematical claims, and you're saying they're true or false. There are real attempts to get around that problem, but things get complicated here (http://plato.stanford.edu/entries/formalism-mathematics/).

-----


Actually, that is where the argument goes wrong. You cannot have a function that maps the infinitely many numbers in [0,1] to a finite subset (which is the 'finite initial sequence'). So the argument collapses to the conclusion: after an infinite number of incorrect guesses, everyone guesses correctly. Which is to say, there is no such point where everyone starts to guess correctly.

-----


Well, "wrong" itself can be subjective, if not interpreted in the "provably inconsistent" sense, and more in the "to take it as an axiom leads to problematic mathematical models".

-----


I'm a big fan of self-analytics. I also like small-but-useful command-line tools of this nature. This is pleasantly documented and purpose-driven.

If you're looking for something more focused on short daily summaries of what you've done, I wrote a similar tool called wj:

http://tylerneylon.com/a/wj/

It's more about organization and presentation than privacy. For example, it can generate a TeX file that turns your entries into a nicely formatted PDF.

I also like the automated self-analytics Stephen Wolfram uses:

http://blog.stephenwolfram.com/2012/03/the-personal-analytic...

-----
