And they weren't angered because of misconstrued intent, but because it was stated that anyone who supports basic income "borders on innumeracy", and the claim was backed up by a toy simulation that addressed roughly none of the potential effects of that policy.
The article also attempted to reduce the concept of policy debate into a STEMlord "show me the code or you're wrong" absurdity.
Obviously though, the author has learned his lesson. His lines weren't squiggly enough.
This is a blatant misrepresentation of what he said. The exact quote is:
"Like most political arguments, the discussion of a Basic Income borders on innumeracy."
First of all, it says nothing about the supporters of a basic income being innumerate, but that the discussion itself (on both sides) is. If you bother to actually _read_ that paragraph, his point is that discussions about basic income don't include an attempt to make things concrete with hard numbers. It's a fair complaint that his analysis is only a baby-step in that direction, but he also explicitly addresses that, saying that he was just giving an example of _how_ one would go about bringing more numbers into the debate (i.e. the missed potential effects that you noticed).
I definitely did not walk away from reading it the first time with the impression that it was a simple illustrative example of how one might bring evidence into policy debate and how simulation can provide evidence when other forms aren't available. I'll try a re-read with that in mind, as I would have quite enjoyed that conclusion. Perhaps opponents to BI were able to get to that bit undeterred.
It might be that unassuming, squiggly, imprecise graphs are only the beginning of the required revisions, if that was to be the core takeaway.
This is another great example of an attempt to bring numbers into a political argument, this time about sustainable energy: http://www.withouthotair.com
I'd argue it's stronger the other way around: people in political arguments falsely insert (crap) data to intimidate other people and polish their own egos.
Rereading that section, and the code, I slowly decipher it... they are presenting histograms... I see labels "cost_benefit", so presumably that's a ratio... oh wait in the next paragraph he writes "cost - benefits" so I guess it's a scalar... but I still don't see why the side-by-side histograms are presented with varying-width buckets... oh, because he fixed the Y-axis, which constrains the bucket widths....
The problem, as lotyrin implies, isn't that the "lines weren't squiggly enough". It's that it's a graph that's hard to read and understand what it's trying to say, due to a total lack of labels or context.
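The fix is cheap, too. Here's a rough sketch (with made-up data and labels, since I don't have his simulation output) of how the same kind of side-by-side histograms could be made readable: share the bins so the bucket widths match, and label the axes.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical cost/benefit samples for two policies (stand-ins only).
cost_benefit_a = rng.normal(1.0, 0.3, 1000)
cost_benefit_b = rng.normal(1.2, 0.5, 1000)

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
bins = np.linspace(-0.5, 3.0, 40)  # shared bins => comparable bucket widths
for ax, data, name in zip(axes, [cost_benefit_a, cost_benefit_b], ["A", "B"]):
    ax.hist(data, bins=bins)
    ax.set_title(f"Policy {name}")
    ax.set_xlabel("cost / benefit")
axes[0].set_ylabel("simulated runs")
fig.savefig("histograms.png")
```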
(I also take issue with a lot of his assumptions and even his definition of Basic Income, but if we take in good faith his claim that his Monte Carlo article wasn't about economics and public policy but about python and using the scipy library, those are not necessarily complaints about the article.)
It is remarkable that someone would stick to such an obviously broken way of modeling the costs and benefits of transfer-payment-like policies (or of policies which have large transfer-payment-like parts, like Basic Job). He even goes out of his way to call attention to his continued insistence on this method months after the fact.
From this comment it sounds along the lines of classic "man marries his maid" macro critiques.
The only thing I strongly advocate is actually thinking things through numerically rather than tribally. If you have some alternate accounting scheme I'd love to hear it. The only thing I object strongly to is the anti-rationalist "accounting is impossible, yay/nay basic income" types.
> From this comment it sounds along the lines of classic "man marries his maid" macro critiques.
I'm actually not very well versed in this area, so I had to look this up. You're correct, the basic mechanism of the critique is identical.
There have been a lot of scathing critiques of your position, so I'll try not to repeat them, especially since I basically agree.
I'll put more thought into this, but coming up with a numerical justification for favoring one policy over another, without real-world data, seems really hard. If you take out the transfer-payment-like portion of both policies (nearly all of Basic Income and most of Basic Job), you can see that Basic Job has purchased something (jobs that need doing, but wouldn't pay well) and Basic Income has not. At this point, Basic Job has done pretty well.
The disincentive to work when receiving a stipend that is sufficient to live on is definitely real. However, it's a bit hard to address the benefits associated with increased physical mobility and entrepreneurship under BI. Apparently some studies observe a decrease in economic activity upon the introduction of BI and others observe an increase. (Interestingly, some of the situations described are shades of "man marries his maid", like the mothers who stop working to raise their children.)
The important thing to me is that I don't think there's much use in comparing the transfer-payment-like parts of the programs. We should compare the outcomes, because the fact that one program taxes and pays a stipend to everyone who already has a job and the other does not isn't a huge difference on net. I was a bit too snarky in the past, but this is the portion of the policy that is NOPTransferPayment-like. I'll go a bit further and claim that programs which arbitrarily redistribute income don't have a big economic effect; Social Security for instance does not greatly increase or greatly decrease the level of economic activity (assuming that it just pipes a fixed amount of income from young people to old people, which it doesn't, but anyway...).
* ahem *
Now that I've gotten myself to write this all down, I realize that my objection is actually compatible with your model, and I've just made the error of thinking otherwise because the dominant factors in your analysis were the ones I wish to disregard. I have installed the required packages, made some tweaks, and run your simulation. If we do not count money taxed and handed right back to citizens as a cost, the difference is small enough that reasonable people could disagree about which program to prefer:
This quick and dirty run is biased in favor of BI - we've accounted for the disincentive to work at all due to BI, but not for whatever disincentive to economic activity is introduced by setting taxes higher than under BJ. This wasn't a problem in the original analysis because it regarded the whole taxed amount as a cost. This factor should be some fraction of the difference between the amounts of tax under the two programs, but it is not clear what fraction. In fact, I've found a source that seems to indicate it should be a fraction greater than 1, in which case my whole objection could just make BJ win by an even larger margin.
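To make the "don't count round-tripped taxes" accounting concrete, here's a toy Monte Carlo sketch in the same spirit. Every number in it (population, stipend, productivity, quit-rate range, overhead rates) is an assumption I made up for illustration; it is not the original author's simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 100_000          # toy population of working-age adults (assumed)
STIPEND = 10_000     # assumed annual stipend, dollars
AVG_OUTPUT = 40_000  # assumed average annual productivity, dollars

def net_cost_bi(runs=2000):
    """Basic Income: count lost output + overhead, NOT the round-tripped tax."""
    quit_rate = rng.uniform(0.00, 0.10, size=runs)  # assumed disincentive range
    lost_output = quit_rate * N * AVG_OUTPUT
    admin = 0.01 * STIPEND * N                      # assumed 1% overhead
    return lost_output + admin

def net_cost_bj(runs=2000):
    """Basic Job: higher overhead (job administration), but some
    otherwise-unpaid work gets done, which offsets part of the cost."""
    admin = 0.10 * STIPEND * N                      # assumed 10% overhead
    work_value = rng.uniform(0.0, 0.5, size=runs) * STIPEND * N  # assumed
    return admin - work_value

bi = net_cost_bi()
bj = net_cost_bj()
print(f"BI mean net cost: {bi.mean():,.0f}")
print(f"BJ mean net cost: {bj.mean():,.0f}")
```

Under these (cherry-pickable) assumptions the sign and size of the gap move around a lot as you vary the quit rate and the value of the make-work, which is exactly the "reasonable people could disagree" region.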
Perhaps I should not say this, because I have not modeled it... Here we go anyway: It seems very likely to me that both of these systems are better than our current system of somewhat excessive overhead and near-100% marginal tax rates for the non-working poor.
That's not really the same thing. The point of "man marries his maid" is that after the marriage GDP goes down in spite of an increase in economic activity - the maid continues cleaning his house, but also cleans his pipes.
Women staying home for family work is a real change in economic activity. Domestic labor may not be properly accounted for, but it's not a no-op as with the man marrying his maid.
In any case, merely tracking production and ignoring the paper cost of transfers is a fairly valid way of looking at things. I don't fully agree with it, because I do consider it a cost when a productive worker loses some of his productive capacity, even if another person gains an equivalent amount. So I certainly do want some penalty on the actual amount transferred - but you are certainly right that this should be sum(tax(i) - transfers_to(i)), not sum(tax(i)) (where i represents a given person).
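The difference between the two accounting schemes is easy to state in code. A minimal sketch with hypothetical per-person numbers:

```python
# Hypothetical per-person taxes paid and transfers received, in dollars.
tax = {"alice": 12_000, "bob": 8_000, "carol": 0}
transfers_to = {"alice": 10_000, "bob": 10_000, "carol": 10_000}

# Naive accounting: everything taxed counts as a cost.
gross_cost = sum(tax.values())

# Net accounting: only money actually moved between people counts,
# i.e. sum(tax(i) - transfers_to(i)) over every person i.
net_cost = sum(tax[i] - transfers_to[i] for i in tax)

print(gross_cost)  # 20000
print(net_cost)    # -10000: the program pays out more than it taxes in
```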
I'm definitely feeling like an idiot for ignoring one of the most interesting comments on the post. You've definitely given me something to think about.
It's hard to take many interesting ideas out of this implication of bad faith on Chris's part. Wouldn't it be better to talk about the topic of the post?
It's also very nice to see an adjustment on that proposal with some new information. It wouldn't have occurred to me that making a chart look hand-drawn would be effective in implying less precision, but now I have a new idea I can try out for graphics demonstrating rough ideas.
I think Chris's tone can be blunt and read as unaccommodating, which may be distracting from his points, but I think I prefer it to the overly casual tone many adopt when stating an idea. That tone isn't great for customer service, but it's very helpful when trying to understand an idea and decide whether or not to accept it.
Also, it is important to base comments in fact. He accused people on both sides of innumeracy: "Like most political arguments, the discussion of a Basic Income borders on innumeracy. So I’m going to take the opportunity to launch into a far more interesting mathematical tangent, and illustrate how to use Monte Carlo simulation to understand uncertainty and make business/policy decisions. I promise that this post will be far more interesting to python geeks than to policy wonks. I’m just picking Basic Income as a topic to discuss since it showed up on hacker news yesterday and it’s a topic where I see basically no numbers whatsoever."
(cf the recent renders of "London's Garden Bridge" which were gently mocked for their "optimistic" view)
Think of all 3D computer games before, like, 2010. Part of me wishes they had made this realization; the other part reminds me that we wouldn't be where we are today without a decade of crappy-looking games.
It drives at the same point as the original author, just from a different angle.
Context is also important. Sometimes, your audience will want to know exact numbers and imprecise plots will look bad. Imprecise plots should only be used to explicitly show a trend, which your audience should be expecting. If this is the context, they'll understand and squiggly lines are unnecessary.
I was, in fact, told later that the xkcd style graph was extremely helpful in conveying that it was not a graph of real data yet.
And if the numbers still feel important, don't put more than one or two of them. Knowing that the axis goes from 0 to 10k is probably enough information for something deliberately imprecise.
I've been on the matplotlib site a few times before and I noticed that the logo was weird. Turns out the matplotlib website has a mode where all examples are rendered in xkcd style, all text is converted to Comic Sans, and all sorts of other funny things happen. You just have to add the xkcd keyword to the URL.
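The same style is available in matplotlib itself via `plt.xkcd()`, which works as a context manager. A minimal example:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
with plt.xkcd():  # hand-drawn style: wobbly lines, comic-style labels
    fig, ax = plt.subplots()
    ax.plot(x, np.sin(x))
    ax.set_title("Deliberately imprecise")
    fig.savefig("sketchy.png")
```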
Here's another example of a documentation page completely unrelated to xkcd.
Original Style: http://matplotlib.org/examples/lines_bars_and_markers/line_d...
xkcd style: http://matplotlib.org/xkcd/examples/lines_bars_and_markers/l...
Compare with an actual xkcd graph: https://xkcd.com/1306/
And I think you're missing the point of the post...
1.) Conveying the imprecision of a mathematical model through stylized graphs.
2.) Choosing a firestorm of a political topic to demonstrate mathematical modeling.
2. This doesn't work for bar graphs, lengths, etc. The uncertainty is often going to be symmetric around the point estimate, but your opacity forces an asymmetric representation of the uncertainty.
3. Box plots are great. If you want more detail than that, a thin vertical histogram or density is going to convey much more information than shading.
For example, the Bank of England occasionally uses it in plots of economic forecasts, where time is on the x-axis and things like GDP might be on the y-axis, eg. here http://www.bankofengland.co.uk/publications/Documents/inflat.... The fading out of intensity over time is a great visual reminder that predicting the future is hard.
It is much better when your chart is supposed to be targeted at the general public, because the "smearing out" of the data is very hard to misunderstand, unlike confidence intervals.
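That fading-out effect can be approximated in matplotlib by stacking translucent bands between widening quantile pairs; the overlap darkens the middle, so intensity fades as uncertainty grows. This is a sketch with simulated random-walk "forecasts", not the Bank of England's actual method:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(12)  # months into the forecast (toy horizon)
# 500 simulated forecast paths: cumulative sums of random steps.
paths = np.cumsum(rng.normal(0.2, 1.0, size=(500, t.size)), axis=1)

fig, ax = plt.subplots()
# Translucent bands between quantile pairs; inner bands overlap all
# outer ones, so the center is darkest and the fringes fade out.
for lo, hi in [(25, 75), (10, 90), (5, 95)]:
    ax.fill_between(t, np.percentile(paths, lo, axis=0),
                    np.percentile(paths, hi, axis=0),
                    color="C3", alpha=0.25)
ax.plot(t, np.median(paths, axis=0), color="C3")
ax.set_xlabel("months ahead")
fig.savefig("fanchart.png")
```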
Also this comic is somewhat related too: https://xkcd.com/1133/
If you can't figure out a way to relate your ideas in those 10 hundred words, you aren't thinking clearly enough. To help, here is the list of words: http://splasho.com/upgoer5/phpspellcheck/dictionaries/1000.d...
and here is a checker for you to proof against: http://splasho.com/upgoer5/
An interesting example is science and scientific methods. Something starts as a theory (eating fat makes you fat) and accumulates evidence for and against it. Evidence "against" the theory doesn't necessarily outright disprove it. It just weakens the claim in different ways. It can lower the probability that it is true (maybe it just seems like eating fat makes you fat, but it really doesn't). Sometimes the magnitude is smaller (eating fat makes you a little fatter, but not much). Sometimes it's kind of true, but the whole story is more complicated and the statement in the theory needs to be made more precise (increasing fat consumption makes you fatter if everything else stays the same, but it also makes you feel full, which makes you eat less of other things, so the effect only exists in controlled conditions).
The above example needs to be considered from multiple dimensions. People who haven't been thinking about this as a multidimensional thing have a hard time evaluating your statement in this way from a cold start. Experts used to thinking the way they are explaining things conclude that the public is stupid or uninterested. Even if they humbly agree that the public are just not experts, they still conclude the same thing: the public wants everything boiled down to a simple statement that hides the texture.
X Causes Cancer
This XKCD-style graph subtly conveys a little of that texture. I think this is a good thing. I really like XKCD. It's very good art, for a very interesting definition of the word art.
So authors: Another plus is that you can include a humorous caption for your graphs. I'm sure there's always a funny angle that you wish you could include but that wouldn't be an official part of the paper ...
They show that users can judge the degree of "sketchiness" on an ordinal scale, but that the judgement varies widely between individuals.
and a webapp version with slightly more advanced XKCD tweaks here:
Everything starts with matplotlib's xkcd mode, but has been tweaked to produce plots based on the output of the Google Ngram Viewer.
Often getting real error bars is much harder than getting the original quantity.