Yes, modularity is important. However, in some cases, this philosophy has resulted in the "tangled mess held together by duct tape" kind of systems architecture that no one dares to touch for fear of breaking things.
I think Unix philosophy is struggling with a fundamental dilemma:
On one hand, creating systems from programs written by different people requires stronger formal guarantees in order to make interfaces more reliable, stronger guarantees than interfaces within one large program written by one person or a small team would require.
On the other hand, creating systems from programs written by different people requires more flexible interfaces that can deal with versioning, backward and forward compatibility, etc., something that is extremely difficult to do across programming languages without heaping on massive complexity (CORBA, WS-deathstar, ...)
I think HTTP has shown that it can be done. But HTTP is also quite heavyweight. It doesn't exactly favor very small programs; handling HTTP error codes is not something you'd want to do on every other function call.
In any event, I think Unix philosophy is a good place to start but needs a refresh in light of a couple of decades worth of experience.
Pretty much, just as Booch's philosophy has given us tangled messes of class hierarchies with cross-references pointing every which way. And just as the Gang of Four gave us AbstractedTangleMessFactorySingleton.
The distinction with the Unix-style pipe mess is that it exists and works an order of magnitude faster than other examples of bad code.
When the reality is that you can't take an opinion as the whole thought. But people do. Almost always. Regardless of the subject, people will think you're a fanatic for your position, no matter how temporary or experimental it is, unless you pay your dues in apologies and waffle terms.
Believe it or not, there is a significant contingent of crazy, death-threat-making jerks fixated on "The Unix Philosophy". As in, an experimental Unixy project that made the rounds (including on HN) some years back and got a fair bit of attention. The project had a vision with scale similar to that of Atom, which is soaring on the front page today. But unlike Atom, this project was quietly dropped from public view because the author received a full blast of nutball hate from misguided Defenders of the Faith/Purity/Whatever.
I've been using, studying, and working with *nix systems basically forever (longer than almost everyone reading this). But that experience made me rather allergic to ever using or hearing "The Unix Philosophy". I realized that it's heavily overused as an argument-stopper: "that's not the unix way (so <EOF> off)." I've since witnessed similar language used in many projects, taken in whole context, as a dogmatic excuse rather than an actual source of architectural wisdom.
 Apologies for being deliberately vague. It's not my place to risk riling up the hate monsters vs. the creator of that project again. I'm still absolutely disgusted that Gamergate-level human insanity was leveled at someone putting forth what was, IMO, one of the most interesting Unix tooling experiments in the past decade.
When in Rome, code as the Romans do.
I think we have actually made progress with HTTP and all the REST. But I feel that the great failure of distributed object systems in the 1990s and 2000s has left a great void when it comes to new thinking about more fine grained sort of interfaces between programs.
Functional programming has influenced so many things but hasn't really arrived in the systems corner of the world yet. Also, looking at the sort of query capabilities of online streaming databases and complex event processing systems I get the feeling that there may be a lot of potential in combining that with Unix pipes and text streams.
Publish-subscribe ala zeromq is also something that could be a core part of modern Unix.
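To make the shape of that concrete, here is a minimal in-process sketch of topic-based publish-subscribe in Python (zeromq itself is left out; the `Bus` class and topic names are invented for illustration):

```python
from collections import defaultdict

class Bus:
    """Toy topic-based publish-subscribe bus (in-process stand-in for zeromq)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        # zeromq SUB sockets filter by topic prefix; a dict of handlers
        # keyed by exact topic is enough to show the shape of the pattern.
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        # Every subscriber to this topic sees the message; publishers
        # never know who, if anyone, is listening.
        for handler in self._subs[topic]:
            handler(message)

bus = Bus()
seen = []
bus.subscribe("tape.events", seen.append)
bus.publish("tape.events", "loaded tape 1")
bus.publish("other.topic", "nobody is listening here")
# seen is now ["loaded tape 1"]
```

The decoupling is the point: small programs could announce events on topics instead of being wired together pairwise with pipes.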
Sorry about the half-baked mess I'm dumping on you here :)
The Gang of Four didn't give us that. An army of OOP novices who were overwhelmed by all of the new choices and only skimmed Design Patterns gave us that.
If you actually read the book (which surprisingly few people have, given the number of people who have strong opinions about it), you'll see the Gang of Four are actually quite clear on the limitations of, and ways to abuse, the patterns.
The GUI-oriented '90s vibe means zero examples that anybody would ever bother learning. The bizarre stuff nobody ever uses (Flyweight? Bridge?). Singleton -- 'nuff said.
You're left with what, maybe three usable patterns? Factory, which favors opaque, magical return-type polymorphism. Visitor, which is a solution in search of a problem, and also gets confused with trivial tree-walking operations. Interface, which rocks, and became part of Java, but really just shows how much inheritance sucks.
The resulting confusion is the proud sponsor of an AbstractStrategyFactoryBuilderDelegate near you.
It was written in the 90's. GUIs were what application developers did back then. Web apps didn't exist yet.
> The bizarre stuff nobody ever uses (Flyweight? Bridge?).
If you've used enums in Java or instanced rendering in a game, you're effectively using Flyweight.
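For the unconvinced, a minimal Flyweight sketch in Python (the `Glyph` example is invented; the point is simply that identical immutable state gets shared rather than duplicated):

```python
class Glyph:
    """Flyweight: one shared object per distinct character."""
    _cache = {}

    def __new__(cls, char):
        # Reuse the existing instance for this character if there is one;
        # otherwise create it once and remember it.
        if char not in cls._cache:
            obj = super().__new__(cls)
            obj.char = char
            cls._cache[char] = obj
        return cls._cache[char]

# A document with a million 'a's holds a million references to one
# Glyph object, not a million Glyph objects.
assert Glyph("a") is Glyph("a")
assert Glyph("a") is not Glyph("b")
```

Java enum constants and instanced rendering both amount to the same trick: many logical occurrences, one shared backing object.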
> Singleton -- 'nuff said.
They caution really hard against overuse of Singleton in the book, but all of those C programmers being dragged into OOP didn't know where else to stuff all their global state so they went to town on it.
> Factory, which favors opaque, magical return-type polymorphism.
If your language doesn't have first-class classes, Factory is really handy. If it does have first-class classes, well that means you basically have the Factory pattern in your language. :)
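A sketch of that point, assuming Python-style first-class classes (the backend names are invented): when a class is an ordinary value, a "factory" collapses into a lookup.

```python
# Classes are values, so a Factory is just a dict lookup.
class Tape: ...
class Disk: ...

BACKENDS = {"tape": Tape, "disk": Disk}

def make_backend(name):
    # Equivalent to a Factory Method, with no extra class hierarchy:
    # look the class up, then call it like any other callable.
    return BACKENDS[name]()

assert isinstance(make_backend("tape"), Tape)
```

In a language without first-class classes you'd have to write an `AbstractBackendFactory` interface plus one concrete factory per product to get the same indirection.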
> Visitor, which is a solution in search of a problem
The day you write a compiler in an OOP language is the day you realize how unbelievably, amazingly, incredibly useful Visitor is.
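A compressed sketch of why, in Python (the node types and the evaluation pass are invented for illustration): each compiler pass becomes one visitor class, instead of one more method smeared across every AST node.

```python
class Num:
    def __init__(self, value):
        self.value = value
    def accept(self, visitor):
        return visitor.visit_num(self)

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def accept(self, visitor):
        return visitor.visit_add(self)

class Evaluator:
    """One compiler pass = one visitor; the AST classes never change."""
    def visit_num(self, node):
        return node.value
    def visit_add(self, node):
        return node.left.accept(self) + node.right.accept(self)

tree = Add(Num(1), Add(Num(2), Num(3)))
assert tree.accept(Evaluator()) == 6
```

Adding a type checker or a pretty-printer is just another visitor class; without the pattern, each new pass means editing every node type.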
The main issue is that most of the common programming languages in use, notably Java and C++, have very poor support for composition, thus forcing developers to extend things.
It cannot be done. As an example see Categories in Objective-C.
Java's support for composition has improved somewhat with default methods on interfaces in Java 8, but it's still poor compared to the above languages. This leads to the "abundance of classes" anti-pattern you see in Java projects, as you are forced to extend to introduce new functionality.
Hey, not funny. I actually googled that pattern just to make sure it was a joke.
The second part is a "frontend", a command-line client that does all the unixy stuff with text streams etc; but is just another RPC client to the backend.
This provides flexibility: for quick duct-tape situations you use the front end and pipe and filter to your heart's content. Then when stuff needs to get serious, you can bypass the command-line front end and connect directly over RPC using a strict message format which can handle a couple (doesn't have to be many) of concurrent requests.
Of course this is early days yet, but I think it might have legs in the text-streams-versus-strict-message-format debate, for very little additional work.
The difference is that in what I've described, the ABI is the common interface and an RPC server would be just another consumer of the library.
Well there was an implementation of something like that, see https://en.wikipedia.org/wiki/DCE/RPC
or DCOM, https://en.wikipedia.org/wiki/Distributed_Component_Object_M...
Then there were DCOP, D-Bus, and a bunch of others, and now nobody remembers what the goal was.
tape_robot start -tapeid=1
tape_robot --protobuf 'msg:start;tapeid:1'
tape_robot --protobuf_file filename
proto_serve tape_robot 127.0.0.1:8080
This (to me) is closer to the unix idea - one server program, one actual processing app. Why should my tape_robot program actually have a server embedded in it?
If in the future, I want to add authentication, I only have to add it (once) to the proto_serve program, rather than to every single application that is a 'server', for instance.
It would also allow a version which 'pre-forked' the processes, and left them waiting for the data on the socket/filehandle, or whatever.
You could do a bunch of this already using nc or similar, I suspect.
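A rough sketch of what a generic `proto_serve` wrapper could look like in Python (the `tape_robot` command line is a placeholder from the examples above, and real concerns like framing, auth, and error handling are omitted): one connection means one run of the wrapped program, request on stdin, reply from stdout.

```python
import socketserver
import subprocess

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One request line per connection: feed it to the wrapped
        # program's stdin, then stream the program's stdout back.
        request = self.rfile.readline()
        result = subprocess.run(self.server.argv, input=request,
                                capture_output=True)
        self.wfile.write(result.stdout)

class ProtoServe(socketserver.ThreadingTCPServer):
    """Generic server front end: the wrapped command stays a plain program
    with no networking code of its own."""
    allow_reuse_address = True

    def __init__(self, address, argv):
        self.argv = argv
        super().__init__(address, CommandHandler)

# Hypothetical usage, mirroring the examples above:
#   ProtoServe(("127.0.0.1", 8080),
#              ["tape_robot", "--protobuf_file", "-"]).serve_forever()
```

Authentication, pre-forking, logging, etc. would then live once in `proto_serve` rather than in every application that wants to be a server.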
My reinterpretation was to say how about factoring out the server part, and leave the command only understanding either flags or protobuf commands - which could be delivered to the command either as an arg, or as a file given to it by an arg.
You could go a stage further by having all commands only accept protobuf (or similar), and distribute a spec/human-mapping to go with it. Then your shell would parse the args that you give to the command using the spec, and actually call the command using protobuf.
This would allow very awesome shell completion / highlighting / etc. It should also allow much simpler endpoints/commands, as they'd hardly have to do any type checking / re-parsing, since input would arrive as protobuf.
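A toy sketch of that idea (JSON stands in for protobuf here, and the spec format, flag names, and `args_to_message` are all invented): the shell parses flags against a per-command spec and hands the command a single typed message.

```python
import json

# Hypothetical spec a command would ship alongside its binary,
# mapping each flag name to its type.
SPEC = {"tapeid": int, "verify": bool}

def args_to_message(argv, spec):
    """Parse shell-style flags into one typed message per the spec."""
    msg = {}
    for arg in argv:
        key, _, raw = arg.lstrip("-").partition("=")
        if spec[key] is bool:
            # A bare flag, or an explicit true value, means True.
            msg[key] = raw in ("", "true", "1")
        else:
            msg[key] = spec[key](raw)
    return json.dumps(msg)

# The shell would deliver this string to the command instead of raw argv.
message = args_to_message(["-tapeid=1", "-verify"], SPEC)
assert json.loads(message) == {"tapeid": 1, "verify": True}
```

Because the types come from the spec, completion and validation happen in the shell, and the command itself never re-parses strings.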
I actually kind of like that idea. Hmm...
But nowadays lots of things only come for the GUI.
In this case the 'library' is often the base programming language itself, as the shipped parts are entirely glue and configuration.
Personally, I enjoy the *nix philosophy of using small programs that do one thing well. Also, I prefer text to proprietary binary formats that _I_ can't do anything with.
As a *nix and Windows developer, I've never understood the mentality of wanting to build huge monolithic applications. But, I do think the tide is turning. I think more and more developers are allowing themselves to be influenced by ideas outside of what they are used to and that's a great thing.
Personally, I think the LLVM project shows what the Right Thing is here: package most of the real functionality into libraries while putting the user interfaces into runnable programs. Everyone who wants to communicate with the functionality can just link the library.
And if we want to do it statefully, well, handling global state across a large system is basically the unsolved problem of programming.
The one downside I found when trying to use it was that the 3rd-party C# Protobuf libraries didn't yet support it.
It's always going to be a good place to start because it embodies a realistic, documented, proven and actionable compromise between engineering purity (the "formal guarantees" you describe) and just shipping something.
For a collection of related thoughts, not limited to traditional unix philosophy but still unix philosophy-centric, check out my ansified fortune clone @ https://github.com/globalcitizen/taoup
I think it's not an accident or historical vestige that Unix philosophy uses the word "program". It has technical as well as social implications that have barely changed in the past 50 years.
Functions and programs are certainly not an either-or kind of thing but making them one and the same would require a very different kind of operating system than we have ever seen. Say, emacs OS :)
It's also free online:
An example: "Reading it, it looks like a total hack job by a poor programmer."
A better example might be this post, where ESR is implied to be racist when his actual post is a reflective one about correcting irrational racist reactions: https://news.ycombinator.com/item?id=6884767
I don't understand that way of thinking since no human being will ever 100% agree with any other on everything.
Normally this would just be a quirk, but the fact is that a lot of technical people here on HN and other places would happily throw the baby out with the bath water just because they disagree with somebody's politics.
It's stupid and unprofessional. With so many companies focusing on such technically boring problems, image management is perhaps legitimately more of a business concern than having the best tech available.
So, unfortunately, we have people with dissenting opinions but excellent work slandered or ostracized...even if their opinions are actually worth considering. Then again, that just means that those of us who are more genuinely tolerant will have an edge during hiring. :)
Also, on ESR in particular:
You have to understand that, rightly or wrongly, his worldview is long-term Culture War. Literally anything which prevents The Right People from breeding faster (homosexuality) or defending themselves (attacks on the 2nd amendment) or arguing (kafkatraps) is suspect. Because he's playing for keeps, he'll do whatever it takes (including, perhaps, being less than perfectly equal in presentations on things) to further his agenda. That's just how it is, and it doesn't reflect on his technical contributions or aptitude at all.
Hell, the bitch of it is, he's even arguably correct on some of his cultural points, if he himself (much less his detractors) didn't spend so much time sounding so disagreeable and grumpy and wingnutty.
Anyways, it's just a sign of the times, as I said. It seems that most people are unable to handle a mental model which accounts for biased or unreliable narrators while still allowing the work of those narrators to be taken advantage of.
On the other hand, someone's political opinions change what they're likely to write, so if you don't know whether it's "true" or not, knowing their political opinions can justifiably alter your beliefs about the text.
I think people tend to overweight the second consideration, and it seems particularly irrelevant in this case.
(Scare quotes because it's rarely as simple as true-or-false, e.g. "literally true but horribly misleading".)
Many programmers / writers / programmer-writers may well have equally strong political views in one direction or another, with which others may strongly agree or disagree, but they just don't say much about them in public.
Research ethics forbids you from doing certain things when performing research, but it doesn't say anything about the political opinions of researchers.
The exclusionary rule says that if evidence is obtained by breaking the rules, it can't be used in court. But again, it doesn't say anything about the political opinions of the person obtaining the evidence.
So the question is, should one really refuse to read/recommend ESR's book because of his opinions? Frankly, this smells to me like Index Librorum Prohibitorum all over again.
Edit: I really don't have any idea what they are...
We are so worried about Evil Government dictating what we can and cannot think that we haven't noticed the current organic trend to prosecute every other person for thoughtcrimes. It's not the jackboot that keeps us on the ground, it's social media, and the public outrage you get when you disagree with whatever's the most popular opinion on a topic this week.
That actually directly calls into question engineering validity, as solid engineering is solidly based in reality and a realistic interpretation of facts. Also the ability to discard frames which no longer fit.
My own work and research of the past several years puts a very high significance on both frames (or more generally, models), and on the psychology of interacting with those, with strong emphasis on denial in various forms.
ESR's political views call much of his work into question. I say that as someone who was strongly influenced by much of what he said, and enjoyed a fair bit of it. He's become a tremendous disappointment.
TAOUP has its merits. It's rather like recommending Ted Kaczynski's manifesto as a social-technological critique. It's got some really solid points (see what Bill Joy had to say on it: http://archive.wired.com/wired/archive/8.04/joy.html). But damned if the rest of the author's views and actions don't muddy the waters a tad.
"Engineering validity"? This is just a dressed up ad hominem. If some technical argument ESR has made is inconsistent or doesn't match up with empirical evidence, criticize away, but his positions on what exactly the Second Amendment means or what the best role of government is can't possibly inform that criticism. It could, perhaps, explain why he's made an error, but it can't identify the error for us.
ESR's political views call much of his work into question.
Which questions about what work? If you're going to cast aspersions like this, you'd probably best be specific.
> This is just a dressed up ad hominem
Ad hominem would be "people named Eric cannot be trusted".
This is calling into question ESR's general credibility, based on his record. That's a character judgement.
I'm also not saying ESR is wrong in all things -- a consistently wrong indicator is useful (read the opposite of what it says). An inconsistently wrong one is maddening: you've got to pay close attention to what it's doing and determine the pattern to its errors. That's the taxing part.
There's a somewhat related comment I'd seen recently which I've found useful:
Nota bene: a fallacious ad hominem only occurs when an accusation against the person serves as a premise to the conclusion. An attack upon that person as a further conclusion isn't fallacious and may, in fact, be morally mandatory.
That's not quite what I'm doing here: I'm leveraging the attack on credibility to discount further statements from ESR. But for numerous reasons of psychology and general reputation, if not a strict formal logic sense, there's a strong rationale to this.
Or: the narrator has been shown unreliable.
In the modern treatment, credibility has two key components, trustworthiness and expertise, both of which have objective and subjective aspects. Trustworthiness is based more on subjective factors, but can include objective measurements such as established reliability.
Since you actually wrote this sentence 9 hours ago, it is safe to infer that you really don't know anything about logical fallacies or what you're talking about in general, since you can't possibly have learned all you need to know about them in 9 hours. Given this level of confidence in something that is both wrong and easily checked, why should we trust any of your claims at all?
Or... should we trust you? But not ESR? Would that not be hypocrisy?
"An Ad Hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument."
"Ad hominem is Latin for "to the man." The ad hominem fallacy occurs when one asserts that somebody's claim is wrong because of something about the person making the claim. The ad hominem fallacy is often confused with the legitimate provision of evidence that a person is not to be trusted. Calling into question the reliability of a witness is relevant when the issue is whether to trust the witness. It is irrelevant, however, to call into question the reliability or morality or anything else about a person when the issue is whether that person's reasons for making a claim are good enough reasons to support the claim."
"It is important to note that the label “ad hominem” is ambiguous, and that not every kind of ad hominem argument is fallacious. In one sense, an ad hominem argument is an argument in which you offer premises that you the arguer don’t accept, but which you know the listener does accept, in order to show that his position is incoherent (as in, for example, the Euthyphro dilemma). There is nothing wrong with this type of argument ad hominem."
"An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators' salaries should be increased, one could respond:
"Of course he would say that. He's a senator.
"This wouldn't refute the author's argument, but it may at least be relevant to the case."
There are also those who tend to know their limits and note when they're out of their depth or area(s) of expertise. So no, that's not a universal guide either.
His making up shit (or buying into others' made-up shit) to justify them does, as does his ignoring contradictory evidence and record.
Which questions about what work?
The problem is one of an unreliable narrator. If you cannot trust someone's judgement, and they spew crap, repeatedly, then the odds that they're blowing smoke elsewhere increase.
It's the same reason that lawyers seek to impugn witnesses or call into question credibility. Or, to pick another hobby horse of mine, there are news and media organizations which spew crap. Fox News gets a lot of much-deserved scorn for this, but they're not the only one. Bullshit in media (in the most general meaning of the word: any information delivery system) is something I've been paying a lot of attention to, and I'm rather sensitive to it.
A recent case in point involves a 123-year-old quote I'd seen attributed to J. P. Morgan, the Gilded Age banker. It struck me as curious, and I dug into it. My conclusion: it's a hoax.
The item in question is referred to as the Banker's Manifesto of 1892, or as the Wall Street Manifesto. Almost certainly the fabrication of one Thomas Westlake Gilruth, lawyer, real estate agent, community activist, and some-time speaker and writer for People's Party causes in the 1890s and 1900s. (Pardon the digression: there is a point; it happens to be both fresh in my mind and sufficiently detached from contemporary affairs to be a fair foil.)
Among the evidence I turned up were several contemporaneous newspapermen who'd drawn the same conclusion. Mind that this was a time of highly partisan press, but these were editors of People's Party papers in various locales.
From The advocate and Topeka tribune. (Topeka, Kan.), 7 & 14 Sept. 1892:
The Great West and one or two other exchanges reproduce the Chicago Daily Press fake purporting to be a Wall street circular. The thing originated in the fertile brain of F. W. Gilmore [sic: should be T. W. Gilruth], who held a position for a time at the Press. He has been challenged time and again to produce the original if it is genuine, and has failed to do so. The thing is a fraud and so is its author, and neither of them is worthy of the confidence of the people.
The following week's issue corrected the typo, with an emphasis on why naming and shaming mattered:
We desire to make this correction lest there be somebody named Gilmore who might object to the charge, and because the fraud should be placed where it belongs. Gilruth is a snide, and if anyone who knows him has not yet found it out, he is liable to do so to his sorrow.
From the Barbour County Index, July 6, 1892, p. 1:
If the genuineness of this dispatch cannot be established, it should be taken in at once. If reform writers put it alongside the Huscard and Buell circulars and various other documents of like character, the public faith in the genuineness of all may be shaken. We cannot afford to father any fakes.
(My own analysis turned up other internal inconsistencies within the documents as well, detailed at the reddit link above.)
Much like those late 19th century editors, a heuristic I've increasingly taken to applying is looking at which sources (publications, companies, politicians, authors, online commentators, monitoring systems) do and don't provide reliable information. There's also a distinction I draw between occasionally being wrong (errors happen) and systematic bias. As the Tribune and Index called out, Gilruth was being systematically misleading. And apparently intentionally.
My issue with ESR isn't that I know he's bullshitting on any one point or another; it's that I don't know when he is, and, as with other unreliable data streams, sussing out the truth is a lot of work for low reward. He's like an unreliable gauge or monitoring system that sends off false alerts when it shouldn't, stays silent when it should alert, and highlights the wrong areas of trouble when it does manage to go off at the right time. You simply start to lose your faith in it.
ESR's problem is he believes his own bullshit.
I've decided I don't need that problem.
I have to confess that I don't have specific instances at hand, for two reasons. One is that much of his more technical writing on programming is outside my own area of expertise. The other is that, given his tendencies, I largely ignore him.
My point, however, wasn't where he is specifically mistaken, but why the traits he exhibits in his rantings on other topics do have a bearing on his engineering judgement.
I do hope that's clear now.
Now, his attitudes on HIV denialism and IQ and race don't deal with engineering, but they're pretty objectionable. And, you know, we can get good engineering writing and thinking from a lot of places. ESR doesn't have a monopoly on writing about operating systems. I'd rather promote the writers who don't carry around a ton of wrong/distasteful baggage.
Presumably this question, whatever it is, can be answered by looking at his work. Do you think ESR's technical work and technical writings fail to stand up to scrutiny?
I'm not enough of a programmer to judge his programming texts, though I am enough of a sysadmin to find his Unixy sysadminish stuff generally valid.
I've found CatB itself aging poorly and question a number of the assumptions behind it, particularly as concerns anthropology. It seems shaky. Though I think the general principles behind Free Software and the open source model have their merits. Just, possibly, not quite those ESR describes.
So you tell me: how would we know if his technical work stood up to scrutiny? If I have experience that agrees, does that mean it does? What if my experience contradicts it?
Fundamentally, calling things into question has little value until we generate an answer to the question it was called into. Considering it's relatively easy to judge him on the technical work, why not?
I say: good! You should never take an author's work at face value. Every bit of nonfiction you read should be read critically. Nobody's judgment is infallible – not even Nobel prizewinners.
If ESR would pose credible arguments and facts, exhibit critical thinking facility, not stoop to denigrating his counterparts, etc., I'd find his points of view more substantive.
But he does none of that, and, rather, the opposite.
I do seek out contradicting evidence, among my mantras (and a conspicuous posted note to myself) is "seek to disprove". I've changed my mind and/or views on a number of significant points and in some cases major views over the past few years. I do that based on evidence and argument, though. It's not a casual process, and doesn't happen easily.
But being able to admit I'm wrong is a large part of it. Also: not insisting on being wrong (valuing belief consistency with time over consistency with observed reality).
Questioning everything is, however, rather exhausting. Developing heuristics for when to start digging in to apparent bullshit claims helps a lot.
ESR is free to speak his beliefs in public, and in return people are free to criticize him, not recommend his books, refuse to invite him to conferences, etc.
Freedom of speech is about prior restraint, not immunity from consequences.
Maybe this is what democratic, as opposed to totalitarian, oppression looks like. When you have to avoid discussions out of fear you'll get fired and blacklisted in the industry, this suddenly doesn't look so different from refusing the government's "truth" several decades ago.
(Of course, if it's a libertarian favourite being boycotted, the same libertarians reliably lose their shit -- cf. Brendan Eich -- but anyway.)
Believing things that have no scientific merit in favour of things that have plenty of scientific evidence would be a huge concern in an engineer.
Add his (mostly memory-holed) misogyny, racism, and homophobia, which lead him to discard the views of people in a most un-meritocratic fashion, and I think you have another concern.
What I found so far is , in which he seems to just be doing a bit of a "show me the data" thing wrt global warming (it's fairly old, I guess, in his defense). In  he's definitely saying IQ is race-related, and gender-related to a lesser extent, in my quick readings.
I quite like that he doesn't mince his words, sugar coat things or seem to take any notice of popular opinion/political correctness. Not agreeing with him, but I find that refreshing.
> And the part that, if you are a decent human being and not a racist bigot, you have been dreading: American blacks average a standard deviation lower in IQ than American whites at about 85. [...] And yes, it’s genetic; g seems to be about 85% heritable, and recent studies of effects like regression towards the mean suggest strongly that most of the heritability is DNA rather than nurturance effects.
So is the problem with him saying this that (a) it is factually false, or (b) it's an inconvenient fact that should be glossed over? He seems to be saying that it's factually true since he obviously read it in some study or other. If it is factually true, it's disingenuous to label him as a racist.
"Of course humans and chimps have a common ancestor! We have looked at the genetic code, and found that more than 95% is shared. Give up, it's over." The right will hate you for saying that.
"Of course there is inherited variation in intelligence, no matter how you define it! Otherwise evolution, in particular evolution of intelligence, could not possibly work. Give up, it's over." The left will hate you for saying that.
"Of course our moral intuitions come from game theory, not apriori reasoning!" And now everyone hates you, both the left and the right.
If you see a 15 point difference and immediately attribute it to genetics, you're making an unwarranted assumption.
> I guess everyone and every group finds certain truths uncomfortable. It's especially sad that the theory of evolution seems to make literally everyone uncomfortable.
In context, this is a claim that race and IQ is a consequence of the theory of evolution. If you don't see this, then you don't understand communication with humans, or are being disingenuous.
There are a myriad of factors that might influence that IQ score, and I haven't looked at the studies. Lack of wealth/opportunity I think is definitely a factor in the healthy development of the grey matter, as is access to good education.
Long/short, not sure. I'd be surprised if the colour of your skin objectively made a difference in IQ. Same with gender. Though, if the latter is true, I would probably use it with great exuberance on certain people I know e.g. my ex.
It's actually not possible to make a genetic argument, because genetically there are no non-trivial markers that robustly correlate with "black" or "white."
Suggesting otherwise is simple ignorant bigotry.
For comparison, the Flynn effect demonstrates you can get 20-30 points difference in the same gene pool, with the difference being social circumstances. So any difference under 30 points doesn't necessitate invoking genetics.
"One was: their skin color looks fecal. The other was: their bone structure doesn’t look human. And they’re just off-reference enough to be much more creepy than if they looked less like people, like bad CGI or shambling undead in a B movie. When I paid close enough attention, these were the three basic data under the revulsion; my hindbrain thought it was surrounded by alien shit zombies."
Not true. Something can be factually true but uninteresting or of no consequence; pushing that 'truth' forward as something that others should acknowledge betrays an agenda beyond just 'the search for truth'. (Note that this is not a judgement on ESR per se, just a comment on your specific point.)
Thus, if someone wants to claim that "black people have lower IQs because they are black", they need to dissolve the concept of race entirely and not only find much broader evidence than studies on African Americans who are, after all, something like half "white", but in fact just cut to the fucking chase and locate the relevant genes.
But of course, if you located the genes and alleles that make some ethnic groups smarter or stupider, you could invent a gene therapy that would make everyone as smart as the smartest ethnic groups, or at least understand what sort of trade-offs are involved in genetic treatments of that sort (ie: Africans often carry a gene that helps them resist malaria but can cause sickle-cell anemia if you get two copies of the recessive allele). If you located the genes and alleles, then within 10 +/- 5 years (depending on how quickly your treatment gets funded) we could eliminate all genetically-caused racial gaps in intelligence.
This, of course, would greatly displease the racists, who don't actually want people to get smarter; they want to justify a peculiar social hierarchy. This is why you always see certain people waving their hands at "racial IQ gaps" and "heritability" but not funding research into intelligence-enhancing gene therapies.
Let's say that some kind of link between race and intelligence were proved and universally acknowledged: what possible positive outcome could ensue?
Intelligence enhancement would become a cheap, simple, universally-available gene therapy, since we would have found that it only relies on a few alleles of a tiny number of genes, so simple that it can differ significantly between ethnic groups that can still interbreed, rather than being a complex, many-gene feature that evolved chiefly among the species as a whole.
The first two are healthy amounts of scientific doubt and the third is a political opinion. (Though it could be highly dependent on how these views are expressed.)
I was expecting hate speech or Nazism, instead I see overreactions to opposing opinions people confuse with moral failings.
I am all about reducing gun violence, but if you want to do that you have to do something to stem the tide of illegally acquired handguns in areas of concentrated poverty. That's where a lot of your gun violence comes from.
The recent happenings in Charleston are unicorns. Unpredictable and very rare events that you can't actually make a special law for, without the G-men physically going to every household in America and confiscating firearms. That is a policy I assure you you don't actually want.
On top of that, there is big business in demonizing guns--related to the big business (I suspect) in demonizing fighting, aggression, machismo, independence, or what have you.
I'll be the first to admit that there is no peaceful practical purpose outside of sport or investment for owning firearms in an urban area.
That said, it never ceases to amaze me that in an age of such universal and pervasive surveillance--an age of such unaccountability of authority figures in the .gov and .mil--that folks here are still more than happy to trash on the final safeguard they've got if things get too bad.
It's also unrealistic and implausible to imagine that the .gov and .mil are going to make "things get too bad".
That is no more likely to happen than a return to some sort of monarchy or crowning of an American king/queen.
Really? Because it caused us a lot of trouble during our occupations in Iraq and Afghanistan. If anything, that pretty much proves it as a check on US doctrine.
Fifteen years ago, even in the wake of Ruby Ridge and Waco, I might've been tempted to agree with you. Unfortunately, there's been a whole lot of history since then, yeah?
return to some sort of monarchy
What are your betting odds on Bush III, or Clinton II, again? Your countrymen are apathetic and easily-manipulated when it comes to politics.
About the "whole lot of history since Ruby Ridge and Waco," well, I don't see any specific pattern of things getting "too bad". I'm not seeing the history you apparently are.
About the 2016 presidential election, couple of things: the presidency's just a job, and a short-term one at that, and the president doesn't have much power. Presidents run the country, they don't rule it.
We have seen a continual increase in the militarization of police, the surveillance and fining of private citizens, the violation of privacy, and the bullying and exploitation of the poor.
If you're not seeing the history that I'm looking at, we're considering different news sources. I'm thinking of the Snowden leaks, the killings of citizens by police without cause (some in my own city, sadly), and so forth. I'm thinking of the delightful interplay of the prison-industrial complex with the justice system.
As for the presidency--we've seen pretty much directly the actual effectiveness of the executive branch in causing shenanigans, both in George Bush's administration and Obama's.
We're simply going to have to agree to disagree on this.
I agree about the bullying and exploitation of the poor, I just don't think that's anything new.
I think what's new is that now, techie types like you and me are learning about the police-based murders of black people, and how hard life is for the poor. That stuff's always been going on, it just didn't make it onto our radar until very recently.
Note that example was done specifically as a study of armed irregulars vs the US--your examples of other Western states were in a very different vein. :)
"‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens"
> “For each 1 percentage point increase in proportion of household gun ownership,” Siegel et al. found, “firearm homicide rate increased by 0.9″ percent.
I'd also like to say something about societies with ubiquitous government public and private surveillance, chilling of freedom of speech and repressive cultures, but I live in the USA so I can't throw stones.
It's worked in Australia: http://www.slate.com/blogs/crime/2012/12/16/gun_control_afte...
It's in no small part due to the politicking and goodthink of ESR that we have Open Source as it is today.
As with most things, it takes too much time to do that in detail for every argument one hears.
Thankfully, with the magic of the brain's pattern matching and previous experience with BS arguments, we don't have to.
We can eliminate tons of opinions from the list of "potentially interesting to investigate" by their mere showing of certain characteristics we already know lead to bogus thinking. ("Hey man, I made a perpetual motion machine. Wait, where are you going? Don't you wanna hear how I made it? You're so close minded" -- or "I don't believe in climate change, it's all bogus. Here's what I think about educational reform...").
Sure, we might get a few false negatives (some good suggestions lost because their originator is a bigot, etc.), but the system overall works wonders for improving the signal-to-noise ratio.
I don't see why you think I haven't considered that.
My whole argument is based on the idea that ad hominems are perfectly fine in some cases.
When? For people with a record of bogus claims.
How? Under the observation that a person making some bogus claims is also likely to make more bogus claims -- and thus the person can be dismissed as a general bogus-claims-maker.
Why? We might lose some good arguments he might make here and there, but life's too short, and dismissing the person completely gives us time to listen to people with a better claims track record.
In essence, this is the very basics of filtering, which everybody does (more or less well), and which you undoubtedly do as well.
>I call your "climate change, it's all bogus" claim a straw man, and indicative more of your thinking than of reality, because that's not even a claim that skeptics make.
Actually, lots of "skeptics" make it. Some make a lesser claim, that it's not human-caused, but others also claim it's not happening altogether. There's even a term for that:
>I get the impression you are just looking for ways to dismiss arguments which make you emotionally uncomfortable.
Nope, I'm looking for ways to dismiss arguments which waste my time.
>because it doesn't look like you know how to properly respond to threads when there's a comment cooldown timer.
I actually do, but am too lazy to click on the message to open alone...
Further, your "climate change, it's all bogus" ad hominem isn't even a real claim that skeptics make, and is more indicative of your thinking than of reality.
I get the impression you are just looking for ways to dismiss arguments which make you emotionally uncomfortable. Consider religion instead, it's a lot more unapologetic about simply declaring who the heretics are.
If, however, some other tech blogger I liked started writing about "kill the gays," well, I don't need to overlook that just because I like their tech writing.
Like I said elsewhere, none of these folks has a monopoly on good engineering writing/technical thinking.
I can read, and ultimately promote, writers whose writing I wouldn't be ashamed to share with all of my friends.
A better title would have been the Art of Linux Programming or the Art of Open Source Programming.
Except that a program-to-program interface based on formatting and parsing text is anything but clean.
As a general rule of thumb: if you have a command pipeline that would break because the programs in it handle locale differently, it's most definitely a text-based program.
I like the idea of PowerShell: programs pipe formatted (binary) objects between them, and there exist commands to print them out in a nicely formatted way to the terminal. But it's objects. I can extract information through named fields instead of through magic indexes and regexes in awk/sed/perl one-liners.
IOW, proper object (meta)model, not sucky text streams. Text is great for long-term storage, but awful for data in flight. UNIX ignored this distinction and decided to use text for everything.
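To make the contrast concrete, here's a minimal sketch (Python standing in for both styles; the record and line contents are made-up examples, not any real tool's output):

```python
# structured record, PowerShell-style: extract by named field
def pid_from_record(rec):
    return rec["pid"]

# text stream, classic pipe-style: extract by positional index,
# which silently breaks the moment a column is added or reordered
def pid_from_line(line):
    return int(line.split()[0])
```

The named-field version keeps working if the record grows extra fields; the index-based version is the "magic indexes" problem in miniature.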
The alternative - binary structures - requires complex data definitions where you have to care about stuff like exact byte lengths when you have to convert between big/little endian. Some people like to claim that binary is faster, but your bottleneck is not going to be strtol(3). As for advantages in parsing - you still have to parse the binary structure you just read. I suspect most people who think "parsing text" is difficult are confusing the parser with the lexer; the latter is the only part that changes when you switch between text and binary formats.
In the long run the initial investment in a text interface can be cheaper than wasting time debugging binary structures. Even more important when thinking long-term: it is a lot easier to inspect a text interface from the outside without the consent or availability of the original author. If you have an old program binary that was used in production for years and no source code or documentation, which would you rather try to debug? An opaque binary file? Or something in JSON or INI format?
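A small illustration of that inspectability gap (a sketch with a made-up record and a hypothetical fixed binary layout, not any real file format):

```python
import json
import struct

record = {"file": "foo.txt", "size": 1024}

# hypothetical binary layout: 16-byte null-padded name + little-endian uint32
blob = struct.pack("<16sI", record["file"].encode(), record["size"])

# the text form is self-describing; the binary form is opaque without
# out-of-band knowledge of the "<16sI" format string
text = json.dumps(record)

def decode_blob(blob):
    name, size = struct.unpack("<16sI", blob)
    return {"file": name.rstrip(b"\x00").decode(), "size": size}
```

You can read `text` in any editor years later; `blob` is unrecoverable without the format string, which is exactly the documentation that tends to go missing.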
However, this is not an excuse to skip proper documentation.
Ironically, the tools that are happiest about these streams are of course those tools that don't care about text but just stream bytes. The pain occurs when the tools actually need to parse the text.
I would agree much more with the unix design, and think it was a much better implementation of the unix philosophy, if the glue layers were strictly specified or demanded that programs could negotiate things like encodings.
Somewhere between "just bytes" and "binary structures" there has to be a reasonably simple format for program communication. Structured text, or text with a small metadata header (like a BOM, but done right), for example.
This is why you use the predefined character classes in regular expressions. For example, setting LC_CTYPE changes the meaning of '[[:alpha:]]', and sort(1) respects LC_COLLATE. A lot of work has already been done to solve these problems.
Not nearly enough. You write a script which expects a 'file' at some place, but it will break in a French locale because 'file' will be replaced with something like 'fichier'. Sure, you can run under the C locale, but then some other script, written for a French locale, will break. Not to mention date formatting, number formatting, etc.
In comparison, binary or XML (!) become very attractive.
You really shouldn't ever run under the C locale unless you need to support older (pre-locale/Unicode) software or data of the same vintage. Doing so defeats the entire purpose.
You're confusing issues here: locale support lets you parse text. Your "file" example is not relevant, and would be part of something like gettext(3). Yes, gettext can be configured from the locale, but that is a separate feature from what I was talking about.
The locale support is how you automatically handle lexing the input stream, which is why I brought up the character classes that, unfortunately, most people seem to ignore, resulting in broken text support.
If you properly support locales, your program will automatically support a user with LC_ALL="fr_FR.utf8" typing their floats as 3,14.
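In Python that locale-aware number handling is one call (a sketch; it assumes the desired locale is installed on the system and has already been selected with `locale.setlocale`):

```python
import locale

def parse_user_float(s):
    """Parse a float using the current locale's decimal separator.

    Under LC_ALL="fr_FR.UTF-8" this accepts "3,14"; under the C locale
    it accepts "3.14". The program itself never hardcodes either form.
    """
    return locale.atof(s)
```

The point is that the separator lives in the locale, not in your parsing code.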
The fact that your program expects a field name to be 'file' is unrelated, but is something the user could learn by 1) reading your man page, 2) reading your text output and copying it (if appropriate), or 3) reading your error messages that you should be generating when you see 'fichier="foo.txt"' when you were expecting 'file="foo.txt"'. Note: if you used [:alpha:] to lex the input, you would automatically be able to extract the incorrect word for your error message when a user in LC_CTYPE="ja_JP" enters 'ファイル="foo.txt"'.
I believe the problem here is related to a confusion of lexing with parsing. The various locale features solve the lexing problem, but you still have a parsing problem no matter the syntax of how the data is serialized. You have to check that /[[:alpha:]]+/ was 'file' not 'fichier' or 'ファイル', just the same as you would have to check the output of your XML parser that the <file> tag was not <fichier> or <ファイル>, just the same as you would have to check that a binary field was 0x0003 or whatever the flag value was.
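The lex/parse split described above can be sketched in a few lines (Python, using `\w` as a Unicode-aware stand-in for the POSIX `[[:alpha:]]` class; the `key="value"` syntax is just an example):

```python
import re

def lex_assignment(line):
    """Lexing: recognize a key="value" token using a word-character class.
    This is the only layer that locale/encoding support changes."""
    m = re.match(r'(\w+)="([^"]*)"', line)
    return m.groups() if m else None

def parse_assignment(line):
    """Parsing: check the semantics, i.e. that the key really is 'file'."""
    tok = lex_assignment(line)
    if tok is None:
        raise ValueError("lex error")
    key, value = tok
    if key != "file":
        raise ValueError("expected 'file', got %r" % key)
    return value
```

Note that 'fichier="x"' and 'ファイル="x"' both lex cleanly; they only fail at the parse step, which is exactly the check you'd also need after an XML or binary decode.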
You may be looking for a way to automatically discover the semantics (schema) that another program expects - and that would be a nice feature - but that is generally orthogonal to the syntax used for IPC (much like how XML can be converted to YAML/JSON/BSON/etc.). It is also a far more complicated feature, and I'm not sure it can ever really be solved (halting problem, possibly), but maybe a "good enough" solution could be created.
Anyway, I've been with my company for a few years and soon my contract expires and I'm going to study the field our company is in and get a degree in that, then I'm going to apply for a position doing our core business. I would still like to be involved with the software my current position is touching on, though, if possible. (Our company has 1000+ employees and several different sub-sections, so even though I might get back into the company, it's not a given that I'll be working with the group of people I am now even though I'd like to.)
I also sometimes think that if possible, perhaps I'd like to work for that other company in our neighbouring country for a few years and be on the dev team of the software. After all, I have experience from the user side which the dev team has not and the dev team has seen some of the tools I've made and a couple of the guys seemed to think that some of that stuff was pretty decent.
> Rule of Diversity: Distrust all claims for “one true way”.
Although, does the Python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one?
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
I wonder what Rob Pike has to say about OOP or Java; I wish I could listen to it.
Also, it says that text is a good representation of data, but I think he meant it as an intermediary. I don't think XML or HTML are really good choices when you see all the CPU cycles spent parsing them.
> Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
I prefer this rule to the "no premature optimization" rule.
“object-oriented design is the roman numerals of computing.” - Rob Pike (http://harmful.cat-v.org/software/OO_programming/)
Some other gems from that page:
“Object-oriented programming is an exceptionally bad idea which could only have originated in California.” – Edsger Dijkstra
“The phrase "object-oriented” means a lot of things. Half are obvious, and the other half are mistakes.“ – Paul Graham
“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy.” – Eric Allman
“The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” – Joe Armstrong
Which is weird because Pike's Rule 4 is one of the fundamental principles of OO design.
The irony is that Python has more ways of doing things in general and often the choice is very much non-obvious, compared to a language like Ruby which is supposedly embracing TIMTOWTDI. In other words, that Python rule doesn't match reality and I've always found it funny.
The statement in question is from The Zen of Python and is more of a guiding principle for the design of Python than a rule.
The usual observation with Python is that it can have many libraries and frameworks which are quite similar and choosing one can be hard (e.g. web frameworks). This is a function of popularity and not something which can fairly be subject to that principle.
On a different note: missing is the Zen of Python's following line:
Although that way may not be obvious at first unless you're Dutch.
The language itself is a prime example of a language with features that aren't orthogonal. For example, count how many features in Python are solved in other languages just with proper support for anonymous functions.
I find python's restriction in this case fairly strange. If I wanted to do that in python I'd have to just declare a function body for what I want to put in the anonymous function elsewhere, which defeats the purpose.
avar = aval
# ... (some code)
def throwaway(x, y):
    do_something(x, y, avar)  # placeholder body; the original omitted it
some_obj.somefunc = throwaway
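The restriction being complained about is that Python's `lambda` is limited to a single expression, so any statement-bearing body has to be hoisted into a named `def`. A minimal sketch (names are my own, not from the comment):

```python
compute = lambda x, y: x * y       # fine: a single expression

# a body containing statements must instead become a named function,
# even if it's only ever used once as a callback:
def throwaway(x, y):
    total = x * y                  # statements are allowed here
    return total
```

Languages with full anonymous-function support let you write the second form inline at the call site, which is the orthogonality point being made.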
12 minutes where Pike talks about a few things that seem to be wrong in our software world: https://www.youtube.com/watch?v=5kj5ApnhPAE
I don't think so. To me, Python's design rule implies there should be one clear path to solve a specific problem, and generally you'd also want to stay as consistent as possible with other similar solutions.
The "one true way" argument is to force everything to fall into a pattern even when it's a less than ideal way to solve a problem. An example would be trying to force everyone to use the same language/database/framework combination in a company for everything. Sometimes you want to be flexible and use the right tools for the job (while also accounting for the cost of deviating from the general accepted standard in making that decision).
From this talk: http://www.infoq.com/presentations/Go-Google
Nope, that's a design guideline, it's not taking choice away. A rule like "Python is the only language you'd ever need" would contradict it.
edit: Really? Downvotes? Not cool.
As for co-author, the Cambridge dictionary says:
"To write a book, article, report, etc. together with another person or other people:"
Which, given Pike's participation in what defined the initial UNIX graphics userland, makes him a co-author.
If you exclude one person because they disagree with X and don't know Y, then you must equally disregard all who agree with and disagree with X, because they don't know Y. Otherwise you're just cherrypicking :)
However, it has become quite trendy to bash OOP itself, especially here on HN, without acknowledging that there are less awful ways of doing it that just aren't mainstream. Try making similarly blanket criticisms of FP here using Python (Why, it has lambdas and list comprehensions after all--isn't that enough?) or some other impure language and see what happens.
I'd argue that the Unix way is to make it JSON until it becomes a performance problem, by which time the format is so entrenched in so many programs that, with no real central coordination, you just get to live with it.
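That JSON-over-pipes pattern usually ends up as JSON Lines filters. A hypothetical stage might look like this (field names and threshold are invented for illustration; in a real pipeline `lines` would be `sys.stdin`):

```python
import json

def filter_stage(lines, min_size=1024):
    """Keep records whose 'size' field is at least min_size,
    re-emitting each surviving record as one JSON line."""
    for line in lines:
        rec = json.loads(line)
        if rec.get("size", 0) >= min_size:
            yield json.dumps(rec, sort_keys=True)
```

Because each line is a self-contained record, stages compose like classic Unix filters but with named fields instead of column positions.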
Which means that no program is more un-UNIX-y than Emacs...
Emacs also interfaces with external processes (e.g. using comint mode). It's the ultimate bottle of glue.
Where it butts against contemporary practice is in places like threads where it is still not proven that YAGNI doesn't apply.
But yes, it's funny.
Saying that these are modern UNIX principles might be more fair. Yet, Kemp's article inspires doubt in even that:
For example, being able to draw a shape in a graphical editor and then paste the shape directly to a window as the shape that window should take was rather cool - and this was in the early 90s!
The Single Responsibility Principle (SRP), interfaces, cohesion, and coupling are all relevant patterns and techniques that also describe the same design ideas.
The UNIX philosophy is simply one example of the above principles implemented at a user-interface level, but they apply equally well in low-level embedded code and device drivers as in desktop and server land.
Software complexity grows exponentially with the number of internal interactions. By reducing the scope of any given module, you greatly simplify it.
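A back-of-the-envelope illustration (my own counting argument, not the comment's): if any group of n mutually visible modules can interact, the number of possible interacting groups is exponential in n, while strict pairwise interfaces cut that back to quadratic:

```python
def interaction_subsets(n):
    """Possible interacting groups (subsets of size >= 2) among n
    mutually visible modules: 2^n - n - 1, exponential in n."""
    return 2 ** n - n - 1

def pairwise_links(n):
    """With interactions confined to fixed pairwise interfaces,
    only n*(n-1)/2 links remain."""
    return n * (n - 1) // 2
```

At n = 20 that's over a million possible groups versus 190 links, which is the intuition behind keeping module scope small.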
Thinking about, designing and implementing fixed interfaces between small cohesive modules helps this process and you get much more simple (as in the Rich Hickey sense) software as a result.
These same principles pop up all over the place in software, the Unix Philosophy, SRP, Microservices... they're all manifestations of the same thing.
The reason that systemd is the ball of mud it is, is that Linux, being just a kernel, does not provide the basics for systemd to build on, so it has to provide them itself.
The broader tools that systemd also provides (stub DNS/LLMNR resolution, network management [DHCP, PPP...], container registration, message bus introspection and library, event loop library, EFI stub loader, device and mount management, hardware database, TTYs, NSS plugins, session and seat management, SNTP client, time/date control, dynamic configuration population, etc.) exist mostly for philosophical reasons of being a one true toolkit/middleware that sits between GNU and Linux.
launchd isn't Unix-y at all, anyway.
Here's an LCA talk by a dude what works on X and Wayland: https://www.youtube.com/watch?v=RIctzAQOe44
Meaning that to draw the desktop they effectively paint a single large window inside X that is then filled with the output of the GPU.
Thing is, Wayland seems more comparable to svgalib than to X, in that Wayland is pretty much a lib/protocol for talking to the graphics hardware. Something else, be it their reference implementation Weston, GTK, Qt, or some other alternative, has to handle windows, desktops, etc.
Right now you can use Wayland as a driver for Xorg.
Hell, the X primitives are so outdated, all the toolkit developers do client side rendering now.
Wayland is the bits of X that people actually use.
If all the developer thrust is behind toolkits that "badly misuse the protocol", maybe the protocol is a really shitty fit for app development?
Welcome to why Wayland is the future and X is deprecated. Daniel Stone's talk should be required viewing before anyone speaks on the X vs. Wayland issue: