Hacker News new | comments | ask | show | jobs | submit login
The Unix Philosophy (catb.org)
353 points by dorsatum on June 25, 2015 | hide | past | web | favorite | 256 comments

I keep wondering about what seems to be the most important component of Unix philosophy: write many small programs that do one thing well and interface using text streams!
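For what it's worth, that composition model is easy to sketch outside the shell too. Here's a rough Python stand-in for `printf ... | grep unix | wc -l`, where each stage is a separate program and the only contract between stages is a text stream (the sample text is mine):

```python
import subprocess

# Equivalent of: printf '...' | grep unix | wc -l
# Each stage is a separate program; the only contract between them is text.
text = "unix philosophy\nwindows\nunix pipes\n"

grep = subprocess.run(["grep", "unix"], input=text,
                      capture_output=True, text=True)
wc = subprocess.run(["wc", "-l"], input=grep.stdout,
                    capture_output=True, text=True)
print(wc.stdout.strip())  # 2
```

The fragility described above shows up exactly here: nothing but convention says what `grep`'s output lines look like, so downstream stages break silently if the format drifts.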

Yes, modularity is important. However, in some cases, this philosophy has resulted in the "tangled mess held together by duct tape" kind of systems architecture that no one dares to touch for fear of breaking things.

I think Unix philosophy is struggling with a fundamental dilemma:

On one hand, creating systems from programs written by different people requires stronger formal guarantees in order to make interfaces more reliable, stronger guarantees than interfaces within one large program written by one person or a small team would require.

On the other hand, creating systems from programs written by different people requires more flexible interfaces that can deal with versioning, backward and forward compatibility, etc., something that is extremely difficult to do across programming languages without heaping on massive complexity (CORBA, WS-deathstar, ...)

I think HTTP has shown that it can be done. But HTTP is also quite heavyweight. It doesn't exactly favor very small programs. Handling HTTP error codes is not something you'd want to do on every other function call.

In any event, I think Unix philosophy is a good place to start but needs a refresh in light of a couple of decades worth of experience.

> in some cases, this philosophy has resulted in the "tangled mess held together by duct tape" kind of systems architecture that no one dares to touch for fear of breaking things.

Pretty much, just as Booch's philosophy has given us tangled messes of class hierarchies with cross references pointing every which way. And just as the Gang of Four gave us AbstractedTangleMessFactorySingleton.

The distinction with the Unix-style pipe mess is that it exists and works an order of magnitude faster than other examples of bad code.

Additionally, it's not a religion. Every programmer should be exposed to Unix style programming just like every programmer should be exposed to functional programming. 100% purity in either case is just creating unnecessary problems.

Amen. Extremism in programming, as in religion, is evil.

I would like to point out that most extremism is from the perspective of the listener, not the speaker. If I just wrote a book or article called "Functional programming is great," there'd be plenty of people replying with "Oh, but what about MY favorite style? It's not that great! You're just being extreme!"

When the reality is that you can't take an opinion as the whole thought. But people do. Almost always. Regardless of the subject, people will think you're a fanatic for your position, no matter how temporary or experimental it is, unless you pay your dues in apologies and waffle terms.

> I would like to point out that most extremism is from the perspective of the listener, not the speaker.

Believe it or not, there is a significant contingent of crazy, death-threat-making jerks fixated on "The Unix Philosophy". Case in point: an experimental Unixy project[1] made the rounds (including on HN) some years back and got a fair bit of attention. The project had a vision with scale similar to that of Atom, which is soaring on the front page today. But unlike Atom, this project was quietly dropped from public view because the author received a full blast of nutball hate from misguided Defenders of the Faith/Purity/Whatever.

I've been using, studying, and working with *nix systems basically forever (longer than almost everyone reading this). But that experience made me rather allergic to ever using or hearing "The Unix Philosophy". I realized that it's heavily overused as an argument-stopper: "that's not the unix way (so <EOF> off)." I've since witnessed similar language used in many projects, taken in whole context, as a dogmatic excuse rather than an actual source of architectural wisdom.

[1] Apologies for being deliberately vague. It's not my place to risk riling up the hate monsters vs. the creator of that project again. I'm still absolutely disgusted that Gamergate-level human insanity was leveled at someone putting forth what was, IMO, one of the most interesting Unix tooling experiments in the past decade.

I think in terms of declarations, steps, procedures, methods, objects and a clear, logical program flow.

Which is good if you're programming in an object-oriented imperative language that has a defined program flow. But while that style is the most popular, applying the same concepts to anything that isn't imperative OO will result in unfathomable suffering.

When in Rome, code as the Romans do.

Absolutely, but that doesn't mean we shouldn't try to make some progress.

I think we have actually made progress with HTTP and all the REST. But I feel that the great failure of distributed object systems in the 1990s and 2000s has left a great void when it comes to new thinking about more fine-grained interfaces between programs.

Functional programming has influenced so many things but hasn't really arrived in the systems corner of the world yet. Also, looking at the sort of query capabilities of online streaming databases and complex event processing systems I get the feeling that there may be a lot of potential in combining that with Unix pipes and text streams.

Publish-subscribe ala zeromq is also something that could be a core part of modern Unix.
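For illustration only (plain stdlib, not actual zeromq), here is a toy topic-prefix pub-sub bus in Python, mimicking the prefix matching zeromq's SUB sockets do; the `Bus` class and topic names are invented for the example:

```python
from collections import defaultdict

class Bus:
    """Tiny in-process publish-subscribe bus (a stand-in for zeromq PUB/SUB)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic_prefix, handler):
        self.subscribers[topic_prefix].append(handler)

    def publish(self, topic, message):
        # Like a zeromq SUB subscription: deliver on topic-prefix match.
        for prefix, handlers in self.subscribers.items():
            if topic.startswith(prefix):
                for handler in handlers:
                    handler(topic, message)

bus = Bus()
seen = []
bus.subscribe("fs.", lambda t, m: seen.append((t, m)))
bus.publish("fs.write", "/tmp/x")   # delivered: matches the "fs." prefix
bus.publish("net.recv", "packet")   # dropped: no matching subscription
print(seen)  # [('fs.write', '/tmp/x')]
```

The appeal for a "modern Unix" is that publishers and subscribers stay as decoupled as programs on either end of a pipe, but many-to-many instead of one-to-one.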

Sorry about the half-baked mess I'm dumping on you here :)

REST is a kludge but it's easy. SOAP, as much as I dislike giving Microsoft credit, was brilliant, but it was hard.

How is REST a kludge and SOAP not?

> And just as The Gang of Four gave us AbstractedTangleMessFactorySingleton.

The Gang of Four didn't give us that. An army of OOP novices who were overwhelmed by all of the new choices and only skimmed Design Patterns gave us that.

If you actually read the book (which surprisingly few people have, given the number of people who have strong opinions about it), you'll see the Gang of Four are actually quite clear on the limitations of, and ways to abuse, the patterns.

They did, though, kind of.

The GUI-oriented 90's vibe, meaning zero examples that anybody would ever bother learning. The bizarre stuff nobody ever uses (Flyweight? Bridge?). Singleton -- 'nuff said.

You're left with what, maybe three usable patterns? Factory, which favors opaque, magical return-type polymorphism. Visitor, which is a solution in search of a problem, and also gets confused with trivial tree-walking operations. Interface, which rocks, and became part of Java, but really just shows how much inheritance sucks.

The resulting confusion is the proud sponsor of an AbstractStrategyFactoryBuilderDelegate near you.

> The GUI-oriented 90's vibe, meaning zero examples that anybody would ever bother learning.

It was written in the 90's. GUIs were what application developers did back then. Web apps didn't exist yet.

> The bizarre stuff nobody ever uses (Flyweight? Bridge?).

If you've used enums in Java or instanced rendering in a game, you're effectively using Flyweight.
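A minimal Python sketch of Flyweight (the class name and pooling strategy here are mine, not from the book): intrinsic state is shared so that a million uses of the same character cost one object, exactly the text-rendering case the GoF use.

```python
class Glyph:
    """Flyweight: the intrinsic state (the character) is shared by all users."""
    _pool = {}

    def __new__(cls, char):
        # Return the one shared instance per character, creating it on demand.
        if char not in cls._pool:
            instance = super().__new__(cls)
            instance.char = char
            cls._pool[char] = instance
        return cls._pool[char]

# A million 'a's in a document all share a single Glyph object.
a1, a2, b = Glyph("a"), Glyph("a"), Glyph("b")
print(a1 is a2, a1 is b)  # True False
```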

> Singleton -- 'nuff said.

They caution really hard against overuse of Singleton in the book, but all of those C programmers being dragged into OOP didn't know where else to stuff all their global state so they went to town on it.

> Factory, which favors opaque, magical return-type polymorphism.

If your language doesn't have first-class classes, Factory is really handy. If it does have first-class classes, well that means you basically have the Factory pattern in your language. :)
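A rough Python illustration of that point (the parser classes are invented for the example): with first-class classes, the whole "factory" collapses to a lookup table of classes.

```python
# With first-class classes, "Factory" is just passing classes around as values.
class PngParser:
    def parse(self, data): return ("png", data)

class JpegParser:
    def parse(self, data): return ("jpeg", data)

PARSERS = {"png": PngParser, "jpeg": JpegParser}  # this dict IS the factory

def load(fmt, data):
    parser = PARSERS[fmt]()   # look the class up, then instantiate it
    return parser.parse(data)

print(load("png", b"..."))  # ('png', b'...')
```

In a language without first-class classes you'd instead write a `createParser(fmt)` method and subclass it per product family, which is the pattern in its heavyweight form.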

> Visitor, which is a solution in search of a problem

The day you write a compiler in an OOP language is the day you realize how unbelievably, amazingly, incredibly useful Visitor is.
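A toy Python example of why (node and visitor names are illustrative): Visitor lets you add a new compiler pass, here evaluation, without touching the AST node classes, and a type checker or pretty-printer would just be another visitor.

```python
# Visitor over a tiny expression AST: each new operation is a new visitor,
# so the node classes never change as the compiler grows passes.
class Num:
    def __init__(self, value): self.value = value
    def accept(self, visitor): return visitor.visit_num(self)

class Add:
    def __init__(self, left, right): self.left, self.right = left, right
    def accept(self, visitor): return visitor.visit_add(self)

class Evaluator:
    def visit_num(self, node): return node.value
    def visit_add(self, node):
        return node.left.accept(self) + node.right.accept(self)

tree = Add(Num(1), Add(Num(2), Num(3)))
print(tree.accept(Evaluator()))  # 6
```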

There's no good idea you can't turn into a bad idea if you just do it hard enough.

The Gang of Four book recommends "composition over extension", which fits neatly into the UNIX philosophy of building things by composing smaller independent programs.
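A minimal sketch of the composition idea in Python (class names invented for the example): the collaborator is injected at construction time, much like piping one program into another, instead of being inherited at class-definition time.

```python
# Inheritance bakes behavior in when the class is defined.
# Composition plugs behavior in when the object is built.
class GzipCompressor:
    def compress(self, data): return b"gzip:" + data

class Logger:
    def __init__(self, compressor):   # collaborator injected, not inherited
        self.compressor = compressor

    def write(self, msg):
        return self.compressor.compress(msg)

log = Logger(GzipCompressor())
print(log.write(b"hello"))  # b'gzip:hello'
```

Swapping in a different compressor requires no new `Logger` subclass, which is the point of the recommendation.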

The main issue is that most of the common programming languages in use, notably Java and C++, have very poor support for composition, thus forcing developers to extend things.

Can you explain how exactly Java and C++ have poor support for composition? Classes can have member variables in both languages. What else is there?

Example: how, in Java, do you add a method to the String class so that all instances of String gain the method, without extending String and introducing a new subtype?

It cannot be done. As an example see Categories in Objective-C.

Java's support for composition has improved somewhat with default methods on interfaces in Java 8, but it's still poor compared to a language like Objective-C. This leads to the "abundance of classes" anti-pattern you see in Java projects, as you are forced to extend to introduce new functionality.

Objective-C is years ahead of Java when it comes to extending existing classes.

That's true as far as it goes, but you can write beautiful code in most languages too, even Java. The point I was making is just that "able to write bad code" is a poor metric for a philosophy. And also that "bad unix-style pipe hackery with script glue" has the advantage of actually working more often than bad code produced by other disciplines.

> And just as The Gang of Four gave us AbstractedTangleMessFactorySingleton.

Hey, not funny. I actually googled that pattern just to make sure it was a joke.

Recently I've been exploring the idea that small programs should consist of two parts: a "backend" that you can communicate with via a strict message protocol like protobuf. It can handle multiple requests because it is a mini server; the message format is locked to a "type" and you communicate with it via RPC.

The second part is a "frontend", a command-line client that does all the unixy stuff with text streams etc; but is just another RPC client to the backend.

This provides flexibility: for quick duct-tape situations you use the front end and pipe and filter to your heart's content. Then, when stuff needs to get serious, you can bypass the command-line front end and connect directly over RPC using a strict message format, which can handle a couple (doesn't have to be many) of concurrent requests.

Of course this is early days yet, but I think it might have legs in the text-streams-versus-strict-message-format debate, for very little additional work.
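As a hedged sketch of that split (JSON over a local TCP socket standing in for protobuf RPC; the names and the newline framing are illustrative, not a real protocol):

```python
import json, socket, threading

def backend(srv):
    """The 'mini server' half: one typed request in, one typed reply out."""
    conn, _ = srv.accept()
    with conn, conn.makefile("r") as reader:
        req = json.loads(reader.readline())            # newline-framed message
        reply = {"status": "ok", "echo": req["args"]}  # the one thing it does
        conn.sendall((json.dumps(reply) + "\n").encode())

def frontend(port, *args):
    """The unixy CLI half: just another RPC client of the backend."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall((json.dumps({"cmd": "run", "args": list(args)}) + "\n").encode())
        with c.makefile("r") as reader:
            return json.loads(reader.readline())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
threading.Thread(target=backend, args=(srv,), daemon=True).start()
reply = frontend(srv.getsockname()[1], "hello")
print(reply)  # {'status': 'ok', 'echo': ['hello']}
```

A real version would use protobuf for the wire format and handle concurrent connections; the shape of the split is the point here.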

You're sort of describing the design idea common in unix-y areas to build libfoo that does all the work, then a foo frontend that makes the tasks usable from the command line. I don't know how widely used this model is, but it's certainly out there.

The difference is that in what I've described, the ABI is the common interface and an RPC server would be just another consumer of the library.

Yup, even better!

> Recently I've been exploring the idea that small programs should consist of two parts: a "backend" that you can communicate with via a strict message protocol like protobuf. It can handle multiple requests because it is a mini server; the message format is locked to a "type" and you communicate with it via RPC.

Well there was an implementation of something like that, see https://en.wikipedia.org/wiki/DCE/RPC

or DCOM, https://en.wikipedia.org/wiki/Distributed_Component_Object_M...

then there was dcop, dbus, and a bunch of others and now nobody remembers what the goal was

I like this idea! It seems very Unix-philosophy-y (sorry) itself; break up the programme even further, from "do one thing" to "figure out what to do" + "do it".


That's how a lot of well-designed software works, including Windows (in general) and old-school Mac Classic with AppleEvents (not sure if OS X still does that.)

Yeah, it kinda reminds me of the "plumbing and porcelain" approach of some programs like git. It's also nice that a lot of modern init processes can fire up the back ends on demand and kill them after a while.

Surely a lot of that could be done with a commandline switch?

    tape_robot start -tapeid=1
as the quick hacky version, and:

    tape_robot --protobuf 'msg:start;tapeid:1'
(replace the 'single:quote;string:thing' with your protobuf message.), or

    tape_robot --protobuf_file filename
The filename could be a fifo or socket, of course. Then use a general purpose server which runs the programs:

    proto_serve tape_robot
or whatever.

This (to me) is closer to the unix idea - one server program, one actual processing app. Why should my tape_robot program actually have a server embedded in it?

If in the future, I want to add authentication, I only have to add it (once) to the proto_serve program, rather than to every single application that is a 'server', for instance.

It would also allow a version which 'pre-forked' the processes, and left them waiting for the data on the socket/filehandle, or whatever.

You could do a bunch of this already using nc or similar, I suspect.

Interesting. This seems very non-unix-y to me. The `tape_robot` doesn't "do one thing", it does many things, including parsing protobuf from shell strings and files, which seems error prone and way outside its core competency.

Maybe I misunderstood @jalfresi's idea - as I understood it, each command would not only parse protobuf and unix-style flags, but also contain an RPC server of some sort.

My reinterpretation was to say how about factoring out the server part, and leave the command only understanding either flags or protobuf commands - which could be delivered to the command either as an arg, or as a file given to it by an arg.

You could go a stage further by having all commands only accept protobuf (or similar), and distribute a spec/human-mapping to go with it. Then your shell would parse the args that you give to the command using the spec, and actually call the command using protobuf.

This would allow very awesome shell completion / highlighting / etc. It should also allow much simpler endpoint commands, as they'd hardly have to do any type checking / re-parsing, since the input would arrive as protobuf.
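A rough Python sketch of that idea (the spec format here is entirely invented for illustration): the shell-side parser turns human flags into a typed message, so the command itself never re-parses strings.

```python
# Hypothetical per-command spec: maps human flags to typed message fields.
# The shell would use this for parsing and completion; the command would
# only ever receive the typed message (protobuf in the real proposal).
SPEC = {"command": "tape_robot",
        "flags": {"--tapeid": ("tape_id", int),
                  "--action": ("action", str)}}

def parse(argv, spec):
    msg = {}
    it = iter(argv)
    for flag in it:
        field, typ = spec["flags"][flag]   # unknown flags raise KeyError
        msg[field] = typ(next(it))         # typed at parse time, not in the app
    return msg

print(parse(["--action", "start", "--tapeid", "1"], SPEC))
# {'action': 'start', 'tape_id': 1}
```

Type errors surface in the shell, before the command even runs, which is where the completion/highlighting win comes from.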

I actually kind of like that idea. Hmm...

This is actually a step toward how PowerShell (is it still called that?) works, where you pipe objects (ie. a schema) around instead of arbitrary strings.

A lot of people don't know this but that's pretty much how Windows actually works underneath everything (DCOM, RPC, pipes etc)

Incidentally, this is what neovim does.[1]

[1]: http://neovim.io/doc/user/msgpack_rpc.html#msgpack-rpc

Thanks for the solid example. I especially like the use of the NVIM_LISTEN_ADDRESS environment variable for service discovery of the backend!

They are working on making the servers aware of each other, so `serverlist()` can be used to connect to other servers.

For a while, Linux mostly had the philosophy that your software should come with a text interface, optionally a GUI, and a C library. That library part is for solving the messes created by tangled text interfaces.

But nowadays lots of things only come for the GUI.

A lot of those are actually just task specific business logic style applications in scripting languages (often python, sometimes perl, etc).

In this case the 'library' is often the base programming language itself, as the shipped parts are entirely glue and configuration.

Anyone can make a mess of anything if they are not sufficiently careful. This is not unique to *nix.

Personally, I enjoy the *nix philosophy of using small programs that do one thing well. Also, I prefer text to proprietary binary formats that _I_ can't do anything with.

As a *nix and Windows developer, I've never understood the mentality of wanting to build huge monolithic applications. But, I do think the tide is turning. I think more and more developers are allowing themselves to be influenced by ideas outside of what they are used to and that's a great thing.

> On the other hand, creating systems from programs written by different people requires more flexible interfaces that can deal with versioning, backward and forward compatibility, etc., something that is extremely difficult to do across programming languages without heaping on massive complexity (CORBA, WS-deathstar, ...)

Personally, I think the LLVM project shows what the Right Thing is here: package most of the real functionality into libraries while putting the user interfaces into runnable programs. Everyone who wants to communicate with the functionality can just link the library.

And if we want to do it statefully, well, handling global state across a large system is basically the unsolved problem of programming.

The big problem I've found is serialization of sum types. thrift (or protobuf or any number of similar systems) is very good at serializing most of the data one tends to work with, with declarations that are suitably strict but easy to write. But it doesn't have a good way to represent "this field is an A or a B".

Thrift has unions, so does Apache Avro. Afaik Protocol Buffers is the only one where you have to string together one yourself with a required tag and a bunch of optionals.
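To illustrate the difference, here is a small Python sketch (illustrative only, not any library's actual API) of the "tag plus optionals" workaround that union-less serialization forces on you, with the exactly-one-arm invariant checked by hand instead of by the schema:

```python
# Hand-rolled sum type: a tag field plus one optional field per arm.
# A real union/oneof would enforce "exactly one arm set" in the schema;
# here we have to police it ourselves at construction time.
def make_result(ok=None, error=None):
    if (ok is None) == (error is None):
        raise ValueError("exactly one of ok/error must be set")
    tag = "ok" if ok is not None else "error"
    return {"tag": tag, tag: ok if ok is not None else error}

print(make_result(ok=42))        # {'tag': 'ok', 'ok': 42}
print(make_result(error="boom")) # {'tag': 'error', 'error': 'boom'}
```

Every producer and consumer has to repeat that invariant check, which is exactly the boilerplate a schema-level union removes.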

Protobuf has unions now too (they're called "oneofs"), as of version 2.6.0, released last year:


Protobuf has somewhat recently added the 'oneof' feature which tries to deal with these union types.

The one downside I found when trying to use it was that the 3rd-party C# Protobuf libraries didn't yet support it.

See: https://developers.google.com/protocol-buffers/docs/proto#on...

> Unix philosophy is a good place to start but needs a refresh in light of a couple of decades worth of experience.

It's always going to be a good place to start because it embodies a realistic, documented, proven and actionable compromise between engineering purity (the "formal guarantees" you describe) and just shipping something.

For a collection of related thoughts, not limited to traditional unix philosophy but still unix philosophy-centric, check out my ansified fortune clone @ https://github.com/globalcitizen/taoup

It's your viewpoint that makes you think it becomes a mess of duct tape, not some failing of the philosophy. To improve the philosophy you look outside the box, not inside it, i.e. not at HTTP. Something like "everything is a file" is an improved philosophy (but I'm cheating, of course, because I know this is a Plan 9 philosophy meant to improve upon Unix).

I think maybe the Unix philosophy applies to programs that users (read: programmers) use in their own time, and to make other programs.

Are you saying you disagree with the use of subroutines, functions and interfaces? After all, they are essentially small programs that do one thing and do it well.

Note the part about text streams in fauigerzigerk's comment. If all functions were limited to accepting a text stream and returning a text stream, the overall program would quickly become a ball of mud.

No, I don't disagree with the use of functions or interfaces at all, but functions in particular are a programming language construct and as such not easily extended across programming languages, processes and networks, mostly due to different type systems and error states.

I think it's not an accident or historical vestige that Unix philosophy uses the word "program". It has technical as well as social implications that have barely changed in the past 50 years.

Functions and programs are certainly not an either-or kind of thing but making them one and the same would require a very different kind of operating system than we have ever seen. Say, emacs OS :)

> Functions in particular are a programming language construct and as such not easily extended across programming languages, processes and networks, mostly due to different type systems and error states [...] it's not an accident or historical vestige that Unix philosophy uses the word "program". It has technical as well as social implications that have barely changed in the past 50 years.


Then I think you are confused or inexperienced, because Unix does not treat everything as a text stream. Saying that functions in a programming language are different, and therefore better, is not a wise statement. It appears you don't understand the fundamental concept and think Unix hasn't improved over the years.

People will hang shit on ESR, but The Art of Unix Programming is one of my all time favourite books. If you haven't read it, even if you're not a *nix developer, do yourself a favour and just skim the table of contents... something may pique your interest and you may learn a thing or two.

It's also free online:


This book is ESR's masterwork, and that's taking into account everything else he's done.

"People will hang shit on ESR"

An example: "Reading it, it looks like a total hack job by a poor programmer."


That's not hanging shit on ESR, that's just a valid critique of poor code.

A better example might be this post, where ESR is implied to be racist when his actual post is a reflective one about correcting irrational racist reactions: https://news.ycombinator.com/item?id=6884767

It's a really great read and did a lot to inform my approach to writing all kinds of software. It's a shame that I cringe to recommend it now due to the politics of its author.

That's really bizarre. A person's political views have no bearing on the validity or usefulness of what they write.

I don't think jumblesale is arguing that his writings are incorrect or not useful, just that he doesn't like popularizing his works because of the author's political views.

It's still a shame. I certainly don't feel like I have to love everything a person does in order to learn something from them, or even respect some of their work. Likewise, I don't require everyone to agree with my personal opinions in order to pass my courses. Things would really become cumbersome were I to ever change my mind about something.

I agree. Seriously, who gives a toss about what anyone's opinions are on subjects unrelated to what you're reading by them? And even if they do creep in, what's to say you're right or they're wrong? I thought this was part of being an adult but I see it all too often: "so-and-so wrote an excellent technical manual on X but he's right wing so I won't buy it!"

I don't understand that way of thinking since no human being will ever 100% agree with any other on everything.

Unfortunately, the general zeitgeist of the times seems to be that a person's politics are somehow a litmus test for whether or not they should be listened to at all, about anything.

Normally this would just be a quirk, but the fact is that a lot of technical people here on HN and other places would happily throw the baby out with the bath water just because they disagree with somebody's politics.

It's stupid and unprofessional. With so many companies focusing on such technically boring problems, image management is perhaps legitimately more of a business concern than having the best tech available.

So, unfortunately, we have people with dissenting opinions but excellent work slandered or ostracized...even if their opinions are actually worth considering. Then again, that just means that those of us who are more genuinely tolerant will have an edge during hiring. :)

Also, on ESR in particular:

You have to understand that, rightly or wrongly, his worldview is long-term Culture War. Literally anything which prevents The Right People from breeding faster (homosexuality) or defending themselves (attacks on the 2nd amendment) or arguing (kafkatraps) is suspect. Because he's playing for keeps, he'll do whatever it takes (including, perhaps, being less than perfectly equal in presentations on things) to further his agenda. That's just how it is, and it doesn't reflect on his technical contributions or aptitude at all.

Hell, the bitch of it is, he's even arguably correct on some of his cultural points, if he himself (much less his detractors) didn't spend so much time sounding so disagreeable and grumpy and wingnutty.

Anyways, it's just a sign of the times, as I said. It seems that most people are unable to handle a mental model which accounts for biased or unreliable narrators while still allowing the work of those narrators to be taken advantage of.

If someone writes something, it's "true" or "false" regardless of their political opinions.

On the other hand, someone's political opinions change what they're likely to write, so if you don't know whether it's "true" or not, knowing their political opinions can justifiably alter your beliefs about the text.

I think people tend to overweight the second consideration, and it seems particularly irrelevant in this case.

(Scare quotes because it's rarely as simple as true-or-false, e.g. "literally true but horribly misleading".)

Further, I suspect the main reason there's any concern at all about recommending ESR's writings due to his politics is because he bothers to write a lot about his political views in the first place, presenting something to potentially disagree with.

Many programmers / writers / programmer-writers may well have equally strong political views in one direction or another, with which others may strongly agree or disagree, but they just don't say much about them in public.

Undoubtedly a deep philosophical argument, but brighter minds than ours have written at length about it, both in science and jurisprudence. "Research ethics" and "Exclusionary rule" would be good illustrative Wikipedia articles.

Those seem only tenuously linked to me.

Research ethics forbids you from doing certain things when performing research, but it doesn't say anything about the political opinions of researchers.

The exclusionary rule says that if evidence is obtained by breaking the rules, it can't be used in court. But again, it doesn't say anything about the political opinions of the person obtaining the evidence.

Both are cases where the truth of the situation is put aside from a moral standpoint, which is analogous to suggesting that someone's political views could influence the reception of their engineering views.

Both exist mostly to disincentivize people from doing something immoral and/or unwanted, not to give light on the truth or false value of observations/evidence.

So the question is, should one really refuse to read/recommend ESR's book because of his opinions? Frankly, this smells to me like Index Librorum Prohibitorum all over again.

Unfortunately, you can't neatly compartmentalise one aspect of an individual's life and totally separate it from the others. Sometimes, the aspects with which you happen to agree might be used to indirectly (even invisibly) support the aspects you don't. Although it's difficult to judge that, it seems a reasonable condition for exercising caution. I say this as a huge fan of TAOUP.

Out of interest, what are these political views that are so terrible that they should influence our views of his thoughts on operating system design?

Edit: I really don't have any idea what they are...

Among other unpopular opinions, ESR denies AGW, believes that race and IQ are correlated, and was strongly in favor of the Iraq war.

None of these seem even remotely relevant to engineering. Moreover, research, discovery and innovation requires letting people having freedom to think. That includes holding unpopular, controversial or politically incorrect opinions.

We are so worried about Evil Government dictating what we can and cannot think that we haven't noticed the current organic trend to prosecute every other person for thoughtcrimes. It's not the jackboot that keeps us on the ground, it's social media, and the public outrage you get when you disagree with whatever's the most popular opinion on a topic this week.

ESR's views particularly on guns, libertarian economics and politics, and AGW, all present pretty standard cases of assuming a frame and fitting all data to that frame. Chopping, discarding, and/or fabricating data as necessary to do so.

That actually directly calls into question engineering validity, as solid engineering is solidly based in reality and a realistic interpretation of facts. Also the ability to discard frames which no longer fit.

My own work and research of the past several years puts a very high significance on both frames (or more generally, models), and on the psychology of interacting with those, with strong emphasis on denial in various forms.

ESR's political views call much of his work into question. I say that as someone who was strongly influenced by much of what he said, and enjoyed a fair bit of it. He's become a tremendous disappointment.

TAOUP has its merits. It's rather like recommending Ted Kaczynski's manifesto as a social-technological critique. It's got some really solid points (see what Bill Joy had to say on it: http://archive.wired.com/wired/archive/8.04/joy.html). But damned if the rest of the author's views and actions don't muddy the waters a tad.

> ESR's views particularly on guns, libertarian economics and politics, and AGW, all present pretty standard cases of assuming a frame and fitting all data to that frame. Chopping, discarding, and/or fabricating data as necessary to do so.

> That actually directly calls into question engineering validity, as solid engineering is solidly based in reality and a realistic interpretation of facts. Also the ability to discard frames which no longer fit.

"Engineering validity"? This is just a dressed up ad hominem. If some technical argument ESR has made is inconsistent or doesn't match up with empirical evidence, criticize away, but his positions on what exactly the Second Amendment means or what the best role of government is can't possibly inform that criticism. It could, perhaps, explain why he's made an error, but it can't identify the error for us.

> ESR's political views call much of his work into question.

Which questions about what work? If you're going to cast aspersions like this, you'd probably best be specific.

> This is just a dressed up ad hominem

No, it's explicitly and purposefully ad hominem, suggesting that a person's lack of judgement in one area leads to questions about his judgement in a related one.

Actually, it's not ad hominem at all.

Ad hominem would be "people named Eric cannot be trusted".

This is calling into question ESR's general credibility, based on his record. That's a character judgement.

I'm also not saying ESR is wrong in all things -- a consistently wrong indicator is useful (read the opposite of what it says). An inconsistently wrong one is maddening: you've got to pay close attention to what it's doing and determine the pattern to its errors. That's the taxing part.

Ad hominem is where you discount an argument based on the person making it. I'm not sure where the name thing is meant to come in to it.

I suppose the fallacy is where the attributes are irrelevant to the argument.

There's a somewhat related comment I'd seen recently which I've found useful:

Nota bene: a fallacious ad hominem only occurs when an accusation against the person serves as a premise to the conclusion. An attack upon that person as a further conclusion isn't fallacious and may, in fact, be morally mandatory.


That's not quite what I'm doing here: I'm leveraging the attack on credibility to discount further statements from ESR. But for numerous reasons of psychology and general reputation, if not a strict formal logic sense, there's a strong rationale to this.

Or: the narrator has been shown unreliable.


Traditionally and in modern accounts, credibility has two key components: trustworthiness and expertise, which both have objective and subjective components. Trustworthiness is based more on subjective factors, but can include objective measurements such as established reliability.

> Ad hominem would be "people named Eric cannot be trusted".

Since you actually wrote this sentence 9 hours ago, it is safe to infer that you really don't know anything about logical fallacies or what you're talking about in general, since you can't possibly have learned all you need to know about them in 9 hours. Given this level of confidence in something that is both wrong and easily checked, why should we trust any of your claims at all?

Or... should we trust you? But not ESR? Would that not be hypocrisy?


"An Ad Hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument."


"Ad hominem is Latin for "to the man." The ad hominem fallacy occurs when one asserts that somebody's claim is wrong because of something about the person making the claim. The ad hominem fallacy is often confused with the legitimate provision of evidence that a person is not to be trusted. Calling into question the reliability of a witness is relevant when the issue is whether to trust the witness. It is irrelevant, however, to call into question the reliability or morality or anything else about a person when the issue is whether that person's reasons for making a claim are good enough reasons to support the claim."


"It is important to note that the label “ad hominem” is ambiguous, and that not every kind of ad hominem argument is fallacious. In one sense, an ad hominem argument is an argument in which you offer premises that you the arguer don’t accept, but which you know the listener does accept, in order to show that his position is incoherent (as in, for example, the Euthyphro dilemma). There is nothing wrong with this type of argument ad hominem."


"An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators' salaries should be increased, one could respond:

"Of course he would say that. He's a senator.

"This wouldn't refute the author's argument, but it may at least be relevant to the case."

ESR's claims about politics : ESR's claims about technology :: your claims about logical fallacies : your claims about ESR

Many (most) people, including many highly respected scientists and engineers, are fully capable of displaying incredible judgment in their discipline yet awful judgment in other aspects of their lives. Should we put an asterisk on papers published by researchers in the middle of messy divorces?

There are some flagrant examples. Peter Duesberg, the UC Berkeley molecular biology professor who thinks the HIV/AIDS link is bogus, comes to mind.

There are also those who tend to know their limits and note when they're out of their depth or area(s) of expertise. So no, that's not a universal guide either.

In the unlikely case you feel most science and engineering is a "related area" to family life, yes.

...but his positions...

His making up shit (or buying in to others' made-up shit) to justify them does, as does his ignoring contradictory evidence and record.

Which questions about what work?

The problem is one of an unreliable narrator. If you cannot trust someone's judgement, and they spew crap, repeatedly, then the odds that they're blowing smoke elsewhere increase.

It's the same reason that lawyers seek to impugn witnesses or call into question credibility. Or, to pick another hobby horse of mine, there are news and media organizations which spew crap. Fox News gets a lot of much-deserved scorn for this, but they're not the only one. Bullshit in media (in the most general meaning of the word: any information delivery system) is something I've been paying a lot of attention to, and I'm rather sensitive to it.

Case in point recently involves a 123-year-old quote I'd seen attributed to J. P. Morgan, the Gilded Age banker. It struck me as curious, and I dug into it. My conclusion: it's a hoax.

The item in question is referred to as the Banker's Manifesto of 1892, or as the Wall Street Manifesto. Almost certainly the fabrication of one Thomas Westlake Gilruth, lawyer, real estate agent, community activist, and some-time speaker and writer for People's Party causes in the 1890s and 1900s. (Pardon the digression: there is a point, it happens to be both fresh in my mind and sufficiently detached from contemporary affairs to be a fair foil.)

Among the evidence I turned up, several contemporaneous newspapermen who'd drawn the same conclusion. Mind that this was a time of highly partisan press, but these were editors of People's Party papers in various locales.

From The Advocate and Topeka Tribune (Topeka, Kan.), 7 & 14 Sept. 1892:

The Great West and one or two other exchanges reproduce the Chicago Daily Press fake purporting to be a Wall street circular. The thing originated in the fertile brain of F. W. Gilmore [sic: should be T. W. Gilruth], who held a position for a time at the Press. He has been challenged time and again to produce the original if it is genuine, and has failed to do so. The thing is a fraud and so is its author, and neither of them is worthy of the confidence of the people.

The following week's issue corrected the typo with an emphasis on why naming and shaming mattered:

We desire to make this correction lest there be somebody named Gilmore who might object to the charge, and because the fraud should be placed where it belongs. Gilruth is a snide, and if anyone who knows him has not yet found it out, he is liable to do so to his sorrow.



From the Barbour County Index, July 6, 1892, p. 1:

If the genuineness of this dispatch cannot be established, it should be taken in at once. If reform writers put it alongside the Huscard and Buell circulars and various other documents of like character, the public faith in the genuineness of all may be shaken. We cannot afford to father any fakes.



(My own analysis turned up other internal inconsistencies within the documents as well, detailed at the reddit link above.)

Much as those late 19th century editors did, a heuristic I've increasingly taken to applying is looking at which sources (publications, companies, politicians, authors, online commentators, monitoring systems) do and don't provide reliable information. There's also a distinction I draw between occasionally being wrong (errors happen) and systematic bias. As the Tribune and Index called out, Gilruth was being systematically misleading. And apparently intentionally.

My issue with ESR isn't that I know he's bullshitting on any one point or another, it's that I don't know when he is, and, as with other unreliable data streams, sussing out the truth is a lot of work for low reward. He's like an unreliable gauge or monitoring system that sends off false alerts when it shouldn't, stays silent when it should alert, and highlights the wrong areas of trouble when it does manage to go off at the right time. You simply start to lose your faith in it.

ESR's problem is he believes his own bullshit.

I've decided I don't need that problem.

I can't help but notice you managed to write something approximating the length of a short essay without once pointing out any "bullshit" in ESR's technical writing, let alone explaining how said "bullshit" must derive from his wrongthink.

I'm afraid I must apologize for failing to make myself clear: it's that his practices call into question his statements in other areas.

I have to confess that I don't have specific instances at hand, for two reasons. One is that much of his more technical writing on programming is outside my own area of expertise. The other is that, given his tendencies, I largely ignore him.

My point, however, wasn't where he is specifically mistaken, but why the traits he exhibits in his rantings on other topics do have a bearing on his engineering judgement.

I do hope that's clear now.

I pasted this link elsewhere in the comments; the rationalwiki page talks about ESR's objectionable ideas:


Now, his attitudes on HIV denialism and IQ and race don't deal with engineering, but they're pretty objectionable. And, you know, we can get good engineering writing and thinking from a lot of places. ESR doesn't have a monopoly on writing about operating systems. I'd rather promote the writers who don't carry around a ton of wrong/distasteful baggage.

I wrote a bit of that page, and I still listen avidly to his technical expertise, which is qualitatively greater than most people's. But the rest ...

I continue to take everything ESR says about technology seriously. His opinions on everything else ... are an instructive example in the non-transferable nature of expertise.

That's like saying DEK's belief in the supernatural calls TAOCP into question; totally irrelevant.

Sorry, but DEK?

Donald E. Knuth - The Art of Computer Programming

> ESR's political views call much of his work into question.

Presumably this question, whatever it is, can be answered by looking at his work. Do you think ESR's technical work and technical writings fail to stand up to scrutiny?

It definitely gives me pause. ESR clearly doesn't know when he's out of his depth (classic Dunning-Kruger).

I'm not enough of a programmer to judge his programming texts, though I am enough of a sysadmin to find his Unixy, sysadminish stuff generally valid.

I've found CatB itself aging poorly and question a number of the assumptions behind it, particularly as concerns anthropology. It seems shaky. Though I think the general principles behind Free Software and the open source model have their merits. Just, possibly, not quite those ESR describes.

ESR expressed views on all sorts of items which are unknowable and much debated; programmers cling to this idea that there's a right and a wrong (protobuffers not JSON! One True Way vs TMTOWTDI! Emacs vs Vi! JavaScript is a reasonable choice etc etc). Generally there's not, there are just ideas and opinions without hard data. Systemd contradicts pieces of the original article.

So you tell me: how would we know if his technical work stood up to scrutiny? If I have experience that agrees, does that mean it does? What if my experience contradicts it?

Indeed, this is the critical question. If L. Ron Hubbard secretly but accurately predicted the lottery numbers for last week, it doesn't mean we have to go back and change them. Things can seem wrong/impossible/against your worldview, but that sense doesn't help quite so much as _just looking_.

Fundamentally, calling things into question has little value until we generate an answer to that question. Considering it's relatively easy to judge him on the technical work, why not do so?

You say you distrust the author's views and opinions because of his politics.

I say: good! You should never take an author's work at face value. Every bit of nonfiction you read should be read critically. Nobody's judgment is infallible – not even Nobel prizewinners.

No, not because of his politics, but because of (among other elements) his political argument methods.

If ESR would pose credible arguments and facts, exhibit critical thinking facility, not stoop to denigrating his counterparts, etc., I'd find his points of view more substantive.

But he does none of that, and, rather, the opposite.

I do seek out contradicting evidence, among my mantras (and a conspicuous posted note to myself) is "seek to disprove". I've changed my mind and/or views on a number of significant points and in some cases major views over the past few years. I do that based on evidence and argument, though. It's not a casual process, and doesn't happen easily.

But being able to admit I'm wrong is a large part of it. Also: not insisting on being wrong (valuing belief consistency with time over consistency with observed reality).

Questioning everything is, however, rather exhausting. Developing heuristics for when to start digging in to apparent bullshit claims helps a lot.


"The fool doth think he is wise, but the wise man knows himself to be a fool."

It's disingenuous to equate huge numbers of people disagreeing with you on the internet to the government suppressing you with law or force.

ESR is free to speak his beliefs in public, and in return people are free to criticize him, not recommend his books, refuse to invite him to conferences, etc.

Freedom of speech is about prior restraint, not immunity from consequences.

Is it now? When you can get fired from your job over your private beliefs, when even a Nobel prize winner can have his (and her - completely innocent - wife's) career ended on the spot, when you can lose your home over disagreeing with "status quo", I say something is wrong.

Maybe this is what democratic - as opposed to totalitarian - oppression looks like. When you have to avoid discussions out of fear you'll get fired and blacklisted in the industry, this suddenly doesn't look so different from what refusing the government's "truth" looked like several decades ago.

The libertarian alternative to discrimination laws is people boycotting those they find reprehensible.

(Of course, if it's a libertarian favourite being boycotted, the same libertarians reliably lose their shit - cf. Brendan Eich - but anyway.)

I'm glad I'm not the only one who's noticed this

> None of these seem even remotely relevant to engineering.

Believing things that have no scientific merit in favour of things that have plenty of scientific evidence would be a huge concern in an engineer.

Given his (mostly memory-holed) misogyny, racism, and homophobia, which lead him to discard people's views in a most un-meritocratic fashion, I think you have another concern.

You have the freedom to think........ what the hive mind wants you to think.

That's a surprise to me, I hadn't heard any of that. Not that I've followed things that closely.

What I found so far is [1], in which he seems to just be doing a bit of a "show me the data" thing wrt global warming (it's fairly old, I guess, in his defense). In [2] he's definitely saying IQ is race-related, and gender-related to a lesser extent, on my quick reading.

I quite like that he doesn't mince his words, sugar-coat things, or seem to take any notice of popular opinion/political correctness. Not agreeing with him, but I find that refreshing.

[1] http://esr.ibiblio.org/?p=1436 [2] http://esr.ibiblio.org/?m=200311

Regarding [2]. So I guess this quote is the problem:

> And the part that, if you are a decent human being and not a racist bigot, you have been dreading: American blacks average a standard deviation lower in IQ than American whites at about 85. [...] And yes, it’s genetic; g seems to be about 85% heritable, and recent studies of effects like regression towards the mean suggest strongly that most of the heritability is DNA rather than nurturance effects.

So is the problem with him saying this that (a) this is factually false or (b) that it's an inconvenient fact that should be glossed over? He seems to be saying that it's factually true since he obviously read it in some or other study. If it is factually true it's disingenuous to label him as a racist.

I guess everyone and every group finds certain truths uncomfortable. It's especially sad that the theory of evolution seems to make literally everyone uncomfortable.

"Of course humans and chimps have a common ancestor! We have looked at the genetic code, and found that more than 95% is shared. Give up, it's over." The right will hate you for saying that.

"Of course there is inherited variation in intelligence, no matter how you define it! Otherwise evolution, in particular evolution of intelligence, could not possibly work. Give up, it's over." The left will hate you for saying that.

"Of course our moral intuitions come from game theory, not apriori reasoning!" And now everyone hates you, both the left and the right.

Flynn Effect gets you 20-30 points in the same gene pool, therefore via non-genetic factors.

If you see a 15 point difference and immediately attribute it to genetics, you're making an unwarranted assumption.

Re your deleted comment:

> I guess everyone and every group finds certain truths uncomfortable. It's especially sad that the theory of evolution seems to make literally everyone uncomfortable.

In context, this is a claim that race and IQ is a consequence of the theory of evolution. If you don't see this, then you don't understand communication with humans, or are being disingenuous.

Evolution doesn't necessarily have a direct link with intelligence; nor even does natural selection, which is the theory that, I think, you're basing your argument on.

That's a pretty weak objection. It's implausible that intelligence wouldn't be influenced by any evolved traits.

I think the problem is, its a bit more complicated than that.

There are a myriad of factors that might influence that IQ score, and I haven't looked at the studies. Lack of wealth/opportunity I think is definitely a factor in the healthy development of the grey matter, as is access to good education.

Long/short, not sure. I'd be surprised if the colour of your skin objectively made a difference in IQ. Same with gender. Though, if the latter is true, I would probably use it with great exuberance on certain people I know e.g. my ex.

I wouldn't be surprised if skin colour made a difference to IQ. But I wouldn't be surprised for social and political reasons, not genetic ones.

It's actually not possible to make a genetic argument, because genetically there are no non-trivial markers that robustly correlate with "black" or "white."


Suggesting otherwise is simple ignorant bigotry.

"heritable" is the weasel word here. It is applied when the kid matches the parents - but the leap from there to "genetic" is unwarranted, because parents and their children tend to be in the same social circumstances.

For comparison, the Flynn effect demonstrates you can get 20-30 points difference in the same gene pool, with the difference being social circumstances. So any difference under 30 points doesn't necessitate invoking genetics.

ESR wrote, among other things, this:

"One was: their skin color looks fecal. The other was: their bone structure doesn’t look human. And they’re just off-reference enough to be much more creepy than if they looked less like people, like bad CGI or shambling undead in a B movie. When I paid close enough attention, these were the three basic data under the revulsion; my hindbrain thought it was surrounded by alien shit zombies."

and http://esr.ibiblio.org/?p=4256

To clarify, I was careful not to call ESR a racist. (I was attempting to describe his positions in terms that I think he'd agree with.) And I haven't seen anyone else explicitly do so on this thread, which is good going for HN.

"If it is factually true it's disingenuous to label him as a racist."

Not true. Something can be factually true but uninteresting or of no consequence; pushing that 'truth' forward as something that others should acknowledge betrays an agenda beyond just 'the search for truth'. (Note that this is not a judgement on ESR per se, just a comment on your specific point.)

But it is interesting and of consequence. Especially as it highlights how truth ever becomes the slave of fashion. If it is demonstrably true that race and IQ are linked, then stating that as a fact does not make one a racist.

"g is 85% heritable" does not actually mean, imply, or even give strong evidence for the likelihood that "racially-linked genes cause the racially-correlated outcome differences in IQ tests." A trait that's 85% heritable is actually a complicated mix of many different biological factors, whose various causal powers (abilities to cause a specific outcome if interfered-with) we simply don't know, except that 15% of them don't seem to pass from parent to child in twin-studies (and I would certainly hope that separated-twin studies were actually done at all, because that's Genetic Causality 101 stuff).

Thus, if someone wants to claim that "black people have lower IQs because they are black", they need to dissolve the concept of race entirely and not only find much broader evidence than studies on African Americans who are, after all, something like half "white", but in fact just cut to the fucking chase and locate the relevant genes.

But of course, if you located the genes and alleles that make some ethnic groups smarter or stupider, you could invent a gene therapy that would make everyone as smart as the smartest ethnic groups, or at least understand what sort of trade-offs are involved in genetic treatments of that sort (ie: Africans often carry a gene that helps them resist malaria but can cause sickle-cell anemia if you get two copies of the recessive allele). If you located the genes and alleles, then within 10 +/- 5 years (depending on how quickly your treatment gets funded) we could eliminate all genetically-caused racial gaps in intelligence.

This, of course, would greatly displease the racists, who don't actually want people to get smarter; they want to justify a peculiar social hierarchy. This is why you always see certain people waving their hands at "racial IQ gaps" and "heritability" but not funding research into intelligence-enhancing gene therapies.

I'm not sure what you mean by "fashion" in this context.

Let's say that some kind of link between race and intelligence were proved and universally acknowledged, what possible positive outcome could entail?

>Let's say that some kind of link between race and intelligence were proved and universally acknowledged, what possible positive outcome could entail?

Intelligence enhancement would become a cheap, simple, universally-available gene therapy, since we would have found that it only relies on a few alleles of a tiny number of genes, so simple that it can differ significantly between ethnic groups that can still interbreed, rather than being a complex, many-gene feature that evolved chiefly among the species as a whole.

The first two are scientific, rather than political, questions, aren't they?

Political in the sense that they have been politicized. And in the sense that a lot of people seem to believe the questions are Settled For Good, and anyone who disagrees with them is Just Plain Ignorant and/or Lying For Personal Benefit.

They are scientific questions if ESR publishes or attempts to publish scientific papers to support his opinions, or otherwise attempts to represent himself as a scientist.

Is that really it?

It's unclear whether you're suggesting his views are not sufficiently objectionable, or you're suggesting he has other views that are very objectionable that weren't enumerated above.

> ESR denies AGW, believes that race and IQ are correlated, and was strongly in favor of the Iraq war

The first two are healthy amounts of scientific doubt and the third is a political opinion. (Though it could be highly dependent on how these views are expressed.)

I was expecting hate speech or Nazism, instead I see overreactions to opposing opinions people confuse with moral failings.

He's pro 'gun-rights'.

What's wrong with this? Being completely serious here, what is wrong with citizens owning firearms?

I am all about reducing gun violence, but if you want to do that you have to do something to stem the tide of illegally acquired handguns in areas of concentrated poverty. That's where a lot of your gun violence comes from.

The recent happenings in Charleston are unicorns. Unpredictable and very rare events that you can't actually make a special law for, without the G-men physically going to every household in America and confiscating firearms. That is a policy I assure you you don't actually want.

Because guns scare people. Why do they scare people? Because mostly they're just seen either in the hands of cops, grunts, or criminals. Most folks (especially here) aren't hunters, or are so far removed from rural life that they have no experience of firearm-as-tool.

On top of that, there is big business in demonizing guns--related to the big business (I suspect) in demonizing fighting, aggression, machismo, independence, or what have you.

I'll be the first to admit that there is no peaceful practical purpose outside of sport or investment for owning firearms in an urban area.

That said, it never ceases to amaze me that in an age of such universal and pervasive surveillance--an age of such unaccountability of authority figures in the .gov and .mil--that folks here are still more than happy to trash on the final safeguard they've got if things get too bad.

By the time things get that bad we've already crossed my threshold for "final" safeguard.

Private gun ownership isn't a safeguard against "if things get too bad".

It's also unrealistic and implausible to imagine that the .gov and .mil are going to make "things get too bad".

That is no more likely to happen than a return to some sort of monarchy or crowning of an American king/queen.

Private gun ownership isn't a safeguard against "if things get too bad".

Really? Because it caused us a lot of trouble during our occupations in Iraq and Afghanistan. If anything, that pretty much proves it as a check on US doctrine.

It's also unrealistic and implausible to imagine that the .gov and .mil are going to make "things get too bad".

Fifteen years ago, even in the wake of Ruby Ridge and Waco, I might've been tempted to agree with you. Unfortunately, there's been a whole lot of history since then, yeah?

return to some sort of monarchy

What are your betting odds on Bush III, or Clinton II, again? Your countrymen are apathetic and easily-manipulated when it comes to politics.

It doesn't make sense to me to compare the military occupation of those countries with the paranoid proposal that things "could get too bad" in the U.S. Too much seems too different about those two to be meaningful; I could point to the strict gun laws in most of the western nations and ask, "why haven't they degenerated into `could get too bad'?"

About the "whole lot of history since Ruby Ridge and Waco," well, I don't see any specific pattern of things getting "too bad". I'm not seeing the history you apparently are.

About the 2016 presidential election, couple of things: the presidency's just a job, and a short-term one at that, and the president doesn't have much power. Presidents run the country, they don't rule it.

What other countries are doing/not doing is a red herring--their people are not ours, their demographics are certainly not ours, their pain points are not our pain points. They additionally don't have the same political foundations and history that we do.

We have seen a continual increase in the militarization of police, the surveillance and fining of private citizens, the violation of privacy, and the bullying and exploitation of the poor.

If you're not seeing the history that I'm looking at, we're considering different news sources. I'm thinking of the Snowden leaks, the killings of citizens by police without cause (some in my own city, sadly), and so forth. I'm thinking of the delightful interplay of the prison-industrial complex with the justice system.

As for the presidency--we've seen pretty much directly the actual effectiveness of the executive branch in causing shenanigans, both in George Bush's administration and Obama's.


We're simply going to have to agree to disagree on this.

"What other countries are doing/not doing is a red herring" ..... but you brought up Iraq and Afghanistan.

I agree about the bullying and exploitation of the poor, I just don't think that's anything new.

I think what's new is that now, techie types like you and me are learning about the police-based murders of black people, and how hard life is for the poor. That stuff's always been going on, it just didn't make it onto our radar until very recently.

but you brought up Iraq and Afghanistan.

Note that example was done specifically as a study of armed irregulars vs the US--your examples of other Western states were in a very different vein. :)

so you are saying there is a correlation between guns-per-capita and gun-deaths-per-capita?

No, I'm (foolishly) responding to an Onion article of all things. The claim is being made that the U.S. is the only place where mass killings with firearms happen. I'm merely stating this isn't true.

Just to clarify, here's the headline (my emphasis):

"‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens"

More guns == more murder.

> "For each 1 percentage point increase in proportion of household gun ownership," Siegel et al. found, "firearm homicide rate increased by 0.9 percent."


And did the overall homicide rate diminish? If it just moves the weapon of choice, it's largely irrelevant.

I'm not sure I can say there's something absolutely wrong with it, but my opinion is that citizens shouldn't be allowed to own firearms. I live somewhere (UK) where they can't, and there is very little gun violence. That's not to say, of course, there aren't problems, but I just think - on balance - the world would be better off with fewer killing machines in it.

I'm not pro-gun, but many countries like Canada have pretty high gun ownership yet have low gun violence numbers. I think there is more to the problem than just disallowing private gun ownership would solve. It's a band-aid fix in my view.

It's kind of like the prisoner's dilemma. Who loses their firearms first, citizens or criminals? They lose.

I'd also like to say something about societies with ubiquitous government public and private surveillance, chilling of freedom of speech and repressive cultures, but I live in the USA so I can't throw stones.

Why wouldn't he/she want some sort of mass, nationwide gun buy-back program?

It's worked in Australia: http://www.slate.com/blogs/crime/2012/12/16/gun_control_afte...

When they constantly mix technology and politics, yes it does matter. Open Source is a political movement, and that requires taking ESR's version of it with a grain of salt.

Who did a lot of the popularization of that term again? Who cofounded the Open Source Initiative?

It's in no small part due to the politicking and goodthink of ESR that we have Open Source as it is today.

Open Source is not a political movement; you're thinking of Free Software.

That's a bizarre statement in itself. Of course it does.

Except if he writes anything about politics, economics, society, etc -- in which case his political views have quite a bearing on the validity or usefulness of what they write.

Can't you evaluate his claims on their own merits? Unless he's saying "trust me, I won't expose my reasoning but it's solid", or unless you are outsourcing your thinking, such things would seem to be irrelevant.

>Can't you evaluate his claims on their own merits?

As with most things, it takes too much time to do that in detail for every argument one hears.

Thankfully, with the magic of the brain's pattern matching and previous exposure to BS arguments, we don't have to.

We can eliminate tons of opinions from the list of "potentially interesting to investigate" by their mere showing of certain characteristics we already know lead to bogus thinking. ("Hey man, I made a perpetual motion machine. Wait, where are you going? Don't you wanna hear how I made it? You're so close minded" -- or "I don't believe in climate change, it's all bogus. Here's what I think about educational reform...").

Sure, we might get a few false negatives (some good suggestions lost because their originator is a bigot, etc.), but the system overall works wonders for improving the signal-to-noise ratio.

>You are conflating a property of the claim itself (violating the laws of physics) with a straw-man property of the person making the claim. They aren't the same thing - one enables a simple proof by contradiction, the other is ad-hominem.

I don't see why you think I haven't considered that.

My whole argument is based on the idea that ad hominems are perfectly fine in some cases.

When? For people with a bogus claims record.

How? Under the observation that a person making some bogus claims is also likely to make more bogus claims -- and thus the person can be dismissed as a general bogus-claims-maker.

We might lose some good arguments he might make here and there, but life's too short, and dismissing the person completely gives us time to listen to people with a better "claims" track record.

In essence, this is the most basic form of filtering, which everybody does (more or less well), and which you undoubtedly do as well.

>I call your "climate change, it's all bogus" claim a straw man, and indicative more of your thinking than of reality, because that's not even a claim that skeptics make.

Actually, lots of "skeptics" make it. Some make a lesser claim, that it's not human-caused, but others claim it's not happening altogether. There's even a term for that:


>I get the impression you are just looking for ways to dismiss arguments which make you emotionally uncomfortable.

Nope, I'm looking for ways to dismiss arguments which waste my time.

I'm going to dismiss everything you have to say because it doesn't look like you know how to properly respond to threads when there's a comment cooldown timer.

>I'm going to dismiss everything you have to say

Feel free!

>because it doesn't look like you know how to properly respond to threads when there's a comment cooldown timer.

I actually do, but I'm too lazy to click through to open the message on its own...

You are conflating a property of the claim itself (violating the laws of physics) with a straw-man property of the person making the claim. They aren't the same thing - one enables a simple proof by contradiction, the other is ad-hominem.

Further, your "climate change, it's all bogus" ad hominem isn't even a real claim that skeptics make, and is more indicative of your thinking than of reality.

I get the impression you are just looking for ways to dismiss arguments which make you emotionally uncomfortable. Consider religion instead, it's a lot more unapologetic about simply declaring who the heretics are.

They absolutely do, but I get your point that that's not necessarily so in every single case. ESR's page on RationalWiki gets it right, though, when they call him a "stopped clock" -- right every once in a while (and by coincidence):


How then do you evaluate the validity of writings by an author whose political views you do not know?

I don't, but I also don't feel the need to.

If, however, some other tech blogger I liked started writing about "kill the gays," well, I don't need to overlook that just because I like their tech writing.

Like I said elsewhere, none of these folks has a monopoly on good engineering writing/technical thinking.

I can read, and ultimately promote, writers whose writing I wouldn't be ashamed to share with all of my friends.

Personally I found it disappointing as it seemed to take the view that the only UNIX in existence since 1992 has been Linux.

A better title would have been the Art of Linux Programming or the Art of Open Source Programming.

Thanks for the book. Unlike most UNIX fans, he's unusually honest in his critiques of it and even references material from the UNIX-Haters Handbook, haha. His comparisons to other OSes are fair, except on the robustness of VMS: the main reason many companies kept using it despite its uncertain future. Downloading it to re-read and see if I find some more enlightenment as a system designer.

Chapters 1 and 2 (Philosophy and History) are required reading for all junior devs and aspiring hackers.

> The ‘Unix philosophy’ originated with Ken Thompson's early meditations on how to design a small but capable operating system with a clean service interface.

Except that a program-to-program interface based on formatting and parsing text is anything but clean.

I thought the same, but what are the alternatives? Use a format like JSON? What happens when the format can't express what I want? The other issue is that the output becomes deeply intertwined with the program because it's now an API, although the same could be said about text output.

PowerShell gives a good alternative: the things passed between the programs should be objects. It requires, of course, that everything is communicating with a common runtime, but for a shell that is not the biggest burden.

What do you mean by "text"?

Semi-rigid, informally specified output meant for easy inspection by humans. The output of `ifconfig -a` for example. (That's the worst offender, most other commands have a better-formatted output.)

As a general rule of thumb: if you have a command pipeline that would break because programs in it handle locale differently, it's most definitely a text-based program.

I like the idea of PowerShell: programs pipe formatted (binary) objects between them, and there exist commands to print them out in a nicely formatted way to the terminal. But it's objects. I can extract information through named fields instead of through magic indexes and regexes in awk/sed/perl one-liners.

IOW, proper object (meta)model, not sucky text streams. Text is great for long-term storage, but awful for data in flight. UNIX ignored this distinction and decided to use text for everything.

env -i ifconfig -a | ... solves a lot (like locale or other special env vars), but yeah...

Exactly. That's one of the biggest problems with text-only interfaces.

Maintaining IPC/RPC as text has the significant advantage of keeping the programs involved honest, approachable, and easier to debug. Just being able to see the interface by running the program (or reading the config file) makes it a lot easier to learn[1], and bugs in text formats can often be seen visually (or marked in an editor).

The alternative - binary structures - requires complex data definitions where you have to care about stuff like exact byte lengths when you have to convert between big/little endian. Some people like to claim that binary is faster, but your bottleneck is not going to be strtol(3). As for advantages in parsing - you still have to parse the binary structure you just read. I suspect most people who think "parsing text" is difficult are confusing the parser with the lexer; the latter is the only part that changes when you switch between text and binary formats.

In the long run the initial investment in a text interface can be cheaper than wasting time debugging binary structures. Even more important when thinking long-term: it is a lot easier to inspect a text interface from the outside without the consent or availability of the original author. If you have an old program binary that was used in production for years and no source code or documentation, which would you rather try to debug? An opaque binary file? Or something in JSON or INI format?

[1] however, this is not an excuse to skip proper documentation
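To make the comparison above concrete, here's a small Python sketch (the record layout and field names are invented for illustration, not taken from any real program) of the same record in a text format and in a packed binary structure. The text form can be inspected with cat/grep and edited by hand; the binary form is meaningless without out-of-band knowledge of byte order and field widths:

```python
import struct

# Text form: self-describing, greppable, hand-editable.
text_record = "name=disk0 size=4096 flags=3"
fields = dict(pair.split("=", 1) for pair in text_record.split())
size = int(fields["size"])

# Binary form: compact, but opaque without the exact layout:
# "<" = little-endian, "8s" = 8-byte name, "I" = u32 size, "H" = u16 flags.
packed = struct.pack("<8sIH", b"disk0", 4096, 3)
name, bsize, bflags = struct.unpack("<8sIH", packed)
```

Both round-trip the same data, but only one of them can be debugged years later without the original author's struct definition.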

"Text" in a computer is bytes + metadata about those bytes. Without that metadata, it's just bytes. Exactly that is what unix pipes are: a streams of bytes, not streams of text.

Ironically, the tools that are happiest about these streams are of course those tools that don't care about text but just stream bytes. The pain occurs when the tools actually need to parse the text.

I would agree much more with the unix design, and think it was a much better implementation of the unix philosophy, if the glue layers were strictly specified or demanded that programs could negotiate things like encodings.

Somewhere between "just bytes" and "binary structures" there has to be a reasonably simple format for program communication. Structured text, or text with a small metadata header (like BOM, but done right), for example.
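A tiny Python illustration of the bytes-vs-text point (the byte string is a made-up example): the same bytes only become text once an encoding - the missing metadata - is chosen, and choosing wrongly silently produces garbage:

```python
# A pipe delivers bytes; "text" exists only once an encoding is chosen.
raw = b"caf\xc3\xa9\n"            # what actually flows through a Unix pipe
text = raw.decode("utf-8")        # metadata (UTF-8) supplied by the reader
mojibake = raw.decode("latin-1")  # same bytes, wrong metadata
```

Nothing in the pipe itself tells the consumer which of the two decodes is correct - which is exactly why unnegotiated encodings make text pipelines fragile.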

see locale(1) and locale(7)

This is why you use the predefined character classes in regular expressions. For example, setting LC_CTYPE changes the meaning of '[[:alpha:]]', and sort(1) respects LC_COLLATE. A lot of work has already been done to solve these problems.

> A lot of work has already been done to solve these problems.

Not nearly enough. You write a script which expects a 'file' field at some place, but it will break in a French locale because 'file' will be replaced with something like 'fichier'. Sure, you can run under the C locale, but then some other script, written for a French locale, will break. Not to mention date formatting, number formatting, etc.

In comparison, binary or XML (!) become very attractive.

(apologies for the late reply)

You really shouldn't ever run under the C locale unless you need to support older (pre-locale/Unicode) software or data of the same vintage. Doing so defeats the entire purpose.

You're confusing issues here: locale support lets you parse text. Your "file" example is not relevant, and would be part of something like gettext(3). Yes, gettext can be configured from the locale, but that is a separate feature from what I was talking about.

The locale support is how you automatically handle lexing the input stream, which is why I brought up the character classes that, unfortunately, most people seem to ignore, resulting in broken text support.

If you properly support locales, your program will support the user with LC_ALL="fr_FR.utf8" typing their floats as 3,14 automatically.
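A minimal Python sketch of that (assuming the fr_FR.utf8 locale is installed on the system; the fallback branch covers machines where it isn't):

```python
import locale

# Parse a user-typed float honoring their locale, as a locale-aware
# program would. fr_FR.utf8 may not be installed; fall back if so.
try:
    locale.setlocale(locale.LC_NUMERIC, "fr_FR.utf8")
    value = locale.atof("3,14")   # French decimal comma
except locale.Error:
    locale.setlocale(locale.LC_NUMERIC, "C")
    value = locale.atof("3.14")
```

Either way the program ends up with the number 3.14; the locale machinery handles the lexing of the user's notation.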

The fact that your program expects a field name to be 'file' is unrelated, but is something the user could learn by 1) reading your man page, 2) reading your text output and copying it (if appropriate), or 3) reading your error messages that you should be generating when you see 'fichier="foo.txt"' when you were expecting 'file="foo.txt"'. Note: if you used [:alpha:] to lex the input, you would automatically be able to extract the incorrect word for your error message when a user in LC_CTYPE="jp_JP" enters 'ファイル="foo.txt"'.

I believe the problem here is related to a confusion of lexing with parsing. The various locale features solve the lexing problem, but you still have a parsing problem no matter the syntax of how the data is serialized. You have to check that /[[:alpha:]]+/ was 'file' not 'fichier' or 'ファイル', just the same as you would have to check the output of your XML parser that the <file> tag was not <fichier> or <ファイル>, just the same as you would have to check that a binary field was 0x0003 or whatever the flag value was.

You may be looking for a way to automatically discover the semantics (schema) that another program expects - and that would be a nice feature - but that is generally orthogonal to the syntax used for IPC (much like how XML can be converted to YAML/JSON/BSON/etc.). It is also a far more complicated feature, and I'm not sure it can ever really be solved (halting problem, possibly), but maybe a "good enough" solution could be created.

Fully agree with you here. I wanted to also add that even the 'parsing' of text is easy using standard tools like awk, sed, etc., rather than writing custom code to debug binary protocols. In addition, with formats like JSON, it is easily extensible without worrying about where new data fields are added, as long as backward compatibility is maintained - i.e., if designed correctly, the server can be upgraded to accept new fields while still working with older clients.

At my place of work, we have a client relationship with a company like ours in a neighbouring country. They develop software that is of much use to us, so we pay them a license for it. They are nice people, but goddamnit, there is one thing they just never got right. They only support one platform, one that is certified UNIX, yet no matter how severe the error, their CLI tools and scripts exit with 0. I'm the guy who writes some smaller tools and scripts on our side integrating with their software, so you probably understand why I get a bit upset about this at times. Still, I enjoy my work, and as I said they are nice people, and they are also a quite small team, so I don't want to burden them with these concerns when there are other things that our company needs from them more.

Anyway, I've been with my company for a few years and soon my contract expires and I'm going to study the field our company is in and get a degree in that, then I'm going to apply for a position doing our core business. I would still like to be involved with the software my current position is touching on, though, if possible. (Our company has 1000+ employees and several different sub-sections, so even though I might get back into the company, it's not a given that I'll be working with the group of people I am now even though I'd like to.)

I also sometimes think that if possible, perhaps I'd like to work for that other company in our neighbouring country for a few years and be on the dev team of the software. After all, I have experience from the user side which the dev team has not and the dev team has seen some of the tools I've made and a couple of the guys seemed to think that some of that stuff was pretty decent.

This can be a very good fit. Smart companies hire their customers' best engineers with the most familiarity with their product. The best, smartest feedback comes with that package.

This should be taught in any programming class.

> Rule of Diversity: Distrust all claims for “one true way”.

Although, does the Python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one?

> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

I wonder what Rob Pike has to say about OOP or Java; I wish I could listen to it.

Also, it says that text is a good representation of data, but I think he meant it as an intermediary. I don't think XML or HTML are really good choices when you see all the CPU cycles spent parsing them.

> Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

I prefer this rule to the "no premature optimization" rule.

>I wonder what rob pike has to say about OOP or java, I wish I could listen to it.

“object-oriented design is the roman numerals of computing.” - Rob Pike (http://harmful.cat-v.org/software/OO_programming/)

Some other gems from that page:

“Object-oriented programming is an exceptionally bad idea which could only have originated in California.” – Edsger Dijkstra

“The phrase "object-oriented” means a lot of things. Half are obvious, and the other half are mistakes.“ – Paul Graham

“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy.” – Eric Allman

“The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” – Joe Armstrong

When I see the never-ending complaints about object-oriented programming, I'm reminded of Bjarne Stroustrup's quote: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." I think that applies equally well to software design concepts.

> “object-oriented design is the roman numerals of computing.” - Rob Pike

Which is weird because Pike's Rule 4 is one of the fundamental principles of OO design.

I would love to throw those quotes at any programming class that promotes OOP.

They're only relevant for Simula-like class-based OO. Message passing OO is actually a good and criminally underrated paradigm.

Message passing as in argv? That's at a very high level; that's not really underrated nor so life-changing.

> does the python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one ?

The irony is that Python has more ways of doing things in general and often the choice is very much non-obvious, compared to a language like Ruby which is supposedly embracing TIMTOWTDI. In other words, that Python rule doesn't match reality and I've always found it funny.

Is it the Python language or the Python ecosystem you are referring to?

The statement in question is from The Zen of Python and is more of a guiding principle for the design of Python than a rule.

The usual observation with Python is that it can have many libraries and frameworks which are quite similar and choosing one can be hard (e.g. web frameworks). This is a function of popularity and not something which can fairly be subject to that principle.

On a different note: missing is the Zen of Python's following line:

  Although that way may not be obvious at first unless you're Dutch.

It's both the language and the ecosystem.

The language itself is a prime example of a language with features that aren't orthogonal. For example, count how many features in Python are solved in other languages just with proper support for anonymous functions.

Python's support for closures is perfectly fine. Its anonymous function support is crappy, but it doesn't stop you from doing what you want, it just looks as ugly as sin.

Python only lets you have one expression in a lambda - the return value. In languages with nicer anonymous function support I can make an anonymous function span multiple lines and branches, e.g. in C# you could assign to a delegate something like myObj.stringToHash = (s) => {int x = 0; foreach(var c in s) x += ... ; return x;}

I find python's restriction in this case fairly strange. If I wanted to do that in python I'd have to just declare a function body for what I want to put in the anonymous function elsewhere, which defeats the purpose.

Functions can be defined in a nested way, setting up closures. It isn't that different from what you wrote to do:

    def some_method(...):
        avar = aval
        ...  # (some code)
        def throwaway(x, y):
            something_else(avar, x + y)
        some_obj.somefunc = throwaway

The major difference is that you do the function definition just prior to the assignment. The "namespace pollution" is literally limited to the scope of some_method, so you have an extra local var in a function somewhere. The semantics of Python mean that a new function object and closure are created each time some_method is called, and the closure is later GCed as any other object would be, so there is nothing fundamentally different between the "named" function and the "anonymous" function in your example, except a single local variable.

That's what I meant. Defining the throwaway function directly before the assignment is a good compromise at least.

Well, if you need multiple lines, you probably don't need your function to be anonymous. (Your example is just `sum`...)
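For what it's worth, the character-sum "hash" from the C# sketch above does collapse to a single Python expression (the name is hypothetical, mirroring the earlier example):

```python
# One-expression version of the C# stringToHash delegate.
string_to_hash = lambda s: sum(ord(c) for c in s)
```

Which is the point: bodies that reduce to one expression fit a Python lambda fine; only multi-statement bodies need a named nested function.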

I've needed it many times in the past where the function I'm assigning can't be done as a one-liner like that.

I thought the opposite for Ruby, for instance all those aliased methods on classes.

Perl is arguably much closer to the Unix philosophy than Python, for better or worse. That said, the "one obvious way" in Python is not exactly the same as "only one true way".

The rule of composition is an important counterpoint to the rule of diversity. Unix's most important feature is treating everything as a file. That interface allows a lot of freedom in composition, but every process has to conform to exactly the same interface. Perhaps not surprisingly, TCP's stream-based interface has also allowed a great deal of freedom in composition in networks.

> I wonder what rob pike has to say about OOP or java, I wish I could listen to it.

12 minutes where Pike talks about a few things, that seem to be wrong in our software world: https://www.youtube.com/watch?v=5kj5ApnhPAE

"Although, does the python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one ?"

I don't think so. To me, Python's design rule implies there should be one clear path to solve a specific problem, and generally you'd also want to stay as consistent as possible with other similar solutions.

The "one true way" argument is to force everything to fall into a pattern even when it's a less than ideal way to solve a problem. An example would be trying to force everyone to use the same language/database/framework combination in a company for everything. Sometimes you want to be flexible and use the right tools for the job (while also accounting for the cost of deviating from the general accepted standard in making that decision).

> Although, does the python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one ?

Nope, that's a design guideline, it's not taking choice away. A rule like "Python is the only language you'd ever need" would contradict it.

I think you'll find your answer to the Rob Pike question by using The 'Go' Programming Language

Rob Pike doesn't know Smalltalk, Self, or Common Lisp/CLOS--why should you care what his opinions are concerning OO as a paradigm?

Funny how HNers have such a high regard for his opinion in a domain where he is well known for being skeptical, and yet disregard his opinion, as a UNIX co-author, that UNIX is well past its due date and it is time to move on.

He's not a UNIX co-author at all, he joined Bell Labs in 1980.

Except he contributed to many of the userland tools and was part of the UNIX working group, which does make him a co-author, even if he didn't write the first kernel lines.

I must admit you have a strange definition of "co-author".

I must admit you have a strange way of ignoring Pike's contributions to UNIX's initial GUI architecture, his book with Kernighan about the UNIX programming environment, and other contributions.

Please don't twist my words. I'm just saying "co-author" has different meanings for you and me.

edit: Really? Downvotes? Not cool.

I didn't downvote you.

As for co-author, the Cambridge dictionary says:

"To write a book, article, report, etc. together with another person or other people:"

Which, given Pike's participation in what defined the initial UNIX graphics userland, makes him a co-author.

That's where we disagree. You think the Blit is one of the things that makes UNIX UNIX, while I don't. We have different interpretations of "co-author". I don't see any further benefit in discussing that.

Most developers touting OOP don't know those languages, so why should we care what their opinions are concerning OOP as a paradigm?

If you exclude one person because they disagree with X and don't know Y, then you must equally disregard all who agree with and disagree with X, because they don't know Y. Otherwise you're just cherrypicking :)

Fine by me, since those developers are doing so on behalf of systems like Java and Python, which fall far below the mark.

However, it has become quite trendy to bash OOP itself, especially here on HN, without acknowledging that there are less awful ways of doing it that just aren't mainstream. Try making similar blanket criticisms of FP here using Python (why, it has lambdas and list comprehensions, after all -- isn't that enough?) or some other impure language and see what happens.

He definitely knows Smalltalk and Lisp

<voice type="Lucky Charms Leprechaun"> That's magically elitist! </voice>

When making a microservice application you have to choose between a text (JSON) or binary (Thrift) interface. You could argue that the Unix way is to make it JSON until it becomes a performance problem.

This is a false choice - you can serialize/deserialize Thrift objects to/from JSON. I think the choice here is between having a clearly-defined /schema/ or not.
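A minimal Python sketch of that distinction (plain dicts and an invented two-field schema stand in for generated Thrift classes): JSON itself carries no schema, so any guarantee has to be checked explicitly by the receiver:

```python
import json

# Invented message schema: required field name -> expected type.
SCHEMA = {"id": int, "name": str}

def decode(payload: bytes) -> dict:
    """Parse a JSON message and enforce the schema by hand."""
    msg = json.loads(payload)
    for field, ftype in SCHEMA.items():
        if not isinstance(msg.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return msg

msg = decode(b'{"id": 1, "name": "disk0"}')
```

With an IDL-based format like Thrift, this validation falls out of the generated code; with bare JSON, it's the application's job - the schema, not the wire encoding, is the real choice.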

Thanks for the insight.

>You could argue that the unix way is to make it JSON until it becomes a performance problem.

I'd argue that the Unix way is to make it JSON until it becomes a performance problem, by which time the format is so entrenched in so many programs that, since there's no real central coordination, you just get to live with it.

>Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

Which means that no program is more un-UNIX-y than Emacs...

Because Emacs is a system of its own; the Lisp Machine legacy leaked through too much. Inside of Emacs, many things are tiny components - after all, it's a Lisp, which had a stronger Unixy feel than so many things since. I mean lambda, compose, map... it's hard to separate concerns more than that.

Emacs is a collection of separate Elisp programs that all cooperate and share data structures. You would not call Unix tools un-UNIX-y only because they all share procedures from the C library.

Emacs also interfaces with external processes (e.g. using comint mode). It's the ultimate bottle of glue.

Emacs is an example of a complex application written consistently [more or less] with the Unix philosophy. One might argue [anything; it's the internet, after all] that Emacs's four-decade shelf life demonstrates the robustness of its embedded Unix philosophy [at the risk of being a bit circular].

Where it butts up against contemporary practice is in places like threads, where it is still not proven that YAGNI doesn't apply.

Can't upvote this enough. My takeaway from this gem is that we need to keep thinking about our craft even as we evolve our technologies, and we need to have the courage to stand up for it. The hardest thing in the world to see is your own point of view because you have to step outside of it to see it.

The Unix-Haters Handbook (https://en.wikipedia.org/wiki/The_Unix-Haters_Handbook) has a few sections on some of the ills of the Unix philosophy.

The UNIX-Haters Handbook shows its age; it's criticizing a system that has only a passing similarity to modern Unixes.

But yes, it's funny.

Nonetheless, the specific examples show the UNIX system contradicted many of the supposedly "UNIX" principles. The article seems a bit revisionist in a pro-UNIX way in that sense. Unsurprising.

Saying that these are modern UNIX principles might be more fair. Yet, Kemp's article inspires doubt in even that:


Agreed, I posted mostly for the humor, though there are some grains of truth there still.

Agreed. Modern unixes are a lot worse.

How do debacles such as microkernels and the X server fit into the Unix philosophy?

Easy: a number of microkernel-based systems today actually have the properties UNIX pretends to. And to the core.

Sun had the NeWS windowing system before XWindows. Be glad that did not become the standard. Be very glad.

I rather liked NeWS - certainly it was a lot more fun to develop for at that time than X and had some wonderful UI toolkits (particularly HyperNeWS).

For example, being able to draw a shape in a graphical editor and then paste the shape directly to a window as the shape that window should take was rather cool - and this was in the early 90s!

In case of mankind's extinction, we should preserve the above text and pass it on to the next race.

I wonder how relevant it is nowadays. There are many good things we can't have, like systemd, if we have to follow the Unix philosophy strictly.

The principle stands just as it always has.

The Single responsibility Principle (SRP), interfaces, cohesion and coupling are all relevant patterns and techniques that also describe the same design ideas.

The UNIX philosophy is simply one example of the above principles implemented at a user interface level, but they apply equally well in low-level embedded code, and device drivers just as well as in desktop and server land.

Software complexity grows exponentially with the number of internal interactions. By reducing the scope of any given module, you greatly simplify it. Thinking about, designing, and implementing fixed interfaces between small cohesive modules helps this process, and you get much simpler (in the Rich Hickey sense) software as a result.

These same principles pop up all over the place in software, the Unix Philosophy, SRP, Microservices... they're all manifestations of the same thing.

For a lot of people, the set of "good things" does not include systemd.

Not to mention you can certainly get feature parity with systemd while still being Unix philosophy-robust. See the nosh service manager.

No. Systemd isn't un-Unixy because it has to be; it's un-Unixy because that's the way it was written. It's superior to System V-style init, IMO, but it got that way in spite of its poor monolithic design, not because of it.

Sure you can. See 'launchd' on OSX.

The reason that 'systemd' is the ball of mud it is, is that Linux, being just a kernel, does not provide the basics for systemd to build on, so it has to provide them itself.

The main service management-related components of systemd very much exploit the Linux kernel's facilities, so your claim is completely incorrect.

The broader tools that systemd also provides (stub DNS/LLMNR resolution, network management [DHCP, PPP...], container registration, message bus introspection and library, event loop library, EFI stub loader, device and mount management, hardware database, TTYs, NSS plugins, session and seat management, SNTP client, time/date control, dynamic configuration population, etc.) exist mostly for philosophical reasons of being a one true toolkit/middleware that sits between GNU and Linux.

launchd isn't Unix-y at all, anyway.

The rule of separation makes me wonder if the long-term legacy of Wayland will be as a hardware-accelerated backend for X.

X as a protocol is long gone. Direct memory access, kernel drivers, etc. are not really compatible with a network protocol. Wayland is just streamlining and formalizing the current situation.

At my old job (which I left about ten days ago) I ran X servers and clients on different machines daily. Typical example was running my IDE on my desktop, connected to an X server running on my laptop.

The X developers didn't break remoting, but that doesn't mean contemporary GUI toolkits still use the features of X that made X "network transparent." As I understand it, most features of the protocol are largely ignored except for the parts needed to pump bitmaps over a network.

Here's an LCA talk by a dude what works on X and Wayland: https://www.youtube.com/watch?v=RIctzAQOe44

The main reason, as best I can tell, is that every DE out there employs OpenGL compositing that effectively bypasses X.

Meaning that to draw the desktop, they effectively paint a single large window inside X that is then filled with the output of the GPU.

Thing is, Wayland seems more comparable to svgalib than to X, in that Wayland is pretty much a lib/protocol for talking to the graphics hardware. Something else - be it their reference implementation Weston, GTK, Qt, or some other alternative - has to handle windows, desktops, etc.

Right now you can use Wayland as a driver for Xorg.

Nobody cares about your use case anymore. The use case that matters is the one that provides a smooth, non-janky, tear-free desktop experience.

Hell, the X primitives are so outdated, all the toolkit developers do client side rendering now.

Wayland is the bits of X that people actually use.

Nonsense - I use it all the time, as do many other people I know. It is mostly GTK+ and Qt that badly misuse the protocol by rendering everything client side. (OpenGL is its own world, and works with X just fine, even remotely.)

GTK+ and Qt are the two toolkits that people develop against because no one wants to use X directly, nor abominations like Xt and Motif.

If all the developer thrust is behind toolkits that "badly misuse the protocol", maybe the protocol is a really shitty fit for app development?

Welcome to why Wayland is the future and X is deprecated. Daniel Stone's talk should be required viewing before anyone speaks on the X vs. Wayland issue:

