Do One Thing (oreilly.com)
330 points by steelbird on Oct 30, 2015 | 128 comments



On the 'do one thing and do it well' issue, there seem to be two axes:

    1. suitability of a tool to multiple tasks
    2. number of features a tool has
This makes for sort of a field with four quadrants:

    1. simple but flexible tools
    2. complex and featureful tools
    3. specialized but simple tools
    4. highly specialized and highly featureful tools
Take a tool like grep: it clearly belongs in the first quadrant. You can do a lot with it, but it's pretty simple to learn and use. Emacs (or Eclipse) belongs in the second quadrant, as it can be used for basically anything and is complicated to learn. Something like MyPaint belongs in the third quadrant, and there are numerous domain-specific tools that belong in the fourth.

So the question that remains when one is asking for a tool that 'does one thing well' is, do you mean for it to be specialized, or simple, or both?


I came up with a three-tier model myself:

   1. API or API-like things (grep)
   2. Highly configurable app (Photoshop, Word)
   3. Default-heavy app (Instagram)
The tricky thing is that 1 and 3 are the easy slices, relatively speaking. If you only need the 80% case, it can probably be automated into something as simple as a selfie app, with a finite, well-understood level of design and engineering. And if you need something extremely custom, you want the data model exploded into its smallest parts so that you can put it back together again - and the small parts can again be relatively simple, well documented, and easy to maintain.

But the middle part, the product that is complex and configurable, but not really "programmed" with code except through a limited script layer, is the fat middle, because it demands so much more UI, and an experience that is well-integrated, amenable to default workflows, and yet also very easy to customize. Over time, big software projects always drift towards the middle.

The article specifically fits my model's type 1. A type 1 product is nearly ideal for the hacker who wants to glue together a bunch of different technologies. It only falls down when the abstractions are too crude or mismatched to the problem. But the operating system itself is more like a type 2 product - there are numerous assumptions about what the environment, services, access methods, etc. look like, conventions that were established early on in computing and haven't yet been revised. Every programming language makes accommodations for working in that environment, and not your weird custom operating system.


In this context...

"specialized" = Do one thing

"flexible" = and do it well.

That said, the key ingredient of the Unix philosophy that is missing isn't specialised, flexible tools; it's composability of those tools. Imagine if you couldn't pipe grep's output to another program: wouldn't it be a lot less useful? I'd say so.

There are ways we can get around this, but they would require a fair amount of development time. To give you one example, consider how the JACK audio server allows programs to share audio streams. Similar data streaming arrangements could be made for other types of data.
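To make the composability point concrete, here is a minimal sketch; the visitor list is made-up stand-in data, but the shape is the classic one, with grep as just one stage among several single-purpose tools:

```shell
# Made-up sample data standing in for a real log of visitor names.
printf 'alice\nbob\nalice\ncarol\nalice\nbob\n' > visitors.txt

# Each stage does one thing; the pipe does the composing.
grep -v '^carol$' visitors.txt |  # filter: drop one user
  sort |                          # group identical lines together
  uniq -c |                       # count each run of identical lines
  sort -rn |                      # rank by count, descending
  head -n 1                       # keep only the top entry

rm visitors.txt
```

None of these tools knows anything about the others; the only contract between them is lines of text, which is exactly what makes the composition cheap.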


How does flexible mean do it well?

"Do it well" is so vague of a saying that it really could mean any of those things, whether it be flexibility, or simplicity, or etc.


My understanding of the phrase 'do it well' isn't as an injunction but as a way of saying that if you restrict your program to one thing, you'll have the ability to do it very well. It's to contrast it to a program that does multiple things, not so well.


What I understand flexible to mean is "serving 99% of use case scenarios". Arguably Photoshop does this, Instagram does not. Or maybe a better analogy is Photoshop vs Photoshop Elements, which probably only serves 80% of use case scenarios. Or Chrome OS (80% of desktop use case scenarios) vs Windows (99.9%).


As I said, in this context, it means flexibility, but that flexibility must be tempered with composability.

Think about how programs become bloated. What helps the core Unix command line tools avoid this fate?


There are quite a few things in the fourth category, and I'd like to say, for instance: Beeminder. Specialized, but there is a lot going on. Honestly, I like it. I could just keep a text file if I didn't want those features, and I honestly can't think of any features I would subtract. Which raises the larger question: are they advocating flexible but simple tools, straightforward but domain-specific tools, or are they just advocating against building something that is a complete mess and a waste of time?

It's important to remember that while quite involved and extremely domain specific, Beeminder has an API. He seems to care a lot about that, and I have no idea if his real point is "make things that don't suck and avoid digital vertical integration".


I could just keep a text file if I didn't want those features, and I honestly can't think of any features I would subtract.

But the point about simple & flexible tools is not to miss out on features for the user, but to separate those into different tools. The people who prefer a simple text editor to an IDE aren't doing everything else by hand, but using a set of different tools to accomplish the same things the IDE's built-in features would.

Taking the Beeminder example, and with the caveat that I've never used it: the graphing system seems cool, but it can't be used by anyone who doesn't have access to a credit card; if the graphs and the commitment payment systems were two separate tools that you could connect instead of a bundle, one could use the former with an alternative commitment system.


I'm tempted to quibble that you can use Beeminder without a credit card as long as you never go off track. But ultimately I think you're right, we're guilty of this. Ideally the Quantified Self part and the commitment contract part would be independent but interoperable tools.


This feels like an over-simplification. Life doesn't fit into 4 quadrants, and the only thing models are guaranteed to do is be inaccurate. "One thing" and "well" are fundamentally subjective and based on a particular use-case. You have to focus on the problem(s) you're solving, and be disciplined about the scope of the problem(s) you're solving.


>the only thing models are guaranteed to do is be inaccurate.

My model of gmail is gmail. This model is 100% accurate.


I don't mean this negatively, but I don't understand what point you're making...


I had two thoughts while reading this.

1 - The need for the web to be profitable in some way makes 'single tools' hard to build. You have to grow users, add features, etc, etc. So even if you have an API or something useful it likely won't scale economically. Unless it's government supported or behind a foundation of some sort. The Twitter API comes to mind here.

2 - This is tricky because for the web to meet the UNIX philosophy, everyone has to agree. You can't have the team that manages the equivalent of the `ls` website decide to change their output, or strike up a deal with the `diff` team to force `diff3` out.

Once again, capitalism ruins everything fun. That's hyperbole, kinda :-)


> Once again, capitalism ruins everything fun

And creates everything fun.


Depends. I accept capitalism's ability to raise resources (capital) for larger-scale projects, but generally the profit motive fucks everything up. The Internet was fine until every pathetic "entrepreneur" out there smelled easy money and came to ruin it all. It happens in every industry. Capitalism creates abundance - but an abundance of the worst possible crap sellers can still get away with.


The Internet wasn't even accessible to the general public without capitalism motivating the creation of ISPs.


That's true. The free market was mostly [1] absent from the research into packet switching in the 1960s, the actual deployment in 1969 of the packet-switched network that became the internet, the invention of email, FTP, the domain name system, the mailing list, newsgroups, IRC, the MMORPG (initially referred to as a multi-user domain or a MUD), the web, webmail, the blog, the RSS feed, the wiki and Wikipedia, but, yeah, starting about 1987, profit-motivated ISPs started slowly to appear that offered access to the internet to organizations and individuals.

1: I use the qualifier "mostly" because the government did farm out most research and engineering tasks to a handful of for-profit companies (Bolt Beranek and Newman, the Mitre Corporation, SRI International) that specialized in governmental contracts.


That may be true. But, the Internet existed because of massive multi-decade investment from folks without a profit motive. All of the fundamental Internet protocols were created through those organizations.

Look at what capitalism gave us as an Internet-like experience: compuserve, aol, etc. Those were all horrible walled-garden systems.


But there are many massive investments at the academic and government research level that lead to nothing directly useful for consumers, due to a lack of investor interest in bringing it to market. Just look at the next-generation Internet architectures that go nowhere. You can't ignore economics.


What's the Internet you envision sans profit motive?


The one with free knowledge and kittens and no ads and SaaS bullshit.


Who pays for hosting the kittens?


The same parties that hosted the old usenets and mailing lists; the users themselves.


Not so sure about that - the web has become a lot less fun since capitalism took it over.


Wat. You want to go back to pre-1995, the Netscape IPO?


I'd date the actual, functional takeover somewhat later than that, but the web really was a lot more fun pre-dotcom-bubble. I'm not sure I would say I want to go back to it, exactly, but there sure is a lot that I miss.


Eh, there is a lot to be missed from the 'net, yes; but from the web? Like what? I mean, even when we're generous, 'pre-dotcom-bubble' is 'pre-2000' - the web was crap back then.


- It was much lighter; entire websites used to weigh less than a single JS file today. We've increased the size of pages by an order of magnitude (and processing expense by at least two) for no real reason except laziness.

- The SaaS/cloud model wasn't so popular, which means trying to lock you in by stealing your data, or doing absolutely ridiculous things like IoT does, wasn't something you saw.


But that's presentation, not content. Look, I dislike 5 MB pages with 2 paragraphs of text content as much as the next guy, but if I had to choose between 1000 shitty Geocities 'personal homepages' and the vast wealth of information that can be found on the internet today, I'd choose today in a heartbeat.

How can someone claim with a straight face "but the web really was a lot more fun pre-dotcom-bubble" ? Good luck trying to find anything outside of nerd subculture and physics/math/CS content (exaggeration of course, but the core is true). (of course it could be 'true' if all one cares about is nerd subculture and physics/math/CS content...)


There is little actual content on the present-day Internet relative to its size. Look, what is produced on those websites with heavy presentation is mostly not content. Real content is Wikipedia, or Hacker News, or various people and their topical subpages and personal blogs, up to and including stuff hosted at Geocities and Tripod Lycos. What is not content is most of the stuff that's created for money, including the majority of today's "journalism": information that is shallow, false and/or useless, and that exists only to make you click or buy.

90% of for-profit content could disappear just like that, and humanity would be much better off. Almost any piece of valuable information can be found for free, posted by people who aren't trying to use you.


I don't care about content/download size ratio, I care about content (quantity and quality) in absolute size. Download speeds have progressed faster than page bloat.

Wikipedia didn't exist in 1999 (and wouldn't for years), and even Slashdot was barely more than a gossip site. Good quality discussion back then was on usenet or irc. The concept of a 'mooc' was just a wet dream of some 'cypherpunks' and 'technoanarchists'. Download a manual for your microwave or car, do online banking, email anyone but your hardcore nerd friends? Forget it. Price comparison shopping, ordering something from another continent? Lol. I remember riding my bike to a local bank branch to pick up foreign currency, which I stuffed into an envelope and snailmailed. I had to ask at the counter of the post office how much the postage was, because there was no way to look that up online.

Mp3? Sort of existed; you could download songs one by one from geocities pages; that was even before the crackdown that led to Napster. But the "selection" was minuscule compared to today.

Oh and back then, when you wrote a site, you chose between supporting IE or Netscape, or browser sniffing and serving two versions, or sticking to the lowest common denominator, which wasn't much, to put it mildly. Ffs, people won 'best of the web' awards for html that worked on two browsers and didn't look like crap! Like obscure competitions today where you build a file that is both a pdf and a jpg! My first job, back around that time, was to 'port' a website from IE to Netscape.

The more I think about it, the more convinced I get - any claim that it was better in those days is just rose-colored glasses.


I didn't make any claims about the things you are talking about, only that it was more fun. Which it was. It's boring now. I used to go poke around, following random links just to see what was out there, encountering all kinds of oddball pages which represented nothing more than some person's individual gift to the world, and it was fascinating. It was fun to watch it all grow. I felt motivated to contribute, to share my own pages full of whatever information or opinions I happened to have. Photos of interesting places, stories about cities I'd visited, didn't matter; it all seemed like it was worth sharing, because that's what the web basically was.

I don't find that anymore. Homepages came and went, blogs came and went, and if there's still a thriving web out there beyond the big commercial sites, I don't know where to look to find it.

I still have my personal site, and I still post on my blog occasionally, but it increasingly feels like calling into the wind. I'm not going to just switch over and post all the same stuff on some big commercial site; fuck that. The web was awesome when it looked like it was going to be a new way for humanity to talk to itself; now it seems like little more than another way for rich people to make money. It does that very impressively, but who cares? Rich people always had ways to make money and will always find more; it's not very exciting when they turn yet another collaborative community project into a profit center.


Remember Usenet? Ah the good old days.


Bandwidth was more scarce too. If not for the commercial explosion the Qwests and Level 3s of the world would not have spent billions laying fiberoptics.


Love that social bonding that capitalism provides.


Capitalism ruins everything fun? Not sure I follow. There are so many amazing APIs available now, which aren't as frictionless as using Unix tools -- but why would they be? Unix tools are installed locally and don't fall under the same constraints as communicating with an external system. It's apples and oranges.

I was just using an API the other day to validate addresses. It's an amazing thing, being able to hit an endpoint and not worry about the myriad of complex processes that must happen for that validation. If that's capitalism, I'd say it's pretty good.


Sure it's pretty good. But can it interop with other APIs predictably forever without paying Zapier?

And the capitalism dig has to do with when your address verification endpoint decides that an API isn't profitable. Or that it's better for them to change the format of the response. Or to require you to have an account of some sort. Or rate limit. Or...

In the overall picture, if you're not helping someone earn money, you're living on borrowed time with the services they provide.


So what magical system did you have in mind that would suffer none of your potential problems? Even a publicly funded service is still as vulnerable. Services you run yourself still cost money and have their unique set of problems. The cost benefit to using APIs is that you're outsourcing that complexity to someone else in exchange for money so you can concentrate on your core business. It's why AWS is so popular. Yes, they could shut down, but the risk is worth it. Doing it all in house is the other risk... maybe your programmers can't develop the solution needed and your company tanks because you wasted a bunch of time and money building your own address verification service for fear the external API might shut down.

The USPS (which is quasi-private, I'll admit) offers address verification, but their service hasn't been reliable, which is why we have competitors now. Your fallacy is assuming that because something isn't flawless, the whole system is a failure. And throwing around the word 'profit' as some pejorative, dirty word is emotional manipulation. How else would people hire or expand if not for profit? The service I'm using (Smartystreets) now offers international address verification; I'm not sure how they would have paid for that if it weren't for profit. That extra money you bring home from your job after taxes and other expenses... that's profit. Your employer is living on borrowed time from the services you provide. What happens when you quit, change your rate, or become bored with your job? I guess it's worth the risk to them.


I think Tarsnap is an apt counterexample to your first point. It has a stable API, a very very conservative feature set, and a stable customer base.


Sure... but Tarsnap is basically Colin, serving a very niche clientele, and unlikely ever to be worth a few billion. Almost every tool discussed in the article is targeting a mass market where such considerations become important at some point.


I don't follow your first point. For most web things (where there's zero marginal cost) profitability has nothing to do with scaling.


I thought the parent's first point made lots of sense and I don't really follow your point. Marginal cost for web services isn't literally zero, it is just close enough to zero that real businesses with revenue can treat it as if it is zero. But that doesn't mean that it is at all cheap in the usual sense to run a popular web service.


Fill your moats.


What do you mean?


The moat metaphor is a common way to talk about a business's competitive advantage. In this case I'm talking about improving, or at least not actively impeding, web application data interchange, and maybe even exposing smaller, more granular services.

Take Wikipedia, for example, which already offers its compiled data. To take that a step further, they could expose the transformations they use to compile that data as separate endpoints.


I'm definitely guilty of holding onto a philosophy and thinking it applies universally, so I can't blame people that hold onto the Unix philosophy extremely strongly. But I can't help but think that the world requires a hell of a lot more pragmatism than the Unix philosophy can provide.

The first problem I see is that "One Thing" is really subjective. Some people might see Postgres as doing one thing really well (It is, after all, an incredible Relational Database). But others might look at Postgres entirely differently: It is a client and a server. It is a data protection and credential management system. It is an in memory caching layer. It is a NoSQL key value engine. It is a SQL database. It is a data retrieval operation optimization system. It is a storage format. Hell, it does a million SQL things that other SQL databases don't...databases that probably also qualify as doing "One Thing" in other people's eyes.

The second problem is that the world is really fucking complex, and sometimes doing one thing well is impossible unless you also do another thing well. Rich's big example in this article is Evernote, and his claim was that Evernote did one thing well, which was note synchronization. But notes are almost always more than just text...which is why they added photos and tables and web clippings. Who would ever want just the text aspect of their notes synchronized across devices, but not the photos they took of powerpoint slides, their data tables, their diagrams, their emails, etc.? If Evernote wanted to do "Note Taking" well, they couldn't just stop at text synchronization across devices. So they should have stopped trying to do "Note Taking" well, because someone only used them for the text synchronization they already did? Evernote is dying, that's for sure...but it's not because it did more than one thing, it's because it didn't do them well.

I get it. People like simplicity. But the world is complex, everybody's view of the world is different, and that means that sometimes you just end up not being the target market. And I also get that some things actually do do one thing and do it extremely well. But trying to extrapolate that out infinitely across all things (or even just across all software things) just doesn't pan out in reality. And what does that mean for the philosophy? It should probably just be extended to "Do things well". But that is no longer a distinctive philosophy, is it?


> but it's not because it did more than one thing, it's because it didn't do them well.

In general I agree with your line of thinking, but I will nitpick on this particular sentence (or the way it's phrased) and say: the whole idea of doing only one thing is that if you try to do more than one thing, you will definitely not do them all well.

I do agree with you though. The "many things" that postgresql does are all actually just one thing: a database system.

Yea some people could argue that under the cover it's many different things, but the whole idea is about focus, not about counting features. There are always multiple aspects to the "one thing" (whatever your one thing happens to be). Doing the one thing well means tackling all of these aspects.

The reason you can't do multiple things well is that each one of the multiple things you want to do will in itself have many aspects, and your focus will be fragmented trying to tackle too many things with very limited resources.


What if something is really good at being a website?


Or really good at being a platform.


> Who would ever want just the text aspect of their notes synchronized across devices, but not their photos that they took of powerpoint slides, their data tables, their diagrams, their emails, etc.?

That's one thing I don't get. Dropbox could have made a decent markdown or (please, pretty please) org-mode editor and would have blown everyone out of the water. An editor over Dropbox is a perfect thing - unlike every other butt solution out there, including Google Drive, it doesn't limit you to your butt provider. Files on Dropbox are still your files, in your own filesystem, and you can do whatever the fuck you want with them. A simple editor for people who don't like (or don't want to, on a particular device) playing with file juggling, and it would be a perfect note taking solution.


I've actually asked Dropbox to do exactly this, but no dice. Their mobile app allows you to edit text files fairly easily, but no web client.

It would be amazing if they would just let me use this on their website:

http://coolwanglu.github.io/vim.js/streamlinejs/vim.html


Your comment describes my thoughts as well. It also strikes me that there have been tools (like Yahoo Pipes) that did more than what IFTTT did. Also, if you want to beat IFTTT, and feel there's a market, then do it, right?

I also felt like there was a lack of imagination in some of the comments. Pooh-poohing images in notes as related selfies is nice hyperbole, but I used it all of the time for system diagrams and other bits that are so much easier to draw on a large whiteboard than to transcribe or even draw on a digital canvas.

I DO wish there was an easier way to do more than just one thing with IFTTT, though. Perhaps that is in the cards someday.


Ironically the Unix philosophy doesn't even really apply to Unix itself - it mainly just applies to GNU tools. Unix is a general purpose operating system for computers of all shapes and sizes. You can't get much more generic than that.


It's still built on a simple kernel that doesn't do much and a user land of many independent processes. The philosophy doesn't forbid generality, it just encourages a "federalist" organization of cooperating programs communicating through simple kernel institutions (files, sockets).
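A tiny illustration of those kernel institutions: two otherwise unrelated processes cooperating through a named pipe, a file-like rendezvous point the kernel provides (the `/tmp/demo.fifo` path here is arbitrary):

```shell
# Create a named pipe: a rendezvous point in the filesystem.
mkfifo /tmp/demo.fifo

# Process A writes into the pipe in the background...
printf 'hello from process A\n' > /tmp/demo.fifo &

# ...while an unrelated process B reads from it and transforms the stream.
tr 'a-z' 'A-Z' < /tmp/demo.fifo   # prints: HELLO FROM PROCESS A

# Remove the rendezvous point.
rm /tmp/demo.fifo
```

Neither process was written with the other in mind; the kernel-provided pipe is the entire interface between them.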


It sounds like what he's actually arguing for is cohesion, not just simplicity. And if that's the case, I'm with him.

Cohesion is achieved when the degree of complexity is determined by both domain and audience, so Photoshop's relative complexity and Pixelmator's relative simplicity are both fine - in each case the domain is satisfactorily handled for the needs of the respective audience. They're both cohesive tools. Now if Photoshop decided to throw in a chat client (hello gmail), we'd be having a different conversation.

However, if the author is actually advocating breaking up Photoshop into a thousand pluggable little tools that everyone would have to piecemeal assemble into some sort of shared "workspace," that's where we (and most non-geeks) part company.


1. The Unix shell solved one thing very well: (unary) function composition based on a text protocol. It gets complicated as soon as you want to go beyond a pipe - e.g. a graph like a makefile. Then you need to leave the pipe metaphor in most cases and deal with artifacts like files. Even more so, map-reduce was a big shift that went beyond the unix metaphor.

2. There was a different metaphor that had its merits: object composition through standard interfaces, e.g. OLE/COM and the likes. One might argue about its implementation (embedding a Visio object in a Word document still produces crashes, 25 years later), but as a UI metaphor it was very powerful.

3. The web's metaphor is coupling of disparate content through URLs and HTTP (and HTML). One of the most mind-boggling metaphors ever introduced to man (talk about doing one thing well). Today we use REST APIs as the atomic pieces (do one thing well) and Javascript/ObjectiveC/Java as glue. Same thing.

4. As far as applications go, "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp" (Greenspun's tenth rule). Slightly updated, this means that as programs get more complex, they tend to become more integrated and customizable, to the point where you use a high-level language to glue the different parts. If you're lucky they may have replaced a custom Common Lisp with a mature V8 engine, but the story is still the same.

5. Outside of academia, people (product managers in particular) tend to be focused on solutions, not abstractions. And for good reason: solutions are often shortcuts for common applications of abstractions, and therefore they provide lots of value. File.ReadAllLines() vs. doing the same with 3-4 Java classes in the old days is the best example.

6. In the end, we need people to think about abstractions: unix pipes, map-reduce, URLs. And we need other people who think about solutions: the iPhone, the Google world, etc.

7. As for the OP: curl might be a good start of a pipe. Add a program that parses tables, and a program that posts to REST APIs...
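Sketching that pipeline under stated assumptions: the here-doc below stands in for the output of a real `curl` fetch, the sed stage is a toy table parser, and the final POST stage is left commented out because `api.example.com` is a hypothetical endpoint:

```shell
# Stage 1: fetch. The here-doc simulates `curl -s https://example.com/report`.
# Stage 2: parse table rows into CSV lines with sed.
# Stage 3 (commented out): post the CSV to a hypothetical REST API, e.g.
#   ... | curl -s -X POST -H 'Content-Type: text/csv' --data-binary @- https://api.example.com/rows
cat <<'HTML' | sed -n 's|<tr><td>\(.*\)</td><td>\(.*\)</td></tr>|\1,\2|p'
<table>
<tr><td>widgets</td><td>42</td></tr>
<tr><td>gadgets</td><td>7</td></tr>
</table>
HTML
```

This prints `widgets,42` and `gadgets,7`; swapping the here-doc for a real curl call is the only change needed to make it a live pipe.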


> That philosophy was great, but hasn’t survived into the Web age.

The modern "web operating system" is Amazon's AWS, and I'd argue that the unix philosophy survived surprisingly well there. AWS has many services; each one does a specific job really well, and they interoperate very well together. That's the very embodiment of the unix philosophy.


Yep. AWS is a great example. Many products that can be used very well together (S3 and Cloudfront, for example), but the chosen abstractions are such that they can be used independently as well.


I would like to propose a "cloud" addendum to Zawinski's law, which states that "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can".

My addendum is: "Every cloud app expands to the point where it can host group chat. Those apps which cannot so expand are replaced by ones which can."

I think it's also possible that I may have to augment the addendum to "group chat and screen sharing."


Benedict Evans has talked about this as well

http://ben-evans.com/benedictevans/2015/3/24/the-state-of-me...


I suggest it is the package manager. Everything has its own little plugin management tools for retrieving, installing, etc.


If you're NOT writing a web browser, and the little voice in your head says something like "I know! I'll solve that by writing a plug-in system and let other developers write to my API, and all their code will run in a sandbox, and I'll write a virtual filesystem for them to use, and I'll provide an installation/uninstallation system, and maybe a debugger..."

STOP

Get up and walk away from the keyboard for a bit. Maybe take a refreshing shower. Clear your head and let yourself think this through a bit. You'll come to the right conclusion. Now go back to your keyboard and work on something that delivers value for your customers :)


Unix pipes only work in the CLI with a text-based interface, and even then they break down when the applications aren't able to parse the data.

So one always ends up massaging the data to make it understood.
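That massaging is usually a small awk or sed stage wedged between two tools whose formats don't quite line up. A minimal sketch with made-up data:

```shell
# Tool A emits whitespace-separated columns (simulated here with printf);
# the next consumer wants CSV, so an awk stage reshapes the stream in between.
printf 'alice 42\nbob 7\n' |
  awk '{ printf "%s,%s\n", $1, $2 }'
```

The glue works, but every pair of tools can need its own ad-hoc adapter, which is exactly the fragility being pointed at here.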

The piping concept was actually more powerful in the Xerox PARC systems, which used the respective language's REPL to do LINQ-style data transformations.

However, the problem with the article is that once you scale out of the CLI, you need a standard communications API for this type of stuff.

One that is able to work in distributed systems, dealing with all types of failure issues.

If anything, native programming on mobile offers some kind of piping thanks to intents, contracts and extensions.


The Amiga ecosystem perfected this: most (good) software featured an ARexx 'port' - an API, based on the REXX language, that enabled any one program to communicate with another. Or one could simply write an ARexx script to automate a process.

Here's an example of the first occasion I seriously used this, when producing a 3D anaglyphic animation for video: graphics frames - two images for left and right - were rendered in VistaPro (the original 3D landscape generator); as soon as rendering was complete, ImageFX (image processing software) picked up the output and combined the channels to make an anaglyph; this was then sent to the PAR animation recorder (a video recorder). So, THREE separate large packages from different software companies, working in synchrony with each other via ARexx. The whole operation worked so smoothly and efficiently, first time, that I still recall it with awe.

Sadly, yet another killer Amiga feature that never made it to modern computing...


The libraries concept used to extend the OS was another cool Amiga feature.

I would say the closest we have today to the ARexx experience are PowerShell and AppleScript.


I am a great fan of the Unix philosophy and can sympathize with the author and his points, but I really don't know how realistic the whole "make the Internet a new Unix"-scheme is.

As others have pointed out, there are great practical difficulties in integrating different web services. Unix has pipes built into the core of the OS - anything analogous would have to be "bolted on" to the Internet, and thus probably turn out to be not as powerful or simple to use. And how about the UI and user friendliness? On Unix, every program has more or less the same UI - a couple of lines of text on a black background. The Internet? Everybody has a different fancy graphic layout. If you can do everything within the same "walled garden", that reduces confusion on this count.

For these (and other reasons, such as the aforementioned capitalism), I do not think that the mainstream Internet is ever going to behave the way the author envisions. What I could imagine, however, is a sort of parallel Internet that displays this property to some extent - a range of services explicitly targeting technical users (who are more likely to value the "do one thing well" approach and less likely to care if the GUI isn't quite as snazzy). These services would never grow big, but they could build up a loyal following. Kind of like HN, really...


The internet already has a very simple interoperable protocol, HTTP. There is really nothing to bolt on, since a pipe is just a channel for text, and sockets on port 80 can just as easily act as that channel.
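And to that point, the transport already composes with pipes today; once curl writes a response body to stdout, it's just bytes like any other command's output. A minimal sketch (example.com is only a stand-in URL here, and curl and grep are assumed available):

```shell
# Fetch a page and treat the HTTP response body as an ordinary byte
# stream: curl writes to stdout, so it composes with any other filter.
curl -s http://example.com/ | grep -o '<title>[^<]*</title>'
```

The hard part was never the channel - it's agreeing on what the bytes mean once they're in the pipe.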


The thing about the pipe isn't, as I see it, a protocol issue, but a UI issue: that is, a simple method of telling your shell (or, on the web, your browser) to compose tools rather than requiring you to interact with each of them interactively.

To be fair, I don't think we have a good way to do that with graphical OS shells even on Unix, just CLIs, so it's probably not surprising that we don't have it in web browsers.


But HTTP is one-directional (disregarding WebSockets, which are a client thing).


HTTP is asymmetric (the two sides are not interchangeable), but the information flow is bidirectional. I don't see how it prevents something like Unix piping from being implemented between web services. It seems to be mostly a UX design issue.


I agree on the face of things, but what the web is missing from the unix philosophy is an equivalent to |.

Without the ability to string multiple, focused tools together, even tools that did one thing well (e.g., Evernote) will continue to add features until they do a bunch of things meh.


Absolutely! And just follow that to its logical conclusion; to me it seems to answer the question of why we don't have it, right?

On the web, "|" (pipe) means integration with other services, and it comes with all the associated trials and tribulations. To make web pipes work, just to even get started, you first have to solve authentication and data format/transfer standards. Those things are a big pain; it's not surprising that most services attempt to provide the service directly.

Unix pipes were designed for builders, technical people who understand the structure of the data they're piping around. POSIX commands come in a minimal standardized set that works well together, and while there are some commands here and there that pipe structured and/or binary data, the core set - what most people use pipes on and love pipes for - is plain text.

The internet just doesn't have the same foundation or purpose or user base; its usage is not centered around technical people building pipelines, so it's not surprising that web services by and large haven't flocked to meet a unix analogy - it's technically much harder to do than writing unix, and it's not even clear it's close to a useful thing to do, for most people.


The other big factor is that pipes only work because of the power of plain text. The output of every UNIX command is ASCII text, and most of the time, it's ASCII text with columns delimited by \t and rows delimited by \n. And there are command-line utilities like awk, cut, sed, & xargs for parsing & rearranging that text, and funneling it into formats that other commands understand.
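For instance (a toy sketch with made-up data), that \t/\n convention is what makes any column-oriented task a one-liner:

```shell
# A tab-delimited "table" on stdin; awk (like cut) understands the
# tab-separated columns natively - filter rows, project a column.
printf 'alice\t42\nbob\t7\n' |
  awk -F'\t' '$2 > 10 { print $1 }'
# -> alice
```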

The web has a variety of other content types - images, videos, applications, structured data - that don't map well to this model. Before you have interoperable webapps, you need to define common data formats for them to interop.


I disagree with that. Pipes seem to work in spite of plain text, which only makes them incredibly inefficient. A much better way is to send data structures through them (for which, btw, the modern web is perfectly well-suited, given the present JSON domination) - you can always render the data to text if you need to, but you don't have to write arcane, bug-laden shotgun parsers with sed and awk, because every step of the UNIX polka means throwing away metadata.

Also: ditch plain text for structured data and suddenly handling other kinds of content becomes much, much easier - they map perfectly to this model.
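This already works today wherever tools agree on JSON. A hedged sketch with jq (assuming it's installed) - note that field names survive every stage instead of being thrown away:

```shell
# JSON in, JSON out: each stage keeps the structure (and the metadata)
# intact instead of flattening to columns and re-parsing downstream.
printf '{"user":"alice","karma":42}\n{"user":"bob","karma":7}\n' |
  jq -r 'select(.karma > 10) | .user'
# -> alice
```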


I pipe things in and out of ffmpeg, curl, convert (and the rest of imagemagick), mysql, tar, netcat, jq, etc pretty frequently, and that's all images, video, audio, applications and structured data. It seems to map quite well for all that.


> To make web pipes work, just to even get started, you first have to solve authentication and data format/transfer standards.

Unix pipes have already solved the authentication problem. For instance, to stream a compressed hard-disk copy safely between two hosts you can use the following command:

  dd if=/dev/harddisk | gzip -9 | ssh user@xyz dd of=hdcopy.gz
Data format standards could be implemented right now using encrypted JSON or something like that. I would prefer Lisp s-exprs, by the way, because they could be handled immediately in client/server Lisp without any transformation.


RSS, Yahoo Pipes (RIP), ... come to mind as potential candidates for your pipes. Not a well-supported ecosystem sadly.


Webhooks are pretty pipe-y... but not every service implements them.


That's such a natural conclusion to draw given the setup that even the author of the article draws it, if one reads through.


So... you agree on the depth of things as well? That's exactly what the author argued for.


There is some merit to "doing one thing well", but taken to the extreme it can be horrible in a different way:

tar cvf - FILE-LIST | gzip -c > FILE.tar.gz

(https://xkcd.com/1168/)


Rob in that comic clearly hasn't used Unix long enough to know about the Arnold mnemonic[1].

[1] http://i.imgur.com/Vf0An8J.png


What kind of tar are you both using that doesn't detect the compression type? tar -xf FILE is fine.
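For reference, all three spellings below unpack the same archive (a sketch assuming GNU tar; bsdtar auto-detects compression too):

```shell
tar -xzf foo.tar.gz               # classic short flags, gzip explicit
tar -xf foo.tar.gz                # tar sniffs the compression itself
tar --extract --file foo.tar.gz   # long options, also auto-detected
```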


I just use GNU long options now, e.g.:

$ tar --extract --verbose --gzip < foo.tar.gz

$ tar --create --gzip --file sql.tar sql/

After switching to this style I never have problems coming up with the correct command. It's a lot easier to read in shell scripts too.


Just alias it.


Since no one has mentioned Plan 9, I thought I would mention one of the many features that puts Plan 9 a notch above Unix and makes it a better fit for the web era.

In Plan 9, literally everything has a file abstraction. This also includes sockets. So even shell programs can be network programs, without external programs like curl or wget. For anyone interested, look at webfs(4). You may say that this can be implemented with FUSE, but having something first-class, designed to operate well within the file abstraction, is very different from something that has been added as an afterthought. In some sense, the BSD folks who added sockets to Unix really screwed it up and missed the Unix philosophy altogether.


Modern 'Linux' systems are full of programs that violate Unix philosophy.

cdrecord, ffmpeg, emacs, dbus, imagemagick, all modern browsers ... the list goes on and on. plumber(4) does what dbus is trying to do, with a tiny amount of code and text-file-based rules.

I see it as a missed opportunity for the linux kernel, and hence for the wider audience, to experience a computing environment that is a joy to use.


Ironically, his own blog includes a screen reader that appears whenever you select some text. Don't blind people already have good screen readers? I select text to remind me the place I'm up to, and get annoyed when little buttons pop up.


economic incentives typically lead firms to look for new sources of revenue. they hope to achieve this extra revenue with minimal cost, ideally by not creating new products or developing new customer relationships. the natural side-effect of this situation for software companies is that they add feature-upon-feature to existing products in an endless quest for growth. at the very end of the road this situation doesn't usually work out well for the product, the user, or the company.

on the other hand there are plenty of services/applications that keep stable interfaces for many years at a time. they do not extend themselves too far beyond solving the problem they originally tried to solve. we can all imagine what craigslist would look like in the hands of short-term profiteers, endlessly tweaking the interface for more ad clicks and "user engagement".

the success of sites like the original google search, craigslist, and HN proves that the "do one thing, keep it simple" model is successful and can often be very profitable in the long term. sadly, it is very easy to forget about such ideals when people are constantly dangling fresh money in your face and/or you have salaries to pay. while page rank might be considered the key element to google's genesis and explosion, we also owe much respect to the people that decided and continually insisted that the UI stay clean and minimal.


There is always <iframe>. You can easily embed a YouTube or Vimeo video even if your core product is not displaying videos. I always felt that iframe is a bit of lost potential; what it needs is a JavaScript API so that you can communicate back and forth with the frame/plugin. Last time I checked, this was not allowed due to the same-origin policy.

After the communication channel is in place, it needs a few standard interfaces; maybe there could be some video player interface with play/pause/seek functions. These don't have to be included in any W3C standard; it's more of a de facto standard or agreement that if you make a video player and don't implement the IAwesomeVideoplayerInterface, websites will not allow embedding of your product.

What's sad is that it seems like vendors are trying to move away from this model. Facebook is the prime example, hosting its own copy of embedded videos and displaying linked news inline in its own format.


IMO there is a limited market for mature ideas, one that is often much smaller than predicted. When startups start hitting this boundary, instead of reining in expectations and becoming an efficient tool serving a small market, it's "go big or go bust," with desperate attempts to validate the earlier market-size predictions.

Sooner or later, you're bumping into the markets of other companies, and it's impossible to compete against them b/c you're pitting your weaknesses against their strengths.

There is only so much money and attention (time) in the world for people to spend on things. The low hanging fruit in developed countries has already been plucked clean so it's quite hard to 'grow' new markets. Your new tv series better be REALLY good if you're competing with game-of-thrones and breaking-bad.

Interestingly enough, the Chinese companies actually have many more features and are often better (for their users) than their American counterparts. The population is more homogeneous and has more of a crowd mentality. Network effects are huge. Monopoly-like companies also prevent too much fragmentation, meaning technology as a tool becomes standardized. Rather than 20 different things competing for everyone's attention, there is one big company with apps that do everything, which is trusted and reliable and thus the 'best'.

While american companies compete for slices of a certain size of pie, one chinese company owns the entire huge pie. As long as the users have pie, they are happy.

In allocation of limited resources, this usually fails as equal distribution of resources leaves everyone poor which creates huge incentive for corruption and hoarding. For abundant resources that can be duplicated infinitely and have network effects to boot this is perhaps a better strategy.

After all, it's all about everyone having an abundance of pie, not who has more or less pie.


There is nothing inherently wrong with complexity. Complex things can be made to appear simple, usable and open too. It is just that, when it comes to software, we haven’t yet figured out a way of building complex systems that are beautiful (in a very broad sense of the term). Whenever we try to create anything complex in software it inevitably turns into a mess.

Also, software engineering sounds unglamorous today, and research in the field seems to have stalled. Hence the craving for the simplicity of the Unix philosophy. Not that the Unix way of working actually solves the problems posed by software complexity; it just asks us to avoid complexity.


> The first person to create a tool that can pipe a table from a browser into a spreadsheet...

Wait a minute. This is called the clipboard. Try it -- you can just copy and paste the table into a Google Doc and it Just Works. What's missing?


It's missing the ability to build an automated process. Clipboard operations are designed to be manual.


Or called by a script. Problem solved.


Evernote never even did the one thing well. I would add an item to a list on my computer, and then later add another one on my phone, and the next day both sources would be missing one of the edits.


Hardware suffers from the same "walled garden" mentality. Why the heck would I want a "smart TV"? A TV exists to display video. It should have lots of inputs and maybe some outputs for audio. If I want to get video from a VCR, a flash drive, or a web service, I should be able to plug in a device to do that. And unplug it when it becomes obsolete.


Then along came systemd....


> I’ve been lamenting the demise of the Unix philosophy: tools should do one thing, and do it well.

1983 was a while ago: http://harmful.cat-v.org/cat-v/. That's a long time to lament something.


>The first person to create a tool that can pipe a table from a browser into a spreadsheet, a Google doc, or even a text file without massive pain will be my hero.

Pigshell.com


Everyone wants to be the walled garden https://xkcd.com/927/


"That philosophy was great, but hasn’t survived into the Web age."

As one who still writes shell scripts and programs for my work that do exactly that, I disagree; every Unix and BSD coder I know still does these things. It still serves us very well - far better than the glut of so-called package managers that pretend they can do better than 'make'.

I chalk a large portion of this up to those creating web pages without any real programming knowledge, training, or background - those who only know how to cut/paste/npm everything they do. These are the same people who think Unix is old and not modern.

I interviewed with a small company yesterday. The creative director there asked me what tools I knew and spewed out everything but the kitchen sink that they use. I was aware of all of them but questioned why he needed any of them.

You see, I've been running a web dev company for 11 years and have never found an advantage in any of it. He asked how we survived without npm or bower, etc., but when I asked him if he knew how to write a Makefile, he didn't even know what it was or what it did.

A lot of the tools we use are things we built up over time ... or last week. Today's "modern" tools may be "instant on" for those who can't write a Makefile either, but that's a fault and not a feature. If you need npm or bower to manage these things, then what happens when something breaks, goes away, or becomes unsupported?

I stuck with npm and bower because, when I tried to write about Angular and other things, it got too long-winded.

One of my points is: all the tools you need are already built into any Unix/BSD system, so why look elsewhere? Those who do are only looking for quick fixes, as I pointed out earlier, and are not interested in the science behind it. Creatives who want to build a web site but have no interest in the technology. They can get it to work, eventually, but "it works" is good enough.

No it's not. That's why smart companies hire mine.


What does your comment have to do with interoperability between web services?


Part of my comment was about his statement that the Unix philosophy is something of the past, which is false. Parts of the rest of my comment dealt with what he said about the plethora of "one size fits all" tools people are using now instead of the simple tools.

Can you not make the connection?


The OP was talking about web apps. What do Makefiles and npm have to with Evernote or Dropbox?


Why use a web app to take notes when one has vi or emacs?


Because it's really convenient to be able to make a note on your phone or tablet, then access it and make additional post-meeting notes shortly thereafter, on a laptop, all in the same interface (so you have the same features, or at least a set of common features).

That's just one, assuming you never want formatting, tables, pictures, etc.


Oh Emacs does tables[0] better than anything out there, except maybe MS Excel. Definitely eats stuff like Evernote or Google Drive for breakfast.

[0] - http://orgmode.org/


I love org-mode, but since Emacs doesn't run on my phone and Evernote does, I take notes in Evernote when I'm not at my computer.


> Because it's really convenient to be able to make a note on your phone or tablet, then access it and make additional post-meeting notes shortly thereafter, on a laptop, all in the same interface (so you have the same features, or at least a set of common features).

That's why I personally want emacs on my phone and tablet. I don't know yet the best way to expose its functionality with a touch interface, but it's still, hands-down the best way to edit information.

Maybe something where a tap in the minibuffer offers some sort of helm- or ido-like command-picking mode, with taps on the side to enable quick execution of text-editing commands? I dunno, really.

> That's just one, assuming you never want formatting, tables, pictures, etc.

Emacs can handle formatting, tables and pictures if you want.


Maybe we need some sort of unix on the web.


More of a fun experiment than anything else, but the Pig Shell is worth playing with:

http://pigshell.com/


It's cool, but I always found such things to be cargo-culting, going in completely the wrong direction. You don't want to make a web shell; you want to use a normal shell for the web.

For a proper "Web UNIX" we need:

- websites talking in structured data (not just plain text)

- less proprietary bullshit (hint: keep sales & marketing people away from APIs)

- ability to conveniently pipe them together anywhere (not on a third-party, complexity-hiding, feature-limiting site like IFTTT)

When I can start typing things like these in my own, local shell:

  @twitter.com/me/tweets/latest | sort > tweets.log
  cat tweets.log | @facebook.com/me/post/new --activity "Feeling: Happy" --photo /tmp/HN.jpg
(where @ is a somewhat general web communication utility)

then we'll have a web UNIX.


This is just a wild guess.... because they have a web browser and not vi or emacs.


I don't know, but that's also completely beside the point.


I believe they are putting forth the case that the kind of developer that doesn't make Unix-like tools, doesn't use Unix-like tools. And that this issue is not a case of everyone doing things shitty (not making things Unix-like), but that it is the god damn creatives making all this bad software. These fucking people don't appreciate the art of programming, and think software is about the destination and not the journey. That's my take anyway.


If by "OP" you mean the author of the article, then no, the article is not about web apps.


Then we clearly read two different articles.

This is essentially the thesis statement:

> Unix has pipes, which make it easy to build complex applications from chains of simpler commands. On the Web, nobody may know you’re a dog, but we don’t have pipes, either. [emphasis mine]

It is explicitly lamenting the complexity of web-based applications as opposed to other kinds of applications. If you disagree that that's the premise, then you and I are living in rather distinct universes.


Ironically, just this week I had several cases of attempting to install Python modules with pip, and every error that came up had to do with compiling parts. Then I used apt-get to install binary packages and they all installed just fine, in 1/10 the time.


People mean a lot of things when they say "one thing". "Do one thing" can mean "replace all occurrences of one string with another". Or it could mean "browse the web", which of course isn't really one thing but a thousand things.


I've always wondered how Emacs fits with the Unix philosophy. With it you can replace strings AND browse the web, but also play games, do file management, email, chat, etc...


Emacs doesn't really fit with the Unix philosophy. Emacs came from the MIT AI Lab and ITS, not Bell Labs and Unix - it got included with most Unix distributions later.

Editors that do fit with the Unix philosophy (they were written by its original developers) are ed, sam, and acme.


It does one thing and does it well: an interactive Lisp runtime ;)

(Actually, most of the individual applications you're thinking about are these days provided through an Emacs package system.)


That is something that got even ESR wondering - chief "Unix philosophy"-advocate and Emacs patriot par excellence that he is...

(Although one thing that can be said for Emacs as regards the Unix philosophy is that it is programmable.)


It fits the same way a shell does.


I think the primary issue here is: the more comfortable you are with technology, the harder it is to see the forest (the goal) for the trees (the discrete tasks involved in getting there).


I think it's best stated "solve one problem".



