Programmers are not different, they need simple UIs (antirez.com)
428 points by dwaxe on May 24, 2016 | 213 comments



One problem is that people are too macho to admit they have cognitive limits.

I know people in the finance business who are sending their CTOs out to take the Series 7 exam, because to write code for some domain you need to understand something about the domain. That is on top of computer science principles, your programming language, the "standard library" for your programming language, libraries you use on top of that, your build system, your version control system, and all kinds of stuff.

And now people think they are going to be left behind if they don't learn Angular, Rust, Shiny7 and all of that.

Every bit of bullshit steals your cognitive capacity and turns you into a 0.1x programmer.


As an artist as well as a programmer, I've learned a lot about the techniques of professional artists. Most are specialists, not generalists. They have a specific, focused toolbox, and they do not leave it. If a guitarist gets that sound from this guitar and amp and pedals and scales and rhythms, he sticks to that. If a painter prefers these subjects and those brushes and paints and canvas sizes, she sticks to that.

Become proficient in a medium to the point where absolutely no conscious thought is required to use it to its fullest extent. That's the secret of great artists.

Apply that to programming, now. You can learn a field and the core tech of the field, and get a lot done for a long time. Or you can go chasing the new and shiny every six months and drive yourself up a wall and spend all your time learning instead of making.


Some short advice from an older programmer: this is the best way to ensure that you are unemployed in 20 years. Art is old (even older than me!). You can specialize in something and have a pretty darn good feeling of assurance that it isn't going to go away before you die.

Programming is a young field. It's changing dramatically from year to year. My first paying job was writing FORTH code. At one point I was writing OO C code on an embedded platform because we didn't have a C++ compiler. Eventually I became an expert in C++ (like an actual expert). A few years ago I went to a job interview and they asked me questions about Boost. I had to tell them that back when I was an expert in C++, the STL hadn't even been written. We were all using RogueWave. I've been paid to work in Perl, Python, Java, C#, Ruby, Go, Javascript and several other languages you will never have heard of because they were proprietary languages written by the companies I worked for. I've built embedded systems, real time systems, distributed systems and now I'm trying to fix 10 year old Ruby on Rails code.

The thing is that none of the stuff I do is really all that different. It's just the details that change. Every time I learn something that is truly different, I always find a way to apply it to what I'm doing (if you look above you will notice a huge hole where functional programming should be... it's getting filled).

This field is about learning every day. If you want to be more than average and you want to stay that way for more than 10 years, then you have to accept that you will be spending all of your time learning. And making.

HTH!


I am an older programmer, and what you described is my career, more or less. And I fight off recruiters with a stick, switch jobs every year, and laugh at the articles about how there's ageism in the industry.

The problem I typically see isn't that someone focused on a technology for too long and became unemployable. It's that they focused on an employer for too long and became unemployable. I started my career doing C and SQL 20+ years ago. I could still easily make a living at them if I wanted. My second major technology was enterprise Java. I could still make a living at that if I wanted (and to an extent, I still do).

But people spent 10-20 years in a culture. They don't know how to even look for a job anymore. Their skills may be great, but they've never put in a job application online. They're unknown to the recruiters. They don't know how to write a good resume that reflects their abilities. I watched my wife go through it last year. Laid off after 15 years at one company, her problems were twofold. First, she had narrow but deep domain expertise. Second, she was rusty at job hunting and out of touch with industry standards that had evolved without her. She still believed in waterfall and thought agile was a joke - not good for someone looking for product owner roles! She learned, but it took a while.


I don't think that those two options are mutually exclusive; you can specialize and keep up with the times... I think you kinda need to recognize the direction the times are moving in to keep up, though.


I can see validity to both of your perspectives. How about the T-shaped engineer concept? I like this description of the T-shaped engineer: http://business.stackoverflow.com/blog/should-you-hire-t-sha...


Yes. I can agree with that. Career strategy is tricky in this business so it's best to keep a flexible mind.


Seems to me that most technical people in this business have no career strategy. They settle into a job and inertia takes hold, and a body at rest tends to stay at rest until a force is applied to move it. So they will sit their butt in a seat and be underutilized, and perform mind numbing work, with a nagging voice in their head telling them that their tech muscles are experiencing atrophy, but yet still they stay put until the bitter end.


Really applies to all people, not just tech people.


What if you make your core competence as a programmer "gaining steady footing in new tech", along with basics like code structuring, version control and some automation (CI/testing etc.)?

That way you might not become "the greatest X86 assembly programmer" or whatever niche you like, but you might become a very proficient system integrator, which is useful in its own right.


Oh, that's me. I'm a mile wide and an inch deep. So I can do a lot of things, most of them not very well. My niche friends are much better than I am at anything I can do, but I can do a lot more things.

It's worked out kind of weird, pushing me (after a long career) into being a founder, and building a tool for generalizing common problems in systems integration. :) My desire to wear lots of hats finally works! But really? I'm more or less a specialist at being a generalist. And I've learned not to chase shiny new tools, because they're very distracting.


I'm the same way! I try to get good at something, and then immediately move on. I am the best at nothing, and better than almost no one. But I am the only person I know that can write code, swordfight, surf, use a ham radio, play a handful of instruments, build things out of electronics, draw and paint, shoot, blacksmith, and blow glass (all at the same time -- just kidding)! I make myself very happy by being able to try something and then practicing until I get "good" in my own opinion. It's absolutely true that up against pros I am pretty terrible! I have also come to understand over the years that this is a byproduct of having a hacker mentality. I am loathe to use that word in reference to anything I am or anything I do, but that's just what I see. Hackers are curious creatures, and so it follows that they can do a lot of things.


It's often really hard to judge how skilled you are at something. https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect is overblown, but being able to, say, do a crappy weld in 4 hours that a competent person can do better in 15 minutes is often just dangerous. Many a handyman has caused $20,000+ worth of damage or gotten someone injured.

That said, hobbies don't need to be productive and half-assing something is often good enough. Yes, a simple wedge doorstop may damage the door, but, well, sometimes that's just not an issue.


Did you mean to reply to my post?


This is the direction I try to take. Every time I hear about a new technology I go and read the basics of it and try to understand where I might use it. Then when an opportunity comes up, I investigate and see if it's truly applicable and worth learning. In my experience, trying to dive right in to something you don't have a relatively immediate use for tends to produce a lot of wasted time and make learning slower and harder given the lack of direction.


My suspicion is that the answer is "learning isn't doing".

These are two different functional modes. The mastery of acting without conscious thought is very hard to attain. See also flow (Csikszentmihalyi) and related topics.

Learning incrementally can help, and I'd structured my own career such that I was doing that. I ultimately opted out of it when that stopped being an option. I was simply spending all my time catching up, and not finding myself (nor, to a very large extent, any of my peers) actually proficient with the New Hawtness.


I've been in the industry 20+ years. The only technologies I've used consistently are Unix, and relational databases. The languages I work with, the frameworks, all of that stuff is new since I started, long ago. So yes, you have to keep learning.

But that doesn't mean you have to live on the bleeding edge, either! A tech that is proven out, used widely and stable, that's worth investing time to learn. In Crossing the Chasm terms (great book on marketing!), you need to learn early majority tech, not early adopter tech. You can be an early adopter for kicks, of course, but don't pretend it's to be more valuable.


Fair point, though many of the tools I've used tended toward evolutionary development. There came a point when a bunch of stuff was being binned and/or flung at the wall, pasta-style.


Maybe you just aren't very good at picking languages? No offense but I have been using C, C++, C#, Python, and JavaScript for what - almost 15-20 years now?


More I hop jobs a lot. I was using C at the beginning of my career. Lately, I'm using C again (and more importantly, using arcane C skills that no one has anymore, doing a 32-64 bit port). I picked up Java in 2001. I've done a fair bit of Ruby, too. Doing Python in the current gig, which is mostly new to me but trivial to learn.

The point is, I haven't had to deal with C for a long time. It's odd to be doing it again.


The problem is you get stuck as a junior developer in skill set if not job role. We regularly get college interns who can often get productive in a new technology in around 2 weeks of effort, just like senior developers. It's the curse of specialization that the more people who can and will do something, the less that thing pays.


If you were a college professor, you would generally spend most of your time reading, researching, and _synthesizing_ learning into digestible information for others. Your ability to do new and groundbreaking things would be based partially on the mastery of a discipline and partially on the ability to be constantly exposed to and challenged by new ideas.

I don't think programmers are more like professors than artists, or vice versa. These are just two different modalities applicable within the large, varied landscape that is computer engineering.


Leonardo, Michelangelo, Picasso, Mozart, Cezanne, Bach, Rembrandt, Goya - none of these artists stuck to one instrument or medium, most were masters of many and all experimented freely. Your thesis does not hold true for many great artists I can think of.


True, but then they are the all-time greats for a reason.


Some people that grew up being told that they were smart tend to think of themselves as that kind of master polymath type and end up ignoring that detail.


Most successful artists in my generation are media-agnostic and work fluidly between sculpture, installation, image-making, digital, video, and sound as media.


> One problem is that people are too macho to admit they have cognitive limits.

Yes, I have experienced this and it seems to be a real problem, for various reasons: if it does not look complicated, investors won't like it and won't buy it; if it is too simple to use, people will think I am dumb; if the setup is too simple, people will think I am incompetent, etc.

Simplicity is a prerequisite for reliability, not a nice-to-have.


Well, I AM dumb. I am powerless in the face of my own stupidity.

Everything I do hinges on the acknowledgement of that fact. And you know what? My stuff works, because I can whack all the complexity down into a manageable ...widget that enforces all the rules for me. I know how to develop enough introspection of such things to keep from having real problems. I'm down to reported bugs where like the file system crashed.

No, I think too many people are still trying to win the science fair.


When my son turned 4 or 5 he got obsessed with Rube Goldberg machines, which really made me sick to my stomach because at the time I spent all day debugging problems in Rube Goldberg machines.


Maybe for some people, building Rube Goldberg machines is an essential prereq to being able to code without building them.


Is it a coincidence that both of you almost quoted C. Moore?

"There seems to be a propensity to make things complicated. People love to make them complicated. You can see that in television programming you can see it in products in the marketplace, you can see it in internet web sites. Simply presenting the information is not enough, you have got to make it engaging. I think there is perhaps an optimal level of complexity that the brain is designed to handle. If you make it to simple people are bored if you make it too complicated they are lost"

http://www.ultratechnology.com/1xforth.htm


Is it a coincidence that both of you almost quoted Edsger Wybe Dijkstra?

"Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better. " (EWD 896)


> And now people think they are going to be left behind if they don't learn Angular, Rust, Shiny7 and all of that.

The question you always need to ask is: "Why is <this> better than what I have now?" If you can't answer that DON'T LEARN IT.

10% better isn't enough to take up brain space. Your proficiency with your current tools easily exceeds that. 100% better probably is good enough.

The only thing I've seen in the last 5 years that has been good enough to make me start learning it is Rust.

Seeing how nice Glium made programming OpenGL was the final push. Programming OpenGL is painful on a good day, and Glium managed to make a lot of the pain go away. The fact that nobody managed to do this in any other language caught my attention.

Maybe Rust isn't actually required, but I'm willing to learn something just to listen to a community/ecosystem that can produce something so nice.


Being left behind is a very real problem if you're a contractor. You just have to learn the tools that everyone is using, including Angular, React, React Native, and Node.js. There's still Rails jobs out there, but they seem to be disappearing.


> And now people think they are going to be left behind if they don't learn Angular, Rust, Shiny7 and all of that.

I have a bit of ADD-ness so shiny objects look the best. I love learning and have learned to curb my learning appetite to skills / areas that I think might benefit me in the near term.

> Every bit of bullshit steals your cognitive capacity and turns you into a 0.1x programmer.

Agree3


... I want to stress how important is the concept of simplicity, not just in graphical UIs, but also in UIs designed for programmers.

I've always felt sad that this dichotomy is so strong in our industry. There are graphical tools for proles and then there are text-based tools for programmers, and never the two shall mix.

It didn't have to be this way. Smalltalk was an integrated graphical system for both programmers and users already 40 years ago. Something like the Smalltalk class browser with visual editing of live code would feel revolutionary even today compared to most of the systems we have.

But we're seemingly stuck with another '70s approach, that of Unix: streams of plain text, UIs based on undiscoverable and inconsistent command line argument arrays, all glued together with a spider's web of incredibly fragile string-replacement tools like shell scripts and templates.


I'm actually okay with text-based systems; good UI can be implemented in that format too. For example, the principle of least surprise has clearly NOT been followed by lots of utilities in defining the "recursive" option - is it capital R or lowercase r? And do you use single-dash for long options or double-dash?

And there are advantages to text-based UIs (scriptable, composable, copy-pastable) that I think would be hard to reproduce in graphical UIs.


Argument styles are themselves characteristics of systems.

There's initial BSD-style arguments. Almost always single-dash and single-letter. Not entirely consistent.

There's GNU-style arguments. While (usually) consistent with equivalent BSD-style, there are long args, given by double-dash and multi-character arguments.

Then there are the utilities which eschew either. Single-dash multi-character args, random-character args, alternative delimiters.

(dd is a special case: it's derived from the mainframe JCL data-dump command and its syntax.)

Recognising these ontologies often helps.

The fact that CLIs get incorporated into scripts gives really fucking strong incentive to upstream devs not to randomly change shit. It breaks fucking everything, and often will cause users (usually admins or coders themselves) to flee tools which practice this.


And that's annoying from a UX perspective; a user should not have to know the genealogy of a particular utility in order to guess at some of its command-line arguments.


But is it really more inconsistent than graphical interfaces? It feels like every graphical coding tool I use these days has the menu in a different corner. Some require me to use a web browser, others run in heavy virtual machines that take an hour to start up.

It's one thing to keep a consistent UI within your own application, but doing it across a hundred different ones is something I have yet to see, in the graphical world as well as the text-based one.


That is true, and it is generally considered a failure of UI design, especially if these differences show up side-by-side in the same system. That graphical coding tools have these problems is in part my point - many hops up this chain, pavlov bemoaned the lack of GUI coding tools, and I pointed out that whether a UI is good and whether it's text-based are orthogonal properties.

As an example of how it's possible to make a consistent UI among diverse applications: both Android and iOS have had a lot of effort put in to make app developers converge on consistent idioms; early Android did not do this, and was widely panned for it in the UI design community. The way iOS did this from the start, and Android does it now, is by having the "owner" of the system publish libraries that implement these standard UI idioms.

In the Unix and command-line world this has mostly been the province of GNU, which published the widely (but sadly, still not universally) used getopt_long, which was a big step forward for usability by imposing some level of uniformity on at least the syntax level of command-line args; more universal adoption of it would be great. There's still the problem of word choice and abbreviation choice, which hasn't been standardized across commands (quick, which of these recurse into subdirectories? grep -R, grep -r, sed -r, cp -r, cp -R, less -r, less -R)


grep -R, grep -r, cp -r and cp -R all recurse into subdirectories. Sed and less don't because what would that even mean?


sed and less can both look at multiple files (sed with --in-place, and less always). sed -r is extended regexp (as opposed to -E in grep - why? who knows), less -r is to disable control character filtering, and less -R is to disable control character filtering for color codes only.


With a well-designed graphical UI, options and menus are (or at least should be) easily discoverable. But for text-based UIs, I'm forced to resort to scrolling through "man tmux" or "java -help", or etc. to find the option that I want.


It's human systems.

Cities, buildings, books, philosophy and theology all bear the marks of their time.

For me it becomes a modal guide: given XYZ attributes I can expect MNO properties (usually).

As another commenter notes, GUIs have similar tendencies. And Web frameworks.


Said the other way, knowing the genealogy of a tool can allow a user to apply their existing experience, rather than starting from scratch.

One applies to lowering the bar for novice users, the other is a force-multiplier for experienced users.


The force-multiplier for experienced users would be even more powerful if each convention covered more utilities, and in the optimal case if there were only one (I can dream, right?).

I agree that it's very useful for an experienced user to learn the genealogy of all these tools for command-line wizardry, but they shouldn't have to.


But that's unlikely to happen.

A given utility exists in what are quite likely millions if not billions of scripts / instances, each of which would have to be changed.

The only way to fix this is to go back in time.

NB: this is another reason to get shit right from the start. Which is what TFA is about.


CLIs can be automated when you need to run things on a server.

And CLIs are relatively cheap because calling main() and doing some stuff is just a few extra functions.

Graphical tools usually need to hook more deeply into the software to extract the data and transform it into interactive UI components.

Eclipse, for example, implemented its own Java compiler that it could instrument for syntax parsing, so the IDE has total knowledge of the code.

Many CLI tools are also non-interactive. Adding that interactivity adds complexity.


Apple solved the "automating GUI applications" problem decades ago with AppleScript. But of course, this being the IT industry, everything old is forgotten and constantly re-invented as it goes in and out of vogue.

> Graphical tools usually need to hook more deeply into the software

I don't even know what that means, exactly. You mean they need more dependencies?

> Many CLI tools are also non-interactive. Adding that interactivity adds complexity.

Most things worth doing are difficult.


I tried using AppleScript a couple of times. One of the worst user experiences of my life. Zero documentation for the syntax and semantics of the language, almost no documentation for the library part, and the documentation that existed was 7 years out of date (I was on Leopard, and the documentation still referenced OS 8). Even after those hurdles I was handcuffed to the sparse and poorly designed APIs that OS X programs had, put in as an afterthought.

Calling that a "solution"...


I suspect GP means AppleEvents rather than AppleScript.

AppleEvents are a high-level event framework that lets you tell applications things like 'select the 5th word of the 2nd paragraph', or 'import all mp3s from /Volumes/USB'. AppleScript tries to allow you to use those exact phrases and creates an uncanny valley language.

Automator and other automating applications use AppleEvents to do their magic. Cocoa has AppleEvents built in for the basics, but support has been spotty beyond that so a lot of people are turning to the accessibility framework.


To script a UI application you also need the developers to stop moving things around. I suspect this would be hard, since they are not used to it mattering if they swap the order of two input fields or some such. In the text-based world, there is much more aversion to changing the flags.


If you use AppleEvents correctly, moving things around wouldn't matter, because the scripting interface is completely divorced from the GUI displayed on screen.

AppleEvents are things like: "File Open with this path", "Print page 4 of file", "Save As to this path".

If someone creates AppleEvents like, "Click button 4 on dialog 2", then they're doing it very wrong.


What do you mean by "uncanny valley language?" Is it too close to natural language, but not close enough or what?


Yes, exactly that. For example:

    tell application "Finder" to duplicate (every item in folder "Documents" of home whose modification date comes after (current date) - 7 * days) to folder "Documents" of disk "Backups" with replacing
It's almost, but not quite, English. It'll trip you up when writing it.


I switched to Windows, so I honestly have no idea, but I get the impression that AppleScript/AppleEvents has been pretty much utterly ignored by OS X except for a small amount of backwards-compatibility lip-service.

I stopped using Mac OS when OS X dropped Classic compatibility.


It was still a solution, covering all aspects of the issue, and people have done great things with it.

But you conflate the solution with the particular implementation.

It could be some other language instead of AppleScript, e.g. Python or Javascript -- it's the idea that matters.


Hooking into graphical software that's not designed for that is a pain in the ass. Compared to stringing together various CLI tools that primarily deal in text interfaces.

What I, personally, prefer to do is construct libraries and simultaneously develop a collection of CLI tools to access them. Then build the GUI afterward, once I've already vetted and tested the underlying business logic.

The benefit of this is the GUI for most users. The CLI for power users. The library for the developers that want to build additional interfaces or plug it into their projects directly.


> I don't even know what that means

I already gave an example, but I can expand on that.

A CLI compiler only has to take its input and run it through the various compilation stages to the end or abort if something is invalid.

To do what Eclipse does for code completion as you type, on the other hand, you have to hook into the AST generator stage and the type inference engine, and, given partial, invalid inputs, try to fix them up and then do searches on what other known pieces can be dropped into a particular place.

Which means your compiler can't just bail out at its earliest convenience. It needs public, not internal, APIs to massage state back into working condition so the next steps can then run until the IDE has gathered enough information.

Or take a memory profiler (with allocation callsite recording). The first step of interactivity is being able to turn it on and off at runtime. That means hot-swapping out some code, imagine replacing a malloc with a call-stack capturing one. It gets even more complex if you're using custom allocators.

And for the analysis part: In a simple CLI tool you might just run some graph analysis that takes 20 seconds on a multi-GB heap dump which then spits out the top 10 dominators and a class histogram and you'll hopefully know what to optimize. And you might want to diff those outputs (yay for text processing!) for test runs so you know how things change from build to build.

In a GUI you want interactivity, which means you can't just run those 20-second calculations every time the user navigates through some tree view. You need incremental algorithms, caching sub-results - but not too much otherwise the profiler itself will munge too much memory - etc. etc.

So you need APIs that go deeper into your graph logic than what you would think of public APIs needed for the CLI. And of course you need to build the whole GUI on top of that too.


Having used AppleScript in the past and dealt with application stacks that relied on it, I would not characterize it as a "solution".


>> Many CLI tools are also non-interactive. Adding that interactivity adds complexity.

> Most things worth doing are difficult.

The most important thing is to identify and get rid of all accidental complexity.


> Most things worth doing are difficult

Being complex and difficult does not mean that something is worth doing.


Uh... huh? You lost me.


The culture that kept Smalltalk alive failed. So it goes. My understanding is that it had schisms. I've worked with Smalltalkers; they talked like conquered people.

I PREFER - strongly - text-based, because graphical-based means some obscure binary protocol/file format that bitrots into obscurity. You can always learn to do hardcore text manipulation. You can't necessarily reverse-engineer, say, a Word 1.0 document.

Text is the cockroach format.


> Text is the cockroach format.

I.e. ugly and hard to get rid of? ;).

The problem with Unix isn't text - it's unstructured text. Every utility needs to implement its own copy of ad-hoc, bug-ridden parsers to operate on the same data - and if you want to string some of those utilities together, you often end up writing ad-hoc parsers too.

The sad thing is that we knew how to work with structured text. Then Unix came along and ignored all that experience.
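As a minimal sketch of the difference (the field names and the filtering rule here are made up): if each stage of a pipeline exchanges one JSON record per line, the next stage gets structure for free instead of guessing which whitespace-separated column it needs.

    import json
    import sys

    # Hypothetical pipeline stage: read one JSON record per line on stdin,
    # keep the entries whose "status" field is a server error, and write
    # them back out as JSON lines for the next stage to consume.
    for line in sys.stdin:
        record = json.loads(line)
        if record.get("status", 0) >= 500:
            sys.stdout.write(json.dumps(record) + "\n")

No ad-hoc parsing of column positions, and the stage after this one can make the same assumption.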


Yup, having more structure and types would be nice. However, that would require effort from all application writers, whereas unstructured text is what they'd naturally output, and it works well enough for most things.


>The sad thing is that we knew how to work with structured text.

Which is?


S-expressions for instance, which were popular back then. JSON is a modern reinvention of that particular wheel.


Here, let me show you my ASN.1 ... :) Or worse, my TL1. :)



Cuts both ways, of course. The expressiveness programmers enjoy by manipulating a directory-structure full of files using a syntax-highlighting text editor with autocompletions and a version control system to create a description of a complex set of interrelated entities, which they then process with toolchains to produce the real artifacts of their trade, is a powerful capability which for some reason we rarely try to offer to non programmers. With the right DSLs, almost anything could be 'programming'.


I completely agree with the idea of simple, flexible tools for interacting with programs and providing good tools for users.

A while back I was putting together a DB query tool for volunteer staff to use. I deliberately made it simple and as unambiguous as I could within the limits of "simple". For example, to allow a free-form search of the principal database fields, I set up a very basic regular expression syntax: just substring match, and optionally '*', '+', '|', and '.'.

This part of the interface is 3 text entry fields, with a nicely worded help message right next to the entries. In person, I showed the primary users how the interface worked and demoed the simplified regex.

Months later there are sometimes still questions like "how do I find somebody named 'John Q. Smith'?" ("Just enter 'smi' and pick the name off the list of names sent back.") I don't mind going over it again, especially for someone new, but I think it shows it may be harder than we anticipate to introduce "programming-like" methods to typical users.
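For what it's worth, here is a rough sketch of how that kind of simplified pattern syntax might be implemented; the helper name and the exact set of pass-through metacharacters are assumptions based on the description above.

    import re

    PASS_THROUGH = set("*+|.")  # the only regex operators the simplified syntax allows

    def compile_simple_pattern(pattern):
        # Everything else is treated as a literal; matching is substring-based
        # and case-insensitive, as described above.
        parts = [ch if ch in PASS_THROUGH else re.escape(ch) for ch in pattern]
        return re.compile("".join(parts), re.IGNORECASE)

    names = ["John Q. Smith", "Jane Doe", "Bob Smiley"]
    matcher = compile_simple_pattern("smi")
    print([n for n in names if matcher.search(n)])  # ['John Q. Smith', 'Bob Smiley']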


One area I have been looking at are "business rules" systems which, I think, won't go mainstream until they get facile enough that you can do more ordinary programming tasks with them.

A core idea of them has been that the things that make coding hard for business people are different from the things that make coding hard for programmers, and I don't believe that at all. (If programmers can't stand to use it to do simple things, how do you expect business people to go anywhere with it?)


Overly "helpful" systems are adopted by people who don't bother to learn the domain before they make sweeping business decisions. And, strangely, once people do know the domain the last thing they want is a kindergarten toy where they program by clicking through menus. Imagine a novelist writing by navigating right-click menus of phrases organized by "Prepositional phrases", "Snappy responses", etc.

It's not that a GUI can't be helpful, it's that "GUI" isn't a synonym for helpful.

If there's a problem that can be solved by looking at a map (perhaps dataflow through components) then yes, it'd be great to present that visually.

The problem is that most of the people having this discussion know programming from movies like Hackers. They want a picture of an oil-tanker rolling over despite the fact that the code in question (theoretical code - never shown in the movie) wouldn't know what an oil tanker was, it'd simply react to one number changing by changing another. The meaning is all dependent on what platform it's installed on.


It is not easy to make GUI systems that are easy to use. Back before people were used to reading help on the internet there used to be a brisk market in 1000 page books with titles like "How to use Microsoft Word".

Closer to the domain, my favorite example of malware is the ontology editor Protégé. Every day I see freshers who are highly confused getting started with Protégé who are then blown away by how easy it is to write an RDFS ontology in a few lines of Turtle with an ordinary text editor.


IMO, K2 really has a good entry in the business rules engine problem with a solid, kinda-accessible UI.

I am confident, having scoped many a project, that the case is often that the stakeholder doesn't quite know what they want. A good UI, then, is a good step toward allowing the stakeholder to iterate on refining their interpretation of their own problem.


I just went to the k2 website and it looks like I can't get access to the software without buying it? Bummer. I set up a request asking for a demo, but dealing with sales reps is such a PITA.

I am in the market for a decent BPM that doesn't want to charge crazy amounts and force us to buy support (I'm looking at you, Bonitasoft). Right now we're using Activiti and it is awful. It also seems like the least bad of the FOSS offerings.


I only dealt with K2 in '10, so things may have changed, but even in the sales process they had engineers on the line. One of the better 'sales' experiences IMO.

Oh, and if you are genuinely interested in trying K2 before buying, I'll bet they'd set up a VM for you to remote into. They did for me in '10.


I'm slightly digressing, but I always thought it was interesting how the printed circuit board software Eagle was a combination of a graphical and text based tool. You can use the buttons like any other CAD program or you can enter commands into a limited command line. I like that UI but I've not seen other programs like it.


It's pretty common in 3D modeling software. I've always liked how Rhino 3D integrates the command line with the GUI, allowing you to use pretty much any combination of the two for most functions.

Here's a screencast showing some basic functionality: https://youtu.be/As0WUtBV3NQ?t=66


And you can turn those commands into scripts if you'd like with the ULP language. It's not necessarily pretty, but it's a straightforward way to automate repetitive tasks without requiring the authors of Eagle to figure out every possible use case.

There's a lot of things that I dislike about Eagle, but I definitely miss those things when I'm using other CAD packages.


AutoCAD has had a command line since the very beginning.


We were on DOS back then. Doing a GUI was hard.

Compare Autodesk Inventor. No command line, and it doesn't need one. All the new Autodesk tools (Fusion, etc.) are based on Inventor.

It's very hard to do a GUI for an elaborate 3D program, because you need to select multiple objects, talk to the GUI, and change the viewpoint, all within one operation. Yet it's been done successfully. Inventor doesn't even use hotkeys much; you can do almost everything with the mouse except enter numeric values.


Agreed. Mostly a result of "if I had to learn it so should you" and stuck-in-the-mud attitudes.

Look at the Linux systemd / init debate. Despite the fact that sysvinit is clearly ancient, awful technology there are still people who have spent time learning it and don't want something better.

Another example: bash. Clearly an awful awful language/shell but it sticks around because too many people can't admit that the thing they know is rubbish.


This feels like bulverism: you've decided that people are wrong, and now you've come up with a fully general explanation for why they're wrong. https://en.wikipedia.org/wiki/Bulverism

If you ask people why they use bash, and why they don't like systemd, you get perfectly reasonable answers. (I'm not going to go into them here for obvious reasons.) You can't just dismiss these people as stick-in-the-muds.


> Another example: bash. Clearly an awful awful language/shell but it sticks around because too many people can't admit that the thing they know is rubbish.

You should try thinking about what a better bash would look like; it's not as simple as you make it out to be. I think the best I've seen in this direction is rc, and I think that's still not enough to compensate for the loss of ubiquity.

The stuff people come up with when they try to "redesign the unix terminal" (usually some javascript contraption) is always extremely lacking.


It's not simple, but it's not hard to imagine. Some of these things exist already:

* Image support (I can't remember where I saw this but it exists)

* Structured data piping (in Powershell)

* Machine-readable command-line arguments, which would enable...

* ...Proper autocomplete (like in IDEs)

* Real types (Bash is 'stringly typed')

* Sanity (e.g. look at how `[` is implemented in Bash; none of that)

* A better way to integrate with programs than running them on the command line (e.g. shell bindings).

That last one would be tricky and really requires moving away from the C ABI as the Lingua Franca of libraries. That is probably quite far away in the future unfortunately.


Regarding (1), a proposal that had some impact was TermKit[1], which was supposed to auto-detect the type of data coming from stdout and display it intelligently. It died on the vine, though[2].

[1] http://acko.net/blog/on-termkit/

[2] https://www.reddit.com/r/programming/comments/137kd9/18_mont...


> You should try thinking about what a better bash would look like

It's kind of a "cheat," but I've been really happy playing with xonsh (which is essentially bash + ipython)[0]. Bash with better control flow and types... it's suddenly not so maddening anymore.

[0] https://news.ycombinator.com/item?id=9207738


I do not think that this:

    >>> print("my home is $HOME")
    my home is $HOME
    >>> echo "my home is $HOME"
    my home is /home/snail
constitutes progress in user experience. And that's from the tutorial (I appreciate the honesty though)


People forget, repeatedly, at length and often boisterously, that one of the fundamental points of bash is to continue to function when the rest of your machine is completely screwed up.

Loading an entire development environment into your CLI is ten tons of bad news in a five pound bag.


> "redesign the unix terminal"

... is not the same as the Bourne Again shell and scripting language, that IshKebab was talking about.


There are lots of reasons people resisted systemd initially that had nothing to do with resisting learning a new system.

Likewise with bash, what's the better alternative? And don't say fish. Just because it bills itself as such doesn't make it an improvement. All tools have some overhead to learning them.


We don't need to hypothesize an answer. We live in a world where we can see an answer, if we know our history. People decided almost 10 years ago that there was a better way to do certain things than use the Bourne Again shell, and switched to using the Debian Almquist shell.

* http://unix.stackexchange.com/a/250965/5132


Interesting bit of history, I had assumed the parent was complaining about bash interactive shells with the usual "they're hard to use for humans" position. The link suggests that the major gain of Debian Almquist was performance.


I suspect it is more that it sticks around because it has stuck around. Quirks it may have but you can learn them and count on consistency. It is all well and good to pick a better shell but you can't expect any machine but yours to have it.


Fair enough - bash has a lot of tough features to learn.

IMO, bash sticks around because a lot of people still find it useful.

Example: I have 2 mins to check if a duplicate line has occurred in a log file with some arbitrary logging format. One pipeline later, I have an answer.


Your example shows why having a shell is useful, but isn't specific to bash.


I do not use bash in any way where it's bad. It's purely a "save me some typing" thing.


Bash is quite cool, it just has a dreadful syntax.


It's worth thinking about the reasons Unix beat Smalltalk. Interoperability is important. Being able to use the same tools on everything is important.


I think you'd like Apache NiFi. It has a drag-and-drop interface for building ingest and transformation pipelines. Everyone I've talked to that's used it says it makes them 10x more productive for ingest problems.


Another example, and one almost as old: HyperCard.


I honestly believe this is the reason why React is currently winning the web tech wars. The API is minimal, clear and concise. Once you've mastered the few functions and concepts, you rarely have to reference the documentation, and you understand what tools you have to solve a problem. A major source of frustration for me as a programmer is implementing something only to find the library or framework I'm using has already included it in its core; it's just not in the documentation, or is hidden in a huge API reference. It's a refreshing experience when libraries and frameworks appreciate the product and the programmer.


> I honestly believe this is the reason why React is currently winning the web tech wars.

On what are you basing that assertion? I'm primarily a back-end developer, so I'm not necessarily disputing it. However, every other recruiter who contacts me asks about Angular experience, whereas I have yet to encounter a single React job listing in the wild.

I do agree that React seems to get a bit more chatter and cheerleading on HN and Reddit recently... but I never know how much of that chatter is legit professional usage, versus what people are tinkering with in their personal side projects. If I took web forums at face value, I would be under the impression that Rust is taking over the enterprise right now.


Same holds true for functional programming & languages. If you believed that HN forums were a reflection of reality, you'd be shocked when you looked for Haskell jobs.


I can't necessarily agree with this, as while React itself is very simple, you will find that after sufficient experience with it in the real world, you need its ecosystem and surrounding libraries, and this can be absolutely daunting for some.

e.g. NPM, Webpack, Flux/Redux, React-Router, Thunk, DevTools, CSS, etc.


You really don't need those - React was popular far before any of those came about (well, except CSS). :)


Perhaps a good team lead / architect is helpful here to set things up and teach people who find these things hard how to use it, as well as build domain specific abstractions to simplify building apps? One example might be the Facebook UI Infrastructure team.


But compared to something like Angular, React is minimalist.


But compared to something like GWT, Angular is minimalist :) Surely that doesn't mean we should all settle for Angular, does it?


This sentiment—that application programming interfaces are user interfaces, and that programmers are users—is why I’m spending so much effort improving the API of D3 4.0. I wrote about that in March: https://medium.com/@mbostock/what-makes-software-good-943557...


Please consider making it easier to debug as well. JavaScript has some excellent debuggers available to it-- D3 (at least the older version I used) seemed almost designed to defeat every useful feature of those debuggers.


It’s interesting—or frustrating?—that you should mention this, because D3 is explicitly designed with debugging (and debuggers, developer tools) in mind. I’d like to understand your frustrations better.

There are two main strategies that D3 uses to facilitate debugging.

First, whenever possible, D3’s operations apply immediately; for example, selection.attr immediately evaluates and sets the new attribute values of the selected elements. This minimizes the amount of internal control flow, making the debug stack smaller, and ensures that any user code (the function you pass to selection.attr that defines the new attribute values) is evaluated synchronously.

This immediate and synchronous evaluation of user code is in contrast to other systems and frameworks, including D3’s predecessor Protovis, that retain references to your code and evaluate them at arbitrary points in the future with deep internal control flow. I talk about this a bit in the D3 paper (and the essay I linked above): http://vis.stanford.edu/papers/d3 Even in the case of transitions, which are necessarily asynchronous, D3 evaluates the target values of the transition synchronously, and only constructs the interpolator asynchronously.

Second, and more obviously, D3 uses the DOM and web standards. It doesn’t introduce a novel graphical representation. This means you can use your browser’s developer tools to inspect the result of your operations. Combined with the above, it means you can run commands to modify the DOM in the console, and then immediately inspect the result. D3’s standards-based approach has also enabled some interesting tools to be built, like the D3 deconstructor: https://ucbvislab.github.io/d3-deconstructor/


It's been a while since I had to use it, but I recall the most common frustration for me was getting an unhandled exception with a call stack like:

AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod -> AnonymousMethod

Occurring hours after I'd actually created the code that caused the exception.

Since I've moved jobs since then, I can't give a more specific example, sorry.

> that retain references to your code and evaluate them at arbitrary points in the future with deep internal control flow.

For the record, that appears to be exactly what was happening to me when I was struggling with D3.


Without a more specific example to go on, it’s difficult to speculate. The only case where you’d get an error asynchronously should be in the case of transition.tween or an event listener—cases where the code is necessarily asynchronous—and not something like selection.attr or transition.attr, where the code can be evaluated synchronously.

It’s true that D3 uses closures and anonymous functions internally. But assuming you are using a debugger and the non-minified code, you can use that debugger to see exactly what the code is doing. To continue with the example of selection.attr, the implementation is here:

https://github.com/d3/d3-selection/blob/master/src/selection...

So a typical call stack would be three deep: selection.attr > selection.each > attrFunction’s closure.


I completely agree with the sentiment. And this is why such a statement made by antirez leaves me puzzled.

I admire Redis for its performance and simplicity of setup, but if there is one thing that Redis does not have, it is a coherent API.

Whenever I use Redis, I absolutely need the command cheatsheet, as it seems that every command works differently. There is no commonality in data structures. The whole thing seems more dictated by the underlying implementation than the desire of providing a simple API.

For instance, why does LPUSH accept multiple parameters but LPUSHX does not? Why is there even HyperLogLog in a database? Where did the need for RPOPLPUSH come from? Why are the options for SCRIPT DEBUG YES, SYNC and NO?

I don't want to be too negative. I appreciate the great work that has been done on Redis, but I feel it really needs to rethink the whole API layer.


Well hyperloglog is simply a golden bullet for all the time-series systems currently en vogue. "How many people with Windows 7 visited google.com yesterday?" -> come back in a year and the query is done. "+/- 2% is ok." -> Instant answer.
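As a minimal sketch with the redis-py client (the key names and values are made up): feed every visitor ID into a per-day HyperLogLog and ask for approximate counts later, with memory that stays small no matter how many visitors there are.

    import redis

    r = redis.Redis(decode_responses=True)  # assumes a local Redis instance

    # Record visitors as they arrive; each HLL key stays around 12 KB regardless of volume.
    r.pfadd("visitors:2016-05-24", "user:1", "user:2", "user:3")
    r.pfadd("visitors:2016-05-25", "user:2", "user:4")

    print(r.pfcount("visitors:2016-05-24"))  # ~3 unique visitors that day
    print(r.pfcount("visitors:2016-05-24", "visitors:2016-05-25"))  # union estimate, ~4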


Hello pathsjs,

well, first of all, one thing is to make efforts towards a goal, another thing is to reach it. In your analysis I don't reach the goal of simplicity, but I can assure you I really try hard.

Now more to the point, I still think Redis is a simple system:

> one thing that Redis does not have is a coherent API.

Unfortunately coherence is pretty orthogonal to simplicity, or sometimes there is even a tension between the two. For instance instead of making an exception in one API I can try to redesign everything in more general terms so that everything fits: more coherence and less simplicity. In general complex systems can be very coherent.

Similarly, the PHP standard library is extremely incoherent but simple: you read the man page and you understand what something does in a second.

> Whenever I use Redis, I absolutely need the command cheatsheet [snip]

This actually means that Redis is very simple: in complex systems, checking the command fingerprint in the manual page does not help, since it is tricky to understand how the different moving parts interact.

However, it is true that in theory Redis could have a more organized set of commands, like macro-type commands with sub-operations: "LIST PUSH ...", "LIST POP ..." and so forth. Many of these things were never fixed because I believe it is more an aesthetic than a substantial difference, and the price to pay to add coherence later is breaking backward compatibility.

> why does LPUSH accept multiple parameters but LPUSHX does not?

Because almost nobody uses LPUSHX and it is a kind of abandoned command, but this is something that we can fix since it does not break backward compatibility.

> Why is there even HyperLogLog in a database?

Because Redis is a data structures server and HLLs are data structures.

> Where did the need for RPOPLPUSH come from?

It is documented and has nothing to do with simplicity / coherence.

> Why are the options for SCRIPT DEBUG YES, SYNC and NO?

They have different fork behavior, as explained in the doc.

I think Redis is a system that is easy to pick up overall, but that's not the point of my blog post. However, our radically different points of view on what simplicity is are the interesting part IMHO. I bet you could easily design a system that is complex for me because of, for instance, the attempt to provide something very coherent, because breaking coherency for a few well-picked exceptions to the rule is a great way to avoid over-generalization.


Hi antirez, I think we just have two opposing views about what constitutes simplicity.

I agree that Redis has the rare advantage that one can understand a command at a time, and the cheat sheet is essentially all that is needed to work with it efficiently. Many systems have documentation that is much more involved, and in this sense Redis is simple.

Still, the reason I find it non-simple is that it seems like you (or other contributors) added a random selection of data structures and operations to it. It is difficult to imagine which operations or data structures will be available without consulting the instructions. For instance, there is HyperLogLog, but there are no trees or graphs, or sorted maps. And lists have LPUSHX, but no LSETX, nor is there an LLAST operation (I guess it may be for efficiency reasons, but then LINDEX has the same complexity). Sets have SUNION and SUNIONSTORE, but there is no LCONCAT or LCONCATSTORE.

Let me add an example, since I think it highlights the difference in approach we may have. I find the Scala collections very well designed and easy to work with. Each collection has essentially the same (large) set of operations, from `map` and `filter` to more sophisticated ones such as `combinations(n)` or `sliding(len, step)`. Not only that, but usually these operations will preserve the types whenever possible. This means that, say, `map`ping over a list will produce a list, while `map`ping over a string will produce a string (if the function is char -> char) or a sequence otherwise. Similarly, mapping over a bitset will produce a bitset if the function is int -> int, or a set otherwise, since bitsets only represent sets of integers. This allows me to write even very complex algorithms succinctly, without needing to consult the API. I find this very simple from the point of view of the user, although the design for the writers themselves is pretty complex.

On the other hand, comments such as http://stackoverflow.com/questions/1722726/is-the-scala-2-8-... prove that some other people find it complex and daunting.

In short: I find Redis easy to use (and this is one of the reasons I do use it often!), but not simple in the sense that it is easy to grasp the design.


Most of redis' API is quite coherent. Some of the incoherence is due to the fact that some commands get less attention, such as LPUSHX (which I've never really noticed and I've been using redis for 6 years). Some is because of the core idea of remaining backwards compatible, so even if you have option or commands added, they will never replace old ones, which can cause a bit of complexity and inconsistency.

> Why is there even HyperLogLog in a database?

Remember that redis is not really a database but a "data structure server". I have to say I personally don't think this should have been part of the redis core, and it's a textbook example of what redis modules should be, but it came 2-3 years before modules.

> Where did the need for RPOPLPUSH come from?

It came from distributed work queues, which are a popular use case for redis. The documentation actually goes into great detail explaining how to use it.
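A minimal sketch of that pattern with the redis-py client (the key names are made up): the pop and the push happen atomically, so a worker that crashes mid-job leaves the item in its processing list instead of losing it.

    import redis

    r = redis.Redis(decode_responses=True)  # assumes a local Redis instance

    # Producer: enqueue a job.
    r.lpush("jobs:pending", "resize:image-42.png")

    # Consumer: atomically move one job from the pending queue to a processing list.
    job = r.rpoplpush("jobs:pending", "jobs:processing")
    if job is not None:
        print("working on", job)           # stand-in for the real work
        r.lrem("jobs:processing", 1, job)  # acknowledge once the job is done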


"data structure server" hits the nail on the head. Because Redis is such a paradigm shift from a traditional database, it doesn't matter how simple Redis is, it will always feel complicated if people go into it anticipating a traditional database.

...Some of us don't like Chipotle because it has too many choices, even though we could just order the get/set burrito.


LPUSHX got my attention so I opened a pull request making it variadic (along with other commands) three years ago [1]. Of course since then the codebase changed and it can no longer be merged.

If there is still interest I could try to fix it, although I don't use Redis as much anymore and I have not looked at its code for a long time.

[1] https://github.com/antirez/redis/pull/1232


Hey! Yes, I'm interested; the PR was not handled for the usual reasons, but making the commands variadic is a good idea. However, I'm surprised it's not just a matter of changing the command table, since I remembered, incorrectly, that all the variants were handled by a single function.


Yeah, as you can see the issue was that it is a different function, pushxGenericCommand(), which is used for LPUSHX, RPUSHX and LINSERT. For me having the same function for LPUSHX / RPUSHX and LINSERT does not make much sense (it uses different code depending on whether or not an argument is NULL anyway) so I had started by making LINSERT not rely on it.

My initial goal at the time was not to make LPUSHX and RPUSHX variadic, since like you said they are not used so much. It was actually to make LINSERT variadic, which can result in a significant performance boost compared to calling it several times.

Anyway, I will try to find the time to adapt it to the current code base.


try to go on the #redis freenode channel and ask antirez about it.


I love the rise in appreciation for a solid user experience for programmers these days. I've heard this referred to as DX (Developer Experience).

I have such a feeling of empowerment and productivity when I'm working with well-designed tools and APIs, with interfaces that fit into my mind the way the handle of a hammer fits perfectly into my hand.


I would say the concept of DX is rising not just in terms of UI, but in a holistic sense. For example, Microsoft releasing Bash on Ubuntu on Windows and Visual Studio Code in an attempt to make Windows a better DX environment.


Maybe it's because Microsoft (and others) feel developers can be powerful advocates for their platform.

People tend to ask developers they meet about what system, what software, what hardware they should use, and then trust that advice.


Windows had an excellent DX before WSL; Visual Studio in particular is considered a best-of-breed development tool and ecosystem, years ahead of anything that's available under Linux. Linux only won in the Web space because it's easier to deploy at massive scale, since there are no licensing costs. It is not the easiest OS to develop for.


JetBrains' IDEs seem much more productive than Visual Studio while at the same time being more lightweight. So much so that a lot of people find it worthwhile to pay $300 per year for ReSharper just to get the same tooling inside Visual Studio.


A good example of a DX initiative done right is the PHP framework Symfony 2.6+. Using it is just ... heaven. Can't really describe it any other way. Whenever I'm forced to use something different (Wordpress, or even worse, Drupal) I can't believe how much better the Symfony ecosystem is. Not just the framework design ... but (especially!) the documentation.


Good API design is part of why I try to follow TDD.

I'm not dogmatic about it, but I've noticed that when I do follow it, I usually have a better understanding of how I'll use my code by the time I'm done implementing it. Usually because, in the process of testing my code, I'm getting to 'use' it before I've even written it.
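A tiny sketch of what I mean, with a hypothetical slugify() that doesn't exist yet at the moment the test is written:

    # Writing the test first forces me to decide what calling the code should feel like.
    def test_slugify_produces_url_safe_lowercase():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces   everywhere ") == "spaces-everywhere"

    # Only once the call site reads well do I write the implementation:
    import re

    def slugify(text):
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)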


This resonates with me. But I have found that it is often more important to write several versions you intend to throw away than it is to TDD the first one.

That is, I have yet to find a process (TDD, BDD, whatever) that can truly improve the viability of iteration 1 of a program. Instead, it is by iteration 3 that things have a chance; and by then, it seems, any process is likely to succeed.


I took a class on 3D modeling the other week, and the instructor was teaching the "sculpt" method. Instead of getting a bunch of measurements, and building shapes with those exact measurements, you would start with a big ol block, and push and pull the vertices and loops until you have something that looks like what you want. But a huge part of that technique is to throw work away. You do some work, get stuck, and then start over. Sounds wasteful, but by doing so, when you start the next time, you have a much better idea of what you need, and so you're able to go faster the next time. The lesson was designed so that you would have to do this a couple times.


It's completely normal to do a throwaway prototype when exploring a new space. There aren't many alternatives to that.

But when you do have a fairly good understanding of what's going on (and you're unlikely to make major incorrect assumptions), TDD is a big help in designing APIs.


BDD allows you to "think through" the consequences of your user stories and potentially discard or amend bad stories before they end up getting implemented and causing problems.


Again, this resonates well. However, I have had many cases where I did not really see what made a story bad until I had almost succeeded in getting it implemented.

In the end, I always want a way to make iteration 1 work, for it to be the only time I have to write something. But experience is having done many, many first iterations.


Following TDD only gives you a testable library, not one with a good API. Good design requires something different.

To have a good interface, you want to think through how you would use it, and only write it after that step, probably stopping along the way and stepping back when problems arise.


I'd agree that it doesn't guarantee a good api in all cases. But if you do red green refactor tdd, _you_ are the user of the api from the beginning, so you get to experience what your api consumers will be experiencing. This allows you to think how you and others will use it. Maybe you meant "write unit tests after" instead of "red green refactor tdd" doesn't help with the design of a good api? For me, the benefit of unit testing is in a good design more than the reduced refactoring risk or production bug risk, although those are both beneficial for sure.

Now, getting management to invest in tdd is another story :-p


> Maybe you meant "write unit tests after" instead of "red green refactor tdd" doesn't help with the design of a good api?

No. My point was, if you want good API, aim specifically for that, not for some semi-related proxy like tests.

And with good interface you may suddenly find out that you actually don't need most of the tests. Tests are heavily overrated when it comes to anything but what they were designed for (i.e. preventing regressions).


But when you're writing tests for an API that doesn't exist yet, isn't aiming for a good API almost inevitable? I mean, if you're inventing an API from scratch from the user's perspective, you're almost bound to aim for something that makes sense, even if it turns out to be hard to implement.

At least the way I write these tests, and I find it hard to think of another way, is to essentially write the usage examples of the documentation, except they're actually runnable.


Exactly! And to the gp comment, how do you quantify "good api" until someone has used it? And even when someone ends up using it, how will they tell you if they find it easy or hard to use?

Also, once it starts getting heavily used, won't that make it much riskier and harder to sell to management to refactor it if you get feedback that it is hard to use? Unit, integration, and functional tests all help here, but let's say your abstraction has adoption at 100 api consumer call sites. There's a lot that could go wrong.

Now, if you are the only person using the api for a long time, that's another way to determine if it's a "good api", because you'll use it to build things. Refactoring it is still a tough sell to management in that case though. Not impossible, but tough.


The goal is to maximize the return on the time developers invest in your API. Sometimes simplicity is what's needed, but there are other ways to get there.

For example, consistency: it takes long enough to figure out a new module without also having to re-learn all of the things you did arbitrarily differently from the previous module. Pay attention to naming, argument order, styles of error-handling, etc. and don’t abbreviate things that are not extremely common abbreviations (e.g. "HTML" is OK; getHypTxMrkpLang() is a PITA).

Also, if you include lots of examples that obviously work then developers are more likely to trust what they see. Python "doctest" is brilliant for this, since you can trivially execute whatever the documentation claims. Don’t document the ideal world in your API: document what it actually does.
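A minimal sketch of the idea (the function is just an invented example, not from any particular library):

    def clamp(value, low, high):
        """Clamp value into the closed range [low, high].

        >>> clamp(15, 0, 10)
        10
        >>> clamp(-3, 0, 10)
        0
        """
        return max(low, min(high, value))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()   # fails loudly if the documented examples stop being true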

Make your API enforce its own claims in code. Don’t assume that a developer will have read paragraph 3 where it “clearly says” not to do X; instead, assert that X is not being done. This is extremely valuable to developers because then they can’t do what you don’t expect them to do and they won’t start relying on undefined or untested behavior.
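A toy illustration of that (the transfer() function and its rule are invented):

    def transfer(amount, source, destination):
        # The docs "clearly say" amounts must be positive; enforce it anyway,
        # so callers can't quietly rely on undefined behavior.
        if amount <= 0:
            raise ValueError("transfer amount must be positive, got %r" % amount)
        source.withdraw(amount)
        destination.deposit(amount)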


I would argue that simplicity and elegance are even more important for mental models than they are for physical objects.


To take that one step further, I would even say that a physical object is merely a touchable proxy for a mental model. Think of how many things you just "know" how to use because of elegant or established design patterns. Or how you move about your own kitchen when preparing a meal.


> I would even say that a physical object is merely a touchable proxy for a mental model.

This resembles Plato's Theory of Forms: https://en.wikipedia.org/wiki/Theory_of_Forms

> These Forms are the essences of various objects: they are that without which a thing would not be the kind of thing it is. For example, there are countless tables in the world but the Form of tableness is at the core; it is the essence of all of them.


>> The act of iterating again and again to find a simple UI solution is not a form of perfectionism, it’s not futile narcissism. It is more an exploration in the design space, that sometimes is huge, made of small variations that make a big difference, and made of big variations that completely change the point of view. There are no rules to follow but your sensibility. Yes there are good practices, but they are not a good compass when the sea to navigate is the one of the design space.

Love this paragraph. That's why it's art.


This isn't wrong, but the problem in general is that non-programmers have pre-decided that "simple UI" means a clickable GUI, probably with a Paperclip type assistant.

The benefit of the GUI is to present something that text can't - such as lines between components, or pictures, etc. But what's the (specific) domain use for those things? If there isn't one, then a GUI is just a text UI with a mouse pointer and a bunch of overhead.

There's almost zero connection between what seems simple to your boss in a screenshot and a productive tool. If someone really removed unneeded complexity from a burdened UI, then great, but one person's bell and whistle is another's required tool.


Agreed. I've seen many simple, bulletproof UIs and APIs that only support a discrete set of workflows, so they ultimately fail as expert tools.

Sometimes you don't want an expert tool, you want an easy button. That's fine. But often there's an inherent tension between 'can never fail' and 'lets you open up the throttle and tear through your problem'.

Compare the editing and reviewing workflows supported by a shared drive and those supported by git. A shared folder is undoubtedly simpler, but good luck maintaining a coherent revision history or editing a document in parallel on different isolated subnets.


The central thesis here is that APIs should be treated like GUIs: make them as intuitive as possible to minimize brain-space waste (i.e. memorization of non-transferable, vendor-specific knowledge).

Sadly, the post is too short to discuss what "intuitive" means for an API (which could probably fill several volumes).


I've built GUIs that completely contravened every possible thing I've ever seen written about GUIs. For example, one ugly button. No "exit" button even - you can hit the X in the corner for that.

Behind that button, there was no possible way to make a mistake. Now, to be fair, when a constraint was violated (which only happened when something physically broke or somebody didn't set a physical thing up right) there was a comprehensive explanation of the failure - "The U42 wire for the Gerfish Space Defarbrulator is disconnected or broken. Please refer to section 1.145.2 of the service manual." That happened in a popup.

To me, that's a good GUI - "just tell me when to go to it." But there is an ostensible public choice theory problem with this approach - who will be paid for training in it? Where will a support network for it exist? Nobody, and nowhere. Show this to people, and you can see it on their face - "there goes my job." They think this even after I show 'em the popup.

It's asocial, and that's more important than "it's correct." But it worked to sabotage any expectations people might have about me writing GUIs.

Also, when people tell me that corporations are cost driven these days, I just laugh because of this.


Maybe the thesis is not totally right. A bad API can (sometimes? often?) be "fixed" by building a better layer on top of it. A bad UI can be "fixed" if its author allows it through configuration or plugins.

Fixing the API is what you do, for instance, when you write language bindings for the horrible BSD sockets API.

The fact that the user can fix it doesn't mean that API authors shouldn't care about simplicity, ease of use, consistency, etc., but it does mean that they shouldn't overdo it.
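As a toy sketch of that kind of layering, here's the sort of thing people end up writing on top of Python's socket module (itself a thin binding over BSD sockets); the helper name and protocol details are just illustrative:

    import socket

    def http_get(host, port=80, path="/", timeout=5.0):
        """Open a connection, send a minimal HTTP/1.0 GET, return the raw response bytes."""
        with socket.create_connection((host, port), timeout=timeout) as conn:
            request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
            conn.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)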


When people say we need to teach everyone how to code, I agree. But first we need tools that allow (any) people to code with different levels of knowledge and experience.

Like math: you learn basic Math. Then you have a calculator for basic operations. Then you have MS Excel for general professional use. Then you have MatLab/R and stuff for specific use.

For programming I have the impression that you learn to code and then, from the start, you have to learn how to use the most sophisticated tools in place. And several of them.


> For programming I have the impression that you learn to code and then, from the start, you have to learn how to use the most sophisticated tools in place. And several of them.

Right; and if someone makes a tool which is (attempts to be) more friendly to use, it's constantly derided by "real" programmers. Look at, for example, Access, Visual Basic, PHP to some extent.

The correct answer to, "people are using Access to write software" shouldn't be:

"that is garbage! They need a real DBMS! With multi-user support!"

It should be:

"well, Access is clearly showing us that there's a huge under served market of people here, let's figure out what it's doing that appeals to them then work on an improved version."


While I agree, it bears mentioning that sometimes the thing that appeals is popularity itself, or good marketing, or "written by Microsoft". It would be a wonderful world if people always chose solutions on technical merits, and the best solution always won, but I don't think we live in that world.


or "It's what I have on my office computer and I don't get admin rights"


We don't need to teach everyone how to code. We need to at least make sure everyone understands that there is such a thing as rigor and that it is important.

I doubt rather severely that coding helps with that at all.

Once upon a time, if you trained in computers, you met The Jeremiah who told you "woe betide you who enter here." That was a good thing; you were prepared for the onslaught of unreason surrounding this discipline.

It's kind of like all the recruiters in the movie "Starship Troopers" being maimed.

Then that seems to have stopped.


I find this insightful. It reminds me of what Alan Kay and team are doing with HARC ( https://blog.ycombinator.com/harc ). Seems like people have been working on this problem for ages, but maybe someday we'll solve it ;-)


Sorry, I couldn't bring myself to read the article. It was rendered on my screen in what looks like a "programmer" font on a page that looks to be made for "programmers". It is basically plain white with a wall of black text in a font that strains my eyes. At least on my phone.

Programmers are not different, they need equally readable websites.

P.s. ;) or ;(

P.s.2. In before the "HN is also xyz". HN might not have fancy styling but it did carefully consider readability.


This is the first thing I noticed also. Maybe the use of a fixed-width font was meant to be ironic, since the author is comparing against UX design?


I'm afraid not :(. Also, fixed-width is not the only issue. The chosen typeface and spacing look real programmy but are not pleasant to read.


In my opinion, a poor developer experience is the primary reason GWT never got more traction than it did.

It's quite the technical accomplishment, but it's challenging to use in ways that have no reason to be challenging from an experience perspective.


It's frustrating for me that people seem to be trying to force everyone into the same UI patterns, regardless of the target audience's skill level and difficulty of problem. Novice users need different UI's than do advanced users, ditto people solving easy problems vs hard problems.

Michael Schumacher and your 15yr old new driver need very different UI's. The new driver needs a simple UI and lots of protection from making mistakes to solve the easier problem of getting from A to B without crashing. Mr Schumacher on the other hand needs a complicated UI and 0 hesitation in the implementation of his decisions to solve the much harder problem of breaking the world record at Monaco.

Determine who your audience is, and how hard the task they're attempting is, then implement the appropriate UI. Defenestrate anyone who tells you simple or complex, GUI or CLI is "inherently better" without considering the problem.

Is red-eye reduction in MS Paint or Python PIL better than in Photoshop? Conversely, would you rather use Visual Studio or Notepad to write a 2-line script?


> Michael Schumacher and your 15yr old new driver need very different UI's

The UI isn't different because the users are different. The cars are. An F1 car is crammed for a reason. This is a deeper problem than adjusting for users. And this is what most still don't get.

This UI vs functionality dichotomy does not exist. Any compromise is artificial. UI must be purely functional, and from function emerges pure form. A good designer is focused just as much on function as a good developer is on design. In other words, they are the same person. And this is basically Apple's design philosophy in a nutshell.

A UI designer didn't design the cockpit of the F1 car, nor the Camry. They may have chosen the font, the shapes, the icons... but everything was already there. Function necessitates it. And the interfaces of both cars are incredibly simple.


Most still don't get it because it's not true. Look at all of the additional options with non-obvious discovery that Apple has added to the iPhone because some more advanced users want e.g. swipe up quick settings. If you as a user want more control, the design needs to include more controls. Open up MS Paint on one monitor and Photoshop on the other to see what I'm talking about. I realize that this concept is offensive to designers, but designer happiness isn't the goal. Good UI is the goal.

The UI's are different because Schumacher wants more precise control (brake-bias, suspension rate, turbo angle of attack, etc) and you can't get that level of control without a more complicated UI (you need the appropriate buttons and levers for all of those things).

The title of the person who lays out the Ferrari F1 car's cockpit is literally UI designer. If you think an F1 car's cockpit is simple then you should reconsider. The cognitive load of those two sets of controls is very different, made possible and desirable because one set of users is expert and the other isn't.


No. The UI designer does not choose functionality on their own. They need intimate knowledge of how the car and driver work together, and then must simplify to the bare necessities. The better they are at the job, the better they know about both. And same applies to the developer and even the driver. The best drivers are deeply involved with every aspect of their cars. And in the end, nothing is redundant.

And no, MS Paint and Photoshop are not the same program with one just having more options. Photoshop is a beast, and its UI has improved over the years -- to match that functionality.

The UI is different because Schumacher is driving an F1 car.

> additional options with non-obvious discovery

This is bad design. iTunes and Apple Music are horrible also. Not everyone at Apple gets it. But everything that made Apple successful was about getting it. Steve Jobs got it.

> designer happiness

There is only designer happiness among those who don't get it. There is no designer-developer distinction to the user. Ultimately all there is is user happiness.


I feel some of the worst API tooling right now is what comes with AWS. Everything from the AWS Console's UX and accessibility to Dynamo's APIs is just a nightmare to understand and deal with day to day.


I don't understand why AWS' free-tier requires putting things into a VPC.

It is a complete turn-off. It took me days to figure out how to configure it correctly so I could access it through SSH, without following a guide that builds out some giant infrastructure.

It should just be one-click to throw everything up that I need and hide all of that extra config behind a wall that I can access if I want / need to.


It used to be much better. It's gotten worse and worse as they've just added on more and more "products" with very little integration to the "products" already present, and with widely varying quality. (There was a HN post just yesterday about their missteps on the Lambda product.)

It feels like AWS has just kind of mutated organically with absolutely no kind of central planning.


Much of the trouble with IDEs revolves around not coding, but finding all the parts. The usual IDE Does Not Play Well with Others. Most IDEs have some strong ideas built in about where Things are Supposed to Be. This causes trouble when they have to interoperate with some other system which also has strong ideas about where Things are Supposed to Be. (Makefiles, Cargo, Github, Go's tools, packaging tools, etc.)

Working around this involves either manually editing text files or trying to find the right dialog box into which to put some pathname.

The other big problem with IDEs is that they usually have no understanding of the tools they invoke. This is a UNIXism - programs take in parameters, but just return an error code. There's usually no machine-processable output from compilers or linkers. If you're lucky, the IDE might be able to associate an error message with the right source line in the right file.


There is tension between the benefit of designing user-friendly APIs and the temptation to overspend on effort to design great APIs. It is sometimes hard to anticipate how often an API will be used.


> There is tension between the benefit of designing user-friendly APIs and the temptation to overspend on effort to design great APIs. It is sometimes hard to anticipate how often an API will be used.

I think in antirez's case there is little 'underestimation' :)

Returning to the more general practice - I agree, it's easy to spend several days building a nice abstraction for something that is never extended again, or something that is extended in a way different from what was anticipated, so the abstraction doesn't help. From my experience, what has worked best in terms of internal structure is to write a basic, working version (this is when you don't know whether there will be any other users) and to refactor it into a nice abstraction when you reach the point where it needs to be extended. A huge positive is that once it needs to be extended, you know what the abstraction is going to be used for.


I think that's really smart. That's what I do too. When you go the YAGNI way and start seeing the same pattern over and over, refactor it into an abstraction. Challenging in the agile world though - businesses don't want to pay for refactoring.


Not to mention there is no way to tell how an API will be used. People may find uses for your software that you never thought of. These use cases may turn out to be more important than the original ones.


That is why there is versioning.


>Learning to configure Sendmail via M4 macros, or struggling with an Apache virtual host setup is not real knowledge. If such a system one day is no longer in use, what remains in your hands, or better, in your neurons? Nothing. This is ad-hoc knowledge. It is like junk food: empty calories without micronutrients.

I agree with this. Too much time is spent learning how to work with poorly designed interfaces. This sort of knowledge of man-made, arbitrarily designed tools doesn't really teach us anything that applies outside the highly specific use case. And it will likely be obsolete within a few years. The half-life of IT knowledge is short.

While complex tools might be unavoidable, we need better ways to interface with them. And better documentation. People rely on SO and the like to understand how to interface with an API, decode cryptic error messages or configure some tool because the documentation and interfaces are often lacking.


UI's a funny term for APIs. I don't think it's that complex.

APIs are written documents describing external controls to a system's behavior. A good document can describe both the controls and behavior clearly and succinctly. A bad one can describe the same in long and convoluted language that effectively hides the underlying system. Often in a mess of "should" and "can" clauses, with lots of passive voice, vague assumptions, and unnecessary complexity and verbiage.

For some reason, I feel many programmers write documentation the same way that they wrote their X-page essays for English class. They're just trained to write fluffed-up junk that fills some imaginary requirements of a long-irrelevant class.

EDIT: To clarify: I'm describing the actual types and methods themselves. They imply an underlying mechanism and a means to control it.


What you're describing is an "API reference". APIs are the external controls themselves, the things that you're documenting in the reference.

I think it's reasonable to say that APIs are UIs. They are how users and/or machines interact with a system.


An API isn't the document - it's the interface. (That's what the "I" stands for!) One which developers use, which would make it a type of UI even though it's not necessarily the UI that's exposed to end users. And that's kind of the point: the documentation should certainly be clear and well written, but the necessity of all that can be minimized by carefully designing the way developers can work with your tool in the first place.


Every identifier in the system is a bit of documentation. I submit that there's a difference when you name a function:

bool ufklsjblsboabfds(int, int)

and

bool operator<(int, int)

I know that to many, the idea of individual identifiers being documentation is a bit radical, but I think it's the first documentation a developer sees. It's the documentation built into the body of every piece of code that they read and write.


> I know that to many, the idea of individual identifiers being documentation is a bit radical

I think the opposite – it's totally mundane to most people. Aside from the intellectual exercise it requires, people don't have any reason to use a language that used bool ufklsjblsboabfds(int, int) any more than they would use Brainfuck.

So, sure, the API "documents" the functionality in the most minimal way possible. But when you say "documentation" you're using a term that almost no one will understand in the way you mean it.


It's surprising how many good tools are let down by a lack of clear documentation and an intuitive API (looking at you, Webpack).

No, a good user experience is not a luxury you can afford to omit. Stop creating backlog tickets to "simplify code". It's just plain selfishness.


Nowhere other than in programmers' tools would it be considered an acceptable UI architecture to layer a GUI (usually a turgid dialog box) over a CLI. Moreover the dialog gives no indication whether the inputs are correct or consistent with one another until an unhelpful error is displayed. The components of a toolchain need real APIs and real libraries implementing those APIs, designed for the kind of object-action manipulation that makes desktop productivity software productive: You can't apply the wrong action to the wrong object. The UI simply won't let you.


Never allow a user to do the wrong thing on your GUI - they must be guided toward the "pit of success."

The same thing can be said for an API. I've been pair architecting/coding a pretty substantial spike over the past two weeks, with some hairy revision control and dependency management stories. The main thing we've been aiming for is making extremely sure that developers don't drop a dependency into our DAG. The solution was obvious, but the API design has outright dominated those two weeks.


Let's take Firefox as an example. Over the years they have been simplifying the interface, especially once Chrome came into the picture. Every time they remove something, I find myself annoyed that a function I used, which was in an intuitive location, is now hidden away in a deep sub-dialog. That is, if it even survived at all.

I'll keep my complex UI thank you very much. Complex doesn't have to mean complicated.


Removing something is almost forgivable if it survives as a low-level configuration option.

Yet Firefox has changed behaviors that I can’t fix anymore, such as always remembering everything in a Downloads list (they outright removed any way to prevent this so one has to manually clear it).


I guess there is a difference in starting out with a lot of features/options and then removing them one at a time compared to starting out simple and then adding features/options.


I want simplicity and longevity.

A simple interface that keeps changing or goes obsolete still leaves me with useless knowledge.

Longevity is another reason developers should spend extra time getting their design polished. Get it right so the product can grow and does not need to introduce many breaking changes.

Make upgrades a joy, not a PITA.


An example I currently have: Statistics Norway recently released version 2 of their API. You can now download just about every data source they have. However, they prefer that developers use JSON-Stat (https://json-stat.org/) instead of CSVs. Luckily they do support CSV, but it is messy, with an inconsistent layout. I still prefer it over JSON-Stat though, as I feel JSON-Stat is way too complicated for serious use. That is also probably why there are so few JSON-Stat clients.

CSVs are simple to parse! JSON and XML are more complex beasts to parse; they are especially hard when they don't fit in memory. I do use JSON a lot, don't get me wrong, but it's mostly for small data sizes.

Does anyone else here feel the same about JSON-Stat?


Anyone who claims that CSVs are easy to parse has obviously never had to deal with the kind of junk that you see in real CSV files. Quite apart from quoting (is "a,b" two fields, '"a' and 'b"', or a single field with three characters?) there are also things like newlines, as in

like,"this newline"

which are part of the same field and not two records. Or having to deal with user names like "Blue, Al" (Outlook's preferred format for representing people's names).

Or you end up with O'Reilly - is the ' part of a quoted string or not?

And of course a comma isn't necessarily the only separator - what about tabs? Or a document where an intermediary has saved it in an editor that has "helpfully" converted tabs to N spaces (where N is a universally disagreed-upon positive integer)?
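To make that concrete, here's how Python's csv module reads a couple of those cases (a quoted comma and a quoted embedded newline); whether the producer actually quoted things correctly is another matter entirely:

    import csv
    import io

    # One header row, then a record with a quoted comma and a quoted embedded newline.
    data = 'name,comment\n"Blue, Al","likes\nnewlines"\n'
    for row in csv.reader(io.StringIO(data)):
        print(row)
    # ['name', 'comment']
    # ['Blue, Al', 'likes\nnewlines']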


You're absolutely right, but these problems exist with JSON and XML too. I have seen dates represented in all kinds of formats in the same document. Data cleansing has always been important when working with data in any format.


The difference is that JSON and XML can be described by a grammar and parsed by any standards-conforming parser. Good luck doing that with CSV.


I typed "stream json parse java" this afternoon and got a pretty good answer from Google.


Good interfaces are hard. Bad interfaces add chafe, cognitive load, friction, frustration, are hard to document, etc.

Speaking of which, I'd prefer the cut-and-paste-friendly

$ redis-cli hostname:port

to the one that slays me every time:

$ redis-cli -h hostname -p port


Naggum on UIs:

"this is my _workbench_, dammit, it's not a pretty box to impress people with graphics and sounds. when I work at this system up to 12 hours a day, I'm profoundly uninterested in what user interface a novice user would prefer."

Full post: http://www.xach.com/naggum/articles/3065048088243385@naggum....


A simple UI is not necessarily one a novice would prefer.


I agree with the author, and I think the idea is somewhat applicable to everyday tools like text editors. This is partially why I've switched to Emacs (Spacemacs/evil) after using "conventional" text editors like Sublime Text for ages. Although GUI of ST3 may feel nicer at times, the UI (in a more general sense of this term) of Emacs is far more logical and is definitely a lot of fun to use.


Programmers can also benefit from the readability of proportional fonts. I like this article, but it is hard to read for reasons beyond the writing itself.


It's amazing that Docker has not been cited in the comments so far. LXC, jails, zones, etc were around for some time. What changed things imo was the incredible simplicity of the Dockerfile and the tooling around it. Re-watch the first public demo: https://www.youtube.com/watch?v=wW9CAH9nSLs


I find Docker increasingly complex and frustrating, at least for the pretty common use case of isolating dependencies for development. See Vagga for an example of simplicity: there you have no daemon, no file permission conflicts, a simple way to run commands in containers, and more.


...and some people drive complexity by design: be complex to be consistent (resistance is futile). I've seen that in big orgs where managers like "achieving" consistency (yay), even if developers are wasting time and money adapting to cumbersome systems. Sometimes I see the same in communities: hype can promote a bad solution, while the lack of advocates (and time) can kill a better one.


It's not just simple UIs, though -- As a programmer, I generally need to be able to dig down through the code and figure out what's going on behind the scenes.

This means that the implementation is often just as much the UI as any other aspect of the program. Simple implementations are just as important as simple UIs.


That's why redux is really nice - it's ~100 lines of code at its core.


The other benefit of simplicity - on top of the time and learning saved - is reducing the opportunity for mistakes.

Misunderstanding of what's happening (or more commonly what is not happening) with some API endpoint due to an overly complex - often over-abstracted - design can be a very expensive mistake.


As a designer, this article reminds me how Hacker News (and Reddit) could use a facelift. I have a hard time reading the small text and identifying in the listings what's a link and what's not. :/


Yes, all programmers are the same; none of them have different needs or desires.


To me, it's clear the author means, "Programmers are not different [from other types of computer users]; they need simple UI's [too]."

You seem to be interpreting it as, "Programmers are not different [from each other]; they need simple UI's [that limit their expressiveness]."

I guess? Honestly, I'm not really sure where your comment's coming from. Please elaborate.


I don't think this actually addresses the post.


I would love a UI for my httpd.conf :)

Trillions of checkboxes with cryptic names and gigantic tooltips to display the detailed documentation (plus the zillions of relevant StackOverflow answers for the real-life bugs).

Nope, that's just sarcasm. Sometimes the best UI is a conf file, a web browser and Google.


What you're describing is a GUI, or graphical UI. A conf file is a UI. As is a library or service API, which is what he's discussing.


And what does that have to do with what we're talking about here?



