I know people in the finance business who are sending their CTOs out to take the Series 7 exam, because to write code for some domain you need to understand something about that domain. That is on top of computer science principles, your programming language, the "standard library" for your programming language, the libraries you use on top of that, your build system, your version control system, and all kinds of other stuff.
And now people think they are going to be left behind if they don't learn Angular, Rust, Shiny7 and all of that.
Every bit of bullshit steals your cognitive capacity and turns you into a 0.1x programmer.
Become proficient in a medium to the point where absolutely no conscious thought is required to use it to its fullest extent. That's the secret of great artists.
Apply that to programming, now. You can learn a field and the core tech of the field, and get a lot done for a long time. Or you can go chasing the new and shiny every six months and drive yourself up a wall and spend all your time learning instead of making.
The thing is that none of the stuff I do is really all that different. It's just the details that change. Every time I learn something that is truly different, I always find a way to apply it to what I'm doing (if you look above you will notice a huge hole where functional programming should be... it's getting filled).
This field is about learning every day. If you want to be more than average and you want to stay that way for more than 10 years, then you have to accept that you will be spending all of your time learning. And making.
The problem I typically see isn't that someone focused on a technology for too long and became unemployable. It's that they focused on an employer for too long and became unemployable. I started my career doing C and SQL 20+ years ago. I could still easily make a living at them if I wanted. My second major technology was enterprise Java. I could still make a living at that if I wanted (and to an extent, I still do).
But people spent 10-20 years in a culture. They don't know how to even look for a job anymore. Their skills may be great, but they've never put in a job application online. They're unknown to the recruiters. They don't know how to write a good resume that reflects their abilities. I watched my wife go through it last year. Laid off after 15 years at one company, her problems were twofold. First, she had narrow but deep domain expertise. Second, she was rusty at job hunting and out of touch with industry standards that had evolved without her. She still believed in waterfall and thought agile was a joke - not good for someone looking for product owner roles! She learned, but it took a while.
That way you might not become "the greatest X86 assembly programmer" or whatever niche you like, but you might become a very proficient system integrator, which is useful in its own right.
It's worked out kind of weird, pushing me (after a long career) into being a founder, and building a tool for generalizing common problems in systems integration. :) My desire to wear lots of hats finally works! But really? I'm more or less a specialist at being a generalist. And I've learned not to chase shiny new tools, because they're very distracting.
That said, hobbies don't need to be productive, and half-assing something is often good enough. Yes, a simple wedge doorstop may damage the door, but, well, sometimes that's just not an issue.
These are two different functional modes. The mastery of acting without conscious thought is very hard to attain. See also flow (Csikszentmihalyi) and related topics.
Learning incrementally can help, and I'd structured my own career such that I was doing that. I ultimately opted out of it when that stopped being an option. I was simply spending all my time catching up, and not finding myself (nor, to a very large extent, any of my peers) actually proficient with the New Hawtness.
But that doesn't mean you have to live on the bleeding edge, either! A tech that is proven out, used widely and stable, that's worth investing time to learn. In Crossing the Chasm terms (great book on marketing!), you need to learn early majority tech, not early adopter tech. You can be an early adopter for kicks, of course, but don't pretend it's to be more valuable.
The point is, I haven't had to deal with C for a long time. It's odd to be doing it again.
I don't think programmers are more like professors than artists, or vice versa. These are just two different modalities applicable within the large, varied landscape that is computer engineering.
Yes, I experienced this and it seems to be a real problem, for various reasons: if it does not look complicated, investors won't like it and won't buy it; if it is too simple to use, people will think I am dumb; if the setup is too simple, people will think I am incompetent; etc.
Simplicity is a prerequisite for reliability, not a nice-to-have.
Everything I do hinges on the acknowledgement of that fact. And you know what? My stuff works, because I can whack all the complexity down into a manageable ...widget that enforces all the rules for me. I know how to develop enough introspection into such things to keep from having real problems. I'm down to reported bugs at the level of "the file system crashed."
No, I think too many people are still trying to win the science fair.
"There seems to be a propensity to make things complicated. People love to make them complicated. You can see that in television programming you can see it in products in the marketplace, you can see it in internet web sites. Simply presenting the information is not enough, you have got to make it engaging. I think there is perhaps an optimal level of complexity that the brain is designed to handle. If you make it to simple people are bored if you make it too complicated they are lost"
"Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better. " (EWD 896)
The question you always need to ask is: "Why is <this> better than what I have now?" If you can't answer that DON'T LEARN IT.
10% better isn't enough to take up brain space. Your proficiency with your current tools easily exceeds that. 100% better probably is good enough.
The only thing I've seen in the last 5 years that has been good enough to make me start learning it is Rust.
Looking at how nice Glium made programming OpenGL was the final push. Programming OpenGL is painful on a good day, and Glium managed to make a lot of the pain go away. The fact that nobody managed to do this in any other language caught my attention.
Maybe Rust isn't actually required, but I'm willing to learn something just to listen to a community/ecosystem that can produce something so nice.
I have a bit of ADD-ness so shiny objects look the best. I love learning and have learned to curb my learning appetite to skills / areas that I think might benefit me in the near term.
> Every bit of bullshit steals your cognitive capacity and turns you into a 0.1x programmer.
I've always felt sad that this dichotomy is so strong in our industry. There are graphical tools for proles and then there are text-based tools for programmers, and never the two shall mix.
It didn't have to be this way. Smalltalk was an integrated graphical system for both programmers and users already 40 years ago. Something like the Smalltalk class browser with visual editing of live code would feel revolutionary even today compared to most of the systems we have.
But we're seemingly stuck with another '70s approach, that of Unix: streams of plain text, UIs based on undiscoverable and inconsistent command line argument arrays, all glued together with a spider's web of incredibly fragile string-replacement tools like shell scripts and templates.
And there are advantages to text-based UIs (scriptable, composable, copy-pastable) that I think would be hard to reproduce in graphical UIs.
There are the initial BSD-style arguments. Almost always single-dash and single-letter. Not entirely consistent.
There are GNU-style arguments. While (usually) consistent with the equivalent BSD-style options, these add long args, given by a double dash and multi-character names.
Then there are the utilities which eschew either. Single-dash multi-character args, random-character args, alternative delimiters.
(dd is a special case: it's derived from the mainframe JCL data-dump command and inherits its syntax.)
Recognising these ontologies often helps.
The fact that CLIs get incorporated into scripts gives upstream devs a really fucking strong incentive not to randomly change shit. It breaks fucking everything, and will often cause users (usually admins or coders themselves) to flee tools which practice this.
It's one thing to keep a consistent UI within your own application, but doing it across a hundred different ones is something I have yet to see, in the graphical world as well as the text-based one.
As an example of how it's possible to make a consistent UI among diverse applications: both Android and iOS have had a lot of effort put in to make app developers converge on consistent idioms; early Android did not do this, and was widely panned for it in the UI design community. The way iOS did this from the start, and Android does it now, is by having the "owner" of the system publish libraries that implement these standard UI idioms.
In the Unix and command-line world this has mostly been the province of GNU, which published the widely (but sadly, still not universally) used getopt_long. That was a big step forward for usability, imposing some level of uniformity at least at the syntax level of command-line args; more universal adoption would be great. There's still the problem of word choice and abbreviation choice, which hasn't been standardized across commands. (Quick: which of these recurses into subdirectories? grep -R, grep -r, sed -r, cp -r, cp -R, less -r, less -R.)
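FWIW the same publish-the-idiom move exists at the library level in most languages these days. A minimal Python sketch (argparse standing in for getopt_long here; the tool itself is hypothetical):

  import argparse

  # argparse bakes in the GNU-style conventions: short -r, long --recursive,
  # a free -h/--help, and uniform syntax for every tool built on it.
  parser = argparse.ArgumentParser(description="Search files for a pattern.")
  parser.add_argument("-r", "--recursive", action="store_true",
                      help="recurse into subdirectories")
  parser.add_argument("pattern", help="pattern to search for")
  parser.add_argument("paths", nargs="*", default=["."], help="files or dirs")
  args = parser.parse_args()
  print(args.recursive, args.pattern, args.paths)

The word-choice problem remains, of course: nothing stops a tool from spelling it --descend instead of --recursive.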
Cities, buildings, books, philosophy and theology all bear the marks of their time.
For me it becomes a modal guide: given XYZ attributes I can expect MNO properties (usually).
As another commenter notes, GUIs have similar tendencies. And Web frameworks.
One applies to lowering the bar for novice users, the other is a force-multiplier for experienced users.
I agree that it's very useful for an experienced user to learn the genealogy of all these tools for command-line wizardry, but they shouldn't have to.
A given utility exists in what are quite likely millions if not billions of scripts / instances, each of which would have to be changed.
The only way to fix this is to go back in time.
NB: this is another reason to get shit right from the start. Which is what TFA is about.
And CLIs are relatively cheap because calling main() and doing some stuff is just a few extra functions.
Graphical tools usually need to hook more deeply into the software to extract the data and transform it into interactive UI components.
Eclipse, for example, implemented its own Java compiler that it could instrument for syntax parsing, so the IDE has total knowledge of the code.
Many CLI tools are also non-interactive. Adding that interactivity adds complexity.
> Graphical tools usually need to hook more deeply into the software
I don't even know what that means, exactly. You mean they need more dependencies?
> Many CLI tools are also non-interactive. Adding that interactivity adds complexity.
Most things worth doing are difficult.
Calling that a "solution"...
AppleEvents are a high level event framework, it lets you tell applications things like 'select the 5th word of the 2nd paragraph', or 'import all mp3s from /Volumes/USB'. AppleScript tries to allow you to use those exact phrases and creates an uncanny valley language.
Automator and other automating applications use AppleEvents to do their magic. Cocoa has AppleEvents built in for the basics, but support has been spotty beyond that so a lot of people are turning to the accessibility framework.
AppleEvents are things like: "File Open with this path", "Print page 4 of file", "Save As to this path".
If someone creates AppleEvents like, "Click button 4 on dialog 2", then they're doing it very wrong.
tell application "Finder" to duplicate (every item in folder "Documents" of home whose modification date comes after (current date) - 7 * days) to folder "Documents" of disk "Backups" with replacing
I stopped using Mac OS when OS X dropped Classic compatibility.
But you conflate the solution with the particular implementation.
What I, personally, prefer to do is construct libraries and simultaneously develop a collection of CLI tools to access them. Then build the GUI afterward, once I've already vetted and tested the underlying business logic.
The benefit of this is the GUI for most users. The CLI for power users. The library for the developers that want to build additional interfaces or plug it into their projects directly.
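A minimal Python sketch of that layering (names hypothetical):

  # business logic lives in a plain library function...
  def word_count(text: str) -> int:
      """Count whitespace-separated words."""
      return len(text.split())

  # ...a thin CLI exposes it to power users and scripts...
  def main() -> None:
      import argparse, pathlib
      p = argparse.ArgumentParser(description="Count words in a file.")
      p.add_argument("file", type=pathlib.Path)
      print(word_count(p.parse_args().file.read_text()))

  # ...and a GUI (or someone else's integration) imports the same library later.
  if __name__ == "__main__":
      main()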
I already gave an example, but I can expand on that.
A CLI compiler only has to take its input and run it through the various compilation stages to the end or abort if something is invalid.
To do what Eclipse does for code completion as you type, on the other hand, you have to hook into the AST generator stage and the type inference engine, take partial, invalid inputs, try to fix them up, and then search for which other known pieces can be dropped into a particular place.
Which means your compiler can't just bail out at its earliest convenience. It needs public, not internal, APIs to massage state back into working condition so the next steps can then run until the IDE has gathered enough information.
Or take a memory profiler (with allocation callsite recording). The first step of interactivity is being able to turn it on and off at runtime. That means hot-swapping out some code, imagine replacing a malloc with a call-stack capturing one. It gets even more complex if you're using custom allocators.
And for the analysis part: In a simple CLI tool you might just run some graph analysis that takes 20 seconds on a multi-GB heap dump which then spits out the top 10 dominators and a class histogram and you'll hopefully know what to optimize. And you might want to diff those outputs (yay for text processing!) for test runs so you know how things change from build to build.
In a GUI you want interactivity, which means you can't just run those 20-second calculations every time the user navigates through some tree view. You need incremental algorithms, caching sub-results - but not too much otherwise the profiler itself will munge too much memory - etc. etc.
So you need APIs that go deeper into your graph logic than what you would think of public APIs needed for the CLI. And of course you need to build the whole GUI on top of that too.
> Most things worth doing are difficult.
The most important thing is to identify and get rid of all accidental complexity.
And something being difficult does not mean it's worth doing; complexity is not its own justification.
I PREFER - strongly - text-based because graphical based means some obscure binary protocol/file format that bitrots into obscurity. You can always learn to do hardcore text manipulation. You can't necessarily reverse-engineer, say, a Word 1.0 document.
Text is the cockroach format.
I.e. ugly and hard to get rid of? ;).
The problem with Unix isn't text - it's unstructured text. Every utility needs to implement its own copy of ad-hoc, bug-ridden parsers to operate on the same data - and if you want to string some of those utilities together, you often end up writing ad-hoc parsers too.
The sad thing is that we knew how to work with structured text. Then Unix came along and ignored all that experience.
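A tiny Python illustration of the ad-hoc-parser tax, exactly as fragile as it looks:

  import subprocess

  # every consumer of `ls -l` re-derives the column layout by hand...
  out = subprocess.run(["ls", "-l"], capture_output=True, text=True).stdout
  for line in out.splitlines()[1:]:        # skip the "total N" header line
      fields = line.split()
      size, name = int(fields[4]), " ".join(fields[8:])
      print(size, name)
  # ...and it silently breaks on odd locales, symlinks ("x -> y"),
  # and anything else that shifts the columns.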
A while back I was putting together a DB query tool for volunteer staff to use. I deliberately made it simple and as unambiguous as I could within the limits of "simple". For example, for a free-form search of the principal database fields, I set up a very basic regular expression syntax: just substring match, and optionally '*', '+', '|', and '.'.
This part of the interface is 3 text entry fields, with a nicely worded help message right next to the entries. In person, I showed the primary users how the interface worked and demoed the simplified regex.
Months later there are sometimes still questions like "how do I find somebody named 'John Q. Smith'?" ("Just enter 'smi' and pick the name off the list of names sent back.") I don't mind going over it again, especially for someone new, but I think it shows it may be harder than we anticipate to introduce "programming-like" methods to typical users.
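To make that "very basic regex syntax" concrete, here's a hypothetical reconstruction in Python (the real tool surely differs):

  import re

  METACHARS = set("*+|.")

  def simple_search(query, names):
      # plain characters match literally; only * + | . keep their regex meaning
      pattern = "".join(c if c in METACHARS else re.escape(c) for c in query)
      rx = re.compile(pattern, re.IGNORECASE)
      return [n for n in names if rx.search(n)]  # substring match anywhere

  print(simple_search("smi", ["John Q. Smith", "Al Blue"]))  # ['John Q. Smith']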
A core idea of them has been that the things that make coding hard for business people are different from the things that make coding hard for programmers and I don't believe that at all. (If programmers can't stand to use it to do simple things, how do you expect business people to go anywhere with it?)
It's not that a GUI can't be helpful, it's that "GUI" isn't a synonym for helpful.
If there's a problem that can be solved by looking at a map (perhaps dataflow through components) then yes, it'd be great to present that visually.
The problem is that most of the people having this discussion know programming from movies like Hackers. They want a picture of an oil-tanker rolling over despite the fact that the code in question (theoretical code - never shown in the movie) wouldn't know what an oil tanker was, it'd simply react to one number changing by changing another. The meaning is all dependent on what platform it's installed on.
Closer to the domain, my favorite example of malware is the ontology editor Protégé. Every day I see freshers who are highly confused getting started with Protégé and are then blown away by how easy it is to write an RDFS ontology in a few lines of Turtle with an ordinary text editor.
I am confident, having scoped many a project, that often the case is the stakeholder not quite knowing what they want. A good UI then is a good step forward to allowing the stakeholder to iterate on refining their interpretation of their own problem.
I am in the market for a decent BPM that doesn't charge crazy amounts and force us to buy support (I'm looking at you, Bonitasoft). Right now we're using Activiti and it is awful. It also seems like the least bad of the FOSS offerings.
Oh, and if you are genuinely interested in trying K2 before buying, I'll bet they'd set up a VM for you to remote into. They did for me in '10.
Here's a screencast showing some basic functionality: https://youtu.be/As0WUtBV3NQ?t=66
There's a lot of things that I dislike about Eagle, but I definitely miss those things when I'm using other CAD packages.
Compare Autodesk Inventor. No command line, and it doesn't need one. All the new Autodesk tools (Fusion, etc.) are based on Inventor.
It's very hard to do a GUI for an elaborate 3D program, because you need to select multiple objects, talk to the GUI, and change the viewpoint, all within one operation. Yet it's been done successfully. Inventor doesn't even use hotkeys much; you can do almost everything with the mouse except enter numeric values.
Look at the Linux systemd / init debate. Despite the fact that sysvinit is clearly ancient, awful technology, there are still people who have spent time learning it and don't want something better.
Another example: bash. Clearly an awful awful language/shell but it sticks around because too many people can't admit that the thing they know is rubbish.
If you ask people why they use bash, and why they don't like systemd, you get perfectly reasonable answers. (I'm not going to go into them here for obvious reasons.) You can't just dismiss these people as stick-in-the-muds.
You should try thinking about what a better bash would look like; it's not as simple as you make it out to be. I think the best I've seen in this direction is rc, and I think that's still not enough to compensate for the loss of ubiquity.
* Image support (I can't remember where I saw this but it exists)
* Structured data piping (as in PowerShell; see the sketch after this list)
* Machine-readable command-line arguments, which would enable...
* ...Proper autocomplete (like in IDEs)
* Real types (Bash is 'stringly typed')
* Sanity (e.g. look at how `[` is implemented in Bash; none of that)
* A better way to integrate with programs than running them on the command line (e.g. shell bindings).
That last one would be tricky and really requires moving away from the C ABI as the Lingua Franca of libraries. That is probably quite far away in the future unfortunately.
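As a toy illustration of what the structured piping item above buys you, in Python terms (hypothetical records standing in for PowerShell objects):

  # each stage passes typed records, not text columns...
  procs = [
      {"pid": 1, "cmd": "init", "rss_kb": 1024},
      {"pid": 42, "cmd": "bash", "rss_kb": 2048},
  ]
  # ...so filtering is a field lookup, with nothing to re-parse downstream
  big = [p for p in procs if p["rss_kb"] > 1500]
  print([p["cmd"] for p in big])  # ['bash']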
It's kind of a "cheat," but I've been really happy playing with xonsh (which is essentially bash + ipython). Bash with better control flow and types... it's suddenly not so maddening anymore.
>>> print("my home is $HOME")
my home is $HOME
>>> echo "my home is $HOME"
my home is /home/snail
Loading an entire development environment into your CLI is ten tons of bad news in a five pound bag.
... is not the same as the Bourne Again shell and scripting language, that IshKebab was talking about.
Likewise with bash, what's the better alternative? And don't say fish. Just because it bills itself as such doesn't make it an improvement. All tools have some overhead to learning them.
IMO, bash sticks around because a lot of people still find it useful.
Example: I have 2 mins to check if a duplicate line has occurred in a log file with some arbitrary logging format. One pipeline later (something like sort app.log | uniq -d), I have an answer.
On what are you basing that assertion? I'm primarily a back-end developer, so I'm not necessarily disputing it. However, every other recruiter who contacts me asks about Angular experience, whereas I have yet to encounter a single React job listing in the wild.
I do agree that React seems to get a bit more chatter and cheerleading on HN and Reddit recently... but I never know how much of that chatter is legit professional usage, versus what people are tinkering with in their personal side projects. If I took web forums at face value, I would be under the impression that Rust is taking over the enterprise right now.
e.g. NPM, Webpack, Flux/Redux, React-Router, Thunk, DevTools, CSS, etc.
There are two main strategies that D3 uses to facilitate debugging.
First, whenever possible, D3’s operations apply immediately; for example, selection.attr immediately evaluates and sets the new attribute values of the selected elements. This minimizes the amount of internal control flow, making the debug stack smaller, and ensures that any user code (the function you pass to selection.attr that defines the new attribute values) is evaluated synchronously.
This immediate and synchronous evaluation of user code is in contrast to other systems and frameworks, including D3’s predecessor Protovis, that retain references to your code and evaluate them at arbitrary points in the future with deep internal control flow. I talk about this a bit in the D3 paper (and the essay I linked above): http://vis.stanford.edu/papers/d3 Even in the case of transitions, which are necessarily asynchronous, D3 evaluates the target values of the transition synchronously, and only constructs the interpolator asynchronously.
Second, and more obviously, D3 uses the DOM and web standards. It doesn’t introduce a novel graphical representation. This means you can use your browser’s developer tools to inspect the result of your operations. Combined with the above, it means you can run commands to modify the DOM in the console, and then immediately inspect the result. D3’s standards-based approach has also enabled some interesting tools to be built, like the D3 deconstructor: https://ucbvislab.github.io/d3-deconstructor/
Occurring hours after I'd actually created the code that caused the exception.
Since I've moved jobs since then, I can't give a more specific example, sorry.
> that retain references to your code and evaluate them at arbitrary points in the future with deep internal control flow.
For the record, that appears to be exactly what was happening to me when I was struggling with D3.
It’s true that D3 uses closures and anonymous functions internally. But assuming you are using a debugger and the non-minified code, you can use that debugger to see exactly what the code is doing. To continue with the example of selection.attr, the implementation is here:
So a typical call stack would be three deep: selection.attr > selection.each > attrFunction’s closure.
I admire Redis for its performance and simplicity of setup, but if there is one thing that Redis does not have, it is a coherent API.
Whenever I use Redis, I absolutely need the command cheatsheet, as it seems that every command works differently. There is no commonality in data structures. The whole thing seems more dictated by the underlying implementation than the desire of providing a simple API.
For instance, why does LPUSH accept multiple parameters but LPUSHX does not? Why is there even HyperLogLog in a database? Where did the need for RPOPLPUSH come from? Why are the options for SCRIPT DEBUG YES, SYNC and NO?
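In redis-py terms (assuming a local server; in fairness, LPUSHX did later become variadic, in Redis 4.0):

  import redis

  r = redis.Redis()
  r.lpush("mylist", "a", "b", "c")   # LPUSH: variadic
  r.lpushx("mylist", "d")            # LPUSHX: one value at a time (pre-4.0)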
I don't want to be too negative. I appreciate the great work that has been done on Redis, but I feel it really needs to rethink the whole API layer.
Well, first of all, one thing is to make efforts towards a goal, another thing is to reach it. In your analysis I don't reach the goal of simplicity, but I can assure you I really try hard.
Now more to the point, I still think Redis is a simple system:
> if there is one thing that Redis does not have, it is a coherent API.
Unfortunately coherence is pretty orthogonal to simplicity, or sometimes there is even a tension between the two. For instance instead of making an exception in one API I can try to redesign everything in more general terms so that everything fits: more coherence and less simplicity. In general complex systems can be very coherent.
Similarly the PHP standard library is extremely incoherent but is simple, you read the man page and you understand what something does in a second.
> Whenever I use Redis, I absolutely need the command cheatsheet [snip]
This actually means that Redis is very simple: in a complex system, checking the command signature in the manual page does not help, since it is tricky to understand how the different moving parts interact.
However it is true that in theory Redis could have a more organized set of commands, like macro-style commands with sub-operations: "LIST PUSH ...", "LIST POP ..." and so forth. Many of these things were never fixed because I believe the difference is more aesthetic than substantial, and the price to pay for adding coherence later is breaking backward compatibility.
> why does LPUSH accept multiple parameters but LPUSHX does not?
Because almost nobody uses LPUSHX and it is a kind of abandoned command. But this is something we can fix, since it does not break backward compatibility.
> Why is there even HyperLogLog in a database?
Because Redis is a data structures server and HLLs are data structures.
> Where did the need for RPOPLPUSH come from?
It is documented and has nothing to do with simplicity / coherence.
> Why are the options for SCRIPT DEBUG YES, SYNC and NO?
They have different fork behavior, as explained in the doc.
I think Redis is a system that is easy to pick up overall, but that's not the point of my blog post. However, our radically different points of view on what simplicity is are the interesting part IMHO. I bet you could easily design a system that is complex for me because of, for instance, an attempt to provide something very coherent; breaking coherency with a few well-picked exceptions to the rule is a great way to avoid over-generalization.
I agree that Redis has the rare advantage that one can understand a command at a time, and the cheat sheet is essentially all that is needed to work with it efficiently. Many systems have a documentation that is much more involved, and in this sense Redis is simple.
Still, the reason I find it non-simple is that it seems like you (or other contributors) added a random selection of data structures and operations to it. It is difficult to imagine which operations or data structures will be available without consulting the instructions. For instance, there is HyperLogLog, but there are no trees or graphs, or sorted maps. And lists have LPUSHX, but no LSETX, nor is there an LLAST operation (I guess it may be for efficiency reasons, but then LINDEX has the same complexity). Sets have SUNION and SUNIONSTORE, but there is no LCONCAT or LCONCATSTORE.
Let me add an example, since I think it highlights the difference in approach we may have. I find the Scala collections very well designed and easy to work with. Each collection has essentially the same (large) set of operations, from `map` and `filter` to more sophisticated ones such as `combinations(n)` or `sliding(len, step)`. Not only that, but usually these operations will preserve the types whenever possible. This means that, say, `map`ping over a list will produce a list, while `map`ping over a string will produce a string (if the function is char -> char) or a sequence otherwise. Similarly, mapping over a bitset will produce a bitset if the function is int -> int, or a set otherwise, since bitsets only represent sets of integers. This allows me to write even very complex algorithms succinctly, without needing to consult the API. I find this very simple from the point of view of the user, although the design for the writers themselves is pretty complex.
On the other hand, comments such as http://stackoverflow.com/questions/1722726/is-the-scala-2-8-... prove that some other people find it complex and daunting.
In short: I find Redis easy to use (and this is one of the reasons I do use it often!), but not simple in the sense that it is easy to grasp the design.
Remember that redis is not really a database but a "data structure server". I have to say I personally don't think this should have been part of the redis core, and it's a textbook example of what redis modules should be, but it came 2-3 years before modules.
It came from distributed work queues, which are a popular use case for redis. The documentation actually goes into great detail explaining how to use it.
...Some of us don't like Chipotle because it has too many choices, even though we could just order the get/set burrito.
If there is still interest I could try to fix it, although I don't use Redis as much anymore and I have not looked at its code for a long time.
My initial goal at the time was not to make LPUSHX and RPUSHX variadic, since like you said they are not used so much. It was actually to make LINSERT variadic, which can result in a significant performance boost compared to calling it several times.
Anyway, I will try to find the time to adapt it to the current code base.
I have such a feeling of empowerment and productivity when I'm working with well-designed tools and APIs, with interfaces that fit into my mind the way the handle of a hammer fits perfectly into my hand.
People tend to ask developers they meet about what system, what software, what hardware they should use, and then trust that advice.
I'm not dogmatic about it, but I've noticed that when I do follow it, I usually have a better understanding of how I'll use my code by the time I'm done implementing it. Usually because, in the process of testing my code, I'm getting to 'use' it before I've even written it.
That is, I have yet to find a process, TDD/BDD/whatever, that can truly up the viability of iteration 1 of a program. Instead, it is by iteration 3 that things have a chance. And, it seems, any process is likely to succeed by then.
But when you do have a fairly good understanding of what's going on (and you're unlikely to make major incorrect assumptions), TDD is a big help in designing APIs.
In the end, I always want a way to make iteration 1 work. For it to be the only time I have to write something. However, experience is having done many many first iterations.
To have a good interface you want to think about how you would use it, and only write it after this step, probably stopping along the way and stepping backward when problems arise.
Now, getting management to invest in TDD is another story :-p
No. My point was: if you want a good API, aim specifically for that, not for some semi-related proxy like tests.

And with a good interface you may suddenly find out that you actually don't need most of the tests. Tests are heavily overrated when it comes to anything but what they were designed for (i.e. preventing regressions).
At least the way I write these tests, and I find it hard to think of another way, is to essentially write the usage examples of the documentation, except they're actually runnable.
Also, once it starts getting heavily used, won't that make it much riskier and harder to sell to management to refactor it if you get feedback that it is hard to use? Unit, integration, and functional tests all help here, but let's say your abstraction has adoption at 100 api consumer call sites. There's a lot that could go wrong.
Now, if you are the only person using the api for a long time, that's another way to determine if it's a "good api", because you'll use it to build things. Refactoring it is still a tough sell to management in that case though. Not impossible, but tough.
For example, consistency: it takes long enough to figure out a new module without also having to re-learn all of the things you did arbitrarily differently from the previous module. Pay attention to naming, argument order, styles of error-handling, etc. and don’t abbreviate things that are not extremely common abbreviations (e.g. "HTML" is OK; getHypTxMrkpLang() is a PITA).
Also, if you include lots of examples that obviously work then developers are more likely to trust what they see. Python "doctest" is brilliant for this, since you can trivially execute whatever the documentation claims. Don’t document the ideal world in your API: document what it actually does.
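A minimal doctest sketch (hypothetical function):

  def parse_port(s):
      """Parse a TCP port number.

      >>> parse_port("8080")
      8080
      >>> parse_port("99999")
      Traceback (most recent call last):
          ...
      ValueError: port out of range: 99999
      """
      n = int(s)
      if not 0 <= n <= 65535:
          raise ValueError("port out of range: %d" % n)
      return n

  if __name__ == "__main__":
      import doctest
      doctest.testmod()   # fails loudly if the docs drift from reality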
Make your API enforce its own claims in code. Don’t assume that a developer will have read paragraph 3 where it “clearly says” not to do X; instead, assert that X is not being done. This is extremely valuable to developers because then they can’t do what you don’t expect them to do and they won’t start relying on undefined or untested behavior.
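And a sketch of enforcing a claim in code instead of in paragraph 3 (hypothetical API):

  def replace_all(text, old, new):
      # the docs say 'old' must be non-empty; make the code say it too
      if not old:
          raise ValueError("'old' must be a non-empty string")
      return text.replace(old, new)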
This resembles Plato's Theory of Forms: https://en.wikipedia.org/wiki/Theory_of_Forms
> These Forms are the essences of various objects: they are that without which a thing would not be the kind of thing it is. For example, there are countless tables in the world but the Form of tableness is at the core; it is the essence of all of them.
Love this paragraph. That's why it's art.
The benefit of the GUI is to present something that text can't - such as lines between components, or pictures, etc. But what's the (specific) domain use for those thing? If there isn't one then a GUI is just a text UI with a mouse pointer and a bunch of overhead.
There's almost zero connection between what seems simple to your boss in a screenshot and a productive tool. If someone really removed unneeded complexity from a burdened UI, then great, but one person's bell and whistle is another's required tool.
Sometimes you don't want an expert tool, you want an easy button. That's fine. But often there's an inherent tension between 'can never fail' and 'lets you open up the throttle and tear through your problem'.
Compare the editing and reviewing workflows supported by a share drive and those supported by git. A shared folder is undoubtedly simpler, but good luck maintaining a coherent revision history or editing a document in parallel on different isolated subnets.
Sadly, the post is too short to discuss what "intuitive" means for an API (which could probably fill several volumes).
Behind that button, there was no possible way to make a mistake. Now, to be fair when a constraint was violated ( which only happened when something physically broke or somebody didn't set a physical thing up right ) there was a comprehensive explanation of the failure - "The U42 wire for the Gerfish Space Defarbrulator is disconnected or broken. Please refer to section 1.145.2 of the service manual." That happened in a popup.
To me, that's a good GUI - "just tell me when to go to it." But there is an ostensible public choice theory problem with this approach - who will be paid for training in it? Where will a support network for it exist? Nobody, and nowhere. Show this to people, and you can see it on their face - "there goes my job." They think this even after I show 'em the popup.
It's asocial, and that's more important than "it's correct." But it worked to sabotage any expectations people might have about me writing GUIs.
Also, when people tell me that corporations are cost driven these days, I just laugh because of this.
Fixing the API is what you do, for instance, when you write language bindings for the horrible BSD sockets API.
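Python's socket module is a handy example: the binding papers over sockaddr structs, byte-order macros, and the -1/errno convention. A sketch (needs network access):

  import socket

  # one call replaces the getaddrinfo/socket/connect boilerplate,
  # and failures raise exceptions instead of returning -1
  with socket.create_connection(("example.com", 80), timeout=5) as s:
      s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
      print(s.recv(200).decode("latin-1"))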
The fact that the user can fix it doesn't mean that API authors shouldn't care about simplicity, easy of use, consistency, etc., but it does mean that they shouldn't overdo it.
Like math: you learn basic Math. Then you have a calculator for basic operations. Then you have MS Excel for general professional use. Then you have MatLab/R and stuff for specific use.
For programming I have the impression that you learn to code and then, from the start, you have to learn how to use the most sophisticated tools in place. And several of them.
Right; and if someone makes a tool which is (attempts to be) more friendly to use, it's constantly derided by "real" programmers. Look at, for example, Access, Visual Basic, PHP to some extent.
The correct answer to, "people are using Access to write software" shouldn't be:
"that is garbage! They need a real DBMS! With multi-user support!"
It should be:
"well, Access is clearly showing us that there's a huge under served market of people here, let's figure out what it's doing that appeals to them then work on an improved version."
I doubt rather severely that coding helps with that at all.
Once upon a time, if you trained in computers, you met The Jeremiah who told you "woe betide you who enter here." That was a good thing; you were prepared for the onslaught of unreason surrounding this discipline.
It's kind of like all the recruiters in the movie "Starship Troopers" being maimed.
Then that seems to have stopped.
Programmers are not different, they need equally readable websites.
P.s. ;) or ;(
P.s.2. In before the "HN is also xyz". HN might not have fancy styling but it did carefully consider readability.
It's quite the technical accomplishment, but it's challenging to use in ways that have no reason to be challenging from an experience perspective.
Michael Schumacher and your 15-year-old new driver need very different UIs. The new driver needs a simple UI and lots of protection from making mistakes, to solve the easier problem of getting from A to B without crashing. Mr Schumacher, on the other hand, needs a complicated UI and zero hesitation in the implementation of his decisions, to solve the much harder problem of breaking the lap record at Monaco.
Determine who your audience is, and how hard the task they're attempting is, then implement the appropriate UI. Defenestrate anyone who tells you simple or complex, GUI or CLI is "inherently better" without considering the problem.
Is red-eye reduction in MS Paint or Python PIL better than in Photoshop? Conversely, would you rather use visual studio or notepad to write a 2-line script?
The UI isn't different because the users are different. The cars are. An F1 car is crammed for a reason. This is a deeper problem than adjusting for users. And this is what most still don't get.
This UI vs functionality dichotomy does not exist. Any compromise is artificial. UI must be purely functional, and from function emerges pure form. A good designer is focused just as much on function, as is a good developer on design. In other words, they are the same person. And this is basically Apple's design philosophy in a nutshell.
A UI designer didn't design the cockpit of the F1 car, nor the Camry. They may have chosen the font, the shapes, the icons... but everything was already there. Function necessitates it. And the interfaces of both cars are incredibly simple.
The UIs are different because Schumacher wants more precise control (brake bias, suspension rate, turbo angle of attack, etc.), and you can't get that level of control without a more complicated UI (you need the appropriate buttons and levers for all of those things).
The title of the person who lays out the Ferrari F1 car's cockpit is literally UI designer. If you think an F1 car's cockpit is simple then you should reconsider. The cognitive load of those two sets of controls is very different, made possible and desirable because one set of users is expert and the other isn't.
And no, MS Paint and Photoshop are not the same program with one just having more options. Photoshop is a beast, and its UI has improved over the years -- to match that functionality.
The UI is different because Schumacher is driving an F1 car.
> additional options with non-obvious discovery
This is bad design. iTunes and Apple Music are horrible also. Not everyone at Apple gets it. But everything that made Apple successful was about getting it. Steve Jobs got it.
> designer happiness
There is only designer happiness among those who don't get it. There is no designer-developer distinction to the user. Ultimately all there is is user happiness.
It is a complete turn-off. It took me days to figure out how to configure it correctly to access it through SSH without following a guide that builds out some giant infrastructure.
It should just be one-click to throw everything up that I need and hide all of that extra config behind a wall that I can access if I want / need to.
It feels like AWS has just kind of mutated organically with absolutely no kind of central planning.
Working around this involves either manually editing text files or trying to find the right dialog box into which to put some pathname.
The other big problem with IDEs is that they usually have no understanding of the tools they invoke. This is a UNIXism - programs take in parameters, but just return an error code. There's usually no machine-processable output from compilers or linkers. If you're lucky, the IDE might be able to associate an error message with the right source line in the right file.
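(There are exceptions nowadays: GCC 9+, for instance, can emit JSON diagnostics, which is exactly the kind of machine-processable output an IDE could consume. A sketch, assuming gcc is on PATH and a broken.c exists:)

  import json, subprocess

  # GCC prints a JSON array of diagnostics on stderr with this flag
  proc = subprocess.run(
      ["gcc", "-fdiagnostics-format=json", "-c", "broken.c", "-o", "/dev/null"],
      capture_output=True, text=True)
  for diag in json.loads(proc.stderr or "[]"):
      print(diag["kind"], diag["message"])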
I think in antirez's case there is little 'underestimation' :)
Returning to the more general practice - I agree, it's easy to spend several days building a nice abstraction for something that is never extended again, or that is extended in a way different from what was anticipated, so the abstraction doesn't help. In my experience, what has worked best in terms of internal structure is to write a basic, working version (this is when you don't know whether there will be any other users) and to refactor it into a nice abstraction when you reach the point where it needs to be extended. A huge positive is that once it needs to be extended, you know what the abstraction is going to be used for.
I agree with this. Too much time is spent learning how to work with poorly designed interfaces. This sort of knowledge of man-made, arbitrarily designed tools doesn't really teach us anything that applies outside the highly specific use case. And it will likely be obsolete within a few years. The half-life of IT knowledge is short.
While complex tools might be unavoidable, we need better ways to interface with them. And better documentation. People rely on SO and the like to understand how to interface with an API, decode cryptic error messages or configure some tool because the documentation and interfaces are often lacking.
APIs are written documents describing external controls to a system's behavior. A good document can describe both the controls and behavior clearly and succinctly. A bad one can describe the same in long and convoluted language that effectively hides the underlying system. Often in a mess of "should" and "can" clauses, with lots of passive voice, vague assumptions, and unnecessary complexity and verbiage.
For some reason, I feel many programmers write documentation the same way that they wrote their X-page essays for English class. They're just trained to write fluffed-up junk that fills some imaginary requirements of a long-irrelevant class.
EDIT: To clarify: I'm describing the actual types and methods themselves. They infer an underlying mechanism and a means to control it.
I think it's reasonable to say that APIs are UIs. They are how users and/or machines interact with a system.
bool ufklsjblsboabfds(int, int)
bool operator<(int, int)
I know that to many, the idea of individual identifiers being documentation is a bit radical, but I think it's the first documentation a developer sees. It's the documentation built into the body of every piece of code that they read and write.
I think the opposite – it's totally mundane to most people. Aside from the intellectual exercise it requires, people don't have any reason to use a language that used bool ufklsjblsboabfds(int, int) any more than they would use Brainfuck.
So, sure, the API "documents" the functionality in the most minimal way possible. But when you say "documentation" you're using a term that almost no one will understand in the way you mean it.
No, a good user experience is not a luxury you can afford to omit. Stop creating backlog tickets to "simplify code". It's just plain selfishness.
The same thing can be said for an API. I've been pair architecting/coding a pretty substantial spike over the past two weeks, with some hairy revision control and dependency management stories. The main thing we've been aiming for is making extremely sure that developers don't drop a dependency into our DAG. The solution was obvious, but the API design has outright dominated those two weeks.
I'll keep my complex UI thank you very much. Complex doesn't have to mean complicated.
Yet Firefox has changed behaviors that I can’t fix anymore, such as always remembering everything in a Downloads list (they outright removed any way to prevent this so one has to manually clear it).
A simple interface that is changing or obsolete is still useless knowledge I have learned.
Longevity is another reason a developer(s) should spend extra time getting their design polished. Get it right so the product can grow and does not need to introduce many breaking changes.
Make upgrades a joy, not a PITA.
CSVs are simple to parse! JSON and XML are more complex beasts; they are especially hard to parse when they don't fit in memory. I do use JSON a lot, don't get me wrong, but it's mostly for small data sizes.
Does anyone else here feel the same about JSON-Stat?
Which are part of the field and not two records. Or having to deal with user names like "Blue, Al" (Outlook's preferred format for representing people's names).
Or you end up with O'Reilly - is the ' part of a quoted string or not?
And of course a comma isn't necessarily the only separator - what about tabs? Or a document where an intermediary has saved it in an editor that has "helpfully" converted tabs to N spaces (where N is a universally disagreed upon nonzero positive integer)
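Python's csv module copes with the quoted cases, though nothing recovers an unquoted embedded comma. A small sketch:

  import csv, io

  # "Blue, Al" written without quotes is irrecoverably two fields:
  print("Blue, Al,42".split(","))            # ['Blue', ' Al', '42']

  # proper quoting lets a real parser cope, O'Reilly included:
  row = '"Blue, Al","O\'Reilly, Tim",42\r\n'
  print(next(csv.reader(io.StringIO(row))))  # ['Blue, Al', "O'Reilly, Tim", '42']

  # tab-separated input is just another dialect:
  print(next(csv.reader(io.StringIO("Blue, Al\t42"), delimiter="\t")))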
Speaking of which, I'd prefer the cut-and-paste-friendly:
$ redis-cli hostname:port
to this, which slays me every time:
$ redis-cli -h hostname -p port
"this is my _workbench_, dammit, it's not
a pretty box to impress people with graphics and sounds. when I work at
this system up to 12 hours a day, I'm profoundly uninterested in what user
interface a novice user would prefer."
Full post: http://www.xach.com/naggum/articles/3065048088243385@naggum....
This means that the implementation is often just as much the UI as any other aspect of the program. Simple implementations are just as important as simple UIs.
Misunderstanding of what's happening (or more commonly what is not happening) with some API endpoint due to an overly complex - often over-abstracted - design can be a very expensive mistake.
You seem to be interpreting it as, "Programmers are not different [from each other]; they need simple UI's [that limit their expressiveness]."
I guess? Honestly, I'm not really sure where your comment's coming from. Please elaborate.
Trillions of checkboxes with cryptic names, and gigantic tooltips to display the detailed documentation (plus the zillions of relevant StackOverflows for the real-life bugs).
Nope, that's just sarcasm.
Sometimes the best UI is a conf file, a web browser and Google.