To make anything production quality you have to go through many iterations: from sketching (prototype), to inking (design/setup), and then many, many passes of post-production polish. Unfortunately, our project management hasn't learned yet that it is a creative art. You can't always just throw another writer in to make it go faster. Game development is more of a programming/creative mix, and that's clear in the industry: it is easier there to argue for delays for polish and game-mechanic tuning, while in other fields it is almost a perception hit just to delay for a better product with more iterations.
Hemingway also had a great tip for writing, or for any creative, self-driven career like programming: leave something unfinished that you know how to continue, so you can start on it in the morning. So if you have a solution and have implemented part of it, prepare to test it, or even put a compile error right where you left off; the next day you build and pick up at exactly that point.
> “The most important thing I’ve learned about writing is never write too much at a time… Never pump yourself dry. Leave a little for the next day. The main thing is to know when to stop. Don’t wait till you’ve written yourself out. When you’re still going good and you come to an interesting place and you know what’s going to happen next, that’s the time to stop. Then leave it alone and don’t think about it; let your subconscious mind do the work.
> The next morning, when you’ve had a good sleep and you’re feeling fresh, rewrite what you wrote the day before. When you come to the interesting place and you know what is going to happen next, go on from there and stop at another high point of interest. That way, when you get through, your stuff is full of interesting places and when you write a novel you never get stuck and you make it interesting as you go along.” – Ernest Hemingway
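The compile-error trick described above can be sketched in code. A minimal, hypothetical example (the parser and all names are invented for illustration) of leaving a deliberate failure exactly where you stopped:

```python
def parse_header(line: str) -> dict:
    """Parse a 'Key: Value' header line into a one-entry dict."""
    key, _, value = line.partition(":")
    return {key.strip(): value.strip()}

def parse_message(lines: list[str]) -> dict:
    """Parse the header block of a message (body parsing comes tomorrow)."""
    headers = {}
    for line in lines:
        if not line.strip():
            break  # a blank line ends the header block
        headers.update(parse_header(line))
    # Deliberate stopping point: the next step is already known, so
    # tomorrow's session starts here instead of with a blank page.
    raise NotImplementedError("resume here: parse the body after the blank line")
```

The deliberate `NotImplementedError` (or a literal compile error in a compiled language) plays the role of Hemingway's "interesting place": you stop while you still know what happens next.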
This next quote about quality/product/content/experience is great, and it can parallel making a good game, an app that makes someone's life more fun, or a tool that helps someone achieve something. Make your product a friend, like a book, if you can; you'll probably have to do it yourself, as today's project management systems won't allow for it.
>“All good books are alike in that they are truer than if they had really happened and after you are finished reading one you will feel that all that happened to you and afterwards it all belongs to you; the good and the bad, the ecstasy, the remorse and sorrow, the people and the places and how the weather was.” – Ernest Hemingway
Agreed. A good stopping place is the worst place to stop!
But when I'm writing prose, it does not feel anything like writing code. I enjoy a beautiful line of written code, and can sit back and admire it for a while, before I get on with the business of hating it a week or three later, but it is nothing like the experience of feeling the taste of words on the tongue, of reaching into my own garbage sack of mental vomit to pull out gems to glimmer in the sun.
Good devs can be bad writers, and the other way around. But good code is like good writing: it's a planned endeavour which takes a few iterations to get right.
Most people who code badly and/or write badly do so because their code and text are just streams of thought typed down as they had them. Sure, iteration can fix a bug here and there, but if their thoughts don't follow a clear through line, nobody can understand what they wanted to create.
I wrote a similar post when, after many years of software development, I finally got around to writing my first novel. https://adiamond.me/2015/01/writing-and-programming/
The author is right in saying the key is really to show up and put in the work. That can be really hard for a lot of people, but programmers are already accustomed to it.
BTW, Vikram Chandra's Geek Sublime also does a good job investigating the ties between writing code and writing prose. https://www.goodreads.com/book/show/19353724-geek-sublime
Chandra goes way back, pointing out that the generative grammar we use today to describe valid programming language constructs was first created 2500 years ago in India by a guy named Panini who fully and precisely described the structure of the entire Sanskrit language. There's more on that here, if you're interested: https://medium.com/@dmitrypavluk/we-should-thank-sanskrit-fo...
This can be as true of a Science Fiction novel as of any other. It doesn't mean the work has to appear "elevated", but it is the heart of the thing even in a fairly generic work.
I'd toss this one under the category of 'wouldn't it be nice'. It would be nice if writing and programming were similar. It would be nice to have a bridge from the sciences to arts and letters. But if such a bridge could be merely posited, well, this positing would already be common parlance.
For my part, I plan out the high-level plot and a list of scenes: at most a 2-3 page synopsis.
Then I write start to finish. No exceptions: Scene by scene, paragraph by paragraph, without going back.
Interestingly, given that you suggest writing is more dynamic and non-linear, that method of writing a novel (which I've used twice so far, with a third in progress) is a lot less dynamic and non-linear than the way I write code.
I rarely plan at anything but the very highest level when I write code. I sketch out components and fill in pieces of code as I need them, stub out other things, then test, then fill in some more.
I can't write that way. I find if I try to produce any kind of in-depth synopsis I just end up changing most things when writing the full scenes anyway. I need to know the details of what went before to fill in the scene I'm currently working on, so I can't work effectively on it until I've written the previous ones out fully.
Some people do write by jumping back and forth, so I'm not suggesting you're wrong for you; that's just not how it works for me. When I revise my drafts I similarly go through them beginning to end. When I get one back from the editor, I gather up the notes, decide what to listen to and what to ignore, and go through my draft linearly, beginning to end.
How is this different from writing? Most of the time you have some pre-defined thing you want to communicate. If you mean writing novels, and not just memos or articles, then the equivalent is coding games, which is just as creative as writing novels, if not more so.
Writing can often be organized and worded in nearly infinite ways while getting across the (or nearly the same) message.
There are some pretty strong similarities. Writer's block and programmer's block correlate.
EDIT: Sadly, writing has no compiler to tell me that I overlooked the word "block" after "programmer's".
When I write a piece of software, I have goals more concrete than just "write a piece of software". I'll usually have a goal (a TODO app), and maybe some ideas about features. Sure, I might not know exactly how I'll get there, and along the way I might come up with new ideas, but you don't start with the goal of writing a TODO app and end up with an application that processes DICOM images.
With writing a novel, well, very often the writer has no idea where it will go. This varies: some authors spend a lot of time planning everything out, scene by scene, chapter by chapter. But a lot of authors discovery-write. That's what I do. When I write, I'm waiting, hoping, to surprise myself. I want to go: oh, what the hell is that!?
When programming, I go: oh, I need to do that to get this working. While writing, I'm constantly feeling my way through each word. I re-read it, out loud, tasting each word, and constantly asking myself: how does it make me feel?
[If you want to maximise the chance of "making it big", traditional publishers are more likely to be able to make that happen, but for my part it's a hobby first and foremost, so that wasn't really a consideration I cared about, and the odds of that are extremely poor anyway.]
But ultimately, code is written for some purpose, so a similar delayed/subjective feedback mechanism is still relevant (i.e. does it solve problems for users? Does it work with real data? Does it scale?).
> One year ago I paused my programming life and started writing a novel, with the illusion that my new activity was deeply different than the previous one.
What's implied here is the hope for a new experience.
But this can never be achieved: A programmer can never experience the writing of a novel as a non-programmer would.
Furthermore: Even the similarities can only be thought about from a programmers mind.
Imagine two writers discussing writing. One is a non-programmer, the other a programmer. Even if, after some explaining, the non-programmer and the programmer agree on the similarities, this can only happen because the non-programmer changed. They needed to learn some programming just to understand the similarities experienced by the programmer.
The non-programmer writer will never be a non-programmer-writer again.
The author experiences writing as very similar to programming precisely because they are a programmer.
Sure. But the veracity of the comparison doesn't depend on the prerequisites to make the comparison.
The words don't carry any meaning without context.
You could say "the veracity does not depend on the prerequisites" to mean "whoever enters the context will agree to the comparison".
Bad code doesn't work.
There can be "beautiful" code that doesn't work, but it's pointless. And there are many instances of ugly, even abominable code, that does work, but... well, at least it works!
It's very hard, and maybe impossible, to determine if a novel "works". It may work for some people and not others. It may not work today and work in a hundred years, or the opposite.
We can never know. Least of all, the author herself.
And about as much arguing about what makes "good code" as there is about what books are good and bad.
That's what I said... but the same cannot be said about a novel...
> It's very hard, and maybe impossible, to determine if a novel "works".
I wonder if there is an assumption here about what it means to "work" vs to be "bad". Psychologically, maybe it's helpful to view each person as their own interpreter, and there is more variation there compared to (e.g.) a specific Python interpreter.
But even in python I could write totally not-python code, and have the python interpreter run it (e.g. by writing a codec). And I could write beautiful code that throws an error, and have a person debug it for <some_purpose>, and in meeting that purpose it might be working.
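The codec idea mentioned above can be made concrete. A minimal sketch, with an invented `PRINTLN` keyword and an invented codec name, of registering a decoder that turns not-quite-Python into Python before the interpreter sees it:

```python
import codecs

def _decode(data, errors="strict"):
    # Decode the bytes, then rewrite the invented keyword into real Python.
    text = bytes(data).decode("utf-8", errors)
    return text.replace("PRINTLN", "print"), len(data)

def _search(name):
    # Codec search function: answer only for our invented codec name.
    if name == "notpython":
        return codecs.CodecInfo(
            name="notpython",
            encode=codecs.lookup("utf-8").encode,
            decode=_decode,
        )
    return None

codecs.register(_search)

# Not valid Python as written, but runnable after the codec's transform.
source = b'PRINTLN("hello from not-python")'
exec(source.decode("notpython"))
```

Whether the result "works" then depends on which interpreter, with which transform in front of it, you choose to ask.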
I think the challenge here is that "work" is being defined in a narrow, technical sense for code, but is recognized in a much broader, social/cognitive sense, for novels!
I've tested some while writing my own novel, and it was more "interesting" than useful at this point, but I might give it another shot with novel #2 to contrast and compare the two.
It'll be interesting to see how those tools evolve, though.
authors.ai is one example.
I sometimes read code for fun. Some code reads like an essay, some like a book, and other code reads like a collection of poetry. So I feel all three can apply, sometimes some of them at the same time.
- Programming and writing are similar
- In writing you have rules (natural language grammar), in coding you also have rules (formal language grammar)
- In writing you build big things from small things. Small things must be nice. Big things must also be nice. Small things must align with big things. Like coding.
- If you don't write you're lazy. If you don't code you're lazy. Very similar.
- BUT actually writing and programming are completely different, because in one you work for years on a single piece which cannot be changed after it's released, so you torture yourself to make sure it's as perfect as it can be without going insane, and if you give up at any point along the journey then all of your effort is basically wasted; WHILE in programming you have objective criteria by which you can judge whether something is good enough, can release working but imperfect parts of the final product along the way, can modify things as you see fit at any point, and even if you decide to stop along the way it's fine, because your progress is incremental
It is pretty easy to see how poor or grand some instance of software is if you look at it like a single complete product, like a novel.
As to the differences: the article mentions revisions of a work. Games are most similar to novels in this regard. Popular game titles don’t evolve but instead release sequels. They may occasionally release patches or contain Easter eggs, which is similar to published community-support projects like workbooks and commentary from a work’s original author.
On many occasions I've heard an author say something like "I got half-way through the book and I found out that one of the characters didn't want to do the thing I'd planned for them, so I had to deal with the plot changing in this unexpected way."
I've only once heard an author say "so I had to go back and rewrite the character until they were the sort of person that did want to do the thing I'd planned for them."
But this is the difference between "art" and "craft". The art is deciding what to build, how it should look and feel. This part is imprecise and emotional. It depends more on talent than practice, and is the bit that needs a muse or inspiration.
The craft bit is building it. This part is logical, precise, and needs to be done competently so it doesn't obscure the "art" part (great art can be ruined by bad craft, but good craft with no art is just boring). This needs lots and lots of practice.
This duality applies to writing, coding, sketching, music, movies, any creative practice.
Writing for people to read is not “very logical and very precise.” It’s “very expressive,” and sometimes, although the part the compiler reads is correct, the thing the human infers from the program is imprecise and expressive.
Working with code teaches that code is experienced, too. We read a thing, go hunting for its downstream dependencies, learn other things… Coding is an experience as much as walking through a building is an experience.
Architects of buildings work with precise engineering, but they also craft experiences for humans. I am cautious about drawing parallels between code architecture and physical architecture, but the parallel between the work a code architect performs—creating a precise thing for the compiler and simultaneously creating an imprecise experience for the programmer—and the work a physical architect performs is much more sound.
My thesis, therefore, is that programming is the art of doing a precise thing for one audience—the machine—and a creative, imprecise thing for another audience—the human who experiences the code.
There’s a continuum between writing technical manuals and ‘Ulysses’, with airport-bookstand thrillers somewhere in between.
A Technical Manual would need more attention to the human experience than a spec, IMO.
Deciding how to structure your code is half logic and half aesthetics; the fact that we spend a large chunk of our time refactoring, i.e. switching around pieces of code so that it still does the same thing, is a testament to this.
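A tiny, invented illustration of that point: a behavior-preserving refactor, where the code is switched around purely for aesthetics while doing the same thing.

```python
def positive_total_before(items):
    # Original structure: explicit loop and accumulator.
    total = 0
    for item in items:
        if item > 0:
            total = total + item
    return total

def positive_total_after(items):
    # Refactored structure: same behavior, expressed as a comprehension.
    return sum(item for item in items if item > 0)
```

Neither version is more correct; the choice between them is exactly the aesthetic half of the decision.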
More broadly, I feel like "X is like Y"-type articles are somewhat of a Rorschach test. Things we are experienced at are by definition things we spent a lot of time doing. It's inevitable one will try to self-reflect and draw parallels.
While the specific final words are less precise, there are a lot of rules to adhere to in order to write well, too.
And the best programming is often very creative.
BTW, congratulations! Looking forward to your sci-fi novel, @antirez.
Anyway, I love how Redis evolved, and its design from the beginning by @antirez. That's why I'm eager to read this other craft of his.
(From which follows, somewhat implicitly: whenever you set up a process for shared project development, also ask yourself, would this work for a shared writing effort?)
And: a 'successful' program is easy to spot; a 'successful' novel defies that, beyond the most superficial (and ultimately subjective) measures.
This is the best way to fix a first draft, of short texts at least, without having to wait for it to cool off first. Often an email must be sent quickly, with no time to set the message aside. So always before I hit Send, I invest a minute to pronounce the text aloud, or at least under my breath while moving my lips.
I'm often amazed at the obvious typos I catch this way. As well, my oral fluency -- which appears to come from a whole different place than my written voice -- can often improve entire sentences with better word choices or figures of speech that emerge from my mouth spontaneously as I speak the text back.
Voilà! A much better second draft of the message in a very efficient manner.
And yes, I read this post aloud before I pressed "add comment." I hope it doesn't betray me.
Or perhaps a closer analog is the rubber duck where being forced to explain your problem leads to thinking more clearly about it and solving it.
“The first time you write, you’re learning about the problem, the second time, the solution, and then finally, you’re continuously polishing it.”
Or something like that. It was years ago.
The prose has to "execute" in the interpreter of the reader's imagination. Not much stack space there: go easy on the pronouns.
Very true, but also not something you should focus on too early.
You'll need a lot of experience for this initial design phase to be really fruitful.
He does not consider that writing is an art form, a means of creative expression, not just a way to get your idea across.
For instance, one of the greatest novels "The Devil to Pay in the Backlands" is confusing and hard to read.
Although that may have more to do with me (as a programmer) than with any relationship between fiction and software.
> I believe programming, in this regard, can learn something from writing: when writing the first core of a new system, when the original creator is still alone, isolated, able to do anything, she should pretend that this first core is her only bullet.
@antirez: why `she` and not `they`? I know you are not a native English speaker (neither am I), but I believe that's not the case here; it was likely written that way intentionally.
The down side of this is that it leaves us with three possible neuter pronoun sets, at least two of which are just about guaranteed to bother someone. Using exclusively male pronouns will obviously bother some people, now, else this wouldn't be an issue in the first place. Using female ones will bother some people (either for ideological reasons or, for those who grew up on "male is what you use for the general case", by tripping them up as they try to figure out who in particular is being referred to). "They" and such are safer but you still get the occasional (incorrect) pedant complaining about that usage.
My native language is Russian, which is a gendered (masculine/feminine/neuter) language. Nouns have gender. It mostly follows the spelling of the word - i.e. whether it ends with a certain vowel. Sometimes it doesn't work that way, mostly with loanwords. In other cases the historical form of the word did match a pattern and was assigned a gender accordingly - and then the word changed (e.g. by being re-loaned in a more accurate spelling) and no longer fits. When that happens, people will use the more "appropriate" rather than the "right" gender in colloquial speech, and eventually it becomes the new standard, correcting the mismatch.
A good example of this is the Russian word for "coffee". When it was first loaned back in the 18th century, it was "kofiy" - and in Russian, that is definitely masculine. Eventually it got re-loaned as "kofe", which would normally be neuter; but the masculine gender assignment stayed from the past spelling. In the dictionaries, that is - in practice, treating the word as neuter became one of the common incorrect colloquialisms, just because it doesn't "look" masculine. Language purists fought this for several decades, and eventually lost: it's still nominally masculine, but neuter is considered an "accepted variant" in modern dictionaries.
So to a Russian speaker, say, New York and Texas are masculine, while California and Florida are feminine. So, when I read "she" in @antirez's article, I was immediately confused, like, did I miss a character introduced in the previous paragraph or ...who is _she_?
In my writing, instead of saying "he", "she", or "they", I try to call people by the exact meaning of what I'm writing about, e.g. engineer, manager, programmer, etc. No idea why that's not THE way to end all pronoun dilemmas in English.
This is a very subtle and interesting point. This notion of the 'primitive kernel' that is hard to change, is the problem of how much abstraction to invest in, at the beginning of a feature or project. It seems to always be a balance between doing what is needed for the immediate specifications, and doing what is needed for future reusability and extensibility of that same piece of code. In other words, how much should this piece of software be abstracted for future reusability?
It is tempting to think that highly abstracted code is overkill. However, abstracting code on the first introduction of a feature allows this "primitive kernel" to be as solid as possible, such that it doesn't need to change often. There is an illusion of doing too much work by considering all the use cases before needing them, but what I think really happens is that
*the longer you put off abstracting something (i.e. copying code instead) the more expensive the abstraction will be once you get to it.*
So the first abstraction, nicely put in the quote above ("During the genesis of the system she should rewrite this primitive kernel again and again"), illustrates why it should be done right the first time around. It is cheaper to do so on the first try, when nobody is using the abstraction. Compare this to a later time, when the kernel is already being used by lots of other components: now you have to take those use cases with their exceptions into account, making the process of abstracting more complicated and risky.
*NOT abstracting as much as possible, is setting the software up for an inevitable increase in complexity, and therefore cost in effort to reduce it.*
Summarizing: abstracting code on the first pass avoids the increase in cost of that same abstraction if you wait until there are multiple concrete cases of it. I suspect the cost is a function of how many concrete cases there are to abstract, multiplied by the environment's stability (once it's shipped, abstracting gets even more difficult and costly).
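A small, hypothetical illustration of the trade-off described above: the duplication path, where near-copies drift and a later abstraction must reconcile every variant, versus the early "primitive kernel" that is parameterized while there is still only one caller. All names here are invented.

```python
# The duplication path: each near-copy can drift independently,
# so abstracting later means reconciling every variant's quirks.
def daily_report(rows):
    total = sum(r["amount"] for r in rows)
    return f"Daily total: {total}"

def weekly_report(rows):
    total = sum(r["amount"] for r in rows)
    return f"Weekly total: {total}"

# The early-abstraction path: one kernel, parameterized on the first
# pass, while no other component depends on its shape yet.
def report(rows, period):
    total = sum(r["amount"] for r in rows)
    return f"{period} total: {total}"
```

Once a dozen components call the duplicated versions, merging them into `report` requires auditing every caller; done on the first pass, the same abstraction costs almost nothing.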