Speed Matters (scattered-thoughts.net)
197 points by U1F984 on Oct 15, 2021 | 159 comments



Doing something a second time is almost always faster.

You can’t draw any information from this, except perhaps that the author avoided Second System Syndrome.

In college I worked with students who had programming jobs. While my classmates were spending fifteen-plus hours on assignments, one of these coworkers, who I shared a class with, said he was spending two hours, and I thought he was lying. Another claimed three or four. This must be bravado, I thought.

The next summer I got a programming job too. That fall I was spending six hours on homework, and by year’s end I was down to three.

When everything is new you go slow. You have to solve problems without muscle memory, and you have to fight nervousness. Am I doing this wrong? Is this even a good answer?

They talk about 10000 hours and mastery, but there are also clear breakpoints at 100 and 1000 hours, where you know enough to do a lot, and you don’t have to stop to consider your first moves.


My son was taking his first programming course, and I was watching him work on an assignment. He was running the compiler and executing his program in the terminal.

I mentioned to him: "Did you know you can pull up your past commands by hitting up arrow?"

His mind was blown. I think that comment doubled his productivity.


And the next lesson: C-r will search the command history (at least in most *nix shells). So when you need to run something but aren't sure how far back it was (not on the screen, not recallable), as long as you know some text in it, just hit C-r <something>, then C-r repeatedly until you get to the correct version.
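For example, pressing C-r and typing "ssh" gives a prompt like this (the matched command is hypothetical):

    (reverse-i-search)`ssh': ssh deploy@staging-box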


On Windows, Ctrl+Backspace will delete the word to the left of the cursor, Ctrl+Delete the word to the right. It's pretty much unknown as far as I'm aware, but it has boosted my editing productivity immensely.

vi users will probably laugh at this.


Unknown? You can't work inside Windows at all without those shortcuts. In Emacs it's alt+Backspace, so it's not too hard to organically find in other environments by trial and error.

Whenever I switch editors, I quickly look for the following shortcuts:

* How do you delete a word forwards and backwards? (there might be a better shortcut such as Ctrl+W)

* How do you move to the word ahead and the word behind?

* How do you kill a line?

* How do you duplicate a line?

* How do you move a line up and down?

* How do you activate multiple cursors?

* How do you comment/uncomment a line/block?

* How do you expand/contract a selection?

I think that covers about 80% of the common actions, and learning them will make you instantly productive in your new editor.


I really only use one of those on a consistent basis.


Another very handy shortcut in MS Office programs (and maybe others): alt-shift-up/down arrow moves the entire line up or down. Quite handy for moving text around when composing an email, for instance.

Ctrl-up/down will move the cursor past newlines, so you can more quickly navigate when there are long lines with lots of wrapped text.


I got a laugh out of helping a PhD ML researcher who was having problems with their keyboard. They showed me how they had lowered the repeat delay on their wireless mac keyboard, and now when they typed, it went nuts and stayed stuck on repeat randomly. The reason they lowered the delay was that it was taking too long to delete the long command they had (they wanted to hold down delete to clear everything out). I was able to show them that there was interference from their USB-C dock which likely caused the key up events to be dropped - and that they could also use the option key to delete entire words at a time.


Thank you for posting this; I had ctrl-arrow-key left or right to jump words but this is magical.


I've always used CTRL+SHIFT+left then backspace. Since I'm used to using ctrl to navigate and skip words, adding the shift modifier just kinda slotted right into my brain


Actually as a Vim user I have Ctrl+Backspace mapped for insert mode. But I learned Windows shortcuts before vim...


One of my favorites is option-arrow (left or right) to jump between words in the terminal.


If it’s readline-based like bash and you are still using default shortcuts then you can do M-b and M-f for backward/forward a word at a time (not sure what the Meta equivalent is on a Mac. On my keyboard it is the ‘Alt’ key.) Many Mac apps/text inputs support emacs-style shortcuts like these.


Meta on macOS is the Option key.


    set -o vi


Just as useful,

history | grep pattern

With all the useful grep abilities (like peeking at commands before or after the given pattern).
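For example, grep's -B/-A flags give that context around each match (the pattern here is just illustrative):

    history | grep -B 2 -A 2 'rsync'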


I do this a lot as well. A useful side-tip to go with that: set your bash history size to be unlimited (or at least really big) to avoid losing less frequently used commands.
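In bash that's a couple of lines in ~/.bashrc (bash 4.3 or later, where a negative value means unlimited):

    HISTSIZE=-1
    HISTFILESIZE=-1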


Try fzf


Related trick: type the first couple of characters of the command in question, and hit PGUP until you find the correct version.


This one's my favorite. Not every distribution of readline enables this, so you might need to uncomment or add the binding for history-search-backward/forward into /etc/inputrc or ~/.inputrc. Similarly editrc for programs that use libedit instead of readline to avoid the GPL.
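For example, the relevant lines in ~/.inputrc, assuming your terminal sends the usual PgUp/PgDn escape sequences:

    "\e[5~": history-search-backward
    "\e[6~": history-search-forward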


Great tip!

I've found more like that at: https://www.redhat.com/sysadmin/cli-speedup


:mindblown: I've been using *nix shells for 30 years and just learned and tried this now. Thank you!


I only recently learned this. It's been big.


For bash users who are used to vi-style keybindings, you can enable vi mode[0]. When I discovered this, my "performance" must've more than doubled :)

[0] https://blog.sanctum.geek.nz/vi-mode-in-bash/


My command line time is easily halved by using vi mode in Bash.

(And so many coding tools are inefficient to me because I cannot use vi keybindings.)


Yes it's amazing. And for long complex commands, being able to just press v to drop into a proper vim to edit the command is very useful.


I am working with a junior developer who will use the mouse to click on text, click copy, click the new area, and click paste. I don't have the courage to tell him that he can use Control/Cmd-C and Control/Cmd-V.


Just let him know.


And you may have to show him during pair programming. Telling someone 10 improvements rarely changes their workflow.

Letting them see/discover the change will drive them to try


In Linux, just select and middle-click.


This blew my mind when I first saw somebody do it. I was a bright-eyed young engineer in shorts visiting a customer site, and he had a PhD, dress slacks, a white button-down shirt, and a tie. I assumed he had a jacket to match the slacks hanging in his office. I said, "Whoa, what terminal emulator is that? I need to check it out." It was xterm.


Wait until you hear about "cd -"


I've recently switched from bash to zsh. Check out the dirhistory plugin, which lets you use Alt-left to change to the previous directory, Alt-up to change to the parent directory, etc.

https://github.com/ohmyzsh/ohmyzsh/blob/master/plugins/dirhi...



When something becomes routine for a developer, where they can predictably finish a work item without designing anything new, that task is immediately flagged in managers' minds as one that should be done by someone more junior, a contractor, a third party, or an off-the-shelf solution. Almost by definition, then, good developers are relegated to only doing tasks that they are doing for the first time. The implications of this natural law are evident in the quality of the output, the predictability of the schedule, and the career burn-out of people who are asked to be creative 100% of the time.


I still spend a lot of my time manipulating lists. Although it may be fair to say that experience helps you realize that you can turn some problems into manipulating lists. Easier to write tests for, and for the next person to understand.

The problem, from the manager's standpoint, is that the really good solutions are simpler than the really bad solutions. That may not sound like a problem, but the concise, correct answer is often self-evident in retrospect, diminishing the gravity of the situation.

Lots of people make suggestions you instantly agree with, but you would not have come up with all of those suggestions on your own. A bad boss won't understand that, until you reach another self-evident conclusion: that you would get more money and respect working somewhere else. Maybe not even then.


I think doing things for the first time is still something you can get better at. Although you're working on brand new problems all the time, you can still improve at skills like digesting the problem, splitting it into subproblems, designing systems, faster implementations and testing.


I wish this was true. I'm always finding myself moving around in an organization and projects trying to find new challenges rather than building another standard API. Certainly every project has its critical pieces where any extra attention can pay off, but not so many.


I think a trick you can learn here is that once you're good at doing something that is difficult the first time and tedious on repetition -- but still needs to be done frequently -- the way to vastly increase your value is to teach others how to do that thing.


What type of company do you work for? It may be different in software/services vs. commerce companies.


I actually work at a software/services company for commerce. Don't get me wrong, this is a big space and there are many challenges, but they often tend to be more vertical than technical. Lately I've been making a monolithic app behave less monolithically with decoupled async notifications. I also spend some time prototyping things for future adoption, if/when that's feasible. I'd like to be using something functional like F#, but for some reason not being in adtech/fintech drastically lowers a company's appetite for new tech.


There are a few other variants of this:

When I was a TA marking assignments I saw a huge range in the number of lines of code between my students' assignments. If I took all the students who got full marks, there was often a 2x or 3x size difference between the smallest submission and the largest. And the large assignments didn't look bloated. If you gave me the code from a random student, it wasn't obvious where in the size range their code would land.

And a couple of years ago I worked as a professional programming interviewer. One of the assessments we did was a half-hour programming problem. I conducted over 400 interviews. Most of the candidates had some professional experience as software engineers. Candidates used their favorite language and their own computer and tools. Quantitatively, there was a massive difference between the better candidates and the worst. The strongest candidates could get about 2-3x as much done in the same amount of time as the average. And the weakest candidates (even those with professional experience) could barely implement hello world within half an hour.

As software engineers, we don't have a culture of practice. It's assumed that with enough time working professionally, we'll keep improving. But that might be naive. There are almost certainly ways to measure our skill and improve over time. It's weird we don't even have that conversation.


> As software engineers, we don't have a culture of practice. It's assumed that with enough time working professionally, we'll keep improving

Man, this is exactly what I've been thinking lately.

I don't have that much professional experience and have a background of competitive gaming where there is a really strong culture of explicit practice.

Coming to software development where people with a lot of experience can't articulate their process or the steps they took to improve is really fucking annoying.

I don't expect a step-by-step manual, but I do expect something instead of it all being tribal/intuitive knowledge.


> It's assumed that with enough time working professionally, we'll keep improving.

I’ve heard this described as “10 years of experience” vs “1 year of experience 10 times”.


We do now though? A large chunk of the industry is obsessed with doing binary tree traversals for paycheques.


Also between the first and the second time you have plenty of opportunity to plan ahead for what you are going to do differently next time.


Yes, but that's the trap of Second System. You can have too much time to plan and you end up with a baroque nightmare that everyone resents, including (eventually) you.


Missed in this analysis is just how much time it takes to debug. I have found over the years that I spend at least an order of magnitude more time debugging something than writing it. And that gets worse if the debugging is separated from the writing by days, weeks, or months, because of the context reload and the general head-scratching trying to figure how it is supposed to work and what I did wrong.

I've found over the years that to avoid long debugging sessions, I can instead write better tests, do smaller commits, and write less clever code. To prevent the kinds of bugs that crop up months later, it's really important to write really good, comprehensive tests.

A lot of tests that are too coupled with the design are also bad, though. They slow refactoring. So you have to get good at writing the kinds of tests that don't inhibit refactorings, or get good at designing things so you don't have to refactor.

Coding speed is irrelevant in the long run. It's only relevant for quick scripts and throwaway code.

Don't get sucked into debugging.


It always gets me when people talk about typing speed in programming. Maybe it's because I can't think faster than 50 words per minute, so it doesn't matter if I could type faster.

But even if I could -- it's not about how fast I can think either.

You're right that it's about how much work I can avoid by thinking about how to keep it simple, how I can write good tests that won't cause someone else to do more work later, and how I can build something in a way that can be modified later.

And you can't get there with linear thinking at all.

If the programming you're doing can be done by rote -- and the primary task is getting it out on paper -- either it's something that could be automated, or you're much, much better than me. Possibly both.

The author of the original article is clearly much smarter than me, but he spends a lot of time doing things like building simple text editors for his own use, writing text matchers in obscure languages like Julia, and learning how to create a SQL query planner through trial and error.

Tasks that are all way beyond my ability, but also well beneath my productivity.


To me coding speed is the speed at which you write code with few enough bugs that it is fine to run it in production. So debugging is included in that number, if you have to spend so much time debugging your code then you aren't a fast coder.

If you wrote a ton of bad code yesterday and got nothing done today because you had to debug all day, then you weren't fast yesterday and slow today; you were slow both days.


You can still write code, test it, debug it, release to production; then weeks or months later someone finds a bug in it (in prod). GP's point is that this kind of debugging sucks a lot more time than writing the feature's v1 code from scratch. And I agree.


But that bug is also chalked up to your original time. The time it takes to make feature X is the implementation time + all maintenance time. If that combination isn't low then you are slow.

For example, let's say you spend a full year writing a full product. Then a bug you introduce at month 2 and fix at month 6 is still part of the time it took to build the product. Being a quick coder therefore includes being good at managing technical debt, bugs, etc. There is no reasonable interpretation of being fast that doesn't include these aspects, as the clock doesn't stop ticking until you are done.

So when people say you should learn how to become a fast coder, what they mean is that you should learn how to produce code at your current quality level but in much less time. They don't mean that you should throw quality out the window and just write crap without thinking.


Saying "you're slow" doesn't solve anything. I know I'm slow, but I'm still faster when there is a robust type system with good guarantees.


> I've found over the years that to avoid long debugging sessions, I can instead write better tests, do smaller commits, and write less clever code

The less clever code part is the key, for me. If I can write my code out, in a relatively simplistic way and go back to refactor and optimize it, I save myself hours of debugging headaches.

Doing it and saying it are different things, but I fully intend to do it every time I start =)


This is why I find myself preferring static languages (Rust currently) over my previous focus (Python).

Midway through a late-night debugging session for a service I’d written in Python, I realised I’d spent more time bug fixing and debugging to add more error handling/resiliency (all of which added to the general mess of the code) than it would have taken to implement the solution in a statically-typed language, and I ultimately had fewer guarantees and less performance than the statically-typed language would have given me.

So yes, don’t get sucked into debugging and don’t get sucked into “but it’s faster to write”.


> Coding speed is irrelevant in the long run. It's only relevant for quick scripts and throwaway code.

Except writing tests is coding.


> An example of one of these, the most commonly cited bad-thing-to-optimize example that I've seen, is typing speed (when discussing this, people usually say that typing speed doesn't matter because more time is spent thinking than typing). But, when I look at where my time goes, a lot of it is spent typing.

I’m glad someone agrees with me on this. The argument that “I don’t spend a lot of time typing” is just false. Furthermore, if you’re good at typing, you free up brain cycles to allocate to higher level work. Slow typers severely underestimate how much brain power is spent on that.


On my Macbook Pro, I have Caps Lock mapped to Esc. But every time I hit Caps Lock there would be a pause before the Esc triggered. Since I use vim bindings everywhere this was a nightmare as I switched to normal mode. I was like way less productive. It was nutty.

Fixed it using Karabiner as recommended https://superuser.com/questions/317900/eliminate-macbook-cap... and suddenly I was better at life.


Another option is to map Caps Lock to CTRL and use the default vi CTRL+[ keybinding for ESC. You get the bonus of being able to use a more ergonomic CTRL for virtually anything else that requires CTRL.


That’s what all my friends do and I could never get into it. It’s like the pedal thing. I had it for 3 months but never ended up using it comfortably.


I can't touch type symbols. (!@#$%^&*)

I can feel my brain stutter every time I need to type an esoteric operator.

...okay I know what I'm doing this weekend.


It's not as hard as learning to touch type the entire alphabet! And I don't consider those symbols esoteric, unless you're programming Lisp.


Mostly you would only use (*) from that list for lisp. Some lisp hackers are known to remap the first row of their keyboard to be shifted so that one needn’t hold shift to type parens.


I thought you were going to say they map the first row of their keyboard to parentheses -- which I guess is only a bit further.


I can pretty much chord things like Cmd-Shift-[ or the above without thinking about it, with either hand, holding a drink or lighting up. It feels right.

...You should

<3


> Furthermore, if you’re good at typing, you free up brain cycles to allocate to higher level work.

Yes exactly! I finally learned to touch type including symbols on ortholinear keyboards. This frustration of having to look down constantly is gone. I’m free to focus on editing code.


A good analogy is running.

If you aren't good at running, you're gonna have a bad time playing soccer. However, running fast definitely doesn't guarantee you will play soccer well.

Typing is similar.


I learned Dvorak for the speed and stayed for the perceived lower strain on my hands.

(Another bonus is that now I can't hunt and peck, since my keyboard still has the qwerty imprints, and I guess that is the biggest contributor to the typing speed gained from learning Dvorak.)


> perceived lower strain

Isn't that a proven fact?


By being an exceptionally fast worker, you're usually rewarded with more work.


By finishing your assigned work exceptionally fast, you can focus actual effort on the "sharpening the saw" aspects of your role and/or the organization, and helping others.

Those are the workers who (in a competent organization) are promoted.


Genuinely asking: is there any value anymore in promotions when the market seems to reward job hopping every few years? I think promotions were useful when someone worked only one or a few jobs in their lifetime, but that's not so true anymore. The effort required to get promoted is much higher, and the reward much lower, than simply interviewing at a new place and getting that job within a couple of weeks.


> is there any value anymore in promotions when the market seems to reward job hopping every few years?

Depends entirely on the company. Big tech companies are very good at promotions and retention.

The rewards of job hopping also saturate quicker than people expect. Push it too far and it starts to become a negative. When an employer sees a resume that has a new job every 10-12 months for the past 6 years, they're going to assume the employee will be temporary at their own company, as well.


Big tech companies are bad at retention past 4 years. Your initial equity grant will be the largest, and once it fully vests, you’ll see a dramatic drop in comp.


Is this true lately? Personally I got good-sized refreshers plus more from being promoted.


It is if you're actually looking to get promoted to management (for what it's worth). Promotion may also mean getting higher-profile projects.

I don't know about you but the job-seeking process seems like another job itself. If you're good at that job (esp. while executing on your current job) then I guess the world's your oyster.


Alternatively to promotions, you might get allowed to switch to that cooler project you have been eyeballing.


> By finishing your assigned work exceptionally fast, you can focus actual effort on the "sharpening the saw" aspects of your role and/or the organization, and helping others.

If you're lucky. Mostly it's when there's relaxed oversight over your work, so you can hide the fact that you technically finished your work already and are languishing doing what, to many people, seems like a waste of your time even if you know that's not true.


The bigger the organization is, the more able it is to "promote" or segment its workforce, and the less likely to actually make any changes.

The smaller the organization is, and the more competent it is at what it's doing, the faster it can change to adapt. But sometimes there is no practical value to promotions at all.

You are seeing it differently, I understand. Why?


If your end goal is to get promoted, there are far easier and less demanding ways to get there.

(Not saying that you are wrong - not at all, you're actually quite correct that many of the people that get promoted are over normal performers, usually combined with some other talent or skill.)


Exactly. As a supervisor, of course I give more work to my staff who work faster. This means they get more opportunities to learn, more exposure to good projects, more development, and more opportunities for promotion (if they want that.)


Only if you have good projects, development, and promotion opportunities to give to them. Often it's just more work of the same kind.


Yes, only a fraction of the work is “good” work in that sense. The fast workers get many more opportunities to get it. It’s like the fast worker buying ten lottery tickets and the slow worker buying one. She who buys ten tickets has ten times the chance of winning, even if she buys nine losers.


That’s not how it works in the companies I have seen. The people who get the “good” work have good connections with management and know about the “good” projects before anyone else. Fast workers just get more work of the same kind if they don’t know about the interesting projects.

I often feel that being too good at the daily grunt work makes you too important doing it, and management wants you to stay there.


So you're looking at needing to be a 100,000,000x programmer to have a decent shot at winning that lottery.


The trick is to finish slightly ahead of schedule and spend the free time learning, refactoring, exploring new ideas, testing tools, automating stuff, etc.


I would prefer if most (if not all) of those things were part of the normal work schedule.


Whenever I’ve seen teams formalize this, it’s been cut down by someone higher up who wants to show how they’re saving costs or making the team more efficient. It’s possible I’ve only worked at bad organizations, but I’m willing to bet that’s the reason why such “benefits” are not formalized.

However it then leaves you vulnerable to the naive worker who will voluntarily push all gears because they’re such a hard worker.


If you are a factory worker, you will be 'corrected' by your coworkers in the first break that comes up.


> If I was 10x faster yet it would have been 10 hours. That's a long plane ride. Even with a full-time job I would still be able to squeeze in a couple of text editor sized projects every month. I would be able to learn so many new things.

I see a dilemma here. Nobody, not even 10x faster than the author, writes a full-fledged text editor in 10 hours. You can create something interesting that does text editing in 10 hours (or 100), but it's mostly going to be a learning exercise. You won't have created a piece of software that is of any value to anyone but yourself.

So the dilemma involves the tension between rapidly speeding through many interesting "efforts" that result in nothing of any use to anyone, but lots of learning value to the programmer; or spending much, much more time creating software that is genuinely useful to people who are not programmers, but risking getting stuck in a given language, problem domain or project and not learning as much.

I made my bed - I opted for 21+ years focused on a single project, but I was lucky in that it spanned everything from kernel-side stuff to hard-realtime to UX, and I feel I'm still learning new stuff even today.


> I see a dilemma here. Nobody, not even 10x faster than the author, writes a full-fledged text editor in 10 hours. You can create something interesting that does text editing in 10 hours (or 100), but it's mostly going to be a learning exercise. You won't have created a piece of software that is of any value to anyone but yourself.

You could if you had enough components to assemble the text editor from.


I could stick a GtkTextView into a GtkWindow and run it inside GtkApplication, sure.

That's not what I mean, and I don't think it's what the author of TFA means, by "write a text editor".


The 1000000x solution: <textarea></textarea> (this is me agreeing with you)


Hmm, I probably could make something kind of like a text editor in Win32 fairly quickly, as I can think of about a dozen built-in calls that would make it work. Would it be full-featured? Not much, but it would do the very basics (cut/paste/save/read/edit). But that is because I know the system pretty well. If you dropped me into some other system it would take me a decent amount of time.


What you're describing is precisely what I mean: "something that would do the very basics".

Sure, I could cook one of those up, probably in less than 10 hours. But that's all it would do, and all I'd have done is learnt a few things. If I stopped right there, that's the only outcome: me learning some stuff.

Presumably at some point, the learning needs to be applied to something that is actually useful?


Almost all projects start that way, as some small learning project. You usually do not build complete applications in 1 day. You usually lean heavily on existing libs or chunks of code you already learned somewhere else. That is what that sort of thing is for. In my 26 years doing this, most of what I have written has been thrown away. There may still be some bits here and there living on, but I doubt it. Yet at one point all of that code had a use and did it well. That use was me using that knowledge to get things done. Those prototype projects do a lot of that sort of thing. If I wanted to make a Scala version of my 'basic editor' it would probably take me at least a week of digging through docs and trying things. But in another realm I can bash it out in a couple of hours. My point was that your domain knowledge is what makes you a better developer.


Ant's Editor, Anthony Howe's entry to IOCCC '91, surely took him longer than 10 hours to write; David A. Wheeler's "SLOCCount" estimates about 0.65 person-months (because it's 289 lines of code), which is roughly 120 hours. In its unobfuscated form, it's 5065 characters of C, including comments, and not including the curses implementation it runs on. It's a vi clone that's complete enough to use for real work; you could reasonably argue that it is not "of value to anyone" because there are other editors available that are more featureful that can run anywhere it runs. It lacks, for example, yank, put, and undo. But it's definitely featureful enough to use when you don't have anything else.

If you type 90 words per minute of English, which is a speed that most people reach with a little practice, you can almost certainly type C at 45 words per minute, which is 270 characters per minute, 4.5 characters per second. (Jamie's 500 characters per minute is obviously higher than this, but I think he's a faster typist than most.) That would allow you to type in the Ant's Editor program in about 20 minutes, if you were just typing, not having to figure anything out.

20 minutes is shorter than 10 hours. Like, 30 times shorter. If you had everything totally clear in your mind, you could write 30 times that much code in 10 hours.

Now, is it plausible that somebody could write a whole usable text editor without having to figure anything out, so that their typing speed was the main limitation? Maybe if they'd written a lot of text editors previously. I'm skeptical, having spent half an hour getting binary search right last week, but then, there are much better programmers than me out there. (I still find that SLOCCount systematically overestimates the effort required for things; the calculator program with a generic numerical solver I mentioned here a couple of weeks ago in another comment was 261 lines of Python and took me 12 hours, and SLOCCount estimated it at 0.59 person-months, which would be 100 hours.)

I think writing things in very high-level languages like Python instead of C helps a bit, too. Not an order of magnitude, but maybe 2-5x, especially for small projects like these. I wrote a text editor on an iPaq in Python on my honeymoon.

So, while I agree that you're almost surely not going to write a text editor that will steal users from Vim, VS Code, or Emacs in 10 hours, I do think you can write a text editor in 10 hours. You could probably write a significantly more full-featured editor than ae.

Surely the optimal amount of effort to spend on such "learning efforts", which are expected to teach you things but not produce a program for you to use, is neither 0% nor 100%.


I don't really track what's available in python (or equivalent) repositories/libraries/package managers. However, I'd wager that there isn't an efficient representation of text for editing available in the way that Emacs or vi and its cousins contain. By that I mean something where you open a 12MB file, insert a single character at 8 random places and not have the data structures fall over.

After several decades of different people working on text editing, we now have some fairly good ideas about how to do this efficiently, but they are (a) non-obvious (b) not the sort of thing I'd expect to be trivially available in high-level-language-of-the-day. My expectations have been known to be wrong, however.

And yes, missing undo in a text editor makes it effectively dead in the water.


ae just used a gap buffer, same as GNU Emacs. If you move your insertion/deletion point from one end of a 12MB file to the other, you have to copy 12 MB across the gap, which takes about 4.1 milliseconds on this laptop, which is not ideal but plenty fast enough for keystroke response. Not terribly difficult to do with Numpy, I think. But with gap buffers you have to store your undo log in the form of an edit list or something.

(If you just keep your buffer in a Python string, you incur the 4.1-millisecond cost every time you insert a character near the beginning of the 12-MB string, possibly twice, and you need enough memory for multiple copies of the buffer. The gap buffer saves you this time cost except when you're moving the insertion point or you run out of gap, and keeps your space cost limited to slightly more than enough to hold the text itself.)
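To make that concrete, here's a minimal gap buffer sketch in plain Python (lists rather than Numpy; the class and method names are just my own):

    class GapBuffer:
        """Characters before the cursor in `left`; characters after it in
        `right`, stored reversed so both sides grow and shrink at the tails."""
        def __init__(self, text=""):
            self.left = list(text)
            self.right = []  # right[-1] is the character just after the cursor

        def move_to(self, pos):
            # Moving the cursor copies characters across the gap -- the cost
            # described above for moving across a 12 MB buffer.
            while len(self.left) > pos:
                self.right.append(self.left.pop())
            while len(self.left) < pos and self.right:
                self.left.append(self.right.pop())

        def insert(self, s):
            self.left.extend(s)  # cheap once the cursor is in place

        def delete(self, n=1):
            if n > 0:
                del self.right[-n:]  # delete n characters after the cursor

        def __str__(self):
            return "".join(self.left) + "".join(reversed(self.right))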

In a garbage-collected language on a modern machine (meaning, lots of RAM) I'd be tempted to use ropes, since they make undo trivial. Here's an implementation of ropes I wrote in 45 minutes and 45 lines of code just now, which seems to more or less work, though I'm sure I'm overlooking a few things:

    #!/usr/bin/python3
    from collections import namedtuple
    from functools import cached_property  # 3.8 or later


    def as_rope(obj):
        return obj if isinstance(obj, Rope) else Leaf(obj)


    class Rope:
        def __add__(self, other):
            return Concat(self, as_rope(other))

        def __radd__(self, other):
            return as_rope(other) + self

        def __str__(self):
            return ''.join(self.walk())


    class Leaf(Rope, namedtuple("Leaf", ('s',))):
        def __getitem__(self, sl):  # `sl` avoids shadowing the builtin `slice`
            return Leaf(self.s[sl])

        def __len__(self):
            return len(self.s)

        def walk(self):
            yield self.s


    class Concat(Rope, namedtuple("Concat", ('a', 'b'))):
        def __len__(self):
            return self.len

        @cached_property
        def len(self):
            return len(self.a) + len(self.b)

        def walk(self):
            yield from self.a.walk()
            yield from self.b.walk()

        def __getitem__(self, sl):
            if sl.start is None:
                sl = slice(0, sl.stop)

            if sl.stop is None:
                sl = slice(sl.start, len(self))

            if sl.start < 0:
                sl = slice(len(self) + sl.start, sl.stop)
            if sl.stop < 0:
                sl = slice(sl.start, len(self) + sl.stop)

            # Important special case to stop recursion:
            if sl.start == 0 and sl.stop == len(self):
                return self

            a_len = len(self.a)
            if sl.start >= a_len:
                return self.b[sl.start - a_len : sl.stop - a_len]

            if sl.stop <= a_len:
                return self.a[sl]

            # At this point we know we need part of a and part of b.
            # Since slicing a leaf creates a Rope, we can blithely just do this:
            result = self.a[sl.start:] + self.b[:sl.stop - a_len]
            # Avoid making Concat nodes for lots of tiny leaves:
            return Leaf(str(result)) if len(result) < 32 else result
So then inserting a character, for example, is buf[:point] + c + buf[point:], which just creates a couple of Concat nodes and a Leaf to hold c (plus slicing existing nodes if necessary); this takes about 30 microseconds, which is 150 times faster than the brute-force string-copying solution for a 12-megabyte buffer. You need slightly more code than this for a robust editor because your trees get unbalanced, a problem with many well-known solutions.

Syntax highlighting, markers that move when text is inserted, etc., do imply more effort, but, again, there are well-known solutions to these problems. And an adequate redisplay is no longer the major research effort that it was 37 years ago.

So, while it's true that the data structures you need for a reliably responsive text editor aren't trivially available in Python, or Lua, or JS, I think you can just write them. I wrote the above code at about 6 words per minute because there were a lot of things about ropes (and about Python) that I wasn't clear about and had to figure out in the process; surely I could have written it 2-5 times as fast if I knew what I was doing.

If you're interested in learning about this stuff, Finseth's book is the classic, and Raph Levien has blogged a bunch of really nice up-to-date stuff about his rope-based editor Xi, which he wrote in Rust.


https://xi-editor.io/docs/rope_science_00.html is Levien's series about ropes for use in text editors.


(The other comment about the solver was https://news.ycombinator.com/item?id=28660097.)


> I wrote a text editor on an iPaq in Python on my honeymoon.

Correction: a Zaurus.


To me, this is not a lesson about speed, but rather a lesson about "compound interest". He was able to compound all his learnings from the first go-round into the second. That doesn't always mean it will go faster, but it does usually mean you don't make the same mistakes as the first time (which can result in speed).

I find this notion of compounding to be much more useful in life than just in terms of finance and interest. Starting early, or doing something more, yields a compound benefit down the road. You get better at a thing faster, which compounds and lets you get better still. It's how expertise is realized and why there will never be equal outcomes between different people with different levels of motivation and persistence.


I enjoyed the read.

> If you compare two coders, one who can touch type and one who has to hunt and peck, the difference between them is not just down to typing speed. The hunter-and-pecker has to think about typing! This consumes attention and short-term memory that is sorely needed for thinking about the program itself.

I never learned to touch-type, but I've also typed a lot of stuff[0]. I seem to be a lot faster than most folks, and I write wordy code. Take a look at my codebases, to see what I mean. Lots of documentation[1].

Also, I have a friend who is an amazing programmer, but has been absolutely clobbered by RSI. It looks like the worst case I've seen. He's had at least one operation. Maybe two, by now. I think they didn't fix the issue, just ameliorated it a bit. It really does break my heart, because I consider him to be a treasure to programming.

I like my code to be performant, and I seem to be able to get releases out the door in good time, but I suspect the author would go crazy, looking at me work.

[0] https://stackoverflow.com/story/chrismarshall

[1] https://littlegreenviper.com/miscellany/leaving-a-legacy/


Can you finish a thought tho? As in, why did you just type all that up? What is the point you're making?

Respectfully.


> Can you finish a thought tho?

Yes ... No ... Maybe ... Tell ya what ... lemme get back to you on that. I gotta ask my wife...

> As in, why did you just type all that up? What is the point you're making?

Huh? It pretty much speaks directly to the OP.

I’ll pretend that this wasn’t a rather strange troll, and answer the ... um ... ”question.” I'll try to use vernacular, so it's clear.

The author talked about how they have a process that they use to speed up the “raw” mechanics of software development.

They specifically wrote that they believe that a “touch-typer” is a more effective programmer than an untrained “hunt-and-pecker.”

I wrote that I am a “hunt-and-pecker,” and that I believe that I am quite effective. Since this is HN, I backed up my statement by pointing to proof (the canon of my work, as catalogued in my SO story), and, as extra credit, I also pointed to an article that I wrote, discussing how I document my code.

I've been writing since I was a wee bairn. I've written a 400-page book (which was never published, because it was embarrassingly out of date, by the time the editing was complete), and, if you follow that link I provided, you'll see a lot of writing. You may be bothered by my longform posts on HN, but these are TL;DR, compared to my normal prolix prose.

I also mentioned a personal anecdote, about a good friend, who is a trained touch-typist, and an excellent software engineer (and about a quarter-century younger, but I didn’t feel the need to mention that). His touch-typing directly led to a serious case of RSI (Repetitive Stress Injury), that required two operations (UPDATE: He hasn’t received his second one yet). This RSI was bad enough to negatively affect an extremely lucrative career.

Speed can be a curse, as well as a blessing.

> Respectfully

Sure, whatevs.


Thanks, that cleared that up.


This obsession with productivity is not healthy, and it's a young man's game. Relax for a second, will ya? Think before you act.


Maybe you misread the article - the obsession here is with "speedup", which is not the same as productivity.

The author gives the example that instead of simply doing more X, being faster can enable you do Y instead of X (where Y might be only working half-days). Very much "work smart not hard", which is where a lot of grind-y productivity stuff lands.


You can still be hyper-productive without killing yourself in the process. Also, some people actually like to do these things.

Keep in mind that there is a point where relaxation turns you into a worthless bum, and constantly analyzing things before you act will mean you struggle to learn which paths don't work in reality.


It's possible to walk a line that straddles both approaches.


It's not an obsession with productivity. The speed of the tools we use limits the thoughts we can think.


Nothing is more frustrating for me than to have a goal, and a path, but be unable to get there because of... issues. I know, that's programming and working in an organization.

But if it is going to take me a year to deliver an X, I probably wouldn't choose to do it. A month? Hell yes. The worst: pulling the trigger on X assuming it will take a month plus delta, and having it take a year.

There's also the reward and momentum factor. If I keep getting nice wins I get pulled into doing more interesting work. If it's going to be another 6 weeks before I can see any demonstrable progress... I'm actually going to go a lot slower and be tempted to just forget the whole thing.

Some of us are actually in it to _do stuff_... not just show up at standups and collect a check. Honestly though, maybe I have less to show for wanting to dig into real work.

Another important piece of context here is that Jamie is driven by a desire to make programming more useful and accessible to people. The fact that he's trying to quantify that is quite interesting and relevant to that goal.

So... put in your 40 hours and have a life. There are other people with different goals.

Edit: sorry temporal, it doesn't sound like it, but I'm really agreeing with you.


I agree and I would say that choosing to work on the right things is much more important than being a fast programmer. Sometimes you need to spend more time thinking about a problem, do a bit of a "market study", or gather more data. Too many startups out there are solutions looking for a problem. On a smaller scale, as an individual programmer, it's easy to get caught into yak shaving or trying to optimize the wrong thing.


> choosing to work on the right things is much more important than being a fast programmer

Choosing to work on the right things is completely orthogonal to being fast though.

> Sometimes you need to spend more time thinking about a problem, do a bit of a "market study", or gather more data.

You can do this even if you are a fast coder. The only difference is how long it takes to test things, build MVPs, and show them to potential users. MVPs are a great way to better understand what to build or do market studies, so I don't see why this goes against being fast.


Being able to quickly try things out is a great way to understand a problem better. Coding speed can help a lot with that.


Old people don't understand, we need more reasons to use Adderall


> There are ~33k characters in the rematch repo, most of which are tests. I type ~500 characters per minute. So if I could sit down and type the correct code first time, without making mistakes or getting distracted, it would take 66 minutes. I don't see any fundamental reason why I shouldn't be able to at least approach that bound for such simple code - maybe get within 3 hours, say. So there is potentially room for another 10x speedup.

This seemed like a very, well, odd analysis to me.

My typing speed is the last thing that affects my overall productivity.


I read it as being the upper bound and not the target of optimization.


Wouldn't reducing the amount of typing by thinking about what you're doing instead of copying it increase that upper bound?

And wouldn't thinking about what needs to be done (and what doesn't need to be done) increase that upper bound even more?


There are all sorts of programmers. I personally am sensitive to the creative side of it. Problems tend to have an infinite number of solutions, and I feel unsatisfied if I don't get close to the ideal one. I can feel when my solution is wrong, and I can feel when I get inspired out of nowhere. I have no control over this; the only solution I've found is to let the problem macerate for a few hours or days, if that's possible. I think it's analogous to a painter who leaves a canvas unfinished for days or months and comes back to it until he feels satisfied. It's the creative part of the job that's incompressible. Of course you can have programming jobs with no creative part, in which case you can do 10x, sometimes 100x, depending on who you compare yourself to.


Hmm...there are "hunt-n-peck" typing programmers??

How is that possible, given that to become a programmer you need to type all day, every day, for years?


I see hunt and peck people in IT pretty regularly. I also see an increase in the number that aren't familiar with basic efficiency shortcuts such as ctrl-c/ctrl-v.

There are people joining the workforce as "IT professionals" who have not used a keyboard as their primary input device for most of their life. Many people have now grown up with phones or tablets as their primary computer.

It says a lot that 90% of the time I click a Wikipedia link in an online forum, it is the ".m." mobile version, not the desktop version.

Over the next decade this is just going to get worse.


Hunt and peck is probably an exaggeration, but typing speed is not the bottleneck for programming: https://twitter.com/id_aa_carmack/status/1302651878065475584


If you don't measure it, don't require it, it won't be there.

Nobody ever asked "are you a proficient typer?" in an interview. And no one ever followed up with "show me".


Talk to a secretary. I have known many. They still have to do typing tests as part of their interviews, as do stenographers. Since these skills directly speak to their vocations, and are fairly easily measured, it's natural.

Coders, on the other hand, aren't applying for typing positions. You could say that the "Draw the Pirate" tests, given by many companies, these days, are like typing tests (but I don't think so. I won't go into my thoughts on that).

I have heard, anecdotally, of programmers being tested for speed of coding, as part of the interview process.


I did mean for coders, of course.

Pray tell, how exactly can you be tested for coding speed? Or would it be speed/correctness? I can't really picture how this would work in an interview situation...


I never learned to type properly. I don’t think it hinders my development speed or anything. Typing is a very small part of the time programmers spend while working.


I find that typing speed plays a big role in how fast I can work: the faster I can get my ideas through my fingers into my text buffer, the faster I can reclaim some short-term memory for my next thoughts.


They exist, although I imagine they are becoming rarer as typing skills become part of the standard elementary/middle-school curriculum.

My dad was a hunt-n-peck programmer all his life. Electrical Engineering PhD, often programming in Fortran or C (although all his C code was actually just Fortran too). A two-finger typist all his life.

His son: 140-160 wpm on Kinesis Advantage 2, thanks to typing classes in middle school.


I was a hunt-n-peck programmer because I learned how to program as I was learning how to type.


don't judge me


Your public post is, unfortunately, being judged by hundreds of people (at least).


I just asked myself how I can code faster, and what slows me down. At the moment it’s local build times. I often drop into LINQPad to sanity-test code, as sometimes even running a unit test is painful (because of building the code, not the test itself).

The team is looking longer term at splitting up libraries, but maybe I should see what quick wins .NET has to offer.

For side projects it’s normally the startup stuff, so using those starter kits (e.g. SaaS with login already set up) makes sense. I’d definitely look at all the code so I understand it, but it saves all that wheel reinventing and rediscovering the same issues everyone has.


The author makes a point of noting that many small improvements work together to make a larger speed increase, but are there 80/20 interventions here? What is the most effective way to become a faster coder?


I've been keeping a private notes file about this. People tell me I'm fast, and I agree with them, so I'm trying to track why.

Some examples:

- Be a fast typist. At 120WPM, typing is not a limiting factor. I know professional programmers who do 40WPM and it limits them a _lot._

- Have self-awareness about what is taking you time. Ask whether you can stop doing that. A good example of this is having good autocomplete: if you're looking at docs when you could configure VSCode (or whatever) to have the answer a tab keypress away, you're losing minutes per hour.

- Get really good at using the fuzzy finder. Stop clicking on files.

- Write scripts for common actions, either in `~/bin` or in your source directory.


>Have self-awareness about what is taking you time.

It's simple, but so true!


Here is a starting point that I've just typed up, this will help, I think:

0. Make less code. Define your problem-space in a new tooling-space, resulting in a new solution-space.

1. Sleep well, exercise regularly, eat sparingly and in company. Your mind is only as good as your brain. And your brain has a support system. Support it. That's a no-brainer.

2. Type less code. Make scripts to suggest common things, configure your autocomplete to not include unhelpful options. Use math-like single/two-letter notation and possibly a comment where it makes sense.

3. Type faster. I type comfortably at about 45WPM. I can see how 120WPM would be three times faster, and beyond wouldn't much matter. But I am at a vantage point that pecker hunters never visit.

4. Do less. Don't be hiding things in tabs and drawers, don't be managing your windows. Don't be looking for and opening files. Don't be fighting for your uninterrupted time. Be prepared.

5. Be prepared. Read up on the problem space. Read and run some solutions to some aspects. Prepare your workspace. Make time.

6. Do it. Sit down and do 23 minutes of it, as if you were to be torn away after. It's not hard, you can do 23 minutes of anything, really. Write stuff down. Throw up some code. Run it.

7. Keep at it. Make a plan for the next 3 hours. Put a buzzer to buzz you every 23 minutes and check yourself. Breathe, adjust, and persist. Is it fun? Are you shoveling shit? Those two require different check-lists.

8. Don't be looking busy. Be advancing. Are you enjoying your results, is it right? Are you shoveling some necessary shit? Or are you looking busy?


Write a lot of code. Read a lot of code. You can try programming competitions. Typing speed.

These are things I did as a fairly junior to intermediate developer. (so the first 10-15 years)

Later on it's about knowing what to do, the right approach can be infinitely faster than the wrong approach that doesn't work.

EDIT: as someone else said, know your tools, whether it's your IDE, your editor, your debuggers, your compilers, profilers etc.


Sometimes I think the biggest slowdown for most coders is the inability to make a decision and start working in a certain direction. Not-deciding also includes masquerading procrastination as researching a subject :).


That definitely matches my experience. I've learned that when I'm paralyzed by uncertainty, I just need to pick something, and in the act of implementing things I automatically make the decisions. But that habit needs to be honed.


One of the things that my most recent job really highlighted was - speed matters, but you need to know how it matters. In my particular niche essentially there were cliffs. If you were faster than x, you got y, if you were faster than x2 you got y2. Basically, you could halve the latency of your system for 0 benefit. You could decrease the latency of your system by 5ns and the benefits would be amazing. You could gain 5ns and it'd be nothing.

Speed does matter, but first, you need to know how.


For self-started projects what matters far more is maintaining consistent motivation and progress. For sure, there will be places where a good choice will pay off and save lots of work or bugs, but how often that happens matches the level you're at. The post is saying this will happen more often with practice, which I completely agree with. The important thing is that you keep on making. Consistency also matters.

Don't let weeks go by AFK for no reason.


Speed doesn't matter if you're doing the right thing.

Writing code can't be done quickly without knowing what you need the outcome to be.

Then speed is set by how fast you type.

But that's theory, and in practice no one writes code without having to put thought into it.

So speed is not a measurement I would ever count as a good trait in terms of quality. And quality cannot be measured, so we're back to asking why speed is even interesting in the productivity of software creation.


> Then speed is set by how fast you type.

Ideally, but that's not always the case. For example, let's say my goal is to write the program that outputs Hello World. In some languages it's as fast as writing the literal characters. The only extra burden is maybe to put quotation marks around them.

  "Hello World"
13 characters total. But if I did this in Java for instance, well that's a different story. Now the program looks like this:

  class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World"); 
    }
  }
87 characters total. The original program could have been written 6 times over in the time it took to write just this one. But that's not the real issue. The real issue is that not only do I have all that extra syntax, I have a whole bunch of new concepts to contend with including classes, scopes, functions, function composition, console arguments, lifetimes, access, types, arrays, objects, and methods. I can't just do a thing, I have to do all this extra work before I can do the thing. In the first case I have a single concept in my mind: String. In the second case I have to keep not only N concepts straight, but how they compose as well, and all the arcane rules therein.


This is like those reviews of operating systems that are 50% about the installation process, even though most people install their own operating system at most once, and most likely zero times. Then go on to use it for years and years. Making the "issues" listed in the review something that affects 0.01% of the time interacting with the product being reviewed.

Most developers are working on a codebase where they did not personally write that "void main(...)" boilerplate! Someone else did all the initial scaffolding, they joined the team years later, and they're adding new feature "X".

In terms of productivity, all of those things you listed as negatives due to the mental load they impose are absolutely essential for this scenario of a team member joining and having to modify a large code base. Scopes, lifetimes, classes, interfaces, etc... are the levers for the mind that enable teams to work together successfully.


> Most developers are working on a codebase where they did not personally write that "void main(...)" boilerplate

Most developers yes. Most programmers, no; Excel is the most popular programming language in the world by far. Imagine if Java were used instead of Excel to solve the same kind of problems Excel is used for today. Think about how much extra boilerplate would be in this world for no reason. Because the many hundreds of millions of people using Excel have done so for their purposes without missing the features you claim are essential for team development.

And maybe they are! But to my point, if you're not doing team development, then why do you need to even think about these features that aid in team development? If you are not trying to implement an object-oriented system, then using a tool that prescribes a maximalist object-oriented design philosophy is going to add a lot of extra work with no clear tangible benefit to the actual problem you're trying to solve. This is incidental complexity, and it's going to slow you down no matter how fast you type compared to a system that excludes this complexity.

The system that eliminates incidental complexity will provide the least amount of friction between you and solving your problem. That's the ideal. Any tooling you add to the task is going to introduce some complexity. That complexity may ultimately be beneficial or it could get in your way. It depends on what you want to do.
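
To make that concrete with a toy sketch of my own (hypothetical, not from the thread): a calculation that is a single formula in Excel, something like =AVERAGE(A1:A4), demands the full ceremony in Java:

  class Average {
      public static void main(String[] args) {
          // The four "cells" we want to average.
          double[] cells = {1.0, 2.0, 3.0, 4.0};
          double sum = 0;
          for (double c : cells) {
              sum += c;
          }
          System.out.println(sum / cells.length); // prints 2.5
      }
  }

Every line beyond the arithmetic itself is incidental to the problem being solved.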


I'm not really sure what the parent poster was trying to say. It seems like an abstract example, but then goes into way too much detail about the actual language concepts Java brings to the table.

Obviously one can't take the time from zero to Hello World as a benchmark for productivity.


I think one thing that often gets lost in these types of discussions is that it’s not just fast versus slow. It’s also patient versus impatient. In certain situations you absolutely need to be methodical, because you are stacking and layering complexity in a manner that is impossible to fix the impatient way if it goes sideways.


My thoughts on "good, fast, cheap, pick (at most) two":

1) If you're not fast, you will never get good.

2) There's no such thing as "good, slow, and cheap", and if there is, that person is booked for the next 30 years.


As the developer (but not the OP, of course), cheap is not an attribute I'm inclined to optimize for...


I’ve come to expect ~80% of work requirements to be ‘fast and cheap’ (regular projects) and the rest to be ‘fast and good’ (well-funded startup, maybe internal). As I work near academia, there’s also a fraction of ‘slow and very cheap’ (produce the publications expected by grants by the end of the reporting period).

‘Good and slow’ requires exceptional planning or some kind of lifestyle business, and thus is uncommon in software, AI in particular. Most work conditions impose tight deadlines to coordinate with other (relatively isolated) business entities, so you’ve gotta go fast to deal with possible scheduling conflicts and unaccounted-for parts of the work.


Yeah. Aspire to be good, fast, and expensive. Also, slow gets expensive fast anyway.


The author mistakes coding for typing. Maybe you get faster at typing, but coding also involves thinking and doing research, unless you're doing very repetitive tasks that you can type from memory.


I don't think the author is making that mistake. He worked on the Eve language, whose entire ethos was that the notion behind Fred Brooks' "No Silver Bullet" -- that there is no "one" change to development tooling/methodology that could produce a 10x speedup in productivity -- isn't really true anymore. In "Out of the Tar Pit", Mosely and Marks argue that today there is so much incidental complexity (from poorly designed or ill-fitting tools) slowing down our work that removing it would result in those 10x gains Brooks argues can't be achieved in NSB.

The question then is how do we remove that incidental complexity? Eve tried to do this through programming language design, combining Prolog-like semantics with a relational database and modern web technologies. In some scenarios it really is at least 10x more productive than other languages, letting you do very sophisticated things in a few hours or days that could take experienced developers a literal week or more in other languages.

The author is musing here about how much he could get done as long as his tools get out of the way. i.e. how much more productive could he be if he only had to deal with the necessary complexity of the problem, rather than being mired in all the incidental complexity of the tooling. What if a programmer could express themselves as freely as a writer can? Where typing and imagination are the only barriers between your mind and expression. It's a nice fantasy future.

Yes, there is a lot of thinking and research in programming; any experienced programmer knows this. I think if you look at this developer's history you would agree he is an experienced developer who knows these things to be true.

https://en.wikipedia.org/wiki/No_Silver_Bullet

http://curtclifton.net/papers/MoseleyMarks06a.pdf

(edit: also, I think the author addresses your criticism directly here: "When I think about speed I think about the whole process - researching, planning, designing, arguing, coding, testing, debugging, documenting etc. Often when I try to convince someone to get faster at one of those steps, they'll argue that the others are more important so it's not worthwhile trying to be faster. Eg choosing the right idea is more important than coding the wrong idea really quickly. But that's totally conditional on the speed of everything else!")


> In "Out of the Tar Pit", Mosely and Marks argue that today there is so much incidental complexity (from poorly designed or ill-fitting tools) slowing down our work, that removing it would result in those 10x gains Brooks argues can't be achieved in NSB.

Just to clarify for others, Brooks' paper does not say that there will not or cannot be 10x (order of magnitude) improvements in programming.

He made two principal statements in his paper:

1. That there will not be a 2x improvement in programming every 2 years (comparable to Moore's Law about the increasing density in transistors that was roughly every 18-24 months). That isn't to say that there won't be 2x improvements in some 2 year periods, but there will not be consistent 2x improvements every 2 years.

2. That within a decade of 1987 (that is, a period ending in 1997) there would not be a 10x (order of magnitude) improvement in programming (reliability, productivity, etc.) from any single thing.

So trying to refute Brooks' second assertion by looking at changes post-1997, as lots of people do, is absurd: it ignores what he actually said and the context in which he said it.


I feel that Brooks put a time horizon of 10 years on his predictions because of AI. If we achieve AGI, that would be an obvious example of a single thing that would overnight yield (at least) a 10x improvement in productivity. But I think it was a safe bet for Brooks to say we wouldn't be there by 1997.

That said, on the topic of environments and tools Brooks writes, "Surely this work is worthwhile, and surely it will bear some fruit in both productivity and reliability. But by its very nature, the return from now on must be marginal." This claim is not timeboxed to a decade, but is instead projected to perpetuity. He boldly asserts, without support, that from now on improvements to tooling and environments definitionally cannot yield 10x improvements in programming. I think this is where people have taken issue, and have shown this claim to be wrong.

I think this error comes from a misconception in the previous paragraph: "Language-specific smart editors are developments not yet widely used in practice, but the most they promise is freedom from syntactic errors and simple semantic errors."

There is no reason to believe that language-specific smart editors could only save you from syntax errors and simple semantic errors. In fact, language-specific editors in the spirit of Eve and Light Table supported sophisticated debugging and development modalities:

- time-travel debugging capabilities that rewind the state of a program to a previous point in its execution history

- what-if scenarios to test how the program would respond to different inputs without restarting it

- provenance tracing of program state to see how it was calculated

- live debugging and modification of a program while it's running

- saving, sharing, and replaying program state to trace bugs

- the ability to ask new questions about program output e.g. "Why is nothing drawing in this area of the screen?"

These are not "simple semantic errors" but deep program analyses and insights that are not even supported by most (any?) mainstream languages (nor could they be without changing core language semantics or adding on new language features with orthogonal semantics). Brooks doesn't imagine any of this and dismisses it all as "marginal" whereas in my experience actually using these tools, I found them literally transformative to my productivity and workflow.
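
As a rough illustration of the first item, here's a minimal sketch of my own (nothing like Eve's actual machinery): the core idea of time-travel debugging is to record a snapshot of state at every step so execution can be rewound to any earlier point.

  import java.util.ArrayList;
  import java.util.List;

  // Minimal sketch: every state change appends a snapshot, so the
  // "debugger" can rewind the program to any earlier step.
  class TimeTravel {
      private final List<Integer> history = new ArrayList<>();
      private int value = 0;

      void apply(int delta) {
          value += delta;
          history.add(value); // record a snapshot after each change
      }

      void rewindTo(int step) {
          value = history.get(step); // restore state as of that step
      }

      public static void main(String[] args) {
          TimeTravel t = new TimeTravel();
          t.apply(5);
          t.apply(-2);
          t.apply(10);    // value is now 13
          t.rewindTo(0);  // back to 5, as after the first step
          System.out.println(t.value);
      }
  }

A real implementation snapshots (or can reconstruct) the whole program state and replays forward deterministically, but the record-and-rewind principle is the same.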


Probably just my misunderstanding:

Some of the items in your list sound like things that were features of Smalltalk IDEs back in the late 80s?

- time-travel debugging

- live debugging and modification

- saving, sharing, and replaying

p304 "Debugger Windows" "Smalltalk/V 286 Tutorial and Programming Handbook"

https://rmod-files.lille.inria.fr/FreeBooks/SmalltalkVTutori...

----

And later, Rewrite Rules:

A very large Smalltalk application was developed at Cargill to support the operation of grain elevators and the associated commodity trading activities. The Smalltalk client application has 385 windows and over 5,000 classes. About 2,000 classes in this application interacted with an early (circa 1993) data access framework. The framework dynamically performed a mapping of object attributes to data table columns.

Analysis showed that although dynamic look up consumed 40% of the client execution time, it was unnecessary.

A new data layer interface was developed that required the business class to provide the object attribute to column mapping in an explicitly coded method. Testing showed that this interface was orders of magnitude faster. The issue was how to change the 2,100 business class users of the data layer.

A large application under development cannot freeze code while a transformation of an interface is constructed and tested. We had to construct and test the transformations in a parallel branch of the code repository from the main development stream. When the transformation was fully tested, then it was applied to the main code stream in a single operation.

Less than 35 bugs were found in the 17,100 changes. All of the bugs were quickly resolved in a three-week period.

If the changes were done manually we estimate that it would have taken 8,500 hours, compared with 235 hours to develop the transformation rules.

The task was completed in 3% of the expected time by using Rewrite Rules. This is an improvement by a factor of 36.

from “Transformation of an application data layer” Will Loew-Blosser OOPSLA 2002

http://portal.acm.org/citation.cfm?id=604258
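
For a flavor of what a rewrite rule does, here's a toy sketch of mine (regex over text, not the actual Smalltalk Rewrite Rules engine, which matches parse trees; the method names are made up): describe a source pattern once and substitute it mechanically, so thousands of call sites can be migrated in one pass.

  import java.util.regex.Pattern;

  // Toy rewrite rule: replace dynamic lookupColumn("x") calls with
  // explicit getX()-style accessors across a body of source text.
  class RewriteDemo {
      public static void main(String[] args) {
          String source = "row.lookupColumn(\"price\"); row.lookupColumn(\"qty\");";
          String rewritten = Pattern.compile("lookupColumn\\(\"(\\w+)\"\\)")
              .matcher(source)
              .replaceAll(m -> "get"
                  + Character.toUpperCase(m.group(1).charAt(0))
                  + m.group(1).substring(1) + "()");
          System.out.println(rewritten); // row.getPrice(); row.getQty();
      }
  }

Working on the AST rather than raw text is what makes the real thing robust to formatting and comments, but the pattern-plus-substitution shape is the same.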


There are no silver bullets if you are already near the top of your game. However, in my experience, most engineers and programmers are far from working optimally.

Here's the silver bullet that seems to work very well.

> Master your current stack.

Time and time again I see programmers who know computer science but really don't know their language. They run to stack overflow for everything from sorting to reading files. They make obvious syntax errors. They pause constantly not to think about problems, but to remember how to implement something simple.

Most programmers can 10x themselves just by learning their language and stack really well.
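
For instance (a made-up example; the file name is hypothetical), "reading files and sorting" is exactly the kind of bread-and-butter task worth being able to write cold from the standard library:

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.stream.Stream;

  // Read a file, sort its lines, print them -- no trip to Stack
  // Overflow required if you know your stack.
  class SortLines {
      public static void main(String[] args) throws IOException {
          try (Stream<String> lines = Files.lines(Path.of("input.txt"))) {
              lines.sorted().forEach(System.out::println);
          }
      }
  }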



