
How Old School C Programmers Process Arguments - signa11
http://www.usrsb.in/How-Old-School-C-Programmers-Process-Arguments.html
======
tomsmeding
I have to say, to reasonably experienced C programmers, this code should be
pretty clear. If you need that whole paragraph to understand (*++argv)[0],
you might be an awesome programmer, but you're not yet a reasonably
experienced C programmer. ;)
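For reference, the snippet under discussion follows the K&R `find` pattern; here is a compilable sketch of it (the flag letters `x`/`n` and the function name are illustrative, not verbatim K&R):

```c
#include <stdio.h>

enum { OPT_X = 1, OPT_N = 2, OPT_ERR = 4 };   /* illustrative flags */

/* K&R-style leading-option scan. Skips argv[0] (the program name),
 * consumes "-x", "-n", and clustered "-xn" forms, and stops at the
 * first argument that doesn't begin with '-'. */
int scan_opts(int argc, char *argv[])
{
    int opts = 0;
    char *s;

    while (--argc > 0 && (*++argv)[0] == '-')
        for (s = argv[0] + 1; *s != '\0'; s++)
            switch (*s) {
            case 'x':
                opts |= OPT_X;
                break;
            case 'n':
                opts |= OPT_N;
                break;
            default:
                printf("illegal option %c\n", *s);
                opts |= OPT_ERR;   /* the book zeroes argc here instead,
                                      forcing the outer loop to stop */
                break;
            }
    return opts;
}
```

Afterwards argc and argv describe only the remaining operands, which is exactly the "mangling" the thread argues about.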

I do think this is even more concise than they would have written in
maintained code. In particular, I believe the {}'s for the 'while' and 'for'
loops wouldn't have been omitted, but I might be mistaken. Compare, for
example, with this gem:

[https://www.cs.princeton.edu/courses/archive/spr09/cos333/beautiful.html](https://www.cs.princeton.edu/courses/archive/spr09/cos333/beautiful.html)
(submitted as
[https://news.ycombinator.com/item?id=5672875](https://news.ycombinator.com/item?id=5672875)
some time ago)

There Kernighan talks about a piece of code written by Rob Pike for inclusion
in a different book; the code was written to be as small as possible while
still including some useful functionality. In particular, Pike made well-
considered choices about which functionality to include and which to exclude,
but there are also some of those expressions that will not be completely
transparent to the beginner C programmer. But the goal was writing _short_
code; _clarity_ came second, though it also emerged partly as a result of the
first.

~~~
bsder
> If you need that whole paragraph to understand (* ++argv)[0], you might be
> an awesome programmer, but you're not yet a reasonably experienced C
> programmer. ;)

Lines like that have far too much information density for my old brain these
days.

Things that went through my poor brain: "Why pre-inc/decrement? Why the +1 on
the for? Did he actually get the precedence of the operators correct? Okay, it
looks like argc and argv will be mangled if you actually want to do anything
else with them. I see comparing to '\0' with no count limit--is there a buffer
overrun lurking here? Does that error handling actually work--that loop can't
exit with that condition unless the second clause does something."

And that was like--30 seconds?

I'd assign argc and argv to something else so if I need them later for
something they are in their original state. I'd give myself a variable that
points to each individual argv on each iteration, and I'd call it something.
I'd make the error handling more explicit so if I had to add another case
later I wouldn't have to rack my brain about whether the error gets handled.
Etc.

Apparently I've become an inflexible, washed-up, old fart because I just don't
enjoy writing C code with that kind of puzzle-like quality anymore.

~~~
contras1970
i disagree with the sentiment and preferences expressed in your comment. the
code on display is at the sweet spot of complexity _given the scope of task at
hand_. your preferences might lead to code with lower complexity of any given
expression, but there would be a higher number of expressions, higher number
of statements, and higher number of names. that would mean _higher_ complexity
of the full code. should we ban multiplication on the basis that X times Y is
the same as X plus X plus X...?

> Why pre-inc/decrement?

as opposed to what? nothing? clearly, the code needs to advance the iterators
before dereferencing.

> Why the +1 on the for?

because it's interested in the option _name_ as opposed to the leading dash:
argv[0][0] is '-' (see the while above), the switch is looking at argv[0][1].

> Did he actually get the precedence of the operators correct?

not sure what this is about, the only involved expression has explicit
parentheses (out of necessity).

> Okay, it looks like argc and argv will be mangled if you actually want to do
> anything else with them.

yes... and? mutating a local int and a local pointer is bad? how would you go
about this without mutating an iterator?

> I see comparing to '\0' with no count limit--is there a buffer overrun
> lurking here?

nope, it's an array of null-terminated strings.

> Does that error handling actually work--that loop can't exit with that
> condition unless the second clause does something.

i do not understand what you're pointing at here.

> I'd assign argc and argv to something else so if I need them later for
> something they are in their original state.

consider they're already local names, and YAGNI. do alias them _if you need
them later_, not because you might one day.

> I'd give myself a variable that points to each individual argv on each
> iteration, and I'd call it something.

you would give yourself a possible bug and an obligation to keep the two
things in sync.

> I'd make the error handling more explicit so if I had to add another case
> later I wouldn't have to rack my brain about whether the error gets handled.

what is more explicit than

    
    
        printf("Illegal option %c\n", *s);
    

?

i'd really love to see your preferred version of the snippet from TFA. i posit
it'd be two to three times as long, and its total complexity would be
similarly increased.

~~~
bsder
> Why pre-inc/decrement?

No, why "pre-" instead of "post-"? Is that a real requirement, or is this an
old C++ programmer who knows to write pre-increment in order to avoid creating
unnecessary copies?

> Did he actually get the precedence of the operators correct?

It's because I see a "chain" of operators and I have to think about it.

> nope, it's an array of null-terminated strings.

Is it? What happens when I pass 256 '-' characters? Or maybe
32767/32768/65535/65536? Or maybe ...

> Does that error handling actually work--that loop can't exit with that
> condition unless the second clause does something.

If I need to add a second unflagged argument, that error condition test also
needs to be updated. It's an annoyance.

A lot of this is no big deal in this code.

The problem is that code is habit. You will carry the habits from short code
over to long code. So, when you are now parsing 45 flags and 8 arguments, you
will write code the same way and it _WILL_ have bugs.

------
yongjik
So it will admit any combination of -n and -x. In particular, these things
won't raise an error:

    
    
        find - hello
        find - - hello
        find -xnxn hello
        find -x -n -x hello
        find -xxx -nnn -xn hello
    

Yes, the code has an impressive logic density, but there's a reason why modern
programmers moved away from such style.
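For contrast, a sketch of a stricter variant that rejects repeated options and a bare `-` (the function name and flag letters are assumptions, not from TFA):

```c
#include <stdio.h>

/* Stricter scan: each flag may appear at most once, and a bare "-"
 * or unknown letter is an error. Returns a bitmask of seen flags
 * (x = 1, n = 2), or -1 on any error. */
int scan_strict(int argc, char *argv[])
{
    int seen = 0;
    char *s;

    while (--argc > 0 && (*++argv)[0] == '-') {
        for (s = argv[0] + 1; *s != '\0'; s++) {
            int bit;
            switch (*s) {
            case 'x': bit = 1; break;
            case 'n': bit = 2; break;
            default:
                fprintf(stderr, "illegal option %c\n", *s);
                return -1;
            }
            if (seen & bit) {
                fprintf(stderr, "duplicate option %c\n", *s);
                return -1;
            }
            seen |= bit;
        }
        if (s == argv[0] + 1)   /* bare "-": no option letters at all */
            return -1;
    }
    return seen;
}
```

This rejects every example in the list above while still accepting `find -x -n hello` and `find -xn hello`.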

~~~
userbinator
 _In particular, these things won't raise an error_

They don't necessarily need to. Repeating the same option can either have
no effect (|), invert the option (^), or increase something (as in -vvvvv).
IMHO it's a good thing when there are no error cases --- it basically means
the entire input space has a defined effect.
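For instance, a repeated option that increases something can be as simple as this (a sketch; the flag letter and function name are assumed):

```c
/* Count repeats of 'v' in an option cluster like "-vvv" --
 * the "repeating an option increases something" case. */
int verbosity(const char *cluster)
{
    int v = 0;
    for (cluster++; *cluster; cluster++)   /* skip the leading '-' */
        if (*cluster == 'v')
            v++;
    return v;
}
```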

~~~
MaulingMonkey
Raising an error is an _intentionally defined_ effect.

Doing something under-defined, by accident, that you'll at some point break
because you weren't thinking of that specific edge case when e.g. refactoring,
for someone else who didn't actually _intend_ to exercise that edge case but
never realized they were doing so as they never received an error... well, not
my cup of tea.

~~~
candiodari
And causing errors for everything will cause your program to bug out in
perfectly valid cases.

Defensive versus total programming. Both have their place, but ...

Thing is, most programmers vastly overestimate the case for defensive
programming. In practice, most programs need to run. So for instance, the
control loop for a nuclear reactor is not defensively programmed. It never,
ever, ever gets to yield an error. Why? Because the system is not "fail-
safe". If the program ever were to say "this doesn't make sense, I'm quitting
(or otherwise doing nothing)", there is nothing guaranteeing the system is in
a safe state, and so it may melt down. A bug that fails to notice a critical
condition, and ignores it, on the other hand leaves the system under the
control of the operator, and nothing should go wrong (meaning the second-
level checks should catch the problem).

The general guideline is defensive programming when starting up, with a force
switch to override it (so the system can be (re)started if operators decide
that's the best outcome). Once the program is running, total programming
should be your go-to solution. When in doubt, do the best you can and continue
on to the next request.

Since I've been programming million qps+ systems, I've learned that there's
even nothing wrong with almost totally ignoring errors. You'd be surprised how
well the following works: before handling a request, note down the time (using
the cheap instruction), then every time you leave your method, check the time.
If it's over a timeout, increase a counter, log if not logged in the past 10s,
and send an empty reply. This makes the developer's job harder, but makes
everybody else a lot happier.

~~~
MaulingMonkey
> And causing errors for everything will cause your program to bug out in
> perfectly valid cases.

Erroring out is admittedly not a panacea to every situation, but the user-
facing end of a command line interface is pretty low hanging fruit and a
pretty obvious place to error out. The nuclear reactor equivalent wouldn't be
the low level control loop, it'd be how the display panels react to operator
inputs and sensors - the red lights they turn on, the klaxons they sound, the
systems they engage to attempt to put the reactor into a safe state when it
thinks it's getting ready to melt down.

And even for that low level control loop, you'd better believe they're
thinking very, very hard about, and once again _intentionally defining_, how
they handle any edge cases they can think of, not just throwing caution to the
wind and letting the program's behavior fall where it may.

Even for a rocket, sometimes the best answer is "do nothing - and let the
range safety systems blow the damn thing up if that wasn't the right answer."

> Since I've been programming million qps+ systems, I've learned that there's
> even nothing wrong with almost totally ignoring errors. You'd be surprised
> how well the following works: before handling a request, note down the time
> (using the cheap instruction), then every time you leave your method, check
> the time. If it's over a timeout, increase a counter, log if not logged in
> the past 10s, and send an empty reply. This makes the developer's job
> harder, but makes everybody else a lot happier.

Totally ignoring errors leaves you with crashy garbage that does _not_
accomplish your goal of high uptime/availability/not corrupting data in C++ -
I'd wager you're dealing with safer languages to type that with a straight
face! Even there, though, you probably write code to at least _handle_ null
references and other edge cases. You might not trigger fatal assertion
dialogs, you might not log to some fancy telemetry system, you might not even
log at all - but _something_ to handle error conditions.

And if you're writing that code _anyways_ , you might as well write some
logs/telemetry as well to make your devs lives easier. You don't have to force
it into the face of your end users, you don't have to make the whole thing
explode in the face of adversity, but some opt-in dev tools for yourself tends
to be nice.

A fun trick from gamedev: "Assertion" macros that force you to include an
error handling statement (often a simple "return;"). Devs get to check out if
that null pointer passed to your leaderboards is a systemic bug that broke all
leaderboards, gamers get to keep their quest progress instead of the entire
game crashing because a server being down or giving a malformed response means
the game doesn't know how to handle updating a single leaderboard
occasionally.
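A sketch of what such a macro might look like (the names are hypothetical; real gamedev versions vary widely):

```c
#include <stdio.h>

/* ASSERT_OR: report a failed check (for devs), then run the mandatory
 * fallback statement so the program degrades gracefully instead of
 * crashing. The fallback argument forces the caller to decide what
 * "keep going" means at each site. */
#define ASSERT_OR(cond, fallback)                           \
    do {                                                    \
        if (!(cond)) {                                      \
            fprintf(stderr, "assert failed: %s\n", #cond);  \
            fallback;                                       \
        }                                                   \
    } while (0)

/* Hypothetical example: a NULL score skips this one leaderboard
 * update instead of taking the whole game down. */
int update_leaderboard(const int *score)
{
    ASSERT_OR(score != NULL, return -1);
    return *score;
}
```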

Best of both worlds. Be greedy - you can have happy users _and_ devs.

~~~
candiodari
> Totally ignoring errors leaves you with crashy garbage that does not
> accomplish your goal of high uptime/availability/not corrupting data in C++
> - I'd wager you're dealing with safer languages to type that with a straight
> face! Even there, though, you probably write code to at least handle null
> references and other edge cases. You might not trigger fatal assertion
> dialogs, you might not log to some fancy telemetry system, you might not
> even log at all - but something to handle error conditions.

That is exactly what I mean. Defensive programming is erroring out and
refusing to do anything from that point forward (think of it like an uncaught
exception with less information). "Total" programming is doing the best you
can, given what's available. Generally if I am unable to do something, or look
something up, I'll return a partially filled out request, log it (logs are
rate-limited VERY early in the logging process), and increase a counter
created for that particular case.

(I do use C++ often, but we use variable guards to guard against null
pointers, use-after-free, and sharing (unless explicit)).

> A fun trick from gamedev: "Assertion" macros that force you to include an
> error handling statement

Link ? I'm interested.

~~~
jsjohnst
> Defensive programming is erroring out and refusing to do anything from that
> point forward (think of it like an uncaught exception with less information)

I’m not sure where you got that impression, but it’s not the generally
accepted definition.

 _Defensive programming_ does not imply crash/abort/exit; it simply implies a
style of programming where you don’t trust “things will always be ok”. That
could involve checking every input for correctness, checking every
function output, handling all exceptions, etc. Yes, a defensive method could
throw an exception on invalid input, but if the entire program is written
defensively, then it should be handled and potentially continued.

------
userbinator
Extract this into its own function and you basically have getopt() --- the
POSIX version, not the rather more bloaty GNU version that has sometimes-
unexpected behaviour (like reordering arguments).

If you like this sort of source code, I'd also recommend looking at the BSD
standard utilities --- they have a similar simple and concise style.

Yet there's still one small simplification that can be made: the condition in
the inner loop doesn't need "!= '\0'".

If there are many pure-binary options, then putting them in a bitflag is also
an easy extension to the present code:

    
    
        char binopts[] = "abcdefg";
        char *opt;
        if((opt = strchr(binopts, *s)))
         options |= 1 << (opt - binopts);
        else
         // put the switch(*s) here
    
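Filled out into a compilable sketch (the option set and function name are assumptions):

```c
#include <stdio.h>
#include <string.h>

/* Map each single-letter binary option to one bit of a flags word;
 * a real program would fall through to a switch for options that
 * take arguments. The option set here is illustrative. */
static const char binopts[] = "abcdefg";

unsigned parse_binopts(const char *s)
{
    unsigned options = 0;
    const char *opt;

    for (; *s; s++) {
        if ((opt = strchr(binopts, *s)))
            options |= 1u << (opt - binopts);   /* bit index = position */
        else
            fprintf(stderr, "illegal option %c\n", *s);
    }
    return options;
}
```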

_Nowadays they just haul in the software weenies with their fancy objects and
methods. Lost is the subtle art of manipulating arrays of pointers to strings
of characters._

Indeed, compare with the "modern" way of doing it:

[https://github.com/commandlineparser/commandline/tree/master/src](https://github.com/commandlineparser/commandline/tree/master/src)

...or something slightly simpler (because it's only one file!):

[https://github.com/mono/mono/blob/master/mcs/class/Mono.Options/Mono.Options/Options.cs](https://github.com/mono/mono/blob/master/mcs/class/Mono.Options/Mono.Options/Options.cs)

It may seem like an exaggeration, but in my experience working with C# and
Java, code like that is the norm --- the core functionality is obscured by
being scattered amongst a large amount of "fluff" and it's hard to get the
whole picture of how it works because the flow jumps around so much. In
contrast, the C code in this article can be understood by just staring at it
for a little bit --- the entire functionality is contained within less than a
dozen lines.

~~~
Const-me
> compare with the "modern" way of doing it

The examples you’ve linked implement a much larger set of functionality.

They de-serialize arguments into strongly typed values (validating arguments),
validate against unknown commands/options, print various help messages,
support globalization, support arrays in a single argument, and lots more.

Sure it’s overkill for a simple app with just a couple of options, but once
you have sufficiently complex command line interface, this modern way becomes
much simpler than what’s possible with C.

Compare with the “old school” way of doing it:

[https://github.com/gcc-mirror/gcc/blob/master/gcc/opts.c](https://github.com/gcc-mirror/gcc/blob/master/gcc/opts.c)
[https://github.com/gcc-mirror/gcc/blob/master/gcc/opts-global.c](https://github.com/gcc-mirror/gcc/blob/master/gcc/opts-global.c)
[https://github.com/gcc-mirror/gcc/blob/master/gcc/opts-common.c](https://github.com/gcc-mirror/gcc/blob/master/gcc/opts-common.c)

BTW that thing isn’t even localized, i.e. everything’s US English-only.

~~~
mehrdadn
What in the world is this inconsistent and bizarre spacing around brackets? I
literally thought it was a new syntactical construct in C when my eyes first
landed on it!

    
    
      cl_options [next_opt_idx].neg_index == opt_idx
    
      old_decoded_options[i].errors & ~CL_ERR_WRONG_LANG
    

and the parentheses too... why in the world do they do this?

    
    
      set_option (opts, (generated_p ? NULL : opts_set),
    		opt_index, value, arg, kind, loc, dc);

~~~
todd8
Once upon a time, programmers didn't have IDEs, Emacs, or even text editors.
I'd been programming for seven or eight years before Bill Joy created vi. Each
programmer and each program had its own style.

Often, a program's layout reflected the programmer's inner thoughts as he or
she worked through the creation of the code. Expressions were written like a
mathematician might write, with spacing and bracketing reflecting some way of
thinking about the grouping of the abstractions at hand.

This is just a random routine, written around 1975, from Niklaus Wirth's PL/0
compiler for Pascal, the programming language that he created. The indenting
is wild by contemporary standards:

    
    
        procedure getch;
        begin if cc = ll then
           begin if eof(input) then
                      begin write(' program incomplete'); goto 99
                      end;
              ll := 0; cc := 0; write(cx: 5,' ');
              while not eoln(input) do
                 begin ll := ll+1; read(ch); write(ch); line[ll]:=ch
                 end;
              writeln; readln; ll := ll + 1; line[ll] := ' ';
           end;
           cc := cc+1; ch := line[cc]
        end {getch};
    

Early C code too, even in the Unix kernel, was often dense and hard to
understand (the kernel was under 10,000 lines back then). See Lions'
Commentary [2]. Here's a small function, setfs(), line 7167 of the Sixth
Edition Unix kernel in Lions' book. In particular note the lack of indentation
under the for loop:

    
    
        setfs(dev)
        {
           register struct mount *p;
           register char *n1, *n2;
          
           for(p = &mount[0]; p < &mount[NMOUNT]; p++)
           if(p->m_bufp != NULL && p->m_dev == dev) {
                   p = p->m_bufp->b_addr;
                   n1 = p->s_nfree;
                   n2 = p->s_ninode;
                   if(n1 > 100 || n2 > 100) {
                           prdev("bad count", dev);
                           p->s_nfree = 0;
                           p->s_ninode = 0;
                   }
                   return(p);
           }
           panic("no fs");
        }
    

It seems obvious now that standard and consistent formatting make programs
easier to understand. Why did we old timers do that to ourselves? First, short
programs were easier to keypunch or enter via a teletype machine. Second, we
had plenty of time to study our code. Turnaround time for a compilation,
from submission to printed listing, could take 30 minutes to 12 hours.

[1] [http://pascal.hansotten.com/niklaus-wirth/pl0/](http://pascal.hansotten.com/niklaus-wirth/pl0/)

[2] John Lions, Lions' Commentary on Unix 6th Edition with Source Code.
[https://www.amazon.com/Lions-Commentary-Unix-John/dp/1573980137/](https://www.amazon.com/Lions-Commentary-Unix-John/dp/1573980137/)

~~~
DonHopkins
>Often, a program's layout reflected the programmer's inner thoughts

And often it reflects the programmer's inner carelessness and disrespect for
other programmers who have to deal with their code.

------
QShift
Skimmed through the article and found a small mistake.

    
    
      That means that (*argv)[0] is the first character of the program name and (*argv)[1] is the first character of the first argument.
    

(*argv)[1] is the second character of the program name (in this case), not
first character of first argument.

------
Annatar
"That’s how Kernighan and Ritchie did it in 1978. Nowadays they just haul in
the software weenies with their fancy objects and methods." No, we use
getopts(3C). For decades now. I even use getopts in my shell programs. For old
UNIX hands, using getopts in C or in shell programming is certainly nothing
new. How, you ask? Simple! Watch this:

    
    
      while getopts hDd: Option
      do
        case "$Option" in
          h)
            Usage
          ;;
          D)
            Debug=true
          ;;
          d)
            Destination="$OPTARG"
          ;;
        esac
      done
      shift `expr $OPTIND - 1` # I did not use $((OPTIND - 1)) for a reason!

Bam! Your program will now behave exactly as every other UNIX executable,
especially if you do not name it with .sh postfix and make it executable, the
user won't be able to tell the difference between it or say, ls(1)! Having
said that, the example in the article cements what I've been saying all along:
C is more than fine in the programming hands of a thoughtful mind.

------
cperciva
On the topic of processing command-line arguments and the recent post on
obscure things you can do with C: My "magic getopt"
([http://www.daemonology.net/blog/2015-12-06-magic-getopt.html](http://www.daemonology.net/blog/2015-12-06-magic-getopt.html))
does some very evil things to make it possible to write a "normal-looking" C
getopt loop which accepts both short and long options.

------
d--b
Mmh, what's so special about this? A C# version is pretty close...

    
    
        int i=0;
    
        while(i < args.Length-1 && args[i][0] == '-') {

          for (var j=1; j<args[i].Length; j++) {

            switch(args[i][j]) { case 'x': ...

~~~
pjmlp
The pointer tricks on argv.

------
mjl-
this is how they did it somewhat later in plan 9, with macro's:

ARGBEGIN{ default: usage(); case 'm': m = ARGF(); break; case 'p': pflag = 1;
break; }ARGEND

[https://github.com/9fans/plan9port/blob/master/include/libc.h#L919-L940](https://github.com/9fans/plan9port/blob/master/include/libc.h#L919-L940)
[https://github.com/9fans/plan9port/blob/master/src/cmd/mkdir.c#L57-L71](https://github.com/9fans/plan9port/blob/master/src/cmd/mkdir.c#L57-L71)

~~~
henesy
The Plan 9 solution always felt really logical imo.

~~~
mjl-
i agree. too "clever", feels like a needless optimization. (optimizing for
fewer keystrokes to type? or obscurity?)

btw, many of the plan 9 commands use slightly different/custom option parsing.
probably for historic reasons.

~~~
mveety
No, this isn't clever at all. It's an optimization for when you need to write
a lot of little utilities that need to parse arguments. It probably came out
of
a day of writing tools, and after the fifth time of rewriting that while/for
loop someone got pissed and made that macro. I use this macro all the time on
both 9front and unix because it makes parsing arguments trivial (and I don't
use long options, but that's more because I'm a plan 9/unix extremist than for
any technical reasons).

------
aplorbust
"What's so cool about this is its flexibility."

This is something I have never understood.

I am a connoisseur of command line programs. I have used hundreds of them over
decades. I write command line programs and scripts every week. It is an
obsession. Yet I am embarrassed to admit that honestly I have never understood
the benefits of flexibility with passing arguments. I never understood the
"getopt" movement. I do not use it. All my programs are very simple and
straightforward without heaps of options.

I apologise for being obtuse, but what am I missing?

To anyone who might be offended: I am not criticising flexibility of
commandline arguments or its coolness. I just want to understand what are the
practical advantages over something more simple and less flexible, like what I
prefer. Only if I understand the advantages can I be an advocate for using
numerous commandline options and flexibile parsing.

(There's another post about setenv et al. on the front page right now. For
programs that read from environment variables -- which can be a better
alternative to using heaps of commandline options IMO -- I just use envdir
from daemontools.)

~~~
coldtea
> _Yet I am embarrassed to admit that honestly I have never understood the
> benefits of flexibility with passing arguments._

It's about not having to remember what goes before what, if you can put them
like this or that, etc ON TOP of remembering the arguments themselves.

> _All my programs are very simple and straightforward without heaps of
> options._

If your programs are basically ./a.out then perhaps you're not the target
audience for getopts?

~~~
pjmlp
Easy to sort out with a help flag.

~~~
coldtea
Easy, but also one more needless impediment to just using the damn thing.

------
blikdak
One thing an old school C programmer would not do is neglect to put braces
around their blocks.

~~~
enriquto
On the contrary! We force all our blocks to consist of a single statement so
that we can always omit the damn braces.

~~~
kqr
Are you saying you need statements in your loop blocks!? I just added this to
a school assignment:

    
    
        for (String next = in.readLine();
             next != null && dictionary.add(next);
             next = in.readLine());
    

...you can sense my contempt for this assignment. I would of course never do
this in a real program.

------
dajt
I wasn't all that impressed with the code - too 'clever' and concise for me.

I wouldn't be surprised if it let a bunch of unexpected inputs through, as
another commenter pointed out.

That may have been okay back then but in today's hostile environment and with
so many people around that aren't natural programmers who dream of pointers
you probably want something a bit more verbose.

I've been programming C since the late 80s so I'm not a new-comer. But I'm not
a fan of that code even though I can understand it okay.

The more experience I get and the more people I work with, the simpler I like
my code to be. I'll always take simplicity over speed and concision if I can.

------
ggm
The getopt() arguments we used to have on USENET before gnu swept the board..

------
cagey
Maybe I've forgotten more C than I'd like to admit,

    
    
      while (--argc > 0 && (*++argv)[0] == '-')
    
      ...
      Notice that the decrement always occurs before the ‘>’ is 
      evaluated. This would be true even if it were postfix (i.e., argc-- > 0).
    

but isn't the last statement ("This would be true even if...") in error? IIRC
a post-decrement would occur _after_ the ‘>’ is evaluated.

edit: clarity

~~~
kbsletten
The decrement is the same either way, the value yielded by the decrement
expression would however be different. I don't like the wording either.
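A tiny pair of helpers makes the distinction concrete (a sketch; the names are made up):

```c
/* Count loop iterations under each decrement style. Both forms
 * decrement argc on every test; only the value that gets compared
 * differs, so the postfix form admits one extra iteration. */
int iterations_pre(int argc)  { int n = 0; while (--argc > 0) n++; return n; }
int iterations_post(int argc) { int n = 0; while (argc-- > 0) n++; return n; }
```

So for the K&R loop, switching to `argc-- > 0` would process one "extra" argument: it would treat argv[0] (the program name) as a candidate option.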

------
scarface74
What's clever about this? It's a little obtuse and overly permissive. How do
most developers parse arguments? I suspect it would be a for loop with nested
if statements - since you can't switch on strings. If you expect a value to
follow the flag you increment the counter in your if block and 'continue'.

------
zoul
It’s nice and smart, but also notice how it needs a whole web page to explain
and would happily compile after a lot of invisible single-letter changes that
would introduce horrible bugs. There is value in the old ways of writing
software, I just wouldn’t throw out modern software engineering just yet.

~~~
oldcynic
Any competent C programmer shouldn't need _any_ explanation of that simple
example.

Granted, pointers, just like objects in more "modern" languages, take new
programmers a while to get their heads around.

~~~
vram22
>Any competent C programmer shouldn't need any explanation of that simple
example.

Interesting point, and I agree. I read that code a little earlier today, and
(was a bit surprised to see that) I could understand it pretty much right
away, even though I have not used C a lot for a while. I did use it a lot
earlier though, on both Unix and DOS/Windows, and I do remember poring over
that K&R book early in my C career, and working out the meaning of each line
of code in almost the whole book. That may be why I could figure out quickly
what it meant, now. More cryptic stuff might take a while to figure out,
though, but the point I want to make is that the general principle remains the
same: you have to understand what each line and even what each token (of the
code) in each line is doing, how it works, etc., by reading the books and
docs, by trial and error, modifying the code and seeing the modified output,
using debugging print statements, isolating smaller chunks of code and running
them to see how they work and if your mental model of what is happening
matches reality, etc.

------
ateesdalejr
Wow, the amount of compactness and craftsmanship in these lines of code is
amazing.

~~~
jstimpfle
I think there's nothing magical about it at all. It's rather that "modern
software architecture" (examples for what I mean linked in another comment
here about C# and Java) is a disaster.

~~~
beefhash
It might not even be that. I presume that kind of code used to be written with
pen and paper, a compilation cycle almost prohibitively expensive.

When you sit down and just design a piece of code for hours as to not waste
your compilation cycle, I would guess that you naturally end up being fairly
crafty after some time.

~~~
oldcynic
Nope, you just end up thinking that compactly as you type.

When I learned C it was a given to code efficiently. Pointer manipulation and
efficient structure packing was expected, and therefore taught. You'd rule out
of an interview anyone who didn't easily grok it.

It's probably no surprise that the fastest GUI editor I've used to date ran on
a 7.1MHz machine. Such is "progress"!

~~~
tech2
Just because I'm curious, CygnusEd?

~~~
oldcynic
Yes indeed. :)

Most impressive, I think, is that it managed to be so fast and include smooth
scrolling.

~~~
vram22
Brief was pretty fast too. I only used it briefly (heh), but read in some
computer magazines at the time that it used a lot of code optimizations. One
was that it was written in assembly language; another (IIRC) was that it used
BIOS calls to dynamically change the speed of cursor movement when it detected
that you had pressed an arrow key for a longer time (probably by changing the
key repeat rate or reducing the repeat time), so that movement through the
file would be faster. The idea being that a user holding down an arrow key
probably meant to move through a longer section of the file to reach some
distant place in it, so Brief figured this out and assisted by scrolling
faster with that BIOS technique.

Anecdote: On a trip to the US (Boston), I once met Norm Miles, who, my US
colleague said, was the creator of Brief.

The Brief editor is actually still available, here, and for free now:

[http://www.briefeditor.com/](http://www.briefeditor.com/)

[https://en.wikipedia.org/wiki/Brief_(text_editor)](https://en.wikipedia.org/wiki/Brief_\(text_editor\))

------
Chiba-City
The original getopt was a really tight library.

