First Edition Unix Manual 'Miscellaneous' Section (1971) [pdf] (archive.org)
80 points by susam 10 days ago | hide | past | favorite | 51 comments





Here are links to OCR'ed PDFs (and other formats) of all the parts [1]. 12 in total; the submission's “man71.pdf” is the last of the bunch.

[1] https://www.bell-labs.com/usr/dmr/www/1stEdman.html

Edit: I miscounted. My bad.


"Since UNIX times delays assuming tabs set every 8, this has become a defacto ‘standard.’"

Oh ...

There is something beautiful about simple, small and hackable systems like this. Imagine how much control you actually had over your computer.


I dunno. I work on OS development. I do that level of programming fairly frequently. It's not hard, even on modern computers. You just have to realize that the consumer UI is a thick layer of icing made from high-fructose corn syrup and high-ratio shortening, but underneath that is just regular old computer programming, just like the Vic20. APIs usually exist at the OS level to control all the hardware (on Unix, it's ioctl() and devctl() and similar; in the kernel itself it's called 'device driver' programming).

If you're interested, you may find it easier than you think to program computers instead of scripting UIs or server farms. A hobby or career in OS development can be satisfying and rewarding.


I find it's hard to reasonably switch careers once you get to a high-earning position in web development. I've long had a big interest in lower-level stuff: I built a minimal OS in high school (a boot sector in assembly, a first-fit malloc and a round-robin scheduler on 32-bit x86; I remember bringing a bootable 3.5" floppy to my graduation project presentation, with the teachers surprised: "you've got something running, too?"), and while I've always strived for a more direct-to-hardware job, I don't trust I can apply for any of the relevant positions and expect anything but a junior-ish salary.

I do believe that I'd easily pick it up (eg. I can easily spot potential races and common problems with async code in other people's code, which is usually the trickiest problem though I'd expect hw bugs to be even worse in low-level stuff), but I am not sure I can sell myself well enough to get a position that matches my skills (since experience is limited).

Other than spending time away from work (which I lack because of the family), do you have any other tips on how to break into the field?

Remote from an EU timezone is a requirement :)


It's easier than you'd think to make the transition if you're willing to go into embedded development. Pretty much any demo of competence can get you hired as demand is high and the work is challenging.

Pick up a Nucleo or mbed board and make something cool!


> Pick up a Nucleo or mbed board and make something cool!

That's currently in conflict with my life circumstances ;-)

The number of "projects" I wish I could do sitting on my shelves already surpasses the time I can find to spend on them.

Basically, I want to be paid to "learn," and I think I'll be fair value in a month or two. So the question is how to get there? ;-)


Another way of putting this is, you choose to let available opportunities shape your future career.

You can do that, or you can choose to exert some control over the process.


If you're not willing to spend ~20-40 hours getting an Arduino-class object to, say, talk to a MMC/SD/SDHC card using only your own code, no libraries (this is something I can do easily as a rusty hybrid EE-FW guy who is now mostly EE), then I don't know what to suggest.

Thank you for conflating "not willing" with "unable". Also for suggesting that a 20-40h project will make me appear as an expert in embedded programming.

> Also for suggesting that a 20-40h project will make me appear as an expert in embedded programming.

That was actually my real point. For many smaller companies, it will. General programming experience plus any honest signal, even small, of low-level qualification will get you attention. We get lots of generalist candidates for our embedded positions and the #1 easiest high-signal filter for us is "how do we know they're actually interested in embedded?" Pass that, and things can open up for you.


Cool, thanks: that is useful feedback!

I haven't applied for a lower-level dev position so far, but this gives me some encouragement to maybe try next time I want to switch positions (I did work at, or get offers from, some embedded dev shops, but on the web side of things).


> That was actually my real point. For many smaller companies, it will.

Just an observation, but I've almost never seen job postings for entry/junior-level engineers in the embedded/systems world, only senior-to-staff-level roles (I get the feeling that a lot of early-career talent in these domains is sourced from local university pipelines). This has been my observation across both small companies and large corporations. Are these shops really considering people whose only experience is some contrived side projects at that level?


So, um, are you hiring? And where are you located?

Seattle, and, no, not for firmware engineers (we have a pretty solid firmware team!).

> It's easier than you'd think to make the transition if you're willing to go into embedded development.

For which salary, though? I just checked the current offers, and in my country a senior embedded engineer makes 60k€/year at most, and I saw a fair number of offers around 40k€...


In theory I could deep dive into my Debian machine, but there are just so many things. Just think of the USB protocol compared to eg. a wanna-be implementation of RS232. There is no way I could get deep understanding of the whole system.

That said, I think you are correct in that there is less magic than you'd think from looking at the surface.


> but underneath that is just regular old computer programming, just like the Vic20.

Yeah, unfortunately I thought that I could easily understand modern computers after having played around with the 6502 family a lot. Only to learn that even at the lowest level there's a lot of legacy cruft and decades worth of added abstractions.


OS development, as a thing, still requires some UI to interact with low-level parts. For me the most satisfying thing (reward aside, as I'm not interested in the material part) is game development: tools to see parts of the render process, like debugging light bounces from a terminal (see Unreal Engine or any other game engine's UI). Even so, I don't feel comfortable with blueprint-style visual programming as opposed to text programming (double d = 10.0 is easier for me than dragging and dropping a variable). The material always has many faces; UI is one of them.

Exactly. My only gripe with modern machines is how narrow their interface is, I wish they exposed more of their internals.

e.g., I'm too young to remember the Commodore computers, but it's my understanding that you could change the display colors and sprites by poking a memory address. I'm not advocating for specifically that, modern computers are connected to the Internet and it would be a security disaster, but that kind of interaction with the machine is something that's missing.


People are forever saying they want this*, and bemoaning that modern computers are so wrapped in layers of abstraction, but the fact is you absolutely don't want this on the same machine you use to browse the web and run untrusted javascript.

Meanwhile there's Raspberry Pi and a whole host of retro-clones out there for cheap, and emulators out there for free. We live in a golden age where all the old stuff is still there if you want it, but you can also download incredibly sophisticated enterprise level development tools at zero cost, and hundreds of pages of tutorials and guides just a Google search away. It's never been easier for kids to get into programming.

* Example from here yesterday https://news.ycombinator.com/item?id=28441563


> absolutely don't want this on the same machine you use to browse the web and run untrusted javascript

100% agree, I said it in my comment :)

There's no chance you can get away with unrestricted memory access on a machine that ever talks to any other unknown machine ever, on any protocol and for any reason.

> It's never been easier for kids to get into programming.

I honestly don't know about that. Some things are easier, others are harder. There's no question the amount of quality information out there makes it much easier for someone who wants to learn, but even setting up a development environment is a huge blocker for a complete beginner. Hello world is the hardest program you're ever going to code, it's all downhill from there. And that's where the layers of abstraction are coming back to bite you.

Of course everyone who wants to learn how to program is going to overcome that, most people are just not interested in the first place and it's kind of dumb to assume they'd learn if only the environment were different, etc.

OTOH I wonder if the layers of crap aren't just making it harder for those who are interested.


You don't even need to install anything, there are plenty of programming environments online. My kids wrote their first programs using Scratch, there are dozens of sites that will let you type in Python and run it right there and then. There are programming apps for mobile phones and tablets. You can even develop applications in Swift on the iPad using Playgrounds and soon will be able to upload them directly to the App Store.

I agree that the actual barriers are probably lower than ever, e.g. with Scratch, Playgrounds, replit or even the browser JS console. I think the bigger issue is that the competition for (child) attention and interest is much fiercer nowadays. In the 1980s we had a few channels of fixed TV programs, maybe a handful of expensive computer/video games, LEGO and some plastic toys. Even back then, interest in programming was only for a select few.

Let's not fool ourselves, programming has a steep effort-reward curve, maybe even steeper than chess or musical instruments. Nowadays it's competing against an infinite supply of deliberately tuned, shallow-curved attention seekers like youtube/tiktok videos and app store quasi-games.


> I agree that the actual barriers are probably lower than ever

The barrier to programming something that others find useful is way higher today.


Ergonomics were much, much worse before. I had my Commodore 64 hooked up to the TV in the living room. It was not easy to program using an old tube TV displaying text with 320x200 resolution graphics.

Which is why the first thing 1980s game developers would do when getting out of their bedrooms into proper offices was to migrate to development systems based on VMS/UNIX or similar, and then upload the games to the Speccy and C64 via the expansion port.

If you want to have a modern day bare metal experience, playing with microcontrollers is probably your best bet. Buying a board and attaching it to a PC for development has never been easier. Device drivers are usually rather thin wrappers over memory mapped IO. A simple scheduler for multithreading isn't that hard to write and those in FreeRTOS or ThreadX are quite readable despite their production quality feature sets.

However, moving up from there to more powerful systems adds more and more (mostly necessary) complexity: address space isolation for robust multitasking, more complicated buses and attached devices, more asynchronous operations, etc. There is no way to have both true simplicity and the comfort/performance of a modern personal computer.


>it's my understanding that you could change the display colors and sprites by poking a memory address

As far as I know FreeBSD still lets you poke around in /dev/mem, you could make the poor OS have a seizure by doing "sudo dd if=/dev/random of=/dev/mem" last time I tried it. Linux is more restrictive with /dev/mem by default, but I think it can be configured to be more lax if you compile it yourself.


You could do basically similar stuff across all home computers up to the 16-bit days.

You can still do that on PCs by booting into real mode; OSDev has plenty of examples.


Install FreeBSD, tabs(1) is still a thing! It's interesting that the FreeBSD man page says "A tabs utility appeared in PWB UNIX," while Wikipedia gives PWB/UNIX an initial release of 1977.

PWB Unix was the earliest source I could find. FreeBSD lists history items for releases going back to the earliest Unix release because BSD Unix is derived from AT&T Research Unix and FreeBSD is derived from 4.4BSD...

The tabs file does predate this, and I'll make a note of that (which is what this article is about).


> My only gripe with modern machines is how narrow their interface is, I wish they exposed more of their internals.

Modern machines are quite the opposite. It's the current crop of mainstream OSes and their associated APIs which are to blame, not the hardware designs.

> e.g., I'm too young to remember the Commodore computers, but it's my understanding that you could change the display colors and sprites by poking a memory address.

The Commodore 64 was a very simple machine with little memory. Only one program ran at a time, so no OS was needed. And the memory registers you poked at were the API, as things were very simple back then.

However, you can certainly do the same with modern GPUs[1], but they are so massively complex that the manuals for the Intel graphics controllers run well over 1000 pages, some exceeding 2000 pages[2]. For comparison, the manual for the Voodoo2 is 132 pages[3].

> I'm not advocating for specifically that, modern computers are connected to the Internet and it would be a security disaster, but that kind of interaction with the machine is something that's missing.

That's why we have an OS to control access to those bits of hardware. The internet has nothing to do with it.

The problem you face is that most modern operating systems are massive, bloated even, to the point where the interfaces are buried underneath miles of code. How does one approach a simple hardware project like poking at bitmaps and pixels stored in the Intel GPU from a "modern" operating system?

The Linux kernel is something like 10% AMD GPU driver code, mostly auto-generated by massive build tools which are as complex as the kernel itself. So no wonder you see the system as narrow; it's so massive it blurs into one indistinguishable monolithic blob, which gives it that narrow feel.

[1] https://wiki.osdev.org/Accelerated_Graphic_Cards

[2] https://www.x.org/docs/intel/

[3] http://darwin-3dfx.sourceforge.net/


> Only one program ran at a time so no OS was needed.

The Commodore 64 did have a simple OS, with all the elements needed to call it a primitive one. The features include:

- Devices (0 was the keyboard, 1 was cassette, 2 was RS-232, 3 was the screen, 4-30 were the serial bus where printers and disks lived)

- Uniform I/O calls across those devices (you must OPEN a device, and then can use CHRIN, GETIN, CHROUT, LOAD, SAVE calls to move data, and then you have to CLOSE it).

- Handles (called "logical file numbers", up to 10 open at once supported) - a bit more sophisticated than CP/M really.

- A rudimentary notion of standard input/output (called the "default" input and output device)

But this primitive OS definitely depended upon what could be considered a single background task to read the keyboard (SCNKEY) and update the timer (UDTIM) - triggered by an IRQ that was set to fire off 60 times a second (50 for PAL). Tape I/O overtook this IRQ and messed up the timer though.

However nothing in this primitive OS except for reset routines even acknowledged the presence of the SID chip or features of the VIC beyond the text display, so you were definitely on your own there.

> How does one approach a simple hardware project like poking at bitmaps and pixels stored in the Intel GPU from a "modern" operating system?

Linux exposes a `/dev/fb0` device, doesn't it? Can't you `mmap()` this device and peek/poke to your heart's content, assuming something else isn't trying to write to `/dev/fb0`?


I'm still infatuated with the likes of Smalltalk and Genera where the entire system seems somewhat advanced and hackable.

They have binary and library files in /etc. This bothers my sense of order! Not that modern filesystem layouts don't also clash with it in many places.

The modern Unix file system layout came later, with (4.2?) BSD.

Yeah, my sense of what is "right" is stuck in SunOS 4.x, as that is what I learned on.

    /etc/rmt
As I barely recall... It annoyed me too, then.

> Thus we come to the UNIX warm boot procedure: put 173700 into the switches, push load address and then push start. The alternate switch setting of 73700 that will load warm UNIX is used as a signal to bring up a single user system for special purposes. See /etc/init.

Why is "glob" called "glob" ("global")?

I'm not very familiar with UNIX's history, so take this with a grain of salt, but I believe it originates with the QED text editor. More specifically, with Ken Thompson's rewrite of QED, when he endowed it with an ed-like syntax. (Perhaps more accurate to say "when ed's ancestor began to look like ed".)

In Thompson's QED, much as with ed, the 'g' (aka "global") command's syntax is `g/regular expression/command`. It means "for every line (i.e., globally) matching regular expression, perform command". A common use was `g/re/p` to print matches, thus the now-familiar `grep` binary.

The /etc/glob binary is very similar in feel: `/etc/glob expression command`, which means "for every file matching expression, run command". (Note that the /etc/glob binary didn't return a list of files for the shell to deal with; it did all the command execution on its own.)

I can't pin down exactly when 'g' was added to QED. If we assume Thompson's first rewrite of QED had it, then 1967 might be a good guess. 1970 is the latest possibility, because that's when the only manual I can find[0] is dated. Note that it is for Ritchie's rewrite and mentions an "improved Global command", so it's definitely not the first appearance. There is some more history of QED and ed at [1].

[0]: https://www.bell-labs.com/usr/dmr/www/qedman.html

[1]: https://www.bell-labs.com/usr/dmr/www/qed.html


I don't know the actual reason. However, I suspect it's for the same reason why they used "bin" for binary, "usr" for user, "var" for varying, and so forth. There was a culture, back then, of not wasting characters, because to do so resulted in a real-life waste of space that could limit what was possible (due to storage size constraints). This extended to the fact that the terminal you typed on was sending characters over a 150 or 300 baud modem. Every chance to save some space while still providing reasonably clear communication was considered desirable.

The question wasn’t about the abbreviation, but about what wildcard expansion has to do with "global".

I knew glob was short for global, but I've never been clear on how the word global applies here. Perhaps it was something like "apply this command globally to all matching files"?

As for the terse abbreviations common to Unix commands, recall that much of the early work on Unix was done on very slow teletype devices such as the ASR 33. At 110 baud, brevity is a virtue.


See also: historical Unix system manuals, in web format [0].

[0]: http://man.cat-v.org/unix-1st/


Look, developers who cared enough to write some documentation. In 1971. With a highly constrained environment. They documented more than some developers do today :-).

Probably dogfooding; a central reason for original UNIX was typesetting/documentation.

Check out the entry for "libb.a". That was your base userland in 1971, folks.

Spelled fortran incorrectly.


Looks like this was modified in 1998? From page 13:

    ... [ rest deleted --DMR 1998 ]


