So that page isn't updated for the affected versions? Is that because they only "support" (release patches for) the newest release? I was under the impression they supported the two latest releases.
Spiped just 'pipes' an SSH connection from one point to another. It's essentially a very thin VPN.
But this bug is exploited via authenticated users. If you're using ssh keys (...you are, aren't you?) you would basically already need a valid SSH login to use this vuln; if nobody but you has an ssh key login, you're safe. This vulnerability may still affect you - even with spiped - depending on who has an ssh key login to your box.
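For readers unfamiliar with spiped, a minimal setup for wrapping SSH might look like the sketch below; the hostname, port numbers, and key path are made up for illustration:

```shell
# Generate a shared secret once and copy it to both ends:
dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/ssh.key

# On the server: decrypt traffic arriving on port 8022 and hand it to the local sshd.
spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key

# On the client: encrypt local connections to port 8022 and send them to the server.
spiped -e -s '[127.0.0.1]:8022' -t 'server.example.com:8022' -k /etc/spiped/ssh.key

# Then connect through the local end of the pipe as usual:
ssh -p 8022 user@127.0.0.1
```

Note that this only hides sshd from unauthenticated strangers; as the comment above says, it does nothing against an attacker who already holds a valid SSH login.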
(And this thinking that you can just 'wrap another layer of encryption/abstraction' around a problem, is why we can't have nice things)
Also, there are some nice fanless Mini-ITX boards out there with dual gigabit, which is all you need (plus a case for a board and a switch, of course).
The M350 enclosure is popular and cheap.
It goes without saying that such a box will smoke anything you can buy off-the-shelf. Hell, even the anemic ALIX Geode gave a nice boost to my home network over the crappy Linksys I had previously (things are now snappy and 100% stable).
Soekris now has new models with Intel Atoms, top quality stuff as always (old ones are rock solid too, if a bit dated). I went with ALIX instead as I had a local distributor so it was quite a bit cheaper. Plenty fast for now, too.
Canned 6 Ubuntu machines (servers) and replaced them with OpenBSD 5.3 recently. So much goodness ships with OpenBSD it's unreal. Bar Theo's sharp tongue (which is usually spot on), you get pf, OpenSMTPD (so much cleaner than Postfix), nginx in base, miles better manual pages than Linux, no horrid GNU info, a small and simple base system, absolutely no surprises, no bloated crap like dbus/upstart, tmux in base, and what I can only describe as a warm fuzzy "why the hell isn't everything like this" feeling.
Also, it's just about the only thing I've found that can work adequately entirely offline, without having to use Google to fix obscure problems and decipher documentation.
I've got a bootable USB stick that contains the base packages, the entire FAQ (the main documentation source outside manpages), all the binary packages I normally use, and WiFi firmware, and it's less than 2 GB.
From my experience, FreeBSD documentation is often outdated and no longer applicable. In my opinion, incorrect documentation is worse than no documentation, as it often causes more problems and frustration.
Well, it's going into the kernel as kdbus, which you could argue is bloat. The Linux kernel is much bigger than the BSD kernels; it is modular, so you can remove pieces, but the big distros depend on much of it, so in practice you can't if you use them...
The preferred document format for the GNU system is the Texinfo formatting language. Every GNU package should (ideally) have documentation in Texinfo both for reference and for learners. Texinfo makes it possible to produce a good quality formatted book, using TeX, and to generate an Info file. It is also possible to generate HTML output from Texinfo source.
Make sure your manual is clear to a reader who knows nothing about the topic and reads it straight through. This means covering basic topics at the beginning, and advanced topics only later. This also means defining every specialized term when it is first used.
Programmers tend to carry over the structure of the program as the structure for its documentation. But this structure is not necessarily good for explaining how to use the program; it may be irrelevant and confusing for a user.
Instead, the right way to structure documentation is according to the concepts and questions that a user will have in mind when reading it. This principle applies at every level, from the lowest (ordering sentences in a paragraph) to the highest (ordering of chapter topics within the manual). Sometimes this structure of ideas matches the structure of the implementation of the software being documented—but often they are different. An important part of learning to write good documentation is to learn to notice when you have unthinkingly structured the documentation like the implementation, stop yourself, and look for better alternatives.
For example, each program in the GNU system probably ought to be documented in one manual; but this does not mean each program should have its own manual. That would be following the structure of the implementation, rather than the structure that helps the user understand.
Instead, each manual should cover a coherent topic. For example, instead of a manual for diff and a manual for diff3, we have one manual for “comparison of files” which covers both of those programs, as well as cmp. By documenting these programs together, we can make the whole subject clearer.
The manual which discusses a program should certainly document all of the program’s command-line options and all of its commands. It should give examples of their use. But don’t organize the manual as a list of features. Instead, organize it logically, by subtopics. Address the questions that a user will ask when thinking about the job that the program does. Don’t just tell the reader what each feature can do—say what jobs it is good for, and show how to use it for those jobs. Explain what is recommended usage, and what kinds of usage users should avoid.
In general, a GNU manual should serve both as tutorial and reference. It should be set up for convenient access to each topic through Info, and for reading straight through (appendixes aside). A GNU manual should give a good introduction to a beginner reading through from the start, and should also provide all the details that hackers want. The Bison manual (https://www.gnu.org/software/bison/manual/html_node/Concepts...) is a good example of this—please take a look at it to see what we mean.
That is not as hard as it first sounds. Arrange each chapter as a logical breakdown of its topic, but order the sections, and write their text, so that reading the chapter straight through makes sense. Do likewise when structuring the book into chapters, and when structuring a section into paragraphs. The watchword is, at each point, address the most fundamental and important issue raised by the preceding text.
If necessary, add extra chapters at the beginning of the manual which are purely tutorial and cover the basics of the subject. These provide the framework for a beginner to understand the rest of the manual. The Bison manual (https://www.gnu.org/software/bison/manual/html_node/Concepts...) provides a good example of how to do this.
Don’t use Unix man pages as a model for how to write GNU documentation; most of them are terse, badly structured, and give inadequate explanation of the underlying concepts. (There are, of course, some exceptions.) Also, Unix man pages use a particular format which is different from what we use in GNU manuals.
Unix manpages are fine. If you've read the GNU awk manual vs the BSD awk one, you'll understand what I mean. The GNU awk manual is hundreds of pages long and serves only to confuse the user. Not only that, half of it is listing differences between GNU awk and other awks, which is just mind-numbing. A fine case of cat -v. And don't get me started on glibc.
The bit that pisses me off is: man awk (see info awk instead). I think they fixed that but it was a pain in the butt for years. Is it in man or is it in info?
Even the document I cited conceded that there are some very good Unix man pages — BSD Awk may very well be one of many, I have not read it. It follows, logically, that there could be some very bad Info documentation manuals, and the GNU Awk manual may very well be one of many; I have not read it.
This does not disprove the point the citation made; namely, that the Unix man page structure is unsuitable for many things since its structure tends to make people write manuals badly, and there is no place for the kind of introductions and tutorials that good manuals should contain. On the contrary, the man page format lends itself to very strict and terse reference documentation, which is not what I would call a “manual”, as such.
Someone should perhaps do a study of which of the two formats seems to generate the better documentation.
> The bit that pisses me off is: man awk (see info awk instead). I think they fixed that but it was a pain in the butt for years. Is it in man or is it in info?
The official GNU system documentation is in Info. Now, the GNU system is meant to be compatible with Unix, and Unix uses man pages. And since many GNU tools were (and are) not actually developed to be used in the GNU system as such, but instead find their major use and development as parts of various Unix variants, it follows that most of them have Unix man pages too. Since writing documentation is work, and writing duplicated documentation is even more work, the man pages for GNU tools are often overly terse, incomplete and/or out of date compared to the Info documentation, which is the official documentation. Some well-known exceptions exist; notably, GNU Bash does not have any Info documentation, but instead has an (enormously long) Unix man page.
(You are demonstrably wrong, but I will address your implied point instead of your hyperbole.)
I think that the hostility to Info comes from exposure to the standalone Info reader. It is non-intuitive, non-Unix-like, and, as far as I know, does not even support text attributes, so it looks bad even for a text-based UI. It is not very newbie-friendly (unlike GNU nano with its very explicit menus to guide you). Nor is it consistent with other Unix text UIs like “vim” or “less”. Instead it behaves like a brain-damaged Emacs, but without even many of Emacs's basic text-navigation features, so even Emacs users feel lost in Info.
I feel that if someone made a module for vim or something to read and navigate Info documentation (and made it the default instead of the barebones standalone Info reader), then Unix people would warm up to Info documentation.
Personally, I read Info documentation using the Emacs built-in reader. It is, unofficially I think, the canonical Info documentation reader, and it is beautifully integrated with Emacs. I prefer reading most manuals this way (if they have an Info version).
My problem with info is the document structure. The manual for each application seems to be structured so differently with topics under specific titles and subtitles, I always get lost and it takes me a while to find the thing I'm looking for. And it really is frustrating to stumble in the dark looking for that one command line flag or input format that you've found and used many times before.
For me the consistent reference-like structure and terse language of man pages is ideal. The whole document is shorter than a bunch of info pages will be, and because of the structure, I always know where to expect specific things to be. Some man pages do not quite fit the traditional model so there is a bit of compromise, but I still find it easier to scan through such pages and remember the location of things than I do with info pages.
The difference between (Open)BSD and GNU/Linux world man pages is that the former group spends plenty of effort on polishing the language & structure to make the text clean, short, to-the-point, and consistent. So exactly the qualities I prefer man for are taken as far as possible. And because we love well-written man pages, there's a man page for damn near anything in the base system; whereas on other systems you're sometimes looking at info pages, sometimes HTML documentation installed wherever or nowhere, READMEs, comments in config files, howtos and tutorials via Google, or at the source because nobody bothered to write real documentation. If the source isn't at hand or you don't have the time for it, you try to imitate what you see in use already, and pray.
> My problem with info is the document structure. The manual for each application seems to be structured so differently with topics under specific titles and subtitles, I always get lost and it takes me a while to find the thing I'm looking for.
Use the index – it’s what it’s there for. In the usual Info readers, it’s the “i” key. This is what I use when, for instance, reading the GNU C Library manual and wanting the documentation for a specific function, struct or macro. If you want the command-line switches for the “foo” command, the node to go to in the manual you find yourself in when invoking “info foo” is called “foo invocation”; you go to a specified node using the “g” key. Then you simply use the space key to navigate serially through all the following nodes. There are only three more keys that I use when reading Info documentation.¹
I think it’s mostly a question of habit. Man pages have taught you to quickly read mostly the whole document while scanning for the information you need. These habits are unsuited both to reading a book of printed documentation and to reading Info documentation. If you have a specific idea of what you want to find, you use the index.
1) The “l” key – “l” for “last” – corresponds to a web browser’s back button, the “u” key goes “up” in the document tree, and the “t” key goes to the top of the current document. These five keys are all the keys I use for Info navigation, except for normal searching with C-s, but that’s Emacs. This should not be difficult, but it is. I suspect, as I wrote previously, that it’s not Unix-y enough.
The index is the problem. The index for each application and library looks different. There are headings and subheadings and sub-subheadings... sometimes they're descriptive enough to help you find what you need, but in my experience, they often are not. But in either case you still need to read through them and interpret them. And reading through tens of headings is a pain, especially if after reading them you navigate to the seemingly most relevant page and find it doesn't have the information you need. ("Where the hell was it?!" is something I've said more than once, trying to battle info documents.) This is what I mean when I complain about structure; it's not consistent, it's not predictable. I don't want it to read like a book, I want it to be useful, and being consistent in such a manner that I can always find my answer quickly helps there. Good man pages more or less have this consistency. And because they're terse, it's still quite easy to scan them even when the structure for some reason does not conform to the tradition.
I think you misunderstood me; the Index is not the table of contents or the list of headings. The index is what you access with the “i” key, and can also be read directly using the “g” key to go directly to the “Index” node.
I agree, reading headings and navigating that way is mostly a waste of time when trying to find something specific. Which is why I never do it; I use the Index.
If I want an introduction to complex topic, I usually won't use man pages. I will open up a web browser and search. That will give me web pages that have pictures, diagrams, and possibly even introductory essays about the topic.
There are many more people who know how to write web pages or forum posts than who know how to write "info" pages. There are also great tools for searching the web, but poor tools for searching info pages. "info" pages can't even have diagrams or pictures. Consequently, information obtained via "info" tends to be stale, incomplete, and generally unhelpful.
The real question is why "man" pages are still useful in the era of the internet. The answer is that sometimes, you want a concise, accurate, quick reference to what is installed on your system. "man" fills this function admirably. "info" fills that function awkwardly and poorly.
This is true for me too, as long as I don't use OpenBSD. The cause is that GNU/Linux manpages are horrible. They are mostly outdated and not in sync with the versions that are actually installed.
As soon as you log in to your newly installed OpenBSD system, your root account has a mail in its mailbox telling you to read the "afterboot" manpage. When I entered "man afterboot" in my shell for the first time, I was blown away.
The idea is to create a list of items that can be checked
off so that you have a warm fuzzy feeling that something
obvious has not been missed.
It gives you a fast introduction with all the information you need and then references the locations where you can find every other topic. At that point, you don't need Google anymore, where you again find outdated and incorrect information.
The documentation is of such high quality that it is the advantage I appreciate the most.
An internet search will give me many different choices of documentation of unknown provenance, authorship, official status, datedness, accuracy, and comprehensibility. This, to me, is not helpful, since I have no way of knowing which are any good, even after reading them. On the other hand, Info documentation will give me the official, up-to-date, correct, and mostly well-written documentation.
Internet searches certainly have their place if I want forum posts, mailing lists or blog posts, which I sometimes do. But I often want the official documentation corresponding exactly to my installed version of the software, and this is often harder to find using a web search than simply using the Info documentation, because of the above issues. Also, something to keep in mind in these modern times is that web usage is monitored. Local reading of Info documentation is not.
Tools for searching Info pages are built-in to most Info browsers. I use the Emacs Info browser, which has both the usual Emacs way of searching (C-s and C-r searches both in the current node and in the whole document), and the Info Index, which is what I start with when looking up the reference documentation for something.
Also, contrary to what you claim, Info documentation can and does contain images. For example, the HTML GnuTLS documentation here: (http://gnutls.org/manual/html_node/OpenPGP-certificates.html) was generated from Texinfo, and the images are still there when I browse the corresponding Info documentation with the Emacs Info reader. Those images are simple because they are generated from Dia drawings via EPS files, but images can be anything, just like in HTML.
> An internet search will give me many different choices of documentation of unknown provenance, authorship, official status, datedness, accuracy, and comprehensibility.
True. But the documentation for GNU programs, both man and info, is often also "of unknown provenance, authorship, datedness, accuracy, and comprehensibility." You even said yourself that the man pages for many of these GNU programs are years out of date on some systems. Official status is not guaranteed either, since anyone can fork an open source program and produce a clone.
Somehow, despite all that, we manage to muddle on. Probably, it's because we're humans, with the ability to filter out bad information and incomprehensible explanations, and find the real information. And in my experience, this is much easier to do with a man page, which usually simply gets to the point, than with a long, meandering info page which treats me like an idiot and yet often fails to mention vital information.
Thank you for the clarification that info pages can support images. Since I have only ever used info in a terminal, I was not aware of this.
In a lot of ways, info reminds me of Microsoft's CHM format, another HTML workalike with limited abilities. CHM can also support images.
Web usage may be monitored, but I suspect that searching for information about open source programs won't reveal anything about me that downloading Linux did not.
Given how readily good search and navigation exist, I'd much prefer GNU-style awk man pages to BSD-style awk man pages. 15 years ago I taught myself the AWK programming language using "man awk" on a Red Hat Linux box with no network connection. Had the manual been merely a listing of available switches, that wouldn't have been possible.
I'd actually prefer it if more languages provided man pages with detailed language as opposed to random assortments of HTML posing as documentation. Always liked perl precisely due to this.
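For context on what a manual like that teaches: the kind of thing one picks up from a decent awk manual is small, composable one-liners like this (the input data is made up):

```shell
# Sum the second column of whitespace-separated input, printing a
# running total per line and the grand total at the end.
printf 'a 1\nb 2\nc 3\n' |
  awk '{ sum += $2; print $1, sum } END { print "total:", sum }'
```

The point of a good manual here is explaining the job (accumulating over records, the END block) rather than merely listing switches.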
You can use w3mman (it installs with the w3m package on OpenBSD) and you'll be able to follow manual xrefs. HTML output with links can also be generated; see the afterboot man page linked above for an example.
I think PF in OpenBSD is still single-core.
FreeBSD 10.x apparently has an SMP version of PF.
My feeling is that if high performance is desired, FreeBSD is still preferred. For "normal" workloads, though, I think OpenBSD would be a fine choice. I do like how "clean" an OpenBSD system feels.
I've not done a direct comparison, but it feels solid under load. IO-bound processes seem to share IO fairly, i.e. no bad-neighbour problem where one process starves another, and a fork bomb won't take out the system: even though I managed to hit a loadavg of 80, it was still very responsive.
Secondarily, I have a VM on BigV which seems to suffer fewer pauses and less oddness than Ubuntu 12.04 LTS did.
Works nicely on UltraSparc machines (U30 and U60 tested) as well which is a bonus. I still find them superior to Intel even if they are power hungry and rather old now. Context switch time seems pretty low on Sparcs. Not sure if it uses hardware contexts or not though - haven't looked at the code.
I'm going to reinstall a couple of servers this weekend with 5.4. It generally is just a matter of printing out /etc/fstab, doing the install (I skip the upgrade on these), installing some packages, and laying the configs and keys back down from source code control. It takes about 20-30 minutes to bring these servers up (gateway, dns).
I should point out I've had a box running it before release to make sure that any changes are accounted for.
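The routine described above might look roughly like the sketch below; every path, package name, and repository here is hypothetical, and the real steps depend on the machine:

```shell
# Before the wipe: record anything the fresh install won't know.
cat /etc/fstab > /tmp/fstab.printout    # or literally print it out

# ... boot the 5.4 install media and do a fresh install, skipping Upgrade ...

# After first boot: reinstall the handful of packages this box needs
# (names are examples only), then lay the configs back down.
pkg_add nginx

# Configs and keys live in source control; restore them in one sweep.
cd /var/configs && git pull
cp -R etc/. /etc/
```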
Upgrades generally work well for me, but DO always check the release notes and the upgrade guide for the new version. Sometimes you need to do a few extra steps, but in my experience it will always be clearly documented. And just a heads-up for the NEXT version:
OpenBSD 5.5 will be year 2038 ready, but this requires a change to a 64 bit time type. This results in a "flag day" event, where old binaries will not run on the new kernel, and the new binaries won't run on the old kernel, and some file formats will be changing. A remote, no-console process will be provided, but it will be a more touchy update process than usual.
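The 2038 limit being fixed here comes from time_t historically being a signed 32-bit count of seconds since 1970-01-01; a quick way to see where that counter runs out:

```shell
# Largest value a signed 32-bit time_t can hold:
max=$((2**31 - 1))
echo "$max"    # 2147483647 seconds after the epoch, i.e. 19 Jan 2038
```

Moving to a 64-bit time_t pushes the rollover billions of years out, at the cost of the binary-compatibility flag day described above.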
I do upgrades on some machines, but for some servers (firewall, dns, print server), it just takes a lot less time to do a fresh install. I keep the configs, keys, etc. under source control and can put it all back faster than doing the upgrade.
It is also pretty good practice for anytime those servers go bad. It helps to be able to put temporary replacements in service from whatever I have lying around. I can save the hot spares for machines that have user data on them (e-mail, file servers).
I seem to recall someone (Theo, I believe) saying that in order for an OpenBSD system to maintain library compatibility with existing applications, you had to do an "upgrade in place" and not a fresh install.
I.e., the default approach, an incremental upgrade, is the only way to ensure your OpenBSD system doesn't break.
For the servers I'm talking about, blowing the whole thing away and installing any packages from the new disc is just fine and keeps away the clutter.
I look at it this way, if all I'm really doing is adding some flags or configuration files, I would rather just blow it away and do the reinstall. Last couple of times I did that with my firewall, it was a 20 minute install.