
Impressive hardware, but ugh, not yet another Linux-powered computer! If I had the time, I'd port illumos to it myself, but since I don't, Linux on this thing makes it a non-starter for me.



As someone who spent years maintaining Solaris systems, it always amazes me that there are people out there who like Solaris.

Especially in the context of a hobby/experimental system.


I love Solaris (except Oracle Solaris 11, which I really dislike!), and I love illumos and SmartOS even more.

I grew up on Solaris - my first ever UNIX was Solaris 2.5.1 on a SPARCstation 20. I've been running Solaris on Intel since my first Pentium 90 workstation, also on Solaris 2.5.1.

Since I know how to build and package software for Solaris, I have everything I could ever want or need on it. It's a comfortable system, and it's elegant, once one fully understands all of its capabilities. And it's extremely reliable and fast, especially on Intel processors.

For some context: I am forced to work on Linux, and I spend my entire working day on it. Compared to the reliability and ease of use of Solaris, I have grown to dislike Linux in the extreme. If you are thinking, "but that is insane, Linux is so great, how is that possible!", remember that I grew up on UNIX, so I have different criteria for what is comfortable and reliable (even in terms of development) than your average Linux user or Linux system administrator does. I dislike the GNU tools and userland (with very few notable exceptions) because I'm used to the AT&T System V tools and that is how I expect the tools to behave; the GNU tool chain usually frustrates me to no end, and so does working with Linux (I do professional development and system engineering on it).

For example: --some-long-option comes to mind, or lack of proper manual pages ("see the texinfo page"), lack of backwards compatibility support, tar -z (tar is a tape archiver, not a compressor!), and so on, and so on... I miss my ZFS, I miss my mdb, I miss my dbx, I miss my SMF, I miss my fmadm, I miss the simple and effective handling of storage area network logical units, I miss the Fibre Channel stack which actually works... I don't have any of those issues on illumos based systems, but it drives the point home:

the last thing I want is yet another Linux based computer. I have enough of that as it is at work - almost 71,000 servers, 49% of them running Linux, and it sucks.


>For example: --some-long-option comes to mind, or lack of proper manual pages ("see the texinfo page"),

What are you even talking about here? man/info works wonderfully, and if I want more readable information, a terminal sure as hell isn't going to give it to me more easily than searching a wiki will. And Solaris absolutely had problems with documentation on its larger packages.

> lack of backwards compatibility support

Hardly even a real issue if you actually maintain your damn systems more than once every half decade.

>tar -z (tar is a tape archiver, not a compressor!)

... It still is a tape archiver AND a compressor AND a hundred other completely valid and usable things.

ZFS is absolutely usable on Linux.

Why do you enjoy DBX over GDB?

SMF? One would think you would love and embrace systemd.

FibreChannel stack that works? https://wiki.archlinux.org/index.php/InfiniBand

I can't refute all that you have said here, since I am not familiar with all of it. But have you considered that you are just doing it the wrong/difficult way?


Manual pages on traditional UNIX systems are extremely detailed and contain lots of good, usable examples, and Solaris / illumos based operating systems really shine in this area. People who grew up on a real UNIX expect to find comprehensive, high quality documentation in the manual pages in a terminal session. This feature was driven hard by enterprise customers and professional system administrators in times when wikis did not exist, and even today the quality of some arbitrary wiki, written by someone on the internet when they felt like it, is dubious in comparison to manual pages written by people with formal education in engineering and technical writing!

Like I wrote before, on UNIX we have different expectations in different areas than what people are used to and accept as given on Linux. The focus is different on UNIX.

Apropos dbx versus gdb: dbx has a 1,000 page manual, and makes it really easy to step through assembler code while listing the original source. How many pages of documentation does gdb have again? On top of that, gdb doesn't even fully support my OS; I don't think gdb properly supports anything that is not Linux... hmm, that reminds me an awful lot of the Microsoft Windows monoculture.
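From memory, a typical dbx session for that kind of instruction-level stepping looks roughly like this (command names as I remember them from Sun/illumos dbx; details may vary by version):

  $ dbx ./a.out
  (dbx) stop in main    # set a breakpoint at main
  (dbx) run
  (dbx) stepi           # step a single machine instruction
  (dbx) dis             # disassemble around the current PC
  (dbx) list            # show the corresponding source lines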

systemd versus SMF: systemd is a shoddy copy of SMF with a Windows twist, trying to replace every service in the system. Unlike SMF, which is part of the fault management architecture and therefore of the self-healing technology, systemd has no such concept; self-healing and a contract filesystem are science fiction for systemd. SMF watches over services, but it doesn't try to replace them; "do one thing, and do it well."
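To give a flavor of day-to-day SMF (standard illumos/Solaris commands; output elided, and the ssh service is just an example):

  svcs -xv                                  # explain which services are unhealthy, and why
  svcadm restart svc:/network/ssh:default   # restart a single service instance
  svcadm clear ssh                          # take a repaired service out of maintenance
  svccfg -s ssh listprop                    # inspect the service's configuration properties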

InfiniBand is a different technology from Fibre Channel.


To get full GDB documentation you need to use info gdb; the man page states that itself. Man pages are quite limited, correct, so they offered a better solution, just like what you are looking for... Not sure what the issue is here. The amount of documentation is massive: 2321 lines of text in an easy to browse format...

GDB also works on a large number of platforms: Windows, Linux, NetBSD, etc.

> However, its use is not strictly limited to the GNU operating system; it is a portable debugger that runs on many Unix-like systems and works for many programming languages, including Ada, C, C++, Objective-C, Free Pascal, Fortran, Java[1] and partially others. [0]

>hmmm, that reminds me an awful lot of Microsoft Windows monoculture.

What? They actually support Windows, which is exactly the opposite of what you are trying to say here... I use GDB DAILY on Windows (at work) with zero issues.

I'll agree that perhaps systemd doesn't cover all use cases or wants. But calling it a shoddy copy of SMF with a Windows twist is disingenuous. I don't care for the for or against systemd arguments, but after the initial reaction/learning phase when pulling away from upstart/sysv/init based shit/etc, many of us are actually starting to warm up to systemd. It handles services wonderfully, it handles logs wonderfully; perhaps it's a bit bloated, but whatever, you can always revert to what you want if you decide to spend the time to actually do it.

>InfiniBand is a different technology from Fibre Channel.

Fair enough, I'll have to read up more on it then.

You are making quite a lot of generalizations without doing proper research. If you want to be stuck in your "In the old days us Unix people had it right!" mindset, then this discussion is pointless. Otherwise I would love to continue butting heads on this.

[0] https://en.wikipedia.org/wiki/GNU_Debugger


> To get full GDB documentation you need to use info gdb; the man page states that itself. Man pages are quite limited, correct,

`info gdb` is completely unacceptable, and an outrage: the standard documentation on UNIX is manual pages, not to mention that systems other than GNU/Linux do not use GNU info.

> Man pages are quite limited, correct,

Incorrect; manual pages are rendered by the nroff/troff document typesetting system. Entire books have been typeset for printing with troff. Case in point: the UNIX Text Processing book, the AWK book, the ANSI C book. The system is extremely flexible and very powerful, once one understands what is going on. When you hold the printed versions of these books in your hand, you can see that they are beautifully typeset and rendered. Brought to you by the same programs which render UNIX manual pages when you type `man some_command`!

What you see on the screen (on UNIX, cannot vouch for Linux) when you type `man ls` is an actual professional typesetting system rendering the content for stdout instead of a printing press!
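You can watch it happen: man(1) is, in essence, doing something like the following (the exact path varies by system, and some systems store the page source compressed):

  nroff -man /usr/share/man/man1/ls.1 | less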

> I don't care for the for or against systemd arguments, but after the initial reaction/learning phase when pulling away from upstart/sysv/init based shit/etc, many of us are actually starting to warm up to systemd.

That's because you haven't had the opportunity to enjoy SMF. When you've worked with SMF, systemd looks like a cobbled-together toy. For example, systemd turns ASCII logs into binary format, just like on Windows. This in turn goes against the UNIX philosophy of

Write programs to handle text streams, because that is a universal interface. [McIlroy]

http://www.catb.org/esr/writings/taoup/html/ch01s06.html
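To make the contrast concrete (log paths and unit names vary by distribution; these are illustrative):

  grep sshd /var/log/messages        # classic syslog: plain text, any tool works
  journalctl -u sshd | grep Failed   # journald: the binary store must go through journalctl first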

> You are making quite a lot of generalizations without doing proper research.

That is quite ironic to say to someone who does professional system engineering and software development on GNU/Linux for a living. I have been doing UNIX and Linux professionally since 1993, and working with computers in general since 1984; how many years is that? I spend every waking moment of what free time I have researching UNIX and Linux. To tell me that I'm "generalizing without doing proper research" just because I am not succumbing to GNU/Linux group think is what one could call disingenuous.


I'll admit, perhaps I am wrong in the greater picture of things here. But you are also wrong on some points, particularly man pages being superior to info. troff/nroff markup is needlessly complex compared to TeX. You can also use your vi keys in info as well. Perhaps you can just boil this down to being comfortable using man pages, but info pages provide more options and usability when it comes to creating documentation; that's just a simple fact. If you have trouble quickly finding the information you need when using info, consider reading the info info page ;).

In fact, TeX is used and preferred over nroff and others for a huge majority of physics/mathematics academic journals, and quite a bit outside them. [0 - 3]

I will admit that for stuff I already know and understand well enough to be considered proficient with, man pages can be quicker. For something I just installed and still need to learn, info pages provide a much better platform.

You may find the following link enjoyable to skim through. http://unix.stackexchange.com/questions/77514/what-is-gnu-in...

> What you see on the screen (on UNIX, cannot vouch for Linux) when you type `man ls` is an actual professional typesetting system rendering the content for stdout instead of a printing press!

Love the enthusiasm but (La)TeX falls into that description as well.

> That's because you haven't had the opportunity to enjoy SMF.

Maybe, I've put it on my list of things to tinker with more. Thanks for the link.

> That is quite ironic [...] I am not succumbing to GNU/Linux group think is what one could call disingenuous

I don't care about you succumbing to any group think or whatever other word you can come up with. I am trying to show you why it is actually superior in many ways. Just because you are comfortable with nroff absolutely 100% does not make it better. To put it simply: you may be a professional system/software engineer, but if you can't keep up with why these systems are considered (and shown to be) better than what you have now, then you will just continue to be frustrated and fall behind.

[0] http://www.math.ucla.edu/~tao/submissions.html [1] https://www.overleaf.com/gallery/tagged/academic-journal [2] https://en.wikipedia.org/wiki/LaTeX [3] http://www.catb.org/esr/writings/taoup/html/ch18s03.html


http://unix.stackexchange.com/questions/77514/what-is-gnu-in...

Quoting from the link above:

ADDENDUM: While not strictly relevant to the question, note that man pages are still considered the standard documentation system on free Unix-like systems like those running atop the Linux kernel and also the various BSD flavors. For example, the Debian package templates encourage the addition of a man page for any commands, and also lintian checks for a man page. Texinfo is still not widely used outside the GNU project.

Which I can confirm and concur with. Long story short, I would forget about GNU info, because it is an invention not suited to the task at hand: efficient and fast lookup of information in a reference manual.


LaTeX is a great typesetting system, just not for manual pages. It is frustrating in the extreme having to wade through a GNU info "page" like one does through a web browser when one is in a datacenter trying to solve a priority 1 incident.

LaTeX is a macro package built on top of TeX, which was designed with the goal of writing academic research papers, with a specific focus on mathematics research, not writing reference documentation; it is great for what it is designed to do, however it was not designed to be an online reference manual system, and it shows in the browser-like nature of the GNU info usage paradigm.

Manual pages have a certain structure, which, when one understands it, makes them extremely efficient at locating the information:

SYNOPSIS

shows me the valid forms of using the command in question, in one to three concise lines.

OPTIONS

lists all the available options which might not be present in the examples, but which I might need.

EXAMPLES

the most important part of a manual page; on GNU/Linux, this part is usually non-existent, but on UNIX, the EXAMPLES section is almost always there, and it almost always contains several detailed treatises on how to use the command, system call, or library in question. After SYNOPSIS, this is the first part I jump to with the "/" character (forward search in less(1)), and it often contains enough information for me to start using the program in question and be productive immediately.

SEE ALSO

If I cannot remember exactly which command I am looking for, but I know commands related to it, just by calling up the manual page of the related command, I can look in the SEE ALSO section and find the manual for the command I could not remember.

FILES

lists which files are affected. This information is vital for knowing which files to inspect, monitor, or modify.

AVAILABILITY

Sometimes, I just need to know which package a file or a command belongs to, whether it is multithreading-safe ("MT-Safe"), or whether the interface I am about to use is stable, uncommitted, deprecated, or external; the AVAILABILITY section will tell me that. This section also does not exist on GNU/Linux, where it is science fiction for the developer to have even thought about forward and backward compatibility; oftentimes, the Linux developers are so undisciplined that they do not even deliver built-in documentation, and the manual page is written by someone else as a placeholder, so the AVAILABILITY section won't exist in it, because the third party that wrote the manual page cannot know those things. For example, Debian GNU/Linux often has such manual pages. That is unthinkable and intolerable on UNIX!

By convention, all the manual pages on UNIX contain these (and additional) sections. The order of locating pertinent information in a manual page, then, becomes as follows:

1. SYNOPSIS;

2. EXAMPLES;

3. OPTIONS;

4. SEE ALSO;

5. FILES;

6. AVAILABILITY.

With the order of scanning listed above, I often locate the pertinent information within five seconds, and within 35 seconds at most (we timed it over ten runs and computed the mean, the median, and the standard deviation).

With GNU info, on the other hand, I'm stuck trying to navigate "topics" as if I were in a web browser. The navigation is haphazard, because everybody has their own idea of what the documentation for their program should look like; that is well defined and uniform in the manual pages.

When you are troubleshooting a problem or need to scan through a large amount of documentation quickly and efficiently, if you understand the section structure (1 - user commands; 1M (or 8 on BSD and GNU/Linux) - system administration commands; 2 - system calls; 3C - standard C library; 3LIB - libraries; 4 (or 5 on GNU/Linux) - file formats; 5 - standards and macros; 6 - games; 7 - special files; 7D - device drivers; 9 - device driver interfaces), searching through the correct manual page becomes even faster, like a search on steroids, or with a twin turbo and a supercharger combined.
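For example, knowing the section lets one jump straight to the right page instead of the first match (Solaris/illumos syntax first, then the GNU/Linux equivalent):

  man -s 3c printf    # the C library function, on Solaris/illumos
  man 3 printf        # the same lookup on GNU/Linux
  man 1 printf        # the shell utility, if that is what you meant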

None of that structure is present in a GNU info manual; there, as is usual with GNU/Linux, it's a "free for all".

Any software I write is delivered with a manual page strictly following the norms described above, because on UNIX, that is what we do, and it would be shameful and unprofessional not to (a shoddy product), even if what one writes is freeware, written in one's spare time. It's completely unacceptable and unthinkable to deliver a piece of software without a manual page. We have completely different quality standards and expectations of software on UNIX, even for free and gratis software.
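As a minimal illustration, here is the skeleton of such a page for a hypothetical frob(1) utility, written in the standard -man macros (render it with `nroff -man frob.1 | less`):

  .\" frob.1 - manual page for a hypothetical example utility
  .TH FROB 1
  .SH NAME
  frob \- frobnicate the input stream
  .SH SYNOPSIS
  .B frob
  .RB [ \-v ]
  .RI [ file ]
  .SH OPTIONS
  .TP
  .B \-v
  Verbose output.
  .SH EXAMPLES
  Frobnicate a file and page through the result:
  .PP
  .nf
  frob input.txt | less
  .fi
  .SH SEE ALSO
  .BR cat (1)
  .SH FILES
  /etc/frob.conf
  .SH AVAILABILITY
  The frob package; the interface is Committed.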

This book, sometimes available in printed form and as a free PDF, explains how to use the nroff typesetting system:

http://oreilly.com/openbook/utp/UnixTextProcessing.pdf

The book is gratis to download, as it has been out of print for several decades, but it is invaluable when learning how to typeset documents with nroff(1), including manual pages.



> tar -z (tar is a tape archiver, not a compressor!)

Did you not want -z to exist at all (so you would pipe through gzip separately), or not want it to be a magical default?

GNU changed the -z handling at some point in the last decade (so that it autodetects whether input is compressed upon extraction and decompresses it without being told to), so now tar -xzf foo.tar.gz and tar -xf foo.tar.gz both work, where previously the second one would have failed because tar wouldn't have tried to decompress. Is that change what you're bothered by (it's pretty counterintuitive to me!), or did you just not want compression built into tar at all?

GNU tar now includes flag-based support for -j (bzip2), -J (xz), --lzip, --lzma, --lzop, -z (gzip), and -Z (compress).
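Concretely, as I understand the current GNU tar behavior (older versions differ):

  tar -xf  foo.tar.gz       # modern GNU tar autodetects the compression on extraction
  tar -xzf foo.tar.gz       # explicit gzip flag; equivalent result
  tar -cJf foo.tar.xz dir/  # creating an archive still needs an explicit flag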


I do not want -z at all, ever; -z has no business in a tape archiver, as it goes against the core UNIX philosophy of "do one thing, and do it well" [McIlroy].

Implementing UNIX tools inside of other UNIX tools is not how UNIX works; that might be acceptable on Windows, but it sucks on UNIX.

For example,

  xz -dvc archive.tar.xz | tar xf -
works everywhere as is, including Linux, while

  tar xJf archive.tar.xz
will not work on systems which do not use GNU tar, or where no xz program is installed for tar to invoke.

The GNU way is broken, because it is the Windows way, and Windows is busted.


Another example might be "sort -u" (because you can get the same result from "sort | uniq"). There seems to have been a pattern where people decided that if "foo | bar" (or "bar | foo") is a common enough idiom, they could or should create "foo -b" to mimic it.


Except that `sort -u` enables an optimization for dealing with a large number of duplicate lines.
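Roughly (GNU coreutils; the two commands below should produce identical output):

  sort -u words.txt       # drops duplicate lines while sorting
  sort words.txt | uniq   # same result, but every duplicate line gets sorted first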


How would illumos be better? I don't have any experience with it (although I've got some with Solaris/SunOS itself), and I'm curious.

I'd assume that Linux would have a lot more software available to it, as well as more maturity to its ARM ports.


illumos has the fault management architecture, SMF, and ZFS.
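To give a concrete taste (standard illumos commands, output elided):

  fmadm faulty      # list faults diagnosed by the fault management architecture
  fmdump -v         # browse the fault management log
  zpool status -x   # quick health check of all ZFS pools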

I'm not interested in running this computer as a desktop, but as a UNIX server which I can carry in my pocket.

As for software, the package library of illumos based systems can stand shoulder to shoulder with Debian based ones:

http://www.perkin.org.uk/posts/building-packages-at-scale.ht...

Linux on these types of devices is not interesting to me, as every such device comes with it. It's neither different nor original.


> Linux on these types of devices is not interesting to me, as every such device comes with it. It's neither different nor original.

Being neither different nor original seems like a plus when it comes to servers. Having a predictable, internally-consistent standard system would be best: easier management, easier configuration, predictable behavior between machines, and all that. Of course, which system it would be better to standardize on is a matter of opinion.

Also, I think anyone trying to run any sort of serious server on a CHIP is using a nailfile where a screwdriver would be better-suited.

If it makes you any happier, ZFS is doable on a Linux-based SBC. I found a fair amount of documentation of it being done on the first generation of Raspberry Pi.


Yes, but to me, having a consistent server means having an illumos based server, because that guarantees SVR4, XPG4, SUS, and POSIX behavior, as well as that my applications will JustWork(SM), and that my data integrity will be guaranteed.

I can't use ZFS on Linux because the place where I work doesn't allow it: they are scared of having to support it (and they don't know how), and they're scared of Red Hat denying them support. On top of that, why would I use ZFS on Linux when I can have the real deal on any illumos or FreeBSD derivative (assuming they would let me)? Again, zero interest in running Linux. I like sleeping through my nights instead of sitting on a priority 1 crisis bridge with a bunch of managers yelling at me, all because of problems on Linux that I wouldn't be having if I were running SmartOS.



