Back then we expected programs like `telnet`: a program that can be developed by one person (or a few at most) and requires just a few development tools (gcc, gdb, vim, make).
I'm not saying that what you point out is not true, but telnet is probably best compared to openssh, which seems to be developed by 2 devs and is way more complex. Both are tools, and both may be used for various over-the-network interactions with OSes.
> don't ask me why, I don't get it either
It's an ad network. It has to deal with compliance in many regions. There are mobile apps.
I would say back then, the most demanded programs were command line programs (small utilities that do one thing well).
Nowadays the most "demanded" programs are programs that want to attract as many users as possible (and so, they have to "scale", they have to be "distributed"), programs that need to grow as much as possible (the bigger the program the bigger the company the bigger the profit), programs that need to be rewritten in new tech stacks every 3 or so years.
I guess the mentality has changed along the way as well.
Back in the day, consumers were not in the software picture so much. Maybe some pre-PC home computers catered to home users, maybe a BBS to dial into, but that was it.
This has changed. And this is why we have Twitter.
> programs that need to be rewritten in new tech stacks every 3 or so years.
Where do you get this from? I don't see such a trend at all. Okay, some library needs to be replaced, and we share more code nowadays (the stacks are deeper, but what would you expect? more cycles to burn and more ecosystems). But no one factors in the cost of constantly rewriting to stay up to date with the hippest stacks.
> Where do you get this from?
I'd guess front end dev. 3 years is positively prehistoric laughing material for all the up and coming resume-driven-development front end rockstars.
Frontend dev has largely matured and stabilized, it's not the shit show it used to be anymore.
I'd say this is wrong. Creating a twitter-clone is simple, and can be done by one person. The hundreds are necessary for making it scale and keeping it manageable. And many of those hundreds are there for the content, not the tool. They filter out fake news and nasty behaviour, monetize the platform and do all kinds of other things which are not in the realm of Unix tools themselves.
You are severely underestimating the amount of work that goes into a (web) client application that is being used by as many people as Twitter's - UX design and testing, accessibility, internationalization, browser compatibility (including lots of exotic browsers you may never have heard of)...
No, doing a twitter-clone is quite simple, because the interface is not very complicated. It's even on the usual list of tutorial examples these days, so in the worst case you can just copy & paste the whole interface from somewhere. The backend is also not particularly complicated for an average developer.
The hard part in doing twitter is scaling up your poor clone, which works well enough as a single instance for a few hundred users, to an architecture of multiple instances serving millions of users.
But this comparison is unfair, because the original telnet client & server were also a single-instance solution, not some upscaled monster for millions. Even though a twitter-clone still needs a bit more knowledge than telnet, it's still on a level one person alone can handle.
That's twitter the server right there.
twitter the client is sending in your tweets, and showing you a list of tweets from the other people you are subscribed to, in some order - probably chronological.
The twitter client does not need to keep any data (between sessions); it should be usable by just sending off requests for data to twitter's public APIs and posting your tweets to the same. (Assuming, of course, as I said, that twitter's APIs give access to this, which I am not sure of.)
I didn't, and I'm sure I'm not alone. Let a website stay a website.
I expect it to be free and maintainable (from both developer and user perspectives).
So you still need Chrome/Firefox, and Gnome/KDE/Mate/whatever to run them in.
I think the OP's underlying unhappiness is more that "the people who want to use unix now are fundamentally different people who want different things than the people who wanted to 'learn unix' 35 years ago".
(Though I'll bet 80+% of the people they taught to pipe ls to wc were within weeks using irc and mutt whenever they were looking the other way in class, because for most of us people are more fun than machines, and back then we used irc/usenet instead of Twitter/Facebook, but to fill the same sort of niche.)
There were simple technologies which did a lot too, like Hypercard.
The creeping complexity of the web isn't something which had to happen. I understand the reasons it did happen, but in an alternative timeline, without Netscape vs. IE, the dot-com boom/bust, and the whole Netscape meltdown, it might not have happened.
People were building sophisticated applications which did a lot in the days of NeXTStep, BeOS, SGI, and Amiga, without this massive complexity build-up. Altavista scaled well too.
That can stay within the browser or some shared library. It absolutely doesn't need to affect unix/linux architecture in any way.
I think for most people learning unix, understanding chrome/firefox boils down to knowing "there is a chrome/firefox repo somewhere, it gets compiled/packaged for your distro somewhere and has a bunch of dependencies". DE doesn't need to have any provisions for Twitter specifically either, apart from what is already essential for most GUI applications.
I get what you mean, I guess I just twitched reading "unix twitter client" being something we expect from unix.
I hear that more and more, often followed by: "In the 90's..." But then I compare the steps it used to take to setup my NextCloud install, including altering php.ini, installing and configuring mariadb, php-fpm, getting a free start-ssl cert, etc. to my current docker-compose.yaml and I think: No.
Edit: I also think we forgot all about how we used to do things manually, like mounting USB drives with kernel 2.4. I love how Linux is today.
But the automation comes with the downside that we're forgetting how to build simple systems at all.
Put differently: Try explaining your docker compose setup to a beginner that only knows how to use normal desktop software.
It's a mind-numbing number of abstractions, but one can understand the high level if one understands what an OS (or a computer) is.
It IS easier, but I agree it's much more complex under the hood. I don't think that has to have downsides. A human can only comprehend so much, we must build on the skill-sets of our fellow humans.
It's a bad interpretation because the abstraction level is arbitrary. GUI applications? No. Terminal, yes, but we need to automate. Wifi? No, we virtualize the network. DNS? Depends, in k8s it is used for service discovery. Tooling? Package manager: yes. Network Center? No. Firewalls? I don't think so.
So, what constitutes a 'fresh linux install'? Why are there different distros (archlinux, ubuntu, alpine)?
What are the contracts for such a Docker image? Should it be secured? How is input and output arranged on the Docker image? What if I want to use a smarter way for logging (using UDP packets to a discoverable logging provider), can I tell all my Docker images to start using that?
How about monitoring and reliability? What constitutes a failed docker image? How do I detect it?
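To be fair, Docker does have per-container knobs for a couple of these questions. A rough sketch, assuming a syslog collector reachable at logs.example.com and a hypothetical my-image that ships curl and exposes an HTTP health endpoint on port 8080:

docker run -d --name web \
  --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  --health-cmd='curl -fsS http://localhost:8080/health || exit 1' \
  --health-interval=30s \
  my-image:latest

docker inspect --format '{{.State.Health.Status}}' web   # starting / healthy / unhealthy

But note these are per-container settings, which is exactly the kind of thing that has to be repeated and kept consistent everywhere.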
An abstraction is only good when it covers 99% of the underlying complexity. This abstraction is so leaky, it is more of a burden. Especially to beginners, who now are tasked with understanding Docker and understanding the limits of the abstraction.
Good technologies are explainable through solid abstractions. They are not 'leaky'. For me, Docker and k8s are so complicated because they are based on leaky abstractions and not hiding the underlying complexity sufficiently.
Bingo. I've worked with so many Docker houses of cards slopped together that like to pull the latest docker images regularly in their docker-compose and, surprise, something changes in that image's structure and the assumptions they were leveraging, which breaks the rest of the application. As the number of containers increases, the failure rate also increases. You can lock in versions, but for trendy users, "then why are you using docker, bro!?"
Containers aren't bad, they're actually great and useful technology, but using them for rapid development just in the hope of managing complexity often only makes things worse. They're far too often misused and abused rather than used for sane development purposes.
Still, once you understand it you can build on that. But tbh I agree that there are now students who lack the basic sysadmin skills needed to understand docker images from the inside. That said, I don't understand Kubernetes very deeply, so we complement each other nicely.
You can absolutely run all of the above in Docker containers. Just because most don't doesn't mean you can't or shouldn't!
For a user who is not concerned with the difference between kernel and userland, I think the analogy is good enough. Just throw in that it has to be a Linux system and you can get away with explaining it the same way you would VMs, just more lightweight and reusing some of the "base system".
"Back in the days", you installed one package, edited one config, started the service, and it "just worked". Now i need to install a docker, type some docker install command, that pulls who-knows how many files, puts them in some 'weird' location, including the config file (which belongs in /etc), so instead of having a 2mb service, I now have a 1gb docker image sitting on my system, with a config file hidden somewhere, and I, the user, don't see any benefits, compared to just 'apt-get install'-ing something.
I mean, I understand, daling with dependencies are hard for some developers, but sometimes is just overkill.
The "whole computer" analogy works because I can drag it anywhere, plug it in and it runs exactly as intended. I can even copy it and run in 1000 times in parallel. When people complain about overkill I start talking about shared layers.
I understand it's easier to ship software as an "almost-VM", but imagine everyone doing it... need a browser? download the firefox docker image. Video player? docker image! Text editor? Docker image!
If firefox can make it 'just work', and a whole (e.g.) LAMP stack can 'just work', why not others?
Docker is a relatively new thing, and software worked for decades before that, without any real issues.
Sure, as long as a third party did all the hard work of integrating the package into the system first. Have to install something outside the repo? Clear your schedule...
For me, it frequently didn't. Sure, nginx will work out of the box, but I've had some difficult Apache installs, and less mature software? Often a dependency/configuration nightmare, and then, sometimes, it still would die with an opaque error message. And then something breaks and getting to a clean slate can be challenging.
That's one of the huge upsides of Docker for me: Stuff finally "just works" on my machines, too! And it actually "just works" on others' machines as well, which it has all along as they tell me, but it even does so when I'm watching. Amazing, really.
I think the config situation has improved as well – the most frequently used configuration options usually can be passed in as environment variables, so I don't have to pin things to non-default values just because the config file I copied and edited had that default value in its config at that particular version, only to break five versions later – neat. If I do need a config file, I keep it in git and bind-mount it in – the location is usually easy to find (e.g. in the docker run one-liner on the DockerHub page).
Image sizes can be crazy, that part needs work, many people don't write space-efficient Dockerfiles. I try to use alpine or scratch images where possible, that makes the problem quite manageable; and, frankly, the amounts of network, storage and ram needed for the overhead of an alpine image are a very fair bargain for the time and headspace I gain by being able to treat applications as a way darker-gray box than otherwise.
Well you see, Linux culture doesn't really consider the concept of "system" and "application" to be independent. Everything has to be built to work together or the whole thing catches fire and explodes. Consequently, it's easier to build entirely different systems for every single application and package them all up together than it is to try and get them to work on the same system.
Weirdly, they seem to think this is a good thing.
Nix is just another overengineered solution to the problem that doesn't really exist on other platforms that are actually, you know, platforms.
Having them all in containers and a docker-compose.yml file makes it easier to quickly start an environment for a specific project.
Of course, like everything in life, it's a tradeoff, and in this case we're trading disk space and some overhead for ease of use and portability.
Most images have environment variables you can set that will control the config, or you can bind mount a config file from anywhere on your hard drive.
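A minimal sketch of both styles, assuming the official postgres image (configured via environment variables) and an nginx image with a bind-mounted config file:

docker run -d --name db \
  -e POSTGRES_PASSWORD=example \
  postgres:16

docker run -d --name web \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -p 8080:80 \
  nginx:alpine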
> Now i need to install a docker, type some docker install command, that pulls who-knows how many files, puts them in some 'weird' location, including the config file (which belongs in /etc), so instead of having a 2mb service, I now have a 1gb docker image sitting on my system, with a config file hidden somewhere, and I, the user, don't see any benefits, compared to just 'apt-get install'-ing something.
On the other hand, I can `docker-compose up -d` on practically any platform in existence. I don't have to worry about what package manager this host uses, or whether the version they have supports the config flags I'm using. It's the same everywhere. I don't have to deal with the pain that is 3rd party apt repositories, and handling pinning when they want to ship a newer version of a library than the main OS does (I believe Salt does this).
That, perhaps, is worth less to you than to other people. I would _far_ rather handle setting up another couple Docker containers in Compose than I would handle figuring out which Apt repository they actually publish to, and handling the installation, and fixing the configs (because many services ship with broken configs, because they have no idea what your Postgres password is, for example. Not true of Docker, their docker-compose.yml typically includes setting the password).
This is especially true of multi-component systems, or systems that I'm testing out or using in dev. If I want to enable the Jaeger built into an application, it's one command to run it. In apt, that would have to install 4 or 5 daemons that I would then need to configure.
I also think complaining about disk space is rather disingenuous at this point. Yes, disk space costs money. No, you are unlikely to have Docker make a significant difference in the price to build your PC. 1GB is huge for a Docker image (but many people are bad at trimming them down, so I'll accept it). In 2017, storage cost roughly $0.03/GB on an HDD. You're paying 3 cents to store that Docker image. Probably less at this point, I saw an article about WD releasing $15/TB HDDs in the near future. The moderate performance impact seems more important to me.
Try running an OS off a USB drive back when Windows 2000 was around. It was very, very difficult. These days I just run my OS off Samsung SSDs plugged in externally 24/7 on nearly all non-cloud-hosted machines.
If I go to a friend's house or bring my laptop around, I just boot off that external flash device. Separating your data from the machine is very flexible.
This is simple now because the hard work has been done. It's also way easier to learn programming these days with StackOverflow and the virtually infinite resources of the modern internet. Even the 80s or 90s internet had some advantages over today, but holistically it's just really been getting better and bigger. Learning programming back then required decent manuals and access to other people with expertise.
And on the flipside: Try explaining to people that running stuff on hardware is still viable, that they don't need to break their software up in a dozen containers when, if you look critically, it's just another CRUD app with authentication.
I work with Go (system programming) and containers today (I'm 40+, working with Unix since '94), and the whole development/deployment process is way easier/cheaper/better engineered than it was before, when I was writing and maintaining C-based services running on Unix. I remember the staging environment was a physical machine (an expensive spare machine idling), the test suite was bad, we needed a huge QA team, we used to debug stuff in production, and every deployment was a surprise. Software engineering best practices were almost 0.
> Try explaining your docker compose setup to a beginner that only knows how to use normal desktop software.
Which "beginner that only knows how to use normal desktop" is trying to run "docker compose setup"?
Simple example from outside of computing: if you have high quality steel available, the design of many machines becomes much simpler.
Accountants, financial managers, psychologists, chefs: any field that requires training has intricacies that can't be explained simply to a lay person.
I think the point is not about "add an abstraction layer" and "the problem is gone".
You could make a tar.gz of a chroot in the 90s too, or you could make a snapshot of a VM. Or you could have an abstraction layer too (I did a lot of that for provisioning apache+mysql), like: addwebserver.sh "hostname" and addvhost.sh "customdomain" (which also included the php thing, backups, and compliant log rotation).
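A hypothetical sketch of what such an addvhost.sh wrapper might look like today (assuming a Debian-style Apache layout; the paths and the a2ensite step are assumptions for illustration, not what actually ran back then):

#!/bin/sh
# addvhost.sh <domain> -- create a docroot, write a vhost, enable it, reload Apache
domain="$1"
docroot="/var/www/$domain"

mkdir -p "$docroot"

cat > "/etc/apache2/sites-available/$domain.conf" <<EOF
<VirtualHost *:80>
    ServerName $domain
    DocumentRoot $docroot
    ErrorLog /var/log/apache2/$domain-error.log
    CustomLog /var/log/apache2/$domain-access.log combined
</VirtualHost>
EOF

a2ensite "$domain"
apachectl graceful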
I think the main point here was that UNIX (and the internet) was "easier to teach" because it was simpler.
Or... do we need to understand and install docker in order to teach "list files" on the first day?
I don't know about that. You had the BSD vs System V at the time, and standard commands had different flags, default shells, etc.
Then shortly after, all the various RISC Unix flavors.
There is, of course, more than Linux these days, but it has so much market share on the server side.
I worked one place that had IRIX, AIX, Solaris, and Tru64 on one site, for a company of only 600 people. Each with different service management, shells, low-level (hardware/firmware), backup, etc all to learn. Old UNIX was a fucking shambles, with companies desperate to break things in small and large ways to try and create lock-in to their flavour.
Then the network cards had DIP switches on them, and things like thin-net or the big AUI network connectors.
This made things we now take for granted a highly skilled domain. Getting your IBM box to talk to the Sun workstation on the other side of the room was rocket science.
The different dialects of UNIX meant that there were niches for those that knew the difference between how to run a 'ps' command on a Sun box to (say) an IRIX box.
Incidentally I am not sure it was about lock in for all companies. IRIX was about doing things in 3D and enabling those applications to work. They had no interest in tinkering with UNIX to make it different for the sake of it. It would be different because they had different hardware and 3D to be standardised.
I felt IBM was different just for the sake of it with AIX. The same with Sun except they dominated in some universities so were the de-facto standard for many people coming through education at that time.
Oh, and compiling open source programs across several of those. Real fun.
Looking at modern distributions I hardly see a difference given distribution specific directories, packages or tooling.
You still have that.
For example, I develop on a Mac but production is Linux. The differences between GNU TAR and BSD TAR are a continual pain in my side. More everyday commands like ps still trip me up, too.
It's not nearly as bad as back in college when I had to keep track of the differences among Solaris, HP-UX, Slackware, FreeBSD and NEXTSTEP, but still.
I also develop on macOS for Linux, and using the GNU equivalents with a quick function for path setting is, well, not very hard.
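For reference, that "quick function" amounts to something like this; a sketch assuming coreutils and gnu-tar installed via Homebrew under the default Intel prefix (/usr/local):

use_gnu() {
    # put the unprefixed GNU tools (ls, tar, ...) ahead of the BSD ones
    PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
    PATH="/usr/local/opt/gnu-tar/libexec/gnubin:$PATH"
    export PATH
}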
I’m far more open to alternate approaches when they are laid out in a clear, detailed and unbiased way. Mutual respect and all that.
Laying out alternate approaches assumes that I haven't done any research into the matter at all. It may be that the reason I don't use something is that I have never heard of it, in which case having the alternate approaches laid out is useful. Otherwise, it is a waste of the person's time to lay out what may already be known.
Something like "Have you considered installing GNU coreutils? If so, I wonder what made you choose not to" might do the job. But really "Any reason why you didn't install GNU coreutils via brew" seems to directly ask the question.
It can come across a bit passive-aggressive. But I don't think we should concede clear questions to passive-aggressiveness.
Forcing you to examine your reasoning usually results in better understanding.
> I would have gone with (installing MSDOS)
what if the person asking the question wouldn’t have gone with any approach in particular? Do you find it more polite to suggest an approach nonetheless?
It's painful, and usually waaay easier to call people an idiot, so it's important to pay people to be teachers.
For my own tech stack for example, the most expensive part of a build in terms of compute resources is TypeScript compilation.
The difficulty lies in making sense of the stack of different pieces of software that make up a Linux system, and the terminology we use to communicate about them (e.g. what mounting a filesystem means, which was initially hard for me to understand, coming from Windows).
In teaching the first steps should always be to make clear:
* why is this interesting, why would they care?
* why is this easier than it looks? (Take their fear away to get them started, reintroduce fear later as you see fit)
* show some examples that solve problems they always had, but never could solve any other way
Not all will learn, but even if you just managed to get people to understand why this makes sense and that it is totally possible to do things on your own, you succeeded.
The worst teachers in my life were those where I had to learn something incredibly complicated without having a remote clue what it was needed for.
processor + RAM in motherboard with hard disk and some kind of integrated graphics solution/external graphic card for video output, plus keyboard + mouse for input (hardware)
hit the power button, the motherboard initializes and runs the BIOS, which jumps to the hard disk MBR (typically; ignoring CD/USB/net boot)
MBR jumps into bootloader on first sector on hard drive (GRUB or syslinux/extlinux) (this is my pre-EFI understanding)
bootloader loads kernel (with optional initrd) from hard disk into memory
kernel performs its own init, mounts root device (hard drive formatted in Linux-friendly file system like ext4 usually), sets up all hardware given bundled drivers/modules, calls /sbin/init or comparable
/sbin/init spawns TTYs, does network configuration, then starts the X server + a display manager (typically gdm, if I remember correctly, for GNOME?)
X has input drivers for keyboard + mouse, output driver to get graphics onto monitor(s) (not sure if Wayland finally killed X yet, gonna guess no)
from there, you can load /usr/local/bin/Chromium, which will create a window on DISPLAY=:0, and go on Hacker News (granted you have it installed with all of its wonderful dependencies, probably 300mb+ of executables and resources and dynamic libraries)
the fact that there's 20 different ways to do a lot of the steps I just described is... interesting. the magic (to me at least) happens after /sbin/init. What all gets started? Why? I feel the latest Ubuntu probably spawns... 40-60 processes? systemd and this/that/the other.
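If you'd rather check than guess, a few rough one-liners on a systemd-based distro (numbers will vary per install; these are generic commands, not Ubuntu-specific):

ps -e --no-headers | wc -l                            # total processes right now
systemctl list-units --type=service --state=running   # services systemd has started
pstree -p 1 | head                                    # what hangs off PID 1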
Add in the many tricks CPUs employ, like branch prediction, re-ordering, et al. Things don't happen in sequential order anymore, and multiple chunks of that code are being operated on at the same time.
Even Assembler is pretty much a high-level language nowadays, because once it's loaded in memory, lots of unseen permutations will happen to it at execution time. It's a language of macros after all.
Or do you want people to be able to use it on a day to day basis?
These are very different courses
I think the original comment I was responding to was about the complications of Linux. To me... the complications of Linux aren't really in the 20+ years of "tech debt" that is "just make it boot and work for x86_64 like it used to work for i386", but moreso the 20-30 different configuration files and formats that happen after boot.
Funny. I've been using Linux for 10 (?) years and I still don't know what mounting means. I just know it's something that needs to happen in order for me to access the contents of a disk. I don't really care about it either, I only care about accessing the data. Why isn't it automatic anyway? Windows does it automatically whenever possible, why can't Linux?
Linux can do that as well, every desktop environment worth their salt does it by default.
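For the curious, this is roughly what the desktop automounter does for you; a sketch, assuming the drive shows up as /dev/sdb1 (the device name and mount point are assumptions about your system):

sudo mkdir -p /mnt/usb
sudo mount /dev/sdb1 /mnt/usb   # make the filesystem visible at /mnt/usb
ls /mnt/usb
sudo umount /mnt/usb            # detach it again before unplugging

# or let udisks handle it, the way the desktop environment does:
udisksctl mount -b /dev/sdb1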
There is also no one canonical linux stack. Different distros do things differently (sometimes drastically so).
E.g. systemctl flags and argument ordering differs from sysv style /etc/init.d/foo commands (as just one example)
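A quick sketch of the same intent in the different dialects ("foo" is a placeholder service name):

systemctl restart foo       # systemd
service foo restart         # wrapper available on many distros
/etc/init.d/foo restart     # classic SysV-style init script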
So to learn “GNU/Linux” really you need to learn several distributions and piece together the common components and why different approaches are better suited to different environments.
Does this fragmentation help or hurt the ecosystem over the long-term? I personally think it's the #1 reason Linux never stole any sizable market share from Mac OS X/Windows. Hardware support just isn't there. Polish just isn't there. Instead I can name multiple init systems, multiple shells, multiple package managers all fighting for space in our "brains" to be remembered and used. :P
I assume you're speaking about desktop. macOS never had any huge market share in this space. It's the iPhone (and its predecessor, the iPod) which allowed Apple to become as big as they currently are.
> Does this fragmentation help or hurt the ecosystem over the long-term?
Over the long-term it allows for abstraction layers, such as Ansible or Nix, to become status quo.
If we really want good hardware support on Linux, we have to have at least some reference that manufacturers can use to verify their hardware works.
The bad manufacturers like Nvidia and Broadcom intentionally block Linux compatibility for drivers they don't write themselves.
The problem for hardware manufacturers is that they think their drivers are valuable trade secrets and don't want to mainline them into the kernel...
Again, this is by design to try and force manufactures to open their driver code. To me, that seems like a lost cause now thanks to the proliferation of binary blobs required for hardware anyway.
> So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? [...] Simple, get your kernel driver into the main kernel tree (remember we are talking about drivers released under a GPL-compatible license here, if your code doesn't fall under this category, good luck, you are on your own here, you leech).
In order for a driver to be in-tree, it has to be open.
Incidentally, this document is full of the kinds of arrogant and user-hostile arguments that the Linux community is known for, for example: "You think you want a stable kernel interface, but you really do not, and you don't even know it."
The example provided: "how many files in a directory? just pipe `ls` to `wc` and because `ls` output is one file per line, you've got your answer!" Except newlines are totally valid characters to have in filenames. Try it:
echo "foo" > 'test
ls | less
ls | wc -l
“Today in some versions of Linux ls puts single quotes around file names...”
I recommend a shell that’s intelligent enough to distinguish lines and present them downstream as strings including spaces. I hope not to come across such an odd implementation of ls ...
I think the maintainers made a good decision with this change but this page exists because they received a lot of negative feedback. I’m kinda surprised to see people wanting ambiguous output.
It's visual noise, and because of column output I never found it ambiguous.
It's not pretty, and it shouldn't be done, but it's doable (in many shells, though none of this is specified in POSIX):
$ cd "$(mktemp --directory)"
$ touch $'foo\'\n\'bar'
$ ls --quoting-style=shell-escape-always | while read -r escaped; do
>   eval "filename=$escaped"
>   echo "[$filename]"
> done
- disk encryption
Not even counting things that aren't technically part of Linux but are still quite common, like:
- configuration management tools (ansible, puppet, salt, etc)
- every language's reinvention of 'make' and their own package managers
Because those languages have a life outside Linux distributions, and even those cannot agree on which package manager one is supposed to use, nor on developer-specific packages that don't infect all users on the same machine.
And make is outdated even for modern C compiler toolchains with support for incremental compilation and linking. That is why everyone who cares about build performance is on ninja.
.. you forgot the plural there, fixed it for you.
There are many Linux distributions that include neither of them, like Android and ChromeOS.
> Today in some versions of Linux `ls` puts single quotes around file names which contain white space likely in order to have those paths easier to copy and paste, but it does so only if !isatty().
I almost fainted. I believe this is a misprint: Linux puts the single quotes only if isatty().
The author's tweet, linked nearby, uses different wording and gets it right.
>in some versions of Linux ls
Do you mean "in some configurations of GNU ls"?
I wonder if educating people about computers these days might have something to do with intentionally vague and gatekeeping language.
No one talks about an "Emacs" system or a "Firefox system" either. They talk about the relevant products where the label is disambiguating, like GNU Emacs.
I'd like to know more about namespaces, file organisation - what 'should' be where - processes and syscalls, etc.
(To be honest man pages just don't suit me for deliberate learning, I treat them the same as --help outputs, whereas for an answer to this question, for example, I'd find a textbook more helpful.)
Anyway, you don’t learn it before you start using it. The more you use the command line, the more you’ll find out that what you’re trying to do is easier or more scriptable there (with personal exceptions, like, I use a GUI file manager). You learn it as you need it.
I just feel there's a lot I don't know still, and I'd like to pick some more up preemptively rather than waiting until I need it and somehow guessing it exists, or asking the right question to discover it as an answer.
Man pages are great when I want to know how to use something specific. I'm asking more for a textbook I can read cover to cover, and discover things I didn't know much or anything about, or hadn't thought to use.
As for the "file organisation", here is the Filesystem Hierarchy Standard (FHS): https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard.
There is also a lot of mention of "file organisation" in the book.
But Linux? I wouldn’t really know where to start. Maybe some old 80’s UNIX books. Out of newer books maybe one by Sobell or some of the O’reilly books. I think I tried learning from the “Unleashed” and SAMS books, which aren’t great.
There are two things to learn. The technical aspect and all the commands. And then, perhaps more importantly, the "UNIX philosophy". I've seen too many systems administered, and scripts set up, like they were Windows systems.
The worst way to learn is by googling questions and topics, then reading the first page of search results which seem to be the same regurgitated cut and paste crap from the big “Linux admin” websites.
* https://github.com/learnbyexample/scripting_course/blob/mast... - my collection of resources, mostly for command line and bash scripting
* https://0xax.gitbooks.io/linux-insides/content/index.html linux kernel and its insides
A 3rd edition is coming out in March 2021.
Linux, FreeBSD, OpenBSD and so forth are in a much better place now.
Just another flavour of vendor-specific weird ass versions of Unix.
Not an uncommon way to run FreePBX for VoIP stuff, which is based on CentOS.
BSD, on the other hand, IS Unix, and its utilities remain much smaller and leaner than the Linux ones.
I've been working with BSD and Linux since the 90s; then, around 2000-2005, I had to manage some IRIX, HP-UX and AIX systems, and they were all different, with different sets of parameters for common tools or other specific management tools. I spent years mastering shell scripting, hardening and troubleshooting on HP-UX and AIX.
Minix is trivially easy to work with but that's not Unix, just something similar.
Linux (with GNU or busybox) is super easy to find and often easier to run (both on qemu and on modern computers you can just stick the kernel somewhere the [virtual] machine can read it and then just power on.) Once you do that all the code is there. It's not unix though, it's not really even POSIX, it's just what everyone is currently doing with computers (but you really need terraform, docker, and node.js to have the complete picture.)
I thought the reason was to aid shell scripts that assumed no whitespaces in file names, wasn't it? Also, I believe I've seen single quotes when using `ls` on a terminal, so the behavior is not only for `!isatty()`.
$ touch 'a b' c d
$ ls
'a b'   c   d
$ ls | cat
a b
c
d
It wouldn't help shell scripts that assumed no whitespaces in file names anyway. You would get something like "'a", "b'", "c", "d" tokens which make no sense anyway.
The only sane way to iterate over files that I'm aware of is to use something like
find . -type f -print0 | while IFS= read -r -d '' fname; do echo "fname: '$fname'"; done
Splitting on spaces, expanding asterisks, shell scripting is a minefield. You would basically never want any splitting to happen. Applications like reading a CSV by splitting using IFS are dirty hacks, that break with the slightest additional file parsing complexity (escapes, quoting etc).
In Python you can just do `os.listdir('.')` and it will actually do what you want. No 5 layers of "oh, actually" and "yes, but what if", which inevitably happen in threads discussing the simplest shell operations. It just works as intended. If you only want the files and not the directories, you can do `filter(os.path.isfile, os.listdir('.'))`.
There is no reason to use shell scripts for anything more complex than a few lines or for one-off interactive work.
while read -d '' fname; do
  echo "fname: $fname"
done <<<"$(find . -type f -print0)"
for fname in *; ...
for fname in *; do echo "$fname"; done
With bash or on GNU systems try
printf "%q\n" "$fname"
I always have to look up the 'safe iteration' invocation when iterating over files in a directory, because it involves jumping through a few more hoops than is really reasonable.
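One hoop-free variant worth memorizing is to let find run the command itself, so no shell loop or word splitting is involved (a sketch; replace printf with whatever you actually want to run per file):

find . -type f -exec printf '%s\n' {} +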
I was also confused when it changed to the more complex behaviour, but not everything about it is bad.
I've seen arguments for object-oriented shells when we want to have more complex plumbing than traditional Unix shell piping, but not sure if those goals couldn't be achieved while remaining in text-only land.
It seems very questionable to be living with a severe restriction from the 1970s in 2020.
But... it can't be unbroken without breaking backward compatibility in all kinds of legacy code.
This breaks a remarkable number of things! I just deleted it from Finder, then remembered I could `rm '-file'`.
> Please don’t make the behavior of a utility depend on the name used to invoke it [...] Instead, use a run time option or a compilation switch or both to select among the alternate behaviors [...] Likewise, please don’t make the behavior of a command-line program depend on the type of output device [...]
> Compatibility requires certain programs to depend on the type of output device. It would be disastrous if ls or sh did not do so in the way all users expect. In some of these cases, we supplement the program with a preferred alternate version that does not depend on the output device type. For example, we provide a dir program much like ls except that its default output format is always multi-column format.
As a result, GNU provides a "dir" command since the early 90s, which is meant to be a consistent alternative to "ls". Yes, unlike what the myth said, it was not here to help CP/M or DOS users.
The issue of "ls" has been described in this StackExchange answer by Eliah Kagan ,
> When its standard output is a terminal, ls lists filenames in vertically sorted columns (like ls -C). When its standard output is not a terminal (for example, a file or pipe), ls lists filenames one per line (like ls -1). When its standard output is a terminal and a filename to be listed contains control characters, ls prints ? instead of each control character (like ls -q). When its standard output is not a terminal, ls prints control characters as-is (like ls --show-control-chars).
On the other hand,
> Whether or not its standard output is a terminal, dir lists filenames in vertically sorted columns (like ls -C). Whether or not its standard output is a terminal, when dir encounters a control character or any other character that would be interpreted specially if entered into a shell, it prints backslash sequences for the characters. This includes even relatively common characters like spaces. For example, dir will list an entry called Documents backups as Documents\ backups. This is like ls -b.
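On a system with a reasonably recent GNU coreutils you can see the difference directly (a sketch; exact spacing and quoting depend on version and configuration):

$ touch 'Documents backups'
$ ls
'Documents backups'
$ dir
Documents\ backups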
This appears to be quite a subjective claim; people comprehend/retain information extremely differently. The ls command applied as an example to the argument undermines its credibility, as virtually every Unix user is aware of its importance. ls remains arguably the most basic Unix filesystem command, so why would it be difficult for users to understand its relevancy?
So ls is chosen because it is supposed to be the universally familiar archetype, but the author is noting that the corruption of the underlying principle in the default configuration of some variants & distributions has extended to something as basic as this.
Mostly we've traded a class of problems (debugging unknowns and differences between systems) for a new class of problems (complexity of cloud providers, abstractions and less debuggability).
i'd say that the problem now is less a lack of simplicity and more that there just isn't a whole lot of interest in teaching practical usage and operation of the computer systems that underlie a lot of modern tech: from where i sit, /industry/ (individuals are whole different story) is interested in paying well primarily for new development both applications and infrastructure, but has decided that the quality, improvement, and maintenance are more of an afterthought. it's a microcosm of a larger human tendency to prioritize novelty, perhaps.
from where i sit, much of the opportunity for a good (well-paid) position is skewed towards greenfield development: if you can build new applications, or architect new infrastructure, great! industry loves you, and will pay you much dosh to do that. if you want to fix or improve existing things, or ensure that new stuff actually works, you're valued less.
QA/SDET as an attractive profession appears to have largely ceased to exist, cause you can just assign that work to the greenfield devs, and they can do it sorta--they'll probably focus more on new development, and the QA will suffer for it, but they'll produce enough testing and validation product to tick a box.
on the infrastructure end, architects are in great demand, but once that infrastructure is built, you don't "need" anyone to maintain or improve it, or diagnose issues as they arise. i've spent a lot of my career doing tech support work for infrastructure for odd career path reasons, and capable colleagues or counterparts are few and far between: the people i support, who maintain existing systems, seem to have not the slightest idea how they or their underlying protocols work, and other people in support roles seemingly sit on a binary divide of incredibly capable technical diagnosticians and troubleshooters and people who have been shunted into the role for lack of ability to perform in greenfield work, but can't perform in other roles either.
management has often asked me why they can't find people who are good at maintaining infrastructure or who are able to diagnose and address emergent problems, but the root of this seems quite obvious. it's not that the systems have grown too complex: they've always been complex, and while the locus of that complexity has perhaps shifted somewhat, i don't think our forebears were operating in some simple, easily-understood world that has ceased to exist. the problem is that nobody wants to pay anywhere near for an advanced brownfield skillset compared to what they'll pay for adjacent skillsets in greenfield work. the smart and capable people recognize this and move towards greenfield work even if they don't like it as much, and brownfield work is left with a sea of people who couldn't transition and can't deal with the complexity because they never could. the complexity or difficulty ain't new, but all the people that could deal with it were driven out.
me> hey how do you bla
guru> $ man bla
me> oh shit thanks
Let the old flesh die. Maintaining it is Sisyphean, and it should have been written in Rust.
>the smart and capable people recognize this and move towards greenfield work even if they don't like it as much
People can actually like maintaining enormous, C/C++ legacy codebases instead of greenfield work in safer and smarter and easier-to-use technologies?
Masochists, I guess.
in a sense, yes? it's not masochism, it's perhaps recognition that old systems are imperfect, but weren't built entirely out of toothpicks and gum, and can be retrofitted to remove their worst parts, and that strong parts do exist and can be made stronger?
the alternative seems to be that we continuously build new toothpicks and new gum, but that somehow those will be better by virtue of having been forged in a modern toothpick and gum factory that puts out perfect new shiny. it definitely has no faults of its own to be discovered several years down the line, nope.
This is one of the worst behavior changes ever to come out of GNU. It should never have been the default.
[+]: Imho it makes it easier to copy-paste paths (sometimes I need to), as well as to spot whitespace / strange characters in filenames.
Path handling is an important feature. It should be standardised and predictable. It doesn't even matter how it's standardised. What matters is that everyone uses the same system so there are no random surprises or thwarted expectations.
In a robust OS everything would be a lot more interoperable and standardised than it is in UNIX. Being able to pipe things around is not the killer feature it might be - not if you have to waste time pre/post translating everything for arbitrary reasons before you can do anything useful with it.
But if we're being honest, path handling (as well as structured data) in shell scripts and pipelines has always been one of the largest trash fires in Unix -- while I don't personally like how PowerShell solved the problem on Windows, at least they tried to solve it.
I don't think I was the only person unhappy with this. The fact that https://www.gnu.org/software/coreutils/quotes.html exists seems to indicate that others feel as I do.
Furthermore, I was disappointed by the reaction from both the developers and other people leaping to their defense who felt that they'd been personally insulted by users suggesting that this may have been better as a non-default option. If I can set QUOTING_STYLE=literal everywhere, surely the distro maintainers who wanted this could have set QUOTING_STYLE=shell-escape?
I'd be the first to say that everyone is free to disagree with me. I have the source, I have a workaround, I adapt.
For instance, one could argue that hiding the behaviour behind a flag makes the feature effectively useless (users that would benefit most from it would never know about the flag, and users who know enough to find the flag probably know about `find -print0` too). Punting the problem to distributions just means that everyone who is against the feature on general principle will now hound distributions for making the change (probably making arguments like "why are you making yourself incompatible with Debian X or Ubuntu Y.Z?") -- and will also result in the feature being unused and thus useless.
Now, is that enough of a reason to make a change to the default behaviour? I don't know, but to me it doesn't seem as though the right decision was "obvious". And again, the behaviour is only different when the output is displayed on an interactive terminal -- so the only breakage is the interface between the screen and your eyes.
Ignoring that FOSS developers are basically working in the public good (and usually unpaid or underpaid relative to their impact), this is a childish way of acting towards anyone in an even remotely professional environment. The maintainer replying to you was actually a courtesy, but of course you see it in a negative light.
If every technical disagreement you have ends with you ranting/abusing the other person, you'll quickly discover you're the only one left in the room.