Maybe within Fuchsia, but not in my UNIXy systems. All users on a system need r-x on /Users, but that doesn't give them r-x on every user's home directory located as a subdirectory of /Users - a handle acquired by bob to /Users does not imply access to /Users/delinka.
Further, if your process is treating '..' as a subdirectory, you're doing it wrong. Paths must be normalized (e.g. ~ expanded, . and .. resolved) before requesting a handle via absolute path.
Lastly, this document reads as if knowing a full path grants access to that path and its subdirectories. If that's the case ... oy.
".." is required for atomic traversal (similar to symlinks), which is important in some situations, such as making sure a file tree you've descended isn't removed out from under you in a way that breaks your state. (The directory itself can be moved while you hold the handle, but the important thing is that being able to rely on ".." permits certain algorithms that are safer and more consistent.) Canonicalizing paths introduces race conditions between canonicalization and actual access, which is why this is performed in the kernel.
Canonicalizing paths makes sense if you're accepting paths from untrusted sources and you cannot make use of POSIX openat() + extensions like O_BENEATH. HTTP GET requests are supposed to be idempotent, anyhow, so there shouldn't exist any sort of race condition as a conceptual matter.
But in regular software, it's better to just pass paths as-is. The shell performs "~" expansion, but it doesn't resolve "." or "..". And the shell performs other expansions, like file globbing, which there's rarely any reason to implement in regular software. Supporting "~" expansion but not file globbing is inconsistent; if you're not the shell, don't implement shell-like features, as it just creates confusion and unnecessary complexity.
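For the untrusted-source case mentioned above, here's a minimal canonicalize-then-check sketch (`safe_open` is a hypothetical helper, not any real API; it remains subject to a TOCTOU race against concurrent symlink swaps, which is exactly why kernel-side openat()+O_BENEATH is stronger):

```python
import os

def safe_open(base_dir, untrusted_path):
    # Canonicalize first, then check containment. A symlink swapped in
    # between realpath() and open() can still escape, so this is a model
    # of the idea, not a hardened security boundary.
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, untrusted_path))
    if os.path.commonpath([base, target]) != base:
        raise PermissionError("path escapes base directory: %r" % untrusted_path)
    return open(target, "rb")
```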
Calling ".." a holdout from POSIX is misleading. The semantics exist for legitimate and even security-relevant reasons, albeit not necessarily reasons that an embedded smartphone OS may care about. What POSIX lacks is O_BENEATH (from Capsicum) or a similar flag for tagging a directory descriptor to prevent relative paths or otherwise ascending the tree. Capsicum extensions make POSIX perfectly capable of implementing strict capability semantics, and they do so by extending semantics in a manner consistent with POSIX. POSIX isn't inherently broken in this regard, it's just perhaps too feature rich for some use cases and not feature rich enough for others.
There's no end of complaints about POSIX--either it's too complex or too simple. The fact is, no single specification will ever please everybody, so griping about how POSIX is broken is not only pointless but betrays a failure to appreciate the underlying issues. The alternative to POSIX is basically nothing. Standardizing on Linux is, at best, a lateral move. Adding optional components to specifications has proven the worst of all worlds, and like most other standards POSIX has been slowly removing some optional components altogether while making others mandatory.
FreeBSD has moved to be compatible with the proposed Linux O_BENEATH as well. We're hoping this helps people write portable, capability-restricted code across FreeBSD and Linux.
(We already had very similar functionality for Capsicum-restricted directory fds to piggy-back off of.)
That's the point they make. And in Unix, it's really easy to do it the "wrong" way (like, "type 3 characters" easy), so lots of people do.
Back before protected memory was common on PCs, some people said "If you're chasing pointers without being sure that it points to a valid address, you're doing it wrong". Well, perhaps so, but in practice lots of programs were doing it wrong, and it was the users who suffered.
Capabilities sound to me a bit like protected memory for persistent storage. It'll be a little inconvenient for a little while, and eventually we'll wonder how we ever lived without it.
Except ".." also is a solution to permitting you to traverse a directory tree without accidentally chasing an invalid pointer; no bug in your program nor any external application can ever make ".." invalid. ".." is equivalent to a back pointer in a linked list or tree structure. Imagine you have a handle to /foo/bar/baz and want to ascend to baz's parent. Between acquiring the handle to /foo/bar/baz and attempting to ascend to the parent, "baz" could have been moved and "/foo/bar" may not exist. Without ".." all of a sudden you're orphaned and your application is stuck in an inconsistent state. Maybe the best solution is to just panic, but that's like saying that all applications should be prepared for any pointer access to segfault at any moment. That's one solution, and it's actually how some smartphone application environments work. Another solution is guaranteeing the condition can't happen, period. Which is preferable depends on your use cases and which side of an interface you'd prefer to place the burden. For pointers it's fairly obvious which provides the most preferable semantics for maximum safety (or it was until the smartphone and cloud paradigms), but for file systems the answer is less obvious. In Unix when ascending directory trees the scenario of a valid pointer becoming invalid is impossible, so there's always a valid path to the root of the tree (even if the depth can change; you just stop when opening ".." simply reopens the same directory), but for descent there remains the race between readdir + open (as opposed to readdir atomically returning file handles).
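A concrete version of that "baz was moved while I held a handle" scenario (Python on Linux; the temp paths are throwaway and illustrative only):

```python
import os, tempfile

# Open a handle to a deep directory, then have someone move it out from
# under us. ".." resolved through the open descriptor still finds the
# *current* parent, with no path string (and no canonicalization race)
# involved.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "foo", "bar", "baz"))

fd = os.open(os.path.join(root, "foo", "bar", "baz"),
             os.O_RDONLY | os.O_DIRECTORY)

# "/foo/bar/baz" no longer exists after this rename...
os.rename(os.path.join(root, "foo", "bar", "baz"),
          os.path.join(root, "foo", "baz2"))

# ...but ascending via the handle still works and lands on the new parent.
parent_fd = os.open("..", os.O_RDONLY | os.O_DIRECTORY, dir_fd=fd)
foo_fd = os.open(os.path.join(root, "foo"), os.O_RDONLY | os.O_DIRECTORY)
print(os.path.sameopenfile(parent_fd, foo_fd))  # True
```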
These particular semantics were clearly purposeful, not an accident. The former was thought useful, the latter an acceptable simplification. It's why in Unix you can't delete a populated directory, or one to which a process holds an open reference (unlike other file types), and why you can't hardlink directories.
Sure it can, if you consider lack of permissions to be invalid. We already do, in other similar situations.
> Without ".." all of a sudden you're orphaned and your application is stuck in an inconsistent state.
No, it just means you need an out-of-band method to accomplish this.
> that's like saying that all applications should be prepared for any pointer access to segfault at any moment
No, nobody's talking about crashing. It's more like saying you can't assume you can do raw pointer arithmetic to jump around in an array. Languages like Java and Python feel restrictive to C programmers at first, too.
I don't understand these claims of races and segfaults. Doesn't Fuchsia avoid race conditions like this with VFS cookies?
> That's the point they make. And in Unix, it's really easy to do it the "wrong" way (like, "type 3 characters" easy), so lots of people do.
The OP is writing about iterating subdirectories and accidentally treating ".." as one of them when using certain low-level functions - which is kind of bad practice in most cases anyway.
Yeah, pointing to a wrong directory can lead to unexpected results. On the other hand, if this becomes a security issue, then maybe the process should have properly restricted rights as pointed out elsewhere already.
Talking about "..", one could BTW extend the discussion to mixing Filename characters with path separator characters in the same string. ;)
Fuchsia considers that level of access to be unacceptably broad for most applications, which is why it uses a capability-based permissions model instead of a user-based one.
There just isn't a short-cut for making sandboxes trivial to setup.
I really wish that Solaris/Illumos Zones were standard on Linux. You could have really light-weight containers as anonymous/ephemeral zones whose "init" is the program you want to sandbox, and more heavy-duty guest-like containers as Zones already is.
The difference between Zones (or BSD jails) and Linux containers is that with Zones (jails) you have to explicitly decide what to share to the zone, while with clone(2) you have to be explicit about all the things you DON'T want to share with the container. I.e., Zones requires white-listing while containers requires black-listing, and we all know that black-listing doesn't work as a security device. Granted, the kernel developers could have forgotten to virtualize something important, but when they fix that you don't have to modify and rebuild the zone/jail launcher.
If I understand correctly, in Fuchsia an "absolute path" is always relative to a filesystem handle, so knowing it and being able to use it are pretty similar.
In a shell one would have to expose a path->handle dictionary for scripts.
You can think of capability-restricted directory descriptors as (sort of) individual-fd chroots. File permissions still apply inside a chroot. But the namespace of anything outside the chroot is totally inaccessible.
Well, they speak of the path as a "resource provided to a subprocess". In that context, it sounds more like a handle/file descriptor that the child process can pass to some "read", "write" or "get handles of children" syscalls - and that happens to correspond to the file object at /home/bob/foo.
If so, it wouldn't imply that knowing (or guessing) the string "/home/bob/foo" would automatically give you access to the handle.
That's just my reading of it though; no idea if that's what they actually do.
Indeed. When I read...
> As a consequence, this implies that a handle to a directory can be upgraded arbitrarily to access the entire filesystem.
...I was wondering whether the author even knows what filesystem permissions are and how they work. I say let the filesystem handle resolving relative paths; and let the permissions system handle the check on whether one is allowed to access the referenced object.
The topic here is to let a user start a process and pass a restricted view of the file system to that process which in turn can spawn child processes to which it could restrict access even further. In order to make it possible to do useful work it's sometimes necessary to also pass around handles/filedescriptors between processes (possibly within different sandboxes) and it's a good idea that the rules governing the view narrowing are not broken.
Anyway, I hope someone is archiving them somewhere, because there's a lot of knowledge there that will otherwise be lost in a few months.
% ln -s foo/bar/baz link
% ls ./link/..
> Also, why do you need a new syscall to do it when it seems like string manipulation is all that’s necessary?
I'm not sure what you mean -- Plan9 is a separate operating system which does the above path sanitisation with each path-related syscall.
> I'm not sure what you mean -- Plan9 is a separate operating system which does the above path sanitisation with each path-related syscall.
> A new kernel call, fd2path, returns the file name associated with an open file, permitting the use of reliable names to improve system services ranging from pwd to debugging. Although this work was done in Plan 9, Unix systems could also benefit from the addition of a method to recover the accurate name of an open file or the current directory.
You could, but that doesn't really change the point of my comment -- "." is very trivial to handle either lexically or as a directory entry and generally Linux's handling of it is basically a no-op (though it should be noted that it's a no-op compared to $path/ not $path -- which is an important distinction with symlinks-to-directories or mountpoints).
The key difference is whether the symlink is actually resolved (and thus ".." applies to the partially-resolved prefix of the path resolution) or if it applies to the symlink component itself (and thus it never gets resolved).
> A new kernel call, fd2path, returns the file name associated with an open file [...]
fd2path goes from fd -> path, which is the inverse operation to path resolution. All path resolution in Plan9 goes through cleanpath() (as far as I know). fd2path is similar to readlink(/proc/$pid/fd/$fd) on Linux.
The two things are separate concepts.
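On Linux the readlink(/proc/$pid/fd/$fd) equivalent looks like this (a sketch; the /proc layout is Linux-specific):

```python
import os, tempfile

# The fd -> path direction: /proc/self/fd/N is a "magic" symlink the
# kernel resolves to the file's current name, roughly what Plan 9's
# fd2path returns natively.
with tempfile.NamedTemporaryFile() as f:
    recovered = os.readlink("/proc/self/fd/%d" % f.fileno())
    expected = os.path.realpath(f.name)
print(recovered == expected)
```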
Tiny nit, but microkernels don't imply a capability based security model. For instance Mach, QNX, Redox, etc. aren't capability based.
It's a very good idea for your microkernel to be capability based because it cuts a lot of validation out of the critical path for IPC, but it's by no means a requirement.
I was falsely under the impression that the Mach port table only had a single global namespace.
I want to believe that this is a good-natured effort at improving the state of modern operating systems; but I feel like I've been burned by my trust in Google too many times.
I'm genuinely curious about this. In what way do you feel burned?
I think I get what you're saying. I use gmail and Google uses an algorithm on gmail to decide what ads to show me when I do a search. The ads make money for Google and I don't get any of that money. All I get is free search results. I think what you're saying is that because Google is using your information to show you ads and make money, you somehow feel cheated?
But I don't feel particularly burned by this arrangement. If Google gave me the option to pay money to avoid ads, if I'm honest with myself I'm pretty sure I'd choose to continue seeing ads instead. I guess I'm curious about exactly how you feel burned.
By having too much data on me, my friends, my kids, politicians, crunched by AI. It is just too much power, if you think a few moves ahead.
To be fair, there are many threats in the world we can all be worried about, but largely ignore in our daily lives, to no meaningful detriment in most cases.
Did the bank set your mortgage rate based on risks, derived from your data?
Did you lose a job application because your private emails leaked a conflict at a previous job?
There are dark ways your data can propagate. Google is a business, and it makes money on your data. Is ad targeting the only service they sell?
I just decided to ungoogle myself completely, and hide from all tracking on the web. No data from me anymore, I ceased to exist :)
Besides, it's certainly not their responsibility to maintain a service indefinitely.
The fact that my comment is being downvoted tells me that people here have a shockingly poor memory for this sort of thing.
And heck, it's not even the death of Google Reader that was bad for RSS. The life of Google Reader was bad for RSS too. It was a "good enough" product released for free, as in good enough that it wasn't worth trying to compete with a free product so nobody did, but it wasn't actively maintained and just served to cause the entire RSS ecosystem to stagnate for years.
They had 3 perfectly good options:
1. Don't build Google Reader in the first place if they weren't interested in actually maintaining the damn thing.
2. Put some effort into it, keep improving Google Reader, make the whole RSS ecosystem better rather than causing it to stagnate.
3. Sunset Google Reader over a much longer period of time, like a year instead of the 3.5 months they gave. Those 3.5 months were just barely enough time for people to build replacement services.
That's what maintenance is though. Keeping something functional, but not developing it further.
>... because they never figured out how to monetize it.
That's not true. They shut down Reader because the codebase was dated, and there were few engineers left on that team. It was a reallocation of resources.
>Sunset Google Reader over a much longer period of time, like a year instead of the 3.5 months they gave.
That I agree with.
> They shut down Reader because the codebase was dated, and there were few engineers left on that team. It was a reallocation of resources.
And why did they reallocate resources? Because they never figured out how to monetize it.
They changed the whole concept of having one single root. As I understand it, the plan is to use many fragmented and independent filesystem handles that can optionally be mounted together.
So the restriction is more about not being able to access a folder if you don't have access to an appropriate handle.
Paths like folder1/folder2/../folder2/file are still perfectly fine.
References to a forbidden parent directory from a chroot can just return ENOENT, because it doesn't exist in that universe. I may not be understanding this fully, but to condemn ".." based on some accusation that it's incompatible with chroot semantics (or "a holdout from POSIX") seems tendentious.
Maybe I don't really understand Fuchsia's approach. But I do really like OpenBSD's approach in unveil.
Here is some further reading on why we should downvote these types of things into oblivion: https://meyerweb.com/eric/comment/chech.html
When "considered harmful" is considered flamebait rhetoric, perhaps the problem is with the readers who refuse to engage with any mildly-worded criticism.
A refreshing opposite of clickbait as the actual proposition is in the title.
You can verify this as follows:
curl -I --path-as-is 'https://github.com/fuchsia-mirror/docs/blob/master/the-book/../../master/the-book/dotdot.md'
which is quite a bit more serious
Is this for technical reasons, or similar philosophical ones because symbolic links also allow for escaping from “jails”?
Take a perfectly spherical unix:
$ mkdir /tmp/hn
$ cd /tmp/hn
$ ln -s . foo; mkdir bar; touch baz
$ ls -l bar/../baz foo/baz foo/../baz
ls: cannot access 'foo/../baz': No such file or directory
-rw-r--r-- 1 jepler jepler 0 Nov 28 18:21 bar/../baz
-rw-r--r-- 1 jepler jepler 0 Nov 28 18:21 foo/baz
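The same surprise in script form (throwaway temp dir, names arbitrary): lexical normalization collapses "foo/.." to nothing, but the kernel resolves the symlink component first.

```python
import os, tempfile

d = tempfile.mkdtemp()
os.chdir(d)
os.makedirs("bar/sub")
os.symlink("bar/sub", "foo")   # foo -> bar/sub
open("baz", "w").close()

print(os.path.normpath("foo/../baz"))   # "baz" -- lexically a no-op
print(os.path.exists("baz"))            # True
print(os.path.exists("foo/../baz"))     # False: kernel resolves foo first,
                                        # so ".." lands in bar/, not here
```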
Nope. I don't want that at all. There's a reason that by default cd goes out of its way to make .. ignore the parent of a symlink.
So Fuchsia also won't have CWD, then? Because if it has CWD, then the process can always chdir /, rendering this lack-of-.. exercise pointless.
Hope that makes sense.
Compare that to a POSIX system where a directory has an actual child which is a reference to the directory's parent whose name is always '..'.
The talk about resolving '..' is merely a demonstration that the behavior of "cd .." can be supported/emulated in a context where you have both an open directory and corresponding path, without requiring that '..' literally exist.
I like that syntax by the way, kind of like drive letters but much more descriptive and not limited to A-Z. Netware also used it for full path specs.
ln -s ../../.. root
Welcome to Google Future (c). Enjoy your stay.
Here you go https://fuchsia.googlesource.com/docs/+/HEAD/the-book/
The origin of the meme was an editor's labeling of Dijkstra's commentary. Titling your own article this makes it seem like you wrote the article, forgot you wrote it, found it again, and are presenting it to others as an interesting perspective.
The correct design is to have both a "root" and a "current object" associated with every "file descriptor object" and allow ".." to work up to the "root" (and thoughtfully handle cases where the "current object" is moved outside the "root").
You can't do it with paths because that doesn't track directories being renamed, that would cause the descriptor to suddenly become inaccessible in the middle of an operation.
Your suggestion matches the behavior of "O_BENEATH" in Linux and FreeBSD, or e.g., Capsicum directory descriptors with the CAP_LOOKUP capability.
so don't implement posix.
I don't know why people get so hung up on new oses being POSIX compliant.
POSIX is sometimes useful for writing cross-platform C/C++ code. It's extremely limiting though, out-of-date, & doesn't actually deliver the write-once, run-anywhere you'd like from POSIX.
You can write POSIX code that will fail to build, fail at runtime, or even behave incorrectly when you run it on another "POSIX" system (at least as far as Windows/Linux/OSX/Android are concerned). Certainly a far cry from how a standard is supposed to behave.
POSIX also, for the most part, targets the lowest common denominator of platform features which means the POSIX API isn't as rich as makes sense for the majority of applications, doesn't have the same performance/security, and/or isn't as easy-to-use.
POSIX also leaves many subtle decisions to implementations' discretion which means that even if everything works in the happy path, it'll break in subtle corner cases. For example, PATH_MAX is defined as 256, _XOPEN_PATH_MAX is 1024, but Linux & OSX both have unbounded limits so a POSIX program can easily fail to be able to open all files on an OS; making this a build-time constant was the stupidest decision in the world & emblematic of how POSIX is designed.
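The runtime alternative POSIX itself provides is pathconf(3); a quick probe (a sketch; the reported value varies by system and filesystem):

```python
import os

# POSIX's actual mechanism for path limits is a *runtime*, per-filesystem
# query, which is exactly what a baked-in PATH_MAX constant papers over.
try:
    limit = os.pathconf("/", "PC_PATH_MAX")
    # -1 conventionally means "no fixed limit"; Linux typically reports 4096.
    print("PC_PATH_MAX for /:", limit)
except OSError:
    print("no determinate limit for /")
```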
Most modern language runtimes these days (Rust, Go, Java, Swift) come with a far richer, less bug-prone & more complete set of features in the standard library out-the-gate on all platforms (so you just need to port the 1 standard library) & most libraries build on that standard library, so you usually get them for free.
Most platform vendors also provide custom APIs to interact more richly with their specific features for performance, battery, usability, security, etc. To take full advantage of a platform, which you're pushed to usually by market forces, POSIX doesn't help you.
* EDIT: Also, POSIX is gigantic. The majority of useful existing tools probably use 20% of the entire standard. Porting that smaller API surface isn't challenging.
As you well point out, it's hardly significant when using other programming languages; even C++ standard library improvements are making it less relevant for C++ devs.
Also the redox team knocked out rust versions of most of the standard utilities in a few months.
The reason Android and iOS got away with it was that nobody was really running important software on their phones in the first place, so it was a brand new capability that didn’t need to be compatible with anything.
You also forgot about ChromeOS, and I bet hardly anyone in the US school system cares about POSIX.
Or the webOS running on LG televisions nowadays.
FreeRTOS, mbed and IncludeOS aren't POSIX as well.
Regarding Fuchsia, there is already support for ISO C, ISO C++, Rust, Go, Dart/Flutter, with Java/Android on the horizon.
So I doubt Fuchsia will miss POSIX that much.
ChromeOS benefits from the fact that it is just a web browser - if you want just a web browser rather than a generic operating system, Fuchsia is of course unnecessary.
Nobody targets webOS, and I suspect people’s choice of TV has more to do with how it looks on the in-store display or how cheap it is than what apps you can run on it.
FreeRTOS, mbed and IncludeOS are not generic operating systems in the first place.
There are essentially no “pure” ISO C or ISO C++ programs - everybody uses system-specific libraries at some level.
Basically, Fuchsia suffers from the fact that a large amount of userland will need to be rewritten for it, depending on what market segment it targets. ("Just" being an alternate Android runtime for phones, as an example, wouldn't require this.) This is an obstacle - not an insurmountable one, but being non-POSIX is an obstacle for any OS, one that the people behind it need a strategy to tackle. Whether or not people will code apps for Fuchsia will depend on how excited developers get about it at first, and then how many people use it.
I don't think there is a week that has passed in the last 10 years that I haven't used the command line for something.
To the point of the article, it sounds like the client will handle the work the server once did in parsing the navigation and path commands.
Though the lack of symlinks sounds like it would be a more painful loss than "..".
Both of these would likely require many *nix utils to be changed to be compatible.
It's possible parent is some super-junior developer doing compartmentalized tasks and never leaving the IDE, but for anything past that avoiding the CLI and remaining productive is basically impossible.
so... not at all?
I like this approach from Fuchsia, makes me remember dumb sploits in PHP code. You can take the command line from my cold, dead hands.
Even if I didn't use vim+tmux, so much other "stuff" needs the terminal. Compiling, profiling, testing, searching, moving files around, ssh/rsync and so on.
Sure, IDEs have most of this stuff, but they're not always as user friendly as cli tools, at least once you're acquainted with the latter.
Plus some languages sort of have their own customs. Like if you were doing Smalltalk a lot of things you’d use a command line for in web development or mobile dev you’d use a workspace in the IDE instead. And then you have Lisp, where people used to joke that Emacs had become its own OS essentially.
Even MS finally succumbed to reality when Ballmer finally got the boot.
For instance, 20% of developers could be responsible for 80% of all software, or 80% of the most used software, or the software that generates 80% of GDP, etc.
I learned to program (in the sense of writing programs in a compiled language) around the time the Macintosh and Amiga came out, before free Unix-like OSes on your PC were much of a thing. So while the Amiga did have a command line, and so did Macintosh Programmer's Workshop, I mostly saw a command line as the obsolete interface associated with MS-DOS. Obviously source code control was not a thing for a kid programming in the 80s.
Even though I have more recently used git than any other source control, I don't think the horrible interface has anything to do with its utility. You may like it, you may hate it, but it doesn't have to be like it is, it's just the personality it has. Kind of like Linus and his grouchiness.
Never understood the folks that want to use only one or the other, not both, when they are complementary. Avoiding either is doing yourself a disservice.
My start was on an Apple, but the rest tracks well enough. Did Slack, then RH 5.2.
SGI handled the command line / gui matter particularly well. Most things had a GUI, and the GUI would issue the "--gui" or "-verbose" option to get the additional feedback needed for the GUI to behave more like one would expect, despite it basically being a wrapper for an otherwise CLI program.
The time I spent on IRIX really solidified when and where the two paradigms make sense. And they both do. There is no one size fits all winner here.
For my part, I never understood why someone would want to go back to not using a GUI, once they had been invented. I mean, using a graphical interface doesn't prevent you from typing commands within windows. It's just a question of whether you limit yourself to the ancient teletype paradigm or not. So I never saw (post-1984) GUI and command line as equally valid and valuable worlds, because a GUI can encompass everything, while the command line doesn't.
There are many reasons you'd still want to use a terminal interface, CLI tools, and a scripting language, all easily searchable. They have staying power because they excel at certain repeatable tasks, whereas a GUI is often better for exploring.
No one has invented a lasting, portable GUI CLI, so that question is moot for now. Maybe they could, but the work involved is probably not worth it for the gain in functionality.
Next, that they are "ancient" is immaterial. They have enough of the features needed to be effective. Paper is ancient for example, and still useful in various situations.
Something like AppleScript may work acceptably for scripting, but is not available on 90% of the world's computers.
Also recommend giving a newer shell like Fish a try, it is quite helpful.
People still use paper to a fair degree in many offices, as you say, but it's not an alternative to electronic records in the way Coke is an alternative to Pepsi.
Ubuntu used to mail you an install CD, for free, anywhere in the world. In the early 2000s. It was pretty cool.
Because of the long running Linux/Windows rivalry, hardly anybody can imagine something better than a Unix-clone any more. But the very name Unix was chosen because the original OS was not intended to be the be-all/end-all of OSes.
I'm assuming this is an anecdote rather than data? Looking around my office, 19 out of 20 people are using the "xterm and chrome are the only two apps I run on my laptop" style of development; only one has a graphical IDE + chrome