

Why Skin-Deep Correctness - Isn't, and Foundations Matter - timf
http://www.loper-os.org/?p=448

======
jeffdavis
"the machine must present a single address space which can be considered non-
volatile"

That doesn't solve the problem, or is at least an oversimplification. How do
you ensure that you are only writing out valid states, and not some state
that is temporarily invalid (i.e. in the middle of some operation)?

I'm sure there's an answer for that, but that seems like totally the wrong
direction. That's what a DBMS is for, and/or a modern filesystem with DBMS-
like features.
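What the DBMS gives you here can be sketched with Python's built-in sqlite3
(the table and values are made up for illustration): the two writes commit
together or not at all, so no reader ever observes the half-done state.

```python
import sqlite3

# In-memory DB for illustration; a real application would use a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# Transfer 30 from alice to bob: two writes, one atomic unit.
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 70, 'bob': 30}
```

If anything inside the `with` block raises, neither UPDATE becomes visible --
that is exactly the "only valid states" guarantee a mapped address space
doesn't give you for free.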

I think that DBMSs and the fancy features in new filesystems are underused by
applications. Perhaps that could be improved through greater standardization
of the way to access basic ACID guarantees. Or, perhaps OSes should have these
things more built-in and the lower-level APIs that are tricky to get right
should be more obscure.

In any case, an oversimplified notion of persistence does not improve matters.
Virtual memory just seems like the wrong place to solve these problems. How do
you do a "rollback" of some operation when it's a direct modification of the
in-memory state? (Note that a rollback is not the same as going back in time
-- a rollback surgically undoes a single operation without affecting
concurrent operations). If nothing else, a program bug can do a lot more
damage (sometimes subtle and not detected for a while) within its own address
space than it can to data already held by a separate program (like a DBMS or
the kernel's filesystem) with its own consistency guarantees.
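The rollback distinction can be sketched the same way (sqlite3 again, names
purely illustrative): rolling back undoes only the uncommitted operation,
while previously committed data is untouched -- which is hard to retrofit
onto a raw memory image.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (entry TEXT)")
conn.execute("INSERT INTO log VALUES ('committed work')")
conn.commit()

# Start an operation, then abandon it mid-way.
conn.execute("INSERT INTO log VALUES ('half-finished operation')")
conn.rollback()  # surgically undoes just the uncommitted insert

print([row[0] for row in conn.execute("SELECT entry FROM log")])
# ['committed work']
```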

I think one of the most common software engineering mistakes is to not
understand the role of a DBMS in application development, or to drastically
oversimplify it (usually leading to a bad reinvention of the DBMS). The
reality is that most applications would be greatly simplified by using a full
DBMS (typed data, etc.), and almost all applications would be greatly
simplified by relying on ACID guarantees.

------
notsure
For the record, this guy is either going for the long troll with these sorts
of absurd nostalgic pieces, or really is so enamored with the classicist
"pure" world of 1970s computing that he's still stuck in it. He's been
writing these patronizing, hand-wringing pieces for years; honestly they
belong in comp.lang.lisp more than anywhere else. Always yearning for the
good ol' days and talking about how we're in some sort of technological Dark
Ages is his thing. Shame he's such a verbose critic of everything new that
comes his way instead of someone who actually builds or does something.

~~~
jeffdavis
Agreed.

However, there is some kind of a point here, which is that applications
shouldn't be forced to deal with all kinds of error-prone methods. For
instance, I think that filesystems are more low-level than most applications
should have to deal with, and the hierarchical structure more often represents
some arbitrary imposition than a useful organizational technique. It's 2011,
and Linux _still_ doesn't offer a stable filesystem with
snapshotting/versioning, CRCs, atomic operations, etc. btrfs may be here
soon, but it will be years before it's widespread enough for application
developers to count on, and the basic POSIX APIs won't make it easy to
actually use these features.
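For example, even the one atomicity primitive POSIX does give you -- rename
within a filesystem -- takes a fiddly dance to use safely. A sketch (error
handling and directory-fsync portability caveats glossed over):

```python
import os

def atomic_write(path: str, data: bytes) -> None:
    """Replace `path` with `data` so a reader sees either the old
    contents or the new contents, never a partial write."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)       # data must reach disk before the rename
    finally:
        os.close(fd)
    os.replace(tmp, path)  # atomic on POSIX filesystems
    # strictly, the containing directory also needs an fsync so the
    # rename itself is durable
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)

atomic_write("settings.conf", b"mode=safe\n")
```

Skip any one of those steps and a crash can leave you with a truncated or
missing file -- which is precisely why applications shouldn't each have to
reinvent this.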

~~~
lloeki
I bet ZFS would already be there if not for license compatibility issues.

~~~
jeffdavis
That doesn't change the situation. Right now, applications are harder to get
right because the basic storage mechanism is a filesystem that forces a
hierarchy upon you and provides only the most rudimentary safety guarantees
(no ACID, no snapshots, etc.).

It happens that Solaris, FreeBSD, OS X, and Windows (yes, Windows) have taken
a good, strong step forward on this issue. Unfortunately, Linux is behind. I
say that mainly to point out priorities -- one would think that any feature
that fundamentally makes applications easier to write would be the top
priority of an OS.

------
comex

        About as interested in building — or even permitting to exist
        — the cheap, legal, easily-user-programmable personal computer
        as Boeing or Airbus are in the cheap, legal, and easy-to-fly
        private airplane.  The destruction of HyperCard alone is proof
        of this.
    

It's kinda hard to take this seriously.

------
pavpanchekha
The OP seems to forget that code is a living, breathing thing, and it runs in
a changing environment. Versioned file systems are a good idea now because
storage is cheap; they were a bad idea years ago. And Apple isn't avoiding a
versioned file system because they hate you, or because of profits; they're
doing it because they have to stay backwards compatible with programs that
don't know how to work with a versioned file system. Fundamentals matter;
agreed. But unfortunately, you can only choose fundamentals once, unless you
(like, say, the Linux kernel) can rewrite all software that uses your system.
So instead of whining about people choosing the wrong fundamentals while
abusing colorful analogies, do some research on helping ordinary programmers
write future-proof systems. Think: what design would have allowed us to
transition from a non-versioned to a versioned file system? And remember that
Plan 9 didn't win; Linux just slowly stole features from it (and is
continuing to do so).

~~~
riffraff
Years ago storage was more expensive, but data was also smaller. I'm not sure
storage has grown much faster than the size of the data stored on it
(following the internet downloader escalation: text files, pictures, MP3s,
700 MB DivX rips, 4.8 GB full DVDs, 10 GB Blu-ray rips, ...)

~~~
pavpanchekha
MP3s and videos do not change, so there is no extra cost for them. Versioned
filesystems are costly for documents that change a lot. Those haven't gotten
bigger (hell, they're smaller).

------
buff-a
Dup: <http://news.ycombinator.com/item?id=2797992>

------
msutherl
I love this guy.

