
I hope WebDAV dies - kristianp
https://unterwaditzer.net/2015/kill-webdav.html
======
robmueller
It's not just WebDAV that's an abomination (returning HTTP status codes and
messages in XML, WTF were they thinking), but vCard and iCal as well.
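For the uninitiated, this is what that looks like in practice: WebDAV's 207
Multi-Status wraps the real per-resource status codes inside the XML body. A
sketch along the lines of the RFC 4918 examples (the paths are made up):

```xml
HTTP/1.1 207 Multi-Status
Content-Type: application/xml; charset="utf-8"

<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/calendar/event1.ics</D:href>
    <D:status>HTTP/1.1 200 OK</D:status>
  </D:response>
  <D:response>
    <D:href>/calendar/event2.ics</D:href>
    <D:status>HTTP/1.1 404 Not Found</D:status>
  </D:response>
</D:multistatus>
```

So the transport layer says "success" while the actual status codes are
strings buried in the body, which every client has to parse back out.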

The vCard "group construct" (see rfc6350) is one of the dumbest things ever
added to a spec. It seems a trivial thing to add, but completely screws up
your internal storage and manipulation formats. It's horrible, on top of all
the other horribleness that is vCard.
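For anyone who hasn't hit it: RFC 6350 lets any property line be prefixed with
an arbitrary group name, tying otherwise unrelated-looking lines together. A
toy Python sketch of just the group-splitting step (the property lines are
made-up examples, and this ignores real vCard concerns like escaping and line
folding):

```python
# Toy illustration of the RFC 6350 "group construct": any property may be
# prefixed with "groupname." to tie related properties together, which is
# exactly what wrecks a flat property-list storage model.
vcard_lines = [
    "TEL;TYPE=home:+1-555-0100",
    "item1.TEL:+1-555-0199",
    "item1.X-ABLabel:emergency",  # same group as item1.TEL above
]

def split_group(line):
    """Return (group, rest) where group is None for ungrouped properties."""
    name = line.split(":", 1)[0].split(";", 1)[0]
    if "." in name:
        group, _, _ = name.partition(".")
        return group, line[len(group) + 1:]
    return None, line

for line in vcard_lines:
    print(split_group(line))
```

Your storage now can't just be a bag of properties; it has to remember which
lines share a group prefix and keep them associated on round-trip.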

Of course, I'm biased, given we're trying to push alternatives to IMAP,
CardDAV and CalDAV - [http://jmap.io/](http://jmap.io/) \- but please do check
it out. Having actually worked on IMAP servers and clients, CalDAV servers and
clients and CardDAV servers and clients, we've learned a lot about creating a
saner alternative.

~~~
binbasti
I didn't notice that JMAP includes calendars and contacts. Looks good!

Personally, I'd prefer a dumb storage with a generic API over yet another
custom protocol/API just for these data formats alone. Worked great on
people's local drives before the Web, and I think it should work similar on
the Web.

~~~
treve
Except that a dumb storage cannot do scheduling, and many other things.

~~~
binbasti
The storage can't do the scheduling itself, but apps can do it on top of dumb
storages.

------
rsync
We (rsync.net) supported webDAV from ... about 2006-2012.

It was a total mess and headache the entire time - mostly because the standard
implementations that you would attempt to use in a FOSS environment are
_complete abandonware_.

I am speaking of mod_dav on apache. It's abandonware. The original authors
cannot be contacted, it lies stagnant ... it's broken.

One of the original reasons that we supported webDAV was that the Mac Finder,
with its "Connect to Server" choice in the "Go" menu, supported webDAV - but
their webDAV was really, really quirky and non-standard and did not function
well with _anything_. We had to reverse engineer Apple's weirdness and even
then it rarely worked well. MS DAV in Office, etc. - also weird. And again,
very difficult to even work around the weirdness because mod_dav was
abandoned.

Giving up on DAV was a win-win - not only did we stop wasting time on these
bizarro home-grown dialects of DAV, but we also removed a ton of attack
surface by removing apache entirely from rsync.net storage arrays. Nowadays
it's just OpenSSH and I think it will stay that way.

wistful: If only (if only!) Apple would support SFTP in the "connect to
server" function of the Finder.

~~~
jfb
You might be happy to know that WebDAV didn't work any better for those of us
at Apple, either.

~~~
rsync
Do you have any insight into why SFTP support was never built into "connect to
server"?

It seems so simple and obvious ... would have saved humanity _millions_ of man
hours with people mucking around with sshFS/FUSE on their macs, which barely
worked back then ...

~~~
jfb
No insight whatsoever. Probably nobody ever asked for it, to be honest.
"Connect to Server ..." was overwhelmingly used to connect via AFP or,
_shudder_ NFS at the time, both of which expose a filesystem like interface.
How much like a filesystem is an SFTP connection, as opposed to an FTP-like one?
That's probably the reason, right there: WebDAV purported to expose filesystem
semantics, which meant that _in theory_ adding it was simply defining a
communications protocol to the Finder, rather than a complete translation
layer between filesystem operations and FTP ones.

~~~
zurn
SFTP is closer to NFS than FTP.

------
fkooman
Full disclosure: I contributed to the remoteStorage specification [0].

Instead of (or in addition to) considering remoteStorage, one could also go
back to "normal" WebDAV, if such a thing exists, and completely forget about
the CalDAV/CardDAV servers.

I'd say WebDAV is more mature, it has litmus[1] for testing implementations.
remoteStorage has as far as I know only one production instance running at
5apps [2]. The benefits of using the RS protocol are mostly due to the CORS
headers (which could be implemented easily for WebDAV) and the use of
OAuth/Bearer, for which a PR exists for SabreDAV [3]. One thing missing from
WebDAV is the (implicit) mapping of OAuth scopes to ACLs, which should not be
too difficult to implement... The discovery, by depending on Webfinger, is
also not one of my favorites. I'd prefer something like OAuth authorization
server discovery [4].

I mean, I am not opposed to using remoteStorage and love JSON as much as the
next guy, but it just doesn't bring (in my opinion) many benefits and loses
interop with existing WebDAV clients for no good reason...

[0] [https://datatracker.ietf.org/doc/draft-dejong-
remotestorage/](https://datatracker.ietf.org/doc/draft-dejong-remotestorage/)

[1] [http://www.webdav.org/neon/litmus/](http://www.webdav.org/neon/litmus/)

[2] [https://5apps.com/](https://5apps.com/)

[3] [https://github.com/fruux/sabre-
dav/pull/710](https://github.com/fruux/sabre-dav/pull/710)

[4]
[https://www.tuxed.net/fkooman/blog/as_discovery.html](https://www.tuxed.net/fkooman/blog/as_discovery.html)

~~~
binbasti
(Co-founder of 5apps and RS core contributor here.)

 _> remoteStorage has as far as I know only one production instance running at
5apps_

That's not true. 5apps is running the only public service for end users at the
moment, but there are certainly more production instances running.

 _> The benefits of using the RS protocol are mostly due to the CORS headers
(which could be implemented easily for WebDAV) and the use of OAuth/Bearer,
for which a PR exists for SabreDAV [3]._

As both of these would be optional additions to WebDAV servers, all of
WebDAV's benefits perish with most servers not supporting these new
extensions. That's the very critique in the article, as far as I understand it.
WebDAV alone is not good enough, and optional additions lead to a world of
incompatibility and pain.

 _> One thing missing from WebDAV is the (implicit) mapping of OAuth scopes to
ACLs, which should not be too difficult to implement_

And another addition.

 _> I'd prefer something like OAuth authorization server discovery_

And another one. Counting 4 now. :)

 _> but it just doesn't bring (in my opinion) many benefits and loses interop
with existing WebDAV clients for no good reason_

You just mentioned that to get to feature parity with remoteStorage, a WebDAV
server needs 4 optional additions, for only one of which an unmerged PR to a
single server implementation exists. Maybe I'm missing something, but it doesn't
sound like interop is WebDAV's benefit in this scenario.

~~~
fkooman
Ah, my point was, actually, to base the remoteStorage spec on WebDAV instead
of a new JSON-based protocol as it currently exists. remoteStorage could
define a WebDAV 'profile' for what needs to be supported by the server...

~~~
binbasti
Yes, and then you have another optional version of WebDAV, adding to the
existing mess, while you don't actually have the benefit of interop that you
say would be the reason for using WebDAV in the first place.

The way I read it, that's basically the point of the article.

------
0x0
The two (yes, two!) implementations that Microsoft ship are also full of crazy
bugs that come and go between versions.

There's a list compiling known bugs here:
[http://www.greenbytes.de/tech/webdav/webdav-redirector-
list....](http://www.greenbytes.de/tech/webdav/webdav-redirector-list.html)
and here: [http://www.greenbytes.de/tech/webdav/webfolder-client-
list.h...](http://www.greenbytes.de/tech/webdav/webfolder-client-list.html)

It's almost like they sabotaged the clients on purpose to break
interoperability.

~~~
e12e
FWIW last time I looked, the sanest way I could find to actually access WebDAV
from Microsoft Windows was with Cyberduck:

[https://cyberduck.io/](https://cyberduck.io/)

~~~
0x0
One of the big "sells" for WebDAV was transparent access like a network drive
in Windows clients without the need to install additional software. As a power
user, accessing WebDAV isn't much of an issue. But as, say, a software
developer wanting to provide a WebDAV service, it's another story :-/

~~~
e12e
Yeah, I was hoping to not need any software in Windows. Having a maintained
client that presents a sane UI, with few demands for the back-end ("speaks
sane WebDAV"), is still quite good -- if you want filesystem-like semantics
over a secure connection that works reasonably well across the Internet.

Note that, according to some recent searching, in Windows 7+, WebDAV over TLS
w/Basic Auth _should_ be less painful -- I'm going to have to test it later.

------
rebelde
What I loved about WebDAV is that it let me edit a file on a remote server
using my personal PC. When I wanted to save to the remote computer, I just
saved it.

Is there a good replacement out there?

~~~
ikurei
What about samba/shared folders, ftp, or even sshfs if you are a unix user?

If you like Emacs, there is Tramp, and I know Vim has something equivalent.

~~~
rebelde
I have tried a number of the ones that you and others mention. The security
was awful. They would create the remote drive (or whatever) automatically, all
the time. Eek! I want to have to log in first.

It has been a few years since I have tried. I hope something has improved.

Again, basically, I would prefer to edit in Notepad++ or Sublime instead of
vi. Personal preference.

------
INTPenis
For a self-hosted, wide area network file sharing solution I think WebDAV is a
very simple and cheap choice.

My experience with WebDAV has been just that, its simplest form. Set up an
Apache web server for project managers to share files with a client, for
example.

~~~
Touche
IIRC doesn't Apache require that the Apache user owns the files? This made it
annoying when you wanted to use those files elsewhere on the server.

~~~
e12e
Well, there are just three basic options. Have the WebDAV server run as root,
to allow changing uids - probably a bad idea. Have a group that both the
server and the uid are members of, and have that group own the files (better,
but breaks down when you need more than one user, unless you have dav-uid1,
dav-uid2 ... groups for every uidN, and have the server be a member of each --
and even then you're not much better off: the server can still access all
files -- at least uid1 logged in locally won't have access to files owned by
dav-uid2 directly). Finally, just run the server as your uid, on an alternate
port. That's probably the sanest option; optionally have a proxy stand in so
you can connect over 443.
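A rough sketch of that third option as an httpd.conf fragment (all paths, the
port, and the auth file are assumptions; Basic auth only makes sense behind
TLS or a fronting proxy):

```apache
# Per-user Apache instance started as your own uid, serving DAV on an
# alternate port -- files stay owned by you, not by the system www user.
Listen 8080
DavLockDB /home/myuser/davlock/DavLock

<VirtualHost *:8080>
    DocumentRoot /home/myuser/dav
    <Directory /home/myuser/dav>
        Dav On
        AuthType Basic
        AuthName "DAV"
        AuthUserFile /home/myuser/dav.passwd
        Require valid-user
    </Directory>
</VirtualHost>
```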

The state of dav clients is a mess, FWIW last I checked, getting it to work
sanely on Linux, either with Gnome, or davfs -- was easy enough. Trying to get
it to work reasonably from Windows was hopeless -- I never tried OS X -- I'd
hoped that they actually had a usable client.

Either way, if you basically end up just being able to access it reasonably
from Linux, you might as well just go with sshfs. For a handful of users,
setting up reasonably secure[1], key-based file sharing -- ssh(fs) is hard to
beat.

SAMBA/CIFS isn't fit to be used over the Internet. NFSv4/pNFS w/kerberos and
encryption might be, but it, along with AFS, is complex to set up.

The second easiest thing with some hope of being both secure and not a
nightmare to get working for most (after sftp/sshfs) is probably Samba/CIFS
over a VPN.

Not sure about the current state of the webdav-server/client stuff for python
-- that _might_ be a viable way to get a pair of multiplatform client/server
things working. Last I looked the WebDAV part of Zope/Plone worked fine with
davfs (sadly the rest of the thing, for editing content etc, needs/needed
work).

On paper WebDAV has many things going for it: it works over standard TLS
ports, should go through most firewalls, is "trivial" to secure using
certificates (not sure about 2fa -- would probably depend on clients having a
sane UI/UX for "require certificate and OTP/password" etc).

Does anyone have any experience with _actually_ using something derived from
Plan9 9p for sharing files? I've only been able to find broken, abandoned
implementations, and no recent how-to's or tutorials.

Again, on paper, it would seem that 9p wrapped in some kind of certificate
based protocol (either built from nacl primitives, or simply TLS/SSH) _should_
be a reasonable candidate. A nice combination of not reinventing the wheel,
and keeping things as simple as possible (as opposed to, NFSv4, CIFS, AFS).

[1] Ssh breaks down when it comes to revoking keys etc. This is in theory
solved by using ssh certificates (which can expire, be revoked etc) -- but
AFAIK there aren't any sane cli/ui/management tools yet. In that case managing
x509 via AD or some other CA tool is probably easier -- but still ridiculously
painful.

~~~
INTPenis
>The state of dav clients is a mess, FWIW last I checked, getting it to work
sanely on Linux, either with Gnome, or davfs -- was easy enough. Trying to get
it to work reasonably from Windows was hopeless -- I never tried OS X -- I'd
hoped that they actually had a usable client.

Nonsense. The reason I use WebDAV time and time again when non-tech people in
my company need a file sharing service is that it works so easily on Windows
and Macintosh - their two favorite OSes.

On Windows you mount it as a regular network share; you can even authenticate
with AD if it's in your network. On Mac you just Cmd+K in Finder and use the
same URL as on Windows. On Linux I've recently learned it's equally simple.

~~~
e12e
This works over TLS1.2? Without any extra software needed in Windows? Note
that last I checked, clients were running Windows XP -- maybe this is
fixed/improved in 7/8/8.1/10?

[ed: I seem to recall I had some issues with Windows 7 as well, but I may be
misremembering things. Does seem that WebDAV is now properly bundled with
Windows, and https should work as long as the certificate matches. Apparently
Windows will refuse basic auth (but do digest, which really isn't that much of
an improvement) over standard http -- but as part of the point is the added
security of simple transport encryption over SSL -- I can't imagine why anyone
would want to expose WebDAV (other than read-only, perhaps) over anything
other than SSL]

As for it working in OS X -- I haven't tried it -- and my impression was that
it did work. Apparently others in this thread have different experiences...

------
dragonwriter
The big problem I have with WebDAV is that the specs for the various pieces
all do kind of "vertical" slices of functionality, without separation of
concerns between the low-level protocol (what does HTTP provide, what is it
missing, and how do we generically extend it to provide what is necessary for
the applications we are trying to build) and content semantics. So where they
extended the HTTP protocol (with new methods, etc.), those extensions were
tightly bound to content semantics.

I think that by rethinking the same problem domain with a more RESTful
approach, a lot better could be done, with a much simpler set of generic protocol extensions
covering the few real gaps in HTTP/1.1 and a simple set of metadata content
types [0] to cover authoring/versioning information, with primary content just
using any content type.

[0] And, really, this is two problems that should be addressed somewhat
separately, in terms of representation-neutral metadata _content models_
first, and then in terms of various representations for those content models.

------
graniter
After writing a custom CalDAV server I agree with the author. The number of
requests it takes to get the data you need is insane, and most requests will
involve a DB hit. My CalDAV server is much more "chatty" than my web server
even though they serve essentially the same information.

Plus each CalDAV client does things a little differently, so it's very hard
to debug, especially when most clients are black boxes. Permissions, features,
and how they respond... all potentially different.

I (unfortunately) consider the custom CalDAV server I wrote to be a bit of a
technical achievement, even though I'd rather I didn't have to do it. But I
had to do it because there's really no other good option for writing calendar
services that work with native calendar applications.

~~~
untitaker_
I bet it's easier to use SabreDAV
([http://sabre.io/dav/](http://sabre.io/dav/)) for your server than writing it
all by yourself, even though it means writing PHP. I really feel like its
author is the only one who understands what the hell is going on.

~~~
brongondwana
Ken Murchison from CMU has a pretty good grasp on it these days too, having
written the Cal/CardDAV support for the Cyrus IMAP server. I'm slowly catching
up, having rewritten parts of it :)

------
Tharkun
OP seems to think that parsing XML is slow. If parsing XML is slower than
parsing JSON, you're doing something wrong.

~~~
untitaker_
Parsing was never the bottleneck, I admit. But I can't see how parsing XML is
as easy a task as parsing JSON, either.

Of course you can show me some benchmarks that show that libxml2 is much
faster than the avg. JSON implementation. But you can do that with any format
that is verbose, complex and therefore hard to parse, yet has been around for
so long that it paid off to write well-optimized C libraries for it.

~~~
mikekchar
I recommend that you simply write a JSON parser and an XML parser using a
parser generator. Then you will see. They are both trivial. I would be
surprised if it took you more than an afternoon to do both.

Ease of parsing was the whole point of XML. It was actually a beautiful
concept: because it was a subset of SGML, you could validate documents and
build smart editors, etc., etc., on file formats in XML. Then when you wanted
to simply read the file, the parser was mind-blowingly simple to write.

Of course people decided that it would be _great_ for communications
protocols, and inter-language object representations and... all the stuff that
it really wasn't great for.

And people decided that you would want to write event driven parsers and do
DOM traversals and crazy things like that.

Yeah, that stuff is complicated. (and dare I offend people by saying that it
wasn't really a good idea after all).

But parsing is dead easy. Personally, I prefer JSON for most things because
it is easier to type :-)

~~~
untitaker_
I said "easy parsing", not "easy implementation". "Computationally cheap" is
probably a better term.

Everything is trivial with a parser generator. That is beside the point.

------
amelius
I tried WebDAV a couple of years back on MacOS. Turned out that MacOS had a
bug in its WebDAV implementation, causing files to be lost. Never looked at it
since.

------
hobarrera
> In practice, you can't use a WebDAV client library. In practice, you copy-
> paste XML from the examples in the RFC into your source code and hope for
> the best.

I've written CalDAV clients. This is completely true. Being both an extension
_and_ a modification of WebDAV, CalDAV cannot be used with common WebDAV
libraries.
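To illustrate, here's roughly what hand-assembling one of those request bodies
looks like with a generic XML library - the namespaces come from RFC 4791,
but nothing WebDAV-specific is helping you. A sketch, not a complete client:

```python
import xml.etree.ElementTree as ET

# Build a CalDAV calendar-query REPORT body by hand; the element names and
# namespaces are copied from the RFC 4791 examples, as the article describes.
DAV = "DAV:"
CAL = "urn:ietf:params:xml:ns:caldav"

query = ET.Element(f"{{{CAL}}}calendar-query")
prop = ET.SubElement(query, f"{{{DAV}}}prop")
ET.SubElement(prop, f"{{{DAV}}}getetag")
filt = ET.SubElement(query, f"{{{CAL}}}filter")
comp = ET.SubElement(filt, f"{{{CAL}}}comp-filter", name="VCALENDAR")
ET.SubElement(comp, f"{{{CAL}}}comp-filter", name="VEVENT")

body = ET.tostring(query, encoding="unicode")
print(body)
```

You then POST-equivalent this as a REPORT request and get a multistatus
document back to pick apart, so the copy-paste goes in both directions.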

------
hobarrera
JMAP[1] seems to be attempting to replace IMAP + CalDAV + CardDAV.

Now that we _have_ a replacement, I'd love to see these die. Especially the
latter two.

We're still missing a replacement for WebDAV though. :(

[1]: [http://jmap.io/](http://jmap.io/)

~~~
treve
The problem with Jmap is that it doesn't work well with the rest of the world.
Unless your only use-case is personal sync, this can be a problem. It's
possible to ditch WebDAV, but also breaking compatibility with iCalendar and
iTip makes this a no-go for most people.

If they considered using jCal/jCard, compatibility would be possible and then
it actually has a chance.

In fact, they could even ignore jCal/jCard and pick a format that can express
the same data-model as iCalendar/vCard, and then they would still have my
vote.

The way it's currently developed, it's an entirely new format that maps to
iCalendar in a lossy way. Any iCalendar/iTip extension that currently exists
or that is under development would have no way to be expressed in Jmap.

I think the authors of Jmap have a really great understanding of Imap and what
it would take to replace it, but their use-case of what they need from
calendaring is narrow and their approach to replacing CalDAV is naive.

It's basically ActiveSync v2. Just like ActiveSync it uses HTTP as a tunnel
with a single endpoint and an RPC-like system using POST, except ActiveSync
uses WBXML, and Jmap uses Json.

~~~
brongondwana
By the way, I see you edited this from the first time I read it - I should
point out that FastMail provides a full CardDAV/CalDAV service and uses VCARD
under the hood as well (it's the Cyrus IMAPd server)

What we do need to make sure works is extensibility and custom properties,
because as you have correctly pointed out - we can't tell what's necessary for
the future. I really want to make the extensibility not be at the expense of
the simple things working well and reliably though. I see a fetish in
standards bodies for edge cases and complexity. We want "making the simple
things easy and the hard things possible" not "making the hard things possible
and the simple things hard". The whole user principal dance in *DAV, at the
end of which there is STILL no standard way to share calendars/addressbooks -
all the DAV sharing stuff is basically unused or semi-used, and the whole use
of collections pretty much ignores the capabilities of DAV... that's crazy.
It's layers upon layers, and you get crap like Evolution depending on seeing
that the PUT action is allowed on the collection (which it's not in Cyrus,
because you can only put resources within the collection) and marking the
calendar read-only. Really? Even bootstrapping is a fraught nightmare.

That's what we want to avoid with JMAP - a million choices and all of them
horrible, with no clear guidance.

~~~
treve
> By the way, I see you edited this from the first time I read it.

There were no comments yet, so it seemed ok to elaborate without destroying
context.

I'm definitely with you on the issues with DAV. Things are way harder than
needed.

------
mkesper
The hassle of syncing contacts, spam filters, calendar events, GPG keys (and
emails) makes me think about replacing it with something like mailpile.

Did anyone here try that?

------
anc84
In a footnote, XML is mentioned as being wasteful on bandwidth, but doesn't it
compress very well?

~~~
zamalek
It compresses extremely well, even if you use a predetermined dictionary
(reducing CPU usage). However, because of CPU usage considerations it probably
won't ever do as well as high-entropy encodings such as Protobuf, because to
get truly great compression you simply have to spend the cycles.
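The claim is easy to spot-check with zlib on a repetitive, tag-heavy payload
(the document below is a made-up example; sizes will vary with the input, and
this says nothing about CPU cost):

```python
import json
import zlib

# How well does tag-heavy XML deflate compared to equivalent JSON?
# Repetition stands in for a large multi-record response.
xml_unit = '<propfind xmlns="DAV:"><prop><getcontenttype/><getetag/></prop></propfind>'
xml_doc = xml_unit * 50
json_doc = json.dumps({"op": "propfind", "prop": ["getcontenttype", "getetag"]}) * 50

for label, doc in (("xml", xml_doc), ("json", json_doc)):
    raw = doc.encode()
    packed = zlib.compress(raw, 9)
    print(label, "raw:", len(raw), "compressed:", len(packed))
```

The redundant tags deflate away almost entirely, which is the point: the
wire-size argument against XML mostly evaporates once compression is on.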

If you are using XML for the right applications (i.e. _not_ a data firehose)
the compression/CPU characteristics shouldn't matter at all. It's when you
start using it for high bandwidth scenarios that things become sketchy: you
should be negotiating an out-of-band stream (such as SI, Jingle or just a
plain old socket) and using that for the firehose.

That being said, people like AeroFS supposedly used it as a firehose for
years[1] (in the form of XMPP) before having to replace it with a simpler
protocol: so there does seem to be some elasticity to that assertion about not
using it for a firehose.

[1]: [https://www.aerofs.com/blog/open-sourcing-the-stupid-
simple-...](https://www.aerofs.com/blog/open-sourcing-the-stupid-simple-
messaging-protocol/)

------
untitaker_
Weirdly this post seems to have been submitted multiple times:
[https://hn.algolia.com/?query=webdav%20dies&sort=byPopularit...](https://hn.algolia.com/?query=webdav%20dies&sort=byPopularit..).

I thought HN prevents this? It seems to be the same exact URL.

------
zeveb
XML really is the mindkiller (and the bandwidth-killer). Let's compare &
contrast XML, S-expressions and JSON for the two examples in the article:

    
    
        <?xml version="1.0" encoding="utf-8" ?>
        <D:propfind xmlns:D="DAV:">
            <D:prop>
                <D:getcontenttype/>
                <D:getetag/>
            </D:prop>
        </D:propfind>
    

becomes:

    
    
        (propfind (prop getcontenttype getetag))
    

or:

    
    
        {
          "op": "propfind",
          "prop": ["getcontenttype", "getetag"]
        }
    

And:

    
    
        <?xml version="1.0" encoding="utf-8" ?>
        <C:calendar-query xmlns:D="DAV:"
            xmlns:C="urn:ietf:params:xml:ns:caldav">
            <D:prop>
                <D:getcontenttype/>
                <D:getetag/>
            </D:prop>
            <C:filter>
                <C:comp-filter name="VCALENDAR">
                    <C:comp-filter name="VEVENT">
                        <C:time-range start="20150909T000000Z" end="20300909T000000Z"/>
                    </C:comp-filter>
                </C:comp-filter>
            </C:filter>
        </C:calendar-query>
    

becomes:

    
    
        (calendar-query
         (prop getcontenttype getetag)
         (filter (component "VCALENDAR")
                 (component "VEVENT" (time-range "2015-09-09T00:00:00Z" "2030-09-09T00:00:00Z"))))
    

or:

    
    
        {
          "op": "calendar-query",
          "prop": ["getcontenttype", "getetag"],
          "filter": [
                      {
                       "component": "VCALENDAR"
                      },
                      {
                        "component": "VEVENT",
                        "filter": {
                          "time-range": ["2015-09-09T00:00:00Z", "2030-09-09T00:00:00Z"]
                        }
                      }
                    ]
        }
    

I think it's pretty clear which is least readable and least concise.

So, what does XML buy one in exchange for its lack of concision? Well, a pre-
processing tool which knows nothing about WebDAV or CalDAV could use a schema
definition to sanity-check the formatting of a query. It could probably be set
up to detect if one used an invalid date format; it may or may not be
configurable to detect if one searched for a VCALENDAR component inside a
VEVENT. Regardless, there are semantic constraints in a data model which
cannot be expressed with, e.g., a DTD or XSD.

How about JSON? What does it buy? Well, it has built-in hash tables (as
opposed to faking them with (table (key val) (key2 val2))), which is a huge
win, and it has a ton of libraries in every language one can imagine.

How about S-expressions? I think that they win on clarity, readability and
concision. But there are not a ton of parsing libraries available for every
language (OTOH, a canonical-S-expression parser can be written in a few hours
for any language).
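To give a sense of scale for that claim, here is a toy S-expression reader in
Python - bare symbols, double-quoted strings without escapes, and nested
lists only, so nowhere near a full canonical-S-expression implementation:

```python
import re

# Toy S-expression reader: lists, bare symbols, and double-quoted strings.
# No escape sequences or canonical-form length prefixes -- just the shape.
_TOKEN = re.compile(r'\(|\)|"[^"]*"|[^\s()"]+')

def parse_sexp(text):
    tokens = _TOKEN.findall(text)
    pos = 0

    def read():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            items = []
            while tokens[pos] != ")":
                items.append(read())
            pos += 1  # consume the closing ")"
            return items
        if tok.startswith('"'):
            return tok[1:-1]  # strip the surrounding quotes
        return tok

    return read()

print(parse_sexp('(propfind (prop getcontenttype getetag))'))
```

Error handling aside, that's the whole grammar; the JSON equivalent is only a
little longer because of its extra literal types.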

So, who wins? I think that for most projects nowadays, JSON's clearly the
answer. The pain of XML simply doesn't pay off in practice, while the elegance
of S-expressions doesn't outweigh their unfamiliarity to the great mass of
enterprise software developers.

But if you have a choice, use S-expressions.

~~~
phlakaton
I think that's a gross oversimplification. Here you're critiquing XML's use as
a substrate for a query language, for which I think you have a good point. For
APIs there are reasons why JSON is generally easy enough to work with. But
there is a sweet spot for semistructured documents where I would say XML still
has the edge, because you don't have to quote text and errors in structure are
much easier to identify. The specific problem domain matters with choices like
these. See "XML Is Not S-Expressions" for a deeper treatment of the question.

~~~
zeveb
> But there is a sweet spot for semistructured documents where I would say XML
> still has the edge, because you don't have to quote text and errors in
> structure are much easier to identify.

No argument - as a markup language, XML is fine (not perfect, but fine); it's
a better markup language than either JSON or S-expressions.

But as a data encoding, it's far inferior to JSON and S-expressions.

> See "XML Is Not S-Expressions" for a deeper treatment of the question.

Yeah, I've read it. He's right that XML is better for documents; he's wrong
that the benefits of integrating data and document encoding are worth the pain
of XML.

------
spiralpolitik
If only there was a process whereby you could propose corrections to existing
standards or even propose a new standard to replace the existing standard
instead of whining about it on Hacker News...oh wait.

Seriously, if these things bother you and you think you have a better idea,
write it up in an RFC.

~~~
adrusi
Writing a good RFC is hard work; does it not make sense to write informally
about your ideas first, to see if they have any appeal, before investing the
time into proposing a new standard?

Writing about it this way also exposes the problems to people who otherwise
wouldn't give two fucks about webdav. If the author had simply submitted an
RFC, only the people working on calendar/contact syncing tech would take
notice.

I find it sad that you consider pointing out the flaws in something to be
"whining".

~~~
spiralpolitik
Isn't that the point of the RFC process? It's right there in the name
"Request for Comment". The whole point of the RFC structure is that it forces
you to lay out what someone would need to implement your RFC, which is what
you need to even begin the discussion on a new standard.

A lot of the successful early internet standards followed this path. They
weren't written by committee, they were written by one person with an idea to
make things better.

And I only consider it whining if you point out the flaws, then don't do
something about the flaws using the process for correcting them. The internet
was built by the RFC process, and it's poorer these days because people feel
that following it is "a bit of hard work".

~~~
adrusi
_which is what you need to even begin the discussion on a new standard._

 _I only consider it whining if you point out the flaws, then don't do
something about the flaws using the process for correcting them._

To even begin discussion? The discussion begins with pointing out the flaws.
One person writes an article pointing out some flaws in a technology, then
people comment saying how X is actually not a flaw, and how Y is also quite a
major flaw. After that you start focusing on brainstorming ideas that address
these flaws. _Then_ you refine those ideas and come up with some specification
or prototype implementation (followed by a specification based on it). All
this has to happen before what you claim to be the beginning of the
discussion.

You _can_ do all of these things in private, maybe consulting a couple friends
and colleagues, but you don't have to. You can also do it completely out in
the open, on a public forum. In doing so you increase the noise, but you also
increase the signal. Do you consider this process whining?

