

Why no curl 8 - ternaryoperator
http://daniel.haxx.se/blog/2013/03/23/why-no-curl-8/

======
brokentone
Finally someone who:

1. Thought through the application/library from the beginning so that new
features and changes are binary compatible.

2. Is considerate of the distribution/repository world we live in, doesn't
just say "ain't my problem"

3. Doesn't just bump numbers to do it.

Good work sir, stay strong curl 7

~~~
endgame
Not only that, but libcurl's documentation has some of the best structure I've
ever seen.

~~~
skrebbel
that's just to make up for a ridiculously insane API.

~~~
iuguy
What's so insane about it? I ask as I've recently been working with curl for
webdav backup scripts and haven't had too many problems beyond fat fingers.

~~~
skrebbel
it's modeled after Netscape 2.0's view of the web, and not after how HTTP
actually works.

It tries to force different protocols like HTTP and FTP into a single API,
making for lots of combinations of switches that sometimes do and sometimes do
not work together.

I could go on, but it's all stuff like this. It's not a bad API, just very
archaic.

------
rubyrescue
this reminds me of the time at Microsoft with Visual Studio 6 where the
version comparison was a string compare, but build numbers were julian dates
with 4 digits - 9365 was December 31st, 1999.

so we couldn't go from 9365 to 0001. We just went to 9366 instead, and so we
had 999-366 days to sort out a better version numbering system.

~~~
klodolph
Those aren't Julian dates. Julian dates are just a count of days since some
epoch, so the day after 9365 would be 9366, then 9367, etc.

~~~
mikeash
Took me a lot of head scratching before I realized that the system was just
YDDD, where Y is the last digit of the year (in this case, 9) and DDD is the
number of days into the year (so 365 is December 31st in normal years).

This is definitely the weirdest example of the Y2K problem I've seen.
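The failure mode described above can be sketched like this (a reconstruction of the YDDD encoding from the description, with hypothetical helper names; the real build system was presumably not Python):

```python
def yddd(year, day_of_year):
    """Encode a build number as YDDD: last digit of the year + day of the year."""
    return f"{year % 10}{day_of_year:03d}"

dec_31_1999 = yddd(1999, 365)  # "9365"
jan_1_2000 = yddd(2000, 1)     # "0001"

# A string compare claims the Dec 31 build is "newer" than the Jan 1 build,
# so the rollover from 9365 to 0001 would have broken version ordering.
print(dec_31_1999 > jan_1_2000)  # True
```

Going to 9366 sidesteps the rollover, at the cost of the day field no longer being a valid day of the year.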

~~~
rubyrescue
yep that's it...

------
chubot
Makes sense, thanks for providing meaningful version numbers. I'm looking at
you Chrome, and now Firefox :)

~~~
ditoa
Honestly, I don't have an issue with Firefox/Chrome over the rapid version
number increases. A browser is (or should be) the most updated bit of software
on your computer. I don't even think of the version of Firefox or Chrome these
days. They are just "Firefox" and "Chrome". This is much better for general
users IMHO, as it means they use "Firefox" and not "Firefox 4.1.2" or
whatever. A consistent update schedule makes a public version number
pointless and it has, at least in my experience, made extension developers
better at building extensions that don't break because I have upgraded from
version 15 to 16 like we had back in the Firefox v2/3/4 years.

At the end of the day when the version number isn't really known to the end
user it makes little difference if you bump up the major version number or the
minor.

Also, I love how quickly we get new features now. Firefox has improved more in
the past year than it did in the several years before the shift to a 6-week
release cycle. It isn't suited to all software, I admit, but for a browser it
is perfect. None of this "This site only works with Firefox XX or Chrome XX".
Thank god!

~~~
LeonidasXIV
> at least in my experience, made extension developers better at building
> extensions that don't break because I have upgraded from version 15 to 16
> like we had back in the Firefox v2/3/4 years.

This was caused more by Firefox no longer breaking compatibility all the
time. The plugin API in Firefox used to be just the internals, exposed to JS,
so every time something changed, it broke extensions. These days they don't do
that quite as often, plus they published the Jetpack SDK, which promises a
stable API at the cost of fewer possibilities to change the browser's
behaviour.

------
rwmj
The best thing libvirt[1] does is to have a stable API from day 1. Over 7
years and counting. In practice this means a couple of things: We have to be
careful and considerate when introducing any new API, since we're going to be
maintaining it for a very long time. And although we do deprecate APIs
occasionally, we keep them around forever so programs don't need to be "fixed"
all at once (or maybe never).

[1] <http://libvirt.org>
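The keep-deprecated-APIs-forever policy described above can be sketched like this (hypothetical function names, not libvirt's actual API - libvirt is C, this is just the shape of the policy):

```python
import warnings

def connect_v2(uri, timeout=30):
    """The current API: richer, with an explicit timeout."""
    return {"uri": uri, "timeout": timeout}

def connect(uri):
    """Deprecated, but kept around forever: old callers never need fixing.

    It simply forwards to the new API with the old default behaviour.
    """
    warnings.warn("connect() is deprecated; use connect_v2()",
                  DeprecationWarning, stacklevel=2)
    return connect_v2(uri, timeout=30)

# Programs written against the old API keep working, unchanged:
conn = connect("test:///default")
```

Because the old symbol never disappears, an ABI/API bump is never forced on downstream programs; they can migrate on their own schedule, or never.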

------
rajanikanthr
Curl is good, but as a .NET programmer,
httpie (<https://github.com/jkbr/httpie>) and Fiddler suffice for me

~~~
manojlds
Httpie should suffice for any, well, like they say, human.

~~~
benatkin
The post is mainly about the library, not the command-line interface.

------
qntmfred
I'm glad they've decided to address this 13 years into v7?

------
benatkin
This is good. It's very good to have things that just won't break. The
successor to curl 7 should probably not be called "curl", just like the
successor to JSON, if there is one, should not be called "json".

------
anonymous
A minor nitpick:

> These distributions only want one version of the lib, so when an ABI bump is
> made all the applications that use the lib will be rebuilt and have to be
> updated.

This is not true, at least on Arch Linux. Currently I have 5 different
versions of libpng installed:

local/lib32-libpng 1.5.14-1
local/lib32-libpng12 1.2.50-2
local/lib32-libpng14 1.4.11-2
local/libpng 1.5.14-1
local/libpng12 1.2.50-2

~~~
Spidler
As someone who was in the distribution business until a couple of years ago, I
can tell you that it isn't as well supported in practice.

Imagine, for example, a rather classical stack. Let's take KDE, since they are
at the top of the news.

We have libpng 1.2.x, with headers in include/libpng/. (This is wrong; very
astute of you to notice.) We link kdelibs to libpng-1.2 (independent use,
inherited link). We link kicker to libpng-1.2 (kicker uses some things from
libpng as well. Yay.)

A new version of libpng is installed, 1.4 - breaking ABI but not API in the
process! If this version is installed in "include/libpng/", you'll link future
versions of kicker against it. This gives the chain:

kdelibs : libpng 1.2.x
kicker : libpng 1.2.x, libpng 1.4.x

Oh, what just happened? You say the ABI collides at run-time and nothing loads
.png files anymore?

So what if we instead make /include/libpng link to /dev/null, and only
explicitly install headers into libpng-1.2 and libpng-1.4? Sure! Let's go that
way.

kicker now links properly against 1.2 for both. Then Krita comes along. Krita
wants to use a fancy new feature in libpng 1.5. Cool, let's just link that.

Wait, did we just collide the public API again? Crap.

So, in practice, it doesn't work quite as well as hoped. Function resolution
when several libraries provide the same function is... messy. Especially when
they aren't perfectly interchangeable.

------
justin66
> These distributions only want one version of the lib, so when an ABI bump is
> made all the applications that use the lib will be rebuilt and have to be
> updated.

That's... a big deal?

Relative to everything else that goes on in making a distribution, it seems
pretty trivial.

~~~
zdw
The problem is that putting together the distribution is "someone else's
work", not the work of application software developers. The division of labor
has always put almost all of it on the distro maker, and almost none on the
developer.

There's no automated pipeline of code -> build -> distro package -> testing
happening, because almost everyone is still stuck in the bad old days of
"download the tarball and compile it" and if you're really lucky you get a
checksum on the tarball.

Ideally, we'd move to some automated means of putting together software.
Imagine if there were a GitHub/Travis CI-style automated process that spat out
packages for Linux/BSD/Solaris/OS X/etc. on every checkin. That's where we
need to go.

~~~
icebraining
Well, Debian has git-buildpackage, which creates .deb packages from git
repositories, and can use the commit log for the package changelog.

The problem is, unless the upstream dev uses Debian, why should he care?

~~~
lmm
Worse is when they do use debian, so they care about debian... but not about
any other distribution. (And debian is weird in how it packages some things -
when I had a project released as source tarballs, the only people I got emails
about failing to build it were debian users).

More modern languages solve this by having their own formats - java/maven is
particularly impressive, but python/ruby/node all have their own ways to
specify dependencies and package libraries - which not only work
cross-platform, but also make it much easier to have compatible or
incompatible version bumps.

Maybe it's time for a virtualenv/bundler-like solution for C. Or time to stop
using it for anything outside the base OS.

~~~
claudius
And then you get six different package managers on a single system for Ruby,
Python, Node, Perl, X and Y?

Also, how would you specify a cross-language dependency, for example from a
Python application to a Ruby application to a Ruby library? Should the Python
installer know how to handle "Ruby dependencies" and what little piece of
software to call to install said dependency?

~~~
lmm
>And then you get six different package managers on a single system for Ruby,
Python, Node, Perl, X and Y?

I guess it would be better to have a cross-language VM - something that the
CLR and the JVM are trying to do. Or it's possible to treat virtualization
images (e.g. AMIs) as your execution environment and use a package manager
inside that (although there are still no good package managers for C in terms
of handling dependencies on conflicting versions of the same libraries). The
point is that the "native" system is too inconsistent.

>Also, how would you specify a cross-language dependency, for example from a
Python application to a Ruby application to a Ruby library?

I don't think that actually happens; you can only have a dependency where you
have a common interface. If you were running the programs in Jython and JRuby
you could have such a dependency, but in that case you would be able to use a
JVM-oriented package manager for both.

You're right though; there's no reason a package manager _should_ be language-
specific. A virtualenv/bundler-like piece of software that managed the
installation of libraries in any language would be a very good thing.

~~~
claudius
> I don't think that actually happens; you can only have a dependency where
> you have a common interface.

There is always the OS as a common interface. For example, I don’t see why
some application should not require a mailserver to be installed, i.e. a
‘mail’ binary to be present, even if said binary might be a native C binary
rather than a Python script.

I guess it all depends on what you want to do: If there is little to no
interaction/overlap between applications, either because you want to keep them
separate anyways[0] or because there is only one application running on a
system, deploying a VM for each application makes perfect sense. On the other
hand, if I had to install every library required by each application
separately for said application[1], my system that currently fits nicely into
6 GB would probably require at least a few gigabytes more, and likely also
more RAM.

[0] Because you don’t necessarily trust the authors, for example, as appears
to be the case with all the Markets/Playstore/insert-fancy-name-here.

[1] This btw already exists, it’s called ‘static linking’.

~~~
jfb
I'm intrigued by NixOS [1]; it seems like it's designed to solve this exact
problem.

[1] <http://nixos.org>

~~~
claudius
Whether it solves that problem appears to depend on your definition of the
problem. To me, it appears as if every library required by every application
was stored together with said application, and that every version of every
library and application ever installed was also stored. That’s surely nice if
you have large disks, but it is neither going to work for phones nor for
(sensibly-priced) SSDs.

I agree, though, that it does look like an interesting project, but also have
to admit that I’m quite happy with the state of package management in Debian –
including the idea to force everything into the dpkg/APT framework.

------
Kiro
I never understood the strictness surrounding version numbering. We've had
long discussions at work whether a release should be called 4.2 or 4.1.4. Who
cares and why does it matter?

~~~
agwa
It matters because a good versioning system indicates to users what to expect
from the release. If version numbers are major.minor.patch, then incrementing
the patch number should indicate that you've only fixed bugs and it should be
safe to install without risking new bugs. Incrementing the minor number should
mean you've made feature changes but everything is backwards compatible.
Incrementing the major number should mean you've made major changes that break
compatibility.

See <http://semver.org/> for more details.

Edit: others have pointed out that this is more important for libraries and
less so for user-facing applications. For user-facing applications I'd
recommend two version numbers so you can distinguish between feature releases
and bugfix-only releases, which can be a useful indicator to the end user.
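The major.minor.patch ordering described above can be sketched like this (a toy parser with hypothetical helper names, not a full implementation of the semver spec - it ignores pre-release tags and build metadata):

```python
def parse(version):
    """Split "major.minor.patch" into a tuple of ints so versions sort numerically."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_patch_upgrade(installed, candidate):
    """True when only the patch number moved forward: bug fixes, no new features."""
    return (parse(candidate)[:2] == parse(installed)[:2]
            and parse(candidate) >= parse(installed))

print(is_patch_upgrade("4.1.3", "4.1.4"))  # True: safe, bug fixes only
print(is_patch_upgrade("4.1.4", "4.2.0"))  # False: new features, more risk
print(parse("4.10.0") > parse("4.9.9"))    # True: numeric, not string, order
```

Note the last line: comparing tuples of ints also avoids the string-comparison trap discussed elsewhere in this thread, where "4.9.9" would sort after "4.10.0".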

------
shurcooL
What are the reasons not to follow semver today?

~~~
kscaldef
I'm not sure, but you may be misunderstanding the article. As far as I can
tell, they _are_ following semver, and the discussion is about why they would
go to extreme measures to maintain the same major version for 13 years and
avoid a binary incompatibility. In other words, it's not that they don't
recognize the value of semver, but that they are also considering the cost to
users of the varying types of version bumps.

~~~
shurcooL
I think my question is misunderstood also.

I realize they follow semver (at least unofficially), which I think is great.
What I want to know is whether there are any good reasons why others don't do
it.

