

Why curl defaults to stdout
http://daniel.haxx.se/blog/2014/11/17/why-curl-defaults-to-stdout/

======
jrochkind1
I actually _want_ printing to stdout more often than I want printing to file,
it is more often what I need. I guess different people have different use
cases.

I will admit that rather than learn the right command to have curl print to a
file -- when I _do_ want to write to a file, I use wget (and appreciate its
default progress bar; there's probably some way to make curl do that too, but
I've never learned it either).

When I want to write to stdout, I reach for curl, which is most of the time.
(Also for pretty much any bash script use, I use curl; even if I want to write
to a file in a bash script, I just use `>` or look up the curl arg.)

It does seem odd that I use two different tools, with mostly entirely
different and incompatible option flags -- rather than just learning the flags
to make curl write to a file and/or to make wget write to stdout. I can't
entirely explain it, but I know I'm not alone in using both, and choosing from
the toolbox based on some of their default behaviors even though with the
right args they can probably both do all the same things. Heck, in the OP the
curl author says they use wget too -- now I'm curious if it's for something
that the author knows curl doesn't do, or just something the author knows wget
will do more easily!

To me, they're like different tools focused on different use cases, and I
usually have a feel for which is the right one for the job. Although it's kind
of subtle, and some of my 'feel' may be just habit or superstition! But as an
example, recently I needed to download a page and all its referenced assets
(kind of like browsers will do with a GUI; something I only very rarely have
needed to do), and I thought "I bet wget has a way to do this easily", and
looked at the man page and it did, and I have no idea if curl can do that too
but I reached for wget and was not disappointed.
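
(For reference, the wget incantation for that "page plus its assets" case is
roughly the following; the URL is just a placeholder.)

    
    
        wget --page-requisites --convert-links https://example.com/some/page.html
    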

~~~
mikepurvis
I think the biggest nuisance with this strategy is that neither tool is
included by default on the machines I'm usually working with— wget is missing
from my Mac, and curl is missing from my Ubuntu servers.

Both can be quickly rectified, but it's still a pretty big pain.

~~~
js2
The obvious solution is to submit a patch to curl, such that when it's called
as "wget", it emulates wget's command-line options, and vice versa.

Written only partially in jest. I've submitted patches to both projects, and
both are relatively straightforward code bases to dive into.

------
NickPollard
I think his argument is valid, and thinking about curl as an analog to cat
makes a lot of sense. Pipes are a powerful feature and it's good to support
them so nicely.

However, just as curl (in standard usage) is an analog to cat, I feel that
wget (in standard usage) is an analog to cp, and whilst I certainly can copy
files by doing 'cat a > b', semantically cp makes more sense.

Most of the time, if I'm using curl or wget, I want to cp, not cat. I always
get confused by curl, not being able to remember the command to just cp the
file locally, so I tend to default to wget because it's easier to remember.
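
To make the analogy concrete, a minimal sketch (example URL only):

    
    
        # "cat"-style: curl streams to stdout, you choose the destination yourself
        curl https://example.com/file.tar.gz > file.tar.gz
        
        # "cp"-style: wget picks the local name from the URL for you
        wget https://example.com/file.tar.gz
    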

~~~
Touche
I find that weird. Why are you wanting to copy stuff from http all of the
time? I only ever rarely do that because I want to examine something like an
API response further for an extended period of time. Usually I just want to
see the response once, or view the headers.

~~~
coldtea
> _I find that weird. Why are you wanting to copy stuff from http all of the
> time?_

This kind of question (and the other one you asked above) reminds me of the
people who answer Stack Overflow questions with their opinions on "best
practices" and "what you should be doing instead" instead of answering what
the poster asked. E.g.:

Q. "How do I store JSON in mysql?"

A. "Why do you ask? What do you want to achieve with this? You'll be better
served with a NoSQL database".

etc, etc.

~~~
Morgawr
That is called the XY problem[0]. It is a fairly common practice, especially
in technical IRC channels. Honestly, it can be very frustrating, but a lot of
the time it is useful and actually helps both the person asking the (wrong)
question and the group of people trying to provide a proper answer. Some
(maybe most?) people do take it too far and even _refuse_ to give you a
straightforward answer just to be pedantic and annoying. Those types of people
have made this a much bigger problem than it needs to be; even so, thinking
about the XY problem before you ask a question is always a good idea.

[0] -
[http://mywiki.wooledge.org/XyProblem](http://mywiki.wooledge.org/XyProblem)

------
viraptor
I think he may be missing what people mean by "it's easier without an
argument". It's not just "only one option" - what I see in reality quite
often is: "curl http://...", screen is filled with garbage, ctrl-c, ctrl-c,
ctrl-c, damn I'm on a remote host and ssh needs to catch up, ctrl-c, "cur...",
actually the terminal is broken and I'm writing garbage now, "reset",
"wget http://...".

I'm not saying he should change it. But if he thinks it's about typing less...
he doesn't seem to realise how his users behave.

~~~
yason
That's the reason I use wget, and only when necessary do I switch to curl. It's
not that I forget about that nasty behaviour (even though I sometimes do), but
it usually goes like this:

    
    
        $ curl -o news.ycombinator.com
        curl: no URL specified!
        curl: try 'curl --help' or 'curl --manual' for more information
        $ curl -O news.ycombinator.com
        curl: Remote file name has no length!
        curl: try 'curl --help' or 'curl --manual' for more information
        $ curl -O foo news.ycombinator.com
        curl: Remote file name has no length!
        curl: try 'curl --help' or 'curl --manual' for more information
        <html>
        <head><title>301 Moved Permanently</title></head>
        <body bgcolor="white">
        <center><h1>301 Moved Permanently</h1></center>
        <hr><center>nginx</center>
        </body>
        </html>
        $ wget news.ycombinator.com
        --2014-11-17 14:27:18--  http://news.ycombinator.com/
        Resolving news.ycombinator.com (news.ycombinator.com)... 198.41.191.47, 198.41.190.47
        Connecting to news.ycombinator.com (news.ycombinator.com)|198.41.191.47|:80... connected.
        HTTP request sent, awaiting response... 301 Moved Permanently
        Location: https://news.ycombinator.com/ [following]
        --2014-11-17 14:27:19--  https://news.ycombinator.com/
        Connecting to news.ycombinator.com (news.ycombinator.com)|198.41.191.47|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: ‘index.html’
    
        [ <=>                                                                                                                                  ] 22,353      --.-K/s   in 0.07s
    
        2014-11-17 14:27:19 (331 KB/s) - ‘index.html’ saved [22353]
    

With wget, I can just throw any URL at it and it'll probably do the right
thing with the least amount of surprises. "Grab a file" is my use case 99.99%
of the time; "print a file" is the remaining 0.01%.

~~~
Touche
I asked this of someone else, just out of curiosity, why do you find yourself
downloading content from http that often? What are you doing with these files?

~~~
dec0dedab0de
On my laptop, to view something while offline. On a server, to install
something that is not in a package manager.

~~~
noselasd
On that note, sites that don't easily offer copyable links for download annoy
me to no end.

Most of the time when I download anything, I need it on another machine, and
normally the easiest way is copy/pasting a URL from my local web browser to a
remote terminal and fetching it with wget. In that case, an auto redirect to a
mirror selector that then auto-downloads the item, all handled by JavaScript,
is pretty frustrating.

~~~
Igglyboo
I noticed this with mega.co.nz and I hate it: they do all the downloading with
JavaScript, then start an actual browser download (which is just copying from
localStorage to your download directory) that finishes in an instant, but I
still have to remain on the page.

~~~
cooper12
I don't think it would be possible to use curl for mega because it has to
locally decrypt the stream first. I do remember using a (now deprecated)
python library that wrapped the mega api [0] a while ago, so that might
interest you if you want to do everything from the command line.

[0]
[https://github.com/richardasaurus/mega.py](https://github.com/richardasaurus/mega.py)

------
shapeshed
Do one thing and do it well.

IMHO cURL is the best tool for interacting with HTTP and wget is the best tool
for downloading files.

~~~
digi_owl
Pretty much. I keep seeing curl being used as the "back end" of web browsers,
fueling the likes of WebKit.

Wget, on the other hand, ends up within shell scripts and similar (I have
before me a distro where the package manager is made of shell scripts,
coreutils and wget).

------
rachelbythebay
This "-O" seemed dubious to me so I took a look. Turns out... yep, it's not as
simple as that.

"curl -O foo" is not the same as "wget foo". wget will rename the incoming
file to as to not overwrite something. curl will trash whatever might be
there, and it's going to use the name supplied by the server. It might
overwrite anything in your current working directory.

Try it and see.
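
A minimal way to see it (example URL, assuming ./index.html already exists):

    
    
        curl -O https://example.com/index.html   # silently overwrites ./index.html
        wget https://example.com/index.html      # saves to ./index.html.1 instead
    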

~~~
bkirwi
According to the manpage, the filename depends only on the supplied URL:

    
    
      Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.)
      The remote file name to use for saving is extracted from the given URL, nothing else.
    

wget is a hugely useful tool for making local copies of websites and similar
things -- the no-clobber rule is useful there, and the built-in crawling and
resource fetching is fantastic. OTOH, for most things, I actually like curl's
'dumb' behaviour; it seems to match up better with the rest of the UNIX
ecosystem.

------
userbinator
I think of curl as a somewhat more intelligent version of netcat that doesn't
require me to do the protocol communication manually, so outputting to stdout
makes great sense.

------
wyldfire
It would be really nice if curl took the content-type and results from
isatty(STDOUT_FILENO) into consideration when deciding whether to spew to
stdout.
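
The isatty half is easy enough to approximate with a tiny wrapper today; a
rough sketch (the wrapper is made up, not a curl feature):

    
    
        #!/usr/bin/env sh
        # save to a file when stdout is a terminal, stream when it's a pipe/redirect
        if [ -t 1 ]; then
            curl -sSLO "$1"    # interactive: save under the remote file name
        else
            curl -sSL "$1"     # piped or redirected: keep today's stdout behaviour
        fi
    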

~~~
acqq
Yes, I can't imagine there are actual scripts dumping binary data to the
terminal, and it would help everybody whose terminal would otherwise
experience what viraptor nicely describes:

""curl [http://..."](http://..."), screen is filled with garbage, ctrl-c,
ctrl-c, ctrl-c, damn I'm on a remote host and ssh needs to catch up, ctrl-c,
"cur...", actually terminal is broken and I'm writing garbage now, "reset",
"wget [http://..."."](http://...".")

I admit it happened to me more than once.

~~~
kybernetyk
For debugging home-made servers it's quite nice (if you don't want to break
out Wireshark):

    
    
        curl --silent http://i.imgur.com/0nCbgbi.jpg | hexdump -C | less
    

Now you could automate this for testing, etc.
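
E.g., a rough automated version of that check (same test URL; it just verifies
the response starts with the JPEG magic bytes):

    
    
        curl -s http://i.imgur.com/0nCbgbi.jpg | head -c 3 | xxd -p | grep -q ffd8ff \
            && echo "looks like a JPEG" || echo "unexpected content"
    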

There are use cases for terminal output and changing this behavior now would
probably wreck many 3rd party scripts. And if you really just want a simple
download manager you probably should use wget anyways.

------
davidmh
HTTPie is a command line HTTP client, a user-friendly cURL replacement.
[http://httpie.org](http://httpie.org)

~~~
blacksmith_tb
I find it very useful for debugging (and it's in the Ubuntu repos, and can be
installed via homebrew on OSX).

------
0x0
Chrome dev tools have a super useful "Copy as cURL" right-click menu option in
the network panel. Makes it very easy to debug HTTP!

~~~
icebraining
Same with Firefox dev tools. I use it all the time.

~~~
__david__
There's also an awesome Firefox extension called "cliget" that will give you
curl and wget (and some Windows-only thing I've never heard of) command lines:
it adds a "Copy curl for link" context menu item for every link.

It's quite nice because it will put all your cookies on the command line so
you can trivially download files protected by a login page directly to remote
servers.

------
mobiplayer
We all have some user bias, and in this case it is geared towards seeing Curl
as some shell command to download files through HTTP/S.

Luckily, Curl is much more than that, and it is a great and powerful tool for
people who work with HTTP. The fact that it writes to stdout makes things
easier for people like me who are no gurus :) as it just works as I would
expect.

When working with customers with dozens of different sites I like to be able
to run a tiny script that leverages Curl to get me the HTTP status code from
all the sites quickly. If you're migrating some networking bits this is really
useful for a first quick check that everything is in place after the
migration.

Also, working with HEAD instead of GET (-I) makes everything cleaner for
troubleshooting purposes :)

My default set of flags is -LIkv (follow redirects, only headers, accept
invalid certs, verbose output). I also use -H a lot to inject headers.
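
A rough sketch of that kind of check with the flags above (sites.txt is a
hypothetical list of URLs, one per line):

    
    
        #!/usr/bin/env sh
        # print "STATUS  URL" per site: -s silent, -k accept invalid certs,
        # -L follow redirects, -I HEAD only, -o discard headers, -w print the code
        while read -r url; do
            printf '%s  %s\n' "$(curl -skLI -o /dev/null -w '%{http_code}' "$url")" "$url"
        done < sites.txt
    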

------
_almosnow
99% of the time, when I'm using curl/wget, it's to download a compressed file.
So, for me, `curl | tar` is shorter than `wget -O - | tar`, and much better
than `wget` -> download -> decompress -> delete the file.
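
I.e., something along these lines (example URL):

    
    
        # fetch and unpack in one go; -L follows redirects, tar's z handles the gzip
        curl -sL https://example.com/release.tar.gz | tar xz
        
        # the wget equivalent needs the explicit "write to stdout" flag
        wget -qO- https://example.com/release.tar.gz | tar xz
    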

~~~
LeonidasXIV
I don't trust tarballs to decompress cleanly and not explode into umpteen
smaller files which I have to hunt in my download folder.

~~~
Sir_Cmpwn
So don't run it from your downloads folder, run it from the place you actually
want it extracted to. You're only locked into a downloads folder if you're
using a web browser, which in this case you aren't.

------
eddieroger
Having known both tools for a long time now, I never realized there was a
rivalry between them - I just figured they're each used differently. cURL is
everywhere, so it's a good default. I use it when I want to see all of the
output of a request - headers, response raw, etc. It's my de facto API testing
tool. And before I even read the article, I assumed the answer was "Everything
is a pipe". It sucks to have to memorize the flags, but it's worthwhile when
you're actually debugging the web.
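
Typical invocations for that kind of poking (the API URL is only a placeholder):

    
    
        curl -si https://api.example.com/v1/status   # -i: include response headers in the output
        curl -sv https://api.example.com/v1/status   # -v: also show the request and connection details
    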

------
talles
> people who argue that wget is easier to use because you can type it with
> your left hand only on a qwerty keyboard

Haha, I would never have realized that.

~~~
bshimmin
I've worked with multiple people who chose passwords based on whether they
could be typed with only one hand. I guess there's a perverse sort of sense in
it, if you're really that lazy.

~~~
talles
Makes sense, but... I actually think the exact opposite.

For me the perfect password is one where you type consecutive characters
alternating between the left and right hand.

With words that you type with just one hand, you do a little 'twist' with that
hand which, IMO, is a little slower and less comfortable. When alternating, as
soon as your finger reaches a key the other hand is already moving to the next
one, and it goes back and forth.

But I guess I'm making a point more about comfort than laziness.

~~~
bshimmin
I see what you're saying. I think the one-handed-password people mainly did it
so that they could do something else with the other hand, like hold a cup of
coffee, flick through the newspaper, that sort of thing. Seems like madness to
me but there you go.

------
discardorama
The "c" in "curl" stands for "cat". Any unix user knows what cat(1) does. Why
the confusion?

~~~
wtetzner
I think the confusion is probably that people didn't realize that "c" stood
for "cat" in "cURL".

------
gtrubetskoy
I am surprised there is no mention of the BSD fetch(1)
[http://www.freebsd.org/cgi/man.cgi?query=fetch%281%29](http://www.freebsd.org/cgi/man.cgi?query=fetch%281%29)
, which probably pre-dates both curl and wget.

------
lsiebert
I was recently playing with libcurl (the easiest way I know to interact with a
REST API in C), and libcurl's default callback for writing data does this too.
It takes a file handle, and if no handle is supplied, it defaults to stdout.
It's actually really nice as a default... you can use different handles for
the headers vs the data, or use a different callback altogether.

I really, really like libcurl's API (or at least the easy API; I didn't play
around with the heavy-duty multi API for simultaneous stuff). It's very clean
and simple.

------
ams6110
I use curl over wget in most cases, just because I learned it first I guess. I
use it enough that I rarely make the mistake of not redirecting when I want
the output in a file.

The one case where I will reach for wget first is making a static copy of a
website. I need to do this sometimes for archival purposes, and though I
always need to look up the specific wget options to do this properly, this use
case seems to be one where wget is stronger than curl (especially converting
links so they work properly in the downloaded copy).
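
(The kind of option set this usually means, give or take; illustrative rather
than the only workable combination:)

    
    
        # mirror a site for offline use and rewrite links so they work locally
        wget --mirror --convert-links --page-requisites --adjust-extension --no-parent https://example.com/docs/
    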

------
pbhjpbhj
"cat url", huh, that makes sense.

Why not just alias it ("make a File from URL" -> furl?) if people want to use
it with the -O flag set by default?
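
E.g. something like this in a shell rc file ("furl" being the name suggested
above):

    
    
        alias furl='curl -O'    # or 'curl -LO' to also follow redirects
    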

------
zkhalique
I find it pretty cool how authors of text-mode UNIX programs are still around.
In fact the GNU culture has kind of grown up around that. And yet, to me text-
mode stuff is just a part of a much larger distribution, not something to be
distributed to so many systems. Oh, how times have changed.

------
unclesaamm
I am in the opposite camp, where I always try to pipe wget to a file. Then I
end up with two files. Argh.

------
geon
> if you type the full commands by hand you’ll use about three keys less to
> write “wget” instead of “curl -O”

Unless you forgot what the option was since you don't use it multiple times a
day.

------
johncoltrane
OK, the screen filled with garbage happens the first time you use curl; then
you read the README or --help, which you should have done before, you learn
-o, and… it never happens again.

No big deal.
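
For the record, the two flags that usually trip people up (example URLs):

    
    
        curl -o page.html https://example.com/       # -o FILE: save to the name you give
        curl -O https://example.com/page.html        # -O: save under the file name from the URL
    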

------
agumonkey
curl could parse the MIME type and decide where to push the stream. POC:

    
    
        #!/usr/bin/env sh
        
        # peek at the Content-Type header: print a plain "curl" command for text,
        # otherwise redirect to a local file named after the URL
        case $(curl -sLI "$1" | grep -i content-type) in
            *text*) echo "curl $1"
                    ;;
            *) echo "curl $1 > $(basename "$1")"
               ;;
        esac
    

[https://gist.github.com/agumonkey/b85cef0874822c470cc6](https://gist.github.com/agumonkey/b85cef0874822c470cc6)

Costs one extra round trip, though.

------
angelortega
tl;dr Because the author says so.

