
Curl vs. Wget
https://daniel.haxx.se/docs/curl-vs-wget.html
======
JonathonW
For my usage:

* Wget's the interactive, end-user tool, and my go-to if I just need to download a file. For that purpose, its defaults are more sane, its command line usage is more straightforward, its documentation is better-organized, and it can continue incomplete downloads, which curl can't.

* Curl's the developer tool-- it's what I'd use if I were building a shell script that needed to download. The command line tool is more unix-y by default (outputs to stdout) and it's more flexible in terms of options. It's also present by default on more systems-- of note, OSX ships curl but _not_ wget out of the box. Its backing library (libcurl) is also pretty nifty, but not really relevant to this comparison.

This doesn't really need to be an "emacs vs. vim" or "tabs vs. spaces"-type
dichotomy: wget and curl do different things well and there's no reason why
both shouldn't coexist in one's workflow.

~~~
manyxcxi
> This doesn't really need to be an "emacs vs. vim" or "tabs vs. spaces"-type
> dichotomy: wget and curl do different things well and there's no reason why
> both shouldn't coexist in one's workflow.

Totally agree. I love curl for testing API request/responses manually. It's
usually a huge part of navigating my way around a new API that doesn't have a
client library available for whatever language I'm using at that time.

I also use it for weird requests that need special headers or authentication.
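
A minimal sketch of that kind of request (the host, token, and endpoint here are
made up):

    curl -s \
      -H 'Authorization: Bearer <token>' \
      -H 'Accept: application/json' \
      https://api.example.com/v1/widgets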

Wget is the first thing I turn to when I'm downloading anything remote from
the command line or scraping some remote content for analysis.

~~~
majewsky
Random plug: Another invaluable tool for API analysis is
[http://mitmproxy.org](http://mitmproxy.org).

~~~
mhils
Thanks for mentioning it! :) One of the authors here, happy to answer any
questions.

------
falcolas
My favorite use of wget: mirroring web documentation to my local machine.

    
    
        wget -r -l5 -k -np -p https://docs.python.org/2/
    

Rewrites the links to point to local files where appropriate, and the ones
which are not local remain links to the online documentation. Makes for a
nice, seamless experience while browsing documentation.

I also prefer wget to `curl -O` for general file downloads, simply because
wget will handle redirects by default and `curl -O` will not. Yes, I could
remember yet another argument to curl... but why?
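
For reference, the argument in question is -L (follow redirects); a minimal
sketch against a hypothetical redirecting URL:

    wget https://example.com/latest/tool.tar.gz        # follows the redirect by default
    curl -L -O https://example.com/latest/tool.tar.gz  # needs -L to do the same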

That said, I love curl (combined with `jq`) for playing with rest interfaces.

~~~
daveguy
Thanks for the jq tip. I hadn't seen that before. Link if anyone's interested:

[https://stedolan.github.io/jq/](https://stedolan.github.io/jq/)

~~~
k3n
Underscore CLI looks interesting as well, though I haven't personally had much
of a chance to play with it. It does require NodeJS, which might be a deal-
breaker for some, but if it's already in your toolchain then it might come in
handy.

[https://github.com/ddopson/underscore-cli](https://github.com/ddopson/underscore-cli)

~~~
daxelrod
I've written a tool similar to underscore-cli and jq, called jowl. It's
designed to be easier to learn (for JavaScript developers) than underscore-cli
or jq.
The README includes a comparison.

It's still early in its development, and would benefit from a tutorial and a
few more features, but it's getting there.

[https://www.npmjs.com/package/jowl](https://www.npmjs.com/package/jowl)

------
geerlingguy
Also putting this out there—for nicer REST API interaction on the CLI, and a
little more user-friendliness, you might also want to add HTTPie[1] to your
toolbelt.

It's not going to replace curl or wget usage, but it is a nicer interface in
certain circumstances.

[1] [https://github.com/jkbrzt/httpie](https://github.com/jkbrzt/httpie)
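
A rough sketch of what HTTPie calls look like (using the public httpbin.org
test service; headers are Name:Value pairs, JSON body fields are name=value):

    http GET httpbin.org/get
    http POST httpbin.org/post name=daniel tool=curl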

~~~
wdmeldon
This is an extremely minor quibble, but the dependency on Python makes me less
inclined to use it. I'm stuck on Windows for a lot of the work I do, so
configuring Python is never fun, and there is no one-line installation method
from what I can tell.

Putting it up on Chocolatey[0] might be a good idea. Not sure how feasible
that is, however.

[0]
[https://chocolatey.org/packages?q=httpie](https://chocolatey.org/packages?q=httpie)

~~~
sbierwagen
Should have led that comment with "I'm on Windows, so".

Of course it's going to be miserable to use any command-line tool on Windows.
It's Windows.

~~~
kayone
I'm not sure why you feel like that. I switched back to Windows after ~1 year
of using OSX, and I wouldn't say it's "miserable"; there is really nothing I
could do in OSX's command line that I can't do in Windows.

~~~
barbs
What is your Windows command-line environment? Plain CMD prompt or Cygwin? Or
something else?

~~~
gutnor
PowerShell is very powerful, and for Windows it beats cygwin/mingw. I'm not
quite sure how it measures against Linux shells running on Linux, but Microsoft
has really made a proper shell for Windows; too bad it looks so different.

Obviously if you work in a cross-platform environment, cygwin/mingw is still
the only thing that will provide you some sort of consistency in your workflow
on Windows machines.

------
ubercow
Though it's only briefly mentioned at the bottom of the article, I'd like to give
a huge shoutout to aria2. I use it all the time for quick torrent downloads,
as it requires no daemon and just seeds until you C-c. It also does a damn
good job at downloading a list of files, with multiple segments for each.

~~~
barbs
+1 for aria2. It's like the VLC of command-line download tools - it Just Works
with anything.

[https://aria2.github.io/](https://aria2.github.io/)

~~~
toomuchtodo
Also can be installed with brew.

------
Rauchg
I instinctively go to `wget` when I need to, uhm, get the file into my
computer[1]. `curl -O` is a lot more effort :P

Other than that, curl is always better.

[1] Aliasing `wget` to ~`curl -O` might be a good idea :)
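
Something like the following in a shell rc file would approximate that (note it
shadows the real wget, so a different alias name may be safer):

    # rough stand-in for wget: keep the remote filename and follow redirects
    alias wget='curl -L -O'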

~~~
cm3
I use wget for downloads because it follows links by default and resume is
just -c. I never figured out how to make curl do the equivalent of -c.
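
For what it's worth, curl's rough equivalent is -C - (continue at an offset it
works out from the existing partial file); a sketch with a made-up URL:

    wget -c https://example.com/big.iso       # resume a partial download
    curl -C - -O https://example.com/big.iso  # curl's equivalent of wget -c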

~~~
rogerbinns
Let's compare the length of the man pages:

    
    
        $ man curl | wc -l
        1728
        $ man wget | wc -l
        1096
    

How about the --help output?

    
    
        $ curl --help | wc -l
        178
        $ wget --help | wc -l
        176
    

The wget help is nicer, grouping options together by category and with longer
text. curl just has a long list of options in alphabetical order. How many
(long) options do they have?

    
    
        $ curl --help | grep -- -- | wc -l
        175
        $ wget --help | grep -- -- | wc -l
        137
    

I'd say it is a lot quicker to work out the flags etc you need with wget
because there is less to look through.

~~~
LukeShu
"I'm glad I typed `man wget` instead of `wget --help`" \-- no one ever

You want the `wget --help` text over the man page, 99% of the time. The other
1%, you want the full info manual. The man page is an awful mix between the
two; too dense for scanning through for the flag you need, but not containing
the full information when you need specifics.

~~~
ars
> no one ever

Except [at least] me.

I like man better because it's consistent. Some tools want --help, -help, -h,
-H, -?, etc.

I like man better because I can search it.

I like man better because it gives me the details, not just a list.

~~~
LukeShu
For all of the reasons you gave (except search--that's what grep is for), I
usually reach for `man`. But, for wget the information density of the man page
is just wrong. At least these days it has some more information in it--it used
to just be a reformatted version of the --help text, plugged into a generic
template.

~~~
ars
grep on help output is annoying sometimes since many programs send it on
stderr and you need to redirect it if you want to pipe it to grep.

Plus even if grep matched something you can't read the context without extra
options.

man is much easier than doing all that; by the time you have your full search
command typed out, you would have already gotten your info from man.

~~~
phs2501
|& is useful in these cases, as it redirects both stdout and stderr to the
piped process's stdin. It's a cshism, but it works in both zsh and modern
bash. Much nicer than typing _cmd 2 >&1 | cmd_.

For context try _grep -2_ , where 2 is the desired lines of context.

    
    
      $ wget --help |& grep -2 base   
        -i,  --input-file=FILE           download URLs found in local or external FILE
        -F,  --force-html                treat input file as HTML
        -B,  --base=URL                  resolves HTML input-file links (-i -F)
                                           relative to URL
             --config=FILE               specify config file to use
      --
                                           existing files (overwriting them)
        -c,  --continue                  resume getting a partially-downloaded file
             --start-pos=OFFSET          start downloading from zero-based position OFFSET
             --progress=TYPE             select progress gauge type
             --show-progress             display the progress bar in any verbosity mode
    

(Not that I'm arguing that this is an excuse for wget's [and GNU projects' in
general] man pages sucking, but it's a useful workaround.)

~~~
ars
I know how to do it, that's not the question.

But rather, doing all that seems easier than man to you?

------
mooreds
"Wget can be typed in using only the left hand on a qwerty keyboard!"

I love both of these, but wish that curl was just like wget in that the
default behavior was to download a file, as opposed to piping it to stdout.
(Yes, aliases can help, I know.)

~~~
Xophmeister
Streaming to stdout is more Unix-y, allowing you to pipe the response into
further processes. For example:

    
    
        curl http://api.example.com/json | jq '.["someKey"]' # etc., etc.

~~~
lomnakkus
It may be more unix-y, but it's less user-friendly if the expectation is to
just download a file.

EDIT: Wow, surprised by the downvotes. I don't think I said anything
controversial (y'know principle of least surprise and all), but maybe I was
being a bit too opaque: _wget_, by virtue of being the first on the scene,
built an expectation that $THING_THAT_GETS_URLS would result in a file without
any other input/arguments. Curl, to this day, surprises me because I was
around when wget was all you had.

~~~
vacri
Nonsense. You can use both tools. 'curl' is 'cat url', and by default is meant
to behave like 'cat' - that is, send stuff to STDOUT. 'wget' is 'web get',
and gets an object from (only) the web or ftp, and plonks it on your
filesystem. They both do exactly what they're supposed to (according to name),
by default.

~~~
lomnakkus
> 'curl' is 'cat url',

Whoa... TIL something! I don't know if that's the official etymology, but
that's a great mnemonic!

EDIT: ... and yes, I use both tools :).

~~~
vacri
Well, it looks like I was taught wrong, and it's not 'cat URL' (though that's
a good way to think of it), but the rather more direct-to-STDOUT-sounding 'see
URL'. TIL something too :)

[https://en.wikipedia.org/wiki/CURL](https://en.wikipedia.org/wiki/CURL)

~~~
botw
My first impression is 'cat URL' not 'see URL'. The wikipedia article does not
have to be official.

------
song
I use wget when I need to download things.

curl is for everything else (I love it when it comes to debugging some API)...
HTTPie isn't bad for debugging either, but most of the time I forget to use it.

------
Veratyr
Since aria2 was only mentioned in passing, let me list some of its features:

- Supports splitting and parallelising downloads (see the sketch below). Super
handy if you're on a not-so-good internet connection.

- Supports BitTorrent.

- Can act as a server and has a really nice XML/JSON RPC interface over HTTP
or WebSocket (I have a Chrome plugin that integrates with this pretty nicely).

They're not super important features, sure, but I stick with it because it's
typically the fastest tool and I hate waiting.
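
A sketch of the splitting feature mentioned above (URL is hypothetical; -x caps
connections per server, -s sets the number of segments):

    aria2c -x4 -s4 https://example.com/big.iso   # parallel, segmented download
    aria2c ubuntu.torrent                        # bittorrent: seeds until Ctrl-C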

------
hmsimha
Curl gets another point for having better SNI support, as wget versions until
relatively recently didn't support it.

This means you can't securely download content using relatively recent (but
not the newest) versions of wget (such as any in the Ubuntu 12.04 repos) from
a server which uses SNI, unless the domain you're requesting happens to be the
default for the server.

As an example, I found the file
[https://redbot.org/static/style.css](https://redbot.org/static/style.css)
only accessible with SNI. Try `wget https://redbot.org/static/style.css` vs.
`curl -O https://redbot.org/static/style.css`
on Ubuntu 12.04. Domain names which point to S3 buckets (and likely other
CDNs) will have similar issues.

------
josteink
For me, defaults matter... 99% of the time when I want to use wget or curl, I
want to download a file so I can keep working with it from the filesystem.

wget does that without any parameters. Curl requires me to remember and
provide parameters for this obvious use case.

So wget wins every time.

------
contingencies
If nobody's tried it: _axel_, mentioned in the article as possibly abandoned,
has the awesome feature of splitting a download into parts and then
establishing that many concurrent TCP connections. Very useful on networks
that rate-limit individual TCP flows.
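
A sketch of what that looks like in practice (URL made up; -n sets the number
of connections):

    axel -n 8 https://example.com/big.iso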

~~~
hcarvalhoalves
Haven't found anything better than axel to saturate the link yet.

~~~
MoSal
Try saldl[1]. It depends on libcurl. So protocol support should be good and
reliable.

[1] [https://github.com/saldl/saldl](https://github.com/saldl/saldl)

~~~
contingencies
[https://github.com/saldl/saldl/wiki/saldl_vs._aria2](https://github.com/saldl/saldl/wiki/saldl_vs._aria2)

~~~
MoSal
FWIW, this page is incomplete and outdated.

For example, `--mirror-url` was implemented. So, it is now possible to
download from two sources concurrently.

------
dallbee
We are forgetting our long lost cousin, fetch.
[http://www.unix.com/man-page/FreeBSD/1/FETCH/](http://www.unix.com/man-page/FreeBSD/1/FETCH/)

------
xd1936
wget has the amazing flag `--page-requisites` though, which downloads all of
an HTML document's CSS and images that you might need to display it properly.
Lifesaver.

~~~
troyvit
wget has another great flag, -k, which changes references to the CSS, JS, and
images to absolute URLs, resulting in a one-page download that still looks like
the original page. It's useful for making dummy pages for clients. I wish curl
had this for my OSX friends who need the functionality above. Getting a wget
binary onto OSX is a pain, but curl is there by default.
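
A sketch combining the two flags mentioned in this thread (URL is hypothetical):

    # -p pulls the page's CSS/JS/images, -k rewrites references so it renders offline
    wget -p -k https://example.com/article.html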

~~~
arm
Whoa, nice, thanks for the tip! That’s amazingly useful and I seriously wish I
had known about it earlier!

For OS X users, you can get wget pretty easily with Homebrew¹. Just install
it, then enter the following:

brew install wget --with-gpgme --with-iri --with-pcre

…well, those extra options aren’t strictly needed. Just what I used since I
wanted wget compiled with support for those things (GnuPG Made Easy²,
Internationalized Resource Identifiers³, and Perl Compatible Regular
Expressions⁴).

You can see all the compile-time options _before_ installing wget by typing
in:

brew info wget

――――――

¹ — [http://brew.sh/](http://brew.sh/)

² —
[https://www.gnupg.org/related_software/gpgme/](https://www.gnupg.org/related_software/gpgme/)

³ —
[https://en.wikipedia.org/wiki/Internationalized_resource_ide...](https://en.wikipedia.org/wiki/Internationalized_resource_identifier)

⁴ —
[https://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expres...](https://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions)

------
et2o
After vi vs. emacs, this is truly the great debate of our generation.

~~~
agumonkey
Come on, people, who here already emails with curl? Admit it.

~~~
notfoss
This would be better directed at those who use Outlook. The ones using curl to
send mails will be boasting about it ;)

------
DannyBee
Really interesting. Under curl he has:

"Much more developer activity. While this can be debated, I consider three
metrics here: mailing list activity, source code commit frequency and release
frequency. Anyone following these two projects can see that the curl project
has a lot higher pace in all these areas, and it has been so for 10+ years.
Compare on openhub"

Under wget he has: "GNU. Wget is part of the GNU project and all copyrights
are assigned to FSF. The curl project is entirely stand-alone and independent
with no organization parenting at all with almost all copyrights owned by
Daniel."

Daniel seems pretty wrong here. Curl does not require copyright assignment to
him to contribute, and so, really, 389 people own the copyright to curl if the
openhub data he points to is correct :)

Even if you give it the benefit of the doubt, it's super unlikely that he owns
"almost all", unless there really is not a lot of outside development activity
(so this is pretty incongruous with the above statement).

(I'm just about to email him with some comments about this, I just found it
interesting)

~~~
Flenser
"almost all copyrights owned by Daniel." and "389 people own the copyright to
curl" aren't mutually exclusive. I think Daniel was saying that most of the
code is copyright to him, and you are saying that the rest is copyright to 388
other people.

------
nowprovision
Unmentioned in the article: curl supports --resolve. This single feature helps
us test all sorts of scenarios for HTTPS and hostname-based multiplexing where
DNS isn't updated or consistent yet, e.g. transferring a site or bringing up
cold standbys. Couldn't live without it (well, I could if I wanted to edit
/etc/hosts continuously).
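
A sketch of what that looks like (hostname and IP made up; the format is
HOST:PORT:ADDRESS):

    # talk to the new server's IP while still sending, and validating TLS against,
    # the real hostname, without touching /etc/hosts
    curl --resolve example.com:443:203.0.113.10 https://example.com/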

------
jonalmeida
wget was the first one I learned how to use by trying to recursively download
a professor's course website for offline use, and then learning that they
hosted the solutions to the assignments there as well...

I did well in that course, granted it was an easy intro to programming one. ;)

------
hartator
> Wget requires no extra options to simply download a remote URL to a local
> file, while curl requires -o or -O.

I think this is oddly the major reason why wget is more popular. Saving 3
chars + not having to remember the specific curl flag seems to matter more
than we might think.

~~~
cathexis
I'm always amused by people who do the opposite: using wget to send GET/POST
requests to web servers and having to add `-O /dev/null` (or, even worse,
`-O - > /dev/null`) to keep from saving the results.
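
For the curious, that pattern looks something like this (URLs made up;
--post-data is wget's way of sending a POST body):

    wget -q -O /dev/null http://example.com/ping                          # GET, discard the body
    wget -q -O /dev/null --post-data='status=ok' http://example.com/hook  # POST, same idea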

------
fosco
Curl scripts let me keep a connection open and view all new logs in a session.

Can wget do something similar? I don't know whether it can, but from my point
of view, if it cannot, this is like comparing a Phillips-head screwdriver to a
power tool with a 500-piece set.

~~~
viraptor
What do you even mean by logs in a session?

~~~
fosco
For example, when troubleshooting with a Blue Coat proxy, I can run a curl
session in conjunction with grep to check for very specific types of traffic
and leave that script open while I have an end user test.

~~~
viraptor
Sorry, can't imagine what you mean. Do you just start curl with a list of urls
to process and grep for errors? Any specific examples?

~~~
fosco
It acts like a live packet capture of a log file.

Not sure how else to describe it, other than I don't think wget is capable of
such functionality.

~~~
viraptor
sooo... "wget -O - .... | grep ..." ?
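
Spelled out, that streaming idea might look like this (assuming the proxy
exposes a log at some URL; both the URL and the filter string are made up):

    wget -q -O - http://proxy.example/access.log | grep --line-buffered 'example.com'
    curl -sN http://proxy.example/access.log | grep --line-buffered 'example.com'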

~~~
fosco
I haven't worked with it in that manner; however, like someone above said, I
typically use wget to download and curl to troubleshoot HTTP/HTTPS.

But if that is capable of keeping everything open, I suppose that's a point
for wget.

------
notfoss
aria2 is much more reliable when downloading stuff, especially for links which
involve redirections.

For example, here's a link to download 7-Zip for Windows from filehippo.com.

Results:

* Curl doesn't download it at all.
    
    
      curl -O 'http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'
    

gives:

    
    
      curl: Remote file name has no length!
    

* Wget manages to download the file, but with the wrong name.
    
    
      wget 'http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'
    

gives:

    
    
      2016-03-03 18:08:21 (75.9 KB/s) - ‘index.html’ saved [1371668/1371668]
    

* aria2 manages to download the file with the correct name with no additional switches.
    
    
      aria2c 'http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'
    

gives:

    
    
      03/03 18:08:45 [NOTICE] Download complete: /tmp/7z1514-x64.exe

~~~
MoSal
The URL does not work right now. But I tried another one from the same site.

No client can always get this right. aria2c is not more reliable; it's just
choosing to take the filename from the redirect URL. That appears to be the
right thing to do in this case, but it would fail if the start URL was
actually the one that had the right filename.

Hosts can use the Content-Disposition header if they want to make sure all
(capable) clients get the right filename.

In saldl, I implemented `--filename-from-redirect` to handle your use case,
but it's not set by default.
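
On the curl side, the opt-in for honoring a server-supplied filename is
-J/--remote-header-name (used together with -O); a sketch with a made-up URL:

    # save under the name from Content-Disposition, if the server sends one
    curl -O -J -L 'http://example.com/download/1234/'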

~~~
notfoss
Thanks for the explanation. But generally I have found aria2 to be more
reliable in such scenarios.

------
awjr
Useful to know: If you use the Chrome dev tools, in the network tab, you can
right click on a request and "Copy as cURL".

------
dorfsmay
My usage pattern has been:

    
    
      - wget to download files (or entire sites even)
    
      - curl to debug everything http/https

------
haxpor
For certain cases, like creating a Telegram bot which has no interaction with
a browser, do you think we can make use of curl (POST requests) to make PHP
sessions work?

As there's no browser interaction in a Telegram bot, the script just receives
responses back from the Telegram server. This might help to keep track of user
state without the need for a db?
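
One way this could work, assuming the bot's backend is a PHP endpoint you
control (the URLs and fields here are made up): give curl a cookie jar so the
PHPSESSID cookie survives between requests.

    curl -c cookies.txt -b cookies.txt -d 'user=123' https://example.com/bot/update.php
    curl -c cookies.txt -b cookies.txt https://example.com/bot/state.php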

------
X-Istence
I use curl because it is generally installed. I prefer not to install wget,
especially on customer machines because it stops 90% of script kiddies. For
some reason wget is the only tool they will attempt to use to download their
sploit.

~~~
lucb1e
Pretty sure skiddies will not assume most victims have wget already, they'll
just ship it with the exploit. If not installing wget is an annoyance to a
hacker, they're already in too deep ;)

~~~
X-Istence
Your stock standard drive by PHP exploit attempts usually attempt to "wget"
another PHP file to public_html.

They try wget, fail, and move on.

------
MoSal
I should probably write a "saldl vs. others" page someday.

> _Wget supports the Public Suffix List for handling cookie domains, curl does
> not._

This is outdated info. (lib)curl can be built with libpsl support since
7.46.0.

~~~
emj
Released on 2015-12-02, so it won't be in many dists for some years. :-)

~~~
MoSal
Stability and security come first. So, let's ship an X-year-old curl release
+ patches ;)

------
sametmax
Nowadays I just use HTTPie. It's in Python, so it's easy to install on Windows,
and it lets me work easily with requests and responses, inspect the content,
add coloration, etc. Plus the syntax is much easier.

------
jrbapna
I like Wget's option to continue a file download if it gets interrupted. I
believe you can achieve the same thing in curl, but it's not as simple as just
setting a flag (-c).

------
dominhhai
>> Wget can be typed in using only the left hand on a qwerty keyboard!

Great!

------
arca_vorago
Wget is under GPLv3, so that's what I use more often. Sometimes I will use
curl in certain cases, but yes, I will use a GPL product over a non-GPL
product if given a choice.

------
cushychicken
The "only need a left hand" sways me for wget.

------
StreamBright
There is no other industry where tools are debated as much as in IT. We
literally waste tons of hours arguing over minor differences and nuances that
really should not matter that much.

~~~
newjersey
There's a balancing act between trying to cut a tree with a blunt ax that one
never resharpens and spending all week in an ax store looking at different
axes. (Speaking of which, I'm doing something like that myself by browsing HN,
so I'm not trying to knock anyone else; I'd be a hypocrite if I did.)

~~~
Amorymeltzer
"Give me six hours to chop down a tree and I will spend the first four
sharpening the axe." \- commonly attributed to Abraham Lincoln

------
mistat
curl for checking HTTP headers, simply with: curl -vskL http://1.2.3.4 -H
"Host: example.com" > /dev/null

~~~
ce4
Why don't you just use the -I flag?

~~~
mistat
For some sites a HEAD may return different headers than a GET, so it is safer
to fetch the results in full. Also, using -vsk shows the request headers,
including the IP, so you can easily see if things such as round-robin DNS are
in use, again to assist with debugging.

------
dustingetz
Who funds projects like this?

------
module17
TLDR: curl rocks.

~~~
lucb1e
Unless you want to download a web site for archiving purposes.

------
ProceedsNow
Wget just werks.

------
mynewtb
You should all check out wpull!

~~~
mh-
[https://github.com/chfoo/wpull](https://github.com/chfoo/wpull)

------
keville
Everything 6 years old is new again:
[https://news.ycombinator.com/item?id=1241479](https://news.ycombinator.com/item?id=1241479)

~~~
guelo
> Updated: February 26, 2016 17:20 (Central European, Stockholm Sweden)

