Curl exercises (jvns.ca)
273 points by weinzierl on Aug 27, 2019 | 59 comments



This is more sysadmin-related, but one power-curl function I use at least 30 times a day is this alias I have in my .bash_aliases:

This will output the HTTP status code for a given URL.

     # aliases append their arguments automatically, so no $1 is needed
     alias hstat="curl -o /dev/null --silent --head --write-out '%{http_code}\n'"
Example:

  $ hstat google.com
  301
I also use curl as an 'uptime monitor' by building on that alias: a file with a list of URLs, and if the HTTP status code != 200, email me (a minimal sketch is below).

There are variations on this all over the place but I really depend on it and I like it.
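A minimal sketch of that monitor loop, assuming a urls.txt file (one URL per line) and a working mail command:

  # alert on anything that isn't a 200
  while read -r url; do
    code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    [ "$code" != "200" ] && echo "$url returned $code" | mail -s "down: $url" me@example.com
  done < urls.txt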


Curl can send email, too!
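Roughly like this - the server, addresses, and message file here are made up:

  # curl speaks SMTP natively: upload an RFC 5322 message to the server
  curl smtp://mail.example.com --mail-from me@example.com \
       --mail-rcpt you@example.com --upload-file message.txt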


Sort of an inverse Zawinski's Law.


Can you make it read emails? Asking for a friend...


I don't speak IMAP, but you can send pretty much any command to an imap(s) URL using -X and receive a reply... Wait, OK, I get it.
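For the friend: fetching a message is just an IMAP URL (the server, credentials, and UID here are made up):

  # fetch message UID 1 from INBOX over IMAPS
  curl --user 'me@example.com:password' 'imaps://imap.example.com/INBOX;UID=1'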


I like that your solution doesn't need a pipe to awk|cut|perl.


I really like using HTTPie (https://httpie.org). Much friendlier syntax than curl, and it formats the output...

Also works great with fx (https://github.com/antonmedv/fx). Just pipe the output from HTTPie and you get easy JSON processing.
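For example (the endpoints here are just illustrations):

  # HTTPie turns key=value pairs into a JSON request body
  http PUT httpbin.org/put name=John
  # and fx pulls fields out of a JSON response with a dot path
  http https://api.github.com/repos/curl/curl | fx .stargazers_count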


Fx looks like a really nice alternative to jq. I love HTTPie for API debugging, but I wouldn't disregard curl, which is a great and versatile tool for everything HTTP.


Last year, I submitted a patch to Mozilla Firefox which fixed some major issues with the "Copy as cURL" functionality: https://bugzilla.mozilla.org/show_bug.cgi?id=1452442 (namely, that it would stop working in a number of common cases, e.g. after opening the Response tab, or while the request was in-flight). That particular experience taught me a great deal about Firefox and cURL options, plus I had the opportunity to fix a tool I use very frequently.

cURL really is an excellent tool for web debugging. Using something like "Copy as cURL" makes it possible to generate a request which is textually identical to a request made by a real browser, which is really important when debugging subtle weirdness in a browser (or exploring a web server for bugs!). It's easy to script and comes with a decent progress UI. It's also practically omnipresent and slightly more common than wget - I often need to use foreign systems where I have to make do with lowest-common-denominator tools, so learning the ins and outs of cURL has been quite handy for me.


This one is handy if you want to create your payloads dynamically (note that @- tells --data to read from stdin):

    cat payload.json | curl "$url" --data @-
You can use this to replay access logs, like so:

    sed -n 23p accesslog.json | jq '.args' | curl "$endpoint" --data @-


Curl is really handy if you want to write a quick crawler for some globs of URLs.

For example, if you want to download a gallery of images with some fixed format, you can do:

  curl "http://example.com/imgs/[01-99].jpg" -o "#1.jpg"
And it will fetch them and name them with the correct zero padding.

Or if you want to download the first 10 pages of HN (quoted so the shell doesn't touch the brackets):

  curl "https://news.ycombinator.com/news?p=[1-10]" -o "hn_page_#1.html"

It can glob pretty complicated patterns! See this for syntax: https://ec.haxx.se/cmdline-globbing.html


Nice, I didn’t know `curl` had globbing support. I usually do that using shell globbing, as in:

    wget http://example.com/imgs/{01..99}.jpg


I’m not at a testing computer, but I think that’s a shell glob, no?


Which is what they said :p


curl != shell


Yes, that's exactly what I said.


Curl can also handle cookies using the -c and -b flags. I wrote a simple dungeon crawler whose user-side "client" is a single line of bash, and the game server is stateless - the entire game state gets stored in the cookie:

  c=x; while [ $c ]; do clear; curl -c k -b k hex.m-chrzan.xyz/$c; read c; done
(just make sure you don't have an important file called "k" in the directory you run it in, or it will get overwritten)


One great use of curl:

Most web browsers have a "Copy as cURL" function in the web developer menus.

In Firefox it is: Tools -> Web Developer -> Network.

Under this tab, you will see all the requests used to construct a web page.

You can right click on an item and do "Copy as cURL"

If a web page has some API you want to explore or use in a program, you can recreate it this way.


I find Copy as Fetch easier to work off of (it copies prettified JSON instead of a single-line CLI string), plus it comes with the added bonus of being able to do any quick tinkering in the same dev tools window.


I often need to get both request and response headers. For this I use:

    # -v writes its header trace to stderr, so redirect it before grepping
    curl -vsI https://example.com 2>&1 | grep "^[<>] "


This reminds me of an idea I've been meaning to get to for the longest time: this sequence of exercises would make a great mini-CTF. It would require adding a story to motivate it, and an endpoint providing responses that contain information linking to the next step. I was planning to do an interactive introduction to the git commands in this style, but this topic would work well too. I think it would work well as a medium for self-serve training.


You could definitely mix and match different commands in this mini-CTF tutorial to give it a bit more flavour. Maybe consider a war-game-style format, where each level needs to be completed to get the password for the next one (or a similar online puzzle series where each puzzle's solution links to the next, etc.). Such highly linear puzzles have proven (for me at least) to be great learning tools simply because users are forced to focus on one learning objective at a time.

Maybe a kind of "digital archaeology" story, in which you have to coax data off of an old server in various ways and analyze old git repos? Does sound kind of fun...


I’m learning about bug bounties. Burp has a feature aptly named “Copy as curl command”. It converts the captured request to a curl command, so you can just paste it into the terminal or a file and start playing with it.

After making changes to some argument values in the body, I was getting a weird error. It turns out that I was supposed to also update the Content-Length header. The workaround was to simply eliminate that header, but I wasn’t able to find a good explanation as to why.


Chrome also has "copy as curl command" option built in: https://i.imgur.com/Ie9Zvjd.png


curl sets that header for you.


I would suggest one more exercise: make a GET request to an address where the server replies with a redirect.
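Something like this, using google.com as in the article (the -w variable just shows where you ended up):

  # without -L, curl prints the 301 response itself;
  # with -L it follows the Location header to the final page
  curl -s http://google.com
  curl -sL http://google.com -o /dev/null -w '%{url_effective}\n'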


That's #16, though perhaps I should just say that the reason the response is empty is that Google is redirecting you. I'd love more suggestions for curl exercises, though :)


Yeah, although I'm getting this:

  <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
  <TITLE>301 Moved</TITLE></HEAD><BODY>
  <H1>301 Moved</H1>
  The document has moved
  <A HREF="https://www.google.com/">here</A>.
  </BODY></HTML>
So not an empty response...


Yeah, looks like they've changed it to output an HTML fallback now, presumably for browsers which don't respect the redirect status code. The document must predate that change.


Ah, I missed it!

Another exercise could be to have a page where Access-Control-Allow-Origin is set to a specific domain, then make a request with a different origin.

E.g.:

  curl -H "Origin: https://wrong.origin.com" https://httpbin.org/anything
Many people think CORS is enforced by the server, but it's the browser that enforces it - the server only sends the headers, and curl will happily show you the response anyway.


Do you have one that uses `--resolve`?
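In case it's useful, a sketch of what --resolve does (the IP here is made up):

  # pin example.com:443 to a specific address, bypassing DNS --
  # handy for testing a server before cutting DNS over to it
  curl --resolve example.com:443:203.0.113.7 https://example.com/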


I thought the idea of the parent was to teach the user how to follow redirects.


I do wish Curl supported OAuth 2.0 out of the box - I would then drop all REST clients in a jiffy.


Your comment is very interesting, but I have to ask, perhaps naively: is curl the right place "to do OAuth"? Isn't its implementation more of an application-level concern?

I am interested in this because we're starting to work on a project where these two ingredients will come together.


I meant curl behaving as an OAuth client. Many REST services today are OAuth-protected, and using curl to access them is simply too cumbersome. If curl supported the password and client-credentials OAuth grant types, then a vast number of folks would ditch REST clients like Insomnia and Postman and stick to command-line curl.
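For reference, this is roughly what the client-credentials dance looks like with plain curl today (the endpoints and credentials here are made up) - exactly the boilerplate I'd like built in:

  # 1. trade client credentials for an access token
  token=$(curl -s -u client_id:client_secret \
      -d grant_type=client_credentials \
      https://auth.example.com/oauth/token | jq -r .access_token)
  # 2. use the token against the protected API
  curl -H "Authorization: Bearer $token" https://api.example.com/things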


Got it. Thanks. I'm out on the periphery, and while I'm aware of the protocol, I'm rather new to OAuth on a development level.

After looking at the cURL site, it looks like someone had the need and the will to bake in Kerberos, so why not OAuth? Then I looked at the OAuth specs. It would be quite an undertaking for sure, but it could be good :-)


OK, now how does one bypass Cloudflare's captcha when attempting to get a blog page or something? I'm trying to create a link previewer and this constantly shows up.


Is there anything to format your responses when using curl? That's the reason I don't use curl often and rely on browser extensions or an app for simple use cases.


You could pipe to `jq` (https://stedolan.github.io/jq/), or send the response to a file and view with whatever.
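For example (the httpbin URL is just an illustration):

  # -s keeps curl's progress output quiet; jq . pretty-prints the JSON body
  curl -s https://httpbin.org/json | jq .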


Piping to jq is the best. Other than that, you can always furiously search Stack Overflow for some arcane sed/awk/grep-fu.

There is also the -w option, which formats curl's own transfer info rather than the response body. Sometimes I’ll write a small Python script to pipe output into for wrangling data.


I love using Curl when I get an evil spam email and I really want to see where it’s coming from and what it really does when you click those links.


If you hit any of them, then you have proven to the sender that you are a real live person, and that you really did actually accept their spam e-mail message.

Or near enough to that effect.


I find curl to be a great tool, but a poor CLI. The problem is, I'm not sure how to make a great HTTP CLI. Many have tried, but everything I've seen leaves me wanting.

There are several complications:

  - URI building (things that need URL escaping)
     - path components
     - query parameters
  - headers
     - including common standard ones that should have
       their own command-line options
     - things like ETags that can benefit from storing
       them in xattrs
     - things like conditional requests and range
       requests (think download restart)
  - whether there's a body
     - whether it's an "HTML form" or something else
     - Content-Type
  - whether to chase redirects
  - trust anchors for TLS (HTTPS)
  - what to do with response headers
  - how to integrate filtering of response body vs
    headers (can't use stderr for either)
  - authentication options
  - cookie jars
  - proxies and proxy auth
HTTP is such a complex beast that using it from a shell feels fraught with peril.

For ease of use, it'd be nice if, for example, one could set a header value as-is on the command-line, but also by reading it from a file (or possibly a pipe, so it's read only once).

Perhaps something like:

  $ VERB $base_URI [options]
with options for:

  - appending URI local part elements
  - adding query parameters
  - adding headers
  - specifying where to write response headers
  - specifying where to write response body
     - if a file, maybe set [some] response headers
       as xattrs!
  - specifying request body source (default: stdin
    for VERBs that normally have a request body)
  - trust anchors
  - authentication
Examples:

  $ # GET with various options:
  $ GET https://base-URI-here \
      -a local-part-element   \
      -q param value -q ...   \
      -H header value ...
  $ 
  $ # POST with file as request body source
  $ POST https://base-URI-here \
      --file file
  $ # POST with HTML form specified on command-line
  $ POST https://base-URI-here \
      --form field value --form field value ...
  $ 
  $ # And so on for all HTTP methods
Besides the options shown above, which are fairly obvious, it'd be nice to have:

  --rheaders PATH      # where to put headers
  --xheader header     # save given response header
                       # value as an xattr
  -T trust_anchors.pem # or a directory, or whatever
  -C client_cert_and_key.pem
  -U username          # for Basic, DIGEST, SCRAM
  --password-file FILE # for Basic, DIGEST, SCRAM
  -N ...               # For Negotiate.  This could
                       # be key/value pairs that can
                       # be used with
                       # gss_acquire_cred_from()
  -J ...               # Get a JWT token and use it
A convention for using xattrs for storing Content-Type and ETag (and a few other things perhaps) would be very very nice.
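One small piece of this already exists, for what it's worth: newer curl can read header lines from a file (the filename here is made up):

  # -H @file adds one header per line from the file (curl 7.55+)
  curl -H @headers.txt https://example.com/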


You could probably just put the requests library from Python behind a quick and dirty CLI...


This seems great and very usable. I'd add that I'd make VERB optional and default to GET, so we could finally get back the old:

    wget http://something.com
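A toy approximation of that interface as shell functions (purely illustrative):

  # each HTTP verb becomes a command; arguments pass straight through to curl
  GET()  { curl -sSL "$@"; }
  POST() { curl -sS -X POST "$@"; }

  GET https://example.com/api -H 'Accept: application/json'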


I have a massive platonic crush on Julia Evans. This post is yet another reason. So good!


I always recommend her blog as part of the answer to:

"I want to lean to program" or "I want to become a better programmer"

Her blog still has that pure joy of learning and doing with computers that we all remember.


I know you mean well, but this is a gendered comment that doesn't really add to the conversation of how great the OP link is.

Consider how you might write your comment if the sex of the OP was male - would you make the same statement?


wat

You may have missed the key word there: "platonic".

Consider how you might be hurting your cause and lowering the quality of discourse here by being overly sensitive and self-righteous; if you knew the lengths to which I've gone in support and defense of my gay and trans-gendered cousins, would you refrain from polluting the thread w/ knee-jerk PC Politburo bs?


I'm not sure why you and the sibling comment reference gay men - I'm thinking of a female-identifying person needing to respond to 'platonic crushes' when a male might instead see 'I want to buy you a beer', or simply hear that his blog is inspiring.


I would. Then again, I’m gay.


*No, not bicep curls.


Can you measure when the user is doing a bicep curl with some kind of electrode? If you can, then maybe you could use that to send binary data using libcurl. True curl exercises in both senses.



I think the name is, technically, cURL. I love cURL and use it all the time. We send our customers cURL one-liner examples of how to test our authentication service, which requires POST requests, and it just makes their lives and our lives that much simpler. They can test their stuff very quickly and easily, then ultimately it gets folded into their own services, all debugged and ready to go.


It is, but many people refer to it by how they’d invoke it.


   % which curl
   /usr/bin/curl
   % which cURL
   cURL not found


The software package has a name and the command line tool (executable) has a name. They are not the same.

https://curl.haxx.se/docs/faq.html#What_is_cURL


The title (not the original article) and some of the comments were calling it "Curl", also not found.



