Fx looks like a really nice alternative to jq. I love HTTPie for API debugging, but I wouldn't disregard curl, which is a great and versatile tool for everything HTTP.
Last year, I submitted a patch to Mozilla Firefox which fixed some major issues with the "Copy as cURL" functionality: https://bugzilla.mozilla.org/show_bug.cgi?id=1452442 (namely, that it would stop working in a number of common cases, e.g. after opening the Response tab, or while the request was in-flight). That particular experience taught me a great deal about Firefox and cURL options, plus I had the opportunity to fix a tool I use very frequently.
cURL really is an excellent tool for web debugging. Using something like "Copy as cURL" makes it possible to generate a request which is textually identical to a request made by a real browser, which is really important when debugging subtle weirdness in a browser (or exploring a web server for bugs!). It's easy to script and comes with a decent progress UI. It's also practically omnipresent and slightly more common than wget. I often need to use foreign systems where I have to make do with lowest-common-denominator tools, so learning the ins and outs of cURL has been quite handy for me.
Curl can also handle cookies using the -c flag (write received cookies to a jar file) and the -b flag (send cookies from it). I wrote a simple dungeon crawler whose user-side "client" is a single line of bash, and the game server is stateless: the entire game state gets stored in the cookie:
c=x; while [ $c ]; do clear; curl -c k -b k hex.m-chrzan.xyz/$c; read c; done
(just make sure you don't have an important file called "k" in the directory you run it in, or it will get overwritten)
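For readability, here's the same loop unrolled, with the quoting tightened up (same game server, same cookie file "k"):

  c=x                                      # starting location
  while [ -n "$c" ]; do                    # an empty input ends the game
    clear
    curl -c k -b k "hex.m-chrzan.xyz/$c"   # -c writes cookies to "k", -b sends them back on the next request
    read -r c                              # the player's next move
  done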
I find "Copy as Fetch" easier to work from (it copies as prettified JSON instead of a single-line CLI string), plus it comes with the added bonus of letting you do any quick tinkering in the same dev tools window.
This reminds me of an idea I've been meaning to get to for the longest time: this sequence of exercises would make a great mini-CTF. It would require the addition of a story to motivate it, and an endpoint providing responses that contain the information linking to the next step. I was planning to do an interactive introduction to the git commands in this style, but this topic would work well too. I think it would work well as a medium for self-serve training.
You could definitely mix and match different commands in this mini-CTF tutorial to give it a bit more flavour. Maybe consider a wargame-style setup, where each level needs to be completed to get the password for the next one (or similar online puzzle series where each puzzle's solution links to the next one, etc.). Such highly linear puzzles have proven (for me at least) to be great learning tools, simply because users are forced to focus on one learning objective at a time.
Maybe a kind of "digital archaeology" story, in which you have to coax data off of an old server in various ways and analyze old git repos? Does sound kind of fun...
I'm learning about bug bounty hunting. Burp has a feature aptly named "copy as curl command". It converts the captured request to a curl command, so you can just paste it into the terminal or a file and start playing with it.
After making changes to some argument values in the body, I was getting a weird error. It turns out that I was supposed to also update the Content-Length header. The workaround was to simply eliminate that header, but I wasn't able to find a good explanation as to why.
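If it helps: curl normally computes Content-Length itself from the body you pass with -d, so a stale value copied from the captured request no longer matches the edited body, and the server waits for bytes that never arrive (or truncates the body). A minimal illustration, with a placeholder URL and body:

  # Stale header copied from the capture; the edited body is now a different length
  curl -X POST https://api.example.com/login \
       -H 'Content-Length: 58' \
       -d '{"user":"alice"}'

  # Drop the header and curl fills in the correct Content-Length from -d
  curl -X POST https://api.example.com/login \
       -d '{"user":"alice"}'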
That's #16, though perhaps I should just say that the reason the response is empty is that Google is redirecting you. I'd love more suggestions for curl exercises, though :)
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
Yeah, looks like they've changed it to output an HTML fallback now, presumably for browsers which don't respect the redirect status code. This document must predate that change.
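Both behaviours are easy to see with standard curl flags:

  curl -sI http://google.com   # -I shows just the 301 status line and the Location header
  curl -sL http://google.com   # -L follows the redirect instead of printing the fallback page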
Another exercise could be to have a page where the Access-Control-Allow-Origin header is set to a specific domain, then make a request that sets a different Origin.
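Something like this would exercise it (the endpoint is hypothetical; Origin is just a request header you can set like any other):

  # Request the resource while claiming a different origin
  curl -i -H 'Origin: https://other.example' https://cors-exercise.example.com/data
  # Then compare the Access-Control-Allow-Origin response header with the Origin you sent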
Your comment is very interesting, but I have to ask, perhaps naively: is curl the right place "to do OAuth"? Isn't its implementation more of an application-level initiative?
I am interested in this because we're starting to work on a project where these two ingredients will come together.
I meant curl behaving as an OAuth client. Many REST services today are OAuth-protected, and using curl to access them is simply too cumbersome. If curl supported the password and client-credentials OAuth grant types, then a vast number of folks would ditch REST clients like Insomnia and Postman and stick to command-line curl.
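You can script it by hand today, which is exactly the boilerplate built-in support would remove. A rough sketch of the client-credentials grant (the token endpoint, client ID, and secret are placeholders, and it leans on jq to extract the token):

  # 1. Exchange client credentials for an access token
  TOKEN=$(curl -s -u my-client-id:my-client-secret \
               -d grant_type=client_credentials \
               https://auth.example.com/oauth/token | jq -r .access_token)

  # 2. Call the protected REST service with the bearer token
  curl -H "Authorization: Bearer $TOKEN" https://api.example.com/v1/widgets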
Got it. Thanks. I'm out on the periphery, and while I'm aware of the protocol, I'm rather new to OAuth on a development level.
After looking at the cURL site, it looks like someone had the need and the will to bake in Kerberos, so why not OAuth? Then I looked at the OAuth specs. It would be quite an undertaking for sure, but it could be good :-)
OK, now how does one bypass Cloudflare's captcha when attempting to fetch a blog page or something? I'm trying to create a link previewer and this constantly shows up.
Is there anything to format your responses when using curl? That's the reason why I don't use curl often and rely on some browser extensions or an app for simple use cases.
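For JSON APIs, the usual trick is to pipe the response into a formatter, e.g. the jq mentioned upthread or Python's built-in module (the URL is a placeholder):

  curl -s https://api.example.com/data | jq .                   # pretty-printed, colourised JSON
  curl -s https://api.example.com/data | python3 -m json.tool   # no extra install needed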
If you hit any of them, then you have proven to the sender that you are a real live person, and that you really did actually accept their spam e-mail message.
I find curl to be a great tool, but a poor CLI. Problem is, I'm not sure how to make a great HTTP CLI. Many have tried, but everything I've seen leaves me wanting.
There are several complications:
- URI building (things that need URL escaping)
  - path components
  - query parameters
- headers
  - including common standard ones that should have their own command-line options
  - things like ETags that can benefit from storing them in xattrs
  - things like conditional requests and range requests (think download restart)
- whether there's a body
  - whether it's an "HTML form" or something else
  - Content-Type
- whether to chase redirects
- trust anchors for TLS (HTTPS)
- what to do with response headers
- how to integrate filtering of response body vs. headers (can't use stderr for either)
- authentication options
- cookie jars
- proxies and proxy auth
HTTP is such a complex beast that using it from a shell feels fraught with peril.
For ease of use in a functional way, it'd be nice if, for example, one could set a header value as-is on the command-line, but also by reading from a file (or possibly a pipe, so read only once).
Perhaps something like:
$ VERB $base_URI [options]
with options for:
- appending URI local-part elements
- adding query parameters
- adding headers
- specifying where to write response headers
- specifying where to write response body
  - if a file, maybe set [some] response headers as xattrs!
- specifying request body source (default: stdin for VERBs that normally have a request body)
- trust anchors
- authentication
Examples:
$ # GET with various options:
$ GET https://base-URI-here \
      -a local-part-element \
      -q param value -q ... \
      -H header value ...
$
$ # POST with file as request body source
$ POST https://base-URI-here \
      --file file
$
$ # POST with HTML form specified on command-line
$ POST https://base-URI-here \
      --form field value --form field value ...
$
$ # And so on for all HTTP methods
Besides the options shown above, which are fairly obvious, it'd be nice to have:
--rheaders PATH          # where to put response headers
--xheader header         # save given response header value as an xattr
-T trust_anchors.pem     # or a directory, or whatever
-C client_cert_and_key.pem
-U username              # for Basic, DIGEST, SCRAM
--password-file FILE     # for Basic, DIGEST, SCRAM
-N ...                   # for Negotiate; could be key/value pairs usable with gss_acquire_cred_from()
-J ...                   # get a JWT token and use it
A convention for using xattrs for storing Content-Type and ETag (and a few other things perhaps) would be very very nice.
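Worth noting that curl already has a small piece of this: the --xattr flag stores the source URL and, for HTTP, the Content-Type as extended attributes on the saved file:

  curl --xattr -o page.html https://example.com/
  getfattr -d page.html   # Linux; dumps the stored origin-URL and mime-type attributes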
You may have missed the key word there: "platonic".
Consider how you might be hurting your cause and lowering the quality of discourse here by being overly sensitive and self-righteous; if you knew the lengths to which I've gone in support and defense of my gay and trans-gendered cousins, would you refrain from polluting the thread w/ knee-jerk PC Politburo bs?
I'm not sure why you and the sibling comment reference gay men - I'm thinking of a female-identifying person needing to respond to 'platonic crushes' when a male might instead see 'I want to buy you a beer', or simply hear that his blog is inspiring.
Can you measure when the user is doing a bicep curl with some kind of electrode? If you can, then maybe you could use that to send binary data using libcurl. True curl exercises in both senses.
I think the name is, technically, cURL. I love cURL and use it all the time. We send our customers cURL one-liner examples of how to test our authentication service which requires POST requests, and it just makes their lives and our lives that much simpler. They can test their stuff very quickly and easily, then ultimately it gets folded into their own services, all debugged and ready to go.
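Presumably something like the standard write-out one-liner (my reconstruction; the exact flags may have differed):

  curl -s -o /dev/null -w '%{http_code}' https://example.com/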
This will output the HTTP status code for a given URL.
Example: I also use curl as an 'uptime monitor' by adding onto that snippet (a file with a list of URLs, and "if HTTP status code != 200, then email me"). There are variations on this all over the place, but I really depend on it and I like it.