Which isn't too far from your desired outcome (notably, without relying on argument position for meaning). Although I guess the argument that "curl is already installed on almost every server" sorta becomes moot, because I imagine it will take a while for most distros to move to the latest curl that supports --json/--jp
It's a lot easier for a distro to update a default utility than to add a new one though. I suspect having updated curl would happen faster than adding httpie to the list of default utilities.
Especially since HTTPie is a Python package. curl exists even in plenty of embedded systems, in many Docker base images, etc.; but scripting runtimes like Python generally don't.
I wish this were the case. Unfortunately for us grownups that run stable/LTS distros, non-security updates like these don't usually make it to us for like 5 years or something.
It depends. How long did it take for bash4-isms to get adopted? Sometimes the update time for even basic utilities, across all systems, can be very long. (Apple switching to zsh doesn't help here.)
Except if it's an enterprisey distro, it won't get updated. Specifically, RHEL 7 ships curl 7.29.0 and cherry-picks features (newer TLS handshakes) to backport to that nine-year-old version.
The exact opposite is more likely true, depending on the nature of the change. A new utility has no current usage, so there are no in-the-wild backwards-compatibility concerns. The system Python on Red Hat-based distros was stuck at 2 for a long time because some of the sysadmin utilities relied on it, and all those dependencies needed to be vetted and either converted to a newer Python version or given their own shipped runtime that didn't use the general-purpose Python install.
Python is a huge outlier in this case. Very rarely do you have a default tool that can't be upgraded, and this case was only because Python is an interpreted language.
Curl is a command line tool. As long as they only add new functionality, there is very little that prevents an upgrade.
There's a big difference between a bump to the latest curl version and a jump between two major versions of a language runtime. The Python 2->3 transition took years and many distros kept both versions.
Most dependencies get updated much more quickly. It wouldn't even shock me if this change got picked up mid-cycle.
Unhappily, Red Hat won't. Heck, Red Hat has gone well out of their way to backport updated TLS handshakes (1.2 at least) to their 2013 version of curl (7.29.0).
I have recently been trying xh [0], a Rust port of HTTPie (which is needlessly hard to find in a web search). I am a Python guy, but love having single-executable tools.
Happy to hear about xh, thanks for mentioning it. I recently ran into a problem where HTTPie didn't support HTTP/2, so I had to fall back to curl. The ticket for HTTP/2 in HTTPie is still open https://github.com/httpie/httpie/issues/692
Happy to see that xh supports HTTP/2 out of the box.
We are rolling out a brand-new mini-language that integrates really well with the existing request-building syntax, but also features stuff like JSON type safety and great error messages for basic syntax errors.
It will probably be post 3.0, since the underlying HTTP interface we use (https://pypi.org/project/requests) does not support HTTP/2. We are currently discussing how to migrate from that to something like httpx, without causing any change of user-visible behavior.
-s: if HTTPie disables progress bar by default, that's just a different design choice, the advantage is in the eye of the beholder
--data '{"user...: likewise, HTTPie's default to JSON is a design choice. I wouldn't say the default is superior
-H "Content-Type...: likewise, if HTTPie adds this header by default, thats just getting in my way when I don't want that header
-X POST: you don't need to specify that in cURL if using --data
Ahh, I believe you're right. "PUT" would have been a better example, in that case. But I suppose getting the cURL syntax wrong helps my point that I find the cURL syntax confusing.
HTTPie does not support multi-valued keys in query strings (sending several field[]= for example). It is very annoying and, for consistency, I would rather use cURL (instead of mixing syntax with HTTPie).
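For reference, plain curl handles this with repeated --data-urlencode flags plus -G, which moves the encoded pairs into the query string (example.com is a placeholder):

# sends GET /search?field[]=a&field[]=b (values URL-encoded; the name is passed through as-is)
curl -G --data-urlencode 'field[]=a' --data-urlencode 'field[]=b' https://example.com/search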
For TLS we use proxy listening on 127.0.0.1, e.g., stunnel, sslsplit, haproxy, etc. This way we only ever have to type a single address and port, i.e., 127.1 80, or short alias for the hostname, e.g., echo 127.0.0.1 p >> /etc/hosts.
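A minimal sketch of that setup, assuming stunnel is installed (example.com stands in for the real host):

# wrap.conf -- client-mode stunnel: plain HTTP in on 127.0.0.1:80, TLS out to the real host
foreground = yes
[wrap]
client = yes
accept = 127.0.0.1:80
connect = example.com:443

# then run the proxy and point any plain-text client at it:
stunnel wrap.conf &
curl http://127.0.0.1/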
The --jp (json part) command line option, described at https://github.com/curl/curl/wiki/JSON, has "anti-pattern" written all over it to me. Why introduce some specific, curl-only wonky-ish version of JSON? Is this any easier to remember than normal JSON? I mean, right now, I use cURL all the time with JSON posts, just doing something like
-d '{ "foo": "bar", "zed": "yow" }'
The proposed --jp flag seems worse to me in every way.
(Note I do like the --json as just syntactic sugar for -H "Accept: application/json" -d <jsonBody>)
--jp is a perfect fit for shell scripting integration:
bash$ curl ... --jp "foo=$foo" ...
This has zero shell-quoting, expansion, escaping, or separator issues. Whatever's in environment variable `foo` will be sent to the server as a single value associated with key `foo`, whether it's a zero-length empty string, or full of backslashes or spaces or newlines or whatever.
There are fancier ways to accept multi-arg, but they all have weaknesses, and this matches the way curl handles -H arguments already today (one per header, stack them if you want many), so I think it's a sound way to handle CLI arguments.
(I don't have any specific views on whether this is how curl should do JSON or not, but I recognized the CLI safety mechanism immediately.)
> Is this any easier to remember than normal JSON?
Yeah, actually it is. It's immediately intuitive to me. It makes string interpolation way easier. Quick, how do you do -d {object} and pass in environment variables with correct string escaping? Do you start with single quote or double quote? Where do I put backslashes? Bash vs zsh compatibility? Plus you have to make sure all the slashes, quotes, brackets and braces match.
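To make that concrete, here is the dance you do today (a sketch; $user is a shell variable that may contain quotes or backslashes, and the endpoint is a placeholder):

# naive interpolation: breaks as soon as $user contains a double quote
curl -d '{"user": "'"$user"'"}' https://example.com/api

# the usual workaround shells out to another tool, e.g. jq, to do the escaping
curl -d "$(jq -n --arg user "$user" '{user: $user}')" https://example.com/api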
Realizing that I'm screaming into the wrong textbox, but "-H 'Accept: application/json'" is the wrong header for curl to set in that circumstance, since all curl can say with authority is that the content type it emitted is application/json, not what the user wants/accepts back. Maybe this feature is an example of the 80/20 rule, and more advanced usages can't use --json and must still craft explicit Content-Type and Accept headers.
It is so weird that command-f on that wiki doesn't show a single content-type header
Actually, you are correct, I had a copy/paste error from pulling a different curl command and merging it, the curl page correctly shows it as `-d [whatever] -H "Content-Type: application/json"`
--jp doesn't even make sense as an abbreviation... it should rather be --json-part.
The notation seems to be similar to HAML<>HTML. It's not JSON, and it doesn't make sense for anything other than really short ad hoc queries. It's just confusing and only solves a problem that a tiny number of people have (regular ad hoc JSON queries, where people are too lazy to actually write out JSON).
Otherwise, why not do the same for XML or CSS? Or heck, why not simply support HAML as well?
It's better to have such functionality extracted in some other tool and just pipe it
To everyone saying "just use tool x for this": the advantage of curl is that it is so widely available.
For your development laptop you can install anything you want, but more often than not you need to log into an EC2 instance, a Docker container, you name it.
curl is often pre-installed or very easy to install. I know it's usually not an up-to-date version, but as time goes by you will be able to rely on this feature on pretty much any machine.
I was wondering, "Why not pipe output to JQ" up until I read this:
> A not insignificant amount of people on stackoverflow etc have problems to
> send correct JSON with curl and to get the quoting done right, as JSON uses
> double-quotes by itself and shells don't expand variables within single quotes
> etc.
But I definitely didn't remember how to handle the closing paren and closing quote correctly, and I had to google for an example just now. So I'm not allowed to say this is easy to remember :)
Sounds like this idea is limited to the curl tool and wouldn't add anything to libcurl, which is great. I'd prefer libcurl leaving JSON to other libraries.
I use bash variables inside JSON with curl all the time, which leads to string escape screw ups. I know there are alternatives that make testing REST + JSON easier, but since our software uses libcurl in production I prefer to test with curl to keep things consistent.
I can see this being useful, but I'm not looking forward to the list of command line options being even longer. The output of "curl --help" on my system is already 212 lines long.
I wish the curl command was split such that different protocols had different commands. I REALLY don't want to see a list of FTP specific command line options whenever I'm just trying to look up a lesser-used HTTP option.
That said, this is really a minor gripe compared to just how useful curl has been for me over the years.
I'm already well past the stage of using moar/micro/[rip]grep to scan through curl's manpage and find what I need.
OTOH, if you ignore using curl to GET resources to download, >90% of my curl usage is slinging json, and often involves interpolating strings and hence copy-pasting, so this feature would be immediately useful to me.
Curl is kind of the swiss army knife of the web so I don't think a long manpage is out of line.
In the linked github wiki there's an example of the syntax of the suggested --jp flag used to pass key-value pairs and put them together as a JSON object:[1]
You could possibly solve it by supporting single quotes for value containment:
curl --jp "x='$var'"
which, with $var containing foo,y=bar, expands to:
curl --jp "x='foo,y=bar'"
Assuming that is what you meant. Isn’t this normally how these things are handled in unix shells? That and escaping which probably doesn’t apply here.
I don’t know what the best way to implement this would be, but the current proposal looks so weird for me that I’m either completely missing something or it’s wildly unnecessary.
If you do something like single quotes then you can no longer use the shell's abilities to properly pass in escaped values like this:
bash$ foo=this\"test\"
bash$ echo "value of \$foo is: $foo"
value of $foo is: this"test"
If we're throwing away the ability to let the shell handle escaping properly, then there's really no point to --jp at all versus just manually attempting to cobble together JSON directly.
I feel like if you only want to make a single JSON request, a simple curl invocation with the JSON data in single quotes or in a file should be enough. And if you make many different JSON requests, you're probably much better off with one of the alternative tools.
Related to the second point, I really wish more people put more time into creating tools for their testers: Shell/Ruby/Python/Perl scripts that are custom-made for the specific service they're testing and provide a better UI, so that instead of a sequence of curl invocations, logins, and error-prone copy-pasting, people could just run a single purpose-built command.
Some of the replies say this is a layer violation: HTTP doesn't care about JSON, so curl shouldn't either. But you have to add Content-Type and Accept headers when working with JSON, which I personally often forget, so I think this does make sense.
I'm indifferent if they do this or not, can always use pipes and jq, but if they do, I hope the json-part option uses some syntax that's a subset of jsonpath and/or jq, so I don't have to understand a third syntax when people start using this.
Here is how one might adhere to the so-called UNIX philosophy.
Utility #1: 580-character shell script to generate HTTP (NB. printf is a built-in)
Utility #2: TCP client to send HTTP, e.g., netcat
Utility #3: (Optional) TLS proxy if encryption desired, e.g., stunnel^1
1. For more convenience use a proxy that performs DNS name lookups. Alternatively, use TLS-enabled client, e.g., openssl s_client, etc.
Advantages over curl and similar programs: HTTP/1.1 pipelining
For the purpose of an example, the shell script will be called "post". To demonstrate pipelining POST requests, we can send multiple requests to DuckDuckGo over a single TCP connection. TLS proxy is listening on 127.0.0.1:80.
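The 580-character script itself isn't shown here; a much-shortened sketch of the idea might look like this (hypothetical, with placeholder endpoints):

#!/bin/sh
# post: write one HTTP/1.1 POST request to stdout
# usage: post <host> <path> <body>
# (NB. ${#body} counts characters, which matches bytes for ASCII bodies)
host=$1 path=$2 body=$3
printf 'POST %s HTTP/1.1\r\nHost: %s\r\nContent-Type: application/json\r\nContent-Length: %s\r\n\r\n%s' \
"$path" "$host" "${#body}" "$body"

# pipelining: several requests down one TCP connection, via the local TLS proxy
{ ./post example.com /api/a '{"q":"curl"}'; ./post example.com /api/b '{"q":"json"}'; } | nc 127.0.0.1 80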
Based on personal experience as an end user, I find that using separate utilities is faster and more flexible than curl or the similar programs mentioned in this thread. For me, 1. storage space for programs, e.g., large scripting-language interpreters and/or other large binaries, is in short supply, and 2. HTTP/1.1 pipelining is a must-have. Using separate, small utilities 1. conserves space and 2. lets me do pipelining easily. I write many single-purpose utilities for my own use, including one that replaces the "post" shell script in this comment.
AWK was designed by Kernighan, who is on the record as subscribing to the Unix Philosophy. For all I know, Aho and Weinberger also subscribe to the philosophy. I think it's safe to say that AWK and the Unix Philosophy are compatible. I have never seen anything that says otherwise.
Was "rigid dogmas" in reference to the Unix Philosophy? I haven't ever seen it described that way.
My most readily available `man awk` spit out 1235 lines of text. Its namesake physical book is even longer. It is a Turing-complete language. It can execute arbitrary system commands with `system()`.
It is the antithesis of the Unix Philosophy. Always has been. And that's okay.
It seems the goal is to make it easier to craft JSON by having curl perform escaping, while the proposal would seem to require some sort of in-memory tree representation of the data.
One alternative would be to provide escaping more directly like this:
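(hypothetical flag names, just to illustrate the shape)

curl --json-format '{"user": %s, "count": %s}' --json-arg "$user" --json-arg "$count" https://example.com/api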
And then curl would do the substitution with the appropriate type-specific escaping for each variable. This has a few nice properties:
1. What's on the command line resembles what's actually going to be sent.
2. Curl doesn't actually need to parse (nor validate) the JSON, or to create a tree representation of the data within itself. %s is invalid JSON anyway, so you can do a string substitution - all you need to keep track of are matching quotes (including escape sequences).
I've used a printf style format string here, which could be expanded for extra convenience. For example the Python-style `%(env_var)s` sequences could be used which could expand environment variables directly. Or something could be added for convenient handling of bash arrays.
JSON isn't under-specified, you can tell if something's valid JSON just based on the rules here: http://www.json.org/json-en.html. The mapping of the JSON data model to the data models found in various languages is what's ambiguous. Which is not impacted by curl supporting JSON in the slightest:
- The `--json` option only adds a content-type header, it doesn't alter the transmitted data at all.
- The `--jp` option has a bespoke format that's not part of the JSON spec, and which doesn't actually depend on a specific data model, it's just string manipulation.
Numbers are the main thing I assumed you were talking about: the JSON spec is clear though, numbers can be arbitrarily long.
The problem is not with the JSON spec, the problem is when you are converting from one data model to another. Any program which claims to perfectly round-trip the JSON data-model should support arbitrarily long numbers, there's no ambiguity in the spec about that.
If you are only parsing JSON as a means to encode your own data-model, then there's no obligation to support arbitrary precision, but users should not expect to be able to round-trip arbitrary JSON data.
AFAICT, `--jp` doesn't do anything which would affect the length of supported numbers, even though it's generating JSON.
JSON lets you write numbers. They can have a sign, decimal part, and an exponent. The standard euphemistically describes this as:
> JSON is agnostic about the semantics of numbers. […] JSON instead offers only the representation of numbers that humans use: a sequence of digits. […] That is enough to allow interchange.
But can you encode/decode an arbitrary integer or a float? Probably not!
* Float values like Infinity or NaN cannot be represented.
* JSON doesn't have separate representation for ints and floats. If an implementation decodes an integer value as a float, this might lose precision.
* JSON doesn't impose any size limits. A JSON number could validly describe a 1000-bit integer, but no reasonable implementation would be able to decode this.
The result is that sane programs – that don't want to be at the mercy of whatever JSON implementation processes the document – might encode large integers as strings. In particular, integers beyond JavaScript's Number.MAX_SAFE_INTEGER (2^53 - 1) should be considered unsafe in a JSON document.
Another result is that no real-world JSON representation can round-trip “correctly”: instead of treating numbers as “a sequence of digits” they might convert them to a float64, in which case a JSON → data model → JSON roundtrip might result in a different document. I would consider that to be a problem due to underspecification.
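A quick way to see that round-trip problem (assuming node is available; 2^53 + 1 is valid JSON but not representable as a float64):

bash$ node -e 'console.log(JSON.stringify(JSON.parse("9007199254740993")))'
9007199254740992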
The numbers were what I was mainly thinking of, so thanks for your exhaustive enumeration of those problems.
json.org requires whitespace for empty arrays and objects while RFC 8259 does not (and I often see [] and {} in the wild).
A lot of packages de facto break the spec in other ways, such as people blatting Python maps out rather than converting them to JSON, so that the keys are quoted as 'foo' rather than "foo". I've complained about this when trying to parse the stuff, only to receive the response "it works for me so you must have a bug" from the pythonistas. This has happened in multiple projects.
This would provide some additional utility, but honestly I don't see the point.
Anyone sending JSON via curl CLI a lot is probably having to manipulate JSON via CLI for purposes other than sending requests with curl as well. It makes more sense for most people to just learn one json manipulation tool and pipe input in and out of things that need it.
I use cURL a lot. I can see how this would maybe be somewhat useful for working very quickly, but the wiki's use cases of k-v pairs and lists are simple enough in raw JSON.
Something that would be helpful is for cURL, HTTPie, Postman, Fiddler, etc. to standardize on a request/response pair format such as Chrome's HAR. There are some tools on NPM, including HAR-to-cURL converters, so I think native HAR support would be more helpful than a JSON builder.
When dealing with support on someone's Docker image, for me it's far better to have this in one utility. Yeah, you can write a command with the current version, but cutting it down to --jp will be much easier.
I've included --json in a custom redefinition for years, glad to see something like that coming to the official binary!
curl() {
    # Translate --json into the Content-Type header; pass everything else through.
    local args=()
    for arg in "$@"; do
        case $arg in
            --json) args+=("-H" "Content-Type: application/json") ;;
            *) args+=("$arg") ;;
        esac
    done
    command curl "${args[@]}"
}
If this means I can just use libcurl to GET a web endpoint and parse the JSON in a C program rather than have to manage multiple dependencies, I'm all for it!
> Getting JSON syntax right, error free, by hand, in a terminal, is not easy.
You are right, and I wonder why shells haven't done anything to address this. Fish might, actually. Colorization isn't really useful in aiding comprehension, but it is good at giving an indicator that there is a parse error somewhere.
This is great. When a new user uses Darklang, we want them to be able to make JSON API requests quickly and easily, and there aren't great client-side tools for that that you can expect users to have installed. Giving them a big long curl command is no fun, but `curl --json 'the-body'` would be amazing.
Doesn't really look like it's adding anything, and the `jp` part looks like the people referenced on stackoverflow will just be more confused.
Oftentimes the JSON being sent is complex; I can't imagine anyone wanting to basically rewrite it into something else for anything other than two-field JSON objects.
I know I've done the quoting dance before, while exploring an API in one project I resorted to using zsh heredocs to build the payload argument to avoid all quoting issues. I'm sure there is a better way already but it sounds nice to have this built into curl as its so common.
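For example, something like this sidesteps the outer-quote juggling entirely (a sketch; the endpoint is a placeholder):

# body comes in on stdin, so the shell's argument quoting never touches the JSON
curl -s -H "Content-Type: application/json" --data-binary @- https://api.example.com/login <<EOF
{"username": "$USER", "note": "variables still expand inside the heredoc"}
EOF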
I would prefer to use the --json flag to provide syntactic sugar for setting the content type and accepts headers and leave the marshaling of data to a separate tool. Or if it has to be baked in, refactor `jo` into `libjo` and a CLI wrapper so that the two tools behave the same way.
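That separate-tool pipeline already works today, e.g. with jo (a sketch, assuming jo is installed; the endpoint is a placeholder):

# jo builds the JSON, curl just ships it
jo username=YOUR.USERNAME password=YOUR.PASSWORD \
| curl -s -H "Content-Type: application/json" --data-binary @- https://api.example.com/login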
TIL! It seems to be quite a confused topic, as many things (some Java servers, for example) even require it, but you do seem to be right, as JSON must always be UTF-8.
So great! This has been one of the most requested curl features for years. Without this feature, to send JSON, you had to craft a valid JSON string yourself or shell out to another utility that creates a valid JSON string.
I just want to pass it a filename that contains the JSON. Never been a fan of heaving around post bodies that dangle from a curl command...and I hate postman.
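Worth noting that curl can already take the body from a file via the @ prefix; it's only the headers that remain manual (the endpoint is a placeholder):

# body read from body.json; the Content-Type header still has to be spelled out
curl -H "Content-Type: application/json" -d @body.json https://api.example.com/login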
But curl has so many features already that it's odd to say now is the time to stop adding more. There is even SOCKS4 proxy support; does anyone even use that, now or ever?
> But aren't there also several command line utilities which already support JSON.
There are command line utilities which consume, query, or format json.
But aside from e.g. httpie (which is essentially a competitor to Curl), which "several command-line utilities" make authoring JSON easy and convenient?
Because if you check point (3), the link, and the paragraph before it, this is entirely about sending valid JSON (ideally with the correct headers).
In fact the second section of the link in question literally states:
> # JSON response
> No particular handling. Pipe output to jq or similar.
JSON is used a lot, really a lot, with cURL. It's ubiquitous on the web. cURL's intention is to support all this common URL stuff; adding JSON seems a natural fit to me.
For example, here's a JSONy POST request with cURL:
curl -s -H "Content-Type: application/json" -X POST https://api.ctl.io/v2/authentication/login --data '{"username":"YOUR.USERNAME","password":"YOUR.PASSWORD"}'
Here's that same request with HTTPie:
http POST https://api.ctl.io/v2/authentication/login username=YOUR.USERNAME password=YOUR.PASSWORD
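And, per the proposal, the cURL version would shrink to roughly:

curl -s --json '{"username":"YOUR.USERNAME","password":"YOUR.PASSWORD"}' https://api.ctl.io/v2/authentication/login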