Hacker News
HTTPie – A user-friendly CLI HTTP client (github.com/jakubroztocil)
367 points by _ghm2 on Dec 1, 2019 | 89 comments



HTTPie is amazing, but I grew tired of it being "slow", slower than some of my services' response times at least.

Migrated to Curlie [1], `alias http=curlie`, and have been happy with it since. Same API, better performance, and access to the full set of `curl` flags.

[1] https://curlie.io


From https://curlie.io/

> If you like the interface of HTTPie but miss the features of curl, curlie is what you are searching for. Curlie is a frontend to curl that adds the ease of use of httpie, without compromising on features and performance. All curl options are exposed with syntax sugar and output formatting inspired from httpie.

So curlie is curl. Thanks for this.


Can confirm; I switched to curlie a long time ago and haven't looked back.

It would be interesting to create an alternative to `http-prompt` [0], using curlie and go-prompt [1] instead of HTTPie and `prompt_toolkit` [2], respectively.

[0]: https://github.com/eliangcs/http-prompt

[1]: https://github.com/c-bata/go-prompt

[2]: https://github.com/prompt-toolkit/python-prompt-toolkit


If the API is really the same you don't need an alternative, just an option to use curlie instead


Curlie seems very nice, but I'm a little concerned about your advice to alias http(ie) to curlie, as they are very much not interchangeable ("-a" means something entirely different between the two).


Never experienced slowness with it, what specifically is the problem?


Author here: the main driver for writing curlie was not about the slowness of httpie but about all the features I was missing from curl. As curlie is just a wrapper on top of curl, you get all the great options of curl like http/2, http/3 support, advanced TLS options etc.


For one-off runs it doesn't feel slow, but it gets more noticeable if you're running it in a loop or against a server you know is fast (e.g. a local Go server). Consider this output as an example (run from an AWS server):

  # time http google.com >/dev/null 2>&1
  real 0m0.337s
  user 0m0.259s
  sys 0m0.036s
  
  # time curlie google.com >/dev/null 2>&1
  real 0m0.042s
  user 0m0.011s
  sys 0m0.005s
  
  # time curl google.com >/dev/null 2>&1
  real 0m0.039s
  user 0m0.009s
  sys 0m0.004s


Ok, sounds like a problem with Python's startup speed. They have been improving it little by little, but I'm not sure how far they'll get.


One other thing is that httpie is typically an interactive tool. If I were hitting services in a loop I’d just write a Python/requests script to do it rather than bash+httpie.


> Like curl but unlike httpie, headers are written on stderr instead of stdout.

What’s the thinking here? If I ask for the headers, wouldn’t I want that on stdout?


Most of the time when I do any further processing (save to file, pipe to other tools, etc) I'm interested in the response body. The headers are a useful diagnostic, and things like cookies are useful to save, but those don't typically belong in the same stream: I can't do the same things to the headers as I can the body, and so including both in the same stream will tend to reduce my options for downstream processing. If I actually do want them both in the same stream I can redirect stderr to stdout.
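To illustrate the point with a Python stand-in (the child process here mimics the stdout/stderr split, it is not curlie itself):

```python
import subprocess
import sys

# A stand-in child process writes the "body" to stdout and the "headers"
# diagnostic to stderr, so the caller can capture each stream independently.
child = "import sys; sys.stdout.write('{\"status\": \"ok\"}'); sys.stderr.write('HTTP/1.1 200 OK')"

result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)

print("body:", result.stdout)      # clean data stream, safe to pipe/parse
print("headers:", result.stderr)   # diagnostics, kept out of the data stream
```

Redirecting `2>&1` on the real tool merges the two streams back together when you do want both.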


IDK, that is one of the things that annoys me to be honest, because sometimes headers get written after the body in the output (idk why, but I guess some buffering and/or parsing+formatting lag).

I almost forked it to get both outputs into stdout haha


I get that sometimes too. It should be fixable without having to send headers to stdout.


The examples show that the headers are printed by default, so writing them to stderr is the correct behavior.


How does "printed by default" logically lead to stderr? I just don't follow.

Asking a program for info and then getting that info on stderr seems really bizarre to me.

Is this some historical UNIXism?


A lot of command line tools send debugging output to stderr so you can build a pipe/redirect output and still see the debug and error outputs.

You could consider headers to be some sort of debug prints..


It's convenient to have stdout only returning the body so you can work with it without having to remove the header. With curl, if you pass the -v, you get headers on stderr too.

When headers are specifically requested, with -i for instance, I agree curlie should print them on stdout. PR accepted ;)


>Asking a program for info and then getting that info on stderr seems really bizarre to me.

The info that you're asking the program for is the response body. You're using `curl` / `curlie` primarily to get the response body and then pipe it to the rest of the command. The headers are ancillary.

If the headers were the info you were asking for, e.g. when running `curl -D -`, then yes, they should get printed to stdout. That is not the case being considered here.


Truly one of the best cli tools out there. The api is concise and flexible, the --help flag is easy to read. But most importantly, there's sane defaults.

defaults to GET request

   http google.com
when a JSON body exists, it becomes a POST request

   http google.com user=65
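Roughly speaking (a simplified sketch, not HTTPie's actual parser), those key=value items build a JSON object like this; `:=` is HTTPie's syntax for non-string JSON values:

```python
import json

def items_to_json(items):
    # Simplified reimplementation of HTTPie-style request items, for
    # illustration only: key=value becomes a string field, key:=value
    # is parsed as raw JSON (numbers, booleans, nested objects).
    body = {}
    for item in items:
        if ":=" in item:
            key, _, raw = item.partition(":=")
            body[key] = json.loads(raw)  # e.g. count:=2 -> {"count": 2}
        else:
            key, _, value = item.partition("=")
            body[key] = value            # e.g. user=65 -> {"user": "65"}
    return json.dumps(body)

print(items_to_json(["user=65"]))   # {"user": "65"}
print(items_to_json(["count:=2"]))  # {"count": 2}
```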


I don't get how you think that second part is a "sane" default.

"user=65" is not valid JSON, unless you consider it a single JSON string literal.

HTTPie converts that into a JSON object.

Curl will send the data literally as you give it: if you want to send JSON, give it JSON as the body. If you want to send a "Form POST" request, give it an encoded query string as the body.

If HTTPie detected that what you gave it was valid JSON, or whatever other media type, and used that to 'guess' the request Content-Type, that would be useful and smart.

But HTTPie not only mangles input, it mangles it in a way that is completely non-obvious.

passing it "foo=bar" will send a JSON encoded object like `{"foo": "bar"}`.

passing it "foo=bar&baz=boo" will still send a JSON encoded object like `{"foo": "bar"}`.

So, it knows that ampersand is a field separator, but then promptly ignores all the content after it?

So, no. I call bullshit on "sane defaults" when it mangles data, and silently drops data it's given.


Completely agree. Opaque magic to "guess what you want" from incomplete or plain wrong arguments isn't "sane defaults."

It also doesn't make it easy to use, because eventually the magic will guess wrong, and it will go from magically doing the right thing even though I passed garbage arguments to doing completely the wrong thing at the drop of a hat.

An example of a "sane default" where curl falls short might be to enable redirects by default, because that is usually what the user wants (and correct client behavior).


> Opaque magic to "guess what you want" from incomplete or plain wrong arguments isn't "sane defaults."

Please explain how a well-defined key-value convention is equivalent to "incomplete" or "plain wrong" arguments.

In what way is any of this opaque when it's fully documented and quite easy to follow?: https://github.com/jakubroztocil/httpie#id16


Fairly sure I'd forget all the time that it thinks giving it a syntax that looks like form-encoded obviously means I want to send a JSON request.


key=value is a convention used in so many places that have nothing to do with form encoding

Would Java developers see this and imagine you're passing values for a .properties file?

If you've never seen the tool before, you also wouldn't know how it works at all. Once you hit help for the first time (you know, to actually learn how to use it), knowing it defaults to JSON is pretty much a base truth.

Its entire value proposition seems to be strongly tied to JSON support (2nd-highest feature in the pitch at the top).

This is like saying you'd forget jq takes JSON strings...


How many key/value formats commonly used for http request/response transactions use = ??

The first feature pitch is “user-friendly curl alternative with intuitive UI”.


I'll leave it at this, I'm glad they prioritized not having to escape JSON on the command line over people who have no idea about how the tool works being able to guess how to pass form inputs.

That is probably the most user friendly difference they could make for working with JSON based APIs


Right, but this is HTTP, which has used form encoding since... 1998, if RFC 2388 is to be believed.


That doesn’t sound right. I’m pretty sure it’s older than that. New RFCs tend to replace old ones and I don’t think they reliably make it painfully obvious this is what’s happening.


A URL without a scheme is somewhere between "plain wrong" and "incomplete."

The program switches around between HTTP methods, content types, query building, request serialization, etc, at every minor syntactic change to the command line arguments. The fact that a foo=bar implies that the request content type AND the accepted response content type will be set to application/json AND the foo=bar is POSTed as a JSON object isn't a "sane default."

That is not to say it can't be a useful utility. The quirky syntax for query parameters, JSON-serialized request body, etc, can be learned and is quite concise, so after the initial learning curve, it can be used very efficiently.

"Sane defaults" is false advertising, because it implies the least amount of surprise. This tool is the opposite. It is unintuitive but potentially very efficient once learned.


The third feature listed after "syntax" and "formatting" is "Built-in JSON support"

It's a JSON-centric tool made for people who work with APIs that use JSON in a very compatible way, with JSON-centric defaults.

Changing the accepted type matches the behavior of JS HTTP libraries like `request`, where accept and request content types are set in unison by default.

This stuff is all sane defaults for the people who are usually in Postman firing off JSON data to APIs that all look very similar.

"Sane defaults" don't exist in a vacuum; writing (and escaping) properly formatted JSON is a pain (see jq, where the 2nd paragraph on how to even invoke it is dedicated to escaping).

The tool defaults to removing that pain point in a very easy to understand way.


Broadly curious, what would your thoughts be if the CLI printed its guesses, alongside how they could be made explicit? "X is not valid, assuming: [more explicit version of argument goes here]"

This strikes me as the best of both worlds--you're seamlessly educating the user on what is happening and what they could have done instead.


Just to be clear (see my other comment), I'm not against tools that require a learning curve.

I'm against dishonest advertising with loaded phrases like "sane defaults" (this term has its place, but it's not here) and "for humans" (this term is always a red flag) when the tool invents a load of unconventional, unintuitive syntax.


> I'm against dishonest advertising with loaded phrases like "sane defaults" (this term has its place, but it's not here) and "for humans" (this term is always a red flag) when the tool invents a load of unconventional, unintuitive syntax.

Author of the dishonest advertising here ;)

I don't think the fact that the tool introduces a mini language for crafting HTTP requests on the command line contradicts the statement that the tool also provides what the author believes are sensible defaults (e.g., default to POST when sending data vs. GET when not, default to "prettifying" HTTP messages for human consumption on the terminal vs. not touching them when the output is redirected, etc.).
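The terminal-detection part of that default can be sketched in a few lines of Python (illustrative only, not HTTPie's actual code):

```python
import json
import sys

def emit(raw_body):
    # Prettify only when a human is watching: format the payload when stdout
    # is an interactive terminal, pass it through untouched when redirected
    # or piped, so downstream tools see the exact bytes.
    if sys.stdout.isatty():
        return json.dumps(json.loads(raw_body), indent=4, sort_keys=True)
    return raw_body

print(emit('{"b": 2, "a": 1}'))
```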

"For humans" is meant to highlight the focus of this project on usability and interactive usage, as opposed to curl’s focus on feature-completeness across a number of protocols and non-interactive usage:

> curl is a tool to transfer data from or to a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP). The command is designed to work without user interaction. [1]

[1] https://curl.haxx.se/docs/manpage.html


"Sane defaults" means it makes the 99% case of how I use the tool easier: convention over configuration.

In 99% of cases httpie can assume that I'm making a POST request or a GET request with json or query params. Very rarely will I need to add custom headers, or change the Content-Type.

Yes I can explicitly define the JSON body, headers and method type every time. Or I can use a tool which uses ..... "sane defaults"


I think it's hard not to like the HTTPie interface if you give it a bit of time. The design is brilliant.


I gave specific examples where it's unintuitive and your response is "The design is brilliant".

Touché sir, your argument has won the day. /s


Charitably, the emphasis could be put on “if you give it a bit of time.” Read that way, it’s not an argument for intuitive UX, just a comment about how the UX grows on you. It doesn’t contradict what you said. Should it?

After all, this is a discussion forum where regular humans can share their experiences. Not debate club.


A design isn't "brilliant" just because "people got used to it, despite how shitty it is".


You can send it raw JSON too. For example:

echo '{"foo": "bar"}' | http :8000


I use HTTPie all the time. It's not a curl replacement, it's a competitor in the space of CLI HTTP clients. The argument syntax is easier to use than curl, and it's geared for humans interacting with JSON APIs, including coloring and pretty printing output.


> I use HTTPie all the time. It's not a curl replacement, it's a competitor in the space of CLI HTTP clients.

Sounds like a curl replacement to me. From wikipedia (https://en.wikipedia.org/wiki/CURL):

"cURL is a command-line tool for getting or sending data including files using URL syntax."


Curl is a lot more than something humans can use from the command line for http/s requests.

This seems like a small tool with good ergonomics for http/s requests only.


> Curl is a lot more than something humans can use from the command line for http/s requests.

I continue to be surprised by the protocols which curl supports. SMTP, IMAP, SFTP..


As it looks now, I don't see much reason to switch away from curl + jq.

The thing I like about curl + jq is that I can easily switch it out for:

TEMPFILE=`mktemp`; curl -s <REQUEST> >$TEMPFILE; python -c "import json; f = open('$TEMPFILE', 'r'); d = json.load(f); <json processing logic and print statement>; f.close()"

and run in my CI environments or ship in docker images for testing purposes.
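The temp file isn't strictly needed, by the way; the body can be read straight from the pipe. A minimal sketch (the "status" field stands in for whatever the processing logic is):

```python
import io
import json

def process(stream):
    # Read the whole JSON response body from a pipe; "status" is a
    # placeholder for your own processing-and-print logic.
    doc = json.load(stream)
    return doc.get("status", "unknown")

# In a real pipeline this would be: curl -s <REQUEST> | python3 filter.py
# with process(sys.stdin) inside the script. Demo with a canned body:
print(process(io.StringIO('{"status": "ok"}')))  # ok
```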

That said, I do see the potential for a tool like this. What I would like is the ability to manage different profiles for different URLs from the command line. Profiles could just be collections of header specifications - "Authorization: Bearer <blah>" or "Content-Type: application/json"

(Maybe https://github.com/postmanlabs/newman#using-newman-cli ? Didn't know about it until now.)


For me personally, the ability to write custom authentication plugins for HTTPie is the reason I use it. For my use case I wrote a simple AWS Sigv4 [0] plugin to simplify calling AWS API Gateway endpoints during development/debugging.

[0] https://github.com/aidan-/httpie-aws-authv4


>TEMPFILE=`mktemp`; curl -s <REQUEST> >$TEMPFILE; python -c "import json; f = open('$TEMPFILE', 'r'); d = json.load(f); <json processing logic and print statement>; f.close()"

This is painful. In powershell this is built in and as easy as iwr url/to/json | convertfrom-json | other logic


Interesting, looks like Linux users can also try out Powershell now: https://docs.microsoft.com/en-us/powershell/scripting/instal...

It's a beefy package, though. I packaged it in a Docker image on top of alpine:3.10.3, following their installation instructions for Alpine: https://gist.github.com/nkashy1/643e7a263054c02e2caceb3912f8...

The image comes in at 178 MB. alpine:3.10.3 is 5.5 MB. An alpine image in which you add bash, bash-doc, and bash-completion clocks in at 13 MB.

Sounds really powerful for personal use, but not an ideal tool for production use (e.g. when you need to spin up a pod on a Kubernetes cluster to debug an issue in production). Will definitely try it out.


Probably because it has to include the entire CLR/.net framework. I suppose that is a concern, but I doubt I'll ever be doing k8s or docker in my career anytime soon.


For similar pipelines I built Xidel (http://www.videlibri.de/xidel.html)

xidel url -e 'logic'

and one such xidel call is enough regardless of whether the URL returns JSON, XML or HTML.


This is pretty handy. I usually default to httpie for playing with my APIs and documenting examples - it's much clearer than curl.

It doesn't seem to get much development attention though. There haven't really been any features I find useful added in the past year. The biggest issue is no support for nested JSON bodies, meaning non-trivial API calls end up just as complex as curl.


-H for request headers

-d for body

-X for method

--verbose to see response headers

How much “clearer” could curl get?


hmmm how about this:

-M for Method

-B for Body

-RH for Request Header

you know, so you only have to know the first letter and don't have to remember arbitrary flags?

I know that it is not that easy, but unix tools do have the problem that the flags often don't make sense at first glance.


-H for header

-d for data

-X for execute

There was a reasonable attempt at this already. There are going to be opinions on either side. And through regular use the flags would become muscle memory.


> I know that it is not that easy, but unix tools do have the problem that the flags often don't make sense at first glance.

Should you be sending HTTP requests to something without knowing precisely what you're doing? httpie just sorta assumes from your arguments what you're trying to do, and for me it has often been wrong. Probably works OK if you have a mostly read-only API.


I don't send random things, I either look up curl if I need to script something or I just use postman.

EDIT: Depends. Most of the time I use standard JSON REST APIs; there is not really that much to get wrong, I think.


For those using Emacs, I'd recommend restclient.el [1]. In Emacs fashion, you can keep a REST API documented in a single file (or more) and evaluate parts directly in the buffer for testing and playing. Throw that into git/mercurial and you have a versioned and documented REST API ;)

[1] https://github.com/pashky/restclient.el


Visual Studio Code has a similarly named (seemingly?) API-compatible extension: https://github.com/Huachao/vscode-restclient


Those who like HTTPie might also like http-prompt[1] which is an interactive prompt built on top of it.

[1] http://http-prompt.com/


Wonderful tool. I’ve said this before but something holding it back is that you need python to run it and have a python environment set up. Usually this isn't that big of a deal but it’s just enough friction to make me not reach for it most of the time.

Conversely something like jq is trivial to install and use almost anywhere because it is just a binary linked against libc. As a result I use it all the time.


The Python dependency for HTTPie is handled by most Linux and macOS package managers [1]. It would be great to have it in Windows package managers as well.

[1]: https://httpie.org/doc#installation


One time I tried to install HTTPie from the Ubuntu 16.04 apt repository, I got an old version that was not compatible with httpie-jwt-auth, so I had to install it using pip:

1. pip3 install httpie.

2. python3 -m pip install httpie.

3. sudo pip install httpie.

Finally, it was working. Last week I upgraded to ubuntu 18.04 and surprise, httpie is not working. I didn't bother installing again from pip. Luckily this time, apt repository had a recent version.

Scrolling back through my terminal history, I see that I went through the same struggle with s3cmd.

So yeah, installing anything that requires python is a bit painful. Fortunately, someone in the comments mentioned https://github.com/rs/curlie which is an HTTPie alternative in golang.


`snap install http` gets you the latest httpie, fwiw


Snap doesn't currently work in Windows 10 WSL1, but hopefully will in WSL2.


For someone who doesn't already know curl it might seem easier to use this, but otherwise it doesn't seem so useful.

Commands are a bit shorter (as it assumes json apparently) but, when working on the terminal, once I wrote my curl command I just keep repeating it anyway or just changing one parameter.


I have been using curl for command-line HTTP stuff for years and years. I tried out HTTPie a few months ago for testing a JSON-flavoured API and found it amazing; it is so much less awkward than piping curl into jq and struggling with curl's POST body options.


I'm in the same boat. Every time I have had to use curl over the past ~15 years I've had to look up how to do the few basic things I need to do. With HTTPie, it's easy enough that I haven't needed to look anything up after the first two or three times.

Sure, cURL can do it and if I used it frequently enough, I guess I'd remember how to use it without having to look it up every time, but for my basic and infrequent use, HTTPie is more pleasant to me and I can remember how to use it.


I'm not particularly good at memorising shell commands, but curl always felt straightforward. I use it for all kind of requests and rarely need to look up the parameters.


Which curl body options are you talking about? I just see -d for the data, -XPOST which is optional when using -d, and --data-binary mostly for file transfer


I keep using httpie for all testing and manual stuff, because with cURL (or wget) I have to continuously call up the help pages to find the correct options. In httpie the philosophy seems to be that you mostly use various non-option strings for the common additions (headers and query parameters), and I can remember those more easily :)


There's at least one thing curl can do but httpie cannot: anyauth, where curl sends a request and, based on the response, sends another request with the correct authentication format.
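A loose sketch of that negotiation: the first request goes out without credentials, the server answers 401 with WWW-Authenticate challenge(s), and the client retries with the strongest scheme both sides support. The preference order and the naive comma splitting below are illustrative only (real Digest challenges contain commas themselves, and this is not curl's actual logic):

```python
# Illustrative preference order, strongest first; not curl's exact one.
PREFERENCE = ["negotiate", "digest", "basic"]

def pick_auth_scheme(www_authenticate):
    # Naively collect the scheme name leading each offered challenge.
    offered = {part.split()[0].lower()
               for part in www_authenticate.split(",") if part.strip()}
    for scheme in PREFERENCE:
        if scheme in offered:
            return scheme
    return None

print(pick_auth_scheme('Digest realm="api", Basic realm="api"'))  # digest
```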



A wonderful program. Apparently this is a hot topic, but I found this tool after growing frustrated with how verbose curl is to type when simply trying out JSON APIs interactively. Combined with something like fx [1], this is a real treat.

[1] https://github.com/antonmedv/fx


This is the gold standard of CLI tools. Sure, cURL may technically be more capable, but HTTPie is friendlier, more memorable, and clearly built with developer UX in mind.


It's the other way around: curl is so good it's almost like using the spec directly.


alias wget='echo "using httpie as wget";http -d'


Why would you do this? If you want to use httpie why would type wget? If you’re pasting a snippet that uses wget, wouldn’t you want it to actually use wget?

I don’t see any benefit of using this alias


I really like it for functional testing and for describing steps in peer reviews; it's concise and readable.

It's not the only tool out there; sometimes cURL makes more sense depending on the context. Use the right tool for the right job!


This is one of the best CLI tools; I loved it 4 years ago and still do.


Can it dump all the scripts/images/etc. being loaded on a specific website? (not referring to the source)


What about performance?


I'd say that its intended use case is to familiarise yourself with APIs and test out functionality rather than use it in a place where performance really matters.

I use it all the time for just that, the API is so simple I rarely need to check the docs.


When I hear "curl alternative", I think of something low-level, portable and frankly, something that will compile _almost_ anywhere that has a C compiler.

But then I realized I'm thinking of libcurl, not curl. So maybe this isn't so bad in Python as a client tool. Does Python 3 still start up really slow though? (honest question)


No, Python has never (last decade) had a startup time that is noticeable by humans when running CLI scripts, unless your CLI script is importing a large codebase. JVM languages are an example of languages which do start up too slowly to be attractive for CLI scripting. You are right that python can be horribly slow to import a large codebase, for example when starting a web server in a large django project.
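Bare interpreter startup is easy to measure yourself; on typical hardware it lands in the tens of milliseconds, which supports the point that heavy imports, not the interpreter, dominate slow CLI startup:

```python
import subprocess
import sys
import time

# Time a bare `python -c pass` launch: interpreter startup plus process
# spawn overhead, with no user imports at all.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"bare interpreter startup: {elapsed_ms:.1f} ms")
```

To see where a slow tool's startup actually goes, `python -X importtime -c "import somemodule"` (Python 3.7+) prints a per-import timing breakdown.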


AOT compilation has come back to JVM languages both on HotSpot and Graal. There are now lots of ways to remove JVM startup time penalties and most of them provide for startup times that are as fast as Python independent of code size.


Does this apply to scala?


Scala Native [0] gives you AOT via LLVM, so it isn't quite the same thing, but can, in some circumstances, significantly speed up your code.

[0] https://github.com/scala-native/scala-native


Right, that makes sense. I did just test a couple of short Python programs:

  $ time python3 test.py 
  Hello world

  real    0m0.020s
  user    0m0.020s
  sys     0m0.000s


FWIW we had to move away from Python in the project I was working on because just waiting for the help output took more than 15 seconds (IIRC) on a BeagleBone Black.



