One pattern I really like is opening the network tab of the browser, finding the request I'm interested in, and "copying as curl" - the browser generates the equivalent command in curl.
Then I'd use something like https://curlconverter.com/ to convert it into request code of the language I'm using.
In a way curl is like the "intermediate representation" that we can use to translate into anything.
I also use the browser's 'copy as curl' function quite frequently, as it's so convenient: all the auth and encoding headers are set to _definitely_ work, instead of messing around with handmade, multi-line curl command lines.
Be aware that online services like this one might log your requests, which could contain sensitive data. I'm not saying this one does, but those websites give me the creeps.
I watched the network logs, and it doesn't seem to transmit anything. Additionally, their privacy info clearly states:
> We do not transmit or record the curl commands you enter or what they're converted to. This is a static website (hosted on GitHub Pages) and the conversion happens entirely in your browser using JavaScript.
Privacy policies on most websites mean jack when they can be changed at any time for any reason, imo.
Not to mention the pattern nowadays is: offer a free service, pay a little lip service to privacy concerns, then the enshittification train comes rolling down the tracks a few months or years later.
Not saying this site is going to go down that path, but IMO giving the benefit of the doubt with regard to privacy on the internet is bad practice in 2024.
I agree that it's possible, and that the majority of utility websites do use a backend to provide their utility, but it seems curlconverter.com doesn't make any requests to a server to convert - it does so in JavaScript instead.
It would be nice if more sites offered themselves as PWAs that worked when you set "throttling" to "offline" in the dev menu, so that you could ensure that no data is leaving the browser when working with sensitive info.
Maybe that would be a nice browser plugin: something that blocks any further requests. I guess it would work similarly to ad blockers, only once enabled it blocks everything.
Just unplug the cord, or disable the WiFi, for a few seconds. As we are presumably discussing sensitive data, nothing is as certain as the end of the cord in your palm.
I kinda wish the address bar in any browser had an "advanced" popout menu that's basically a curl frontend with all of its bells and whistles. Basically move that functionality from the dev tools.
Sadly most major browsers go the opposite direction, removing more and more "advanced" functionality and information from the address bar (e.g. stripping the protocol, 'https://', from the URL)
Firefox on Android has recently started to hide the URL entirely when using the search-from-addressbar feature. Instead of the URL of the search page, it shows the search terms, which is redundant since the terms are already shown by the page itself.
Yeah, that has made my life so much easier when troubleshooting an API endpoint. I can tweak the request params to run against a local instance as well as pipe through jq for formatting etc.
This is pretty interesting. It's not like HTTP needs an intermediate representation, but since cURL is so ubiquitous, it ends up functioning as one. cURL is popular so people write tools that can export requests as cURL, and it's popular so people write tools that can import it.
The benefit of curl over raw HTTP is the ability to strip away things that don't matter.
E.g. an HTTP request will usually include client headers like User-Agent, but they're typically not relevant to what you're trying to do.
curl is like an HTTP request that specifies only the parts you care about. It's brief, and having access to bash makes it easy to express something like "set this field to a timestamp".
Actually "Copy as cURL" adds much that is not required. In some cases this can be useful. However if all one cares about is what is actually needed to succesfully request a resource using HTTP, then "Copy as cURL" always includes too much. It includes "things that dont matter".
HTTP is more flexible than "Copy as cURL". There are things that can be done with HTTP that cannot be done with cURL.
(NB. I am not a developer. I am not writing software for other people. I do not profit from online advertising. I use a text-only browser with no auto-loading of resources and no Javascript engine; I never see ads. Been using HTTP/1.1 pipelining, which all major httpds support, outside the browser, daily, for many years. It works really well for me; I rely on it. As such, I am not the appropriate person with which to debate the relative merits of HTTP protocols for developers who profit from online ads served via "modern" web browsers.)
That's kind of what I mean. E.g. I believe curl will add a Content-Length header, which is good to have, but I don't need every example HTTP call to show me that.
To me a curl call is kind of shorthand for "these are the parts unique to this request, do the appropriate thing for general-use headers". If I see a raw HTTP request missing a Content-Length header (assuming it could use one), I don't know whether to assume that I do the normal thing, or whether the server ignores Content-Length, or perhaps if the server specifically errors out when it's set.
Vice-versa, if a raw HTTP request does have a Content-Length header, I'm not sure if that means it's required or just supported.
If I see a curl call specifying Content-Length, it sets off the "something weird is going on" bells in my head. Nobody specifies that in curl, so its presence is odd and worth looking at.
I've used a similar tool as part of API logging, filtering out the signature on the bearer token... It's useful together with a request ID when dealing with error reporting.
https://curlconverter.com/ is a great example of intelligent UX. The "Copy as cURL" instructions it shows are tailored to whatever browser you're using. Very clever.
On curlconverter.com, clicking on "C" redirects you to the documentation page for the --libcurl option instead of generating a C snippet.
Wouldn't it be more user-friendly to still generate a C snippet, but mention that it can also be done with the --libcurl option?
Having a flag in the command-line interface that spits out the source code of a program doing the same thing as your command is pretty cool. It's like lifting the hood and showing you what's going on. This not only helps you get a better grip on how things work, but also lets you tweak or extend the code to fit your needs. It's all about giving users the power to customize things their way.
It's also just great documentation for a programming library. Like, if you're using libcurl and realize you need to do a range request (or whatever), or "copy as curl" from the browser network tab, you can just do it on the command line, add `--libcurl`, and find out exactly how to do that with the C library. It's the bee's knees.
This kind of thing was one of the reasons Visual Basic macros for Microsoft Office were so successful. You could perform actions in Word or Excel, watch the macros they produce, then customise them to your needs afterwards in code. It is a simple and powerful concept, so it's good to see it in curl.
While it doesn't appear to have been updated in many years, Microsoft built a similarly useful tool[1] that lets you browse the structure of a given Office document and see C# code that generates various components of it.
There is, but we need to throw away our outdated current programming model. Think Lisp or Smalltalk. There should not be a separation between program written in some language, operating system and shell [1].
You'd simply run:
    CURL url: "https://example.com" method: 'post
in an interactive system, which can be either your shell or your application code. We need --libcurl because UNIX is not an interactive environment, so there is an enormous abyss between runtime and compile time.
(Syntax in this example from a Smalltalk-like environment I have been designing, should be understandable enough)
---
1: "An operating system is a collection of things that don't fit in a language. There shouldn't be one." — Dan Ingalls, 1981
Yes yes, PowerShell is powerful and really good. I wanted to hate it because it seems too verbose and I don't like the mix of capitals and dashes in names, but the APIs it makes available from .NET are pretty phenomenal. Extreme verbosity aside, some day I'll have to seriously learn it.
For simple interactive usage, aliases go a long way, and with tab completion it gets really fast to type. For example, Invoke-WebRequest is iwr, and Select-Object I think is just select. Some others: ForEach-Object is %, and Where-Object can be ? or where. Also, since PowerShell is case insensitive, you don't really need all those capitals - the worst you'll get is a warning from your editor, if it has PowerShell integration (around 99% sure about this).
I think you don't even need to type out the whole option, just the first few letters, most of the time? I haven't used PowerShell in a bit, but I think if a command has a -Force option and you just type -F, it will go through as -Force - if and only if the command doesn't have any other options starting with F.
Can't hate it, but I can't love it either. I wish some other more "normal" languages (Ruby? Python?) had a better "shell" story and dotnet integration.
For so long I have dreamed of a ruby-like shell language. At one point I even built a prototype that was a ruby DSL, but I found myself continually having to implement functions and it never seemed to get to a point where I could use it all day and not run into missing functionality that I had to add. Some day I'll return to it, but it will have to be a day when I don't have a real job :-D
Can you copy and paste that snippet as is into a C# program? This is what I meant.
PowerShell has a high-level object model, but it doesn't make Windows itself, as a whole, any more programmable and interactive than zsh does for Linux. It is no Lisp Machine.
Given that we already have the concepts of exported entry points and extern in C, this should already be possible to a certain extent. The only other thing that needs to happen is for ELFs to gain the concept of exported data structures, so while "URL" might not be a structure defined by the OS, something like curl could provide it.
Too bad we're stuck with the UNIX/POSIX model - you could even take this idea of exported data structures further and have the terminal represent data in the user's preferred format, instead of needing tools like jq.
We have dlopen and we can list exported symbols, but we have no information about a function's arguments, ABI and calling convention, so it's pretty much impossible to turn UNIX into a fully late-bound and interactive REPL. Same issue with syscalls.
The only way is starting from scratch, with a novel approach and programming model.
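To make the dlopen point concrete, here's a minimal sketch (assuming a system with libcurl installed; curl_version() really does have this signature, but nothing in the binary could tell us that):

    /* Build with: gcc demo.c -ldl (-ldl is unneeded on recent glibc).
       dlsym hands back a bare address; the ELF carries no type info,
       so the cast below is knowledge we bring, checked by nobody. */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        void *lib = dlopen("libcurl.so.4", RTLD_NOW);
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        char *(*version)(void) = (char *(*)(void))dlsym(lib, "curl_version");
        if (version)
            printf("%s\n", version());   /* e.g. "libcurl/8.x ..." */

        dlclose(lib);
        return 0;
    }

A REPL built on top of this could only call the function because a programmer hard-coded the signature somewhere - exactly the late-binding gap described above.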
Why can't you add additional data about the types of parameters in a separate ELF section? It would only be used by programs that look for it, like a specially designed shell.
The calling convention can be assumed to be the same as used by the OS/arch combination.
You could, but then you'd have to recompile the world with this new information stored in an ELF header or something - and good luck if the library is not written in C (so it has its own conventions, ABI, memory model and binary format).
I'm talking about the status quo today, not how you can improve in a perfect world where everybody adopts a better way of doing things.
Implementing a half-baked Smalltalk layer on top of UNIX will not turn UNIX into a Smalltalk environment.
I wonder if one of the existing interpreted languages (python/javascript/ruby/whatever) could maintain a patch for llvm/gcc that did exactly that, and in the process create the most incredible seamless integration ever between itself and C (and also C++, now that its ABI is stabilizing!)
With CLR assemblies, you get rich embedded metadata, including UDTs like structs and unions, plus architecture-portable JIT-compiled bytecode to boot - together sufficient to map the entirety of the C data and execution model.
You might find the work and talks on liballocs by Stephen Kell interesting. The pitch is enabling Smalltalk-like reflection for UNIX as it exists today. https://www.humprog.org/~stephen/#works-in-progress
To compile it you'll need to tell it to link to libcurl, e.g. with -lcurl on gcc:
    curl https://ifconfig.me --libcurl ip_fetcher.c
    # Output: your IP address, plus a file ip_fetcher.c

    gcc -o ip_fetcher ip_fetcher.c -lcurl
    # Output: no errors, just a file ip_fetcher

    ./ip_fetcher
    # Output: your IP address
(I'm sure most people are saying "no duh" right now, but I'm probably not the only one on here who doesn't write C code every day!)
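For reference, the generated ip_fetcher.c comes out roughly like this (reconstructed from memory, so treat it as a sketch - the real output hard-codes curl's exact version string as the user agent, sets a longer list of options, and includes a block of commented-out ones):

    #include <curl/curl.h>

    int main(int argc, char *argv[])
    {
      CURLcode ret;
      CURL *hnd;

      hnd = curl_easy_init();
      curl_easy_setopt(hnd, CURLOPT_URL, "https://ifconfig.me");
      curl_easy_setopt(hnd, CURLOPT_NOPROGRESS, 1L);
      curl_easy_setopt(hnd, CURLOPT_USERAGENT, "curl/8.x");
      curl_easy_setopt(hnd, CURLOPT_MAXREDIRS, 50L);

      ret = curl_easy_perform(hnd);

      curl_easy_cleanup(hnd);
      hnd = NULL;

      return (int)ret;
    }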
Shameless promotion: Hurl [1] is an open-source CLI that uses libcurl to run and test HTTP requests defined in plain text!
We use libcurl for its super reliability and top features (HTTP/3, for instance), and we've added little features like:
- request chaining,
- capturing and passing data from one response to the next request,
- response tests (JSONPath, XPath, etc.)
There is nice syntactic sugar for requesting REST/SOAP/GraphQL APIs but, at the core, it's just libcurl! Using the verbose option, you can grep the curl commands, for instance. (I'm one of the maintainers.)
The idea is that you can copy-paste the C code into an existing program or at least use it as a reference to know exactly which libcurl API calls are needed to replicate the curl call.
You'd probably have to modify it with a few parameters to make it useful for generic web pages and such. I would think of it as more of a base to build on so you don't have to dig through documentation for hours - see the sketch below.
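As a rough illustration, here's the kind of adaptation I mean - a hypothetical fetch_url() helper (the name and return value are my own invention, not anything curl generates), which is just the generated skeleton with the hard-coded URL lifted into a parameter:

    #include <curl/curl.h>

    /* Hypothetical helper adapted from --libcurl output: takes the URL
       as a parameter and returns the HTTP status code, or -1 on error. */
    long fetch_url(const char *url)
    {
        CURL *hnd = curl_easy_init();
        long status = -1;

        if (!hnd)
            return -1;

        curl_easy_setopt(hnd, CURLOPT_URL, url);
        curl_easy_setopt(hnd, CURLOPT_FOLLOWLOCATION, 1L);

        if (curl_easy_perform(hnd) == CURLE_OK)
            curl_easy_getinfo(hnd, CURLINFO_RESPONSE_CODE, &status);

        curl_easy_cleanup(hnd);
        return status;
    }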
In the enthusiastic tone of an AI evangelist: Thankfully, now that we have ChatGPT, this feature is obsolete and the curl executable doesn't have to contain half-baked quines anymore!
I'd never replace --libcurl and gengetopt[0] with output from some artificial thingy that babbles semi-truths and doesn't understand what it's doing.
They are deterministic tools that do what you want in a battle-tested way, and they let me sleep well at night - an underappreciated feature of mature programs.
A code generator needs neither hundreds of GPUs nor internet access to work. Not having to train a code generator in ethically questionable ways is a huge plus, too.
Someone should buy libcurl.com and make it return the source code to generate a request to itself. Bonus points for setting the same headers and options as the triggering request.
One convenient thing in browser web developer tools is the ability to copy requests from the network tab as either curl commands or even as JavaScript code. I'd love to see more of this sort of thing!
And since curl is such a "standard" way to represent a request, there are already many tools converting that curl output into native code (like Go), which makes it very fast to reproduce something without manually having to set all the flags. I'm always happy this feature exists in the browser without even needing third-party extensions.
Not sure if I like patching more and more parameters into the executable for gimmicks like this. It would also work if you swapped the executable name instead of adding a parameter. Like ...
This is really nice, but a feature that some applications implement, and that I wish were available in the library, is some way of outputting the curl command line equivalent to the requests being made. In fact, I often find myself MITM'ing myself with Burp Suite and using its "copy as cURL command" feature for exactly this.
I use the browser equivalent of this all the time to generate javascript code for requests. It’s very cool to see this for C and hopefully other languages, too.
> As if the world needed more unsafe C code connected to the internet.
Assuming (as usual) that the code generation is solid because of curl’s reputation: why not trust it? It would be pretty bad if the generator could emit memory-unsafe code. (I don’t know.)
For a trivial example, the code just calls curl_easy_init, a bunch of curl_easy_setopt, curl_easy_perform to do the work, and curl_easy_cleanup. (It leaves comments like "CURLOPT_WRITEDATA set to a objectpointer" in a comment block on params for which "You may select to either not use them or implement them yourself" - that's presumably where you are going to write your own unsafe code :-)
Fair point about memory allocations in C, but alt languages often rely on other people's code, which you implicitly trust to do the same thing. So then it becomes an argument about testing and trust. All the same, you either trust strangers' code or you write your own.
... or "curl calls to python webscraping".
Although having to look for the right version of a library - with different version numbers in the name and therefore different feature sets - was tedious to do manually, AI might just guess fast, and sometimes even right, as to which import to use.