Is there any chance to break spec and allow manual redirect handling?
the fetch API makes a lot of sense in a browser, but imo this is a pretty crucial feature that undici's implementation lacks. [0]
Deno decided to break spec [1][2] so the following code works fine:
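The snippet itself isn't reproduced here, but it's the usual pattern, roughly along these lines (a sketch only; the URL is a placeholder):

```js
// Ask fetch not to follow the redirect, then inspect the target ourselves.
const res = await fetch('https://example.com/old-path', { redirect: 'manual' });

if (res.status >= 300 && res.status < 400) {
  // Deno (and node-fetch) let you read this; per the browser spec you'd get
  // an opaque-redirect response with the headers hidden instead.
  console.log('redirects to', res.headers.get('location'));
}
```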
Oh right, because they don’t want cross-site scripts to be able to see redirected URLs since they could contain secrets. I wish we could completely do away with cross-site scripts and just have nice things!
Yea, so instead it would just encourage more 3rd party libraries doing random things on your site. This is what happens in native. Instead of embedding an ad in an iframe and isolating its damage you embed your ad service's library in your code and it spies on way more activity than it ever could otherwise.
It's pretty funny that the browser, an RPC framework that gives the end user a genuinely decent sandbox, mostly just gets criticized for its flaws. People will then happily install a screen dimmer or "productivity" tool with superuser privileges from a completely untrusted source.
I would also be okay (ish) with explicitly isolated third-party code execution, like your example of an iframe to a different domain. I'm pretty sure that should already be the case with iframes, in fact (you obviously shouldn't be able to embed an iframe to facebook.com on your website and then use your website's JavaScript to inspect the DOM on that facebook.com iframe).
A better idea would be to replace / improve the fetch spec. Handle redirects, set JSON as the default request content type, encode URI components for users, decode response bodies with JSON headers as JS objects, etc.
Right now most developers either write their own library to do these things on top of fetch or another HTTP client or use a third party library from npm.
A new standard would allow us to make http requests out of the box comparable to high level HTTP clients like superagent, axios etc. that have been in use for the last decade, since before fetch was conceived, with whatever benefits fetch provides (I think it's cancellable now?).
A new standard would need buy-in from browsers to actually be a standard. Browsers likely don't have a lot of incentive to make spec changes only Node is interested in (like making the whole spec more complicated from their point of view because of Node's (or Deno's) different security model).
The level of collaboration and good-faith we've been getting from spec bodies like WHATWG is very high as it is and we really don't want to push or abuse it.
Does it have to be a buy-in, though? You could agree on server-specific extensions to fetch and codify them in the same spec (so it doesn't get lost otherwise). Browser vendors wouldn't need to implement it but they will still keep an eye on it when working on future browser apis.
Codifying the redirect behaviour for server/"trusted" envs makes some sense with Deno/Node doing the same thing. I'm not versed enough in standards-organization politics to say if that's viable though.
The rest of your "improvements" aren't that. They're opinions, opinions laser-focused on a JSON-centric API.
I'm sorry to say this, but not all of the web is one big JSON blob.
Some of us have to talk to SOAP (xml) services.
Sometimes it's weird rpc stuff with protobuf.
Or even just pulling down raw binary data and feeding that through an API/library that wasn't built for web streams.
Honestly, the amount of code you need to add over the top of a primitive like the Fetch API to get what you wanted (bar the redirect change) is trivial and minimal.
Thanks for your opinion; however, as mentioned, and as you'll be aware if you've used JavaScript, these are well-worn cowpaths that have been the default in the most popular HTTP clients for a decade now.
As mentioned, existing HTTP clients had this behaviour back in 2012. The designers of fetch chose to ignore these cowpaths in favour of a more minimal implementation, requiring the use of third-party HTTP clients in both browsers and Node for the foreseeable future, simply to get reasonable defaults and avoid repeating oneself.
The value the JS community gets from `fetch` being a unified standard is far, far greater than the benefit you as a developer would get from not having to add 15 extra LOC around `fetch`, which you'll probably hide behind a `fetchJSON` function anyway.
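For what it's worth, that wrapper really is tiny; a rough sketch of the kind of `fetchJSON` helper in question (the name and defaults are just illustrative):

```js
// JSON request/response defaults plus an error on non-2xx; everything else is plain fetch.
async function fetchJSON(url, { body, headers, ...init } = {}) {
  const res = await fetch(url, {
    ...init,
    headers: { 'content-type': 'application/json', ...headers },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return res.json();
}

// const user = await fetchJSON('https://api.example.com/users/1');
```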
I didn't downvote, but my guess is it's primarily a knee-jerk reaction to the following line:
> set JSON as the default request content type
or something similarly subjective.
Downvoting is going a bit too far, but bundling a few individual opinions on defaults alongside broadly accepted (implemented in both Node & Deno after much discussion) features seems... presumptuous.
> ...we'd love to hear from the community what you'd like to see.
In our tests, fetch on Deno was way faster than the undici fetch that has now been merged into Node.js.
I couldn't figure out why that was the case, but forwarding requests over Node's http2 client (instead of undici fetch) then had comparable (but not as fast) performance to Deno's (presumably because our hand-rolled impl lacked connection pooling).
Deno's is probably written in Rust via the tokio crate. This version of fetch in Node.js is written in JavaScript. It would probably get upgraded to C++ in due time once it's mature enough.
Really happy to see fetch() merged into core! Looking at the PR, what are those WASM blobs in Undici and how is it that Node.js accepts a random very large compiled blob instead of asking for the source file + compilation step? https://github.com/nodejs/node/commit/6ec225392675c92b102d3c...
Sorry, def didn't mean it in a bad way! I just wanted to ask/note that in the PR, and with the very little context I have, it seemed like an arbitrary code blob so I was surprised. I'm 100% sure it'll be well documented in e.g. undici itself, and probably in other places.
Except that's exactly what Corepack does, just silently and opaquely on the user's system. At least, the moment it switches from opt-in to opt-out.
Its unverified binaries are just as good as random binaries, and silently replacing any pre-existing yarn/pnpm/whathaveyou without checking if it was perhaps a custom build is not exactly predictable either.
Other than Corepack, though, I think you people are doing an amazing job!
Also worth calling out James, whose work on web streams in Node as well as events/cancellation helped drive a bunch of this.
I can name maybe 50 people who worked on this effort overall in some capacity (I am one of them - spending maybe 40 years on APIs to help enable this).
Note the job isn't entirely done and help is very much appreciated!
Quick question: judging from the commit alone this seems like a rather small change. I assume there's more to it, though. I just wonder how come features like this, which seem kind of obvious to include in the ecosystem, take quite some time to land? I understand the reality is more complex perhaps, so I am genuinely curious.
I hope you realize I mean no disrespect; this will greatly improve the daily work for me since I use Node at work every day.
The commit adds undici (another Node.js project at https://github.com/nodejs/undici ) as a dependency and exposes its `fetch` - the code changes you see are probably just adding a flag :)
> I just wonder how come features like this that kind of seems obvious to include in the ecosystem takes quite some time to land?
Ok cool and thanks for the clarification, I imagined it was more to the topic than I realized.
Last question I have is: will this be included in the next LTS version, or how do experimental features usually progress, for someone who is unfamiliar?
Can't wait until I can use fetch in my codebase without any additional dependencies!
I had a quick look; the implementation is interesting.
Node uses the undici JS library. Undici includes a C HTTP parser that is used via WebAssembly. The parser is generated from rules that are implemented in JS/TS using the llparse framework.
A native C binary there in the middle is quite surprising.
My guess is that this adds stuff to the global object, which has backwards-compatibility concerns.
And with every public API: Once you add it, you can't really hope to ever change it and even bug-fixes could be breaking some user's code (because they relied on the bug), so you have to be very careful to ship your public API as bug-free as possible.
Yes I could imagine and I have experience of developing apis with "bugs" that you kind of need to keep since people come to depend on it working a certain way.
But I can't help feeling like Node is progressing kind of slowly compared to other languages / tools. This is just a personal impression from a bystander who is not really into the actual progression though. I have a feeling that Node used to push new features all the time but in the last couple of years has stagnated somewhat. Stuff like imports etc. is still not really here.
I understand that developing an api that has such huge userbase as node is a massive undertaking that probably has issues that I can't imagine but it doesn't really help me in my day-to-day experience with it.
I think some of it is because Node doesn’t have very many ways to let people know about potentially breaking changes. A compiler (like Elixir) has an opportunity to communicate potential breaking changes to the user before the program actually runs, and a static type system (like Rust) makes it easy to detect such broken code and reject it outright instead of silently changing behavior. It’s harder for Node: where would the diagnostic even go? The console? Some apps use Curses, so this will break them, and many of them send their own logs through something other than the console, so nobody’s actually watching it.
Node also isn’t actually a language, so there’s a lot of stuff that Node developers get in new version, but it’s actually being done in the V8 project.
Meh, the JS/TS ecosystem breaks things all the time. As of 2 weeks ago, at least, you couldn't run TypeScript with ts-node unless you explicitly enabled highly experimental loader flags.
And this is no obscure corner case, ts-node is huge
fetch is a "modern" (ES6 era, so not brand new) API for making HTTP requests, to replace XMLHttpRequest (the older method that IE pioneered).
It's promise based (thus easier to integrate into code than older callback styles), and designed to give some better options around CORS and handling responses.
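For example (a minimal sketch; the URL is a placeholder):

```js
// The whole request is a promise chain, so it composes cleanly with async/await
// instead of wiring up XMLHttpRequest event callbacks.
const res = await fetch('https://example.com/data.json');
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
const data = await res.json();
console.log(data);
```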
> Why it should be in node?
There is a push to adopt some certain "web apis" (APIs that emerged in web browsers) to increase compatibility, reusability and reduce cognitive load when working with both backend and frontend javascript.
Having a common low level call like fetch() means that libraries that are built atop it (SDKs to talk to services, or apply common middleware like JSON:API, oAuth etc) can be shared between node/deno and browsers.
Fetch is a popular browser api to download and request stuff from the internet. Having the same api both in browser and in node allows easier re-use of the same application code or libraries on frontend and backend.
I do have a question that's only tangentially related to fetch: I was wondering what Node's stance is on the seemingly increasing gap between the NodeJS and browser implementations of the JavaScript engine.
It's still very common to see Node modules that use require(), for example, but would otherwise work flawlessly in a browser where there is no require() support (without using shims and other libraries).
> I was wondering what Node's stance is on the seemingly increasing gap between the NodeJS and browser implementations of the JavaScript engine. It's still very common to see Node modules that use require()...
"Seemingly increasing gap" - how did you come to that conclusion? If anything the gap has reduced since the availability of ES modules. I see more and more libraries preferring ES modules, but ultimately that's up to the library developers.
Node has supported ESModules (`import`) for a while and you can use them (either by naming your files `.mjs` or setting `type: module` in your package.json).
There is a solid interop story between require/import and you can mix if you want.
`require` is going to be supported "forever" so code doesn't break and both module systems work.
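A minimal sketch of what that looks like (the file names are just examples):

```js
// math.cjs (CommonJS)
module.exports = { add: (a, b) => a + b };
```

```js
// main.mjs (ESM) - picked up as a module because of the .mjs extension
// (or set "type": "module" in package.json and use .js).
// Importing CommonJS from ESM works out of the box; module.exports becomes the default export.
import math from './math.cjs';
console.log(math.add(1, 2)); // 3

// Going the other way, CommonJS code can still load ESM lazily:
//   const mod = await import('./main.mjs');
```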
As for the general question: Node is committed to supporting modern JavaScript and to not allow such a gap to be created. If you see a place Node doesn't do a great job at that please tell us!
However, note that ESM in Node comes with drawbacks that prevent end-users from relying on it in various situations. Those will be mostly solved once loaders become stable, but until then it's still advised to ship packages as both CJS and ESM.
Huge thanks to the node team for adding this, I’ve been wanting fetch in node for years now! Installing node-fetch for every project was getting kind of old haha
Thanks a ton for this. The Fetch API is elegant, and having it work natively in Node will remove so many edge cases we currently need to handle. Thanks a ton; please know that this will positively change our lives and those of tens of thousands of other developers worldwide.
You can upload a stream and monitor that for upload progress - so sure.
If you upload a string for example you would have to take an extra step to get upload progress (make it into a stream and monitor that).
There is additional (unrelated) talk in the fetch standard itself to provide this sort of functionality as part of `fetch` which would save the middle step.
You can use timeouts through `AbortController`/`AbortSignal` - eventually it'll even be built in with `AbortSignal.timeout`, which is currently in the works at the spec level.
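Until `AbortSignal.timeout` is available, the manual version looks roughly like this (a sketch; the timeout and URL are placeholders):

```js
// Abort the request if it hasn't settled within 5 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000);

try {
  const res = await fetch('https://example.com/slow', { signal: controller.signal });
  console.log(res.status);
} finally {
  clearTimeout(timer); // don't leave the timer pending once the request settles
}
```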
Every Node.js dev should be thanking the Deno creators. It has forced them off of their "this is how Node does it" complacency. Most improvements I notice and care about on Node these days are all things that Deno already supports (fetch, ES modules, async std lib uses promises, etc).
The Web Platform APIs aren't perfect, but they are better, and more thoroughly thought out than Node's.
What is "this is how Node does it" complacency you speak of?
Node has been talking about Fetch since at least 2018 _before_ Ryan even announced Deno. This is true for promises, ESModules etc. Ryan _attended those meetings_ in the Node collaborator summit.
Don't get me wrong I am thankful for Deno (and a contributor!) but:
- It's a lot easier to make changes when the project is new and you don't have a lot of users (which is great for Deno) whereas Node has a whole (huge) ecosystem to consider on every change.
- It's a lot easier to make changes with venture capital and funding to hire a bunch of people to work on it full time vs. a project with a "regular" MIT license where the code is owned by the community and not a company.
Those are not complaints about Deno, I like Deno (and not just Ryan - Lucas, Kit, Ben and the other people are nice and helpful, and the project is really nice).
It's just to explain very little of Node's innovation is (at the moment) driven by Deno.
My biggest issue with Node when I was working with it briefly was that I couldn't do 'import' statements like webpack supports. Are they supported now too?
I'm not a web developer so I had a very hard time understanding why there are so many different type of imports in JavaScript like require, import, umd, amd, etc and which one works in browser and which one works in node?
Also why do so many libraries have this strange 5 line header code for this umd, amd, business. Is that to make their packages work with nodejs?
Does anyone who knows enough JavaScript point me in the right direction about it. I find all this very confusing.
> I'm not a web developer so I had a very hard time understanding why there are so many different type of imports in JavaScript like require, import, umd, amd, etc and which one works in browser and which one works in node?
If I remember my history correctly, it's because when Node first came out, there was no import system in JS, let alone a standardized one; there was no sense of scoping (everything global), nothing about dynamic or lazy loading of dependencies, no tree shaking / removing unused code, and even going to the definition of something in an editor was difficult.
NodeJS adopted CommonJS (I don't recall if they invented it), which is a module and dependency system based on require() and exports. It was only a few years later when the JS standards body settled on import; by then, the JS (dependency / module management) world was already very divided.
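For a concrete picture of the two systems that ended up mattering most (a minimal sketch; the file names are made up):

```js
// config.cjs - CommonJS, the style Node adopted early on:
const fs = require('fs');
exports.readConfig = (path) => JSON.parse(fs.readFileSync(path, 'utf8'));
```

```js
// config.mjs - ES modules, the standard that arrived later (and what browsers understand):
import fs from 'fs';
export const readConfig = (path) => JSON.parse(fs.readFileSync(path, 'utf8'));
```

The AMD/UMD headers mentioned above were interim conventions for loading modules in browsers before `import` existed; a UMD wrapper is basically a few lines that detect at runtime whether AMD, CommonJS, or plain globals are available and register the module accordingly, which is why so many libraries ship with that boilerplate.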
"supported" is somewhat of a misnomer IMHO. It's only supported if the other tools you're using support ESM. Otherwise, it's not. And it's really annoying when libraries choose to go full ESM because it breaks things like tests and deployments in weird ways.
But why? This makes it much harder to ensure that identical code works in the browser and in Node.js, to the point where I sometimes just can't use Node.js. And I don't use package.json, so having to name files ".mjs" is a really painful restriction.
Browsers will happily accept `.mjs` files; they don't actually care about file extensions, only about the `Content-Type` and `type=module` on the script tag :]
There is absolutely no need for a package.json when using ES6 imports without a build step. Package.json doesn't do anything when I'm loading the code in a browser, why would I have to use it when I want to run the same JS code with node?
this is a silly point. it's a different runtime, there are also different apis available - as is the point of this article. node is not an internet browser. but ok
If you're hosting the browser-based version over a server like Apache2 (to host the files), you could add a `RewriteRule` to redirect queries to `.js` resources to their `.mjs` counter-part files.
Import is slowly becoming the standard, but working with TypeScript professionally, modules have easily been the most annoying part of the Node experience.
It works pretty well, and then you need to use something like Azure Functions and then suddenly it doesn’t. For various reasons.
My most recent example was using lodash, which works perfectly fine with import with typescript targeting esnext in node16, but then needs to be setup with require when you target an azure function and commonjs. I mean, maaaybe you could avoid it by using mjs, which is currently sort of needed to move into the node16 functionality in azure functions, even though they sort of run node16 just fine in part of them without it, and you sort of don’t want to use mjs files and so on.
I’m sure it’ll get there in a few years, but it is no doubt annoying to have to fight the toolset ever so often. Over something that feels like it should just be working.
That last part isn’t really exclusive to node these days though, is it?
I had the same problem, so for serverless functions I am using a standalone webpack config which transpiles functions into the format supported by the cloud.
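A minimal sketch of that kind of config (the paths, names, and exact options here are placeholders, not my actual setup):

```js
// webpack.config.js - bundle the function into plain CommonJS the cloud runtime can require().
const path = require('path');

module.exports = {
  target: 'node',             // keep Node built-ins external and emit Node-flavoured output
  mode: 'production',
  entry: { handler: './src/handler.js' },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].js',
    libraryTarget: 'commonjs2', // export the handler the way most serverless hosts expect
  },
};
```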
In short, node has imports, browsers have imports, but node-related toolkits do not support it, and people still webpack bundles because development and deployment processes are a thing, and it doesn’t matter much which module system a “binary” bundle uses in the end.
Some people tried to force ESM adoption by making popular modules ESM-only for no technical reason, but the tools are not there yet, and it only annoyed (predictably) half of the internet. Because even if you are pro-ESM or indifferent to it, you can't do much to migrate your projects' build pipelines. If you see TypeScript and webpack in a project, you won't see "ESM" in there. It either doesn't work or is too fragile for production.
People who claim it’s done say so because they are using particularly unaffected stacks (including no stack). Idk which way to promote ESM would be the most correct, but this one is too deceptive.
Going X-only for a political reason is not a good one, especially if the standard is of a different platform, which had no X at all back when the de facto standard was set.
> Does anyone who knows enough JavaScript point me in the right direction about it. I find all this very confusing.
There isn't a straightforward solution, but the closest for me is the combination of a transpiler (Babel) and a bundler (Webpack).
A common criticism of Node/JavaScript projects is the boilerplate setup required. As far as I know, there isn't an IDE that takes care of this part for you, whereas your experience developing outside of the web might include one (I like to think of it as akin to starting a project from scratch with C++, gcc, and make).
Some larger projects do have scripts that handle a lot of the boilerplate for you, such as Create React App (https://github.com/facebook/create-react-app), but that is for a specific use case.
I've been naming the files with the .mjs file extension, which allows the import keyword and top-level await. No special flags needed in the latest version. I use JS, not TS.
I know this comment is not very helpful, but the lack of a module-system and the lack of threading are unfortunately the biggest issues that JavaScript had from the start.
One thing I hate about fetch() is that you can't manually follow redirects. Using { redirect: 'manual' } doesn't expose the Location header of the redirect, so it's essentially useless. I know that node-fetch fixed this issue, so I hope the official Node fetch() does not have the same problem.
I love fetch, except for the way that it combines headers. If more than one of the same header is set (such as multiple set-cookie headers), it combines them all into a single comma-separated string. I know this is allowed in the HTTP spec, and it's probably even a sensible default. But the fetch spec doesn't allow any access to the raw headers, so there's no straightforward way to get the original uncombined headers back. Set-Cookie headers, in particular, often contain commas in their expiration dates, so just splitting around `, ` will lead to problems.
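To make that concrete (a sketch; it assumes an implementation that exposes `set-cookie` through the Headers interface at all, which browsers don't):

```js
const res = await fetch('https://example.com/login', { method: 'POST' });

// If the server sent two Set-Cookie headers, Headers.get() hands back one
// comma-joined string, e.g.:
//   "a=1; Expires=Wed, 09 Feb 2022 10:00:00 GMT, b=2; Path=/"
const combined = res.headers.get('set-cookie');

// Splitting on ', ' looks tempting but breaks on the comma inside the
// Expires date, and the spec gives no way to recover the original headers.
console.log(combined ? combined.split(', ') : []);
```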
There’s an NPM package containing this implementation called ‘undici’ that you can install in previous versions of Node; the package.json claims support back to 12.x.
However the fetch implementation itself is documented as unstable and only supports Node 16.x. I believe this is because it is not quite spec compliant yet, so there is latitude for breaking changes that make it more spec compliant.
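If you want to try it on an existing project today, it's roughly this (the URL is a placeholder, and as noted the fetch export itself is still documented as unstable):

```js
// npm install undici
import { fetch } from 'undici';

const res = await fetch('https://example.com/api/health');
console.log(res.status, await res.text());
```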
Not every browser API is an instant candidate to be added to Node's core. If there isn't a big performance hit from doing it as a user library, or a lack of functionality making it impossible to do as such in the first place, then it's rare it'll be brought into core right away, if at all.
It's nice to see the really popular ones make it in though, makes for fewer dependencies when doing browser compatible projects and also allows one to be lazy and not learn the Node way of doing it. Maybe we'll see websocket go into core some day too.
Eh, I mean I get it JS lacks a real standard library but at the same time you don't see HN fill up with people saying C is all over the place because when they went to implement an HTTP call glib doesn't mirror libuv or C++ is all over the place because POCO doesn't mirror Boost and so on.
The "all over the place"-ness tends to come more from people who act like changing to a newer library is the only way to code something new not from the lack of JS forcing a single API be used across every app regardless of type.
Intercommunicating backend services isn't new, but having an HTTP API call another one from the backend is a case that used to be more normally accomplished by calling both APIs from the frontend. The slow ooze of browser-oriented thinking onto the backend took time. Existing backend engineers, who thought it a harebrained scheme, had to eventually wear down, and give up.
So when you write an application that integrates with an external service via an HTTP API, you think the normal way to architect such an application is to make all API requests from the browser? And you think that making such calls from the backend is a new and "harebrained" idea that backend devs have been forced to adopt against their better judgement?
I'm curious to know what position you occupy in our industry?
Yes, I think the normal way to integrate an app with an HTTP API is via the browser.
Calling an external API from a backend that integrates directly with a client app has been rare for me. Payment processing is all that comes to mind, and that was usually through an imported API package. I've made data-scraping tools, but they were unique projects, not part of an app backend. I am a senior full-stack webdev.
Has it ever occurred to you that your users can (a) see all requests you make to third party services, including your credentials and client secrets, (b) modify and replay those requests at will and (c) spoof responses from those services?
I think the misunderstanding is, beyond special cases like a payment processing, why would you be calling third party services from your service? When I think of calling a 3rd party HTTP API, I am thinking of public APIs, like say a widget that shows the weather in a users area.
I'm going to take a wild stab in the dark here and guess that despite having the title "senior", you have little or no experience in commercial software development? Because what you dismiss as "special cases" covers a vast amount of the plumbing of the modern web. In modern apps even something as simple as storing and retrieving files requires HTTP calls at some point in the stack, and such calls should never under any circumstances be initiated from someone else's computer.
Which brings us back to the original question you responded to above. Contrary to your odd views of how software is architected, issuing HTTP requests from the backend is both normal and common enough to be banal. Node adopting the fetch API is generally a good thing as it allows JS developers to code against the same interface whether they are working from the frontend or the backend.
Sure - but that weather/map/whatever widget might be paid, and unless you want to pay for others' API usage, you should make the relevant API calls from your backend, not frontend. Also consider caching 3rd party service replies, etc.
In general "when we feel it's ready". For some features I've worked recently (like `flatMap` and friends for streams) I think it'll take about a year.
For `fetch` it will take us around a year or two probably.
I wrote more details on the PR itself but I (and others) basically feel strongly that it should be as close to compliant as possible (Deno have a nice list which they happily shared with us of how they diverge and that seems reasonable).
Basically there is a bunch of stuff that should happen first:
- We need to run (and pass) the web platform tests.
- We want to go over the spec (again) and check compliance and add more tests and fix bugs.
- We want community feedback on the implementation and to
improve the DX to be better (while not sacrificing spec compliance).
- We want better support for stuff like `File` in core.
- Web streams need to go out of experimental first.
This was a _long_ road that had many stepping stones (like EventTarget, AbortController, Blob, ReadableStream etc). It's important to get it right.
Worth mentioning the fact it's experimental doesn't mean it's not safe to use in simple code in this case - it just means if you make your code behave in a way that diverges from the specification we may change our implementation to match the specification and break your code.
Anything like `await fetch('./someUrl').then(x => x.text())` should be fine.
Yes, but not in that PR since `FormData` needs to behave differently as part of the platform but there is intent to support it before moving from experimental.
Same reason the WebSocket API isn't supported: Node already had lower-level APIs, and browser APIs moved over 1:1 don't necessarily make sense. In this case it's a slightly extended/non-standard Fetch, and enough of the "it doesn't need to be in core, it can be a normal library" folks thought it'd be useful enough. And someone went through the work of actually making the native implementation.
I actually think both the Websocket API and fetch make sense. There's nothing more frustrating than adding a bunch of checks to a SvelteKit project to see if you're on the server or not.
I wouldn't be surprised if WebSocket made it into core eventually tbh, the above are just examples of why such things haven't immediately. If WebSocket does it'll probably have more modifications/deviations than say Fetch did here though as well as an entire server side half. `ws` is well loved but there are 2 optional native binary addons which greatly increase performance, that alone makes a good enough case for it to end up in core - just a matter of someone finding it interesting enough to make a solid PR to do so instead of just using the `ws` library as is.
However, I feel the fetch API is completely botched because it lacks timeout support. I have cobbled something together for https://github.com/Backblaze/gists/pull/8 but gosh. I really hope all that is actually not necessary :/
Also, keep in mind that the linked solution does not abort the web request. It only aborts waiting for the response in the JS code. The browser does not close the connection.
First: not a stupid question - the release cadence, multiple release lines and other minutia of a project like Node are not something I'd expect users to be intimately familiar with!
This _just landed today_. You can get it by building from master:
I've used Fetch in the past, great lib. I'm curious why Node core has decided to make this an official part of Node? From one perspective there could be a strong argument that this should never be a part of Node, as it is an external library and not a building block of the language? It almost feels like feature creep for the core language? I know that's not the intent, but I want to understand the logic behind this decision :)
FWIW, this was the answer I was hoping to get, grabbed from a separate comment :)
"With fetch available in node, we have a single, standardised, promise-based HTTP client Interface available on all (more or less common) JavaScript platforms.
This is great news for an ecosystem traditionally extremely fragmented and may lead the way to more consolidation in the JS library space. "
I second this, since fetch is a wrapper around xhr (though doesn’t have to be because node is a non-restricted runtime), and it diverged already for redirect semantics. It’s news, but is it big?
As someone who has been working almost exclusively on Node.js applications for years now, this is big.
Front-end developers have, for years, enjoyed this wrapper around the clunky XHR API. It is simple, intuitive, and it comes _free_ in the browser environment (and was easily shimmable before it was standard). Additionally, since the deprecation of the popular request[0] library a while ago, there has been a gap in The Right Way to do HTTP requests without going into the low-level API that 'http(s)' provides. Fetch is promise-based (enabling seamless async/await) and, most importantly, standardized.
Basically, fetch provides a meaningful abstraction atop each platform's implementation of http-request-doing, further unifying server-side and client-side patterns and idioms.
As a person who only occasionally writes JS and has zero interest in keeping track of the differences between client-side and server-side JS - fetch has been a big friction point for me in the past, it's big news to me.
It ships in browsers, but not because it's a JS standard (ECMA); rather it's a web standard (WHATWG). Almost everything in JS is an external library, as it doesn't have a true stdlib to go with the language, just a core language and implementation-specific APIs like the browser's Fetch or Node's sockets.
Hopefully WebSockets (W3C/IETF) will get embedded in Node's core one day as well.
Comments like “it’s standard” do not answer the original question:
“I'm curious why Node core has decided to make this an official part of Node?”
The standard describes many functions and classes. What makes fetch more land-able than e.g. the event system, of which Node has an incompatible implementation? Or feature detection via navigator? Why this one? What were the main point(s) of adding fetch specifically?
If something is standard, and it makes sense in the context of a backend, then it's convenient to just use the same API on both ends. Navigator makes almost no sense on the backend unless maybe it worked as a dummy polyfill object.
Basically, because fetch isn't a great API for servers, it took a while to get consensus on actually landing it for the interoperability/simple-API value.
Then it took a while to get consensus it's fine to do even if we can't implement the standard fully and diverge from it on stuff like CORS (like Deno does).
Then there was a bunch of work adding APIs like `EventTarget` and `AbortSignal` to Node.js, which are quirky-ish web APIs.
As a side note just to show positive collaboration: that work made me make maybe 10 PRs fixing things in EventTarget in Deno - and Deno helped Node a bunch now when landing fetch.
Then web streams and blobs and a few other things as well as the undici work in the background.
Then undici-fetch by Ethan, merge into Undici and then the PR by Michael - and it's still not done :)
First, this is awesome and congratulations! Second this sounds like a lot of work! Can you comment why was it so hard to add fetch to node, yet packages like “node-fetch” have existed for some time and seem to implement fetch rather easily? Only asking from a curiosity perspective.
node-fetch (which is great btw!) implements fetch "reasonably" as in stuff like `fetch('./foo').then(x => x.json())` works but it's very far from a "real" fetch.
Some people argued users would still like it and there was an attempt[1] by Myles but at the end of the day - the changes were pretty big.
For example: a response body in fetch is a web stream (a whole new stream type) and to cancel you use an AbortController (web type) whose signal is an EventTarget (another web type). `node-fetch` uses Node streams instead, which is very reasonable, but not something a platform can "get away" with while still calling it fetch.
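To illustrate how those web types hang together (a sketch; the URL is a placeholder):

```js
const controller = new AbortController();
const res = await fetch('https://example.com/big-download', { signal: controller.signal });

// res.body is a WHATWG ReadableStream (a web stream), not a Node stream:
const reader = res.body.getReader();
const { value, done } = await reader.read(); // value is a Uint8Array chunk
if (!done) console.log(`first chunk: ${value.byteLength} bytes`);

// Cancellation flows through the AbortController, whose signal is an EventTarget:
controller.abort();
```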
Fetch evolved after promises had been standardized as the way of doing async, whereas Node is older than that. Node originally used callback style for its core API and had another method for making HTTP requests. Since fetch was developed in the browser, OSS contributors have filled the void of a promise-supporting HTTP request library with packages like axios and isomorphic-fetch. Only now that the standards have been finalized and we have reached consistent browser support has the core team made it a priority to include it in core.
Good that some people are happy about this, but it really is pointless to post a link to "lib: add fetch" followed by code diffs. HN should have a requirement that there be an actual post, rather than a discussion starter for those in a particular community. All you'd have to do is type in a brief text post with an explanation and then link to the code diffs for those interested.
There will be a blog post on this in the Node.js blog and official media communication. This _just_ landed today and I suspect the HN crowd are more in the "get news early" camp and not "wait for the official press release" camp.
Funny how my brain works, I thought it was there already, but I must have got so used to having it as a dependency I forgot that it was a dependency not built in. To be fair I don't "Node" all the time! It's on and off.
Can you set more than one cookie in the request and read more than one cookie in the response? I was shocked when I learnt that's not supported by the spec.
This is an understatement. With fetch available in node, we have a single, standardised, promise-based HTTP client Interface available on all (more or less common) JavaScript platforms.
This is great news for an ecosystem traditionally extremely fragmented and may lead the way to more consolidation in the JS library space.
NodeJS APIs are traditionally not promise-based (although nowadays some do offer a promisified option), so I'm not especially excited to have a promise-based API :)
It's great news for people doing both frontend and backend, I guess, but for people doing NodeJS development it's just one more option for making HTTP requests (which I guess is going to be very similar to the node-fetch module that has existed for a while, without any special consolidation effect).
node js is a web software platform over a decade old that does not include a stable API for making a web request so there are a bunch of community libraries for making a web request, and which one is "best practice" has changed like four or five times through the years. this is a new api for making a web request. yayy progress.
I love that the most popular language's core lib now brings a trivial bit of code available in every other platform, and it's a celebrated super achievement.
Not to mention the Fetch API isn't available on every other platform... maybe you're thinking of the general ability to make HTTP requests, but nodejs had that from the start.
It’s not trivial and it’s not groundbreaking. Node has had an HTTP client since its inception, this is just a different API. It’s “great” just because it’s a decent abstraction that’s also available in browsers.
The issue in the JavaScript ecosystem for this kind of stuff is that it runs on two very different environments and they need time to find solutions that work for all the parties involved.
Why can't you use TypeScript with node? It's not built in, but it's easy enough to use by building or running it with ts-node. Also, once you are using TypeScript, you have import syntax support. Deno has chosen a different path, but there are definite trade-offs.
Yup, the point I'm making is it's built in. No hoops, no Babel. I find myself gravitating towards Deno before Node because with Node I know I'll have to do some hoop-jumping to make it work.
Seriously. I maintain an open-source Node.js CLI utility and was shocked that import/export syntax was only achievable if you used a transpiler like Babel. Such overkill for something that's been in the JS spec for nearly a decade.
Sort of. There's only been one instance for me where this has caused any issues... I ended up just writing my own version of the missing tool. Deno almost has parity with Node at the moment though: https://deno.land/std@0.123.0/node.
This is still experimental and we'd love to hear from the community what you'd like to see.