Direct Sockets API in Chrome 131 (chromestatus.com)
200 points by michaelkrem 40 days ago | 159 comments



I think a lot of people don't realize it's possible to use UDP in browsers today with WebRTC DataChannel. I have a demo of multiplayer Quake III using peer-to-peer UDP here: https://thelongestyard.link/

Direct sockets will have their uses for compatibility with existing applications, but it's possible to do almost any kind of networking you want on the web if you control both sides of the connection.
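
For anyone curious, a minimal sketch of the setup (signaling is elided; you still need some out-of-band channel, e.g. a WebSocket, to exchange the offer/answer and ICE candidates):

  // ordered: false + maxRetransmits: 0 asks for unordered,
  // unreliable delivery over SCTP/DTLS/UDP -- the closest the
  // web gets to raw UDP semantics.
  const pc = new RTCPeerConnection();
  const channel = pc.createDataChannel("game", {
    ordered: false,
    maxRetransmits: 0,
  });
  channel.onopen = () => channel.send("hello");
  channel.onmessage = (e) => console.log("got:", e.data);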


> Direct sockets will have their uses for compatibility with existing applications...

In fact, runtimes like Node, Deno, Bun, Cloudflare Workers, and Fastly Compute run JS on servers and would benefit from standardization of such features.

  [WinterCG] aims to provide a space for JavaScript runtimes to collaborate on API interoperability. We focus on documenting and improving interoperability of web platform APIs across runtimes (especially non-browser ones).
https://wintercg.org/


This slowly alters the essence of The Internet: it restores the permissionless substrate needed to run self-organising systems like Bittorrent and Bitcoin. This is NOT in Android, just isolated Web Apps on desktops at this stage[0]. The "direct socket access" creep moves forward again. First, IoT without any security standards. Now Web Apps.

With direct socket access to TCP/UDP you can build anything! You lose the constraint of JS servers, costly WebRTC server hosting, and the lack of listening sockets in WebRTC DataChannel.

<self promotion>NAT puncturing is already solved in our lab, even for mobile 4G/5G. This might bring back the cyberpunk dreams of Peer2Peer... We bought 40+ SIM cards for the big EU 4G/5G networks and got carrier-grade NAT puncturing working[1]. The demo blends 4G/5G puncturing, TikTok-style streaming, and a Bittorrent content backend. Reading the docs, these "isolated" Web Apps can even do SMTP STARTTLS, IMAP STARTTLS and POP STLS. wow!
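
For reference, this is roughly what the proposal's API surface looks like (an untested sketch based on the WICG explainer; only available inside an Isolated Web App, and the shape may still change):

  // Hypothetical mail host; port 587 is SMTP submission.
  const socket = new TCPSocket("192.0.2.10", 587);
  const { readable, writable } = await socket.opened;

  const writer = writable.getWriter();
  await writer.write(new TextEncoder().encode("EHLO client.example\r\n"));

  for await (const chunk of readable) {
    console.log(new TextDecoder().decode(chunk));
  }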

[0] https://github.com/WICG/direct-sockets/blob/main/docs/explai... [1] https://repository.tudelft.nl/record/uuid:cf27f6d4-ca0b-4e20...


Hello, I wanted to say I've been working on a peer-to-peer library and I'm very much interested in your work on symmetric NAT punching (which, as far as I know, is novel). Your work is exactly what I was looking for. Good job on the research; it will have far-reaching applications. I'd be interested in implementing your algorithms, depending on the difficulty, some time. Are they patented or is this something anyone can use?

Here's a link to an overview of my system: https://p2pd.readthedocs.io/en/latest/p2p/connect.html

My system can't handle symmetric-to-symmetric. But it could in theory handle other NAT types to symmetric, depending on the exact NAT types and delta types.


I read OP's thesis (which focuses on CGNAT), and one of the techniques discussed therein is similar to Tailscale's: https://tailscale.com/blog/how-nat-traversal-works

  ...with the help of the birthday paradox. Rather than open 1 port on the hard side and have the easy side try 65,535 possibilities, let’s open, say, 256 ports on the hard side (by having 256 sockets sending to the easy side's ip:port), and have the easy side probe target ports at random.
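
Back-of-the-envelope version of that math (a sketch; the probe count is illustrative):

  // Each random probe hits one of the 256 open ports with
  // probability 256/65535, so after k probes:
  //   P(success) = 1 - (1 - 256/65535)^k
  const pHit = 256 / 65535;
  const pSuccess = (k: number) => 1 - Math.pow(1 - pHit, k);
  console.log(pSuccess(1024).toFixed(3)); // ~0.982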


This comment section has been the most useful and interesting thing I've seen for my own work in a very long time. And completely random, too. Really not bad. To me this represents the godly nature of this website, where you have extremely well informed people posting high quality technical comments that would be hard to find anywhere else on the web. +100 to all contributors.


Indeed, Tailscale was the first to realise this.

We added specific 4G and 5G mobile features. These carrier-grade boxes often have non-random port allocations. "By relying on provider-aware IPv4 range allocations, provider-aware port prediction heuristics, high bandwidth probing, and the birthday paradox we can successfully bypass even symmetric NATs."


> By leveraging provider-aware (Vodafone,Orange,Telia, etc.) NAT puncturing strategies we create direct UDP-based phone-to-phone connectivity.

> We utilise parallelism by opening at least 500 Internet datagram sockets on two devices. By relying on provider-aware IPv4 range allocations, provider-aware port prediction heuristics, high bandwidth probing, and the birthday paradox we can successfully bypass even symmetric NATs.

U mad. Love it!


What if someone finds your IP address and sends you a bunch of crap? It would be very easy to use someone's entire monthly data allowance.

Plus, it only works if you can afford and have access to cell service, and in those cases you already have access to normal Internet stuff.

Unless cell towers are able to route between two phones when their fiber backend goes down. That would make this actually pretty useful in emergencies, if a tower could work like a ham repeater, assuming it wasn't too clogged with traffic to have a chance.


I don't understand the topic deeply. Is this futureproof, or likely to be shut down in a cat and mouse game if it gets widespread, like it needs to for a social network?


Can you explain further... how does this improve upon WebSockets and Socket.IO for Node?


Without a middleman you can only use a WebSocket to connect to an HTTP server.

So, for instance, if I want to connect to an MQTT server from a webpage I have to use a server that exposes a WebSocket endpoint. With direct sockets I could connect to any server using any protocol.


You can also use WebTransport, with streams for TCP-like and datagrams for UDP-like transport: https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...
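
A rough sketch of both modes (placeholder URL; the server has to speak HTTP/3 and WebTransport, and this is client-server only, no peer-to-peer):

  const wt = new WebTransport("https://example.com:4433/wt");
  await wt.ready;

  // Unreliable, UDP-like datagrams:
  const dg = wt.datagrams.writable.getWriter();
  await dg.write(new TextEncoder().encode("ping"));

  // Reliable, TCP-like bidirectional stream:
  const stream = await wt.createBidirectionalStream();
  const writer = stream.writable.getWriter();
  await writer.write(new TextEncoder().encode("hello"));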


Not peer to peer though presumably?


There was some traction & interest in https://github.com/w3c/p2p-webtransport but haven't seen any activity in a while now.

I'm pretty certain a whole industry of p2p enthusiasts would spring up, building cool new protocols and systems on the web in rapid time, if this ever showed up.


Perfect timing with realtime AGI happening. We need lots of focus on realtime streaming protocols.


Yes and not in Safari yet either. Someday I hope that all parts of WebRTC can be replaced with smaller and better APIs like this. But for now we're stuck with WebRTC.


This is a very early draft I'm following: https://wicg.github.io/local-peer-to-peer/


WebRTC depends on some message transport (e.g. HTTP) existing between peers before the data channel can be established. That's far from equivalent in capability to direct sockets.


Yes, you do need a connection establishment server, but in most cases traffic can flow directly between peers after connection establishment. The reality of the modern internet is even with native sockets many if not most peers will not be able to establish a direct peer-to-peer connection without the involvement of a connection establishment server anyway due to firewalls, NAT, etc. So it's not as big of a downgrade as you might think.


That changed (ahm.. will change) with IPv6. I was surprised to see that I can reach residential IPv6 LAN hosts directly from the server. No firewalls, no NAT. This remains true even with abusive ISPs that only give out /64 blocks.

That said, I agree that peer to peer will never be seamless, thanks mostly to said abusive ISPs.


> I was surprised to see that I can reach residential ipv6 lan hosts directly from the server. No firewalls, no nat

No NAT, sure, that's great. But no firewalls? That's not great. Lots of misconfigured networks waiting for the right malware to come by...


I sure hope not; this will bring in a new era of internet worms.

If some ISPs are not currently firewalling all incoming IPv6 connections, it's a major security risk. I hope some security researcher raises noise about that soon, and the firewalls go closed by default.


My home router seems to have a stateful firewall and so does my cellphone in tethering mode - I don't know whether that one's implemented on the phone (under my control) or the network.

Firewalling is back under the user's control in most cases: the other day we on IRC told someone how to unblock port 80 on their home router.


It has kind of already begun.


Has there been a big IPv6 worm? I thought that the defense against worms was that scanning the address space was impractical due to the large size.


I don't think they scan the entire space. But even before that there were worms abusing Bonjour/UPnP, which is what Chrome will bring back with this feature.


IPv6 isn't going to happen. Most people's needs are met by NAT for clients and SNI routing for servers. We ran out of IPv4 addresses years ago; if it was actually a problem, it would have happened then. It makes me sad for the p2p internet, but it's true.


> If it was actually a problem

It became a problem precisely the moment AWS started charging for IPv4 addresses.

"IPv4 will cost our company X dollars in 2026, supporting IPv6 by 2026 will cost Y dollars, a Z% saving"

There's now a tangible motivator for various corporate systems to at least support IPv6 everywhere, which was the real IPv6 impediment.

Residential ISPs appear to be very capable of moving to v6; there are lots of examples of that happening in their backends, and they've demonstrated already that they're plenty capable of giving end users boxes that just so happen to do IPv6.


Yes and setting up a single IPv4 VPS as load balancer with SNI routing in front of IPv6-only instances solves that.

Most people are probably using ELB anyway.


What do you mean not going to happen? It's already happening. It's about 45% of internet packets.


The sun is about 45% of the way through its life.


Not happening for 55%.

Try to connect to github.com over IPv6.


It doesn't work now so it's never going to work?


If it doesn't work for a website as large and technically forward as GitHub in 2024, the odds are not looking good.


GitHub might work someday. Wide enough adoption that you can host a service without an IPv4 address will never happen.


Honestly, it could be a feature rather than a bug…


Yes, that would be one of those rare cases of a company trying to obsolete itself. It's actually one reason a bunch of people are moving away from GitHub.


"We are introducing a new charge for public IPv4 addresses. Effective February 1, 2024 there will be a charge of $0.005 per IP per hour for all public IPv4 addresses"

https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address...


Not only that, but DTLS is mandated for any UDP connections.


Is that a problem? Again, I'm talking about the scenario where you control both sides of the connection, not where you're trying to use UDP to communicate with a third party service.


I think all three comments, including mine, are essentially saying the same thing from different viewpoints.


This looks like it uses WebSockets, not WebRTC, right? I don't see any RTCPeerConnection, and the peerServer variable is unused.

I ask because I've spent multiple days trying to get a viable non-local WebRTC connection going with no luck.

view-source:https://thelongestyard.link/q3a-demo/?server=Seveja


Web sockets are only used for WebRTC connection establishment. The code that creates the RTCPeerConnection is part of the Emscripten-generated JavaScript bundle. I'm using a library called HumbleNet to emulate Berkeley sockets over WebRTC.

The code is here: https://github.com/jdarpinian/ioq3 and here: https://github.com/jdarpinian/HumbleNet. For example, here is the file where the RTCPeerConnection is created: https://github.com/jdarpinian/HumbleNet/blob/master/src/humb...

I feel your pain. WebRTC is extremely difficult to use.


Check out Trystero[1], it makes WebRTC super simple to develop with.

[1] https://github.com/dmotz/trystero


There's also this new WebTransport thingie based on HTTP/3:

https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...

I haven't tinkered with it yet though.


Yeah, not in Safari yet and no peer-to-peer support. Maybe someday though! It will be great if all of WebRTC's features can be replaced by better, smaller-scoped APIs like this.


Doesn't WebRTC still require a secure server somewhere?

Direct sockets will be amazing for IoT, because it will let you talk directly to devices.

With service workers you can make stuff that works 100% offline other than the initial setup.

Assuming anyone uses it and we don't just all forget it exists, because FF and Safari probably won't support it.


Longest Yard is my favorite Q3 map, but for some reason I cannot use my mouse (?) in your version of the Quake 3 demo.


Interesting, what browser and OS?


Brave browser (Chromium via Flatpak) on the Steam Deck (Arch Linux) in Desktop mode with bluetooth connected mouse/keyboard.


Hmm, I bet the problem is my code expects touch events instead of mouse events when a touchscreen is present. Unfortunately I don't have a computer with both touchscreen and mouse here to test with so I didn't test that case. I did implement both gamepad and touch controls, so you could try them to see if they work.


Same browser on win10. Mouse works after you click in the window and it goes full screen. However, it hangs after a few seconds of game play.

Stopped hanging... then input locks up somehow.

Switched to chrome on win10, same issue: input locks up after a bit.


Yeah that issue I have seen, but unfortunately haven't been able to debug yet as it isn't very reproducible and usually stops happening under a debugger.


Even with the problems, just the few seconds of playing before the crash+input hang got me hooked. So, off to GOG to get q3a for $15. Also, quake3e with all the quality, widescreen, aspect ratio and FPS tweaks... chatgpt 4o seems to know everything there is to know about quake3e, for some reason.

Talk about getting nerd sniped.


Works in Firefox, on the same system.


I can't use the mouse either, macOS/Chrome. Otherwise, cool!


Awesome demo. I've really missed that map; it's been too long.


Yeah we use WebRTC for our games built on a fork of Godot 3.

https://gooberdash.winterpixel.io/

tbh the WebRTC network performance is basically the same as websockets and it was way more complicated to implement. Maybe the WebRTC perf is better in other parts of the world or something...


Yeah WebRTC is a bear to implement for sure. Very poorly designed API. It can definitely provide significant performance improvements over web sockets, but only when configured correctly (unordered/unreliable mode) and not in every case (peer-to-peer is an afterthought in the modern internet).


We have it in unreliable/unordered mode and it still barely moves the needle on network perf over websockets, from what we see in North America connecting to another server in North America.


I wouldn't expect a big improvement in average performance but the long tail of high latency cases should be improved by avoiding head-of-line blocking. Also peer-to-peer should be an improvement over client-server-client in some situations. Not for battle royale though I guess.

Edit: Very cool game! I love instant loading web games and yours seems very polished and fun to play. Has the web version been profitable, or is most of your revenue from the app stores? I wish I better understood the reasons web games (reportedly) struggle to monetize.


Thanks! The web versions of both of our mobile/web games do about the same as the IAP versions. We don't have ads in the mobile versions, so the ad revenue is reasonable. We're actually leaning more into smaller web games as a result of that. As for profit on this game specifically, I think it deserves better. I think Goober Dash is a great game, but it's not crushing it like I'd hoped.


That's really interesting, thanks! I agree Goober Dash deserves to be successful.


I would say WebRTC is both a must and only worth it if you need UDP, such as in the case of real-time video.


I mean, the only cases where UDP vs. TCP are going to matter are 1) if you experience packet loss (and maybe you aren't for whatever reason) and 2) if you are willing to actively try to shove other protocols around and not have a congestion controller (and WebRTC definitely has a congestion controller, with the default in most implementations being an algorithm about as good as a low-quality TCP stack).


Out-of-order delivery is another case where UDP provides a benefit.


Runs smoother than the Android home screen. :)


Not really peer to peer though, is it? The q3 server is just running in the browser session that shares a URL with everyone else?


Yes, it is. The first peer to visit a multiplayer URL hosts the Quake 3 server in their browser. Subsequent visitors to the same multiplayer URL send UDP traffic directly to that peer. The packets travel directly between peers, not bouncing off any third server (after connection establishment). If your clients are on the same LAN, your UDP traffic will be entirely local, not going to the Internet at all (assuming your browser's WebRTC implementation provides the right ICE candidates).

It won't work completely offline unfortunately, as the server is required for the connection establishment step in WebRTC. A peer-to-peer protocol for connection establishment on offline LANs would be awesome, but understandably low priority for browsers. The feature set of WebRTC is basically "whatever Google Meet needs" and then maybe a couple other things if you're lucky.


This is neat. A little perverse, but neat.


When reading https://github.com/WICG/direct-sockets/blob/main/docs%2Fexpl..., it's noted this is part of the "isolated web apps" proposal: https://github.com/WICG/isolated-web-apps/blob/main/README.m... , which is important context, because the obvious reaction to this is that it's a security nightmare.


That doesn't really make it any better, if you ask me.

The entire Isolated Web Apps proposal is a massive breakdown of the well-established boundaries provided by browsers. Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download. The latter is heavily enforced by both Chrome and Windows complaining quite a bit if you're trying to run downloaded executables - especially unsigned ones. If you follow those two basic things, websites cannot hurt your machine.

IWA seems to be turning this upside-down. Chrome is essentially completely bypassing all protections the OS has added, and allowing Magically Flagged Websites to do all sorts of dangerous stuff on your computer. No matter what kind of UX they provide, it is going to be nigh-on impossible to explain to people that websites are now suddenly able to do serious harm to your local network.

Browsers should not be involved in this. They are intended to run untrusted code. No browser should be allowed to randomly start executing third-party code as if it is trustworthy, that's not what browsers are for. It's like the FDA suddenly allowing rat poison into food products - provided you inform consumers by adding it to the ingredients list of course.


> Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download

I think you're severely overestimating the things every user knows.


Does it help to think of it less as Chrome allowing websites to do XYZ, and more as a PWA API for offering to install full-fat browser-wrapper OS apps (like the Electron kind) — where these apps just so happen to “borrow” the runtime of the browser they were installed with, rather than shipping with (and thus having to update) their own?


The last time I used Chrome was about 3 years ago. You have a choice.


Only kind of. If you are on Mac you can use Safari. On Windows your options are Firefox or other versions of Chrome (Edge, Opera, Brave, etc), and Firefox won't work right often enough, which will drive you to a version of Chrome.


Something always breaks my streak, but since last year or so I feel I am down to twice a year or something.


Unfortunately this is the future. Handing the world wide web's future to Google was a mistake, and the only remedy is likely to come from an (unlikely) antitrust breakup or divestment.


> Handing the world wide webs future to Google

Nobody handed anything to anyone. They go with the flow. The flow is driven by people who use their products. The browser is how Google delivers their products so it’s kinda difficult to blame them for trying to push the envelope but there are alternatives to Chrome.


> They go with the flow.

The ancient history of just 10-15 years ago shows Google aggressively marketing Chrome across all of its not inconsiderable properties like search and Youtube, and sabotaging other browsers while they were at it: https://archive.is/2019.04.15-165942/https://twitter.com/joh...


Indeed. There was time I myself used it as my primary browser and recommended it to everyone around. That changed when they started insisting on signing into the account to „make the most out of it” so I went back to Firefox. Since then I stopped caring. I know, virtue signalling. My point is: nobody handed anything over to Google. At the time alternatives sucked so they won the market. But today we have great alternatives.


And some developers shipping Chrome alongside their apps, instead of learning proper Web development.


I doubt websites as we know it will be what we’ll be dealing with going forward anyways.

What is a browser if we just digest all the HTML and spit out clean text in the long run?

We handed over something of some value I guess, once upon a time.


> If you follow those two basic things, websites cannot hurt your machine.

Oh yes they can. Quite a bunch of "helper" apps - printer drivers are a bit notorious IME - open up local HTTP servers, and not all of them enforce CORS properly. Add some RCE or privilege escalation vulnerability in that helper app and you got yourself an 0wn-from-the-browser exploit chain.


How often does that actually happen?


Interesting — the Firefox team’s response was very negative, but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app (as opposed to being an API available to any website).

In reading their comments, I also felt the API was a bad idea, especially when technologies like Electron or Tauri exist, which can do those TCP or UDP connections. But IWA serves to displace Electron, I guess.


I'm hacking on a Tauri web app that needs a bridge to talk UDP protocols, literally as we speak.

While Tauri seems better than ever for cross platform native apps, it's still a huge step to take to give my web app lower-level access: Rust toolchain, Tauri plugins, sidecar processes, code gen, JSON RPC, all to let my web app talk to my network.

Seems great that Chrome continues to bundle these pieces into the browser engine itself.

Direct sockets plus WASM could eat a lot of software...


With so many multiplatform GUI toolkits today, Tauri and Electron are really bad choices.


What's your recommendation? I've tried so many multiplatform toolkits (including GTK, Qt, wxWidgets, Iced, egui, imgui, and investigated slint and sciter) and nothing has come close to the speed of dev and small final app size of something like Tauri+Svelte.


I've also tried Flutter, React Native, Kotlin multiplatform, Wails.

I'm landing on Svelte and Tauri too.

The other alternative I dabble with is using Android Studio and Xcode to write my own WebView wrappers.


What did you dislike about Kotlin Multiplatform?


Of course dev speed will be better with Tauri, plus the literal ton of JavaScript transpilers we use today.

But for us an in-house pile of egui helpers allows for fast applications that are closer to native speeds, and Flutter for mobile (using neither Cupertino nor Material).


Glad to hear that egui is working for you, but in my experience it's not accessible, it's difficult to render accurate text (including emoji and colours), it's very frustrating to extend the inbuilt widgets, and it's quite verbose. One of my most recent experiences was making a fairly complex app at work in egui, then migrating to Tauri because it was such a slog.


The web stack is now the desktop UI stack. I think the horse has left the barn.

It’s not great but there’s just no momentum or resources anywhere to work on native anymore outside platform specific libraries. Few people want to build an app that can only ever run on Mac or Windows.


The cross platform desktop gui toolkits all have some very big downsides and tend to result in bad looking UIs too.


I've built my app[1] using Qt (C++ and QML), and I think the UI looks decent. There's still a long way for it to feel truly native, but I've got some cool ideas.

[1] https://get-notes.com/


You are probably not solving the same problems many other people are facing.

Many such applications are accessible on the web, often with the exact same UI. They may even have a mobile/iPad version. They may be big enough that they have a design system that needs to be applied in every UI (including the company website). Building C++ code on all platforms and running all the tests may be too expensive. The list goes on.


I just started prototyping a mobile version of my app (which shares code with my desktop app) and the result looks promising (still work-in-progress tho).

Offering a web app is indeed not trivial. Maybe Qt WebAssembly will be a viable option if I can optimize the binary and users wouldn't mind the first long load time (after which the app should be cached for instant loads). Or maybe I could build a read-only web app using web technology.

Currently, my focus is building a good native application, and I think most of my users care about that. But in the future, I can see how a web app could be useful for more users. One thing I would like to build is a web browser that could load both QML and HTML files (using a regular web engine), so I could simply deploy my app by serving my QML files without the binary over the internet.


That's definitely one of the best looking Qt apps I've seen.


Thank you! I think Qt is absolutely great. One needs to put in a little effort to make it look and behave nicely. I wrote a blog post about it[1], if you're interested.

[1] https://rubymamistvalove.com/block-editor


> but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app

That’s what the Narrower Applicability section is about <https://github.com/mozilla/standards-positions/issues/431#is...>. It exposes new vulnerabilities because of IP address reuse across networks, and DNS rebinding.


- It is possible, if not likely, that an attacker will control name resolution for a chosen name. This allows them to provide an IP address (or a redirect that uses CNAME or similar) that could enable request forgery.

This is not just possible, it's quite trivial. DNS is quite a simple protocol. Writing a DNS server that reflects every request for aaa-bbb-ccc-ddd.domain.test to the IP aaa.bbb.ccc.ddd won't take you even a day. And in fact this has already existed in the wild.
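
The core of such a server is just a hostname parser; a sketch of the mapping (actual DNS wire handling elided, domain.test as a placeholder):

  // "192-168-1-1.domain.test" resolves to 192.168.1.1
  function reflectedIp(hostname: string): string | null {
    const m = hostname.match(/^(\d{1,3})-(\d{1,3})-(\d{1,3})-(\d{1,3})\./);
    return m ? m.slice(1, 5).join(".") : null;
  }

  console.log(reflectedIp("192-168-1-1.domain.test")); // "192.168.1.1"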


Have isolated web apps/web bundles gained any traction over the past few years? I just realized that this thing existed and there were some discussions around it -- I had almost completely forgotten about it.

I did a search, and most stuff comes from a few years ago.


It makes much more sense to bundle a binary + web extension (w/ native messaging) to handle bridging the browser isolation in a sensible manner.

It's a minimal amount of extra work and would mean you cross browser isolation in a very controlled manner.


It is used by ChromeOS.


You mean apps written by Google as "native apps"?

Any use cases outside that?

If not, it is probably fair to say nobody uses this.


A PWA is an IWA, so lots of people are using it besides Google.


I found this issue indicating it's a bad idea for end user safety:

https://github.com/mozilla/standards-positions/issues/431


Mozilla won't even support WebUSB[1][2][3] due to security reasons, so there's no way they'd support raw sockets.

[1] https://developer.mozilla.org/en-US/docs/Web/API/USB#browser...

[2] https://wiki.mozilla.org/WebAPI/Security/WebUSB

[3] https://mozilla.github.io/standards-positions/#webusb


I prefer web apps to native apps any day. However, web apps are limited in what they can do.

But what they can do is not consistent: for example, a web app can take your picture and listen to your microphone if you give it permission, but it can't open a socket. Another example: Chrome came out with a File System Access API [2] in August; it's fantastic (I am using it) and it allows a class of native apps to be replaced by web apps. As a user, I don't mind having to jump through hoops and giant warning screens to accept that permission, but I want this ability on the Web Platform.

For Web Apps to be able to compete with native apps, we need more flexibility, Mozilla. [1]
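
To illustrate the kind of permission-gated capability I mean, here's the File System Access API in its simplest form (Chromium-only; every step sits behind a user gesture and a permission prompt):

  // The user explicitly picks a file; that grants read access
  // to this handle only.
  const [handle] = await window.showOpenFilePicker();
  const file = await handle.getFile();
  console.log(file.name, (await file.text()).length);

  // Writing triggers a separate readwrite permission prompt.
  const writable = await handle.createWritable();
  await writable.write("new contents");
  await writable.close();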

[1]: https://mozilla.github.io/standards-positions/ [2]: https://developer.chrome.com/docs/capabilities/web-apis/file...


Nah, we need even less. I'd rather have web apps because of the limitations; much less to worry about.


I saw this proposal years ago now and was initially excited about it. But seeing how people envisioned the APIs, usage, etc, made me realize that it was already too locked down. Being able to have something that ran on any browser is the core benefit here. I get that there are security concerns but unfortunately everyone who worked on this was too paranoid and dismissive to design something open (yet secure.) And that's where the proposal is today. A niche feature that might as well just be regular sockets on the desktop. 0/10


I'm excited, and anticipate some interesting innovation once browser applications can "talk UDP". It's a long time in the making. Gaming isn't the end of it: being able to communicate with local network services (hardware) without an intervening API is very attractive.


Indeed. I'll finally be able to connect to your router and change your wifi password, all through your browser.


Shhh, you’re giving my parents unrealistic expectations of how much remote tech support I can do.


Anything that moves the web closer to its natural end state, the J(S)VM, is a win in my book. Making web apps a formally separate thing from pages might do some good for the web overall. We could start thinking about taking away features from the page side.


This is beyond that, it's more a move to remove the VM than make JS a generic VM.


Great fingerprinting vector. Expect nothing less from Google.


What about WebTransport? I thought that was the HTTP/3 upgrade to WebSockets that supports unreliable and out-of-order messaging.


I think WebRTC data channels will be a good alternative if you want a peer to peer connection. WebTransport is strictly for client-server architectures.


Status of specification: "It is not a W3C Standard nor is it on the W3C Standards Track."

Status in Chrome: shipping in 131

Expect people to claim this is a vital standard that Apple is not implementing because they don't want web apps to compete with the App Store. Also expect sites like https://whatpwacando.today/ to just uncritically include this.


Expect Apple to claim this is not a vital standard, and that Apple is not implementing it because they don't want web apps to compete with the App Store. Also expect sites like https://whatpwacando.today/ to obviously just include this.


Which part of "is not a w3c standard and not any standards track" do you not understand?

I am not surprised sites like that include Chrome-only non-standards; they've done this for years while claiming impartiality.


Cry me a river. Apple doesn't need you to defend their strategic and intentional PWA boycott.


Which part of "is not a w3c standard and not any standards track" do you not understand?

Do you understand that for something to become a standard, it needs two independent implementations? And a consensus on API?

Do you understand that "not on any standards track" means it's Chrome and only Chrome pushing this? That Firefox isn't interested in this either?

Do you understand that blaming Apple for everything is borderline psychotic? And that Chrome implementing something at breakneck pace doesn't make it a standard?

Here's Mozilla's extensive analysis and conclusion "harmful" that Google sycophants and Apple haters couldn't care less about: https://github.com/mozilla/standards-positions/issues/431#is...


There are a lot of reasons why people have such extreme differing opinions on this.

I, for one, am still salty about the death of WebSQL due to "needing independent implementations". Frankly put, I think that rule is entirely BS and needs to be completely removed.

Sure, there is only one implementation of WebSQL (SQLite) but it is extremely well audited, documented and understood.

Now that WebSQL is gone, what has the standards committee done to replace it? Well, now they suggest using IndexedDB or bringing your own SQLite binary using WASM.

IndexedDB is very low level, which is why almost no one uses it directly. It also has garbage performance, to the point where it's literally faster to run SQLite on top of IndexedDB instead: https://jlongster.com/future-sql-web
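
For a sense of how low-level it is, here's raw IndexedDB for a single key-value put; everything is event- and transaction-based, with no queries beyond keys and indexes:

  const open = indexedDB.open("app", 1);
  open.onupgradeneeded = () => open.result.createObjectStore("kv");
  open.onsuccess = () => {
    const tx = open.result.transaction("kv", "readwrite");
    tx.objectStore("kv").put({ name: "alice" }, "user:1");
    tx.oncomplete = () => console.log("stored");
  };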

So ultimately if you want to have any data storage on the web that isn't just key-value, you now have to ship your own SQLite binary or use some custom JS storage library.

So end users now have to download a giant binary blob, that is also completely unauditable. And now that there is no standard storage solution, everybody uses a slew of different libraries to try to emulate SQL/NoSQL storage. And this storage is emulated on top of IndexedDB/LocalStorage so they are all trying to mangle high level data into key-value storage so it ends up being incredibly difficult to inspect as an end-user.

As a reminder: when the standards committee fails to create a good standard, the result is not "everybody doesn't do this because there is no standard", it is "everybody will still do this but they will do it 1 million different ways".


> Frankly put, I think that rule is entirely BS and needs to be completely removed.

That's what Google is essentially doing: they put up a "spec", and then just ship their own implementation, all others be damned.

Here's the most egregious example: WebHID https://github.com/mozilla/standards-positions/issues/459

--- start quote ---

- Asked for position on Dec 1, 2020

- One month later, on Jan 4, 2021, received input: this is not even close to being even a draft for a standard

- Two months later, on March 9, 2021, enabled by default and shipped in Chrome 89, and advertised it as fait accompli on web.dev

- Two more months later: added 2669 lines of text, "hey, there's this "standard" that we enabled by default, so we won't be able to change it since people probably already depend on it, why don't you take a look at it?"

--- end quote ---

The requirement to have at least two independent implementations is there to try and prevent this thing exactly: the barreling through of single-vendor or vendor-specific implementations.

Another good example: Constructible Stylesheets https://github.com/WICG/construct-stylesheets/issues/45

Even though several implementations existed, the API was still in flux, and the spec had a trivially reproduced race condition. Despite that, Google said that their own project needed it and shipped it as is, and they wouldn't revert it.

Of course over the course of several years since then they changed/updated the API to reflect consensus, and fixed the race condition.

Again, the process is supposed to make such behavior rare.

What we have instead is Google shitting all over standards processes and people cheering them on because "moving the web forward" or something.

---

As for WebSQL: I'm also sad it didn't become a standard, but ultimately I came to understand and support Mozilla's position. Short version here: https://hacks.mozilla.org/2010/06/beyond-html5-database-apis... Long story here: https://nolanlawson.com/2014/04/26/web-sql-database-in-memor...

There's no actual specification for SQLite. You could say "fuck it, we ship SQLite", but then... which version? Which features would you have enabled? What would be your upgrade path alongside SQLite? etc.


What part of "cry me a river" didn't you understand? Don't go crazy because at least one of the browsers proposes things that move the web forward. Geez, you should take a break from the internet. So many "?"


> Don't go crazy because at least one of the browsers propose things that move the web forward.

No, they shape the web in an image that is beneficial to Google, and Google only.

> Geez, you should take a break from the internet. So many "?"

Indeed, so many "?", because, as you showed, Google sycophants cannot understand why these questions are important.


A generation lost in Internet Explorer....


It’s pretty clear Google are building an operating system, not a browser.


It is called ChromeOS, and its spread is helped by everyone that keeps pushing Electron all over the place.


Can a browser run a web server with this?


Since it allows for accepting incoming TCP connections, this should allow for HTTP servers to run within the browser, although running directly on port 80/443 might not be supported everywhere (can't see it mentioned in the spec, but from what I remember on most *nix systems only root can listen on ports below 1024, though I might be mistaken since it's been a while)
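
A sketch of what that could look like (API shape per the WICG explainer, Isolated Web Apps only, untested; binding a high port to sidestep the <1024 issue):

  const server = new TCPServerSocket("0.0.0.0", { localPort: 8080 });
  const { readable } = await server.opened;

  // Each chunk read from the server socket is an incoming TCPSocket.
  for await (const conn of readable) {
    const { writable } = await conn.opened;
    const writer = writable.getWriter();
    await writer.write(new TextEncoder().encode(
      "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi"));
    await writer.close();
  }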


I assume they would limit it to clients.


The inner platform effect intensifies.


From "Chrome 130: Direct Sockets API" (2024-09) https://news.ycombinator.com/item?id=41418718 :

> I can understand FF's position on Direct Sockets [...] Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.

> Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.

But HTTP Signed Exchanges is cancelled, so does a single compromised ad network mean arbitrary code with sockets?

...

> Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431

> Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy

> docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...


Something tells me this has more to do with a product Google wants to launch than a genuine attempt to further the web.

I’ll keep my eyes on this one, see where we are in a year


All nice and welcome. But at what point does the browser become a full-blown OS, with the same functionality and associated vulnerabilities, yet still less performant since it sits on top of another OS and goes through more layers? And of course run and driven by one of the largest privacy invaders and spammers in the world.


> At what point does the browser become a full-blown OS?

Happened over a decade ago: ChromeOS. It's also the birthplace of other similar tech: WebMIDI, WebUSB, Web Bluetooth, etc.


That means we can connect directly to a remote Postgres server from a web browser?


So long as you do it from an isolated web app rather than a normal page.


This means that we can finally do gRPC directly from the browser.


Thank god they plan to limit this to Electron-type apps.


So with this I would be able to create a server in my desktop web app and sync all my devices using WebRTC.


Game over for security.


Great, so now a mis-click and your browser will have a field day infecting your printer, coffee machine and all the other crap that was previously shielded by NAT and/or a firewall.


As long as they don't change the spec, this will only be available to special locally installed apps in enterprise ChromeOS environments. I don't think their latest weird app format is going to make it to other browsers, so this will remain one of those weird Chrome only APIs that nobody uses.


> special locally installed apps in enterprise ChromeOS environments

There was https://developer.chrome.com/docs/apps/overview though, so this seems to be a kind of planned feature creep after deprecating the former one? "Yeah, our enterprise partners now totally need this, you see, no reasoning needed."


Can we please stop this feature creep in browsers already?


[flagged]


Please don't paste unedited AI output as a comment to a discussion.


I also think direct sockets can be helpful. (Note: I did not read the article because it does not work on my computer.)

Another use would be for extensions (rather than web pages) to implement other protocols (which is related to item 2 in your list, but different).

However, I think that many of these things shouldn't need to use a web browser at all. A web browser is complicated software, and using other software would be better if you are able to do so.

This includes ping, traceroute, etc, which can already be handled by other programs (and can be used even if you do not have a web browser installed); but these things may be useful on Chromebook, perhaps; or if you have Chrome 131 but cannot use other software for some reason.

For example, a service could be available by some other protocols (e.g. IRC), but also provide a web interface; this can then be one of the implementations of the protocol, so that if the web interface is compatible with your computer but the other provided implementations are not compatible (e.g. because you do not have a suitable operating system, or because you don't want to install extra software but you already have Chrome, etc), then it provides an additional interoperability, without needing too much additional complexity.

Handling security is necessary, although there are ways to do it securely: ask the user first to allow it, and allow the user to configure proxies and restrictions on the use (e.g. whether it can only access specific addresses or cannot access specific addresses, or to allow or disallow specific port numbers, etc). (If a SOCKS proxy with localhost can be configured, then the user can use separate software to handle this; the web browser will just need to ensure that it is possible to be configured to not block anything, in case the user is configuring it like this in order to implement their own blocking rules.)

A server's web pages should ideally include documentation as well, which allows you to find documentation and use other software (or write your own), if you do not have a compatible web browser or if you do not wish to use the web interface.

So, I think that it is helpful, although there are some considerations. (The one about documentation is not really one that the authors of web browsers could easily enforce, and is the kind of problem that many web pages already have anyway, so this can't help with that.)


Yet another small step toward the ChromeOS takeover.


Just now, when I have only recently switched permanently to Firefox...


nice!


Can't wait to see it working.


Why wait? What can you do with it? Can't wait to wait for you.



