Stupid solutions: Live server push without JS (underjord.io)
342 points by lawik 11 months ago | 79 comments



I made a little nojs shared drawing board a few years back that worked with MJPEG as the background of an image button ‘<input type="image" ...>’.

Clicking the image button submits the form with x and y click coordinates, and the server returns an HTTP 204 telling it to stay where it is, followed by pushing out the updated jpeg to all connected clients.
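The described flow can be sketched in a few lines of Python. This is a hypothetical handler, not the linked project's code: it assumes an unnamed `<input type="image">`, which submits its click coordinates as plain `x` and `y` query parameters (a named input would submit `name.x`/`name.y`), and the `draw_point` update is stubbed out.

```python
from urllib.parse import parse_qs

def handle_click(query_string: str):
    """Parse the x/y click coordinates an unnamed <input type="image">
    submits, then answer 204 No Content so the browser stays on the page."""
    params = parse_qs(query_string)
    x = int(params["x"][0])
    y = int(params["y"][0])
    # draw_point(x, y)  # hypothetical: update the shared board here
    return "204 No Content", (x, y)
```

Because the response has no body and status 204, the browser keeps displaying the current page, whose MJPEG background image is then updated by the server push.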

It’s pretty fun, and I wanted to keep it online all the time, but it has a problem I haven’t sorted out: sometimes when a client disconnects, the server doesn’t realize it, and trying to push them a JPEG locks up the whole rendering system.

https://github.com/donatj/imgboard


204 you say. Please don't make this rabbit hole deeper for me :)


204 is a wonderful rabbit hole! You can use it to send messages to the server without refreshing the page, all without JS. It was a pretty common trick in the early 2000s.


How do you detect a success/failure without JS?


Send an actual error page rather than a 204. It's not perfect, but in cases where server errors are very rare, it works. Since there is no state anyway, the user hitting back is harmless.

Most JS apps in my experience fail to handle server errors with any sort of user feedback at all.


Thank you very much for sharing this.

I'm attempting to support every client in every configuration on my web-based forum, and this would be a nice little toy.


I envisioned this same concept a decade ago[1] but never implemented it. It's nice to see this actually work in practice!

[1]: https://stackoverflow.com/questions/2576715/pushing-data-onc...


That problem should definitely be solvable. I used MJPEG to create my own streaming server that wasn’t mjpegstreamer, and even with HTTPS and reverse proxying there are ways to detect when to stop sending data, though it isn’t fun to find all the edge cases.

I need to revisit and update that project sometime, but it was a very neat event-loop C implementation that reads frames from a cheap consumer webcam and, without doing any re-encoding, feeds them to connected sockets as an MJPEG stream (it does have support for re-encoding if the camera can’t produce JPEGs). It was a really fun exercise in minimalism and efficiency, as my goal was to stream multiple cameras from a single Raspberry Pi back when they were super slow.


I have no doubt, I just haven't found the time. The code's kind of a nasty mess / proof of concept.


The core of this is of course the response header "Content-Type: multipart/x-mixed-replace" which in theory lets the server push updated chunks of any content type (although apparently only images in Chrome since 2013 [1]).

While it's a crude but handy way to update content after the initial load without JS, it was also widely used with JS before WebSockets, to push messages to clients with less latency than XHR polling.

[1] https://en.wikipedia.org/wiki/Multipart/x-mixed-replace#Mixe...
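To make the wire format concrete, here is a minimal sketch of how such a response could be framed. The boundary token `frame` and the helper name are my own choices, not anything from the article; each emitted part replaces the previous one in the browser:

```python
BOUNDARY = "frame"  # arbitrary token, declared once in the response header

RESPONSE_CONTENT_TYPE = f"multipart/x-mixed-replace; boundary={BOUNDARY}"

def make_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a multipart part. The server writes a new
    part to the open connection whenever the image should be replaced."""
    head = (
        f"--{BOUNDARY}\r\n"
        f"Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(jpeg_bytes)}\r\n\r\n"
    ).encode()
    return head + jpeg_bytes + b"\r\n"
```

The connection is simply never closed; the server keeps appending parts for as long as it wants the image to keep updating.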


The big problem with this spec (shared today by every multipart HTTP upload in the world) is that you have to check for the delimiter at every byte!

"Transfer-Encoding: chunked" is much better because it gives you the size of each chunk upfront! But consuming it incrementally requires JS, and it was also buggy in IE until version 7.

I made a multiplayer online system that relies on chunked: https://github.com/tinspin/fuse


omg! I thought it would slowly create and send along an mjpeg video file. Server push animation is a 90's thing. I wrote something up about it in 2009, and posted a still-working demo, because of a thread here. https://pronoiac.org/misc/2009/10/server-push-animation/


> although apparently only images in Chrome since 2013

I wonder if this includes SVG images...

EDIT: It does!


So now we know that a #1 post on HN has ~450 concurrent visitors

Edit° (~1 hour later): So I've been checking every few minutes, and it's consistently been at ~450, this is a cool metric to have.

°This link was posted to HN ~4 hours ago. I first checked ~1.5 hours after it'd been posted, and it was #1 on the front page. It's still #1 as of this edit. The counter has consistently been between 445 and 455 the whole time.


I think it started falling over around 450. Nginx was hitting its worker connection limit and returning 500s, so I bumped the limit, but the resulting Nginx restart killed the count; it's currently at 217 as it recovers, though I probably lost tons of idle tabs.

The application has retained a three-day uptime. But I don't think the 450 number is entirely accurate, with Nginx putting an artificial cap on it; it would probably be higher if I hadn't hit that limit. Drats. Upped the limit, hope it recovers.


When the Europeans are coming up for midday at least. I wonder how that number will change depending on who's up.


And... It crashed. Image is currently down.


The counter does go up and down a bit. I'll have to check the logs and see what that's about at some point. Mostly it comes back up on refresh.


This is pretty good considering most people will only stay on the links for a few seconds.


I had a top post a few years ago and I think in total I had around ~60k visitors over a couple days. Have a bunch of screenshots lying around somewhere for a blog post that I never ended up writing.


2 hours after your comment, at ~190 concurrent visitors. 4th in the HN top atm.


Update: now it's on #2, and it's hovering around 430


Update: #3 and ~130. Whoa, dramatic fall.

Edit: #4, post is six hours old, counter up to ~190.

Edit: #5, ~6 hours, 173 points, ~220 current site readers.

Edit: Back to #4, ~6 hours, 186 points, ~300 site readers.


Dramatic fall was Nginx config bump and restart. Lost all those sweet idlers I'm sure. Hopefully it can go higher now with some luck but we might be past the peak. Really curious how far it can manage.


Update: #4, ~8 hours, 218 points, ~380 readers


Update: #10, ~10 hours, 261 points, ~410 readers.

Edit: #12, ~11 hours, 276 points, ~445 readers


Update: #16, ~13 hours, 295 points, ~200 readers


Update: #43 (second page of HN), ~15 hours, 314 points, ~80 readers


You have to adjust this for people who read the comments but haven't clicked the link :P


They mean visitors to the link


How so?


Re-reading your comment, perhaps I misinterpreted it.

> So now we know that a #1 post on HN has about ~450 concurrent visitors

I thought you were trying to get a measure of traffic to HN. In that case, the number of visitors to the site is only an approximation. But if you were talking about how much traffic HN directs to the #1 post, then the number is exactly correct.


> But if you were talking about how much traffic HN directs to the #1 post, then the number is exactly correct.

Indeed I was :)


He mentioned chunk-streaming text into an iframe, I think as a joke, but I remember actually implementing a decent chat web app exactly like that years ago, in the IE5 days or so.


A company I worked for in 2001 used chunked streaming to provide search results. The results were collected by performing live searches in parallel (server-side) with multiple partners and then aggregated into a single stream of chunked content. Basically, the HTTP connection was kept open until all searches were complete, which could take a few minutes. The results were pushed incrementally to the browser as soon as they were available.

This worked surprisingly well. They started with almost pure HTML with the results split into multiple tables to produce what looked like a single table of results. The only drawback was that the columns needed fixed width so that they would align properly.

After a while they switched to a JavaScript-based solution with dynamic filters. They used the same backend streaming engine to output chunks of JavaScript code inside <script> tags, which called a global "addResult(...)" method to update the state. If the search was cached, it would first render the HTML server-side, which was then "hydrated" by the JavaScript code if needed.

Later it was replaced with a standard XHR polling mechanism.

It is interesting to look back at what we had to do that is trivial with today's technology.
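The script-tag streaming described above could be sketched as follows. This is a reconstruction, not the company's code; the global `addResult` callback name comes from the comment, and `result_chunk` is a hypothetical helper:

```python
import json

def result_chunk(result: dict) -> str:
    """Serialize one search result as a <script> block. The browser
    executes each block as soon as its chunk arrives, so a global
    addResult(...) (assumed to exist on the page) updates the UI
    incrementally while the connection stays open."""
    return f"<script>addResult({json.dumps(result)});</script>\n"
```

Each result is flushed to the client as its own HTTP chunk, so rendering begins long before the last partner search completes.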


Yeah you basically never close the response, right? Just keep appending! Chat is a good use case for it.

Maybe with CSS these days you could make only the last child of a div visible, and have constant updates with no need for JS.

Anyway. I quite like this idea of no JS. I know he said it would be evil to serve ads using MJPEG, but I'd probably prefer it if my page had less JS on it.


> Anyway. I quite like this idea of no JS. I know he said it would be evil to serve ads using MJPEG, but I'd probably prefer it if my page had less JS on it.

Honestly I don't see how it's more evil; just because it doesn't use JS doesn't mean uBlock and the like can't still block it. For it to be evil, all ads, full stop, would have to be evil. As you say, I would much prefer this to ads that use JavaScript. If you can still show me ads on your news article when I haven't got JS enabled, I'm not going to be too bothered, because everything should load plenty fast. It also means the rest of the site can load while the ads are loading.


Yep, I've done that to send the status of a large upload, that must have been around 2008 or so.

We had a progress bar that would show the upload percentage, then the unzipping, then the virus scan on the server, then it would be marked as done.


At FriendFeed, when we first introduced real-time comments and likes (pretty sure we were the first social network to do this), we did it with long-polling via an iFrame because web sockets did not exist. It actually worked pretty well!

https://www.ft.com/content/87b8107e-9c6d-3d3f-a60d-3e4f6315d...


This is so cool and very similar to a hack I did some time ago: Noscript-compatible twitch-plays-Zelda

http://nes-o-png.herokuapp.com

So crazy what you can do with server-side-rendering if you really want to. :)


It is quite cool but for me it wont work without javascript on firefox


Works for me without JS.


When I posted this in another context someone linked this beauty: https://github.com/kkuchta/css-only-chat


Hey, that's mine! But yeah, it includes an alternate approach to the same problem: use http chunked encoding and continually append html to the web page (which never quite finishes loading).

Funnily enough, it also uses another evil trick with css background images, which TFA mentions as well: https://underjord.io/is-this-evil.html


There are a few other ways:

- dynamically generated animated GIFs
- Content-Type: multipart/x-mixed-replace: the way to animate images on the web since 1993 - before animated GIFs


since 1993 - before animated GIFs*

Animated GIFs have been around since 1987.


I use MJPEG to livestream headless chrome[1]. You can set it up to use the HTTP endpoint but I just do it over WebSocket.

[1]: https://github.com/dosyago/OuterShell


I have the feeling that as soon as the web got obsessed with HTTP(S)-only transactions between nodes, people reinvented/rediscovered/hack-reimplemented sockets' semantics on top of it.


It's hard for the web to become "obsessed with HTTP", because the web is HTTP.

Pedantry aside, as soon as web/HTTP became The Internet (protocol of choice), you are right that the rest was bound to happen.

However, it was exactly the fact that it wasn't too constrained and allowed a lot of messing around (including with HTML, compared to e.g. Gopher, which was more semantic) that made it "win" over all the other protocols, except maybe email (and even there, >50% of people read it with web clients).


I think GP's talking not just about server-client connections but also server-server connections? In the latter case HTTP(S) wasn't as popular as it is today until the late aughts.


I was speaking about the web as a network (not just the web as in HTTP webpages). There are other protocols such as FTP, SMTP, and direct socket connections over TCP and UDP, but many of these use cases were absorbed by REST-only (HTTP) transactions and now WebSockets.


That's exactly the misconception I was trying to correct, assuming you made it while understanding the difference.

That suite of protocols is a suite of internet protocols, and "web" (from "world wide web", certainly familiar from www in websites) is a combination of HTML served over HTTP. If it has evolved to mean the internet, my apologies, but 10-20 years ago if you said "web", you meant HTTP: https://en.m.wikipedia.org/wiki/World_Wide_Web


no... you are just being pedantic. "Web", the metaphor coming from a spider web and commonly understood as the interconnection formed by networks, could be perfectly understood as inter+net as well. And "web page" as what HTTP was purposed for. All that text brings nothing to the conversation.


> this can be used for evilish things. You can absolutely keep track of how long someone keeps receiving your frames and use that for your analytics

There is nothing evil in analytics by itself - it's invaluable for improving usability.


> a fun hack that simply happens to work across browsers

Didn't work on IE.

But still something really interesting


Why not an iframe with a meta refresh tag? That seems more standard than rendering text using a video stream. Sure, it's not "live" but an update every second or so would be good enough for this use case.


That's what we were using before XHR in IE5 (it had another name but same stuff.)


Seems like an interesting variant of SSE (Server sent events) without the JS code for retries. Both of these approaches are simpler than websockets, with the limitation that they're one-way (server push).
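For comparison, SSE has a very simple text wire format (served as `Content-Type: text/event-stream`). A sketch of a formatter, with the helper name my own:

```python
from typing import Optional

def sse_event(data: str, event: Optional[str] = None) -> str:
    """Format one Server-Sent Events message: an optional event name,
    one 'data:' line per payload line, then a blank-line terminator."""
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    for part in data.splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"
```

The browser-side `EventSource` handles reconnection automatically, which is the retry logic the MJPEG trick has to live without.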


This is great to know about. But on qutebrowser (Qt WebEngine) it doesn't update. When I tried it in Google Chrome, it worked as expected. So maybe it's not as universal as the author believes.


Works for me. Perhaps you're using an older Qt version?


Hmm. I’ll have to check that, thank you.


In light of this post, I'd like to share that I made an almost "no JS" live monitoring service a few days ago, again with the help of Elixir (and Phoenix LiveView). It tracks everything that's posted on Hacker News and Reddit in "real-time" and distributes it to everyone listening. And it tracks the number of online users, too. Of course.

You can check it out here https://alertcamp.com/live/Apple,Google,Microsoft,Amazon,Fac...


I think by "no js" it's not whether you write JS, but whether the page uses it. LiveView relies on WebSockets and Javascript.


Quite a few people are having trouble grokking the "stupid solutions" part.


I think I've seen this done with an infinite animated GIF as well.


Why not just stream a normal video stream?


Apparently HLS (HTTP Live Streaming) requires JS on every platform aside from Apple's. I was looking at it for streaming video + captions, but since it requires JS it didn't really achieve what I wanted.

Maybe you could build an in-memory video stream for each user and serve it slowly, but then they would likely need to press play, or you'd rely on autoplay, and I've no idea how well that would interact with buffering behaviour.

This solution maps quite closely to the idea that MJPEG is intended for "live" video. And also I find it adorable that it is just an img tag.


Almost every browser accepts, and sends, byte-range headers for video, which is pretty easy to map for an endless stream - you don't have to return the actual length of the video, and if you don't, browsers will treat it as an endless stream.

Whilst you do have to actually trigger the play somehow, the range will come in controllable chunks, so you can respond in kind without trashing the stream.
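The "endless stream" range handling described above could look roughly like this. It is a sketch under the assumptions in the comment: the total length is reported as unknown (`*` in `Content-Range`, per the HTTP range-requests spec), and the chunk size served per request is an arbitrary choice of mine:

```python
import re

def range_response(range_header: str, chunk_size: int = 65536):
    """Parse 'bytes=START-' (or 'bytes=START-END') and build a
    206 Partial Content Content-Range value with unknown total
    length ('*'), so the browser treats the video as endless."""
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", range_header)
    start = int(m.group(1))
    # Open-ended request: serve a bounded window so we stay in control.
    end = int(m.group(2)) if m.group(2) else start + chunk_size - 1
    return "206 Partial Content", f"bytes {start}-{end}/*"
```

By always answering with a bounded window, the server keeps the browser requesting follow-up ranges at its own pace instead of trying to buffer the whole "file".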


You can do this with the WebM container, but not with plain MP4. The reason is that the MP4 container requires the video frames to be indexed in a metadata part of the file, which can sit at the front or the end but cannot be created for an infinite stream. WebM (i.e. VP8/9) is not supported on Apple's platforms, so you also have to have an HLS + H.264 fallback. But in general it's doable with WebM and works nicely with low latency compared to segmented formats like HLS and DASH. It's impossible to cache on a CDN, of course, which is not the case with HLS.


I actually use fragmented MP4 streams as a method to live-stream video directly to a browser's video tag. I can point any common browser directly at that stream and it will start playing immediately... I've written a small utility to proxy requests to an RTMP server and repackage the stream for HTTP clients with just a little bit of overhead.


You can absolutely do that with the MP4 container format - I have. For the most part, browsers ignore the header information.

MP4 was used for streaming long before WebM existed, and before HLS was established.


Not really. The browser needs to read the header to know where the video frames are in the file.

The table with pointers to all the frames cannot be made before all the frames are encoded and their sizes are known. Then you can shuffle the file and move the table to the start (known as fast-starting), in which case you can start viewing the video before it is completely downloaded.

This can't be used for live content, since the encoded frames do not exist yet at the time you start viewing.


Does it have to know the information for all frames, or just the key frames? If it's the latter, then you could encode so the initial table has X key frames spread out over a few hours, then force a frame to exist at those locations when encoding the live stream.


You cannot use plain MP4 for live streaming. The file cannot be demuxed properly if the moov atom is incomplete: frames cannot be located, so it is impossible to decode. With live streaming, you never have the full file until the end of the event, which is why you cannot stream it. Video streaming with a complete file is possible, of course, but that is nothing special nor related to this discussion.


You only need:

    moov [moof mdat]+
This allows you to build a never-ending fragmented mp4. This isn't a new technique. It's been around and used for livestreaming for years.
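The `moov [moof mdat]+` layout above is just a sequence of length-prefixed boxes (per the ISO BMFF spec: 4-byte big-endian size including the 8-byte header, then a 4-byte type). A toy sketch, with the box payloads stubbed out since real moov/moof contents are far more involved:

```python
import struct

def box(box_type: bytes, payload: bytes) -> bytes:
    """Build one MP4 box: 4-byte big-endian size (header included),
    4-byte type, then the payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

def fragment(frame_data: bytes) -> bytes:
    """One fMP4 fragment: a moof box (metadata, stubbed empty here)
    followed by the mdat box carrying the actual frame bytes."""
    return box(b"moof", b"") + box(b"mdat", frame_data)
```

A live server emits the moov once, then appends one such fragment per batch of frames, forever; no global frame table is ever needed.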


HLS is one of those standards that I find absolutely maddening and clearly only exists because a number of things are completely broken. It also makes it much harder to capture or intercept the stream for your own purposes.

Browser/firewall refusing to accept anything other than HTTP? HTTP range requests broken? Connection keepalive broken? I know, we'll segment the video into blocks and send a list of the blocks to download as individual files!


MJPEG is a normal, albeit very simple, video stream. While you could certainly squeeze the counter into fewer bytes by using a more advanced codec with interframe coding, it would be overkill for this PoC.


The point I'm trying to make is that video streaming is a standard technique with off-the-shelf solutions. The author presents MJPEG as some long-lost solution for doing this without JS, but you could just as easily throw in a <video> tag without JS and call it a day. Except video tags aren't as exciting.


If you did that, wouldn't you need to keep generating frames regardless of whether they updated, versus this solution that only sends a new JPEG down when the concurrent count changes?



