This uses CloudFlare and the page is cached (CF-Cache-Status: HIT header). Kinda disappointing that we're not receiving data from your computer but rather from a CloudFlare edge machine.
I don’t know too much about how CloudFlare works, but it looks like you got a direct connection! Congratulations! My guess is that your DNS does not have the CloudFlare entries yet.
Yeah, I find the slow loading really fun. Unfortunately, when I install the Ethernet card I imagine it will get a fair bit faster. But on the bright side, I think it will reduce the need for CloudFlare, which helps in terms of authenticity.
I find it interesting how little computing power you need to host a highly accessed website by using CloudFlare. When 386s were the cutting edge it would have taken a carload of computers plus some fancy cutting-edge load balancing software. All very expensive.
Please for the love of kittens turn off Cloudflare and never put it on
again. For me and other Tor users that awful service is a guaranteed
way of ensuring we'll never see your site (which I am very interested
in BTW).
Extra points for being able to insult both the OP and myself in one
sentence. If you've nothing constructive to add how about you leave us
insane, ignorant and arrogant grown-ups in peace to work out our own
stuff?
generally, adding random query params like ?1, ?2, ?12345 helps, since the default settings include the query string in the cache key.
that will also work in this instance.
you won't, however, see it slowly send the response as you do on http://trombone.zapto.org/, as cloudflare seems to block until it has received the full response from the backend.
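For what it's worth, here's a rough sketch of that cache-busting trick in Python (the function names and the random range are my own; the behavior assumed is the default Cloudflare cache key described above, which includes the query string):

```python
import random

def bust_cache(url):
    """Append a random query param (e.g. ?12345) so a cache key that
    includes the query string misses and the request reaches the origin."""
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}{random.randint(1, 99_999)}"

def served_from_edge(headers):
    """CF-Cache-Status: HIT means the edge answered from cache
    rather than going back to the machine behind it."""
    return headers.get("CF-Cache-Status", "").upper() == "HIT"

print(bust_cache("http://trombone.zapto.org/"))
print(served_from_edge({"CF-Cache-Status": "HIT"}))
```

As noted in the reply below, the site owner can change what goes into the cache key, so this only works against the default configuration.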
You're not wrong, but all of that behavior is configurable, so it may work on some sites and not others. The account owner can tell Cloudflare whether to consider query params different or the same for cache-hit purposes. You can also configure whether Cloudflare streams or buffers (although some of it does require the Enterprise plan).
No affiliation with cloudflare other than I use them for several sites.
Somehow getting the IP address of the server (in this case 174...*) would enable you to connect directly. Websites such as crimeflare.org crawled the net to gather those addresses, probably by scanning, but that site seems to have been shut down.
A site that really wants Cloudflare's protection would ignore all traffic that doesn't come from Cloudflare though. In practice, many origins probably aren't locked down in this fashion.
Then what exactly is the point of posting it?
Not calling you out specifically but there's been quite a few similar ones, always unavailable because they don't scale beyond one user. So there's nothing to see and little to discuss.
consider putting it behind a cdn like cloudfront or fastly, i think both have free plans - it's a good way to illustrate that you can serve a lot of traffic with very little backend power.
> it's a good way to illustrate that you can serve a lot of traffic with very little backend power.
What? It demonstrates that Cloudfront or Fastly can handle a lot of traffic, since they'll cache just about everything if you put it in front of a static website...
That's also true if you employ a ten year old laptop as I do, quite power-efficient and has a built-in UPS (todo: get one for the router). No CDN or anything and the couple of pageloads per second (at peak), as the HN homepage generates, barely raises the load average into the point-tens.
The software/service you run makes most of the difference between whether it can run on standard at-home hardware or if it needs some distributed autoscaling system (when speaking of HN homepage types of traffic). Of course, if you're serving video or large images/audio, you're going to need more than a few megabytes per second of uplink. Or if your site trains neural networks custom for every visitor, that's a special case. But for regular websites...
True, but that's a weird point as you can use free shared hosting (like Netlify) then as an example that you don't need any paid resources at all to take a website live, but the point of this submission seems to have been about taking a website live with your very own hardware, so mentioning CDNs feels weird.
I think I have CloudFlare on, and I am seeing requests with a CloudFlare header, but I am still getting a huge amount of traffic. I am not sure what the issue is. Maybe the site is so busy that CloudFlare can’t even load it to cache it, haha. Anyway, I have to leave for an exam now, so let’s hope it manages to work itself out.
Changing to Cloudflare probably meant updating your DNS settings. Based on the TTL of the record and clients respecting it, the change can take a while (up to days, even) to spread around the world.
Oh, no doubt, but I hold out hope that somehow, some way, somebody will find a way to handle the hug of death. Perhaps it's simply impractical on such modest hardware.
I think it would have fared much better if I had waited until I got my ISA Ethernet card in the mail. But serving this over serial at 38400 baud and watching it try to keep up was tempting. I’ll have to see how well it fares with real networking hardware. At that point I might post it again, although I will have to add enough content to it to justify a repost.
Are you able to measure the latency of processing one request when there is no load?
Just back of the envelope, if it takes 200,000 instructions to handle a request and we assume 6 cycles per instruction, then on a ~30 MHz 386 that's about 25 requests per second.
HN is roughly 50K requests over 6 hours, so that’s roughly 2 requests per second on average. I would imagine it peaks to about 25. So in theory you should be able to handle the traffic.
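Spelling out that back-of-the-envelope math (the ~30 MHz clock, the instruction count, and the CPI are all assumptions from the comment above, not measurements):

```python
CLOCK_HZ = 30_000_000        # assumed ~30 MHz 386 clock
INSTR_PER_REQ = 200_000      # guessed instructions to handle one request
CPI = 6                      # assumed cycles per instruction on a 386

# Theoretical request ceiling for the CPU
max_rps = CLOCK_HZ / (INSTR_PER_REQ * CPI)
print(max_rps)               # 25.0 requests/second

# HN front-page load: ~50K requests spread over 6 hours
hn_requests = 50_000
avg_rps = hn_requests / (6 * 3600)
print(round(avg_rps, 1))     # ~2.3 requests/second on average
```

So on average the CPU has roughly a 10x margin; it's the ~25 req/s peaks that would push it to the limit.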
>Perhaps it's simply impractical on such modest hardware
I imagine it's not just the hardware limitations, but the available software. This one, for example, is running MS-DOS, for which there were never really any serious server-side HTTP/TCP implementations.
On the other hand, there were very busy BBS systems running on DOS where there had been time for years of various optimizations to happen.
It loaded fast just now, but it's just a wall of text with 3 links. Just seems less impressive because of that. But it is cool! I enjoy older hardware.
The increased speed is probably because of CloudFlare. I know it’s kind of cheating, but it was the only way to avoid the hug of death. The good news is that the 386 is still serving lots of incoming requests, just not every single one. If I can think of some more authentic way to keep the site up, I’ll try that.
I wrote it back in 2014 and I've hosted brutman.com on it at times. My usual machine to run it on for torture testing is a PCjr with a hard drive and Ethernet. This 386 would be so much happier with an Ethernet card; the serial port connection is really hurting the performance.
A new version of mTCP is in the works; look for it in the next two months.
Oh wow, I know you from the README! It really is something seeing you here. It is not a very impressive site at the moment but the immense amount of traffic that I have been getting is encouraging me to grow it into something with some actual content. Probably something retro related in some way.
It’s fun. Lots of people like doing retro game dev and the like, but often you are stuck with ancient C toolchains that suck even if you like C itself. Getting Rust working is a great way to make it more enjoyable from a hobbyist perspective, I think.
If you talk to the spooky GCC people with beards they'll point out that they support much much weirder chips than that (albeit with little to no library support).
You may not believe me, but there are relatively modern 386-compatible embedded systems. Some of them run DOS for the sake of very old applications, such as certain industrial control software. Targeting them for modernization without gutting the hardware (which can be perfectly functioning and highly durable) is a valid goal.
You're a cat, a 386 DOS machine is a tiny box on the floor. You jump in the box and slowly settle all of your limbs into the box till you can sit. Now you fit.
Perhaps the same kind of people who got GCC to compile for 386 DOS machines (also using DPMI). Which might sound useless, since it's an underpowered operating system running on underpowered hardware, but it was used to build some very popular software, like the original Quake game (source: http://www.delorie.com/djgpp/history.html).
The FastDoom port slims down some floor textures and such and makes Doom playable (and not stamp sized) on a 386.
About the 386: you could read SICP as a GNU Info file blazingly fast and run a Scheme compiler on another TTY, or under GNU Screen.
You could typeset with Groff and Mom, too.
Now try running LaTeX on that. Yes, Donald Knuth is the Ur-creator of CS and typesetting, but hey, sometimes you don't need a full 3 GB TeX Live suite to typeset a book or a notebook of math equations.
Ditch cloudflare and write the web server in assembly and see if it holds up to raw HN traffic. Bonus points if you have an L1 cache and can fit and serve your entire website from it.
I think the 38400 baud connection is a bigger issue than the CPU load. When I get Ethernet working I think there is a decent chance that I won’t need CloudFlare—especially if I can find a full 32-bit server that really uses the potential of the 386. Now to wait for that card to arrive in the mail. ;)
Edit: If you want to visit the page directly without CloudFlare, go to http://trombone.zapto.org instead.
There are 100 Mbps (Fast Ethernet) cards for the ISA bus, which in practice can apparently be around 50% faster than 10 Mbps Ethernet over such a bus. However, they seemed pretty expensive compared to the ubiquitous 3Com EtherLink III cards, which are available quite cheaply. I think Ethernet will still make a huge difference in terms of serving this site.
It is indeed an SX! I am upgrading it to 16 MiB soon, as well as putting an Ethernet card in it. At that point I think it might end up a much more capable host. I am not sure if I want to find some way to keep this page up long-term yet. If there ends up being a reason to, then I will look around for some suitably retro way to host it, while still allowing me to use the 386 for other things.
Sort of. 32-bit internally, 16-bit externally. Not as fast as the DX, but it had all the processor modes, so it could run 32-bit protected mode code, unlike the 286.
One place I worked, we sold a 286 upgrade board that had a 386SX and a few support chips. Underneath was a 286-style socket. You removed your 286 and put this thing in its place. They worked OK and provided a decent speed boost.
Amstrad's last great PC series (the 3x86 series made with very standard components, unlike the unusual 2x86 series) used this strategy for the 3386, I think.
My issue with these projects is how much energy they use for whatever task they can do.
I have a 10-year-old machine that was high end in its day being lapped by a high-end laptop on all tasks, except for GPU tasks I slammed into the PCIe slot. The power consumption is honestly sad. It's not like it's an energy-cost thing, as it would take decades to break even in lower electricity bills after buying a high-end machine specifically for lower energy use; it's more of a self-conscious thing.
OP's machine should just be a compute instance virtualized in some gigantic memory cluster somewhere that was already running.
"Johnson, why can't we push this security update?"
"Well boss, we thought it would be more webscale to use our own chip design, so we're kinda stuck, but the good news is that we've negotiated the cost of new masks down to 3 million dollars"
You might be able to bake some preprocessing of e.g. HTTP into the NIC, doing incrementally more work using well defined standards before delivering packets to the host. Especially with an FPGA.
No, that’s just a general purpose computer. ASICs are dedicated chips for a given task, where that task’s “program” is built into the chip and it can only do that.
While there may not be HTTP-specific ASICs, there certainly are SSL/TLS-offloading ASICs and general TCP-processing ASICs, found on higher-end network cards, in routers, etc.
This was my first PC, my freshman year of college back in 1990. It felt damn good that I got the 65MB hard drive instead of the typical 40MB that most machines had back then.
I still remember all the tricks to make a floppy disk fit more data and get more ram. This was long before websites like https://downloadmoreram.com were invented.
Yep, I figured it was the only way to get it to keep working. Interesting that it seems to work for you already. It isn’t yet going through CloudFlare for me when I try to visit it. Maybe DNS cache?
I actually added a link to the page to bypass CloudFlare entirely. I wanted the page to remain accessible so people could see what was on it regardless, and CloudFlare helps with that, but I also know that watching the page load slowly from the real machine is an experience many people are there for, so I tried to find a way to have my cake and eat it too.
Big ooof, of course it's down. I'm hosting a simple static site on a Core 2 Duo + 4GB RAM and it's amazing just how slow the hardware is. It can barely handle 100 concurrent visitors (as tested with JMeter), and if it starts swapping on the spinning rust inside... that's it, forget it, come back the next day. I don't even know how a 386 can even start with modern software.
Why so slow? As a “this will never work” stop-gap when my laptop died, I put the latest Manjaro Linux on a Core Duo iMac ( early 2008 ) to use for work that day ( 6 GB ). I was blown away how well it worked and I was able to do everything I normally do without problems ( well, except I had to use the Outlook web client for email / calendar — but that is just Linux and not hardware ). Months later, I still use it almost every day. It is my preferred spot to take Zoom / MS Teams meetings due to the large screen. I run Docker and Distrobox containers on it. I built a toy compiler in .NET on it. I play Diablo and StarCraft on it from time to time. I have it running Plex and it serves movies and TV episodes to my family ( often to more than one device at a time ). I toy around with SerenityOS on it in QEMU ( in a VM ) and it runs great.
I have not tried to host a static website but it surprises me that 100 visitors would give that hardware trouble. I will have to try that now.
It could also be thermals. On old hardware the thermal paste is often completely fried. I've seen C2Ds barely able to handle anything once they got hot.
Yeah, or dust - my C2D Macbook Pro used to peg all its cores with kernel_task at the slightest hint of work, to the point where the UI thread started hitching because it was getting preempted, and the mouse would skip around the screen. Took me a bit of research to figure out that they use kernel_task for thermal throttling (great naming, guys). Opening it up, there were mats of dust that looked like felt pads between the fans and the heatsinks. Took out the felt pads, suddenly everything was smooth as butter.
A regular static site shouldn't be a problem on a C2D with 4GB, and it shouldn't be swapping either (unless you're doing more than web hosting on that machine). I'm assuming you're literally serving static pages and not running much server-side.
Many people host static sites on 1GB SBCs and 1GB/1T VMs with no issues, and you can make do with even less.
Update: I tried some tests on my secondary server, which is likely slower than your C2D (AMD G-T48E). I simply ran ApacheBench on the Proxmox Backup Server web interface login page since that's the only web service I have running on it. The two machines were connected over Gigabit LAN and HTTPS was used.
This is 1000 requests at a time for a total of 5000 requests. While it was running I was able to make a connection to the login page in my browser as well (it took six seconds to load, but it loaded). I think I did the test properly but let me know if I should try something else; it's been a while since I've done this kind of stuff.
~$ ab -n 5000 -c 1000 https://<PBS server>:8007/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
...
Benchmarking <PBS server> (be patient)
...
Finished 5000 requests
...
Server Software:
...
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Server Temp Key: X25519 253 bits
...
Document Path: /
Document Length: 1940 bytes
Concurrency Level: 1000
Time taken for tests: 34.274 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 10215000 bytes
HTML transferred: 9700000 bytes
Requests per second: 145.88 [#/sec] (mean)
Time per request: 6854.761 [ms] (mean)
Time per request: 6.855 [ms] (mean, across all concurrent requests)
Transfer rate: 291.06 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       21 2863 2225.0   2535  10907
Processing:    98 3275 1836.3   3142  10434
Waiting:        1 3275 1836.4   3142  10434
Total:        118 6138 3078.9   5655  12545

Percentage of the requests served within a certain time (ms)
  50%   5655
  66%   6733
  75%   6965
  80%   7569
  90%  12324
  95%  12469
  98%  12504
  99%  12517
 100%  12545 (longest request)
You are certainly doing something wrong - I'm serving multiple WordPress and static pages from a former passively cooled thin client with a Celeron N3010. The only difference is AES-NI, compared to the Core 2 Duo.
Are those wordpress pages cached, or generated for every pageload? Because if you generate it once and then basically just serve static content, yeah that works fine on any potato (and that's how static hosting should be, so that's great)
I tried running Wordpress with some plugins that required fresh page generation for every pageload for a friend on an Intel Atom... D525 I think it was. A single pageload was more than twenty seconds if I'm not mistaken. Without looking up that model number I'd guess this Celeron probably has similar performance, so your being able to host 'multiple' of those sounds like there's more in play
If the site takes twenty seconds on a D525, it's probably pretty demanding server-side and will require solid hardware to deliver many requests at good performance. Imagine if you're using a machine twenty times faster than the D525 (i.e. a modern desktop CPU), assuming linear speedup you can generate the page in one second. That's one second per pageload per user, and that modern machine is going to likely choke up too if the site gets decent traffic.
Which sums up why WordPress blogs go down a lot when the author runs them on shared hosting or a VPS and didn't care to set up caching until it literally can't handle the traffic anymore
I'm running an AlphaServer DS25, and I can host the heck out of static sites. It even runs php sites like Wordpress decently. Then again, I'm running NetBSD.
IIRC, there was a post on here about Redbean being able to serve a wild amount of requests, on minimal hardware. But perhaps it was a different piece of software than I recall.
I used to run a web forum with hundreds of concurrent users on a much lesser specced machine in the early 2000s so this sounds like a software problem.
When I first posted it a few hours ago, I didn’t have CloudFlare. You can see lots of people unable to load the page at all in the earliest comments. Unfortunately it got to a point where pages weren’t just slow to load, but the connections were dropped entirely. So I figured that between the options of enabling CloudFlare or just having it be completely inaccessible, the former was preferable even at the cost of authenticity. I would love to find a better idea for how to keep the site online, if you have any.
Why does the title have the 'i' in 4 MiB of RAM? That's 4 megabytes, right? Traditionally 4 MB?
What does the extra lowercase letter indicate? I'm not used to that, and I've been dealing with bytes from the kilo-size to the peta-size for 40 years.
That was my first thought when I got unlimited mobile data back in ~2006. I could be always on IRC, seed torrents infinitely... and that's when the teenage lucb1e realized that there is no port forwarding on mobile networks and why that is a problem.
To this day I wonder why nobody seems to care at all about that. It's like being on the real internet except you can't reach each other, you have to always go through some third party that is on the internet proper.
At least we got "net neutrality" now, which doesn't apply to SYN packets for some reason but at least it applies in the other direction, so no more 'blocked site' page on buienradar.mobi because KPN wanted to sell its expensive SMS weather service instead of this newly popular weather radar site.
For what it's worth, I did compile and run a bitcoin miner on my phone ten years ago. Running services on it isn't exactly a new idea, but now that they're so powerful, it also means we can't supply enough power from the battery or dissipate enough heat while in a pocket.
That is what happens when you have a page which is genuinely tiny even for the early days of the web, served over a modern internet connection, and it is cached so it does not have to go all the way to the server. The web could be so much faster than it is.
It could be that it is genuinely not working, but it could also be that it takes a while to start loading. The server can only serve 8 clients at once, over a bandwidth of about 4 KiB/s. So maybe try again and give it some time to load in another tab.
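To put rough numbers on that (the ~8 KiB page size is my own guess; the 10 bits per byte accounts for the start and stop bits on the serial line):

```python
BAUD = 38_400
WIRE_BYTES_PER_SEC = BAUD // 10   # 8 data bits + start/stop bits ≈ 3840 B/s,
                                  # the "about 4 KiB/s" mentioned above
PAGE_BYTES = 8 * 1024             # hypothetical page size
MAX_CLIENTS = 8                   # all clients share the one serial link

# Worst case: 8 simultaneous clients each pulling a full copy of the page
secs = PAGE_BYTES * MAX_CLIENTS / WIRE_BYTES_PER_SEC
print(round(secs, 1))             # ~17.1 s before everyone has the page
```

So even before the CPU does anything, a fully loaded serial link means double-digit seconds per pageload; patience (or a second tab) really is the right call.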
Please see the comments on this thread. It was directly internet facing when I posted it initially. Hacker News managed to bring it down. I had to add a cached reverse proxy. If you have any ideas about how to more authentically keep the site online, please let me know.
Fair enough, I'll admit I was a bit triggered by the title. I'll delete my original comment. There is a realistic bound to the number of requests you can serve per second on given hardware... I'm not doubting you've done things to increase that.