The last few point releases have been very stable, so normally within a week or so. Obviously this relies on having a pretty good test suite (we run ~8,000 tests in ~60 seconds so not too bad for us).
Is there a simple BOM for making something like this without a Car Thing? Would an Arduino/Rockchip with a TFT touch screen work, or does it warrant Pi levels?
The computer is the easy part — any 10-year-old Android phone would do the compute part, and the networking part, and the screen part; and would be easier to program; and would cost less than pretty much any SBC kit.
The hard part (at a DIY level) is, honestly, the physical buttons and knob — wired as an input device to the compute without hogging the device's (likely only) port — and with good industrial design (no gross 3D-printed textures that easily get dirty from being touched a lot; no deep wraparound plastic bezel that prevents you from touching the corners of the display; dynamic-function side buttons thin enough to allow the screen to show the labels for them; etc.)
Right now you can get an Amazon Fire HD 10 for $75 - it has a screen ~3x the size of the Car Thing's. Spotify works out of the box on it. Same with web apps. You can put it in Developer Always On mode in about 10 seconds via the menus and it works great as a dashboard.
The actual Car Thing runs Linux on an Amlogic S905D2 (quad Cortex A53) with 512MB RAM/4GB flash and an 800x480 screen. So you could do something similar with pretty much any random cheap ARM SBC.
I love the idea of this but, given the traffic numbers, this could run on a $4 Digital Ocean droplet with the same result. They've burnt over a grand just to use Vercel. Maybe I'm just older but I don't understand the logic here. A basic VPS, set up once, would have the same result and would be neutral in cost (it's how I run my own little free apps). Maybe the author is lucky enough that $100/mo doesn't really affect them, or they're happy for it to pay for the convenience (my assumption).
6k visits per week * 5 page views per visit is one view per 20 seconds on average. Even very modest hardware with naively written application code should have no problem handling thousands of CRUD database queries per second (assuming every query doesn't need a table scan or something).
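Spelled out, the back-of-envelope math (only the 6k visits/week and 5 views/visit figures come from above; the rest is plain arithmetic):

```python
# Back-of-envelope: 6k visits/week at 5 page views per visit.
visits_per_week = 6_000
views_per_visit = 5
seconds_per_week = 7 * 24 * 3600            # 604,800

views_per_week = visits_per_week * views_per_visit   # 30,000
seconds_per_view = seconds_per_week / views_per_week

print(f"~{views_per_week} views/week, i.e. one every ~{seconds_per_view:.0f} s on average")
# -> one view every ~20 s
```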
Modern computers are mind-bogglingly powerful. An old laptop off eBay could probably handle the business workloads of all but the very largest corporations.
So many people don't seem to understand how efficient modern machines are.
As someone who is literally using old laptops to host things from my basement on my consumer line (personal, non-commercial) and a business line (commercial)...
I can host this for under 50 bucks a year, including the domain and power costs, and accounting for offsite backup of the data.
I wish people understood just how much the "cloud" is making in pure profit. If you're already a software dev... you can absolutely manage the complexity of hosting things yourself for FAR cheaper. You won't get five 9s of reliability (not that you're getting that from any major cloud vendor anyways without paying through the nose and a real SLA) but a small UPS will easily get you to 99% uptime - which is absolutely fine for something like this.
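To put that 99% figure in perspective, here is what the common uptime targets translate to (just arithmetic, nothing vendor-specific):

```python
# Downtime budget implied by common uptime targets.
hours_per_year = 365 * 24   # 8,760

for label, uptime in [("99%", 0.99), ("99.9%", 0.999), ("99.999%", 0.99999)]:
    downtime_hours = hours_per_year * (1 - uptime)
    print(f"{label:>8} uptime allows ~{downtime_hours:.1f} hours of downtime per year")

# 99%     -> ~87.6 h/year (~3.6 days)
# 99.9%   -> ~8.8 h/year
# 99.999% -> ~0.1 h/year (~5 minutes)
```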
As DHH said somewhere, it's incredible that the modern cloud stack has managed to get PROGRAMMERS to be scared of COMPUTERS. Seriously, what's with that? That shouldn't even be possible.
If you can understand programming, you can understand Linux. Might take a while to be really confident, but do you need incredible confidence when you have backups? :)
The problem is that my coworkers are morons who seem incapable of remembering to run a simple `explain analyze` on their queries. They'd rather just write monstrosities that kindasorta work without giving a single damn about performance.
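For anyone who hasn't made it a habit, it really is this small an ask; a quick sketch using psycopg2 against a made-up table (the table and column names are purely illustrative, not from any real schema):

```python
# Minimal habit: run EXPLAIN ANALYZE on a query before shipping it.
# The table and column names here (orders, customer_id) are made up for illustration.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # adjust to your environment
with conn.cursor() as cur:
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        print(line)  # a 'Seq Scan' on a large table usually means a missing index
conn.close()
```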
It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.
> It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.
And that makes perfect sense. Why should humans inconvenience themselves to please the machine? If anyone’s at fault, it’s the database for not being smart enough to optimize the query on its own.
At my last job, we had architects pushing to make everything into microservices despite how absolutely horrible that idea is for performance and scalability (and understanding and maintainability and operations and ability for developers to actually run/test the code). The database can't do anything to help you when you split your queries onto different db instances on different VMs for no reason.
I heard we had a 7 figure annual compute spend, and IIRC we only had a few hundred requests per second peak plus some batch jobs for a few million accounts. A single $160 N100 minipc could probably handle the workload with better reliability than we had if we hadn't gone down that particular road to insanity.
> ... microservices despite how absolutely horrible that idea is for performance and scalability
Heh, reminds me of a discussion I had with a coworker roughly 6 months ago. I tried to explain to them that the ability to scale each microservice separately almost never improves the actual performance of the platform as a whole - after all, you still need network calls between each service, and you could've also just started the monolith twice.
And that would most likely have needed less RAM too, even if each instance consumes more - after all, you now need fewer applications running to serve the same request.
This discussion took place in the context of a B2E SaaS platform with very moderate usage, almost everything being plain CRUD. Like 10-15k simultaneous users making data entries etc.
I'm always unsure how I should feel after such discussions. On the one hand, I'm pretty sure he probably thinks that I'm dumb for not getting microservices. On the other hand... Well... ( ꈍ ᴗ ꈍ )
That's beside the point I'm making. Technology should develop towards simplifying humanity's life, not making it more complicated. It's a good thing we don't have to use clever loop constructs anymore, because compilers do it for us. It's a good thing we don't have to obsess about the right varchar size anymore, because Postgres's text type does the right thing anyway.
It's a systemic problem. You're going to lose the battle against human nature: ever noticed how, after moving from a college dorm into a house, people suddenly manage to fill all the space with things? It's not like the dorm was enough to fit everything they ever needed, but they had to accommodate themselves to it. That constraint is artificial, exhausting to keep up, and, once gone, will no longer be adhered to.
If a computer suddenly becomes more powerful, developers aren't going to keep up their good performance-optimisation habits, because they only had those out of necessity in the first place.
> Technology should develop towards simplifying humanity’s life, not making it more complicated
I agree with this statement for normal people. Not for software developers. You're just begging for stagnation. Your job is literally dealing with computers and making them do neat stuff. When you refuse to do that because "computers should be making my life easier" you should really find another line of employment where you're a consumer of software, not a producer.
You're right but I'll play devil's advocate for teaching purposes:
* Usage won't be uniformly distributed and you may need to deal with burst traffic, for example when a new version is released and all your users are pulling new config data.
* Your application data may be very important to your users and keeping it on a single server is a significant risk.
* Your users may be geographically distributed such that a user on the other side of the world may have a severely degraded experience.
* Not all traffic is created equal and, especially paired with burst traffic, one expensive operation like a heavy analytical query from one user could cause timeouts for another user.
Vercel does not solve all of these problems, but they are problems that may be exacerbated by a $4 droplet.
All that said, I still highly encourage developers not to sell their soul to a SaaS product that couldn't care less about them and their use case, and to consider minimal infrastructure and complexity in order to have more success with their projects.
Is this really playing the devil's advocate though? I know this is a simplification but Stack Overflow launched on a couple of IIS servers and rode their exponential growth rather well. Sure they added more than "a couple" of web servers and improved their SQL server quite a bit, but as far as I recall they didn't even shift to CDN until five or six years after they grew. Eventually they moved into the cloud, but Spliit doesn't even have a fraction of the traffic SO did in its early days. As such I don't think any of the challenges you mention are all that relevant in the context aside from having backup. Perhaps also some redundancy by having two $4 droplets?
Is the author even getting paid for their services though? If they aren't then why would they care? I don't mean that as rude as it sounds, but why would they pay that much money so people can use their product for free?
* That's just static files. Even a $4 droplet will hardly ever run into issues serving that, even with hundreds of simultaneous requests.
* Okay, I guess that means we should use 2? So that's $8 now.
* Vercel really doesn't help you there beyond serving static files from a CDN. That hardly matters at this scale; you should keep in mind that you "only" add about 100ms of latency by serving from the other side of the globe (quick numbers after this list). While that has an impact, it's not really that much. And you can always use another CDN too; they're very often free for HTML/JS/CSS.
* Burst traffic is an issue, especially trolls that just randomly DoS your public servers for shits and giggles. That's pretty much the only one Vercel actually helps you against. But so would others; they're not the only ones providing that service, and most do it for free.
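On the geography point above, the ~100 ms figure is roughly what physics gives you anyway; a rough sketch with ballpark assumptions (antipodal distance and fiber speed are approximations, and real routes are longer):

```python
# Rough physics check on "serving from the other side of the globe".
# Assumptions: ~20,000 km antipodal distance, light in fiber at ~2/3 c
# (~200,000 km/s); treat these as lower bounds.
distance_km = 20_000
fiber_speed_km_per_s = 200_000

one_way_ms = distance_km / fiber_speed_km_per_s * 1000
print(f"one-way ~{one_way_ms:.0f} ms, round trip ~{2 * one_way_ms:.0f} ms")
# -> ~100 ms one way, ~200 ms round trip, before any routing overhead
```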
Frankly, the only real and valid reason is the one previously mentioned: they've likely got the money and don't mind spending it on the ecosystem. And if they like it... who are we to interfere? Aside from pointing out how massively they're overpaying, but they've got to be able to handle that if they're willing to publish an article like this.
> may be geographically distributed such that a user on the other side of the world may have a severely degraded experience.
Okay, am I crazy or can you not really solve this without going full on multi-region setup of everything? Maybe your web server is closer to them but database requests are still going back to the "main" region which will have latency.
Personally I'm digging a hole through the center of the earth to send data via pulsing laser to the far side. But other people can choose to waste their money on multi region relocation, sure.
My understanding is that DO VPSes are underpowered (as are the VPS offerings from most other vendors). Dollar for dollar, bare metal offerings from Hetzner, OVH, etc. are far more powerful.
That said, I completely agree: a $4/month DO VPS can run MySQL and should easily handle this load; in fact I've handled far bigger loads in practice.
On a tangent: any recommendations for good US-based bare metal providers (with a convenience factor comparable to OVH, etc)?
Hetzner is of course not U.S.-based, but has expanded to two U.S. sites (Oregon, I think, and Virginia), so that could be an option. Caveat: I haven't used Hetzner in the U.S., so I can't speak to their quality.
Uh, actually, at a quick glance it seems the U.S. sites are more for their cloud offering and maybe not bare metal servers... I think (sadly): https://www.hetzner.com/cloud
My open source service, lrclib.net, handles approximately 200 requests per second at peak (yes, you read that right: approximately 12,000 requests per minute) on a simple €13 Hetzner cloud server (4 AMD-based vCPUs, 8GB RAM). I'd love to write a blog post about how I made it possible sometime in the future, but basically I cheated by using Rust together with SQLite3 and some caching.
I was surprised by the cost of Vercel in that blog post too, which is why I dislike all kinds of serverless/lambda/managed services. For me, having a dozen people subscribing to $1-$2/month sponsorship on GitHub Sponsors is enough to cover all the costs. Even if no one donates, I’d still have no trouble keeping the project running on my own.
> Running a database accessed that many times on a $4 Digital Ocean droplet?
How many times per second is the DB actually accessed? As far as I can tell from the metrics, they're doing ~1.7 requests/minute; you'll have a hard time finding a DB that couldn't handle that.
In fact, I'd bet you'd be able to host that website (the database) in a text file on disk without any performance issues whatsoever.
I didn't mean it quite so insultingly, but yes, even a very modest server would handle that kind of load easily. You're not particularly high throughput (a few requests per second?) and I imagine the database is fairly efficient (you're not storing pages of text or binary blobs). I think you'd be pleasantly surprised by what a little VPS can do.
I think it would be fine. I run a little private analytics service for my own websites. That service isn't as busy but handles ~11k requests per month. It logs to a SQLite database. It does this on a little Raspberry Pi 400 in my home office and it's not too busy. The CPU sits at 1% to 3% on average. Obviously there are a lot of differences in my setup but I would think you could handle 10x the traffic with a small VPS without any trouble at all.
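For flavor, the core of that kind of logging really is tiny; a minimal sketch of the general pattern (not the actual service's code, and the table and column names are made up):

```python
# Minimal sketch of per-request analytics logging into SQLite.
# Table and column names are illustrative, not taken from the setup above.
import sqlite3
import time

conn = sqlite3.connect("analytics.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS hits (ts REAL, path TEXT, referer TEXT)"
)

def log_hit(path: str, referer: str = "") -> None:
    # One tiny INSERT per request; SQLite handles this rate with ease.
    with conn:  # commits on success
        conn.execute(
            "INSERT INTO hits VALUES (?, ?, ?)", (time.time(), path, referer)
        )

log_hit("/some-page", "https://example.com")
```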
You can read a little bit more about my analytics setup here:
They have under 1k visits per day; unless it's a really heavy app for some reason, just about any basic VPS should handle a web server + DB for that just fine.
It does feel like the tech community as a whole has forgotten how simple and low-resource hosting most things is, maybe due to the proliferation of services like AWS trying to convince us that we need all this crazy stuff to do it?
Oh I didn't understand that. I guess what was impressive to me was I thought they had Python running on the ring itself... otherwise the fact the client is written in Python seems incidental.
I did a 24-hour running race a couple of months ago. Didn't hit my goal, but still came back with 50 miles under my belt.
To me, the bliss is the moment I forget I've been running for five minutes, or 15, or an hour. I can't meditate. Sitting still and breathing, it's never worked for me. I collapse in on myself. But when out running, or even just walking if it's a longer distance, I hit a point where I disconnect. My body is doing... something, and meanwhile I'm off diving deep into ideas or worlds or scenarios or who-knows-what-else.
When events are billed as 24 hours rather than a distance they tend to be lapped arrangements where you do as many laps as you can in the time but can take breaks for food, rest, first aid, etc, between laps. That can significantly separate the average pace metrics for "overall" and "just moving time".
Events like Equinox24, Endure24, and a few others in the UK (I'm sure other places have similar events too), for example, have 5- or 6-mile laps over varied (but usually not technical) terrain. There are people who just walk these events (they tend to be quite open and welcoming to different styles and abilities), sometimes managing more than 50 miles. It can be surprising how much "time on feet" can matter more than pace after a few hours. I start off running, but by the end the best I can manage is a zombie shuffle, which is slower overall than normal walking pace.
Only about 1800ft total elevation, which isn't much when you average it down and take into account a couple of hours here and there to swap plasters, eat some scoff etc.
Working on my first foray into DML-based speakers. Low expectations, but more enjoying the fun of learning a new domain (and it’s a distraction from building garden furniture). Got various exciters, panel types, mount ideas ready, just taking the time out of the evenings.