I assume you're talking about people using Github instead of hosting their own server?
But before GitHub, people also avoided hosting code on their own servers or paying $10/month for a datacenter VPS -- by using earlier services such as SourceForge or CodeProject.com.
If a bunch of people need to do activities <X> and <Y> but <Y> has higher priority in life, then some other service conveniently doing <X> will emerge. That's what Github is... it's a natural emergent phenomenon arising from people not wanting to mess around with running self-hosted Gitlab on a laptop, or Raspberry Pi, or on a $10/month Digital Ocean droplet, etc.
But another approach (my point in the rant) is the "I don't want to know how things work, I just want them to work" approach, which I think more and more people are taking these days (we can relate this to mechanics, where everything is getting more complex). Because things are getting more complicated under the hood but seem "easier" to understand (because we're adding piles of abstractions on top of piles of abstractions), fewer people take the time to create things with the "old" technologies and really learn how things work.
(ps: this exact txt file is self-hosted on a Dell OptiPlex FX160 with 4GB RAM and an old processor, bought for $20 on eBay in 2017)
I know full well how much work it is to keep something like that maintained and secure and properly backed up.
I also know how many things can go wrong - including nasty things like missing billing emails when a credit card expires and losing the associated instance.
"GitHub, by default, writes five replicas of each repository across our three data centers to protect against failures at the server, rack, network, and data center levels" - I can't compete with that! https://github.blog/2021-03-16-improving-large-monorepo-perf...
However, for small-scale projects, personal blogs, or websites, I really like the idea of self-hosting. Knowing that all my data is in my home is reassuring (apart from the fact that my house could catch fire).
(I realized that you're one of the co-creators of Django; thank you very much for this masterpiece)
Your house feels like a fortress, until something happens to you or someone you know. Not just a natural disaster, like a hurricane, tornado, flood, or earthquake, but small disasters too: a house fire, as you mentioned, having your house burgled, some crazy power company glitch, or something as mundane as a child with a cup of water in exactly the wrong place. Even if you are meticulous enough to drive backups off-site, that site is probably close enough that a single natural disaster could take out both your house and your backup location. Encrypt your data and save it somewhere geographically far away, if it's really important to you. It's doable manually, but for everybody else, imo, an online service (eg Dropbox or Google Drive) is easier, which means it's more likely to happen and not fall off the bottom of the todo list.
I do prefer having local copies of everything, because I don't trust The Cloud. Malicious actors can get your account shut down, and then all your stuff is gone, locked up and the key thrown away.
My solution has been to run a RAID 1 NAS device for all my local stuff (including all my media I want to stream in the house - nice having movies and TV when internet is down). 4TB drives, rated for NAS usage.
Important documents and sentimental photos (many GBs' worth from the last few decades) I periodically back up onto an external SSD and put in my fire safe with physical documents of importance.
(I keep my backups in three places: local drives, safe deposit box rotated quarterly, and BackBlaze. I’ve only ever needed the first, but I feel safer knowing I have the other two.)
Probably important, as I live next to a river.
There's an informed decision to be made when graduating from a computer in your basement, to a colo'd server, to a VPS (or dedicated server) in the cloud, to a consumer-facing online service. But if you're willing to have your data in the cloud, replication that used to require setting up multiple colo'd machines across the world (which I couldn't compete with, either) is fairly trivial these days. (An HA failover setup is left as an exercise for the reader.)
That isn't necessarily true.
I suspect the same number of people are taking time to really learn how things work, but the abstractions are allowing more people to get involved in writing and contributing to open-source software, developing services, etc., so the apparent fraction of people doing stuff on their own is lower. But if it weren't for the abstractions, it's not that we'd have more people doing things under the hood, it's that we'd have people not doing them at all.
I'm one of the folks who insists on knowing how things work under the hood, and it's an enormously valuable (and fun) professional skill, but it also frankly limits what I can do. I won't write projects in React because I don't really understand what React is doing and I don't feel comfortable with that, so my dashboards are cronjobs that template out some HTML and upload it to static hosting. (Which is fine, my job isn't making dashboards, my job is debugging prod systems.) Meanwhile my coworkers are throwing together incredibly impressive UIs very quickly. If it weren't for React, neither of us would be writing these UIs.
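For what it's worth, that cronjob-plus-static-hosting style of dashboard can be tiny. A minimal sketch, where the output path, upload command, and cron entry are purely illustrative:

```shell
set -e

# Gather a few numbers for the "dashboard".
host=$(hostname)
when=$(date -u +"%Y-%m-%d %H:%M UTC")
load=$(cut -d' ' -f1 /proc/loadavg 2>/dev/null || echo "n/a")

# Template them into HTML -- no framework, just a heredoc.
out=/tmp/dashboard.html
cat > "$out" <<EOF
<!doctype html>
<title>status: $host</title>
<h1>$host</h1>
<p>generated $when, 1-min load: $load</p>
EOF

echo "wrote $out"

# The real version would end with an upload to static hosting and run
# from cron, e.g. (hypothetical host and schedule):
#   rsync "$out" static-host:/var/www/status/index.html
#   */5 * * * * /usr/local/bin/make-dashboard.sh
```

No build step, no client-side framework: the page is regenerated on a schedule and served as a plain file.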
I remember open-source development in the pre-GitHub days. There was, frankly, a whole lot less of it.
For blogs there's Wordpress. For stores there's Magento. For files there's OwnCloud. Sadly, for communities and collaboration (Web 2.0) there isn't that much. Matrix, Mastodon? They are nowhere near the functionality of Facebook, Telegram, LinkedIn et al.
I spent 10 years reinvesting all our profits into building an open-source competitor to those centralized services, and it still needs some work to rival them:
But I'll tell you something. Having your OWN servers and data is very attractive. That's why everyone moved away from America Online "keyword NYTimes" and towards hosting a web site on nytimes.com using the open HTTP protocol. Today, Facebook's "NYTimes" page is analogous to AOL Keyword NYTimes, Mark Z is analogous to Steve Case, and notifications are analogous to "You've Got Mail!" Nothing new under the sun... now we just need something like the Web to come along and disrupt them by letting everyone self-host their own stuff on a service that's almost as good. The Web Browser in the beginning only had Bold, Italic, etc. but publishers switched in droves, and users followed.
This is true for businesses that can justify the cost of self hosting for a variety of business reasons.
This isn't true for most end-users, who just want reliable services. Hence, Wordpress.com vs. self-hosted Wordpress.
I don't want to be personally responsible for my data; I want a professional organization to make sure it is safe.
For example, my parents are never going to self host their own photos unless they're confident that it is safe and reliable, and takes almost no effort on their part. They don't want that responsibility, and frankly, there's no need.
OSS can be built around businesses that take responsibility and host for others. Part of why so many of these platforms fail to gain mainstream support is that they're not built around a solid business case, and instead somehow expect everyday people to learn far more than is realistic, just to replace a service that is free and easy.
That doesn't have to be true. Wordpress, as mentioned above, is a very good example, as it provides both options. (Self hosting, vs. paying someone else to take that responsibility.)
So if the actual complaint is "stop supporting for-profit companies," sure (but then we have to ask why - there are reasons to expect that a for-profit company is likely to be more stable long-term, more likely to be secure, etc.). If the complaint is "GitHub / Microsoft in particular is bad, and chatons in particular is good," sure (but then, what about for-profit competitors like GitLab?).
But that's unrelated to the original complaint. As you say, chatons is maintaining OSS services for you to interact with.
Anyway, I am for actually discussing substance so here it is:
With Facebook, you only have one possible landlord - a monopoly. With Wordpress, you have a choice of many landlords who compete on price, location, and so forth, knowing that you can take your business elsewhere.
In short it is like going from digital Feudalism to Capitalism. See this for much more info:
That is why I started Qbix 10 years ago. But there is something even better than capitalism and privately owned hosting companies: autonomous, self-balancing networks where dumb pipes carry encrypted messages. Blockchains are just the beginning. That’s why I started Intercoin.org but some projects like MaidSAFE and Matrix.org are ahead of us in many respects (for storage).
I completely understand that the scale and scope of abuse in tech can be much higher than with other services due to automation, but I don't see how it's reasonable to question whether my stuff is still my stuff because I've entrusted it to a third party.
> Is it really yours?
Yes, though obviously it depends on the service, and my agreement with them. Most of my photos, for example, are backed up in OneDrive, which works quite well for my purposes.
> What if your professional organization says "we've installed some big data analytics", or "whoops we lost all your data"?
Good question! My photos, for example, exist on my local computer, Microsoft OneDrive, and on Google Photos. The odds of losing them are, frankly, as non-existent as possible. Multiple storage locations, IMO, are a good way to mitigate the relatively low risk of losing cloud data. Also, almost no effort on my part is required.
(and I think that's a problem: OSS software today is ~reliable enough for end users, but not really for companies)
Look them up.
Like we have an entire cloud "revolution" (note the mocking quotes) because of a bullshit accounting rule!
We'll see if that catches on and makes a difference. Accountants, like software engineers and every other profession, tend to follow trends.
I could have 5-8 NUCs/RPis, install a slew of conventional services I was curious about, and consider things like power, cooling, cables, storage.
I could pay GKE for a k8s cluster, install most of the things I was curious about, and avoid dealing with, e.g., corruption of the SD card. Or the cat knocking the desk-rack of NUCs off.
Since rack, stack, and on-prem issues were not what I wanted to sort out, I chose to deal with what I wanted to learn, rather than rat-hole down data center admin issues.
I've been doing this work for years and years and years. I was scrounging boxes from the CS department for research 15 years ago (or more). Servers are a fundamentally flawed unit of reliability and development. Moving to a design world where you design for someone to knock over the box and _the computation keeps going_ is paradigmatically better.
When you deal with forward looking people in the server-centric world and you show them a service / "keep going" world, their mind is blown and they want it IME.
However, being able to opt out, to varying degrees, of said civilisation when it runs counter to your ideals, becomes too authoritarian, or means that existing within it could compromise what you believe in, is important. The cloud is great, but we must strive to have the skills to be self-reliant, so that it is truly a choice to participate in it to the extent that we do.
The big difference today is that we don't use native apps, so we have to have accounts, to use somebody else's apps for free, because they want to track us and make money from us.
On the web today, we don't use apps, we generate revenue for others via their apps.
IMHO, a service is just a program someone else manages running on a server, that I can use remotely with a REST API.
In college, in the '80s, I used the symbolic math program Maple on an MTS mainframe. Is Maple a service? I say no, only because services tend to be web-related: they have a REST API accessed over HTTP. But if Maple supported a REST endpoint, bam, it's a service.
In this OPs example, the "service" API is email, not HTTP. But it achieves the same goal.
What I think the OP misses is: who the heck wants to log on to the server and tweak the site content from the unix prompt when there is an API?
I really see the point in not having to manage a whole big ecosystem by ourselves (I use gitlab, sometimes github). The rant was about using an interface on another website (not the email API) in order to submit a PR or an issue for updating a website. It's a standardized way of doing an edit (and it can be painful when you don't know github), and more and more websites are using it.
oh, i see. i overlooked that. thanks.
A program running on your school's mainframe that you access via some sort of terminal is a computing service from before the Internet.
Plenty of people use a Unix prompt to access both local and remote computers. That its not your preferred method is fine, but there are people out there that prefer it! If those command line tools are hitting the same API (eg using curl), what then?
Seems like the specific problem is pull requests are not a core git feature in that they cannot be conveniently accomplished in a secure and decentralized way.
Pull requests are a common feature for maintainers in the Linux world, and they don't use GitHub to send them to Linus.
And on top of that they are secure (signed tags) and decentralized (a maintainer can publish them anywhere).
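A minimal sketch of that email-based flow, using throwaway repos (all paths, names, and messages here are made up): the contributor turns a commit into a mailable patch with `git format-patch`, and the maintainer applies it from the "mailbox" with `git am`. For actual pull requests, `git request-pull` generates the message Linus receives, with a signed tag (`git tag -s`) providing the authenticity; signing is omitted below since it needs a GPG key.

```shell
set -e
work=$(mktemp -d)

# Contributor's repository: an initial commit plus the change to send.
git init -q "$work/contrib"
cd "$work/contrib"
git config user.email dev@example.org
git config user.name "Dev"
echo hello > file.txt
git add file.txt && git commit -qm "initial"
echo fix >> file.txt
git commit -qam "fix: append a line"

# Turn the top commit into a patch -- this is what goes to the mailing list.
git format-patch -1 --stdout > "$work/fix.patch"

# Maintainer's repository: starts without the fix, then applies it from
# the patch with git am, preserving the author and commit message.
git clone -q "$work/contrib" "$work/maint"
cd "$work/maint"
git config user.email maint@example.org
git config user.name "Maint"
git reset -q --hard HEAD~1      # pretend the fix hasn't arrived yet
git am -q "$work/fix.patch"
git log --oneline -1            # the contributor's commit, applied by mail
```

No forge account is involved anywhere: the patch travels over email (or any other transport), which is exactly the decentralized property the GitHub PR flow gives up.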
It's an annoying problem because you need decentralized identity. Signatures handle that but the UX is not as nice as the centralized github/gitlab account.
The design of decentralized PR hosting also doesn't play nicely with the GitHub/GitLab UX. If you were trying to create a PR against a GitHub repo, you'd have to handle authenticating the GitHub pull from wherever you hosted. If the repo isn't public, that seems very cumbersome even if it were supported.
It's just much easier to do this entirely inside github/lab, annoying as that is.