We are still early with the cloud (erikbern.com)
185 points by omoindrot 11 months ago | 192 comments

It's crazy and destructive that we are still using the unix paradigm in the cloud.

In the 70s we had transparent network filesystems, and by the 80s I had a more advanced cloud-native environment at PARC than is available today.* The Lispms were not quite as cloud native as that, but you could still have the impression that you just sat down at a terminal and had an immediate window into an underlying "cloud" reality, with its degree of "nativeness" depending on the horsepower of the machine you were using. This is quite different from, say, a Chromebook, which is more like a remote terminal to a mainframe.

I was shocked when I encountered a Sun workstation: what an enormous step backwards. The damned thing even ran sendmail. Utterly the wrong paradigm, in many ways much worse than mainframe computing. Really, we haven't traveled that far since those days. Cloud computing is really still just "somebody else's computer."

There's no "OS" (in the philosophical sense) for treating remote resources truly abstractly, much less a hybrid local/remote. Applications have gone backwards to being PC-like silos. I feel like none of the decades of research in these areas is reflected in the commercial clouds, even though the people working there are smart and probably know that work well.

* Don't get me wrong: these environments only ran on what are small, slow machines by today's standards and mostly only ran on the LAN.

"It's crazy and destructive that we are still using the unix paradigm in the cloud."

  # ssh user@rsync.net "test -f fileThatExists"
  # echo $?
... from my cold, dead hands ...

"Treating remote resources truly abstractly" doesn't work in practice. Too many points of failure in our systems, and you really, really don't want to paper them over with abstraction if you want to build a fault-tolerant system.

That has been true for as long as I have been programming. But:

Similar statements have been made about high-level programming languages. Nowadays most devs don't understand how the CPU works, but write on top of a tower of abstractions and nobody bats an eye. Many of those abstractions are quite complex!

I can imagine that the same could apply to certain kinds of network activity. Look at how people use HTTP as a magical, secure, and robust inter-process communication channel with no understanding of how it works at all.

Lambda is half a baby step in this direction.

Another problem is tollbooths. The phone system uses a lot of bandwidth simply to charge the customer money. My phone company charges me overseas rates if I make a phone call outside the country, even if both endpoints are using WiFi, not the cellular network! I’m afraid of the same with fine-grained and abstract distributed computing, but perhaps the magical hand wave abstractions I posit above can help.

This tollbooth nightmare btw is the dream of the web3 bros.

We cannot afford seamless distributed systems, and I don't think we ever will be able to.

I use Python because I don't care if adding two numbers takes a microsecond instead of a nanosecond. But if a network call suddenly takes 1 s instead of 10 ms? Well, that's a huge problem: let's add a memory cache and a rack-level cache and a parallel fetch and a whole bunch of monitoring.
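A minimal sketch of the memory-cache part of that mitigation list; `fetch_user` and its latency figure are invented for illustration:

```python
import functools
import time

@functools.lru_cache(maxsize=1024)
def fetch_user(user_id: int) -> dict:
    # Hypothetical remote call: 10 ms on a good day, 1 s on a bad one.
    time.sleep(0.001)  # stand-in for network latency
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_user(42)           # first call pays the network cost
result = fetch_user(42)  # repeat is served from the in-memory cache
hits = fetch_user.cache_info().hits
print(hits)  # 1
```

Of course the cache only papers over the latency; the monitoring is still needed to notice when the underlying call degrades.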

Local compute is growing much faster than the internet, and even faster than the local network. I sure hope we get better abstractions, but caring about remote vs. local calls is not going away.

You could design an OS with abstractions built around latency instead of physical machines. It would still allow you to find and use resources according to their constraints of use, but wouldn't force you to keep track of which exact machine they are located on.

I am not sure what you need a new OS for, or what you could get from an OS that you cannot get from today's computing.

If you want the user to know they are talking to a remote machine, but don't want them to care which exact machine it is, we have a ton of great solutions already: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc.

If you want remote calls to be indistinguishable from local calls at the source-code function level, this is also solved! Many RPC frameworks and remote SDKs provide a class-based interface which acts the same as a local class.
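Python's stdlib XML-RPC machinery is one concrete instance of that shape: the client proxy is invoked like a local object, but each call is a network round trip. (The `add` function and in-process server here are just for illustration.)

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A tiny RPC server exposing a single function, bound to an ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client-side stub: calling a method on the proxy reads exactly like
# a local call.
calc = ServerProxy(f"http://127.0.0.1:{port}")
result = calc.add(2, 3)
print(result)  # 5
server.shutdown()
```

The syntax hides the network, but not its failure modes: a timeout or connection error still surfaces as an exception the caller must handle, which is the crux of the "truly abstract" objection upthread.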

The only place where an OS can help is if any OS function can be magically located on another machine. But even then... we have remote filesystems (NFS), remote terminal and execution (ssh), remote graphics (X11), remote audio (ALSA/PulseAudio)... What is left for the new OS? Process management? Is it worth it?

> If you want the user to know they are talking to a remote machine, but don't want them to care which exact machine it is, we have a ton of great solutions already: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc.

Those resources are complex to program against. An OS should offer a simplified abstraction layer to make them as transparent as possible. And yes, process management is worth it: a unified programming model that doesn't force you to keep track of where each process instance is located is essential for massively parallel computing.

Of course this could be done with platforms for massively parallel computing. The point of building an OS would be to put these platforms as close to the metal as possible to improve their efficiency.

S3 is a transparent network filesystem.

Unix is only the paradigm for computing servers. Which makes sense because different apps have wildly different ways of scaling.

There's no effective paradigm for abstracting away 1,000's of CPUs in a general purpose way.

I really don't have any idea what you're looking for here that is possible and that cloud services don't already do.

The paradigm for abstracting away a thousand CPUs is AWS Lambda/GCP or Azure Functions/K8s' implementation of serverless. It's not a total drop-in replacement, because a plain lift-and-shift can't change your paradigm, but cloud functions are very much a Cloud 2.0 (or at least 1.5) paradigm.

S3: yes, a network-accessible FS. Unix - "only" is a telling word - it is the /defining/ paradigm for that. Is there a better one yet? 1000 CPUs? I mean, uh, Hadoop, Spark, etc. etc. What?

> Unix - only is a telling word - it is the /defining/ paradigm for that.

Don't be ridiculous. We had networked distributed file systems over both the ARPANET and LANs before Unix even had networking. I even mentioned this in my root comment.

Unix did make it work much better a bit later: organizations used to run NFS and single sign-on with Kerberos to workstations, automated from-scratch reprovisioning of workstations (so you could just reinstall from a tftp server, with org-specific SW customizations included, if the previous user had messed the box up), smooth remote access to all machines including GUI apps, etc.

It just went away due to Microsoft crowding it out.

People forget that a good deal of “cloud” logic existed in a form on mainframes as well.

Mainframes are very expensive. You can buy a mainframe with a very fast CPU and RAM interconnect and scale it by buying more hardware. Or you can spend 100x less and buy a number of server blades. The interconnect will be very slow, so you can't just run some kind of abstracted OS: you need to run a separate OS on every server blade, and you need to design your software with that slow interconnect in mind. But in the end it's still 100x cheaper, and it's worth it.

Also, mainframes have a growth limit. You can buy very powerful ones, but you can't buy one that's as powerful as an entire datacenter of server blades.

That's why I both hope Oxide Computers succeed and worry they may not.

They are effectively building a mini computer. The smallest unit you can buy from them is an entire rack. Modified rackmount hardware with better software to make it more cohesive.

I really hope they go to half-racks, but I've no idea how you'd stack them.

amen, and those principles still serve us well today. software developers are still as bad at writing code, maybe worse.

I completely understand what you're saying.

I'm sort of reminded of how the US government is the worst (except for all the rest), when having an absolute ruler should be so MUCH more efficient. Problems would be fixed by fiat.

Or maybe, why does lisp persist with its horrible user-unfriendly syntax?


I guess we will just have to invent it. (and you should do your part by reminding people with examples of old systems that elegantly solved the papercuts of today)

> why does lisp persist with its horrible user-unfriendly syntax?

why persists lisp if user-unfriendly else horrible syntax?

sounds more like python :)

“by the 80s I had a more advanced cloud-native environment at PARC than is available today.*”

This statement is entirely false, as admitted in your footnote.

Architecturally and conceptually more advanced. There's a lot of literature about that environment so you can read what I was referring to.

That’s a hand wavey way to make a claim that can’t be backed up.

What you had then was in no measurable way more advanced architecturally or conceptually. Name one facet in which it was more, could do more, or faster, or better.

You can't because it couldn't. No part of your setup was cloud native. Nothing was abstracted away, a core tenet of the cloud.

You can’t just redefine words however you want.

Abstractions usually (always?) have a cost because of physics.

> The damned thing even ran sendmail


> Cloud computing is really still just "somebody else's computer."

That's the definition of 'the cloud'. Unless you run it locally in which case it's your computer. What's your point.

> There's no "OS" (in the philosophical sense) for treating remote resources truly abstractly

It's unclear what you're asking for. Treating stuff truly abstractly is going to get you appalling and horribly variable scalability. If you're aware of that, why don't you tell us what you want to see instead.

Edit: ok, this is from gumby, now I recognise the name. This guy actually knows what he's talking about, so please tell us what things you would like to see implemented.

>> Cloud computing is really still just "somebody else's computer."

> That's the definition of 'the cloud'. Unless you run it locally in which case it's your computer.

Forget the stupid framing of idiotic marketers in the early '00s and go back to the original "cloud" definition (the one engineers were still using in those '00s, before it was distorted for a buck).

The term was introduced (by Vint Cerf, perhaps) in the original Internet protocol papers, literally with a picture of a cloud with devices at the edge. It was one of the revolutionary paradigms of IP: you push a packet into the cloud (network) but don't need to / can't look into it, and the network worries about how to route the packet, on a per-packet basis! You don't say "the cloud is other peoples' routers".

Today’s approach to remote computing requires developers to know too much about the remote environment. It’s like the bad old days of having to know the route to connect to another host.

Excuse me sir, are you going to pay for those?

You don’t pay per packet even though a huge amount of computation is done on devices between you and the machine you’re connecting to in order to transmit each one.

See my comment about tollbooths above.

> Utterly the wrong paradigm

When you emerge from the jungle, you may notice that not only has UNIX conquered the world, but even ""worse"" paradigms like Windows and iOS have proliferated. You have to ask why a situation that is supposedly so much worse is so popular: is it really everyone else who is wrong?

Appeal to the currently incumbent solution is not convincing. Very often the majority of people simply choose the lesser evil -- not what is best in quality or productivity.

Or need I remind you that hangings at sunrise and sunset were commonplace and people even brought their kids to them?

I'm sure back then people defended it as well, and it's likely that if you heard their arguments you'd facepalm.

There are still a lot of fossil-fuel-powered automobiles being driven around. That doesn't mean they are the future.

Have you heard of urbit?

I have. Its use of intentionally obscure language that makes APL seem readable and consistent in comparison, just because "only smart folks should be able to code in this", is something I simply can't accept. And I love obscure languages!

Good replies by others here. "crazy and destructive" that you have no idea about how computers work today, or how computers are still computers. Your ignorance about things like Sun workstations as it relates to literally everything today, I mean you have no idea about modern computing lol

As a developer I have an adversarial relationship with the Cloud even though I use it all the time. The reason for that is money / billing.

As soon as a credit card is in the relationship exploration and experimentation is over for me.

My local Linux machine may go on fire but it will never send me an invoice no matter what I do.

A cynical view would be the billing is designed to trip you up.

As an example, if you use Azure with a Visual Studio subscription which includes credit, once the credit is used all of your services are suspended and no further charges are incurred.

As a pay-as-you-go customer this option does not exist. You can set a billing "alert" but that doesn't stop the charges.

It's kind of weird that it's not just a built-in toggle in the system, but GCP has the primitives to let you suspend things when a threshold is met.


Shame it doesn't actually protect you, this startup [1] had a spending limit and they racked up charges so fast even Google's own billing system couldn't keep up.

In typical Google fashion, it's the luck of the draw whether you get saved or lose your home [2].

[1] https://blog.tomilkieway.com/72k-1/

[2] https://news.ycombinator.com/item?id=25378899

Note that the billing budget feature they used is, to use the technical term, fucking worthless. All it does is send an email notification.

I updated my post to link to the section about capping costs, though that still has the delay and won't totally save you.

> and won't totally save you

What a pointless (and perhaps malicious) feature. If something slow is going on, I can check it manually.

It is when something fast is going on that I need automated emergency stops.

Disabling billing, from your link:

> Note: There is a delay of up to a few days between incurring costs and receiving budget notifications. Due to usage latency from the time that a resource is used to the time that the activity is billed, you might incur additional costs for usage that hasn't arrived at the time that all services are stopped.

So it's still pretty useless. Apparently they do have real-time billing updates via Pub/Sub, but then it's up to you to code what to do when you spend too much. If you're in an exploration phase for [GCP PRODUCT X], you're not going to preemptively write a safeguard to turn off [GCP PRODUCT X] in case of too-high billing updates.

It's better than nothing, but kind of a slap in the face that they do have all the tools to really allow people to have a hard spending limit, but they don't.

I've heard the argument "but it's too dangerous, people might lose non-backed-up data", but that also happens if you set a Billing Limit; it's just that the billing limit will kill all your projects AND still let you rack up days of over-the-budget billing.

The capping costs sample code in my updated link warns you that it will go and forcibly stop assets, possibly deleting data, due to suspended billing. It still has the same notification delay so it's not a total panacea, but it does help alleviate some of the fear that I'll accidentally end up with a huge bill at the end of the month due to a small misconfiguration or for forgetting to shut down a GPU instance or something.

Yeah that is quite scary. I'd feel much better if I could put a limit on the billing. Just shut everything down if I go beyond X amount of money.

It’s even worse than that really because it only takes a small slip into some of the cloud-native services and that adversarial relationship is entirely unavoidable and unportable and you are stuck with it. Which is exactly what is demanded by the providers to get the best cost-benefit relationship in the short term. Of course the human race is entirely cursed by short-term thinking.

The true cost of all technology is only apparent when you get the exit fee invoice.

Interestingly, at Google the typical developer workflow (google3) is very cloud native.

Most devs write code in VS Code in the browser. Many (most?) devs don't have a physical desktop any more, just a cloud VM. The code lives in a network-mounted filesystem containing a repository. The repository is hosted remotely (everyone can see edits you make to any file nearly immediately). Builds are done remotely with a shared object cache. Tests typically run in the cloud (forge).

Facebook has similar infrastructure, although more pieces run locally (builds were mostly done on your VM circa 2020)

For my personal projects, I try to do most development on a cloud instance of some kind, collocated with the rest of the infrastructure.

> Many (most?) devs don't have a physical desktop any more,

That would explain the (bad) design of their software.

I prefer the ability to run and debug locally, coupled with a good IDE. I know VS Code is popular and people customize the shit out of Vim, but IntelliJ just works for me when I'm writing Java, Kotlin, or TypeScript/React. Refactoring and debugging are not comparable. And I know most think it's hard on resources, but we have 200k lines of code and it works very well on a 16GB M1 Air, leaving more than enough spare resources for the system.

What? Doesn't even make sense. Why would lacking a physical desktop cause developers to make bad software?

Many developers, now and before, like to have their own desk/space; it helps them think. Getting rid of that space or changing it may not be optimal for many developers I've worked with.

Lol desktop meaning a physical computer. Engineers still have desks with tops. If anything they have more space than ever since the offices are so empty.

Not sure this follows. Their designs might be bad(?), but certainly for any UI-driven applications, they do use native and emulated devices.

What OP means is that you ssh into a cloud machine for development.

Having heard complaints from Google developers, the problem with this is the limitations of Chromium and the browser more generally. Browsers are utterly terrible at letting users script their own shortcuts, etc.

It's perfectly possible and in fact quite pleasant to work with intellij inside google. At least for JVM languages.

Disclaimer: I work for google

Wait, I remember Google gave up supporting IntelliJ around 2011, leaving Eclipse as the only full-featured IDE option. Did that change after 2011?

That reversed in ~2016. Because Android Studio was based on IntelliJ and heavily staffed (including Blaze support for development of Google's own Android apps), TPTB decided that they should put their weight behind IntelliJ instead of Eclipse. Official internal support for Eclipse was discontinued and the Eclipse team was disbanded.

Support is still there in some way, Bazel support being officially integrated is probably a good external indicator.

The Perforce plugin is Piper compatible, and works really well.

The problem is that developers have no idea how to run systems at a scale larger than their local Mac and iPhone.

> VS code at Google

MS have done a fantastic job of getting developers everywhere hooked on VS Code, whether they are writing for the Windows ecosystem or not.

I switched all my dev work to Gitpod a year ago and I don't want to go back to developing locally anymore. I curse and swear every time I need to work on a project locally.

I've had interest in trying this dev flow out, but I haven't been able to determine how it would work for multiple projects that work in concert.

For example, a web dashboard project with its own backend that also communicates with an API, which is a separate project.

Does Gitpod (or Codespaces) support projects (repositories) that work together?

Gitpod URLs are generated every time you start a new environment (usually every time you start working on a new feature/bug fix), and it doesn't have static URLs. So you would need to update the endpoint URLs manually.

If you use VS Code locally to connect to Gitpod instead of in the browser, all URLs are mapped to localhost, so then it shouldn't be an issue.

But I did some digging, and it looks like they're aware of this limitation and are working on a solution: https://github.com/gitpod-io/gitpod/issues/898

I can't think about the cloud without immediately grasping its huge downsides: absolutely no privacy at all, data lock-in, forced migration, forced obsolescence, and things just vanishing if the rent is not continuously paid.

I have files on my computer from the 1990s and 2000s. If we lived in the cloud-centric world those projects that I did back then would probably be gone forever since I'm not sure I would have kept paying rent on them.

There's also no retrocomputing in the cloud. I can start DOSBox on my laptop and run software from the DOS era. That will never be possible in the cloud. When it's gone it's gone. When a SaaS company upgrades, there is no way to get the old version back. If they go out of business your work might be gone forever, not just because you don't have the data but because the software has ceased to exist in any runnable form.

It all seems like an ugly dystopia to me. I don't think I'm alone here, and I think these things are also factors that keep development and a lot of other things local in spite of the advantages of "infinite scalability" and such.

I'm not saying these things are unsolvable. Maybe a "cloud 2.0" architecture could offer solutions like the ability to pull things down and archive them along with the code required to access them and spin up copies of programs on demand. Maybe things like homomorphic encryption or secure enclaves (the poor man's equivalent) can help with privacy.

... or maybe having a supercomputer on my lap is fine and we don't need this. Instead what we need is better desktop and mobile OSes.

> I have files on my computer from the 1990s and 2000s. If we lived in the cloud-centric world those projects that I did back then would probably be gone forever since I'm not sure I would have kept paying rent on them.

On the other hand, I don't have any of my 90s/2000s projects because I would occasionally lose a hard drive before transferring everything to my new machine, or would occasionally transfer not-everything and then later regret it.

I guess dropbox isn't "the cloud", but I haven't lost anything since I started paying for dropbox when it came out, and things wouldn't just vanish if the rent is not continuously paid.

I sure wouldn't mind more cloud services that improve and add to the local computing experience rather than deliver themselves only through a browser and a web connection.

With local stuff you can lose it. With cloud you will lose it eventually if it's dependent on any form of SaaS that you don't control.

I agree with you that a cloud 2.0 architecture is needed. I don’t agree with you that you can’t run DOSBox in the cloud. You totally can. In fact, you can containerize a dosbox app and forward the output over websockets or tcp. I have files from 1990s and 2000s as well. I keep backups, as everyone should when dealing with cloud/internet/not-my-machine.
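A sketch of what that containerization could look like (the base image, paths, and headless-video trick are assumptions, not a tested recipe; a real deployment would forward the display over VNC or websockets as described):

```dockerfile
# Hypothetical sketch: DOSBox in a container.
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends dosbox \
    && rm -rf /var/lib/apt/lists/*
# Dummy SDL driver so DOSBox runs without a display; swap for a
# VNC/websocket forwarder in a real setup.
ENV SDL_VIDEODRIVER=dummy
COPY app/ /app/
CMD ["dosbox", "/app/RUN.EXE", "-exit"]
```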

I can run DOSBox in the cloud. What I can't do is run an old version of Google Docs, Salesforce, Notion, or Alexa.

I can run old commercial software that I paid for in DOSBox or a VM because I have the software, even if it's just in binary form. I have the software and the environment and I can run it myself.

That's the difference. The cloud is far more closed than closed-source commercial software.

I can also run the software with privacy. When I run something locally there's nobody with back-end access that can monitor every single thing I do, steal my data, scan my data to feed into ad profile generators or sell to data brokers, etc.

I think you are mixing up SaaS with cloud. You can run Firecracker functions and an old version of Rocky Linux, but it's 100 times more complex than paying for the systems, and the cloud provider encourages these proprietary tools because of this. Something similar would be DynamoDB or Firebase, which are pay-as-you-go SaaS.

I used to agree about paying rent for my old files, until I realized that it costs me anyway to ensure those files stay available over a long time.


> I never ever again want to think about IP rules. I want to tell the cloud to connect service A and B!

Dear God this 1000 times. My eyes bleed from IP-riddled firewalls foisted upon my soul by security teams.

If I could also never NAT again, that'd be nice.

> Why do I need to SSH into a CI runner to debug some test failure that I can't repro locally?

Hey, I can answer that one. Because an infra team was tasked with "make CI faster" and couldn't get traction getting the people responsible for the tests to write better tests (and often just hit a brick wall getting higher-ups to understand that "CI is slow" does not mean the CI system is slow; CI's overhead is negligible), and instead did the only thing generally available: threw money at the problem.

Now CI has a node that puts your local machine to shame (and in most startups, it's also running Linux, vs. macOS on the laptop) (hide the bill), and is racing those threads much harder.

I've seen people go "odd, this failure doesn't reproduce for me locally" and then reproduced it, locally, often by guessing it is a race, and then just repeated the race enough times to elicit it.
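The "just repeat the race enough times" trick is simple enough to sketch; the harness and the deterministic stand-in "flaky" test below are invented for illustration:

```python
def repeat_until_failure(test_fn, attempts=1000):
    """Run a suspected-flaky test repeatedly; return the attempt number
    of the first failure, or None if it never failed."""
    for i in range(1, attempts + 1):
        try:
            test_fn()
        except AssertionError:
            return i
    return None

# Deterministic stand-in for a racy test: passes twice, then fails.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    assert calls["n"] != 3

first_failure = repeat_until_failure(flaky)
print(first_failure)  # 3
```

With a real race you would run the actual test body in place of `flaky`, ideally under load or with extra threads to widen the race window.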

Also, sometimes CI systems do dumb things. GitHub Actions has stdin as a pipe, I think? It wreaks havoc with some tools, like `rg`, because they think they're in a `foo | rg` type setup and change their behavior. (When the test is really just doing `rg …` alone.)
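A sketch of the mechanism (the mode names are invented; each real tool has its own heuristics): the tool checks whether stdin is a terminal, and a CI runner that wires stdin to a pipe flips the answer.

```python
import sys

def choose_mode(stdin_is_tty: bool) -> str:
    # Mimic how a search tool picks a mode: a terminal on stdin means
    # "search the files I was given"; a pipe suggests it is downstream
    # of `foo | tool`, so it filters stdin instead.
    return "search-files" if stdin_is_tty else "filter-stdin"

print(choose_mode(True))                # what you get in a laptop terminal
print(choose_mode(sys.stdin.isatty()))  # may differ on a CI runner
```

The fix in CI is usually to redirect stdin explicitly (e.g. `rg pattern < /dev/null`) so the tool sees the same shape of input everywhere.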

Also, dev laptops have a lot of mutated state, and CI will generally start clean.

Those last two are typically hard failures (not flakes) but they can be tough to debug.

> Do we need IP addresses, CIDR blocks, and NATs, or can we focus on which services have access to what resources?

We need IP addresses, but there's not really a need for devs to see them. Nobody understands PTR records though. CIDR can mostly die, and no, NAT could disappear forever in Cloud 2.0, and good riddance.

Let me throw SRV records in there so that port numbers can also die.
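For reference, an SRV record carries the port alongside the target host, so clients can discover both at once; a hypothetical zone fragment (names invented) looks like:

```
; owner                  TTL   class type prio weight port target
_api._tcp.example.com.   3600  IN    SRV  10   5      8443 api1.example.com.
_api._tcp.example.com.   3600  IN    SRV  20   5      8443 api2.example.com.
```

Clients pick a target by priority and weight and connect to the advertised port, so the service can move hosts or ports without anyone hardcoding either.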

Because it's bothering me: that graph is AWS services, not EC2 services.

> Now CI has a node that puts your local machine to shame

A nice problem to have; I only know the opposite side: the developer laptop being twice the speed of CI.

I'll admit it depends a bit. We're moving to Github Actions and their runners are … slow. There are custom runners, but they're a PITA to set up. There's a beta for bigger runners, but you have to be blessed by Github to get in right now, apparently.

Saying that Spotify is "producer-friendly" must be couched in the context of the times. 100% of $0 is still $0, and at the time most people were just pirating music so you weren't making anything off of recordings. If Spotify wanted to give you literally fractions of a cent instead of $0, you were going to take that. I wouldn't say it was ever really friendly to producers...mostly to consumers and, in order to be friendly to consumers, they had to win over record labels. And I think Spotify made a _lot_ of compromises in order to do that, including taking money that should really be going to producers and paying off the RIAA/labels so they continue to put their catalogs on there.

Source: I was a producer when Spotify started and I still am.

Spotify pays up to 70% of their revenue to copyright holders.

So, your beef should be with them, not Spotify. Which you should already be aware of if you truly are a music producer.

However, it costs you nothing to bash Spotify. It may cost you your career to bash the actual greedy leeches who control the money flows in music.

there are countless musicians who own the copyright to their own stuff and get peanuts from spotify even for non-trivial number of streams.

until spotify pays the artists I listen to from MY subscription, they will not see money from me. bandcamp all the way.

> get peanuts from spotify even for non-trivial number of streams.

Define "non-trivial". Easily 80-90% of all plays are by a handful of artists, the absolute vast majority of whom are owned by labels.

Even if your non-trivial amount of listenings is in the tens of millions, it pales in comparison to Drake or Ed Sheeran.

> until spotify pays from MY subscription the artists I listen to

Your subscription is 15 dollars a month. Yup. That is definitely peanuts when spread over all the stuff you listen to.

Edit: you could be one of the few people who listen to the same band all the time, but that's not representative of people's listening habits.


> there are countless musicians who own the copyright to their own stuff

80-90% of all world music is owned by four companies [1]. This amounts to about ~99% of music people listen to. The "countless musicians" make up a long tail that is barely a blip on the radar.

[1] https://en.wikipedia.org/wiki/Music_industry#Consolidation

You seem to be about a decade out of date, per your wikipedia link. It states ~72% of music (down from ~88% in 2012) is owned by the big three (not four, after EMI was eaten by Sony in late 2011). Best of all, parent may have the right idea buying from Bandcamp. From your link:

> These companies account for more than half of US market share. However, this has fallen somewhat in recent years, as the new digital environment allows smaller labels to compete more effectively

Which to me at least suggests that separating hardware like CDs and LPs from the actual music is helping artists. Perhaps that should be taken with a grain of salt, though: I'm still optimistic and naive enough to think things may improve for artists.

> It states ~72% of music (down from ~88% in 2012) is owned by the big three (not four, after EMI was eaten by Sony in late 2011).

It's hard to keep track of the exact composition of the music scene, and the market share of the Big Four, then Three, then Four, then Three again has fluctuated between 70 and 90 percent from year to year.

> I'm still optimistic and naive enough to think things may improve for artists.

The Big Ones have the industry in a chokehold. Bandcamp is fine, but if you want to listen to something other than indies, you're stuck with the catalogs owned by the Big Ones. For example, https://www.sonymusicpub.com/en/songwriters Anything from the Beatles to AC/DC and from Ennio Morricone to Dolly Parton is owned by Sony.

So you want to start a service that provides both indies and this music? You will bow to the industry's terms. If you have enough money and clout, like Apple, you'll be able to negotiate better terms. Until then ¯\_(ツ)_/¯

I started my career in simpler times. Developers would produce a zip and hand it over to an admin guy. Dev and Infra/Ops were clearly separated. No CI, sometimes not even a build step.

I understand the power and flexibility of the cloud but the critical issue is the dependency on super humans. Consider a FE or mobile app developer. They already greatly struggle just to keep up with development in their field. Next, you add this massive toolset on top of it, ever-changing and non-standardized.

A required skillset overload, if you will. Spotify concluded the same internally. They have an army of developers and realized that you can't expect every single one of them to be such a "superhuman". They internally built an abstraction on top of these services/tools to make them more accessible and easy to use.

And you're glossing over the pain points that drove the industry to coin DevOps - those times when the zip didn't contain everything it needed to run in production properly and the admin guy had to call the dev multiple times in the middle of the night because their app didn't start properly on deployment. Or the install/startup procedure wasn't documented properly. Or it changed and the document didn't get updated. Or there was a new, required environment variable that didn't get mentioned in documentation anywhere. Or a new, required library was on the dev's local workstation and not on the server. etc etc

Never had such issues; you can still do decent coordination in such hand-overs. Honestly, the only issue was the inflexibility of the hardware.

As a former sysadmin who had part of his career in that paradigm, I never again want to wait until 10:30 PM to run manual production deployments handed to me by a developer and hoping their documentation was correct.

Give me CI/CD pipelines deploying containers to a k8s cluster during the day.

> Developers would produce a zip and hand it over to an admin guy

This is literally what the cloud is now for a fraction of the cost of the admin guy.

Current gen serverless containers basically deliver that promise of ease of use, scalability, and low cost.

For me, Google Cloud Run, Azure Container Apps, and AWS App Runner fulfill the promise of the cloud. Literally any dev can start building on these platforms with virtually no specialized knowledge.


And implement them poorly and then wish for an admin guy or SRE when things go sideways at 3AM and production is down.

I'm just not sure how you define what goes into that zip in a way that does not make it substantially harder to solve tough problems than it would be to be familiar with cloud services.

Of course it'll cover you up to a point. If it's a CRUD web app that runs on a single server (or multiple stateless ones) and uses a relational database, you can have a zip file whose contents cover your needs. But if you have anything that justifies Kafka, Cassandra, or distributed storage, the "I'll just throw it over the fence to ops" paradigm isn't likely to fit as well.

I grew up in that era. Where symlinks and sighup to hot reload with zero downtime was an innovation!

Maybe there is a name for this phenomenon, but it feels like when we add so much productivity via layers of abstraction, even more person-effort gets allocated to the higher levels of abstraction. Because 1. that's where people are most productive / happy / compensated / recognized / safe 2. businesses can confidently project return on investment

How many engineers get to work on a part of the stack that has some room for fundamental breakthroughs or new paradigms? The total number has maybe grown in the last 50 years, but not the proportion?

It's hard to justify an engine swap once there's so much investment riding on the old one, so just not a lot of people are researching how to make that new OS.

That is until a Tesla comes around and shows the market what could be better/faster/cheaper.

Probably not the name you're looking for, but I typically talk about this stuff in terms of local and global maxima. Low-risk optimisation efforts typically get trapped on some local maximum over time, while bold efforts get closer to the global one - the minority that doesn't fail, that is. Applies to build vs buy decisions and business in general quite nicely.

From what I've seen, businesses and projects usually become less risk averse the more established they are - they are economically incentivised towards that.

The silver lining for me is that there is always room for disruptors in this scenario.

> businesses and projects usually become less risk averse the more established they are - they are economically incentivised towards that.

You mean the other way around, right? Businesses and projects usually become more risk averse the more established they are.

Yes, sorry :P

I am not a cloud expert but so much of this rings true, esp the following quote:

“Why is Bob in the ops team sending the engineers a bunch of shell commands they need to run to update their dev environment to support the latest Frobnicator version? For the third time this month?”

I just couldn’t stop laughing.

Because devs will not update their Frobnicator for seventeen years, choosing to solve leetcode instead. Eventually the Frobnicator the devs are using will be so riddled with security vulnerabilities that the fact that its source code exists in the package repository is itself a CVE. Because you're a dev, when this happens it's a funny story, but for Bob it's seventeen meetings and having to listen to Franz, the director of development, chew out the entire team as if they're utterly incompetent. This means Bob just disables your access to the rest of the systems unless you have a correct Frobnicator, and doesn't care whether he blocks you or not, because you would be complaining to your director either way.

You might be exaggerating here. Anecdotal evidence and all but even the juniors I work with are mostly diligent in keeping their important tooling up-to-date.

Every repo at every FAANG company is full of dependency specifications pinned to versions several years out of date.

Eh, if it's policy then that's another thing. I was responding to your comment that puts the blame on the programmers.

Everything Google is doing comes to the world 10 years later. Being inside Google is like seeing the future. They had all these technologies long ago, now it's just a case of timing and turning them into products. I've learned that sometimes the world just isn't ready for these advancements. The journey Google went on internally, everyone else has to go on for themselves.

That said, I think we're still super early in cloud because it's still about how we the developers use it and not the end advancements for consumers. The cloud has changed user behaviour through streaming services, SaaS, and cloud-based storage, but I think there's far further to go. Meaning there's some cloud-first, always-on behaviour shift that needs to happen, with the services catering to that model. You'd say SaaS and cloud are already there, but I think it's a lie. We're just replicating what we did locally in a remote env. The cycle of thin clients and fat servers, e.g. Citrix and the rest of it. A major shift is coming very soon.

I talked to a manager at Google during an interview, and he explained to me that almost all tools at Google are home-baked because most, if not all, services are so huge that you can't use open-source solutions for them.

Then I reminded myself about VictoriaMetrics, which in benchmarks outclassed Google Cloud Metrics by an order of magnitude.

People at Google think they are the smartest (and often are), but in some cases they are simply outclassed so hard.

After this discussion I decided I'll never want to work with people with such an attitude.

> After this discussion I decided I'll never want to work with people with such an attitude.

This is a problem I constantly come across when trying to hire people coming from Netflix, Google, Facebook, Amazon and similar companies that have this "I'm the smartest vibe".

At one point in time, it was a good indicator of skill that they were coming from one of those places, that we could trust their technical knowledge as long as they left the place willingly and weren't fired from the place.

But some years ago it started to change, and now we see previous work experience at those places as something negative, as hires from them tend to want to upend everything into whatever their previous company used to do, even though it wouldn't make any sense for their new workplace.

And then "scale" constantly becomes an argument when the product they're building hasn't even found product-market fit yet. They're always jumping to theoretical limits that we're nowhere near hitting but want to solve everything upfront.

It's exhausting both for management and the rest of the team to have to deal with, so best just to avoid that class of developers as a whole.

> in some cases they are simply outclassed so hard.

I think this is true in some cases, but Google has been okay at adopting external vendors where internal tools aren't keeping up. And in some cases, folks actually hate the external replacement and miss the google-built tool. So YMMV.

Companies like FB, Google, etc are large enough and have specific enough needs that sometimes they really do have to build their own thing. Buck had to be built by Facebook because Bazel wasn't open source yet, and well that's one example of something from Google outclassing all the competition for organizations that need anything like it.

In re arrogant people: they exist at all organizations, whether it's warranted or not. I wouldn't let a random peon at Google affect your perception of the organization. You'll see similar behavior at companies of all sizes, so it's not really a telling signal.

Mind linking the VictoriaMetrics benchmark? I found a couple of medium.com articles but not the benchmark itself but it sounds like a good read.

It might be the pricing comparison blog post for managed Prometheus solutions here https://victoriametrics.com/blog/managed-prometheus-pricing/

I haven't seen so many people eagerly waiting for mainframes from the 70s. As the author said: no IPs, no CIDR, no NAT, no counting of RAM, just vast, seemingly infinite resources; you only need a terminal to do everything remotely; there's only one development environment; mainframes for the masses this time... And yet you still have to work within boundaries, because you or whoever you work for don't have money for infinite resources. It's a small contradiction omitted everywhere in these kinds of posts. But hey, let's come full circle into the 70s and welcome our ma...cough.. Cloud 2.0. If this happens there's hope. We will relive the microcomputer revolution after that. :)

100% agree.

https://replit.com is making progress on this. They've moved (almost) all dev tools to the cloud so you can just edit and run in the cloud.

My own project, GridWhale, goes one step further and provides a single, integrated cloud platform for development. Rather than writing separate programs for frontend and backend, you write a single program and the platform remotes the UI as appropriate. Here's a demo: https://gridwhale.medium.com/the-gridwhale-gui-system-55c449...

Do we really want to develop in the cloud? My gut says no. I have no real opinion about that, but it seems worth investigating. Is anybody here working with a proper dev environment in the cloud (no, sorry, a small React-only project doesn't cut it; I'm talking a JVM-driven, Docker-run kind of thing)?

Yeah, as another commenter mentioned, all Google SWEs have developed in the cloud for a long time. It allows you to write, run, and test code performantly on any computer with a browser. Some of OP's wishlist are realities at Google. E.g.

  - When I compile code, I want to fire up 1000 serverless
    container and compile tiny parts of my code in parallel.
  - When I run tests, I want to parallelize all of them. Or 
    define a grid with a 1000 combinations of parameters, or 
Build systems, testing infra, and cloud editing all need to be there for the magic to happen. When your cloud editor supports distributed builds & testing infra, and can be used by anyone, life is really good.

FWIW widely available cloud editing is also getting good with VSCode + LSP, if you don't want to pay Replit. Getting Bazel to do distributed builds instead of local builds is really annoying tho.
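The fan-out idea in the wishlist above (shard a grid of test parameters across many workers) can be sketched with plain Python. This is a local stand-in using threads, not Google's actual remote-execution infrastructure; the `run_test` function and its trivial property are hypothetical placeholders:

```python
import concurrent.futures
import itertools

# Hypothetical stand-in for a remote test shard: each worker evaluates
# one parameter combination. In a real distributed setup, this function
# would be shipped to a serverless container instead of a local thread.
def run_test(params):
    a, b = params
    return (params, a * b == b * a)  # trivial commutativity "test"

# A grid of parameter combinations, as in the wishlist.
grid = list(itertools.product(range(10), range(10)))  # 100 combinations

with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_test, grid))

failures = [p for p, ok in results if not ok]
print(f"{len(results)} tests, {len(failures)} failures")
```

The point is the shape, not the executor: swapping `ThreadPoolExecutor` for a remote backend is exactly the step that needs the build/test infra the comment describes.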

Thank you! So I think for us the only attractive thing is to have cloud runners for our sadly very resource-heavy tests.

Developing on a cloud VM is all fun and games until the connection drops.

Will it some day be possible to have a reliable connection everywhere all the time? I donno.

Without internet connection, most modern development grinds to a halt pretty quickly in any case. Github won't work. All those dependencies your build needs will no longer download. That issue tracker that tells you what you need to do is no longer reachable, and forget about copy pasting from Stackoverflow. Etc.

There are things you can still do offline of course. But it gets inconvenient pretty quickly. There's not a whole lot of offline development happening anymore.

So, in the rare case the connection drops, you have a coffee break, and then you reconnect. If that's a regular thing in your life, change your internet provider or networking equipment.

There are plenty of locations where internet connections constantly break, or the network equipment is just so poor that everyone's connection drops and reconnects once a day or so. Or the latency is so high that most servers drop your connection before they even give you a chance to connect. That's "offline" as well, even though I'm not actually offline; my connection is just really slow.

But the mindset you have explains a lot about why most software doesn't work well in environments like that: people simply believe conditions like that don't exist, so why should the server allow connections that take longer than 3 seconds? "Probably they're spamming us, so let's drop the connection instead."

If you have the right setup, all of the issues you're saying are easily worked around (even the "copy paste from Stack Overflow" issue, although I'm not sure if that's a joke or not).

Most software development simply does not happen in those places for that reason. Basically, it's a supply and demand thing. Software developers require decent connectivity and they'll move to where the connectivity is. Or they'll fix up a decent connection (using Starlink or whatever).

I promise you, software development also happens in unbelievable places like South America, Africa and other low connectivity places.

Edit: just came across this submission https://news.ycombinator.com/item?id=33274186

Seems funny that Stack Overflow would enable offline usage when supposedly no developers have poor connection, they'll simply move to places with amazing networking.

> There's not a whole lot of offline development happening anymore.

luckily that's not the case everywhere.

What may be missing from cloud, is alignment of incentives. If you waste more compute, you increase their profit margins. That would explain things the author questions like general latency increases.

I read the article and don't think the author's arguments about the cloud are even about the cloud. Looks to me like he is more after development tools.

But the cloud can and will be a fundamental part of the new developer tools.

Since reading the blog post's mention of Repl.it, I went and downloaded their new iPhone app and used https://modal.com to spin up 30-40 containers from a script doing sentiment analysis on ~30k movie reviews: https://twitter.com/jonobelotti_IO/status/158291976221638656...

This cost me about 5 cents.

Developer environments and workflows built around the idea that you won't compile and run code on your own device can do wild things at the press of an iPhone app button.

UC Berkeley has called part of this vision "serverless for all computation": https://kappa.cs.berkeley.edu/.

edit: Another user also pointed to Stanford's 'gg': https://github.com/StanfordSNR/gg.
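The map-over-reviews workflow described above can be sketched without the Modal API at all; this toy version uses threads as a stand-in for the 30-40 remote containers, and the lexicon-based `sentiment` scorer is an invented placeholder for a real model:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy lexicon-based scorer; a real Modal workflow would ship a function
# like this to dozens of remote containers instead of local threads.
POSITIVE = {"great", "masterpiece", "loved"}
NEGATIVE = {"boring", "awful", "hated"}

def sentiment(review: str) -> int:
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# ~30k reviews, mirroring the scale mentioned in the comment.
reviews = ["great masterpiece loved it", "boring and awful"] * 15_000

with ThreadPoolExecutor(max_workers=40) as pool:  # ~the 30-40 workers mentioned
    scores = list(pool.map(sentiment, reviews))

print(sum(s > 0 for s in scores), "positive,", sum(s < 0 for s in scores), "negative")
```

The interesting part of the cloud version isn't the map itself but that packaging, dependencies, and scale-out happen behind one decorator instead of an ops handoff.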

> Since reading the blog post's mention of Repl.it, I went and downloaded their new iPhone app and used Modal.com to spin up 30-40 containers from a script doing sentiment analysis on ~30k movie reviews:

iPhone processors run billions of cycles every second and are capable of executing billions of instructions every second. I'm amazed that we've gone from "run Doom on my toaster" to "I can spin up 30-40(!) containers to analyze 30k reviews".

It's laughable. Doom was written for the IBM PC. This PC had a clock speed of 4.77MHz and 64Kb of RAM. The iPhone 12 runs at 3.1GHz, has 4GB of RAM, and has multiple cores. The phone in our pocket is vastly more capable than any piece of hardware we had 30 years ago, and we give accolades to people who can analyze sentiment (which is just running a bunch of matrix math at the end of the day) in under a minute using dozens of insanely powerful machines.

We should be able to analyze 30k sentiments in a minute easily on an iPhone. And we should be able to analyze that data in under a few seconds on a single desktop.

> Doom was written for the IBM PC. This PC had a clock speed of 4.77MHz and 64Kb of RAM.

well, not that PC. Doom required a 386 and 4MB of RAM. But you really wanted a 486 or Pentium to run it smoothly. Catacomb 3D, a real early id Software 3D game, actually would run on an 8088 XT.

Also noteworthy: Doom was created on a NeXT computer, which was also a bit ahead of the PC at the time. So there was a power differential applied to what they were creating.

Ah thank you for the correction! I should have looked this up more thoroughly beforehand :)

I think related to the article, this would fall under the lift and shift concept the author described.

What the author really wants is a transformative experience around developing in a way that is cloud native.

So don’t apt install packages on your alternative iPhone which has a Linux container option built into the OS.

Instead, tap a few buttons to say you are developing a webapp with a node backend, Postgres db, and redis instance, and code anywhere you go without thinking of setting up an environment. Don't even think of how to connect to your db. The tool knows you want your service to connect to it and knows not to let anyone else connect except for exceptional-case debugging. And once you are done with your v0.0.1, you press deploy, wizard your way through, and it's out on the internet without you having to think about it further. (And for bonus points, everything including the platform config is quietly getting committed to a git repo in the background, so you get all the advantages of IAC by default in the event you need it.)

And you don't think about scaling or deployment resources or anything like that. It just happens and you go on about your business (hopefully with some thought given to billing).

And when you want to connect a service to another one, you don't think of concepts of ip blocks or auth or certificates. Service A communicates with the world and service B. The dev experience is that you call service B from service A, and all the auth and TLS and ip addressing and namespacing is handled in the background. The dev experience is you call service B and that's it. And that deploys 1:1 to production as well.

Even the above scenario feels somewhat uncreative like it only imagines a few steps up from what we have instead of a paradigm shift. But basically it’s not about shifting to different platforms like iOS or remote dev machines. It’s about an experience of development that is tied in deeply with the environment you ship to and in a way that completely frees you of thinking of low level concepts which all happen in the background.

But this forces architectures that may be problematic for certain use cases. Cookie cutter solutions will always end up getting bogged down with more and more options. And options for options.

This is exactly what Platform Engineering teams do.

Every business is unique and there isn't a universal generalisable useful solution here which is also simple. Hence why Platform teams exist.

I hear you. I’ve been lucky to work on dev experience in a platform team so I do agree that this is a platform teams job. I wonder though as our stacks mature and things normalize, if there’s a chance to do some thinking and organize systems from first principles to create a platform that suits the vast majority of app development. If a team outgrows it, maybe that’s a call for a platform team.

I’m trying to imagine though what a platform team might look like in a world like that. A lot of dev ops teams today work around cloud configs and terraform for example instead of bash scripts and hardware. Maybe platform teams of the future think of plugins and modules for these imagined systems instead of building on top of a lot of low level stuff.

I see what you're saying, but also think it misses the point of where this is all going. An iPhone 12 is an enormously powerful computer, but it's not at all one that is accessibly programmable to Repl.it devs. Similarly, it's possible to run 30k sentiment analysis examples in a minute on an iPhone, but actually doing so would take a skilled dev weeks to implement (because it's not designed to do that!).

Our computer systems have got ludicrously more powerful, and software development has in a sense become ludicrously more inefficient, but computing is a wonderful culture of abundance and _easy and fast enough_ almost always wins over _difficult, faster, and efficient_.

> When I compile code, I want to fire up 1000 serverless container and compile tiny parts of my code in parallel.

Or maybe we should make better use of GPU and other local compute.

You can already do this anyway…


The HN conversation yesterday about the complexity of the proton, in particular how we poke and prod at it to suss out its qualities and quantities, got me thinking about the subatomic particles of my personal subjective conscious experience.

We can trace, with quite a bit of precision, how a certain photon cocktail results in me perceiving the orange title bar at the top of HN. But where's the orange paint in my brain? What is it made of and how could we inspect it like we inspect the guts of a proton? And furthermore, where's the camera that puts all of those particles of paint onto the same stage? We know where the visual cortex is, yes, but where's the camera that can see the whole stage at one time?

That part, the integration of all of our perception into a single 'stage', is where I've long felt there has to be some kind of quantum or possibly field effect at play. Then I wondered if it might actually be possible for there to be an 'afterlife' of sorts in which the quantum relationship between particles is sustained beyond our life.

Dunno. Trippy to think about though.

For a moment I thought there must be some faulty entanglement in my own brain, but now I think you actually meant to post your comment under a completely different story that's currently on the front page :)


omg i didn't even realize i clicked on that story haha. thank you!

Cloud providers have hobbled growth by overcharging for egress.

> Cloud providers have hobbled growth by overcharging

Just stop there.

What the author (and I!) want is for my computer on my desk to have a seamless integration with the Cosmic AC in Hyperspace.

The problem is that the transition point costs me money and the amount is generally unknown or unpredictable.

My laptop or desktop have a fixed price and then I get to use them infinitely. Until that becomes true for the cloud, it will always be hamstrung.

(There is a secondary argument that progress in computing has been held up by the fact that UPLOAD speeds basically haven't moved in 20 years--but that's for another day).

To follow up on this, computers are really powerful and I want to work when the net is down or otherwise unavailable. Yeah, I can use the cloud for production work but why must I rely on other resources when developing tools or applications. This is just a cash and time sink…

I considered cloud for my ML application that uses terabytes of proprietary data. I took one look at those egress costs and bought my own server for less than the cost of one full egress plus a short period of running time.

Just the thought that they might one day up those charges all of their own accord makes it even more of a non-starter.
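The comparison the commenter describes is easy to sketch as arithmetic. All numbers below are illustrative assumptions (typical list-price egress is on the order of $0.09/GB, but actual vendor pricing varies by tier and region, and the server cost is invented):

```python
# Back-of-envelope: egress cost of moving a dataset out of the cloud
# vs. the one-time cost of owning a server. All figures are assumptions.
EGRESS_PER_GB = 0.09          # assumed list price, USD per GB
DATASET_TB = 5                # terabytes of proprietary data
SERVER_COST = 4000            # assumed one-time cost of an owned server, USD

one_full_egress = DATASET_TB * 1000 * EGRESS_PER_GB
print(f"One full egress: ${one_full_egress:,.0f}")
print(f"Egress runs before the server pays for itself: {SERVER_COST / one_full_egress:.1f}")
```

With these assumed figures a single full egress costs $450, so the owned server breaks even after fewer than ten such transfers, before counting compute time at all.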

Completely agree, and the Oct 2022 GCP decision to start charging "egress" fees for accessing multi-region storage buckets _within_ GCP was not much appreciated either!

Thankfully, Cloudflare's come about with basically free egress. They're just kind of a weird fit because the cloud services they offer are a different shape than usual.

Lock-in mentality: free data in but high charges out. Although Azure <-> Oracle Cloud have gone to zero-rated egress for one use case. It will be interesting to see if that extends to other use cases and/or clouds.


> Most egregiously, why have the feedback loops writing code become longer?

The author answered their own question earlier in the article, showing that they have grown from using 2 AWS proprietary services to 350.

We're seeing the drive to Internal Development Platforms and Platform Engineering teams who are charged with eliminating developer toil and creating golden paths (ironically coined by Spotify engineering)/paved roads to production as a response to the complexity inherent to modern apps and the modern app SDLC.

Not so much an abstraction over public cloud as it is an opinionated consolidation of DevOps, SRE and cloud engineering.

I now have interns who are supposed to be coding, some close to having nice diplomas like "CS engineer" and the like, but the sad truth is that they don't understand what a computer is, what a network is, what a server and a client are, what the internet is, or what a file is.

So the cloud 2.0 is all fine and dandy, and not having to care about IP addresses and NAT and storage space is great. However, don't forget that all abstractions leak, and at some point I'm not sure you can escape going through "From NAND to Tetris" and building a LAN with a couple of Raspberry Pis to get shit that works done.

That was my old hat rant :)

Cloud that we have today makes sense for companies and businesses and is quite mature. But we are in the early days of the _Personal_ Cloud. For personal apps (Instagram, Photoshop, Health apps, Notes etc), a new kind of cloud needs to emerge - which should look a bit like solidproject.org, IPFS, Dropbox and OneDrive.

Why not iCloud and OneDrive as they exist today? The problem is that sharing is very primitive and basic on those systems. There's no way for someone to build an Instagram or Facebook on iCloud.

> I'm excited for a world where a normal software developer doesn't need to know about...

I'm not excited for this. There's a quote I cannot find that I miss greatly, maybe a Whitehead quote or some such, about civilization being measured by that which it doesn't have to think about, that which it takes for granted. It's always struck me as powerful, but giving ourselves the ability to forget & un-learn does not tempt me in development. We are the builders, and this great rich pool of possibilities is rarely improved by merely forgetting & becoming an exclusively higher-level operator. Depth is deeply rewarding in development.

Let's talk about the cloud some, & this pool of capabilities we are so delightfully placed at the helm of, and the cloud's influence on these capabilities.

> Somewhat ironically, software development is one of a vanishingly small subset of knowledge jobs for which the main work tool hasn't moved to the cloud. We still write code locally, thus we're constrained to things that work in the same way both locally and in the cloud. Thus, adapter tools like Docker.

It's super hard for me to imagine a replacement, not because replacements won't be great, but because replacements will have a hard time becoming core knowledge for the software development world.

(It's problematic because great openness lets us roam too freely & unboundedly, but) one of the greatest glories of software development is how unboundedly open & downright democratic it is. We use software until it stops serving us well, and then we either roll up our sleeves, dive in & improve it, or start something else entirely. But we can keep drawing from the same pool, from the many many possibilities & ideas which all interrelate & support each other, to shape new ideas & give life to new forms.

The author's premise, to me, feels like a proposition that we will be so well served by the cloud that we can just leave where we are behind. This is about not needing the pool of knowledge or capabilities we have, the systems we have, because we'll be working somewhere else.

In many ways though, that to me sounds like declaring that the future is different, therefore we need a new primordial soup to start from. 'Using only ATCG for genetics is a paradigm that must be eclipsed!'

But you know what? Someone builds that new place. And in 98% of cases, the same old fundamentals & tools are still at play, underneath the new tools, underneath the new abstraction. Larry Wall (in Perl is the first post-modern language) would say: the truth is our systems will always be post-modern. Modernity's shining image of itself as brilliant novelty sprung from nothing is rarely true. New ideas are more often than not creative re-applications & re-mixes of old materials.

Rather than simply say that a new paradigm is probably not so new, I think there's a deeper challenge, which is: how does a new paradigm ever rise to such a height that it becomes well known? How do we adopt a new paradigm & start teaching it & using it? How does it become the next thing?

We are very well served by the cloud. Its presence as a system of services, as a far-off, maintained-by-other-people, no-longer-our-problem miracle-working wondermachine is all true & very well reported, and it is coming for everything and everyone doing work today.

But I'm not at all afraid, because, in 99.99% of cases, these vast neo-mainframes have no way to pass on their genetics. They are alone and isolated, and developers cannot get into their bowels. There is no "Midnight Computer Wiring Society" of the cloud, and there never can be and there never will be, because the cloud, unlike the PC and the tools we have here, is about control & orchestration & order & rules. There's no permission in the cloud to go develop your own culture, to become a new wave, to change everything for everyone (including the other devs), because you are just one lone neo-mainframe, just a couple of your own ideas that you're calling your paradigm & building your cloud by. In a huge number of cases your ability to interact with, talk with, & share with other people also doing cloud, or to enable other people to try to cloud like you do, is exceedingly small. Clouds are all unique and alone, & they have a much harder time spreading socially.

How does a new cloudy paradigm ever get the ball rolling? What are its central tenets & beliefs that make it a flexible, malleable swiss-army knife where all developers everywhere have even more power & creativity, not just at building applications atop it but at enhancing & growing & exploring the platform as well? I do think eventually we will find new things to make core, to form real communities & new shared bases upon (I think Kubernetes' apiserver+controller paradigm is probably a core construct in the future, for example). There are early, early signs we are civilizing what so many hyper & not-so-hyper-scale cloud technologies have frontiersed. (But oh it's so early, and it's at so much more depth where this happens than the shallow "your workflow will be replaced" message of this article.)

I think there's a lot of good peering into the mid-horizon to do, that the attempts to peer forward are good, & commend this article. It's right to ask:

> Rethinking these abstractions to be native to the new world lets us start over and redefine the abstractions for what we need.

But I highly doubt we will really get release/escape from the past. The article tries to question the past 50 years, and I think perhaps yes, we might diminish it; it might not be at the forefront forever. But I have a hard time imagining enough real value or enough real difference. Even if we switch to Zircon or KataOS or Zephyr or Genode or the next thing, I tend to think most existing abstractions will largely remain, perhaps mutated some, some more prevalent than others, and that the view will not really end up looking that very different. Platforms will continue, yes changing, but also in many ways similar.

The above all speaks to a fairly slow-building, evolutionary view of the future. That said, I think we really dropped the ball on trying to bring software development online; we've been stuck at a shitty, unambitious local maximum. We still write a crap ton of code that's just driving HTTP clients. gRPC is still deeply one process talking to one service, a very convenient way to still manually write individual send(call)/receive(return) calls/streams. Cap'n Proto dared to dream a little more multi-service, to follow somewhat after the E language, but never materialized 3-plus-way communication.

The long hangover after CORBA and SOAP blew out & got eaten by simple-is-better REST has turned into a forgetting, into not trying. The idea of finding new abstractions is interesting, actually (contrary to the first part of my rant), but programming language design has remained so focused within the language that we don't have the creativity to expose & play earnestly with the abstractions we have. Language design has focused near-exclusively on building better processes, without integrative cross-system rework, without a much higher scope of change desired. We're just shuffling the cards again and again with the same base system; it's all different syntax spins, different ergonomics, maybe a new safety guarantee (and boy are people excited about that!), but all the same underlying patterns, variously cloaked. A dull post-modernism. If there is change, real change, I think it comes from going back & re-trying an E-lang or an Erlang, or more generally working aggressively towards multi-system. And much of that could just be taking what we do inside processes & doing it on the web/net. I ask myself regularly, and alas, I haven't gotten around to fucking around and finding out: what would an ECMAScript/JavaScript Promise look like, but on the web? This kind of primitive material makes total sense to developers, but we don't really express these things in systems ways; there's the inner process world & the outer systems world, and it's not abstractions we need per se: we just need to tear down the veil between these two realms. The process already has the materials to make a great cloud, we just haven't opened the box yet.
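For what it's worth, here's a toy sketch of that "Promise, but on the web" idea, in the spirit of E-style promise pipelining. Everything here (remote, userService) is invented for illustration, and the "network" is faked with local promises:

```javascript
// Hypothetical sketch: an E-style "remote promise" where method calls
// pipeline onto a value that may live on another machine, without
// awaiting each intermediate round trip.

function remote(target) {
  // Wrap an eventual value; each pipelined call returns another remote.
  const p = Promise.resolve(target);
  return new Proxy(function () {}, {
    get(_, prop) {
      if (prop === 'then') return p.then.bind(p); // make it awaitable
      // Pipelined call: schedule prop(...) on the eventual value.
      return (...args) => remote(p.then(obj => obj[prop](...args)));
    },
  });
}

// A toy "service" that would normally live on a far-off node.
const userService = {
  getUser: (id) => ({ name: () => 'ada', id: () => id }),
};

// Calls chain on the not-yet-resolved result, like capability pipelining.
const eventualName = remote(userService).getUser(42).name();
eventualName.then(n => console.log(n)); // resolves to 'ada'
```

This is exactly the "inner process world" material (a Promise) pushed one step toward the "outer systems world"; a real version would serialize the pipelined calls over the wire instead of chaining local promises.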

> There's a quote I cannot find that I miss greatly ...

This is probably it: "Civilization advances by extending the number of important operations which we can perform without thinking of them." - Alfred North Whitehead

(And I agree that depth is rewarding and civilization allowing us to forget things isn't always tempting!)

A similar quote from A.N. Whitehead: "By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race."


> Cap'n Proto dared to dream a little more multi-service, to follow somewhat after the E language, but never materialized 3-plus-way communication.

I still intend to implement this! It just has yet to become the most pressing issue for the projects I'm focused on, since the development model is already supported using proxying which makes 3-party handoff merely an optimization.
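A plain-JavaScript illustration of why proxying makes three-party handoff "merely an optimization" (this is not Cap'n Proto's actual API; machineC, viaB, and hops are invented for the sketch):

```javascript
// Machine B hands A a capability that really lives on machine C.
// Until true 3-party handoff exists, B just forwards every call.

const hops = []; // record which machines each call passes through

const machineC = {
  add: (x, y) => { hops.push('C'); return x + y; },
};

// B proxies the capability it got from C instead of introducing A to C.
const viaB = {
  add: (x, y) => { hops.push('B'); return machineC.add(x, y); },
};

// A's view: same interface either way; only the hop count differs.
const result = viaB.add(2, 3); // travels A -> B -> C, returns 5
```

With real handoff, B would introduce A directly to C and the hop through B would disappear; the development model, as the comment says, is identical either way.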

It's not at all like I don't see you being extremely, extremely visible elsewhere doing tons of stuff. But I am excited to hear you're still excited for 3-party.

I'm not sure what the design space of a language that natively supports Cap'n Proto would be, & whether that'd be significantly different from really good libraries or macros for a language. Maybe existing languages are flexible enough. But my gut is that we can kind of start having more ambient systems, ambient data, ambient code, more freely, if we really built the language to work beyond the process scope from the start, using Cap'n Proto as a base. I readily admit, though, there are probably some very fine ways to make this remote coding a very seamless development experience with what we've got, if we want, perhaps with very few rough spots.

Anyhow... thanks for the ongoing great work kv.

I think you're right that the same old fundamentals & tools will still dominate, but I think this 'new world' can arrive by just making those old fundamentals more powerful and useful.

Erik's example of Spotify has within it the story of an ever more powerful and reliable internet network making first music streaming possible, then video streaming.

The (mobile and wired) internet network is getting so fast these days that maybe even a cloud-first, cloud-only software engineering process is ready to be enjoyed.

Spotify is an interesting example to bring up, & somewhat a sore one for me, because their Spotify Apps API was in effect an attempt to make Spotify a music cloud. It was, in my view, roaringly successful: a great system that thousands of people built intensely good and new music experiences on, embedded within the Spotify desktop app. It turned the desktop client into a cloud-run system for rapid music-app deployment, all inside the Spotify walled garden.

Then Spotify killed it. A decision which continues to baffle me, and which, like Signal's recent decision to abandon SMS, speaks mainly of a desperation to completely own the experience. https://developer.spotify.com/community/news/2014/03/24/clos...

So Spotify is now an example of a dead end, a former cloud, a mere product to me. They've taken great works from the past & about, and they've built one product, and they're going to spend decades moving around where the buttons are and trying to tweak how and when they can ring the cash register & deposit some money into Spotify, Inc. They're from the cloud (sort of), but they've wound back 90% of their ambition to be or contribute to clouds.

Spotify (& many others) can build what they build because of a confluence of factors. Simply having gobs more hard drive, CPU, and network throughput, and (most important of all) new online consumers, are the core hard & fast requirements that enabled Spotify to become Spotify. But that's only semi-related to what I think is really at the heart of this conversation: the cloud. Yes, the team was good about scaling out & aggressively deploying new technologies, devops & core tech, and that helped them go. But it's confusing & unhelpful as a case study. Fact is: switches were just getting better, CPUs were just getting better, and Spotify could almost certainly have happened in a fairly legacy way, with fairly legacy ideas, & there'd been streaming before, albeit executing & attracting/keeping the necessary talent would have had worse odds on legacy ideas.

Yes: from a consumer perspective, Spotify leverages the internet to on-demand deliver content. It's an example of most of the computing happening in a far off neo-mainframe. That's absolutely something we associate with the cloud. I absolutely see that as core to Erik's story here.

But the characteristic seems somewhat uninteresting to me in isolation. I do think more scale out abstractions & ideas have a huge place, a huge future, but also, they keep running into the "then all developers are just consumers of shit they really have no idea of or power over" problem that means there's no real social environment surrounding these advances.

Something that seemed real to me from these threads: the comment griping about never managing firewall rules by hand rings true. And we are developing control planes (aka controllers, aka operators), are building more intent-based autonomic systems, reasonably well, that do our lifting for us. There's a host of good new "edge" tech (not edgy edge-edge, just, like, lots-of-data-centers edge) that's also like: yeah, cloud it up more. Think less about computers/resources/clusters, just push code. These are all in the heart of cloud, of making available various grid-computing/utility-computing notions that have circled around for a long time, of making us think less about specifics. And I think that's indeed true & powerful. But it keeps running into the asocial problem above: there's no social environment, and most of the secret sauce is retained, locked inside the neo-mainframe.
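The controller/operator pattern above can be sketched as a tiny reconcile loop: declare intent ("A may talk to B") and let a loop converge actual state toward it. The rule strings and names here are illustrative, not any real cloud API:

```javascript
// Intent-based sketch: desired state is declared; a reconciler makes
// actual state match it, so nobody hand-edits firewall rules.

const desired = new Set(['A->B']);          // intent: allowed connections
const actual = new Set(['A->B', 'A->D']);   // current firewall state

function reconcile(desired, actual) {
  // Open anything intended but missing; close anything no longer intended.
  for (const rule of desired) if (!actual.has(rule)) actual.add(rule);
  for (const rule of [...actual]) if (!desired.has(rule)) actual.delete(rule);
  return actual;
}

reconcile(desired, actual); // actual now exactly matches the declared intent
```

Kubernetes controllers run essentially this loop continuously against the apiserver; the point of the comment stands, though: the loop itself is simple, but the secret sauce around it usually stays locked inside the provider.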

CloudFlare and Deno seem to be two of the only ones who realize the Tim O'Reilly adage that I hear nowhere near enough this decade: "Create more value than you capture." Or else your dream is going to some day die as your dream, with yes, maybe good marks, but no real lasting success. If cloud computing is to be a future of real note, it has to be a shared one. That's been an exceedingly brutal gauntlet that few technological/cloudogical would-be's have proven their advance through.

Just use a PaaS. Am I missing something?

Yeah it is not going to be FOSS and there will be lock in.

Great article.

> I never ever again want to think about IP rules. I want to tell the cloud to connect service A and B!

DigitalOcean's firewall is a bit like this.

We aren't early. Others like Gumby here have made great posts about how the cloud has existed for decades. I don't like his silly, broad-ranging manifestations, but okay. I've seen a lot of old shit; let's stroke off about it, but it continues.

Google already has this internally.
