It's crazy and destructive that we are still using the unix paradigm in the cloud.
In the 70s we had transparent network filesystems, and by the 80s I had a more advanced cloud-native environment at PARC than is available today.* The Lispms were not quite as cloud native as that, but you still had the impression that you just sat down at a terminal and had an immediate window into an underlying "cloud" reality, with its degree of "nativeness" depending on the horsepower of the machine you were using. This is quite different from, say, a Chromebook, which is more like a remote terminal to a mainframe.
I was shocked when I encountered a Sun workstation: what an enormous step backwards. The damned thing even ran sendmail. Utterly the wrong paradigm, in many ways much worse than mainframe computing. Really we haven't traveled that far since those days. Cloud computing is really still "somebody else's computer."
There's no "OS" (in the philosophical sense) for treating remote resources truly abstractly, much less a hybrid local/remote. Applications have gone backwards to being PC-like silos. I feel like none of the decades of research in these areas is reflected in the commercial clouds, even though the people working there are smart and probably know that work well.
* Don't get me wrong: these environments only ran on what are small, slow machines by today's standards and mostly only ran on the LAN.
That is far, far more than transferring a byte of data. It's not the future, though, of course: this was well established in the 80s. It just so happens that it's a decent model for managing remote machines that has outlasted over-engineered distributed designs like Plan 9, VMS, MOSIX, etc. It's much more meaningful and useful than "something simple" "running little functions floating in the void" which just sounds like some vapid marketing pitch.
The great thing about Linux as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it.
> The great thing about Linux as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it.
The great thing about the x86 instruction set as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it. Only a handful of programmers develop at a layer below the x86 instruction set, or even at that level for that matter.
My point is that the instruction set of these large CPUs (not AVRs etc) is itself just an abstraction to an inscrutable micromachine. That abstraction lets you develop a complex program without knowing the details.
Unix was specifically designed for small, resource-starved machines and did not have the powerful abstractions of mainframe OSes like Multics or OS/360. It's OK, but as modern CPUs and IO systems have grown and embraced the mainframe paradigms that had been omitted from the minicomputers (e.g. memory management, channel controllers, DMA, networking, etc.), Unix and Linux have bolted on support that doesn't always fit their own fundamental assumptions.
That's fine, it's how evolution works, but "cloud computing" is a different paradigm, and 99.99999% of developers should not have to be thinking at a unix level any more than they think of the micromachine (itself a program running on a lower level instruction set) that is interpreting the compiler's output at runtime.
As I said in the other comment, maybe people thought that "cloud computing" was a different paradigm back in the 70s, but it turns out that no, it's all the same distributed stuff.
If you have two processes on the same machine, locking a shared data structure takes microseconds. You can easily update hundreds of shared maps and still provide great performance to the user.
If you have datacenters in NY and Frankfurt, the ping is 90 ms, and the fundamental speed-of-light limit says it will never be below 40 ms.
So "lock a shared data structure" is completely out of the question, you need a different consistency model, remote-aware algorithms, local caches, and so on.
There are people who are continuously trying to replace Unix with completely new paradigms, like Unison [0], but it is not really catching on, and I don't think it ever will. Physics is tough.
It's quite an appropriate analogy. We used to write a lot of code in assembly code. I used to write microcode and even modified a CPU. But it's been decades since I last wrote microcode (much less modified an already installed CPU!) and now the instruction sets of MPUs like x86 and ARM are mostly just abstractions over a micromachine that few people think about.
And an OS is the same: it used to be quite common to write for the bare iron, to which an OS is by definition an abstractional interface. I still do that, but it's an arcane skill frankly not in huge demand. Which is probably a good thing.
Nowadays most code is written at nosebleed levels of abstraction, which frankly is a good thing, even if I don't like doing it myself. But still, as developers do it, they are often dragged back down the stack to a level that these days few understand.
I think the person/company that cracks this will be the dominant infrastructure play of the decade.
It's not an appropriate analogy because most people don't program in x86, but most people know (or can easily look up, when the need arises) basic Linux administration commands.
That is far, far more than transferring a byte of data.
Educate me.
It's much more meaningful and useful than "something simple" "running little functions floating in the void" which just sounds like some vapid marketing pitch.
That was a quick response to what I thought would be better than administrating a UNIX system on an Amazon machine. Don't be an ass. I've no experience with the real distributed computing systems created before UNIX.
The great thing about Linux as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it.
No, that's the vapid marketing pitch. The Linux kernel randomly kills processes when it starts exhausting memory. It's garbage.
Hey, Brainfuck also facilitates more specialized layers being implemented on top of it. Now, what's stupid about doing that, though?
Not sure if I'm being trolled... It finds a remote machine by name, and routes a connection to it. It authenticates and establishes a secure connection with the machine. It sends a command to the remote machine, the remote machine executes it, and the result is returned. It then puts the result into a form that can be used programmatically by the local shell.
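For what it's worth, from a script's point of view all of those steps hide behind a single call; a minimal sketch (hostname and command are made up):

```python
import subprocess

# One call covers everything described above: name resolution, routing,
# authentication, an encrypted channel, remote execution, and the result
# coming back in a form the local side can use programmatically.
result = subprocess.run(
    ["ssh", "user@build-box.example.com", "uname -a"],  # hypothetical host/command
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```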
> That was a quick response to what I thought would be better than administrating a UNIX system on an Amazon machine.
It was content-free.
> Don't be an ass.
What's good for the goose...
> I've no experience with the real distributed computing systems created before UNIX.
And already the expert. Impressive.
> No, that's the vapid marketing pitch.
No, that's the reality. That's why Amazon, Google, Azure, and everybody else offer it in their clouds and use it on their internal infrastructure.
> The Linux kernel randomly kills processes when it starts exhausting memory. It's garbage.
I'll take that over "running little functions floating in the void", it actually exists and works.
I'm being trolled by someone who finds a DNS lookup and an encrypted TCP connection to send a textual command to another machine somehow impressive, rather than entirely basic and, again, unimpressive.
It then puts the result into a form that can be used programmatically by the local shell.
That does sound better than returns one octet with two well-defined values, sure.
And already the expert. Impressive.
I know UNIX is shit with a legion of cultists.
Hey, if we live in the best of all possible worlds, explain the market dominance of Windows. Did MicroSoft give Windows away for nothing, to poor unsuspecting university students who decided to hack on it instead of learn what a real computer is, until none remained?
It's even easier to shit on the status quo without having anything better.
> At least read my website before calling me unhinged.
No. I've seen countless tech prophets peddling snake oil over the decades, and it's always the same. Everything is dumb, poorly versed in the state of the art and history, but this magical ill-thought-out thing will somehow solve everything. It's very boring and predictable.
> No, that's the vapid marketing pitch. The Linux kernel randomly kills processes when it starts exhausting memory. It's garbage.
You sound like someone who wants some sort of magical fantasy abstraction level where errors never happen and you can just write code without ever caring about said errors or really anything that happens at a lower level or outside of your code. Sorry, but that doesn't exist and never will. Code runs on computers, it doesn't run in a "void".
Sure, you push the failure into the processes, and most OSes I've used do this. Thus (to use unix terminology) sbrk and fork() fail and it's the program's responsibility to handle that, gracefully or not as it wishes. You also degrade more slowly via paging.
You shouldn't get to the point where a perfectly innocent process is killed to reclaim memory. A process can die, or even (in some cases) recover cleanly from running out of memory -- that should be the process's choice.
In Linux you can do that -- disable overcommit (although I'm not sure if that's 100% foolproof). But that's often not preferred because that way does lead to "perfectly innocent" processes being killed: whoever allocates the next page when memory has run out loses. And how do they get to decide? What if a failure path requires paging in one more page of code from disk? Or causes a store to a COW page?
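For what the "process gets to choose" model looks like in practice, here's a minimal sketch; note that with overcommit enabled (the Linux default) the allocation may appear to succeed and the OOM killer can still step in later when pages are touched, which is exactly the point being argued above:

```python
def load_working_set(n_items: int) -> list[int]:
    try:
        # With overcommit disabled, a failed allocation surfaces here...
        return list(range(n_items))
    except MemoryError:
        # ...and the process chooses how to degrade: shed load, flush
        # caches, or exit cleanly with a useful message.
        print("out of memory, falling back to a smaller working set")
        return list(range(n_items // 10))

data = load_working_set(10_000_000)
```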
A boolean, a still (relatively) terse line of code. And when you end up wanting to do something even a bit more complicated it'll be a smooth gradient of difficulty.
As with every discussion around shells, though, the amazing terseness of scripts is great, but every form of programmatic behavior beyond "do this then that" and "do string interpolation here" is a massive pain in the butt. The biggest advantage is that stuff like pipes are a very good DSL.
What do you think happens when you invoke a shell? It's not some magical primitive, it also does string parsing and all the other stuff that a scripting language requires. SSH also has to parse the line!
I did say Python-esque (I do get that Python in particular has a lot of stuff in its prelude). Really if you want to be super pedantic then the most "banana"-y isn't a line in your shell but some C library calls on both sides of the system.
You're omitting the 500 lines of YAML needed to deploy that, the tens of megabytes for the Python base image (of which you'll waste more than 99.99%), and the tens of megabytes in bandwidth to the Docker registry and to download it on the Kubernetes worker... And the image weight should be accounted for at least four times: worker node + 3 in the Docker registry (HA and stuff).
it's incredibly inefficient if you just freaking want to know if a file exists.
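For scale, the check itself is a one-liner no matter where it runs; everything else in that chain is deployment overhead. A minimal sketch (the path is made up):

```python
import os

# The entire job: one stat() call against the filesystem.
path = "/data/input.csv"  # hypothetical path
print(f"{path} exists" if os.path.exists(path) else f"{path} is missing")
```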
Am I understanding correctly that the concern is syntax based? If so, why not build a wrapper over the existing shell language? If not, can you please clarify?
Well, my first thing is I would want a de facto norm adopted in the same way that bash is basically available everywhere. And of course we would want it to be something that doesn't have the busted dependency story of Python. And I would like third-party programs to have a rich interface provided through this new shell.
For example, you can interact with docker in Python with a lib, but it would be amazing if the docker binary just exposed an RPC interface and API for script languages so you wouldn't have to parse out things from output or otherwise try to figure things out when writing shell scripts. This is, of course, the PowerShell thing, but PS's aesthetics displease me and many other people.
Computers are, of course, Turing complete. You can do whatever you want, really. But for example I end up using zsh instead of xonsh because I want to use what other people write as well.
Gee it's like nobody realizes that k8s is just unix machines too, only difference is instead of a program in the whole OS listening on a port, it's a container... And _gasp_ you can deploy containers to listen to any port you want and have those containers do anything you want... Wow.
I don't use k8s or whatever garbage gets thrown around nowadays.
I well remember my disgust when I learned AWS was just using UNIX virtual machines or whatever, rather than something simple, such as allowing people to run little functions floating in the void. I know nothing about AWS, but I've not been mistaken, right?
Well, AWS has been quite clever, using Xen originally and now KVM VMs, but everything else is based on that, yeah. It's gotten really complex over the years. They now use a container execution environment called Firecracker or something, which is open-sourced and might be totally separate from their virtualized environment - or at least on top of it - where they do let you just run little functions. The cloud isn't bad; I just mean you do have to transfer octets somehow from exec to exec. There are more and less efficient ways of doing it, and also more and less secure ways - and more and less abstract ways. Not all bad.
Think the point is that as I am writing software, I don’t care about any of that. I expected an opinionated environment when I first heard the word cloud 2 or so decades ago; I wanted to put my code in cvs (git now) and that’s it; I don’t want to think of anything outside that. Especially security or scaling.
I ran 1000s of physical Linux servers over the decades, using chroots with db clusters back when it was not yet fashionable to have containers, because I don't want my laptop to be different from my servers and I don't want to think or worry about scaling.
We have aws Lambda now, but it is too expensive and, for me, not trivial enough. It actually requires too much thinking (architecting) to make something complex with it and a lot of that is about money; you can stumble into bills that require at least a kidney to settle.
So I still run my own servers, these days with openfaas. With many scripts, it is almost painless and costs almost nothing (we provide for millions of users for less than $200 a month in total). But it still is not the dream: I still cannot just forget about everything hosting-related, and abstractions still leak enough, but at least I don't spend more than a few hours a year on it and I don't go bankrupt like can happen with Lambda.
We are building our own language and runtime (lisp like with types) to fix this: after 40 years of programming, I am pretty much fed up with having computing power and loving programming but having to deal with all this pointless (for programmers!) stuff around it: we went backwards, at least somewhat.
I like it for many reasons, especially the helpful community and founder, and everything just works. We use it with Istio at the moment. We switched from Cockroach (did not like the license and we had a few perf issues) to Yugabyte and Scylla recently for microservice and monolith scaling to use with openfaas, and it really scales to anything we need. Of course different situations have different needs, but this works very well for us.
Well, when I first saw an AWS prompt in 2006 or so, I was relieved that EC2 was a plain Debian-like machine.
Also, it saved me a phone call with Dell and a Sunday evening configuring some hardware.
No new bullshit paradigm to learn; I could shove my code in there with tools I knew and call it a day.
EC2 was a stepping stone and it worked well to get some stuff in the cloud.
Remember the early days when only non-critical, non-PII stuff was in the cloud? And how some companies "just can't use it because of X" (or worse, the dreaded Y)?
In the end, almost everything runs on some kind of OS. All those little functions in the void also have to run in an OS, but this is abstracted away from the user.
Congratulations on understanding what an abstraction is. Do I need to worry about the transistors or individual atoms in a computer? Why should UNIX be the right level of abstraction forever?
I didn't write to explain what an abstraction is, and I will assume you honestly meant to congratulate me instead of trying to be sarcastic. Thank you for that.
I have had two types of developers in my teams: one type that understands the underlying technology of the abstractions that they are using, and one type that only understands the layer/abstraction they are interacting with and is blissfully clueless about anything underneath.
Neither group worries about transistors or proton decay in their hardware, and both can make 'things'. However, one group is much more capable of effectively building tools and understanding issues when something goes wrong in the chain, whereas the other group is regularly hopelessly lost because they have no clue of what is happening. I'll let you guess which is which.
This is not about worrying or even being bothered with details, but about understanding what you're working with.
Such things, however, are just so often so much better.
I was working with an intern lately; we both had remote machines in the same subnet (I'm omitting some details, of course), and he had to pass me a file.
He was about to download it to his laptop and upload it back to the cloud (Slack message), after which I would have had to download it to my laptop and send it back to the cloud.
I was able to show this young engineer how to use cat and nc (cat somefile.tgz | nc <ip-address> 8000) to send it, while I would use nc (nc -l 8000 > somefile.tgz) to receive it, in a moment, without third parties involved and without having to make this file go across the globe multiple times.
The thing is: if you know what you actually need to do and have UNIX tools available, you can be insanely efficient.
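For anyone curious what that nc pair boils down to underneath, here's a rough Python equivalent of the receiving side (plain TCP on the same port and filename as the example above; no encryption, no retries):

```python
import socket

# Roughly what `nc -l 8000 > somefile.tgz` does: accept one TCP connection
# and stream whatever arrives into a file.
with socket.create_server(("", 8000)) as server:
    conn, addr = server.accept()
    with conn, open("somefile.tgz", "wb") as out:
        while chunk := conn.recv(65536):
            out.write(chunk)
```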
"Treating remote resources truly abstractly" doesn't work in practice. Too many points of failure in our systems, and you really, really don't want to paper them over with abstraction if you want to build a fault-tolerant system.
That has been true for as long as I have been programming. But:
Similar statements have been made about high-level programming languages. Nowadays most devs don’t understand how the CPU works, but write on top of a tower of abstractions and nobody bats an eye. Many of those abstractions are quite complex!
I can imagine that the same could apply to certain kinds of network activities. Look at how ppl use http as a magical secure and robust inter process communication channel with no understanding of how it works at all.
Lambda is a half a baby step in this direction.
Another problem is tollbooths. The phone system uses a lot of bandwidth simply to charge the customer money. My phone company charges me overseas rates if I make a phone call outside the country, even if both endpoints are using WiFi, not the cellular network! I’m afraid of the same with fine-grained and abstract distributed computing, but perhaps the magical hand wave abstractions I posit above can help.
This tollbooth nightmare btw is the dream of the web3 bros.
We cannot afford seamless distributed systems, and I don't think we ever will.
I use Python because I don't care if adding two numbers takes a microsecond instead of a nanosecond. But if a network call suddenly takes 1 sec instead of 10 ms? Well, that's a huge problem; let's add a memory cache and a rack-level cache and a parallel fetch and a whole bunch of monitoring.
The local compute is growing much faster than internet, and even faster than local network. I sure hope we get better abstractions, but caring about remote vs local call is not going away.
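A toy sketch of the kind of machinery that gets bolted on once a remote call sits in the hot path (the 90 ms delay and the function names are made up for illustration):

```python
import time
from functools import lru_cache

def fetch_from_service(key: str) -> str:
    # Stand-in for a remote call that costs tens of milliseconds,
    # or a full second on a bad day.
    time.sleep(0.090)
    return f"value-for-{key}"

@lru_cache(maxsize=4096)
def cached_fetch(key: str) -> str:
    # A hit costs microseconds in-process; a miss pays the network round trip.
    return fetch_from_service(key)

cached_fetch("user:42")  # slow: pays the remote latency
cached_fetch("user:42")  # fast: served from local memory
```

And that's before the rack-level cache, the parallel fetch, and the monitoring.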
You could design an OS with abstractions built around latency instead of physical machines. It would still allow you to find and use resources according to their constraints of use, but wouldn't force you to keep track of which exact machine they are located on.
I am not sure what you need a new OS for, or what you could get from an OS that you cannot get from today's computing.
If you want the user to know they are talking to a remote machine, but don't want them to care about which exact machine it is, we have a ton of great solutions already: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc...
If you want remote calls to be indistinguishable from local calls at the source-code function level, this is also solved! Many RPC frameworks and remote SDKs provide a class-based interface which acts the same as a local class.
The only place where an OS can help is if any OS function can be magically located on another machine. But even then... we have remote filesystems (NFS), remote terminal and execution (ssh), remote graphics (X11), remote audio (ALSA/Pulse)... What is left for the new OS? Process management? Is it worth it?
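A minimal sketch of the class-based RPC idea mentioned just above; the endpoint and method names are invented, and real frameworks add serialization, retries, and timeouts on top:

```python
import json
import urllib.request

class RemoteCounter:
    """Proxy: method calls turn into HTTP requests to a remote service."""

    def __init__(self, base_url: str):
        self.base_url = base_url  # hypothetical service endpoint

    def _call(self, method: str, **params) -> dict:
        req = urllib.request.Request(
            f"{self.base_url}/{method}",
            data=json.dumps(params).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def increment(self, amount: int = 1) -> int:
        # To the caller this reads exactly like a local method call.
        return self._call("increment", amount=amount)["value"]

counter = RemoteCounter("http://counter.internal:8080")  # hypothetical address
```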
> If you want the user to know they are talking to a remote machine, but don't want them to care about which exact machine it is, we have a ton of great solutions already: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc...
Those resources are complex to program against. An OS should offer a simplified abstraction layer to make them as transparent as possible. And yes, process management is worth having a unified programming model that doesn't force you to keep track of where each process instance is being located - that's essential for massively parallel computing.
Of course this could be done with platforms for massively parallel computing. The point of building an OS would be to put these platforms as close to the metal as possible to improve their efficiency.
The paradigm for abstracting away a thousand CPUs is AWS Lambda/GCP or Azure functions/K8s' implementation of serverless. It's not a total drop in replacement because a plain lift-and-shift can't change your paradigm, but cloud functions are very much a Cloud 2.0 (or at least 1.5) paradigm.
S3, yes a network accessible FS.
Unix - only is a telling word - it is the /defining/ paradigm for that. Is there a better one yet?
1000 CPUs? I mean, uh Hadoop, spark, etc etc. What?
> Unix - only is a telling word - it is the /defining/ paradigm for that.
Don’t be ridiculous. We had networked distributed file systems over both the ARPANET and LANs before Unix even had networking. I even mentioned this in my root comment.
Unix did make it work much better a bit later, organizations used to run NFS and single sign on with Kerberos to workstations, automated from scratch reprovisioning of workstations (so you could just reinstall from tftp server with org specific SW customizations included if previous user had messed the box up), smooth remote access to all machines including gui apps, etc.
It just went away due to Microsoft crowding it out.
Mainframes are very expensive. You can buy mainframe with very fast CPU and RAM interconnect and scale it by buying more hardware. Or you can spend 100x less and buy a number of server blades. Interconnect will be very slow, so you can't just run some kind of abstracted OS, you need to run separate OS on every server blade, you need to design your software with that slow interconnect in mind. But in the end it's still 100x cheaper and it's worth it.
Also mainframes have growth limit. You can buy very powerful ones, but you can't buy one that's as powerful as entire datacenter of server blades.
That's why I both hope Oxide Computers succeed and worry they may not.
They are effectively building a mini computer. The smallest unit you can buy from them is an entire rack. Modified rackmount hardware with better software to make it more cohesive.
I really hope they go to half-racks, but I've no idea how you'd stack them.
I'm sort of reminded of how the US government is the worst (except for all the rest), when having an absolute ruler should be so MUCH more efficient. Problems would be fixed by fiat.
Or maybe, why does lisp persist with its horrible user-unfriendly syntax?
:)
I guess we will just have to invent it. (and you should do your part by reminding people with examples of old systems that elegantly solved the papercuts of today)
That’s a hand wavey way to make a claim that can’t be backed up.
What you had then was in no measurable way more advanced architecturally or conceptually. Name one facet in which it was more advanced, could do more, or was faster or better.
You can’t because it couldn’t. No part of your setup was cloud native. Nothing was abstracted away, a core tenet of the cloud.
Abstractions usually (always?) have a cost because physics.
> The damned thing even ran sendmail
and?
> Cloud computing is really still "somebody else's computer."
That's the definition of 'the cloud'. Unless you run it locally in which case it's your computer. What's your point.
> There's no "OS" (in the philosophical sense) for treating remote resources truly abstractly
It's unclear what you're asking for. Treating stuff truly abstractly is going to get you appalling and horribly variable scalability. If you're aware of that, why don't you tell us what you want to see instead.
Edit: ok, this is from gumby, now I recognise the name. This guy actually knows what he's talking about, so please tell us what things you would like to see implemented.
>> Cloud computing is really still "somebody else's computer."
> That's the definition of 'the cloud'. Unless you run it locally in which case it's your computer.
Forget the stupid framing of idiotic marketers in the early 00s and go back to the original “cloud” definition (that engineers were still using in those ‘00s but was distorted for a buck).
The term was introduced (by Vint Cerf, perhaps) in the original Internet protocol papers, literally with a picture of a cloud with devices at the edge. It was one of the revolutionary paradigms of IP: you push a packet into the cloud (network) but don’t need to / can’t look into it, and the network worries about how to route the packet — on a per-packet basis! You don’t say “the cloud is other peoples’ routers”.
Today’s approach to remote computing requires developers to know too much about the remote environment. It’s like the bad old days of having to know the route to connect to another host.
You don’t pay per packet even though a huge amount of computation is done on devices between you and the machine you’re connecting to in order to transmit each one.
When you emerge from the jungle, you may notice that not only UNIX conquered the world but even ""worse"" paradigms of Windows and iOS have proliferated. You have to ask why the situation that is so much worse is so popular: is it really everyone else who is wrong?
Appeal to the currently incumbent solution is not convincing. Very often the majority of people simply choose the lesser evil -- not what's of the best quality or with the biggest productivity.
Or need I remind you that hangings at sunrise and sunset were commonplace and people even brought their kids to them?
I'm sure back then people defended it as well, and it's likely that if you heard their arguments you'd facepalm.
I did. Its use of intentionally obscure language that makes APL seem readable and consistent in comparison, just because "only smart folks should be able to code in this", is something I simply can't accept. And I love obscure languages!
Good replies by others here. "crazy and destructive" that you have no idea about how computers work today, or how computers are still computers. Your ignorance about things like Sun workstations as it relates to literally everything today, I mean you have no idea about modern computing lol
A cynical view would be the billing is designed to trip you up.
As an example, if you use Azure with a Visual Studio subscription which includes credit, once the credit is used all of your services are suspended and no further charges are incurred.
As a pay-as-you-go customer this option does not exist. You can set a billing "alert" but that doesn't stop the charges.
It's kind of weird that it's not just a built-in toggle in the system, but GCP has the primitives to let you suspend the system when a threshold is met.
Shame it doesn't actually protect you, this startup [1] had a spending limit and they racked up charges so fast even Google's own billing system couldn't keep up.
In typical Google fashion it's luck of the draw if you get saved or lose your home [2]
> Note: There is a delay of up to a few days between incurring costs and receiving budget notifications. Due to usage latency from the time that a resource is used to the time that the activity is billed, you might incur additional costs for usage that hasn't arrived at the time that all services are stopped.
So still pretty useless. Apparently they do have real-time billing updates via PubSub, but then it's up to you to code what to do when you spend too much.
If you're in an exploring phase for [GCP PRODUCT X] you're not going to preemptively write a safeguard to turn off [GCP PRODUCT X] in case of too high billing updates.
It's better than nothing, but kind of a slap in the face that they do have all the tools to really allow people to have a hard spending limit, but they don't.
I've heard the argument "but it's too dangerous, people might lose non-backed-up data", but that also happens if you set a Billing Limit, just that the billing limit will kill all your projects AND still let you rack up days of over-the-budget billing.
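For reference, the Pub/Sub route mentioned above looks roughly like this: a small function subscribed to the budget topic decides what to do when spend crosses the threshold. This is a sketch, not the official sample; the message field names follow the budget-notification format as I recall it, and disable_billing() is a placeholder for whatever action (detaching the billing account, stopping instances) you wire in yourself:

```python
import base64
import json

def handle_budget_notification(event: dict, context) -> None:
    """Entry point for a Pub/Sub-triggered function on the budget topic."""
    payload = json.loads(base64.b64decode(event["data"]).decode())

    # Field names assumed from the budget notification format.
    cost = payload.get("costAmount", 0.0)
    budget = payload.get("budgetAmount", 0.0)

    if budget and cost >= budget:
        # Placeholder: detach billing, stop instances, page someone, etc.
        disable_billing()

def disable_billing() -> None:
    # Hypothetical hook; Google's "capping costs" sample fills this in by
    # calling the Cloud Billing API to detach the project's billing account.
    raise NotImplementedError
```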
The capping costs sample code in my updated link warns you that it will go and forcibly stop assets, possibly deleting data, due to suspended billing. It still has the same notification delay so it's not a total panacea, but it does help alleviate some of the fear that I'll accidentally end up with a huge bill at the end of the month due to a small misconfiguration or for forgetting to shut down a GPU instance or something.
It’s even worse than that, really, because it only takes a small slip into some of the cloud-native services and that adversarial relationship becomes entirely unavoidable and unportable, and you are stuck with it. Which is exactly what is demanded by the providers to get the best cost-benefit relationship in the short term. Of course the human race is entirely cursed by short-term thinking.
The true cost of all technology is only apparent when you get the exit fee invoice.
Interestingly, at Google the typical developer workflow (google3) is very cloud native.
Most devs write code in VS Code in the browser.
Many (most?) devs don't have a physical desktop any more, just a cloud VM.
The code lives in a network mounted filesystem containing a repository.
The repository is hosted remotely (everyone can see edits you make to any file nearly immediately).
Builds are done remotely with a shared object cache.
Tests typically run in the cloud (forge).
Facebook has similar infrastructure, although more pieces run locally (builds were mostly done on your VM circa 2020)
For my personal projects, I try to do most development on a cloud instance of some kind, collocated with the rest of the infrastructure.
I prefer the ability to run and debug locally coupled with a good IDE. I know VSCode is popular and people customize the shit out of Vim, but IntelliJ just works for me when I'm writing Java, Kotlin, or Typescript/React. Refactoring and debugging are not comparable. And I know most think it's hard on resources, but we have 200k lines of code now and it works with a 16GB M1 Air very well, leaving more than enough spare resources for the system.
Many developers, now and before, like to have their own desk/space; it helps them think. Getting rid of that space or changing it may not be optimal for many developers I've worked with.
Lol desktop meaning a physical computer. Engineers still have desks with tops. If anything they have more space than ever since the offices are so empty.
Having heard complaints from Google developers, the problem with this is the limitation of Chromium and the browser more generally. Browsers are utterly terrible at letting users script their own shortcuts, etc.
Wait, I remember Google gave up supporting IntelliJ around 2011, leaving only one full-featured IDE, Eclipse, as the only option. Did it change since 2011?
That reversed in ~2016. Because Android Studio was based on IntelliJ and heavily staffed (including Blaze support for development of Google's own Android apps), TPTB decided that they should put their weight behind IntelliJ instead of Eclipse. Official internal support for Eclipse was discontinued and the Eclipse team was disbanded.
I've also switched all my dev work to Gitpod a year ago and I don't want to go back to developing locally anymore. I curse and swear every time I need to work on a project locally.
Gitpod URLs are generated every time you start a new environment (usually every time you start working a new feature/bug fix), and it doesn't have static URLs. So you would need to update the endpoint URLs manually.
If you use VS Code locally to connect to Gitpod instead of in the browser, all URLs are mapped to localhost, so then it shouldn't be an issue.
I can't think about the cloud without immediately grasping its huge downsides: absolutely no privacy at all, data lock-in, forced migration, forced obsolescence, and things just vanishing if the rent is not continuously paid.
I have files on my computer from the 1990s and 2000s. If we lived in the cloud-centric world those projects that I did back then would probably be gone forever since I'm not sure I would have kept paying rent on them.
There's also no retrocomputing in the cloud. I can start DOSBox on my laptop and run software from the DOS era. That will never be possible in the cloud. When it's gone it's gone. When a SaaS company upgrades, there is no way to get the old version back. If they go out of business your work might be gone forever, not just because you don't have the data but because the software has ceased to exist in any runnable form.
It all seems like an ugly dystopia to me. I don't think I'm alone here, and I think these things are also factors that keep development and a lot of other things local in spite of the advantages of "infinite scalability" and such.
I'm not saying these things are unsolvable. Maybe a "cloud 2.0" architecture could offer solutions like the ability to pull things down and archive them along with the code required to access them and spin up copies of programs on demand. Maybe things like homomorphic encryption or secure enclaves (the poor man's equivalent) can help with privacy.
... or maybe having a supercomputer on my lap is fine and we don't need this. Instead what we need is better desktop and mobile OSes.
> I have files on my computer from the 1990s and 2000s. If we lived in the cloud-centric world those projects that I did back then would probably be gone forever since I'm not sure I would have kept paying rent on them.
On the other hand, I don't have any of my 90s/2000s projects because I would occasionally lose a hard drive before transferring everything to my new machine, or would occasionally transfer not-everything and then later regret it.
I guess dropbox isn't "the cloud", but I haven't lost anything since I started paying for dropbox when it came out, and things wouldn't just vanish if the rent is not continuously paid.
I sure wouldn't mind more cloud services that improve and add to the local computing experience rather than deliver themselves only through a browser and a web connection.
I agree with you that a cloud 2.0 architecture is needed. I don’t agree with you that you can’t run DOSBox in the cloud. You totally can. In fact, you can containerize a dosbox app and forward the output over websockets or tcp. I have files from 1990s and 2000s as well. I keep backups, as everyone should when dealing with cloud/internet/not-my-machine.
I can run DOSBox in the cloud. What I can't do is run an old version of Google Docs, Salesforce, Notion, or Alexa.
I can run old commercial software that I paid for in DOSBox or a VM because I have the software, even if it's just in binary form. I have the software and the environment and I can run it myself.
That's the difference. The cloud is far more closed than closed-source commercial software.
I can also run the software with privacy. When I run something locally there's nobody with back-end access that can monitor every single thing I do, steal my data, scan my data to feed into ad profile generators or sell to data brokers, etc.
I think you are mixing up SaaS with cloud. You can run Firecracker functions and an old version of Rocky Linux, but it is 100 times more complex than paying for the systems, and the cloud provider encourages these proprietary tools because of this. Something similar would be Dynamo or Firebase, which are pay-as-you-go SaaS.
> I never ever again want to think about IP rules. I want to tell the cloud to connect service A and B!
Dear God this 1000 times. My eyes bleed from IP-riddled firewalls foisted upon my soul by security teams.
If I could also never NAT again, that'd be nice.
> Why do I need to SSH into a CI runner to debug some test failure that I can't repro locally?
Hey I can answer that one. Because an infra team was tasked with "make CI faster" and couldn't get traction getting the people responsible for the tests to write better tests (and often, just hit a brick wall getting higher ups to understand: "CI is slow" does not mean the CI system is slow. CI's overhead is negligible), and instead did the only thing generally available: threw money at the problem.
Now CI has a node that puts your local machine to shame (and in most startups, it's also running Linux, vs. macOS on the laptop) (hide the bill), and is racing those threads much harder.
I've seen people go "odd, this failure doesn't reproduce for me locally" and then reproduced it, locally, often by guessing it is a race, and then just repeated the race enough times to elicit it.
Also, sometimes CI systems do dumb things. Like Github Actions has stdin as a pipe, I think? It wreaks havoc with some tools, like `rg`, as they think they're in a `foo | rg` type setup and change their behavior. (When the test is really just doing `rg …` alone.)
Also, dev laptops have a lot of mutated state, and CI will generally start clean.
Those last two are typically hard failures (not flakes) but they can be tough to debug.
> Do we need IP addresses, CIDR blocks, and NATs, or can we focus on which services have access to what resources?
We need IP addresses, but there's not really a need for devs to see them. Nobody understands PTR records though. CIDR can mostly die, and no, NAT could disappear forever in Cloud 2.0, and good riddance.
Let me throw SRV records in there so that port numbers can also die.
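A small sketch of what client-side SRV discovery could look like, using dnspython; the record name is made up:

```python
import dns.resolver  # pip install dnspython

# Ask DNS where the service lives instead of hard-coding host:port.
answers = dns.resolver.resolve("_myservice._tcp.example.com", "SRV")  # hypothetical record
for rec in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(f"connect to {rec.target.to_text()} port {rec.port}")
```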
Because it's bothering me: that graph is AWS services, not EC2 services.
I'll admit it depends a bit. We're moving to Github Actions and their runners are … slow. There are custom runners, but they're a PITA to set up. There's a beta for bigger runners, but you have to be blessed by Github to get in right now, apparently.
Saying that Spotify is "producer-friendly" must be couched in the context of the times. 100% of $0 is still $0, and at the time most people were just pirating music so you weren't making anything off of recordings. If Spotify wanted to give you literally fractions of a cent instead of $0, you were going to take that. I wouldn't say it was ever really friendly to producers...mostly to consumers and, in order to be friendly to consumers, they had to win over record labels. And I think Spotify made a _lot_ of compromises in order to do that, including taking money that should really be going to producers and paying off the RIAA/labels so they continue to put their catalogs on there.
Source: I was a producer when Spotify started and I still am.
> get peanuts from spotify even for non-trivial number of streams.
Define "non-trivial". Easily 80-90% of all plays are by a handful of artists, the absolute vast majority of whom are owned by labels.
Even if your non-trivial number of listens is in the tens of millions, it pales in comparison to Drake or Ed Sheeran.
> until spotify pays from MY subscription the artists I listen to
Your subscription is 15 dollars a month. Yup. That is definitely not peanuts when spread over all the stuff you listen to.
Edit: you could be one of the few people who listen to the same band all the time, but that's not representative of people's listening habits.
Edit2:
> there are countless musicians who own the copyright to their own stuff
80-90% of all world music is owned by four companies [1]. This amounts to about ~99% of music people listen to. The "countless musicians" make up a long tail that is barely a blip on the radar.
You seem to be about a decade out of date, per your wikipedia link. It states ~72% of music (down from ~88% in 2012) is owned by the big three (not four, after EMI was eaten by Sony in late 2011). Best of all, parent may have the right idea buying from Bandcamp. From your link:
> These companies account for more than half of US market share. However, this has fallen somewhat in recent years, as the new digital environment allows smaller labels to compete more effectively
Which to me at least suggests that separating hardware like CDs and LPs from the actual music is helping artists. Perhaps that should be taken with a grain of salt, though: I'm still optimistic and naive enough to think things may improve for artists.
> It states ~72% of music (down from ~88% in 2012) is owned by the big three (not four, after EMI was eaten by Sony in late 2011).
It's hard to keep specifying the exact composition of the music scene, and the market share by the Big Four then Three then Four then Three again is fluctuating between 70 and 90 percent from year to year.
> I'm still optimistic and naive enough to think things may improve for artists.
The Big Ones have the industry in a chokehold. Bandcamp is fine, but if you want to listen to something other than indies, you're stuck with the catalogs owned by the Big Ones. For example, https://www.sonymusicpub.com/en/songwriters Anything from the Beatles to AC/DC and from Ennio Morricone to Dolly Parton is owned by Sony.
So you want to start a service that provides both indies and this music? You will bow to the industry's terms. If you have enough money and clout, like Apple, you'll be able to negotiate better terms. Until then ¯\_(ツ)_/¯
I started my career in simpler times. Developers would produce a zip and hand it over to an admin guy. Dev and Infra/Ops were clearly separated. No CI, sometimes not even a build step.
I understand the power and flexibility of the cloud but the critical issue is the dependency on super humans. Consider a FE or mobile app developer. They already greatly struggle just to keep up with development in their field. Next, you add this massive toolset on top of it, ever-changing and non-standardized.
A required skillset overload, if you will. Spotify concluded the same internally. They have an army of developers and realized that you can't expect every single one of them to be such "superhuman". They internally built an abstraction on top of these services/tools, to make them more accessible and easy to use.
And you're glossing over the pain points that drove the industry to coin DevOps - those times when the zip didn't contain everything it needed to run in production properly and the admin guy had to call the dev multiple times in the middle of the night because their app didn't start properly on deployment. Or the install/startup procedure wasn't documented properly. Or it changed and the document didn't get updated. Or there was a new, required environment variable that didn't get mentioned in documentation anywhere. Or a new, required library was on the dev's local workstation and not on the server. etc etc
As a former sysadmin who had part of his career in that paradigm, I never again want to wait until 10:30 PM to run manual production deployments handed to me by a developer and hoping their documentation was correct.
Give me CI/CD pipelines deploying containers to a k8s cluster during the day.
> Developers would produce a zip and handed it over to an admin guy
This is literally what the cloud is now for a fraction of the cost of the admin guy.
Current gen serverless containers basically deliver that promise of ease of use, scalability, and low cost.
For me, Google Cloud Run, Azure Container Apps, and AWS App Runner fulfill the promise of the cloud. Literally any dev can start building on these platforms with virtually no specialized knowledge.
I'm just not sure how you define what goes into that zip in a way that does not make it substantially harder to solve tough problems than it would be to be familiar with cloud services.
Of course it'll cover you up to a point. If it's a CRUD web app that runs on a single server (or multiple stateless ones) and uses a relational database, you can have a zip file whose contents cover your needs. But if you have anything that justifies Kafka, Cassandra, or distributed storage, the "I'll just throw it over the fence to ops" paradigm isn't likely to fit as well.
Maybe there is a name for this phenomenon, but it feels like when we add so much productivity via layers of abstraction, even more person-effort gets allocated to the higher levels of abstraction. Because
1. that's where people are most productive / happy / compensated / recognized / safe
2. businesses can confidently project return on investment
How many engineers get to work on a part of the stack that has some room for fundamental breakthroughs or new paradigms? The total number has maybe grown in the last 50 years, but not the proportion?
It's hard to justify an engine swap once there's so much investment riding on the old one, so just not a lot of people are researching how to make that new OS.
That is until a Tesla comes around and shows the market what could be better/faster/cheaper.
Probably not the name you're looking for, but I typically talk about this stuff in terms of local and global maxima. Low-risk optimisation efforts typically get trapped on some local maximum over time, while bold efforts get closer to the global one - the minority that doesn't fail, that is. Applies to build vs buy decisions and business in general quite nicely.
From what I've seen, businesses and projects usually become more risk averse the more established they are - they are economically incentivised towards that.
The silver lining for me is that there is always room for disruptors in this scenario.
I am not a cloud expert but so much of this rings true, esp the following quote:
“Why is Bob in the ops team sending the engineers a bunch of shell commands they need to run to update their dev environment to support the latest Frobnicator version? For the third time this month?”
Because devs will not update their Frobnicator for seventeen years, choosing to solve leetcode instead. Eventually the Frobnicator that the devs are using will be so security vulnerable the fact that the source code exists in the package repository is itself a CVE. Because you're a dev, when this happens it's a funny story, but for Bob it's seventeen meetings and having to listen to Franz, the director of development chew out the entire team as if they're utterly incompetent. This means Bob just disables your access to the rest of the systems unless you have a correct Frobnicator, and doesn't care whether he blocks you or not - because you would be complaining to your director either way.
You might be exaggerating here. Anecdotal evidence and all but even the juniors I work with are mostly diligent in keeping their important tooling up-to-date.
Everything Google is doing comes to the world 10 years later. Being inside Google is like seeing the future. They had all these technologies long ago, now it's just a case of timing and turning them into products. I've learned that sometimes the world just isn't ready for these advancements. The journey Google went on internally, everyone else has to go on for themselves.
That said. I think we're still super early in cloud because it's still about how we the developers use it and not the end advancements for consumers. The cloud has changed user behaviour through streaming services, saas and cloud based storage but I think there's far further to go. Meaning there's some cloud first, always on, behaviour shift that needs to happen with the services catering to that model. You'd say saas and cloud is already there but I think it's a lie. We're just replicating what we did locally in a remote env. The cycle of thin clients and fat servers e.g Citrix and the rest of it. A major shift is coming very soon.
I talked to a manager at Google during an interview, and he explained to me that almost all tools at Google are home-baked, because most if not all services are so huge you won't be able to use open-source solutions for them.
Then I reminded myself about VictoriaMetrics, which in a benchmark outclassed Google Cloud Metrics by an order of magnitude.
Ppl at Google think they are the smartest (and often are) but in some cases they are simply outclassed so hard.
After this discussion I decided I'll never ever want to work with ppl with such an attitude.
> After this discussion I decided I'll never ever want to work with ppl with such an attitude.
This is a problem I constantly come across when trying to hire people coming from Netflix, Google, Facebook, Amazon and similar companies that have this "I'm the smartest vibe".
At one point in time, it was a good indicator of skill that they were coming from one of those places, that we could trust their technical knowledge as long as they left the place willingly and weren't fired from the place.
But some years ago it started to change, and now we see previous work experience at those places as something negative, as hires from those places tend to want to upend everything to match what their previous company used to do, even though it wouldn't make any sense for their new workplace.
And then "scale" constantly becomes an argument when the product they're building hasn't even found market fit yet. They're always jumping to theoretical limits that we're nowhere near hitting, wanting to solve everything upfront.
It's exhausting both for management and the rest of the team to have to deal with, so best just to avoid that class of developers as a whole.
> in some cases they are simply outclassed so hard.
I think this is true in some cases, but Google has been okay at adopting external vendors where internal tools aren't keeping up. And in some cases, folks actually hate the external replacement and miss the google-built tool. So YMMV.
Companies like FB, Google, etc are large enough and have specific enough needs that sometimes they really do have to build their own thing. Buck had to be built by Facebook because Bazel wasn't open source yet, and well that's one example of something from Google outclassing all the competition for organizations that need anything like it.
In re arrogant people, they exist at all organizations whether it's warranted or not. I wouldn't let a random peon at Google affect your perception of the organization. You'll see similar behavior from companies at all sizes so it's not really a telling signal.
I haven't seen so many people eagerly waiting for mainframes since the 70s. As the author said - no IPs, no CIDR, no NAT, no counting of RAM, just vast, infinite resources; you only need a terminal to do everything remotely; it's only one development environment; mainframes for the masses this time....
And yet you still have to work within boundaries, because you, or whoever you work for, don't have money for infinite resources. It's a small contradiction omitted everywhere in these kinds of posts. But hey, let's come full circle into the 70s and welcome our ma...cough.. Cloud 2.0. If this happens there's hope. We will relive the microcomputer revolution after that. :)
https://replit.com is making progress on this. They've moved (almost) all dev tools to the cloud so you can just edit and run in the cloud.
My own project, GridWhale, goes one step further and provides a single, integrated cloud platform for development. Rather than writing separate programs for frontend and backend, you write a single program and the platform remotes the UI as appropriate. Here's a demo: https://gridwhale.medium.com/the-gridwhale-gui-system-55c449...
Do we really want to develop in the cloud? My gut says no. I have no real opinion about that, but it seems worth investigating. Is anybody working with a proper dev environment (no, sorry, a small React-only project doesn't cut it, I'm talking a JVM-driven, run-by-Docker kind of thing) in the cloud here?
Yeah, as another commenter mentioned, all Google SWEs have developed in the cloud for a long time. It allows you to write, run, and test code performantly on any computer with a browser. Some of OP's wishlist are realities at Google. E.g.
- When I compile code, I want to fire up 1000 serverless containers and compile tiny parts of my code in parallel.
- When I run tests, I want to parallelize all of them. Or define a grid with 1000 combinations of parameters, or whatever.
Build systems, testing infra, and cloud editing all need to be there for the magic to happen. When your cloud editor supports distributed builds & testing infra, and can be used by anyone, life is really good.
FWIW widely available cloud editing is also getting good with VSCode + LSP, if you don't want to pay Replit. Getting Bazel to do distributed builds instead of local builds is really annoying tho.
Without internet connection, most modern development grinds to a halt pretty quickly in any case. Github won't work. All those dependencies your build needs will no longer download. That issue tracker that tells you what you need to do is no longer reachable, and forget about copy pasting from Stackoverflow. Etc.
There are things you can still do offline of course. But it gets inconvenient pretty quickly. There's not a whole lot of offline development happening anymore.
So, in the rare case the connection drops, you have a coffee break, and then you reconnect. If that's a regular thing in your life, change your internet provider or networking equipment.
There are plenty of locations where internet connections constantly breaks, or the network equipment is just so poor that everyone's connections drop/connect once a day or so. Or that the latency is so high that most servers drops your connection before they even gave you a chance to connect. That's "offline" as well even though I'm not actually offline, my connection is just really slow.
But the mindset you have explains a lot about why most software doesn't work well in environments like that; people simply believe conditions like that don't exist, so why should the server allow connections that take longer than 3 seconds? "Probably they're spamming us, so let's drop the connection instead".
If you have the right setup, all of the issues you're saying are easily worked around (even the "copy paste from Stack Overflow" issue, although I'm not sure if that's a joke or not).
Most software development simply does not happen in those places for that reason. Basically, it's a supply and demand thing. Software developers require decent connectivity and they'll move to where the connectivity is. Or they'll fix a decent connection (using Starlink or whatever).
Seems funny that Stack Overflow would enable offline usage when supposedly no developers have poor connection, they'll simply move to places with amazing networking.
What may be missing from cloud, is alignment of incentives. If you waste more compute, you increase their profit margins. That would explain things the author questions like general latency increases.
Developer environments and workflows built around the idea that you won't compile and run code on your own device can do wild things at the press of an iPhone app button.
> Since reading the blog post's mention of Repl.it I went and downloaded their new iPhone app and used Modal.com to spin up 30-40 containers from a script doing sentiment analysis on ~30k movie reviews:
iPhone processors run billions of cycles every second and are capable of running billions of instructions every second. I'm amazed that we've gone from "Run Doom on my toaster" to "I can spin up 30-40(!) containers to analyze 30k reviews".
It's laughable. Doom was written for the IBM PC. This PC had a clock speed of 4.77MHz and 64KB of RAM. The iPhone 12 runs at 3.1GHz, has 4GB of RAM, and has multiple cores. The phone in our pocket is vastly more capable than any piece of hardware we had 30 years ago, and we give accolades to people who can analyze sentiments (which is just running a bunch of matrix math at the end of the day) in under a minute using dozens of insanely powerful machines.
We should be able to analyze 30K sentiments in a minute easily on an iPhone. And we should be able to analyze that data in under a few seconds on a single desktop.
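To put a rough number behind that claim: the sketch below uses made-up reviews and a toy lexicon scorer (not the transformer-class model the Modal example presumably ran), so it is only a lower bound on the real work, but it shows that 30k short texts is a trivial amount of data for one modern CPU core.

```python
# Back-of-envelope support for "30k reviews in seconds on one machine":
# a toy lexicon scorer over synthetic reviews. NOT a real ML model; it
# only shows the raw data volume is tiny for a single modern CPU.
import random, time

POSITIVE = {"great", "good", "excellent", "fun", "loved"}
NEGATIVE = {"bad", "boring", "awful", "terrible", "hated"}
WORDS = list(POSITIVE | NEGATIVE | {"movie", "plot", "acting", "the", "was"})

reviews = [" ".join(random.choices(WORDS, k=40)) for _ in range(30_000)]

start = time.perf_counter()
scores = []
for text in reviews:
    tokens = text.split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    scores.append(score)
elapsed = time.perf_counter() - start
print(f"scored {len(reviews)} reviews in {elapsed:.2f}s")  # typically well under a second
```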
> Doom was written for the IBM PC. This PC had a clock speed of 4.77MHz and 64KB of RAM.
Well, not that PC. Doom required a 386 and 4MB of RAM, but you really wanted a 486 or Pentium to run it smoothly. Catacomb 3D, a really early id Software 3D game, actually would run on an 8088 XT.
Also noteworthy: Doom was created on a NeXT computer, which was also a bit ahead of the PC at the time, so there was a power differential at work in what they were creating.
Related to the article, I think this would fall under the lift-and-shift concept the author described.
What the author really wants is a transformative experience around developing in a way that is cloud native.
So don’t apt install packages on your alternative iPhone that has a Linux container option built into the OS.
Instead, tap a few buttons to say you are developing a webapp with a Node backend, a Postgres db, and a Redis instance, and code anywhere you go without thinking about setting up an environment. Don’t even think about how to connect to your db: the tool knows you want your service to connect to it and knows not to let anyone else connect, except for exceptional-case debugging. And once you are done with your v0.0.1, you press deploy, wizard your way through, and it’s out on the internet without you having to think about it further. (For bonus points, everything including the platform config is quietly getting committed to a git repo in the background, so you get all the advantages of IaC by default in the event you need it.)
And you don’t think about scaling or deployment resources or anything like that. It just happens and you go on about your business (hopefully with some thought given to billing). And when you want to connect a service to another one, you don’t think in terms of IP blocks or auth or certificates. Service A communicates with the world and with service B. The dev experience is that you call service B from service A, and all the auth and TLS and IP addressing and namespacing is handled in the background. The dev experience is you call service B and that’s it. And that deploys 1:1 to production as well.
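As a purely hypothetical sketch of what that developer experience might feel like (every name below is invented for illustration; no real provider exposes this API, and the tiny stub classes exist only so the sketch runs):

```python
# Purely hypothetical sketch of the "tap a few buttons" experience described
# above. Everything here is invented; a real platform would do the heavy
# lifting that the comments describe.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str
    links: list = field(default_factory=list)
    def connect(self, other: "Resource"):
        # Imagined behavior: mTLS, auth, and network policy wired up here.
        self.links.append(other.name)

@dataclass
class App:
    name: str
    resources: list = field(default_factory=list)
    def add(self, name, kind):
        r = Resource(name, kind)
        self.resources.append(r)
        return r
    def deploy(self):
        # Imagined behavior: config committed to git in the background (IaC by
        # default); scaling and certificates handled without developer input.
        for r in self.resources:
            print(f"deploying {r.kind} '{r.name}' -> talks to {r.links or 'nothing'}")

app   = App("my-webapp")
db    = app.add("main-db", "postgres")
cache = app.add("cache", "redis")
api   = app.add("backend", "node-service")
web   = app.add("frontend", "static-site")

api.connect(db)    # "service A calls service B" is the whole developer-facing story
api.connect(cache)
web.connect(api)
app.deploy()
```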
Even the above scenario feels somewhat uncreative, like it only imagines a few steps up from what we have instead of a paradigm shift. But basically it’s not about shifting to different platforms like iOS or remote dev machines. It’s about an experience of development that is tied in deeply with the environment you ship to, in a way that completely frees you from thinking about low-level concepts, which all happen in the background.
But this forces architectures that may be problematic for certain use cases. Cookie cutter solutions will always end up getting bogged down with more and more options. And options for options.
I hear you. I’ve been lucky to work on dev experience in a platform team, so I do agree that this is a platform team’s job. I wonder, though, as our stacks mature and things normalize, whether there’s a chance to do some thinking and organize systems from first principles to create a platform that suits the vast majority of app development. If a team outgrows it, maybe that’s a call for a platform team.
I’m trying to imagine, though, what a platform team might look like in a world like that. A lot of DevOps teams today work with cloud configs and Terraform, for example, instead of bash scripts and hardware. Maybe platform teams of the future think in terms of plugins and modules for these imagined systems instead of building on top of a lot of low-level stuff.
I see what you're saying, but I also think it misses the point of where this is all going. An iPhone 12 is an enormously powerful computer, but it's not at all one that is accessibly programmable to Repl.it devs. Similarly, it's possible to run 30k sentiment analysis examples in a minute on an iPhone, but actually doing so would take a skilled dev weeks to implement (because it's not designed to do that!).
Our computer systems have got ludicrously more powerful, and software development has in a sense become ludicrously more inefficient, but computing is a wonderful culture of abundance and _easy and fast enough_ almost always wins over _difficult, faster, and efficient_.
The HN conversation yesterday about the complexity of the proton, in particular how we poke and prod at it to suss out its qualities and quantities, got me thinking about the subatomic particles of my personal subjective conscious experience.
We can trace, with quite a bit of precision, how a certain photon cocktail results in me perceiving the orange title bar at the top of HN. But where's the orange paint in my brain? What is it made of and how could we inspect it like we inspect the guts of a proton? And furthermore, where's the camera that puts all of those particles of paint onto the same stage? We know where the visual cortex is, yes, but where's the camera that can see the whole stage at one time?
That part, the integration of all of our perception into a single 'stage', is where I've long felt there has to be some kind of quantum or possibly field effect at play. Then I wondered if it might actually be possible for there to be an 'afterlife' of sorts in which the quantum relationship between particles is sustained beyond our life.
For a moment I thought there must be some faulty entanglement in my own brain, but now I think you actually meant to post your comment under a completely different story that's currently on the front page :)
> Cloud providers have hobbled growth by overcharging
Just stop there.
What the author (and I!) want is for my computer on my desk to have a seamless integration with the Cosmic AC in Hyperspace.
The problem is that the transition point costs me money and the amount is generally unknown or unpredictable.
My laptop or desktop has a fixed price, and then I get to use it infinitely. Until that becomes true for the cloud, it will always be hamstrung.
(There is a secondary argument that progress in computing has been held up by the fact that UPLOAD speeds basically haven't moved in 20 years--but that's for another day).
To follow up on this: computers are really powerful, and I want to work when the net is down or otherwise unavailable. Yeah, I can use the cloud for production work, but why must I rely on other people's resources when developing tools or applications? It's just a cash and time sink…
I considered cloud for my ML application that uses terabytes of proprietary data. I took one look at those egress costs and bought my own server for less than the cost of one full egress plus a short period of running time.
Just the thought that they might one day up those charges all of their own accord makes it even more of a non-starter.
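For a rough sense of the scale involved (the numbers here are assumptions: roughly the ~$0.09/GB internet-egress tier the big clouds have historically charged, and a made-up 20 TB dataset; real prices vary by provider, region, and volume):

```python
# Rough, assumed numbers only: ~$0.09/GB egress and a hypothetical 20 TB dataset.
dataset_tb = 20
egress_per_gb = 0.09
one_full_egress = dataset_tb * 1024 * egress_per_gb
print(f"one full egress: ${one_full_egress:,.0f}")   # ~= $1,843
# A capable server with a few TB of disk can cost less than that,
# which is the trade-off the parent comment is describing.
```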
Completely agree, and the Oct 2022 GCP decision to start charging "egress" fees for accessing multi-region storage buckets _within_ GCP was not much appreciated either!
Thankfully, Cloudflare has come along with basically free egress. They're just kind of a weird fit, because the cloud services they offer are a different shape than usual.
Lock-in mentality: free data in but high charges out. Although Azure <-> Oracle Cloud have gone to zero-rated egress for one use case. It will be interesting to see if that extends to other use cases and/or clouds.
We're seeing the drive to Internal Developer Platforms and Platform Engineering teams who are charged with eliminating developer toil and creating golden paths (a term, ironically, coined by Spotify engineering) or paved roads to production, as a response to the complexity inherent in modern apps and the modern app SDLC.
Not so much an abstraction over public cloud as it is an opinionated consolidation of DevOps, SRE and cloud engineering.
I now have interns who are supposed to be coding; some are close to earning nice diplomas like "CS engineer" and the like, but the sad truth is that they don't understand what a computer is, what a network is, what a server and a client are, what the internet is, or what a file is.
So cloud 2.0 is all fine and dandy; not having to care about IP addresses and NAT and storage space is great. However, don't forget that all abstractions leak, and at some point I'm not sure you can escape going through "From NAND to Tetris" and having built a LAN with a couple of Raspberry Pis to get shit that works done.
The cloud we have today makes sense for companies and businesses and is quite mature. But we are in the early days of the _Personal_ Cloud. For personal apps (Instagram, Photoshop, health apps, notes, etc.), a new kind of cloud needs to emerge, which should look a bit like solidproject.org, IPFS, Dropbox and OneDrive.
Why not iCloud and OneDrive as they exist today? The problem is that sharing is very primitive and basic on those systems. There's no way for someone to build an Instagram or Facebook on iCloud.
> I'm excited for a world where a normal software developer doesn't need to know about...
I'm not excited for this. There's a quote I cannot find that I miss greatly; maybe a Whitehead quote or some such, about civilization being measured by that which it doesn't have to think about, that which it takes for granted. It's always struck me as powerful, but giving ourselves the ability to forget & un-learn does not tempt me in development. We are the builders, and this great rich pool of possibilities is rarely improved by merely forgetting & becoming an exclusively higher-level operator. Depth is deeply rewarding in development.
Let's talk about the cloud some, & this pool of capabilities we are so delightfully placed at the helm of, and the cloud's influence on these capabilities.
> Somewhat ironically, software development is one of a vanishingly small subset of knowledge jobs for which the main work tool hasn't moved to the cloud. We still write code locally, thus we're constrained to things that work in the same way both locally and in the cloud. Thus, adapter tools like Docker.
It's super hard for me to imagine a replacement, not because replacements won't be great, but because replacements will have a hard time becoming core knowledge for the software development world.
(It's problematic because great openness lets us roam too freely and unboundedly, but) one of the greatest glories of software development is how unboundedly open & downright democratic it is. We use software until it stops serving us well, and then we either roll up our sleeves, dive in & improve it, or start something else entirely. But we can keep drawing from the same pool, from the many, many possibilities & ideas which all interrelate & support each other, to shape new ideas & give life to new forms.
The author's premise, to me, feels like a proposition that we will be so well served by the cloud that we can just leave where we are behind. This is about not needing the pool of knowledge or capabilities we have, the systems we have, because we'll be working somewhere else.
In many ways though, that to me sounds like declaring that the future is different, therefore we need a new primordial soup to start from. 'Using only ATCG for genetics is a paradigm that must be eclipsed!'
But you know what? Someone builds that new place. And in 98% of cases, the same old fundamentals & tools are still at play, underneath the new tools, underneath the new abstraction. Larry Wall (in "Perl, the First Postmodern Computer Language") would say: the truth is our systems will always be post-modern. Modernity's shining image of itself, as brilliant novelty sprung from nothing, is rarely true. New ideas are, more often than not, creative re-applications and remixes of old materials.
Rather than simply say that a new paradigm is probably not so new, I think there's a deeper challenge, which is: how does a new paradigm ever rise to such a height that it becomes well known? How do we adopt a new paradigm & start teaching it & using it? How does it become the next thing?
We are very well served by the cloud. Its presence as a system of services, as a far-off, maintained-by-other-people, no-longer-our-problem miracle-working wondermachine, is all true & very well reported, and it is coming for everything and everyone doing work today.
But I'm not at all afraid, because, in 99.99% of cases, these vast neo-mainframes have no way to pass on their genetics. They are alone and isolated, and developers cannot get into their bowels. There is no "Midnight Computer Wiring Society" of the cloud, and there never can be and there never will be, because the cloud, unlike the PC and the tools we have here, is about control & orchestration & order & rules. There's no permission in the cloud to go develop your own culture, to become a new wave, to change everything for everyone (including the other devs). You are just one lone neo-mainframe, just a couple of your own ideas that you're calling your paradigm & building your cloud by, and in a huge number of cases your ability to interact with, talk with, and share with other people also doing cloud, or to enable other people to try to cloud like you do, is exceedingly small. Clouds are all unique and alone & they have a much harder time spreading socially.
How does a new cloudy paradigm ever get the ball rolling? What are its central tenets & beliefs that make it a flexible, malleable Swiss Army knife where all developers everywhere have even more power & creativity, not just at building applications atop it but at enhancing & growing & exploring the platform as well? I do think eventually we will find new things to make core, to form real communities & new shared bases upon (I think Kubernetes' apiserver+controller paradigm is probably a core construct in the future, for example). There are early, early signs that we are civilizing what so many hyper- and not-so-hyper-scale cloud technologies have frontiered. (But oh, it's so early, and it happens at so much more depth than the shallow 'your workflow will be replaced' message of this article.)
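For readers who haven't met the apiserver+controller pattern: at its core it's a reconcile loop comparing declared intent against observed state. A generic, library-free sketch (the resource names and the in-memory "state" dicts are made up; this is not Kubernetes' actual machinery):

```python
# Generic sketch of the controller/reconcile pattern: declared intent vs.
# observed state, converged in a loop. Real controllers watch a real API
# server and act on real infrastructure instead of a dict.
import time

desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}   # declared intent
observed = {"web": {"replicas": 1}}                             # current reality

def reconcile(desired, observed):
    for name, spec in desired.items():
        have = observed.get(name, {"replicas": 0})["replicas"]
        want = spec["replicas"]
        if have < want:
            print(f"{name}: scaling up {have} -> {want}")
        elif have > want:
            print(f"{name}: scaling down {have} -> {want}")
        observed[name] = {"replicas": want}   # in reality: issue API calls, then re-observe
    for name in list(observed):
        if name not in desired:
            print(f"{name}: deleting (no longer declared)")
            del observed[name]

while desired != observed:          # real controllers run forever on watch events
    reconcile(desired, observed)
    time.sleep(0.1)
print("converged:", observed)
```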
I think there's a lot of good peering into the mid-horizon to do, the attempts to peer forward are good, & I commend this article. It's right to ask (sic):
> Rethinking these abstractions to be native to the new world let's us start over and redefine the abstractions for what we need?
But I highly doubt we will really get release from, or escape, the past. The article tries to question the past 50 years, and I think perhaps yes, we might diminish it; it might not be at the forefront forever. But I have a hard time imagining enough real value or enough real difference. Even if we switch to Zircon or KataOS or Zephyr or Genode or the next thing, I tend to think most existing abstractions will largely remain, perhaps mutated some, some more prevalent than others, and that the view will not really end up looking all that different. The platform will continue, yes changing, but also in many ways similar.
The above all speaks to a fairly slow-building, evolutionary view of the future. That said, I think we really dropped the ball on trying to bring software development online; we've been stuck at a shitty, unambitious local maximum. We still write a crap ton of code that's just driving HTTP clients. gRPC is still deeply one process talking to one service, a very convenient way to still manually write individual send(call)/receive(return) calls/streams. Cap'n Proto dared to dream a little more multi-service, to follow somewhat after E-language, but never materialized 3-plus-way communication.
The long hangover after CORBA and SOAP blew out & got eaten by simple-is-better REST has turned into a forgetting, into not trying. The idea of finding new abstractions is interesting, actually (contrary to the first part of my rant), but programming language design has remained so focused within the language that we don't have the creativity to expose & play earnestly with the abstractions we have. Language design has focused near-exclusively on building better processes, without integrative cross-system rework, without a much higher scope of change desired. We're just shuffling the cards again and again with the same base system; it's all different syntax spins, different ergonomics, maybe a new safety guarantee (and boy are people excited about that!), but all the same underlying patterns, variously cloaked. A dull post-modernism.
If there is change, real change, I think it comes from going back & re-trying an E-lang or an Erlang, or more generally working aggressively towards multi-system. And much of that could just be taking what we do inside processes & doing it on the web/net. I ask myself regularly, and alas, I haven't gotten around to fucking around and finding out: what would an ECMAScript/JavaScript Promise look like, but on the web? This kind of primitive material makes total sense to developers, but we don't really express these things in systems ways; there's the inner process world & the outer systems world, and it's not abstractions we need per se: we just need to tear down the veil between these two realms. The process already has the materials to make a great cloud, we just haven't opened the box yet.
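As a rough illustration of what "a Promise, but on the web" could mean, here is a conceptual sketch in the spirit of E / Cap'n Proto promise pipelining, not anyone's real API: the key idea is that a not-yet-resolved result can itself be the target of further calls, so a chain of dependent remote calls need not round-trip through the client. Everything below (the classes, the toy "services") is invented for illustration and runs entirely locally.

```python
# Conceptual sketch of promise pipelining: calling a method on an unresolved
# result queues the call instead of blocking, so dependent calls ship together.
class PendingResult:
    def __init__(self):
        self.queued = []           # calls made before the value exists
        self.value = None
        self.resolved = False

    def call(self, method, *args):
        nxt = PendingResult()
        if self.resolved:
            nxt.resolve(getattr(self.value, method)(*args))
        else:
            self.queued.append((method, args, nxt))   # pipeline, don't round-trip
        return nxt

    def resolve(self, value):
        self.value, self.resolved = value, True
        for method, args, nxt in self.queued:
            nxt.resolve(getattr(value, method)(*args))

# Toy "remote" objects standing in for two services.
class User:
    def __init__(self, name): self.name = name
    def greeting(self): return f"hello, {self.name}"
class UserService:
    def lookup(self, name): return User(name)

users = PendingResult()
user = users.call("lookup", "alice")   # queued: no network hop yet
msg = user.call("greeting")            # queued against an unresolved result
users.resolve(UserService())           # "server reply" arrives; the whole chain runs
print(msg.value)                       # -> "hello, alice"
```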
> There's a quote I cannot find that I miss greatly ...
This is probably it: "Civilization advances by extending the number of important operations which we can perform without thinking of them." - Alfred North Whitehead
(And I agree that depth is rewarding and civilization allowing us to forget things isn't always tempting!)
A similar quote from A.N. Whitehead: "By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race."
> Cap'n Proto dared to dream a little more multi-service, to follow somewhat after E-language, but never materialized 3-plus-way communication.
I still intend to implement this! It just has yet to become the most pressing issue for the projects I'm focused on, since the development model is already supported using proxying, which makes 3-party handoff merely an optimization.
It's not at all like I don't see you being extremely, extremely visible elsewhere doing tons of stuff. But I am excited to hear you're still excited for 3-party handoff.
I'm not sure what the design space of a language that natively supports Cap'n Proto would be, or whether that'd be significantly different from really good libraries or macros for an existing language. Maybe existing languages are flexible enough. But my gut is that we can kind of start having more ambient systems, ambient data, ambient code, more freely, if we really built the language to work beyond the process scope from the start, using Cap'n Proto as a base. I readily admit there are probably some very fine ways to make this remote coding a very seamless development experience with what we've got, if we want, perhaps with very few rough spots.
I think you're right that the same old fundamentals & tools will still dominate, but I think this 'new world' can arrive by just making those old fundamentals more powerful and useful.
Erik's example of Spotify has within it the story of an ever more powerful and reliable internet network making first music streaming possible, then video streaming.
The (mobile and wired) internet network is getting so fast these days that maybe even a cloud-first, cloud-only software engineering process is ready to be enjoyed.
Spotify is an interesting example to bring up, & somewhat a sore one for me, because their Spotify Apps API was in effect an attempt to make Spotify a music cloud. It was, in my view, roaringly successful: a great system that thousands of people built intensely good and new music experiences on, embedded within the Spotify desktop app. It turned the desktop client into a cloud-run system for rapid music app deployment, all inside the Spotify walled garden.
So Spotify is now an example of a dead end, a former cloud, a mere product to me. They've taken great works from the past & around them, and they've built one product, and they're going to spend decades moving around where the buttons are and trying to tweak how and when they can ring the cash register & deposit some money into Spotify, Inc. They're from the cloud (sort of), but they've wound back 90% of their ambition to be or contribute to clouds.
Spotify (& many others) can build what they build because of a conflux of factors. Simply having gobs more hard drive, CPU, and network throughput, and (most important of all) new online consumers, was the core hard & fast requirement that enabled Spotify to become Spotify. But that's only semi-related to what I think is really at the heart of this conversation: the cloud. Yes, the team was good about scaling out & aggressively deploying new technologies, devops & core tech, and that helped them go. But it's confusing & unhelpful as a case study. Fact is: switches were just getting better, CPUs were just getting better, and Spotify could almost certainly have happened in a fairly legacy way, with fairly legacy ideas, & there had been streaming before, albeit executing & attracting/keeping the necessary talent would have had worse odds on legacy ideas.
Yes: from a consumer perspective, Spotify leverages the internet to on-demand deliver content. It's an example of most of the computing happening in a far off neo-mainframe. That's absolutely something we associate with the cloud. I absolutely see that as core to Erik's story here.
But the characteristic seems somewhat uninteresting to me in isolation. I do think more scale out abstractions & ideas have a huge place, a huge future, but also, they keep running into the "then all developers are just consumers of shit they really have no idea of or power over" problem that means there's no real social environment surrounding these advances.
Something that seemed real to me from these threads: the comment griping about never managing firewall rules by hand rings true. And we are developing control planes aka controllers aka operators, are building more intent-based autonomic systems, reasonably well, that do our lifting for us. There's a host of good new "edge" (not edgy edge edge, just like, lots-of-data-centers edge) tech that's also like: yeah, cloud it up more. Think less about computers/resources/clusters, just push code. These are all in the heart of cloud, of making available various grid computing/utility computing notions that have circled around for a long time, of making us think less about specifics. And I think that's indeed true & powerful. But it keeps running into the asocial problem above: there's no social environment, and most of the secret sauce is retained, locked inside the neo-mainframe.
Cloudflare and Deno seem to be among the only ones who realize the Tim O'Reilly adage that I hear nowhere near enough this decade: "Create more value than you capture." Or else your dream is going to some day die as your dream, with maybe good marks, but no real lasting success. If cloud computing is to be a future of real note, it has to be a shared one. That's been an exceedingly brutal gauntlet that few technological (cloudological) would-bes have proven their advance through.
We aren't early. Others, like Gumby here, have made great posts about how the cloud has existed for decades. I don't like his silly broad-ranging manifestos, but okay. I've seen a lot of old shit; we can reminisce about it, but it continues.