This is the same kind of flawed reasoning you see in the front-end world where a bunch of people complain that they do all their work in jQuery so React must be a cult.
Pasting what I wrote in another comment:
The goal isn't "ease of deployment", the goal is "infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.
"infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations
Who doesn't want that? Of course you want that.
But will the investment of time and effort pay off for your organization, and if so, how quickly? That's the hard question to answer. It depends on scale, personnel, the types of workloads involved, how easily your tools and practices can be updated, and presumably many other considerations. From my personal experience, in practice the answer is so murky that the deciding factors turn out to be social: the personal risk aversion of the people making the decision, people's loyalty to the company versus their own resume, and whether leadership cultivates a hyperoptimistic growth mentality of making 10x or even 100x decisions (i.e., making decisions as if the company will be 10x or 100x bigger in a year).
The problem, then, is helping people compare the cost/benefit of Kubernetes compared to their current practices, for their own organization.
If you only have a couple of servers you probably want to think of them as individual machines rather than abstract resources. A lot of equations simplify when you set x to 1.
Don't get me wrong: it's a great tool for some things. But IMO, for 80% of projects it's completely overkill.
There are plenty of tools out there that can get the job done at the scale that the vast majority of businesses operate in with lower operational and cognitive overhead than Kubernetes.
Or maybe they use Puppet or Chef. Most won't even need that.
Infrastructure as Code - If you're not doing infrastructure as code, how do you know who is taking what actions on your infrastructure? How do you know your tests are running on an environment that represents production? How do you know a tester or dev hasn't fixed something by hand and never committed it?
CI/CD - Do you have a quicker way to create test environments than just running kubectl create ns?
Resource Utilisation - Sharing servers to save money. Obviously you can use VMs, but do you want to do nested VMs on cloud?
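For the CI/CD point above: the declarative equivalent of that kubectl one-liner can itself live in version control. A minimal sketch (the namespace name and label are illustrative, not from the original comment):

```yaml
# test-env.yaml - a throwaway test environment as code
apiVersion: v1
kind: Namespace
metadata:
  name: pr-1234        # hypothetical name, e.g. one namespace per pull request
  labels:
    purpose: ci-test
```

Applied with `kubectl apply -f test-env.yaml`; deleting the namespace tears down everything deployed inside it, which is what makes cheap per-branch test environments practical.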
I'm not sure most people should run k8s, but in a world where you can use GKE, I can't really see why not. What offers a better solution?
YMMV... I've also been in a scenario where I just did separate pushes to dokku boxes behind a load balancer. There's plenty of room for in between.
Right now, I feel kind of like I'm treading water though.
This is a very true statement. I'm looking at it from a bit of a different view as well: we're currently set up very classically. Ops maintains Terraform, VMs, and config management. Some developers have taken over some high-level cluster management via the configuration management. This works very well.
As long as you're pushing 3 applications around. Onboarding an application into a config management solution can easily take 2 - 4 engineering weeks on the ops side. And the work will always be on the ops-side because that's where the required expertise of the configuration management lies, which is a hefty chunk of specialized knowledge. If you're looking at a ramp-up from 3 applications to 5, 8, 20, like in our case, that's ... ugly. In our case, that's actually planned, because we've been bought due to our experience in this context. Yey.
That's a huge investment if you look at time alone: it's like 80 engineering weeks at worst. That's suddenly a year of nothing else. A situation like that makes the operations team an even worse bottleneck.
And that's IMO where the orchestration solutions and containers come in. Ops should provide build chains, the orchestration system and the internal consulting to (responsibly) hand off 80% of that work to developers who know their applications.
Orchestration systems like K8s, Nomad, or Mesos make this much, much easier than classical configuration management solutions. They come with a host of other issues, especially if you have to self-host like we do, no question. Persistence is a bitch, and security the devil. Sure.
But I have an entire engineering year already available to setup the right 20% for my 20 applications, and that will easily scale to another 40 - 100 applications as well with some management and care.
That's why we as the ops-team are actually pushing container orchestrations and possibly self-hosted FaaS at my current place.
Hm, guess that got a bit longer. You hit a nerve somewhat.
From my standpoint, Kubernetes is a lot more manipulating state and a lot less code than what pretty much anything it replaces. In fact, almost nothing is "as code" until you introduce third party products such as Helm into the mix.
But that matters little, since the point of using it is not technology but standardization. It has the potential to commodify cloud infrastructure. A bit tongue in cheek perhaps and a far from accurate technical statement, but Kubernetes looks more and more like what Openstack should have been.
K8s is meant to reduce devops work and complexity. Most businesses do not reach that level of complexity and will never need K8s.
"My office is a cult" instead of "this tool I get paid to abuse is a cult"
e.g. you can always code, zip up files, and build directly from the command line (we can still do it, right?), but when you need to deliver that automation, and most importantly hand off what you've done to the next person (codified for real), then having a system like this becomes a must.
Kubernetes solves problems some groups will run into with shell scripts[†]. But not the problem of using git or GitHub, which is nonexistent.
[†] let us take as given that these problems are numerous and painful!
You can have infrastructure as code without running a second containment layer on top of the containment layer your cloud provider runs (which describes many k8s deployments).
Kubernetes is basically a way to run a Java-like application server that can run things other than Java. If that sounds like an appealing prospect to you, the complexity of Kubernetes may be a good fit.
Kubernetes is complex because sometimes you need to be able to do complex things. Sometimes you operate at a scale where spending 12 hours writing a deployment script is ok, because it will save you hundreds of hours in the near term. Kubernetes expects you to write a bunch of custom integrations to tie your k8s clusters into whatever ITSM / ITIL process you use.
But complaining that running a blog on Kubernetes is too complex is like complaining that a semi is a terrible vehicle because it’s hard to park at the grocery store.
Despite a long track record of failure, people are trying to introduce the complexity of J2EE into Kubernetes. It doesn't need to be that way. Kubernetes can be very simple, and it was up until recently. Once the Enterprise Architects got their hands on it and decided everything needs to be a plugin and nothing should work out of the box, the complexity started to creep up.
You should be able to run your small blog on kubernetes without requiring a team of consultants to set it up or manage it. Just waving your hands and saying well it needs to be complex to scale is a total lie.
The same thing happens with ticketing systems: $old_ticketing_system is way too complicated and bloated, so let's jump to $new_ticketing_system because it's small and easy to understand. Oh, but we miss $feature_1, so let's ask for that. And $feature_2, and $feature_3. Continue until $new_ticketing_system becomes way too complicated and bloated, at which point you find $newer_ticketing_system, which is great except that it's missing $feature_3. Oh, and $feature_4. And 1 and 2, come to think of it. Oh geez, now $newer_ticketing_system is also coming apart at the seams, time to migrate to $even_newer_ticketing_system. . .
Kubernetes is explicitly about managing horizontal scale where the hardware is abstracted away.
(But yes, you're correct about what "large scale" meant in the early 00's)
Maybe it'll involve a bunch of "serverless" buzzwords or some newly invented buzzwords. That's how things usually go historically. A lot of value can still be extracted if you're careful to ignore the cult-y bits. Containers, serverless/FaaS, and Kubernetes can be pretty great if you're the plane pilot dropping the cargo on the island rather than the cult living on it, and future stuff will probably be even better.
Even Google only released K8s because they thought it would push people towards GCP — which it didn’t because it was too easy for AWS to implement a similar service using Google’s open source tech. The open source freemium model for infrastructure tech is pretty much dead as a result of this kind of activity.
I have no idea what business case there'd be to open source it. Maybe some open source devs with former experience at a big company will create one just for fun after they leave the company? Who knows. It'll probably happen eventually, by someone, either way.
The reality is that most of this technology — especially around infrastructure — has become so complex at scale that the tech strategy and the business strategy are the same thing. So infrastructure software has to match your business architecture which is largely dictated to a technology org. Which is why “tech ops” these days is largely just “ops” — the technological complexity is a reaction to increased business sophistication, not the other way around.
AWS had the most K8S deployments even before they released EKS, according to the CNCF:
I know it's not the same but Docker Swarm is pretty great if you just want to deploy some container images on a single host or a cluster - this guide covers setup + traefik + swarmpit ui https://dockerswarm.rocks
I know that's a pipe dream, but really, why does it have to be? What would have to happen for people to actually work on making existing things simpler and better factored rather than reinventing the wheel?
My personal theory is that it's largely because that kind of work simply isn't being valued highly enough. Reinventing the wheel is a much lower friction path to take and has a higher chance of being rewarded highly. It shouldn't be like that, though.
k8s is a building block one can use to provide a simpler service, and if you want to convince anyone it needs a refactoring, maybe provide some specifics?
RKE is a great alternative, and kubespray is quite stable as well.
The “running my blog” use case is a Docker use case. Kubernetes was designed from the ground up to enable transparent integration between containerized apps and ITSM platforms. I have always viewed it as more of a scaled application framework than a hosting platform.
It allows you to deploy multiple replicas, automatically setup a load balancer and handles maintaining the link between the LB and the backend. While also replacing any failed replicas.
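As a sketch of what that looks like in practice (the app name, image, replica count, and ports are all illustrative):

```yaml
# Three replicas of a hypothetical web app; the Deployment controller
# replaces any pod that fails so the desired count is maintained.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
---
# A LoadBalancer Service provisions an external LB (on supported
# clouds) and routes to whichever pods currently match the selector,
# so the LB-to-backend link is maintained automatically.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

The point is that the replica count, the load balancer, and the health-driven replacement are all declared once rather than scripted imperatively.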
I am fucking appalled that writing config files is a noteworthy skill in 2019. You should be too.
If you don't understand the underlying mechanism, either with Python or K8s yaml files, you're going to have a very bad time.
Somewhat ironic side note - Asking folks to write K8s config files is exposing too much complexity for some developers I work with. And I kind of get it. Properly setting up a service with changing environment variables, secrets, ingress, API Roles, AWS IAM roles, and horizontal autoscaling can get a bit nuts.
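To make that complexity concrete, even a pared-down sketch of just two of those concerns (secrets as environment variables, plus horizontal autoscaling) is already two objects with separate schemas. All names and numbers here are illustrative:

```yaml
# Pulling config and secrets into a container's environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0
          envFrom:
            - configMapRef:
                name: api-config    # non-secret settings, managed separately
            - secretRef:
                name: api-secrets   # credentials, managed separately again
---
# Autoscaling on CPU is a third object (current autoscaling/v2 API)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

And this still omits ingress, IAM roles, and per-environment variation, which is exactly where it "gets a bit nuts" for application developers.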
What we call “DevOps” is really a delicate balance of giving the dev teams enough rope to hang themselves while child-proofing the gallows.
Oh, dear, @wayoutthere, my dev team is about to hate you because I'm going to use that quote extensively over the next few weeks... (giggle...)
Good contracts lead to good collaboration. Kubernetes provides the foundation for a solid end to end contract for managing complex systems automatically. It’s incomplete, but extendable.
Also sounds too important to be dev.
I've never said that K8S is just about writing config files. I've said that it is appalling that writing config files is still a technical "skill" that warrants articles and discussions in 2019.
What's even more ridiculous is that most people here probably don't even see any alternatives. I've had this conversation several times and it inevitably reveals the unwavering (and irrational) belief that manually entering cryptic text somewhere is the only way to make reusable configurations.
Truly, we have become the tools of our tools.
Yes it's great to capture your configuration in version control. But if at the end of the day I'm staring at a config file in one window and a log file in another and waiting for enlightenment to grab me, that's not scalable and it's rigid. It also pisses me off to no end.
It's essentially the Frameworks vs Libraries debate all over again. I'd much rather have something imperative.
Declarative systems create a perverse incentive to keep things the way they are because it's difficult to reason about how changes affect the system, and it's virtually impossible to explore those effects. There are no guideposts you can use to apply Local Reasoning, and so there is no pressure to organize this 'code' in a manner that supports it. So as the system matures, everyone is working off of memorization. There are too few bite-sized chunks that can be learned a bit at a time. You are locked into your current way of thinking and you've locked out anyone who can bring fresh perspective.
It doesn't take a genius to see this will end badly. Again. It just takes anyone with enough distance to have perspective.
But I've worked with too many people who claim to be Senior or Lead developers but can't actually explain what they do.
I've been tempted a lot lately to try to think of a software team like a sports team. Coach, assistant coach, trainers, and physical therapists all about making you think about your abilities at a different, sometimes philosophical level.
The Surgical Unit idea of Brooks has always bugged the hell out of me. I've known enough nurses to know that you don't want to put surgeons in charge of more than one life at a time and then only for a couple of hours, and letting them interact verbally with those people is a fucking disaster half the time. Not unlike some highly decorated software developers I know. They're brilliant as long as they don't actually have to help people.
The head game in software has been overlooked for far too long and to everyone's detriment. Users as well as producers.
If we had the training part right, this FOMO anxiety would be classed as a disorder.
i'm guessing that writing dyson in Nim is a tacit acknowledgement of that: if this were something geared toward production ecosystems, it would be in golang like kubernetes? although there is the helm luafication, so perhaps dyson is part of a fringe of non-golang k8s auxiliaries.
another way to implement this is with a 'static CMS' where there are still static pages except built into a situated deploy. the 'cultish' (cultic? anyway) aspect of k8s appears to be to phrase all the things in terms of k8s constructs rather than using k8s constructs as a foundation and abstracting out.
i learned about 'rollout' from the CI portion of this post, although initial attempts to search for a comprehensible description of it fail.
I just wanted something with easy templating syntax like this: https://github.com/Xe/within-terraform/blob/master/dyson/src...
The fact that cligen (https://github.com/c-blake/cligen) exists too makes it super easy for me to define subcommands of the thing: https://github.com/Xe/within-terraform/blob/master/dyson/src...
The only thing that's really prevented me from doing so is that I have my own micro-PaaS (https://github.com/piku) that makes it trivial to run a bunch of different apps/services on the same VPS, and the added complexity isn't really necessary.
But since I deal with k8s practically every day at customers, attrition might be compensated by not having to switch tooling.
YMMV within this sort of scope.
Someone could build a simplified fork / derivative of Kubernetes designed for this purpose. That would be pretty rad actually, but it would cease to be Kubernetes because the complexity is the point.
How does it fare in production? I've got a tiny app with two containers (a frontend and a batch job) - it seems like a decent use case.
Since I use CloudFlare (hi jgrahamc!), it's been peachy.
I do not agree with your assessment of Kubernetes. It is not equivalent to something like WildFly or TomEE. Kubernetes runs/manages application servers (and not just Java ones), along with a whole host of other devops things at scale... Kubernetes is great for setting up a blog, and the 11 other applications the author is trying to run.
I think you're really missing the point here. Why do you think this was included in the post? https://twitter.com/dexhorthy/status/856639005462417409
And why is "ease of deployment" not something that someone should expect from K8s?
I've been sitting on the K8s sidelines for a bit as things iron out, and I've been deploying it on bare metal on a test bed over the last few days with the intention of using it as IaaS for some of my own apps.
It seems to do what it's meant to do: keep my app running on infra, following the rules I set.
It is for making things easy to deploy ONTO the K8s cluster, not for making the cluster itself easy to deploy.
The key is only to use it for cases where the deployment of the containers without K8s is more difficult than the deployment of K8s.
As such, people do expect it to replace Heroku.
Also note that most adherents view K8s as a replacement for (more accurately, a new API over) EC2. The fundamental unit of computing becomes the Pod, not a VM.
Kubernetes also makes more sense if you look at it as a common way for an organization to run applications among disparate teams with a shared operations infrastructure. It provides a standardized model so that things work similarly enough to be operated the same way. If you are only delivering one kind of thing across the whole organization, and it all looks and quacks like a duck, maybe you should just deliver a duck instead of putting a duck hat on Kubernetes and asking it to quack.
I don't think there is anything wrong with designing an application that would transition easily into Kubernetes but I feel like many of the proposals/PoC I have seen in the last few years are either fresh systems that get consumed by Kubernetes complexity or are poor replacements to systems that already exist and only seem to serve as resume padders for the team architecting the replacement which gets a viking funeral as soon as they leave. Often the latter case is because the underlying architecture and goal of the system is pre-Kubernetes and doesn't fit the model well of having mostly stateless/replaceable pieces.
But running a single node cluster is totally valid. I use one at home to run arbitrary containers (a DNS-over-TLS proxy, VPN server, radius server, network AP / switch manager, etc.). I don't use load balancers and just use host-based persistent storage, which removes the vast majority of the complexity.
I've seen a lot of people get wrapped up in the complexity of wanting to be able to have a persist storage-backed process go down on one node and come back up on another, and that's not unreasonable, but that's a lot of stuff to figure out early on.
If I weren't using this as a single node k8s instance, I might use it to manage VMs, which might be easier to understand but much heavier and more work to maintain. With what I have now, I've got a folder of YAMLs that defines everything to run on my node, and I'm able to easily put all their persistent data in the same top-level dir for easy backup.
I think the perception of k8s might change over time once people realize that it provides a lot of value even if you completely rule out the tougher stuff to do on bare metal (like load balancers and shared persistent volumes).
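For the curious, the host-based persistent storage mentioned above can be as simple as a hostPath volume on a single-node cluster. This trades all the portability of networked storage for near-zero complexity; the app name, image, and paths are illustrative:

```yaml
# A single-replica service whose state lives directly on the node's disk.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-proxy
spec:
  selector:
    matchLabels:
      app: dns-proxy
  template:
    metadata:
      labels:
        app: dns-proxy
    spec:
      containers:
        - name: dns-proxy
          image: example/dns-over-tls:1.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/dns-proxy
      volumes:
        - name: data
          hostPath:
            # Keeping every app's state under one top-level dir means
            # backing up /srv/appdata backs up everything at once.
            path: /srv/appdata/dns-proxy
            type: DirectoryOrCreate
```

This pattern only makes sense when pods are pinned to (or can only run on) one node, which is exactly the single-node case being described.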
There are other options that are good contenders in case of such a workload.
The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.
I try hard to correct for this bias but sometimes struggle with exactly the same thing. There's just something about wanting to have a uniform "world-view" with fewer explanatory variables that never stops being motivating.
Your resume needs to have lots of fashionable buzzwords rather than pragmatic good-enough / keep-it-simple choices. You must keep on learning (lots of things rather than mastering any one thing). I can write a really nice site in standard Django with some jQuery, and it will take me half the time that adding React to it would. But adding React will make me much more employable and get me a better wage.
It seems like at some point around five years ago the three-tier architecture with its division of labor vanished overnight. I'm not saying things were perfect back then, but I've never seen any demonstrably objective reasons why it was replaced.
I went from having to be mindful a few configuration items which arose from deploying my war to different environments to slogging through configuration hell in the Terraform and AWS world. I've been learning way more about Ops than I ever cared to know while at the same time becoming a -10x developer in terms of shipping business value.
Yeah, duh. I render simple HTML templates on the server and serve them as browsers expect them; not with a thousand lines of JS for topping.
I'm reminded of a post from a few years ago where someone's website had a table of items with [delete] links and would take database actions based on GET requests to those URLs. Who cares? It looks the same to a human browsing it.
And then it got crawled by a search engine which followed all the links to see where they went.
But if you're not doing anything unusual like that, I don't see how prefetching HTML would cause any problems.
To be fair, it has reduced our server costs a bit (after maybe 6 months of developer time). I am unconvinced it will be worth the hassle.
Are the improvements worth $125,000?
Though one comment I saw about Kubernetes on here a few weeks back concerned an old schooler like me. The guy suggested that if something goes wrong, just kill the pod and let kubernetes bring up another. Apparently that's the way you are supposed to do things. Something seems really wrong with that approach to me. Just throw resources at the problem with very little understanding of why things went wrong.
Tbh, I have no real world experience in this, so it might just be my own delusion. However, I've recently started getting into self-hosting some of the services I use. I'm using a simpler infrastructure than what OP described and while it is the right choice for me and a useful skill to have, I feel like it absolutely won't get me anything in the sysadmin/ops/etc. job space. I've actually considered adding more "enterprisey" tech to it (like Ansible or comparable stuff) just to make it more sexy for recruiters.
It is typical for devs.
Meanwhile ops have to support every half-arsed tyre-fire technology until the end of time, because a dev wanted to try it once, and now it’s in prod with users relying on it.
Kubernetes is in a sense the pushback against that “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”.
Webdev does seem to pay better than most other stuff, though.
Still happily using JEE/Spring/ASP.NET + VanillaJS in what concerns webdev projects.
And while other AOT compiled languages might offer a little bit more performance, they lack in tooling and libraries.
Until somebody cyberattacks those pods and steals all personal data of your users because the devs didn't bother to apply security patches. But hey, it's not your problem. You are not responsible for the pods. k8s is still up.
But that has always been true. If a dev leaves a SQL injection for example in the code and it got penetrated, absolutely no one would blame the sysadmin for that.
My interpretation of DevOps is that it's one team with shared responsibility and not "shove your stuff in that pod and don't bother me."
E.g. you could imagine some extreme case in which dependency X, version N has a critical vulnerability - but at the same time, the developed software relies on exactly version N being present and will break horribly on any other version.
You'd need Dev and Ops to actively work together to solve this problem and no amount of layering or containerization would get you around that.
Very infuriating mindset to deal with.
Imagine if people could all just get along.
I've been on both sides of this divide myself, but have spent the last fifteen years or so as a developer. In my experience, developers will burn the whole place down if we're given the chance.
We're focused on writing code, and it's boring to write the same code over and over: we want to write new code, in exciting ways, and we are surprised when it fails in exciting ways.
We're focused on delivering features; our incentives are all about getting it done, not about getting it done well (our industry doesn't even have a consistent view of what's good or bad: note that C/C++ are still used in 2019) or supportably. Some organisations really try hard to properly incentivise developers, but I've not seen it really work yet. DevOps is an attempt to incentivise developers by getting us to buy into ops. I've read a lot of success stories, but not seen a lot of success with my own eyes.
I do my best to be diligent, I do my best to wear my Ops hat — yet I still fall down. I don't think that it's unavoidable, but so far I've not avoided it, and I've not seen others avoid it either.
The real problem always starts when they become separate cost centers with separate budgets and have to independently show a 'profit'.
This worked great when several other business systems relied on their vanity toy, and invariably the API would change with every release.
There's a balance to be struck between 'never change anything because it's always worked' and 'new shiny every week'. In my experience it's an absolute nightmare getting people to agree where the line is, and on top of that, get management to buy-in and push-back when either side oversteps.
To balance this with a counter example from the quieter group of people not "hot for the latest tech":
I'm a "dev" and i've never had this problem, however I work for a small company, where everything I make and deploy I also have to maintain in some form or other. This gives me a strong bias towards operational simplicity and trying to essentially eliminate dev ops... New tech which is both complex and opaque in solution without clear cut advantages is basically repulsive to me, because trust and reliability without constant attention and tweaking is important.
- non-downtime deployments (yes, you can have downtime, but every time you deploy an app?!)
- scheduling more than one thing (no company has a single product that is only a single binary; or at least nearly no company, there are some unicorns though)
- some kind of automation (this is complex, no matter what you use)
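The zero-downtime deployment in that first bullet is largely configuration rather than custom automation. A hedged sketch (names, image, and numbers are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica down during a rollout
      maxSurge: 1         # at most one extra replica spun up temporarily
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:2.0
          readinessProbe:        # new pods only receive traffic once
            httpGet:             # this check passes, so a bad build
              path: /healthz     # never takes the whole service down
              port: 8080
```

Getting the equivalent behavior out of shell scripts or classical config management is exactly the bespoke automation work the parent comment is talking about.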
Oddly, a lot of small companies really don't need this. If your customers are mostly businesses in a limited set of time zones, having a maintenance window outside of their business hours is probably easier.
I also think this has a lot to do with how devs spend our time: with the tech itself. Whether your application is running on Kubernetes or a box in your garage matters to precisely zero customers as long as it performs well, but as developers we spend our whole day dealing with various APIs and technologies, so we develop an outsized sense of the importance of those things.
One quickly learns that business has a complete different set of priorities and dealing with software as little bonsai trees is not one of them.
The best way to get promoted at many companies is to write a framework. The best way to get noticed is to write an open source framework. And so on.
Hip technologies are being used in SV, and they have to pay tons of money just to keep the talent pool large and circulating.
Older technologies are used in other cities, and there the market forces aren't so crazy.
But a good Java dev can make plenty of money in SV, and a Go developer will make a competitive salary by Dallas standards but not by SV standards (and probably have a harder time finding a new job).
For the record, I have been using a JVM language as my primary work language since early 2012.
Range of employment options. Possibly salary, though that's more variable. There are some jobs keeping the lights on with legacy tech long after it is done being the hot thing, but typically with any particular stack it's a shrinking number of jobs, often with shrinking average real pay, unless it hits a phase where the decline in people able to do it exceeds the decline in work.
If you are riding out the last few years of your tech-focussed career (whether that's before retirement or before moving out of hands-on tech into, e.g., management) that's maybe not so bad, but if you're planning on being in tech for a longer period it's potentially extremely career-limiting not to adapt to current market focus.
I'm not sure this is true. Most of the shops I've been in don't care about whether you know this or that language or library. You're expected to learn that as you need to. Most of what I've seen cut people from interview loops is missing fundamentals.
Then you'd be wise to stick with stuff like Java or .NET, because there are probably millions of jobs requiring them.
The job might not be as interesting as riding every tech wave, but on the plus side there are plenty of tech waves you save yourself from riding.
Plus one gets to rescue projects that ended up betting on the wrong waves, getting back to boring old tech.
It is a bit unfair for Cobol, given that its latest revision is from 2014, and while verbose as it might be, it supports most of the nice features of any modern multi-paradigm language.
SOAP was/is a pretty stable technology that did exactly what it promised to do, without too many releases or breaking changes for about 10 years.
Even today it has a good utility for the situations it is designed for...
RPC over a well known standard format, for tightly coupled endpoints, that require metadata, enforced schema, security, and perhaps transactions.
The big problem for SOAP is that it was the default for web services for a long time, when in reality a big shift happened around 2008 after which web services were most likely NOT going to fit those constraints. Just my 2 cents.
And neither of them were selling "magic" stuff like BPEL and BizTalk.
Confirmation bias (where you've spent some time on k8s or whatever, and now you just want to cash in on your time loss, objective criteria be damned)
Generational churn (where you find yourself in a field where everything has been said and done, and you just need a new buzzword on your resume to start over; this goes hand-in-hand with corporate IT longing for fresh and cheap staff and their stack in need to look sexy)
Big media (where extremely large infrastructure runs on k8s or whatever and gets disproportionate airtime, because cloud providers want to sell you lots of pods, and people don't check whether the proposed architecture is a good fit)
When comparing consumer products where there are lots of choices, I find myself settling on an OK option and 'falling in love' with it. When I reflect, it's basically a way of cutting through all the reviews: deciding that one is the best, that I have no reason to regret buying it, and that I don't need to do any more trawling through reviews and comparisons. I'll just buy this one and be done with it.
This strategy often works, to be fair!
Management (top bosses) often seems to want the latest thing, e.g. Big Data. It doesn't matter that it'll cost a fortune and that you'd get better results on a single server.
And if the devs are out of control and pushing for %tech% and getting it, that's management at fault. To be a good manager you need to understand what your employees are doing. I've met too many that don't.
There is also such a thing as being sleepy old.
And there is a reasonable in-between.
So a developer that doesn't want to deal with already solved problems and who wants to advance their knowledge is incentivized to push for jumping to the newest tech.
However going the opposite way (sticking to one reliable tech stack and refusing to change even when something better comes along) could be just as damaging to a business.
How then, do you build a culture where people are open-minded to new tech without feeling obliged to jump on every bandwagon? I don't think I've ever seen an organisation get the balance quite right.
Most developers just don't want to be left behind, so they pick up whatever is trendy at the moment. It's completely rational, because knowing what is trendy gets you hired.
However, implementing what's trendy without carefully weighing the pros and cons is what's dangerous.
After trying a few cloud based ToDo applications with mobile apps etc. I gave up and started using org mode files from a text editor.
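For anyone unfamiliar, an org file really is just plain text; a minimal TODO list (hypothetical entries) looks something like this:

```org
* TODO Buy milk
  DEADLINE: <2020-01-15 Wed>
* TODO Renew domain
* DONE Write report
  CLOSED: [2020-01-10 Fri]
```

Any text editor can edit it, and Emacs org-mode adds scheduling, folding, and agenda views on top.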
I think MS is trying to make tools like this for the C# world, but I haven't seen them yet.
Microsoft allows you to publish a web application from a Visual Studio project to Azure. It's very simple, but much more opinionated. It's a great trade-off for an individual developer who needs to focus on functionality. In the context of this discussion, there's an important distinction -- it's not an interface, it's just a feature. It's tightly coupled to Azure on one side and to the Microsoft dev stack on the other.
But many organizations would rather not directly pay the costs of app services and instead indirectly pay the costs by making their developers tool around with Kubernetes.
That actually wouldn't be bad, and probably faster than getting it running on Kubernetes.
So, a text editor?
If you don't want to be in the "cult", don't use it. Meanwhile I'll be writing service and deployment YAMLs and avoiding all the proprietary, expensive AWS BS.
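For what it's worth, those YAMLs are short. A minimal Deployment-plus-Service pair (app name, image, and ports are hypothetical) looks roughly like this:

```yaml
# Hypothetical app: two replicas of a container, exposed inside the
# cluster as a Service on port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Apply with `kubectl apply -f myapp.yaml`, and the same file works on any conformant cluster, managed or self-hosted.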
Everyone here is blaming the truck company instead of the person who bought a flatbed to transport a 2-lb box.
Executives who don't know anything about technology shouldn't be aiding in making strategic decisions that depend on it. They might make decisions on the advice of others (including, hopefully, executives who do know something about technology; CEOs may make the decisions in some cases, but with CTOs and/or CIOs in the loop, and if those people know nothing about technology, that's as bad as your CFO knowing nothing about corporate finance).
It's really hard to know this stuff unless you are down in the weeds every day.
Edit: As noted below github actions seem to still be in beta. The original point stands.
step 2) if only there were some magic tech that could solve all these issues. and have a cool name. and we could put it on our resumes... drum roll: K8Sssssss
step 3) bro. it’s working. i don’t really understand what it’s doing but look at all the containers we are running. and the config... super configurable. we’re devops we can figure this shit out right?
step 4) what do you mean we have to update the k8s version we’re running on? we barely got this one working. ahh... the beta tools we were using got a bit more polish... makes sense....
step 5) sob silently when you realize that the work k8s has supposedly saved you now goes into maintaining the k8s cluster. reminisce about the good old days when you could just xcopy deploy your app.
epilogue) in the age of the cloud, k8s makes zero sense to me. use the abstraction provided by your cloud and focus on writing your crappy app. you're not google or amazon. you don't have to solve the problems they do and you'll probably never have their scale. oh? you have thousands of bare-metal servers and are looking for a solution that can help
manage them? you can also afford a dedicated ops TEAM to manage them? (dave jumping on the latest tech trend does not count as a team). go ahead!!!
He's working for a Swiss bank, and managing an ops team. In his case, I think k8s makes sense.
Then there's me. Doing remote work, struggling to sort out visas, wishing I could have his kind of life and stability, wondering how it all went wrong. To get a job like his requires experience in Kubernetes, Oracle, etc. I can't get that kind of experience with side projects.
"dave jumping on the latest tech trend" is probably trying to build up his résumé, rather than actually help the company. As someone who needs to build up his own résumé, what do you suggest to gain experience in this kind of technology? I don't want to intentionally deceive the company to let me try it when I know they don't need it. But I know that I need to learn it somehow.
2nd thought is that as an employer you don't want one-trick ponies. You want people who understand the fundamentals and can learn and adapt. I will take a person who has good fundamentals and is curious and constantly learning over a person who knows a technology, every day of the week.
GKE is pretty amazing. They manage the k8s control plane for you, offer worker-node scalability, and you can use a decent, intent-based, automatable API for declarative deployments. No need to mess around with VMs or proprietary lambda/serverless stacks.
Disclaimer: I'm the author.
You could possibly do it cheaper on AWS with RI discounts, but you'd have to set up the cluster yourself to avoid the ~$140/mo fee.
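For rough context on that figure: assuming the managed control plane bills at around $0.20/hr (the rate at the time, as I recall), the monthly cost works out like this:

```python
# Back-of-the-envelope: hourly control-plane fee times hours in a month.
# The $0.20/hr rate is an assumption based on EKS pricing at the time.
hourly_fee = 0.20          # USD per hour (assumed)
hours_per_month = 24 * 30  # ~720 hours
monthly_fee = hourly_fee * hours_per_month
print(f"${monthly_fee:.0f}/mo")  # -> $144/mo
```

Which is in the ballpark of the ~$140/mo mentioned above, before any worker-node costs.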
I'm running mine with three VPSes and 8GB of RAM.
Just for personal services and learning/fucking around purposes.
However, the http/https ports are already used on routers to offer an admin web GUI. It's technically possible to circumvent this with some ad-hoc firewall rules, but it depends on whether the router admin UI lets you do that.
Not on the WAN side, I'd hope.