Etcd, or, why modern software makes me sad (roguelazer.com)
1285 points by Spellman 24 days ago | 630 comments



This is one weird comment section.

There are people attacking the author for a statement made about CoreOS, and for some hate towards Kubernetes.

The key point of the article is not really being addressed here: vested interests from large companies are able to introduce huge complexity into simple, well-designed projects. While the complexity may be good for some end that said vested interest has in mind, they are also in a position to absorb the cost of that increased complexity.

In the meantime, the simpler version of the software is long gone, with the added complexity placing a large burden on the solo developer or small team.

It's almost like there's no good representation in the open-source world for the solo developer or small team. Funding and adoption (in the extreme, some might say hijacking) of open-source projects from large corporations dictates the direction of all major software components today. Along with the roses come some very real thorns.

Just my 2c.


Agreed. I saw it as a general lament against over-engineering. I don't think the point got lost in the super specific example...

You could just as easily level similar rants against the likes of React and its wider ecosystem, Tensorflow, Typescript (many will disagree), Docker... I'm sure others have their own bugbears.

Much of this is subjective, of course. But to me, it feels like software development is trending towards unnecessarily complicated development processes and architectures.

And the only beneficiaries appear to be the large technology companies.

I suppose in exchange, you're getting a guarantee of maintenance. But is that really worth the additional complexity associated with the common use of these tools?


TypeScript is a funny one. I love the language, but at the same time, I totally agree with the premise that it is unnecessary complexity! And yet I swear by it. I can't explain why there's not more cognitive dissonance there.

JavaScript taught me to love async, then functional programming, and TypeScript taught me to love static types. I'm now desperately wishing for a world of OCaml/Haskell, but where are you going to find teammates using those? And so I'm back at TypeScript.

I think all of these "higher order" languages that do transpiling (including Scala, Clojure, Kotlin, Groovy, etc) fit this same scenario. Increased toolchain complexity for a decrease in development/test/maintenance complexity.

With any toolchain, though, it can be hard to know where to draw the line for "worth it for this project" until you're already an expert in using the toolchain, at which point, why wouldn't you use it?


You choose the tool chain to fit the anticipated team. If that team is just you, then who cares? Pick the easiest thing for you. But if you’re an expert in the tool chain and you anticipate new people hopping on, then the on-boarding process should definitely be one of your considerations.

The argument for simpler tool chains for simpler projects is that the time it takes to on-board should not outweigh the time saved by the toolchain’s amenities.

So the argument for TypeScript despite its complexities is that you believe its approach to type decorations on top of JS strikes the right balance for ease of on-boarding (IMO fairly straightforward) and supporting code maintainability (IMO a big improvement in 99% of the cases) despite the extra complexity of lengthening the toolchain (IMO it has some counterintuitive parts but you can mostly just forget about it once you find a working setup.)

I won’t use TypeScript if I’m whipping something up quickly, but if I anticipate open sourcing, adding unit tests and CI, etc., then TypeScript feels like a useful addition.
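To make "type decorations on top of JS" concrete, a tiny sketch (the function is hypothetical; strip the annotations and it's the same JavaScript program):

  // Plain JS plus annotations; nothing about the runtime behaviour changes.
  function applyDiscount(price: number, discount: number): number {
    return price * (1 - discount);
  }

  applyDiscount(19.99, 0.1);            // fine
  // applyDiscount({ price: 20 }, 0.1)  // rejected by tsc; plain JS would happily return NaN at runtime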


I used to make the same argument, about not using TS for prototyping, but the tooling is so good that at this point I generally just change the file extension to ".ts" and write no manual types, just to get the mistake-catching, type hints, and IntelliSense the compiler and language server provide.


Yeah exactly, and we can also pick the features we need in a language. E.g. C++ is super complex after C++11, but we can still just pick a small subset to make it easier.


> we can still just pick a small subset

Unless you have to work with legacy code or libraries.


Then some poor soul gets to wrap the old patterns into the new patterns!

But seriously, if modern C++ has one thing going for it, it has a great backwards compatibility and forward refactoring story.

I'm not a huge fan of the "evolved" syntax, and I rarely use C++ anymore, but I like modern C++ substantially more than C++03, which was where I first cut my teeth as a programmer.


> Then some poor soul gets to wrap the old patterns into the new patterns!

This isn't always an option. Google's C++ guidelines generally forbid using exceptions, but concede that you might have to if you're dealing with existing code, particularly on Windows.

https://google.github.io/styleguide/cppguide.html#Windows_Co...


C++ is approaching Javascript and other modern languages with Modules!!


What is unnecessarily complex about TypeScript? It's JavaScript, with static typing plus type inference, and pretty nice generics. The ecosystem of modern JS surrounding it is horribly complex but TypeScript itself seems like a fairly straightforward programming language.


It hasn't been a big detriment for me as someone learning Typescript on their own, but it is another moving target for looking up "how do I do x..." and finding most of the forum posts are a little outdated and the latest version of Typescript has a different/better way of doing things than just a year or two ago. I find myself scrolling through github issues comparing my tsconfig to figure out why my stack behaves differently than someone else.


That was my experience with TS maybe two years ago - at this point project scaffolding tools are good enough to generate sane output that I spend a little bit of time upfront but then keep plowing away. Maybe I got better at it as well - but I haven't kept up with TS news in a long time and I don't feel like I'm missing out on stuff or encountering things I don't understand.

I've written >50k LoC of TS in last few months for sure (doing a huge frontend migration for a client) and I can't remember the last time I googled anything TS related. Actually I remember - a month ago I wanted to know how to define a type with a field excluded, took 30 seconds of google.
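For the curious, the type-with-a-field-excluded thing is presumably just the built-in Omit utility type:

  interface User { id: number; name: string; email: string; }

  // Same shape as User, minus the excluded field.
  type PublicUser = Omit<User, "email">;   // { id: number; name: string }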

Meanwhile the project started out as mixed TS and ES6 because most of the team was unfamiliar with it and there are a few dynamic typing evangelists - we ended up going back and just using TS all over the place, the complexity introduced is minimal and the productivity boost from good tooling on product of this scale is insane.


Typically for me the time cost is in going down rabbit holes trying to improve implicit static types, getting closer to "whole program" functional type inference (TypeScript repeatedly seduces me into this). And the decision inflection point is generally not application code but the space between application and script code, the things you might otherwise write Perl or Python scripts to accomplish. The types are especially useful in that context because they tell you a lot more about the script than your typical script does, but they also introduce a bunch of overhead for a few lines of code.


Yeah I probably shouldn't complain, I've written less than a couple thousand LoC so I'm still googling a lot, but TS has definitely paid for itself in code clarity already.


Did the dynamic typing evangelist eventually agree to noImplicitAny?
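For anyone unfamiliar, noImplicitAny is the compiler flag that stops untyped parameters from silently becoming any; a minimal illustration:

  // With noImplicitAny off, this compiles and x is silently `any`.
  // With it on: error TS7006: Parameter 'x' implicitly has an 'any' type.
  function double(x) {
    return x * 2;
  }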


> and finding most of the forum posts are a little outdated

Which is why you use the up-to-date official documentation.


There is complexity in the type system with signatures like

  function partialCall<T extends Arr, U extends Arr, R>(f: (...args: [...T, ...U]) => R, ...headArgs: T) {


At least I can pick it apart and deduce it. I can even use the compiler to give me information about it via editor integration (all sane editors have it now). So if I'm unsure it might take a few minutes of investigation to completely understand the signature.

I've seen some JavaScript written so tersely it was nearly impossible to figure out without spending possibly hours on unwrapping the code. That's the value provided here.


When it comes to spending hours unwrapping code, gotta love js that has undocumented heavy use of string based property access for things like imports. Like the worst of both functional and OOP combined, and generally no IDE support to be had.

Implementation approximate, but otherwise true story, both LHS and RHS:

`let { statefulMutatingAccessor } = require(obj["prop."+tree+".subtree"])`


It's funny that you consider that complex; I consider that a pretty normal, expressive type signature that gives the compiler critical information about how I expect to use that function, which in turn allows it to completely squash several classes of bugs I might write.

But I love strong type systems; people who prefer weak type systems would likely consider things like this to get in their way.


Could you elaborate on this point for someone who isn't familiar with typescript (or javascript, for that matter). I'm no stranger to strongly typed languages but that function signature seems pretty complex to me. By elaborate I mean explain what's going on in that signature, and what the critical information it supplies is?


It's from the TypeScript 4.0 beta blog post[0], which describes it as:

> partialCall takes a function along with the initial few arguments that that function expects. It then returns a new function that takes any other arguments the function needs, and calls them together.

The type signature looks like:

  type Arr = readonly unknown[];
  
  function partialCall<T extends Arr, U extends Arr, R>(f: (...args: [...T, ...U]) => R, ...headArgs: T) {
First of all, know that in TypeScript, colon separates a variable or parameter name from its type. So in C or Java you'd say "Date lastModified", in TypeScript it's "lastModified: Date".

Now, looking at it piece by piece:

  function partialCall
declares a function named partialCall.

  <T extends Arr, U extends Arr, R>
says that this is a generic function with type parameters T, U, and R (the first two of which must extend Arr, that is, a readonly array of objects whose types are unknown).

This function's first parameter is:

  f: (...args: [...T, ...U]) => R
The name of the parameter is `f`, and the type of this parameter is `(...args: [...T, ...U]) => R`. This means that `f` must be a function whose return type is R, and its parameters must match `...args: [...T, ...U]`.

The `...` in `...args` makes it a rest parameter[1] (variadic parameter in other languages). The type of `...args` is `[...T, ...U]`, which you can think of as the concatenation of the T and U arrays. (It's a bit strange to see `...` in a type specifier, I can't recall having needed this before.)

  ...headArgs: T
says that the `partialCall` function itself is also variadic, and its arguments are gathered up into headArgs. The type of headArgs is T.

I don't know if I'd call this normal or clear compared to the signatures I encounter daily, but it's pretty elegant for a higher-order function that takes a function with any signature, and some arguments that match the given function's parameters, and returns a function with the matched parameters removed. And the implementation of partialCall is just this one line!

  return (...b: U) => f(...headArgs, ...b)
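Putting the pieces together, a minimal self-contained sketch (the greet example is hypothetical; the rest follows the blog post, and needs TypeScript 4.0+ for the variadic tuple types):

  type Arr = readonly unknown[];

  function partialCall<T extends Arr, U extends Arr, R>(
    f: (...args: [...T, ...U]) => R,
    ...headArgs: T
  ) {
    // Return a new function that takes the remaining arguments and forwards everything to f.
    return (...tailArgs: U) => f(...headArgs, ...tailArgs);
  }

  // Hypothetical usage:
  const greet = (greeting: string, name: string) => `${greeting}, ${name}!`;

  const sayHello = partialCall(greet, "Hello"); // inferred as (name: string) => string
  sayHello("world");                            // "Hello, world!"
  // sayHello(42);                              // compile error: number is not assignable to string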

[0]: https://devblogs.microsoft.com/typescript/announcing-typescr...

[1]: https://www.typescriptlang.org/docs/handbook/functions.html#...


Thank you for taking the time to explain that, it is much appreciated.


I would say, there is some complexity in reading (and writing) the thing. But there is probably not as much complexity in using it. And you are explicitly codifying the complexity in one place, that would presumably still exist in a dynamically typed language, just implicitly, and likely spread out across the code. This looks pretty nice to me at a glance, just like anything once you write a few of them and come across a few in the wild and take the time to pick them apart they no longer seem so scary.


Offtopic but what's nice about Typescript generics? Typescript doesn't even let you specify variance. It's one of the unsound parts of the language's type system in fact.


You cannot explicitly specify covariant vs contravariant, but TypeScript absolutely does allow you to express these relationships. Unless I misunderstand you.

That said, the type system has come a very long way even in just the last year. The biggest improvements imho being around recursive types (which was one of the biggest holes for a long time imo), literal types / literal type inference, and tuple types.

It's not complete by any means, but it's improving quite rapidly.


It appears that function parameter bivariance is still a thing? [1] Although there seems to now be a flag to make this one use of variance correct.

I would assume even Array<T> is still bivariant as well...

Both of those are horribly unsound, just for convenience. Sure convenience and compatibility are Typescript's ultimate goals, but to actually praise it for its generics? That's very strange.

> TypeScript absolutely does allow you to express these relationships

How would you express a class with covariant or contravariant type params in Typescript?

[1] https://www.typescriptlang.org/docs/handbook/type-compatibil...
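To make the Array<T> point concrete, a minimal sketch of the covariance hole (hypothetical classes; strictFunctionTypes doesn't help here, since it only tightens standalone function types):

  class Animal { name = "animal"; }
  class Dog extends Animal { bark() { return "woof"; } }

  const dogs: Dog[] = [];
  const animals: Animal[] = dogs; // allowed: arrays are treated as covariant in T
  animals.push(new Animal());     // also allowed, but it puts a plain Animal into dogs
  dogs[0].bark();                 // type-checks, throws a TypeError at runtime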


It is not the most feature-rich generic system but it's come a long way since the earliest revisions and does have some niceties: https://www.typescriptlang.org/docs/handbook/advanced-types....


Describe the steps to release the simplest possible JavaScript code to production: write a js file, host it, done.

The same thing in TS adds at least one step (not to mention the rest of the tooling you will want)

So while I prefer it over JS, there's no arguing that it is more complex, as now you require a build step for a language that only exists because people wanted a language without a build step.


Almost nobody does Javascript without a build step these days, unfortunately. I miss those simpler days.


You might like deno (https://deno.land) then!

It’s native typescript.


Except for the core, which was recently reverted to ES6 due to Typescript's sluggish compilation and inefficiencies.


I would say nobody at a tech or hip company.

A lot of Fortune 500 companies with some developers who missed the trendy stuff still do it that way. I made a medium-size website (30 pages) in React with pure JavaScript, the dependencies being script tags in index.html pointing to vendored files.

So not even JSX. I did it that way because it was the easiest way to develop and deploy in that environment


If you don't need IE support and only care about modern browsers...

<script type="module">


And also don't use modern frameworks like React or Vue, or don't mind sticking all your templates in strings, or in your index.html, and shipping 100kb of template compiler to the user, or write render functions directly a la Mithril.
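For what it's worth, that last route (render functions, no JSX) really is zero-build; a rough sketch, assuming a CDN that serves ES module builds (treat the URL as illustrative):

  // app.js, loaded with <script type="module" src="app.js">; no bundler, no transpiler.
  import { h, render } from "https://unpkg.com/preact?module"; // CDN ES-module build

  render(h("h1", null, "hello, no build step"), document.body);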


my team (in a large enterprise) uses js for scripts using a shebang interpreter declaration, eg

  #!/usr/bin/env node

  console.log("hello cli")

While it does depend on node, and there are arguably better cross-platform languages for this purpose, it is a zero-toolchain use case that is very convenient for us.


Yeah, fuck those guys. script tags or GTFO of my project!!


Now, that is a bold claim. Are there any stats on that?


"Nobody" here means few, or more loosely, much fewer teams than before, not "literally 0 people/teams".

And the group mentioned is (I deduce) not generally individual devs, enterprise devs building some internal thing, and so on, but teams in companies doing public-facing SaaS, teams in startups, companies like Amazon/Google/Facebook/Apple all the way to Airbnb, and so on.

So, you don't really need stats for that.


Yes, exactly. The larger the team, the more likely someone is a front end expert and wants to use latest cool framework, which will by its nature require a build step. Even for something simpler, you'll probably want it for cache busting, minimization, etc.


Browsers should just bite the bullet and add TypeScript support.


At the rate TypeScript is releasing, that'd be a support nightmare. Perhaps a better solution is for TC39 to propose optional types. It could be modeled on Typescript for sure, but it would still be backward compatible.


Javascript of today borrows liberally from coffeescript of yesterday, so it would make sense for javascript of tomorrow to borrow liberally from typescript of today.


You can have that with WASM.

But then if that's an option, I think Typescript will be the last language I migrate to, because Typescript development culture, tending as it does towards overcomplicated solutions to simple problems, is unpalatable to me.

I'm drawn to the idea of using Rust over WASM as a frontend language, and I think I'd rather choose that approach to develop any browser UI where type safety is critical, provided there is no discernible difference in performance (when compared to TS over WASM).


Yes, it's probably a better idea to improve WASM than add a proprietary format (TS is by Microsoft) to the open browsers. Google tried to do the same thing with Dart and it was decried about a decade ago, so now they use it for Flutter.


I think it will be great once support is broad enough. It might, ironically, increase the current fashion for framework churn, but at least there will be no single language for developers to deride.

In fact, I wonder how ECMAScript will fare in a post WASM world... I suspect it would still thrive tbh. Or perhaps people will take to other flexible, expressive languages for UI development. Like Python's niche in computer graphics, or Lua in games and AI research.

I can still see myself using JS in that future. But not for everything.


Can't put my finger on it for you, but it definitely doesn't feel as good. Perhaps because it needs to interop with JavaScript, and JS objects are all over the place with types.


It's the price of backwards compatibility with JS.

One could completely forgo that, and come up with PureScript or Elm. But then one can't leverage most of the existing code.

Same tradeoffs as what Kotlin and Scala make for Java's sake.


All the JVM languages you list aren't transpiled. They target JVM bytecode just like Java. They're first class, even if Java obviously gets the overwhelming amount of VM level support. Engineers working on the JVM are definitely aware of and want to support non-Java langs.


Scala can target JavaScript. Kotlin is usually used with the JVM, but can target native machine code (and JavaScript too I think?). Transpile was the wrong word for me to use.


Ah, I wasn't thinking of the JavaScript flavors. I think "hosted" is how Clojure describes itself.


I think the parent meant that all of those have things (like scala.js) that allow you to write in the JVM language and compile to Javascript.


Typescript makes reading JS infinitely easier for me; I would always use it for a large codebase given the option.


> I'm now desperately wishing for a world of OCaml/Haskell, but where are you going to find teammates using those?

Having walked that way, I can recommend looking at F#, and if you aren't a solo developer (or you want to get real regarding jobs and colleagues) make sure you don't have an allergy to C# interop.


You hit the nail on the head with toolchain complexity. Transpiling adds yet another layer in a stack that's already deep. Web isn't my field, but from the outside it looks like observability[1] is also lacking. Some friends who are in web programming mention that a lot of younger engineers don't realize how bad it is.


It makes me think of military contracting as a parallel example. (at least in the USA) The rules required to develop, build, and deliver military equipment to the US is exceedingly complex. And it isn't necessarily a benefit for the Pentagon, as much as it is for the established defense contractors. The barriers to entry in that industry are huge, purely based on the contracting requirements.

So complexity (in tools and process) does not necessarily serve the individual developer the way it serves larger organizations.


> And it isn't necessarily a benefit for the Pentagon, as much as it is for the established defense contractors.

As with many government related procurement systems, there is so much paranoia about abuse, and desire not to repeat various disasters from the past, that the system has by perceived necessity become complicated.

Of course it makes it frightfully expensive to the degree that few companies can actually throw the resources at it to be able to navigate it. The lack of competition can result in projects costing a large amount of money, and the buyers in the agencies having no real options to look elsewhere.

At an e-government company I worked for, we built a service for the state that helped companies navigate the state's procurement system. We created a centralised place for all data to be gathered by the applicant, generating all the forms they needed, and provided a way for the applicant to track the state of their application. It introduced quite a change for the state, dramatically widening the potential pool of companies that could bid for contracts. Also pissed off quite a few of the already entrenched ones :)


It's like a macro version of the story told in Capt. David Marquet's book Turn the Ship Around! Huge piles of bureaucracy, complexity, and waste build up in an organization when risk avoidance is allowed to become the primary goal.


If you don't mind sharing, what was the company you worked for?


Hawaii Information Consortium (I guess it's now NIC Hawaii, https://nichawaii.egov.com/), which is a subsidiary of NIC.

HIC was a great company to work for, as was NIC from the limited interactions I had with the larger corporate powers that be. Most websites/services were provided for free to the state / state agencies, instead relying on small fees per transaction on some of the stuff we did, to fund the work that didn't have transactions (I forget the amount, but we're talking something like 50c per transaction).

Being free was a key incentive to getting various agencies to come on-line.


My first boss (rip) had an internship with a defense contractor his junior year in college. They gave him one project. Design a cover for the air intake for an APC or some such.

Easy!!! No not actually easy because of all the constraints.

It had to be stowable. So a hard cover was out. It couldn't produce toxic fumes if it caught fire. So most plastics were out. Cloth was a problem because it could get sucked into the intake if someone tried starting the engine without removing the cover.

He designed a heavy canvas cover with metal stiffeners. And snaps. And then had to switch to a drawstring because the snaps failed at -40F. And tended to get clogged with snow.

Whole thing took him three months.

Then there was the friend at college who worked on a VCR to record gun camera footage. Also a bunch of requirements. Higher video quality. And can't lose lock when the pilot does a hard pullout after firing his munitions. And the total production run? Well exactly how many fighters does the airforce have? Couple of thousand?

I think military tech has a problem in that it's trying to keep up with commercially driven tech, which operates on a scale that's 1000 times larger. The military produces a few thousand artifacts to the commercial world's few million.


Those constraints seem a lot more reasonable than what I've seen in other aspects of the business world. At least they are grounded in the realities of the actual purpose. Well, mostly anyway. I'll leave justifications for the $1000 left-handed hammers as an exercise for later.

I've seen (and removed) plenty of requirements that were put in the specification just because. Because someone needed X amount of Y technology in their projects for the year. Because it seemed cool. Because other people were doing it. Because they wanted it on their resume. Because. But not because it was appropriate to the project or its purpose.

I'd say the majority of the work I do now is pruning these nonsense "just because" clauses out.

It's entertaining at least. I regularly shake my head in disbelief as I go through a specification and wonder, "who hired these people?" or "Why is the person who hired them still working?"

Fortunately for me I also get to hand these questions back, much like the good uncle who only needs to uncle and not actually parent: "Here, have them back!" I say at the end of the visit.

All is well. Go pleasantly amid the wastes.


As I remember the actual story:

The hammer was waaaaay more than $1000. But that's because the military said "We'll pay you a total sum of X, but to make it easier to fund the project we'll let you break it into n parts and pay X/n per part".

The contractor decided that to make their cashflow smoother, they'd include "manual impulse force generator" as one of the deliverable parts.


The hammer was "only" $435 and that price is really a reporting artifact.

The hammer was part of a larger contract that included spare parts and R&D effort linked to those parts. When the spending was reported, the same absolute amount of R&D ($420) was allocated to every item, inflating the apparent price of a $15 hammer to $435. By the same token, the engineering work on more complicated systems (e.g., an engine) was an absolute steal at $420 and since the total amount of R&D spending was fixed, nobody really got ripped off.


Because IBM was there 20 years ago and not only left behind a pure-IBM stack, but got their RUP requirements written into the policy manual as a barrier to entry for any non-IBM vendor to take over.


I'll leave justifications for the $1000 left-handed hammers as an exercise for later.

Those tend to be an accounting artifact, I heard. If you order a thousand different items for a million dollars total then each one of them will show up as costing exactly $1000 no matter what they are.


> I think military tech has a problem in that it's trying to keep up with commercially driven tech, which operates on a scale that's 1000 times larger. The military produces a few thousand artifacts to the commercial world's few million.

Not to excuse military contracting pork and cost padding, but this is a good point that a lot of people seem to miss. There's also the fact that a military contract will be for a production run and follow-on support.

So they buy a thousand full units and parts/tools to fix and maintain them up front. You can't just go to Autozone and pick up a new tank tread or parts for a jet turbine.


Another huge factor is the service lifetime of military equipment.

There are aircraft that were designed and created in the 1970s that are still in use today. Sure, many of the components and internal systems have been modified and upgraded, but much of the original design is still there and operating.

>> a military contract will be for a production run and follow on support.

When "follow-on support" has to last for 50 years or more, it makes a big difference.


B-52

First flight: 15 April 1952; 68 years ago

Introduction: February 1955

Status: In service


React I have been thinking about, having worked with it a lot and also recently done a React / TS / GraphQL (GraphQL may have been the poorest choice I made; time will tell) project.

I think React itself is awesome and I've always enjoyed it. While I'm not an expert, its conceptual foundations and core abstractions felt right, and I do think it makes lots of frontend tasks simpler, especially for non-small projects.

At the same time the frontend ecosystem is a mess, as we all know, and as my CSS skills advance I see more and more just how much could be done with HTML / CSS / limited use of JS and your classic REST app that's not a single-page app, and how much complexity might be avoided and thus time saved.

At the same time, I don't think it's just "trendiness" that has caused the growth of React and SPA's. I think they give us various wins that we probably take for granted because we're now used to them.

So, the short answer to my long ramble is, I don't know.

But I do think React itself, specifically, hit a sweet spot in terms of being a powerful yet understandable tool, and still remaining a tool, as opposed to a framework.


I have that rant daily at this point.

Dev A: I need an API to CRUD

Me: But we've been doing CRUD for 30 years without an API; this is a small project

Dev A: But I don't know how, it's not best practice, my team lead agrees, here is a Medium article, get with the times, etc.

Me: Ok so you don't know how to do your job.

Dev B: Here, put the React SPA in a Docker container and run it on some cloud, it's easy...

Me: But all I need is still CRUD, I don't need to scale it, I don't care if it's isolated, it should have been 100 lines of code

Dev B: it's not best practice, my team lead agrees, here is a Medium article, get with the times, Docker runs everywhere, etc.

Me: But what if I need to connect it to internal resources and use DNS?

Dev B: load balancer! If only you had an AKS cluster!

Me: We work in a 25-person company that needs some very basic CRUD. It will never scale past one deployment, ever; if it does, don't worry, we will pay someone to do it right because we will be swimming in money.

The problem is that people are starting to not know how to do simple things. So you get grumpy admins losing their marbles over the complexity of this BS. Most of the time things could be fixed faster, easier, deployed and managed easier using simple technologies that have been around for 20+ years.

But let's whack it with the Python, Docker, React, NoSQL, GraphDB, K8S hammer because...? Not only that, the juniors don't know multiple technologies anymore, they just know X. So sysadmins complain that devops is pushing more roles on them, while I have developers who have written full Node.js sites and don't know what an IP, a port, TCP, or UDP is...

So basically, same shtick different decade.


Dude! You need to run more servers. 250 should do it. I realize it's just a simple page with a pair of boxes for name registration, but that's irrelevant! All the cool kids agree. You need infrastructure. It needs to be done as complex as possible! Complexity-as-a-Service won't just happen, you have to want to make it happen. You don't want to be uncool, do you?

*Looks at a specification on the desk* Yep. We'll keep the first and last page. Everything else is gibberish. Next!

Edit: Looks like a few people disagree. Perhaps they might like to explain why I saved a bunch of companies around 100k in AWS fees just by pruning their server requirements? No? And yes, one of the projects was a simple set of web forms that required 7 servers to run. We pruned it down to 3 and that was just for redundancy and load. That's just one example.

If you disagree state your why.


i didn't downvote you, but since you're asking – the first part of your comment is needlessly snarky and doesn't really add much beyond "yeah, i'm frustrated by people adding unnecessary complexity just because it's trendy". on the other hand, the part added in the edit sounds like it could be an interesting story! expanding on that would make for a more interesting comment:

> I saved a bunch of companies around 100k in AWS fees just by pruning their server requirements. One of the projects was a simple set of web forms that required 7 servers to run. We pruned it down to 3 and that was just for redundancy and load.

EDIT

and if that thing about cutting down the specification really happened, just tell the story – no need to wrap it in a performance piece:

> I've actually had people come to me with specifications so over-engineered that the whole 20-page doc could be simplified down to a single page without loss of functionality!

EDIT 2

and i'm not saying "never use hyperbole"! just don't make it the whole point of your comment :)


I made that comment once about "Soylent". (Remember Soylent? The nutritional drink?) That company made a big deal about their "tech stack". Not for manufacturing or quality control, but for ordinary web processing. Their order volume was so low that a CGI program on a low-end shared hosting system could do the job. But they were going to "scale", right?

Amusingly, they eventually did "scale". They started selling through WalMart in bulk, and accepted their fate as yet another nutritional drink, on the shelves alongside Ensure Plus and Muscle Milk. Their "tech stack" was irrelevant to that.


That's an amusing story. I suppose the Soylent board met a good salesperson, and the pitch worked. So when the team landed the contract, they had to justify their costs some way... I suspect almost every software project in existence is in some way a victim (or beneficiary...) of fanatical marketing.

I remember working on a simple CMS site, amongst other things, for a pretty large company. When we began the project, we were tasked with re-purposing an overworked Plesk instance to host the site. We eventually managed to do it, but then found the small amount of disk space that was left to us was getting chewed up by logs.

So I reported this to my project manager, suggesting that we procure more HD space. I think I said something about us 'running out of memory on our hard drive'. The PM promised to feed this back to the client... A week later, our PM said that he and the client had resolved the issue. The client will pay for a new server with a stonking 96GB of RAM!!!

That ought to fix our 'memory' issue, right!!?

I mean, it also came with a 1TB HD, a second box for redundancy, dev time for migration, and additional dev time for a switch to Journald, or Logrotate, so I wasn't rushing to point out the misunderstanding... Working with the second box over RedHat Pacemaker was all new to us though, and a complete PITA.

But it was also another 'feature' to sell back to the client. They loved the sound of that. A site that couldn't go down... We made it work, but there was absolutely no real technical need for a Pacemaker cluster and 96GB*2 of RAM. It was just a simple CMS backed site.

Occasionally, that's where such complexity comes from. Not the developers themselves, but some loose cannon of a salesperson. That said, these 'sales people' may even be developers themselves. I often think that's a large part of how and why questionably complicated software exists...


A company I used to work for needed a website. They already had a backend and a REST API (served on the same domain as the website should be) and the old website was served directly by the backend that also served the API.

I am not aware of why it was chosen to retire that and separate the website into its own service - maybe there was a good reason, so I won't comment on that.

However the approach they (or rather some frontend developer) chose was a React front-end (fair enough) with a Node.js backend that translated between the REST API and GraphQL (WTF).

The existing API was served on the same domain so the new React website could've directly interacted with it without any problems. But no, this guy wanted to put "GraphQL" on his resume and so introduced an unnecessary extra component and potential point of failure that the company now has to support.


I made this same tech-stack decision while working for a company in the manufacturing industry. React, GraphQL, Node.js. There was a reason behind it.

The head of the company was pushing to modernize our process flow via software meant to drive manufacturing. We had contractors on site from three different companies who were each using us as a test-bed for their software/manufacturing integration. Every week or two they'd plot out some new data they'd need to move from sales to engineering, manufacturing, QC, etc.

In our case, having GraphQL as a translation layer for the sales website saved everyone involved time. However, I can also see many scenarios where that wouldn't have been the case.

It definitely comes down to using the right tool for the job. Knowing how to identify which tools fit and which don't is one of the skills it's very important to help new devs develop.


Yeah, GraphQL seems like a cool piece of tech that's overkill for 90% of the places it's used.

I'm a bit confused as to how it's become so popular for normal development when it seems to have a lot more boilerplate and setup than a simple REST API.


I feel like GraphQL is the new NoSQL. Not just that it's mistakenly adopted, but also that it is quite valuable for the right use cases, but the public doesn't seem to understand what those are.

Do you really, _really_ need to support queries? 99/100, I'd guess no, and thus constraining people to a more defined interface and access pattern is simpler.
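The difference in miniature (hypothetical endpoints, just to illustrate what "supporting queries" commits you to):

  // REST: the server fixes the shape; the client just fetches the resource.
  fetch("/api/users/42")
    .then(r => r.json())
    .then(user => console.log(user));

  // GraphQL: the client composes the shape, so the server must handle arbitrary selections.
  fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: "{ user(id: 42) { name posts { title } } }" }),
  })
    .then(r => r.json())
    .then(data => console.log(data));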


Oh... you need to fill your RDBMS with a bunch of JSON responses from your 3rd party API. So you can then decode them in memory, because I actually need to select * where identifier = 'banana' to do analytics.

Meanwhile the API returns highly structured data perfect for an RDBMS, but we don't know how to query SQL without an API in the frontend. So welcome to my hell.


It's a combination of resume-driven development and premature optimization.

The former doesn't need explaining but the latter can be explained as developers being concerned about requiring the benefits of GraphQL in some uncertain future and decide to include it from the start, even though in most cases they end up never actually reaping the benefits while still being plagued by the extra overhead of using that technology.


> The existing API was served on the same domain so the new React website could've directly interacted with it without any problems. But no, this guy wanted to put "GraphQL" on his resume and so introduced an unnecessary extra component and potential point of failure that the company now has to support.

There's a simple solution to this kind of problem: let the resume driven developer use his skills - fire him.


So how would you operate each of these bespoke services? One service uses TCP, one uses UDP, one uses SCTP. What happens if your buffer sizes are incorrect? What about if your keep alives are too aggressive? What happens if a problem in your TCP connection pool makes it seem like there's a networking issue and so you play around with your network settings only to have all your other protocols dive in performance?

I sympathize with the OP for single developers or small teams (by small, I mean a team of 5 not in a larger corporation), but whenever you have more than a single team, you want to keep system management overhead low. Unifying around a single paradigm like gRPC and Docker containers means that not every team will have their own bespoke chroot configuration and you won't have to retune an HTTP client every time you interface with a new service.

I think there's in general too little representation from small-scale or indie developers. I spend a lot of time working with the Gemini protocol outside of work and I appreciate the simpler approach that the protocol takes. I would love to see these sorts of stakeholders have a say in greater net architecture as well, but let's not pretend like this is all complexity for complexity's sake; this stuff is needed at scale.


>So how would you operate each of these bespoke services? One service uses TCP, one uses UDP, one uses SCTP. What happens if your buffer sizes are incorrect?

Such bullshit. Since when do we use UDP for a CRUD service? Since when is SCTP even considered outside of the telecom world?

HTTP and REST-like APIs have been able to properly handle simple CRUD for 20 years without problems, way before gRPC was even a thing.

gRPC has its uses for large, complex APIs that require a proper RPC framework. But in 99% of cases, yes, it's overkill.


> Such bullshit. Since when do we use UDP for a CRUD service? Since when is SCTP even considered outside of the telecom world?

You're making a strawman out of this. I'm responding to the following:

"while I have developers that have written full Node.js sites and dont know that IP or a Port, or TCP, UDP, is..."

But thank you for the insult.

> HTTP and REST-like APIs have been able to properly handle simple CRUD for 20 years without problems, way before gRPC was even a thing.

Hm, what are you talking about? Are you talking about the client? Are you talking about a load balancer? I don't think anyone is proposing that a simple static blog use gRPC to serve content to its users.

> But in 99% of cases, yes, it's overkill.

Compared to what? I'd argue that CRUD over HTTP is wrong. In fact, folks have been writing CRUD over TCP for ages. Why do we need to massage CRUD syntax over HTTP when we can just stream requests and responses over TCP? In fact, TCP predates HTTP by 20 years, so if you're using the historical argument, raw TCP is even older. How much bullshit is there around AJAX and Websockets and HTTP multipart and HTTP keep-alive when we're just trying to recreate TCP semantics?

So why are you drawing the line at HTTP? Seems arbitrary to me.


"HTTP and Rest-like API have been able to handle properly simple CRUD API since 20 years without problems, way before gRPC was even a thing."

There is the problem. "Twenty years old? How can that be any good" is a common POV in my experience


That's a very good argument against the modern framework-containers-k8s stack. We used to be able to keep organization-wide track of things such as TCP services, buffer sizes, timeouts, etc., but now it's all a black box deployed by a guy who cut and pasted some YAML from a Medium article, at best.

It's probably acceptable to not learn the details of what you're doing, as hardware is cheap and all that, but it also constitutes a glass ceiling for how much the organization learns. That's a bigger problem in the long run.


Hopefully any organization that decides to use k8s goes into it understanding what they're getting into, but yes if you just get onto the hype train without thinking, you'll probably have a bad time when scaling.


Dev B: Here, put the React SPA in a Docker container and run it on some cloud its easy...

I have 100 different devs telling me "do this thing, it's easy" and it's easy for them, but now I have 100 different things to think about and make work together and devs never think about how their tiny thing fits into the big picture.

99% of web apps could be pure HTML + CSS on the front end, and Flask or something on the back end, talking pure SQL to Postgres. Really. That's all you need. And probably 99.99% of those apps that run internally to one company.
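A sketch of what that looks like in the Node world (Express and node-postgres standing in for the Flask-and-Postgres combo, purely illustrative):

  import express from "express";
  import { Pool } from "pg";

  const app = express();
  const db = new Pool(); // connection details from the standard PG* environment variables

  // Plain SQL in, plain JSON out; no ORM, no service mesh, no orchestration.
  app.get("/items", async (_req, res) => {
    const { rows } = await db.query("SELECT id, name FROM items ORDER BY id");
    res.json(rows);
  });

  app.listen(3000);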


Docker facilitates the "throw a Dockerfile onto Ops and that's their problem now" mentality. That's perhaps not intentional, but choosing to over-optimize the developer experience makes life very hard for those who have to support and run the application over time.


Docker facilitates the "throw a Dockerfile onto Ops and that's their problem now" mentality. That's perhaps not intentional, but choosing to over-optimize the developer experience makes life very hard for those who have to support and run the application over time.

It also enables pushback along the lines of "your container doesn't run, and it's 100% your fault, as it is supposed to contain all its dependencies". So developers have unwittingly played into the hands of ops there ;-)


Agreed, mostly. To be fair I think that, based on my limited experience with React and Docker and Python, those 3 technologies have valid use-cases in smaller orgs. The other technologies I'm not familiar enough with to comment on.

Also if I was gonna complain about someone I wouldn't complain about devs, I'd complain about team leads, engineering managers - the folks who should be guiding devs - and, the people at the business level who choose the team leads / engineering managers.

The devs are at the bottom of the food chain; it's not their fault.


Full agreement with you on those technologies.

GraphQL is unfortunately still part of a low grade hype cycle where more and more companies are using it because "it's the next REST". But if you don't have a real need for the advantages it offers (for example a mobile app whose client side requests you can't easily change in sync with your API, or a data model that really is graph-like), it can be more cost than benefit.

React on the other hand: I've lost count of the number of small prototypes I've started in vanilla JS, thinking "this time I can just roll it myself", then ended up pulling in React. Even the simplest things are easier and simpler with it, and hugely complex applications are also easier and simpler with it. It has a fantastic API surface.


Exactly - that's my take on React.


> towards unecessarily complicated development processes and architectures

please note that what's "unnecessary" for you is not necessarily unnecessary for others.

this is an important point.

i guess what these projects need is a way to communicate explicitly the costs of maintenance and developer mind-share when adding more complex features to their product.

Because it is difficult, especially for a small dev team, to figure out these costs during a short evaluation of the software, it is not easy to pick the best tool for the job unless some members have experience with these tools from a previous job.


I think your point is slightly off in its 'angle'.

Consider that "unnecessarily-complicated" and "unnecessary" do NOT mean the same thing -- particularly in this case.


that depends on the interpretation.

most people don't mean there is a simpler implementation that has the same feature set and execution properties.

Most people mean it has features I don't need, hence it is too complicated.


>Agreed. I saw it as a general lament against over-engineering. I don't think the point got lost in the super specific example...

Redis also comes to mind.


I can’t even begin to clearly express how much I agree with you. I work for a very large company, not a software company but one that produces a lot of internal applications. The infrastructure that we leverage to do even trivial things is mind blowing. Want to send an email to customers? Roll a series of APIs and fuck it let’s do some machine learning bc yolo, integrate all the things and don’t forget to run everything through APIC. It’s insane.


I work in TensorFlow with some regularity, and I've seen explicit examples of what the author of the article talks about where Googlers get used to their own stack (See eg: https://github.com/abseil/abseil-py/issues/99#issuecomment-4... vs what is considered standard Python practice https://github.com/tensorflow/tensorflow/issues/26691#issuec... )

That said, I also see the opposite when this gets some traction with "normal users". The Abseil issue was fixed, and the TensorFlow 2.0 release did make TensorFlow more usable by smaller teams in many situations.


I attribute this to companies that can afford to keep way too many highly skilled employees on the books while needing to keep them occupied.


It seems to me the problem is all of our software is built by and for megacorps. In the context of what they need it for, it's the best tool for the job, but it was never made for smaller orgs where 90% of the bells and whistles are not needed and end up as needless complexity.


I'm not sure how react fits in here. I have found it to be very simple to understand enough to be productive and upgrading to new versions has required next to no changes to our large codebase. Perhaps you mean SPA react apps which I agree are a bit more work than needed for a solo dev.

Docker seems to be optimal for medium size teams. Configuring a server the old way is easier than docker for 1 server but less so for when you have 10 or CI/review apps.


React fits in because it's in the process of jumping the shark because FB needs it to. React is already hugely bloated for no discernible reason compared to Preact, but they're just going to keep bloating it with Suspense. Cf. https://crank.js.org/blog/introducing-crank


> I suppose in exchange, you're getting a guarantee of maintenance.

Hah, with Google? Right.


It's a shot across the bow at the Kool-Aid which everyone here has imbibed to some extent. Of course they're going to shoot the messenger. When I clicked this thread I said out loud to my team, "oh, this should be good," and I hadn't even fathomed precisely how predictable the discussion would be. It's so far removed from any consideration that it lands something like explaining to an American that they're a subject of the Queen. It makes no sense to objectively review the thinking when you're inside, and buy, the thinking.

Bear in mind the livelihood of nearly everyone participating in this conversation is impacted by the situation being lamented. It's telling that CoreOS employees felt compelled to set the record straight and casually "float ideas".

It's further curious to note the difference in HN's reaction to an extremely salty and offensive post from, say, Linus Torvalds, versus this one. People are really bad at objectively setting aside their ideology and tone is a kind excuse to get out of having to genuinely think about whether the angry individual might have a point, and when it hits close to home, well. All bets are off.

You're 100% correct on your overarching point but the less kind interpretation is "the incentives of modern software engineering value complexity to artificially strengthen a software engineering job economy". The good news is that the more Big Tech screws the pooch with user data management and software engineering, the stronger an argument I have among very receptive ears in government circles to start thinking of Big Web/FAANG as hostile to responsible computing innovation, because they never really bought what Mikey sold them, and the troupe that followed Mikey to Washington didn't hang around to explain the downsides or long-term management needs of the current landscape.


I think it's pretty drastic to say "everyone here has imbibed" that Kool-Aid; I work in embedded, and am not super familiar with the frameworks he's complaining about, but I found the blog post entertaining nonetheless.

Sometimes I feel like the people who are deep down in this "full stack developer" land should come up for air and look around at the bigger picture. There's a much bigger world of software development out there and actually I think this post was kind of an attempt at that.

The web-facing industry seems to have dug itself down into a very deep rabbit hole of frameworks-on-frameworks-on-buzzwords, in a way it wasn't 10 years ago when I last played in that space.

It looks very weird over there from over here :-)


You should come join the fun!

In order to deploy a web app in 2020, one must write his frontend using React and TypeScript, which transpile the code into nicely bundled and minified JavaScript and CSS files along with thousands of your 3rd-party dependency libs. Yes, you heard that right! I just counted the number of libs in node_modules in one of my small side projects and it's over 1,000. You might also wonder why the TypeScript code would produce CSS when transpiled. That's because in 2020 we no longer write stylesheets directly into a CSS file. That's so 90s! We write the CSS as JavaScript objects now, then let the transpiler generate CSS from that. Isn't that neat?!

For the backend, you can choose one of the many cool frameworks and languages. Heck, why not use 5? In 2020 the backend should be written as a collection of microservices that talk to each other over gRPC. Of course you still need HTTP, or how else would your React frontend talk to your microservices? So your microservices now talk multiple protocols, which is very cool!

Now pack each of those services into a collection of docker images, then write 4000 lines of YAML files to orchestrate them on Kubernetes, and you're ready to go! Now you have a glorious, scalable app with a 99.999% SLA that can withstand your Show HN frontpage traffic thanks to Kubernetes, until you eventually screw up that YAML config and the whole house of cards collapses, but that's on you because you're bad at Kubernetes orchestration!

I think we're indeed in a deep trench but damn it's quite an impressive trench we got here!


> Of course you still need http or how else would your react frontend talk to your microservices?

Don't forget GraphQL on top of that, otherwise you're not really in 2020.


I'm genuinely curious - do you really have to do all of that? Why not just ignore the frameworks and technologies that overcomplicate things and use what works best for you?

I am wondering why someone just hasn't forked etcd if it is truly so awful now.


Yes. If you don’t, and scp a Perl script to run under supervisord that meets the business requirement in three days instead of three months of microservices strategy, you are accused of being so hopelessly junior that you are unemployable.

I have witnessed that firsthand. Despite the anecdote being backend, I’d bet my 401(k) it’s the same up front.


Being a complete newbie on the scene a few years ago, the choice of frameworks and tools was so incredibly overwhelming (coming from C/C++ land). There are strong opinion pieces both for and against basically everything too.

When you are still trying to figure out exactly what HTML and JS should be responsible for, having React, Vue, PHP, Elm and many more as options for "just making a simple web page" can slow down the learning process by an order of magnitude.

I'm not sure what the solution is here- clearly each of these tools has a niche which requires them, but making a basic webapp without a web-engineering degree is quickly becoming an impossibility.


Unfortunately yes. I'm a freelancer so I have to keep up with the latest tech, at least just enough so I can debug issues when working on my clients' projects. You probably don't need to do this on your niche and can just keep using whatever stuff that works for you.


Forking is the answer, if you don't like the where a particular group is going with an open source project. Fork an old version, and perhaps you will find some others who prefer it that way.


Nope! The original group will sue you for trademark infringement, despite that they're the ones peddling a broken knockoff version. (See, eg Python, Firefox, GIMP, etc; I don't pay much attention to webapp-related stuff, so I don't have more proximate examples.)


So just give it a new name.


yes, but that generally doesn't stop someone from feeling annoyed.

While I do like cheering for annoyed people to fork projects, it's not always practical for a number of reasons, like:

- The project is large and really requires multiple people to maintain.

- It's written in a programming language that the person is not familiar with, or interested in (they might even dislike it).

- License issues.

- Their work might prevent them from working on open source projects due to intellectual property concerns.


And here I am, making a career writing LOB web apps in Rails, and getting to MVP in weeks instead of months (or even years). Guess I'm really missing the boat!


My pet theory is developers crave challenges. If their project is not challenging enough to stimulate their mind, they will introduce more and more complexities until the project is hard enough to keep them intellectually stimulated. Thus, software development will never converge into a single tech stack as smart developers that got bored as hell working on crud apps will invent new stuff to make their work more interesting and meaningful.

In 5 years react and kubernetes will be considered legacy tech and we will march on to other new shiny tech.


Rest assured that I am firmly in your camp, and I've been tirelessly making the argument to policymakers that Big Web is not the resident innovative force in computing any more, despite the Obama administration's rapid adoption of their ideals and philosophy to salvage Healthcare.gov. In both the legislative and executive branches I have found many, many sympathetic ears who have been burned by ballooning budgets to handle public cloud and all the concerns that came with it, which folks who advocate for cloud-first deployment tend to overlook when addressing people who genuinely don't know (concerns endemic to Big Web operation of cloud actually need explaining when addressing a sector that's never touched it).

There is a watershed coming where Big Web/FAANG will lose this mantle, and DevOps, half-baked Agile because nobody ever bakes it all the way, framework engineering, and a complete fundamental ball-dropping around security and exploitation of user data are rapidly ensuring it.

At some point, Big Web/FAANG and everybody delivering Web services of some kind got the idea that the Web was computing. I think the prototypical version of this thinking led to Rob Pike's observations about systems research in the early oughts. This forum tends to operate with that underlying assumption about the lens to view computing through no fault of anyone in particular, which is why I say that.


What would you say is the "resident innovative force in computing" now?

I'm a little confused by the reference to healthcare.gov: if Big Web is not the right paradigm to handle a... big web site, then what is?


Perhaps I disagree with the implied assumption that a big web site was the answer. I don’t have an answer for you beyond that, because step one is challenging the increasing belief that Big Web has computing figured out and we need not evaluate the decisions and foundational technologies and delivery models that have led to where we are. Step one is a mountain, as you can imagine, and it’s possible to start the conversation of climbing it with only a handful of ideas of what could be over it. Many smart people with quadruple my intelligence and foresight have toed these waters throughout computing’s history. They just lose to hype and marketing.

I realize that’s an underwhelming answer, but if we asked that everyone who disagreed with the status quo simultaneously proposed a perfect solution to fix it, we would be nowhere in any sense. I assure you that thinking about the answer to your question is a significant part of my life, and talking to people who use computers to shape that thinking is key. Government was largely responsible for setting the computing agenda and spurring all of the right technologies in the beginning, and I believe there is merit to revisiting that thinking rather than ceding computing to engineers on the West coast.


At some point in the early-mid-00s we as engineers just "gave up" on the web as it was formulated as a document/resource transfer system with hyperlinking, and turned it into this ungainly mess of DOM manipulation, rampant RPC calling, database abusing, JS bound bloat. Finding the HTML document under the weight of ... stuff... on top has become impossible.

There were precursors to this... GWT was an early example of abuse of the web browser stack to accomplish something very non-webby. But it's really gone full bore now.

Back in the mid-90s when I made web pages for a living when some designer came to us with a photoshop mockup in a PSD and said "make the page look like this" we felt perfectly fine saying "no, the web can't do that. design for the web, not photoshop"; no such luck now. And so here we are...


You should also consider that the contrarian nature of HN would have set it up for exactly the opposite reaction had the post expressed the reverse of its current positions. The big irony is that it's not inconceivable that it would involve some of the same actors voicing different opinions.

> "the incentives of modern software engineering value complexity to artificially strengthen a software engineering job economy"

Not sure I would agree. Hanlon's razor comes to mind. To me, the referred complexity is reminiscent of a number of other complex patterns that somehow found their ways in software engineering, out of possibly good intentions. Regardless, an idea needs to be sold and when the marketing is well executed the community will ingest the kool-aid. Which it then tries to digest for a few years. Then begins a game of chicken that consists in not being the first to admit that you're in pain, out of fear of being branded an infidel, or out of sunk cost bias. Eventually a few courageous and influential souls finally spit it out in disgust and scream "wtf is this shit?!?" Then the rest awakes from the common stupor and a holy war ensues. I still remember SOAP. I still remember with what verve I was sold Scrum.

As for Kubernetes, only time will tell.


> " that somehow found their ways in software engineering, out of possibly good intentions"

It seems that systems and principles invented for some specific situations get rapidly entrenched as "best practice" due to the wild success of their root corporations. That's twisted much like fashion - "that dress looks great on that model, maybe if I wear that dress, I would look good too".


There's some truth in that, but there's more to it. Successful tech companies build good tech that is often worth adopting:

- the companies' successes are often predicated on strong ability to develop technologies

- good (well-maintained) technology requires resources and the successful tech companies have those resources.


Valid. The "fashion" point was more about P(adoption worthy tech | successful originator) != P(successful originator | adoption worthy tech)


I'm alarmed by the way you've brought up Mikey Dickerson completely unprompted (both here and in the OP's comment section), and that your account is just two hours old. It's fine if you take issue with the Silicon Valley mindset in government IT, but there's no reason to make it personal.


[flagged]


The reason it seems like an attack is the hostile tone of your comments, the burner account, and the unfounded/unsourced accusation that USDS left its partners behind.

I apologize if you just wanted to discuss the merits of a startup-minded approach to government IT, but I'm honestly skeptical of your motivations. USDS ruffled many feathers among the entrenched IT interests, and I've seen what the propaganda response looks like first hand.

If you really believe that you're fighting the good fight, then I'd welcome a direct conversation. Even a phone call if you like. I promise that I'm not in any way similar to a QAnon believer.

bfb09 24 days ago [flagged]

Perhaps my beef with your cadre is assuming that questioning its merits is AFS propaganda coming from someone who’s never set foot south of Embarcadero and assuming everyone who mentions principals has an axe to grind. You’ve painted a lovely caricature of who you think your opposition is, but you’re looking in entirely the wrong direction. Hint: Two FAANGs.

I don’t speak on the telephone with underhanded accusations, sorry.


[flagged]


[flagged]

bfb09 23 days ago [flagged]

Perhaps because I’m a government lobbyist and consultant formerly of FAANG with 24 years of experience trying to warn the Web industry that the exact thesis of the article is turning non-Web sectors against them? Couldn’t possibly be that, no, I have to be schizophrenic or a troll. The reason I know you don’t have schizophrenia close to you in your life is how willing you are to throw it at an anonymous commenter with a dismissive gesture toward “schizo”. You’d shut up if you knew what it’s like.

And you and “a proper response to accusing you of being a conspiracy means I’m being trolled when really I have no response” wonder why I change HN accounts like I change socks. Perhaps it’s because I hold this community in the utmost contempt precisely because of you and people like you, and I have zero interest in building a reputation among the harmful, toxic people who don’t realize how wrong they have it that I am working to excise from the conversation.

Take it or leave it. The good news for you is that you’re on the comfortable side of the argument so you will never get flagged or chastised by dang. Flagrantly accusing someone of psychological defect due to not liking their opinion is okay depending on the target and beliefs.


cool story bro. I have no idea what you think I "sold" Washington, but anyone who has kubernetes never talked to me.


> There are people attacking the author for a statement made about CoreOS, and for some hate towards Kubernetes.

That's not really surprising, and maybe should be a lesson to the author. If you want to make a particular point, you shouldn't make inflammatory/controversial statements about other, only-tangentially-related things. It distracts from and dilutes your point.

For my part, I know one of the CoreOS founders, and while I have no opinion on CoreOS itself, it's really off-putting to see some random armchair quarterback on the internet shitting on the work of someone you know. While I more or less agree with the actual real point of the article, the unnecessary attacks really divided my attention.

The article starts off with a cheap potshot at CoreOS, followed by a screed about Kubernetes, and then finally, more than half-way through, we get an admission that the author has "digressed", and we get to the actual point. Certainly the point required some background about Kubernetes, but I can understand why people focus on the negative CoreOS and Kubernetes aspects of the post, since they consist of more than half of the content.


Eh, idk. It makes for entertaining reading for a devops-sceptic.


Off-topic, but I've never seen anyone describe themselves as a "devops-sceptic" - could you elaborate more on what that looks like for you?


You're just asking so that you can attack my response, I suggest.


While I know there are people who argue in bad faith on the internet, I am genuinely curious and am not looking for a fight. I would have reached out to you privately but you don't have an email or contact listed on your profile.

I'm mostly curious about which part of devops you're skeptical about, if it's tooling, or process, or the mentalities or something else.


I think everyone here remembers that time they wrote a simple HTTP server, that just works, and would continue to work today.

If anything, I hope that we can recognize that there have been advancements in computing. Back in the day you would be open to attacks left and right; it was the wild west. Today, you can take that tiny webserver and put it in a VM, and voila, now it doesn't matter if anyone breaks in. As long as you didn't add the SSH key to GitHub or anything like that.

We can have complex programs, as long as the average developer doesn't have to know the details behind them, implement them, or risk losing access they previously had.

I think new protocols that are extremely hard to implement fall under that, and I don't think QEMU or Linux do, as nobody loses anything when Linux adds new complex features.

We have to make sure, at least, that we continue to support HTTP/1. I think we already lost a lot when we moved away from IRC.


FCGI controlling Go programs, maybe front-ended by a load balancer and back-ended by MySQL/MariaDB/Postgres, can get an awful lot of work done cheaply.

FCGI is an automatic "orchestration system" - it starts up more instances as needed, restarts them if they crash, shuts them down when they go idle, and restarts them once in a while in case they leak some resource. It's like the parts of Kubernetes you really need.

Go is an OK language. Not a great language, but an OK language. It comes with almost everything you need for server-side applications. Also, the "goroutine" model means you don't have to fool with threads vs async. If a goroutine blocks on a lock, no problem; the CPU gets redispatched. If you need a million goroutines, that works, too. None of that "oh, no, the async thread made a call that blocked and screwed up the whole system".
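If it helps, here is a rough, untested sketch of that shape using Go's standard library; it assumes the front-end web server speaks FastCGI to this process over stdin, which is what net/http/fcgi expects when you pass it a nil listener:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // each request is served in its own goroutine; blocking here is fine
            fmt.Fprintf(w, "hello from %s\n", r.URL.Path)
        })
        // a nil listener tells the stdlib to accept FastCGI requests on stdin,
        // i.e. the front-end web server owns spawning and reaping this process
        if err := fcgi.Serve(nil, handler); err != nil {
            panic(err)
        }
    }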

MySQL/MariaDB/Postgres have been around for decades, and they work pretty well. You can have replication if you need it. (NoSQL seems to have come and gone.)

Get the stuff at the bottom right, instead of piling more cruft on top. It's cheaper.


> Back in the day you would be open to attacks left and right.

If you wrote your simple HTTP server well, then not really.

> you can take that tiny webserver and put it in a VM, and voila, now it doesn't matter if anyone breaks in.

1. It might still matter.

2. The virtualized/containerized version could well have its own security issues.

3. The virtualized/containerized version is heavy and expensive (relatively).

4. Why did you move away from IRC? Come back... :-(


What if 'advancements' were anything but? How would we know whether we're advancing a nascent engineering discipline in the best way possible? Is public cloud, an optimization for a certain calculus of engineering resources, the optimal form of computing delivery?

There's an underlying presumption about computing progress that has weaseled its way into your thinking, and the devil of it is that there are vanishingly few people willing to question it (can hardly blame them, given this thread).


You seem unfamiliar with the mystical energy force that always ensures the direction of "progress" is always positive for everyone. It's very un-techy to not believe in this, you need to immediately forget enough social science subjects so you can get back to the purity of ignorance.


James Mickens' 'Technological Manifest Destiny', cited at USENIX 2018, is a real thing. At the time, listening to it, it was easy to chuckle, but I've since had a lot of sobering conversations with people in which I realized partway through that they really believed these things, and that's frightening.

Technological Manifest Destiny:

1. Technology is value neutral and will therefore automatically lead to great outcomes for everyone.

2. If we delay the rollout of new technology, then we delay the distribution of the inevitably great impacts for everyone.

3. History is generally uninteresting because the past has nothing to teach us right now.

(https://www.zachpfeffer.com/single-post/2018/11/27/Transcrip..., ca. 37 minutes in)


You jest, but there is an actual argument somewhere in this dumpster fire arguing that philosophy cannot explain the overall direction of technology, as evidenced by a particular software's choice of license and the underlying assumption that technology is "different" from any other human endeavor.

Nebulous philosophical arguments do explain things. Quite readily, actually.


That demo from 1968 ruins everything (https://www.youtube.com/watch?v=yJDv-zdhzMY), like the illusion of progress.


> Today, you can take that tiny webserver and put it in a VM, and voila, now it doesn't matter if anyone breaks in

This is a dangerous viewpoint to hold. Containers do offer some slightly better security, but they are not a silver bullet or anything close. There are still lots of ways to cause problems whether the software is in a container or not.


>> a VM

> Containers

These are not the same thing, and have different security guarantees.


Since the whole article is about containers, I assumed they were using VM as shorthand for a container.


Since they also mention QEMU, I do believe they actually meant VMs.


Short of CVEs, which are fixed after identification, can you point out other ways to "cause problems" with a tiny webserver serving static content in a container?


Well, there have been many container breakout CVEs, and I'm sure there will be more: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=container

But let me ask the question another way. What security do you gain by being in a container instead of on a server without a container?


> What security do you gain by being in a container instead of on a server without a container?

I'm not a container guru, but... this is obvious, right? You have to break out of the container, which is an extra layer. In order to get root on the server, you need:

• Without container: application vulnerability (to gain access to the OS) + kernel vulnerability (to gain access to root)

• With container: application vulnerability (to gain access to the container) + container breakout vulnerability (to gain access to the OS) + kernel vulnerability (to gain access to root)

You make it sound like I'm missing something — what am I missing here?

Edit: are you talking about CVEs like this one (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-3215)? I guess I'm assuming that a container vulnerability will give an attacker less to play with than a kernel vulnerability.


Why do you need to access the OS? If you break out of the web server, you have control of the container. You can then launch any sort of internal attack at the access level of the container. You could also launch a DDOS from the container.

Most of the bad stuff you can do when you take over a web server doesn't require root access, just user level access, and you get that whether or not it's in a container.


Containers share a platform. They increase attack surface by supplying a vector to any neighbour, by being something that can be attacked at all, and by requiring another control plane that requires credentials, making it a social engineering opportunity as well as being another component that itself can be vulnerable.

They are not an additional shell of defence, they do not reduce your blast radius.


> They increase attack surface by supplying a vector to any neighbour

Could you explain what you mean by this? I'm still not getting it.

Here's the example I have in mind: I have one application listening on a TCP port, and another one listening on a different port. I don't want the first application to talk to the second application. If they're just two processes on the same machine, they can both see each other, and an attacker who manages to exploit an RCE vulnerability gets access to the port "for free"; if they're both in containers, then they can only see the ports I specify, and it's really easy for me to specify them.

This literally does seem like an additional shell of defence to me! What sort of attack would it make worse?


Trusting the isolation like this is absolutely misguided, to the extent that it’s in conflict with understanding how computers work. Those who do not remember rowhammer or spectre and their kin are doomed to repeat them.

As for your example, there’s no difference between that and OS-mediated mandatory access controls, separation of uids, chroot, etc. Those capabilities are present regardless of whether you’re using containers.

Anyone selling you on containers as a security measure is pitching snake oil. They are a resource allocation & workload scheduling tool. They may help applications avoid inadvertently treading on one another’s toes e.g. avoiding library conflicts or easing the deployment of, say, QA and showcase environments on the same metal, but it’s intrinsically a shoji wall.

There’s even a cultural downside, since developers may make exactly these flawed assumptions about runtime safety, or hard-code paths for config files rather than making them relative to deployment location, make assumptions (implicit or otherwise) about uids and so on, i.e. containers can breed sloppy dev practices.


I feel like you are several metaphors removed from me right now. Rowhammer? Spectre? I am just trying to serve a couple of websites here! I'm not going to buy another rack of servers just to isolate the two. Have you seen how expensive those things are?

> Anyone selling you on containers as a security measure is pitching snake oil. They are a resource allocation & workload scheduling tool.

I agree with both of these statements — and I suspect this is where it all falls down. When you put all your applications in containers, you are not done with security, no. Nevertheless, I've found great value in having one file per version per application, being able to upgrade my dependencies one-by-one, automatically rolling back in case of failure, and taking advantage of the tools in the container ecosystem. With all that, the security features such as isolating filesystems and network ports are just the cherry on top. Before containers, I was thinking about how to give my applications limited filesystems to work with, but along came Docker, and I didn't have to think about it anymore, because I'd already done it.

This is why I feel so out-of-touch with many of the commenters here. I'm surrounded by people decrying containers for security reasons, I want to defend them because of the many benefits they have given me, and I think the people preaching some kind of True Security (where everything is 100% perfectly isolated) aren't taking this into account. I feel like I could take your comment — "Trusting the isolation like this is absolutely misguided" — and apply it to any part of the stack. You have to stop somewhere.

I've seen the salespeople. They definitely exist, but I don't think they're here on HN, and they're more likely to be learning than lying.

And developers making flawed assumptions? You know this didn't start happening with containers!


> I am just trying to serve a couple of websites here!

Don't underestimate your responsibilities as an active participant online and operator of a globally reachable computing resource. Like driving a car, if you're not fully qualified and alert then you can endanger yourself and others. This is how PHP got such a bad rap.

The fact that the container deployment tool has helped you configure security elements (such as limited filesystem access) is nice, but those elements already existed, along with the tools to manage them. Containerized software deployment did not invent them, and if you're getting procedural benefits from containers, that's great for you, but that's all it is.


Having an additional layer of restriction (be it an allocation and workload scheduler or whatever) is additional security. The fact that there are vulnerabilities that bypass layers of security is irrelevant. The power button on the machine is a vulnerability to your software. The fact that the restrictions are possible with other tools is irrelevant. Grouping functionality into a singular paradigm has utility.

To be fair, more security by additional tooling adds a vulnerability in human error, but that's true of all the tools.


Are you talking about VMs or containers?

If you have an application RCE and a kernel vulnerability, what difference would a running container daemon make?


Except you could totally break into it and cut through the VM layer, so it does indeed matter if someone breaks in.

Some of this is an illusion of simplicity.


The problem, I think, is that the article presents all of its views as almost self evidently true. If you distill it down, the complaint is that etcd added gRPC.

I think it was a good move for an infrastructure piece like etcd to add gRPC. Now I can just grab a generated client in a language of my choice. There are certainly valid critiques of gRPC / protocol buffers, but I've found things like gRPC and Thrift to reduce complexity in applications that use them, because there are a thousand interpretations of REST out there, not to mention the crazy things people do with 'simple' HTTP, such as long polling, which introduce implicitly complex semantics into something that is on the surface quite simple.
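For instance, the official Go client (a thin layer over the generated gRPC stubs) makes the whole exchange a few lines. Rough sketch from memory: the import path differs between etcd versions, and the endpoint, key, and value below are made up for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"localhost:2379"}, // illustrative endpoint
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Put/Get are thin wrappers over the gRPC methods generated from
        // etcd's protobuf definitions; the key and value are made up here
        if _, err := cli.Put(ctx, "/config/feature-flag", "on"); err != nil {
            panic(err)
        }
        resp, err := cli.Get(ctx, "/config/feature-flag")
        if err != nil {
            panic(err)
        }
        for _, kv := range resp.Kvs {
            fmt.Printf("%s = %s\n", kv.Key, kv.Value)
        }
    }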


> Now I can just grab a generated client in a language of my choice.

But you can't curl the state of your infra component without creating a program and downloading the client artifacts. This is a very big step backward from an operability standpoint.

> because there's a thousand interpretations of REST out there

...but there weren't a thousand different interpretations of the etcd API out there: there was exactly one. And now there are two, and the one with the most utility (that can be used without generators and in languages where gRPC support is spotty) is being diminished.

While I agree with you that there could be design consistency arguments in the abstract for gRPC vs REST as a global architectural pattern, none of those applied to the concrete artifact of etcd client<->server communications.


> But you can't curl the state of your infra component without creating a program and downloading the client artifacts.

You can if you enable server reflection[1] and use a tool like grpcurl[2].

[1]: https://github.com/grpc/grpc-java/blob/master/documentation/... [2]: https://github.com/fullstorydev/grpcurl
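For what it's worth, the reflection part isn't anything etcd-specific; in a generic Go gRPC server it is roughly one extra call. This is a sketch of the general mechanism, not etcd's actual code, and the port is arbitrary:

    package main

    import (
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/reflection"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051") // arbitrary port for the example
        if err != nil {
            panic(err)
        }
        s := grpc.NewServer()
        // register your actual services on s here ...
        // reflection publishes the service descriptors over the wire so a
        // tool like grpcurl can list methods and build requests without
        // having the .proto files locally
        reflection.Register(s)
        if err := s.Serve(lis); err != nil {
            panic(err)
        }
    }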


> tool like grpcurl

Not available in any repositories and largely unknown to the general masses (much like gRPC itself).

Personally I like gRPC and sometimes use it in projects. But the author is right — attempting to gradually replace an existing HTTP API via gRPC API is like replacing horses with genetically modified cows. You need a very... corrupt mindset to ever attempt anything like that.


So... being a comment from that "guy" that made the original comment :-)

To reemphasize the problem with grpcurl.... it's _still_ static. You must download the artifact. It's like needing a different phone for every person you call (protobuf artifacts) instead of calling different people on the same phone (same artifact, different address and data in payload).


shrug Was just pointing out that you don't need the .proto files on hand to interact with a gRPC service if you use the above approach.


Yeah I don't disagree, I was mainly pointing out that there isn't a black and white 'grpc = bad' argument.

Especially since I believe grpc does take steps to mitigate some of the RPC issues you'd have with, say, Thrift. For example having the REST gateway so you can support the curl scenario without any extra work. (For example when building my own Thrift-type services I would do extra work to have a mini REST API for status and other operational curling uses)

As to the second part of your argument about there now being a second API, and how that is not necessarily a good thing, I quite agree and can't really speak to the etcd use case, other than that I don't think it's some property of 'Xooglers' to do such a thing to an open source component. I've spent my career using open source components: I've had lone developers who change their APIs willy nilly to fit the pet peeves of their own organization; I've had large companies own an open source component and have it be very disciplined about version migration; I've had lone developers be similarly disciplined; I've had companies make open source projects really complicated.


> For example having the REST gateway so you can support the curl scenario without any extra work.

I would agree with you except for one thing--it's very clear that's a second class citizen. There is this totally extra thing to support the original use case of etcd, and it comes with all the extraordinary quirks of how gRPC protobuf -> JSON serialization works (a strong term in my opinion).

It is very very hard for me to not see an unintentional conspiracy to push this very bad standard of gRPC. It does NOT make 99% of real-world programs faster. It does not make operations easier. It is not even a "data" format at all: go try changing an enum in downstream systems.

The standard is bad. Objectively, terribly bad. Someone somewhere on HN said that "Google was a ball-pit for gifted children." I kinda laughed at that, but really, after k8s and protobuf3, I won't even grant "gifted".

Google is a ball-pit for autistic children who can be funded by their monopoly on search. I'm tired of dealing with their protocols that make everything worse.


> Someone somewhere on HN said that "Google was a ball-pit for gifted children." I kinda laughed at that, but really, after k8s and protobuf3, I won't even grant "gifted".

This made me laugh.


To your point: it's situations like this that could (likely) have been mitigated if they had just upheld backwards compatibility: keep HTTP, and add gRPC as an alternative, first-class way of interacting with the tool. I haven't used etcd directly beyond K8s, so I don't know if the new versions inherently invalidated the use of HTTP. But it seems to me this is just another example of breaking changes... well... breaking stuff.


> But you can't curl the state of your infra component without creating a program and downloading the client artifacts. This is a very big step backward from an operability standpoint.

You can still do this because the gRPC API is also exposed via REST.


gRPC also adds its own bugs, which are platform dependent and binding dependent.

(I use it)


I agree with you. I like grpc.

But I also think that the original author has a point: for many solo developers it's an extra dependency that they have to learn, and they get virtually no benefit from using grpc because they aren't performance-bound. They are simplicity-bound.


> they get virtually no benefit from using grpc because they aren't performance-bound. They are simplicity-bound.

If you're talking latency, to this day gRPC in dynamic languages is still slower than most fast json implementations in _latency_.

It is simply not correct to talk about "performance" this way.

If you care about CPU allocation and not burning 70% of your CPU on JSON parsing, then sure. Most companies don't care about that, and for those workloads I/O is a problem three orders of magnitude bigger than in-process parsing.


I think this is another reason why solo developers don't benefit from grpc the way a billion dollar company does: solo developers often aren't using languages that are optimized with grpc (eg they're using python, node, Ruby, sometimes go; they aren't using c++, Java, etc)

Obviously these aren't hard and fast rules, but Google can't just spin up 10x more data centers to run their resource intensive stuff in something other than c++ or Java. They need the efficiency in a way solo developers or small teams usually don't.


Yep! That's the nuanced argument I'd look for, as opposed to 'gRPC bad, they added it because they're from google!'

Simplicity wise my favorite scenario is where if I'm using Python I can pip install an interface (or maven, or NPM, etc. etc) which is the versioned, supported interface for a component. REST + Swagger can help a lot there. So can gRPC. But I can also get how your favorite scenario might be to explore a REST API and build your own client.


> Now I can just grab a generated client in a language of my choice.

No, you can just grab a generated client in a language of your choice if that language happens to be supported by protobuf.

This is a massive regression from the openness of HTTP REST APIs.


That would have been a critique of Thrift, and a valid one, but gRPC has an http gateway interface with semantics for how the RPC calls map to REST calls and JSON. As long as that is supported, then there does not need to be a regression.


Sure, if the provider of the gRPC service wishes to also offer an HTTP gateway, that's true.

But it's trivially true in the case where the provider does so wish (as it would be true of any protocol with a fully-functional HTTP gateway infront of it), and it's untrue in the case where the provider doesn't.

And the provider has to do extra work:

"[offering an HTTP gateway] required adding custom options to gRPC definitions in protobufs, and add an additional container running this reverse-proxy server." ( https://wecode.wepay.com/posts/migrating-apis-from-rest-to-g... )

This still seems like a massive regression in openness to me.


"Now I can just grab a generated client in a language of my choice." Except when you can't because no client is available in your favorite language. Then it is a lot harder than a rest api would have been.


> the article presents all of its views as almost self evidently true

Not so. The article, imho, presents all of its views as unabashedly opinionated views of the author.

The complaint that simple, lucid and well-fit functional designs are getting hijacked by large-corp vested interests has been adequately taken up elsewhere in this thread.


He talks in absolutes (Kubernetes being the worst piece of software, etc.) and tries to convince the reader that "megacorporations" are the faceless evil enemy and that they are making everything worse for everybody. Either you are with him or against him (a servant of the evil megacorps' interests); there is no middle ground.

This person is not looking for a conversation, but looking for a fight. He seems to be angry and cannot direct his anger to "make good art" (Neil Gaiman - Make Good Art), instead trying to insult people and push them to a place where their emotions take control and they can be dragged into the mud.


Sorry but even though I disagree with most of the points from the article, I prefer to read this instead of a shy and unconfident rant from someone that's afraid of insulting your favorite tech.


What about a third option, where the author makes focused criticisms supported by clear explanations instead of the shy and unconfident strawman you've constructed?


That's what I read here. People got upset because he said Kubernetes was bad software and gRPC was a bad choice.


> direct his anger to "make good art"

Here is one from me with the solo-dev/small-team in mind - https://github.com/imaginea/inai . Good or not I don't know. It came out of some reading I was doing and I'm having fun with it due to the "deploy first and then dev" mindset it encourages.


Art or not, there are quite a few contributions on the same page he hosted this post on.

Do you work with Kubernetes in a professional setting?


> "megacorporations" are the faceless evil enemy

There is some irony in the text, but to read it that literally is very naive.

The criticism is aimed at malicious/dumb people who latch onto technologies without understanding them.

For argument's sake, see how many people are defending CoreOS etc., and how they claim to have decided to like it based on their superb expertise, yet they will likely fail to explain to you chroot or anything else containers are based on. Or how they will proclaim that LXC is pure garbage and Docker is better (I will not explain the joke, let them downvote :)

Bottom line: we are being sold bad ideas from the past, in new clothing, by the incompetent/malicious people that megacorps are made of.


So your response to him pointing out that the author talks in absolutes, and tries to shit on any tech he doesn't like... is to talk in absolutes and shit on tech you don't like, with little jabs and dumb in-jokes? Very convincing.


Lol, did it hit too close to home?

Read my comment again and you will notice there's not a single comment against or in favour of tech. Not even an adjective next to tech, only to people.

The point is exactly that some people can't tell the difference between two completely identical techs, just because they buy the marketing of one of them, despite it being a brand on top of the first.


A lot of the complexity is just intrinsic to working on distributed systems, and if you're doing that kind of work, then these products really can help reduce overall complexity by giving you an admittedly complex solution that is at least robust and pretty well tested; that tends to be better than constructing your own.

I think where a lot of the pain comes from, and the article hints at this, is there are a LOT of problems that do not require a distributed solution, and while you can totally solve those problems with a distributed system, it's not really useful to do so in a way that dramatically increases complexity.

This is a known issue, and there are simpler models for these simpler cases. I think the problem is all the hype around k8s has driven a lot of square pegs into round holes.


Yes distributed systems are complex, but IMHO at least a large part of that complexity is caused by people who think they understand the problem, and go off and create some "great" new technology. Which then gets a lot of press, and other people building systems on it, only to discover that the edge case being ignored as inconsequential starts to create lots of failure modes. Sometimes these critical problems are "human factor" ones like with kubernetes configuration, or they are just a lack of proper engineering hygiene. Then you get another layer of crap on top. Repeat that a few times and what you actually have is a giant mess that frequently only works through the sheer effort of a large operations team.

To put this another way: if you read the Jepsen reports, what overwhelmingly comes across is a lot of hubris. It's the old, stodgy technology these new systems are meant to replace (Postgres, for example) that actually satisfies the promises being proclaimed on high by $NEW_TECHNOLOGY.


I don't look at the Jepsen reports as being a representation of hubris. Most of what Jepsen finds may have been preventable problems, but they are also fixable problems, which is why I don't see hubris.

There's just a reality that we don't really build these systems on top of a distributed operating system. They're starting with a platform abstraction that is already terribly imperfect for the job. There is a ton of pressure to get something out quickly and make it accessible to a broad set of developers rather than to do it correctly (MongoDB is perhaps the most classic example of this). I was building big data systems in the early days, and the whole mentality was that you were building tools that were riddled with flaws, limitations, and outright bugs, but in the right context would make capabilities accessible to a broad set of developers that were otherwise completely off the table.

The truth is, if developers were already experts at distributed systems, they wouldn't need or want most of these tools; a LOT of the value is in accessibility and having something that is "good enough" rather than correct. The products reflect that more than they reflect hubris.

What I do see is a combination of marketing and customer ignorance that does believe in silver bullets and belief that these tools don't have to be understood to be effective. It's an unfortunate byproduct of focusing on that accessibility over correctness.


>> It's almost like there's no good representation in the open-source world for the solo developer or small team.

As a solo open source developer who maintains a popular OSS project, this is 100% the case. Sometimes it feels like big corporations are actively working against us. It's almost impossible to get any coverage at all on any kind of media. No matter how much organic growth your project has.

My project has been growing steadily and linearly for almost a decade. Now it has almost 6K stars on GitHub so you'd think that this by itself would draw some attention? Not so.

It's almost impossible to find it on Google search results. Most users find the project through direct word of mouth from other users of the project. We've never received a single consulting contract from any corporation.


You’re going to be bitter and resentful for a long time if you believe that the fact that you haven’t received a single consulting contract is due to big corporations working against you.

It sounds like you’ve built a useful piece of software. However, you don’t get handouts because you’ve built something. Those who are successful actively promote themselves and their products. There’s a reason companies aren’t comprised of just software engineers.


so what's the project?


I think it’s socketcluster.

However, I’m not quite sure what it does (it obviously does something really well, but I’m probably not precisely the target audience)

It might help adoption and re-share value if your readme starts with some problem and shows how your tool solves it. The cadence/temporal people did this really well: https://www.temporal.io/


Then the author should have focused the article. If their goal was to address that, then don't take a left-turn into hating on Kubernetes for no reason, before even reaching the point trying to be made.

> Funding and adoption (in the extreme, some might say hijacking) of open-source projects from large corporations dictates the direction of all major software components today

This is because large corporations are the biggest users of these pieces of software. It's literally that simple; money plays only a small part in it.

Etcd's biggest user is Kubernetes. I would estimate that 2nd place isn't even close; I would estimate that Etcd wouldn't even exist today if not for Kubernetes. The direction of open source projects is usually some approximation of a democracy; its users contribute the code they need. If they need gRPC, it gets added. gRPC wasn't added for no reason, or because of politics; it's a sister CNCF project, and it would increase the performance of the API surface, which greatly benefits all of its users.

Let's do a little experiment: Go start an etcd clone. Just go do it! The code is still there! It would be four clicks in the GitHub UI. Strip gRPC, add the HTTP API, and maintain it. I'd bet a thousand dollars the author won't do it, for two reasons. First, the author hasn't actually used etcd. I know this because, as I said, startlingly few people outside of Kube use etcd directly, and the author isn't providing any substantive technical reasons why the gRPC change was bad; just philosophical ones. And second, because maintaining large open source software projects is soul-crushingly, destructively, enormously hard. It's way too easy to armchair-quarterback decisions like these based on your own personal flavor of ethics, then cherry-pick the evidence you want to say "corporations are evil! they destroyed etcd by getting rid of HTTP!" Meanwhile, people out there do develop it, and use it, and service billions of requests, and deliver value to their users, and sometimes turn off their computer and cry because, despite all of the hard work and positive results, people with no skin in the game still pipe up to say what they're doing is horrible.

Did the author join in on the discussion when these changes were made? Did they voice their concerns? Or did they just read about it after-the-fact and say "jeeze, that piece of software I played around with two years ago really took a turn, I'm going to write a post about how evil corporations are."


> gRPC wasn't added for no reason, or because politics; its a sister CNCF project, and would increase the performance of the API surface, which greatly benefits all of its users.

gRPC (Google RPC) was absolutely added because of politics. Yes, it's performant, but let's not kid ourselves that gRPC wasn't picked because it falls nicely into the orbit of Kubernetes and the Google ecosystem. It's also why etcd can even threaten to remove support for HTTP+JSON (!!!) in the API, when it costs nothing at all to keep that support. This is exactly what the post is bemoaning.

> Did the author join in on the discussion when these changes were made? Did they voice their concerns?

Did you? Did anyone outside of a few people in a SIG or an RFC? This is also the complaint the author is obliquely getting at. Google may be 95% of the etcd _volume_, but they are basically 0% of the institutional users. That a small group of people can push a project in a direction that uniquely benefits them while adding an incidental complexity tax for everyone else is worth complaining about.


I feel as if I have to repeat this during most discussions about Kubernetes and its halo technologies: Kubernetes is not developed by Google. gRPC is not developed by Google. etcd is not developed by Google.

These are all projects under the Cloud Native Computing Foundation; its Platinum-level sponsors include: Alibaba, AWS, Apple, ARM, Cisco, Dell, Fujitsu, Google, Huawei, IBM, Intel, JD, Microsoft, NetApp, Oracle, PaloAlto Networks, Red Hat, SAP, and VMWare. Also included are 20 Gold-level Sponsors, 415 Silver-level sponsors, 3 Academic sponsors, 13 Non-profits, and 100 End-user supporters. That's a total of 570 organizations with a vested interest, and voting rights, on the direction of their projects.

But Google does a lot of technical oversight, right? A non-zero amount. Among the eleven people of the CNCF Technical Oversight Committee, Google has ONE representative (Saad Ali. Learn their names. These aren't just faceless mega-corps we're talking about; they're real people). Microsoft has the most, at two. Also represented is Apple, Intuit, Docker, American Express, Aqua Security, Lyft, Rancher, and Alibaba.

But, ok, Google "sneaks in" a lot of code, right? They're super-evil, that's the message we're trying to convey here. Of the top ten contributors to etcd [1], there are only two people who seem like they work at Google. The top contributor, by far, works at AWS.

Alright, fine. Kubernetes was created at Google, this much we cannot deny. Started by Brendan Burns (now at Microsoft), Joe Beda (now at VMWare), and Craig McLuckie (now at VMWare), all Google engineers a decade ago. Yeah, the thought leaders behind the project sure are still making that sweet sweet Google money, sure seems like it.

It's just tremendously incredible to me that anyone still believes Google has a large say in CNCF projects.

[1] https://github.com/etcd-io/etcd/graphs/contributors


Cloud Native Computing Foundation was spearheaded and initiated by Google as a competitive alternative to AWS's dominance.

Google knew they couldn't catch up to the AWS offering by selling service alternatives, so they intelligently and astutely zagged, and went all-in on an "open source cloud" approach.

Kubernetes is the Trojan horse - it gives people hope that they can develop with open source, and be cloud agnostic. But when the complexity overwhelms and drains them, they invariably look for a "managed" kubernetes solution to take the load off. And, wouldn't you know it, Google happens to offer a managed Kubernetes solution! Sure, so does AWS, but EVERYONE KNOWS (whether you like it or not) that Kubernetes was developed by Google, so surely they would be the ones best at managing Kubernetes ops in production!


> I feel as if I have to repeat this during most discussions about Kubernetes and its halo technologies: Kubernetes is not developed by Google. gRPC is not developed by Google. etcd is not developed by Google.

And your condescension and repetition would show that you don't know what you're talking about.

These were all--100%--developed by Google and moved into the CNCF, which is uncomfortably influenced by Google. The big providers have made peace with the fact that they lost the standards war. (And while etcd did not originate at Google, its adoption into Kubernetes was obviously influenced by its use of golang... developed at, where was that again?)

Kubernetes and gRPC started at Google, were exclusively developed by Google, and then much later moved into CNCF as a form of legitimacy.

If you have never tried to get a change into gRPC, I would encourage you to try. It took 3 years for Google to allow an upstream change that would not require rebuilding PHP protobufs from scratch on every request, because it would change the way their C library did things.


This [0] guy is going to burn himself out, going by his contribution chart recently.

[0] https://github.com/gyuho


That looks like a person gunning for an L6 promo!


great; so it's controlled by a big tech cartel, not a single company. that totally addresses the author's complaints >.<


I noticed you named the engineers that worked at Google, but not the top contributor, Gyuho Lee. Is that because he works at AWS?


> maintaining large open source software projects is soul crushingingly, destructively, enormously hard.

Are you making the case in this passage that complex open source projects are impossible unless completely controlled by very large, highly opinionated companies? That can be proven wrong by endless examples.

> Did the author join in on the discussion when these changes were made? Did they voice their concerns? Or did they just read about it after-the-fact and say "jeeze, that piece of software I played around with two years ago really took a turn, I'm going to write a post about how evil corporations are."

You don't know this to be true, so why make this accusation? Why not address the argument as it is instead of inventing ad hominems?


No. I'm making the case that nebulous philosophical explanations for why technology is the way it is are nearly always unproductive, and are certainly unproductive in this case, which involves a very liberally-licensed, open and libre-source project.

It's startlingly easy to write five hundred words. It's startlingly difficult to accomplish what the etcd team has; I therefore give the benefit of the doubt to etcd.

I don't fully understand why etcd switched from HTTP to gRPC. It seems weird to me. I wouldn't do that for any project I run. But I'm not going to write a blog about it. I'm not going to get angry about it. I'm not going to spin a conspiracy about ex-Googlers spreading gRPC everywhere they can. Instead: there are a ton of people developing and using etcd, who aren't me, who know what they're doing, and their results speak for themselves; they probably made the right call.

> You don't know this to be true, so why make this accusation?

Those are questions. But, yes, my tone was accusatory. Generally, if you have skin in the game, you don't write posts like this. That's why having skin in the game is so important; even if you lose the fight, you're left with positive remnants of the work you've done and the people you've interacted with. There are positive ways to have a negative discussion about technology. Instead, this article goes obscenely negative about etcd based on one thing the author disagrees with, then spins it into being negative about Kubernetes, Electron, Docker, corporations, and finally goes personal by directly criticizing the "expats from a megacorp ... who just want to build big ungainly architecture"

But also, you can look at their github and see their lack of contribution to etcd, if you really wanted to go full-stalker and demand evidence.


> I don't fully understand why etcd switched from HTTP to gRPC. It seems weird to me. I wouldn't do that for any project I run.

At a (very uneducated) guess, I'd imagine that the query payloads would be smaller with gRPC since it's binary, which would save money when running in the cloud on things like traffic egress, cross-zone, or cross-datacentre billing. HTTP can be unbelievably verbose.

It's also probably quicker to deserialise which means you get more bang for your buck on your machine spend.


> At a (very uneducated guess) I'd imagine that the query payloads would be smaller with gRPC

> It's also probably quicker to deserialise

These are pretty educated guesses :-). This kind of reasoning has been behind a lot of Google initiatives, such as protocol buffers. If you're slinging enough data around, smaller payloads and faster serialization/deserialization do have a significant payoff. Once you're marinated in that mentality it probably sticks.


Moral reasoning premised in attachment theory (which has been a very productive field in the past decades) and extrapolated to the inanimate would draw a different conclusion.

If you are that tied to your work you may be treating it as a child, and as a child, your moral reasoning will proceed with the underlying premise: "what is good for the child is good", leading towards a parenting of the project ahead of other needs.

But a software system is not a child, in fact. It's just an expression of ideas. "Skin in the game" signals that you've been dragged into being a parent. Is being a parent for software the right thing? Perhaps, if the software's premise draws upon a strong justification. But most of these tools have not emerged from standalone justifications, but from solving something else, which neatly ties back into the author's argument: the corporate needs are what are complex. And here attachment-based morality recurses: if the corporation is the child and you are tending to it, once again, you will put it ahead of other needs, and hence will develop the justifications for software complexity.

But if we zoom out a bit and look at the world generally, it's operating from a point of indifference: if the output of the corporation is pragmatically convenient, it is good, if it presents an obstacle then it is bad. And ideas - and hence software - that persist in the indifferent world, outside of the attachment relationship, are the ones that survive.

Which perhaps means that all of it is wrong, which isn't a very stunning conclusion if you still have to work with it. But that is philosophy for you.


You said: "Its startlingly easy to write five-hundred words. Its startlingly difficult to accomplish what the etcd team has; I therefore give the benefit of doubt to etcd."

That's a VERY confusing comparison and conclusion to draw from it.


> It's almost like there's no good representation in the open-source world for the solo developer or small team.

IMHO, there is a simple solution to this - release the software under the GPL license. Then no corporation will want to touch it.


If your project is sufficiently popular, that's safe; otherwise maybe not.

As someone else mentioned, when a giant corp absorbs a free project, you can reasonably expect them to maintain it. If you GPL the project, that won't happen. But if it's a popular project and it's GPL, it will attract more maintenance from people who can't free-ride on the corp's efforts.

(Note that the relevant measure of "popular" is not number of users, but number of people enthusiastic enough to actually contribute.)


Only if it's proper GPLv3. Google and Microsoft touch Linux everywhere: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


AGPL would be more apropos in this day and age of "SaaS all the things."


https://opensource.google/docs/using/agpl-policy/ They're absolutely allergic to it.


I love k8s, think it’s the best thing that happened to distributed computing since .. sliced bread? Anyway, it’s good, but I don’t disagree with the author I just have a different perspective. I see k8s as a vehicle for learning about distributed computing not just for me but the industry as a whole. Same thing about Linux, it’s just a stepping stone for node management and has served us well in figuring out a bunch of problems. K8s removes a lot of duplicated effort in the learning process for the industry, just close your eyes and think back to the past decades of ESB wars.. that was a waste!

“Under the radar” comes a new generation of better systems and components that learn from these learning vehicles of computing and will change how we deliver services fundamentally, but without k8s and Linux we would never have gotten there. Open source RISC-V co-evolving with seL4, WASI, WebAssembly and NATS comes to mind, but I’m sure the future will take its own path.


I also love k8s and agree with the author.

K8s gives us declarative infrastructure with eventual reconciliation. That's massive, and there's nothing else around that really does that.

I have a hard time taking anyone seriously who makes the case that k8s is bad but hasn't fully grokked the above.
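For anyone who hasn't grokked it yet, the idea compresses into a toy loop: declare the state you want, observe what exists, and keep taking steps until they match. Everything below is made up for illustration (plain counters, not any real k8s API):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // toy state: how many replicas we declared vs. how many actually run
        desired := 3
        running := 0

        // a real controller loops forever; five iterations keep the toy finite
        for i := 0; i < 5; i++ {
            switch {
            case running < desired:
                running++ // "apply": one corrective step toward the declared state
                fmt.Println("started a replica, now running:", running)
            case running > desired:
                running--
                fmt.Println("stopped a replica, now running:", running)
            default:
                fmt.Println("converged on", running, "replicas; nothing to do")
            }
            time.Sleep(100 * time.Millisecond)
        }
    }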


Personally I see this as coming from K8s having effectively two parallel uses.

One is for making it easier to manage large, complex infrastructure. The other is as a quasi-OS for containerized software.

Instead of using a new technology to move infrastructure complexity away from developers we now run development environments on k8s, piling even more complexity and layers of abstraction on.


Disclaimer: I was part of the Apache Aurora PMC before the project went into the attic.

Not sure what you mean by declarative infrastructure but eventual reconciliation has been implemented in at least one other project (now retired) called Apache Aurora[1] which runs on Apache Mesos[2].

Twitter ran (and probably still runs) a combo of these to huge scale and great success so I don't think it's fair to say that there's nothing else around that does that :).

K8s is great, but we should be careful not to rewrite history.

[1] http://aurora.apache.org/

[2] http://mesos.apache.org/


I'm familiar with these having used DCOS a fair bit.

>K8s is great, but we should be careful not to rewrite history.

Why's that? That's kind of what tech is all about, no?


> vested interests from large companies are able to introduce huge complexity into simple, well-designed projects.

In this case, the vested interests are interested in making it as easy as possible for real-world customers to reliably deploy their applications to the Cloud.

That's not an interest that favors unneeded complexity. The complexity comes from the need for a container orchestration system that satisfies the needs of large organizations. If you could satisfy those needs more simply, the incentives are to simplify.

Undoubtedly there is space to build a simpler tool with a subset of Kubernetes' features. And sure, you can pick at specific flaws in the way it's evolved.

But I'm not so convinced by the claim that the whole thing is a mess because tech employees like replicating the stack they are used to.


When I've worked on small teams we avoided complexity by hardcoding locations and using GUI management tools and all these other things that still exist - and honestly, are better today than they were in the past, when you'd also need to get yourself a rack in a datacenter to run stuff.

I think there's a big swath of people adopting fancy tools to try to solve problems they don't have and won't have for years, but I don't see that having destroyed the ability to do things the easy way.


Meh, having your opener call out a specific project as bullshit has that effect.

Pulling the snark would have resulted in more people getting the main point.


Definitely true. The author could have chosen differently and it would have probably led to more constructive comments.

However, I'll also point out that focusing on the snark and ignoring the rest of the article is also a choice made on the part of the commenters.

Given a choice, I personally prefer to take the strongest and most charitable interpretation of someones ideas before choosing to criticize them. At least the discussion is interesting that way...


The things I write for the web usually go through a "lazy comment" filter where I pull out references that would lead to lazy comments. What often gets stripped is a single reference to a specific example when I'm talking about something general, because invariably the comments will focus on "how I don't get the software" or "how I singled it out" or "how what I wrote isn't even true if you look at it this way, so my point is now invalid". Either I mention no examples and have the commenters come up with their own, or I include several so that I don't get attacked by one specific group of people.


While I agree with you, I also think that snark is what makes the blog personal to the author. If it's not a personal reflection of one's thoughts, some may not consider it worth doing.


Perhaps the people who treat CoreOS being called bullshit as dispositive of the entire thesis, and as grounds for dismissing the author rather than engaging with his argument, aren't the intended audience.

Pull the snark out and there would still remain some element of this post that kicks off a violent HN centithread conveniently avoiding the core issue. You know it, I know it.


Snark directly related to the main point is fine. The CoreOS comment is unrelated.

In fact the author liked etcd in the time period when it was directly associated with the "bullshit" project. And even mentions "but that doesn't really matter".

Bashing K8S is at least related to the main point.


Well, those that read the article, perhaps. Which would be...nobody?


> Pulling the snark would have resulted in more people getting the main point.

Or resulted in very few people reading it at all.


The author may or may not have the correct conclusion about over-engineering coming from big-co alumni, but if they can't get their one example correct, why should we discuss their post? Are their insults that novel?


> It's almost like there's no good representation in the open-source world for the solo developer or small team.

Availability heuristic: large companies get more press. Partly because they're large and therefore more likely to attract readers for stories about them, partly because their scope is vast and so journalists know about them, and partly because they intentionally seek press for their work.

The scratch-an-itch period of mass-market OSS is long gone. If indeed it ever truly existed.


I don't get it, isn't that the point of open source and free software, why doesn't the author, or a group of like minded people simply fork the old software and release a SimpleEtcD?

Of course, you could argue that the people who made it more complicated in the first place could/should have forked instead of taking over, but that ship obviously sailed a long time ago.


The problem, as I pointed out elsewhere, is that the post makes very few constructive comments and presents a lot of opinions and assertions without justification or reasoning, which makes it extremely hard to respond to meaningfully. As someone else pointed out, if it were an HN comment, it would be flagged.

I don't agree that this is a statement about complexity so much as a rant about the author being inconvenienced by technology they personally deem pointless and unnecessary. If the CoreOS comments aren't a sufficient red flag, the HTTP/2 ones really ought to be. Also, some etcd users dispute the assertions about newer versions being more complex internally. On the other hand, it's difficult to argue with overly cynical attitudes.


But that's just the "circle of life" no matter what. Things start very simple, and then inevitably get ambitious and try to add new gizmos. Sooner or later you end up with a monster.

Over and over again.


Historically FOSS has been far more resistant to this cycle; not all FOSS projects, but many. It's much easier for FOSS projects to just say no, and they can spend years or even decades refining things. In the commercial world you can never say no. Not only that, but you have to constantly add features.

Now that corporate interests dominate in many corners of the FOSS sphere, this benefit of FOSS is diminishing. Which is a shame, because it's a big reason why FOSS was so successful, and why so many old commercial projects collapsed under their own weight, even though they always had more features than their FOSS counterparts and thus theoretically should have only grown their user base.


This is true - but I think the interesting (and understated) part of this argument is Big Tech's influence on the process.

Say I want to fork etcd or build a greenfield alternative - I now have to compete with (or support) the Hivemind's preferences in direction/progress. Who's to say their way is the "best" way?

I think this tension is much greater than it was 10 years ago.


> In the meantime, the simpler version of the software is long gone

The thing is that with free software the old software is almost never really gone. It's still out there, free for your perusal. That is the beauty of free software: it's always an additive process. But it does also put responsibility on the user to pick the patches they want, either themselves or by proxy. Ultimately you are responsible for the free code you are running, not some vague upstream dominated by FAANG et al.


While this is technically true, using old, unsupported versions of software (such as etcd) in production is a recipe for disaster, especially when you start running into problems with it.

I think, in the case of etcd, it would have been better for the k8s folks to fork the project and add whichever features they felt were needed to be able to use it for k8s. Instead, it seems like they swarmed the original project and added a ton of features that made it a behemoth of complexity rather than the simple tool it once was.


> I think, in the case of etcd, it would have been better for the k8s folks to fork the project

This would only make a difference if someone was willing to maintain the non-k8s fork. From what I understand, nobody was willing to take that on, so the k8s version became the only version by default.


It's not unsupported if you support it yourself, or hire/contract someone to support it for you.


It doesn't seem to me like the comment section is weird at all. This is kind of a perfect object lesson on pitfalls of writing a persuasive blogpost.

Basically, if you want to argue for a somewhat controversial conclusion, you shouldn't back it up with even more controversial and unsupported (or weakly supported) premises. Particularly if "even more controversial" means "in disagreement with the vast majority of your industry."


When you find some cool tool you can: 1. Use it as is. 2. Make behind-the-scenes changes that don't alter functionality. 3. Make changes that add functionality. 4. Make changes that remove existing functionality.

4 is the problem. It's like joining a car club and then pushing to have petrol cars banned. It is really shitty to find a cool tool that you can use, then try to force-remove features that existing users rely on.


"... of all major software components today."

Thankfully, we still have the "minor" components to ourselves.

Popularity has its downsides.


I think a large problem is open source projects actively marketing themselves, and seeking large user bases. This leads to a wasteland of zombie frameworks, and developers selling out (so to speak) in exchange for the reputation.

It is important to note, too, that etcd is permissively licensed and anyone can run the old version in whatever way they please.

The final, and biggest, takeaway? Hacker News has apparently drunk the Kool-Aid and will reach for any convenient justification to write off an opinion it disagrees with. Stockholm syndrome all through the comments. Conservatism and pragmatism apparently are no longer current in the cloud economy.


> The key point of the article is not really being addressed here:

Perhaps if the author wanted their key point addressed they could have tried focusing on it, rather than on an ill-considered spray across a group of largely unrelated bugbears.


The simple version of etcd still exists. Nobody deleted the code. Just use that.


Why can't solo developers or small teams represent solo developers or small teams? If an etcd with a thin, inflexible API is of value, a small team should be able to fork and maintain it (the fact that it's smaller and simpler ought to imply it's a lot cheaper to maintain than a multi-API etcd, right?).

If the economics of the situation is "large teams have to maintain tools for small teams and individual users," that doesn't actually scale well, does it?


A major problem that leads to dysfunctional bloat is the inherent ego traps of software developed by teams of well-adjusted people.

Even if it's for the good of the quality of a product, it is not often easy to tell your teammate that they wrote an unnecessary component that should just be deleted. I have done it lots in my short career (and in terms of engineering outcomes, it's been spectacular), but it's easier for me, I think, since I'm more disagreeable than the average person.


Couldn't agree with the sentiment of this comment more.


There was some generic big-company hate thrown in there, but the author really didn't say anything half as coherent in the OP as you did.


> introduce huge complexity into simple, well-designed projects

why does cmake come to my mind?


They haven't gotten to beanstalkd yet...


The author is not aware that etcd is still offering the JSON API he mentioned. https://news.ycombinator.com/item?id=23835651 so his main argument doesn't hold in the first place. The rest of the post is "Google's technologies like protobuf/grpc influence people too much, everything should be a json api".


He's aware of the old API, but believes it will be killed soon. From the article:

"The v2 API lives on for now, but upstream threatens to remove it in every new version and there will surely come a time when it'll be removed entirely."


These open source projects you talk about were developed by profit-seeking enterprises for solving complex distributed systems problems at scale.

Because they chose to make them open source (not out of altruism, but out of the same perfectly good profit-seeking business motives), these solo developers (who are also profit-seeking) can benefit from those projects for free and can afford to build to serve a lot more users than they would be able to otherwise.

I don't get the entitled attitude of the author. If he wants a 'simple' system by his standards, he can build one easily enough with modern tools like Go, Node.js and Rust.

Btw, web development tools have never made it simpler than it is today to operate a safe and secure production web application serving any number of users. I'm saying this with about two decades of experience doing the same.

