
There ought to be a name for the tendency that, as tools get better and better, more and more of your time goes from having your mind in technical-space to social and news-space. It's like the authority to create goes from the individual first-principles (by necessity) maker, to the control over development being in the hands of an external group, and then all your time is spent keeping up with what they're doing. A similar thing happened with a lot of JavaScript frameworks. It also happened with the transition from building servers from the ground up, to it all being managed by AWS.



I wish I could give you more upvotes. You are describing a somewhat hidden psychology, which I think provides a rational basis for much of "Not Invented Here" psychology. We tend to think that "Not Invented Here" psychology is irrational, but in fact, the loss of control over possibly crucial technology is an important cost, which makes all of us stop and re-consider whether we really want to use some software developed by an external team.


And it's not only that: time spent not doing our own designs (and instead spent memorizing how to use magical frameworks) is time not spent advancing our technical understanding.

It's a sad state when otherwise very intelligent people think it's bad practice to use plain C and a clean OS API that has been stable for decades (because that was somehow "magic" and impossible to understand and error-prone), and advise you to use the Boost filesystem module just to concatenate two paths.


> Time spent not doing our own designs (and instead spent memorizing how to use magical frameworks) is time not spent advancing our technical understanding.

I can't upvote this enough. I used to do mathematics, and there was the story of a professor who would hold up a book and say, "You should know everything in this book. But don't read it!" Which is to say, you have to go through the process of discovering mathematics to really understand it (maybe with a bit of guidance when you get really stuck). The skill of building complex software systems is no different.


Sure, and those of us experienced in writing software tend to avoid reinventing the wheel (in a buggy, untested way).

I see the value in learning by building yourself, but from a software engineering point of view, using a tried and tested framework is likely to give you a higher-quality product in less time.


Well, the framework I last had to use (Qt) had bugs that simply can't be fixed by users (memory leaks, a double free leading to a segfault when exiting after reloading the QML engine) and immature modules (for example translation); it forces complicated types on the user, forces bad architectural decisions on the user, and significantly increases compile times...

> using a tried and tested framework is likely to give you a higher-quality product in less time.

This is a common sentiment, but note that a framework has huge handicaps:

- it doesn't know your business requirements and concepts

- it must be suitable for many software projects, so it carries features you'll never need

- there is a clear maintenance boundary (framework vs. your own code), which requires a complex interface with maintenance overhead and typically forces you to use concepts that don't really match your requirements

If you're experienced in the relevant domain it's almost always simpler to do it yourself / reuse your own work.


Writing your own Qt sounds like a long road, however; it would likely not be as fast as making workarounds or providing bug reports or patches.


I would never write my own Qt. Why would I? In this case I was creating a simple dashboard type application, and that can be easily done using fewer dependencies.


I don't think reinventing the wheel is the right approach. I think careful analysis of bugs and design deficiencies like the ones you experienced, before jumping in and using the framework, is the way to do it.

This is why I absolutely love seeing hate-filled developer rants about technology, with deep-dive analyses and links to bug trackers. That's how I was saved from learning Ruby on Rails back in 2008, when most developers gushed about how awesome it was after reacting to its slick marketing and building a tiny website in 5 minutes.


What did you prefer instead?


I learned Django instead. I'm cognizant of its flaws but I'm still relatively happy with it.

I think the designers originally intended it to just be "Rails for Python", but they recognized the pain caused by Rails magic and subsequently worked on making Django less magical.


Concatenating two paths is difficult if you care about any of the following: security, multiple OS's, Unicode (multiple code units, validity, combining characters), file system restrictions, etc.

Your "two decades" maybe holds for Linux, but what about Windows or MacOS???!

I have seen too many people use string concatenation.

I think an intelligent person would recommend using the normal library (appropriate for your language, assuming it is well written), since usually your program will be doing many other filename/path manipulations too.
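
Even the "simple" plain-C version isn't trivial once you start listing what it ignores. A rough sketch (the helper name is made up; this handles only the trailing separator and truncation, and ignores Unicode, drive letters, UNC prefixes, "..", symlinks, etc.):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative helper, not a real library call: join dir and name into out.
       Only handles a missing trailing separator and reports truncation. */
    static int join_path(char *out, size_t n, const char *dir, const char *name)
    {
        size_t len = strlen(dir);
        const char *sep = (len > 0 && dir[len - 1] == '/') ? "" : "/";
        int written = snprintf(out, n, "%s%s%s", dir, sep, name);
        return (written >= 0 && (size_t)written < n) ? 0 : -1;  /* -1 = truncated */
    }

    int main(void)
    {
        char buf[260];
        if (join_path(buf, sizeof buf, "/home/user/project", "data.txt") == 0)
            printf("%s\n", buf);
        return 0;
    }

And that is before caring about any of the platform-specific rules above.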



Exactly my point - that function doesn't deal with modern Windows. MAX_PATH is 260 characters. Yet Windows now supports longer paths (\\?\ prefix). So I presume there is another Windows function to combine long path names (and maybe canonicalise Unicode better).


> Yet Windows now supports longer paths (\\?\ prefix).

I know. And this is the attitude that leads to complex software. It's a self-fulfilling prophecy: "I can't do it on my own since the problem is so complex". In which case the problem does get complex.

I recommend not wasting time supporting this crazy feature (unless you are writing infrastructure code for tools and you're required to - in which case I'm sorry). 260 character paths are more than enough for any project. And I certainly recommend against using the crazy abstractions from Boost::filesystem.

(Personally I think it's an unfortunate example. I'd prefer to avoid paths from the start, since nobody understands the semantics of hierarchical filesystems. Alas, typically you need to deal with them to some degree).


It is not a crazy feature.

It is a necessary feature for many development environments originally written for Unix (e.g. nodejs), because Unix file systems usually don't have such a low character limit.

Or maybe Windows has mounted, or is network-sharing, a Unix filesystem... in which case your program had better deal with long paths (or just fail?)

Really... You are showing exactly why one uses a library: so one doesn't need to care about the "complex" details, because one hopes the library does a good job of managing them as well as possible for you (although you still need to know the details to use a library function safely).


> Or maybe Windows has mounted, or is network-sharing, a Unix filesystem... in which case your program had better deal with long paths (or just fail?)

260 bytes is long. I've never seen a \\?\ path in the wild, and I don't want to. It's a misdesign (well, I'm sure the designer didn't want to design it...). But I'm repeating myself... I can't practically deal with paths longer than, say, 64 characters (I can't read them, let alone type them). Fix your paths.

And again, this is off-topic. This thread was about not using a crazy third party library, when the authoritative semantics (however insane) are contained in the OS interface.


That is such an awful toxic viewpoint to take.

"Fix my paths"!? Fix your software! I work from a network share that is about 70 chars long to get to my one project's folder. The beauty of hierarchical file systems is that I can cognitively ignore all the previous paths and just work from there. However when your software doesn't work because "the solution is a misdesign", it's not my path that is wrong, it's your software that is broken and you that are stubbornly refusing to use solutions to well known problems.


[flagged]


>Still supported by 260-byte filepaths.

Because programs that thought "260 is enough" now only have 190 left, and I've seen (and had software not work with) several pieces of software that use more than that, because the assumption is that this is a solved problem on all OSes and that 260 is no longer a hard limit.

>Asking nevertheless: Why? The path to the network share should be easy to abbreviate.

Because that doesn't actually solve the problem. I can abbreviate it on my connection, but not on the server, and if the server is running some poorly written software, it will choke when from my point of view i'm only using 190 characters. As for why, it's segmented by country/company/department/team/name/employeeid/

Then it is my folder, where I have things like clientInformation/[softwarePackage]/[client]/[testFiles|documentation|uniqueArchitecture]

Just the example without any real names is over 100 already. I just went and looked, I was being conservative with my 70-character estimate, a few of my folders are over 180 characters long just to the [client] part of my path.

Also, why abbreviate it when we can write software that can handle that for us? Why should I need to shorten my paths to potentially confusing and misleading names just so some people can ignore solved problems in the name of "complexity"? Not to mention that abbreviation isn't really a "solution" (what happens when there are collisions? Now you need a complicated "maintained and interpreted by humans" system to manage it; that's not simpler, that's more complex).

I'm honestly not going to respond to the rest, because we both know that those are strawman arguments.


[flagged]


But this is solvable! You are the one claiming it's too complex then refusing to use the well tested solutions...

>You have insanely long paths, and that's still miles away from a reasonable limit (256 bytes) but still complaining.

Because that's just the path to a folder that still doesn't contain any data! Unpack an archive from a server in one, and suddenly poorly written software is blowing up because someone used a GUID as a folder name, or there is software running there that assumes "why would paths need to be less than 60 characters!?" and nests files into a bunch of folders, using the filesystem as a tree of hashes.

I think the problem here is that these aren't imagined problems, they are problems I've run into in the past year or so. The difference between this and your strawman arguments is that there are solutions to all of those problems as well, but for some reason you don't see them as superfluous.

I don't know why I let myself get roped into a pointless internet argument. "You clearly lack experience with low-level issues"? Man I'd love to see how you determined that from a few comments about file paths...


> Your "two decades" maybe holds for Linux, but what about Windows or MacOS???!

Is my target application going to run on Linux? If so, then I do not care about Windows or MacOS, just like I do not care about CBM64 or Amiga.

My goal is not to write an awesome system that wins design awards in handling of obscure edge cases thrown at it in a coffee shop by hipster developers. My goal is to build a system that solves my business problem.


...and since you wrote straightforward code not relying on obscure platform-specific technicalities, even if you wanted to port it later, the port of the filesystem code was just a few #ifdef's away.
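
Something like the following is usually all it takes (a minimal sketch; the helper names are made up for illustration):

    /* Hypothetical example of isolating the platform-specific bits behind #ifdefs. */
    #ifdef _WIN32
      #include <direct.h>
      #define PATH_SEP '\\'
      static int make_dir(const char *path) { return _mkdir(path); }
    #else
      #include <sys/stat.h>
      #include <sys/types.h>
      #define PATH_SEP '/'
      static int make_dir(const char *path) { return mkdir(path, 0755); }
    #endif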


Time spent not doing our own designs (and instead spent memorizing how to use magical frameworks) is time not spent advancing our technical understanding.

That's a false dichotomy. The time it takes to "memorize a magical framework" is far less than the time it'd take to learn how to write the code to do what the framework does. Consequently, you can learn a framework and some other technical understanding in the same time it'd take you to learn only enough to implement your own version of the framework. In most circumstances that's actually the more beneficial thing to do. You'll be further forward in your understanding of the technical stuff.

You're also assuming that frameworks are written by individuals. They're not. In the case of some large frameworks it'd be practically impossible to implement what they cover on your own. You simply can't learn the underlying principles and then implement them all in code yourself.

It's definitely worthwhile learning the basics of the languages you use, and you should be working on things that improve your code and understanding as much as possible, but it's very likely in most cases that will mean building on top of someone else's existing code rather than implementing everything yourself.


These are still claims without any context. It mostly depends on how experienced the programmer is. And most importantly, note that one never needs all the functionality from a framework. Typically it's only a very small part, and often the existing functionality in the framework does not match the requirements 100%.


Very often the match is good enough that it pays off to slightly align the requirements instead of patching the framework. The amount of man-hours poured into just the very core parts of Rails, for example - processing and dispatching requests, safely decoding the input from a webserver into a useful set of parameters, routing the request to the proper handler, and rendering and returning the response - is huge. Certainly, you could take something slightly more modular, such as Padrino, but that's still a mind-melting amount of code if you look at all the libraries and dependencies.

You could reimplement most of the basics, but that would be months or years of work, and probably still buggy as hell. I've seen my share of "oh, we'll just build our own framework" projects, and they all turned out to be much more complex than the initiator expected.


I would think this claim is self-evident:

> In the case of some large frameworks it'd be practically impossible to implement what they cover on your own. You simply can't learn the underlying principles and then implement them all in code yourself.

Not even DHH would claim to have been able to build the whole of Rails by himself.


I would disagree about that.

Learning Django taught me a lot about the proper way to do things. When I first started using it I implemented a lot of my own stuff myself (as I wasn't aware that the framework had certain features). I basically wrote my own equivalent of class-based views before I understood Django's own. (http://ccbv.co.uk/ helps a lot with understanding them)

Reinventing Django's class based views is something that I have seen in a few inherited apps (like where I currently work).

If your application is going to be something long-lived that gets a number of developers while in maintenance mode, then memorizing a standard framework is a good thing - new developers should be familiar with how the framework works - as opposed to needing to spend time going through someone else's home-grown code.

Then there is the issue that your own code will be untested relative to a framework that has some level of popularity.

Documentation is likely to be better with a framework as well.

(This is my perspective coming from a Python / Django background - I have noticed that JavaScript frameworks and libraries often have a lot more problems).


The most valuable skill is knowing when to externalize your tools. You don't always want to reinvent the wheel every time you need something, when you have deadlines to consider.


Exactly this. There's a ton of gray area here. You have to pick your battles as best you can.

Another example is game engines. It's hard to deny the value engines like Unreal and Unity provide. They are hard to ignore and have thousands of expert hours put into them.


A good way to pick battles is not to fight most of them. There are so many solutions that don't have any real problems...


Could you provide an example of solutions without real problems, game engines or otherwise?


Basically anything overengineered? Concrete example: C++ "manager" objects. Why not finally learn how to structure applications and keep allocations and resource use in check? The program will be so much simpler, compile quickly, be easy to understand (execution threads stop jumping around like crazy), and typically have fewer memory leaks / use-after-free bugs, etc.

RAII, garbage collectors and other fancy inventions for freeing resources from the call stack automatically? Not needed. Use global resource managers. Globals are needed and semantically the right thing. Not a problem.

Any crazy programming language with all the features they could conceive of? The one feature you need is guaranteed to be missing. Better: express your problem yourself and write a simple generator script that translates your concepts (mostly as plain old data) to the minimal language.

Another: XML (and even JSON), possibly for anything except true document markup. It's slow and doesn't buy us anything. Yes, there are ready-to-use parsers, but after parsing you still hold a mess in your hands (and only strings / floats). Finally learn database basics and manage data in tuples. Make a text-file parser that takes simple table descriptions (= lists of column types) and then parses tuples from single lines and puts them as typed data in arrays of structs - done!
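
To make that last point concrete, a rough sketch of what I mean in C (the record layout is just an example I made up):

    #include <stdio.h>

    /* Example table: one tuple per line, e.g. "widget 12 0.75" (name count price). */
    struct Item {
        char   name[32];
        int    count;
        double price;
    };

    /* Parse up to max tuples from fp into items; skips malformed lines.
       Returns the number of rows parsed. */
    static size_t parse_items(FILE *fp, struct Item *items, size_t max)
    {
        char line[256];
        size_t n = 0;
        while (n < max && fgets(line, sizeof line, fp)) {
            struct Item it;
            if (sscanf(line, "%31s %d %lf", it.name, &it.count, &it.price) == 3)
                items[n++] = it;
        }
        return n;
    }

No XML parser, no DOM, and the data comes out already typed.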


How does a "global resource manager" work and in what way is it better than RAII and garbage collectors? Googling wasn't helpful.


Just global state. A simple example: a global variable that holds an allocated memory buffer. You initialize it at program startup and tear it down at the end (you can be sloppy and leave out the latter). While the program is running, you re-allocate as needed.

This is better in that it really is a very simple thing to do. There are no possible memory leaks, and what happens is much more explicit - you have precise control, and no bumpy control flow (much nicer for debugging).

But it doesn't have to be memory. If you want to look at my project https://github.com/jstimpfle/learn-opengl (careful - it's not tidied up. But I think it demonstrates my point), most resource state there is global module-wide state. I simply have init_module() and exit_module() pairs that I call from the main function. Problem solved. Not a headache at all.
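
The whole pattern is roughly this (a minimal sketch; the names are illustrative, not lifted from that repo):

    #include <stdlib.h>

    /* Module-wide state: one global scratch buffer, (re)allocated as needed. */
    static char  *g_buffer;
    static size_t g_buffer_size;

    static void init_module(size_t initial_size)
    {
        g_buffer = malloc(initial_size);
        g_buffer_size = g_buffer ? initial_size : 0;
    }

    static char *get_buffer(size_t needed)
    {
        if (needed > g_buffer_size) {          /* grow on demand */
            char *p = realloc(g_buffer, needed);
            if (!p)
                return NULL;
            g_buffer = p;
            g_buffer_size = needed;
        }
        return g_buffer;
    }

    static void exit_module(void)
    {
        free(g_buffer);
        g_buffer = NULL;
        g_buffer_size = 0;
    }

    /* main() calls init_module(...) once at startup and exit_module() once at the end. */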


Is that in support of using the Boost filesystem API?


IMO, only if you're using g++ or older versions of the standard. MSVC, clang and ICC have all supported the experimental::filesystem module for years at this point.


I don't use C++ outside of games programming, I'm not familiar with the Boost filesystem API.


It depends on what you're doing with it, I'd guess.


The number of security exploits due to memory corruption in software written in C over the last 50 years proves that it isn't that well understood.


Recently, I spent some time trying to run several machine learning jobs simultaneously across AWS machines. This was a fairly simple use case: all the jobs were totally independent of each other, you could run them by calling a Python function with different parameters.

I'm mostly a stats guy and not much of a programmer. I got a hacky, do-it-myself version using Python scripts up and running with about two hours of work, and learned about Python threads for the first time in the process, a tool that I can reuse in many different areas.

I then tried to do this the "right" way, which according to Amazon is to use Docker and the AWS tools ECR, Batch, and SQS. It took me about 10 times as long to get that working. Yes, this offers much, much more functionality - but most of it is stuff I didn't need. The only real gain I got was my models running about 20% faster, and the knowledge I learned is ephemeral.


I also like to ask myself, "Do I want to become an expert at working with external software X, or do I want to become the kind of person who can build software like X?"


That's great. However, how does the time to solve the business problems fit in?


That's a great heuristic to keep in mind, thanks for that.


Worth dropping a link to this Joel Spolsky article, where he discusses this concept and talks about the fact that the Excel team (in the 1990s) had their own C compiler:

https://www.joelonsoftware.com/2001/10/14/in-defense-of-not-...


I am glad I read this just now; I was jumping from hoop to hoop, and not only hoops but craft paradigms. It reminds me of the Fred Brooks idea that "there is no silver bullet".

Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity." He also states that "we cannot expect ever to see two-fold gains every two years" in software development, as there is in hardware development (Moore's law).

(from Wikipedia)


Glad it helped. Brooks should be required reading for every developer: software is hard.


Sure, NIH syndrome can be rational for technology that is crucial/central to your system.

The problem is it is often used to justify re-inventing even mundane stuff. I once worked with a client who wanted to implement their own bug tracking system. The client’s main product was something totally unrelated.


For 99% of companies, including the ones whose employees post here, containers and container orchestration are resume-driven development by devops/engops/SRE/senior devs, caused by improper hiring, improper vetting of ideas and personnel, and a fixation on cargo cults.

It is no different from cargo culting in sales, where a perfectly functioning company that is slowly but surely building itself up brings in a CRO (can you imagine this showed up as a title?!) who says "We will sell differently! Give me account managers! Give me sales development representatives! Give me customer happiness coordinators and we will sell to enterprise accounts at 50x contract value!" So the company hires a hundred people in those roles and it does look like the contracts are creeping up. So the CRO says "I know! The issue is that we are spending too much time on paperwork. Hire me salesops! And give me all these Salesforce integrations! And special IT people reporting to me operating these new tools." So the company hires more people for those roles, burning through millions of dollars in salaries, but at the end... the new customers are still just a trickle.

Eventually the CRO gets fired, most of the people hired during that push are gone as well, and millions of dollars have been spent. If they had spent those millions on Google ads or Facebook ads they would have definitely gotten more revenue, but plain Google ads and plain Facebook ads are not sexy.


If even mundane stuff keeps breaking your workflow and demands all of your time to keep up with, then more power to you for reinventing it.

All the better if it is simple.


Bug trackers are one of those areas where a lot of companies should build their own. Everyone has their own workflows and information to capture, and they end up either conforming their process to the bug tracker or spending more time configuring the bug tracker than it would take to build a new system from scratch. Those uber-configurable systems always suck to use.

Ones like JIRA can take weeks to set up for your org and include their own query language. All of this complexity just to do something so simple is not rational.

As long as you don't over-engineer it, it's only a day's work to get something up and running, and it's a great project for interns.


> Bug trackers are one of those areas where a lot of companies should build their own.

This sounds insane. I have not worked at a place where the workflow was so holy and important that it couldn't be captured in a near default JIRA install.

Most of the customization asks I've seen with JIRA come from dysfunctional organizations that demand new swimlanes like "QA" and "product approval" and "spec design".


I have not worked at a place where the workflow was so holy and important that it couldn't be captured in a near default JIRA install

JIRA out-of-the-box is surprisingly sane - most people who hate JIRA really hate the custom workflows their own organisation has inflicted on them.


Making a serious bug tracker is hundreds, more like thousands, of man-hours. You're saying that instead of investing 50 hours into configuring Jira properly, it's better to do that? Even with intern work, it doesn't make sense, especially since interns will be gone tomorrow ;)


I think you're either overcomplicating the requirements or overestimating how long it takes to throw together a system with half a dozen tables and a dozen views/forms. Or underestimating just how far you can get with such a simple system. Then you've got something tailored to your workflow.

> Even with intern work, it doesn’t make sense, especially since interns will be gone tomorrow ;)

But there will be a new batch sooner or later that can extend and update it.


Most places only need a fraction of the functionality of Jira.


Most developers underestimate the complexity of any project :)

Especially one that becomes a crucial component of a company's workflow (as is the case for bug trackers in any properly run software company).


I would say it's more the case that developers unnecessarily make projects more complex than needed. Certainly on the last couple of projects I have inherited.


This is why BPM systems exist. To take all of those disparate “Good enough” tools that don’t fit your workflow and make them fit your workflow.

Really a tech stack that doesn’t get nearly enough attention IMO.


I don't think that's the whole picture. You could copy an interface (and even implementation) and fork it. But often with NIH we see a reinvention of a technology without even looking at alternatives; often this ends badly (in Linux land much more often than not it seems).


The problem is that reading (and learning from) code is hard. The other problem is that there is so much bad code out there (most of my own is certainly not an exception) that it gets even harder.

And it's not about the code. Writing code is easy. It's about finding the right problems first. And then it's about finding the right abstractions. The easiest way to build a clean conceptual world is to start with a clean slate and ask yourself before introducing new code, "does this code solve a concrete problem? Do I really need it?"

Unix had simple and clear ideas, and it has many reimplementations (NIH!), and most are not that bad - are they?


I gave him mine on your behalf :)


A description of it -- not a name for it -- is corporate control of an open source project. Trying to keep up with Angular or Kubernetes or Go is not a problem at the Google from whence they come, because the technologies are primarily used by, and hence primarily designed for use by, teams, not individual developers.

A team's brain can schedule a time where part of it is training or studying while the rest of it is making progress on the task at hand. A team's brain can resolve ambiguity and complexity more easily because it has multiple human brains and those brains have different strengths and experiences.

A single developer can't duplicate a brain. A small team of developers can't replicate the larger multi-team brain of a Google org-chart. Google org-charts are the context for which its tools are designed. The same is true of other corporate open source technology bundles like React and AWS.

People kick themselves for not grokking technologies like Kubernetes without realizing that it wasn't really designed for their use case, isn't documented to be easy to pick up, and isn't managed with the cognitive load of individuals in mind. The time scales around which these technologies are designed are FTE-months or years.

If a two-pizza team can get up to speed in a month, it means an individual will probably take about a year, given equal cognitive efficiency.


I disagree on Go.

Go is well designed and takes upgrading into consideration. Upgrades very rarely break anything. You should just be able to recompile that 1.3 code on 1.8, etc.


On the other hand, dependency management in Go in any sane way is such a byzantine task that it takes ages to get working, and even then it still doesn't.

Given an empty set of environment variables, setting up a new Go project with dependencies, and all in a way that people cloning it later on can also use the dependencies with a single command - without vendoring dependencies, without building custom scripts - is impossible.


It's impossible with first party tools, but I've had a great experience with Glide.

That said, Go drives me up a wall. Using interface{} to muck with JSON and typecasting every step of the way is just my nightmare. It's inelegant in every way.


Typecast every step of the way? Methinks you are either doing something wrong or dealing with unstructured JSON.


The pain of dealing with unstructured json/mapping types is exactly what I'm referring to. :)

That said, I don't think the language as a whole lends itself towards elegance - that's pretty much the opposite of their goals. I don't agree that their model for simplicity is the only (or best) model for simplicity is all.


I concur with the sentiment.

The risk is still there for dependencies, but it helps that the community for the most part follows "a little copying is better than a little dependency" as an adage.


These are excellent points and very interesting food-for-thought, the 'organizational thinking' that leads to the tools being terribly complex and steep to learn for individuals.

But don't these tools have product managers, and don't product managers create tools to serve customers?

The only cynical viewpoint that I can think of for artificial (or lazy) complexity is consulting dollars.


Google does consulting now?


They do, but I think most of their consulting is driven by a desire to enable other organizations on their tech (hence buying more).

So definitely more helpful than the "obfuscate + delay = profit" model of most consulting agencies.


Now?

If you spend enough they always have.


This has been my building worry as I have watched the ongoing changes in the Linux ecosystem in recent years.

It feels like there is an echo chamber happening, where a small group of people working for maybe 2-3 companies have decided that their vision is the correct vision.

And everyone who says otherwise is a hater and a fossil.


Well, hundreds of medium and large companies now rely critically on Linux, so it was to be expected that they would slowly cooperate to steer its further evolution.

People do respond that the whole codebase is open source and hence if some really bad change happens the community can always fork from the last known good commit. But this ignores a subtler issue: soon enough so many of the critical subsystems and so many of their contributions will come from individuals directly or indirectly working for corps that if/when those subsystems start adversely impacting the original Linux philosophy, stripping them all out and starting over again would be not much better than abandoning the whole platform.

This is exactly why alternative efforts that seek to maintain working and up-to date alternative init systems, display managers and desktops are critical to the long-term health of Linux.

Linux thrives on multiplicity of choices, which is anathema for corps who always want to consolidate. But that road eventually will weaken Linux for the general community and varied use cases.


Joel Spolsky talks about this (as a somewhat adversarial tactic, even if it's not purposely so) in "Fire and Motion"[1]

"Watch out when your competition fires at you. Do they just want to force you to keep busy reacting to their volleys, so you can’t move forward?

Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft. People get worried about .NET and decide to rewrite their whole architecture for .NET because they think they have to. Microsoft is shooting at you, and it’s just cover fire so that they can move forward and you can’t, because this is how the game is played, Bubby. Are you going to support Hailstorm? SOAP? RDF? Are you supporting it because your customers need it, or because someone is firing at you and you feel like you have to respond? The sales teams of the big companies understand cover fire. They go into their customers and say, “OK, you don’t have to buy from us. Buy from the best vendor. But make sure that you get a product that supports (XML / SOAP / CDE / J2EE) because otherwise you’ll be Locked In The Trunk.” Then when the little companies try to sell into that account, all they hear is obedient CTOs parrotting “Do you have J2EE?” And they have to waste all their time building in J2EE even if it doesn’t really make any sales, and gives them no opportunity to distinguish themselves. It’s a checkbox feature — you do it because you need the checkbox saying you have it, but nobody will use it or needs it. And it’s cover fire.

[1] https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


This post made me nostalgic for that period which started in 2002-2003 (just after the dot-com crash) and lasted up to 2008-2009 (when FB and Twitter emerged and when Google "changed" its skin to a full ad company), when everything worth reading was being published on blogs, when people (myself included) still believed in the open web (we were all very busy bashing SOAP), and when it wasn't all about IPOs and earning obscene amounts of money (it's also the period when this website was put online and when its founder was just a respected blogger, LISP-er and former Yahoo employee). Good times.


Absolutely agree. It's been ages since I used my RSS aggregator to read blogs the way I used to.


Hell is an undocumented SOAP interface.

I still have nightmares about trying to grok it.


Some of this is caused by a lack of backwards compatibility too. In the past, once you picked a library you could be pretty sure that the APIs you depend on wouldn't change much, and then only with a major release, which would come with adequate documentation detailing the important changes.

These days the throw-away-and-rewrite/refactor crowd is in such power that if you don't spend half your time tracking the commits, you're likely to find that your dependencies are entirely incompatible with whatever you have built on them, and you have no idea how to fix that short of reading the last 6 months of mailing-list (whatever) postings just to change the "runlevel" or get your service to start...


The dependency trees in many projects are out of control too.

It was part of the reason I wanted to move on from my last position. We were spending so much time chasing infrastructure dependency changes that it began to feel like a treadmill that was slowly increasing speed. At some point I got tired and couldn't find the heart to debug yet another "vagrant up" failure, knowing that fixing it would probably break something else, and that the actual project work was just as likely to suffer from the same issue.

At home I keep it simple, and my advice to a sole developer or small team looking at some of these infrastructure platforms and tools thinking they need them: these tools are not for your use case.

You don't need Vagrant, Docker, or Kubernetes to manage a 4-person dev team; you are just burning hours and building a chain of brittle tooling that will be your biggest pain point until you eventually hire a sadist/devops guy.


> until you eventually hire a sadist/devops guy.

I try to drop in and follow the devops scene from time to time; after all, a lot of my work ends up getting handled by this stuff, and sometimes you need to fix things yourself. From my point of view, listening to a devops talk is listening to a long monologue consisting of a chain of mostly food-related, seemingly unconnected English words ("Chef Cucumber Puppet Jenkins Salt") that somehow "run" on top of another. None of them are modular, none of them are interchangeable, there are no standards, whatever you write for one of the food-related items will not work with another food-related item, or the same food-related item of a different vintage. The shelf-life of the food-related items seems to be about 3 years.

From an outsider's perspective it feels that, aside from Nix, there has not been any theoretic or standards progress in this field since Mark Burgess' time.


You've given an insightful assessment of the orchestration space -- a modular DIY treadmill nightmare...

Currently it seems Kubernetes is emerging from the fog as a de-facto standard to eliminate all those uncertainty points and provide a common toolset. It's the only orchestration platform that's a turnkey installation and a turnkey managed service on all 3 major clouds. VMWare is about to open k8s up to the Enterprise market too with an integrated & managed on-prem service [btw VMWare employees: your PKS blog and cloud blog say this is happening mid december 2017... status update, maybe?...]...

With a shared deployment pane across the cloud and on-prem, wrapped in drag-and-drop tools from the major VM suppliers and sprinkled with "self-updating" magic and QOL improvements in the next few years, I believe we're at a watershed moment for a shared standard :)

Google is pretty good at tricking the industry into training techs for their internal stack.


> btw VMWare employees: your PKS blog and cloud blog say this is happening mid december 2017... status update, maybe?...

I'm at Pivotal, we're working on it with VMware and Google. If you want to kick some of the tires, start playing with CFCR, because that's a major piece of PKS.


> From an outsider's perspective it feels that, aside from Nix, there has not been any theoretic or standards progress in this field since Mark Burgess' time.

Because there wasn't much progress indeed. IT operations is a field advanced by programmers, and most programmers have very little experience with (or even interest in) system administration, so the progress of the field is mainly governed by clueless outsiders oblivious to all the tools and mechanisms already in use. That's why it's a huge pile of mess.


The thing that kills me is the preconceived "we have to use $TOOL" behaviors. Let's use Docker as a deployment tool, where we are putting one container on each VM... Combined with the complete ignorance of packaging. 99% of the problems I hear people trying to solve with Docker would be better solved with a 'postinstall' hook in a package called 'my_configuration' (it helps you with configuration versioning and in-place upgrades!!!!). Combined with the complete lack of understanding of what something like kickstart/AutoYaST can do.

It seems that in a huge number of cases, it's possible to trim out the vast majority of the layers of deployment/etc. crap with just a bit of KISS and using tools which are already installed...


Interesting! Do you have any recommended books or tutorials on this approach?


You mean something like your distribution's documentation on building packages?


More like a "how-to and why for 99% of Docker use cases" tutorial I can point people to next time I hear the words "we need to use Docker."


The short-story version of this was written by Arthur C. Clarke in 1951, "Superiority": http://www.mayofamily.com/RLM/txt_Clarke_Superiority.html


Fantastic, thanks.


Thanks for that! Good story!


I was just thinking the other day about "finished software". These days, those Unix-philosophy tools of doing one thing well and leaving small solved problems alone seem to be becoming fewer and fewer.


There are definitely client app trends away from the Unix philosophy... I would argue that's a product of the success of the Unix philosophy, though. New apps are developed in a world where 'curl' or 'grep' exist, so they can move on to more specific needs.

On the platform front, I believe this philosophy has recently 'won the war': Microsoft was forced to create the Windows Subsystem for Linux (WSL) as a compatibility layer to access exactly that rich tool ecosystem and its server- and production-oriented workflow... Cross-platform means "not Windows", even on Windows.

Up a few abstraction levels, though, and we can see that philosophy dominating in cloud space... "Micro-services" are API-enabled single-use tools focused on doing 'one thing well', yielding coordination responsibility to higher-level applications, and assuming as little as possible about their end use. Tool silos to support new, unforeseen use-cases.

Even more emblematic of the Unix philosophy in cloud space is the emergence and growing popularity of "serverless" solutions: hyper focused single-use tools directly integrated into the computing environment. A single function, pumping between cloud services or transforming some text, "freed" from infrastructure.

In days of old we had to build up the foundations -- text manipulation, process stats, diff capabilities -- but based on that amazing ecosystem the new generations of those tools are freed to focus on new problems and speak a more abstract and higher level language -- APIs, event queues, NLP services... Just like how some primates started grunting, and then they grunted numbers, and then they made satellites and then conquered mars with robots. Shoulders of giants, and all that :)


I dunno.

WSL seems to be more about the success of the Linux kernel API than any philosophy.


Indeed. The cathedrals won.

I would posit(x !) that it's because it is easier to form a community and 'cost of entrance' around a megalith like Kubernetes, than around individual tools that do 1 or a few very similar things well.

Then again, I would say it's time to start looking away from "handling text streams", to something that can handle data streams of various formats, including transcoders that can convert from one type to another.


I think people have a limit of the number of "things" they can learn too. One big thing is one thing, 10 tiny things are 10 things. 10 things you have to learn, and then figure out the optimal (or tolerable) way for them to all work together.

The original Unix utilities had the benefit of (compared to now) being in a relatively simple environment, with a (much more) relatively small developer community, and a clear shared architecture understanding between and among utility and systems designers. They almost end up being more like different commands or subsystems of one 'thing' than a 'thing' of their own. This is very hard to do. Unix succeeded because it was successful at it.

Once, for instance, you say "oh yeah, we want just like this, except not just text streams", as you suggest -- it gets even harder. Especially in today's environment which is not that of the unix origin.


I don't necessarily mind relying on unfinished software; but it is tiring to rely on something that doesn't appear to be on the path to being finished. Projects with a complicated upgrade cycle and a rapidly moving upgrade treadmill are definitely a no-go.


> It's like the authority to create goes from the individual first-principles (by necessity) maker, to the control over development being in the hands of an external group, and then all your time is spent keeping up with what they're doing.

This is a fundamental economic question: buy or build? Buying involves search costs ("all your time is spent keeping up"), building involves the cost of ... building.

Often the line shifts because of gains from trade/specialisation and the deepening structure of production. As products become more featuresome, it becomes more economical for producers to specialise in part of the problem. They become better at that part than others are.

It quickly becomes impossible for any single producer to out-produce the combined output of specialists.

As a side note, the tension between the cost of searching and the cost of doing it yourself is believed to be why firms can emerge out of "pure" markets. It also suggests an economic reason for why software projects grow at their margin and another reason for why NIH is so attractive.


Same thing in deep learning. I already had to dump half of my code because of Theano, as well as API changes in Keras and TensorFlow that made it a pain to load some of my old models in newer versions of the frameworks.

Now I'm rewriting a bunch of it in pytorch while anxiously waiting for 0.4 to come out (no idea when) and break all of my models again.


There ought to be a name ..

I think that "vendor lock-in" comes close to describing the situation. However, it does not describe the way the dependence changes the behaviour of the people locked in. I think that "addiction" fits in this case.


That's not lock-in, it's just a dependency.

It would be lock-in if the vendor made it unreasonably difficult to remove the dependency.


I think a lot of this is that once a tool reaches a certain level of size and complexity, it’s impossible to know all the technical details unless you’re actually a developer on the project (and after another threshold, not even then). So at that point you have to use news and social information to find out what’s going on, and just trust that people know what they’re doing.

It can be frustrating, but it’s also unavoidable. While I strive to know as much as possible about my tools, if I had to know every part of the stack backwards and forwards, I’d never get anything done. You do have to cede to the abstractions at some point.


The important distinction here is the pace. I can use a Ubuntu server LTS version and not have to stay on top of weekly development release announcements from upstream.

With k8s lacking LTS, it forces you to drink from the firehose.


In a sense LTS is provided by vendors such as Gravitational and Red Hat. They introduce hysteresis and smooth out the bumps on the upgrade cycle.


> It can be frustrating, but it’s also unavoidable.

Why unavoidable? It's not that one is forced to use some tech... Even though the hyped crowd chants the names so loudly.

A lot of things that complex technology offers are not really needed (not to mention one has to constantly fight with that new complex technology when it doesn't yet do something). And complex tech can be frequently replaced with a small set of tiny scripts and small and simple standalone services.

Domain and deployment-specific scripts and services that are unique, sure. But still easy to grasp mentally (and much smaller than any mature project's codebase).


(I haven't read the rest of your post, but I think I can still answer this little snippet...)

> Why unavoidable? It's not that one is forced to use some tech... Even though the hyped crowd chants the names so loudly.

Peer pressure. Consultant-hiring-pressure, for example: Are you full-stack? "Full stack", now there's a phrase...

(EDIT: I honestly do feel that it's a sort of an attack by the sheer meta on the hiring side of the equation on the meta of the supply side of the equation. I don't imagine recruiters are doing much of this consciously, but it's definitely happening more and more...)


I think there is a difference between knowing the tech enough to list it in a resume, and actually using it.

I sort of know K8s. I have experimented with it, have set up a few smaller but nonetheless "real" clusters, I have run various apps on them, have broken them with various sorts of fault conditions, have tried to repair them. Without this I wouldn't be able to convince myself my opinion of K8s is even remotely close to being a proper and informed one (okay, it is still not, but it's closer), and not solely based on my preferences and beliefs. And this is also why I'm not using K8s in production anywhere I need to design, set up and manage a system that I should be generally able to reliably repair within hours (or faster).

To interviewers who just want to hear some important keywords, I can tell some tales of how I ran that fancy mixed-arch amd64+armhf cluster for my CI. Most likely, omitting the fact that this was a personal project and that in the end I somehow broke it and, despite spending a pair of evenings, had failed to figure out what went wrong with the networking... so I just scrapped it and switched to Docker Swarm. But if I had to actually design a system that I would maintain, I'd go with whatever I believe is actually technically appropriate and not with a mauve SQL database[1]. And I believe I would be able to justify my choices, explain the trade-offs, requirements and so on. Unless it's on GKE or it's not my responsibility to manage the cluster - then I can go with K8s, no problem here - it wouldn't be a lie to say that I "know it and have some experience running apps on it, but have usually preferred other solutions". ;)

[1] http://dilbert.com/strip/1995-11-17


> I think there is a difference between knowing the tech enough to list it in a resume, and actually using it.

Of course, but the difference doesn't tend to come across on a recruiter's spreadsheet. Nowadays, I only go by personal recommendations/references.

Just as an aside, and as an offer of XP: Tech A is largely irrelevant unless you have to execute in, say, 1 month. Success almost never depends on the tech -- it mostly depends on people.


> Success almost never depends on the tech -- it mostly depends on people.

Of course, that's true. But my understanding is, the technology choice is not about succeeding or failing - a good team with good plans can succeed with just about any tech that can somehow do the job. It's about costs. Complex but featureful tech may drive development costs down, but may also result in significantly higher maintenance costs.


"Yeah, he's a few PDF's shy of a full stack, I tell ya!"

side story:

Yeah, I went through 3 stages of hiring with $company. Interviews went great, and then I heard back that I didn't show up to the 3rd (much to my surprise). When the HR person checked back in, they said it was an error and that instead they were no longer interested in me because I lacked Kubernetes experience specifically.

And my resume includes OpenStack and Apache Mesos. Not Kubernetes. But they dragged me around. It still pisses me off that they couldn't read my resume. Then again, their interview process was... shall we say, "interesting".


> Yeah, I went through 3 stages of hiring with $company.

My BS radar goes off when I get asked for a 2nd interview, or any 4 hour group interview. Thanks, but no thanks. They did you a favor.

These interviewers are crazy, and have zero proof their convoluted process produces better hires than a single 30 minute coffee interview. They just don't want to admit they don't know what they are doing. As if the heavens will open up and god himself will shout down "He's the one!", if they just have enough rounds to get there.


My girlfriend went through ten interviews with Stryker and then didn't get the job. Words fail me.


I've also been an interviewee recently in this space. I'm lucky in that in each one the interviewer showed a healthy scepticism or lower enthusiasm for K8s and similar products.

Maybe I'm doing myself a disservice by not shooting for the moon. But I don't think so. I'd rather do real work than spend days reading upgrade notes. On-premises K8s time will come, but for most, that time is not now.


That's why one must lie in their CV.


Perhaps. But I think it's better this way: if they're willing to play those kinds of games, I wouldn't want to work for them anyway.

All tech companies run their own NIH stack in one way or another. If they aren't willing to admit that to themselves, there's not much I can do. It costs time to train new people and get them up to speed... and given what kind of arduous journey learning OpenStack is, I figured they would have understood and gone "Well, you don't know K8s, but you're trainable in this stack..."


I just call it an ad. It is after all the core competency of the prominent stewards of open source.

The software is the ad, the tangential service is the product; keeping up is the infinite sales call that requires no calls.

This is the price of free.


There ought to be a name for the tendency that, as tools get better and better, more and more of your time goes from having your mind in technical-space to social and news-space.

It seems like a form of bike shedding mixed with busy work. Nearly all of it takes away from the actual product being built.


It's usually framed as the Build Vs Buy question. When do you stop your search for an adequate ready-made solution and just build it yourself?


Is there anything that can be done about this other than to just accept it, I wonder.



