Ask HN: Are we overcomplicating software development?
639 points by ian0 on Jan 18, 2017 | 368 comments
I have recently been involved in the overhaul of an established business with poor output into a functioning early/mid stage startup (long story). We are back on track but, honestly, my lessons learned fly in the face of a lot of currently accepted wisdom:

1) Choose languages that developers are familiar with, not the best tool for the job

2) Avoid microservices where possible; the operational cost, once you factor in devops, is just immense

3) Advanced reliability / redundancy even in critical systems ironically seems to cause more downtime than it prevents, due to the complexity it introduces for dev & devops.

4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

5) Agile "methodology", when used as anything but a tool to solve specific, discrete communication issues, is really problematic

I think overall we seem to be over-complicating software development. We look to architecture and process for flexibility when in reality they're acting as a crutch for a lack of communication and proper analysis of how we should be architecting the actual software.

Is it just me?




Many of these practices are popularized by Google/Facebook/Amazon but don't make sense for a company with 100 or even 1,000 people. I try to focus on whether a practice will solve a concrete problem we're facing.

Switching from Hadoop to Spark was clearly a good idea for our team, even though it required learning a new stack, but there isn't a strong reason to switch to Flink or start using Haskell.

Agile makes sense when your main risk is fine-grained details of user requirements, but not when you have other substantial risks, such as making sure a statistical algorithm is accurate enough.

Microservices probably reduce the asymptotic cost of scaling but add a huge constant factor.

Relational databases are the right choice 95% of the time, non-relational stores require a really specific use case.

TDD is good for fast feedback in some domains, but for others, manually investigating the output or putting your logic into types is better. E.g. a lot of my time goes to scaling jobs that work on 10 GB of data but crash on 1 TB; TDD is not that helpful there.

Continuous integration mostly makes sense when you're making a lot of small changes and can reliably expect a test suite to catch issues.

In short, ask the question "when is practice X useful?" instead of "is practice X a good idea?"


> Microservices probably reduce the asymptotic cost of scaling but add a huge constant factor.

If this were Medium, I'd highlight the hell out of that.

That's so true, and so nicely, succinctly put - it ought to be the reply to end every argument about whether microservices are good or bad.


At the last company I was at, our search microservice was fast (average response was well under 100ms) and it didn't crash once while I was there. At a larger company, this may not be an accomplishment. At a startup, this is the bee's knees.

Meanwhile, the rest of our codebase (a monolith) crashed every few days for one reason or another. We had an on-call rotation not because that's what you're supposed to do, but because we actually needed it.

Now I'm not saying that microservices make sense for everyone. In general, I agree that they are used incorrectly. Microservices are hot and software developers, generally speaking, like to use hot technologies. Yes, moving to a microservice was costly. We had to re-write a lot of code, we had to set up our own servers, and we had to get permission from the guardians that be to do all of this. But, for our use case, and I assume there are other use cases too, the benefits of detaching ourselves from the company's monolithic codebase far outweighed the costs for doing so.

TL;DR No argument is the end to every conversation. Few things are so black and white.


I tend to start with a monolithic service.

Sooner or later you get a feel for which bits are becoming at least API stable and could run independently. That's when I split them out.

Do it too soon and you end up choosing the wrong boundaries and tying yourself up in knots, do it too late and your monolith can become a mess that's difficult to detach the pieces of.


I tend to try to write monolithic services in such a way that they could be broken up into microservices if that were ever desired.

I don't go too far with this, just avoid things like shared static state and other anti-patterns.
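A minimal sketch of what that tends to look like in practice (Python, all names invented): keep state behind an object you construct explicitly rather than in module-level globals, so the component could later move into its own process without untangling anything.

  # shared static state couples everything that imports the module:
  #   _stock_cache = {}    # any code anywhere can mutate this
  #
  # passing dependencies in explicitly keeps the seam clean instead
  class InventoryService:
      def __init__(self, lookup, cache=None):
          self.lookup = lookup                          # injected data source (here just a function)
          self.cache = {} if cache is None else cache

      def stock_level(self, sku: str) -> int:
          if sku not in self.cache:
              self.cache[sku] = self.lookup(sku)
          return self.cache[sku]

  svc = InventoryService(lookup=lambda sku: 42)         # stand-in data source
  print(svc.stock_level("widget"))                      # 42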


You mean you follow good software development practices? Heresy!


Another option is to start with an umbrella app (Erlang/Elixir/OTP). It can run like a monolith or as ... nano-services (I suppose) within the same deployable. When it is time to split them out, it is easier.

It does assume that you either start with devs familiar with OTP or you have generalist devs that can pick things up quickly.


True. There's another thread in here somewhere talking about premature generalization. I think that's what you're getting at with "Do it too soon and you end up choosing the wrong boundaries".


TBH microservices do a good job of making you much more dependent on your tools, and selecting the wrong tool for the job won't become clear until you've used that tool for years.


At the last place I was at, we had a microserviced monolith. I can't even begin to describe that thing in common engineering terms. (Note: it's better than it seems.)


In case you're wondering about the downvotes, a microservices monolith sounds like an oxymoron.

Could you expand on how the architecture actually looked? What made it a monolith and what made it microserviced?


Maybe they were referring to a distributed monolith?


I believe, for small shops, the real benefit of microservices is the logic split that forces good design and reduces cognitive load.

You reap the scaling benefits way later, if ever.


You can get that benefit by dividing your system up into libraries with defined, documented, tested APIs. There's no need to introduce all the complexity and failure modes of distributed systems just to force good design.

When you need to scale, then you can easily throw your libraries behind an RPC framework and call it microservices, but there's no need to pay that cost until you actually face that problem.
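To make that concrete, here's a minimal sketch (Python; module and function names are invented, and plain HTTP stands in for whatever RPC framework you'd actually use): the business logic lives in a plain library with a narrow interface, and the "microservice" is nothing more than a thin wrapper you only write when you actually need a separate process.

  # pricing.py -- a plain library with a narrow, documented, testable interface
  def quote(sku: str, quantity: int) -> int:
      """Return the price in cents for `quantity` units of `sku` (toy logic)."""
      unit_price = {"widget": 250, "gadget": 990}.get(sku, 0)
      return unit_price * quantity

  # service.py -- written only when scaling demands a separate process;
  # until then callers simply `from pricing import quote`
  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.parse import parse_qs, urlparse

  class QuoteHandler(BaseHTTPRequestHandler):
      def do_GET(self):                                 # e.g. GET /quote?sku=widget&qty=3
          q = parse_qs(urlparse(self.path).query)
          body = json.dumps({"cents": quote(q["sku"][0], int(q["qty"][0]))}).encode()
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("", 8080), QuoteHandler).serve_forever()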


Just putting libraries that were never designed to scale up behind RPC usually won't help you scale. These libraries tend to work with mutable, stateful objects and don't have any groundwork in place for partitioning.

That doesn't mean you can't scale up from a monolith (even one without clean interfaces) - every startup growth story is a testament otherwise - but it's never as easy as strapping an RPC layer over your library.


One caveat is that if you need to fix a bug in your library in an API-compatible way, you can't reach into all the codebases that are using your library. You can deploy a new version of the microservice, though.


I mean, you _can_ if you organize your code such that you can. For example, Google's monorepo lets maintainers of a library find all internal usages and fix them. This is one of the benefits Dan Luu notes in http://danluu.com/monorepo/.


I think he means that you can't force all teams that use your library to recompile and pick up the updated code, while if you deploy it as a service, you recompile and redeploy and everyone talking to your service gets the most up-to-date version.

This is a real problem - I recall that Sanjay Ghemawat et al. were working on it when I left Google, though I dunno if the solution they came up with is public yet. It's unlikely to seriously affect you unless you're Google-scale, though, by which time you've probably divided everything up into services and aren't taking advice from the Internet anyway. For companies that are a few teams working on a single product, it's easy enough to send a company-wide e-mail saying "Rebuild & redeploy anything that depends upon library X", and if you're doing continuous deployment or deploy only as a single artifact, the problem never affects you anyway.


You were initially replying to a suggestion explicitly qualified with "for small shops". Yes, you most definitely can force all teams that use your library to recompile and pick up the updated code - and it doesn't mean "a company-wide email"; a realistic scenario would involve standing up, pointing to a specific person and saying "Bob, the new version of my library will also work better for the performance problems you had, pick it up whenever you're ready", and knowing that this is an exhaustive list of the people who need to be informed.

For starters, the vast majority of code is developed in-house at non-software companies. The vast majority of products are built by a single team working essentially in a silo, not "a few teams working on a single product".

When people are talking about small companies, it's misleading to think "smaller than Google". Smaller-than-Google is still an enormous quantity of development. Enterprisy practices make sense for scaling software in companies that are smaller-than-smaller-than-Google. If you hear "small company", think multiple steps further from that: a smaller-than-smaller-than-smaller-than-smaller-than-smaller-than-Google company.


> you recompile and redeploy and everyone talking to your service gets the most up-to-date version

Sure, but if you do that in place it will still break stuff that assumes it works like the last version, and if you do a versioned API or the like you still can't force all teams to adopt the new version.


> I think he means that you can't force all teams that use your library to recompile and pick up the updated code

Does your CI system not automatically build dependent artifacts--

> It's unlikely to seriously affect you unless you're Google-scale, though

--okay, whew. ;)


If you need to change your microservice's API in a non-backwards compatible way, you have the exact same problem plus significant operational complexity.


Don't you just create a new one and let the old go obsolete when the "users" switch?


Which is basically what you do for a traditional library as well. Tweak the header so anything being recompiled against it gets a different function signature. Then old apps continue to work, and newly built apps get the fix.


Moderately ironically - this is a place where dynamically loaded libraries are particularly well suited. So long as the API hasn't changed, the library can be patched independently of all the other compiled code.

Of course, there are other limitations this imposes, but it does make it very simple to deploy a new library to all code which uses it.


> you can't reach into all the codebases that are using your library

You can deploy a new version of the dll and applications can pick it up when they restart. Linux will apply security patches this way.


Better, you can do it without a restart if you can serialise current state. That also enforces discipline in defining such state.

Microservices are only a step ahead.

That said, in many cases the cost of a full restart can be accepted.


Nothing about splitting your app into microservices _forces_ a good design. I've never seen microservices with well-defined seams. Every time, knowledge "leaked" between the apps, and any non-trivial change to the app required updating multiple repos, deployment synchronization, etc. Microservices are a tremendous burden that the vast majority of companies will not benefit from.


I did not mean microservices as in "just make it many apps!". I meant: do not share databases and expose everything as APIs.

It helps cognitive load because such apps can be reasoned about without reading code elsewhere.


An API is not enough; the full contract has to be shared.


> forces good design and reduces cognitive load

Except splitting into microservices is an unnecessarily complex design choice. That's almost always worse, and the cognitive load comes in when you now need to figure out how to get this stuff right. The scaling benefits also require that you get it right; small flaws in your system become massive issues.


"Is your bicycle too slow? Get a helicopter!"


If you separate components wrong in the same code base, it's an easy fix. If you get them wrong between services, you have a much larger problem. I'm not sure why you'd be more likely to get that right with services than within the same code base.


"Logic" is vague and there a several layers you can implement this before even thinking about microservices.

It can be as simple as a simple class, or maybe a larger class as a single-file service, or an entire namespace with a several classes, or a separate library easily referenced. All the "logic" split benefits without the ridiculous hassle of microservices.


I think this is actually a failure in mainstream programming languages, which make it far too easy to reach across what's meant to be a defined subsystem boundary and meddle where you shouldn't.


They also have tools that are too weak to automate enforcing contracts. Generally the only available tool is "assert".


Definitely agree – the polyglot aspect can also be useful for companies where different parts of their problem fit different tools.

However, exercising proper software discipline and using languages with good/existent module systems, like OCaml or Go, can lead to the same modular results without the fixed overhead. If you don't have a full-time ops person or team, you almost always have no business running microservices.


> ask the question "when is practice X useful?" instead of "is practice X a good idea?"

This too!


It applies to TDD as well, unfortunately a lot of TDD proponents don't really acknowledge that.


> Relational databases are the right choice 95% of the time, non-relational stores require a really specific use case.

Relational databases are great, but I spent large parts of my life as a developer writing layers converting to/from SQL and later ORMs. There's a huge gain in just not translating data. I know Postgres (and others) deal with JSON, but I can't escape the feeling it's a bit shoehorned in there – basic SQL statements grow strange new operators like ->>, -> and #>>.

Relational databases are great for, well, relational data with strong consistency requirements. The popularity of the original MyISAM tables without integrity checks baffled me at the time. Why spend time marshalling data in/out of table form when you don't gain the benefits of an RDBMS?

Not doing data translation saves _a lot_ of time. Plain key-value stores are amazing, document stores like Elasticsearch are great, ultimately the choice comes down to requirements and time saving is often a very heavy argument, especially for small companies/startups.
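For anyone who hasn't met those JSON operators, this is roughly what they look like in use. A hedged sketch only: it assumes a reachable Postgres, psycopg2, and a hypothetical `events` table with a jsonb `payload` column.

  import psycopg2

  conn = psycopg2.connect("dbname=app")   # hypothetical database
  with conn.cursor() as cur:
      # ->> pulls a field out as text, -> as jsonb, #>> follows a path of keys
      cur.execute(
          "SELECT payload ->> 'user_id' FROM events WHERE payload #>> '{meta,source}' = %s",
          ("web",),
      )
      print(cur.fetchall())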


Things such as joins, transactions, and means of enforcing data integrity are useful when solving a whole slew of problems. Not to mention the tooling and community you benefit from when you use a common RDBMS.

I never found data translation/serialization to be a big pain (just rely on a framework/lib that does it for you). It's a bigger pain to hand-roll joins that would be a one-liner in SQL or deal with issues that arise from having your data (unnecessarily) reside in many systems.


I hear this data-integrity thing a lot but I don't run into these problems myself. I think it might be a functional-programming thing. It's much easier and safer to declare your constraints rather than trying to enforce them. If you aren't in a functional language I can see why you'd want to reach for one, but SQL in a separate process is just one of the options.

> Things such as joins, transactions, and means of enforcing data integrity are useful

If the domain needs transactions I'm already speccing them regardless of general usefulness. Yes all that stuff is useful, but all abstractions have a cost which isn't free just because it's hidden in the DB.

For a problem that didn't need transactions, but for which they were useful, why would you automatically want to couple the solution with your storage layer? If you're looking for the ability to express business logic clearly without cluttering it with error handling, for instance, software transactional memory would probably be a better level to work at.


> If the domain needs transactions [...]

Considering we are talking about daemon software (services exposing some API to readers and writers) that provides CRUD behaviour to the user (end-user or other developer), isn't that nearly always the case, in order to guarantee safe write access for concurrent writers without the risk of corrupting your data?

Furthermore I am not sure how this relates to FP.

> It's much easier and safer to declare your constraints rather than trying to enforce them.

But that is a strong point of RDBMs implementing SQL. You have some kind of schema (think type) and use selected functions (select, update, delete, create, etc.) to transform the data.


> > It's much easier and safer to declare your constraints rather than trying to enforce them.

> Furthermore I am not sure how this relates to FP.

I'm saying that FP is great for ensuring correctness.

> But that is a strong point of RDBMs implementing SQL.

Right. And if I didn't have other functional languages available that might be a bigger issue.


STM is awesome but a single node solution. Distributed transactions are a hard problem.


I understand, but you can't just throw a DB at it and walk away. For instance, which DB? Set up how? Running on what type of hosts? What topological requirements does this have? How much of a multiplier does it place on your data load?

I'd definitely use a trusted DB for storing bank accounts. Consistency is pretty much the first requirement, and the data maps perfectly to tables. And 7B checking accounts isn't that big compared to some problems, so it'd probably scale pretty well even worst-case.

But I probably wouldn't for an MMO. Or at least, it wouldn't be where I stored every little thing going on around them - just the events (xp and gold earned) that they'd freak out about if we lost. But even just a log-structured DB would work well for that.

If there's no contention for a resource (in the bank case - the value in the account) there's much less reason for a transaction. I want the system to make its best effort but I don't want to wait around for the message if there isn't anything I can do on a failure anyways.


Are there domains that don't need transactions?


Exactly.

I start with text files and for most purposes I do not bother with anything else.

Next is a key/value store. Simple.

Relational databases carry large overhead in translating data (as above) and also in design and maintenance of the structure and getting data into and out of them. I spent many years with them, like them a lot, but they are too much complication for most purposes.

Even with relational data, an RDBMS is only really worthwhile if you are not certain how you will be accessing the data. In most cases you are sure.

I am constantly stunned how people reach straight for MySQL or Postgres when flat text files with grep would work just as well and be much quicker to implement.


I'm stunned that you're stunned that people generally don't use text files as data stores.


How do you deal with concurrent write access? Do you lock the file?


He probably uses flock(2)
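If that's the approach, a minimal sketch of what it looks like from Python (fcntl.flock is a thin wrapper over flock(2); advisory only, and it doesn't help with partial writes if the process dies mid-write):

  import fcntl

  with open("data.txt", "a") as f:
      fcntl.flock(f, fcntl.LOCK_EX)     # block until we hold the exclusive lock
      f.write("new record\n")
      f.flush()
      fcntl.flock(f, fcntl.LOCK_UN)     # release so other writers can proceed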


But isn't that hard to get right? At least if you're using something like SQLite you get consistency guarantees.

Consider your process writing to the file and dying during write() - do you recover and repair the file after you reschedule?


Ramdisk maybe?


seriously curious -- what do you do when someone or some other "function" wants to query your data, wants to update it, etc.?


You can implement key/value stores in RDBMSs too. It only takes a few minutes to create a key/value table in most databases, combined with a few minutes in your favorite language to map it to an appropriate get/set routine. I find this particularly useful for variable attributes against another table, especially when it's really a "foreign index, key, value" table. That way it's still possible to join the values to other parts of the database. This paradigm really lends itself to multiple FK/key/value tables, where each one extends another particular table.

All that said, doing this requires careful thought, and DB normalization when it's discovered that there is a 1:1 relationship between rows in a table and a particular key/value table. So it's not something that should be taken to extremes, but I find it aids in quick development, as every time you discover you need to store another piece of data for some edge condition it doesn't require lots of DB normalization. Also, I wouldn't really consider making the "value" field a blob; rather a very limited int or string.
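A tiny sketch of that pattern (SQLite via Python just to keep it self-contained; table and column names are made up):

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT);
      -- variable attributes hang off the main table as FK/key/value rows
      CREATE TABLE item_attr (
          item_id    INTEGER REFERENCES item(id),
          attr_key   TEXT,
          attr_value TEXT,
          PRIMARY KEY (item_id, attr_key)
      );
  """)
  db.execute("INSERT INTO item VALUES (1, 'widget')")
  db.execute("INSERT INTO item_attr VALUES (1, 'color', 'blue')")

  # the attribute values can still be joined back to the rest of the schema
  row = db.execute("""
      SELECT i.name, a.attr_value
      FROM item i JOIN item_attr a ON a.item_id = i.id
      WHERE a.attr_key = 'color'
  """).fetchone()
  print(row)    # ('widget', 'blue')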


TDD is not about the tests, it is about teaching yourself to code better. About forcing yourself to break up your code into small components with well defined interfaces. Something we all want to do but which is hard in practice without a tool to guide you. TDD is that tool.


In our case, we did that using functional programming & type-driven design (which ironically is also a TDD.)


This this this!

Most often I see TDD used as a crutch for a really bad type system.


Types are just formalised limited contracts. Emphasis on limited. They are not enough.


T(est)DD does tend to make your code look more like you used a functional-oriented language. A lot less mutability, heavier use of first-class functions, etc. Though I tend to prefer languages that are at least somewhat functional (eg. python, go).


> Though I tend to prefer languages that are at least somewhat functional (eg. python, go).

Can't think of any less functional languages than these.


Think of the language features you need to be able to program in a functional way, then think about what these languages have. There is a sizable overlap.

If you need a template, think of Scheme. It is the best example of a minimal functional language.


It doesn't even do enough to help people apply SOLID principles.

Pure functions help but the principles of generalisation are deeper than this.

Tests verify contracts offline. This will miss real issues.


"putting your logic into types is better"

Can you elaborate on that one? Sounds interesting to me.


In functional, statically typed programming languages, there is a pattern where business logic, including "actions", is encoded in types. This article [1] gives an example of filesystem manipulations that are encoded in a "FreeF" type.

When business rules are encoded in datatypes, it's easy to check that the encoding and the transformations are complete, and the logic can easily be mock-tested.

[1] http://degoes.net/articles/modern-fp


Some rules are extremely hard to encode as types or the result is extremely awkward. Or worse, performance suffers due to encoding.

If it feels like translating into a foreign language, then either the rule really is that exotic or you are using the wrong language.


Suppose you have some business logic that subtracts the cost of a transaction from an account balance and returns a new account balance. These things are probably integers, but in many languages you don't have to specify that. You write this function, then later your coworker comes across it and passes it a double. You might end up with weird small discrepancies in account balances (or mysterious errors that only happen sometimes) that could have been totally prevented at the time your colleague wrote the code via static analysis, if you put some logic (costs and balances are integers) into the types.

This can be more sophisticated, like "this function requires a sorted list", so let's make a sorted-list type, or packaging things up into biz logic types (a cost type that contains an integer instead of just using integers), but you can catch a wide variety of errors with static analysis if you make your code and logic amenable.
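In Python-ish terms, a sketch of that first example (names made up, and it assumes you run a static checker like mypy, which is what does the catching):

  from typing import NewType

  Cents = NewType("Cents", int)        # balances and costs are whole cents, never floats

  def apply_transaction(balance: Cents, cost: Cents) -> Cents:
      return Cents(balance - cost)

  print(apply_transaction(Cents(10_000), Cents(250)))   # 9750
  # apply_transaction(10_000.0, 2.5)   # rejected by mypy before it ever runs: float is not Cents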


Recently our team built a data pipeline: a few large inputs, a few large outputs, a lot of processing in between, a lot of parallelization & working w/large datasets needed. Essentially you could view the entire process as writing one very complex function.

We approached this by first outlining the procedure and specifying the types involved, then outlining functions from each type to the next. You could essentially think of our types as tables, so our outline was f1: t1 -> t2, f2: t2 -> t3, ..., fn: tn -> t_output (although not quite that linear).

This let us split up work on the different functions across a team of 6 people. Specifying the input and output types was basically enough to make sure the functions were correct most of the time, and baking the interfaces into types enforced by the compiler made it easy to refactor & coordinate on changes when necessary. Feedback when we made an error was generally immediately available, because IntelliJ would highlight the function that produced an output value of the wrong type, or the compiler would catch it.

In contrast, if we had relied primarily on unit tests to check the functions, that would have made coordination more difficult, refactoring harder, and would have required us to either generate or acquire test data to feed through each function. But this architecture let us successfully build out most of the logic even while we had no access to real data & a different team was working on data ingestion.


This is interesting, do you mind being more specific - what was the data, how big was it, how long did the functions run?

Assuming you are talking about real types and having something like

  f1 :: t1 -> t2
and

  f2 :: t2 -> t3
you suggest you were able to do

  g :: t1 -> t3
  g = f2 . f1
which works perfectly well, but is sometimes nontrivial to do for more complex functions, in particular if they are not pure (e.g. they do IO because the data is too big for memory), if you do some logging and house-keeping in between, or because of runtime behavior that might be hard to predict.

Does f1 consume all input before f2 can run? Is it "streamed-through", like in `sh`, e.g.

  $ find /home/foobar | grep hs$ | xargs wc -l
which is often done as an optimization?

I really like the concept and it works great, but for me it is simpler to apply to smaller constructs and I am still investigating how to apply it to more "business-logic".


Almost all the functions are pure–the only impure functions we use read from or write to a data store. We use Apache Spark, which lets you write pure functions that can operate on data too large to handle on a single box, and it overall works quite well. E.g. when we started designing this project we wrote something like:

g = f5 . f4 . f3 . f2 . f1

where f1 reads from s3 and f5 writes to a db. Then the implementation work mostly involved breaking these down further, e.g. f1 = h4 . h3 . h2 . h1, where only h1 is stateful and everything else is pure.

Spark is lazily evaluated, and in practice it will stream through many operations–f1 will generally not be done consuming the input by the time f2 starts, although sometimes we force it to for debugging purposes.

Lazy evaluation and the discarding of side effects make logging difficult, which is one of the downsides. There are various monitoring and debugging tools that help but it's still definitely harder than the single machine case.
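Not the parent's actual code, but a rough PySpark rendering of that shape (paths, column names and the function split are all invented), just to show how the pure/stateful split falls out:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

  def f1(spark):                                   # stateful: read the input dataset
      return spark.read.json("s3://bucket/input/")

  def f2(df):                                      # pure: one "typed table" to the next
      return df.filter(df.amount > 0)

  def f3(df):
      return df.groupBy("account").sum("amount")

  def f4(df):
      return df.withColumnRenamed("sum(amount)", "total")

  def f5(df):                                      # stateful: write the final table
      df.write.mode("overwrite").parquet("s3://bucket/output/")

  f5(f4(f3(f2(f1(spark)))))                        # g = f5 . f4 . f3 . f2 . f1, evaluated lazily by Spark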


One thing that bothers me is the 'relational databases are good enough' statement, which is repeated in other contexts as well.

But especially here, where we're talking about reducing complexity, it feels off to me. PostgreSQL and MySQL seem to me like incredibly complex packages. SQL, the language, is not easy to master either; most programmers I meet know mostly basics. On top of that, there's a long ongoing history of security malpractice.

When talking about reducing complexity, CouchDB and Redis are far easier alternatives, in my humble opinion, though they go slightly against 'use the tools developers know'.


The implementation of PostgreSQL is complex, no doubt about that. But if you need strong data consistency and durability guarantees, it provides a rock-solid foundation.

SQL might take some getting used to, but it is also not rocket science. It shouldn't take more than a week's study to master the basics. There is of course a lot of awful SQL code out there, exactly because most programmers don't even know the basics. You can do incredibly powerful things in it that would take 10x the code in an OO/procedural language. In my opinion dumping an ORM on top is also not the best way to leverage the strengths of an RDBMS.

It is slightly ironic that you bring up security malpractice in the context of PostgreSQL, when in the next sentence you advocate Redis as a far easier alternative. As was recently in the news, the Redis defaults were insecure for a long time (google for Fairware ransomware).


> In my opinion dumping an ORM on top is also not the best way to leverage the strengths of an RDBMS.

I agree. Unfortunately, the way I usually see people using them is pretty bad - you should not let ORM-generated stuff dictate your business model. A database is a database. A storage layer. Business objects will not map 1:1 to ORM objects. Approaches like "let's inherit from ORM class and add business-related methods", in my experience, lead to total disaster. One has to respect the boundary between the storage layer and the business model layer.


I'm aware of the different approaches (mostly from reading Fowler's PoEAA), but currently we use an Active Record-style ORM with a few extra features (like Class Table Inheritance) and we haven't found any major issues with this approach. What was the worst case scenario you experienced with the 1:1 approach?


Over the past 5 years I've been on two projects using the 1:1 "ORM Active Record == business model" approach. One completely failed, in part because of this; the second is barely manageable, but I managed to save it by moving business code mostly outside of the Active Record classes.

The problem I encountered in those projects is the mismatch between the storage mental model and the business mental model, which led to an explosion of crappy code (AKA technical debt). In particular:

1. the classes I need for the business model may have initially mapped well to database tables, but over time they stop doing so; business logic and the model change much faster than you'd like your DB schema to

2. since many things in AR can fire SQL queries, you have to keep in mind the workings of your database when doing almost every operation on your model; it's an abstraction leak

3. code shooting off SQL queries is randomly called from all over your codebase; it's harder to keep track of it and, if needed, optimize those queries

I like AR as a convenient API to get data from/to the database, but given point 1., I eventually learned to isolate the AR layer as something below the business model layer, so that the business model is explicitly serialized to and deserialized from the database, instead of the database being coupled with the logic of your program.

Now I vaguely recall complaining about this before on HN and getting my ass handed back to me by someone who pointed out that these are all ORM n00b mistakes. I wish I could find that comment (pretty sure I noted the link down somewhere). Yeah, I admit - in those two projects I mentioned, we were all ORM noobs. So we've learned those lessons the hard way.


> Approaches like "let's inherit from ORM class and add business-related methods", in my experience, lead to total disaster.

I don't disagree; in fact I'd go further and say that data and logic should not be coupled. But this is the Active Record pattern, which is far from the only way to use an ORM; most ORMs won't even support this pattern by default.


Moreover this is a terrible pattern. I would never use an ORM like that. An ORM implemented using the DataMapper pattern is so much better.


The best ORM ever: F#'s FSharp.Data.SqlClient. A very thin layer that lets you statically program in SQL in your app. I typically just use functions/stored procs, but sometimes, for one-off things and experimentation, it can be nice to write SQL directly in your app.


Relational databases are not simple systems, as you say, but they do seem to me simpler to use - especially in the 95% case where a single, large enough machine hosting PostgreSQL/MySQL is entirely sufficient.

Key-value stores are "easy", but what isn't easy is reducing your business domain to a simple key-value model without sacrificing the promises and guarantees offered by a good relational database system.


>But especially here, where we're talking about reducing complexity, it feels off to me. PostgreSQL and MySQL seem to me like incredibly complex packages. SQL, the language, is not easy to master either; most programmers I meet know mostly basics. On top of that, there's a long ongoing history of security malpractice.

PostgreSQL and MySQL are very complex, but the complexity is entirely contained. Both are well-tested and reliable, so developers can deploy them without worrying much about them.

I would disagree about SQL being difficult to master. The basics are all most people need, and are not at all difficult to learn. The more advanced stuff (e.g. CROSS APPLY) is not necessarily standard across implementations, and can usually be replaced with application code.

>When talking about reducing complexity, CouchDB and Redis are far easier alternatives, in my humble opinion, though they go slightly against 'use the tools developers know'.

I can't say I'm that familiar with CouchDB, but Redis is entirely inappropriate for most SQL use cases. It's a key value store, and is not meant to do any sort of advanced queries.


Minor aside; I don't think the complex bits of sql are things like cross apply - that's almost no different from a join, especially if you're from a non-sql background where "joins" are typically statements+loop-equivalents and typically hierarchical and ordered. If people have difficulty with cross apply, they're just not trying.

Of course, if you regard sql as something you'd rather not "waste" time on, of course you're going to find those kind of subtle distinctions confusing - sort of like how people think css is difficult.

The more reasonably "complex" bits are the update visibility semantics, i.e. which transaction isolation levels mean what in various scenarios.

That's really complex, and it's truly somewhat unique to sql in that most alternatives simply don't bother trying to solve those problems at all - that can be a bad thing, but it is simpler.


Postgres and MySQL aren't overly complicated for simple use cases.

A lot of other technologies drop complications like foreign keys, which you won't miss when you start developing the software but which give guarantees you will sorely miss when you start seeing inconsistent data 6 months in.


The expensive and complicated thing about these is deployment and maintenance. But then, you could instead pick SQLite and switch to a big one when needed.

Sometimes it is good to get a pickup truck ahead of time, but often a smaller, less versatile car will suffice. But not quite a motorbike.


Databases are often the only stateful component in a system - statefulness is inherently complex.


Go with SQLite if you can get away with it. It's a library, not an external engine, and databases are stored inside normal files, which makes a lot of things easier if you're building a standalone app (as opposed to server-side software).

SQL is well worth its time to learn. It's a good DSL for relational data. Most programming languages used for regular code are not very convenient with relational data. As for its security issues, this is actually simple - one has to respect SQL as a real programming language with its own syntax and grammar, instead of resorting to idiocies like gluing strings together in an ad-hoc manner.
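A minimal illustration of both points (Python's built-in sqlite3; the table is made up): the database is just a file, and data travels through placeholders instead of glued-together strings.

  import sqlite3

  conn = sqlite3.connect("app.db")                  # a plain file, no server process to run
  conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

  name = "Robert'); DROP TABLE users;--"            # hostile input is harmless as a bound parameter
  conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
  conn.commit()

  for row in conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)):
      print(row)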


I would say SQL is easier to master than most functional or procedural languages. The true issue, in my experience, is that it is different enough that most developers don't want to take the time to learn it beyond the basics, much to their detriment.


I think you're forgetting about practicality and not reinventing the wheel.

I'm going to use a car analogy here. Modern cars are incredibly complicated machines. They're also generally very reliable, thanks to about a century of development and engineering, and (relatively) inexpensive thanks to economies of scale.

If I want to transport myself to work on weekdays, and transport my girlfriend and maybe another friend on a weekend trip, and carry a bunch of groceries once a week, I can do all of that with a standard 5-person car. It isn't completely optimal for any of those tasks: it has more space than it needs for any of them, especially the weekday commuting. For my daily commute, I don't even reach highway speeds because I live close to work, so the engine is seriously overkill.

So I could have a custom-designed vehicle for each of these use-cases. Each vehicle would be a little less complex than my current car. But that's a lot to maintain, and would surely be far more expensive and less reliable, since each one is a one-off, requiring custom design and engineering, special parts, etc., and not benefiting from the economies of scale and engineering resources that a mass-market car gets. So instead, I just go buy a ready-made car and use it, and it works great and I'm happy.

Is PostgreSQL overkill for a lot of uses? Probably so. But it's designed to be used for all kinds of different tasks, and while it may not be quite as efficient for any of those tasks as some custom-designed solution, it's far more flexible, and it's readily available, plus it's benefitted from an enormous amount of engineering and debugging that a custom-designed solution would not. Things like CouchDB don't have nearly the number of users and amount of development, so while they may make sense for some tasks, the fact that PostgreSQL has more lines of code does not necessarily mean it's less reliable, in fact the opposite is likely true, just like an off-the-lot Honda or Toyota is likely much more reliable than some custom-designed car that someone built in their garage or some high-end limited-production exotic car like a Ferrari or Bentley.

The reason for using an off-the-shelf solution is because it's fast and easy and reliable. It doesn't matter if you're not making use of 80% of the features or capabilities. And software isn't like cars or engines; hard drive space is nearly free, and except for certain applications you're not likely to see a significant downside to just using a standard SQL database versus something more tailor-made. The main problem is cost (like with Oracle), but with PostgreSQL or MySQL this isn't an issue since they're free (and Free). It also helps that they use a standardized query language which makes them much more accessible.


I've found that as soon as you change your data model and now need to deal with old data, document stores get just as complicated but require more custom solutions.


> TDD is good for fast feedback in some domains, ...

That sounds like "an IDE is good for fast feedback in some domains." If someone says an IDE is only suitable for GUI applications but not for the software he is writing, he probably means "look at me, I am the old-school tough guy." It's about the person disliking the tool, not about the domain.


Having loosely coupled microservices has benefits for maintenance as well as scalability. If you are quickly iterating on a product and have it up in a rough state, then you can easily work on separate parts without having to worry about affecting the whole.


It can go both ways. If you mess up modification of the microservice such that it breaks other services that relied upon it, you quickly get into ceremony that a small team might struggle with. A monolith might have had the problem solved faster.


You're right of course, but it can also go the other way. If you break something then it seldom brings the service down, and often the breakage is very visible and can be seen by queues backing up. You can restore the broken service without having to redeploy the system as a whole.

Like most things in life there is no right answer, with tools like Terraform you can build a very complicated microservices system with not much effort, but only if someone on your team is experienced enough. If you're in a small team it's probably not worth the effort of learning the techniques and putting them in practice. We hate premature optimisation after all.


Quite possibly the first time I have agreed with everything in a post on HN! Just because a new paradigm exists doesn't mean it will solve all of your problems and will likely introduce several more!


In short, ask the question "when is practice X useful?" instead of "is practice X a good idea?"

Shorter version: Cost/benefit.


Continuous integration is a good thing. Back in the bad old days you'd have three people working on parts of the system for 6 months and plan to snap them together in 2 weeks and it would take more like another 6 months.

Agile methods are also useful. If you can't plan 2 weeks of work you can probably not plan 6 months.

When agile methods harden into branded processes and where there is no consensus on the ground rules by the team it gets painful. The underlying problem is often a lack of trust and respect. In an agile situation people will stick to rigid rules (never extend the sprint, we do all our planning in 4 hours, etc.) because they feel they'll lose what little control they have otherwise. In a non-agile situation people can often avoid each other for months and have the situation go south suddenly. In agile you wind up with lots of painful meetings instead.

Also I think it is rare for one language to really be "best for a job". If you want to write the back end of a run of the mill webapp, you can do a great job of that in any mainstream language you are comfortable in.


> Agile methods are also useful. If you can't plan 2 weeks of work you can probably not plan 6 months.

Hmmm. I was just thinking the opposite yesterday. I'm a performance engineer working closely with two teams, one doing Agile and the other relying on wikis and ad-hoc in-person whiteboard discussions. I find the non-agile team more productive, efficient and dare I say happy. The Agile-based team makes me sit in on their daily scrum meetings. Although everyone uses it to sync up on their dependencies, it just drags for an hour almost every day. I can visibly tell the devs walking out of the room spend more time worrying about "velocity" and "organisation of work" than the money making work that needs to be done. It almost feels like the agile process gives them "one more job" of picking the doable things from the list of stuff that needs to be done so they look better than their peers with better velocity.

Simply put, I was wondering whether Agile is just not a good method when you can instead strive for good leadership and healthy collaboration among the individuals on the team.


> I can visibly tell the devs walking out of the room spend more time worrying about "velocity" and "organisation of work" than the money making work that needs to be done. It almost feels like the agile process gives them "one more job" of picking the doable things from the list of stuff that needs to be done so they look better than their peers with better velocity.

Classic symptom of managers using agile as a (micro)management tool. Velocity, burndown charts, etc. are meant to be used by the team as a self-calibration tool. Managers do not get a say in what they think the velocity should be, either for the team or for individuals. If they do so, they create an incentive (let's be blunt, an overwhelming incentive) for the team/individuals to game them and that way lies madness.

(As an aside, the best response I've ever seen to this type of dysfunction is a team who simply decided to retcon the charts on the fly to make the work committed to match the work done. Management was happy that the burndown chart was right on target, developers were free to be fully productive instead of worrying about what their velocity looked like; it was a win-win solution all around.)


I've seen Jira tickets about creating Jira tickets :)

Broadly speaking, I agree with this comment. Micromanagement of Agile teams is detrimental. Implicitly, the message is that managers should leave their teams to work in peace?

The question I have: assume you are that manager. You have 5 agile teams working on 5 client projects. One team seems to get work done much slower than the other teams. What do you do? (And how does one actually track progress of an agile team to begin with? Story points can vary wildly across teams).


> And how does one actually track progress of an agile team to begin with?

You've hit upon the key question: you want a progress metric that's in line with productivity. IMO, assessing that comes down to evaluating whether the functionality/code delivered (at a high level) per sprint is reasonable, whether the task breakdown the team is operating against is sensible, and whether tasks are being accomplished in a reasonable amount of time relative to their difficulty, the skills of the person doing them, etc. In other words, the evaluation needs to be specific and include the circumstances: asking "Why did implementing XYZ take longer than it normally would, even taking into account ABC?" is going to result in fixing the real issue, whereas asking "Why is your velocity number so low?" is going to result in "fixing" the number.

That, in turn, requires the manager to either possess solid software engineering skills or have access to someone possessing those skills who can make the assessment in their place. And, yes, it's a lot more work. But, as has been amply documented, attempting to manage off a single number (a self-reported number, no less), simply doesn't work.


> I've seen Jira tickets about creating Jira tickets :)

So long as it's one:many.


Or one-to-one, where the first is a spike that will take time to determine what should go into the second (discovery, learning, experiments, further estimation). You would not want to commit to a unit of work without some idea of what it entails in Agile. The alternative is Programming, Motherfucker, where you just dive in and see where it takes you. The business side usually prefers more predictability. The developers usually prefer Just Getting It Done (tm).


I was once on a project where an "agile" team, at the behest of their managers, held a sprint "to improve velocity." I kid you not.

I will add that I was not on that team. Our scrum master, who was excellent, shielded us entirely from the management madness.


What was the idea behind this? Spending some dedicated time knocking through some accumulated crud? (I think people call it 'technical debt' these days). Is that a bad thing?


Nope. That I could have respected and would have made some sense. It was code for "the team will work longer hours and over the weekend so that an arbitrary number is higher."


Hmmm. Weird. That sounds like people trying to engage in "growth hacking". I wonder if someone's bonus was tied to it.

If they were really serious about "velocity" (for whatever reason; some are legit), they'd divide by man-hours, not weeks, anyway, and have actuals going back 3+ months (6+ is better) to baseline their sitrep before they started knob-twiddling.


This was just "work long hours to get the project back on track" being communicated as "improve velocity" with about as much success as you'd expect and less understanding than you're projecting.


It doesn't sound like the 'Agile' team is doing standups right.

There are either too many people in the room, or people are talking too much, and probably about the wrong things.

And, if they are doing standups wrong, I question how much else they are cargo-culting.


Generally claiming that a person is doing the method wrong when the method brings no perceived benefits raises a red flag on the general applicability and value of the methodology itself for the problem the person is trying to solve.

There is no one true way to organize development of software and generally shoehorning dogma without proof of value is counterproductive.


I think you can absolutely diagnose an hour-long daily standup as "wrong". Standups need to be limited to 10-15 minutes. That's one of the key tenets of standups: they need to be short and focused.


Yes, regardless of the process framework used, having the team meet daily for an hour is pathological and probably an indicator of deeper issues. Which just doing agile "more by the book" won't fix.


Can you really stand for hours at the standup meeting? :-)

The daily standup meeting must be short and focused. Moreover, the standup meeting is for developers only. No project manager, no customer, no QA team. Just developers talking about their problems. Everything else deserves its own separate, hour-long meeting once every week or two.


Just want to stress the importance of having stand-ups be developer-only. Do not let project managers, product owners, stakeholders, clients, and so forth become part of it. Otherwise the developers won't be ready to start working when they take their seats, because a lot will have been said in standup without actually saying much.


Eh, it's valid in this case. Ten minutes standup is formulated as being opposed to a full meeting. The full meeting was there first, standups are supposed to be different. If it's turning into the full meeting it's worth calling that out.


I'm sorry, but no, it doesn't. Sometimes people just do things the wrong way, and there's nothing that can be done other than to tell them they're doing things wrong.


> Generally claiming that a person is doing the method wrong when the method brings no perceived benefits raises a red flag on the general applicability and value of the methodology itself

Agree completely. When incredibly common complaints about a methodology are raised and the response is "you're doing the method wrong" you start to err towards dogma and a "No true Scotsman" approach to management.

Sticking to any technique, including agile, no matter what and with no modification is a symptom of a problem. Projects are unique, there's no one size fits all way to manage them.


Yes, everyone in there knows that scrum meetings should be short, but somehow those meetings run long, because everyone thinks their questions need answers and their dependencies definitely must be resolved... so, an hour it goes.


The point of a standup isn't to get answers or resolve dependencies - it's to make others aware of them.

The first thing we do after stand ups is have a bunch of quick one-on-one or small group meetings.


Our meeting format was for each person to say a) What they worked on, b) What they were about to work on, and c) If they needed help with anything. Our rule was that you could ask for help as long as you were just scheduling a time to meet after the stand-up. It worked really, really well.

Then we got a new scrum master whose desired format was to talk about each item on the Kanban board each day, even the ones that weren't being worked on.


It's a status meeting. Developers dislike status meetings, so they invented their own meeting: the daily standup meeting.


This is really bad and a major failure on the part of the scrum master. The point of this meeting is to help the team sync on what they are doing and what they plan to do.

It should take 10m at most. Any issues (blockers, dependencies, etc) should be taken to separate ad-hoc discussions to let the team get back to work.


If there is no room for syncing up and every issue is to be taken offline, one should question whether it makes sense for the standups to be synchronous at all. In my team we do standups asynchronously over Slack; it's a great and non-obtrusive way of updating each other, and we achieve the same thing.


It's not the same.

The standup performs an important social function of making everyone involved in the work of the team. People have to face each other, feel accountable to, and feel supported by the group.

And of course there's no room for synching up: The morning standup is meant to facilitate those synch-up meetings, synch-ups that don't have to involve everyone.


>>> The standup performs an important social function of making everyone involved in the work of the team. People have to face each other, feel accountable to, and feel supported by the group.

This is a fantastic description. And it goes a long way towards explaining why standups (even if I'm not directly involved) leave me feeling so uncomfortable.


Kick the project manager out of the standup meeting and you will love them.


If that makes your standups better, you need a competent project manager.


You are right, the problem is in the project manager's head: he abuses the standup for status updates because of a lack of understanding of the purpose of the standup meeting, or because of a lack of discipline (developers may provide infrequent updates in git/jira). It's hard to debug a development process by email. However, if developers do their standups properly, then their development process will fix itself. (Or the project manager will fire the most active one. ;-) )


Assuming this is scrum or something similar then if "[the standup] just drags for an hour almost every day" then they're not really doing it right.

I've been in well run Agile teams - and they're wonderful. I've been in badly run "Agile" teams and they're soul destroying. Either way agile is not the problem (or, I dare say, the solution).


One thing I've observed in (badly-run, I think) Agile teams is big standup meetings, where if anyone starts a discussion or even asks a question (rather than just reporting status) somebody immediately says "offline!" -- i.e., have that discussion after the meeting.

I can see that the motivation is to avoid wasting the whole team's time on a discussion that only needs two or three people; but suppressing discussion can hurt too, as it stops people learning about tricky issues outside of their immediate work area.

It would be helpful to have some rules of thumb to show when you're doing Agile wrong. Probably those exist already -- anyone got a good link? And probably "too many people in the standup meeting" is a good rule of thumb!

Dragging on for an hour sounds absolutely awful. I'd even say more than about six people is too many.


The point of a standup is to learn that there is a tricky issue outside of your immediate work area, and to know who's got expertise on it. That way you know who to contact if "outside" becomes "inside". The actual details, you hash out in a separate meeting with just the people involved. "Offline!" is absolutely the right response if a standup starts veering into technical details.

I once had a team of 10 that had a problem with standups extending into half an hour. We resolved that we'd make the standup one minute shorter each day. After a month and a half, we had it down to a one-minute standup (6 seconds per person). It was still useful, though a bit extreme - I'd target about 5 minutes for a 10-person standup (30 seconds per).


I've been in a situation where we did agile with 30 people. Standup took 10-15 minutes.


Did it work well?


Extremely. You'd take ten seconds to say what you had to say. Sometimes you'd say "I need help with SQL", say, and someone would say "I'll help", and you'd be done.

But we had a real agile guru on the team. We didn't do "Agile Methodology" exactly; we did Extreme Programming, and we kept tweaking it. Sometimes he'd say "let's try changing our approach in this way for the next two iterations, and see how it works out". We'd do the experiment, and keep the changes or not. We kept hacking and experimenting with the process, in a controlled way, but never in a "this is how it's done" way.

So if you have an "Agile is the one right way" person trying to run your team with big-A Agile, and he/she wants to do 30-person standups, you're probably in trouble...


Sounds great! Keeping track of how well the process itself is working, especially, and being willing to continually tweak it to fit the team and project.


One doing Agile and the other relying on wikis and ad-hoc in-person whiteboard discussions.

The ad-hoc approach also sounds quite agile (at least with a small 'a'). It's certainly closer to Agile than to Waterfall, assuming they didn't do a big design up front before writing any code.

I think the ad-hoc agile approach can work very well with a good team. But Scrum fans always seem to warn against cherry-picking just the bits of Scrum you like and not using the whole process.


> But Scrum fans always seem to warn against cherry-picking just the bits of Scrum you like and not using the whole process.

But of course. If you just cherry-pick and experiment, then you won't have any reason to pay an expensive expert to tell you how to do it right!


> I think the ad-hoc agile approach can work very well with a good team. But Scrum fans always seem to warn against cherry-picking just the bits of Scrum you like and not using the whole process.

I'm a big Scrum fan (when it works), and my biggest takeaway is that it's exactly meant for cherry-picking and modifying. The best team I've ever been on was one where we were all using Scrum for the first time. We were constantly trying to mold it to fit us best, and it ended up looking nothing like the original model of Scrum. It was also the only time I've ever been on a Scrum team that did proper retrospectives, which I think is the biggest point!

Pretty much every other team has either ignored it ("Why do we need to discuss Scrum, it's in the book and laid out for us.") or merged it with the Review, so that managers, stakeholders, and people outside the team are involved in that. And no one wants to suggest changes or raise complaints with outsiders watching.

Too many people seem to read a book about Scrum, memorize all the concepts and rules and abide by it, without reading any of the justification behind it. If you swear we need story points, and they need to follow a fibonacci scale, but you can't tell me why story points are better than estimating hours, you're doing it wrong (and then points always get fucking conflated with hours anyways). If you understand that story points are just one way of estimating a task's effort relative to other tasks, and that relative estimates tend to be easier to make, and scale better with all the other estimates when things change, then you're allowed to make the call of whether story points are best for the team, or a different estimating system, or none at all. Even better than someone understanding that, everyone on the team should understand that and be able to weigh in.


Depends what you cherry pick. One of the key assumptions of Scrum is colocated teams and (implicitly) engineers who want to understand and think creatively about the domain problems. Without those you have my personal guarantee you will fail


I'm not a huge fan of Scrum, but there's a grain of truth in there. If you're forced to use the whole thing, it's harder to creatively misinterpret the underlying spirit by e.g. having one hour "standups" where everyone is sitting down, having a backlog that covers a year of work in excruciating detail, or estimating in hours then using those estimates to fire people.


Is everything else between these two groups completely equal? I seriously doubt it is, in which case I don't think it's fair to make any conclusions that hold weight.

This is one of the problems I have with these sorts of things. My company went Agile about two years ago, and lots of people like to rant about how much better everything is now and how much more productive we all are because of it. Except we actually have no way of knowing whether it made any difference at all.


Sorry, I should have made it clearer. I was ranting more as a personal thought than making a definitive statement. The teams work on different projects. The diversity and experience of their members are different. They are not strictly comparable.

But, looking at both teams from above, it feels like the non-agile team is very simple and it works. The agile team is more complicated and only works on paper.


From my personal experience: experienced teams can thrive with almost no methodology and an ad-hoc process because... They had experience with other processes and can see the good and bad in them.

I still advocate agile for less homogeneous teams or in situations like other posts have highlighted but a team of more senior developers with a working process that is open to be improved (one of the cornerstones of agile) will thrive with less churn than when forced into a by-the-book agile process.


For me Agile is by definition an ad-hoc process just one with guiding principles for how to go about organising it. The problem comes with formalised methodologies based on Agile which are treated as a one size fits all approach for any team.


1 hour daily meeting sounds horrible (and dysfunctional) whatever the development life cycle looks like.


Agile is a very loaded word. One meaning of Agile is a very specific kind of process, the other meaning and perhaps closer to the original manifesto is what you're describing with the "non-agile" team.


Nowhere does Agile say you have to do standups or measure velocity :) At some point the team that's inventing its own process will find it stops working, what's important is if they can identify when that happens and find new ways of getting things done.

http://agilemanifesto.org/


Scrums and stand-up meetings are mostly a waste of time. Scheduling frequent milestones is not.


A "1 hour daily standup" is not agile. The point of a stand-up is just that... everyone can stand because the meeting is so short. Ideally 15 minutes max.


It's annoying when people get really dogmatic about having to stand up in the stand-up meeting. I know it's supposed to remind and encourage everyone to keep the meeting short, but in my experience that simply doesn't work.


Agreed, enforcing standing up but not brevity is the worst way to do standups, and a clear sign of pure cargo-culting.


Maybe not. But have a culture that the meeting really is that short, and let people sit if they want to.


Agreed. Don't force people to stand; keep it so short and sweet that people want to stand.


The reason it is referred to as a stand-up is because it is short and you stand up for the whole thing. An hour-long meeting is just that: an hour-long meeting. Something is not working right in that agile situation, which is why they aren't happy.


Scrum != Agile. Heck, it sounds like you're doing Scrum wrong anyhow.

A Kanban board, prioritization, CI + CD, and automated tests are probably about as much agile as most companies need.


You are not doing agile.

In a daily scrum you cannot have conversations; it's just everyone stating 3 things: what I was working on yesterday, what I will do today, and whether I need help from someone today. For a team of 10 people (a large agile team!) it should not last more than 15 mins.

I guess other parts are broken too, if they don't even know how to do a standup.


Just because someone says they're using Agile doesn't actually mean it. Your non-agile team sounds much closer to the actual goals of "Agile".


Seems like people think using Agile and using your brain is an either-or kind of thing. It's not magic.


Agile is not a method. It's just a buzzword.


> Continuous integration is a good thing. Back in the bad old days you'd have three people working on parts of the system for 6 months and plan to snap them together in 2 weeks and it would take more like another 6 months.

Also extremely important is that it brings you:

* Working tests. If you make changes and forget to or don't run all tests, the CI server will catch it and make you aware. You still have to write (useful) tests, of course, but that's a discrete problem.

* Entirely kills the excuse "Well, it builds on my machine". This means no undocumented dependencies, and the entire build is scripted.

* "Release builds" are a non-event. They just happen all the time, and your "release" is just the latest build that includes the changes you want and passes testing. This removes situations where there's only a limited set of people (eg: one) that can do a full release build.

Doing CI early on is much simpler. Aside from being beneficial from day 1, it is much easier to incrementally add to your build script/environment as needed than to try to create a script later based on a complicated manual process.
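To make the "entire build is scripted" point concrete, here's a minimal sketch of the kind of single entry point both developers and the CI server can run. It assumes a hypothetical Python project with a requirements.txt, flake8 and pytest; the step list is illustrative, not prescriptive:

    #!/usr/bin/env python3
    # Single build entry point: developers and the CI server run the same thing.
    # (Hypothetical project layout; swap in your own steps.)
    import subprocess
    import sys

    STEPS = [
        ["pip", "install", "-r", "requirements.txt"],  # declared dependencies only
        ["flake8", "src"],                             # lint
        ["pytest", "tests", "-q"],                     # full test suite
    ]

    def main():
        for cmd in STEPS:
            print("==>", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                return 1  # fail the build on the first broken step
        print("build OK")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Because CI just runs this same script on every push, "works on my machine" and "only one person can do a release build" both go away.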

Not having CI is not nearly as bad as not using source control, but it's in the same ballpark.


"Agile methods" as a term has zero meaning at this point. Talk about the specific things you do to make things work well instead of lumping them under the heading "agile".

The issue of people refusing to coordinate and working independently on two useless things for months at a time is an issue that transcends any popular software terminology. The role of a good manager is to see that units A and B are not in sync and resolve that. That's true in all disciplines. You can't attribute this to "agile" just because you meet every 2 weeks.


I think it's the nature of how a company implements agile, rather than agile itself. I have a client that subcontracts a lot of work to me (my work is mostly gathering business requirements and BPI).

They use "agile" but I have budget and timeline constraints on every project.


I find agile works and works well when you limit the definition to "do what you can to avoid waterfall".

Where it starts to be about how you conduct meetings then it tends to fall to pieces.


No, it's not just you and yes, we often do overcomplicate software development.

It's been that way long before agile methodology or microservices though. Complexity-for-the-sake-of-complexity EverythingHasToBeAnAbstractClass frameworks have been plaguing the software development business since at least the 1990s and I'm sure there are similar stories from the 80s and 70s.

It's hard to find a one-size-fits-all easy method for not falling into that over-engineering / over-management trap. I try to focus on simple principles to identify needless complexity:

- There is no silver bullet (see "microservices"): If the same design pattern is used to solve each and every problem there probably is something amiss.

- Less code is better.

- Favour disposable code over reusable code: Avoid the trap of premature optimisation, both in terms of performance and in terms of software architecture. Also known as "You aren't gonna need it".

- Code means communication: By writing code you’re entering a conversation with other developers, including your future self. If code isn't easily comprehensible again there's likely something wrong.


I think the tendency to over-engineer is a symptom of retrofitting an assembly-line 9-5 shift onto the creative process of writing code.

You sit a guy there 5 days a week for many years. He has to look busy, he has to do something with all of that time. He's not going to get paid if he writes the code in the most simple, concise, and straightforward way possible and then goes home until they're ready to make a new feature two weeks later. He has to sit around and make up something for himself to do.

Contrast with side projects. I have many simple weekend projects that continue to work well and provide their promised utility years later. Because you just write what you need and stop, you don't get sucked into the disastrous complexity spiral that every company-internal software project ends up as.

The other factor here is that people need some signal to say "I'm good at my job" (because no one can actually tell). That signal has to go to colleagues, superiors, and peers outside the workplace. People therefore invent artificial complexity or take intentionally convoluted approaches so they can sound fancy. In the most extreme cases, this is a conscious decision designed to block out "competitors" (colleagues). In many cases, it's a subconscious way to ego-stroke (and to mix in a little bit of variety per point one above).

This is especially true when a household brand like Google or Facebook pushes out some new esoteric thing; everyone wants to see themselves as a Google-or-Facebook-in-waiting and it makes it easy to pitch these things to the bosses, when the fact is that the kinds of things that work at large public companies like Google are probably not going to work in small companies.


Thank you for shining a light on the psychological side of this discussion. I like to highlight psychology when I have these discussions with peers because too often technical folks view the world through technology lenses instead of human ones.


Very accurate and poignant.


> Less code is better.

I'd change this to "Write as little code as necessary, but no less." The problem with "Less code is better" is that some folks use that as justification to write clever one-liners that are difficult for other developers to read. That is not better.

That aside, I agree with everything else you said!


> some folks use that as justification to write clever one-liners that are difficult for other developers to read.

Seriously - Please, if you do this, stop it! You're just slowing everyone else down.


There's also a tendency to "architect" things so that common things can be reduced to one liners. A common one is to create form generators via attributes. Things always end up being way more complicated this way.


> Favour disposable code over reusable code: Avoid the trap of premature optimisation, both in terms of performance and in terms of software architecture.

Some people call it "premature generalization". Relevant C2 page: http://wiki.c2.com/?PrematureGeneralization


The day I learned about premature generalization was the day my speed as a developer jumped by a factor of three. When you're generalizing, it's easy to overlook how much time it takes.

My rule these days is that, in most cases, I shouldn't generalize before the same code exists in three places. Why not two? In my experience, things that are done twice aren't necessarily done three times. But things that are done three times are likely to be done a fourth.
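As a toy sketch of that rule (all names made up): live with the duplication at two call sites, and only pull out the general helper once the third occurrence shows the shape is real:

    # Two call sites: tolerate the duplication, the "sameness" may be coincidental.
    def order_summary(order):
        return "{}: {:.2f} {}".format(order.id, order.total, order.currency)

    def invoice_summary(invoice):
        return "{}: {:.2f} {}".format(invoice.id, invoice.total, invoice.currency)

    # A third occurrence shows the pattern is real -- only now extract the helper.
    def money_summary(doc):
        return "{}: {:.2f} {}".format(doc.id, doc.total, doc.currency)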


Also, when you've got something that's the same in two places it's hard to tell if it's really the same concept or just a "rhyme" that might later evolve into two distinct things.


> Favour disposable code over reusable code

I prefer this way of putting it over YAGNI. This makes it sound more like the trade-off it is.


It's interesting to work alone on a big-ish project with no one telling you what to do and not having to explain anything to anyone. It easily feels ten-times more productive (in terms of accomplishment), but then again it won't have a business case and one doesn't get paid, either.

I think I'm least productive in open source (again, in terms of felt accomplishment), because if one isn't the sole maintainer (like above), then it's a pretty safe bet that few changes take less than a certain baseline (eg. 1 hour) -- someone always has a nitpick, CI always takes its sweet time, oh, did we discuss yet in which branches we wanna merge this? Ah, please avoid puns in documentation and comments. Do we want this? Can you write this differently, like ...? Did you manually test this or that scenario ...?

(Now this also has advantages in terms of stability, quality and consistency -- but it's also obviously far, far less efficient)

On the clock it's more like "Meh, change that and that, otherwise it's good, so merge it after these changes and tell ops to put it in prod"


100% this. Applying Occam's razor to software engineering: the simplest solution is most often the best one.

It always amazes me, the instinct by engineers to overcomplicate things. They shoot themselves in the foot and curse when the inevitable subtle bugs start rolling in.


One of my fav tech talks ever (and I watch a lot of tech talks) is Alan Kay's "Is it really 'complex'? Or, did we just make it 'complicated'?" It addresses your question directly, but at a very, very high level.

https://m.youtube.com/watch?v=ubaX1Smg6pY

Note that the laptop he is presenting on is not running Linux/Windows/OSX and that the presentation software he is using is not OpenOffice/PowerPoint/Keynote. Instead, it is a custom productivity suite called "Frank" developed entirely by his team, running on a custom OS, all compiled using custom languages and compilers. And, the total lines of code for everything, including the OS and compilers, is under 100k LOC.


I can't understand why people don't refer more often to Mr. Kay's message. To be bluntly uncharitable, and only half kidding, I do understand why consultants don't buy into it: simpler systems that are less fragile mean less work.


Employees and management don't buy into it for the same reason that consultants don't buy into it. It means less work. Less opportunity to sound smart, seize control, and/or ego stroke. Less variety to break up the work-week's monotony.


Ego is one of the larger problems I've seen over the years. This usually shows, as you mention, with someone trying to sound smart. The irony here is that I've consistently found -- both inside and outside the software industry -- that the smartest people in the room are the ones who can speak about complex topics in a simple way.


It's a dilemma, because "it takes one to know one". While a few smart people in the workplace may be able to appreciate a brilliant dilution of an extremely complex topic into something approachable, most will not understand the starting complexity and just assume it's an approachable topic.

This is fine and everything, but it's bad self-promotion. If you want your bosses to give you a raise, you need them to think that you have a unique, difficult-to-acquire skillset and that it's worth going to lengths to keep you happy.

Unfortunately, modest behavior rarely results in recognition. Bombast is a very effective tool, and at some level, you always have to compete against someone.


Well, it depends. I personally don't feel the need to self-promote to get a raise. I'll probably lose out on a few raises or promotions because of that, but I make a good amount of money and I'm good at what I do. That's enough for me.


This is a fine position to take, but it demonstrates one of our pervasive social problems. People with the humility, modesty, and judgment to make good decisions are frequently passed over because they don't feel the need to lead people along or "prove" their value, whereas clowns frequently realize they have nothing except the show and actively work to manipulate human biases in their favor so that they'll continue to climb the ladder. This works very well. The end result is that good people end up hamstrung by incompetent-at-best managers, and they can take down the ship.

The dilemma re-emerges as one asks himself whether it is right to sit by and allow the dangerously incompetent to ascend based on mind games.

My answer used to be "Yeah, I'll just go to a place where that doesn't happen". I no longer believe such places exist.


Engineers don't buy into it because it's not cool. Complex systems are cool. It goes back to the phrase "well-oiled machine". Swiss clocks. People standing around a classic car with its hood open. A complex system of things working just perfectly is super cool, and fixing them when they break is a popular pastime.


Yeah, this is what I'm getting at with the variety thing. I think that good talent tempers this tinkering impulse when a potential breakage could imperil production. They learn that as fun as complexity can be in the right context, having to lose a weekend staying awake until 5am on Saturday night/Sunday morning trying to fix something stupid cancels it out.

Having a lab and doing experimental stuff is great, but choosing to stake your company's products on it should be a much weightier consideration. In practice, we see that this weight is apparently not felt by many.


I wonder why he chose to build demo applications, instead of a powerful and useful development tool that has strong value somewhere?


Seems like the resources he had to work with in the VPRI project were pretty limited. It will be interesting to see what his team comes up with now that they are working with SAP and YC.

So far, I know about this: https://harc.ycr.org/project/

Hopefully, they're shooting for something like this: https://www.youtube.com/watch?v=gTAghAJcO1o


Something missing from this entire discussion is that developers have a hard time understanding what are truly best practices for their product they are creating/maintaining. It is a bold assertion to say that everyone understands all the different nuances in creating software.


Alan just complicated his own laptop by not using what is proven to work.


No, he made a point, which no one would have believed without his example.

The whole vertical software stack sitting on top of the hardware of a PC is generally considered a massive towering beast, with layers of abstractions and armies of programmers needed to implement and maintain each layer. To say that this does not need to be so would be taken as theoretical, impractical nonsense without any proof. Which would actually be a valid position, because doing software is so hard that you generally can't guarantee something will work without actually doing it.

So, yes, to make that point, and to have it taken seriously, he really needed such an example.


That is absurd. You can't write an OS in 100KLOC.


http://www.projectoberon.com/ is released and small enough for a book in the dead-trees format.


In case you're not joking, MINIX is much smaller than 100KLOCs.


Check out stats on the kOS/kparc project: https://news.ycombinator.com/item?id=9316091

There is also a more recent example of Arthur Whitney writing a C compiler in <250 lines of C. Remarkable how productive a programmer can be when he chooses not to overcomplicate.


Everything I saw in the link looks like K, not C. Do you have a link to the C compiler done in C language?


1) False dichotomy. Developer familiarity is one of the most important metrics for choosing "the best tool for the job".

2) Conway's Law applies in reverse here: If your organization consists of a lot of rather disjoint teams, then microservices can be quite beneficial because each team can deploy independently. If you're one cohesive team, there is not much benefit, only cost.

3) Depends. If you have a well-designed distributed system, it can be amazingly resilient and reliable without introducing much administrative overhead. (From my experience, OpenStack Swift is such a system. Parts may fail, but the system never fails.) There are two main problems with distributed systems: a) Designing and implementing them correctly is really hard. b) Many people use distributed systems when a single VM would do just fine, and get all the pain without cashing out on the benefits. See also http://idlewords.com/talks/website_obesity.htm#heavyclouds

4) Continuous integration was not meant to help with complexity. Its purpose is to reduce turn-around time for bugfixes and new features. If your release process is long and complicated, the increased number of releases will indeed be painful for you. Our team sees value in "bringing the pain forward" in this way. Your team obviously puts emphasis on different issues, and that's okay.


I find microservices can help in just keeping everything small and focused. I know you can do this with a monolith, but having a process boundary really enforces it.


I find that the boundary creates operational headaches. A function call won't time out, deliver a 502 error, have authentication/authorization issues, require load balancing, etc. etc.

A REST API will.
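A rough sketch of the difference, assuming Python and the requests library, with a hypothetical internal pricing service: what used to be a plain function call now has to own timeouts, retries, auth and partial failure.

    import requests  # assumes the 'requests' library; the service URL is made up

    # In a monolith this is the whole story:
    #     price = pricing.quote(sku)
    # Across a service boundary the caller owns the failure modes:
    def quote(sku, token, retries=3):
        for _ in range(retries):
            try:
                resp = requests.get(
                    "https://pricing.internal/quote",
                    params={"sku": sku},
                    headers={"Authorization": "Bearer " + token},
                    timeout=2.0,
                )
                if resp.status_code == 200:
                    return resp.json()["price"]
                if resp.status_code in (502, 503):  # load balancer / upstream trouble
                    continue
                resp.raise_for_status()             # auth and other errors surface here
            except requests.Timeout:
                continue
        raise RuntimeError("pricing service unavailable for " + sku)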

Plus, once you've debugged a problem that involves crossing 5 microservice boundaries you'll start to wonder if it was all worth it.

Monolith is also a wrong (and somewhat derogatory) word to describe a non-microservice architecture. There's nothing monolithic about loosely coupled code running on the same machine.

I really think that microservices are a hack to deal with Conway's law in large corporations. Operationally it's inefficient, but it fixes a nexus of technical and political problems when the correct boundary is picked.


Well said. "Monolith" is a pejorative that prejudices any discussion of said code.


Except most applications are monoliths. Monolithic code can still be loosely coupled. However, it is harder.


No, not at all.

The only difference I noticed, with respect to coupling, in Rube Goldberg systems (what you call "microservices") is that tight coupling between their components was much more painful: particularly debugging across multiple service boundaries.


Until you mix it with another legacy part of the system; then it will be a pain in the neck.


> 3) Depends. If you have a well-designed distributed system, it can be amazingly resilient and reliable without introducing much administrative overhead.

About that point: scaling development across many devs is a very difficult problem. It just doesn't scale.

A lot of organizations recruit/grow a lot of people and they try to get away from the human scaling problem by having them work independently/in their corner.

This allows people to execute a lot of stuff... usually the same stuff 10 times, with little coordination or collaboration between them, to the point that it can be felt in the resulting system(s).


Many of the programmers I have worked with actually love complexity, despite trying to convince others (and most likely themselves) that they hate it.

Advice tends to be cherry-picked to suit an agenda they already have (with your example of microservices, the vast amount of material saying that they're very difficult, that you should start monolith-first, and that they solve a specific set of problems is largely brushed under the rug).

I think because our industry moves so fast there's a fear of becoming irrelevant. Ironically companies are so scared of not being able to employ developers that they're also onboard with complicating their platform in the name of hiring and retention. I think this is down to the sad truth that most developer roles offer very little challenge outside of learning a new stack.


> I think this is down to the sad truth that most developer roles offer very little challenge outside of learning a new stack.

This is a gem observation from this thread. In my own tech sphere the first thing developers are talking about with each other is the new x,y,z lib or framework they're using to accomplish something relatively banal. There's still a lot of work out there that really boils down to basic CRUD and reporting at the end of the day, and developers naturally begin to invent complexities on top of that CRUD to make the work interesting and challenging. I'm absolutely guilty of this first hand.

I've found personally it also doesn't help that past work on projects e.g. large Rails apps that were never architected well turn out to be such nightmares to work on. The memory of the end state of these projects lingers with developers as they move onto the next piece of work, and they're inclined to say "no that doesn't work" and pick up shiny new-tech to do the old job instead.

As a side analogy: most small business construction jobs, e.g. building a timber frame house, don't involve the builders arriving on site and being stumped by the challenge of how to put up the framing for the bedroom walls - there's also very little challenge in these projects, yet the reward is in the completion.


> There's still a lot of work out there that really boils down to basic CRUD and reporting at the end of the day, and developers naturally begin to invent complexities on top of that CRUD to make the work interesting and challenging.

I'd go so far as to say that _most_ work today (at least in startups) is building CRUD apps. The technology has changed, but the work hasn't. Instead of building CRUD apps in Rails, we now build them in React.


> This is a gem observation from this thread. In my own tech sphere the first thing developers are talking about with each other is the new x,y,z lib or framework they're using to accomplish something relatively banal.

The thing is, there is so much other stuff to learn/fix. Where I'm working now, most of my day goes into trying to understand the layers and layers of code they built. If they'd stuck to simpler code I could be learning more about the business and the users' workflows. I could be improving the UI with that knowledge, I could be optimizing the business. Instead all my efforts go into understanding the code base.


You've thrown together a bunch of buzzwords and asked if we are over complicating things.

Buzzwords can mean freaking anything. I've seen great Agile teams that don't look anything like textbook Agile teams. Microservices can be a total clusterfuck unless you know what the hell you're doing -- and manage complexity. (Sound familiar?) CI/CD/DevOps can be anything from a lifesaver to the end of all life in the known universe.

So yes, we are over complicating software development, but the way we do it isn't through slapping around a few marketing terms. The way we do it is not understanding what our jobs are. Instead, we pick up some term that somebody, somewhere used and run with it.

Then we confuse effort with value. Hey, if DevOps is good, the more we do DevOps, the better we'll be, right? Well -- no. If Agile is good, the more Agile stuff we do the better we'll be, right? Hell no. We love to deep dive in the technical details. If there aren't any technical details, we'll add some!

Software development is too complicated because individual developers veer off the rails and make it too complicated. That's it. That's all there is to it. Throw a complex library at a good dev and they'll ask if we need the entire thing to only use 2 methods. Throw a complex library at a mediocre Dev and they'll spend the next three weeks writing 15 KLOC creating the ultimate system for X, which we don't need right now and may never need.

It has nothing to do with the buzzwords, the tech, or software development in general. It's us.


It never seems complicated when I am doing my own side work for some reason. There are no design meetings, no hours tracking, no arguments on best practices, no scrum, no testing frameworks, dev ops, etc. I do use git and minimally create bash scripts to simplify repetitive tasks for deployment, but it's just a huge contrast to working in teams, where something simple takes about 50 times longer.

I think keeping things as simple as possible and always going for that goal will increase velocity overall. Everything should be subject to scrutiny for promoting productivity and open to modification or removal. I know there is a balance where you have to increase complexity in a team environment but keeping friction as low as possible in terms of process and intellectual weight couldn't hurt.

The most productive place I've seen so far is a huge athletic brand I worked for, where they kept teams at a max of 5 people on mini projects. This forced the idea of low overhead and kept the scale of management needed small. The worst place I worked for in terms of unnecessary complexity is a well-known host (although it is the best place to work in terms of people) that hired offshore teams with a one-size-fits-all mentality and layered in as much shit as possible, slowing development down to a mud crawl. I don't buy into process over productivity.


One of the things that helps when you're developing your own projects is that you can single-handedly decide to ruthlessly cull parts of the project that take lots of time but provide little value, and you (probably) have decent insight into what those are. You're also probably not at a scale where doing certain really crappy, slow parts of the job can pay off, so you can skip those.

Default form elements with some basic, nice styling to fit your theme? Form done in one hour. Special snowflake version of the same thing from the design team, which has no idea what the platform can and cannot easily be made to do, but the client is absolutely in love with? Two days, a third party dependency or two, some extra environment-specific bugs to track down later, and generally increased fragility (so more time lost in the future). This has slowed you down now and increased the resources required for the project indefinitely. But the client looooooves it.

Support Android pre-5.0, at the cost of 20% more development time, a pile of extra bug reports, an uglier, harder-to-maintain codebase, and a much much longer testing cycle, for a side project? Hell no. Client says that will cost them $4 million/yr not to support those? Ugh. FINE.

And so on, and so on.


Continuous Integration is (with a reasonable test suite) one of few elements of software development that I would consider almost essential for any long running project. It's just too useful to have continual feedback on the quality of the system under construction. (And this is before bringing in micro-services or any other complicating architectural pattern.)

Where I might agree with you more is on points 2 and 3: 'Microservices' and 'Advanced reliability'. While I have no doubt that these are useful to solve specific problems, I think as a profession we tend to over-estimate the need for these things and under-estimate the costs for having them. To me this implies that there needs to be a very clear empirical case that they support a requirement that actually exists. I'd also make the argument that the drive for microservices within an organization has to come from a person or team that has the wherewithal to commit resources over the long-term to actually make it happen and keep it maintained. (ie: probably not an individual development team.)


I think the "learn to code" movement as well as overly-technical interviews for developers are partly to blame for this. It's well-known that developers are tested on how to do something that's considered technically difficult, such as abstract CS problems or a complicated architecture, but they are rarely asked why certain tools, practices or architectures should or should not be used. Comparative analyses to make objective recommendations between different solution alternatives are also rare in my interviewing experience, but they are one of the most valuable skill a competent software engineer should have.

I don't agree on point 4 though - CI can be something as basic as running a monolith's tests on each commit, which makes sure that builds are reproducible (no more "works on my machine").


No. You are correct. Honestly, I think you can solve a lot of that by following one of Dijkstra's core principles: separation of concerns.

When you practice good separation of concerns, specific choices in different areas can be more easily fixed later. It requires having decent APIs and being thoughtful about the interaction of different components, but it helps immensely in the long run.

Microservices are one way to practice separation of concerns, but it can also be practiced in monolithic software as well, by having strong modular systems (different languages are stronger at this than others).


Well, yes, we are overcomplicating it. Except on the parts we are undercomplicating... And I still couldn't find anybody that can reliably tell those apart, but the first set is indeed much larger.

1 - Do not pick a new language for an urgent project. Do look at them when you have some leeway.

2 - Yep.

3 - There's something wrong with your ops. That happens often, and it is a bug, fix it.

4 - If CI is making your ops more complex, ditch it. If less complex, keep it. In doubt, choose the safest possible way to try the other approach, and look at the results.

5 - Do not listen to consulting experts, only to technical experts. The agile manifesto is a nice reading, read it, think about it, try to follow, but don't try too hard. Ignore any of the more detailed methodologies.


Much of the problem in the things you mention is that those things are specific solutions that have been confused with goals. I.e., "we're supposed to build microservices" is a horrible idea, as opposed to "given this particular situation a microservice is a great fit".

Understanding the possible benefits and drawbacks of any solution is important. It's important in whether or not that solution is selected, but also to make sure that the implementation actually delivers those benefits.

It's very common in our industry to use "best practices" without understanding them, and therefore misapplying the solutions.


As you've intimated, most people have a very superficial mental model.

Facebook == respected tech brand == someone I should copy. The end.

Guy I know uses Cassandra == developed by hot tech brand Facebook == cool by mental association with Facebook.

Guy I know uses MSSQL or Oracle == developed by crusty old Evil Empire Company that cool people don't want to work for == bad.

Conclusion: We must use "big data" so we can be like the cool people -- err, because we really have some big data.

This doesn't sound like the outcome we'd expect from technical people making these decisions, but we can obviously see that it's what we're getting.


I am working in a huge non-IT company as a software developer. I guess that is what gives me a totally different point of view on your lessons:

1) Without a unified technology stack and a common framework we would not be able to build and maintain our applications. We decided on C# as it works best for us. Currently we are 5 developers. Not a single one of us has ever written a line of C# code before entering the company - learning the language from ground up enables us to pick up patterns that our colleagues who joined the company earlier found to be best practices.

2) If you are not introducing a whole new stack with every microservice that you develop, the devops costs are quite low.

3) I agree with you on that - I think redundancy always introduces more complexity. However there are systems that handle that job quite well (e.g. SQL Server). For application servers we use hot-spares and a load balancer that only routes traffic to them when the main servers are not reachable. This works for us, as all our applications are low traffic applications.

4) Continuous integration works brilliantly for our unified stack. In the last two years we went down from a 1-day setup + 20-minute deploy to a 10-minute setup + 20-second deploy.

5) We use agile methodology whenever possible and it works like a charm. However we had a lot of learnings. Most recent example: Always have at least one person from all your target groups in any meeting where you try to create user-stories.

Planning our software architecture has been a key element in my teams success and I do not see a point where we are going to cut it.


1. What problem are you optimizing for? "The job" encompasses code, but it also encompasses staffing. It's a lot easier to hire Java developers than Scala developers. In a leadership role, your responsibility isn't just the day-to-day code - it's the whole project.

2. Microservices vs monoliths is a see-saw. You build a monolith, find it's a brittle, incomprehensible hairball, and you break out microservices. You build microservices, find that operational headaches are killing you, and start consolidating them into monoliths. Which kneecap do you want the bullet in?

3. Fix what breaks.

4. Continuous integration is vital. But it needs to be evolved along with the system. There's this thing I say... "Have computers do what computers do well, have humans do what humans do well". Handling complex and repeatable behavior (i.e. builds and test suites) should absolutely be automated as much as possible. Think continuous integration sucks? Try handing it off to humans for a while! You'll learn whole new levels of pain.

5. All process is about (or should be about) specific, discrete communications issues.


> Which kneecap do you want the bullet in?

Funniest thing I've heard all day!


That's the fun of working with me. I say funny shit!

I occasionally refer to the final steps of a project as "bayoneting the wounded" too.


Yes, we are overcomplicating it, but that is primarily about trying to take what is essentially an artistic process and turn it into a regimented process (a known hard problem).

Rob Gingell at Sun stated it as a form of uncertainty principle. He said, "You can know what features are in a release or when the release will ship, but not both." It captured the challenge of aspirational feature development, where someone says "we have to have feature X" and so you send a bunch of smart engineers off to build it, but there is no process by which you can start with an empty main function and build it step by step into feature X.

That said, it got worse when we separated the user interface from the product (browser / webserver). And your rants about microservices and continuous integration are really about releases, delivery, and QA (the 'delivery time' of Gingell's law above).

These are complexities introduced by delivery capabilities that enable different constructions. The story on HN a few days ago about the JS graphics library is a good example of that. Instead of linking against a library on your computer to deliver your application with graphics, we have the capability of attaching to a web service with a browser and assembling on demand the set of APIs and functions needed for that combination of client browser / OS. It's a great capability, but to pull it off requires more moving parts.


Link to the post about the graphics library please?



- 1) Choose languages that developers are familiar with, not the best tool for the job

95% of the time, a language that your developers are familiar with is the correct tool for the job for that reason alone! There are cases where it is not, but those involve special-case languages and special-case systems. If you don't know what "special case" means, then your situation is almost certainly not one of them.

- 2) Avoid microservices where possible, the operational cost considering devops is just immense

"If your data fits on one machine then you don't need hadoop ..." Same thing applies here. Microservices have place and putting them in the wrong one will bite you bad.

- 3) Advanced reliability / redundancy even in critical systems ironically seems to causes more downtime than it prevents due to the introduction of complexity to dev & devops.

Then there's probably something wrong or limited with the deployment that needs to be reviewed (2 node when you need a 3 node cluster, bad networking environments, etc.) If you have a reasonable setup with solid tech under it, deployed per specs then this should not be true. If, on the other hand, something is out of whack (say running a 2 node cluster with Linux HA and only a single communication path between them) you're going to have problems and the only way to fix them is to get it done right.

- 4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

I'm not sure about this, but if your deployment system requires CI you have a problem. An individual, given hardware and assets/code, should be able to spin up a complete system on a fresh box cleanly and in a reasonable timeframe. (Fresh data restores can take longer of course, but the system should be runnable barring that.) If this requires something like a CI script or an ansible/chef/etc. script (i.e. it can't reasonably be done manually), then your deployment process is probably too complex and needs to be re-evaluated.

- 5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

Agile is commonly used to gloss over a complete lack of structured process, or a broken one. Even with Agile there should be some clean process and design work that goes into things, or you're hosed.


4: If my stack requires a message broker to run, how is setting one up manually supposed to be better than using the Ansible scripts?


For me, the trinity of development as a solo developer seems to be:

1. Writing code while using as many useful libraries and tools as possible to avoid recreating wheels

2. Continuous integration set up early on to handle the menial work and to let me concentrate on 1.

3. Constantly evaluating and researching what technology is available and newly appearing to give me an edge, because having an edge is never a bad thing in this field.

Agree with some of what OP said, especially about methodologies becoming hindrances and HA tools becoming points of failure.


I've seen the addition of unit testing be a big cause of complexity. Previously simple classes now have to be more abstracted in order to unit test. Add mocks, testing classes & test frameworks. Some unit tests are handy, but I don't think it justifies the additional complexity. For the apps I write I'd like to see more emphasis on automated integration testing and fewer unit tests - so we can write simple classes again.


The "threat" of having to add unit tests should force developers to write their classes and components in a way that is easy to reason about. In particular: * put as much functionality into pure functions * depend less on statically-linked globals * import all significant collaborators across seams that can be mocked. * keep state in a small atom, rather than strewn about

If you write code like that, you get many of the benefits of unit tests, whether or not they are actually written.
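A small Python sketch of what those points can look like in practice (all names hypothetical):

    def apply_discount(total, percent):
        # Pure function: no setup, no mocks, trivially testable.
        return round(total * (1 - percent / 100), 2)

    class Checkout:
        # Collaborators come in through the constructor (a seam a test can
        # fake); mutable state lives in one small place.
        def __init__(self, payment_gateway):
            self.payment_gateway = payment_gateway  # injected, not a global
            self.orders = []                        # the small state atom

        def place(self, total, percent):
            charged = apply_discount(total, percent)
            self.payment_gateway.charge(charged)    # the only side effect
            self.orders.append(charged)
            return charged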

Perhaps it's a good idea to write a test harness (e.g. larger integration tests) for old code so that you have a reasonable chance of catching it if it becomes broken, and to focus on writing new code in a testable fashion.


The threat of having to write unit tests will not have nearly the effect of actually writing them.

It's too easy to fool yourself into thinking you've written testable code.


In my experience, it's usually not the existence of unit tests themselves that's causing an issue, but that most of them are badly written. One telltale sign is when writing the unit test becomes overly painful (like too much code setting up mocks), it usually means that your class is not simple enough or has too many dependencies.

Proper unit testing also complements integration testing in that corner cases can be handled at the unit test level, therefore reducing the amount of integration test code which arguably is much more brittle, runs slower and more complicated to write.


Many unit tests are just written to test code, which is at best irrelevant. At worst your codebase is 2-3x bigger and more abstract than it needs to be, where useless tests keep code alive and useless code keeps tests alive. Test functionality, as close to the promises given to outside consumers as is feasible. Be it API or UI for other people/projects/services. This is the stuff that needs to work (and thus often needs to be stable). No-one cares whether a function deep down inside the code, used as part of the implementation of promised functionality, works. Delete it if you can.

Only case where I'd support "unit tests" as typically practiced (small units, isolated functions/classes) is around core competence (defined as narrowly as possible). But then I'd argue that this functionality should be put into a library anyways, which is used by products codebases. And then the tests are tests for the functionality promised to the products.


I'm not arguing against writing integration tests, they are as important if not more important, as you've said. Maybe I've only seen badly written ones, but my issue was against integration tests that check for example if this ever so important, but hidden, flag is being set properly after an API call when that can be checked at the service level. Someone eventually decides that flag is unneeded, and a whole host of tests fail and someone has to dig several levels deep to figure it out.

I guess I shouldn't have used the word 'brittle', but this is what I was thinking of.

And of course, I think unit testing anything and everything is absurd and not a good use of developer time.


I don't think you can avoid meta-debugging. That is, debugging your asserts or tests that you hoped would detect bugs instead of being the bug. Sometimes because more realistic tests unveil a bug, sometimes (as in your example) because underlying code functionality has changed. This is unavoidable but also often enlightening. To my mind, it's even okay if most of your bugs are meta - because these are usually very fast fixes, and it probably means you have a lot of checks. But by the same token, I would agree with you that all such tests have to be well-written, not mailed in, for just the reasons you give. It's too easy to assume that writing tests is somehow a fairly trivial task. Until you end up debugging the test.


I've seen this too. Unit testing was mandated from on high and it's something developers never learned to do properly. My telltale sign is more than one logical* assert. A test should usually be only a few lines of code, a dozen lines should be all that's needed for 99% of tests.

*Logical meaning only test for one thing, not a single assert statement. So testing for null and testing if a value is set is fine, but testing if 10 values are set correctly is not.
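For illustration, a couple of pytest-style tests with one logical assertion each, against a hypothetical parse_price function (included here only so the sketch is self-contained):

    import pytest

    def parse_price(text):
        # Hypothetical unit under test.
        amount, currency = text.split()
        return float(amount), currency

    def test_parse_price_returns_amount_and_currency():
        # Two values come back, but it is one logical check.
        assert parse_price("12.50 EUR") == (12.50, "EUR")

    def test_parse_price_rejects_garbage():
        with pytest.raises(ValueError):
            parse_price("not a price")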


If integration tests are brittle, then the integration is likely to be brittle. In my opinion this is something to fix, not workaround by testing lower down.


Done well, unit tests are invaluable. I'm a relative late-comer to unit testing, but can attest to its value.

The discipline of unit testing forces me to think about what risks I'm introducing with my new code.

Unit tests drive better design - smaller classes, looser coupling, better separation of concerns, functions that don't have side effects.

Best of all, unit tests reduce regressions. I can't count the times my test suite has prevented me from introducing a bug in my app.

I can refactor code with much more confidence than if I did not have 400 unit tests checking my work.

Most recently, these tests proved their worth when upgrading my app to Swift 3.0.


My first reaction to your (very thoughtful) review is that #4 seems out of place.

CI can be a way of enforcing the simplicity of the others - it can be a way of tunneling the build process into assuredly straightforward steps and preventing individual team members from arbitrarily (or even accidentally) adding their own complications into build requirements.

Other than that, I think you are definitely on to something here.


There's this book that I've been mentioning around here called Elements of Programming https://www.amazon.com/Elements-Programming-Alexander-Stepan... that makes exactly this claim, that we are writing too much code.

It proposes how to write C++-ish (it's an extremely minimal subset of C++ proper) code in a mathematical way that makes all your code terse. In this talk, Sean Parent, at that time working on Adobe Photoshop, estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (=100x!!) if they followed ideas from the book https://www.youtube.com/watch?v=4moyKUHApq4&t=39m30s Another point of his is that the explosion of written code we are seeing isn't sustainable and that so much of this code is algorithms or data structures with overlapping functionalities. As the codebases grow, and these functionalities diverge even further, pulling the reins in on the chaos becomes gradually impossible. Bjarne Stroustrup (aka the C++ OG) gave this book five stars on Amazon (in what is his one and only Amazon product review lol). https://smile.amazon.com/review/R1MG7U1LR7FK6/

This style might become dominant because it's only really possible in modern successors of C++ such as Swift or Rust that have both "direct" access to memory and type classes/traits/protocols, not so much in C++ itself (unless debugging C++ template errors is your thing).


Have you looked in the STEPS program by Alan Kay? Trying to recreate modern computing setup from the OS up in 20k lines of code...

http://www.vpri.org/pdf/tr2012001_steps.pdf

"If computing is important -- for daily life, learning, business, national defense, jobs, and more -- then qualitatively advancing computing is extremely important. Fro example, many software systems today are made from millions to hundreds of millions of lines of program code that is too large, complex and fragile to be improved, fixed, or integrated. (One hundred million lines of code at 50 lines per page is 5000 books of 400 pages each! This is beyond humane scale.)

What if this could be made literally 1000 times smaller -- or more? And made more powerful, clear, simple, and robust? This would bring one of the most important technologies of our time from a state that is almost out of human reach -- and dangerously close to being out of control -- back into human scale."

...and of course if you haven't seen it, you'll want to check out the Forth guys who want to do everything with 1000 times less code:

http://www.ultratechnology.com/forth.htm


I'm aware of this but Alan Kay's work and this seem to be orthogonal. Alan Kay talks about reducing real systems that have compilers, inputs etc whereas Elements talks about like the day to day ways of writing code. Alan Kay might come up with a new keyword whose semantics magically lets you cut out 30% but Elements shows you that if you make your types behave certain way, generics will let you cut out a lot of code.
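A toy illustration of that idea, in Python rather than the book's C++ subset: one generic repeated-squaring routine covers every associative operation, instead of one hand-written loop per type.

    from operator import add, mul

    def power(x, n, op):
        # Generic "power" by repeated squaring: one routine for any
        # associative operation, instead of one per type.
        assert n >= 1
        result = x
        n -= 1
        while n:
            if n & 1:
                result = op(result, x)
            x = op(x, x)
            n >>= 1
        return result

    # power(2, 10, mul) == 1024; power("ab", 3, add) == "ababab"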


I would counter that this appears to be a repeatedly emerging consensus, including Stepanov, Kay, Simonyi, and a number of other "greats", that an approach that involves some degree of metaprogramming, guided by domain problem, is the way forward. They differ on terms - cooperating systems, model-driven, intentional, generic - and focus - whether to create new syntax, or to guide the creation of specific algorithms or data structures - but they aren't debating the power of the approach.


I recently picked up this book. Seems quite good, but I'm also mathematically inclined (there's a lot of abstract algebra in there).


The only way to have any sense of a good or solid development platform or lifecycle is, to me, to look at your specific situation and tailor everything to your deliverables and needs. Doing anything because of industry trends or academic pontificating will lead you towards the solution someone else had success with in a different circumstance.

Microservices work fine in some situations, agile works fine in some situations, but until you find that you are in one of those situations trying to bend your deliverables to meet a sprint-cycle or some other nauseating jargon will cause, as you put it, over-complication or just poorly targeted effort. (It can also cause enough stress to dramatically affect your health, I know better than most)

Those moments of solidarity between product and effort are real gems that I've only recognized in hindsight.


You are right. Agile, languages, CI, devops are all tools not solutions to problems. Blindly applied, they will not get the results promised.

First focus on identifying the primary job to be done: build a valuable piece of software with as little effort as possible given your current team and existing technology.

Second, consider how valuable the existing software is and whether it really needs to be rewritten at all. Prefer a course that retains the most existing value. It is work you won't have to repeat.

Third, choose tools that maximize the value produced per hour of your team. CI, Devops, Microservices, Languages all promise productivity and reliability benefits but will incur complexity and time costs. Choosing the right mix is part of the art of software management.


You're right, though you should end most of your comments with "for us".

We've been burned by the microservice hype, and it took a while for us to realize that most of the touted benefits are for larger organizations. These "best practices" rarely include organizational context.


Fatal problems that hit start ups seem left-field, but they are baked into the design choices we make, often without discussion - because they seem part of "current accepted wisdom".

My major issue for startup software development is that often software is developed too discretely - with a utopian 'final version' in mind. Developers don't think holistically enough - they focus on details at the expense of design. "current accepted wisdom" is intangible, ever shifting, whereas the failure of a system is very real and can lead to loss of income etc...

Lots of start up companies don't design systems with humans in them, they write code as if it was a standalone thing - they often leave out the human bits because they are hard to evaluate, measure and control - variety of skill, ideas, approaches, mistakes, quality of life etc.

In my experience, this variety (life) often comes back to bite companies that can't handle eventual variance because of poor system design - not because of a choice of platform / provider / software etc.

I have been reading a lot about the viable system model (VSM) for organising projects. It seems to fit with my view on this, and I am currently trying to run a project using the model.

https://en.wikipedia.org/wiki/Viable_system_model


As everyone is saying: do what is reasonable and useful.

E.g. let's make an online shop.

It has browsing, purchasing and admin sections.

Browsing is simple: query the DB and show HTML. It's probably the most-used section and it needs to be reliable. Having it as a separate service means the admin section could break while users are still able to browse. Same for payments, which can be crazy complicated. I think of microservices as big product-feature boundaries that can work independently: a failure in one doesn't affect the others.
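
Concretely, something like this (a minimal Python/Flask sketch with made-up endpoints, just to show the boundary): browsing runs as its own process, so an admin crash can't take it down.

    # Hypothetical sketch: browse and admin as separate processes.
    # If admin_service.py falls over, /products keeps serving.

    # browse_service.py
    from flask import Flask
    app = Flask(__name__)

    @app.route("/products")
    def products():
        # in reality: query the DB and render HTML
        return "<ul><li>widget</li></ul>"

    if __name__ == "__main__":
        app.run(port=8001)  # admin_service.py would be the same shape on port 8002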

Continuous integration: once you have your tests and some auto-deploy scripts, you have an engine. You push code, the tests run automatically, a live staging environment is created for the latest code, and you play with it. Looks good? Merge to master and it's deployed to production. The idea is that deployment is effortless and you can do it multiple times a day, just like a git push. Tests don't have to be unit tests only: we run integration tests against dummy accounts periodically, from different regions of the world, on production, which means you're alerted as soon as something breaks. Fast deployment and good telemetry mean you can always revert to the last known good state easily.
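
A rough sketch of that kind of periodic production check (Python with the requests library; the URL, dummy account and alerting are made up): log in with a test account, exercise a couple of endpoints, and shout if anything fails.

    # Hypothetical smoke test, run on a schedule (cron etc.) from several regions.
    import requests

    BASE_URL = "https://shop.example.com"  # made-up endpoint
    DUMMY_USER = {"email": "smoke-test@example.com", "password": "not-a-real-secret"}

    def check_browse_and_login():
        s = requests.Session()
        r = s.post(f"{BASE_URL}/login", json=DUMMY_USER, timeout=10)
        r.raise_for_status()
        r = s.get(f"{BASE_URL}/products", timeout=10)
        r.raise_for_status()
        assert "widget" in r.text  # crude sanity check on the body

    if __name__ == "__main__":
        try:
            check_browse_and_login()
        except Exception as exc:
            # in reality: page someone / post to a chat channel
            print(f"ALERT: production smoke test failed: {exc}")
            raise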

Investing in tests is a pain, but it pays off in the long run, especially if you have other developers working on the same code base.

Just don't overdo it. I believe these ideas came from pain developers actually faced, and they used them to solve it. If you're not feeling the pain, and won't, then you don't need the remedy.


I think in many cases complexity just comes from lack of experience and poorly understood requirements.

I've had my fair share of cases where I ended up implementing something needlessly complicated, only to later realize my approach was terribly misguided. I'd like to think I'm slowly improving on this as time goes on.

The software world has a big discoverability problem. Even though I know there's probably prior art of what I'm working on, I don't always know where to look for it.


Honesty.


I think you wanted to say: "Are we simplifying things in software development?" All of the points you have made are actually simplifications of what might be the optimal solution.

Imagine the solution space as some multidimensional space with an optimal solution somewhere in it. The dimensions include the habits of your programmers, the problem you are trying to solve, and the phase of the moon. Microservices, a special form of redundancy, continuous integration, and agile development are all extreme solutions to specific problems: extreme in that they sit in a corner of that multidimensional solution space.

They are popular because they are radical in the way they conceptualize the shape of the problem and attempt to solve it. They therefore seem like optimal solutions at first glance, when really they only apply well to specific toy models.

Take, e.g., microservices. Yes, it's really nice if you can split up your big problem into small problems and define nice, clean interfaces. But it becomes a liability if you need too much communication between the services, up to the point where you merge your microservices back together in order to take advantage of shared memory.
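
A back-of-the-envelope illustration of why chattiness hurts (the numbers are rough assumptions, not measurements): the same per-item lookup done in-process versus as a service call per item.

    # Assumed ballpark figures only.
    IN_PROCESS_CALL = 100e-9  # ~100 ns for a function call / shared-memory read
    SERVICE_RTT = 1e-3        # ~1 ms for a local-network HTTP round trip

    items = 1000              # e.g. enrich 1000 order lines with product data

    print(f"in-process:    {items * IN_PROCESS_CALL * 1000:.3f} ms")  # ~0.1 ms
    print(f"service calls: {items * SERVICE_RTT * 1000:.0f} ms")      # ~1000 ms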

Don't believe any claims that there is a categorically better way to do everything. Most often, when you see an article about something like that, it is "proved" by showing that it solves a toy model very well. But actual problems are rarely like toy models, so the optimal solution to an actual problem is never a definite answer from one of the "simplified corner case scenarios"; it is usually just as complex as the problem you are trying to solve.


1) No way. Absolutely not. Not if what you're building is intended to last. Any language/ecosystem you choose has costs and benefits. You will continue to pay the costs (and reap the benefits) long after your developers could have become fluent in a language.

Certainly the language your developers already know is better than one they don't, all things being equal. But your rule is way too simplistic.

2) Of course. Avoid every complex thing where possible.

3) This means the cost/benefit ratio was not considered closely enough when planning these features. Again, avoid every complex thing where possible.

4) This is a strange one. Most people doing CI are not building microservices. CI is really more about whether you have different, independently moving pieces that need to be integrated. Could be microservices, could be libraries, could be hardware vs. software. If you only have a single active branch that everyone merges into regularly, you're doing CI implicitly; you just might not need it automated.

5) Take what you can from the wisdom of agile, and then use your own brain to think. And don't confuse agile with scrum.


1) Sounds like there's a lot more to the story.

    * Was the "best tool" what the devs thought it was?
    
    * Was it something they would hate using? Say, Java for Perl devs?
    
    * Was there a steep learning curve? An obscure language?
2) How big is the system? How complex is the business? How ops-friendly are the devs to start with?

3) You (or someone) must know how much system failure would cost.

4) CI can help with your devops, but its main point is to help with your software quality. See #2.

5) Totally agree, though you can also try being agile about "Agile" and taking just whatever parts work for you.

My $0.02 anyway.

(Aside: years ago I worked on a team doing ad-hoc semi-agile, which worked pretty well. I'm 99% sure I could have doubled our output and launched a management-consulting career if I could have credibly held the threat of Real Corporate Agile Scrum over their heads. But that was before the flood. One of them works for Atlassian now, ironically enough.)

