Hacker News

To be fair, testing is to some extent an unsolved problem. The joys of testing were being extolled long before test frameworks were actually usable. Now that they are, and you can glue Travis, GitHub and the test lib of your choice together pretty easily, you have solved about 30% of what needs to be tested. If, say, you are developing an Office add-in on a Mac, and you want to test it on Word 2013 on Windows 7, there is no easy way to automate this task, and certainly no "write once, run everywhere" solution.

In my GitHuby life, I write tests obsessively. In my enterprisey-softwarey life I don't, because there is no sensible way to do it.




Well, it happens that a lot of techniques are still not well understood.

I mean, we develop database-heavy code. Should we never test the code running against the database? That would be a poor choice, since we would lose a lot of coverage. What we did instead was transactional tests: in PostgreSQL terms, we use SAVEPOINTs to wrap each test inside a savepoint and then roll back to the clean state, never committing anything to the database. With DI this is fairly easy, since we can just replace the database pool with a single connection using the pg JDBC driver, which can insert these savepoints.

The test suite runs in ~4 minutes in the best case (Scala full compile + a big test suite with 65%+ coverage; we started late) and can be slower on cache misses (dependencies need to be resolved, which is awkwardly slow with Scala and sbt).


Databases are ridiculously testable because your inputs are just text. What's hard is when your inputs are platform environments and versions and hardware and racy events and...


Our tests take about 10 seconds to run (mainly the tests which need to hit a lot of endpoints; our domain logic is around 1.4s), with compilation being the slow part, which brings CI to around 1min30s to 1min50s on average.

We use Elixir, so we get nice features like the Ecto sandbox with async tests out of the box.


I'd say that is an architecture problem, up to a point. A test does not need any framework: just a defined output for a defined input, then a check whether the actual output matches expectations.
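A minimal sketch of that framework-free style in Python; the unit under test (`slugify`) and its behavior are hypothetical, chosen only to show the defined-input/defined-output pattern with a bare `assert`:

```python
def slugify(title):
    """Hypothetical unit under test: normalize a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

def test_slugify():
    # Defined input, defined expected output, plain assert. No framework.
    assert slugify("  Hello World ") == "hello-world"

test_slugify()
```

A plain script like this exits silently on success and raises `AssertionError` on failure, which is enough for CI to report pass/fail.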


In this sort of scenario, the bugs lie in the expectations themselves. Tests that don’t account for that are dead weight.


Can you expand on that? Because I don't see how a test can account for the faulty expectations of the person writing it.



