
I'm not a CTO, but I do lead the dev team at our agency (previously 16 devs, though we've slimmed down to 7). I want to preface this by saying that at an agency, your biggest enemy is always time: sales teams have to sell projects for the absolute minimum in order to get a contract, so you can't waste time on non-essentials for most projects.

That said, the biggest resistance I have found is "this feature is due in three days, I need two and a half to finish it, and then we have another half to review and find bugs." In the end, the biggest issue is that we have time either to test manually on the spot or to write automated tests, but not both. You can scrape by with just manual testing, but I don't think anyone would ever rely on automated tests 100%.

Our larger projects are test-backed, and our largest even reaches 90% coverage, but the only reason we wrote tests for those was that we knew we would be working on them for 2-3 years, so the time was worth it in that case. I wish this weren't the case, but I've found it's always the argument against automated tests in my corner of the market.




In my previous agency life, this was something that I experienced as well. A short lived product that was due in less time than any sane dev would estimate. We all knew that we "should" write tests, but there just wasn't time. And in 6 weeks the project would be relegated to living in source control because the campaign was over.

It made hiring devs fun: trying to explain to people why things were that way, against their insistence that software development doesn't work that way.


> A short lived product that was due in less time than any sane dev would estimate.

> And in 6 weeks the project would be relegated to living in source control because the campaign was over.

That is exactly it for 90% of agency projects. Underquoted to get the deal, a rapid development cycle that leaves the devs feeling dead, and then once that first release is out, you have maybe 1 or 2 small updates and the project is never touched again, or at least not for a year or two.

There is no world where it makes sense to write tests for these projects.


What does 'agency' refer to in this subthread?

Agency for what?


Advertising agencies and marketing agencies, where technology takes a back seat to marketing/promotions, etc. Places where projects number in the thousands across websites, apps, games, many different client systems, new technologies, and so on.

Every developer/engineer should work in an agency for a while: the sheer amount of work is enormous, the lifespan of said work is short, and projects are primarily promotions and one-and-dones in many cases.

What we did at the agency I worked at was try to harvest systems from common work: landing page systems with base code that was testable and shared across projects; a content management system that supported agency-specific needs; promotions/instant-win systems with common code that could outlive the 3-week promotion, which grew into a prize/promotions system that ran all future promotions and improved AFTER most promotions due to time constraints; game systems for promotional games/advergaming, once new game types became common or reusable; etc.

Many times you have to take an after-the-ship approach and harvest systems that make sense from the sheer amount of work you are doing across hundreds of projects. That is where good engineering really comes along: on subsequent systems, after the initial promotions, projects, or games/apps proved a need or served as a prototype for how to do future projects quicker and with more robust systems.

Testing and code written specifically for one campaign may or may not be reusable, but later you can harvest the good ideas and try to formulate a time-saving system for the next one, including better testing and backbone/baseline libs/tools/tests, etc.

I have worked in agencies for 5+ years and game studios for 5+ years, and both are extremely fast paced; the harvesting approach is usually the one that is workable in very pressurized project environments like that. Initial projects/games/apps are harvested for good ideas, and the first one might be more prototype-like, where testing/continuous integration might not fit in the schedule the first time around; it might not even be clear what to standardize and test until multiple projects of that type have shipped. Starting out with verbose development on new systems/prototypes/promotions/campaigns/games might not fit the budget or the time allowed on the first versions, as they might be throwaway after just a few weeks or months. There is a delicate balance in agencies/game studios like that, where the product and shipping on time matter more the first time around because the project timeline and lifespan may be short. Subsequent projects that fit that form are where improvements can be made.


I remember my agency days.

Nowadays I work on a single long-running legacy project where tests make sense. Back then, I read a lot about how testing was the "right thing to do." But I also realized that most of the time (a) the client wasn't going to be willing to pay for the tests and (b) odds were that once we launched the product, that would be the last time I ever looked at it.

Maintenance would occur in five years when sales talked the client into scrapping the entire thing and rewriting it -- the client wouldn't be willing to pay for maintenance or automated tests, but somehow sales could always sell them on a total rewrite.


It's a very interesting setup: all prod code is "throwaway".

I wonder if each such project is built completely from scratch. If not, the reusable parts can be improved over time, and covered by tests.


> I wonder if each such project is built completely from scratch.

For us, it's a mixed bag. We have a CMS we use for most projects that we did not develop ourselves, but we have developed our own packages/blocks for it, included in every project, that bootstrap and whitelabel the hell out of the CMS to provide the functionality we need. From a data standpoint, one of our packages replaces several dozen classes and hundreds, if not thousands, of lines of custom code in every project.

When it comes to more custom projects, specifically ones that never see public use (like a custom CRM, admin dashboard, CRUD-based system, API backend, etc.), we build on the Laravel framework, which bootstraps away all of the authentication, permissions, routing, middleware, etc. and gives us a very good blank slate to work with. For these, everything is mostly from scratch, minus what we can cover with third-party packages (such as the awesome Bouncer ACL). We have a front-end library that I wrote to abstract common tasks into single-line instantiations, but in our experience these projects are being built on a blank slate for a reason. These are also the projects that may actually see tests written for them, although not all will.


The typical stuff an agency can reuse is all covered by frameworks and libraries anyway.

You take an existing CMS or shop software and customize it, or take a web framework and build a very customer-specific service on top of it. Most everything you can share between CMS projects is already part of that CMS.


I find this view (and the replies) interesting. One thing that I've experienced after writing a lot of tests is that once you know the patterns, implementing TDD becomes effortless. E.g. (Python):

Need to test interfacing with an SDK correctly?

Sure, patch the SDK methods and ensure they are called with the proper parameters

Also, for extra coverage, add a long running test that makes actual calls using the SDK. Run these only when lines that directly call the SDK change (and ideally there should only be a few of those).
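A minimal sketch of that first pattern with unittest.mock; the module and SDK names (app.billing, payments_sdk) are made up for illustration:

    # Assumes a hypothetical app/billing.py that wraps "payments_sdk":
    #
    #   import payments_sdk
    #   def charge(order):
    #       return payments_sdk.Client().charge(amount=order.total, currency="USD")
    #
    from unittest import mock
    import unittest

    class ChargeTest(unittest.TestCase):
        @mock.patch("app.billing.payments_sdk")
        def test_charge_calls_sdk_with_order_total(self, sdk):
            from app import billing
            order = mock.Mock(total=1999)

            billing.charge(order)

            # The real SDK is never hit; we only assert the call parameters.
            sdk.Client.return_value.charge.assert_called_once_with(
                amount=1999, currency="USD")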

Need to mock a system class?

Sure - Here's the saved snippet on how to do that
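One plausible version of such a snippet, freezing "today" by patching datetime where a hypothetical reports module looks it up:

    # Assumes a reports.py along the lines of:
    #
    #   import datetime
    #   def make_header():
    #       return "Report for %s" % datetime.date.today().isoformat()
    #
    import datetime
    import unittest
    from unittest import mock

    class HeaderTest(unittest.TestCase):
        @mock.patch("reports.datetime")
        def test_header_uses_todays_date(self, fake_dt):
            fake_dt.date.today.return_value = datetime.date(2020, 1, 2)

            import reports
            self.assertEqual(reports.make_header(), "Report for 2020-01-02")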

---

This of course applies only if you repeatedly work on projects that use the same stack. If you don't, then I understand that it can be pretty hard. But basically, over time, writing tests must become easier; otherwise that's a sign that something in the process is not working correctly: knowledge isn't being transferred, or things aren't being done uniformly.

Ideally once you get past a certain point, testing should be just a selection of patterns from which you can choose your desired solution to implement against a given scenario.

I accept that I could be missing something here so please take what I say within the context that my thinking applies to work that can be described as technologically similar.


I always find code coverage such a useless metric: if you have two independent ifs next to each other, and one test goes into one if and another test into the other, you have 100% coverage. Congratulations. But you've never tested what happens when you go into both.
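A toy illustration (hypothetical function, pytest-style tests): both tests pass and every line is executed, yet the combined case is never exercised.

    def apply_discounts(price, is_member, has_coupon):
        if is_member:
            price -= 10
        if has_coupon:
            price -= 10
        return price  # can go negative when both apply; nothing below checks that

    def test_member_discount():
        assert apply_discounts(50, True, False) == 40

    def test_coupon_discount():
        assert apply_discounts(50, False, True) == 40

    # The two tests above hit every line: 100% coverage.
    # apply_discounts(15, True, True) == -5 is never tested.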


I agree that it is a useless statistic, especially when comparing unit vs integration vs functional vs smoke testing. There are different types of tests and just because you are reaching 90% of your code does not mean you are thoroughly testing it.

The only reason I brought it up was to show that we don't skip test writing entirely, and that on the projects where we do write tests, it isn't like we just wrote a test to check that "Project Name" is returned on the homepage and called it a day.


A few years back a person I worked with was tasked with implementing code coverage. Part of that task was they also had to get our code base up to 80% coverage or so.

They wrote stupid test after stupid test after stupid test. Hundreds of them. Oh Em Gee. It was like that story about Mr. T, where the army sergeant punished him by telling him to go chop down trees, only to come back and find Mr. T had cut down the whole forest.


That's just the basic coverage metrics, there's more than that: https://en.wikipedia.org/wiki/Code_coverage


There are different kinds of coverage metrics. If line coverage is not enough for your liking you can always go for full path coverage. You'd have to write an exponential number of tests though.
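As a rough sketch of that blow-up (hypothetical two-branch function, pytest parametrization): two independent ifs already mean 2^2 = 4 paths, and every additional if doubles it.

    import itertools
    import pytest

    def ship_cost(is_express, is_international):
        cost = 5
        if is_express:
            cost += 10
        if is_international:
            cost += 20
        return cost

    # Full path coverage: one case per combination of branch outcomes, 2**n in total.
    @pytest.mark.parametrize(
        "is_express, is_international",
        list(itertools.product([False, True], repeat=2)))
    def test_every_path(is_express, is_international):
        expected = 5 + (10 if is_express else 0) + (20 if is_international else 0)
        assert ship_cost(is_express, is_international) == expected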


I find that tests pay off pretty quickly in terms of productivity -- somewhere around a week. There are a couple of caveats, though. First, you have to have a team that's already good at TDD (and not just test-after-the-fact). What I mean by TDD is hard to describe succinctly, and especially since I said it's not test-after-the-fact, it's easy to think that I mean test-first. I don't. To me TDD is a way of structuring the code so that it is easy to insert "probes" into places where you want to see what the values will be. You can think of it a bit like having a whole set of predefined debugger watch points.

With good TDD (or at least my definition of it :-) ), the programmer is constantly thinking about branch complexity and defining "units" that have very low branch complexity. In that way you minimise the number of tests that you have to write (every branch multiplies the number of tests you need by 2). The common idea that a "unit" is a class and "unit tests" are tests that test a class in isolation is pretty poor in practice, IMHO. Rather it's the other way around (hence test driven design, not design driven tests). Classes fall out of the units that you discover. I wish I could explain it better, but after a few years of thinking about it I'm still falling short. Maybe in a few more years :-)

In any case, my experience is that good TDD practitioners can write code faster than non-TDD practitioners. That's because they can use their tests to reason about the system. It's very similar to the way that Haskell programmers can use the type system to reason about their code. There is an upfront cost, but the ability to reduce overall complexity by logically deducing how it goes together more than pays off the up front cost.

But that leads us to our second caveat. If you already have code in place that wasn't TDDed, the return can be much lower. Good test systems will run in seconds because you are relying on the tests to remove a large part of the reasoning that you would otherwise have to do. You need to have it check your assumptions regularly -- normally I like to run the tests every 1-2 minutes. Clearly if it takes even 1 minute to run the tests, then I'm in big trouble. IMHO good TDD practitioners are absolutely anal about their build systems and how efficient they are. If you don't have that all set up, it's going to be a problem. On a new project, it's not a big deal for an experienced team. On legacy projects -- it will almost certainly be a big deal. Whether or not you can jury rig something to get you most of the way there will depend a lot on the situation.

So, if I were doing agency work on a legacy system... Probably I wouldn't be doing TDD either. I might still write some tests in areas where I think there is a payoff, but I would be pretty careful about picking and choosing my targets. On a greenfield project of non-toy size, though, I would definitely be doing TDD (if my teammates were also on board).


TDD can be faster for some, but you are forced into a funnel that involves another step.

If you know exactly what you are writing, it is quicker to just add your changes, jump to the next file, and add your changes there. If you are constantly checking the browser to see whether what you wrote works, TDD can help.


> I find that tests pay off pretty quickly in terms of productivity -- somewhere around a week.

I think you overestimate the agency project life cycle. Most of our projects are built and ready for client review in 2-3 weeks total. Once the client makes a few days worth of changes, the project is shipped and we likely do not look at it again for another year or three.

That said, there are always long-running projects and those are the ones you try to include tests in.


Interesting. I worked very briefly in an agency a long time ago. Our projects were on the 2-3 month time frame. I suppose it depends on what you are doing.


We have plenty of those as well, but the overwhelming majority of them are about 2-3 weeks of work once we get started.



