Modern Extreme Programming (benjiweber.co.uk)
166 points by henrik_w on Apr 21, 2015 | 83 comments



I never did pair programming, but I really liked the implied pair programming that resulted from having a check-in partner who had to review and greenlight your commits. After sending the request you'd walk over and talk about the code for a bit. Usually pretty informally (over a coffee or water if there were a lot of commits :P).

It usually went "ok, tell me what your code does" and then some talk about details, design tradeoffs, and identified issues. Sometimes there was a rubber-ducking effect of sorts where you figured out an issue with the overall code design, but the major benefit was that it forced you to write code that is not too long and can be reasoned about "in a vacuum" (kind of). The second major benefit was that it forced you to physically walk away from the computer, which has all kinds of benefits.

All informal, nothing set in stone. We just used IM DND status to indicate that you shouldn't walk over just then because the other person was in a do-not-disturb mode of coding.

You can probably bundle it up, write fancy DND/checkin-greenlight software and brand it as "Coffee cup driven software development". Agile, and at least 4.0 ;P


The social aspect makes coding more fun. And getting away from your keyboard is great. What you're describing sounds to me like informal code review: a high-level walkthrough of what your code does and what choices you made and why. It's a taste of pair programming. Talking through your code is great. "Implied pair programming" sounds about right. Try the full thing sometime!


I've done that for non-trivial commits and found it much better than someone reviewing a commit alone. When you are reviewing a commit alone, it's very easy to just skim through and not be aware of details.


Let's say D represents the duration between when you typed that code and when you got informal feedback at the watercooler.

Pair programming, when it's working well, is just lim(D -> 0) :)


I once worked at a place that had pair programming as a rule as well. Fix button size CSS for IE6? Two programmers, one computer. Change First Name, Last Name to First Name Last Name? Two programmers, one computer. Being stuck on something totally trivial yet essential where you depend on someone else? Two people sitting on their hands instead of one.

So this is what it ended up being: one guy doing the work, or sending HipChat messages to whoever was needed to get unstuck, while the other one browsed Facebook. Then in the end the guy who had been picking his nose the whole time glances at the other guy's work and says "yeah.... that seems about right.... merge it!".

Sure, I can imagine it's great to work out the architecture of your new framework together with a few guys and a whiteboard and then start working on the code together. But why would somebody need to watch you do some kind of trivial CRUD?

Unless literally every problem and feature they're doing is super exciting, sooner or later somebody is just colouring between the lines to get the application finished, and a quick code review when it's time to merge the pull request is all you need.


I am an advocate of paired programming. A lot of organizations do it really poorly. Your examples show exactly where it does not work.

First off, sometimes you have a bunch of minor tasks that need to be taken care of, like updating some text in the UI or fixing a button size. Is there a reason you have to sit down and plan to do only that? If you have a well-prioritized backlog and the freedom to get work done, these scenarios should go more like:

"Let's work on adding this new feature here." "Ok that's a good one, we'll need to do a, b, and c to get it all wired up, working and tested. Hey since we're in that part of the code there are some small tasks about changing the button size and the text here, let's make sure to get those done." ... "Great! we completed the new feature, fixed the button size, and the text. Merge it!"

I never sat down with my pair for the day and said "Ok go fix that button. Great, merge that. What's next?" It was always a discussion of the most valuable thing to get done and any collateral issues that could also be fixed. In this case, we often got more done.

Second, if you had a pair partner sitting around doing nothing, there is a problem. They aren't engaged. They either need to become engaged or not be in an environment where there is paired programming. That is ok! But they need to admit it and move on.

Pairing works well if you have the people and processes in place to allow it to work well. If there are people sitting on their hands, then there is a problem. Though I would argue that if you took away paired programming, the same engineers would often still be sitting on their hands, playing on Facebook.


We had a similar experience at my last company, and we tried a few ways to get around it, eventually landing on a workable solution.

We started to allow devs to do story cards on their own, for easy stuff (like you said - colors, alignment fixes, CRUD, etc.).

This worked for a while, but we found the number of dev/bug/fix/bug/etc cycles on those stories was waaaay higher than cards done "properly" with pair programming (even big architecture cards).

So then we made the rule that a dev could do a card on their own, but just before committing they would sit down with another dev and show and explain it to them, thus helping to catch any little "gotchas".

This works really well. That's the whole point of Agile: there are no rules; you're supposed to come up with your own based on what works best for you and your org.


I think I've realized over the years that this kind of development process lends itself to certain kinds of software. In particular XP is effective for web server products.

Not all software can or should be deployed rapidly like this. It's been my experience that end users don't really want constant churn in their software unless it's transparent to them. It's also hard to justify having 11 SEs sitting around a monitor programming. Their combined rate is probably something like $2 million a year with overhead. With that kind of burn rate you'd better be getting some serious productivity from the one guy on the keyboard.


What? That's like saying that programming is easy because it's just typing.

It's not about how fast the guy at the keyboard types! It's about making the right choices when you're designing a system.

When I do programming by myself I go through many designs and refactorings.

If someone pointed out "hey, don't allocate on the hot path" and "make a closure here instead of copy-pasting" and so forth as I went, without me having to worry about code quality and performance myself, I'd get things done faster. I usually implement a very naive first version. If, after typing it up, I already have 3-4 people fixing it, then my pull request can have good performance, style, etc. from the get-go and get merged pretty fast, instead of the constant "can you fix this, then I'll merge it?"


I think we can all agree that the keyboard isn't the limiting factor here.

Individual programmers at their own keyboards can all focus on separate problems. Two (or more) programmers with a single keyboard can only focus on a single problem at a time, but they get the experience and perspective of multiple developers. It's really just a matter of breadth of focus vs depth of focus.

At a certain point adding more developers won't help and it will end up wasting resources. This may be as soon as you add a second developer, or maybe after you add 6 developers.


I agree with this. Of late I've actually become a bit annoyed with our Slack install, because it keeps changing. What can and can't I do with our chat tool? Who knows! It'll be different tomorrow.

I used to lambast my colleagues for having chosen to use IRC over more modern tools like Jabber or Hangouts. I realize now how foolish I was.

Some software just doesn't need to change continuously.


"Some software just don't need to change continuously."


Ha. Initially this quote got upvoted. Then it was downvoted. Is this a controversial subject?


> In particular XP is effective for web server products.

I was on a team that used it effectively to develop a Java back-end framework, which was neither a web service nor hosted by a web server. While I agree that not everything fits XP, you can't classify it like that.

Mob programming is not classical XP, but it is in the same spirit as pair programming. While it can burn a lot of money per feature developed and LOC written, you also need to consider the mentoring/learning aspect of it. The same goes for pair programming: it isn't only about code quality. It is about mentoring/sharing/learning.

Don't get me wrong: if you've never pair programmed for a long stretch before, you don't really understand the meaning of the word "annoyance", and if you don't have the right setup, one person may zone out and not really be helping.

Doing it right is an art. If you learn a way to do it correctly, it will really help: your developers will be more productive because they learn from each other, there will be less churn from whatever formal code reviews are required for sign-off (if any), and there will be fewer bugs. Fewer bugs mean less work for helpdesk/customer service, fewer adversely affected users within the company, and fewer periods when you aren't taking orders, which cause lost revenue and customer frustration that can lead to customers telling others about their bad experience, losing you additional opportunities, etc.

But odds are that unless you try things out, tune them, and try again, you're going to get something wrong no matter what methodology you choose. It doesn't matter what you are developing. Hone it.


I can't tell if this is a spoof or not.


A bit religious, but honest. Just needs to dial down the TDD-preaching - it's not a silver bullet.


The "mob programming" was the thing that made it hard to take seriously. All the downsides of meetings (bikeshedding, grandstanding, high (time x money) consumption) without time and space to think.

I currently work with what https://news.ycombinator.com/item?id=9413117 calls "implied pair programming": code reviews before commit. But rapid build and rapid deploy are incredibly far from what's achievable here (C++/WinCE land). A quick build takes about 15 minutes. A full all-target-platforms build takes about 3 hours. It's then merged with releases from other teams and handed to QA. Customers are using releases up to three years old.



This is the first I've heard of "extreme programming". Coupled with "dialing it up to 11" it all sounds like pseudo-masochistic nonsense to me, but apparently it's very real.


I would say that we are moving similarly, and I like the way Benji Weber put it:

- From continuous integration to continuous deployment

- To project ownership (although there is some resistance here on the part of some software devs, who don't feel it's their responsibility)

- Monitoring first: we have a strong desire to increase this but we're a bit at a loss as to what a good light-weight solution is here

The only exception is mob programming. Instead, we still have developers code on their own, and we switch to pair programming when troubleshooting a tricky issue.


I think it's totally awesome that some groups are doing that. I just wish we had a set of these practices which were compatible with the realities of shipping bigcorp and gov contracts.

- Continuous deployment: sure, you're continuously deploying - except it takes a year and 23 signatures to get a release greenlighted into production.

- Project ownership: you can own any part which wasn't nailed down in the contract before the first line of code was written, meaning almost no parts at all.

- Monitoring first: you can come monitor as much as you want, if you fly over to another country to sit behind a screen logged onto the private network on which the app runs.

We do continuous integration in our dev environment, and of course not all production environments are that walled off, but I've seen examples of such extremes many times, and it seems endemic to large-scale organizations.


You could pay for a NewRelic account, or invest the time in getting statsd+graphite or Riemann infrastructure set up. There are other systems too; it's a growth area.

"Monitoring-First Programming" to me means that monitoring and capturing errors in production is very valuable indeed. It's not a reason to abandon regular testing activities, but it is basically the same kind of thing as testing.

Links: http://newrelic.com/

http://www.kinvey.com/blog/89/how-to-set-up-metric-collectio...

http://riemann.io/


I believe it's a mistake to put Riemann in the same sentence as statsd/graphite. It really does not try to solve the same problems; it's more focused on alerting than on trend analysis. In fact, it recommends pushing data out of Riemann and into Graphite to perform long-term trend analysis.

That said, if you have the time to invest in grokking Riemann (learn Clojure first), it would be a fantastic tool to base the rest of your monitoring infrastructure upon.


You are right, sorry if it wasn't clear - I had in mind Riemann as a replacement for the statsd part of that duo. That's how I've seen it used but my knowledge and experience are limited.


> Monitoring first: we have a strong desire to increase this but we're a bit at a loss as to what a good light-weight solution is here

Prometheus is designed with this in mind. Instrumentation is easy to add and takes care of things like concurrency and state maintenance for you, so you can sprinkle it anywhere safely. http://www.slideshare.net/brianbrazil/python-ireland-monitor... goes into this a bit more.
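To give a flavour, here's a minimal sketch in Python using the prometheus_client package (the metric names and the process_order function are made up for illustration):

    # Minimal Prometheus instrumentation sketch; metric names are hypothetical.
    from prometheus_client import Counter, Histogram, start_http_server
    import random, time

    ORDERS = Counter('orders_total', 'Orders processed')
    LATENCY = Histogram('order_latency_seconds', 'Order processing time')

    @LATENCY.time()              # records each call's duration in the histogram
    def process_order():
        ORDERS.inc()             # thread-safe; the client handles concurrency
        time.sleep(random.random() / 10)

    if __name__ == '__main__':
        start_http_server(8000)  # exposes /metrics for Prometheus to scrape
        while True:
            process_order()

Prometheus then scrapes http://localhost:8000/metrics on its own schedule, so the instrumented code never blocks on the monitoring system.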


> Monitoring first: we have a strong desire to increase this but we're a bit at a loss as to what a good light-weight solution is here

In my experience, a toolchain consisting of statsd => collectd => graphite works well. Assuming you have never used it before, it will consume about a man-week to get installed and tuned to your needs, but with benefits lasting for years.

Is it better than the paid tools out there? Probably not, but it's all open source, relatively stable, and has low cognitive overhead.
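On the application side, instrumentation is just a few lines. A rough sketch with the Python statsd client, assuming a daemon listening on the default localhost:8125 (the metric names are hypothetical):

    # Fire-and-forget UDP metrics via the statsd client; names are hypothetical.
    import time
    import statsd  # pip install statsd

    def run_query():
        time.sleep(0.05)  # stand-in for real work

    client = statsd.StatsClient('localhost', 8125)
    client.incr('signup.completed')        # bump a counter
    with client.timer('db.query_time'):    # record elapsed time as a timer
        run_query()

Since it's UDP, the app doesn't slow down or fail if the metrics daemon is down.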


Thanks for the mob programming concept. I always disliked pair programming, but I actually think that mob programming is a great idea! It sounds like a meeting where you actually get confronted with the reality of the code, so you know where you have to make decisions both in terms of code design and product design. I'll definitely give it a shot.


Where I work, we grabbed an old(er) 50" TV and stuck it up on the wall in a common space. We gave it an old mini-ITX box and a wireless keyboard/trackpad. Just enough for a browser, XFCE and Sublime.

Now we gather around it and yell at the code like it's a football game and toss the keyboard back and forth. It's not for every project or team, but I recommend it way more than I thought I would at first.

We do need to find a way to nerf the keyboard.


You could give everyone a bluetooth keyboard, so that no-one "has the keyboard" and it's all collaborative all the time.


We rather enjoy the "Conch in Lord of the Flies" vibe the current setup imparts.


Rock!

I get the feeling there is some weird backlash against all things Agile lately. I've noticed it particularly in reddit comments.

I suspect it's young people who just weren't around before Agile.


In Dr. Dobb's last year, Dave Thomas, one of the Pragmatic Programmers and one of the original signers of the Agile Manifesto, railed against what Agile became[1]: "Agile got immediately productized in many different ways. The whole point, to my mind, of the Agile Manifesto is that it's a set of personal practices that may scale to team level. You do not need a consultant to show you how to do that. It may help to have someone facilitate, but you do not need a consultant. And yet immediately what happened was that everyone and their dog hung out an Agile shingle and the whole thing turned into a branding exercise."

[1] http://www.drdobbs.com/240166688


Given enough time, any two-pizza rule will prompt someone to the clever idea of installing a giant pizza oven to accommodate 36-inch pizzas.


You realize that Extreme Programming pre-dates Agile?

XP was one of the methods that prompted their creators to get together and formulate what they all had in common compared to traditional methods. That is what resulted in the Agile Manifesto.

The backlash is mostly against the consultants and pointy-haired bosses who cherry-picked from the various Agile practices the things they could grok, and implemented that as a cargo-cult solution.

In the wider IT world, that cargo-cult method has become known as "Agile", just like "hacker" now equals "cybercriminal" for the rest of the world.


>> I suspect it's young people who just weren't around before Agile.

I was around before Agile became the thing that everyone was doing. Many of my contemporaries and I are very cynical about it, because what it most often seems to boil down to is "what we were doing before but with more meetings and heavier process".

I've worked in places where Agile Evangelists would regularly take teams of people away from their desks for process-related meetings (retrospectives, planning, errr.... who the hell knows what else) for several hours at a time, more than once a week.

I've heard Agile consultants say things like "And the great thing about Agile is that if you start falling behind, you can take what you have to the customer, involve them in the process, and deliver even more at the next sprint!" as if it would miraculously provide extra developer time/effort for free.

And I've seen true-believers who seemed to think that if the project wasn't making progress then people probably just weren't doing 'Agile' hard enough, and probably needed even more meetings and process.

I like the manifesto. I don't like what often actually gets applied out in the field, and the consultants/evangelists are basically cultists AFAICT.


Weird. The kind of XP I've been involved in is near obsessive about maximizing programming time. Which means meetings are kept short, with minimum attendance.


That sounds like doing it well, in accordance with principles, rather than doing it badly in accordance with consultants...

--edit-- I wouldn't claim to know about XP, I've never worked somewhere that embraced XP specifically, just places that decided to go 'Agile' and then proceeded to bog themselves down in so much procedure that projects started to fail.

I've only worked in one place where the morning stand-ups actually worked as they were intended: 5 developers go round the room, give a quick status and state any problems they have, identify who they need to talk to to get them solved, then get out again in under 10 minutes. Everywhere else I've worked, they devolved into either developer chat/argument time or management info-broadcast time. For at least an hour. Every day. groan


I've noticed that Agile is frequently creatively misinterpreted as:

* "move fast and break things" or

* "testing is the end user's job" or

* "what we were doing before, except with morning meetings where nobody is allowed to sit"

I expect some of the backlash is, ironically, against that.

It's similar to "devops" in this respect - a lot of people pretend to adopt the buzzword and then do almost the exact opposite of what it prescribes.


One I'd like to add:

* "Bring in process-fascist SCRUM(tm) masters to put all creativity to a halt"

I've seen it, and it is the complete opposite of the Agile Manifesto's "Individuals and interactions over processes and tools".


The thing is, this style can work (well, the first two anyway). Why spend two or three times as long coding tests for an in-house application used by a few users, where the requirements aren't actually known or well communicated? Developer time is often more expensive than end-user time.


I used to think this too, but then I realized that this is actually backward thinking. It's quicker in the short and the long term to write tests.

Every time you code (whether requirements are clear or not), you go through the following steps (often executed in the space of 2-3 minutes):

1) Think

2) Write code

3) Build/run the code

4) Test and see the outcome

5) Go to step 1

Even if you are not writing tests, you are doing steps 3 and 4. Moreover, if you are not writing tests, you are doing steps 3 and 4 manually.

Manual is only quicker when there are just a few of these iterations. If you are going through this cycle every 2-3 minutes, you will be doing it hundreds of times a day and thousands of times in total over the course of a project.

Automating that process will save you a lot of time in the long run, and will still save you time in the short run, as well as leading to higher quality code.
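To make the comparison concrete, here is a minimal sketch of what the automated version of steps 3 and 4 can look like with pytest (apply_discount is a hypothetical function under test):

    # test_pricing.py -- run with `pytest`; each 2-3 minute manual
    # click-through becomes a single command.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    def test_ten_percent_off():
        assert apply_discount(100.0, 10) == 90.0

    def test_zero_discount_is_identity():
        assert apply_discount(49.99, 0) == 49.99

Written once, those checks then run in milliseconds on every one of the hundreds of daily iterations.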


I think a lot of disagreement about TDD comes from the fact that some things are a lot easier/quicker to test than others.

If I'm writing a library to add two numbers, it's easy to create fast tests with no false alarms: assertEquals(new BigDecimal(10), new BigDecimal(5).add(new BigDecimal(5))) - simple, fast, no false alarms, easy to maintain.

Now imagine you're making a website and you want to test with multiple browsers, and it uses caching and connection pools and sessions and whatnot, and the website has several dozen pages, and there are a bunch of configuration options and feature flags and you need to test with all of them.

Now all of a sudden you're administrating a pool of VMs with different OSes, writing web-scrapers, launching web servers every test cycle, waiting for tests of pages you haven't changed, running n tests * m pages * o server configurations * p clients, and all of a sudden you're spending more time keeping your tests working, or changing your software so tests run faster, and you're hardly spending any time working on features users want.

Naturally, to someone writing the first type of software, unit testing seems perfectly logical. But I can also see how someone working on the second type of software might get frustrated!


I wouldn't call your second case unit testing, but rather integration testing.

Part of the point of unit testing is that you keep the 'unit' under test to be as small as possible so you don't have these sorts of problems and can run tests as frequently as is useful. It doesn't remove the need for other types of testing suites, but they can be isolated and run at separate times.
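One way to keep that separation, sketched here in Python with pytest (the marker name and the functions are hypothetical; custom markers are normally registered in pytest's config):

    import pytest

    def render_title(page_heading):       # small, pure "unit"
        return page_heading.strip().title()

    def test_render_title():              # milliseconds; run on every save
        assert render_title("  hello world ") == "Hello World"

    @pytest.mark.integration              # slow browser/VM test, run separately
    def test_homepage_in_real_browser():
        pass  # selenium work would go here

Running `pytest -m "not integration"` keeps the fast suite fast, while the browser matrix runs on its own schedule.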


I think the real point is that "testing in production," more commonly known as "monitoring," has to happen anyway. So early on, if you have a choice between adding better, more complete monitoring and adding tests (even integration ones), the monitoring will prove more helpful once you deploy than the tests would.

Don't get me wrong, it sucks having to manually perform integration tests. But when a product resembles a website and is semi-automatically deployed (e.g. has staging servers with automatic deployment, rapid-rollback or canary-server deploys, a few minutes of caching, and few to no expectations that the app operates perfectly as long as it can be fixed within half a day), well, that's a very different project. In that environment, the peace of mind monitoring provides (no client errors in production) largely outweighs having no monitoring, just tests (did I test everything? What about (obscure government agency) using (outdated version of IE) who is reporting that bug?).

Of course, as the code becomes more complex than a website, or where you begin to have significant JavaScript, having at minimum tests for success paths will help prevent regressions from application-wide changes, e.g. adding PJAX to a site and missing an onload. It's just harder to justify tests until you really need them on some assignments.


>Now all of a sudden you're administrating a pool of VMs with different OSes, writing web-scrapers, launching web servers every test cycle, waiting for tests of pages you haven't changed, running n tests * m pages * o server configurations * p clients, and all of a sudden you're spending more time keeping your tests working, or changing your software so tests run faster, and you're hardly spending any time working on features users want.

You're right - it eats a lot of time and it shouldn't. It's not as easy as unit testing and it should be.

I actually wrote an open source testing framework to handle this type of thing precisely because I was getting so frustrated with having to rebuild the boilerplate code to do this everywhere I went.

This is a simple example test I made with it - it uses a browser (w. selenium), libfaketime, and a mock SMTP server to test a django/celery/postgres/redis app:

https://github.com/crdoconnor/django-remindme/blob/tests/tes...

And this is the harness - it runs 5 services and a browser in 50 lines of code:

https://github.com/crdoconnor/django-remindme/blob/tests/tes...

The framework is currently in pre-alpha, but it should be a bit more stable in a few weeks. It should also be usable to test virtually any kind of software that runs on UNIX (not just Python apps).

Easily handling the "m pages * o server configurations * p clients" problem is on the roadmap. Different browsers and server configurations will be treated as "just another kind of fixture", and it will be easy to feed fixture groups so that you can run the same automated test under multiple scenarios.


Your second case there is still relatively easy to test!

Consider the problem of intersecting two arbitrary 3D surfaces to get a curve. There are an infinite number of possible inputs. Except for rare simple cases, for each pair of inputs there are an infinite number of valid approximate solution curves AND an infinite number of ways to express each of those curves. Also, you want solutions which are well-behaved -- no kinks, no singularities, etc -- and as simple as reasonably possible (but no simpler).

And even that's still better than something like MP3 encoding or digital musical instrument UI testing, where the quality of the result is arguably subjective...


On the contrary: Agile is embraced by people (often young) who see it as a defense against the boogeyman WATERFALL that they think people had been following until now. In fact, few places did true waterfall, and even then it was not waterfall as originally intended, which included constant iteration and refinement.

As a result, many teams now insistently follow agile with a strong belief that it's the only way to make progress. They don't accept that before daily standups, people were still capable of communicating when they were blocked. Before sprints, people still came in every day and cranked out code. Before kanban boards and backlogs, people called them todo lists and often did have them in a shared document. Before 1-2 hour sprint planning meetings, people still had a strong sense of the direction of their project and the steps needed to get to a shippable state.

Agile defines units of work (the sprint, the story, etc) and sets up a fixed cadence for communication. It does not by itself create better code, better programmers, or better deliverables.


It got MBA'd. It was turned into something that the MBAs can run reports from and use to hold people accountable.

Now, I'm not anti-MBA. However, I do think that having people with engineering degrees be subordinate to people with MBAs is an outmoded business philosophy. They need to be more on par with one another.


I know what you are saying. I've been on sales calls, as an agile consultant, where we have deliberately dodged around the word.

It's not just the new blood. My sense is that people were sold Agile as a silver bullet, and it unsurprisingly underdelivered. Even with Agile+XP you still get bugs, delays and budget misses; the difference is that you know about them much earlier, which, in my experience, doesn't make people as happy as I hoped it would.

I currently feel this way about Lisp too. I read the books, learned the basics, built stuff, even got a Clojure job and, despite feeling better about my code, I'm struggling to show measurable improvements over Java... but that doesn't mean I want to go back to Java any more than I want to go back to 'not agile'.


Or just an overdose of agile consultancy crap.


Us old people hate Agile too.


Would you rather spend the first half of every project planning and scheduling every little feature, then spend the second half of the project missing deadlines and implementing features that are not right for the product?


This is an incorrect comparison. But I'd also not like to work on a project where someone delivers a feature branch only to find out the feature has been completely redesigned for the 4th time. Or where there is no spec for features at all, just wireframes that change daily, and developers are supposed to reverse engineer what the latest UI means into a feature set. But hey, there's a daily standup, so it's Agile.

A lot of projects would do well to take a light approach to spec writing, to get faced with problems before they're in code.

I'm unconvinced that a dev process is going to supersede dysfunctional technical management, no matter what we call it.


Precisely. A team of decent engineers and competent managers will typically produce good software. And guess what - it will barely matter which methodology you choose to adopt.


A better dichotomy is: would you rather have a better process, or hire better engineers? I choose better engineers.


Better engineers are expensive, and I can't crack the whip on them like I can disposable engineers. /phb


This seems to me a false dichotomy.

Sometimes requirements actually are set in stone and it's good to do this - I worked on an embedded device last year, with fixed interfaces and fixed security requirements. That project would have gone a lot faster if there had been an effort to explore and document these at the beginning.

--edit-- oh, and of course, agile projects never, ever miss deadlines...


I've worked on way too many projects where things went "Hey, you just delivered the big load to QA? Well, we think the customer wants features X, Y, and Z on top of what you've already written, but we're not budging on the release date, so you better pull the load from QA and start working 16-hour-days and coming in on weekends. Oh, and if it breaks after being delivered to the customer, we'll throw QA under the bus because QA failed to do proper soak testing, and we'll have QA throw you under the bus because you didn't give them time to soak since we're making you develop right up to the night before launch." to trust any process where features can be added after development begins.


Word!


It's because, quite frankly, most places suck at doing Agile. And they fully embrace those aspects of "Agile" that seem to be there only to treat devs as children and easily replaceable cogs. Most people's first exposure to Agile is in this kind of environment, and so they grow to resent it.

Someone will undoubtedly come and reply to this saying that those places weren't really doing Agile, but that's irrelevant. They were claiming to do Agile, they paid some expensive consultants to train them on Agile, so from their perspective, they're Agile.


Sadly, I feel that most Agile/XP posts are misinformed rants, but this essay was really good. It covers many of the changes that are happening. Especially noteworthy for folks who haven't checked in on Agile/XP for a while are "Collective Code-Ownership becomes Collective Product-Ownership" and "Test-First Programming becomes Monitoring-First Programming".

Yes, there's a ton of folks who are lost in the woods when it comes to all of this, but there are also a heckuva lot of folks that are taking it to the next level. It's really cool to see the community evolving.


I've done my best programming by toiling first with paper and pens for a few weeks, slowly reducing the spaghetti domain requirements into an elegant graph formalism that has proven to be understandable and extensible. After the code was in place, pair programming was a fantastic way to share understanding of the code. Mandatory XP really is not suitable for all environments.


I'd try this continuous deployment thing in the finance or automotive industry. Let's see how the clients enjoy it.


They love it:

ING Bank Case Study: Improving time to market from 13 weeks to Less than 1 week with DevOps and Continuous Delivery:

https://www.youtube.com/watch?v=9jqY_bvI5vk


It seems they're mostly talking about their client-facing stuff (mobile banking app, etc.). That I can believe. I very much doubt that there isn't a committee who has to sign off every single release of their core infrastructure.


Surprisingly, people only give talks about success stories, not about how everything went wrong, which is what happens in most of what typical corporations understand as "Agile".


Working in an IB, I've seen "banking agile" - which is to pretend everything on the project is agile, but which in reality ends up being iterative waterfall.

Agile works best when the clients buy into it - accepting the good with the bad. In Finance, the business would love the good from agile, but will quickly decry the bad.


Clients probably will enjoy it provided you have a comprehensive suite of automated tests that you ran before deploying.


- "comprehensive suite": you can't mock everything. If you have an algorithmic trading platform, you connect to many 3rd parties and their systems are very complex, poorly documented and they don't work according to the documentation.

- "automated tests": traders don't know what they want. They think they know, but they don't. And they will tell you things they don't want, and when you deliver them they'll be angry.

- You don't have direct access to production. If something fails, you won't get all the logs, for regulatory reasons. Sometimes you won't get anything at all.

- Etc.

I see the value of continuous deployment; I just wanted to mention that it's not applicable in many, many fields outside of the CRUD bubble.


This is profoundly ironic.

Every excuse that you have made for not automating testing I have heard before ("mocking is hard, let's go shopping"). Most commonly, however, these excuses come from the pits of unprofessional software engineering - the "CRUD bubble", as you put it.

I find it pretty funny that you think "customers who don't know what they want" is a problem that is unique to trading. Or poorly/wrongly documented APIs.


No, I never said that I think these problems are unique to trading (that would have been a funny thing to say, indeed).

"Every excuse that you have made for not testing I have heard before" -- does this fact render the excuses invalid? Yes, mocking is sometimes too hard and you should not do it because the effort never pays back it's cost. And these situations are very common.


>does this fact render the excuses invalid

The "mocking is hard" excuse is invalid, yes.

Sometimes it can make sense to create a dirt-simple adapter whose code you very rarely have to change, which you subject to heavy manual testing. Then you mock that and leave it alone.

You can use this when converting a weird protocol into an easier-to-handle, easier-to-mock protocol.

However, that just moves the point at which you have to mock back a little, and saves you from writing lots of harness code. It doesn't prevent you from doing continuous integration or continuous deployment. It also doesn't mean you shouldn't mock. It also only makes sense if mocking is harder than actually writing, manually testing and deploying this service.
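A hedged sketch of that adapter idea in Python (ExchangeFeed, its wire format, and size_order are all hypothetical):

    from unittest.mock import Mock

    class ExchangeFeed:
        """Dirt-simple adapter: heavily manually tested, then left alone."""
        def __init__(self, raw_connection):
            self._conn = raw_connection

        def latest_price(self, symbol):
            # translate the vendor's odd wire format into a plain float
            frame = self._conn.request(b"PX?" + symbol.encode())
            return float(frame.split(b"|")[2])

    def size_order(feed, budget):
        return int(budget // feed.latest_price("ACME"))

    def test_order_sizing_uses_latest_price():
        feed = Mock(spec=ExchangeFeed)     # mock the adapter, not the vendor
        feed.latest_price.return_value = 101.5
        assert size_order(feed, budget=1015) == 10

Everything above the adapter can then be tested continuously without ever touching the weird protocol.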

>Yes, mocking is sometimes too hard and you should not do it, because the effort never pays back its cost.

Testing is hard. Let's go shopping.


He said automated testing. You can still test (and are able to test a lot more things) when you do it manually. You know, like actually using the application as it is intended.


I don't read them as excuses for not testing. I read them as reasons why automated testing is not enough. Big difference, and worth taking seriously.


They are not excuses for not testing, no. They are excuses for not automating a highly repetitive process.


Nothing is further from my mind than not automating a highly repetitive process. :) Please don't respond to strawmen; that's not useful for anyone. I'm saying that there are things that you can't mock efficiently because the thing you want to mock has very complex behavior -- this means you can't think of all the possible combinations of how it can deviate from the spec.

Let's not forget where the thread started: continuous, automated deployment to production. I agree that one should do a lot of automated tests, mock what makes sense to be mocked, etc. However, fully automated testing means no manual testing, right? That's what is almost impossible in many environments -- and this makes continuous deployment extremely risky.

Just to reiterate: I am not against mocking, I am not against automating repetitive processes. I just think that 100% automated testing is not feasible in many cases, and that makes 100% automated continuous deployment infeasible as well.


>I'm saying that there are things that you can't mock efficiently because the thing you want to mock has very complex behavior -- this means you can't think of all the possible combinations of how it can deviate from the spec.

Mocking complex and unreliable behavior by 3rd party systems is part of writing good tests and, ultimately, delivering high quality software.

If you can't think of all possible combinations of unpredictable behavior, that's fine. You keep testing until you've found as many as you think there are, and once you do, you mock them.

Once you get to the point where you think you have found all of them, you can release. Continuously.

If you never reach that point, either you grit your teeth and release anyway, or you never release.

If you can do this, you will probably score one over on the competition, because getting to this point is rare and valuable.

>However, fully automated testing means no manual testing, right?

Absolutely not. It just means that you've minimized your reliance upon it, and the manual testing you do do does not have to be done at the end of each iteration - it can be done continuously.

>That's what is almost impossible in many environments -- and this makes continuous deployment extremely risky.

If you are minimally reliant upon manual testing, then continuous deployment is not risky.

If you are heavily reliant upon manual testing, then continuous deployment is risky.

>Just to reiterate: I am not against mocking, I am not against automating repetitive processes. I just think that 100% automated testing is not feasible in many cases

There's a difference between something being hard and something not being feasible. Getting to 100% automated testing is hard, but it is feasible.


3rd party systems - since they're complex and poorly documented and don't work like they say they do, the obvious thing to do is to simulate their vagaries and test against that.

Traders don't know what they want - you could substitute any kind of stakeholder for "traders". Handling evolving requirements is the point. IT'S THE ENTIRE POINT OF AGILE.

No access to production - I'm not sure what argument you're making here.

And your assertion about continuous deployment is unsupported, so I won't bother.


The article differentiates between deployment and release.


XProlo? Extreme Proletarians? So they like, eXtremely don't own the means of production?


I see what you did there :-)


How does this fit into an environment with a large tester presence?


lol?



