It usually went "OK, tell me what your code does", and then some talk about details, design tradeoffs, and identified issues. Sometimes there was a rubber-ducking effect of sorts where you figured out an issue with the overall code design, but the major benefit was that it forced you to write code that is not too long and can be reasoned about "in a vacuum" (kind of).
The second major benefit was that it forced you to physically walk away from the computer, which has all kinds of benefits.
All informal, nothing set in stone. We just used IM DND status to indicate that you shouldn't walk over just now because I'm in a do-not-disturb mode of coding.
You can probably bundle it up, write fancy DND/checkin-greenlight software and brand it as "Coffee cup driven software development". Agile and at least 4.0 ;P
Pair programming, when it's working well, is just lim(D -> 0) :)
So this is what it ended up being: one guy doing the work or sending HipChat messages to the guy required to get unstuck, while the other one was browsing Facebook. Then in the end the guy that was picking his nose all the time looks at the other guy's work quickly and says "yeah... that seems about right... merge it!".
Sure I can imagine it's great to work out the architecture of your new framework together with a few guys and a whiteboard and then starting to work on the code together. But why would somebody need to watch how you're doing some kind of trivial CRUD?
Unless literally every problem and feature they're doing is super exciting sooner or later somebody is just colouring between the lines to get the application finished and a quick code review when it's time to merge the pull request is all you need.
First off, sometimes you have a bunch of minor tasks that need to be taken care of, like updating some text in the UI or fixing a button size, also in the UI. Is there a reason that you have to sit down and plan to only do that? If you have a well-prioritized backlog and the freedom to get work done, these scenarios should be more like:
"Let's work on adding this new feature here." "Ok that's a good one, we'll need to do a, b, and c to get it all wired up, working and tested. Hey since we're in that part of the code there are some small tasks about changing the button size and the text here, let's make sure to get those done." ... "Great! we completed the new feature, fixed the button size, and the text. Merge it!"
I never sat down with my pair for the day and said "Ok go fix that button. Great, merge that. What's next?" It was always a discussion of the most valuable thing to get done and any collateral issues that could also be fixed. In this case, we often got more done.
Second, if you had a pair partner sitting around doing nothing, there is a problem. They aren't engaged. They either need to become engaged or not be in an environment where there is paired programming. That is ok! But they need to admit it and move on.
Pairing works well if you have the people and processes in place to allow it to work well. If there are people sitting on their hands, then there is a problem. Though I would argue that if you took away paired programming, the same engineers would often still be sitting on their hands, playing on facebook.
We started to allow devs to do story cards on their own, for easy stuff (like you said - colors, alignment fixes, CRUD, etc.)
This worked for a while, but we found the number of dev/bug/fix/bug/etc cycles on those stories was waaaay higher than cards done "properly" with pair programming (even big architecture cards).
So then we made the rule that a dev could do a card on their own, but just before committing they would sit down with another dev and show and explain it to them, thus helping to catch any little "gotchas".
This works really well. After all, the whole point of Agile is that there are no rules; you're supposed to come up with your own based on what works best for you and your org.
Not all software can or should be deployed rapidly like this. It's been my experience that end users don't really want constant churn in their software unless it's transparent to them. It's also hard to justify having 11 SEs sitting around a monitor programming. Their combined rate is probably something like $2 million a year with overhead. With that kind of burn rate you'd better have some serious productivity from the 1 guy on the keyboard.
It's not about how fast the guy at the keyboard writes things! It's about making the right choice when you're designing a system.
When I do programming by myself I go through many designs and refactorings.
If someone already mentioned "hey, don't allocate on the hot path" and "make a closure here instead of copy pasting" and so forth without me having to be concerned about code quality and performance I'd get things done faster. I usually implement a very naive first version. If after typing this up I already have 3-4 people fixing it, then my pull request can have good performance, style, etc. from the get-go and get merged pretty fast instead of constant "can you fix this, then I'll merge it?"
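For a concrete (made-up) Python illustration of the "don't allocate on the hot path" kind of feedback, here's a naive first version next to a reviewed one; the function names and the workload are invented:

```python
import math

# Naive first version: builds a fresh list on every call (the hot path).
def distances_naive(points, origin=(0.0, 0.0)):
    ox, oy = origin
    return [math.hypot(x - ox, y - oy) for (x, y) in points]

# After "don't allocate on the hot path" feedback: the caller supplies a
# reusable buffer, so the inner loop allocates nothing.
def distances_into(points, out, origin=(0.0, 0.0)):
    ox, oy = origin
    for i, (x, y) in enumerate(points):
        out[i] = math.hypot(x - ox, y - oy)
    return out

pts = [(3.0, 4.0), (6.0, 8.0)]
buf = [0.0] * len(pts)
assert distances_into(pts, buf) == distances_naive(pts) == [5.0, 10.0]
```

Getting that comment while typing, instead of three review round-trips later, is the whole pitch.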
Individual programmers at their own keyboards can all focus on separate problems. Two (or more) programmers with a single keyboard can only focus on a single problem at a time, but they get the experience and perspective of multiple developers. It's really just a matter of breadth of focus vs depth of focus.
At a certain point adding more developers won't help and it will end up wasting resources. This may be as soon as you add a second developer, or maybe after you add 6 developers.
I used to lambast my colleagues for having chosen to use IRC over more modern tools like Jabber or Hangouts. I realize now how foolish I was.
Some software just doesn't need to change continuously.
I was on a team that used it effectively to develop a Java back-end framework, which was neither a web service nor hosted by a web server. While I agree not everything fits into XP, you can't classify it like that.
Mob programming is not classical XP, but it is in the same spirit as pair programming. While it can burn a lot of money per feature developed and LOC written, you also need to consider the mentoring/learning aspect of it. Same goes for pair programming. It isn't only about code quality. It is about mentoring/sharing/learning.
Don't get me wrong: if you've never pair programmed for a long time before, you don't really understand the meaning of the word "annoyance", and if you don't have the right setup, one person may zone out and not really be helping.
Doing it right is an art. If you learn a way to do it correctly, it will really help: your developers will be more productive because they learn from each other, there will be less churn from any formal code reviews that might be required for sign-off (if required at all), and there will be fewer bugs. Fewer bugs mean less work for helpdesk/customer service, fewer adversely affected users within the company, and fewer periods when you aren't taking orders, which cause lost revenue and customer frustration that can lead to customers telling others about their bad experience and costing you additional opportunities.
But odds are that, no matter what methodology you choose, unless you try things out, tune them, and try again, you're going to get something wrong. It doesn't matter what you are developing. Hone it.
I currently work with what https://news.ycombinator.com/item?id=9413117 calls "implied pair programming": code reviews before commit. But rapid build and rapid deploy are so incredibly far from what's achievable here (C++/WinCE land). A quick build takes about 15 minutes. A full all-target-platforms build takes about 3 hours. It's then merged with releases from other teams and handed to QA. Customers are using releases up to three years old.
- From continuous integration to continuous deployment
- To project ownership (although there is some resistance here on the part of some software devs, who don't feel it's their responsibility)
- Monitoring first: we have a strong desire to increase this but we're a bit at a loss as to what a good light-weight solution is here
The only exception is mob programming. Instead, we still have developers code on their own, and we switch to pair programming when troubleshooting a tricky issue.
- Continuous deployment: sure, you're continuously deploying, because it takes a year and 23 signatures to get a release greenlighted into production.
- Project ownership: you can own any part which wasn't nailed down in the contract before the first line of code was written, meaning almost no parts at all.
- Monitoring first: you can come monitor as much as you want, if you fly over to another country to sit behind a screen logged onto the private network on which the app runs.
We do continuous integration in our dev environment, and of course not all production environments are that walled off, but I've seen examples of such extremes many times, and it seems endemic to large-scale organizations.
"Monitoring-First Programming" to me means that monitoring and capturing errors in production is very valuable indeed. It's not a reason to abandon regular testing activities, but it is basically the same kind of thing as testing.
That said, if you have the time to invest in grokking Riemann (learn Clojure first), it would be a fantastic tool to base the rest of your monitoring infrastructure upon.
Prometheus is designed with this in mind. Instrumentation is easy to add and takes care of things like concurrency and state maintenance for you, so you can sprinkle it anywhere safely.
http://www.slideshare.net/brianbrazil/python-ireland-monitor... goes into this a bit more.
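The "takes care of concurrency and state maintenance" part roughly means the metric object owns its own lock, so call sites can bump it from any thread with no coordination. A minimal pure-Python sketch of the idea (this is not the actual prometheus_client API, just the concept):

```python
import threading

class Counter:
    """A monotonically increasing metric, safe to update from any thread."""
    def __init__(self, name):
        self.name = name
        self._value = 0.0
        self._lock = threading.Lock()

    def inc(self, amount=1.0):
        with self._lock:   # the metric handles its own locking
            self._value += amount

    def value(self):
        with self._lock:
            return self._value

REQUESTS = Counter("http_requests_total")

def handle_request():
    REQUESTS.inc()         # "sprinkle anywhere" - no caller-side state needed

workers = [threading.Thread(target=lambda: [handle_request() for _ in range(1000)])
           for _ in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()
assert REQUESTS.value() == 8000.0
```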
In my experience, a toolchain consisting of statsd => collectd => graphite works well. Assuming you have never used it before, it will consume about a man-week to get it installed and tuned to your needs, but with benefits lasting for years.
Is it better than the paid tools out there? Probably not, but it's all open source, relatively stable, and has low cognitive overhead.
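For what it's worth, the statsd end of that pipeline is just a tiny line protocol over plain UDP; a sketch (the metric names and the localhost:8125 daemon are assumptions):

```python
import socket

def statsd_packet(name, value, metric_type):
    """Format a metric in statsd's line protocol, e.g. b'page.views:1|c'."""
    return f"{name}:{value}|{metric_type}".encode()

def send_metric(name, value=1, metric_type="c", host="127.0.0.1", port=8125):
    # UDP is fire-and-forget: this won't error even if no statsd is listening.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(statsd_packet(name, value, metric_type), (host, port))

send_metric("page.views")             # counter increment
send_metric("db.query_ms", 42, "ms")  # timing sample
```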
Now we gather around it and yell at the code like it's a football game and toss the keyboard back and forth. It's not for every project or team, but I recommend it way more than I thought I would at first.
We do need to find a way to nerf the keyboard.
I get the feeling there is some weird backlash against all things Agile lately. I've noticed it particularly in reddit comments.
I suspect it's young people who just weren't around before Agile.
XP was one of those things that prompted the creators to get together and formulate what they all had in common compared to traditional methods. This is what resulted in the Agile Manifesto.
The backlash is mostly against the consultants and pointy-haired bosses that cherry-picked the things they could grok from various Agile practices and implemented that as a cargo-cult solution.
In wider IT world, that cargo cult method has become known as "Agile", just like "hacker" now equals "cybercriminal" for the rest of the world.
I was around before Agile became the thing that everyone was doing. Myself and many of my contemporaries are very cynical about it because what it most often seems to boil down to is "what we were doing before but with more meetings and heavier process".
I've worked in places where Agile Evangelists would regularly take teams of people away from their desks for process-related meetings (retrospectives, planning, errr.... who the hell knows what else) for several hours at a time, more than once a week.
I've heard Agile consultants say things like "And the great thing about Agile is that if you start falling behind, you can take what you have to the customer, involve them in the process, and deliver even more at the next sprint!" as if it would miraculously provide extra developer time/effort for free.
And I've seen true-believers who seemed to think that if the project wasn't making progress then people probably just weren't doing 'Agile' hard enough, and probably needed even more meetings and process.
I like the manifesto. I don't like what often actually gets applied out in the field, and the consultants/evangelists are basically cultists AFAICT.
--edit-- I wouldn't claim to know about XP, I've never worked somewhere that embraced XP specifically, just places that decided to go 'Agile' and then proceeded to bog themselves down in so much procedure that projects started to fail.
I've only worked in one place where the morning stand-ups actually worked as they were intended - 5 developers go round the room, give a quick status and state any problems they have, identify who they need to talk to to get it solved, then get out again in under 10 minutes. Everywhere else I've worked they devolved into either developer chat/argument time or management info-broadcast time. For at least an hour. Every day. groan
* "move fast and break things" or
* "testing is the end user's job" or
* "what we were doing before, except with morning meetings where nobody is allowed to sit"
I expect some of the backlash is, ironically, against that.
It's similar to "devops" in this respect - a lot of people pretend to adopt the buzzword and then do almost the exact opposite of what it prescribes.
* "Bring in process-fascist SCRUM(tm) masters to put all creativity to a halt"
I've seen it, and it is the complete opposite of the Agile Manifesto's "Individuals and interactions over processes and tools".
Every time you code (whether requirements are clear or not), you go through the following steps (often executed in the space of 2-3 minutes):
1) Decide what the code should do
2) Write code
3) Build/run code
4) Test and see the outcome
5) Go to step 1
Even if you are not writing tests, you are still doing steps 3 and 4. Moreover, if you are not writing tests, you are doing steps 3 and 4 manually.
Doing it manually is only quicker if there are just a few of these iterations. If you are going through this loop every 2-3 minutes, you will be doing it hundreds of times a day and thousands of times in total over the course of a project.
Automating that process will save you a lot of time in the long run, and will still save you time in the short run, as well as leading to higher quality code.
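Automating steps 3 and 4 can be as small as this sketch (the slugify function here is invented, standing in for whatever you wrote in step 2):

```python
import unittest

def slugify(title):
    """The code from step 2: the unit we re-check on every iteration."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("A  Tidy   Title"), "a-tidy-title")

# Steps 3 and 4 become one repeatable command instead of a manual
# build-run-inspect loop executed hundreds of times a day.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```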
If I'm writing a library to add two numbers, it's easy to create fast tests with no false alarms: Assert.assertEquals(new BigDecimal(10), new BigDecimal(5).add(new BigDecimal(5))) is simple, fast, has no false alarms, and is easy to maintain.
Now imagine you're making a website and you want to test with multiple browsers, and it uses caching and connection pools and sessions and whatnot, and the website has several dozen pages, and there are a bunch of configuration options and feature flags and you need to test with all of them.
Now all of a sudden you're administering a pool of VMs with different OSes, writing web scrapers, launching web servers every test cycle, waiting for tests of pages you haven't changed, and running n tests * m pages * o server configurations * p clients. All of a sudden you're spending more time keeping your tests working, or changing your software so tests run faster, and you're hardly spending any time working on features users want.
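That n * m * o * p blow-up is literally a Cartesian product; the dimensions below are invented, but the arithmetic is the point:

```python
import itertools

pages = ["home", "login", "checkout"]       # m = 3
configs = ["cache_on", "cache_off"]         # o = 2
clients = ["firefox", "chrome", "mobile"]   # p = 3

matrix = list(itertools.product(pages, configs, clients))
assert len(matrix) == 3 * 2 * 3   # 18 runs, for even this toy matrix

def run_case(page, config, client):
    # Stand-in for launching a server with `config` and scraping `page`
    # with `client`; in real life each of these takes seconds to minutes.
    return True

assert all(run_case(*case) for case in matrix)
```

Add one more browser or feature flag and the run count multiplies again, which is why the maintenance burden grows so much faster than the feature set.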
Naturally, to someone writing the first type of software, unit testing seems perfectly logical. But I can also see how someone working on the second type of software might get frustrated!
Part of the point of unit testing is that you keep the 'unit' under test to be as small as possible so you don't have these sorts of problems and can run tests as frequently as is useful. It doesn't remove the need for other types of testing suites, but they can be isolated and run at separate times.
You're right - it eats a lot of time and it shouldn't. It's not as easy as unit testing and it should be.
I actually wrote an open source testing framework to handle this type of thing precisely because I was getting so frustrated with having to rebuild the boilerplate code to do this everywhere I went.
This is a simple example test I made with it - it uses a browser (w. selenium), libfaketime, and a mock SMTP server to test a django/celery/postgres/redis app:
And this is the harness - it runs 5 services and a browser in 50 lines of code:
The framework is currently in pre-alpha, but it should be a bit more stable in a few weeks. It should also be usable to test virtually any kind of software that runs on UNIX (not just Python apps).
Easily handling the "m pages * o server configurations * p clients" problem is on the roadmap. Different browsers and server configurations will be treated as "just another kind of fixture", and it will be easy to feed fixture groups so that you can run the same automated test under multiple scenarios.
Consider the problem of intersecting two arbitrary 3D surfaces to get a curve. There are an infinite number of possible inputs. Except for rare simple cases, for each pair of inputs there are an infinite number of valid approximate solution curves AND an infinite number of ways to express each of those curves. Also, you want solutions which are well-behaved -- no kinks, no singularities, etc -- and as simple as reasonably possible (but no simpler).
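One common answer for that kind of geometry code is property-based checking: instead of comparing against one canonical curve (there isn't one), assert that every sample point of whatever curve comes back lies on both surfaces within tolerance. A toy sketch with implicit surfaces; the hard-coded intersector here stands in for a real geometry kernel:

```python
import math

# Implicit surfaces: f(p) == 0 exactly on the surface.
def sphere(p):   # unit sphere at the origin
    x, y, z = p
    return x * x + y * y + z * z - 1.0

def plane(p):    # the plane z = 0
    return p[2]

def intersect_sphere_plane(n=64):
    """Stand-in intersector: the unit circle in z = 0, which is just one of
    infinitely many valid parameterizations of this intersection."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n), 0.0)
            for k in range(n)]

def on_both_surfaces(curve, f, g, tol=1e-9):
    # The property test: accept *any* curve whose samples satisfy both
    # surface equations, rather than one blessed answer.
    return all(abs(f(p)) < tol and abs(g(p)) < tol for p in curve)

assert on_both_surfaces(intersect_sphere_plane(), sphere, plane)
assert not on_both_surfaces([(2.0, 0.0, 0.0)], sphere, plane)
```

Well-behavedness (no kinks, no singularities) can be bolted on as further properties, e.g. bounding the turning angle between consecutive samples.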
And even that's still better than something like MP3 encoding or digital musical instrument UI testing, where the quality of the result is arguably subjective...
As a result, many teams now insistently follow agile with a strong belief that it's the only way to make progress. They don't accept that before daily standups, people were still capable of communicating when they were blocked. Before sprints, people still came in every day and cranked out code. Before kanban boards and backlogs, people called them todo lists and often did have them in a shared document. Before 1-2 hour sprint planning meetings, people still had a strong sense of the direction of their project and the steps needed to get to a shippable state.
Agile defines units of work (the sprint, the story, etc) and sets up a fixed cadence for communication. It does not by itself create better code, better programmers, or better deliverables.
Now, I'm not anti-MBA. However, I do think that having people with engineering degrees be subordinate to people with MBAs is an outmoded business philosophy. They need to be more on par with one another.
It's not just the new blood. My sense is that people were sold agile as a silver bullet, and it unsurprisingly underdelivered. Even with Agile+XP you still get bugs, delays, and budget misses; the difference is that you know about it much earlier, which, in my experience, doesn't make people as happy as I hoped it would.
I currently feel this way about Lisp too. I read the books, learned the basics, built stuff, even got a clojure job and, despite feeling better about my code, I'm struggling to provide measurable improvements over Java... but that doesn't mean I want to go back to Java any more than I want to go back to 'not agile'.
A lot of projects would do well to take a light approach to spec writing, so they're confronted with problems before those problems are in code.
I'm unconvinced that a dev process is going to supersede dysfunctional technical management, no matter what we call it.
Sometimes requirements actually are set in stone and it's good to do this - I worked on an embedded device last year, with fixed interfaces and fixed security requirements. That project would have gone a lot faster if there had been an effort to explore and document these at the beginning.
--edit-- oh, and of course, agile projects never, ever miss deadlines...
Someone will undoubtedly come and reply to this saying that those places weren't really doing Agile, but that's irrelevant. They were claiming to do Agile, they paid some expensive consultants to train them on Agile, so from their perspective, they're Agile.
Yes, there's a ton of folks who are lost in the woods when it comes to all of this, but there are also a heckuva lot of folks that are taking it to the next level. It's really cool to see the community evolving.
ING Bank Case Study: Improving time to market from 13 weeks to Less than 1 week with DevOps and Continuous Delivery:
Agile works best when the clients buy into it - accepting the good with the bad. In Finance, the business would love the good from agile, but will quickly decry the bad.
- "automated tests": traders don't know what they want. They think they know, but they don't. And they will tell you things they don't want, and when you deliver them they'll be angry.
- you don't have direct access to production. If something fails, you won't get all the logs because regulatory reasons. Sometimes you won't even get anything.
I see the value of continuous deployment; I just wanted to mention that it's not applicable in many, many fields outside of the CRUD bubble.
Every excuse that you have made for not automating testing I have heard before ("mocking is hard, let's go shopping"). Most commonly, however, these excuses come from the pits of unprofessional software engineering: the "CRUD bubble", as you put it.
I find it pretty funny that you think "customers who don't know what they want" is a problem that is unique to trading. Or poorly/wrongly documented APIs.
"Every excuse that you have made for not testing I have heard before" -- does this fact render the excuses invalid? Yes, mocking is sometimes too hard, and you should not do it when the effort never pays back its cost. And these situations are very common.
The "mocking is hard" excuse is invalid, yes.
Sometimes it can make sense to create a dirt simple adapter whose code you very rarely have to change, which you subject to heavy manual testing. Then you mock that and leave it alone.
You can use this when converting a weird protocol into an easier to handle & easier to mock protocol.
However, that just moves the point at which you have to mock back a little, and saves you from writing lots of harness code. It doesn't prevent you from doing continuous integration or continuous deployment. It also doesn't mean you shouldn't mock. It also only makes sense if mocking is harder than actually writing, manually testing and deploying this service.
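Sketched in Python (the protocol, class names, and prices here are all invented): the adapter hides the weird wire format behind something trivially mockable.

```python
from unittest import mock

class QuoteFeedAdapter:
    """Dirt-simple, rarely-changing wrapper around an awkward upstream
    protocol. Manually tested hard once, then left alone."""
    def __init__(self, transport):
        self._transport = transport

    def last_price(self, symbol):
        # Suppose upstream speaks something like 'PX|IBM|142.17';
        # that weirdness is hidden here and nowhere else.
        raw = self._transport.request(f"PX|{symbol}")
        return float(raw.split("|")[2])

def position_value(feed, symbol, qty):
    # Business logic only ever sees the easy interface.
    return feed.last_price(symbol) * qty

# Tests mock the thin adapter instead of the whole weird protocol:
fake_feed = mock.Mock()
fake_feed.last_price.return_value = 100.0
assert position_value(fake_feed, "IBM", 5) == 500.0
fake_feed.last_price.assert_called_once_with("IBM")
```

The adapter itself changes rarely, so the heavy manual testing it needs is a one-time cost rather than a per-iteration one.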
>Yes, mocking is sometimes too hard and you should not do it because the effort never pays back its cost.
Testing is hard. Let's go shopping.
Let's not forget where the thread started from: continuous, automated deployment to production. I agree that one should do a lot of automated tests, mock what makes sense to be mocked, etc. However, fully automated testing means no manual testing, right? That's what is almost impossible in many environments, and this makes continuous deployment extremely risky.
Just to reiterate: I am not against mocking, I am not against automating repetitive processes. I just think that 100% automated testing is not feasible in many cases, and that makes 100% automated continuous deployment infeasible as well.
Mocking complex and unreliable behavior by 3rd party systems is part of writing good tests and, ultimately, delivering high quality software.
If you can't think of all possible combinations of unpredictable behavior, that's fine. You keep testing until you find as many as you think there are, and once you do, you mock them.
Once you get to the point where you think you have found all of them, you can release. Continuously.
If you never reach that point, either you grit your teeth and release anyway, or you never release.
If you can do this, you will probably score one over on the competition, because getting to this point is rare and valuable.
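Replaying the unpredictable behaviors you've catalogued is exactly what mock side effects are for; a sketch (the client and the failure sequence are invented):

```python
from unittest import mock
import socket

def fetch_with_retry(client, attempts=3):
    """Retry a flaky third-party call, giving up after `attempts` tries."""
    last_error = None
    for _ in range(attempts):
        try:
            return client.get()
        except socket.timeout as err:
            last_error = err
    raise last_error

# Replay a failure mode observed in testing: two timeouts, then success.
flaky = mock.Mock()
flaky.get.side_effect = [socket.timeout(), socket.timeout(), "payload"]
assert fetch_with_retry(flaky) == "payload"
assert flaky.get.call_count == 3
```

Each new misbehavior you discover in the real system becomes one more scripted side effect, so the test suite accumulates your hard-won knowledge of the third party.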
>However, a fully automated testing means no manual testing, right?
Absolutely not. It just means that you've minimized your reliance upon it, and the manual testing you *do* do does not have to be done at the end of each iteration; it can be done continuously.
>That's what is almost impossible in many environments -- and this makes continuous deployment extremely risky.
If you are minimally reliant upon manual testing then continuous deployment is not risky.
If you are heavily reliant upon manual testing, then continuous deployment is risky.
>Just to reiterate: I am not against mocking, I am not against automating repetitive processes. I just think that 100% automated testing is not feasible in many cases
There's a difference between something being hard and something not being feasible. Getting to 100% automated testing is hard, but it is feasible.
Traders don't know what they want - you could substitute any kind of stakeholder for "traders". Handling evolving requirements is the point. IT'S THE ENTIRE POINT OF AGILE.
No access to production - I'm not sure what argument you're making here.
And your assertion about continuous deployment is unsupported, so I won't bother with it.