We have been doing a lot of thinking at Snowplow about JSON Schema versioning and self-description. One of the key points as it relates to RESTful APIs is: it's not enough to version your API, you should be versioning the individual entities that your API returns; these entities should be able to evolve individually over time, with the focus on evolving the entities' schemas in additive (i.e. non-breaking) ways for existing clients.
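A minimal sketch of the idea in Ruby (the schema URI format, entity name, and fields are illustrative, not Snowplow's actual scheme): each payload self-describes by carrying the schema version it conforms to, and a new version is additive if it only adds fields an old reader can safely ignore.

```ruby
require 'json'

# Hypothetical self-describing envelope: the payload carries the
# schema version it conforms to, so clients can dispatch on it.
def wrap(schema_uri, data)
  { "schema" => schema_uri, "data" => data }
end

# A 1-0-1 payload as an additive evolution of 1-0-0: it only adds
# the optional "nickname" field, so 1-0-0 clients can still read it.
v100 = wrap("iglu:com.example/user/jsonschema/1-0-0",
            { "id" => 42, "name" => "Ada" })
v101 = wrap("iglu:com.example/user/jsonschema/1-0-1",
            { "id" => 42, "name" => "Ada", "nickname" => "ada" })

# Additivity check: every key a 1-0-0 reader expects is still present.
additive = (v100["data"].keys - v101["data"].keys).empty?
puts additive  # true
```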
There's quite a lot to learn from the Avro community in all this.
“To say that there’s a large amount of literature on the benefits of this approach would be an understatement.”
I've brought this up before, but “literature” carries the connotation of “scientific literature”, and I actually haven't heard of many rigorous, well-constructed scientific experiments that have produced conclusive scientific literature that indicates the benefits of TDD. There's certainly a lot of anecdotal posts and writings, but there seems to be relatively little actual “literature” in the most commonly used sense.
Not saying TDD is bad, of course… Just wondering if this is in fact an overstatement rather than an understatement. If we're going to appeal to authority, we should make sure that authority is valid :)
They report that high-rigour studies show TDD has no clear effect on internal code quality, external system quality, or team productivity. While some studies report TDD has a positive impact, just as many report it has a negative impact or makes no difference at all. The only dimension that seems to be improved is "test quality" - that is, test density and test coverage - and the researchers note that the difference was not as great as they had expected.
The chapter concludes that despite mixed results, they still recommend trialling TDD for your team as it may solve some problems. However, it is important to be mindful that there is as yet no conclusive evidence that it works as consistently or as effectively as anecdotes from happy practitioners would suggest.
The meta-study is relatively short and can be read online.
I was thinking specifically of things like the http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.... when I wrote that sentence, but I guess I also consider books as part of what I'm saying here.
> there seems to be relatively little actual “literature” in the most commonly used sense.
Literature: books, articles, etc., about a particular subject
I don't think you should add words to what people have said to make a point unless you get some clarification from the author that the additions actually clarify what they said. There have been quite a few books and articles about TDD.
We created some nice catapults before we "understood" gravity. Even now we don't fully understand existence.
That connotation didn't even cross my mind. Regardless: are you going to wait until there is a peer-reviewed scientific paper telling you TDD is good before you'll believe it? Do you have personal experience that TDD seems to make your designs better? If so, is that less true to you because we haven't scientifically "proven" it?
Meanwhile, you mostly deal with small, simple, or short-lived systems and have little expectation in building skyscraper enterprise apps, so the promised benefits fall in the "oh that's nice" bucket. Or perhaps you lead a team that is deeply invested in a three-year codebase not using Jabberwocky, and the cost of getting everyone to switch has a serious financial implication.
The stance of "Jabberwocky sounds nice enough, but I do wonder if anyone systematically proved it is worthwhile" is a perfectly valid position. It is not about what you should do, it is about what you can do, and you can't just follow every trend (also see: Agile+XP) or even trial every alternative. For many non-engineering firms, there is "real work" to be done, and the never-ending increasingly expensive pursuit of operational excellence can be a hard thing to sell internally.
This is a huge benefit of science for the rest of us - a few million people can experiment on the edge of what is known; most of them will get nowhere, but a few thousand will determine that "X448YAB" tends to be superior to "X28YNB". Over time, the remaining 7 billion of us will slowly migrate to the more effective way of doing things and everyone benefits. We can't denounce someone for not using the cutting edge until it is proven effective.
It may be that TDD has little negative effect upon development and results in a FeelGood™ response that could be present within any project with non-zero productivity.
Writing your tests from the client's perspective actually gives you that certainty. TDD is a powerful tool, but in my experience its evangelists get into overly academic debates about exactly what a "unit" is and how strictly you should adhere to the red-green-refactor steps.
For a really thoughtful and in-depth approach to API TDD I recommend Growing Object-Oriented Software, Guided by Tests by Steve Freeman (http://www.amazon.com/Growing-Object-Oriented-Software-Guide...).
There are a lot of other companies facing the same challenge who have also come to the conclusion that there isn't a sufficient pre-built solution. Hopefully we'll get to the point where there's a standard solution everyone can use instead of tons of homegrown ones.
I created a /tokens endpoint, where I POST the auth credentials and in return get back a newly generated auth token. In my opinion this is a nice RESTful solution.
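A sketch of the server-side logic such a /tokens endpoint might run (the credential store, names, and token format here are placeholders, not any real framework's API): valid credentials are exchanged for a freshly generated opaque token.

```ruby
require 'securerandom'

USERS  = { "alice" => "s3cret" }  # demo credential store
TOKENS = {}                       # issued token => username

# Hypothetical handler body behind `POST /tokens`: verify the
# credentials, then mint and record a new random token.
def create_token(username, password)
  return nil unless USERS[username] == password
  token = SecureRandom.hex(16)    # newly generated auth token
  TOKENS[token] = username
  token
end

token = create_token("alice", "s3cret")
puts TOKENS[token]  # alice
```

Because the token is a resource that gets created, POST maps onto it naturally; revoking the token later can be a DELETE on the same resource.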
You are creating a session. That's why login could be a POST to /users/sessions.
I love TDD. Not because I write a lot of tests, but because it helps my mind reason about proper composition and develop modular, clean APIs.
Using it as a design principle creates better software and you get a nice bonus -- regression testing for your assumptions.
It also serves as one vector of documentation -- though it's not the best.
I've been building a tool that makes it easier to work with HTTP resources. While it's definitely not finished, I've put it online temporarily if anybody wants to check it out:
It lets you define environments, HTTP resources, and tests to be run against those resources. Custom headers, params, and persistent variables (to be used in subsequent tests) are all in there (although there is no help or handholding right now). There is also a command line tool that lets you connect your local server to the service and run the tests against your local environment.
Please excuse the un-styled login/signup screen - I added it just now to put the app online. Also, this is a very early preview that I did not intend to put online so early, so there will be bugs. I'll be taking it offline this weekend.
As a simple example:
1. Create customer
2. Add tokenized card to customer
3. Charge the card
Step 2 requires the HREF/ID of the customer. Step 3 requires the HREF/ID of the card.
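The chaining in those three steps can be sketched with an in-memory stand-in for the API, so the ID threading is visible (`create` and the resource names are hypothetical, not a real payments client):

```ruby
require 'securerandom'

# Stand-in for "POST a resource, get it back with a server-assigned ID".
def create(kind, attrs)
  attrs.merge("id" => "#{kind}_#{SecureRandom.hex(4)}")
end

customer = create("customer", "name" => "Ada")              # step 1
card     = create("card",                                   # step 2 needs
                  "customer_id" => customer["id"])          # the customer ID
charge   = create("charge",                                 # step 3 needs
                  "card_id" => card["id"],                  # the card ID
                  "amount"  => 1000)

puts charge["card_id"] == card["id"]  # true
```

Each step's input is an output of the previous step, which is exactly what makes these flows awkward to express in tools that treat requests as independent.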
The scenario: https://github.com/balanced/balanced-api/blob/master/feature...:
Scenario: Push money to an existing debit card
  Given I have sufficient funds in my marketplace
  And I have a tokenized debit card
  When I POST to /cards/:debit_card_id/credits with the JSON API body:
  Then I should get a 201 Created status code
  And the response is valid according to the "credits" schema
  And the fields on this credit match:
  And the credit was successfully created
For some reason, though, I feel like the natural language would actually intimidate me. Where are ideas like "sufficient funds" defined?
Given(/^I have sufficient funds in my marketplace$/) do
  step 'I have tokenized a card'
end

Given(/^I have a tokenized debit card$/) do
  # card creation parameters elided here, e.g.:
  #   name: "Johannes Bach",
  @debit_card_id = @client['cards']['id']
end
We started having a LOT of trouble with 3rd party services breaking their JSON contracts, so I started using this to test on our Jenkins CI every night to make sure their JSON was constructed properly.
It also helps our front-end/mobile developers make sure the backend devs are creating proper JSON based on specifications.
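A minimal hand-rolled version of that kind of nightly contract check (a real setup would use a proper JSON Schema validator; the field names here are illustrative): parse the response, then verify each contracted field exists with the right type.

```ruby
require 'json'

# Hypothetical contract: required fields and their expected types.
CONTRACT = { "id" => Integer, "email" => String }

# Returns the names of fields that are missing or mistyped.
def violations(doc)
  CONTRACT.reject { |field, type| doc[field].is_a?(type) }.keys
end

good = JSON.parse('{"id": 7, "email": "x@example.com"}')
bad  = JSON.parse('{"id": "7"}')  # wrong type, and "email" is missing

puts violations(good).inspect  # []
puts violations(bad).inspect   # ["id", "email"]
```

On CI, a non-empty violation list fails the build, which surfaces a third party's silent contract break the night it happens instead of when a client blows up.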
I have never been a big fan of Cucumber either, and I'm not convinced it's more readable - isn't it just more verbose? Interesting point about it being language agnostic, though.
I guess I don't have a better solution to contribute. Just wanted to say yeah I have that problem too.
You can end up with a very domain specific testing "language" (DSTL?) enabling developers and domain experts to start describing a lot of different behavior (specifications) to see if the current implementation supports the behavior.
If current implementation does not support the behavior, then you now have a specification to implement the behavior.
I write mostly functional/integration tests. I try to avoid needing unit tests by making mistakes/errors of that sort impossible by construction (types).
Edit: I also write a lot of throw-away tests as I go. I find that the easiest way to figure out how third party libraries work is to write tests against the documentation. If you find something unexpected that way, you can be pretty sure it's somebody else's fault.
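A throw-away test of the kind described, against Ruby's own stdlib: assert what the documentation led you to expect, and let the assertion tell you whether your reading was right. The surprise `URI.join` has in store for a missing trailing slash is exactly what such tests catch early.

```ruby
require 'uri'

# Expectation from the docs: joining a relative path onto a base URI
# appends it when the base path ends in a slash...
raise unless URI.join("http://example.com/api/", "users").to_s ==
             "http://example.com/api/users"

# ...but without the trailing slash, the last path segment of the
# base is *replaced*, per RFC 3986 reference resolution.
raise unless URI.join("http://example.com/api", "users").to_s ==
             "http://example.com/users"

puts "expectations hold"
```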
Yes and no. It depends on how you "write" those tests. In this case, where you describe the scenarios, it is indeed true. However, you can take a different approach – without writing explicit tests (and thus the client code). Rather, you just describe your API in a sort of contract and then, as you iterate, verify the implementation is living up to this contract...