Friday, December 12, 2008

Talk with Brian Marick, Part 1 (of 2)

Brian Marick was kind enough to host me for a few days on my tour (12/8 to 12/9). He's working on a RubyCocoa book, and I was lucky enough to pair with him while he worked out some TDD stuff for the book.

In this first part, we talk about his background, then get into acceptance testing with fit-style frameworks.

Brian added a couple blog entries: one about our pairing, the other about the conversation topic.

And, of course, here is a link to the 'Everyday Scripting with Ruby' book that we mention. It is a great book, so go pick it up!

Talk with Brian Marick, Part 1 from Corey Haines on Vimeo.

5 comments:

  1. Brian is wrong, wrong, wrong to dismiss ATDD the way he does.

    Martin Fowler defines refactoring as "the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure".

    At the end of the day, external behavior is what really counts.

    Unit tests cover internal behavior, whereas acceptance tests cover external behavior. As such, acceptance tests are far more valuable when it comes to refactoring. Written in the right way, they form a safety net around your application that keeps it safe as you refactor.

    Consider a simple refactoring like moving a method from one class to another. Your unit tests will have to change to reflect the migration. Essentially you're changing the tests and the code at the same time, which isn't exactly an ideal thing to do.

    With acceptance tests, since the external behavior hasn't changed, you can freely move the method, confident that your acceptance tests will catch any problems. (See the first sketch after the comments for a concrete version of this.)

    The key to success is to clearly separate the definition of the goal (the acceptance criteria) from the solution (the implementation).

    The reason people have so much trouble with ATDD is that they don't make this separation. They write acceptance tests as scripts full of implementation-specific detail, which locks the tests into the current implementation.

    It is difficult to get your head round not writing scripts, but once you "get it", it's hard to understand why people do it any other way.

    More on this topic at:
    http://www.concordion.org
    http://blog.davidpeterson.co.uk

  2. Thanks for your comments, David. Very good points.

    Brian and I talk a little bit more about this in part 2 of the conversation; I'm hoping to have that up tomorrow night or Sunday morning.

  3. Thanks for the interesting discussion. I've never used Fit/FitNesse, but we've talked about it a bit on my teams. I've always been concerned about the maintenance of the binding code.

    My approach, which Brian helped me conceive a few years ago, has been to have a layer of what I call workflow tests, which I've implemented using Watir or Selenium. I don't consider these to be proper acceptance tests, because the customer didn't write them and isn't expected to maintain them. The goal of the workflow tests is to ensure that the app doesn't fail catastrophically when it's all put together and that the major value-producing user scenarios work. I assume that if these tests pass, then the software can be released (an assumption that's not always true).

    In that sense, I've done a couple of agile projects now without customer-provided acceptance tests. I find that my workflow tests in Watir or Selenium are helpful and support the refactoring that David mentioned in his comment. They are a pain to maintain, though, and they tend to break for reasons unrelated to the underlying code changes, often in ways that are hard to reproduce and fix.

    I've also tried the "just run the unit tests before committing" approach, where we let the build server run the ATs. This works okay, but you've got to constantly monitor the build. The ATs do catch bugs, despite their being a big time and energy sink. What I don't like about this approach is that it violates my policy that every commit should monotonically increase the functionality of the software. If the ATs would have caught a bug, I'd like to know before committing the code, not after.

    I don't want to get rid of the AT/workflow tests, because they cover an important area of integration in my apps. I do want to find ways to make them run faster, more reliably, and be easier to maintain. I agree with David's comment that the key to success is in the separation. I've started using an approach that I learned from Alan Richardson called model-driven testing, where I build a layer of code that models the application, hides the Watir/Selenium implementation details, and allows my tests to read well without being coupled to the test implementation technology. (There's a rough sketch of that kind of layer after the comments.)

  4. Thanks for the thoughtful comments, David. If you check out Brian Marick's blog, he's putting a lot of thought and discussion into these ideas.

    http://www.exampler.com/blog/

  5. What if we have good exploratory testers? We don't. What if we have people who will read whiteboards with examples? We don't. What if we have enough people that we can do acceptance testing by hand for each revision? It's all well and good creating testing approaches for the top 10%, but that doesn't add much value; the top 10% will always succeed.

    I think we're beginning to see the testing discussion become detached from reality.

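A small, made-up Ruby sketch of the refactoring point from the first comment. The Order and Shop classes here are purely illustrative, not anything from the conversation. The first test names the class that currently holds the price calculation, so moving that calculation into, say, a PriceCalculator forces the test to change along with the code; the second test only states the external behavior through the application's entry point, so the move doesn't touch it.

    require 'test/unit'

    # Just enough made-up production code for the tests below to run.
    class Order
      def initialize
        @items = []
      end

      def add_line_item(price, quantity)
        @items << [price, quantity]
      end

      def total
        @items.inject(0) { |sum, (price, quantity)| sum + price * quantity }
      end
    end

    class Shop
      Receipt = Struct.new(:amount_charged)

      def place_order(price, quantity)
        order = Order.new
        order.add_line_item(price, quantity)
        Receipt.new(order.total)
      end
    end

    # Implementation-coupled unit test: it names the class that happens to
    # hold the calculation today. Move #total into a PriceCalculator and
    # this test has to change at the same time as the code.
    class OrderUnitTest < Test::Unit::TestCase
      def test_total_is_sum_of_line_items
        order = Order.new
        order.add_line_item(10, 3)
        assert_equal 30, order.total
      end
    end

    # Behavior-focused test: it only states the goal (three $10 items cost
    # $30) through the shop's public entry point, so the calculation can
    # move between classes without this test changing.
    class OrderingAcceptanceTest < Test::Unit::TestCase
      def test_customer_is_charged_for_what_they_ordered
        receipt = Shop.new.place_order(10, 3)
        assert_equal 30, receipt.amount_charged
      end
    end
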
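And a rough sketch of the application-model idea from the third comment, written against the classic Watir API. The StoreFront class, URLs, field names, and login details are all hypothetical; the point is just that the workflow test at the bottom reads as intent while the browser-driving details stay in one layer.

    require 'watir'

    # A thin model of the application under test. Workflow tests call these
    # methods in domain language; only this class knows about URLs, form
    # fields, and buttons, so driver and page churn stays in one place.
    class StoreFront
      def initialize(browser = Watir::IE.new)   # Selenium could sit behind the same interface
        @browser = browser
      end

      def sign_in(username, password)
        @browser.goto("http://localhost:3000/login")   # hypothetical app URL
        @browser.text_field(:name, "username").set(username)
        @browser.text_field(:name, "password").set(password)
        @browser.button(:value, "Log in").click
      end

      def order(item_name)
        @browser.goto("http://localhost:3000/items/#{item_name}")
        @browser.button(:value, "Add to cart").click
        @browser.button(:value, "Check out").click
      end

      def confirmation_message
        @browser.div(:id, "notice").text
      end
    end

    # The workflow test now reads as intent rather than as a browser script.
    store = StoreFront.new
    store.sign_in("corey", "secret")
    store.order("widget")
    raise "order did not go through" unless store.confirmation_message =~ /thank you/i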

Please don't post anonymously; I won't use your name/URL for anything insidious.
Comments by Anonymous are not guaranteed to make it through moderation.
Constructive comments are always appreciated.