The other day, I was involved in a conversation with a group of developers. Several of them work at a large company whose product is built on Ruby on Rails. We were talking about the development environment there, specifically the test suite. The codebase is quite a few years old and has a very slow test suite, in part due to some design issues in the codebase. I asked how long their test suite currently took to run; I don't remember the exact figure, but it definitely had a lower bound of 30 minutes (I think it might have been an hour). I then asked about just the unit test suite, the one you should be running frequently while developing. That ran on the order of many minutes. Someone else then asked how long it took to run a single set of examples, the type that's focused on whatever part of the codebase you're actively working on, the type that should be run almost constantly while writing code. This was 'better,' being on the order of 30-45 seconds. I remarked that this was still horrible, especially when working on a piece of the system that didn't need specific features of Rails*. One person remarked that this was inevitable with a large codebase. I disagreed. That is, I disagreed with the statement that the size of the codebase was the cause. A slow unit test suite and the inability to quickly run focused subsets indicate a design problem in the codebase, not its size. As we talked more, I started to notice something that I've seen before but had a hard time placing: extreme rationalization of bad situations.
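As an aside, the kind of focused, fast feedback I had in mind looks something like this: a plain-Ruby spec that loads only the object under test, without booting Rails at all. (A sketch only; PriceCalculator and the paths are hypothetical stand-ins for whatever non-Rails piece of your system you're working on.)

```ruby
# spec/unit/price_calculator_spec.rb
# A plain-Ruby unit spec: it requires just the class under test,
# not spec_helper or the Rails environment, so a focused run like
#   rspec spec/unit/price_calculator_spec.rb
# comes back in well under a second.
require_relative "../../lib/price_calculator"

RSpec.describe PriceCalculator do
  it "applies a percentage discount" do
    calculator = PriceCalculator.new(discount: 0.10)
    expect(calculator.total(100)).to eq(90)
  end
end
```

When most of a codebase can be exercised this way, the focused feedback loop drops from 30-45 seconds to well under one.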
This got me thinking. Often, we choose to be in certain situations and then, rather than admitting that the situation is bad and dysfunctional, convince ourselves -- and justify to others -- that this is the only way it can be. While we are free to choose trade-offs for whatever situation we place ourselves in, it is important to be honest with ourselves about the causes of the situation. For example, the above company has interesting computer-science types of problems, which makes up for the bad situation regarding the actual development process.
If we are not honest about the causes of a bad situation, we pose a significant danger to those who are less experienced or uncertain about their own situation. Take a slow test suite, for example. A test suite that actively hinders a developer from running portions of it while developing is a deterrent to running the suite at all. As the run time climbs into minutes, the test suite becomes an antagonist, rather than the helper and guide that it should be. Instead of rationalizing, say you have some serious design flaws in your system. Or say you have a large codebase and a huge team consisting of people with varying desires to write automated tests. When we mask the real causes of a problem, we risk confusing less-experienced developers who look to us for guidance. For example, thinking a slow test suite is inevitable could stop someone from asking for help optimizing their own codebase while there is still time. If you are talking to a less-experienced developer and you mask the fundamental problems, what message are you conveying to them?
The point of this post is not to point out how people are wrong, or to insist that you have to fix, or are even able to fix, a bad situation. I just want to offer a reminder that there is a difference between rationalizing and being honest about the reasons for something. By rationalizing, we fool ourselves, and those who learn from us, into thinking that we can't do better and that it isn't worth trying. By being honest about the reasons, we have the opportunity both to learn from our mistakes and to teach others what pitfalls might await given certain decisions.
So, the next time you find yourself talking with someone and describing a less-than-optimal situation, ask yourself whether you are being honest about the causes. Being honest doesn't mean you have the power or the inclination to fix the problem, but talking about the causes can lead to valuable conversations about how we can do things better in the future.
Also... can we stop using the terms 'ivory tower' and 'real world' when rationalizing our situations? As DHH said, "The real world isn’t a place, it’s an excuse. It’s a justification for not trying." (other inspirational quotes)
*In fact, I will put forward that you can't do an effective test-driven development cycle with a feedback loop that long.
As always, thoughtful comments are happily accepted.
Sunday, August 7, 2011
Wednesday, March 23, 2011
My talk from SCNA2010
I was honored to be asked to give the closing talk at the Software Craftsmanship North America conference (SCNA) last year. It was a fantastic event filled with great thinking, exciting conversations and wonderful people. This was the talk I gave introducing the idea of positivember.
Dates have been announced for this year's version of SCNA, November 18-19, 2011, so make sure to go check it out.
The video from my talk has been put up, so I wanted to share it. You can watch it on this page, view it full screen, or click over to Vimeo to watch it there.
SCNA 2010 / Corey Haines from Brian Pratt on Vimeo.
Wednesday, March 2, 2011
Turbulence, measuring the turbulent nature of your code
Recently, Michael Feathers has been investigating the idea of mining all the data in our source code repositories to start finding information about our codebase and system design. He wrote an article, "Getting Empirical about Refactoring", about using a churn vs. complexity chart to look for areas that could use some refactoring love.
Since joining Obtiva as Chief Scientist, he has been here in Chicago fairly frequently. I had the pleasure of spending a morning with him, and we naturally talked about his ideas. I was inspired to build a short bash script that generated a churn graph for my own codebase on MercuryApp. Chad Fowler took my short script and merged it with a script he wrote to run Flog over the codebase. While Michael, Chad and I were in Boulder, CO, for a coderetreat this past weekend, we put together a project called Turbulence.
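For the curious, the heart of such a churn script is tiny. Here is a minimal Ruby sketch of the idea (not the actual script we used): it counts, for each file, how many commits in the git history touched it.

```ruby
# churn.rb -- run from the root of a git repository.
counts = Hash.new(0)

# `git log --name-only --pretty=format:` prints only the file names
# touched by each commit, one per line.
`git log --name-only --pretty=format:`.each_line do |line|
  file = line.strip
  counts[file] += 1 unless file.empty?
end

# Show the twenty most frequently changed files.
counts.sort_by { |_, n| -n }.first(20).each do |file, n|
  puts format("%4d  %s", n, file)
end
```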
Turbulence is a gem that you install and run in the directory of your code. It does a churn report, combines it with Flog data, and generates a nice scatter plot view. You can find instructions on the Turbulence project page. Although it currently only supports Ruby code, we have plans to expand the project to support other languages.
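The underlying idea is simple enough to sketch in a few lines of Ruby. This is not Turbulence's actual implementation, just an illustration of the churn-plus-complexity pairing; it assumes `git` and the `flog` executable are on your PATH, and that flog's summary output includes a line like "123.4: flog total".

```ruby
# turbulence_sketch.rb -- emit CSV rows of file,churn,complexity
# for every Ruby file in the git history.
churn = Hash.new(0)
`git log --name-only --pretty=format:`.each_line do |line|
  file = line.strip
  churn[file] += 1 if file.end_with?(".rb")
end

churn.each do |file, commits|
  next unless File.exist?(file)
  # Pull the total score out of flog's summary line (format assumed).
  score = `flog #{file} 2>/dev/null`[/(\d+\.\d+): flog total/, 1]
  puts "#{file},#{commits},#{score}" if score
end
```

Feed the CSV into your plotting tool of choice and you have a rough churn vs. complexity scatter plot.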
One goal is to have people take a screenshot of their graph and post it to twitter with the hashtag #codeturb. This will allow us to view the graphs on hashalbum and see what we can see.
Another goal is to have people send us their data, so we can do some analysis on different contexts. If you'd like to take part in this project, please email me. We are working on finishing up an anonymizer function that will mask the filenames, in case you worry about directory structures or file names giving away your codebase's dirty secrets.
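The anonymizer is conceptually straightforward; something along these lines (a sketch, not the actual function) keeps each point in the data distinguishable while hiding the real names:

```ruby
require "digest"

# Replace each path with a stable digest: the same file always maps
# to the same token, but the real name never leaves your machine.
def anonymize(path)
  Digest::SHA1.hexdigest(path)[0, 12] + File.extname(path)
end

anonymize("app/models/secret_billing_rule.rb")
# => a stable token along the lines of "3f2a9c41d07b.rb"
```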
We are also proposing a series of talks at conferences outlining results, recommendations and tools to help you analyze your design based on the shape of your metrics. Keep watch for those.
I've uploaded my current graph for MercuryApp; you can see it in all its glory by clicking on the picture below.
Thursday, January 20, 2011
On the goals of Coderetreat
Last Sunday, I facilitated a coderetreat in Cleveland, Ohio, at the Leandog Software boat. It was a great time, filled with writing code, practicing technique and learning both obvious and subtle lessons. I'm always excited to see people's reactions to the event (for example, here and here), as they gain more insights into their journey. Over the past two years, I've found my own role as facilitator transform into that of a guide, poking a bit here and there at the code.
The full benefits of coderetreat become clear over the course of a whole day: we start with a session or two of understanding the problem domain, then spend the rest of the day pushing our limits, accepting that our 'normal' way of coding isn't enough and that we should be striving towards an ideal. The format of the day is structured very specifically to allow for this experimentation. This is a reason we start early and I don't support having people 'drop by': we want time to get past the 'hacking away to finish' mentality that normally accompanies time pressure and brings with it the inevitable corner-cutting. Focusing on improving our skills over 'getting it done' is one of the primary aspects that sets coderetreat apart from just being another hackfest: we are practicing to get better at the fundamentals. We aren't there primarily to practice or sell TDD, we aren't there to work on design patterns, and we aren't there to finish the problem. We are there to learn better software design skills through experimentation and exploration of the 4 rules of simple design.