I was asked recently by a Kansas City IT executive what advice I would give a software development manager (or any team member, for that matter) wanting to improve their process if I only had enough time to do so during an elevator ride. It's the elevator speech idea, but instead of a sales spiel, the goal is prioritized (given the time limit) process-improvement advice. The executive didn't specify how tall the building should be, and therefore how many floors the elevator would travel, so to maximize my time and advice I've taken the liberty of placing myself in One Kansas City Place, the tallest building in Kansas City at 42 floors.
Anticipating the frequency of stops based on the number of floors and the mix and volume of tenants, and choosing to ride the elevator mid-morning (after the day has started and well before lunch), I figure I have enough time to discuss two topics with my elevator companion.
So here goes. The door has closed and the ascent has begun. The two topics? RTF and estimation. RTF stands for Running, Tested Features. It originated in the Agile software development community, but it applies in any process. And estimation? Well, we all know what a black art it can be and how stressful it can be to deliver and receive; it becomes downright painful when estimates are missed in either direction. I have experience with the so-called heavier-weight methodologies as well as the latest Agile principles, and I have found useful practices in all of them, depending on the team, corporate culture, industry and other factors. Overall, I'm a fan of a "Lean" approach in general, regardless of the stated methodology an organization has adopted: look for waste, overhead and underperforming practices, and optimize toward the ultimate goal of working software delivered in a predictable manner. This brings me back to RTF, estimation and my elevator ride.
The idea behind RTF is to deliver running, tested features (potentially shippable software) every iteration or cycle, depending on your team's vocabulary. Regardless of the time box (one, two, three, even four weeks), the goal is to have features that are "done" not just according to the developer, but also to the customer, via the acceptance tests that are (or should be) in place to validate the features. Automated tests are optimal, but as long as the tests are run at the end of each iteration, you'll have a better idea of overall project progress because you are not deferring an unknown quantity of "not-done" work (not tested means not done) toward the end of the project, when some organizations employ a test/fix phase leading to a release into production, typically under immense pressure. So measure RTF early and often.

Now the other interesting thing about RTF (at least that we have time for as we pass the 22nd floor) is using it as a target to help teams identify ways to improve their processes. What I've done with struggling and sometimes dysfunctional teams laboring to identify issues and make improvements is simply state that RTF is the target (it's difficult to argue against the notion) and that how they achieve it is, for the most part, up to them. We try that for a few iterations. What typically happens is that teams figure out ways to overcome obstacles, agree on strategies and work together toward hitting the RTF goal. It's not always without pain, but progress is almost always made, and the light is shone on the priority areas that a particular team needs to address given their current circumstances. Where the team takes it from there is another elevator ride, err…blog post. With the 42nd floor coming up soon, I'll move on to estimation.
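The RTF measurement itself is simple enough to sketch in a few lines. Here's a hypothetical illustration (my own, not from any particular tool) of the key rule: a feature only counts toward RTF once its acceptance tests pass, no matter what the developer says.

```python
# Hypothetical sketch: counting Running, Tested Features (RTF) per iteration.
# A feature counts only when its acceptance tests pass -- "dev done" isn't done.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    code_complete: bool      # the developer considers it finished
    acceptance_passed: bool  # the customer-facing acceptance tests agree

def rtf(features):
    """Count features that are both running and tested."""
    return sum(1 for f in features if f.code_complete and f.acceptance_passed)

iteration = [
    Feature("login", code_complete=True, acceptance_passed=True),
    Feature("search", code_complete=True, acceptance_passed=False),  # not done!
    Feature("export", code_complete=False, acceptance_passed=False),
]

print(rtf(iteration))  # only "login" counts, so RTF is 1
```

Measured at the end of every iteration, that single number makes deferred "not-done" work visible instead of letting it pile up for a test/fix phase.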
Consistent estimation can be an amazingly powerful tool when it works, and for some organizations it does work. With confidence in estimates, products and projects can be valued and prioritized, enabling the organization to make optimal business decisions. Development teams can have reasonable discussions with management about what is and is not possible in terms of cost and schedule; trust is established and a lot of needless pressure is relieved. That is, of course, when it works, and it seems to fall short more often than not. Teams often joke (or lament) that they can tell customers precisely what something is going to cost: when they're finished building it. That's where a technique often called "relative sizing" comes into play; it actually somewhat embraces that perspective of "we'll give the final estimate when we deliver the product." In a separate blog post I'll outline the details of the approach, but for now imagine yourself at a movie theater snack bar about to purchase a drink before the show starts. When the attendant asks what size you would like, what do you do? If you're like most people, you ask to see the sizes of the cups, or perhaps they're already on display. You then say, "I'll take a small" (or medium, large or jumbo). Never do you answer in terms of precise fluid ounces, as in, "I would like 48 ounces of Mountain Dew, please." This is estimating with relativity rather than precision, and it's what we humans do best. We're great at comparing similar things and judging their size relative to one another, but we're terrible at looking at something in isolation and passing a size judgment. So how does this help us estimate software? Most developers I've worked alongside, or who have worked for me over the years, are good at comparing requirements and deciding which will take more or less time (size) when compared to others.
On the other hand, they cannot seem to master the art of precision and state with any certainty (or accuracy) that one requirement will take 22.5 hours while another will take 12.25 hours. With relative sizing, developers compare requirements and place them in buckets that represent degrees of size magnitude, such as a Fibonacci scale (1/2, 1, 2, 3, 5, 8, 13…). They then work on a few requirements and see how many they can get done (RTF) in a given iteration or time box. They can then use the velocity at which they completed those requirements (already sized relative to the remaining requirements) to predict, with a high degree of certainty, how long it will take to complete the remainder of the work. Get it? The same team working on the same bundle of requirements yields measurable results, and therefore that team should continue to yield similar results. Of course things will change along the way. New requirements will be added, some will be dropped and others changed, but determining the impact on schedule and cost will be straightforward as long as size relativity is maintained. So the idea is: size the requirements relative to each other; work for a couple of iterations to determine the team's velocity (the rate at which things are getting done); then compare that velocity to what remains and provide management with the most realistic estimate possible, an estimate based on historical actuals.
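The arithmetic behind that forecast fits in a few lines. This is a quick sketch of my own (the function names and numbers are illustrative, not from any specific tool): velocity is the historical average of points completed per iteration, and the forecast divides the remaining relatively-sized work by that velocity.

```python
# Hypothetical sketch of velocity-based forecasting with relative sizes.
import math

def velocity(points_per_iteration):
    """Average points completed per iteration -- the historical actuals."""
    return sum(points_per_iteration) / len(points_per_iteration)

def iterations_remaining(remaining_sizes, points_per_iteration):
    """Forecast iterations left; round up, since a partial iteration still costs one."""
    return math.ceil(sum(remaining_sizes) / velocity(points_per_iteration))

# Two iterations of history: the team completed 13, then 11 points (velocity = 12).
history = [13, 11]

# Remaining requirements, sized relative to one another on a Fibonacci scale.
backlog = [1, 2, 3, 5, 5, 8, 8, 13, 13]  # 58 points in total

print(iterations_remaining(backlog, history))  # 58 / 12 rounds up to 5 iterations
```

Note that the forecast holds only while size relativity is maintained: when requirements are added, dropped or changed, re-size them against the same scale and the schedule impact falls straight out of the same division.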
As the doors open on the 42nd floor, my elevator mate now has two ideas they can use to help their team improve, and to help improve their relationship and credibility with management and the rest of the organization. Beats listening to elevator music.