Friday, June 30, 2006

The Failure of Rotisserie Baseball Logic

Andrew -

I recently finished reading two books and had no idea, before I started either of them, how profoundly related they were. The first was The Logic of Failure. It had been on my reading list for quite a while, recommended by Rob, a former colleague and now friend. The second, Fantasyland (about fantasy baseball), I picked up on a whim at the recommendation of a friend (Michael) while riding home with him on the train. (You’re smart enough to already see the connection between these two books, aren’t you…)

The Logic of Failure is a clinical study of the psychology behind why people make bad decisions that ultimately lead to catastrophic failures. The disaster at Chernobyl is one example the author, Dietrich Dörner, uses in the book.

To study people’s decision-making processes, Dörner created computer simulations of two complex adaptive systems (a village in West Africa and a small town in Germany) and asked study participants to improve the lives of the simulated people through their decisions. More often than not, the simulation ended with the people in worse shape than before the participants ‘started meddling’. Poor goal setting, focusing on incidentals, not addressing problems soon enough, lack of experience in the domain, and cynicism are among the reasons for failure identified and discussed at length.

The Logic of Failure was a hard book for me to read for a couple of reasons. The first is that it is a very clinical study of why people make bad decisions that ultimately lead to failure. But it was also difficult because I clearly recognized, in my own decision-making, several of the patterns that have not served me well.

Well, after beating myself up over my poor decision-making skills, I decided to pick up something to read just for fun - something to lighten my mood a little. I needed a book that was not work related, not self-improvement, not technical - just pure entertainment. Well, I hit the jackpot with Fantasyland.

Fantasyland is the story of a sportswriter’s first attempt at playing Rotisserie Baseball during the 2004 baseball season. Sam Walker of the Wall Street Journal joined Tout Wars, the premier Rotisserie Baseball league, for his inaugural season of fantasy baseball. With his in-depth knowledge of baseball, ready access to players and coaches for their insights (using his press pass to get into each team’s clubhouse), and two full-time advisors on his personal payroll (plus the occasional consult from an astrologer), he was convinced that he could outdo the others in the league. It’s compelling reading as he takes you through the months of decision making leading up to draft day and then the toil of managing the team through trades during the season. (And it is a very funny book. My favorite line is how he described feeling after a seasoned Tout Wars competitor took advantage of him in a lopsided trade: “He worked me over like a drunken chiropractor.”)

Well, with me pairing The Logic of Failure with Fantasyland in this posting, you have probably already concluded that Sam Walker didn’t do well in his first season of Rotisserie Baseball, and you would be correct: eighth place out of 12 teams. In Fantasyland, Walker was making the very same decision-making mistakes that The Logic of Failure documents as leading to failure. But the real brilliance of Fantasyland is the epilogue, in the last paragraph on the last page of the book:

"Sam Walker returned to Tout Wars in 2005 to draft a second incarnation of the Streetwalkers Baseball Club. With only two nights to spare on the evaluation of American League ballplayers, he arrived at the draft in New York fully expecting to be thumped like a traffic cone. Six months later, he won..."

Andrew - once you get some experience in a domain, whether it be baseball or project management, don't over-analyze the problems within the domain. Apply the sound fundamentals that you have learned and trust them to work for you. Always be passionate about your domain and never become cynical about the effects of your decisions. Love what you do and success will follow.


Saturday, June 03, 2006

"It's all about Version 2.0"

Andrew - don't take shortcuts; they don't pay off in the long run, and often not in the short run either. That is what Dave Stanton, my CTO when I was at Sage IT Partners, was conveying when he said that good software development "is all about Version 2.0".

We have a phrase in my business for taking shortcuts: we call it technical debt. A software development project goes into technical debt by borrowing time - skimping on quality work today in hopes of getting more functionality completed for the impending release. But we will eventually be forced to pay back that time, the technical debt, if the project makes it into production (and therefore into maintenance mode) or if the business chooses to invest in a Version 2 of the application. There is no getting around it. Either we take the time to fix the stuff that was done hastily in Version 1, or we struggle with every modification because of a fragile code base. Each takes time, and the result is either fewer features in the next release or a delay in releasing Version 2. Payback is a bit...oops, sorry, I'm your dad and I shouldn't talk like that in front of you.

I think we (IT project managers) have done a poor job of communicating the concept of technical debt to our business sponsors. We keep being asked to accelerate schedules, add scope without changing deadlines, work longer hours, or even stop writing tests for our code - all of which incurs technical debt. All we seem to do is complain about having to work harder, which falls on deaf ears because our business sponsors are also being asked to work harder by their bosses.

But what if we were able to quantify technical debt? Can we put a dollar amount on a decision to increase scope in an upcoming release? Can we quantify what it will cost to not have a suite of unit tests and functional tests supporting our code? Our business sponsors have most likely gone through a cost justification for the project, so costs, rather than effort, would mean more to them.
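
Here's a back-of-the-envelope sketch of what that conversation might look like, just to show the shape of the argument. Every number in it - the hourly rate, the hours "saved", the debugging tax - is an assumption I made up for illustration, not a measurement from any real project:

    # Putting an illustrative dollar figure on one shortcut: skipping the
    # test suite for a release. All numbers are made-up assumptions.
    BLENDED_HOURLY_RATE = 100        # assumed cost of one developer-hour
    HOURS_SAVED_SKIPPING_TESTS = 80  # time "borrowed" by the shortcut

    # Assumed interest on the debt: extra hours spent each release chasing
    # regressions that an automated test suite would have caught.
    EXTRA_DEBUG_HOURS_PER_RELEASE = 30

    def net_cost_of_shortcut(releases: int) -> int:
        """Dollars lost (positive) or saved (negative) after N releases."""
        principal = HOURS_SAVED_SKIPPING_TESTS * BLENDED_HOURLY_RATE
        interest = EXTRA_DEBUG_HOURS_PER_RELEASE * releases * BLENDED_HOURLY_RATE
        return interest - principal

    for releases in (1, 3, 6):
        print(f"After {releases} release(s): ${net_cost_of_shortcut(releases):+,}")

Under these assumptions the shortcut looks like a win for exactly one release; by the third, the business is paying interest on it. That is an argument a sponsor can engage with.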

The best metric I have seen to date is the waterfall methodology's quantification of what it costs a project to introduce requirements changes late in the release cycle. The chart below illustrates how the cost of a change request grows exponentially throughout the Software Development Lifecycle (SDLC).

[Chart: Cost of a change request by SDLC phase, rising from 1x during requirements gathering to roughly 100x at pre-production]

This chart illustrates how a change request introduced during pre-production can cost the project as much as 100x more than the same change introduced during requirements gathering. This high cost comes from:
  • Needing to update documentation
  • Needing to update acceptance tests
  • The complexity of making the change within an established code base, which requires more time
  • Time to fix defects introduced to the code base as a result of changes
  • Needing to re-run acceptance tests after the change has been implemented
In a waterfall-managed project, deferring a change request to a subsequent release is seen as a success. But one thing hidden in the chart above is that a change request deferred to Version 2 of a waterfall project can be just as expensive as incurring the cost late in the initial release. The only difference in Version 2 is that you have more time to put the change in; it is still costly. All the things that make change requests expensive late in a project are still there during development of Version 2. Version 2 is therefore often delayed or reduced in scope because of the technical debt inherent in the waterfall approach.
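
If you want to play with the curve yourself, here is a small sketch of it. The phase multipliers are illustrative stand-ins consistent with the 100x figure above, not data from any particular study:

    # The cost-of-change curve from the chart above, with illustrative
    # multipliers for each SDLC phase (real figures vary by study).
    PHASE_COST_MULTIPLIER = {
        "requirements": 1,
        "design": 5,
        "coding": 10,
        "testing": 50,
        "pre-production": 100,
    }

    def cost_of_change(base_cost: float, phase: str) -> float:
        """Cost of a change request introduced during the given phase."""
        return base_cost * PHASE_COST_MULTIPLIER[phase]

    base = 1_000  # assumed cost if the change is caught during requirements
    for phase in PHASE_COST_MULTIPLIER:
        print(f"{phase:>15}: ${cost_of_change(base, phase):>9,.0f}")

The point of the exercise isn't the exact multipliers - it's that the same change request carries a very different price tag depending on when it arrives, and deferring it to Version 2 doesn't reset that clock.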

The Agile community has come up with a solution for technical debt - don't incur it. Keep your documentation light and write tests first. Refactor continuously, and build and test the application early and often. What we haven't done well is quantify or document our successes to show how effective these practices are in releasing subsequent versions of an application.
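
To make "write tests first" concrete, here is a toy sketch in Python. The business rule and the names are hypothetical; the point is only that the tests exist before the code they exercise, and outlive it:

    # Test-first in miniature: the three tests below were written before
    # the function existed, and they re-run on every build thereafter.
    import unittest

    def late_fee(days_overdue: int) -> float:
        """Toy business rule: 50 cents per day overdue, capped at $10."""
        if days_overdue <= 0:
            return 0.0
        return min(days_overdue * 0.50, 10.0)

    class LateFeeTest(unittest.TestCase):
        def test_no_fee_when_returned_on_time(self):
            self.assertEqual(late_fee(0), 0.0)

        def test_fee_accrues_per_day(self):
            self.assertEqual(late_fee(4), 2.0)

        def test_fee_is_capped(self):
            self.assertEqual(late_fee(365), 10.0)

    if __name__ == "__main__":
        unittest.main()

A change request against code like this in Version 2 is cheap: add or modify a test, change the rule, and the suite tells you immediately what else broke. That is how the debt never gets incurred in the first place.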

One of the development teams at ThoughtWorks that I had the pleasure of serving as project manager and iteration manager (alongside some talented folks from the client's IT staff) recently delivered Version 2 of a successful application for a Fortune 50 company. With Version 2 the team went through a significant refactoring of the object model, added a significant amount of new functionality, and kept the code base approximately the same size as Version 1. Virtually no defects were found during testing of Version 2. In other words, technical debt was not incurred during Version 1, so the overall quality of the code was able to improve during Version 2 development. The client received the functionality they expected, when they expected it. We owe this to the team's diligent application of the XP development practices: test first, continuous integration, simple design, well-written story cards, constant collaboration with the customer, and fearless refactoring.