I had the honour of meeting Mr. Murali, Editor, The HINDU Business Line, on a very serene morning at the NageswarRao Park, Mylapore, Chennai. It was a great opportunity for me to share some of my thoughts on AGILE and Thoughtworks.

I should say that Mr. Murali has a knack for getting the conversation going, and we spoke for close to 1.5 hours without realizing how time flew. He comes well organized, with a compact camcorder and a handy tripod, and the very green NageswarRao Park made for a welcome change of setting. You can find the video at http://bit.ly/4WHRTB. He also has a beautiful collection of videos from his other interviews. Be sure to check them out: http://60secondschief.blogspot.com/; http://jijomurali.blogspot.com/


A day as an Agile Coach

Close on the heels of completing an Agile project as a PM, I took up the challenge of coaching a new team on their project.
What came out of that exercise were some very interesting observations on the Agile methodology and the possibilities it throws up for validating your mental map of Agile software development.

  • One thing I realized as a coach was that I had a detached view of the project, and that detachment started giving me meaningful insights into it.
  • The first thing I did was volunteer to facilitate the Retrospective the team was having. This gave me a complete third-person view of the problems and issues the team was facing, and it proved a great place to get started. Being an outsider to the team also means you can ask basic, fundamental questions and still get away with it. 🙂
  • The Retrospective gave us a couple of quick wins to go after. For example: why doesn’t the team respond in time to build failures? Though the need for a prominent visual indicator was felt, there was some coaching to do around the importance of responding to build failures. More importantly, the Retrospective surfaced issues with the technical approaches and with throughput.
  • As a next logical step we looked at the throughput, and it was clear that the technical challenges were preventing the team from clocking velocity. We also ensured that the team did not move on to other stories, which would have increased the current Work In Progress. At this point the team is still spiking out a few approaches to set the ball in motion.
  • Engineering practices: CI & Test-driven development are two pillars of an incremental development project. The team was practicing TDD, but without a CI tool it is difficult to set up a regression suite and compute code coverage. As a first step we set up HUDSON and provided clear visual and audio cues to indicate build failures.
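The build-failure cue above can be sketched as a small polling script. This is a minimal illustration, not our actual setup: the job URL and alert text are hypothetical, and it assumes Hudson's JSON build-status API with a `result` field.

```python
# Sketch: poll Hudson for the last build and raise a visible/audible alert
# on failure. HUDSON_JOB_URL is a made-up example address.
import json
from urllib.request import urlopen

HUDSON_JOB_URL = "http://ci.example.com/job/my-project"  # hypothetical

def build_failed(status: dict) -> bool:
    """True when the last completed build did not succeed (None = still building)."""
    return status.get("result") not in ("SUCCESS", None)

def check_build() -> None:
    """Fetch the last build's status and alert the team on failure."""
    with urlopen(f"{HUDSON_JOB_URL}/lastBuild/api/json") as resp:
        status = json.load(resp)
    if build_failed(status):
        # '\a' rings the terminal bell -- the audio cue; the message is the visual one.
        print("\a*** BUILD BROKEN: fix it before picking up new work ***")
```

A cron job or a loop running this every minute or two is enough to make a broken build impossible to ignore.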

At the end of the day one dominant theme stayed with me – “What is the throughput and what are we doing about it?”

And what about the engineering practices supporting this bigger theme?

These aren’t exhaustive in any way, but I hope to put them all together in one shape at the end of this project.

It was a strange coincidence that I happened to attend the CII’s Knowledge Management Meet in Chennai, India on the 28th & 29th of Oct 2009. Here are some of my observations and comments on the subject of discussion.

The theme for the meet was KM & Enterprise 2.0. Though there were representatives from various industries, it was dominated by the IT service providers, as you would expect these days. To its credit, the meet did attract some very notable personalities: Prof. Sadagopan, Dave Snowden & David Gurteen.

Being an outsider to the entire KM initiative gave me a very good opportunity to view KM from the perspective of my own job function.

We all know, having learnt it in our school days, that “knowledge is the one thing that multiplies with sharing”. The meet had its share of fundas, and some very interesting ones too. As Prof. Sadagopan put it, “Knowledge as a liberator”: sharing knowledge can help us solve bigger challenges such as climate change and epidemics. The one I particularly liked, and believe should be the underlying mantra for all organizations doing KM, was when he said, “In the Knowledge Era what determines success is not so much what you hold but how much you share.”

Dave Snowden’s talk was very engaging; he particularly highlighted the value of networks and how to make them work for you.

There were other case studies by the IT biggies on how KM had given them strategic depth and how they measure it and make it work. One common chord was that all of them focused too much on systems and the technology aspect while missing out on the culture and the ecosystem needed to promote knowledge sharing. Isn’t the essence of KM sharing and openness, rather than protectionism?

KM & Enterprise 2.0 should foster healthy communities that break the barriers of organizations and practices, promoting healthy competition. I believe it is a social responsibility.

David ran an hour-long exercise on his KCafe, which was very interesting.

Lakshmi Narayanan from Cognizant left us with a thought: that competing organizations should interoperate. Food for thought!

At the end of the meet I left with these impressions:

  • KM in an organization can never be a policy. It needs to grow out of healthy practices and a good, active ecosystem that fosters knowledge sharing.
  • Does it require metrics & policing to measure success? Should success be measured with metrics or with results?
  • It can never be, and should never be, a separate department. It should instead be woven into the organization’s day-to-day work, driven by that healthy ecosystem rather than a dedicated workforce.
  • It can never be campaigned; it should be voluntary.

Kanban Applied

On my first Agile project we have a hybrid model of Agile methodologies. We are part SCRUM-ish and part Kanban-ish, if I may say so. It’s very clear that the methodologies don’t matter as long as you focus on these two aspects:

  1. How much value is being delivered to the customer every iteration?
  2. Is the team focusing on improving itself?

The focus of this post is the second point – “Is the team focusing on improving itself?”

With so much happening on the Kanban front, I wasn’t particularly convinced about the WIP limit on the development front, partly because we handle one-piece flow and ensure that cards don’t stay on the wall for long. I would like to dedicate a separate post to that aspect, and focus here on how we applied Kanban to our project.

Faced with recurring regression defects, we found a very different way to apply the WIP limit to our project.

There was a time in our project when the team was struggling to fix defects that were surfacing in the later phases of development. We first tried to address them by providing visual cues on the defects being opened. When this didn’t work, it was quite clear that the team wasn’t learning from its mistakes.

The team was handling defects the way traditional teams do – add the bugs to the backlog and address them based on their priority. Sometimes this could take a day, sometimes a full iteration. This led to the defect trail going cold, and there wasn’t an opportunity for the team to learn and identify the pattern behind these regression defects.

Enter Kanban & WIP. We agreed that we would stop work whenever the count of Critical or High defects on our story wall reached 2. The moment it touches 2, we stop work, understand the root cause, and fix the issue before proceeding with the stories for that sprint. We also added sound alarms to indicate such an event.
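The stop-the-line rule can be expressed in a few lines. This is a sketch, not our tracker’s actual schema: the severity labels and the shape of the defect list are assumptions for illustration.

```python
# Sketch of the bug-count WIP rule described above.
BUG_WIP_LIMIT = 2  # stop all story work at 2 open Critical/High defects

def should_stop_work(open_defect_severities):
    """Given the severities of currently open bugs, decide whether the
    team stops story work and swarms on root-causing the defects."""
    blocking = [s for s in open_defect_severities if s in ("Critical", "High")]
    return len(blocking) >= BUG_WIP_LIMIT
```

For example, `should_stop_work(["High", "Low", "Critical"])` returns `True`, and the sound alarm would fire at that moment.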

Today we see more stable code being written, and our bug count has stabilized to a large extent.

My 2 key takeaways from this experiment:

  1. Jump in to address key critical issues so that the feedback is rapid. You don’t have to wait until the Retrospective to arrive at an action item.
  2. Look for ways to continuously bubble up critical risks & priority items and fix them to learn and adapt.

Today the Kanban philosophy helped me achieve this; tomorrow it could be something else.

In this post I will try to organize my thoughts around the concept of measurement in traditional software development approaches and how it differs in an Agile environment.

Traditional approaches are very heavy on ensuring that the requirements are well understood before embarking on development. (Well, it is Waterfall!!) So much so that you have a standardized work split (as a general rule of thumb) between Requirements Analysis, Design, Development & Test. Even with all this, there isn’t any guarantee that a team working on a typical software project meets these guidelines. This has been well recognized, and it is mitigated by measuring all the key data points at the various stages of the project.

In other words, the inherent unpredictability and chaotic nature of software projects has been well understood. It has been tackled by trying to normalize standardized behavior against historical measures.

This process is tedious and heavy on accurate data capture at every stage of the development. Any errors introduced will render the measured data points inaccurate for planning future projects.

So why this heavy measurement? Since the Waterfall approach is sequential in nature, measuring at every stage ensures that the project reacts to variances in key data points, so that the project objectives are not compromised at the end.

But this still doesn’t explain why a project that meets the various timelines at every stage fails to meet the project objectives. Sample these:

  1. A project on time to UAT does not get out of UAT on time or
  2. A project on time at the various stages of the lifecycle is unusable after it’s delivered.

This is a familiar situation and most of us know the underlying problem.

This is essentially because of the lack of an objective criterion for closing out a particular stage of development.

Consider these as a sample:

  1. What constitutes completion of requirements analysis? Would having all scenarios listed, documented, reviewed and signed off be an objective criterion?
  2. Do we model all scenarios in design, code them and test them to perfection?

On some projects doing all of this provides good objective criteria, but on most projects business requirements come with ambiguities that make it tough to objectively measure the degree of completion of a stage. These ambiguities are introduced right at the beginning of requirements analysis: the lack of visibility means a customer doesn’t specify everything clearly, or the design doesn’t capture all the implementation details and requires further analysis during development.

The answer is to use working software as the objective criterion and to release functionality rapidly. Call it a weekly sprint or a 2-week sprint. Releasing working software rapidly ensures that you don’t face the sign-off ambiguities and don’t have to measure too many data points.

This in fact lays the foundation for the Agile Manifesto value of “Working software over comprehensive documentation”.

If working software is released at every step of the project, do we measure anything at all?
With working code being delivered rapidly, schedule performance in the traditional sense no longer makes sense. Measurement now has to focus on identifying the ideal output the team can achieve and the key benefits the customer is able to generate.

Here are a few key measurements that are recommended:

  1. Cycle time – by definition, cycle time measures the total time it takes to release a working piece of code into production from the time it is identified. Objective: by looking at cycle time, we can spot potential bottlenecks hindering the project.
  2. Velocity – the rate at which requirements are implemented into production, typically measured in story points or number of requirements.
  3. Scope burn-up / burn-down – depending on the specific need, a scope burn-up depicts current progress against the planned rate of implementing requirements into production.
  4. Average time to deliver a feature to the end user.
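The first two measurements can be computed directly from a story log. This is a toy sketch: the dates and story points below are made-up examples, and the log shape (identified date, released date, points) is an assumption, not a prescribed format.

```python
# Sketch: cycle time and velocity from a simple story log.
from datetime import date

# Hypothetical log: (identified, released into production, story points)
stories = [
    (date(2009, 10, 1), date(2009, 10, 9), 3),
    (date(2009, 10, 2), date(2009, 10, 16), 5),
    (date(2009, 10, 5), date(2009, 10, 12), 2),
]

def avg_cycle_time_days(stories):
    """Average days from a story being identified to its release."""
    return sum((done - start).days for start, done, _ in stories) / len(stories)

def velocity(stories):
    """Story points delivered into production over the period."""
    return sum(points for _, _, points in stories)
```

Here `velocity(stories)` is 10 points, and a rising `avg_cycle_time_days` from one period to the next is the cue to go hunting for a bottleneck.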

That brings up the second aspect of agile based projects – “Adaptive planning” as compared to “Predictive Planning”.

In upcoming posts I will add details on measuring these data points and their significance.

Working on the Agile model gives you enhanced visibility into the progress of the project. Though it still requires you to be plugged into the requirements, it quickly dawns on you that the key to making this work is the size of the individual stories. Story size holds the key to quicker turnaround of features.

While working on my first Agile project, I came across the Personal Kanban blog by Jim Benson. Jim’s blog helps you relate to Kanban-based development much better, since you start applying it to your routine daily tasks. You appreciate the benefits because you understand these stories better than the stories/requirements in a project.

The biggest draw for me in applying Personal Kanban to my daily tasks was the clarity I was hoping to get: visibility of stories, bottlenecks, and the patterns behind the bottlenecks.

Equipped with ‘Agile Zen’, I started adding stories to my process flow. Initially some of the stories were just tasks, and they were easy to move across. As I went along, some of the other stories were more complex, and I observed them piling up in my ‘Working’ stage. I was also getting frustrated, since I was moving tasks back and forth.

A quick analysis showed that the stories taking longer to complete weren’t simple stories: they really represented a collection of smaller, related stories. Take for example ‘Preparing for hosting an event’. Though there are many tasks in it, grouping them all into one story means you lose focus on it. (Of course there were some simpler stories that didn’t move either. But isn’t the goal of my Personal Kanban to throw some light on this aspect as well?)

Some of these complex stories had dependencies, and unless the dependent stories were sorted out, it was difficult to move them along.

A Kanban-based system – which applies to a regular, continuous stream of work – breaks the Iteration or Sprint mould to model the flow of the work that comes in. In the absence of such timeline constraints, it is very important to maintain the flow of the system; otherwise stories start piling up in your ‘In Process’ queue. And if you have set a WIP limit, you quickly run into deadlock.

Complex and larger stories will not let work flow through your system. Hence it is very important to keep stories at a manageable size. What is that size? It depends on what you can do in a day. For my domestic chores, I look at stories that can be done in a day. However, it also pays to ensure that each story represents the completion of a job. Call it a feature, in software development terms?

Having heard about iterative development, the first thing that comes to mind is: “How will new development effort not affect existing functionality?”

What previous projects taught me about test automation treated it as an afterthought – automate all critical scenarios once the development effort has been completed. With incremental development comes a sudden realization of how critical it is to automate functional testing.

In an Agile setting this is one of the core engineering practices. Given the smaller size of the stories played in an iteration, and the iteration’s limited scope, it becomes easier to automate along the way. Again, taking smaller increments makes it easier across the board.

What I observed today is an interesting aspect of testing. A bug fix in one of our projects resulted in opening up an issue on a dependent path. The team realized it quite late in the cycle, and there was the usual huddle to sort the issues out. The team finally diagnosed it as a case of missing tests on the core business flow and sent out a patch.

Working through the bug trace, I realized that this was on the main functional flow, and our automation scripts hadn’t yet covered it. If not the automation, I wondered, who else would detect this error? Why not the DEVs who worked on the issue?

Given that we practice active DEV pair rotation, and that bug fixes traditionally touch only the area of the defect, it became very clear that for the team to sustain active pair rotation and incremental development, the testing team needs to actively support it by automating all the core business flows of the application. Along with that, all of the automated tests need to run regularly and alert the DEVs and stakeholders. Any let-up on this would again defeat the purpose. Hence it is critical both to automate regularly and, even more, to keep the tests running regularly.
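A core-business-flow check of this kind can be very small. The flow below is entirely hypothetical (a toy order-placement rule), but it illustrates the point: the happy path of the main flow runs on every build, not just after a bug fix exposes a gap.

```python
# Sketch: a regression test pinned to a core business flow.
# place_order is a made-up stand-in for the application's main flow.
import unittest

def place_order(quantity, stock):
    """Toy core flow: reject orders that exceed stock, else decrement it."""
    if quantity > stock:
        raise ValueError("insufficient stock")
    return stock - quantity

class CoreFlowRegressionTest(unittest.TestCase):
    def test_happy_path_decrements_stock(self):
        # The main functional flow -- the one the missed bug lived on.
        self.assertEqual(place_order(2, 5), 3)

    def test_overselling_is_rejected(self):
        with self.assertRaises(ValueError):
            place_order(6, 5)

if __name__ == "__main__":
    unittest.main()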