September 22, 2012

Space Theories

I once created an agile space that my teams could choose to use. They also had decent cubes. (Decent cubes -- an oxymoron?) Anyway, when the space was first created it was used a lot. Over time, it was used less and less. Late one night I pondered why and came up with these thoughts. I'm posting them just for fun. Consider your own space in light of them.

The Distance Theory

The likelihood of a team voluntarily using an agile space...

  • is inversely proportional to their proximity to each other.
Teammates are unlikely to use the shared space if their cubes are only three steps apart.
  • is directly proportional to the sum of the distances between the cubes of the team members.
The more scattered the members, the more likely they are to meet in a common location. The likelihood of a team using the space may therefore also be proportional to team size. This holds as long as that sum of distances is greater than the sum of the distances from the members' cubes to the agile space.
  • is directly proportional to their proximity to it relative to their proximity to each other.
No one will use an agile space that is much further away than whichever cube is most central.

The Equipment Theory

The likelihood of an individual voluntarily using an agile space...

  • is directly proportional to the computer speed and display size of the team computers relative to that of the computers in their cubes.
Equip the agile space with the most powerful PCs and beautiful monitors, and keep it that way.

The People Theory

The likelihood of a team voluntarily using an agile space...

  • is directly proportional to the likelihood that the team lead (or some core, key individual) can usually be found in the lab.
Social aspects and the exchange of info/ideas matter.
  • is directly proportional to the cohesiveness of their tasks.
Programmers working on disjoint tasks are less likely to use a shared space.
  • is inversely proportional to the number of teams using that space.
I can easily listen in or tune out discussions between my team members. Discussions between members of other teams are not easily tuned out. (Cognitive dissonance.)

The Environment Theory

The likelihood of an individual voluntarily using an agile space...

  • is directly proportional to the net positiveness of the daylight.
Daylight can be both a positive and a negative factor:
  • Daylight good.
  • Glare bad.
  • Looking at distant objects good. Reduces eye strain.
  • Heat bad.
  • Working shades are good.
  • Dysfunctional shades are bad.

September 10, 2012

Standup Around Innovative BVCs -- The CNN Agile Tour

At the CNN Agile Tour, put on last week through Agile Atlanta, I noticed a couple of Big Visible Charts (BVCs) of a sort I don't think I had ever seen before. One of them was a list of tasks pertaining to delegating some responsibilities and moving some permissions to other people. Multiple teams held their standup around this board. It looked nothing like a card wall. And beyond the fact that it doesn't look like a card wall, what it looks like doesn't matter, so I'm not including a picture. No one should try to do exactly what they did.

The point here is that they were innovative in coming up with a BVC that radiates status and helps them communicate. Too many teams get stuck in a rut, using the same old ineffective ways of doing their standup, and often abandon the practice altogether.

The Tours

It was neat to see the Kanban board being used by one of the teams, and to talk about how they were using it and what they were getting out of it. It was cool to talk about their facilities, the compromises they made, and how the layout impacts pairing and team size. We also had good discussions about estimating tasks and stories, among other things.

We've had prior tours as well, and on each the host has shown surprising candor, honestly discussing the state of their agile practice, including their current struggles.

Alex Kell wrote a nice post about the earlier Agile Tour @ Allure.

And the RedPrairie Agile Tour was neat because we got to sit in on an end-of-iteration innovation demo: a kind of technical show-and-tell, across several teams, of some neat new technologies they've experimented with or put into place.

I'm planning other tours as well. Follow me on Twitter and subscribe to the Agile Atlanta Yahoo group to make sure you don't miss the next one.

Each company that has hosted a tour has gotten something out of these tours as well. Let's set one up at your company. Give me a call or drop me an email today.

September 5, 2012

Backlog Completion Date a Kanban Hazard

I've attempted to write this article in the inverted-pyramid style common in newspaper articles. (Most important info first; least important last. Quit reading wherever you wish.) I don't know if I succeeded. What do you think? How badly does this stink?

A mistake I've seen is to use lead time metrics, derived from the development of small, well-split stories, to figure out how long it will take to complete a backlog full of raw, unsplit stories. Apples and oranges. Don't do that.

Writings about the Kanban Method eschew story estimation, even relative estimation, as both wasteful and inaccurate. Instead, they recommend understanding your system well and gathering metrics on average lead time. With that data you can compute how long it will take to complete a backlog of work. Even some Scrum teams are beginning to use this approach. I like this method.
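The basic projection is simple arithmetic. Here's a minimal sketch (the function name and all the numbers are illustrative assumptions, not from any real team): project a completion date from historical throughput, the flow metric paired with lead time through Little's Law (average lead time = average WIP / average throughput).

```python
# Sketch only: names and figures below are hypothetical examples.
from datetime import date, timedelta

def projected_completion(backlog_size, weekly_throughput, start):
    """Estimate when `backlog_size` items will be done, assuming
    throughput stays at its historical weekly average."""
    weeks_needed = backlog_size / weekly_throughput
    return start + timedelta(weeks=weeks_needed)

# e.g. 120 items remaining, team historically finishes 6 items/week
finish = projected_completion(120, 6.0, date(2012, 9, 5))
print(finish)  # 20 weeks out: 2013-01-23
```

Note that this projection is only as good as the assumption that future items resemble the items the throughput was measured on, which is exactly the apples-and-oranges hazard above.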

But to use this approach you must truly understand your system. I'm talking about understanding the behavior of the system and in particular the actors in the system.

For example, it's common to split stories once they get higher in priority and closer to development. That's a good thing, and it happens in both Scrum and Kanban. This refinement may happen many times during a story's lifetime. My guess is that most teams are completely unaware of just how many times stories are split along the way. Often I see stories split indirectly: setting out to rewrite a set of stories, a team throws the old set away and writes a new one, with no one noticing that the new set contains more, and finer-grained, stories.

A similar effect happens in Scrum when teams estimate defects and include those points in their velocity. They most often have no estimate for defects that have yet to be discovered (in either quantity or magnitude of effort). If a significant number of new defects are coming in all the time, then such teams are inflating their velocity and underestimating the magnitude of their backlog. This is a recipe for disaster. Such teams wonder why they never hit their dates. Others blame it on a quality problem.
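The arithmetic behind that double-counting is worth making explicit. In this sketch (all numbers are made up for illustration), defect points pad the velocity while the backlog contains none of the yet-undiscovered defect work, so the naive projection comes in short:

```python
# Hypothetical figures, chosen only to illustrate the skew.
backlog_points = 300          # estimated stories only; no future defects
story_points_per_sprint = 20  # planned story work completed per sprint
defect_points_per_sprint = 5  # newly discovered defect work per sprint

# Velocity padded with defect points, applied to a defect-free backlog:
inflated_velocity = story_points_per_sprint + defect_points_per_sprint
naive_sprints = backlog_points / inflated_velocity        # 12 sprints

# Velocity from story work alone gives the honest projection:
honest_sprints = backlog_points / story_points_per_sprint  # 15 sprints

print(naive_sprints, honest_sprints)  # 12.0 15.0 -- the date slips 3 sprints
```

With these example numbers the forecast is off by a fifth, and the error grows with the defect arrival rate, which matches the "recipe for disaster" above.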

As an aside, any commitments regarding the completion of the backlog have to be in concert with the stability of the backlog. You need to compute a new end date whenever the backlog changes. But that's the same whether you are using lead time or relative estimates with velocity.