Wednesday, December 17, 2008

Definition of Done

The Boulder Agile Meetup group met last night and discussed the definition of "done", or DoD, on an agile project. Here is a summary of our discussion.

  • The DoD should be agreed to by the team, written down and rigorously adhered to
  • Legacy systems - any code base without automated tests - present many challenges to meeting a useful DoD within short iterations. The group didn't reach any conclusion on how to solve that problem. I'd say your choices are either to (1) put new development on hold long enough to write the tests, (2) gradually introduce tests over several iterations, or (3) live with the pain for a long time.
  • The DoD is different for tasks, backlog items, iterations, and releases
  • Operational needs for the software should be considered early on and addressed within normal iterations
DoD for tasks:
  • unit tests pass
  • code was pair programmed or code reviewed
  • coding standards are met
  • test coverage standard is met
  • code and tests are included in continuous integration system
  • task was completed with the simplest possible implementation
  • code base was refactored to support the new task
  • sufficient negative unit tests were written (more negative than positive - see the sketch just below this list)
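
To make that last bullet concrete, here's a minimal sketch in Java with JUnit 4 - the PaymentValidator class is made up purely for illustration - showing one positive test and several negative tests that assert bad input is rejected rather than silently accepted.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class PaymentValidatorTest {

        // Minimal hypothetical validator, included only so the example is self-contained.
        static class PaymentValidator {
            boolean isValid(String cardNumber) {
                return cardNumber != null && cardNumber.matches("\\d{13,19}");
            }
        }

        // Positive test: well-formed input is accepted.
        @Test
        public void acceptsWellFormedCardNumber() {
            assertTrue(new PaymentValidator().isValid("4111111111111111"));
        }

        // Negative tests: malformed input is rejected, not silently accepted.
        @Test
        public void rejectsNull() {
            assertFalse(new PaymentValidator().isValid(null));
        }

        @Test
        public void rejectsTooShortNumber() {
            assertFalse(new PaymentValidator().isValid("4111"));
        }

        @Test
        public void rejectsNonNumericInput() {
            assertFalse(new PaymentValidator().isValid("4111-abcd-1111-1111"));
        }
    }
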
DoD for backlog items / user stories:
  • acceptance criteria are met (presumes that Product Owner sufficiently defined acceptance criteria)
  • functional tests pass
  • non-functional tests pass (scalability, reliability, security, etc.)
  • story documentation is completed
  • item follows architectural & design guidelines
  • automated installation & deployment completed (recommended that it includes the full stack - even the operating system - where practical)
DoD for releases:
  • Customer acceptance (functional & non-functional requirements)
  • Release documentation completed
  • Operational needs met
  • Regulatory & compliance requirements are met if applicable
  • automated installation & deployment completed

For some reason we didn't discuss DoD for iterations, but I would say that it is:
  • Met DoD for all stories / backlog items in the iteration
  • Met the goal or theme for the iteration, if one was defined
  • Acceptance by the product owner and/or customer at the iteration demo/review

Wednesday, November 26, 2008

Productivity of distributed teams

Jeff Sutherland and Guido Schoonheim recently gave a presentation on their experience forming a distributed team at Xebia, split between Holland and India. Their shocking conclusion: a fully distributed Scrum team can deliver more value than a colocated one.

They claim the benefits of distribution are cost reduction, availability of talent, and scaling up/down with knowledge retention.

They claim their approach achieved the following results on a relatively complicated system with 100k+ lines of Java code:
  • 95% of defects were found within the iteration in which the feature was developed
  • only 50 defects were found in the acceptance phase
  • less than 1 defect per KLOC
  • 15 function points delivered per developer per month. Note that Mike Cohn has published results of a small co-located team that achieved 16 FP/dev/month
How did they accomplish this state of hyper-productivity, as they call it? Here are some of the highlights.

  1. Bring the Indian team to Holland for 2 iterations, starting with the 2nd iteration. Establish personal relationships, a shared agile value system, a common mindset, and mutual respect. My team at Envysion successfully used a similar approach; we had the Chinese team members on-site with the client for 2 months at the beginning. See my previous post for more info.
  2. A few people from each site travel to the other location every 2 months or so to maintain the bond.
  3. Staff with equally skilled people in all roles in both locations
  4. Use video conferencing for daily stand-up meetings, retrospectives and planning meetings. The sprint demo was done only by the Holland team because stakeholders wanted it presented in Dutch.
  5. They started with a single team, then when they expanded to multiple teams, they seeded those teams with people from the first team to maintain the culture.
  6. They maintained discipline to follow XP practices, including pair programming
  7. They had a clear and disciplined Definition of Done for each sprint, which included 90% unit test coverage and fully automated functional and regression tests

Tuesday, November 18, 2008

Independent test teams

The agileprojectmanagement discussion group on Yahoo has a recent thread titled "Testing Sprint Advice" that caught my attention. (Note that you'll need to be a member of the group to follow the link.)

I believe the ultimate goal of agile testing/QA should be the complete and unambiguous definition of acceptance criteria. The ultimate manifestation of that would be executable requirements defined by the product owner. Of course the product owner would create these in collaboration with testers, with developer input as appropriate.

There is often debate over whether test teams are more effective when they are independent from the development effort or when they actively collaborate with developers. I personally believe the collaborative approach is better. If the product owner and QA collectively define unambiguous acceptance criteria, then the developers are usually in the best position to automate the validation of those criteria through automated tests of various kinds. Since it's hard to formally define every single acceptance criterion, testers can focus much of their effort on exploratory testing - which of course may lead to a better understanding of additional acceptance criteria.
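
To make "executable requirements" a little more concrete, here's a minimal sketch of an acceptance criterion expressed as a plain JUnit test - the ShoppingCart class and the discount rule are made up for illustration. The product owner and testers agree on the behavior in plain language; the team then encodes it so it runs on every build.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Acceptance criterion: "An order over $100 receives a 10% discount."
    // ShoppingCart is hypothetical, included only to keep the example self-contained.
    public class DiscountAcceptanceTest {

        static class ShoppingCart {
            private double total;
            void addItem(double price) { total += price; }
            double totalDue() { return total > 100.0 ? total * 0.9 : total; }
        }

        @Test
        public void ordersOverOneHundredDollarsGetTenPercentOff() {
            ShoppingCart cart = new ShoppingCart();
            cart.addItem(120.0);
            assertEquals(108.0, cart.totalDue(), 0.001);
        }

        @Test
        public void ordersAtOrUnderOneHundredDollarsPayFullPrice() {
            ShoppingCart cart = new ShoppingCart();
            cart.addItem(100.0);
            assertEquals(100.0, cart.totalDue(), 0.001);
        }
    }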

Steven Gordan makes some great points in the thread, arguing for the collaborative approach, which I'll summarize below.

1. If the testers effectively come up with their own version of the requirements based on an "independent" understanding of the product, then inevitable disputes over the actual requirements for the iteration erode much of the potential value of independent testing...

2. If the testers actively participate in the development of the acceptance criteria for the iteration, then where is the independence? If they are going to actively collaborate with the team and customers from day 0 of each iteration, everyone would be better off if they were actually part of the development team.

3. If the testers passively utilize the acceptance criteria for the iteration without actively voicing their independent opinion about what is missing from the acceptance criteria, then we get the worst of cases 1 and 2. The testing is not truly independent, yet what independence remains is not being leveraged to improve the acceptance criteria.

Inevitably, independent testing finds problems later than collaborative testing would.

Thanks to Steven for the great insight.

Saturday, November 15, 2008

The SEI addresses Agile

The SEI recently published a paper that asks the provocative question: why not embrace both agile and CMMI? In an earlier post, I wrote that Perficient achieved level 5 using an agile approach, so I know it is possible.

Both CMMI and agile have a long history, it turns out. Although modern agile methodologies mostly emerged in the 1990's in the context of small teams, maybe the origins of agile and CMMI really weren't so different after all.

The paper traces the roots of agile back to Iterative and Incremental Design and Development (IIDD), a technique developed more than 75 years ago by engineers including W. Edwards Deming. It's also noteworthy that Deming is one of the fathers of the lean movement, credited in part with bringing lean to Toyota many decades ago.
An early progenitor of IIDD was Dr. W. Edwards Deming who began promoting Plan-Do-Study-Act (PDSA) as the vital component of empirical engineering. Early adopters of Deming’s teachings in the aerospace industry include NASA (National Aeronautics and Space Administration) and the US Air Force, each of which developed entire systems using time-boxed, iterative, and incremental product development cycles.

Origins of CMMI
everyone working to develop the initial CMM was looking for the solution to the "software problem" from the perspective that software is a component of a larger system and that if it failed, lives would be lost (e.g., aircraft, ships, weaponry, medical devices). Systems were evolved using careful and deliberate development paths according to lower risk, standardization-heavy and contractually-driven relationships between the developer and the customer.
The paper does a good job of explaining some of the reasons why the two camps are often at odds, including the fact that both approaches are often misused, which adds fuel to the fire.

Here's a paragraph that summarizes the paper's conclusion.
Agile methods provide software development how-to’s, purposely absent from CMMI, which work well on small co-located projects. Meanwhile, CMMI provides the systems engineering practices often required on large, high-risk projects. CMMI also provides the process management and support practices (and principles) that help deploy and continuously improve the deployment of Agile methods in an organization regardless of organization or project size.
I tend to agree, except that agile approaches can also be successful when small teams aren't co-located; I know because I've done it.

A final quote, reminiscent of Rodney King's famous "can't we all just get along?"
If those of us in both the Agile and CMMI camps would understand and accept our differences and explore the advantages of the other, we will find new ways of combining the ideas of both to bring improvement to a whole new level. Our challenge to CMMI and Agile proponents alike is to learn the value of the principles and technology of the other and begin to use them in their work
In my opinion, CMMI is too often misused to force a heavyweight, waterfall methodology on an organization for the primary purpose of marketing the organization's supposed capabilities rather than truly improving its capability to sustainably produce business value for customers. That's why I will probably always be leery of any CMMI initiative started by non-engineer management folks.

Tuesday, November 11, 2008

Can a customer support team be agile?

Mattias Skarin recently posted a useful 2-page summary of his approach for managing a support organization using agile techniques and a kanban board.

I recently led a team of Solutions Consultants (who supported our customers' development efforts) at IP Commerce. The challenge we had was that there was ALWAYS a full backlog of customer support tasks, and those tasks were virtually always a higher priority than the project-based work the team needed to get done - stuff like improving the process, creating sample applications, and writing documentation.

What I did to solve this dilemma was to allocate 20% of each person's time for project work, with 80% for ongoing customer support. The team members scheduled their time to carve out 4- or 8-hour blocks during the 2-week iteration where they could focus on project work.

For iteration planning purposes, I kept a backlog of project-based user stories only - no customer support tasks - and the team planned how much of that backlog they could knock out in 20% of their time. The customer support stories/tasks weren't considered during planning - we knew those would flow in steadily.

This worked well for us, but let me know if you have other advice on managing support organizations.

Wednesday, November 05, 2008

Agile in the Extreme

What happens when you take agile techniques to the extreme? At the Agile 2008 conference, a couple of Aussies, Paul King and Craig Smith, gave a presentation titled Technical Lessons Learned Turning the Agile Dials to Eleven. You gotta love the reference to Spinal Tap!

Back in my days at BoldTech Systems (now Perficient), we did hard-core XP. I mean hard core. We all had tables on wheels, no cubicles, no walls - the whole team in a "bullpen". We strictly followed all of the XP practices, and if we didn't, the VP of Engineering, who was the uber-XP-coach, would quickly swoop in and set us back on the straight and narrow path.

Some valuable lessons we learned:
  • Pair programming for 100% of production code isn't optimal. Many programming tasks are just too routine to justify it. For those, do frequent (at least daily) code reviews. Save pair programming for the tough stuff.
  • 100% pair programming can be exhaustingly intense. How many people in this world do you really want to sit shoulder-to-shoulder with for 8 hours a day? Or 4 different people for 2 hours per day? Most developers I know need some time to work alone to keep their sanity.
  • Continuous integration requires an enormous amount of discipline. If you have to stop everything whenever a single test breaks, and you're writing lots of tests, you'd better not have many tests failing.
  • Don't forget to do design. XP doesn't emphasize design, but it doesn't preclude it, either. Do just the right amount of architecture and design - at the beginning of a release and of each iteration.
  • Test driven development is good. Period. Go ahead and turn the dial to 11 on this one! (A minimal sketch of the TDD rhythm follows below.)
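
Here's that minimal, made-up sketch of the TDD rhythm: write a failing test first, write the simplest code that makes it pass, then refactor with the tests as a safety net.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1 (red): write the tests first, before any production code exists.
    public class FizzBuzzTest {

        @Test
        public void multiplesOfThreeReturnFizz() {
            assertEquals("Fizz", FizzBuzz.say(9));
        }

        @Test
        public void otherNumbersReturnTheNumberItself() {
            assertEquals("7", FizzBuzz.say(7));
        }

        // Step 2 (green): write the simplest production code that makes the tests pass.
        // Step 3 (refactor): clean up with the tests as a safety net, then repeat.
        static class FizzBuzz {
            static String say(int n) {
                return (n % 3 == 0) ? "Fizz" : String.valueOf(n);
            }
        }
    }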

Thursday, October 30, 2008

Poll: Does your company call people "resources"?

Thanks to a suggestion left on my previous post, I created a poll to find out if companies are calling their employees "resources". Please take a minute to answer the poll at my blog home page.

Wednesday, October 29, 2008

Resources vs. People

Is it just me, or is anyone else out there bothered by the use of the word resources to refer to a company's employees? Webster's defines resource as:

That to which one resorts or on which one depends for supply or support; means of overcoming a difficulty; resort; expedient
I suppose employees fit within that definition, but so do the desks and copy machines. Maybe it's my ego, but I like to be distinguished from inanimate objects in the workplace. The word resource to me is just too cold and inhuman.

Lots of organizations preach that people are their most important asset. It's easy to say, but how do you demonstrate that value?

  • By putting significant effort into designing a thorough and effective interview process, to make sure you hire the best people to begin with. How well organized is your company's interview process? And how often have you had to let someone go because they didn't turn out to be a good fit?
  • By respecting people within the organization. Give them clear objectives and establish a culture that encourages them to innovate and excel as a team. Establish a culture of continuous improvement where every employee is truly empowered and expected to improve quality, process, and customer satisfaction. At Toyota, every production line worker is expected to stop the line if they find a problem, get to the root cause, and then correct it.
  • By growing people within the organization. Give people a clear path for career growth. Give them opportunities to try different roles within the organization. When you have a position to fill, look inside the organization first before looking for someone new.
A good indicator of how well your company treats its employees is their longevity within the company. Be wary of any company where the average employee has only been around for 1-2 years.

Monday, October 27, 2008

Is Agile a Fad?

I attended today's Agile Denver meeting - this time in Boulder - to hear Mary Poppendieck's presentation, Is Agile a Fad?

I'll summarize some of her material here, mostly in reverse order, starting with the key points and conclusions of the talk, followed by some of the contextual info she presented leading up to those conclusions.

The key to successful development organizations

The key to a successful development organization is for engineers (developers) to have a deep understanding of their customers - both internal and external. When new engineers start at Toyota, they spend their first 6 months on the production line assembling cars, so they fully understand their internal customers. Then they spend 6 months working for a dealership - selling cars - so they know what customers really want.

Another key is building a culture that retains quality people for the 6-10 years it takes to build true expertise, and growing leaders from within the organization.

Fads vs. enduring principles

Why do we have these fads that fail?

  • Silver bullet thinking: there is no silver bullet.
  • Trying to apply one solution in different contexts: different contexts require different solutions.
  • Essential tensions in software: don't swing too far toward one side or the other; rather, find a solution that addresses the valid concerns on both sides of the issue.

The principles behind systems engineering are robust over time. The concepts in project management are fragile over time.

Systems Engineering vs. Project Management:
  • low dependency architecture vs. complete requirements
  • quality by design vs. quality by testing (at the end)
  • technical disciplines vs. maturity levels
  • respect for complexity vs. scope control
  • skilled tech leaders vs. resource mgmt
  • learning & feedback cycles vs. timeboxes
  • success = accomplishing the system's objective vs. success = accomplishing planned scope, cost, and schedule

5 essential tensions in development

  • People. self-managed vs. managers. Answer: the servant leader facilitating self-organization.
  • Process: empirical vs. defined. Solution: relentless improvement - a rigorous process for effective improvement. Identify the true root cause of problems, hypothesize a solution, and determine how to measure whether it succeeds.
  • Product: development team vs. customers & biz operations. Solution: whole team philosophy. team talks to customers so they understand the problem deeply.
  • Planning: evolving plans vs. predictability. Solution: pull scheduling, set-based design (build multiple options), clear technical vision
  • Performance: concern only for the next iteration vs. long-term scope, schedule & cost. Solution: a team with pride & passion that delights customers - and has a deep knowledge of its customers' needs, sustainable profit, breakthrough innovation

A brief history of software methodologies and the seeds of agile

What happened to all those methodology buzzwords? RAD, lean, structured programming, etc.? Sprinkled throughout the history of software, various people discovered and promoted practices that we call agile & lean today. They also promoted various practices that were unsuccessful fads.

1968

NATO conference on the software engineering crisis. Edsger Dijkstra said that programming had become a problem in relation to the size and complexity of computer hardware. Douglas Ross of MIT said the most deadly thing is the assumption that you can specify what you're going to do, and then do it. The solution (compared to assembly languages): high-level languages (Cobol, Fortran, etc.). This removed drudgery, but increased the level of complexity possible, which led to the same problem all over again (Dijkstra).

1972

New York Times software project

  • structured programming made software more readable. Dijkstra proposed quality by design, as opposed to reliance on testing.
  • Dave Parnas devised information hiding, the concept of objects.
  • Top down programming was introduced by Terry Baker - basically this was the concept of continuous integration.
  • The "chief programmer team" concept introduced the tech lead, design review, pair programming, common code ownership. Result was 100 times more productivity (measured in LOC) and higher quality than typical at the time.

1976

Barry Boehm proposed that software maintenance was becoming the most expensive part of systems, and that the cost of changes got exponentially greater in later phases of the lifecycle. This famous (infamous?) curve was the key reason that everyone tried to nail down all the requirements at the beginning.

1982

  • Daniel McCracken & Michael Jackson wrote that the lifecycle concept (waterfall) was harmful and perpetuated failure by constraining thinking and ignoring the reality that needs inevitably change over time.
  • James Martin wrote that 4th-generation languages would allow application development without programmers. (A gross oversimplification of the inherent complexity.)

1984

  • Scott Schultz at DuPont introduced timebox development. 30 days for analysis & design, 90 days to develop. He called it rapid iterative production prototyping.

1988

  • Boehm introduced the spiral lifecycle model - a more evolutionary model, but still a project management model, not a systems engineering model.
  • Watts Humphrey introduced the software process maturity model (CMM), an attempt to bring in statistical process control and mandate maturity assessments. It focused on project management practices over systems engineering practices.

1991

  • James Martin wrote a book on RAD, facilitated by CASE tools. (Where are those CASE tools today? Anyone?) Problem: RAD often produced unmaintainable code. It didn't live up to the hype.

1995

  • The internet boomed. J.C.R. Licklider served as the technical visionary for several key internet organizations, and standards were developed.

Measurement and management

I've been following a thread on the Agile Project Management mailing list on measuring productivity. One of the posts makes reference to the oft-repeated axiom, "if you can't measure it, you can't manage it." Sounds perfectly reasonable, doesn't it? It also seems perfectly reasonable that you would want to manage, and therefore measure, the productivity of software developers and software teams. But as several people in this thread point out, there is really no good measure of software productivity. We all dismissed the notion of measuring lines of code a long time ago - at least I hope.

Agile methodologies encourage the measurement of team velocity - how many features, story points, or estimated task hours the team completes in a particular iteration. I would argue, as did some in the discussion, that this measurement is properly used only to estimate the workload for the next iteration - it's "yesterday's weather". If it's used by management over the long term to measure team productivity, then human nature dictates that the team will skew the estimates to generate a positive outcome.
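
To illustrate "yesterday's weather" with made-up numbers: the forecast for the next iteration is simply the recent average - nothing more sophisticated, and never a productivity score.

    public class VelocityForecast {
        public static void main(String[] args) {
            // Story points completed in the last three iterations (made-up numbers).
            int[] completedPoints = {21, 18, 24};
            int sum = 0;
            for (int points : completedPoints) {
                sum += points;
            }
            double forecast = (double) sum / completedPoints.length;
            // Use the average only to plan the next iteration's workload - not to grade the team.
            System.out.printf("Plan roughly %.0f points for the next iteration%n", forecast);
        }
    }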

I would argue that the best measurements of success are (1) customer satisfaction and (2) profitability. While customer satisfaction can't be fully measured until the end of a development project, the product owner can provide interim measurements of satisfaction as each iteration is delivered.

Sunday, October 19, 2008

Scrum and XP top list of agile methodologies

VersionOne released the results of The State of Agile Development survey for 2008. The most commonly used agile methodologies are Scrum and XP. Scrum, Scrum/XP hybrid, and XP together represent almost 80% of agile software development. 1.9% of respondents reported using Lean Development.

Friday, October 17, 2008

Simplicity defined

One of the most important but most elusive principles of agile development is simplicity. In XP it's stated as Simple Design; in other contexts it's often referred to as KISS (keep it simple, stupid), and it's closely related to YAGNI (you aren't gonna need it). Simplicity is key to successful agile development because it's absolutely necessary to support other agile goals and practices:
  • short iterations
  • refactoring
  • collective ownership
  • pair programming
  • avoiding premature optimization
In turn, simple design is enabled by test-driven development and refactoring.

The biggest problem with the simplicity principle is that people often disagree on what constitutes "simple", and I've always struggled to come up with a definition of simplicity that was, well, simple. I came across a quote today that I think sums it up pretty well, from Antoine de Saint-Exupéry.

Perfection is not when there is no more to add, but no more to take away.

Tuesday, September 30, 2008

Lean Thinking

Lean Thinking, by James Womack and Daniel Jones, is a good introduction to Lean theory and presents some compelling case studies of enterprises - from small and simple to huge and incredibly complex, such as Pratt & Whitney - that made the transformation to Lean. I haven't finished the book yet, but I can summarize the primary thrust of lean thinking as eliminating waste (muda, in Japanese). The foundational lean principles are:
  • Value
  • Value stream
  • Flow
  • Pull
  • Perfection

Value can only be defined by the final customer, in relation to a specific product, service, or both. It seems like an easy concept, but I challenge you to define your organization's value in a single short statement.

The Value Stream is the complete set of activities required to design a product, produce it, and manage orders for it. The value stream typically extends beyond a single company to all of its suppliers, so to achieve the ultimate goal, a lean enterprise must be formed through cooperation with those suppliers. Eliminate muda throughout this stream.

Flow refers to the practice of making value-producing activities flow together to dramatically reduce the time required to produce a product. It commonly requires removing departmental barriers and changing the mentality from producing large batches of parts to producing complete products at the same rate that customers order them.

Pull techniques dramatically simplify the planning and scheduling process. Don't build the product until the customer orders it - that is, pulls it from production. Each downstream step in the production process in turn pulls from the step immediately upstream from it.
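
A software analogy that may help (this is my illustration, not from the book): pull behaves like a small bounded buffer between steps - the upstream step can only produce when downstream demand frees a slot.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Pull modeled as a bounded buffer: upstream work blocks until downstream
    // demand (a take) frees a slot, so nothing is built ahead of demand.
    public class PullSystemSketch {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> kanban = new ArrayBlockingQueue<>(2); // work-in-process limit of 2

            Thread upstream = new Thread(() -> {
                try {
                    for (int i = 1; i <= 5; i++) {
                        kanban.put("unit-" + i); // blocks whenever the WIP limit is reached
                        System.out.println("produced unit-" + i);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            upstream.start();

            for (int i = 1; i <= 5; i++) {
                Thread.sleep(200);                              // downstream (customer) demand arrives slowly
                System.out.println("pulled " + kanban.take());  // each take lets upstream produce one more
            }
            upstream.join();
        }
    }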

Perfection means, quite simply, that the goal is not to settle for doing better than the competition, but to continually strive for improvement (kaizen). You can never reach true perfection, but you can get asymptotically close to it over time.

I'll dig deeper into these concepts in later posts.

Tuesday, September 09, 2008

5 levels of planning

Yesterday I attended the Agile Denver meeting for a presentation by Hubert Smits on the 5 levels of planning. I was already aware of the 5 levels and like to use all of them, but it's always good to reinforce the principles behind the practices and hear from other agile practitioners. Here are the 5 levels, their suggested frequency, and the content of each plan.
  1. Product vision: annually. A 1-2 sentence statement. The "elevator pitch".
  2. Product roadmap: 1-2 times/year. The major themes of each release.
  3. Release planning: 3-6 times/year. The user stories/features for 1 release.
  4. Iteration planning: each iteration. With practice, most agile teams choose 1-3 week iterations.
  5. Daily planning (stand-up or daily Scrum meeting): the 3 questions (what did I do since last meeting, what will I do today, and any impediments)

Friday, August 08, 2008

Training for product owners

There are about a hundred Scrum Master courses for every one Scrum Product Owner training course. Yet the product owner is an absolutely key role in agile/Scrum projects. During the Agile 2008 conference, some people from Innovel presented on the product owner role and have made publicly available some great material for training product owners. It's a hands-on role playing exercise based on a hypothetical product called Beer Miles, a credit card that gives users reward points for purchasing beer from participating retailers. The product-owners-in-training are presented with a business case for the product and a set of product capabilities (aka user stories or features) that they must categorize and prioritize.

Cheers to the folks at Innovel that made this available!

Thursday, July 10, 2008

The Goal

I finished reading The Goal by Eliyahu (Eli) Goldratt. This is a classic business novel about the Theory of Constraints (TOC), often cited in Lean and Agile literature. Written as a novel, it's an enjoyable read, and a must-read, I would say, for anyone who is serious about improving the way their business operates.

I always like to distill a good book down to its bare essentials, so here goes.

The goal is to make money. There are 3 fundamental measurements that express the goal, listed in order of importance.
  1. Throughput: the rate at which the system generates money through sales.
  2. Inventory: all the money the system has invested in purchasing things it intends to sell.
  3. Operational expense: all the money the system spends turning inventory into throughput
The aim is to maximize throughput while minimizing inventory and operational expense.
Note that in software, inventory is any software or feature that is unfinished or not yet delivered to customers.

Stated differently:
  1. Throughput is money coming in
  2. Inventory is money stuck inside the system, or investments that potentially could be sold
  3. Operating expense is money going out (to make throughput); any investment that can't be sold
Note: Agile software development reduces inventory by building software in small batches (iterations) that are quickly delivered to customers.

There are 2 types of resources:
  1. bottlenecks (a.k.a. constraints): capacity <= demand
  2. non-bottlenecks: capacity > demand
Balance the flow of product through the system, not capacity, with market demand. Make the flow through the bottleneck equal to market demand. A system needs to have excess capacity to handle the fluctuations in demand and variations in output from each resource in the system.

Activation vs. utilization of a resource:
  • utilizing a resource is using it in a way that moves the system toward the goal
  • activating a resource is using it whether or not there is any benefit from its output.
Two rules of bottlenecks and non-bottlenecks:
  1. The level to which you can utilize a non-bottleneck resource (without increasing inventory) is determined not by the capacity of that resource but by some other constraint in the system.
  2. Activating a resource is not the same as utilizing it; activating a non-bottleneck to its full capacity is counter-productive with respect to the goal.
The implication: you must not seek to optimize every resource in the system. A system of local optimums is not an optimal system; often it is a very inefficient system. Optimize the whole system, not localized subsystems. [Lean principle: see the whole.]
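
Here's a tiny, made-up simulation of that implication: a non-bottleneck station running at full capacity ahead of a slower bottleneck doesn't raise throughput at all - it only piles up inventory.

    // Toy model: station A (non-bottleneck, 10 units/day) feeds station B
    // (bottleneck, 6 units/day). Running A flat-out only grows inventory.
    public class BottleneckSimulation {
        public static void main(String[] args) {
            int capacityA = 10;   // units/day the non-bottleneck can produce
            int capacityB = 6;    // units/day the bottleneck can process (= system throughput)
            int wipInventory = 0; // unfinished work piling up between A and B

            for (int day = 1; day <= 5; day++) {
                wipInventory += capacityA; // A is "activated" at full capacity
                int shipped = Math.min(capacityB, wipInventory);
                wipInventory -= shipped;
                System.out.printf("Day %d: shipped %d, inventory waiting at B = %d%n",
                        day, shipped, wipInventory);
            }
            // Throughput never exceeds 6/day; only the inventory (and its carrying cost) grows.
        }
    }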

The process for accomplishing the goal:
  1. identify the system's constraints (bottlenecks)
  2. decide how to exploit the constraints; maximize their utilization
  3. subordinate everything else to the above decision. Operate all other components to maximize utilization of the constraint.
  4. elevate the system's constraints; add resources or otherwise increase capacity of constraint resources
  5. if in the above steps a constraint has been broken, go back to step 1. Do not allow inertia to become the system's constraint: whenever a constraint is broken, immediately re-examine the conditions, including the changes made in steps 2-4; they may now be problematic.
Effective management seeks answers to these 3 questions:
  1. What should be changed?
  2. What should it be changed to?
  3. How to cause the change - without creating new problems, and with enthusiastic support?

Tuesday, May 27, 2008

Perficient achieves CMMI Level 5 using Agile

Perficient, the company where I recently worked (formerly BoldTech Systems), recently achieved CMMI Level 5 certification at their development center in China using an agile methodology. I worked in the China facility just over a year ago when they achieved CMMI Level 4 and wrote about it in a previous blog post. Perficient is now one of the very first companies to achieve level 5 using agile.

I don't advocate pursuing CMMI certification unless it's a business requirement. There are many offshore (primarily Indian) firms who tout their CMMI Level 5 certification, so it may be an important factor when competing for certain contracts.

Regardless, this is a great achievement and an important one for the agile community, whether you like CMMI or not.

Tuesday, April 29, 2008

Are you really doing Scrum?

Jeff Sutherland, one of the founders of Scrum, has spoken about the Nokia Test - 8 questions to determine if a team is actually doing Scrum. A more succinct summary can be found here. Can your team pass the test?

Monday, April 28, 2008

Stealth agile and agile contracts

I attended tonight's Agile Denver meeting, which was a presentation by Richard Lawrence from Avanade entitled Stealth Agile - how to implement agile techniques when you don't have full -- or any -- buy-in from management. It was a short preso, and the discussion after the formal part was informative. One audience member asked how to introduce agile practices on a project that is architecture-centric, where project leaders tend to organize developers' work around components or layers rather than features that provide end-user value. The suggestion from the audience was to build a small feature as a proof-of-concept designed to expose risk. Architects generally favor proofs of concept, and they also generally like the idea of finding and reducing risk. Note that you can present this idea without even using the word agile or naming any of its practices. Sneaky, eh? One caution, though - when people see that your POC works, it'll likely end up in production, so build it with production quality, including automated tests.

One of the most interesting questions, I thought, had to do with a decidedly non-stealth agile issue: how do you write a contract to be agile from the beginning? Richard's response was that traditional contracts are typically very scope-centric; they focus on fairly low-level details about what software features will be built. For an agile contract, he advocates one that focuses on the product vision, with just enough scope to specify who will own the intellectual property of the different parts that get built - something the Avanade legal team insisted on for their consulting contracts. The agile contract specifies the vision, a team size, and a time frame in which the contractor will endeavor to achieve the product/project vision, allowing customer and supplier to collaborate on refining the specific features that best achieve that vision - through iterative development of working software and the feedback that results. I suppose this requires a fair amount of trust between the parties, but if it enables success, I'd say a little trust is a small price to pay.

Monday, April 21, 2008

Lean foundation of Agile Methodologies

Agile methodologies such as Extreme Programming and Scrum emerged in the 1990's as a radical departure from traditional, waterfall software methodologies. But were these agile methodologies really so new and radical? Many thought leaders have recently made the point that agile principles and practices are a software manifestation of the principles behind the lean product development strategies applied so successfully by Toyota starting decades before the agile software movement began in earnest.

Listed here are the seven principles of lean software development as identified by Mary and Tom Poppendieck in their book Lean Software Development: an Agile Toolkit and their agile counterparts from the Agile Manifesto (1), Scrum (2), and Extreme Programming (3).

Eliminate Waste
  • Working software is the primary measure of progress (1)
  • Simplicity - maximizing work not done - is essential (1).
  • Simple design - YAGNI (3)
  • The most efficient method of conveying information is face-to-face conversation (1)
  • business people and developers work together daily (1)
  • XP planning game (3)
  • Test-driven development (3)
  • Continuous integration (3)
Amplify learning (feedback)
  • early and continuous delivery of valuable software (1)
  • business people and developers work together daily (1)
  • Scrum sprint reviews held with all stakeholders (2)
  • XP - small, frequent releases (3)
Decide as late as possible
  • Welcome changing requirements, even late in the process (1)
  • Scrum product backlog - prioritized prior to each sprint (2)
  • Sprint planning / XP planning game (2) (3)
Deliver as fast as possible
  • deliver working software frequently (1)
  • potentially shippable software at the end of each short sprint (2)
Empower the team
  • Build projects around motivated individuals...trust them to get the job done (1)
  • The best...designs emerge from self-organizing teams (1)
  • Scrum self-organizing teams and Scrum master as servant leader (2)
Build integrity in (to delight customers)
  • Our highest priority is to satisfy the customer through early...delivery of valuable software (1)
  • Continuous attention to technical excellence and good design (1)
  • Design improvement / refactoring (3)
See the whole (optimize the whole system, don't sub-optimize)
  • At regular intervals, the team reflects on how to become more effective (1)
  • Scrum - sprint retrospective (2)
  • Design improvement / refactoring (3)
  • sustainable development - should be able to maintain a constant pace indefinitely (1) (3)

Thursday, April 17, 2008

When agile is not a natural fit

For a while now, I've been leading a team in a situation where textbook agile practices aren't a natural fit. My company, IP Commerce, has built a platform to enable electronic payments. Applications (built by third parties) connect to our platform to gain access to a variety of payment services such as credit card processing, electronic check processing, and e-commerce services such as PayPal. My team is responsible for integrating the various payment services with the IP Commerce platform. Each integration is called an adaptor, which translates messages (transactions) from the IP Commerce format to the service provider format & protocol. Why is it that this type of development doesn't easily fit into the typical agile model?
  • The scope of each adaptor is essentially fixed
  • Each adaptor must be certified before it can be deployed
  • The duration of each adaptor project is 6-12 weeks
Let's examine each issue in more detail.

Scope
Each adaptor translates messages, and to accomplish useful business functionality there is a minimal set of messages it must support. For example, in credit card processing a customer must be able to do authorizations, voids, refunds, and settlements. Without all of those features, it doesn't meet any customer's minimal requirements. The only negotiable scope is minor features, such as corporate purchase cards or certain industry-specific features - for example, those that support the lodging industry.
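
For context, here's a rough sketch of what an adaptor boils down to - the names are hypothetical, not our actual code: one interface for the core message types, with an implementation per service provider that translates the calls into that provider's format and protocol.

    // Illustrative only - hypothetical names, not IP Commerce's actual interfaces.
    // Each service provider gets its own implementation that translates these calls
    // into that provider's message format and protocol.
    public interface CardProcessingAdaptor {
        AuthResponse authorize(AuthRequest request);
        void voidTransaction(String transactionId);
        void refund(String transactionId, long amountInCents);
        SettlementResult settleBatch(String batchId);
    }

    // Minimal placeholder types so the sketch stands on its own.
    class AuthRequest { String cardNumber; long amountInCents; }
    class AuthResponse { boolean approved; String transactionId; }
    class SettlementResult { int transactionCount; long totalInCents; }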

One of the core presumptions of agile is that scope is negotiable and features can be prioritized. If 90% of the scope is fixed and all the features (in this case, message/transaction types) have the same priority, the backlog isn't very interesting.

Certification
Agile methodologies, Scrum in particular, assume that a product can be deployed when the product owner judges it to have enough functionality completed. In our case, an adaptor cannot be deployed until the service provider to whom we're integrating certifies it. Not to mention that, as explained above, we need essentially all of the features working before it is useful to customers. This dependency on an external verification process is unavoidable, and unfortunately, it often takes a long time.

Duration
Each adaptor project lasts between 6 and 12 weeks. Most agile projects I've been involved with have been much longer, with many more iterations to establish the all-important feedback loop and team rhythm.

Adapting agile to the situation
One approach we have tried is to treat each adaptor as a single coarse-grained feature in the backlog. This makes sense because these are the units of functionality that the business prioritizes, but on the other hand it doesn't make sense because agile features (user stories or backlog items) need to be small enough to fit into a single iteration (sprint). That disadvantage is a big one in my mind, so we have decided it's not a good approach.

Instead, we have decided to choose smaller backlog items, which for a single adaptor include a combination of true user-stories (message types and other user-identifiable features) and milestones such as 3rd party certification. In the past, I have been a big proponent of using a consistent sprint/iteration duration. In this situation, however, we've found it to be useful to first identify a concrete objective for each iteration (e.g. complete the first 2 message types, or achieve 3rd party certification) and then choose the sprint length based on the tasks required to achieve the objective - while keeping each iteration to 4 weeks or less.

It still feels awkward at times, but I feel we benefit greatly from relatively short iterations.