A Practical Guide to Seven Agile Methodologies, Part 2

This is the second in a two-part series that surveys Agile methods and helps readers decide which combination is the most appropriate for their projects. The first part introduced Agile and summarized Extreme Programming (XP), Scrum, Lean, and Feature Driven Development (FDD). This part looks at the Agile Unified Process (AUP), Crystal, and the Dynamic Systems Development Method (DSDM) before providing a comparison of all seven methodologies. (In case you missed it, be sure to read part 1.)

Agile Unified Process
The Unified Process (UP) is an iterative and incremental software development process framework. It is often considered a higher ceremony process because it specifies many activities and artifacts involved in a software project. As a process framework there are several adaptations, the most popular being the Rational Unified Process (RUP) from IBM. The Agile Unified Process (AUP) is an Agile adaptation of the UP formalized by Scott Ambler and written about by others including Craig Larman. Ambler succinctly summarizes AUP as “serial in the large, iterative in the small, [and] delivering incremental releases over time.”

Figure 1. Phases and Disciplines of the Unified Process. AUP is an implementation of the Unified Process which tailors it by selecting only seven disciplines (model, implementation, test, deployment, configuration management, project management, and environment).
Image courtesy of IBM.

Risk management plays an important role in AUP projects. AUP stresses that high-risk elements be prioritized early in development. To this end, a risk list is usually created early on and maintained throughout the development process. Additionally, AUP stresses the early development of an executable architectural baseline. This architectural core is developed during the Elaboration phase to validate key requirements and assumptions and to address technical risks.
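AUP does not mandate a particular risk-list format. One lightweight convention is to rank risks by exposure (probability times impact) so that the riskiest items are attacked in the earliest iterations. The sketch below is illustrative only; the field names, weights, and example risks are our own invention:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float  # likelihood the risk materializes, 0.0-1.0
    impact: int         # damage if it does, e.g. 1 (minor) to 5 (severe)

    @property
    def exposure(self) -> float:
        # A common heuristic: exposure = probability x impact.
        return self.probability * self.impact

def prioritized(risks: list[Risk]) -> list[Risk]:
    # Highest exposure first, so high-risk work lands in early iterations.
    return sorted(risks, key=lambda r: r.exposure, reverse=True)

risk_list = [
    Risk("Unproven messaging middleware", probability=0.6, impact=5),
    Risk("Login page color scheme undecided", probability=0.9, impact=1),
]
for risk in prioritized(risk_list):
    print(f"{risk.exposure:.1f}  {risk.description}")
```

On an AUP project, a list like this would be revisited at each iteration boundary and re-scored as the growing architectural baseline retires technical risks.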

In describing AUP as “serial in the large,” Ambler refers to the four major phases of a UP project: inception, elaboration, construction, and transition. These phases occur in serial and each concludes when a specified milestone is achieved.

  • Inception: The goal of inception is to develop a shared understanding of the scope of the new system and to define a candidate architecture.
  • Elaboration: The goal of elaboration is to expand the team’s understanding of the system requirements and to validate the candidate architecture.
  • Construction: During construction the development of the system is completed.
  • Transition: During transition system testing is completed and the system is deployed to production.

AUP is “iterative in the small” in that each phase is divided into one or more iterations. AUP disciplines are a subset of UP disciplines and include: model, implementation, test, deployment, configuration management, project management, and environment. During most iterations, all of the seven disciplines of AUP occur in parallel (see Figure 1). Each discipline represents an activity that drives the project closer to achieving its vision.

Crystal
Crystal is not actually a methodology itself, but a family of methodologies that vary based on the size and complexity of the project. Crystal is the name given by its creator, Alistair Cockburn, to the entire family. Each specific methodology in the family is named after a color corresponding to a geological crystal’s hardness, representing that project’s size and criticality. While Cockburn alludes to the possibility of a spectrum of methods, so far the only implementations we are aware of are Clear, Yellow, Orange, Orange Web, Red, and Maroon.

While these flavors of Crystal share many elements, it should be noted that they are not intended by Cockburn to be upward or downward compatible. A project that starts as a Crystal Clear project should not expect to transition to a Crystal Maroon project. Cockburn implies that should a project turn into a Maroon project, it should adopt the characteristics and practices of a Maroon project, not expect to “grow” its prior Crystal Clear practices over time.

Regardless of which Crystal implementation you choose, you will find seven key principles at the heart of each:

  1. Frequent Delivery: Project owners/customers can expect deliverables from the team(s) every couple of months. On larger or more critical projects the deliverables may not go into production but stakeholders will see intermediate versions and be able to provide feedback.
  2. Continual Feedback: The entire project team meets on a regular basis to discuss project activities. The team also meets with stakeholders regularly to make sure the project is heading in the expected direction and to communicate any new discoveries that may impact the project.
  3. Constant Communication: Small projects expect the entire team to be in the same room, while larger projects are expected to be co-located in the same facility. All projects expect to have frequent access to the person(s) defining the requirements.
  4. Safety: Crystal is somewhat unique in its focus on the safety aspect of software development. This comes in two forms. One is the safe zone that team members must have to be effective and to communicate truthfully during the project without fear of reprisal; this is true of most Agile methodologies. The other form of safety, which only Crystal recognizes, is that the purpose of each software project is not the same and that some software projects affect the safety of their end users. For example, a space shuttle system is much more critical than a recipe organizer.
  5. Focus: Team members are expected to know the top two or three priority items each member should be working on and should be given time to complete them without interruption.
  6. Access to Users: As with most Agile methods, Crystal expects that the project team will have access to one or more users of the system being built.
  7. Automated Tests and Integration: Crystal provides various mechanisms for verifying project functionality. Controls must be put in place to support versioning, automated testing, and frequent integration of system components.

Two key Crystal concepts are project size and criticality. Size is defined as the number of people involved in a project: nothing too earth-shattering here, except that as the team size grows, Crystal adds more formality to the structure, artifacts, and management of the project.

Criticality is defined as the potential for the system to cause damage. In other words, a life support system that malfunctions could cause a lot more damage than a video game that won’t let you save your game. As project criticality increases, the rigor of the process needs to increase as well to ensure the expected demands can be delivered.

Figure 2 depicts how to determine which Crystal methodology to use for a project. As the project size increases (moving to the right of the diagram), the project becomes harder to coordinate, and a more comprehensive (darker) flavor of Crystal is needed. As the criticality of a project increases (moving from the bottom to the top of the diagram), aspects of the methodology need to be put in place to accommodate the additional requirements, including the artifacts generated by the team; however, criticality does not affect the color of Crystal used.

The letters in the cells represent the criticality of the project as follows:

  • C: Comfort
  • D: Discretionary Money
  • E: Essential Money
  • L: Life
Figure 2. The Crystal Family of Methodologies. One characteristic of Crystal is its intentional scaling to projects based on size and criticality. The larger a project gets (from left to right), the darker the color.

The numbers in the cells (along the bottom of Figure 2) represent the upper size of the project team. For teams of up to six, the Crystal Clear methodology works fine. For teams of seven to 20, Crystal Yellow introduces mechanisms to help manage the additional team size. For teams of up to 75 or 80 members, Crystal Red should do the trick. Criticality matters too: a project of one to six folks working on an atom splitter needs a tad more in the way of checks and balances than the same-size team creating yet another web site for their garage band.
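The size thresholds described above amount to a simple lookup. In the sketch below, the boundaries for Clear, Yellow, and Red come from the article; the upper bound of 40 for Orange is an assumption borrowed from Cockburn's published grid, since the article does not state it:

```python
def crystal_color(team_size: int) -> str:
    """Pick a Crystal flavor from team size alone.

    Criticality (C, D, E, or L) adds rigor within a flavor but,
    as Figure 2 notes, does not change the color chosen.
    """
    if team_size <= 6:
        return "Clear"
    if team_size <= 20:
        return "Yellow"
    if team_size <= 40:   # assumed boundary; not stated in the article
        return "Orange"
    if team_size <= 80:
        return "Red"
    return "Maroon"

print(crystal_color(5), crystal_color(15), crystal_color(78))
# → Clear Yellow Red
```

Remember that this only picks a starting point; per the earlier caveat, a team that outgrows its flavor mid-project should adopt the larger flavor wholesale rather than grow its current practices.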

Because the bulk of the published material focuses on Crystal Clear and Crystal Orange projects, we include some specifics of each of these methodologies here. For more information on the other methodologies in the Crystal family, see the references provided at the end of this article.

Crystal Clear
Roles vary across the Crystal methodologies as the size and criticality of the project increases. In other words, Clear has the fewest defined roles and Maroon has the most. The minimum defined roles for Crystal Clear projects are:

  • Sponsor
  • Senior Designer
  • Programmer

Crystal Clear expects that all team members will be working in the same room together. Support for more complex communication is not specified. The most important role is the Senior Designer, who is expected to be capable of making all the technical decisions that need to be made. Roles of project manager, business analyst, tester, etc. are shared among all team members.

Working software is expected to be delivered every two to three months. The team can work in smaller iterations if desired, but the expected release is every 60 to 90 days.

Crystal Clear requires minimal documentation, as project milestones are commonly the actual software delivery, not written documents. The team is responsible for defining any additional artifacts that they produce, and for defining their own coding standards, testing practices, etc.

Crystal Orange
As you would expect, the number of roles increases greatly for Orange projects. Roles can vary depending on the organization, but typically they include the traditional roles of Architect, Sponsor, Business Analyst, Project Manager, etc.

Because there is a greater need for verification in larger projects, specific care is taken to put more structure in place around the overall process, with a greater emphasis on testing: each subgroup within the team is expected to have a tester.

Crystal Orange projects are described as typical medium-sized projects.

Working software is expected to be delivered every three to four months. The team can work in smaller iterations if desired, but the expected release is every 90 to 120 days.

Crystal Orange defines a set of specific deliverables:

  • Requirements Document
  • Release Sequence (Schedule)
  • Project Schedule
  • Status Reports
  • UI Design Document (if project has a UI)
  • Object Model
  • User Manual
  • Test Cases

As with Crystal Clear, the team is responsible for defining their own standards and guidelines for the artifacts they deliver.

Dynamic Systems Development Method
The Dynamic Systems Development Method (DSDM) was developed in the UK in the mid-to-late 1990s by folks with a business, rather than a technical, perspective. The process is now managed by the DSDM Consortium and is probably the most popular Agile methodology in practice in the UK. DSDM is technically considered to be a framework, and the framework is versioned, and its releases managed, by the Consortium.

DSDM is one of the heavier Agile approaches available. It was originally developed as an extension to Rapid Application Development (RAD), incorporating best practices from the business-oriented founders.

DSDM projects consist of three phases (see also Figure 3):

  • Pre-Project: Things that need to occur before the project begins.
  • Project Lifecycle: The actual project occurs. This phase is broken into five stages:
    • Feasibility Study
    • Business Study
    • Functional Model Iteration
    • Design and Build Iteration
    • Implementation
  • Post-Project: Things that need to occur after the project has been completed.
Figure 3. DSDM Lifecycle. The three phases of a DSDM project involve a lot of steps.

DSDM is founded on nine principles. These principles are:

  1. Active user involvement is imperative.
  2. The team must be empowered to deliver.
  3. Frequent delivery is key.
  4. The primary criterion for acceptance is the delivery of functionality that meets the current business needs.
  5. Iterative and incremental delivery is essential.
  6. All changes during the project lifecycle are reversible.
  7. Requirements are baselined at a high level.
  8. Integrated testing during the entire project lifecycle is expected.
  9. Collaboration and cooperation between all stakeholders is essential.

Given the business roots of DSDM, it should not surprise you that DSDM has a focus on delivering functionality on time and on budget. To accomplish this, it is understood that each project will be split into chunks, with each chunk having a specific number of features, budget, and time. If a project is running out of time or budget (since they are both fixed), the least important features are dropped and considered for future projects. Features (requirements) are prioritized using the following rules:

  • MUST have this requirement.
  • SHOULD have this requirement if at all possible.
  • COULD have this requirement if it can be delivered without major impact.
  • WOULD like to have these requirements if there is enough time remaining.

The MUST, SHOULD, COULD, and WOULD are commonly represented with the MoSCoW acronym.
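DSDM does not prescribe any tooling for this triage, but the fixed time-and-budget rule is easy to model: fill each chunk in MoSCoW order and defer whatever does not fit. The feature names and cost figures below are invented for illustration:

```python
# MoSCoW rank: lower number = higher priority, dropped last.
PRIORITY = {"MUST": 0, "SHOULD": 1, "COULD": 2, "WOULD": 3}

def plan_chunk(features: list[dict], capacity: int) -> tuple[list[str], list[str]]:
    """Greedily fill a fixed-capacity chunk in MoSCoW order.

    Time and budget are fixed in DSDM, so features that do not fit
    are deferred to a future project; the deadline never slips.
    """
    ordered = sorted(features, key=lambda f: PRIORITY[f["priority"]])
    planned, deferred, used = [], [], 0
    for feature in ordered:
        if used + feature["cost"] <= capacity:
            planned.append(feature["name"])
            used += feature["cost"]
        else:
            deferred.append(feature["name"])
    return planned, deferred

features = [
    {"name": "login", "priority": "MUST", "cost": 3},
    {"name": "audit log", "priority": "SHOULD", "cost": 4},
    {"name": "custom themes", "priority": "COULD", "cost": 2},
]
print(plan_chunk(features, capacity=7))
# → (['login', 'audit log'], ['custom themes'])
```

The point of the sketch is the ordering, not the arithmetic: when capacity runs out, it is always the COULDs and WOULDs that fall off, never the MUSTs.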

DSDM purports that user feedback on requirements is crucial, and it places a lot of weight on prototypes created early in the project phase. These are used to refine requirements and clarify expectations of all stakeholders.

DSDM does not dictate the testing approach a project team should use, but it does mandate that testing be performed throughout the project lifecycle phase. Testers must be involved in all aspects of the lifecycle phase to make the most of every testing opportunity.

Workshops are used to provide a forum for all stakeholders to discuss the project requirements and functionality. They can be held as often as needed. Modeling is similar to Workshops, but held by the technical team members to create and communicate the technical aspects of the system, as well as other aspects of the system that can benefit from visual models, such as the business domain.

Due to the fixed controls of DSDM, a configuration management system that controls various aspects of a project is a requirement.

DSDM appears to set the upper boundary of project size at six teams of six people each. There may be instances of larger team sizes, but we were unable to find documentation supporting them. Because DSDM was founded on business best practices, you should be aware that DSDM also recommends it not be used for safety-critical systems, such as life support, toxic waste management, and nuclear reactors.

Comparing the Methodologies
Each of the methodologies considered in this article has unique strengths and weaknesses (see Table 1). (Read part 1 of this article to refresh your memory on XP, Scrum, Lean, and FDD.)

Table 1. Methodology Strengths and Weaknesses
Each methodology has its own unique strengths and weaknesses that make each more appropriate in certain contexts.

XP

Strengths:
  • Strong technical practices.
  • Customer ownership of feature priority, developer ownership of estimates.
  • Frequent feedback opportunities.
  • Most widely known and adopted approach, at least in the U.S.

Weaknesses:
  • Requires an onsite customer.
  • Documentation is primarily verbal communication and the code itself. For some teams these are the only artifacts created; others create minimal design and user documentation.
  • Difficult for new adopters to determine how to accommodate architectural and design concerns.

Scrum

Strengths:
  • Complements existing practices.
  • Self-organizing teams and feedback.
  • Customer participation and steering.
  • Priorities based on business value.
  • Only approach here that has a certification process.

Weaknesses:
  • Only provides project management support; other disciplines are out of scope.
  • Does not specify technical practices.
  • Can take some time to get the business to provide unique priorities for each requirement.

Lean

Strengths:
  • Complements existing practices.
  • Focuses on project ROI.
  • Eliminates all project waste.
  • Cross-functional teams.

Weaknesses:
  • Does not specify technical practices.
  • Requires constant gathering of metrics, which may be difficult for some environments to accommodate.
  • Theory of Constraints can be a complex and difficult aspect to adopt.

FDD

Strengths:
  • Supports multiple teams working in parallel.
  • All aspects of a project are tracked by feature.
  • Design-by-feature and build-by-feature aspects are easy to understand and adopt.
  • Scales well to large teams or projects.

Weaknesses:
  • Promotes individual code ownership as opposed to shared/team ownership.
  • Iterations are not as well defined by the process as in other Agile methodologies.
  • The model-centric aspects can have huge impacts when working on existing systems that have no models.

AUP

Strengths:
  • Robust methodology with many artifacts and disciplines to choose from.
  • Scales up very well.
  • Documentation helps communication in distributed environments.
  • Priorities are set based on highest risk; risk can be business or technical.

Weaknesses:
  • Higher levels of ceremony may be a hindrance on smaller projects.
  • Minimal attention to team dynamics.
  • Documentation is much more formal than in most approaches mentioned here.

Crystal

Strengths:
  • Family of methodologies designed to scale by project size and criticality.
  • Only methodology that specifically accounts for life-critical projects.
  • As project size grows, cross-functional teams are utilized to ensure consistency.
  • The “human” component has been considered for every aspect of the project support structure.
  • An emphasis on testing so strong that at least one tester is expected on each project team.

Weaknesses:
  • Expects all team members to be co-located; may not work well for distributed teams.
  • Adjustments are required from one project size/structure to another in order to follow the prescribed flavor of Crystal for that size/criticality.
  • Moving from one flavor of Crystal to another mid-project doesn’t work, as Crystal was not designed to be upward or downward compatible.

DSDM

Strengths:
  • Designed from the ground up by business people, so business value is identified and expected to be the highest-priority deliverable.
  • Has a specific approach to determining how important each requirement is to an iteration.
  • Sets stakeholder expectations from the start of the project that not all requirements will make it into the final deliverable.

Weaknesses:
  • Probably the most heavyweight approach compared in this survey.
  • Expects continuous user involvement.
  • Defines several artifacts and work products for each phase of the project; heavier documentation.
  • Access to material is controlled by the Consortium, and fees may be charged just to access the reference material.

Table 2 compares the methodologies discussed in this article, and attempts to provide a very high-level indication of which approaches might be more suitable for a particular project, depending on the project’s specific circumstances. The table illustrates whether each methodology favors (✓), discourages (X), or is neutral (-) with respect to the specific conditions listed.

Table 2. Methodology Comparison.
The information presented here is not meant to be representative of any scientific metrics, nor even specifically of the stated goals of each method’s inventor. However, it is representative of the authors’ first-hand experience helping various teams adopt these methodologies, in whole or in part. In this sense it is very practical, and is intended to be informative as to how successful the adoption of each method has actually been.

Conditions compared for each of the seven methodologies:

  • Small Team
  • Highly Volatile Requirements
  • Distributed Teams
  • High Ceremony Culture
  • High Criticality Systems
  • Multiple Customers / Stakeholders

Table 3 depicts the authors’ interpretation of the goal of each methodology expressed as a simple phrase.

Table 3. High-level methodology description.
A single phrase can sum up the intent of each methodology’s founder.

  • Scrum: Prioritized Business Value
  • Lean: Return on Investment (ROI)
  • FDD: Business Model
  • AUP: Manage Risk
  • Crystal: Size and Criticality
  • DSDM: Current Business Value

Two ways of categorizing the appropriateness of any Agile method to a given environment are project size and criticality. Although this doesn’t provide a complete view of the appropriateness of an Agile method in a context, it does provide a very good general description of the fit. Alistair Cockburn has developed a scale based on these characteristics for comparing methods. In Figure 4 we have attempted to plot the various methods covered in this article based on our experience and observation.

Figure 4. The Cockburn Scale. The authors’ evaluation of the appropriateness of each of the covered methods based on their experience and observation and illustrated via the Cockburn scale.

As Figure 4 hopefully shows, XP is generally most appropriate on smaller, highly dynamic projects, although many of its practices can provide value when combined with other management methodologies. XP has also been scaled to companies with hundreds of developers, but handling large projects is a customization a company has to make; it is not inherent to the XP process, due to its intense focus on constant, quick feedback and simplicity.

AUP provides a higher ceremony process that may be appropriate for larger teams, distributed teams, and systems of higher criticality. If the adopting corporate culture is likely to change slowly from a Waterfall-like process, AUP would be a good choice to “ease” into an Agile mindset.

Scrum and Lean are frameworks that focus on how to manage the overall process, maximize business value, and reduce waste. Because Scrum and Lean do not specify technical practices, either can complement methodologies that do, such as XP or a company’s existing methodology.

DSDM is a heavier and more formal flavor of Agile, and is very business centric. It compares in many ways to AUP, but focuses on current business value as opposed to risk.

Crystal offers a range of methodologies to choose from, each varying by project size and criticality. As the project size and/or criticality increases, Crystal adds mechanisms to support the additional burden of larger teams and higher degree of safety required by more critical projects.

Finally, FDD is an interesting mix. It can function as a complete Agile process, or can be combined with Scrum, Lean or XP to produce a customized integration of techniques.

The important point is that no methodology, Agile or otherwise, is meant to be taken verbatim. It must be customized in the context in which it is being applied in order to increase the rate of adoption and the opportunity for success.

