SAFe 1: An Introduction to the SAFe® Scaled Agile Framework

“If you can’t explain it simply, you don’t understand it well enough.”

Albert Einstein.

Summary

SAFe is an agile delivery system for enterprise-scale software development. It is designed to address the limitations of single-team delivery. In this introduction we discuss:

  • Problems solved by SAFe
  • SAFe framework overview
  • How to implement SAFe

Introduction: Why SAFe?

Enterprise software delivery requires the capacity of multiple development teams, multi-sprint delivery cycles, and solid alignment between strategy and execution. The SAFe framework defines a set of organizational and workflow patterns for scaling agile delivery beyond single teams. Specifically, SAFe addresses:

  • Scaling delivery capacity to multiple delivery teams.
  • Multi-sprint planning horizons
  • Aligning delivery teams to product value streams vs. short-term projects.

While many scaling frameworks are now in use, SAFe has become the most widely adopted, supported extensively with training, coaching and consulting resources. 

SAFe continues to evolve. With the release of SAFe 5, the framework has redefined itself in terms of Seven Core Competencies and positioned itself as a solution to the broader challenge of Business Agility. The Seven Core Competencies are designed to support a shift from scaled delivery to a more comprehensive Business Agility, and encompass topics like Organizational Agility and Lean Portfolio Management. Scaled delivery using teams-of-teams remains at the foundation with Agile Product Delivery and Team and Technical Agility, and the basic transformation approach continues to be based upon organizing around value stream-aligned ARTs (Agile Release Trains).

The core of the SAFe delivery model is Develop on Cadence, Release on Demand. To support this approach, delivery is based on Program Increments (PIs): fixed-width timeboxes of 5 to 6 sprints (many organizations plan in 6-sprint quarterly increments), in which teams of delivery teams (ARTs) execute against a Program Backlog, producing release-ready product increments every 2 weeks. Actual product releases, however, are driven by business demand. In this way the process of releasing is decoupled from the development cadence.

SAFe is a large, complex and evolving framework, and organizations are encouraged to engage the help of SAFe Program Consultants (SPCs, trained and certified by Scaled Agile, Inc.) to provide implementation guidance and coaching. Wholesale adoption of the entire framework all at once is likely to fail. The key questions are: what baseline patterns are needed to get started, and how can adoption be scaled incrementally across the enterprise?

SAFe Delivery Fundamentals

At its core, the SAFe delivery system uses teams of teams known as Agile Release Trains (ARTs) to plan and execute in fixed-width Program Increments (PIs), which typically comprise 5 to 6 sprints. Each team within an ART is a standard Scrum Team, and an ART may comprise up to a dozen teams. Additional roles coordinate planning and execution within the ART: a Chief Product Owner, or Product Manager, owns and manages the overall Program Backlog, which is the single source of work for all teams in the ART, and a chief Scrum Master role, known as the Release Train Engineer (RTE), facilitates planning and execution of each PI.

For scaling delivery capacity beyond a single team, SAFe introduces a Program Level in which multiple teams, organized into an ART, plan and execute in multi-sprint time-boxes known as Program Increments. The input to planning each PI is a Program Backlog owned and managed by a Product Manager.

  • Program Level – In SAFe, programs are planned and executed by Agile Release Trains (ARTs), in fixed-width time-boxes of 5-6 sprints. An ART is a set of agile delivery teams aligned around a common product value stream. The program level operates via a set of governance mechanisms for ARTs that support feature intake, planning and delivery. The PI Planning process is used to estimate the scope of what can be delivered in each Program Increment.
  • Team Level – One of the outputs of the PI Planning process is a set of team backlogs reflecting, in story-level detail, the scope of what each team has estimated it can deliver in the next PI time-box. The delivery framework is usually Scrum; that is, each team’s backlog is implemented via a series of sprints. At the conclusion of each sprint, teams, where appropriate, integrate their work to produce an overall increment of the system.

Feature Delivery by Multiple Teams
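
As a rough illustration of the structure just described, here is a sketch expressed as Python data classes. The class and field names are assumptions made for the example, not part of SAFe or any tooling API:

```python
from dataclasses import dataclass, field

@dataclass
class ScrumTeam:
    name: str
    product_owner: str
    scrum_master: str
    velocity: int  # average story points delivered per sprint

@dataclass
class AgileReleaseTrain:
    value_stream: str
    product_manager: str          # owns and manages the Program Backlog
    release_train_engineer: str   # facilitates PI planning and execution
    teams: list[ScrumTeam] = field(default_factory=list)
    program_backlog: list[str] = field(default_factory=list)  # features: the single source of work

    def capacity_per_sprint(self) -> int:
        """Combined story-point capacity of all teams in the train."""
        return sum(team.velocity for team in self.teams)
```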

Organizations are encouraged to start with this basic scaled delivery model, getting ARTs organized and trained, and then adding more ARTs incrementally. Once program-level organization and delivery practices have been established, adoption can expand to the portfolio level. The Lean Portfolio Management core competency is concerned with aligning strategy with execution, and with ensuring that the initiatives (epics) with the highest business value are appropriately prioritized and funded.

To summarize:

  • At the portfolio level, business initiatives (epics) are elaborated into features that describe the intended capabilities of the product or value stream (detailed feature definitions are not needed at this stage).
  • At the program level, features are fully defined in terms of benefits and acceptance criteria, and initial user stories are proposed (headline-level detail only).
  • At the team level, user stories are refined further (via backlog refinement) until they are sufficiently well-defined (INVEST) to be ready for implementation via sprints.

In practice, epic and feature refinement for the next PI runs concurrently with current PI execution.

Concurrent PI Preparation and Delivery

Definitions of Ready, at both the epic and feature levels, have been proposed by the SAFe framework in the form of templates. Epics require a vision (in elevator pitch format) and a list of required features/capabilities. For features, it is recommended that a list of benefits (why do we need this feature?) and acceptance criteria be provided for each one. The output of one set of practices feeds another via backlogs of ready work, often realized as a series of interconnected kanbans.

Epic (Portfolio) Backlog >> Program Backlog >> Iteration Backlog:
All three sets of processes run concurrently, and largely independently. There would be one instance of this framework for each ‘Value Stream’ (i.e. Product or Solution or Initiative) in the organization’s portfolio. The overall goal is that work flows continuously from solution vision to production.

Interconnected Backlogs
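
To make this flow concrete, here is a deliberately simplified sketch of work items being pulled through the three interconnected backlogs. The refinement functions are placeholders invented for the example; in practice refinement is a human activity, not code:

```python
from collections import deque

# Three interconnected backlogs: portfolio (epics) -> program (features) -> team (stories)
portfolio_backlog = deque(["Epic: Self-service onboarding"])
program_backlog = deque()
team_backlog = deque()

def refine_epic(epic: str) -> list[str]:
    """Split an approved epic into features (placeholder logic for illustration)."""
    return [f"{epic} / Feature {i}" for i in range(1, 3)]

def refine_feature(feature: str) -> list[str]:
    """Split a ready feature into user stories (placeholder logic for illustration)."""
    return [f"{feature} / Story {i}" for i in range(1, 4)]

# Pull-based flow: each downstream backlog pulls ready work from the one upstream
while portfolio_backlog:
    program_backlog.extend(refine_epic(portfolio_backlog.popleft()))
while program_backlog:
    team_backlog.extend(refine_feature(program_backlog.popleft()))

print(list(team_backlog))  # stories ready to be pulled into sprints
```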

The preceding has been a high-level summary of the steps required to get work defined, planned and delivered, from business portfolio to working software. The question now is: what mechanics or governance mechanisms are needed to support these processes in a simple but consistent fashion? Each of the three major operational areas comprises a set of practices, artifacts, and roles. In what follows we elaborate on each of these three parts of the framework.

SAFe at its Simplest

Here is a simple way to think about scaling up to the program level with multiple teams. First, we are all very familiar with Scrum, which is frequently represented as follows:

Scrum Framework

Think of a SAFe program as a higher-level abstraction of Scrum, following an identical pattern of Planning – Executing – Inspection/Adaptation – Repeat. Here is the program level represented as a PDCA cycle:

SAFe Program Framework

The portfolio management level is also intended to operate as a PDCA loop:

Portfolio Management PDCA Cycle

The details at each level are of course different, but the roles, events and artifacts serve essentially the same purpose at different levels of abstraction: POs own and manage backlogs, and SMs work to remove impediments or waste and improve the flow of work. The general empirical approach, with its inspect-and-adapt feedback cycle, is intended to operate at each level.

Portfolio Governance

Portfolio governance refers to the management of business initiatives that require work at the program and team levels. Specifically:

  • Deciding which initiatives to undertake and in what order
  • Making changes to in-flight initiatives

A portfolio kanban system is an effective way to manage this process and track progress. The portfolio management team (epic owners and the enterprise architect) drives this process via regular portfolio management (or portfolio grooming) meetings. Typical governance tasks include:

  • Review epic definitions
  • Review epic business cases
  • Identify architectural enablers
  • Determine priorities, estimates and epic rankings (e.g. using WSJF; see the sketch after this list)
  • Update the kanban board
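
Weighted Shortest Job First (WSJF) ranks items by cost of delay divided by job size. Here is a minimal sketch in Python; the epics and relative scores are made up for illustration:

```python
def wsjf(business_value: int, time_criticality: int, risk_reduction: int, job_size: int) -> float:
    """WSJF = Cost of Delay / Job Size, where Cost of Delay sums the three value components."""
    return (business_value + time_criticality + risk_reduction) / job_size

# Hypothetical epics scored with relative estimates
epics = {
    "Self-service onboarding": wsjf(8, 5, 3, 8),
    "Payments re-platform": wsjf(13, 3, 8, 20),
    "Reporting dashboard": wsjf(5, 2, 1, 3),
}

# Highest WSJF first: small, valuable, time-critical work floats to the top
for name, score in sorted(epics.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: WSJF = {score:.2f}")
```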

The portfolio kanban has the following states (a simple sketch of this state model follows the list):

  • Funnel – initial state for all new ideas/proposals pending review and analysis
  • Review – Epic defined in terms of elevator pitch (vision statement) and key features identified
  • Analysis – One-page business case, solution alternatives, enablers, relative ranking (WSJF), and Go/No-Go decision
  • Portfolio Backlog (Approved Epics)
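
As an illustration only, the states above could be modeled as follows; a real portfolio kanban would add policies, WIP limits and further downstream states:

```python
from enum import Enum

class EpicState(Enum):
    FUNNEL = "Funnel"                        # new ideas pending review and analysis
    REVIEW = "Review"                        # elevator pitch and key features identified
    ANALYSIS = "Analysis"                    # business case, alternatives, WSJF ranking, go/no-go
    PORTFOLIO_BACKLOG = "Portfolio Backlog"  # approved epics awaiting implementation

# Simple forward-only transitions between states
NEXT_STATE = {
    EpicState.FUNNEL: EpicState.REVIEW,
    EpicState.REVIEW: EpicState.ANALYSIS,
    EpicState.ANALYSIS: EpicState.PORTFOLIO_BACKLOG,
}

def advance(state: EpicState) -> EpicState:
    """Move an epic to the next kanban state, or stay put if it is already approved."""
    return NEXT_STATE.get(state, state)
```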

Program Governance and Feature Intake Process

The critical activities at the program level include the following:

  • Portfolio Alignment: Ensuring the work being defined for the delivery teams aligns closely with the business portfolio, the roadmap and vision.
  • Release (PI) Planning Readiness: Features in the program backlog have been defined in sufficient detail to support user story creation and estimation. A feature intake kanban is usually used to manage this process.
  • Release (PI) Planning Event: The event successfully achieves alignment between business owners and the program teams on a common, committed set of Program Objectives and Team Objectives for the next release time-box.
  • PI Execution: The teams successfully deliver a large percentage of the committed objectives.
  • PI Retrospective: An end of PI retrospective is held to give program teams the opportunity to inspect and adapt, and thereby improve their effectiveness over time.

A key practice at the program level is to get features sufficiently elaborated, refined and prioritized that they can be consumed by delivery teams.

Like epics, feature intake can also be managed using a kanban with a work-flow that shows the status of feature requests as they are elaborated from one-liners into fully defined program backlog items with business benefits and acceptance criteria.
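
One way the ‘ready’ gate on such a feature kanban might be expressed is sketched below, purely as an illustration. The Feature fields follow the benefits and acceptance-criteria recommendation above; the names are assumptions, not a SAFe API:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    title: str
    benefits: list[str] = field(default_factory=list)            # why do we need this feature?
    acceptance_criteria: list[str] = field(default_factory=list)

def ready_for_pi_planning(feature: Feature) -> bool:
    """A feature is 'ready' once its benefits and acceptance criteria have been defined."""
    return bool(feature.benefits) and bool(feature.acceptance_criteria)

# Example: a one-liner feature request is not yet ready
print(ready_for_pi_planning(Feature("Export report to PDF")))  # False
```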

PI Planning

The PI Planning process exists to estimate the scope of what can be delivered within the next delivery time-box, typically 5 to 6 sprints. At the planning event, features are estimated by breaking them down into stories and sizing them. One goal is to estimate the total scope of what can be delivered by reconciling feature estimates with team production capacity. The planning process also needs to identify and account for feature dependencies between teams. The primary output of the planning event is a summary of feature scope and the feature delivery timeline, usually captured in the Program Wall-Board artifact as follows:

Program Wall Board

The planning process runs on a fixed cadence, while release timing is driven by business need.
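
As a simple illustration of reconciling feature estimates with capacity, here is a hypothetical sketch; the team velocities, PI length and feature estimates are made-up numbers:

```python
# Hypothetical figures for one ART planning a 5-sprint PI
team_velocities = {"Team A": 30, "Team B": 25, "Team C": 40}  # story points per sprint
sprints_in_pi = 5
capacity = sum(team_velocities.values()) * sprints_in_pi      # total ART capacity for the PI

# Features in program-backlog priority order, with story-point estimates from team breakdowns
feature_estimates = [("Feature 1", 250), ("Feature 2", 200), ("Feature 3", 90)]

committed, plan = 0, []
for feature, estimate in feature_estimates:
    if committed + estimate <= capacity:   # commit features only while capacity remains
        plan.append(feature)
        committed += estimate

print(f"Capacity: {capacity} points, committed: {committed} points, planned: {plan}")
```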

Team Practices/Feature Delivery

Once features have been elaborated into user stories with acceptance criteria, they are ready to be pulled into a sprint for delivery. The de facto agile delivery framework is Scrum (although Kanban, or Scrumban, may also be used).

The Holy Grail for agile teams is to develop the capability of building a releasable product increment as the output of every iteration. Additional practices may be required to support this objective, including technical practices like TDD and BDD, and DevOps frameworks that support test automation and continuous integration. These can be considered essential prerequisites to getting a production-quality product increment out of every iteration.
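
As a tiny illustration of the test-automation side, here is a hypothetical unit test written in the TDD spirit; the function and values are invented for the example and can be run with a test runner such as pytest:

```python
# test_discount.py: in TDD, a failing test like this is written first,
# and the implementation is added only to make it pass.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(59.99, 0) == 59.99
```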

DevOps

A typical continuous integration configuration is summarized in the following diagram.

Continuous Integration

In this system we have set up a CI server such as Hudson, an open-source CI tool. Hudson integrates with other CI-related tools from multiple vendors, such as:

  • SCM Systems: Perforce, Git
  • Build Tools: Maven, Ant
  • Testing Frameworks: JUnit, xUnit, Selenium
  • Code Coverage Tools: Clover, Cobertura

Hudson orchestrates all of the individual sub-systems of the CI system, and can run any additional tools that have been integrated. Here is a step-by-step summary of how the system works (a rough code sketch of this loop follows the steps):

  1. Developers check code changes into the SCM system.
  2. Hudson polls the SCM system and initiates a build when new check-ins are detected. Automated unit tests, static analysis tests and functional regression tests are run against the new build.
  3. Successful builds are copied to an internal release server, from where they can be tested further by the development team, or loaded into the QA test environment, where independent validation of new functionality can be performed at the system level.
  4. Test results are reported back to the team.
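
The following is a rough, hypothetical sketch of that poll-build-publish loop in Python. A real Hudson/Jenkins installation configures this through jobs and plugins rather than a hand-rolled script, and the commands, branch name and paths below are assumptions for illustration only:

```python
import subprocess
import time

def head_revision() -> str:
    """Ask the SCM system (Git assumed here) for the latest commit on the main branch."""
    out = subprocess.check_output(["git", "ls-remote", "origin", "refs/heads/main"], text=True)
    return out.split()[0]

def run(step: str, cmd: list[str]) -> bool:
    """Run one pipeline step and report whether it succeeded."""
    print(f"== {step} ==")
    return subprocess.call(cmd) == 0

last_built = None
while True:
    rev = head_revision()
    if rev != last_built:  # new check-ins detected
        ok = (run("Update workspace", ["git", "pull", "--ff-only"])
              and run("Build and unit tests", ["mvn", "clean", "verify"])
              and run("Static analysis", ["mvn", "checkstyle:check"]))
        if ok:
            # Copy the successful build to an internal release server (destination is illustrative)
            run("Publish build", ["scp", "target/app.jar", "releases:/builds/"])
        print(f"Build of {rev}: {'SUCCESS' if ok else 'FAILURE'}")  # report results back to the team
        last_built = rev
    time.sleep(60)  # poll the SCM system periodically
```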

Knowing that every change made to an evolving code base results in a correctly built and defect-free image is invaluable to a development team. Inevitably, defects do get created from time to time. However, identifying and correcting them early means the team will not be confronted with a large defect backlog near the end of a release cycle, and can be confident of delivering a high-quality release on time.

Setting up a continuous integration system is not a huge investment: a rudimentary system can be set up fairly quickly and then enhanced over time. The payback from early detection and elimination of integration problems and software defects dramatically outweighs the cost. Having the confidence that they are building on a solid foundation frees development teams to devote their energies to adding new features rather than debugging and correcting mistakes in new code.

A SAFe Adoption Plan

SAFe recommends a 12-step adoption roadmap. This approach has proven effective across many organizations, and can be summarized as follows:

SAFe Adoption Roadmap

Details will vary by organization based on their needs, but essentials include:

  • Make the Go SAFe decision. 
  • Train internal change agents and organizational leaders needed to support the transformation.
  • Identify Value Streams: the products and/or services that the organization provides. For IT organizations that exist to support business operations, identify the fundamental business processes that must be supported, for example hiring/onboarding, payroll, and supplier contracts. Then identify the ARTs needed to support these value streams. Setting up ARTs based on Value Streams is one of the most consequential actions an organization can take.
  • Start with one or a small number of ARTs. Ensure ART roles (Product Managers and RTEs) are in place and trained. Train the ARTs in PI Planning and PI Execution.
  • Provide coaching for the ARTs as they plan and execute their first PI.
  • Learn and potentially make adjustments, then launch more ARTs.
  • Extend adoption to the Portfolio level.

SAFe is positioned as a framework that must be evolved by each organization beyond its baseline patterns. This takes time but it is important to leverage the built-in Inspect & Adapt mechanisms to learn from experience and continuously improve.

Conclusions

SAFe provides a proven framework for scaled delivery. The fundamentals of the framework include reorganizing development organizations into ARTs and basing these on Value Streams. This is an essential first step: it ensures that teams can operate with maximum independence and hence delivery speed. For many organizations this step may be more challenging than installing the basic mechanics of PI Planning and execution, and if not done well it can severely offset any benefits expected from adopting SAFe.
