Grass Roots Agile
Checks and Balances
Managing dependencies between classes, packages, and components throughout the development lifecycle is imperative when developing a software product that accommodates change and meets the business objectives. While breaking larger teams into smaller groups of developers helps establish fewer channels of communication, these smaller teams tend to introduce other architectural and integration issues. For larger projects, these issues will likely affect your software before your first production deliverable. Agile practices applied throughout the lifecycle help minimize dependencies and encourage more frequent integration. An automated and repeatable build serves as a system of checks and balances that helps ensure large teams maintain forward and sustainable progress.
Separate teams should be disciplined in releasing their changes to the source code repository frequently. Frequent builds help identify integration issues almost instantaneously and help avoid the risk associated with Big Bang integration. Automated unit tests verify the integrity of code released by each team, while more complete integration and functional tests verify behavior spanning team boundaries. Ensuring the complete system is free of compilation errors allows for frequent profiling of the application to identify serious issues related to performance, memory, and other environmental constraints. A shared environment that closely resembles the future production environment is also critical for identifying environmental issues. Additional techniques add to this system of checks and balances, ensuring development teams minimize dependencies and emphasize integration earlier in the software lifecycle.
- Dependency Metrics and Diagrams. Utilities such as JDepend and JarAnalyzer for Java allow teams to obtain real and accurate feedback on the structural relationships between software packages and components. Understanding module and package dependencies allows developers to evaluate, somewhat objectively, the cost of change. As dependencies become excessive, such utilities help guide developers in prioritizing the areas of the system that are the strongest candidates for refactoring.
- Levelized Build. Cyclic dependencies, recognizable as bi-directional relationships between packages or components, are especially dangerous because none of the participating modules can be tested or deployed in isolation. One way to enforce acyclic relationships is to build components individually, including in the build and test execution path only those modules required to compile and test the component being built.
- Parallel Build. As systems grow, builds tend to slow. This is natural: more code must be compiled, and more tests are introduced that exercise more code. When build time degradation occurs, it’s important to identify ways to increase the efficiency of your build process. One technique is to build components in parallel. If dependencies are managed carefully, components with no dependencies on each other can be built simultaneously.
- Component Tests. Components built in isolation can be tested in isolation. For each component represented by a deployable unit, creating a corresponding test component helps ensure the functional integrity of the component. Test components typically take the form of suites of individual unit tests, and they support the refactoring needed to maintain the design integrity of the component.
- Integration Tests. Test suites that verify component integration help ensure that the interfaces between components remain compatible, and they also help identify behavioral issues. Incorporating these test suites into your build process ensures that integration tests are executed frequently, reducing the likelihood that extended periods pass between integrations.
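To make the dependency metrics above concrete, the sketch below computes the afferent coupling (Ca), efferent coupling (Ce), and instability (I = Ce / (Ce + Ca)) measures that tools like JDepend report, over a small dependency graph, and detects the cyclic dependencies a levelized build forbids. The package names and the dependency map are hypothetical examples, not output from a real analysis.

```java
import java.util.*;

// A minimal sketch of package-level dependency metrics.
// The package names below are illustrative, not a real codebase.
public class DependencyMetrics {
    // package -> set of packages it depends on
    static final Map<String, Set<String>> DEPS = Map.of(
        "app.ui",          Set.of("app.service"),
        "app.service",     Set.of("app.domain", "app.persistence"),
        "app.persistence", Set.of("app.domain"),
        "app.domain",      Set.of()
    );

    // Efferent coupling (Ce): number of packages this package depends on.
    static int efferent(String pkg) {
        return DEPS.getOrDefault(pkg, Set.of()).size();
    }

    // Afferent coupling (Ca): number of packages that depend on this package.
    static int afferent(String pkg) {
        int count = 0;
        for (Set<String> targets : DEPS.values())
            if (targets.contains(pkg)) count++;
        return count;
    }

    // Instability I = Ce / (Ce + Ca); 0 is maximally stable, 1 maximally unstable.
    static double instability(String pkg) {
        int ce = efferent(pkg), ca = afferent(pkg);
        return (ce + ca) == 0 ? 0.0 : (double) ce / (ce + ca);
    }

    // Detect a cyclic dependency with a depth-first search over the graph.
    static boolean hasCycle() {
        for (String pkg : DEPS.keySet())
            if (visit(pkg, new HashSet<>())) return true;
        return false;
    }

    static boolean visit(String pkg, Set<String> path) {
        if (!path.add(pkg)) return true;  // revisited a package on the current path
        for (String dep : DEPS.getOrDefault(pkg, Set.of()))
            if (visit(dep, path)) return true;
        path.remove(pkg);
        return false;
    }

    public static void main(String[] args) {
        System.out.printf("app.domain: Ca=%d Ce=%d I=%.2f%n",
            afferent("app.domain"), efferent("app.domain"), instability("app.domain"));
        System.out.println("cycles: " + hasCycle());
    }
}
```

Here `app.domain` has high afferent coupling and zero efferent coupling, making it a stable package that many others depend on; such packages are costly to change and benefit most from thorough test coverage.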
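The levelized and parallel build techniques combine naturally: group components into levels by dependency depth, then build each level's components concurrently, since components within a level have no dependencies on one another. The sketch below, using hypothetical component names, derives the build levels (failing fast on a cycle) and runs each level on a thread pool, waiting for one level to complete before starting the next.

```java
import java.util.*;
import java.util.concurrent.*;

// A sketch of a parallel, levelized build over a hypothetical component graph.
public class ParallelBuild {
    // component -> components it depends on
    static final Map<String, Set<String>> DEPS = Map.of(
        "domain",      Set.of(),
        "persistence", Set.of("domain"),
        "messaging",   Set.of("domain"),
        "service",     Set.of("persistence", "messaging")
    );

    // Group components into build levels: level 0 has no dependencies,
    // and each later level depends only on components in earlier levels.
    static List<List<String>> levels() {
        List<List<String>> levels = new ArrayList<>();
        Set<String> built = new HashSet<>();
        while (built.size() < DEPS.size()) {
            List<String> level = new ArrayList<>();
            for (var e : DEPS.entrySet())
                if (!built.contains(e.getKey()) && built.containsAll(e.getValue()))
                    level.add(e.getKey());
            if (level.isEmpty())  // nothing buildable means a cyclic dependency
                throw new IllegalStateException("cyclic dependency detected");
            Collections.sort(level);
            built.addAll(level);
            levels.add(level);
        }
        return levels;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (List<String> level : levels()) {
            // components in the same level can compile and test simultaneously
            List<Future<?>> tasks = new ArrayList<>();
            for (String c : level)
                tasks.add(pool.submit(() -> System.out.println("building " + c)));
            for (Future<?> t : tasks) t.get();  // finish this level before the next
        }
        pool.shutdown();
    }
}
```

In a real build, the submitted task would invoke the component's compile and test targets rather than print; the structure of levels and barriers stays the same.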
Executing test cases frequently offers another form of very useful feedback on the functional correctness of the application. Developers can be notified of failed tests, including an indication of the area of the application that needs immediate attention. Notifying all developers of test status keeps the team informed of the functional state of the application. When teams of developers are organized around process, automated integration testing helps the technology experts who span teams evaluate system-wide test results and pinpoint potential problem areas.
Running Tested Features
Running Tested Features (RTF) is a metric that measures the delivery of running, tested features to business clients and the collection of their feedback. Suggested by Ron Jeffries as a way to increase agility and team productivity by continuously monitoring the software development effort from project inception through delivery, RTF is simple to define.
Break the software down into features that represent observable value to the business. For each feature, create one or more automated acceptance tests that validate the feature. Consolidate all features into a single, integrated product and continuously execute the automated acceptance tests. At any given moment, the project team should know the number of features that are passing all acceptance tests. This number represents the RTF metric.
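The definition above reduces to a simple computation, sketched below under the assumption of a hypothetical map from feature names to their acceptance test results: a feature counts toward RTF only when every one of its acceptance tests passes.

```java
import java.util.*;

// A minimal sketch of the RTF metric. Feature names and test results
// are hypothetical placeholders for real acceptance test reports.
public class RunningTestedFeatures {
    // feature -> pass/fail result of each of its acceptance tests
    static int rtf(Map<String, List<Boolean>> results) {
        int running = 0;
        for (List<Boolean> tests : results.values())
            if (!tests.isEmpty() && tests.stream().allMatch(p -> p))
                running++;  // feature is running and fully tested
        return running;
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> results = Map.of(
            "login",    List.of(true, true),
            "search",   List.of(true, false),      // one failing test: not counted
            "checkout", List.of(true, true, true)
        );
        System.out.println("RTF = " + rtf(results));
    }
}
```

Plotted after every build, this single number gives the continuous, project-long measurement the next section describes.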
RTF is a continuous measurement and, all things being equal, should grow linearly throughout the project. To explore what this means, we must first compare agile development with iterative development. Iterative development teams tend to timebox their iterations, marking each iteration with a milestone focused on delivering a planned set of features to clients every two to four weeks. Iteration planning defines the work included in future iterations based on the amount of work accomplished in previous iterations. Yet within these iterations, teams should deliver functional and valuable features daily, hourly, or even more frequently. Teams should not be constrained to delivering features only at the end of each iteration. While project management defines iterations that last weeks or months, primarily for planning, the development team works in iterations that last days, hours, or minutes. And each feature released within an iteration planning window is a feature that can be measured and that contributes to RTF.
On many software development efforts, we aim to align each process team to the same iteration schedule. This is not necessary. Each process team can work within its own iteration planning window. Agile practices, including continuous integration, ensure individual process teams remain aligned, and the practices of continuous release, continuous builds, and continuous deployment encourage the smaller iterations that often go unacknowledged on many development efforts. It is in these smaller iterations that software growth occurs, and where we continuously measure Running Tested Features.
Figure 2 depicts a multi-dimensional agile development effort, where iterations overlap but RTF experiences linear growth as each iteration nears completion. Critics might argue that RTF cannot experience consistent linear growth due to unpredictable setbacks throughout the development lifecycle. While it’s true that teams will experience challenges that limit RTF growth for short periods, analyzing RTF reports over time will show sustained positive growth.