
Arnon Rotem-Gal-Oz is the director of technology research for Amdocs. He has more than 20 years of experience developing, managing, and architecting large distributed systems on varied platforms and technologies, and is the author of SOA Patterns from Manning Publications.

Evolving Architectures - Architecture Retrospective

08.13.2008
Retrospectives: every "agile" team does them. But what are retrospectives anyway?

A retrospective is a meeting where the team inspects the past in order to adapt and improve in the future.

Agile or not, our team holds a retrospective at the end of each iteration (every two weeks in our case). We look at what worked, what didn't, how we are meeting our goals, how the product is going, and so on. These meetings provide a lot of value in steering us in the right direction.
Ongoing retrospectives that look at the near past allow for suppleness and adaptation to change, and they are very powerful at that. However, it is sometimes worthwhile to reflect over longer periods of time.

One area where a longer perspective is important is the architecture of the project. When you evolve an architecture you run the risk of making wrong decisions, mostly because architectural decisions have long-term implications, while YAGNI, time constraints, and life in general drive you toward short-term gains.

To take an example from my current project again: working toward the first release, we made a few major decisions during development, e.g.
  • Federated resource management - taking into consideration the fallacies of distributed computing, we decided to have local resource managers that take care of resource utilization and allocation. The resource managers form a hierarchy in which they communicate with each other to gain the "bigger picture".
  • Parallel Pipelines - handle image understanding by dividing the work between specialized components.
  • RESTful control channel - use a "lingua franca" between all component types so that we can easily integrate across platforms and languages.
  • Local failure handling - resources and components handle failures by themselves.
  • Edge Component - the communication technology (WCF in our case) is isolated from the business logic by an Edge Component.
  • etc.
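The Edge Component decision above is essentially an adapter that keeps transport details out of the core. A minimal sketch of the idea, in Python rather than the project's .NET stack and with entirely hypothetical names, might look like this:

```python
from abc import ABC, abstractmethod

# Transport-agnostic interface the business logic depends on.
class ControlChannel(ABC):
    @abstractmethod
    def send(self, resource: str, command: dict) -> dict: ...

# Edge Component: wraps a concrete transport (WCF in the article's case,
# a plain HTTP call here) so the core never sees transport details.
class HttpEdge(ControlChannel):
    def __init__(self, base_url: str):
        self.base_url = base_url

    def send(self, resource: str, command: dict) -> dict:
        # Real code would POST command to f"{self.base_url}/{resource}";
        # stubbed out here so the sketch stays self-contained.
        return {"resource": resource, "status": "accepted", **command}

# Business logic only knows the ControlChannel abstraction, so the
# transport can be swapped (WCF, HTTP, in-process) without touching it.
class SessionManager:
    def __init__(self, channel: ControlChannel):
        self.channel = channel

    def start_session(self, session_id: str) -> dict:
        return self.channel.send(f"sessions/{session_id}", {"action": "start"})
```

The payoff is that replacing the transport means writing one new `ControlChannel` implementation; the business logic is untouched.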
Once we finished delivering the first release, we took a few "days off" to consider what we had done thus far: we updated our quality-attribute list based on our experience working with the system and on some customer scenarios, studied the things we liked and didn't like in the design and architecture of the working system, and revised a few of our decisions. For instance:
  • We found that in rushing to a working system we had introduced some excess coupling to a specific technological solution (for video rendering). We ran a few proofs of concept and found out how to isolate the technology from the rest of the system as well as allow more technology choices.
  • We found that some of the data flows were not as clean as we thought they'd be: adding new features caused more resource interactions than we anticipated when we partitioned the resources. We redefined some of the resource roles to get less message clutter (and higher cohesion).
  • The federated resource management works well, but it introduces needless latency in session initiation. We opted to introduce "Active Services," which are more autonomous.
  • We added a blogjecting Watchdog in addition to local failure handling, both to increase the chances of failure identification and recovery and to get a better picture in a centralized Service Monitor.
  • The RESTful control channel worked well and will continue into later releases.
  • Some of the scale issues will be handled by introducing "Virtual Endpoints," while others will continue to use autonomous endpoint creation and liveness dissemination (hopefully learning from the mistakes of others).
  • etc.
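To illustrate the watchdog decision above: the core idea is that components publish liveness signals, and a watchdog flags stale ones and forwards the overall picture to a central Service Monitor. Here is a minimal sketch of that mechanism (Python instead of the project's stack, all names and the heartbeat/timeout scheme my own assumptions, not the book's exact pattern):

```python
import time
from typing import Dict, List, Optional

class Watchdog:
    """Tracks component heartbeats; complements local failure handling."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.heartbeats: Dict[str, float] = {}

    def heartbeat(self, component: str, now: Optional[float] = None) -> None:
        # Components call this periodically to prove liveness.
        self.heartbeats[component] = now if now is not None else time.time()

    def suspected_failures(self, now: Optional[float] = None) -> List[str]:
        # A component whose last heartbeat is older than the timeout
        # is suspected (but not proven) to have failed.
        now = now if now is not None else time.time()
        return [c for c, t in self.heartbeats.items() if now - t > self.timeout]

    def report(self, now: Optional[float] = None) -> Dict[str, str]:
        # The snapshot a centralized Service Monitor would receive.
        failed = set(self.suspected_failures(now))
        return {c: ("suspect" if c in failed else "alive")
                for c in self.heartbeats}
```

Note that a missed heartbeat only marks a component as a suspect; the point of pairing this with local failure handling is that recovery can still happen near the failure while the Service Monitor gets the big picture.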
The result of these and other decisions we've made is a rework plan that will (hopefully) make our overall solution better.
What we see is that we evolved our architecture as we went forward. While all the decisions we made seemed right at the time we took them, only by reviewing them from a wider perspective (an architecture retrospective) did we identify the decisions that needed to change and the ones we should enhance. The insights you gain after working on a project for a while are much better than the initial thoughts you have, or the understanding you master in the initial iterations.
I think it is essential to review the architecture once you've gained more experience with the realities of the system you're writing (vs. the perceived realities you have at the outset).

By the way, if you work with a waterfall approach your situation is worse, since you take your decisions before you write any code; you don't even have the benefit of POCs and working code to enhance your insights.


PS
If you have the MEAP version of SOA Patterns you can read more on the patterns I've mentioned here: Active Service in chapter 2, blogjecting Watchdog in chapter 3, Service Monitor in chapter 4, Parallel Pipelines in chapter 3, and Edge Component in chapter 2.
Published at DZone with permission of Arnon Rotem-gal-oz, author and DZone MVB. (source)


Comments

Cristian Popovici replied on Wed, 2008/08/13 - 11:21am

A very nice discussion on retrospectives:

http://se-radio.net/podcast/2008-07/episode-105-retrospectives-linda-rising
