
Quality Levels: the Hole in Software Methodologies


About six years ago, one of the authors of this document began two different software projects almost simultaneously. The first was a signal processing desktop application that needed to be extensible through plugins; the tool had to provide an API to facilitate the construction of those plugins. This project would involve several developers. The second project's goal was to perform a sensitivity analysis of a three-dimensional model reconstruction of electron microscopy images. This required developing a piece of software to invoke the model (a program) and to analyze the results obtained using several tools (other existing programs).

Both projects were carried out successfully. The first one was developed in Java and is still maintained and extended today. It has begun to be used outside the organization where it was created, and it is expected to still be in use ten years from now. The second one took the form of a Python script about three hundred lines long. It ran successfully once (taking about six days on a big machine) and gave appropriate results. At present, the author does not even retain a copy of this program.

The question that this paper seeks to answer is: does it make sense to use the same software development process for both projects? Obviously not. A project designed to be extended, which is expected to be used (and therefore maintained) for many years and which will involve several developers, is not the same scenario as a "project" (barely a script) that will only run once, was developed by a single person, will not be extended, and will not need to be maintained. From the point of view of ROI[1]: what sense does it make to invest time in making that script more readable and maintainable if, once it runs successfully, its usefulness is over?

That script was the first program that I (Abraham) wrote in Python. I took many shortcuts and the code was horrible. When I got it to work, I automatically entered "refactoring mode" and started using my text editor to give variables better names and to structure the code into functions. After a while, a question began to haunt me: why was I doing this? My developer instinct told me I should write readable and maintainable software. In reality, however, I was wasting my time. It made no sense to invest more effort in that piece of software. All I had to do was run it and wait six days for the results.
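As a hypothetical illustration (not the original script, which no longer exists), this is the kind of mechanical clean-up that "refactoring mode" involves. Both functions below behave identically; the second simply has descriptive names, a docstring, and an idiomatic structure:

```python
# Before: the kind of "shortcut" code a one-off script accumulates.
def f(xs):
    r = []
    for x in xs:
        if x > 0:
            r.append(x * 2)
    return r


# After: the same logic with descriptive names and a focused function.
def double_positive_values(values):
    """Return each positive value doubled; non-positive values are dropped."""
    return [value * 2 for value in values if value > 0]
```

For software that will run exactly once, this kind of polishing yields no return; for a long-lived codebase, it is essential.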

The elephant in the room of software development methodologies

Not all software we develop requires the same quality. Developing software that will run only once, and will probably never be changed except to correct bugs, is not the same as developing software that is expected to be used for years and continually extended. Developing software that will be used in isolation is not the same as developing a piece of software that will be integrated with other applications, or one that exposes an API on top of which more software will be built. The amount of effort and resources required to develop a framework that will be publicly available is not the same as that needed to develop an intranet web application, even though both applications may have exactly the same number of lines of code. The first development should have a higher quality, and therefore requires more resources and more time.

What is stated above is a real truism. Any experienced developer knows it. Why, then, do software development methodologies not take this into account? All software development methodologies, from agile to the more traditional, tend to define a process geared toward achieving the highest quality possible. But we do not always want, or cannot always afford, the highest possible quality. Time and resources are always limited, and they dictate that a public framework and a simple intranet web application do not require the same quality and, therefore, should not use the same development process.

We argue that, from the beginning of the development process, the quality level that it makes sense to reach should be explicitly defined, and that this level should influence the development process. We want to make perfectly clear that we are not looking for excuses to develop poor-quality software. Perhaps in an ideal world all software developed would have the highest quality (often, this seems to be the goal that software development methodologies try to reach). But in the real world this is not the case, and software development methodologies should take this reality into account by defining different processes to achieve different levels of quality.

It might seem an elementary reflection, but since we had this realization we have begun to address software projects differently. Explicitly defining, at the early stages of development, what level of quality we want/need to achieve, and on that basis what techniques and tools should be used, has changed the way we face software projects. In fact, before embarking on a software project, developers do this analysis in their minds, and on that basis they make a series of decisions that will guide the development. However, this process is currently very informal, and it often goes undocumented.

We believe this situation resembles the one refactoring was in 20 years ago: good developers refactored their software, but it was an informal process. We argue that the definition of the level of quality to be achieved in a software project should be formal: it should be done explicitly, it should be documented, and perhaps it should even be part of a business contract for the development of an application. Formalizing these tasks that we all do informally will enable knowledge about how to pursue different quality levels in a project to be shared more effectively, in the same way that the concept of refactoring helped create a common vocabulary to talk about and share knowledge about refactoring, and facilitated the creation of a set of tools to support these tasks.

Good practices

The main idea of this paper is that the definition of the overall level of quality to be achieved in a software project must be one of the explicit steps in the analysis stage of the project and it should influence the software development process.

This is the most important message we want to convey. Now we shall reflect on various factors that affect the quality level required by a project. Later we will discuss a series of recommendations (which we found useful, but which everyone will have to adapt to their own circumstances) for achieving different quality levels corresponding to different stereotypes of projects. As with almost any good practice in the software development world, these recommendations should be taken with caution and adapted to the context where they will be applied. They are only "good practices" that worked for the authors of this paper. Possibly others will share their own best practices in the future, and if this occurs, we are sure they will disagree totally or partially with ours. There will never be a consensus in this regard, just as there is currently no consensus on which software development methodology to use. We believe the details are not as important as the overall concept itself.

Published at DZone with permission of its author, Abraham Otero.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Peter Draheim replied on Thu, 2013/11/07 - 8:24am

Interesting article - great concept - are you planning on defining the Quality Levels (1-5 stars) or did I miss it?

Abraham Otero replied on Thu, 2013/11/07 - 1:21pm

The closest thing to a "definition" of the quality levels would be the recommendations (rules of thumb) given in the third column of the tables. We could say that they somehow implicitly define what more or fewer "stars" means. This is not an accurate definition at all, but it is the best that Francisco and I have managed to do.

Amit Mujawar replied on Tue, 2013/11/26 - 11:15pm

I think there is a simple analogy: compare a Ferrari with a normal sedan car; you've got to pay more to get more features and the best engineering. If you start defining the quality level in terms of what the product is worth to someone who is buying or using it, or in other words what is at stake (most often in monetary terms, but not always), then it is easy to relate the importance of quality.

Another important factor is the penalty for not doing things right: what would we sacrifice or risk when moving from level 5 to level 4, and so on?
