Agile processes are getting a lot of press lately. It is important to realize that there are actually two main camps in using agile processes. The best known one is for OOP-based agile processes like XP and Scrum. These processes are highly focused on writing 3GL code, and any OOA/D is highly distilled into practices like system metaphors and militant refactoring. They also incorporate a very fine-grained development life cycle model for Incremental, Iterative Development (IID). In these processes any modeling at the UML level is strictly optional and throwaway. These processes rely on applying very specific programming practices within a very formal process. They tend to be very rigid because they dictate developer activity down to the code fragment level and are very specific about exactly which programming practices will be employed and how. (As a group, the OOP-based agile processes are among the most detailed and rigid processes that I have ever seen in software development; they are rivaled only by the operational standards for occupations like telephone repairman.) In effect they represent the culmination of decades of hard-won experience with writing 3GL code.
At the other end of the spectrum are the model-based agile processes that are based upon translation. In translation one employs a true 4GL solution representation, such as UML, to resolve functional requirements and relies on a tool to automate optimization of nonfunctional requirements. These approaches emphasize design reuse through militant automation to address many 3GL concerns. In effect, they represent a paradigm shift away from 3GL programming comparable to the paradigm shift of the '60s that moved from Assembly programming to 3GL programming. Hybrid approaches, such as Ambler's Agile Modeling, lie in the middle.
Both approaches represent valid evolutions of software engineering, but they have evolved in such different directions that they are almost impossible to compare on a feature-by-feature basis. However, one can compare them at the megathinker level based on how they solve various software development problems, which I will attempt below. In that comparison I will employ XP as the poster child for OOP-based agile processes and Model-Based Software Engineering (MBSE) as the poster child for translation processes.
Requirements gathering. Requirements gathering in XP is quite informal, often verbal in nature. XP depends upon defining requirements for one functional increment (usually on the order of a couple of programming days of effort) at a time and having a customer available to clarify the requirements on at least a day-to-day basis. This is a logical outgrowth of fine-grained IID and a strong belief that only the customer can really define requirements (e.g., throwing requirements "over the wall" in a formal specification just invites misinterpretation). While having the customer intimately involved in the development at all times is clearly a good way to ensure the right requirements are properly implemented, it is also an Achilles' heel for agile processes. If the customer (or a surrogate, such as a marketing representative) is not readily available on a daily basis, the informal requirements gathering will tend to fall apart. That places a major constraint on the business environments where processes like XP can be deployed.
In MBSE the requirements can be supplied either in terms of formal specifications (use cases, etc.) or informally through discovery of the problem space with direct customer involvement. The former is common for large, distributed projects while the latter is common for projects at the scale of 5-20 person teams. Validation depends upon the fact that the solution deals only with functional requirements and is abstracted solely in customer terms. (The PIM, or Platform-Independent Model, should be fully implementable without change in the customer's environment as a manual system, however inefficient that might be.) In theory this allows the customer to be more directly involved with the actual solution because of the abstract nature of the UML representation. As a practical matter, though, there tends to be a significant learning curve for that level of customer participation beyond the static models.
Development Life Cycle Model. In the OOP-based agile processes IID is mandatory. More important, the granularity is rigidly defined and increments are quite small. An increment is typically three calendar weeks and individual "stories" specified within that increment are usually 1-4 days. The maximum size of increments is mandated and the various activities (e.g., estimation, testing, etc.) are strongly tied to small increments. A key tenet is that whatever is constructed in an increment must be executable so that it can be fully tested for acceptance by the customer. A key goal is to avoid the classic "90% done" syndrome that plagued large projects in the SA/SD era.
While the translation processes are quite amenable to IID, it is not mandatory, so they could be applied in the traditional Grand Waterfall manner. However, the "waterfall" is quite different because the "design" and "implementation" stages of the classic software waterfall are completely automated by the transformation engine. Unfortunately, a common misconception about model-based processes is that all the models for the project must be created "up front" (BDUF = Big Design Up Front). This is not true at all. All one needs to model "up front" is the minimum needed to execute something, just as in the OOP-based processes. As a practical matter one has to model a lot less than a corresponding OOP-based increment implements because one is only concerned with functional requirements in the model. That is, all the "boilerplate" to address nonfunctional requirements properly is handled by transformation engine automation. The bottom line is that executable MBSE models can be created in the same or even smaller increments than in OOP-based processes. In other words, the development life cycle model is completely customizable.
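To make that division of labor concrete, here is a minimal sketch of what a transformation engine's output might look like. Everything in it is hypothetical: the Account class, the marking comments, and the generated plumbing are illustrative, not the output of any particular tool.

```cpp
// ----------------------------------------------------------------
// GENERATED FILE -- emitted by the transformation engine from the
// PIM; it is regenerated on every build and never edited by hand,
// so its internal structure is not a developer maintenance concern.
// (Hypothetical sketch; real engines emit their own boilerplate.)
// ----------------------------------------------------------------
class Account {                 // class abstracted in the PIM
public:
    void deposit(long cents);   // behavior specified as a model action
    long balance() const { return balance_; }

private:
    long balance_ = 0;
    // Nonfunctional plumbing (locking, persistence, event dispatch)
    // would be generated here, driven by the engine's configuration
    // rather than hand-written by the developer.
};

void Account::deposit(long cents) { balance_ += cents; }
```

The developer models only the functional content (the Account abstraction and its deposit behavior); everything else in the file belongs to the engine.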
Testing. Testing is an integral part of OOP-based agile processes. This is highly laudable given the sad state of traditional developer testing. Both test-first and test-driven design are highly desirable process elements. However, processes like XP employ testing as more than just an important element of the process; it is actually central to the development philosophy. For example, in XP the only formal expression of requirements is the tests provided by the developers and customers; if they pass, then the software satisfies the requirements by definition. In my opinion, this has two pitfalls. The first is that it assumes developers and customers can write good tests, which is unlikely at best in my experience.
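To make the "tests as requirements" idea concrete, here is a minimal test-first sketch. The ShoppingCart class and its interface are hypothetical stand-ins for whatever a story actually specifies; in test-first style the assertions in main() would be written before the class exists.

```cpp
#include <cassert>
#include <vector>

// Hypothetical class under test; test-first means the assertions
// in main() below are written before this implementation exists.
class ShoppingCart {
public:
    void add(long priceCents) { items_.push_back(priceCents); }
    long total() const {
        long sum = 0;
        for (long p : items_) sum += p;
        return sum;
    }
private:
    std::vector<long> items_;
};

// In XP these tests *are* the formal requirement: "an empty cart
// totals zero; adding items accumulates the total."
int main() {
    ShoppingCart cart;
    assert(cart.total() == 0);
    cart.add(2500);
    cart.add(999);
    assert(cart.total() == 3499);
    return 0;   // all assertions pass, so the requirement is "satisfied"
}
```

Note that the requirement is only as good as the assertions: any behavior the tests fail to pin down is, by XP's definition, not a requirement at all.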
[There is also a resource issue. Writing acceptance tests for a 10-person XP team would require roughly two customers working full time (Ron Jeffries; private communication). One can substitute an external QA team of professional test writers as a surrogate for the customer here. However, that leads to another problem: synchronization. How does one ensure that both the QA team and the developers are getting the same requirements, given the very informal process for eliciting requirements on a verbal, ad hoc basis?]
More important, it is tied to the traditional view that product quality can be assured through "testing out" defects. That view was dismissed in other engineering disciplines in the '80s when the PacRim nations revolutionized product quality based upon defect prevention. Testing can bring one close to 5-Sigma reliability, but to get to 6-Sigma and beyond one must practice militant defect prevention, where testing becomes a tool for process monitoring rather than for ensuring product quality. The XP view of testing, when combined with its already very rigid process structure, concerns me because it tends to stifle process improvement directed at preventing defects.
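For a sense of scale (my arithmetic, using the conventional 1.5-sigma mean shift of Six Sigma practice, not a figure from the XP literature), the defect rate at a quality level of $k\sigma$ is the standard normal tail probability $\Phi(-(k - 1.5))$:

$$5\sigma:\ \Phi(-3.5) \approx 2.3\times 10^{-4}\ (\approx 230\ \text{defects per million}),\qquad 6\sigma:\ \Phi(-4.5) \approx 3.4\times 10^{-6}\ (\approx 3.4\ \text{per million}).$$

Closing that two-orders-of-magnitude gap by writing more tests is not realistic; the defects have to be prevented from being injected in the first place.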
Alas, the model-based processes do not have any built-in policy or infrastructure for testing. The translation processes do allow early testing of the solution models against functional requirements because the models themselves are executable. Because the translation processes emphasize good application partitioning with disciplined subsystem interfaces, one can also conveniently do full functional testing of subsystems. But the amount, quality, and goals of testing are left as an exercise for the development environment. On the other hand, I have to point out that statistically the reliability of translation applications tends to be substantially better than that of applications built via elaboration (i.e., manually coded at the 3GL level). I attribute that to the focus on functional requirements, economies of scale in design reuse, and the compactness of a 4GL representation.
Dependency management. Most of the refactoring in OOP-based processes is focused on dependency management (simplistically: ensuring that implementation dependencies form a directed, acyclic graph). This is driven by two considerations. One is to apply OOA/D principles (e.g., encapsulation, implementation hiding, etc.) to the development. Much of the content of Fowler's book, Refactoring, is a distillation of good OOA/D into "cookbook" practices at the object and code fragment level. Unfortunately, the other driving factor lies in dealing with physical coupling in the OOPLs. The OOPLs do a fine job of minimizing logical coupling but a pretty awful job on physical coupling (i.e., what one compilable unit needs to know about another). Most dependency management is focused on the physical coupling problem. That means that a lot of the development activity at the 3GL level is focused on solving a developer problem -- code maintainability -- rather than the customer's problem. (Note this is a general 3GL development problem; the OOP-based processes just incorporate a solution explicitly.)
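To illustrate physical coupling, here is a minimal C++ sketch; the Widget and Database names are hypothetical, and the two fragments are alternative versions of the same header, not one file.

```cpp
// Widget.h, physically coupled version: holding the member by value
// forces an #include, so every compilable unit that uses Widget must
// recompile whenever Database.h changes -- even though Widget's own
// interface (the logical coupling) is unchanged.
#include "Database.h"

class Widget {
public:
    void save();
private:
    Database db_;   // by-value member requires the full definition
};
```

```cpp
// Widget.h, physically decoupled version: a forward declaration is
// all that clients of Widget need to see; only Widget.cpp includes
// Database.h. The logical dependency is identical, but the
// compile-time (physical) dependency on Database.h is gone.
class Database;     // forward declaration

class Widget {
public:
    Widget();       // constructor/destructor defined in Widget.cpp,
    ~Widget();      // where Database's full definition is visible
    void save();
private:
    Database* db_;  // pointer member needs only the declaration
};
```

This is exactly the sort of bookkeeping that consumes 3GL development effort without moving the customer's problem forward.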
In the translation processes 3GL physical coupling is completely irrelevant because one never has to maintain the code directly. Instead, any changes are made to the models and the relevant code is regenerated. In addition, the level of abstraction (OOA) of the models is quite high and the focus is on abstracting the problem space correctly. So, unlike the OOP-based processes where refactoring begins after the tests pass, one simply stops modeling when the tests pass. Thus refactoring is usually only done to ensure correctness in resolving functional requirements (i.e., "fixing" bugs). That is, one employs good OOA practices at a high level of abstraction to ensure that the problem space is captured correctly, and those practices ensure long-term maintainability at the model level. (I would also argue that the OOA practices employed are the purest form of OO development because they are unsullied by the compromises that the OOPLs make with the hardware and the computational models of Turing and von Neumann.)