Modeling to Invariants: A Faster Route to Design Convergence: Page 2

At SD West, one man finds the heart of modeling—and opens his eyes to the power of invariants—thanks to the founder of MDA.


The Idea of Convergence
Convergence means arriving at a final outcome. In software development, it is the process of designing software so that it becomes a finished program and design that accomplishes what the user wanted. The result may not include every feature that was conceptualized, but the program matches the vision.

There have been many different approaches to convergence, such as upfront design, refactoring, phases, and iterative development; all of these tackle the problem of trying to encapsulate the vision of the final program. The main difficulty with all these methods is that none of them implicitly forces convergence. Upfront design assumes that the model is already finished except for some minor changes, and it doesn't allow for major architectural shifts. Refactoring works well, but there is no evidence that the refactored design is a better solution than the original. Phases focus on functional development, but simply adding functions to a program doesn't force convergence to the vision; it just checks off the requirements. Iterative development, in the words of many agile developers, stops when the project runs out of either money or time, which certainly doesn't sound like a convergent idea. In all of these methods there is no separation between design elements that are invariant and those that change, and no mechanism for something that changes to stop changing.

To demonstrate the need for forced convergence, consider an analogy from building (structural) architecture and how it relates to dynamic systems. A building starts with a frame. As the frame becomes more fixed and elements are added, the rest of the building becomes fixed and unchangeable. There is a steady progression from the structure of the building to its final form, from changeable elements to invariant structure. In buildings we have the vision and the blueprint. In modeling dynamic systems, such as valves, we rely on modeling to the invariant conditions. Without invariant conditions a dynamic system has an infinite set of solutions that may or may not converge to a final solution. As invariants are added, the dynamic system becomes more predictable and the structure converges to a final form.



In some dynamic systems, adding arbitrary invariant conditions forces a false convergence that leads to an incorrect solution. This means we must check that the invariant conditions are not arbitrary but have a reason for being invariant. This is where Steve's idea of modeling to the invariants made sense to me. If I add invariants only when they truly become invariant, and look for the simplest solution, maybe I can find the correct path in my applications the same way that Steve's valve solution yielded the simplest and correct solution for him.

Dynamic systems have one problem that building architecture does not. Building architecture is fixed by the vision. In a dynamic system such as software design, anything and everything within the vision is liable to change; sometimes even an element that was an invariant must change. But if you keep changing invariant conditions, the system is guaranteed never to converge.

In a dynamic system, you can use techniques such as Monte Carlo methods to arrive at convergence. Software has no direct equivalent, but a quick way to simulate one is to set a certainty factor at the beginning of the project. If the certainty that an invariant is correct is greater than the project's certainty factor, we leave that element as an invariant; otherwise we change it to a variant. The certainty of each invariant increases or decreases as the iterations go forward until, eventually, the strong invariant conditions are stable and the weak invariant conditions become variants.

Author's Note: You can set your certainty factor higher or lower depending on how many elements you feel it's appropriate to include as invariants. A lower cutoff admits more invariants, but the convergence is slower (see Figure 1).

Here's an example of how this works. If I set the certainty factor high, at 80 percent, it means I feel 80 percent confident that I have the correct invariants before going to the next iteration. In the next iteration I find that my confidence in one of the invariants has dropped to 70 percent, so I change that invariant to a variant for the following iteration. If, during that iteration, I find the element is again above the 80 percent certainty factor, it becomes an invariant once more. I know there is a final convergence when all of the elements meet the 80 percent certainty factor, or when I feel that the remaining elements will never reach the 80 percent certainty we require for an invariant. The stability of the final convergence is usually tested for a couple of iterations to see whether the number of invariants changes.
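To make the bookkeeping concrete, here is a minimal sketch in Python of how the certainty-factor rule might be tracked across iterations. It is not from the original project; the element names, confidence numbers, cutoff, stability window, and the classify and converged helpers are all illustrative assumptions. Convergence is modeled, as described above, as the invariant set staying unchanged for a couple of iterations.

# A minimal sketch of the certainty-factor bookkeeping described above.
# Element names, confidence values, and the stability window are
# illustrative assumptions, not details from the article's project.

CERTAINTY_FACTOR = 0.80   # project-wide cutoff chosen at the start
STABILITY_WINDOW = 2      # iterations the invariant set must stay unchanged

def classify(confidences, cutoff=CERTAINTY_FACTOR):
    """Split design elements into invariants and variants for one iteration."""
    invariants = {name for name, c in confidences.items() if c >= cutoff}
    variants = set(confidences) - invariants
    return invariants, variants

def converged(history, window=STABILITY_WINDOW):
    """Converged when the invariant set has not changed for `window` iterations."""
    return len(history) > window and all(h == history[-1] for h in history[-(window + 1):])

# Confidence estimates per iteration, mirroring the 80 percent example:
# one element dips to 0.70 in iteration 2, becomes a variant, then recovers.
iterations = [
    {"tree layout": 0.85, "valve inputs": 0.90, "ordering screen": 0.60},
    {"tree layout": 0.70, "valve inputs": 0.90, "ordering screen": 0.65},
    {"tree layout": 0.85, "valve inputs": 0.95, "ordering screen": 0.65},
    {"tree layout": 0.88, "valve inputs": 0.95, "ordering screen": 0.65},
    {"tree layout": 0.90, "valve inputs": 0.95, "ordering screen": 0.65},
]

history = []
for i, confidences in enumerate(iterations, start=1):
    invariants, variants = classify(confidences)
    history.append(invariants)
    print(f"Iteration {i}: invariants={sorted(invariants)} variants={sorted(variants)}")
    if converged(history):
        print("Invariant set is stable; treating the design as converged.")
        break

In this run the "tree layout" element flips to a variant when its confidence drops below the cutoff and flips back when it recovers, while "ordering screen" never reaches the cutoff and remains a variant; the design is declared converged once the invariant set stops changing.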

Figure 1. Number of Invariants at a Given Certainty Factor: The graph describes convergence and should not be read as a statistical graph. It shows that for high levels of certainty the convergence is quicker but results in lower invariant percentages. For lower certainty factors the invariant percentages are higher but there is less convergence. For middle levels of certainty the invariant percentages swing up and down more, resulting in a longer time to converge with an average number of invariants.

When We Fail to Converge
A real-world example will give this some context. I was a consultant for a company that wanted a program to do valve configurations (this had nothing to do with the type or size of valves used by Stephen Mellor). We had a very clear diagram of what was being asked for. We built some prototypes using the core technologies for the project, and we represented the application with a simplified UML model. Over the coming months we did no fewer than five iterations. The first iteration impressed the primary stakeholder; he complimented us as being the only ones to have really captured the vision, though a couple of tweaks were needed. The second iteration was even more positive. There were now four stakeholders in the room, and they agreed that we were almost there. The third iteration was fantastic. A show was coming up, and we would get some press. Life was good.

Then came the fourth iteration. It was a dark, gloomy day, one day before the show, and we had seven stakeholders in the room. One button was out of place. Not a big problem to fix, but once that problem was identified, a landslide of new problems followed. We had designed this as a tree-based application, and they now wanted a wizard-based solution; we added valves on one consolidated screen, and they wanted multiple screens; the pictures of the valves for ordering impressed marketing but ticked off the engineers. The colors were bad, the flow was bad, and even the valve calculations were called into question.

At the end of the meeting, the application we had designed didn't seem to fit even the most basic needs. We didn't attend the show, and we had to spend yet another iteration fixing all of the supposedly signed-off items that they wanted changed.

How could modeling to the invariant have helped? In our case the big mistake was not separating the invariants from the changeable elements at the beginning of the design. We took the client's vision and coded it into a prototype. During the successive iterations we added and subtracted elements at the whims of the client. In smaller iterations we asked for client feedback. We assumed that all the elements were correct and never separated what the client thought were the essential elements (the invariants) from more frivolous changes. In the end, we wound up changing some of the structural elements themselves.

We never asked whether the MDI window-based tree approach was invariant, because it came from the original vision of the client. It turns out that while that vision initially looked good to the client, they ultimately had a different one. If we had identified the invariants, the elements we were highly confident would not change, and separated them from the variant elements, we could have concentrated the client's buy-in on those variant elements. In doing so, the stakeholders' focus would have been on changing the variant elements rather than any element at all. Instead we asked whether some of the changeable conditions, such as the way we captured the inputs for calculating the valves, were OK. The input conditions never changed (and we had made them the easiest part to change), but the overall window structure did. Everything was open to change, and we could not force convergence.

If we'd had the concept of invariants in place and a certainty factor of 60 to 70 percent established, we would have been able to succeed more quickly. The requirements seemed solid, but we didn't have enough domain knowledge to judge them. After the third iteration, we could have raised the certainty factor to 90 percent to widen the separation between what was really invariant and what wasn't. And by documenting this process, we could have shown our decision process to the executive managers who were paying our bills.

Modeling to invariants can work within an agile method, because agile promotes iteration and constant refactoring; modeling to invariants controls that refactoring, which isn't necessarily a bad thing. Developers should always be reevaluating what they've done, and invariants provide a convergence path.

Next month, I'll continue this UML series with an article on component diagrams; later, I'll analyze sequence diagrams.



Mark Goetsch is an Enterprise Software Architect with Wheels, Inc., and has more than fifteen years of experience in software development, enterprise modeling, and software architecture. He has another seven years of experience as a trader, dealer, and broker, and is an expert in e-trading. He was one of the enterprise modelers of the Tapestry Project at ABNAMRO, one of the most extensive uses of UML component and deployment diagrams to date. He is the lead architect for the MAP (Meta-Architectural Processes) framework, a framework for mapping the role of the software architect into software development processes. Mark is certified in Intermediate UML with the OMG and is a member of WWISA. He also has a Master's in Distributed Systems from DePaul University.