Modeling to Invariants: A Faster Route to Design Convergence

It was the final night of the SD West conference in Santa Clara, CA, and attendees wandered the hotel seeking out the various BOF (birds of a feather) sessions they were interested in. Scott Meyers, the C++ guru, was leading a large band through the bar toward a multi-threading BOF; the mere size of the group gave me second thoughts about the popularity of Java, VB, or C#.

I, however, was searching for the MDA BOF. I had attended Stephen Mellor's MDA session earlier and looked forward to spending some time finding out more about MDA and, more importantly, asking how it became associated with the agile movement. I half expected Bob Martin to show up, since he and Mellor had had fun trading quips during the morning sessions. Unfortunately, "Uncle Bob" didn't show for this one, but that didn't prevent it from being one of those sessions that changes your life.

As I walked around the hotel bar looking for the BOF, I kept wondering if I was in the wrong place. Luckily I met up with another attendee who also hailed from Chicago, and we sat together scanning the room for the session. Then, right on cue, Stephen Mellor appeared. After a couple of beers and some "California-style" health-food pizza arrived, we were ready to begin.

After we went around the room, each giving pat answers about who we were, what we did, and what we expected out of MDA, Steve proceeded to give us the five-minute version of his earlier MDA lecture. We ordered some spicy wings to go with our pizza, then spent some time talking about the role of MDA in our firms. One person had even done some real MDA modeling. Even more beer arrived, so much that we had to give some away. Then I asked the question that was burning in my mind: "How did MDA ever get associated with the agile movement?"

Editor's Note: Bob Martin is one of the framers of the Agile Manifesto. He is called "Uncle Bob" because of his relationship with people like the Three Amigos (the developers of UML).

In 2001, Steve explained, he was approached by some of the soon-to-be-agile folks to attend a meeting in the Wasatch Mountains of Utah. He wasn't sure why a modeler like himself was asked, but he went anyway. This turned out to be the signing of the Agile Manifesto; you will even see Steve's name among the signatories. During the meeting Steve asked the agile crowd why modeling was bad. The answer, of course, was that nothing should separate the developer from executable code: the only way to really know that a system works is through the executable code. After thinking on this a bit, Steve asked: "What if the models were executable? Would that fit the agile bill?" "Of course it would," replied "Uncle Bob" and the agile alliance. The answer to this, of course, was MDA.

Steve's was a great story, and at the time I thought it alone would make the night worthwhile. I couldn't have been more wrong.

Opening My Eyes to Invariants
During the rest of the conversation the topic turned to modeling. I asked Steve: "How do we know when a model is good?" Steve answered with another story, this one about how, years ago, he was working on a project to help straighten out a mess with a configuration of pipes and valves. A company had been adding to this system of pipes and valves for years and was considering another change to it. They asked two of their senior engineers to suggest how best to make the change. The first engineer suggested moving a couple of pipes around. To the shock of management, the other engineer panicked, claiming that such a move would blow up the whole configuration. So the company called in Steve and his team to take a stab at it. Steve and his team decided to model only the invariants: those parts of the configuration that had to stay in the same place. After a couple of months of modeling effort and a few versions of the invariant model, they arrived at a final solution that was far simpler than any of the earlier versions.

The point of the story, of course, was to model to the level of invariants. This shook me to the core. A few hours later the BOF was finished, but I spent the next couple of weeks thinking about this new idea. Though it seemed rather obvious, its ramifications proved very interesting.

Modeling to the Level of Invariants
Modeling to the level of invariants is not a new concept in software development; we use it all the time without realizing it. Invariants are those elements that do not change from one iteration of a process to the next. Top-down design assumes that as we model the next level down, we don't modify anything in the higher level; the elements of the higher level are the invariants. Use cases, likewise, have invariant conditions that do not change from the beginning of the use case to the end: they are referenced throughout the use case but never changed.

RichMen, Inc., the fictional example from my UML article series here on DevX, is a good example of modeling to the level of invariants. The first article considered only the classes for Brokers, Clients, and Securities. I needed at least those three classes to even begin to attack modeling the application; at that point I didn't know anything else. Because these are core concepts, they will be there throughout the modeling process. This is what is meant by invariants. I did, however, modify Securities quite a bit, which is not prohibited but should be discouraged. In the convergence section below, I'll show how to discourage the modification of invariants.
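To make the invariant core concrete, here is a minimal code sketch of what those three classes might look like. The class names come from the article; every field and method is an illustrative assumption, not the actual design from the UML series.

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical sketch of the invariant core of RichMen, Inc.
// These three classes are expected to survive every iteration unchanged.
class Security {
    final String symbol;
    Security(String symbol) { this.symbol = symbol; }
}

class Client {
    final String name;
    final List<Security> holdings = new ArrayList<>();
    Client(String name) { this.name = name; }
}

class Broker {
    final String name;
    final List<Client> clients = new ArrayList<>();
    Broker(String name) { this.name = name; }
}

public class RichMenCore {
    public static void main(String[] args) {
        // Wire the core concepts together just to show their relationships.
        Broker broker = new Broker("Jones");
        Client client = new Client("Smith");
        client.holdings.add(new Security("ACME"));
        broker.clients.add(client);
        System.out.println(broker.name + " manages " + broker.clients.size() + " client(s)");
    }
}
```

Everything else in the design (screens, wizards, calculations) stays on the whiteboard until it earns its way into this core.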

The invariants that are modeled become the playground for our design sessions. Napkin drawings, whiteboards, and other semi-permanent methods of modeling capture all the other classes to be coded, refactored, and tested. The invariant model, however, won't change during this process.

Just as Steve iteratively refactored his pipe model, we iteratively refactor our own. When we are confident that we have a solid design, we take the results of the previous iteration, add them to the invariants, decide what is still invariant, and that becomes our next model. We iterate again and again until we reach a point where we can't add to the invariant elements anymore. This is the idea of convergence.
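As a rough illustration of that loop, the sketch below treats design elements as simple names and promotes them into the invariant set each iteration until an iteration adds nothing new. The element names and the promotion rule are assumptions for demonstration only, not a prescribed process.

```java
import java.util.*;

// Hypothetical sketch of the iterate-until-convergence loop described above.
public class ConvergenceLoop {
    public static void main(String[] args) {
        // Start with the core concepts as the invariant set.
        Set<String> invariants = new LinkedHashSet<>(List.of("Broker", "Client", "Security"));

        // Each inner list stands in for the elements judged "still invariant"
        // at the end of one design iteration (whiteboards, prototypes, tests).
        List<List<String>> iterations = List.of(
                List.of("Order", "Position"),
                List.of("Order", "PriceFeed"),
                List.<String>of());          // an iteration that adds nothing

        for (List<String> survivors : iterations) {
            List<String> promoted = new ArrayList<>(survivors);
            promoted.removeAll(invariants);  // keep only genuinely new invariants
            if (promoted.isEmpty()) {
                System.out.println("Converged: " + invariants);
                break;                       // nothing left to promote: convergence
            }
            invariants.addAll(promoted);
            System.out.println("Invariants grew to: " + invariants);
        }
    }
}
```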

Author's Note: Notice that in the previous example (the one from my earlier article), UML was not a significant part of the modeling process. Modeling software doesn't always mean diagramming; it means building a representation of reality. A model can live in code, in the case of a prototype, or in UML, in the case of a diagram. Either is valid as long as the end result transforms the vision into reality. A prototype should also not be confused with production code. The act of modeling is to strip away extraneous parts until all that is left is the minimum needed to solve the design problem. In agile development this is sometimes described with the process pattern of finding the simplest solution, but I think it's a mistake to assume this minimalist approach can usually survive in production. Producing production code, no matter how simple, always leads one to add extra elements that are needed for testing and acceptance but may not be needed to demonstrate the worth of the model.

The Idea of Convergence
Convergence means arriving at some final outcome. In software development, it is the process of designing software until it becomes a finished program and design that accomplishes what the user wanted. It may not include every feature conceptualized, but the program matches the vision.

There have been many approaches to convergence: upfront design, refactoring, phases, and iterative development. All of these tackle the problem of trying to encapsulate the vision of the final program, but none of them implicitly forces convergence. Upfront design assumes that the model is already finished except for some minor changes and doesn't allow for major architectural shifts. Refactoring works well, but there is no evidence that the refactored design is better than the original. Phases focus on functional development, but just adding functions to a program doesn't force convergence to the vision; it just checks off the requirements. Iterative development, in the words of many agile developers, stops when the project runs out of money or time, which certainly doesn't sound like a convergent idea. In all of these methods there is no separation between design elements that are invariant and those that change, and no mechanism for something that changes to stop changing.

To demonstrate the need for forced convergence, consider an analogy with building (structural) architecture and how it relates to dynamic systems. A building has a frame. As the frame becomes fixed and elements are added, the rest of the building becomes fixed and unchangeable. There is a steady progression from the structure of the building to its final form, from changeable elements to invariant structure. In buildings we have the vision and the blueprint. In modeling dynamic systems, such as valve configurations, we rely on modeling to the invariant conditions. Without invariant conditions a dynamic system has an infinite set of solutions that may or may not converge to some final solution. By adding to the set of invariants, the dynamic system becomes more predictable and the structure converges to a final form.

In some dynamic systems, adding arbitrary invariant conditions forces a false convergence that leads to an incorrect solution. So we must check that the invariant conditions are not arbitrary but have a reason for being invariant. This is where Steve's idea of modeling to the invariants made sense to me: if I add invariants only when they truly become invariant, and look for the simplest solution, maybe I can find the correct path in my applications the same way Steve's valve solution yielded the simplest, correct solution for him.

Dynamic systems have one problem that building architecture does not. Building architecture is fixed by the vision; in a dynamic system such as a software design, anything and everything within the vision is liable to change. Sometimes even an element that was an invariant must change. But if you keep changing invariant conditions, the system is guaranteed never to converge.

In a dynamic system, you can use techniques such as Monte Carlo methods to arrive at convergence. Software has no direct equivalent, but a quick way to simulate one is to set a certainty factor at the beginning of the project. If the certainty that an invariant is correct is greater than the project's certainty factor, we leave that element as invariant; otherwise we change it to variant. The certainty of each invariant will increase or decrease as the iterations go forward, until eventually the strong invariant conditions stabilize and the weak ones become variants.

Author's Note: You can set your certainty factor higher or lower depending on how many elements you feel it's appropriate to include as invariants. The higher the cutoff, the fewer the invariants, but the quicker the convergence (see Figure 1).

Here's an example of how this works. Suppose I set the certainty factor high, at 80 percent: I must feel 80 percent confident that an invariant is correct before going to the next iteration. In the next iteration I find that my confidence in one of my invariants has dropped to 70 percent, so I change that invariant to variant for the following iteration. If, during that iteration, the element rises above the 80 percent certainty factor, it becomes invariant again. I know I have final convergence when every element either meets the 80 percent certainty factor or, I feel, will never reach it. The stability of the final convergence is usually tested for a couple of iterations to see if the number of invariants changes.
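Here's a minimal sketch of that bookkeeping, assuming we track a single confidence number per element. The element names and confidence values are made up for illustration; in practice the numbers come from the team's judgment each iteration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the certainty-factor rule: elements at or above the
// project cutoff stay invariant; anything below it reverts to variant.
public class CertaintyFactor {
    static final double CUTOFF = 0.80;  // project-wide certainty factor

    public static void main(String[] args) {
        // Current confidence that each design element is truly invariant
        // (hypothetical values for one iteration).
        Map<String, Double> confidence = new LinkedHashMap<>();
        confidence.put("Broker", 0.95);
        confidence.put("Client", 0.90);
        confidence.put("TreeNavigation", 0.70);  // slipped below the cutoff

        for (Map.Entry<String, Double> e : confidence.entrySet()) {
            String status = e.getValue() >= CUTOFF ? "invariant" : "variant";
            System.out.printf("%-15s %.2f -> %s%n", e.getKey(), e.getValue(), status);
        }
    }
}
```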

Figure 1. Number of Invariants for a Given Certainty Factor: The graph describes convergence and should not be read as a statistical graph. For high levels of certainty, convergence is quicker but results in lower invariant percentages. For lower certainty factors, the invariant percentages are higher but convergence is weaker. For middle levels of certainty, the invariant percentages swing up and down more, resulting in a longer time to converge with an average number of invariants.

When We Fail to Converge
A real-world example will give this some context. I was a consultant for a company that wanted a program to do valve configurations (nothing to do with the type or size of valves in Stephen Mellor's story). We had a very clear diagram of what was asked for. We built some prototypes using the core technologies for the project and represented the application with a simplified UML model. Over the coming months we did no fewer than five iterations. The first iteration impressed the primary stakeholder; he complimented us as the only ones to have really captured the vision. A couple of tweaks were needed, however. The second iteration was even more positive: there were now four stakeholders in the room, and they agreed that we were almost there. The third iteration was fantastic. A show was coming up, and we would get some press. Life was good.

Then came the fourth iteration. It was a dark, gloomy day, one day before the show, with seven stakeholders in the room. One button was out of place. Not a big problem to fix, but once that problem was identified there came a landslide of new ones. We had designed a tree-based application, and they now wanted a wizard-based solution; we added valves on one consolidated screen, and they wanted multiple screens; the pictures of the valves for ordering impressed marketing but ticked off the engineers. The colors were bad, the flow was bad, and even the valve calculations were called into question.

By the end of the meeting, the application we had designed didn't seem to fit even the most basic needs. We didn't attend the show, and we had to spend yet another iteration changing all of the supposedly signed-off items.

How could modeling to the invariants have helped? Our big mistake was failing to separate the invariants from the changeable elements at the beginning of the design. We took the client's vision and coded it into a prototype. During the successive iterations we added and subtracted elements at the whims of the client. In smaller iterations we asked for client feedback, but we assumed that all the elements were correct and never separated what the client thought were the essential elements, the invariants, from more frivolous changes. In the end, we wound up changing some of the structural elements themselves.

We never asked whether the MDI window-based tree approach was invariant, because it came from the client's original vision. It turns out that while that vision initially looked good to the client, their vision shifted. If we had separated the high-confidence invariants from the variant elements, we would have isolated what we were certain would not change and concentrated the client's buy-in on the variant elements. The stakeholders' focus would then have been on changing the variant elements, not any element at all. Instead we asked whether some of the changeable conditions, such as the way we captured the inputs for calculating the valves, were OK. The input conditions never changed (and we had made them the easiest part to change), but the overall window structure did. Everything was open to change, and we could not force convergence.

If we'd had the concept of invariants in place and a certainty factor of 60 to 70 percent established, we would have succeeded more quickly. The requirements seemed solid, but we didn't have enough domain knowledge to judge them. After the third iteration, we could have raised the certainty factor to 90 percent to widen the separation between what was really invariant and what wasn't. And by documenting this process we could have shown our decision-making to the executive managers who were paying our bills.

Modeling to invariants can work in an agile method because it promotes iterations and constant refactoring. It controls the refactoring, which isn't necessarily a bad thing: developers should always be reevaluating what they've done, and invariants provide a convergence path.

Next month, I’ll continue this UML series with an article on component diagrams; later, I’ll analyze sequence diagrams.
