Architecting Stable Systems and Solid Code

Software is arguably the most complicated thing created by humans. The number of moving parts is orders of magnitude greater than in any physical manufactured object. Software is also reaching into more and more aspects of our lives and our environment; in 2011, Marc Andreessen coined the phrase “Software is eating the world.” Now, add the fact that software is infamously unreliable and you get a pretty volatile situation. That’s where solid code comes in. If we can write more solid code and build more reliable systems, software becomes a force for good rather than a ticking time bomb (remember the Y2K scare?).

What Is Solid Code?

Solid code does what its developers intended it to do and gracefully handles anything you throw at it. Note the part about developer intent: sometimes the code is solid, but the intention was wrong. That should be addressed at a different level, during requirements gathering. I don’t recommend a waterfall approach here, though. Requirements are often fuzzy to begin with, and they become more refined and clear during development and incremental deployments. That means solid code should also support refactoring and modification. Code that was built to a strict spec and can’t be modified is not solid; it is just plain difficult.

Different systems and applications have different profiles of importance, risk and consequences of errors. For example, a script you write to automate some tasks for yourself is not the same as code that launches a space shuttle.

The Cost of Solid Code

Writing solid code is not free. It takes time and effort, as you’ll see in the following sections, and that applies in particular to large code bases that need to evolve. You need to decide what’s important to you and adjust the level of rigor you follow accordingly.

Architecture and Design

In large systems, architecture is key: such systems take a long time to develop, are expected to provide a lot of value, and require many developers. As a result, if the architecture is not solid, the system devolves into a mess very quickly; dependency hell, in particular, can bring a large system down fast. The longer a system takes to develop, the higher the likelihood that team members will leave, that the original business requirements will need to be revamped, and that you’ll have to upgrade to new technologies. Without strong architectural vision and integrity, there is only a very small chance it will all work out magically. Hard work will not compensate for brittle architecture.

Small Is Beautiful

I discussed the perils of large systems in the previous section. The way to battle them is to break the system down into small, loosely-coupled components. This is not easy, but it is required. Many component technologies exist for in-process, out-of-process and cross-machine use cases. For distributed systems, APIs and micro-services are the way to go.
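
To make this concrete, here is a minimal sketch in Python; the Order, PricingService and InvoiceService names are purely illustrative, not taken from any real codebase. Each component is small, has a single responsibility, and exposes only a narrow interface to the rest of the system.

```python
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount_cents: int


class PricingService:
    """Small component: its only job is pricing orders."""

    def total_with_tax(self, order: Order, tax_rate: float = 0.1) -> int:
        return round(order.amount_cents * (1 + tax_rate))


class InvoiceService:
    """Small component: its only job is rendering invoices.

    It collaborates with PricingService through its public method,
    not its internals, so either side can change independently.
    """

    def __init__(self, pricing: PricingService) -> None:
        self._pricing = pricing

    def render(self, order: Order) -> str:
        total = self._pricing.total_with_tax(order)
        return f"Invoice {order.order_id}: {total} cents"


if __name__ == "__main__":
    invoice = InvoiceService(PricingService())
    print(invoice.render(Order("A-42", 1000)))
```

The same shape scales up: in a distributed system, each of these would be a service with an API instead of a class with methods.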

Dependencies

Dependencies are the bane of every non-trivial software system. The way to combat dependency hell is to follow the dependency injection principle: components receive all their dependencies from the outside and communicate through abstract interfaces or APIs. This applies to classes and objects in-process as well as to clients, servers and services in the distributed case.
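
Here is a rough sketch of dependency injection in Python, built around a hypothetical report generator that needs to send messages; MessageSender, SmtpSender and ReportGenerator are made-up names for illustration.

```python
from abc import ABC, abstractmethod


class MessageSender(ABC):
    """Abstract interface the report generator depends on."""

    @abstractmethod
    def send(self, recipient: str, body: str) -> None: ...


class SmtpSender(MessageSender):
    """One concrete implementation (the real SMTP call is elided)."""

    def send(self, recipient: str, body: str) -> None:
        print(f"SMTP -> {recipient}: {body}")


class ReportGenerator:
    def __init__(self, sender: MessageSender) -> None:
        # The dependency is injected; ReportGenerator never names SmtpSender.
        self._sender = sender

    def publish(self, recipient: str) -> None:
        self._sender.send(recipient, "weekly report")


if __name__ == "__main__":
    ReportGenerator(SmtpSender()).publish("ops@example.com")
```

Because ReportGenerator depends only on the abstract interface, you can wire in a fake sender for tests or a queue-backed sender in production without changing a line of its code.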

Information Hiding

The need-to-know principle applies to software systems too. If component X knows nothing about the internals of component Y, then X is not impacted at all if Y is modified, eliminated, split into Y1 and Y2, etc. Less shared information means a more stable system. This works well, of course, with small components that interact through interfaces/APIs.
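
As an illustration, here is a small, hypothetical Inventory component whose internal representation is hidden behind a handful of methods; callers never see the dictionary, so it can later be split up or replaced by a database without touching them.

```python
class Inventory:
    """Callers only see restock(), available() and reserve()."""

    def __init__(self) -> None:
        self._stock: dict[str, int] = {}  # hidden internal representation

    def restock(self, sku: str, count: int) -> None:
        self._stock[sku] = self._stock.get(sku, 0) + count

    def available(self, sku: str) -> int:
        return self._stock.get(sku, 0)

    def reserve(self, sku: str, count: int) -> bool:
        # Reservation either succeeds fully or leaves the stock untouched.
        if self._stock.get(sku, 0) < count:
            return False
        self._stock[sku] -= count
        return True
```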

Testing

Thorough and deep testing is mandatory. By definition, if you didn’t test, you don’t know that it works; if you don’t know that it works, assume it doesn’t, and test it until you do. In some cases you may accept the risk of something not working, provided the consequences of failure are understood and properly contained.
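
For example, here is a minimal unit test using Python’s standard unittest module; total_with_tax is just a stand-in for whatever logic you actually need to verify.

```python
import unittest


def total_with_tax(amount_cents: int, tax_rate: float = 0.1) -> int:
    """Illustrative function under test."""
    if amount_cents < 0:
        raise ValueError("amount must be non-negative")
    return round(amount_cents * (1 + tax_rate))


class TotalWithTaxTest(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(total_with_tax(1000), 1100)

    def test_rejects_negative_amounts(self):
        with self.assertRaises(ValueError):
            total_with_tax(-1)


if __name__ == "__main__":
    unittest.main()
```

The specific assertions matter less than the habit: every behavior you rely on gets a test that proves it.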

Refactoring

Large systems evolve. You have to refactor diligently to make sure the code matches the current understanding of the system. If you neglect refactoring, you’ll end up with misalignment, and over time you won’t be able to add new features or fix bugs. Refactoring is like flossing. Do it!
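
Here is a tiny, hypothetical example of a behavior-preserving refactor: a discount rule that was duplicated and implicit is extracted into one named function, so the code matches the current understanding that members get 10% off.

```python
# Before: the rule is duplicated and implicit.
def checkout_total_before(price_cents: int, is_member: bool) -> int:
    return int(price_cents * 0.9) if is_member else price_cents


def shipping_quote_before(base_cents: int, is_member: bool) -> int:
    return int(base_cents * 0.9) if is_member else base_cents


# After: one named concept, one place to change it.
MEMBER_DISCOUNT = 0.10


def apply_member_discount(amount_cents: int, is_member: bool) -> int:
    return int(amount_cents * (1 - MEMBER_DISCOUNT)) if is_member else amount_cents


def checkout_total(price_cents: int, is_member: bool) -> int:
    return apply_member_discount(price_cents, is_member)


def shipping_quote(base_cents: int, is_member: bool) -> int:
    return apply_member_discount(base_cents, is_member)
```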

Code Review

Code review helps make sure that a team of developers stays aligned on how to evolve the system. It helps new developers learn and get real-time feedback from more experienced colleagues, and it catches issues the tests don’t cover early, giving you an opportunity to improve the tests.

Performance

Solid, well-designed code is easier to optimize because it’s easier to pinpoint the real bottlenecks and improve them with little impact on other parts of the system. In some cases there is tension between flexibility and performance, but except for really low-level projects you can typically keep the system flexible and still hit your performance goals. It takes skill and a great deal of profiling.
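
A profile tells you where the time actually goes before you touch anything. Here is a minimal sketch using Python’s built-in cProfile; the workload here is, of course, illustrative.

```python
import cProfile
import pstats


def slow_sum_of_squares(n: int) -> int:
    return sum(i * i for i in range(n))


def workload() -> None:
    for _ in range(100):
        slow_sum_of_squares(50_000)


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Print the ten most expensive call sites by cumulative time,
# and optimize only what the profile actually blames.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```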

Writing solid code requires following many best practices, making judgment calls and having the agility to adapt. For large-scale, critical distributed systems you need at least one experienced and capable system architect who is well versed in all of these practices, knows how to combine them, and can guide the team forward.
