More Just Isn’t More…

Most have probably heard the expression ‘less is more‘, or know of the ‘keep it simple, stupid‘ principle. These are general, well-accepted principles for design and architecture, and ones that any software architect should aspire to. Similarly, Richard P. Gabriel (a major figure in the world of the Lisp programming language, an accomplished poet, and currently an IBM Distinguished Engineer) coined the phrase ‘Worse Is Better‘ in formulating a software concept that Christopher Alexander (a ‘real’ architect famous within software engineering) might have termed ‘piecemeal growth’ – i.e., start small and add later.

But what happens if we continue to add more, even if the additions are just small, simple pieces?

The suggestion is that more isn’t just… more.

A bit of history digging reveals that, in 1962, Herbert Simon (granted the ACM Turing Award in 1975, and the Nobel Memorial Prize in 1978) published The Architecture of Complexity, describing the structure and properties of complex systems. He concluded that complex systems are inherently hierarchical in structure and exhibit the near-decomposability property – i.e., a complex system is essentially a system of mostly independent systems. In 1972, P.W. Anderson (indirectly) built on this line of thought in “More is Different“. Anderson argued that an understanding of how the parts (i.e., the subsystems) work does not equate to an understanding of how the whole works. We can all relate to the point that biology and medicine can explain, to a large degree, how our bodies work, yet when all the parts are combined into a human body, elements such as emotion, intelligence, and personality form part of the ‘whole’, and are distinctly not medicine or biology. Yet, in Simon’s terminology, the human body is a system of systems exhibiting the near-decomposability property – otherwise, heart transplants wouldn’t be possible.

Near-decomposability is a property we desire in software design – though it is better known as modularity. Software architecture is essentially the act of separating a system into components or basic elements, based primarily on the fundamental principle of information hiding, first formulated in Parnas‘s seminal 1972 paper: On the criteria to be used in decomposing systems into modules. But as Richard Gabriel argued in his essay, Design Beyond Human Abilities, there is a key difference between software modularity and the near-decomposability property of systems of systems.
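Parnas’s principle can be sketched in a few lines of code. The example below is illustrative only (the `SymbolTable` class and its methods are invented for this sketch, not taken from Parnas’s paper): callers depend on a module’s interface, never on its internal data representation, so the representation can be changed without touching any caller.

```python
class SymbolTable:
    """Stores name -> value bindings; the storage scheme is hidden."""

    def __init__(self):
        # Internal detail: could be a dict, a sorted list, or a trie.
        # Callers cannot tell, and must not care.
        self._entries = {}

    def define(self, name, value):
        self._entries[name] = value

    def lookup(self, name):
        # Returns None for undefined names; callers see only this contract.
        return self._entries.get(name)


# Callers use only the interface; swapping the dict for another
# structure would change none of this code.
table = SymbolTable()
table.define("x", 42)
print(table.lookup("x"))
```

The design decision being hidden here is the choice of data structure – exactly the kind of ‘likely to change’ decision Parnas argued each module should encapsulate.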

Within a software engineering context, a module is traditionally defined by what it does rather than by who produced it – the latter is the definition used by Simon and Anderson.

This is a significant difference, and one we need to get used to as our software systems become increasingly complex.

Those of you who know Conway’s law from 1968 shouldn’t be surprised, however. The ‘law’ states, “a software system necessarily will show a congruence with the social structure of the organization that produced it”. In other words, the modular structure of a software system reflects the organisation’s social structure – i.e., who rather than what. This doesn’t imply that you should skip your ‘structured design’, ‘object-oriented programming’, or ‘patterns’ course at uni and head down to the bar to socialise instead, but I think there are a number of implications that software architects, especially, need to pay attention to when dealing with a system of software systems:

  • Software architects can no longer pretend that they are software versions of ‘real’ building architects. We’ll need skills closer to those possessed by policy makers, negotiators, and other communicators (though not necessarily politicians). Robert Schaefer’s ‘Software maturity: design as a dark art‘ is a place to start.
  • The duplication of software functionality or applications within organisations or commercial enterprises isn’t necessarily a bad thing in itself; instead we need to focus on warning signs such as duplicated organisational roles (or responsibility overlaps) and missing communication channels (related to the first point). I know many might be up in arms at the mere suggestion of duplication (wasting resources is never good!), but we need to be serious about the ‘cost of re-use’ versus the ‘cost of (almost) re-implementation’. Even when viewed within the context of Total Cost of Ownership, my belief is that the former isn’t always the clear winner.
  • Focus on interoperability instead of integration. So what’s the difference? NEHTA‘s Interoperability Framework captures it as the ability to maintain integration over time at minimum effort – that is, to maintain integration despite changes to, or replacement of, the communicating systems. Other references include the comprehensive ‘A Survey of Interoperability Measurement’ – if you can’t measure it, then you are not going to get it.

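The distinction in that last point can be made concrete with a small sketch. The message format and field names below are hypothetical, but the pattern – a consumer that reads only the fields it needs and tolerates anything new the producer adds – is one common way to keep integration working as the communicating systems change:

```python
import json

def read_record(message: str) -> dict:
    """Tolerant reader: extract required fields, default the optional
    ones, and silently ignore any fields this consumer doesn't know."""
    data = json.loads(message)
    return {
        "id": data["id"],
        "name": data.get("name", "<unknown>"),
    }

# Version 1 of the producer's message:
v1 = '{"id": 7, "name": "Ada"}'
# Version 2 adds fields this consumer has never seen:
v2 = '{"id": 7, "name": "Ada", "region": "NSW", "schema": "2.0"}'

# The consumer keeps working unchanged across producer versions:
assert read_record(v1) == read_record(v2)
```

A tightly integrated consumer, by contrast, would validate the exact message shape and break the moment the producer evolved – integration achieved once, but not maintained over time.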
Today’s realities of software development are essentially about adding, changing, or removing parts of an existing complex software system, through a continuous process of negotiations, bargaining, and policing. Our ability to do this is directly linked to our software’s ability to interoperate.

John Brøndum
John Brøndum has worked as an IT Architect and Consultant since 1997 for a number of technology companies, including IBM, across finance, retail, telecommunications, energy, and rail. With a wealth of project-based experience (e.g., Service Oriented Architecture, Business Process Management, Reference Architecture), John is currently a Ph.D. candidate at UNSW and NICTA. His research is focused on the architectural complexities caused by increasing inter-dependencies between highly integrated enterprise applications. John works independently as a consulting architect, offering a range of architectural services.
