More Just Isn’t More…

Most of us have probably heard the expression ‘less is more‘, or know of the ‘keep it simple, stupid‘ principle. These are general, well-accepted principles for design and architecture, and something any software architect should aspire to. Similarly, Richard P. Gabriel (a major figure in the Lisp programming world, an accomplished poet, and currently an IBM Distinguished Engineer) coined the phrase ‘Worse Is Better‘, formulating a software concept that Christopher Alexander (a ‘real’ architect famous within software engineering) might have termed ‘piecemeal growth’ – i.e., start small and add later.

But what happens if we keep adding more, even if each addition is just a small, simple piece?

The suggestion is that more isn’t just… more.

A bit of history digging reveals that, in 1962, Herbert Simon (granted the ACM Turing Award in 1975, and the Nobel Memorial Prize in 1978) published The Architecture of Complexity, describing the structure and properties of complex systems. He concluded that complex systems are inherently hierarchical in structure and exhibit the near-decomposability property – i.e., a complex system is essentially a system of mostly independent systems. In 1972, P.W. Anderson (indirectly) built on this line of thought in “More Is Different“. Anderson argued that an understanding of how the parts (e.g., the individual systems) work does not equate to an understanding of how the whole works. We can all relate to the point that biology and medicine can explain, to a large degree, how our bodies work, yet when all the parts are combined into a human body, elements such as emotion, intelligence, and personality emerge as part of the ‘whole’ – and these are distinctly not matters of medicine or biology. Yet, in Simon’s terminology, the human body is a system of systems exhibiting the near-decomposability property – otherwise, heart transplants wouldn’t be possible.

Near-decomposability is a property we desire in software design – although there it is better known as modularity. Software architecture is essentially the act of ‘separating into components or basic elements’, based primarily on the fundamental principle of information hiding, first formulated in Parnas‘s seminal 1972 paper, On the Criteria to Be Used in Decomposing Systems into Modules. But as Richard Gabriel argued in his essay, Design Beyond Human Abilities, there is a key difference between software modularity and the near-decomposability property of systems of systems.

Within a software engineering context, a module is traditionally defined by what it does rather than by who produced it – the latter is the definition used by Simon and Anderson.

This is a significant difference, and one we need to get used to as our software systems become increasingly complex.
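
To ground the software side of that distinction, here is a minimal sketch of a module defined by what it does – information hiding in Parnas’s sense. The SymbolTable example and all of its names are illustrative assumptions of mine, not drawn from the paper:

```typescript
// A module is defined by its contract (what it does), not by its internals.
interface SymbolTable {
  put(name: string, value: number): void;
  get(name: string): number | undefined;
}

class MapSymbolTable implements SymbolTable {
  // The representation is hidden; swapping the Map for a sorted array
  // or a trie would not affect any caller.
  private entries = new Map<string, number>();

  put(name: string, value: number): void {
    this.entries.set(name, value);
  }

  get(name: string): number | undefined {
    return this.entries.get(name);
  }
}

// Client code depends only on the interface.
const table: SymbolTable = new MapSymbolTable();
table.put("x", 42);
console.log(table.get("x")); // 42
```

Nothing here says anything about who builds or operates the module – which is precisely the dimension Simon, Anderson, and Conway bring in.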

Those of you who know Conway’s law from 1968 shouldn’t be surprised, however. The ‘law’ states, “a software system necessarily will show a congruence with the social structure of the organization that produced it”. In other words, the modular structure of a software system reflects the organisation’s social structure – i.e., who rather than what. This doesn’t imply that you should skip your ‘structured design’, ‘object oriented programming’, or ‘patterns’ course at uni and head down to the bar to socialise instead, but I think there are a number of implications that software architects, especially, need to pay attention to when dealing with a system of software systems:

  • Software architects can no longer pretend that they are software versions of ‘real’ building architects. We’ll need skills closer to those of policy makers, negotiators, and other communicators (though not necessarily politicians). Robert Schaefer’s ‘Software maturity: design as a dark art‘ is a place to start.
  • The duplication of software functionality or applications within organisations or commercial enterprises isn’t necessarily a bad thing in itself; instead, we need to focus on warning signs such as duplicated organisational roles (or responsibility overlaps) and missing communication channels (related to the first point). I know many might be up in arms at the mere suggestion of duplication (wasting resources is never good!), but we need to be serious about the ‘cost of re-use’ versus the ‘cost of (almost) re-implementation’. Even viewed in the context of Total Cost of Ownership, my belief is that the former isn’t always the clear winner.
  • Focus on interoperability instead of integration. So what’s the difference? NEHTA‘s Interoperability Framework captures it as the ability to maintain integration over time, at minimum effort – that is, despite changes to, or replacement of, the communicating systems. Other references include the comprehensive ‘A Survey of Interoperability Measurement’ – if you can’t measure it, then you are not going to get it. A small code sketch of the distinction follows this list.
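
To make the interoperability point concrete, here is a minimal sketch – the PaymentPort, LegacyGatewayAdapter, and NewGatewayAdapter names are invented for illustration, not taken from NEHTA’s framework. Integration couples callers to one specific counterpart; interoperability inserts a stable contract, so the counterpart system can change or be replaced at minimum effort:

```typescript
// Interoperability: callers depend on a stable contract, not on any
// particular counterpart system.
interface PaymentPort {
  charge(accountId: string, cents: number): Promise<boolean>;
}

// Adapter for the system we integrate with today...
class LegacyGatewayAdapter implements PaymentPort {
  async charge(accountId: string, cents: number): Promise<boolean> {
    // ...translate the call into the legacy system's protocol here.
    return true;
  }
}

// ...and for its eventual replacement.
class NewGatewayAdapter implements PaymentPort {
  async charge(accountId: string, cents: number): Promise<boolean> {
    // ...translate the call into the new system's protocol here.
    return true;
  }
}

// The rest of the system is written against the port alone.
async function settleInvoice(port: PaymentPort): Promise<void> {
  const ok = await port.charge("acct-123", 9900);
  console.log(ok ? "settled" : "failed");
}

// Today:
settleInvoice(new LegacyGatewayAdapter());
// After the counterpart is replaced, only the wiring changes:
// settleInvoice(new NewGatewayAdapter());
```

The point isn’t the adapter pattern as such; it’s that the integration survives the replacement of a communicating system – which is exactly the NEHTA definition of interoperability above.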

Today’s realities of software development are essentially about adding, changing, or removing parts of an existing complex software system, through a continuous process of negotiation, bargaining, and policing. Our ability to do this is directly linked to our software’s ability to interoperate.

John Brøndum
John Brøndum has worked as an IT Architect and Consultant since 1997 for a number of technology companies, including IBM, across finance, retail, telecommunications, energy, and rail. With a wealth of project-based experience (e.g., Service Oriented Architecture, Business Process Management, Reference Architecture), John is currently a Ph.D. candidate at UNSW and NICTA. His research focuses on the architectural complexities caused by increasing inter-dependencies between highly integrated enterprise applications. John works independently as a consulting architect, offering a range of architectural services.
