More Just Isn’t More…

Most of us have probably heard the expression ‘less is more’, or know the ‘keep it simple, stupid’ principle. These are well-accepted principles for design and architecture, and something any software architect should aspire to. Similarly, Richard P. Gabriel (a major figure in the Lisp programming community, an accomplished poet, and currently an IBM Distinguished Engineer) coined the phrase ‘Worse Is Better’ in formulating a software concept that Christopher Alexander (a ‘real’ architect famous within software engineering) might have termed ‘piecemeal growth’ – i.e., start small and add later.

But what happens if we continue to add more, even if the additions are just small, simple pieces?

The suggestion is that more isn’t just… more.

A bit of history digging reveals that, in 1962, Herbert Simon (granted the ACM Turing Award in 1975, and the Nobel Memorial Prize in 1978) published The Architecture of Complexity, describing the structure and properties of complex systems. He concluded that complex systems are inherently hierarchical in structure and exhibit the near-decomposability property – i.e., a complex system is essentially a system of mostly independent systems. In 1972, P.W. Anderson (indirectly) built on this line of thought in “More Is Different”. Anderson argued that an understanding of how the parts (e.g., systems) work does not equate to an understanding of how the whole works. We can all relate to this: biology and medicine can explain, to a large degree, how our bodies work, yet when all the parts are combined into a human body, elements such as emotion, intelligence, and personality form part of the ‘whole’ and are distinctly not medicine or biology. Yet, in Simon’s terminology, the human body is a system of systems exhibiting the near-decomposability property – otherwise, heart transplants wouldn’t be possible.

Near-decomposability is a property we also desire in software design, although it is better known there as modularity. Software architecture is essentially about separating a system into components or basic elements, based primarily on the fundamental principle of information hiding, first formulated in Parnas’s seminal 1972 paper On the Criteria to Be Used in Decomposing Systems into Modules. But as Richard Gabriel argued in his essay Design Beyond Human Abilities, there is a key difference between software modularity and the near-decomposability property of systems of systems.
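To make the information-hiding principle concrete, here is a minimal sketch in Java (the names and the symbol-table example are illustrative, not drawn from Parnas’s paper): the module is defined by what it does – the interface it offers – while the design decision most likely to change, its internal data structure, stays hidden behind it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// The module's published interface: callers depend only on what it does.
interface SymbolTable {
    void put(String name, int value);
    int get(String name);
}

// The hidden design decision: a HashMap today, perhaps a sorted tree or a
// database tomorrow. Swapping it out never touches any caller.
final class HashSymbolTable implements SymbolTable {
    private final Map<String, Integer> entries = new HashMap<>();

    @Override
    public void put(String name, int value) {
        entries.put(name, value);
    }

    @Override
    public int get(String name) {
        Integer value = entries.get(name);
        if (value == null) {
            throw new NoSuchElementException("unknown symbol: " + name);
        }
        return value;
    }
}
```

The point is Parnas’s: modules are chosen around the decisions most likely to change, not around the steps of a processing flowchart.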

Within a software engineering context, a module is traditionally defined by what it does rather than by who produced it – the latter is the definition used by Simon and Anderson.

This is a significant difference, and one we need to get used to as our software systems become increasingly complex.

Those of you who know Conway’s law from 1968 shouldn’t be surprised, however. The ‘law’ states that “a software system necessarily will show a congruence with the social structure of the organization that produced it”. In other words, the modular structure of a software system reflects the organisation’s social structure – i.e., who rather than what. This doesn’t imply that you should skip your ‘structured design’, ‘object oriented programming’, or ‘patterns’ course at uni and head down to the bar to socialise instead, but I think there are a number of implications that software architects, especially, need to pay attention to when dealing with a system of software systems:

  • Software architects can no longer pretend that they are software versions of ‘real’ building architects. We’ll need more of the skills possessed by policy makers, negotiators, and other communicators (though not necessarily politicians). Robert Schaefer’s ‘Software maturity: design as a dark art’ is a place to start.
  • The duplication of software functionality or applications within organisations or commercial enterprises isn’t necessarily a bad thing in itself; instead, we need to watch for warning signs such as duplicated organisational roles (or responsibility overlaps) or missing communication channels (related to the first point). I know many might be up in arms at the mere suggestion of duplication (wasting resources is never good!), but we need to be serious about the ‘cost of re-use’ versus the ‘cost of (almost) re-implementation’. Even when viewed within the context of Total Cost of Ownership, my belief is that the former isn’t always the clear winner.
  • Focus on interoperability instead of integration. So what’s the difference? NEHTA’s Interoperability Framework captures it as the ability to maintain integration over time at minimum effort – that is, to maintain integration despite changes to, or replacement of, the communicating systems (a small sketch of this idea follows the list). Other references include the comprehensive ‘A Survey of Interoperability Measurement’ – if you can’t measure it, then you are not going to get it.
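As a small, hedged illustration of that definition (this is not NEHTA’s framework itself, and the message fields are purely hypothetical), the ‘tolerant reader’ style below keeps an exchange working even as the sending system evolves: the consumer reads only the fields it needs and ignores the rest, so integration survives changes on the other side.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a consumer that stays integrated with its partner
// system even when that system adds fields or otherwise changes its messages.
public final class TolerantReader {

    // Read only what we need; ignore everything else the sender includes.
    static String patientName(Map<String, Object> message) {
        Object name = message.get("patientName");
        return name instanceof String ? (String) name : "unknown";
    }

    public static void main(String[] args) {
        // An older and a newer version of the sending system produce different
        // messages; both still interoperate with the same reader.
        Map<String, Object> fromOlderSender = Map.of("patientName", "Jane Doe");
        Map<String, Object> fromNewerSender = Map.of(
                "patientName", "Jane Doe",
                "patientId", "12345",
                "allergies", List.of("penicillin"));

        System.out.println(patientName(fromOlderSender)); // Jane Doe
        System.out.println(patientName(fromNewerSender)); // Jane Doe
    }
}
```

The same instinct applies at larger scale – stable contracts, adapters at the boundary, and consumers that don’t assume they know everything about their partners.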

Today’s realities of software development are essentially about adding, changing, or removing parts of an existing complex software system, through a continuous process of negotiations, bargaining, and policing. Our ability to do this is directly linked to our software’s ability to interoperate.

John Brøndum
John Brøndum has worked as an IT Architect and Consultant since 1997 for a number of technology companies, including IBM, across finance, retail, telecommunications, energy, and rail. With a wealth of project-based experience (e.g., Service Oriented Architecture, Business Process Management, Reference Architecture), John is currently a Ph.D. candidate at UNSW and NICTA. His research is focused on the architectural complexities caused by increasing inter-dependencies between highly integrated enterprise applications. John works independently as a consulting architect, offering a range of architectural services.
