More Just Isn’t More…

Most of us have probably heard the expression ‘less is more’, or know of the ‘keep it simple, stupid’ principle. These are well-accepted principles for design and architecture, and ones that any software architect should aspire to. Similarly, Richard P. Gabriel (a major figure in the Lisp programming community, an accomplished poet, and currently an IBM Distinguished Engineer) coined the phrase ‘Worse Is Better’ in formulating a software concept that Christopher Alexander (a ‘real’ architect famous within software engineering) might have termed ‘piecemeal growth’ – i.e., start small and add later.

But what happens if we continue to add more, even if the additions are just small, simple pieces?

The suggestion is that more isn’t just… more.

A bit of history digging reveals that, in 1962, Herbert Simon (granted the ACM Turing Award in 1975, and the Nobel Memorial Prize in 1978) published The Architecture of Complexity, describing the structure and properties of complex systems. He concluded that complex systems are inherently hierarchical in structure and exhibit the property of near-decomposability – i.e., a complex system is essentially a system of mostly independent systems. In 1972, P.W. Anderson (indirectly) built on this line of thought in “More is Different”. Anderson argued that an understanding of how the parts (e.g., systems) work does not equate to an understanding of how the whole works. We can all relate to the point that biology and medicine can explain, to a large degree, how our bodies work, yet when all the parts are combined into a human body, elements such as emotion, intelligence, and personality form part of the ‘whole’, and are distinctly not medicine or biology. Yet, in Simon’s terminology, the human body is a system of systems exhibiting the near-decomposability property – otherwise, heart transplants wouldn’t be possible.

Near-decomposability is a property we desire as part of software design – although it is better known as modularity. Software architecture is essentially the act of separating a system into components or basic elements, based primarily on the fundamental principle of information hiding, first formulated in Parnas’s seminal 1972 paper, On the Criteria To Be Used in Decomposing Systems into Modules. But as Richard Gabriel argued in his essay Design Beyond Human Abilities, there is a key difference between software modularity and the near-decomposability of systems of systems.

Within a software engineering context, a module is traditionally defined by what it does rather than by who produced it – the latter is the definition used by Simon and Anderson.
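
To make the software side of that distinction concrete, below is a minimal, hypothetical sketch (in Python; Parnas’s paper long predates the language, and all names are invented for illustration) of a module defined by what it does: clients depend only on its operations, while how it stores its data remains a hidden, replaceable decision.

```python
# Hypothetical sketch of Parnas-style information hiding (not an
# example from the paper): callers see only what the module does.

class CustomerDirectory:
    """A module defined by its operations, not by its representation."""

    def __init__(self):
        # The storage format is a hidden design decision. Replacing the
        # dict with a file or a database changes nothing for callers,
        # because they never touch the representation directly.
        self._records = {}

    def add(self, customer_id, name):
        """Record a customer under the given identifier."""
        self._records[customer_id] = name

    def lookup(self, customer_id):
        """Return the customer's name, or None if unknown."""
        return self._records.get(customer_id)


directory = CustomerDirectory()
directory.add("c42", "Ada Lovelace")
print(directory.lookup("c42"))  # -> Ada Lovelace
```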

That difference in definition is significant, and one we need to get used to as our software systems become increasingly complex.

Those of you who know Conway’s law from 1968 shouldn’t be surprised, however. The ‘law’ states, “a software system necessarily will show a congruence with the social structure of the organization that produced it”. In other words, the modular structure of a software system reflects the organisation’s social structure – i.e., who rather than what. This doesn’t imply that you should skip your ‘structured design’, ‘object-oriented programming’, or ‘patterns’ course at uni and head down to the bar to socialise instead, but I think there are a number of implications that software architects, especially, need to pay attention to when dealing with a system of software systems:

  • Software architects can no longer pretend that they are software versions of ‘real’ building architects. We’ll need skills closer to those of policy makers, negotiators, and other communicators (though not necessarily politicians). Robert Schaefer’s ‘Software maturity: design as a dark art’ is a place to start.
  • The duplication of software functionality or applications within organisations or commercial enterprises isn’t necessarily a bad thing in itself; instead, we need to focus on warning signs such as duplicated organisational roles (or overlapping responsibilities) and missing communication channels (related to the first point). I know many might be up in arms at the mere suggestion of duplication (wasting resources is never good!), but we need to be serious about the ‘cost of re-use’ versus the ‘cost of (almost) re-implementation’. Even when viewed within the context of Total Cost of Ownership, my belief is that the former isn’t always the clear winner.
  • Focus on interoperability instead of integration. So what’s the difference? NEHTA’s Interoperability Framework captures it as the ability to maintain integration over time, at minimum effort, despite changes to, or replacement of, the communicating systems – a distinction sketched in code after this list. Other references include the comprehensive ‘A Survey of Interoperability Measurement’ – if you can’t measure it, then you are not going to get it.
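
As a rough illustration of that distinction – a hypothetical sketch, not anything drawn from NEHTA’s framework, with all names invented – integration binds a caller to one concrete system, whereas interoperability binds it to a stable contract, so the system behind the contract can be replaced without touching the caller.

```python
# Hypothetical sketch: interoperability as a stable, agreed contract,
# rather than point-to-point integration with one concrete system.
from abc import ABC, abstractmethod


class RecordService(ABC):
    """The agreed contract. Callers depend on this, never on a vendor."""

    @abstractmethod
    def fetch_record(self, record_id):
        ...


class LegacySystemAdapter(RecordService):
    def fetch_record(self, record_id):
        # Translates the contract onto the old system's native API.
        return {"id": record_id, "source": "legacy system"}


class ReplacementSystemAdapter(RecordService):
    def fetch_record(self, record_id):
        # The new system slots in behind the same contract.
        return {"id": record_id, "source": "replacement system"}


def display(service, record_id):
    # Written once against the contract; it survives the replacement
    # of the system behind it, which is the point of interoperability.
    print(service.fetch_record(record_id))


display(LegacySystemAdapter(), "r1")
display(ReplacementSystemAdapter(), "r1")  # no caller changes needed
```

The design point is that change is absorbed at the contract boundary: the unchanged display function is what ‘maintaining integration over time at minimum effort’ looks like in code.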

Today’s realities of software development are essentially about adding, changing, or removing parts of an existing complex software system through a continuous process of negotiation, bargaining, and policing. Our ability to do this is directly linked to our software’s ability to interoperate.

John Brøndum
John Brøndum has worked as an IT Architect and Consultant since 1997 for a number of technology companies, including IBM, across finance, retail, telecommunications, energy, and rail. With a wealth of project-based experience (e.g., Service Oriented Architecture, Business Process Management, Reference Architecture), John is currently a Ph.D. candidate at UNSW and NICTA. His research is focused on the architectural complexities caused by increasing inter-dependencies between highly integrated enterprise applications. John works independently as a consulting architect, offering a range of architectural services.
