How Google Tests Software

Having developed software for nearly fifteen years, I remember the dark days before testing was all the rage, when large numbers of bugs had to be arduously found and fixed by hand. The next step was nervously releasing the code without the safety net of a test bed, with no idea whether one had introduced regressions or new bugs.

When I first came across unit testing I embraced it ardently, and I am a huge fan of testing in its various forms — from automated tests to smoke tests to performance and load tests to end-user and exploratory testing. So it was with much enthusiasm that I picked up “How Google Tests Software”, written by some of the big names in testing at Google. I was hoping it would give me fresh insights into testing software at “Google Scale”, as promised on the back cover, ideally coupled with some innovative new techniques and tips. While partially succeeding on these fronts, the book as a whole didn’t quite live up to my expectations and feels like a missed opportunity.

The book is written in an informal, easy-to-read manner and organized so that readers can read the chapters in any order or simply focus on the parts that interest them. One annoying layout choice is to highlight and repeat certain key sentences (as is often done in magazines), resulting in one reading the same thing twice, often only words away from the original sentence. Thankfully this is only the case in the first two chapters, but it highlights the variable quality of the book — possibly a result of the authors having worked separately on different chapters. How Google Tests Software isn’t a book for people new to testing or software development: the authors assume you know a fair amount about the software development lifecycle, where testing fits into it, and what different forms testing can take. It is also largely technology neutral, using specific examples of the testing software Google uses only to illustrate concepts.

After a brief introduction to how testing has evolved over time at Google, the book devotes a chapter to each of the key testing-related roles in the company: the ‘Software Engineer in Test’ (SET), the ‘Test Engineer’ (TE) and the ‘Test Engineering Manager’ (TEM). SETs are coders who focus on writing tests, or the frameworks and infrastructure that support other coders in their testing. The TE has a broader, less well-defined role: looking at the bigger picture of the product in question, its impact on users, and how it fits into the broader software ecosystem. These two chapters form the bulk of the book, both in pages and in interesting content. The TEM is essentially what the name says — someone who manages testers and testing and coordinates these activities at a higher level within Google.

The descriptions of each of these testing roles highlight the ways Google’s thinking about testing has matured and also show how some of its approaches differ from other companies’. There are also explanations of the tools and processes that people in these roles use and follow; for me this was the most interesting part of the book.

Topics covered include specific bug tracking and test plan creation tools, risk analysis, test case management over time, and automated testing. Of particular note are the discussions of using bots to test web pages and detect differences between software releases, cutting down on the amount of human interaction required, as well as the opposite approach — using more humans via “crowd-sourced testing”, first among internal users and then among select groups of external users. The tools Google uses to simplify testers’ jobs by recording the steps needed to reproduce bugs, and by streamlining bug reporting and management, sound very useful. Many of the tools described in the book are open source (or soon to be open-sourced) and are probably worth investigating if this is what you do for a living.
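To make the bot idea concrete, here is a minimal sketch of the principle: fetch the same page from two releases, diff the results, and involve a human only when something has changed. This is not Google's actual tooling; the URLs and the naive line-based diff are illustrative assumptions on my part.

```python
import difflib
from urllib.request import urlopen

# Hypothetical endpoints: the same page served by the current release
# and by a release candidate. Real URLs would come from a deploy config.
BASELINE_URL = "https://staging-old.example.com/home"
CANDIDATE_URL = "https://staging-new.example.com/home"


def fetch(url: str) -> list[str]:
    """Download a page and return its HTML split into lines."""
    with urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace").splitlines()


def diff_pages(baseline: list[str], candidate: list[str]) -> list[str]:
    """Return a unified diff of two page sources; empty means no change."""
    return list(difflib.unified_diff(
        baseline, candidate,
        fromfile="baseline", tofile="candidate", lineterm=""))


if __name__ == "__main__":
    changes = diff_pages(fetch(BASELINE_URL), fetch(CANDIDATE_URL))
    if changes:
        print(f"{len(changes)} diff lines found; flagging for human review:")
        print("\n".join(changes[:40]))  # show only the first few lines
    else:
        print("No differences detected between releases.")
```

A production bot would presumably compare rendered screenshots or DOM trees rather than raw HTML, and filter out expected churn such as timestamps and ad slots, but the triage model is the same: machines flag, humans judge.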

In addition to the main body of text, most chapters also include interviews with Google staff on various testing-related topics. Some of these are genuinely interesting and give the reader a good idea of how testing is tackled at Google on a practical level. However, some of the interviews fall into the “navel gazing” camp (especially when the authors interview one of their own) and feel more like filler material. I enjoyed the interviews with Google hiring staff the most — their take on how they recruit people for testing roles, the types of questions they ask, and the qualities they look for makes a lot of sense. The interview with the Gmail TEM was also good, illustrating how the concepts described in the book are actually applied in practice. The interviews are clearly marked and can easily be skipped or skim-read, but one wonders what more useful text could have been included in their place.

The book wraps up with a chapter that attempts to describe how Google intends to improve its testing in the future. The most valuable point here is that testing as a separate function could “disappear” as it becomes part and parcel of the product being developed, like any other feature, and thus the responsibility of everyone working on the product rather than a separate activity. Another key point made throughout the book is that the state of testing at Google is constantly in flux, which makes sense in such a fast-moving and innovative company but leaves one questioning how much of this book will still be relevant in a few years’ time.

How Google Tests Software isn’t a bad book, but neither is it a great one. It has some good parts and will be worth reading for those interested in “all things Google.” For everyone else, I’d recommend skimming through to the parts that grab your attention most and glossing over the rest.

Adrian Woodhead
Adrian is a hands-on technical team lead at Last.fm in London where he works with a team focusing on the services behind the scenes that power the popular music website. Prior to this Adrian worked in Amsterdam for 2 mobile startups (both of which failed), a bank (good money but dull), a content management system (even duller), a digital rights company (good in theory, evil in practice) and in South Africa for a multimedia company. You can visit his website at http://massdosage.co.za
