Developer Testing? What is Testing and QA’s Place? Part 1 of 2

As organizations’ Agile methodologies mature, one of the trends in software development is for Developers to take on more testing while the QA job function is eliminated. In this article, written by Michael Hackett, we explore how Developers test and address common misconceptions about Developers and the SDLC in this 1st part of a 2-part series.

Introduction

There are times in software development when new practices or new ideas take hold. Right now, one of those trends is the push for Developers to take on more of the testing workload. Part of this trend is also to eliminate the old QA job function. This shift is happening as organizations’ Agile methodologies are becoming more mature. For obvious business reasons–time to market, delivering customer value, etc.–companies are asking Software Developers to work faster and to do more with less. Developers need to deliver continuously and a lot faster while testing their own work–deliver value faster, get feedback, and hope customers are understanding and tolerant when issues or bugs happen. That may seem stark; but, eliminating the old “QA” job function is a trend and an experiment today that needs further investigation.

There are some modern software development ideals that support this, including shift-left and the Lean principle of Quality at Every Step. Implementing these practices will lead to higher quality and faster Continuous Delivery, and I wholly support them. But, these 2 ideas do not mean removing the QA-type testing function altogether. Rather, ideas like these need to be implemented in addition to QA functions–not instead of them!

There are important ramifications to reducing or eliminating the Test Team or QA function on delivering value to the customer, support costs, and the success of the release as a whole. This article is part 1 of a 2-part series that will examine the ramifications of this push towards increased Developer testing and reduced Test Teams and test efforts. Part 1 will focus on common misconceptions about Developer testing, provide insight into the Developer’s role, and examine the impact these changes will have on Developers. Part 2 will look beyond Developer testing at the QA functions that cannot simply be “tossed out;” it will also delve into the changes that can be made to better support more reliable software releases in the software development lifecycle (SDLC).

Let’s dive into part 1 by starting with a look into the Developer’s role in the SDLC.

The SDLC & Developer Testing

The trend today as I see it is more Developer testing––great. And sometimes, Developers fully take over the testing function. This is more problematic.

SDLC is important; but, culture matters more. Remember the first idea in the Agile manifesto: People over process. The way people communicate and collaborate, along with the overall culture and support the team receives, is far more important than following a process. Another example would be the Lean principle of “Empower the Team,” which advocates letting the team decide; the team knows its capabilities and limitations, as well as its product area. The idea here is that the individual teams will do a better job deciding how to build and how to test their functionality than an outside decision maker.

Many foundational principles of modern development would cause quite a stir if imposed on most teams. Most knowledge workers–Software Developers included–are quite opinionated and vocal about their environment and day-to-day work structure. This makes SDLC and culture a minefield to navigate.

For this piece though, I want to focus on some other modern SDLC topics, namely the Developer and Tester balance–yin and yang.

It should be said that merging Test Teams with Development teams isn’t a new idea; this happened in Agile and Scrum, when development and testing got put on the same cross-functional Scrum team. DevOps is now pulling operations into this group, which leads some people to call DevOps, “Agile for Ops.”

However, Agile did not mean Test Teams went away. For example, most often, the testing tasks in question are those to be completed to meet the Definition of Done (DoD). Development was never intended to take over many of those responsibilities.

Most Developers I work with: 

  • Resist taking over Test Team/QA-type testing tasks.
  • Have never been trained or skilled at Tester-type testing.
  • Do not want to go near bigger, longer, higher-level, workflow, integration, full transaction, and most often UI-driven Automation scripts.

At this point in software development, everyone should know that Developer unit testing is one thing, and E2E (end-to-end) testing is another. And, testing as “Testers”–complex, user-style testing as opposed to simple validation–is irreplaceable. Most teams need more and better testing, with smarter Automation, not only the unit testing that Developers typically do.

If you are moving to DevOps, you need CI/CD and it needs shift-left and Quality at Every Step. 

Just like Agile, DevOps does not mean that Test Teams go away; but, it absolutely means you need more and earlier testing. Let’s look at how this works in relation to different SDLC methodologies and the cultural impacts of those methodologies.

CI/CD

For organizations to be doing Continuous Integration/Continuous Delivery (CI/CD) today, they’ll need to have robust smoke tests and full regression suites. Smoke tests will need to be run after the unit test suite as part of the build validation process–the process that confirms whether or not you have a good build. Longer, full regression Automation suites are run after the build is validated, concurrently with more development and other tasks.
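
To make this concrete, here is a minimal sketch of what a build-validation smoke test might look like, assuming pytest and the requests library; the base URL and the /health and /login endpoints are placeholders, not any particular product’s API.

```python
# Minimal build-validation smoke tests, run right after the unit test suite.
# Assumes pytest and the requests library; SMOKE_BASE_URL and the /health
# and /login endpoints are hypothetical placeholders.
import os

import pytest
import requests

BASE_URL = os.environ.get("SMOKE_BASE_URL", "http://localhost:8080")


@pytest.mark.smoke
def test_service_is_up():
    # If the health check fails, the build is not worth testing further.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


@pytest.mark.smoke
def test_login_page_responds():
    # A shallow check that a core page answers; deeper workflow coverage
    # belongs in the full regression suite, not the smoke suite.
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
```

In a pipeline, a suite like this typically gates whether the longer regression Automation even gets queued.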

This leads me to a few questions:

  • For the organizations that have cut back on or reorganized their QA organizations in the name of “Developers doing more testing,” who will now build the cross-functional automated smoke tests or the build-validation suite of tests (not the Developer-built unit tests)?
  • Who will maintain, and deprecate when no longer relevant, the suites of QA tests that Test Teams previously wrote?
  • Who will design, build, debug, maintain, run, analyze, and fix the Automation suite throughout the SDLC?

The smoke tests are the easy part since they are small. When it gets to the full regression suite, emotions can run high. The maintenance and failure analysis on this suite alone is enough to keep a Test Team fully occupied. How will a Development team, whose main responsibility is building new functionality, have the time to do this? Spoiler alert: Faster velocity will only increase the risk-to-release ratio.

I worked with an e-commerce retailer who needed multiple, unique suites of smoke tests to ping credit card processors, check connections with all of their processing banks, and check connections with all the shippers for availability and readiness (which were different from the tests for cost, connection, and time). The Development team wanted nothing to do with the building, running, analysis, or maintenance of these tests. They complained it was “not their job,” it took too much time, and either QA, operations, or somebody else needed to own this Automation. The point: Test Teams very often do far more than re-run and automate validation through the UI.
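
For a sense of what those suites look like, here is a hedged sketch of a connectivity smoke check; the endpoint names and URLs are invented, since every real processor, bank, and shipper exposes its own ping or heartbeat API.

```python
# Hypothetical connectivity smoke checks for third-party integrations.
# Endpoint URLs are placeholders; real processors, banks, and shippers each
# have their own ping/heartbeat APIs and credentials.
import pytest
import requests

INTEGRATION_ENDPOINTS = [
    ("card_processor", "https://api.processor.example/ping"),
    ("acquiring_bank", "https://gateway.bank.example/heartbeat"),
    ("shipper_availability", "https://api.shipper.example/status"),
]


@pytest.mark.parametrize("name,url", INTEGRATION_ENDPOINTS)
def test_integration_is_reachable(name, url):
    # Readiness only: cost, rate, and delivery-time checks live in separate suites.
    response = requests.get(url, timeout=10)
    assert response.ok, f"{name} is not reachable"
```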

That is the Test Automation conundrum. The testing issues are more complex. Developers and Test Engineers do not think the same: They have different skill sets, different goals, and different priorities–they have different jobs altogether. Thus, if you’re going to truly be doing CI/CD, you’ll still need designated QA governance and teams supporting your end-to-end quality goals.

Shift-Left & Quality at Every Step

Shift-left means shifting specific testing tasks to earlier in the SDLC (at least, the ones that you can shift.) Test Early, Test Often is a great mindset to have, and this goes hand-in-hand with Quality at Every Step.

This is the foundation of a Continuous Testing practice. Shift-left means Developers need to do more unit testing and more integration testing, while Developers and Technical Testers need to do more service level and API testing. But, as I hinted earlier, not all testing tasks can be shifted earlier in the SDLC.  

The thing about Continuous Testing and Quality at Every Step is that you do one thing and test it. Then, you do another thing and test that. The purpose here is that if you break something but you only changed one thing, then you know exactly why the break happened. This will make defect isolation and fixing much easier and much faster.

This is a great practice and leads to higher quality deliveries. But, it’s also a lot of testing, including late-stage testing. Moving from a development environment to an integration environment? Test it. Moving to a “System Test” environment (a.k.a. fully integrated, no mocking, maybe a mirror of bigger, live databases)?–different tests have to be run. Moving to UAT or pre-production or staging environments?–test it.

I do want to repeat one thing–actually, 3 things–I mentioned earlier:

  • Most Developers I work with resist taking over testing tasks.
  • Most Developers I know have never been trained or skilled at Tester-type testing.
  • Most Developers do not want, nor have the capacity, to take on more, longer, higher-level, and perhaps UI-driven Automation scripts.

All of that testing mentioned above is going to need to get done at some point, and the Developers are not going to want to do it; but, more importantly, the Developers most likely won’t have the bandwidth to do it even if they wanted to. So once again, Developers doing more testing? Great! Developers doing all testing? Infeasible.

Knowing the Customer

A key part of any shift that happens in the SDLC is the oft-forgotten customer. The Agile principles of delivering customer value and treating customer satisfaction as the job are complex in practice.

The job of every Developer is delivering value to the customer. But in college, during my Software Engineering program, I learned about writing code–not delivering customer value. I never once heard the phrase “customer satisfaction,” let alone “value stream,” during engineering school. I learned that if my code compiled, it was fine. Now, I’m being told delivering customer value is my main job?

I very often hear from Developers that they have no connection to the users. How is a Developer supposed to deliver customer value when they don’t know who the customer is?

Any shift is only going to be possible if Developers are given a lot of information about, knowledge of, and access to the customer.

This immediately draws in the Product Owner (PO). Developers need constant collaboration with and access to the PO. When some people talk about Quality at Every Step, they start with code. This really misses the point. Quality starts with good requirements and user stories. There is no way you can release a quality product without quality requirements. Everyone who works in software knows this.

For the purpose of this topic, this weighs heavily in 2 areas: Test-driven development (TDD) and Testing like a User–whether you call it System Testing or UAT.

Do you have great, thought-through user stories with lots of acceptance criteria? What about daily communication, collaboration, and access to a knowledgeable PO? If not, TDD will not happen.

User-style testing, user scenarios, user workflows–all of which is normal Test Team/QA-style testing–will not work without knowledge of the user or testing skills. This type of testing will be difficult, problematic, probably manual, and not done well, which will lead to an increased number of missed bugs and an increase in support costs.

Ask yourself:

  • How great are our user stories?
  • How well do our Developers know the users?

The answers here will be key to successful testing.

The Developer Part

How Developers Test

As part of my work, I go to other companies to help coach and train people and teams about testing. One of the most misunderstood areas is test-driven development (TDD). 

TDD, the eXtreme Programming (XP) practice, is writing the test first, then writing code to satisfy that test. Sounds great. Like magic. It makes the code efficient and economical, but most importantly, it makes the code testable. And, you get a lot of unit tests. Big plus! In my experience, though, PMs/POs often hesitate to spend a lot of time with the team at the beginning of the sprint to fully (fully!) understand and map out the user story and feature.
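
As a small, hedged illustration of the test-first rhythm (the discount rule and function below are invented for the example, not taken from any real product):

```python
# TDD in miniature: the tests are written first and fail, then just enough
# code is written to make them pass. apply_discount() is a hypothetical example.
def test_discount_reduces_price():
    assert apply_discount(price=50.0, percent=10) == 45.0


def test_discount_is_capped_at_100_percent():
    assert apply_discount(price=50.0, percent=120) == 0.0


# Written second, only after the tests above existed (and failed).
def apply_discount(price: float, percent: float) -> float:
    percent = min(max(percent, 0.0), 100.0)  # clamp to a legal range
    return round(price * (1 - percent / 100.0), 2)
```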

However, TDD is predicated on thorough user stories, or at least full collaboration with the Product Owner (PO) and knowing what to build–how many times have you said or heard the phrase, “I will build you what you want; just tell me what you want!”? The Behavior Driven Development (BDD) version of TDD places an even clearer emphasis on knowing what you are building by using the Given-When-Then (GWT) statement (Gherkin) questioning paradigm to be very clear on system behavior before the system is built. More than once, I have seen POs bristle at this responsibility and the exhaustive questioning by the team.
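
A Gherkin scenario would express this as Given-When-Then statements; here is the same idea sketched as a plain test, with an invented loyalty-discount rule standing in for a real user story (all names are hypothetical).

```python
# The Given-When-Then structure agreed with the PO, encoded as a test.
# The loyalty-discount rule is a hypothetical stand-in for a real user story.
def apply_loyalty_discount(total: float, is_member: bool) -> float:
    """Members get 5% off orders over $100; everyone else pays full price."""
    if is_member and total > 100.0:
        return round(total * 0.95, 2)
    return total


def test_member_gets_discount_on_large_order():
    # Given a loyalty member with a $120 order
    total, is_member = 120.0, True
    # When the loyalty discount is applied
    discounted = apply_loyalty_discount(total, is_member)
    # Then the order total is reduced by 5%
    assert discounted == 114.0
```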

Also, unit testing has value, and unit tests are good. But they can’t be all there is. What does a passed unit test tell you? Is that really enough to assess a product’s quality? High-quality code does not mean a high-quality product. This is why business-facing quality is essential (not to say that technology-facing quality is not essential, but there are many reasons that lead to failures even when the code is working correctly).

TDD is great, and BDD is a great practice of TDD that adds easier Test Automation when done right. Unit testing (test-driven or not) is great! Do more of it if you can! Does it lead to faster delivery? It should. But, does it lead to higher customer satisfaction? Happier end-users? Fewer support calls? We need to remember that Developer testing lacks integration and usability assessment–testing like Devs is not testing like Testers or acting like customers.

Those goals are what QA governance within Agile is for.

A Developer vs. Testing Mindset

The bigger issue with Developers taking on testing tasks is that it requires a mindset change, as well as the capacity to support the test work beyond development responsibilities. To help with this shift in mindset, Developers should be asking:

  • What is testing about?
  • How to test?
  • What to test?

Test Teams and testing culture overall have built these practices over decades. Now, as Developer and Test Teams are coming together and are sharing more of the testing workload, it’s great to share the information and share the tasks across the whole team. But, let’s first look at how Developers have historically approached testing. 

The core difference is in mindset–not practice. Oversimplified for our discussion:

  • Developers build and Testers break; 
  • Developers validate, while Testers discover and explore; 
  • Developers test based on code (white-box), while Testers test based on use and users (black-box).

This is very oversimplified, but I am sure you get the point. 

If today you want your Development teams to do more validation, great! Unit testing can play an important part of your overall QA strategy, but it cannot be the only approach to testing that a company relies on. Development teams have a different understanding, or no understanding at all, of the other QA functions and how to prioritize them against development responsibilities.

Once Developers take on the responsibility for functional validation, who does all the other “testing work”? Validation is not testing.

Who does the Exploratory Testing to fully focus on user testing, error testing, and finding gaps in the user story? Who does the integration/workflow Automation and testing? What about the non-functional testing (performance, load, security, UX/usability, 508/accessibility testing)? Who knows the customer and customer behaviors? 

As you see, there are many other facets to delivering quality software, and it takes teams of QA professionals to deliver on these needs. Testing is creative, detective work; it’s investigative, and all of this takes time to learn and develop. Even for Testers, it takes a chunk of time and a clear head to do it successfully. When Developers are tasked with this, they already do not have the time, and when it’s not done, it leads to trouble.

I have one big customer who hired me to work only with their Development teams on exploratory, how-to-break, error-guessing, customer tests (a.k.a. workflow/transaction, end-to-end style) and not talk at all about validation. This work includes forced-error testing, fault injection, destructive testing, and negative testing. People know what this is, but doing it is a very different story. It is slow, repetitive, creative, and time-consuming. It’s more intelligent than Chaos Monkey (which itself is great, but serves a different purpose).

Here’s an example: Many companies’ worst public errors are Unhandled Errors. Think about the very high-profile, public bugs: Facebook’s biggest outages, Netflix’s biggest outages, Robinhood’s stock trading outage, Bank of America problems… the list goes on. All of these come from “unexpected error conditions,” or “unexpected errors.” Think about who tests in these areas (hint: it’s not Developers.) If Developers take over testing tasks, who spends time doing these tests?
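
To make this kind of work concrete, here is a small, hypothetical sketch of the negative and fault-injection tests a Tester would add; the parse_amount() and charge_card() functions, and the processor endpoint, are invented for illustration.

```python
# Hypothetical negative tests and fault injection; everything here is a
# stand-in for real product code and real processor endpoints.
from unittest import mock

import pytest
import requests


def parse_amount(raw: str) -> float:
    """Parse a user-entered amount; reject junk instead of crashing later."""
    value = float(raw.strip().replace(",", ""))  # raises ValueError on junk
    if value < 0:
        raise ValueError("amount cannot be negative")
    return value


@pytest.mark.parametrize("bad_input", ["", "abc", "12..5", "-10", "$20"])
def test_parse_amount_rejects_junk(bad_input):
    # Negative testing: every one of these must fail loudly and predictably.
    with pytest.raises(ValueError):
        parse_amount(bad_input)


def charge_card(amount: float) -> str:
    """Call a (hypothetical) processor; map timeouts to a retry status."""
    try:
        requests.post("https://processor.example/charge",
                      json={"amount": amount}, timeout=2)
        return "charged"
    except requests.Timeout:
        return "retry_later"


def test_charge_card_survives_processor_timeout():
    # Fault injection: force the downstream call to time out and verify the
    # failure is handled rather than surfacing as an unhandled error.
    with mock.patch("requests.post", side_effect=requests.Timeout):
        assert charge_card(25.00) == "retry_later"
```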

Often these situations are really deep, devious, and complex. But at the same time, every valuable Tester is specifically skilled at trying-to-break, injecting faults, concurrency testing, boundary and corner-case data and situations, etc. Each of these takes training and skill, but most of all, time–an inquiring mind, patience, repetition, and creativity. Do Test Teams find all of these problems before release? Definitely not 100%. But, they do find many! They find bottlenecks, odd behaviors, and memory issues that, if let loose into the wilds of production environments, would over time have the same chilling effects. You have to ask yourself this question: Do your Developers have the training, patience, creativity, and, most of all, time to hunt for bugs or problem areas? Or, do they barely have time for unit test validation?

Even when the topic of unit testing comes up, you can run 100 unit tests or 1,000 unit tests to see that some functional area or feature set works. The minute you integrate it into a workflow with other functions written by other teams or third parties, it can fall apart! And then, you are back to square one.

Developer Testing Skills to be Developed

If I am a Developer and I write a chunk of code, I better be able to automate a happy path test for it. A happy path test is one where the code works right down the middle: no corner cases, no edge cases, no boundary data, no error conditions. It always has to work, and it rarely finds bugs (so it is actually of minimal value; a sketch contrasting it with Tester-style boundary tests follows the list below). But, can that same Developer write some valuable user workflows? Can they automate end-to-end tasks? Do they even know one or 2 full end-to-end paths?

Do your Developers:

  1. Have that much knowledge of the users?
  2. Have test design skill?
  3. Have the time and want to own UI-level Automation?
  4. Automate tests that contain boundary data, corner cases, rare data, really large pieces of data, really small pieces of data, illegal or invalid data? Different character sets? Will they actually find bugs?
  5. Test across browsers or against multiple mobile devices?
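
To ground items 2 and 4, here is a hedged contrast between a happy path check and the boundary and illegal-data tests a Test Engineer would design; shipping_cost() and its pricing rule are invented for the example.

```python
# Happy path vs. boundary/illegal-data tests; shipping_cost() and its pricing
# rule are hypothetical, invented only to illustrate test design.
import pytest


def shipping_cost(weight_kg: float) -> float:
    """Flat $5 up to 10 kg, then $1 per extra kg; reject nonsense weights."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 + max(0.0, weight_kg - 10.0)


def test_happy_path():
    # Right down the middle: valid input, no edge conditions. Rarely fails.
    assert shipping_cost(4.0) == 5.0


@pytest.mark.parametrize("weight,expected", [
    (10.0, 5.0),      # exactly on the boundary
    (10.01, 5.01),    # just over the boundary
    (0.001, 5.0),     # near-zero but still legal
    (1000.0, 995.0),  # absurdly large but legal
])
def test_boundary_weights(weight, expected):
    assert shipping_cost(weight) == pytest.approx(expected)


@pytest.mark.parametrize("bad_weight", [0.0, -1.0])
def test_illegal_weights_are_rejected(bad_weight):
    with pytest.raises(ValueError):
        shipping_cost(bad_weight)
```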

This is hard-core Quality Engineer/Test Engineer work that has tremendous value, even when some people on a Development team might roll their eyes at it. Some naïve or old-school Developers may think, “We don’t need to do all that. If a user finds a problem on their device, it’s their problem.” We know this is not the case, which substantiates the claim that Developers simply cannot truly “take over” all of the testing tasks. This will also require buy-in from management: The more testing Developers take on, the longer their estimates will get. Whether it is unit testing or feature/epic/integration-level testing, it all takes time and a clear head–not new skills picked up under time pressure.

Summary

As you can see, the movement of “Developer testing” is quite complex and multi-faceted. Creating an effective Developer testing strategy takes time, proper planning, and buy-in from everyone on the team; it’s not as easy as deciding, “Okay, we’re going to start having our Developers test!”

With trends like Agile and DevOps putting QA and development on the same team, leaders need to recognize that teams must become cross-functional; just because monolithic QA organizations have melded into larger Agile organizations does not mean that End-to-End Testing and QA functions are prehistoric. It’s a double-edged sword: Just because Developers don’t know how to test like a Tester doesn’t mean they cannot learn how to, but just because Developers can learn how to test like a Tester doesn’t mean they should have to. Instead, leaders need to promote collaboration between Developers and Testers on the same team, foster a culture of teamwork, and look for solutions to the gaps in quality assurance expectations.

Now that we have a better understanding of Developers as well as many of the misconceptions regarding Developer testing, part 2 of this series will focus on test organizations beyond Developer unit testing and what they do, including understanding the basis of their work, its importance within the SDLC, and why it’s essentially irreplaceable. Make sure you’re on LogiGear’s mailing list so that you don’t miss part 2 when it comes out! If you’re looking to learn more about Developer Testing, then please check out our 2-part Webinar, Balancing Developer Testing and Tester Testing: https://www.logigear.com/resources/multimedia/videos/balancing-developer-testing-and-tester-testing-in-the-modern-sdlc-part-1-of-2
https://www.logigear.com/resources/multimedia/videos/balancing-developer-testing-and-tester-testing-in-the-modern-sdlc-part-2-of-2

Michael Hackett
Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
