The Internet of Things: Software Testing’s New Frontier – Part 1

What you need to know for testing in the new paradigm

This two-part article analyzes the impact of Internet of Things (IoT) product development on traditional testing.

Part one of this series starts with a wide view on the IoT, embedded systems, and device development aspects of testing. Part two, to be published in the September issue, will focus on mobile: connectivity, data, security, performance and remote control—commonly from a smartphone.

Embedded systems have been around a long time, and consumers have had internet connectivity for over two decades; however, the explosive growth of internet-enabled devices is just in its infancy. Ubiquitous computing is happening now, on a large scale.

The testing challenges arising out of this explosive growth are very intriguing. Testing roles are changing. People who were trained as traditional testers, working on well-understood systems—test engineers—are being tasked with testing a flood of devices on unknown or new platforms. Because the field changes so rapidly, acquiring the skills, knowledge, and strategies comes from on-the-job training: you have to take what you know and adapt it to the situation at hand.

By traditional software test teams, I mean teams made up of a mix of technical testers and subject matter experts: black- and gray-box testers who are typically unfamiliar with testing during hardware development. All of them will need to adapt rapidly to new platforms and new test types, and build new test skills.

The risks involved in testing the IoT can be much greater than in traditional application testing. Apps are being developed for devices that connect to other devices and/or systems across the internet, which opens new avenues for failure. If you miss or discount a bug, it can cause a ripple effect, and your company may face significant liability.

The systems that make up the IoT are very complex. New and more intelligent sensors are produced every day. Just a few years ago, the hardware sensors and devices did all the work. Now, estimates are that software does more than 50% of the work on embedded systems; that is a big shift.

For these reasons, I will focus on test issues and strategy as they apply to the IoT piece rather than the embedded system testing piece. Embedded system testing is well understood, and there are many sources of information already published on it.

A strong test strategy: an effective strategy is the foundation of a successful test effort.

Arnold Berger of the University of Washington points out in The Basics of Embedded Software Testing, “Many studies (Dataquest, EE Times) have shown that more than half of the engineers who identify themselves as embedded software and firmware engineers spend the majority of their time fixing embedded systems that have already been deployed to customers.”

This is a startling piece of information to me. Is this because of poorly planned projects, a lack of attention to quality during development, or simply not knowing how to test these types of systems? Clearly, any IoT or embedded project has to include a great testing foundation, or you may be doomed to becoming an expensive support person.

To get started, you need to have a great testing practice in place. Testing processes and practices must be right on target to have any hope of executing an effective testing job. Clear requirements, detailed user stories, unit testing, continuous integration, lean test plans, coverage measurements, great communication, and more all need to be part of your regular development process. Programmers must practice designing for testability, writing callable test hooks into the code, in order to benefit the entire product team. Good programming practice and team processes will go far toward releasing a higher quality, safer, more secure product.
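
To make the idea of a callable test hook concrete, here is a minimal sketch in Python. The ThermostatController class and its method names are hypothetical, invented purely for illustration; the point is that a test-only entry point lets gray-box tests read internal state on a device that has little or no UI.

```python
# A minimal sketch of a callable test hook, assuming a hypothetical
# ThermostatController; nothing here comes from a real platform SDK.

class ThermostatController:
    def __init__(self, target_temp=20.0):
        self._target_temp = target_temp
        self._heater_on = False

    def update(self, sensor_temp):
        """Normal control path: heat when below the target temperature."""
        self._heater_on = sensor_temp < self._target_temp

    def test_hook_state(self):
        """Test-only hook: expose internal state so gray-box tests can
        verify behavior on a device with little or no UI."""
        return {"target": self._target_temp, "heater_on": self._heater_on}


if __name__ == "__main__":
    ctrl = ThermostatController(target_temp=21.0)
    ctrl.update(sensor_temp=18.5)
    # An automated test can assert on internals directly via the hook.
    assert ctrl.test_hook_state()["heater_on"] is True
    print(ctrl.test_hook_state())
```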

Your regular test strategy is a good place to begin. Validating functionality, testing installs and upgrades, and making your smoke tests and regression suites the very best they can be will help verify that the product does what it is intended to do.

Since a lot of devices have limited or no UI, and many are total black boxes, testing is easier to do if you have behavior models. Behavior or state models, and even object diagrams, will help you plan your testing.
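
As a sketch of how a behavior model can drive testing, the fragment below encodes a hypothetical door lock's states and events as a transition table; walking the table yields executable test sequences with a built-in oracle. The states and events are invented for illustration.

```python
# A minimal behavior model for a UI-less device: a hypothetical door
# lock with three states, encoded as a transition table.

VALID_TRANSITIONS = {
    ("locked", "unlock_cmd"): "unlocked",
    ("unlocked", "lock_cmd"): "locked",
    ("unlocked", "door_open"): "open",
    ("open", "door_close"): "unlocked",
}

def next_state(state, event):
    """Return the expected next state, or None if the event is invalid."""
    return VALID_TRANSITIONS.get((state, event))

# Walking the model yields an executable test sequence with an oracle.
sequence = ["unlock_cmd", "door_open", "door_close", "lock_cmd"]
state = "locked"
for event in sequence:
    expected = next_state(state, event)
    assert expected is not None, f"invalid event {event!r} in state {state!r}"
    state = expected
print("final state:", state)  # a correct device ends up locked again
```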

Failure and error testing in this new environment requires more focus than a typical application test strategy. Forced-error testing, where you inject error conditions into your system to check for proper handling, recovery, and, where needed, messaging, needs to happen not only in the software but also in the hardware. Failover and disaster recovery (DR), already part of a good test strategy, will grow in importance with the addition of testing hardware failures.
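
One lightweight way to practice forced-error testing on the software side is to inject faults at the hardware boundary with a mocking library. The sketch below uses Python's standard unittest.mock to make a stand-in read_sensor() call fail twice before succeeding, then verifies the recovery path; read_sensor and sample_with_recovery are hypothetical names, not part of any real device SDK.

```python
# A forced-error sketch using Python's standard unittest.mock: the
# hardware fault is injected, and the software recovery path verified.

from unittest import mock

def read_sensor():
    return 21.0  # stand-in for a real hardware read

def sample_with_recovery(retries=3):
    """Return a reading, retrying on hardware errors."""
    for _ in range(retries):
        try:
            return read_sensor()
        except IOError:
            continue
    raise RuntimeError("sensor unavailable after retries")

# Force the first two reads to fail, then succeed: recovery should win.
with mock.patch(__name__ + ".read_sensor",
                side_effect=[IOError(), IOError(), 20.5]):
    assert sample_with_recovery() == 20.5
print("recovery path verified")
```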

Unlike with typical applications, your mix of automated and manual testing may not be dictated just by your skill level and tools. There will be situations that can't be adequately tested with manual processes. Variations in models, sub-models, software versions, and configurations will complicate testing and test automation.

New platforms and the need for tools: embedded system platforms do not have the tool support you may be used to.

Most often, embedded systems—traditionally stand-alone—have had unique, one-off, home-grown, kludged systems and architectures. Then a few industry leaders began to emerge. Having a stable platform leads to a good IDE (integrated development environment) with developer tools, easily available knowledge about the platform and its limits, recommendations, and so on.

Wind River created what has become a hugely successful embedded platform. But now the floodgates have opened. Apple wants iOS to be the platform of choice for home and health IoT devices, and Google obviously wants it to be Android. Microsoft has had an embedded software platform for years, with tool and information support and integration into all other Microsoft solutions. Still, many devices have unique and not-well-known environments. This can lead to marginal validation and testing of the hardware and little effective gray-box testing.

Without common platforms, tools will be scarce, especially QA-type test tools as opposed to programmer tools. As we know from the recent growth in smartphone platforms, development of test tools lags. Lack of tools and of under-the-covers access hurts the test effort.

Since many of the devices of the IoT have limited or no UI, traditional testers cannot rely on taking matters into their own hands to exercise and stress a system. Somehow you have to get consoles, viewers, and simulators to gain access beyond the black box. You will need tools, from memory meters to logs to code tracers and automation, or your test effort will be severely hampered.
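
When logs are the only window past the black box, even a small amount of home-grown tooling helps. The sketch below uses an invented log format: it flags error lines and tracks a reported free-memory figure, where a steadily falling trend across a soak run would suggest a leak.

```python
# A minimal log-scanning sketch; the log format here is invented.
# It flags error lines and tracks a reported free-memory figure.

import re

ERROR = re.compile(r"ERROR|FATAL|WDT RESET")
MEM = re.compile(r"free_mem=(\d+)")

def scan_log(lines):
    errors, mem_readings = [], []
    for line in lines:
        if ERROR.search(line):
            errors.append(line.strip())
        m = MEM.search(line)
        if m:
            mem_readings.append(int(m.group(1)))
    return errors, mem_readings

sample = [
    "boot ok free_mem=52000",
    "sensor poll free_mem=51800",
    "ERROR i2c timeout on bus 1",
    "sensor poll free_mem=48000",
]
errors, mem = scan_log(sample)
print("errors:", errors)
print("memory trend:", mem)  # a falling trend over a soak run suggests a leak
```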

It is crucial that you make your tool needs known to the team. The tools you normally use in your regular test process are a good place to start for testing the devices as well.

Platform and Environment Knowledge for Gray-box Testing: gray-box testing is the most effective testing, but you need information about how things work.

The most daunting aspect of this new frontier for most test teams is trying to understand, as fast as possible, the architecture, the OS and its nuances, third-party hardware, apps, and firmware, along with new connectivity protocols and hardware device limitations. This is all necessary in order to design the most effective test cases. Even then, you can only hope that the things you don't know about the system will not bite you.

Gray-box testing focuses on the space between the code and whatever black-box interface your product has, aided by whatever information you can get about the system. Error guessing is a long-standing method in testing, but in many cases it is difficult to guess where and what errors may be lurking with little-to-no information on how the system works.

The more information you have, the better you will test. So, gather every document you can; read, read, read. Teach yourself new technologies, and share new information with other testers and your whole team.

It will also be necessary to ask a lot of questions: What about the software is unique, special, newly written, or rewritten? What interaction do the sensors have with each other (M2M)? What protocols does the device use to talk to the remote control? To other devices? To cloud APIs? What concurrency situations can be set up? What race conditions are possible and impossible? Which are going to happen every day? Which are never supposed to happen—ever? Your questioning and information-seeking ability will be the key to great bug finding.
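
As one concrete way to probe those concurrency questions in software, the sketch below hammers a shared resource from several threads and asserts that no updates are lost. The EventCounter class is hypothetical; on a real product the shared resources would be the device's own queues, session tables, or sensor registries.

```python
# A concurrency sketch: several threads update shared state at once and
# the test asserts that no updates were lost.

import threading

class EventCounter:
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def record(self):
        with self._lock:  # removing this lock lets the race appear
            self._count += 1

    @property
    def count(self):
        return self._count

counter = EventCounter()

def worker():
    for _ in range(1000):
        counter.record()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.count == 8000, counter.count
print("no lost updates:", counter.count)
```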

Real Time, or Real-Time Operating Systems: an RTOS has unique performance standards and functionality, and demands testing on real devices rather than simulators.

Real-time systems are unique in that their functionality, messages, or events are ultimately time sensitive. Many are safety- or mission-critical systems where a few milliseconds can mean the difference between life and death. Safety-critical systems, from medical devices to anti-lock brakes in cars to house alarms, need superfast response times.

Devices used for financial and commodity trading services—where seconds can mean a profit or loss of billions of dollars—may need to respond in tenths of seconds so that the entire system can respond in seconds.

Real-time systems need higher levels of reliability than typical applications, and even than typical embedded devices. Special test suites need to be designed to test “critical sequences”: the scenarios or sequences that cause the greatest delay from trigger to response.
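
A critical-sequence test can be as simple as measuring trigger-to-response time repeatedly and asserting that the worst case stays under an agreed budget. In the sketch below, trigger_alarm() is a stand-in for the real device action and the 50 ms budget is purely illustrative; the repetition matters because real-time failures are often intermittent.

```python
# A critical-sequence latency sketch: measure trigger-to-response time
# repeatedly and assert the worst case stays under a budget.

import time

BUDGET_MS = 50.0

def trigger_alarm():
    time.sleep(0.005)  # stand-in for the real trigger-to-response path
    return "siren_on"

worst = 0.0
for _ in range(100):  # repeat: real-time failures are often intermittent
    start = time.perf_counter()
    response = trigger_alarm()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    assert response == "siren_on"
    worst = max(worst, elapsed_ms)

assert worst <= BUDGET_MS, f"worst case {worst:.1f} ms exceeds budget"
print(f"worst observed latency: {worst:.2f} ms")
```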

These systems always have unique scheduling routines that need to be verified in addition to race conditions, error handling and concurrency tests. There may also be queues, buffers and varying memory availability that need to be tested.

Acceptance testing clearly has to happen on actual devices. Simulating any part of a critically time-sensitive environment will not give realistic results. This does not mean simulators are not useful for real-time systems; it just means that simulators are great for testing early but do not replace testing on the actual device.

Systems connected through the internet complicate things. Normally with real-time systems there are bandwidth issues, but usually not power issues. However, the internet opens up performance and interoperability problems that need to be overcome.

You can test a home alarm system calling the police. You can also test the home alarm system calling the police with the air conditioner, microwave, and clothes dryer on. You can also test the home alarm system calling the police during a power flux, as well as with 3 or 4 people in the house streaming movies. This might be only the base test; it gets more complicated from here.
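
Rather than hand-writing each of those variations, the combinations of background load can be generated. The sketch below is a toy version: the load names and the call_police() stub are invented, and on a real rig each condition would switch on actual appliances or traffic generators.

```python
# A toy version of layering load conditions onto one base test.

import itertools

LOADS = ["air_conditioner", "microwave", "clothes_dryer", "video_streams"]

def call_police(active_loads):
    # Stand-in result: here the call fails only with every load active.
    return len(active_loads) < len(LOADS)

# Run the baseline, then every combination of background loads.
for n in range(len(LOADS) + 1):
    for combo in itertools.combinations(LOADS, n):
        ok = call_police(combo)
        label = "+".join(combo) or "baseline"
        print(f"{label:55s} -> {'PASS' if ok else 'FAIL'}")
```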

Creating these types of tests requires great test design skills, very clear service-level benchmarks from the team, and performance tooling skill.

The benchmarks for real-time system tests include agreements from the sales, marketing, and legal departments, as well as regulatory compliance requirements, all of which have to be validated.

Parallel Hardware, OS and Software Development: concurrent development projects need great communication and a lot of re-testing.

A part of the embedded systems lifecycle that traditional desktop and web teams will find very different is the development of the device itself.

Hardware can be in development while a different team works on the OS and perhaps firmware, with still other teams making the “software”: applications, connectivity, interfaces, API calls, databases, etc. All of this can take place in parallel, and without a lot of information sharing.

If you are new to this area, it is more common than you would think that test teams from various parts of the product do not know or see each other much. They may not share much information, and likely have very different skill sets. This lack of collaboration has a large impact on testing. Improving communication and sharing knowledge are obvious areas to incorporate into your processes to improve testing.

Software teams can find they are building code for a moving hardware target. In my experience, the hardware teams are king: they are in control of whatever gets included or left out. When the hardware product is done, the software teams often have to adjust to the new hardware and re-do all the tests. Very often, the software has to be adjusted to make up for shortcomings of the hardware.

It is pretty much the same with the system or OS: whatever the OS team includes is it. The software or apps teams, usually the last teams in the process, might have to readjust to the new target and re-run all the tests as though for the first time. This does not simply mean re-running a regression suite; it may require re-doing exploratory testing, error guessing, and all the tests on the new hardware and OS.

Software teams can't wait to schedule their work until the hardware and OS teams are done—nor should they. Software teams often find bugs, issues, or limitations while using unfinished functionality on beta-stage hardware and OSs that the hardware and OS teams did not catch.

Test Automation: diverse test needs, lack of tools, and testing on simulators or through consoles complicate test automation.

The variety of configurations, versions, patches, updates, and supported devices and platforms makes automation mandatory and complex.
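
A small amount of code can generate that matrix instead of maintaining it by hand. In the sketch below, the models, firmware versions, and protocols are made-up placeholders, and run_suite() stands in for dispatching a real regression suite against a test rig.

```python
# A sketch of driving automation across a configuration matrix.

import itertools

MODELS = ["hub-v1", "hub-v2"]
FIRMWARE = ["1.0.3", "1.1.0"]
PROTOCOLS = ["zigbee", "wifi"]

def run_suite(model, firmware, protocol):
    # Stand-in: a real implementation would target an actual device/rig.
    return True

results = {combo: run_suite(*combo)
           for combo in itertools.product(MODELS, FIRMWARE, PROTOCOLS)}

print(f"{len(results)} configurations run; {sum(results.values())} passed")
```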

Finding a tool specific to a platform may not be possible, so customization is essential.

Emulators and simulators are very useful for device automation. However, a simulator is not the same as a device. If all the automation runs only on a simulator, a lot of manual testing will have to be done on the actual device.

As always with automation, test design is the primary key to success. Every type of testing we have covered has unique flavors of automation. Databases, install and upgrade, interoperability, connectivity, performance, security—all have different needs for successful test automation, independent of functionality validation and testing.

Summary

There is no magic answer for how to test the IoT. It is complicated, with many unknowns, but it is also exciting. Adding internet connectivity to embedded systems will build skills to take you far into testing in the 21st century. Seek information. Build skills in a variety of test types, platforms, and tools.

It's currently a kind of “Wild West” mentality in this blossoming industry, with few standards. Many platform providers have little real focus on performance, security, and interoperability. This will undoubtedly change over time. But for now, you are testing in uncharted waters.

Test early, test often. Report risk and coverage limitations even more than you report what you have actually tested.

Remember, in part two we will be investigating more mobile internet considerations, such as remote control, performance testing, security testing, cloud APIs, Big Data testing, and interoperability testing.

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.


