The Internet of Things: Software Testing’s New Frontier – Part 1

What you need to know for testing in the new paradigm

This two-part article analyzes the impact of Internet of Things (IoT) product development on traditional testing.

Part one of this series starts with a wide view of the IoT, embedded systems, and the device development aspects of testing. Part two, to be published in the September issue, will focus on mobile: connectivity, data, security, performance, and remote control, commonly from a smartphone.

Embedded systems have been around a long time, and consumers have had internet connectivity for over two decades; however, the explosive growth of internet-enabled devices is just in its infancy. Ubiquitous computing is happening now, on a large scale.

The testing challenges arising out of this explosive growth are intriguing. Testing roles are changing. People who were trained as traditional testers working on well-understood systems, test engineers, are being tasked with testing a flood of devices on new or unknown platforms. Because the field changes so rapidly, the necessary skills, knowledge, and strategies come from on-the-job training: you have to take what you know and adapt it to the situation at hand.

By traditional software test teams, I mean teams made up of a mix of technical testers and subject matter experts: black- and gray-box testers who are typically unfamiliar with testing during hardware development. All of them will need to adapt rapidly to new platforms and new test types, and build new test skills.

The risks involved in testing the IoT can be much greater than in traditional application testing. Apps are being developed for devices that connect to other devices and systems across the internet, which opens new avenues for failure. If you miss or discount a bug, it can cause a ripple effect, and your company may face significant liability.

The systems that make up the IoT are very complex, and new, more intelligent sensors are produced every day. Just a few years ago, the hardware sensors and devices did all the work. Now, estimates are that software does more than 50% of the work on embedded systems; that is a big shift.

For these reasons, I will focus on test issues and strategy as they apply to testing the IoT rather than to embedded system testing. Embedded system testing is well understood, and there are many sources of information already published on it.

A strong test strategy: an effective strategy is the foundation of successful IoT testing.

Arnold Berger of the University of Washington points out in The Basics of Embedded Software Testing, “Many studies (Dataquest, EE Times) have shown that more than half of the engineers who identify themselves as embedded software and firmware engineers spend the majority of their time fixing embedded systems that have already been deployed to customers.”

This is a startling piece of information to me. Is this because of poorly planned projects, no attention to quality during development, or simply not knowing how to test these types of systems? Clearly, any IoT or embedded project has to be built on a great testing foundation, or you may be doomed to become an expensive support person.

To get started, you need to have a great testing practice in place. Testing processes and practices must be on target to have any hope of executing an effective testing job. Clear requirements, detailed user stories, unit testing, continuous integration, lean test plans, coverage measurements, great communication, and more all need to be part of your regular development process. Programmers must practice designing for testability, writing callable test hooks into the code, in order to benefit the entire product team. Good programming practice and team processes will go far toward releasing a higher quality, safer, more secure product.
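
As a minimal sketch of what such a test hook might look like, assuming a Python code base; the class, method, and field names here are hypothetical:

```python
# Hypothetical example: a device driver that exposes a callable test hook
# so testers can inspect internal state without a UI.

class ThermostatDriver:
    def __init__(self):
        self._sensor_reads = 0
        self._last_temp_c = None

    def read_temperature(self) -> float:
        # Production code path: poll the sensor hardware (stubbed here).
        self._last_temp_c = self._poll_sensor()
        self._sensor_reads += 1
        return self._last_temp_c

    def _poll_sensor(self) -> float:
        return 21.5  # stand-in for a real hardware read

    # --- Test hook: not used by production callers ---
    def debug_snapshot(self) -> dict:
        """Expose internal counters and state for gray-box tests."""
        return {"sensor_reads": self._sensor_reads,
                "last_temp_c": self._last_temp_c}


# A test can now assert on internal behavior, not just outputs.
driver = ThermostatDriver()
driver.read_temperature()
assert driver.debug_snapshot()["sensor_reads"] == 1
```

The point of the hook is that a tester can observe what the device did internally, which matters when the device has no screen to check.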

Your regular test strategy is a good place to begin. Validating functionality, testing installs and upgrades, and building smoke tests and regression suites to be the very best they can be will help verify that the product does what it is intended to do.

Since a lot of devices have limited or no UI, and many are total black boxes, testing is easier if you have behavior models. Behavior or state models, and even object diagrams, will help you plan your testing.
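
For example, a tester might encode a device's state model as a table and walk every transition. This sketch assumes a hypothetical connected door lock; with real hardware, each event would be sent to the device and its reported state compared to the model:

```python
# Hypothetical state model for a connected door lock with no UI.
# Each entry maps (current_state, event) -> next_state.
TRANSITIONS = {
    ("locked", "unlock_command"): "unlocked",
    ("unlocked", "lock_command"): "locked",
    ("unlocked", "door_opened"): "open",
    ("open", "door_closed"): "unlocked",
}

def next_state(state: str, event: str) -> str:
    """Return the modeled next state; unknown events leave state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk every modeled transition; against a real device you would fire
# the event and compare the device's reported state to the model.
for (state, event), expected in TRANSITIONS.items():
    assert next_state(state, event) == expected, (state, event)

# Illegal events should be rejected (state must not change).
assert next_state("locked", "door_opened") == "locked"
```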

Failure and error testing in this new environment requires more focus than a typical application test strategy. Forced error testing, where you inject error conditions into your system to check for proper handling, recovery, and, where needed, messaging, has to happen not only in the software but also on the hardware. Failover and disaster recovery (DR), already part of a good test strategy, will grow in importance with the addition of testing hardware failures.
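
A minimal sketch of forced error testing at the software level, assuming a Python test harness; the sensor, retry logic, and fallback behavior are all hypothetical:

```python
# Hypothetical sketch of forced-error testing: inject a sensor fault
# and verify the system recovers instead of crashing.

class SensorError(Exception):
    pass

class FlakySensor:
    """Test double that fails a set number of reads before recovering."""
    def __init__(self, failures: int):
        self._failures = failures

    def read(self) -> float:
        if self._failures > 0:
            self._failures -= 1
            raise SensorError("injected read failure")
        return 42.0

def read_with_retry(sensor, attempts: int = 3) -> float:
    """System-under-test behavior: retry, then fall back to a safe value."""
    for _ in range(attempts):
        try:
            return sensor.read()
        except SensorError:
            continue
    return float("nan")  # safe fallback rather than a crash

# Two injected failures: the retry logic should still return a reading.
assert read_with_retry(FlakySensor(failures=2)) == 42.0
```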

Unlike with typical applications, your mix of automated and manual testing may not be dictated only by your skill level and tools. There will be situations that cannot be adequately tested with manual processes. Variations in models, sub-models, software versions, and configurations will complicate testing and test automation.

New platforms and the need for tools: embedded system platforms do not have the tool support you may be used to.

Most often, embedded systems, traditionally stand-alone, have had unique, one-off, home-grown, kludged systems and architectures. Then a few industry leaders began to emerge. Having a stable platform leads to a good IDE (integrated development environment) with developer tools, easily available knowledge about the platform and its limits, recommendations, and so on.

Wind River created what has become a hugely successful embedded platform. But now the flood gates have opened. Apple wants iOS to be the platform of choice for home and health IoT devices, and Google obviously wants it to be Android. Microsoft has had an embedded software platform for years, with tool and information support and integration into all its other solutions. Still, many devices have unique and not-well-known environments. This can lead to marginal validation and testing of the hardware and little effective gray-box testing.

Without common platforms, tools will be scarce, especially QA-type test tools as opposed to programmer tools. As we know from the recent growth in smartphone platforms, development of test tools lags. Lack of tools and of under-the-covers access hurts the test effort.

Since many IoT devices have limited or no UI, traditional testers cannot rely on taking matters into their own hands to exercise and stress a system. You have to get consoles, viewers, and simulators to gain access beyond the black box. You will need tools, from memory meters and logs to code tracers and automation, or your test effort will be severely hampered.
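
For instance, if the device happens to expose its logs over a serial console, a small script can watch that console during a test run. This is only a sketch, assuming the pyserial library is installed and using a made-up port name and log format:

```python
# Hypothetical sketch: tailing a device's serial console for error lines,
# assuming the device exposes logs over UART and pyserial is installed
# (pip install pyserial). The port name and log format are made up.
import serial

def scan_console_for_errors(port: str = "/dev/ttyUSB0",
                            baud: int = 115200,
                            lines: int = 100) -> list:
    errors = []
    with serial.Serial(port, baud, timeout=2) as console:
        for _ in range(lines):
            line = console.readline().decode("utf-8", errors="replace")
            if "ERROR" in line or "PANIC" in line:
                errors.append(line.strip())
    return errors

# A test might fail a scenario if the device logged any errors during it:
# assert scan_console_for_errors() == []
```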

It is crucial that you make your tool needs known to the team. The tools you normally use in your regular test process are a good place to start for testing the devices as well.

Platform and Environment Knowledge for Gray-box Testing: gray-box testing is the most effective testing, but you need information about how things work.

The most daunting aspect of this new frontier for most test teams is trying to understand, as fast as possible, the architecture, the OS and its nuances, third-party hardware, apps, and firmware, as well as new connectivity protocols and hardware device limitations. All of this is necessary in order to design the most effective test cases. Even then, you hope the things you do not know about the system will not bite you.

Gray-box testing is focused between the code and whatever black-box interface your product has, aided by whatever information you can get about the system. Error guessing is a long-standing method in testing, but in many cases it is difficult to guess where and what errors may be lurking with little to no information on how the system works.

The more information you have, the better you will test. So gather every document you can; read, read, read. Teach yourself new technologies, and share new information with other testers and your whole team.

It will also be necessary to ask a lot of questions: What about the software is unique, special, newly written, or rewritten? What interaction do the sensors have with each other (M2M)? What protocols does the device use to talk to the remote control? To other devices? To cloud APIs? What concurrency situations can be set up? What race conditions are possible and impossible? Which are going to happen every day? Which are never supposed to happen, ever? Your questioning and information-seeking ability will be the key to great bug finding.

Real Time, or Real-Time Operating Systems: an RTOS has unique performance standards and functionality, and demands testing on real devices rather than simulators.

Real-time systems are unique in that the functionality, messages, or events are ultimately time sensitive. Many are safety- or mission-critical systems where a few milliseconds can mean the difference between life and death. Safety-critical systems, from medical devices to anti-lock brakes in cars to house alarms, need superfast response times.

Devices used for financial and commodity trading services, where seconds can mean a profit or loss of billions of dollars, may need to respond in tenths of a second so that the entire system will respond in seconds.

Real-time systems need higher levels of reliability than typical applications and even typical embedded devices. Special test suites need to be designed to test “critical sequences”: the scenarios or sequences that cause the greatest delay from trigger to response.
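
As a rough illustration of the idea, here is a minimal Python sketch that times a critical sequence against an assumed deadline; trigger_event, await_response, and the 50 ms budget are hypothetical stand-ins for real device I/O and a real service-level requirement:

```python
# Hypothetical sketch: timing a "critical sequence" from trigger to
# response against a hard deadline.
import time

DEADLINE_MS = 50  # assumed service-level requirement

def trigger_event():
    pass  # would fire the real trigger, e.g. a sensor signal

def await_response():
    time.sleep(0.005)  # stand-in for waiting on the device's reaction

def measure_latency_ms() -> float:
    start = time.perf_counter()
    trigger_event()
    await_response()
    return (time.perf_counter() - start) * 1000

# Run the sequence repeatedly; the worst case matters, not the average.
samples = [measure_latency_ms() for _ in range(100)]
worst = max(samples)
assert worst <= DEADLINE_MS, f"worst-case {worst:.1f} ms exceeds deadline"
```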

These systems always have unique scheduling routines that need to be verified in addition to race conditions, error handling and concurrency tests. There may also be queues, buffers and varying memory availability that need to be tested.

Acceptance testing clearly has to happen on actual devices. Simulating any part of a critically time-sensitive environment will not give realistic results. This does not mean simulators are not useful on real-time systems; it just means that simulators are great for testing early but do not replace testing on the actual device.

Systems connected through the internet complicate things further. Real-time systems normally have bandwidth issues, but usually not power issues. The internet, however, opens up performance and interoperability problems that need to be overcome.

You can test a home alarm system calling the police. You can test it calling the police with the air conditioner, microwave, and clothes dryer on. You can test it calling the police under a power flux, or with three or four people in the house streaming movies. These might be your base tests; the scenarios only get more complicated from here.
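
One hedged sketch of how those scenarios might be organized as a parameterized suite, assuming pytest; the load profiles and helper functions are made up and would wrap real test-lab controls:

```python
# Hypothetical pytest sketch: the same alarm-calls-police check, run
# under increasingly hostile background-load conditions.
import pytest

LOAD_PROFILES = [
    "idle_house",
    "ac_microwave_dryer_on",
    "power_flux",
    "four_video_streams",
]

def apply_load(profile: str):
    pass  # would switch on appliances / shape network traffic in the lab

def trigger_alarm_and_check_call() -> bool:
    return True  # would fire the alarm and verify the monitoring call

@pytest.mark.parametrize("profile", LOAD_PROFILES)
def test_alarm_calls_police_under_load(profile):
    apply_load(profile)
    assert trigger_alarm_and_check_call(), f"call failed under {profile}"
```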

Creating these types of tests requires great test design skills, very clear benchmarks from the team as to service levels, and performance tooling skills.

The benchmarks for real-time system tests include agreements from sales, marketing, and legal departments, as well as regulatory compliance requirements, all of which have to be validated.

Parallel Hardware, OS and Software Development: concurrent development projects need great communication and a lot of re-testing.

A part of the embedded systems lifecycle that traditional desktop and web teams will find very different is the development of the device itself.

Hardware can be in development while a different team works on the OS and perhaps the firmware, and still other teams build the software: applications, connectivity, interfaces, API calls, databases, and so on. All of this can take place in parallel, and without a lot of information sharing.

If you are new to this area, it is more common than you would think that test teams from various parts of the product do not know or see each other much. They may not share much information, and likely have very different skill sets. This lack of collaboration has a large impact on testing. Improving communication and sharing knowledge are obvious areas to incorporate into your processes to improve testing.

Software teams can find they are building code for a moving hardware target. In my experience, the hardware teams are king: they control whatever gets included or left out. When the hardware product is done, the software teams often have to adjust to the new hardware and redo all the tests. Very often, the software has to be adjusted to make up for shortcomings of the hardware.

It is pretty much the same with the system or OS: whatever the OS team includes is what you get. The software or apps teams, usually the last teams in the process, might have to readjust to the new target and re-run all the tests as though for the first time. This does not simply mean re-running a regression suite; it may require redoing exploratory testing, error guessing, and all the tests on the new hardware and OS.

Software teams can't wait to schedule their work until the hardware and OS teams are done, nor should they. Software teams often find bugs, issues, or limitations by using unfinished functionality on beta-stage hardware and OSs that the hardware and OS teams did not catch.

Test Automation: diverse test needs, a lack of tools, and testing on simulators or through consoles complicate test automation.

The variety of configurations, versions, patches, updates, and supported devices and platforms makes automation both mandatory and complex.
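
As a rough illustration of why, here is a sketch of how quickly a configuration matrix grows; the models, firmware versions, and protocols are made up:

```python
# Illustrative sketch of the configuration matrix automation must cover.
# Values are hypothetical; the point is how fast combinations multiply.
from itertools import product

models = ["hub-v1", "hub-v2"]
firmware = ["1.0.3", "1.1.0", "2.0.0-beta"]
protocols = ["zigbee", "z-wave", "wifi"]
app_versions = ["4.8", "4.9"]

matrix = list(product(models, firmware, protocols, app_versions))
print(f"{len(matrix)} combinations")  # 2 * 3 * 3 * 2 = 36

for model, fw, proto, app in matrix:
    # Each combination would drive one automated smoke-test run.
    pass
```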

Finding a tool specific to a platform may not be possible, so customization is essential.

Emulators and simulators are very useful for device automation. However, a simulator is not the same as a device. If all the automation runs only on a simulator, a lot of manual testing will still have to be done on the actual device.

As always with automation, test design is the primary key to success. Every type of testing we have covered has its own flavor of automation. Databases, install and upgrade, interoperability, connectivity, performance, security: all have different needs for successful test automation, independent of functionality validation and testing.

Summary

There is no magic answer for how to test the IoT. It is complicated, with many unknowns, but it is also exciting. Adding internet connectivity to embedded systems will build skills that take you far into testing in the 21st century. Seek information. Build skills in a variety of test types, platforms, and tools.

There is currently a kind of “Wild West” mentality in this blossoming industry, with few standards. Many platform providers have little real focus on performance, security, and interoperability. This will undoubtedly change over time. But for now, you are testing in uncharted waters.

Test early, test often. Report risk and coverage limitations even more than you report what you have actually tested.

Remember, in part two we will investigate more mobile internet considerations, such as remote control, performance testing, security testing, cloud APIs, big data testing, and interoperability testing.

Michael Hackett

Michael is a co-founder of LogiGear Corporation and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

