The Internet of Things: Software Testing’s New Frontier – Part 2

This is the second part of a two-part article that analyzes the impact of product development for the Internet of Things (IoT) on software testing.

Part one of this article (LogiGear Magazine, Sept 2014) gave a wide view of the IoT, embedded systems, and the device development aspects of testing on these projects. This article continues with a focus on the connectivity, data, and mobile pieces, including security, performance, and remote device control, commonly from a smartphone.

Remote control

Remote access is nothing new to embedded systems. From HVAC systems, to medical devices, to factory automation, embedded systems have been used to monitor, change settings, and turn things on or off from a remote location.

Many of us use IoT devices with remote services without thinking of it as remote control. Digital video recorder set-top boxes, such as a TiVo device, use remote control for downloading and playing movies from a browser, not only from home but anywhere there is internet access. And it’s now common to use mobile phones to operate video cams to check on the baby, grandma, or even the dog. It’s now hard to think of things you can’t do remotely, and remote interactivity is a key selling point for IoT devices.

Remote control of devices presents us with three main testing issues: interoperability, compatibility, and security. Interoperability issues cause people to have specific remotes for each device, which pretty much defeats the idea of ease and usability. But this also creates opportunities for fast-moving players in the market to grab market share.

Jean-Louis Gassée (an Apple alumnus and BeOS co-founder) recently predicted we will likely have the problem he calls the “basket of remotes,” where we’ll have hundreds of applications to interface with hundreds of devices that don’t share protocols for speaking with one another. Cooperation among devices and remote controls seems too much to ask for at this point. This means that supported and unsupported devices and platforms must be clearly defined, then tested to agreed-upon coverage across the range of conditions under which the remote control can operate.

There’s also the dependency issue. Your embedded device may be working fine, but a remote control or smartphone under low-battery conditions may automatically turn off a sensor or reduce functionality to conserve power. The result is that your device does not work as expected. Testing remote-control devices for loss-of-signal conditions and a host of interrupt conditions may be a bigger testing job to scope and execute than testing your limited-function device.
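
As a minimal sketch of how such interrupt coverage might be organized, the parameterized test below enumerates a handful of loss-of-signal and interruption conditions. The RemoteSession class and the condition names are hypothetical stand-ins for whatever control API your device actually exposes.

# Sketch: enumerating interrupt/loss-of-signal conditions for a remote-controlled device.
# RemoteSession and the condition names are hypothetical placeholders for a real control API.
import pytest

INTERRUPT_CONDITIONS = [
    "signal_lost",        # connection drops mid-command
    "low_battery",        # phone throttles radios to save power
    "incoming_call",      # OS suspends the remote-control app
    "app_backgrounded",   # user switches to another app
]

class RemoteSession:
    """Hypothetical wrapper around the device's remote-control API."""
    def __init__(self):
        self.device_state = "idle"

    def send_command(self, command, interrupt=None):
        if interrupt is not None:
            # A robust device should hold its last known-good state when interrupted.
            return self.device_state
        self.device_state = command
        return self.device_state

@pytest.mark.parametrize("interrupt", INTERRUPT_CONDITIONS)
def test_device_state_survives_interrupt(interrupt):
    session = RemoteSession()
    session.send_command("heat_on")
    state_after = session.send_command("heat_off", interrupt=interrupt)
    # Expectation: an interrupted command never leaves the device in an undefined state.
    assert state_after in ("heat_on", "heat_off")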

Wearables

The word wearable in the IoT can mean many things. Wearables are not tested as a group, rather their testing implications are defined by the device type:

  • Wearable can mean Google Glass, which is a complete internet-enabled device by itself.
  • It can mean a multifunction Apple Watch or a limited-function “smart” watch that can only receive notifications but take no action. Most smartwatches currently available need another device, such as a mobile phone, for remote control or as a UI for other functions.
  • Wearable can also mean an embedded system by itself that operates with remote control or M2M in a different way, such as accelerometers and pedometers that measure movement and collect data for physical therapy.

On the surface, testing wearable devices in the IoT is just like testing any other similar device:

  • Individual sensors are tested as individual sensors— wearable or not.
  • Watches with sensor functionality and a very limited UI can be tested from a mobile device.
  •  Wearable as command center can be tested as such.

When discussing wearables, it is important to remember one of the key aspects of handheld mobile device testing: haptics. These are the various communication sensations from the device that you feel, like a click, a vibration, or a pulse. We know from mobile device testing this limits the use of emulators/simulators— an emulator cannot vibrate in your hand! Many wearable devices have haptic sensations that must be tested using the real device for function, multiple functions with perhaps multiple sensations in sequence, usability, and timing/race conditions.

Headsets, such as smart helmets for firefighters, often include sensors for temperature, motion, geolocation, and gesture. With a camera and a variety of communication sensors, there is also a display. These systems carry out common computer tasks under uncommon conditions, all hands-free via Wi-Fi and Bluetooth. Wearables as command centers are growing very rapidly in public safety, construction, and industrial applications. They present interesting and complex test conditions and scenarios to create or simulate.

Range

With mobility, range becomes an issue. We know mobile 3G and 4G connections have to be tested under various signal-strength conditions as well as loss of signal. The same goes for Wi-Fi. For other technologies there are often very different cases to test. For example, each Radio Frequency Identification (RFID) chip has a defined range, from 10 cm (the maximum range for Near Field Communication, or NFC) to 100 meters or more with certain chips.

RFID tags and readers/transponders come in many varieties, from the chips in “smart cards” used much like NFC, to long-range RFID tags like those used in toll collection systems that can be read at 300 feet (~100 m). The read ranges depend on power source and antennas, among other factors.

RFID tags are the most common Internet of Things sensors; they have already been called “the duct tape of the digital age.” RFID tags are attached to cash, boxes, clothing, and possessions, and even implanted in pets and people. There is large variability in these devices’ capacity and function, so you can’t test one RFID chip in and out of range and expect to satisfy any meaningful test requirement.
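
To make that variability explicit, one option is a small boundary table per chip class, exercised just inside and just outside the nominal range. The chip classes, ranges, and the can_read() stub below are illustrative assumptions, not vendor specifications.

# Sketch: boundary coverage for read range across RFID chip classes.
# The chip classes, nominal ranges, and can_read() stub are illustrative assumptions.
import pytest

# Nominal maximum read range in meters (illustrative values only).
CHIP_RANGES = {
    "nfc_tag": 0.10,
    "passive_uhf": 12.0,
    "active_toll_tag": 100.0,
}

def can_read(chip, distance_m):
    """Stand-in for driving a real reader/antenna rig at a given distance."""
    return distance_m <= CHIP_RANGES[chip]

@pytest.mark.parametrize("chip,max_range", CHIP_RANGES.items())
def test_read_inside_and_outside_range(chip, max_range):
    # Just inside the nominal range: the tag must be readable.
    assert can_read(chip, max_range * 0.9)
    # Just outside: a read must not silently succeed.
    assert not can_read(chip, max_range * 1.1)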

Emulators/Simulators

Emulators and simulators are, and will continue to be, needed for testing mobile and IoT systems.

Note: Emulators and simulators are similar but not identical; for convenience when discussing testing the IoT, I will use “simulator” only.

These software tools make it possible to test situations that would be nearly impossible to replicate consistently using actual devices. Good examples are testing for faults or interruptions and checking what happens when the signal is lost during a data transfer. From a testing perspective, simulators can be used for internal tests of base functionality, but they should definitely not be used to sign off on any critical application.
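
As an example of the kind of fault injection a simulator enables, the sketch below drops a simulated link partway through a transfer and checks that the loss is detectable. SimulatedLink is a hypothetical stand-in for your actual simulation layer.

# Sketch: injecting a mid-transfer signal loss in a simulated link.
# SimulatedLink is a hypothetical stand-in for your device/network simulator.
class SignalLost(Exception):
    pass

class SimulatedLink:
    def __init__(self, drop_after_chunks=None):
        self.drop_after_chunks = drop_after_chunks
        self.received = []

    def send(self, chunks):
        for i, chunk in enumerate(chunks):
            if self.drop_after_chunks is not None and i == self.drop_after_chunks:
                raise SignalLost(f"link dropped after {i} chunks")
            self.received.append(chunk)

def test_partial_transfer_is_detectable():
    payload = [b"chunk-%d" % i for i in range(10)]
    link = SimulatedLink(drop_after_chunks=4)
    try:
        link.send(payload)
        assert False, "expected the simulated link to drop"
    except SignalLost:
        # The receiver must be able to tell the transfer is incomplete.
        assert len(link.received) < len(payload)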

Simulators can greatly facilitate early testing and may be the only window into a device with no UI. While this makes them extremely useful, it’s necessary to be aware of their major limitations:

  • The simulator software itself may have bugs.
  • Simulators may not (and likely don’t) replicate all the features of the production device.
  • They cannot reproduce the haptic experience.
  • Race conditions, asynchronous events, and nondeterministic behavior may be impossible to simulate.

For some devices, a simulator may be the only way to automate testing. This can lead to the situation where it’s necessary to create the simulation software yourself. It’s not uncommon for this to take as much time and effort as building the software for the actual device, which is one reason some companies are reluctant to make the investment.

Simulators are extremely valuable for mobile device testing due to the sheer number of devices in the market. You can quickly test a lot of versions by automating the tests and running simulators on multiple machines or virtual machines. Obviously, testing actual devices is ideal, but it’s not always practical, especially when it comes to Android. Android is open source and there are a lot of variations that will need to be tested. Although a generic Android simulator can be used, it’s best to use a simulator supplied by the device manufacturer.
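
A minimal sketch of that version-matrix approach follows, assuming pytest and a hypothetical launch_simulator() hook that would, in practice, boot the manufacturer-supplied image (or a generic AVD) for each Android API level.

# Sketch: running the same smoke test against several simulated Android versions.
# launch_simulator() is a hypothetical hook; in practice it would start the
# manufacturer-supplied simulator image (or a generic AVD) for that API level.
import pytest

API_LEVELS = [19, 21, 23]  # illustrative set of Android versions to cover

def launch_simulator(api_level):
    """Hypothetical: boot a simulator image for the given API level and return a handle."""
    return {"api_level": api_level, "booted": True}

@pytest.mark.parametrize("api_level", API_LEVELS)
def test_app_smoke_on_each_android_version(api_level):
    sim = launch_simulator(api_level)
    assert sim["booted"]
    # ...drive the app's critical path here via your UI automation tool...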

The thing to keep in mind is that a simulator is an approximation of a device. The key to simulators being helpful in your test strategy is knowing when they are really useful and also knowing their limitations and drawbacks.

Connectivity and Communication

Devices with embedded systems have been around for quite a while. Even the new sensors being developed aren’t much different than their predecessors. It’s when all these devices socialize across the internet that things get into high gear fast. Issues of performance, security, communication protocols with varying bandwidth, low power, and a myriad of other conditions have the potential to cause headaches, and at some point they will.

There are many common communication protocols a device may use, and some devices will use multiple technologies: MQTT for low-power messaging; TCP/IP and Wi-Fi for remote control from a mobile phone or a remote mainframe; cellular (3G or 4G); RFID; Bluetooth for low-power device communication; and more. Any of these may be required to work in a low-power, limited-memory environment that will be placed behind a concrete wall. How do you test all this connectivity? It is a lot!
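
As one small, concrete example, a round-trip check of MQTT messaging might look like the sketch below. It assumes the paho-mqtt client library (1.x-style API) and a broker reachable at localhost:1883; the topic and payload are invented for illustration.

# Sketch: round-trip check of low-power messaging over MQTT.
# Assumes the paho-mqtt library (1.x-style API) and a broker reachable at localhost:1883.
import time
import paho.mqtt.client as mqtt

received = []

def on_message(client, userdata, message):
    received.append(message.payload)

def test_mqtt_round_trip():
    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883, keepalive=60)
    client.subscribe("devices/thermostat/status")
    client.loop_start()

    client.publish("devices/thermostat/status", b"temp=21.5")
    time.sleep(1)  # crude wait for the broker to echo the message back

    client.loop_stop()
    client.disconnect()
    assert b"temp=21.5" in received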

How can you test the reliability, scalability, and power efficiency of the chosen topologies, wireless protocols, and radio technology? You will need knowledge, tools, and probably help. You’ll also likely need to automate some tests on a simulator. You can expect to miss bugs, since there are no tools to test directly on the device itself if it has a unique OS. Maybe your team will need a lot of help. If this is the case, remember that reporting risk is a foundational skill for every test engineer.

Performance Testing

As millions of new devices get attached to the internet, the infrastructure burden obviously increases. Load, stress, failover, and performance testing are required. I find that lip service is paid to performance testing but commitment is lacking. Surveys of CIOs bear this out: many say performance testing is seen as important but is not being done. So performance testing becomes a nice-to-have. This is a problem. Performance and security testing are not free. They take skill, tools, commitment, and time. Many companies choose to take their chances with the performance and/or security of their product, and then hope for the best, but hope is not a strategy.

The new crop of internet-connected embedded systems requires a more advanced level of performance testing. For a variety of reasons, performance testing may now be the starting place for a device test project instead of merely a nice-to-have (a short sketch follows the list below):

  • device-limited network bandwidth,
  • interrupts,
  • low battery,
  • timing and scheduling of real-time systems (RTOS),
  • transmission of large data sets,
  • intermittent connectivity.
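
Here is the minimal sketch referenced above: a latency check against an agreed SLA using only the standard library. The endpoint URL, the 500 ms budget, and the tiny sample size are illustrative assumptions; a real performance run would use proper tooling and far larger samples.

# Sketch: a minimal latency check against an agreed SLA.
# The endpoint URL and the 500 ms budget are illustrative assumptions, not real requirements.
import time
import urllib.request

SLA_MS = 500          # assumed service-level budget per request
SAMPLE_COUNT = 20     # small sample; real performance runs would be far larger

def test_status_endpoint_meets_sla():
    latencies = []
    for _ in range(SAMPLE_COUNT):
        start = time.monotonic()
        with urllib.request.urlopen("http://device.local/status", timeout=5) as resp:
            resp.read()
        latencies.append((time.monotonic() - start) * 1000)
    # Compare a high percentile, not just the average, against the SLA.
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    assert p95 <= SLA_MS, f"p95 latency {p95:.0f} ms exceeds SLA of {SLA_MS} ms"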

Performance testing is not incidental. It is a separate concerted effort with different tools, different skills, SLAs (service level agreements) and benchmarks that come from other teams (marketing, legal, sales, customers, competitors, etc.).

Failover or DR (disaster recovery) testing for both hardware and software (when a service or the system goes down, what happens?) is critical. Failover testing will need many varied scenarios, including soap opera (rare, boundary, large-data, once-in-a-lifetime condition) testing.

Borrowing from game testing, there is an additional performance test type called soak testing. Most of the IoT devices being developed are planned for long-term use without powering down or rebooting. Soak testing runs the system for extended periods without stopping, restarting, or rebooting, just like the real-world scenario.
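
A bare-bones soak run can be as simple as the sketch below, which repeats a representative operation for a long window and fails if resident memory drifts too far. It assumes the psutil package; the eight-hour window, the exercise_device() stub, and the 20% growth threshold are illustrative choices, not hard rules.

# Sketch: a soak run that exercises one operation repeatedly and watches for drift.
# Assumes the psutil package; duration, stub, and threshold are illustrative.
import time
import psutil

SOAK_HOURS = 8
ALLOWED_GROWTH = 1.20   # fail if resident memory grows more than 20%

def exercise_device():
    """Hypothetical: one round of typical device activity (read sensor, transmit, idle)."""
    time.sleep(1)

def run_soak():
    proc = psutil.Process()
    baseline_rss = proc.memory_info().rss
    deadline = time.monotonic() + SOAK_HOURS * 3600
    cycles = 0
    while time.monotonic() < deadline:
        exercise_device()
        cycles += 1
    final_rss = proc.memory_info().rss
    assert final_rss <= baseline_rss * ALLOWED_GROWTH, (
        f"memory grew from {baseline_rss} to {final_rss} bytes over {cycles} cycles"
    )

if __name__ == "__main__":
    run_soak()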

Testers alone cannot drive more performance testing, but you do need to stay vigilant about risk assessment and communicate the issues.

Data: collection and transmission

A big part of the IoT is the collection of data. Much is written these days about the connection between the IoT and big data. Whether it is used to give context-aware information, forecast behavior, recognize patterns, or optimize performance, combining large data sets with other large data sets (such as correlating home heater and air conditioner use with ambient temperature and other weather data from the National Weather Service) can clearly provide giant economic, communal, and environmental benefits. Big data means tester skills in database testing, data integrity testing, database performance testing, and testing cloud APIs are essential.
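
At its simplest, data-integrity testing means checking that what the device sent is what the cloud stores and serves back. The sketch below does that round trip with the standard library; the API base URL, payload shape, and device ID are hypothetical placeholders for your real service.

# Sketch: end-to-end data-integrity check from device reading to cloud API.
# The API URL, payload shape, and device_id are hypothetical; swap in your real service.
import json
import urllib.request

API_BASE = "https://api.example.com/v1"   # hypothetical cloud endpoint

def post_reading(device_id, reading):
    req = urllib.request.Request(
        f"{API_BASE}/devices/{device_id}/readings",
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

def get_latest_reading(device_id):
    with urllib.request.urlopen(f"{API_BASE}/devices/{device_id}/readings/latest", timeout=10) as resp:
        return json.loads(resp.read())

def test_reading_survives_round_trip():
    sent = {"temperature_c": 21.5, "timestamp": "2014-09-04T12:00:00Z"}
    post_reading("thermostat-001", sent)
    stored = get_latest_reading("thermostat-001")
    # Integrity: the value stored and served back must match what the device sent.
    assert stored["temperature_c"] == sent["temperature_c"]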

Security

For test engineers, security testing has a few facets. First, there are generally requirements about security, views and access. Functional testers normally validate and test these in daily work. Intrusion and penetration testing— protection from hackers— is usually done by different teams with different knowledge, skills and tools. Both types are absolutely necessary. Like performance testing, many surveys show organizations are not doing (adequate) security testing. There are well-known and sad stories here already. I’m now concerned about who’s in my wallet since the last trip to Home Depot.
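
The functional side of security testing, views and access, lends itself to simple automated checks like the sketch below. It assumes the requests library; the endpoints and the read-only token are hypothetical.

# Sketch: functional access-control checks a tester can automate alongside daily work.
# Assumes the requests library; the URLs and token values are hypothetical.
import requests

BASE = "https://device.example.com/api"   # hypothetical device/cloud API

def test_settings_require_authentication():
    # No credentials at all: the settings view must not be readable.
    resp = requests.get(f"{BASE}/settings", timeout=10)
    assert resp.status_code in (401, 403)

def test_viewer_role_cannot_change_settings():
    # A read-only token must not be able to modify device configuration.
    headers = {"Authorization": "Bearer viewer-token"}   # hypothetical read-only credential
    resp = requests.put(f"{BASE}/settings", json={"target_temp": 30}, headers=headers, timeout=10)
    assert resp.status_code in (401, 403)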

Today’s new cars are chock-full of computer chips, sensors, and nanotechnology, with up to 100,000 lines of software code. From the financial headlines, July 15, 2014: “Compared with cars and trucks of a decade ago, our cars and trucks are staggeringly complex,” said Lisa McCauley, vice president and general manager for Battelle Cyber Innovations, a nonprofit research organization.

Andrew Brown, Delphi vice president and chief technologist, said it would be difficult for a hacker to break into a car’s infotainment systems remotely. But for a hacker who is familiar with the software it’s a different story. “If you can get that sort of access, almost anyone could break into the system,” Brown said. “Without that, it is very difficult.” Let’s hope.

The U.S. National Intelligence Council predicted: “to the extent that everyday objects become information security risks, the IoT could distribute those risks far more widely than the Internet has to date.”

When discussing the Internet of Things in 2012, then CIA chief David Petraeus made the famous statement: “We’ll Spy on You Through Your Dishwasher”. Just think for a moment, there was already someone, somewhere testing that!

“Security experts this month tested 275 Apple iOS- and Android-based mobile banking apps from 50 major financial institutions, 50 large regional banks, and 50 large U.S. credit unions. Overall, they found that eight out of 10 apps were improperly configured and not built using best practices software development. Among the big-name banks whose mobile apps were tested by security firm Praetorian include Bank of America, Citigroup, Wells Fargo, Goldman Sachs, Morgan Stanley, Capital One Financial, and Suntrust Banks. Praetorian did not disclose how each bank’s apps fared in the tests.

Nathan Sportsman, founder and CEO of Praetorian, says the security weaknesses in the mobile banking apps he and his team tested are not pure software vulnerabilities, so they are relatively low-risk issues for exploitation.”

Source: Kelly Jackson Higgins, “Weak Security In Most Mobile Banking Apps,” Dark Reading (InformationWeek), December 12, 2013. http://www.darkreading.com/vulnerabilities—threats/weak-security-in-most-mobile-banking-apps/d/d-id/1141054?

Interoperability

Recently, at the Apple Worldwide Developers Conference, Apple CEO Tim Cook pointed to iOS as the platform of choice for the IoT and highlighted the continued fracturing of other mobile platforms [read: Android] as the reason why.

After the hype clears, when discussing the IoT, the next topic is the lack of standards. This is mainly the result of competing and uncooperative product companies all vying to be king of the market. We are stuck with this for a while, and it is not a good thing: it makes testing more time-consuming under tight schedules, more complex than it needs to be, and forces us to make difficult test coverage decisions.

There are competing standards for all aspects of devices, from power to internet connectivity to security. Sensors often have trouble talking to each other. There are a few common sensor specifications, and the standards are not secrets. Sensor standards based on the IEEE 1451 specifications have been routinely proposed, but the industry has so far chosen to do the easiest, most convenient, and cheapest thing, which later causes functional interoperability problems, rework, and scalability and performance problems.

Test teams will not solve these problems, but we do have to test for interoperability. This is difficult enough when it comes to a traditional embedded device working with other devices. Again, once you connect to the internet, you open the door to many more issues. Multiple connectivity protocols, cloud API interoperability, remote control from a desktop browser, tablet, smartphone, watch, car… you see where this is going.

So what to do? First, find out what interoperability the programmers are planning. Next, find out what your marketing or sales team is “supporting” or claiming compatibility with. Then document your testing scope and plan for a lot of testing.
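
One lightweight way to document that scope is to generate the combination matrix explicitly, so every supported pairing is either scheduled for testing or consciously excluded. The platform, protocol, and API lists below are illustrative; fill them from what your team actually claims to support.

# Sketch: making interoperability coverage explicit as a combination matrix.
# The platform/protocol/API lists are illustrative; populate them from what marketing
# and development actually claim to support.
import itertools

REMOTE_PLATFORMS = ["iOS phone", "Android phone", "desktop browser", "smartwatch"]
PROTOCOLS = ["Wi-Fi", "Bluetooth LE", "cellular"]
CLOUD_APIS = ["v1 REST", "v2 REST"]

def coverage_matrix():
    return list(itertools.product(REMOTE_PLATFORMS, PROTOCOLS, CLOUD_APIS))

if __name__ == "__main__":
    combos = coverage_matrix()
    print(f"{len(combos)} supported combinations to schedule or explicitly exclude:")
    for platform, protocol, api in combos:
        print(f"  {platform} over {protocol} against {api}")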

Summary

There is no magic answer for how to test the IoT. It is complicated, with many unknowns, and there are some unique test considerations. All of this is on the cutting edge, and it’s exciting.

By adding internet connectivity, databases, big data, and cloud services to your experience base, you are building skills that will be relevant for the long term. Learn and gather information. Build skills in the wide variety of test types discussed here.

It’s currently a kind of “Wild West” mentality with very few standards. Many platform providers think they are already king-of-the-hill and place little real focus on performance, security, and interoperability. This will undoubtedly change over time, but for now you are testing in uncharted waters.

Test early, test often. Leverage automation as much as you can. And, of course, you will be testing under ever tighter product development deadlines and budgets. Test as much as you can and, in my opinion, report risk and coverage limitations even more than you report what you have actually tested.

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

