The 4-Wheeled Monster

5 roadblocks in vehicular autonomy that complicate Software Testing

Experts in the field once pointed to air travel as the gold standard for autonomous vehicle safety, but after Boeing’s two tragedies, that analogy no longer holds for self-driving cars. Boeing’s 737 MAX jets were grounded following software issues that resulted in the deaths of nearly 400 people. However, it was not the technology alone that failed in Boeing’s case; it was pilot re-training—or the lack thereof—along with a lack of standard safety features around the software that caused the accidents. Moving forward, consumers are no longer asking whether they can trust autonomous cars’ technology; they are asking, “Can we trust companies to properly develop these technologies, and trust government bodies to regulate them?” Yet, no one asks how we can trust humans to properly operate non-autonomous vehicles. Rather, we subject them to quasi-regular tests and send them on their way. Humans are not perfect: they text and drive, apply makeup while driving, eat while driving, in some cases drink and drive, or fall asleep at the wheel; the list goes on. Machines, on the other hand, do not partake in these dangerous behind-the-wheel activities; with their sensors and processors, they can navigate the roads and minimize accidents driven by operator error.

But there is one thing the human mind still does better than the machine: analyze the unexpected. If a young child suddenly dashes into the street, the human brain has an immediate reflex, which is to slam on the brakes. A computer, on the other hand, has a split second to analyze the situation: Are there surrounding cars that will be hit if it swerves to avoid the child? Are there cars following closely behind that will rear-end the car if it slams on the brakes? Should it just proceed as if nothing is there?

These are the tough choices we as human vehicle operators (drivers) must be prepared to make at all times. Is the technology behind autonomous cars good enough to do the same?

Here are some common qualms consumers and Software Testers alike have regarding autonomous vehicles.

1. Unpredictable Humans

Computer algorithms can equip autonomous driving software to handle the rules of the road—stop at a stop sign, don’t cross over a double yellow, obey the speed limits, etc. But what computers cannot control is the behavior of the other, human drivers on the road. As mentioned earlier, humans are not perfect drivers: they speed, tailgate, cross double yellows, and sometimes even run red lights. An evolving solution to this issue is vehicle-to-vehicle (V2V) communication. However, this technology is still in the early stages of development; furthermore, it will only be a viable solution once a majority of vehicles on the road are equipped with it. That makes V2V a solution for the more distant future, one largely dependent on consumers buying newer model-year vehicles.
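To make the V2V idea concrete, here is a minimal sketch of the kind of “basic safety message” vehicles would broadcast, loosely modeled on the data categories (position, speed, heading) that V2V standards define. The field names, the 0.1-second broadcast interval, and the hard-braking threshold are illustrative assumptions, not a real V2V stack.

```python
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    """One periodic V2V broadcast from a vehicle (illustrative fields)."""
    vehicle_id: str
    latitude: float    # degrees
    longitude: float   # degrees
    speed_mps: float   # meters per second
    heading_deg: float # 0-360, clockwise from north

def hard_braking_alert(msg: BasicSafetyMessage, prev_speed_mps: float,
                       interval_s: float = 0.1) -> bool:
    """Flag a hard-braking event so trailing vehicles can react early.

    Deceleration is estimated from two consecutive broadcasts; the
    4 m/s^2 (~0.4 g) threshold is an assumed value for the sketch.
    """
    decel = (prev_speed_mps - msg.speed_mps) / interval_s
    return decel > 4.0

msg = BasicSafetyMessage("veh-42", 37.77, -122.42,
                         speed_mps=20.0, heading_deg=90.0)
print(hard_braking_alert(msg, prev_speed_mps=20.2))  # gentle slowdown -> False
print(hard_braking_alert(msg, prev_speed_mps=20.8))  # abrupt drop -> True
```

A trailing car receiving such an alert could begin braking before its own sensors ever see the brake lights ahead—which is exactly why V2V only pays off once most of the fleet broadcasts.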

2. Weather

Human drivers have enough trouble navigating hazardous weather conditions like rain, fog, snow, or hail, and autonomous cars are no different. Autonomous cars maintain their lane by using cameras that track the lines of the road, and falling snow and rain can make identifying upcoming objects difficult for laser sensors. Reports of on-road tests of autonomous cars routinely cite weather as a primary cause of system failure. While there is no direct fix for this, it is something engineers will need to address as autonomous car companies begin testing their systems in snow-ridden states such as Pennsylvania and Massachusetts.
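A toy illustration (not a real perception pipeline) of why precipitation trips up camera-based lane keeping: bright rain or snow speckle pushes background pixels toward the intensity of painted lines, so a simple brightness threshold starts reporting false lane pixels. All of the numbers below are made up for the sketch.

```python
import random

def detect_lane_pixels(scanline, threshold=200):
    """Return indices of pixels bright enough to be painted lane markings."""
    return [i for i, v in enumerate(scanline) if v >= threshold]

random.seed(0)
road = [60] * 100              # dark asphalt across one camera scanline
for i in range(45, 55):
    road[i] = 250              # a painted lane line

clear = detect_lane_pixels(road)

rainy = road[:]
for i in random.sample(range(100), 20):        # 20 bright precipitation speckles
    rainy[i] = min(255, rainy[i] + random.randint(100, 180))

noisy = detect_lane_pixels(rainy)
print(len(clear), len(noisy))  # the rainy frame reports extra "lane" pixels
```

Real systems use far more robust filtering than a fixed threshold, but the failure mode is the same in kind: weather shrinks the contrast margin the detector depends on.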

3. Infrastructure

Although we would like them to be, roads are not perfect. Potholes, sinkholes, and cracked pavement are all obstacles an autonomous car must contend with. What is that dark circle 150 feet ahead? Is it a puddle or a pothole? Or maybe just a shadow? It is currently unclear how serious a problem this is, but we must design technology to work in the world that exists, not the utopia we wish it to be. Currently, multiple states are in the process of removing installed lane markers—known as Botts’ dots—and replacing them with painted lines. This is because the dots cannot always be recognized by the sensors on an autonomous vehicle. Additionally, inclement weather can cover the dots, making it nearly impossible for a vehicle’s camera system to identify and maintain lanes.

So, as a means of fostering the growth and implementation of autonomous vehicles, California is opting to replace them with wider, thicker, reflective lane markings that sensors can easily identify. Yet not all infrastructure problems can be addressed and fixed as directly as Botts’ dots. It poses the question of how a vehicle will react at sunset in an urban, downtown setting when the shadows of skyscrapers fall across the road. Will autonomous vehicles ease traffic congestion, or will they make it worse by stopping at the foot of a shadow?

4. Emergency Situations

Technology can sometimes fail. At the time of this writing, no car is fully autonomous; all require that a driver be in the driver’s seat, ready to intervene on the system’s behalf if something goes wrong.

But what happens if the safety driver does not take control of the situation? More importantly, what happens if the safety driver does not know they need to take control? The Information reported one such incident at the self-driving car company Waymo. The safety driver behind the wheel fell asleep after about an hour of testing and, in the process, inadvertently touched the gas pedal, returning the car to manual mode. With no effective notification reaching the unconscious driver, the vehicle eventually collided with a median. Stories like this are all too common with current autonomous driving solutions such as Tesla’s Autopilot. In one particular incident in March of 2018, a Tesla Model X owner died after failing to regain control of the vehicle before it fatally collided with a concrete barrier. In the investigation, Tesla stated that in the 6 seconds and 150 meters before the accident, following numerous audio and visual warnings, the driver’s hands did not touch the wheel and no corrective actions were taken.

While Tesla does instruct Autopilot users to remain fully engaged with the drive while Autopilot is active, the situation seems eerily similar to the Boeing story. How will automakers and autonomous vehicle developers properly train users of these cars to use the system?

5. Hacking the Car

When it comes to computers, hacking and hackers are unruly side effects we have to live with. Given the number of computer systems and the amount of software vital to an autonomous car’s function, hacking seems near certain. Hacking cars is already an issue with non-autonomous vehicles: wireless carjackers can hack into cars’ computer systems, toying with the horn, disabling the brakes, even cutting off acceleration. Most counterarguments to this issue point to big-data breaches—such as the Target data breach—and note that they have not hindered the growth of the consumer internet; many times, these breaches happen and society shrugs its shoulders and moves on. However, hacking a 2-ton vehicle is exponentially more dangerous to both the occupants of the vehicle and everyone around it. It will be up to auto manufacturers and software developers to protect their cars’ software to the best of their ability.

Finally, what about system outages? In early May, BMW drivers reported outages in their vehicles’ infotainment system, BMW ConnectedDrive, affecting the Apple CarPlay interface. While a rather minor inconvenience in this instance, it does lead to the question: What if a future autonomous vehicle’s software “goes out,” leaving consumers stranded? Or, worse, what if the system shuts down while the vehicle is traveling with passengers aboard?

Summary

Despite these qualms, self-driving cars aren’t slowing down. According to CB Insights, $4.2 billion was allocated to autonomous driving programs in just the first three quarters of 2018. However, don’t expect full autonomy just yet. The Society of Automotive Engineers defines a 0–5 scale for autonomy in cars, with level 0 meaning all major systems are controlled by humans and level 5 meaning the car is completely capable of self-driving in every situation. Level 5 technology seems to be slipping further and further away, but based on automaker and technology developer estimates, level 4 self-driving cars could become available for sale in the next couple of years, meaning the car would be capable of autonomous driving in some scenarios, though not all. Ford Motor Company’s CEO, Jim Hackett, recently acknowledged that the industry overestimated the arrival of autonomous vehicles. Hackett claims that Ford will still deliver on its promise of self-driving cars for commercial services in 2021, but not at the previously stated scale or level of autonomy. This sets up a commonly asked question: Will autonomous cars ever be truly autonomous, with no geographical limitations? Another question concerns regulations: What are they? For some insight into other automotive and software regulations and how they’re evolving, check out our cover story!
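For reference, the SAE scale described above can be sketched as a simple lookup. The level names below are the standard’s own; the fallback helper is an illustrative simplification, since the key practical divide is that at levels 0–3 a human is still the fallback when the system reaches its limits.

```python
# SAE J3016 driving automation levels (names per the standard).
SAE_LEVELS = {
    0: "No Driving Automation",           # human does all the driving
    1: "Driver Assistance",               # e.g., adaptive cruise OR lane keeping
    2: "Partial Driving Automation",      # steering and speed; driver supervises
    3: "Conditional Driving Automation",  # system drives; driver must take over on request
    4: "High Driving Automation",         # fully self-driving within a limited domain
    5: "Full Driving Automation",         # self-driving everywhere, in all conditions
}

def needs_human_fallback(level: int) -> bool:
    """Levels 0-3 still rely on a human driver as the fallback."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 3

print(SAE_LEVELS[4], needs_human_fallback(4))  # High Driving Automation False
```

Framed this way, the industry debate in the summary is about how large a level 4 car’s “limited domain” can realistically be—and whether level 5’s empty domain restriction is reachable at all.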

Noah Peters
Noah Peters is from the Bay Area. He holds a Bachelor's Degree in Marketing from the University of San Francisco. Noah started at LogiGear as a Marketing Intern and transitioned to a full-time Marketing Associate role post-graduation. Noah is passionate about content creation and SEO, and works closely with all content produced by LogiGear, including the LogiGear Magazine, the LogiGear Blog, and various LogiGear eBooks. In his free time, you can find Noah researching the automotive industry or teaching high school marching band.
