5 roadblocks in vehicular autonomy that complicate Software Testing
Experts in the field have previously pointed to air travel as something of a gold standard for autonomous vehicle safety, but after Boeing's two tragedies, that analogy no longer holds when talking about self-driving cars. Boeing's 737 MAX jets were grounded following software issues that resulted in the deaths of 346 people. In Boeing's case, however, it was not the technology itself that failed; rather, it was inadequate pilot re-training and a lack of standard safety features around the software that caused the accidents. Moving forward, consumers are not asking whether they can trust the technology in autonomous cars. They are asking, "Can we trust companies to properly develop these technologies, and can we trust government bodies to regulate them?"

Yet no one asks how we can trust humans to properly operate non-autonomous vehicles. We simply subject them to quasi-regular tests and send them on their way. Humans are not perfect: they text and drive, apply makeup while driving, eat while driving, and in some cases drink and drive or fall asleep at the wheel; the list goes on. Machines, on the other hand, do not partake in these dangerous behind-the-wheel activities; with their sensors and processors, they can easily navigate the roads and minimize accidents caused by operator error.
But there is one thing the human mind can still do better than the machine: analyze the unexpected. If a young child suddenly dashes into the street, the human brain reacts instinctively and slams on the brakes. A computer, on the other hand, has mere seconds to analyze the situation: Are there surrounding cars that will be hit if it swerves to avoid the child? Is there a car following closely behind that will rear-end the vehicle if it slams on the brakes? Should it simply proceed as if nothing is there?
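To make the testing challenge concrete, here is a deliberately simplified sketch of the kind of decision logic such a moment involves. The function, fields, and thresholds are hypothetical, not drawn from any real autonomous driving stack; the point is how many branches even a toy version exposes to a test suite.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Hypothetical snapshot of what the vehicle's sensors report."""
    obstacle_ahead: bool           # e.g., a child detected in the lane
    distance_to_obstacle_m: float  # meters to the detected obstacle
    speed_mps: float               # current vehicle speed, meters/second
    adjacent_lane_clear: bool      # is there room to swerve?
    follower_too_close: bool       # risk of being rear-ended on hard braking

def choose_emergency_maneuver(p: Perception) -> str:
    """Illustrative only: pick a maneuver for a sudden obstacle.

    Real planners weigh far more signals (road friction, occlusions,
    trajectory predictions), but even this toy version shows how many
    branches a test suite must cover.
    """
    if not p.obstacle_ahead:
        return "continue"
    # Rough stopping-distance check (reaction distance ignored for brevity).
    stopping_distance = p.speed_mps ** 2 / (2 * 7.0)  # ~7 m/s^2 hard braking
    if stopping_distance < p.distance_to_obstacle_m and not p.follower_too_close:
        return "brake_hard"
    if p.adjacent_lane_clear:
        return "swerve"
    return "brake_hard"  # least-bad option when no lane is clear
```

Every branch above corresponds to a scenario a tester has to construct, and production planning software has orders of magnitude more of them.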
These are the tough choices that we, as human drivers, must be prepared to make at all times. Is the technology behind autonomous cars good enough to do the same?
Here are some common qualms consumers and Software Testers alike have regarding autonomous vehicles.
1. Unpredictable Humans
Computer algorithms can equip autonomous driving software to handle the rules of the road: stop at a stop sign, don't cross a double yellow line, obey the speed limit, and so on. What computers cannot control, however, is the behavior of the other, human drivers on the road. As mentioned earlier, humans are not perfect drivers: they speed, they tailgate, they cross double yellows, and they even run red lights sometimes. An evolving solution to this issue is vehicle-to-vehicle (V2V) communication. However, this technology is still in its early development stages, and it will only become a viable solution once a majority of vehicles on the road are equipped with it. That makes it a fix for the more distant future, one that depends largely on consumers buying newer model-year vehicles.
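Below is a minimal sketch of what a V2V broadcast might carry. The message fields are loosely inspired by the position, speed, and heading data such systems exchange, but the names and the JSON encoding are illustrative assumptions, not the SAE J2735 standard or any production radio stack.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Illustrative V2V payload; field names are invented for this sketch."""
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    brake_applied: bool
    timestamp: float

def broadcast(msg: BasicSafetyMessage) -> bytes:
    """Serialize the message for a hypothetical local broadcast.

    Real V2V uses DSRC or C-V2X radios and signed payloads; a test
    harness can still exercise encoding/decoding logic like this.
    """
    return json.dumps(asdict(msg)).encode("utf-8")

# Example: a hard-braking vehicle warning the cars behind it.
warning = BasicSafetyMessage(
    vehicle_id="demo-123", latitude=40.4406, longitude=-79.9959,
    speed_mps=3.0, heading_deg=270.0, brake_applied=True,
    timestamp=time.time(),
)
payload = broadcast(warning)
```

Even a toy encoder like this hints at the testing surface: message loss, stale timestamps, and spoofed senders all have to be handled before V2V can be trusted as a safety input.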
2. Weather
Human drivers have enough trouble navigating hazardous weather conditions like rain, fog, snow, or hail, and this is no different for autonomous cars. Autonomous cars maintain their lane using cameras that track the lines of the road. Falling snow and rain can also make identifying upcoming objects difficult for laser sensors. Reports of on-road tests of autonomous cars regularly cite weather as a primary cause of system failures. While there is no direct fix for this, it is something engineers will need to address as autonomous car companies begin testing their systems in snow-prone states such as Pennsylvania and Massachusetts.
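For testers, weather effectively becomes another input parameter. A minimal, hypothetical sketch of a degraded-perception fallback policy might look like the following; the thresholds and mode names are invented for illustration, and real systems fuse maps, GPS, and radar before deciding anything.

```python
def lane_keeping_mode(lane_confidence: float, weather_degradation: float) -> str:
    """Decide how to respond when lane-line detection degrades.

    lane_confidence: 0.0-1.0 confidence from the camera pipeline.
    weather_degradation: 0.0-1.0 estimate of how badly rain, snow, or
    fog is hurting perception. Thresholds below are illustrative only.
    """
    effective = lane_confidence * (1.0 - weather_degradation)
    if effective > 0.8:
        return "autonomous_lane_keeping"
    if effective > 0.5:
        return "reduce_speed_and_warn"
    return "request_driver_takeover"

# A test matrix would sweep both inputs, e.g. heavy snow despite an
# otherwise clear lane reading:
assert lane_keeping_mode(0.9, 0.6) == "request_driver_takeover"
```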
3. Infrastructure
Although we would like them to be, roads are not perfect. Potholes, sinkholes, and cracked pavement are all hazards an autonomous car must negotiate. What is that dark circle 150 feet ahead? Is it a puddle or a pothole? Is it maybe just a shadow? It is currently unclear how serious an issue this is, but either way, we must design technology to work in the world that exists, not the utopia we wish it to be. Currently, multiple states are in the process of removing their installed raised lane markers, known as Botts' dots, and replacing them with painted lines. This is because the dots cannot always be recognized by the sensors on an autonomous vehicle. Additionally, inclement weather can cover the dots, making it nearly impossible for the vehicle's camera system to identify and maintain lanes.
So, as a means of fostering the growth and implementation of autonomous vehicles, California is opting to replace the dots with wider, thicker, reflective lane markings that sensors can identify more easily. Not every infrastructure problem can be addressed and fixed as quickly as Botts' dots, though. How, for instance, will a vehicle react at sunset in an urban, downtown setting when the shadows of skyscrapers fall across the road? Will autonomous vehicles ease traffic congestion, or will they make it worse by stopping at the foot of a shadow?
4. Emergency Situations
Technology can sometimes fail. At the time of this writing, no car is fully autonomous; every one requires a driver in the driver's seat who can intervene on the system's behalf if something goes wrong.
But what happens if the safety driver does not take control of the situation? More importantly, what happens if the safety driver does not know they need to take control? The Information reported on an incident of exactly this kind at the self-driving car company Waymo. The safety driver behind the wheel fell asleep after about an hour of testing and, in the process, inadvertently touched the gas pedal, returning the car to manual mode. With no effective notification reaching the now-unconscious driver, the vehicle eventually collided with a median. This story is all too familiar with current autonomous driving aids such as Tesla's Autopilot. In one particular incident in March of 2018, a Tesla Model X owner died after failing to regain control of the vehicle before it collided with a concrete barrier. In the investigation, Tesla stated that the vehicle reported that, in the 6 seconds and 150 meters before the accident (roughly 25 meters per second, or about 56 mph) and following numerous audio and visual warnings, the driver's hands did not touch the wheel and no corrective actions were taken.
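From a testing standpoint, the escalating warnings described above are essentially a state machine whose every transition needs coverage. Here is a minimal sketch of such an escalation policy; the stages and timings are invented for illustration and do not reflect Tesla's actual implementation.

```python
def attention_warning(seconds_hands_off: float, autopilot_engaged: bool) -> str:
    """Escalate warnings the longer a driver's hands are off the wheel.

    Stages and thresholds are hypothetical. A test suite would step
    across each boundary and, crucially, verify what happens after the
    final stage, which is the case the crash investigations describe.
    """
    if not autopilot_engaged or seconds_hands_off < 10:
        return "no_warning"
    if seconds_hands_off < 20:
        return "visual_warning"
    if seconds_hands_off < 30:
        return "audible_warning"
    return "slow_down_and_disengage"
```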
While Tesla does instruct drivers to remain fully engaged while Autopilot is active, the situation seems eerily similar to the Boeing story. How will automakers and autonomous vehicle developers properly train the people who drive these cars to use the system?
5. Hacking the Car
When it comes to computers, hacking and hackers are unruly side effects we have to deal with, and given the number of computer systems and the amount of software vital to an autonomous car's function, hacking seems all but certain. It is already an issue with non-autonomous vehicles: wireless carjackers can hack into a car's computer systems, toying with the horn, disabling the brakes, even cutting off acceleration. Most counterarguments to this concern point to large data breaches, such as the Target breach, and note that they have not hindered the growth of the consumer internet; these breaches happen, society shrugs its shoulders, and everyone moves on. However, hacking a 2-ton vehicle is far more dangerous to both the occupants of the vehicle and everyone around it. It will be up to auto manufacturers and software developers to protect their cars' software to the best of their ability.
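One concrete defense is refusing to run code the manufacturer did not sign. The sketch below checks an over-the-air update against a keyed hash; it is a simplification, since production systems rely on asymmetric signatures, certificate chains, and a hardware root of trust rather than a shared key, but it shows the kind of check a security-minded test suite would try to defeat.

```python
import hashlib
import hmac

def verify_update(payload: bytes, signature: bytes, shared_key: bytes) -> bool:
    """Accept an update only if its HMAC-SHA256 tag matches.

    Simplified for illustration: real automotive update pipelines use
    asymmetric signatures and rollback protection, not a shared key.
    """
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Example: a tampered payload must be rejected.
key = b"demo-only-key"
firmware = b"braking-module v2.1"
tag = hmac.new(key, firmware, hashlib.sha256).digest()
assert verify_update(firmware, tag, key)
assert not verify_update(firmware + b" tampered", tag, key)
```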
Finally, what about system outages? In early May, BMW drivers reported outages in their vehicles' infotainment system, BMW ConnectedDrive; the Apple CarPlay interface was affected. While a rather minor inconvenience in this instance, it does raise the question: What if a future autonomous vehicle's software "goes out," leaving consumers stranded? Or, worse, what if the system shuts down while the vehicle is traveling and carrying passengers?
Summary
Despite the qualms, self-driving cars aren't slowing down. According to CB Insights, $4.2 billion was allocated to autonomous driving programs in just the first three quarters of 2018. However, don't expect full autonomy just yet. The Society of Automotive Engineers ranks driving autonomy on a 0-5 scale, where level 0 means humans control all major systems and level 5 means the car is capable of driving itself in every situation. Level 5 technology seems to be getting further and further away, but based on automaker and technology developer estimates, level 4 self-driving cars, capable of autonomous driving in some scenarios though not all, could become available for sale in the next couple of years. Ford Motor Company's CEO, Jim Hackett, recently acknowledged that the industry overestimated how quickly autonomous vehicles would arrive. Hackett says Ford will still deliver on its promise of self-driving cars for commercial services in 2021, but not at the previously stated scale or level of autonomy. This sets up a commonly asked question: Will autonomous cars ever be truly autonomous, with no geographical limitations? Another question concerns regulations: What are they? For some insight on other automotive and software regulations and how they're evolving, check out our cover story!