Testing Basics Might Have Averted Obamacare Health Site Fiasco
Attention to testing best practices could’ve avoided a hellish user experience and bad PR
By: Vu Lam
Dec. 12, 2013 08:09 AM
It made headlines for all the wrong reasons when it launched on October 1, but things could have been so different for the HealthCare.gov website if only it had been tested properly before release. Users trying to enroll encountered all sorts of glitches, including very slow page updates, "page not found" errors and frequent crashes.
Early server outages were blamed on an unexpectedly high volume of traffic as nearly 5 million Americans tried to access the website on day one, but it soon emerged that the software contained serious flaws and that its security had never been properly assessed or signed off on.
According to CBS, the security testing was never completed. Fox uncovered a testing bulletin from the day before the launch revealing that the site could handle only 1,100 users "before response time gets too high." The Washington Examiner reported, via an anonymous source, that full testing was delayed until just a few days before the launch: instead of the 4 to 6 months of testing the project warranted, it was tested for only 4 to 6 days.
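A bottleneck like the one in that bulletin is exactly what even a rudimentary load test, run well before launch, is meant to surface. The sketch below is purely illustrative: a local stub server stands in for the real site, and the harness ramps up concurrent simulated users and reports an approximate 95th-percentile response time at each level.

```python
# Minimal load-test sketch (hypothetical). A local stub server stands in
# for the system under test; the harness measures how response times
# degrade as the number of concurrent "users" grows.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.01)                  # simulate server-side work
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):         # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_request(_):
    start = time.perf_counter()
    urllib.request.urlopen(url).read()
    return time.perf_counter() - start

def p95_latency(users):
    """Fire `users` simultaneous requests; return ~95th-percentile latency."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = sorted(pool.map(timed_request, range(users)))
    return times[int(len(times) * 0.95) - 1]

results = {users: p95_latency(users) for users in (10, 50)}
for users, p95 in results.items():
    print(f"{users} users: p95 latency = {p95:.3f}s")
server.shutdown()
```

Run against a staging environment instead of a stub, a curve like this makes the "too high" threshold visible weeks in advance rather than the day before launch.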
Amid the apologies, the resignations, and the frantic efforts to repair the site by the end of November, there are serious lessons to be learned. A proper test plan with a realistic schedule would have prevented this catastrophe.
Start with an Estimate
That plan should be based on documentation outlining the software's requirements, discussion with the developers, and the wealth of experience that testing professionals possess. If requirements change significantly, or new requests are introduced, the plan must be revised to accommodate them. This is one major area where things obviously went awry. According to the Washington Examiner's source, there were "ever-changing, conflicting and exceedingly late project directions. The actual system requirements for Oct. 1 were changing up until the week before."
This is a clear recipe for disaster.
To adapt testing to modern software development, it pays to involve testers earlier in the process. They need to understand the system and really identify with the end user, and it is much more cost-effective to fix flaws and bugs sooner rather than later.
There's a logistical consideration as well. Each new build means a full regression test, bug fix verification, and a healthy dose of exploratory testing to make sure the new features are working as intended. It's important for the test team to scale up as the amount of work grows, and as much of the regression testing as possible should be automated to reduce the workload.
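In practice, an automated regression suite can start as simply as a set of pinned-down checks on business rules that every new build must pass unchanged. The sketch below is hypothetical, not drawn from the actual HealthCare.gov code: the eligibility function and its poverty-line figures are illustrative stand-ins for whatever rules the real system enforced.

```python
# Hypothetical regression suite sketch: a pure business-rule function
# plus tests that lock in behavior verified in an earlier cycle, so each
# new build can be re-verified automatically.
import unittest

def premium_subsidy(income, household_size, base_fpl=11490, per_person=4020):
    """Illustrative rule: subsidy applies when income is between 100% and
    400% of the federal poverty line for the household size."""
    fpl = base_fpl + per_person * (household_size - 1)
    return fpl <= income <= 4 * fpl

class RegressionTests(unittest.TestCase):
    # Each test pins down a behavior confirmed in a previous test cycle;
    # a failure here means a new build has regressed.
    def test_single_filer_in_range(self):
        self.assertTrue(premium_subsidy(30000, 1))

    def test_income_below_poverty_line(self):
        self.assertFalse(premium_subsidy(10000, 1))

    def test_family_income_above_400_percent(self):
        self.assertFalse(premium_subsidy(200000, 4))

if __name__ == "__main__":
    unittest.main()
```

Once a suite like this exists, running it on every build costs minutes rather than the days a manual regression pass takes, which is precisely what lets a test team keep pace as new builds arrive.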
Targeted exploratory testing is the perfect complement to scripted testing. It requires some creative thinking and some freedom for the tester, but it can be a great way of emulating an end user and ensuring that specific features and functions actually deliver what they're supposed to. Properly recorded by good cloud-based testing tools, the data can be used to provide clarity for developers trying to fix problems, and it can serve as the basis of scripted testing or even automated tests in the future.
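The conversion from a recorded exploratory session to a scripted test can be sketched very simply: the captured steps become data, and a replay function asserts each expected outcome. Everything below is hypothetical; the tiny Enrollment class merely stands in for the real application.

```python
# Sketch of turning a recorded exploratory session into a repeatable
# scripted test. The Enrollment class is a hypothetical stand-in for
# the application under test.
class Enrollment:
    def __init__(self):
        self.state = "start"

    def submit_application(self):
        self.state = "submitted"
        return "submitted"

    def verify_identity(self):
        self.state = "verified"
        return "verified"

# Steps captured during an exploratory session: (action, expected result).
recorded_session = [
    ("submit_application", "submitted"),
    ("verify_identity", "verified"),
]

def replay(session):
    """Replay recorded steps against a fresh instance, asserting each
    expected result; returns the final application state."""
    app = Enrollment()
    for action, expected in session:
        result = getattr(app, action)()
        assert result == expected, f"{action}: got {result!r}"
    return app.state

print(replay(recorded_session))  # → verified
```

The same recorded data can later seed a fully automated suite, which is what makes good session recording from the testing tools so valuable.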
The ultimate aim is traceability, usability, and transparency.
If this data is gathered then it becomes easier to apply root cause analysis at a later date and discover where things went wrong. Remember that the earlier you can catch and fix the defect, the cheaper and easier it is to do. Identifying the root causes of the problems with the HealthCare.gov website requires an objective analysis of the original requirements, the documentation, the code implementation and integration, the test planning, and the test cycles. Understanding what went wrong through this process could ensure that the same mistakes are not made again in the future.
Knowing When to Pull the Trigger
QA departments are not the gatekeepers for projects: business decisions will always trump everything else, and the pressure to deliver ensures that every project launches with some defects in it. But you ignore those defects at your peril. If the testers had been consulted about the state of the website and the back end before launch, you can bet they would have pointed out that it wasn't ready for prime time. A one- or two-month delay would undoubtedly have been greeted with some alarm and criticism, but it would have caused far less damaging PR than releasing an unfinished and potentially insecure product.