A few years ago (2007), while I was consulting with the Centers for Disease Control and Prevention (CDC), the agency elected to revise and update its website, CDC.gov. One of its primary goals for the redesign was to optimize the usability of the homepage and some of the second- and third-level pages.
We organized a six-person user experience team and created a plan to ensure that the major usability activities would be carried out appropriately. Over a six-month period, we conducted the following:
- Conducting a review of past usability studies on the current CDC.gov site
- Interviewing users, stakeholders, partners, and web staff
- Conducting detailed analyses of web, search, and call logs
- Analyzing the user survey data from the American Customer Satisfaction Index
- Surveying the ideas and attitudes of CDC leadership, employees, and web staff
- Conducting a card-sort activity
- Conducting parallel design sessions
- Producing a series of wireframes
- Creating graphically-oriented prototypes
- Conducting several usability testing and evaluation activities
This blog article focuses only on the last of these activities, usability testing, particularly as it related to revising the CDC.gov homepage. The testing included a baseline test, first-click tests, and final prototype tests. Overall, 170 participants were tested with more than 100 task scenarios across three major usability tests. These tests ultimately showed a 26% improvement in success rate and a 70% improvement in satisfaction scores.
We used Bailey’s model of usability testing levels to help guide our decisions about the types of usability tests to perform. This model proposes five usability testing levels:
- Level 1 – Traditional inspection evaluations, e.g., heuristic evaluations, expert reviews, etc.
- Level 2 – Algorithmic reviews with scenarios
- Level 3 – Usability tests that are moderately controlled and involve a relatively small number of test participants (~8)
- Level 4 – Usability tests that are tightly controlled, but use only enough participants to make weak inferences to the population, and
- Level 5 – Usability tests that are very tightly controlled and use a sufficient number of participants to make strong inferences to the population.
Because of the well-documented weaknesses of inspection evaluations, the User Experience Team elected not to use any Level 1 testing, and to do Level 2 testing only on the final, revised homepage. The final algorithmic evaluation was based on the usability guidelines in the book Research-Based Web Design & Usability Guidelines.
The existing CDC.gov homepage had remained largely unchanged since February 2003 (about four years). During that time, many surveys, studies, and tests had recommended small changes to the homepage. Because we were interested in collecting data from which we could make fairly strong inferences to the user population, we did few Level 3 tests; most of our usability tests were either Level 4 or Level 5.
User Experience Team: Janice Nall, Bob Bailey, Cari Wolfson, Catherine Jamal, Colleen Jones, Nick Sabadosh
- Bailey, R.W., "Applying usability metrics," Web Managers University, May 16, 2006.
- Bailey, R.W., "Comparing heuristic evaluation and performance testing," User Interface Update, 2004.
- Koyani, S.J., Bailey, R.W., and Nall, J.R., Research-Based Web Design & Usability Guidelines, U.S. Government Printing Office, 2006.