In this post, I continue with part two of my three-part series on improving the quality of eLearning and avoiding mistakes that can lead to embarrassing and sometimes reputation-damaging review cycles. In the previous post, I discussed the importance of quality and introduced the concept of adapting software test scripts to eLearning projects. Now we will move from planning to designing our testing process and test document.
The first thing to understand about test scripts is that they should serve the quality assurance process. In other words, the QA script should not be a stand-alone product; it should be a reflection and confirmation of standards and other quality checks performed further upstream in your workflow. For example, developers should carefully check functionality as they build by using the preview functions in the authoring tool, and videos should be checked for playback outside the authoring or hosting environment.
As you design the QA script, the focus is on course functionality, standards, and playback. The script should not be an evaluation of instructional design or factual accuracy; those domains belong to the instructional design process and the SME review, respectively. If you allow design work to mix with testing, you run the risk of the process becoming complex and iterative. Iteration can be helpful during design, but it is not helpful during testing. The goal of testing should be narrowly defined: to ensure that the product deployed to learners accurately represents the design document, meets organizational standards, and functions as intended.
Allow me to repeat a point from my previous post: you can’t execute a proper test without a complete design document. The design document defines the benchmark for testing; without one, it is impossible to know whether the course was developed as designed and should pass a given criterion. I’ll extend this point just a bit: you also can’t execute a proper test without a standards document. Yes, we all know and love standards documents, right? Well, maybe not, but they play an important role during testing, and team members who perform testing should be familiar with your eLearning standards.
The QA script document is a final, comprehensive verification of course functionality in the target deployment environment. In other words, our test process should approximate the end-to-end learner experience.
Now let’s get into an overview of how to design a test script for your courses. The first step is to assemble and take stock of your input documents: your design document or storyboard, your script (if it is stored separately from your storyboard), and finally, your course standards. I suggest building your test script in a collaborative spreadsheet tool, such as Google Sheets or a shared Excel workbook, so that multiple team members can work on the document concurrently. In our workflow, the QA script is a tab in the larger design document workbook.
We organize our test script with a column for each course segment on the X axis. What is a course segment? The simple answer is: that’s up to you. It could be a slide, a topic, or whatever division of content makes sense for your design. One caution: your definition of “segment” should be granular, covering no more than 3-5 minutes of content. If you define your segments at the lesson level, say 5-10 minutes of content, it will be difficult to check for and document specific problems.
On the Y axis, we document testing categories along with the respective test items to be evaluated. I find that categories help the testing analyst stay focused on a specific aspect of the course design.
Where the columns and rows intersect, the testing analyst records either “pass” or “fail.” The intersection is not the place for long explanations of a problem; we add a separate column for notes, if needed. If your test script is specific and your segments are narrowly defined, detailed notes may not be necessary.
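To make the grid concrete, here is a minimal sketch of how that structure could be generated as a spreadsheet-ready CSV. The segment names, test items, and file name are hypothetical placeholders, not part of our actual workflow:

```python
# A minimal sketch of the test-script grid: course segments across the top,
# test items down the side, one pass/fail cell per intersection, plus a
# notes column. Segment and item names are hypothetical placeholders.
import csv

segments = ["Welcome", "Topic 1", "Topic 2", "Knowledge Check"]
test_items = [
    ("Text on screen", "Body text respects the 30-pixel page margin"),
    ("Graphics", "Colors match the approved palette"),
    ("Audio", "Narration is synchronized with on-screen highlights"),
]

with open("qa_script.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Category", "Test item"] + segments + ["Notes"])
    for category, item in test_items:
        # Testing analysts record "pass" or "fail" in each segment cell.
        writer.writerow([category, item] + [""] * len(segments) + [""])
```

Opening the resulting file in Google Sheets or Excel gives you the same columns-by-segments, rows-by-items grid described above.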
Now that you understand the basic structure of the test script, let’s talk about how to determine categories and specific test items. A common question is, “What items should I test?” To answer it, lean on your experience with previous projects. Ask yourself:
- What problems show up again and again during reviews?
- Are there areas of inconsistency in your projects?
- Where are your projects frequently out of compliance with company standards?
- What problems cause users to contact support?
- Are there common LMS reporting problems that result from content publish settings?
These are just a few questions to get you started, but I suggest you don’t compile the list in isolation. Ask the instructional designers, developers, and graphics professionals working on your team. I bet each will have a different perspective on the items that belong in your script.
Also, don’t stop with your team. Use the testing process as an opportunity to engage and brainstorm with your extended team. For example, include a representative from your support organization and LMS team. Each party brings a new perspective to the problem.
Okay, you have met with your team and considered the pain points on previous projects. You may end up with a list that is larger than you expected. If you are the team leader, you should narrow the list and group items into categories.
Let me share a few examples of categories we use on our team:
- “Text on screen” covers common text issues such as alignment, capitalization, bullets, indentation, and margins.
- “Graphics” covers positioning, color palette, compliance with the design document, highlights, and synchronization.
- “Video” covers playback, transitions, positioning, and audio level.
- “Audio” covers levels, quality, and synchronization.
- “Interface Elements” covers function, rollover states, cursor behavior, and sounds associated with buttons.
- “Assessment standards” addresses stem text, answer types, remediation, answer behavior, and logic.
This is just a quick sample of categories and the types of items included in each. Your script will likely have different categories and test items; there really is no one correct design for a test script. Whatever your categories and test items, make sure they are specific. “Text is correct” will not cut it. A better test item is “stem text is fewer than 50 words and 5 lines” or “graphics have a 30-pixel margin.”
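If it helps to see specificity in practice, here is one hypothetical way to capture categories and their measurable test items as a simple data structure. Every category name, item, and threshold below is illustrative, not a standard you should adopt as-is:

```python
# Hypothetical master list: each category maps to specific, checkable items.
# Vague items like "text is correct" are replaced with measurable statements.
MASTER_TEST_ITEMS = {
    "Text on screen": [
        "Bullet text is left-aligned and uses sentence-case capitalization",
        "Body text respects the 30-pixel page margin",
    ],
    "Audio": [
        "Narration level is consistent across segments",
        "Audio is synchronized with on-screen highlights",
    ],
    "Assessment standards": [
        "Stem text is fewer than 50 words and 5 lines",
        "Incorrect answers trigger the remediation defined in the design document",
    ],
}
```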
I suggest you create a master test script, then remove items that don’t apply to a given project and add items that are unique to it. For example, if you have designed a complex interaction, add a category for that interaction with all of the test items relevant to its design.
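Continuing the sketch above, tailoring a master list for one project might look like this. The dropped category and the drag-and-drop items are, again, hypothetical examples:

```python
# Start from a small master list (abbreviated for the sketch).
master = {
    "Text on screen": ["Body text respects the 30-pixel page margin"],
    "Video": ["Video plays to completion outside the authoring tool"],
    "Assessment standards": ["Stem text is fewer than 50 words and 5 lines"],
}

# Remove categories that don't apply (this course has no video) ...
project = {cat: items for cat, items in master.items() if cat != "Video"}

# ... and add items unique to this project's custom interaction.
project["Drag-and-drop interaction"] = [
    "Each draggable item snaps to its correct target",
    "Remediation appears after the second incorrect attempt, per the design document",
]
```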
That wraps up my overview of designing a test script, but don’t worry: at the end of this series, I will share my notes on eLearning QA and testing, including a template pre-populated with a few categories and test items.
Be sure to join me for my next post. In the third and final post of this series, I will cover a few “best practices” for executing your test script on a project.
Until then, have a great week and keep stretching your skills!