In this post, I conclude my 3-part series on improving the quality of eLearning and avoiding mistakes that can lead to embarrassing and sometimes reputation-diminishing review cycles. In the previous 2 posts, I discussed the importance of quality and introduced the concept of adapting software test scripts to eLearning projects. I also explained how to design your testing process and test script. Now we will move from Design to Execution, where we will cover best practices for using your test script to structure and organize the testing process. Make sure to stick around until the end so you can download a summary of this series that includes notes and a test script template.
Before we get started, let’s recap the processes and documents you should have ready before you begin testing. As we discussed previously, your design document is critical to the testing process. I also explained that good quality control requires well-established and documented standards. Last, and certainly not least, you need a well-defined and specific test script.
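To make the "well-defined and specific test script" concrete, here is a minimal sketch of what one script row might look like as structured data. The column names (`step`, `screen`, `action`, `expected_result`, and so on) are hypothetical placeholders, not part of the template from this series; adapt them to your own documented standards.

```python
import csv
import io

# Hypothetical columns for a course test script -- substitute your own template's fields.
FIELDS = ["step", "screen", "action", "expected_result", "pass_fail", "notes"]

rows = [
    {"step": "1.1", "screen": "Title", "action": "Launch course from the LMS",
     "expected_result": "Title screen loads; narration plays", "pass_fail": "", "notes": ""},
    {"step": "1.2", "screen": "Menu", "action": "Click each menu item",
     "expected_result": "Correct module opens", "pass_fail": "", "notes": ""},
]

# Write the script as CSV so the testing analyst can fill in pass/fail and notes.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The key property is that every row states an observable expected result, so "pass" or "fail" is a factual judgment rather than an opinion.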
There’s just one more very important thing you need, and that’s a completed course. I know it seems obvious, but note that I say “completed” course. My point is that you don’t want to begin the testing process with a course that you know has serious gaps and defects. Doing so will cost time and tempt your team to slip back into design mode. Of course, there is a judgement call to be made, most likely by the team leader or project manager, about the level of completion necessary before testing begins. I prefer that testing be completed BEFORE the project is submitted to the project sponsor or company leadership.
You may be thinking: should testing be completed before sharing the course with the SME? My answer is, “It depends.” On some projects, the SME works as an integral part of the team and may even be asked to execute testing. On other projects, the SME is the project sponsor and works more as an advisor to the project. The Project Manager should make the call about exactly when in the process testing will begin.
Okay, we have recapped the input documents and covered the issue of “when” testing should be completed. Now let’s talk about “who.” If you read or listened to my previous posts, you already know that the testing analyst should not be the developer who created the course. I also avoid using the Instructional Designer who designed the course. Instead of identifying a specific role that should execute the test, it may be better for me to list the ideal “traits” of the person acting as the testing analyst. A testing analyst is a person who is:
- comfortable and competent reading and interpreting design documents
- familiar with project standards
- detail oriented
- focused, even when dealing with tedious tasks.
…and did I mention “detail oriented?” That’s a really important one. Let’s be honest, testing is not the most enjoyable part of the development process. It often gets assigned to the most junior member of the development team. This may be a mistake if the junior team member is not qualified for the task. Whoever you choose, make sure they have the right traits for the job and they are trained on the overall testing and documentation processes.
We now know when and who, so let’s discuss “on what.” In other words, what is the appropriate testing platform? Previously I used the term “target deployment environment.” Sounds pretty fancy, right? Not exactly. The test environment should be at the low end of the computers your learners actually use. Ideally, you have already defined minimum specifications for your courseware. If you have, refer to the minimum requirements document to determine the low end, then use a machine and browser that resemble the minimum configuration.
Ten years ago, testing on a low-end machine running Internet Explorer version 6 was all you needed. In the modern computing environment, the problem is more complicated. Ideally you need to determine the operating systems, browsers and devices you will support, then test using these devices. If you don’t have the resources to cover all possible platforms (and really, who does?), I suggest picking the most common desktop and mobile systems that your courseware supports. You may want to have your testing analyst use a desktop and mobile platform as they execute the script. Alternatively, you can complete a separate test cycle for each platform.
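One practical way to pin down "the most common desktop and mobile systems that your courseware supports" is to enumerate the realistic combinations and let that list drive your test cycles. The sketch below is illustrative only; the operating systems, browsers, and the exclusion list are hypothetical examples, not recommendations from this series.

```python
from itertools import product

# Hypothetical support matrix -- replace with the systems your learners actually use.
operating_systems = ["Windows 11", "macOS 14"]
browsers = ["Chrome", "Edge", "Safari"]
form_factors = ["desktop", "mobile"]

# Not every combination exists in the wild (e.g., Safari does not run on Windows),
# so filter the raw cross-product down to realistic test targets.
invalid = {("Windows 11", "Safari")}

test_targets = [
    (os_name, browser, form)
    for os_name, browser, form in product(operating_systems, browsers, form_factors)
    if (os_name, browser) not in invalid
]

# Each remaining tuple becomes one pass through the test script,
# or one column on a combined results sheet.
for target in test_targets:
    print(target)
```

Even a small matrix like this makes the trade-off visible: if ten combinations is too many to test, you can prune the list deliberately instead of discovering the gap after deployment.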
A common mistake is testing on your development computers. In our environment, we generally develop with high-end Macs on a high-speed network. Our customers, however, generally use 2 to 3-year-old PCs. More than once, a developer has concluded that a design was working well, only to learn that the functionality did not work on the target platform. Therefore, we keep a variety of older PCs on hand for testing.
Before we conclude our discussion of the testing environment, let’s not forget to consider how the test course is hosted. A common mistake is to test the course outside of the hosting environment used by learners. For example, let’s say you want to test a new course but you don’t want to bother with loading the course into your corporate LMS. You decide to conduct testing by publishing the course for web, then loading it on a web server you control. Testing this way will certainly simplify the process, but it is not a true test of your course. To fully test your course, you really have no choice but to host it in the same or a similar environment to the one used for deployment. When you don’t test on the target LMS, there are too many variables that may mislead or invalidate your results.
If testing on your target LMS is not possible before deployment, you should use an alternative LMS for your testing. This approach will not always uncover the full range of issues you would find on your internal LMS. It will, however, be a closer approximation than hosting on a web server without an LMS.
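One cheap sanity check you can automate before any LMS upload: confirm the file you are about to load is actually an LMS-ready package rather than a plain "publish for web" export. Assuming your courses are packaged as SCORM zips (which require an `imsmanifest.xml` at the package root), a minimal pre-flight check might look like this. The function name is my own invention for illustration.

```python
import zipfile

def looks_like_scorm_package(path):
    """Quick pre-flight check before uploading a course to an LMS.

    A SCORM package must contain an imsmanifest.xml at the root of the
    zip; a plain web publish usually does not. This only checks that the
    manifest file exists -- it does not validate the manifest contents.
    """
    try:
        with zipfile.ZipFile(path) as pkg:
            return "imsmanifest.xml" in pkg.namelist()
    except zipfile.BadZipFile:
        return False
```

This does not replace testing on the target LMS; it only catches the most common upload mistake before you burn a test cycle on the wrong artifact.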
Okay, let’s assume you followed all the steps and successfully completed your test script. What’s next? I recommend that you hold a meeting with your team to review and discuss the results. Be sure to include representatives from Instructional Design, Development, and Media. Review the defects and come up with a plan for addressing the issues. Don’t allow this meeting to become a platform for recrimination! There are a thousand reasons courses have defects, and many have nothing to do with the person who implemented the design. I find that many defects are the result of unclear design documents, incomplete standards, design interpretation, and other process issues. This meeting is a great opportunity to identify where the process got off track and ensure corrections are made for the next course.
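To keep the review meeting focused on process rather than people, it helps to log each defect with a root-cause tag and tally the tags before the meeting. The record fields and cause labels below are hypothetical examples of this idea, not a prescribed taxonomy.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    test_step: str     # which test script step surfaced the issue
    description: str
    root_cause: str    # e.g., "unclear design doc", "standards gap", "build error"

# Hypothetical results from one test cycle.
defects = [
    Defect("3.2", "Button label differs from storyboard", "unclear design doc"),
    Defect("4.1", "Audio missing on slide 7", "build error"),
    Defect("5.3", "Body font off-standard", "standards gap"),
    Defect("6.1", "Quiz feedback wording ambiguous", "unclear design doc"),
]

# Tally root causes so the discussion centers on fixing the process,
# not on assigning blame.
by_cause = Counter(d.root_cause for d in defects)
for cause, count in by_cause.most_common():
    print(f"{cause}: {count}")
```

A summary like this turns "who broke it?" into "which upstream document or standard needs work?", which is exactly the outcome the review meeting should produce.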
We have reached the end of this series on improving the quality of your courses by planning, designing and implementing a testing process. Quality assurance and testing are often forgotten, but they are an important part of creating professional courseware and eLearning products. Thank you for joining me for this series and taking the first step to implement a quality assurance process or improve your current process.
Be sure to download the summary of this series that includes notes and a useful test script template to get you started.
Until next time, have a great week and keep stretching your skills!