Today we look at how to test and evaluate a multimedia project properly.
Here is some information about testing and evaluation.
What should you evaluate?
Students, instruction, process
Evaluating your multimedia educational materials is ultimately a matter of engaging not just with the question "How well did the students learn?" but also with the question "How much of the students' learning is due to the multimedia educational materials you've created?" Newby et al distinguish evaluating the students from evaluating the instruction (in our case, the instruction is the multimedia materials). Techniques for evaluating student performance are a large topic and beyond the scope of this course. You can evaluate how well students learn by measuring changes in knowledge through assessment (Level 2 in Lee & Owen's Table 25.1) or by evaluating how their real-world performance has been affected (Level 3 in Lee & Owen's Table 25.1, also p. 193). Another area that Newby et al touch on is evaluating how well students use multimedia to generate material (such as portfolios and blogs) that you can assess. This is a very interesting and very relevant topic which is also beyond the scope of our course; see Newby et al's reference list on page 253 as a starting point for assessing student-generated multimedia.

You might also need to evaluate the development process itself. Take the case of a workplace training intervention. After you have deployed the instruction, evaluated student performance, made predictions about or actually observed changes in worker performance, and kept track of all the development costs, you might evaluate the return on investment (ROI) for the company. (Evaluating ROI is Level 4 in Lee & Owen's Table 25.1.)
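As a rough illustration of what an ROI evaluation involves (the figures below are made up, not taken from Lee & Owen's table), ROI is commonly expressed as the net benefit of the intervention relative to its cost:

ROI = (benefit - cost) / cost × 100%

So if a training package cost $10,000 to develop and deploy, and the benefit from improved worker performance is valued at $15,000, the ROI is (15,000 - 10,000) / 10,000 = 50%.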
For this course, we are interested in evaluating the multimedia educational material itself, not the students or the process, so let's look at some ways in which we can do that.
When should you evaluate?
Techniques for evaluating usefulness
Techniques for assessing usefulness are discussed by Newby et al; they include the following:

- Use pre- and post-tests to determine the effectiveness of the materials (see the sketch after this list)
- Use student tryouts to determine effectiveness, efficiency, and appeal
- Use direct observation to determine effectiveness and efficiency
- Talk to students to determine appeal and effectiveness
- Ask a colleague to review the materials
- Ask a colleague to observe how the learners use the tool during deployment
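As a minimal sketch of the first technique, the snippet below computes the average score gain between a pre-test and a post-test given to the same students. The scores are hypothetical placeholders, not data from Newby et al; any real evaluation would use your own class results.

```python
# Minimal sketch: compare pre- and post-test scores for the same students.
# The scores below are made-up placeholders, not data from the readings.

pre_scores = [55, 60, 48, 72, 65]    # percentage scores before using the materials
post_scores = [70, 78, 66, 85, 80]   # percentage scores after using the materials

# Gain per student, in percentage points.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
average_gain = sum(gains) / len(gains)

print(f"Average gain: {average_gain:.1f} percentage points")
```

A positive average gain suggests the materials helped, but on its own it cannot separate the effect of the materials from other influences on learning, which is exactly the distinction raised at the start of this section.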