Two 9th-graders, Joshua Li and Martin Schneider, are investigating whether competition improves the effectiveness of educational video games. They developed two versions of a math game aimed at 4th-grade students (one with a virtual competitor, one without), then conducted a field study to test their hypothesis.
As a result of that initial study, they became Finalists in the recent Google Global Science Fair. They now want to continue their research and are seeking elementary schools to help test their hypothesis and game.
Please consider becoming a test site for this worthy study. Inquiries can be directed to [email protected].
Project Summary
Educational video games have emerged as a new medium for teaching core concepts and supplementing existing curricula. These games typically try to engage students with appealing graphics, sound effects, and similar features. However, studies in real-world settings have shown that competing against another person increases motivation and performance. The same principle might apply to educational video games.
We hypothesized that if a human-like virtual competitor were incorporated into a math game, students would become more motivated to answer more questions and might even work harder to answer them correctly. To test this hypothesis, we developed two versions of a game. The versions were identical, except that one had a virtual competitor named Bob who encouraged and competed with the student, and the other did not.
In our first round of testing, we conducted a study with 62 fourth-graders. Each student played two 20-minute sessions, once with Bob and once without. Our results showed that Bob's presence led to a statistically significant increase in the number of questions students answered (on average 3.8 more questions per student, p = 0.03). Further, Bob's effect was strongest (5.7 more questions per student, p = 0.009) among mid-level performing students.
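For readers curious about the statistics: with each student playing both versions, a within-subjects comparison such as a paired t-test is the natural fit. The summary above doesn't state which test we ran, so the sketch below is only an illustration, and the scores in it are invented placeholders rather than data from the study.

    # Illustrative paired (within-subjects) comparison in Python.
    # The scores below are invented placeholders, NOT data from the study.
    from scipy import stats

    # Questions answered by the same six hypothetical students in each condition
    with_bob    = [24, 31, 19, 27, 22, 30]
    without_bob = [20, 28, 17, 22, 21, 25]

    # Paired t-test: each student serves as his or her own control
    t_stat, p_value = stats.ttest_rel(with_bob, without_bob)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")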
These results suggest that a virtual competitor can increase students' interest in online learning. However, within the 20-minute sessions, we did not detect a significant difference in the percentage of correct answers. Still, the results are encouraging: increased interest may lead to more practice and, we hope, better math performance in the long run.
We hope to conduct a second round of testing to help determine the game's effectiveness in improving performance. We'd like to run it in a format similar to our first round, but with a larger pool of subjects over a longer period. This would let us analyze the long-term effects of Bob on the math performance of elementary school students, which we couldn't do with our original test group.
The experiment could be re-run in the same way as before with extended testing times. Alternatively, we could give students free rein with their accounts, allowing them to play both at home and at school. However, that structure would require a larger number of students, since each student could be assigned to only one version (a between-subjects rather than within-subjects design). Essentially, we need more data, and we'll tailor the precise details to whatever fits each testing group.
We don't have to be physically present for any testing. The only on-site requirements are computers to play the game and, ideally, a supervisor who can resolve basic technical issues (such as helping students log in). Our game collects all pertinent data and automatically forwards it to us, and it can stop a student from playing (or stop collecting data) after a set amount of time. However, to measure performance, we need to ensure that students don't cheat or use a calculator or other aid, so the supervisor would need to keep an eye on that as well.
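To give a sense of how little the game asks of the host site, here is a minimal sketch of how a timed, self-reporting session might work. The endpoint URL, field names, and function names are hypothetical stand-ins, not the game's actual implementation.

    # Hypothetical sketch of a timed session that reports its results automatically.
    # The URL and field names are invented placeholders, not the game's real API.
    import time
    import requests

    SESSION_LIMIT = 20 * 60  # 20 minutes, in seconds

    def run_session(student_id, ask_question):
        """Run one capped session; ask_question() returns True on a correct answer."""
        start = time.time()
        answered = correct = 0
        while time.time() - start < SESSION_LIMIT:
            answered += 1
            if ask_question():
                correct += 1
        # Forward the session data to the researchers' server
        requests.post("https://example.org/report",  # placeholder endpoint
                      json={"student": student_id,
                            "answered": answered,
                            "correct": correct})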
Given the above, the specific location isn't that important. Ideally, subjects would be elementary school students, as the game is aimed at younger children. Subjects need a basic internet connection, but it doesn't need to be particularly fast.