No one questions assessment in gaming. Yet everyone has an opinion about educational assessment.
It seems like every day there is a new piece of research on assessment. The how, why, when, and where are all debated ad nauseam. The only thing we seem to agree on is the ‘who’: most individuals agree that we should be assessing students to determine what they know. However, I suspect that if I looked deep enough, I could find research contradicting even that bit of logic. Therein may lie the real problem: it may not be that individual educators are unclear about what purpose assessment serves, but that assessment has become caught up in institutional power struggles over policy and funding. As a professional educator, I find it embarrassing that we lack a common, consistent method for describing whether a student is learning what they are supposed to be learning. Likewise, as any good researcher will tell you, a consistent measuring tool is what allows you to show longitudinal data. When assessment models change every couple of years, you might as well throw out the old data, as it doesn’t translate into the new metrics.
Agreeing on what knowledge looks like and how to issue a consistent valuation of that knowledge seems fairly straightforward, as schools have taught many of the same subjects for decades. However, as soon as this discussion gets anywhere near touchy areas, such as using student assessment results as a proxy for teacher evaluation, using assessment results to determine funding allocation, or using assessments as gatekeepers for student entry into programs, logic and agreement seem to fly out the window. In fact, I challenge you to read any article about assessment and analyze it for the presence of the ten logical fallacies often taught in debate classes. Other conversational control strategies, such as the “Red Herring,” quickly become apparent. See the recent article over at Inside Higher Ed that served as the starting point for this rant: https://www.insidehighered.com/news/2019/04/17/advocates-student-learning-assessment-say-its-time-different-approach
Conversely, during the game design process, conversations about victory and failure conditions exist from the start. In game design, failure is usually part of the pathway to success. If you don’t know what conditions a player needs to achieve to win, how can you design the process surrounding those conditions? Likewise, game designers need to keep in mind what failure should look like. Are there actions that should cause instant failure, or should there merely be a setback? It’s almost as if the game designers are building a rubric for scoring a player’s ability to understand the game’s strategy.
When a player beats a well-designed game, no one comes back and debates the validity of the player’s understanding of that game. The assessment comes from the demonstrated mastery of the environment set by the designer. Game designers can collect that data and measure how one player’s score relates to other players’ attempts.
Yet somehow, when a student succeeds in a well-designed course, everyone has an opinion about what that success is actually measuring. Likewise, when a student fails in that same course, the reasons given for failure are often different, and often unrelated to what is being measured. Why is it that when a game designer designs something, it is universally accepted what their scores measure, and yet when an educator designs a course, everything is up for discussion? What is the real issue being argued in education?
It seems that the longer we spend arguing in the game of assessment, the more students stand to lose. How can we support student success if we can’t even agree on the assessment that defines success?
-Dr. Andrew Peterson, Coordinator of Instructional Technology