By Zubia Mughal

Agile Development: What You Didn’t Know About Responsive Evaluation

Have you been pacing up and down the hallway while reviewers fill out their evaluation forms for your eLearning program?

A common way to measure the success of an instructional design model is by reviewing the gold version of your eLearning project: the final version, where your product is supposedly “100%” ready to be rolled out for learning consumption.


During this phase of the instructional design process, do you find yourself anxious and stressed out? Butterflies in the belly? Are you thinking along these lines: Will the program be acceptable to all stakeholders? Hopefully there are no use case errors. Hopefully the special case prototype you spent literally a day designing works well.




Ok, if this is you, then this article is definitely for you.


If you have been using a linear instructional design model, akin to the Waterfall Model of product design (hint: ADDIE and its family), then your plight at the gold rollout stage is inevitable.


What we (myself included) have been missing is the need to review on the fly. Think about your beautiful dream house. You approved the construction designs, and now the house is ready, standing in front of you. Do you test the house for weather resistance now? Would you test whether the house can withstand floods and thunderstorms and keep you safe? Sounds absurd, doesn’t it?


Well, this is how we have been dealing with eLearning (or instructor-led) programs in our ID projects. We build the whole program and then test it at the end. We discover a sizable number of errors (blunders?) and dutifully take notes on how to fix them.


This is what I call the 11th-hour slog! By then, most of your team members have already moved on to the next project.


Sadly, it’s up to you to recall all the prototypes and use cases and determine which ones are related and which ones aren’t.


That is, if you prototyped at all…


My apologies for walking you through this disturbing scenario, but I am sure we have all, at some point in our careers, gone through similar catastrophes. So how does an agile model like SAM overcome this shortcoming?


SAM, or the Successive Approximation Model, allows for evaluation within the design phase rather than toward the end. This is made possible by reviewing prototypes and iterative deliverables. In education jargon, this is known as formative evaluation, as opposed to summative evaluation: the grand, final evaluation at the very end. Correcting things as you go sounds much better than anticipating, or sometimes dreading, the end!

Iterative evaluation also has a great impact on the quality and effectiveness of your learning materials. SAM allows you to test, measure, and improve at each step of the instructional design process.

This also means less worry, fewer setbacks, and steady progress toward the end.


As an old faithful of the ADDIE Model, I am by no means advocating SAM in this article. However, after experiencing the heartache of pushing out the alpha and beta stages of my eLearning product, I have decided to mix the two for a better design strategy.

