The challenge: Teams of reviewers contracted to review science instructional materials struggled to evaluate consistently whether the materials were designed around a phenomenon to explain, a problem to solve, or neither.
The process: The first step was to identify when and why review teams disagreed and what criteria they used as they tried to reach consensus on phenomenon and problem identification. The next step was to consult with subject matter experts (SMEs) to establish criteria that could be applied consistently during both independent work and consensus discussions.
The solution: Once the criteria were established, I determined that an interactive decision tree would best increase efficiency and reduce errors during reviews, and I built it in Storyline 360. Reviewers were trained to interpret the criteria in the decision tree the same way and then used the tool during their independent work, which resulted in fewer disagreements when the team met to reach consensus. When disagreements did arise during consensus scoring, reviewers weighed their evidence against the criteria in the decision tree to reach consensus scores more quickly and accurately.
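To illustrate the kind of branching logic a tool like this encodes, here is a minimal sketch in Python. The yes/no questions and labels are hypothetical placeholders for illustration only, not the review team's actual criteria, which were built interactively in Storyline 360 rather than in code.

```python
def classify(anchored_in_observable_event: bool,
             students_explain_why: bool,
             students_design_solution: bool) -> str:
    """Classify a unit of instructional material.

    The three yes/no criteria are hypothetical stand-ins for the
    kinds of questions the real decision tree might ask.
    """
    # Branch 1: the unit centers on an observable event that
    # students work to explain -> phenomenon.
    if anchored_in_observable_event and students_explain_why:
        return "phenomenon to explain"
    # Branch 2: students design or evaluate a solution -> problem.
    if students_design_solution:
        return "problem to solve"
    # Otherwise the material provides neither.
    return "neither"

# Example: a unit anchored in an observable event that students explain.
print(classify(True, True, False))  # -> "phenomenon to explain"
```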
Note: This sample decision tree shows the functionality of the interactive tool used by the review team; however, specific content has been removed and the color scheme has been changed.