"...compared to what?" should provoke us to look for three critical pieces of context:
- Compared to the standard. How does our data look versus an accepted standard? Did we meet that standard?
- Compared to ourselves. What does the historical data say? Does the data show movement from the last time we measured the same thing? What are the trends over time?
- Compared to others. What are the possibilities? How are others doing? This will give us an idea of what we could aspire to do.
Compared to the standard: The expected standard for student proficiency in the subject was 65%. All three schools met the standard, despite a wide margin between the highest and lowest percentages. Without that context, it would be easy to label Schools B and C as under-performing. Now how would you rank school quality? Would that opinion change if the expected standard rises to 85% next year and none of the schools meet it?
Compared to ourselves: Looking back at four years of testing, we get a vastly different picture.
Has your opinion changed on Schools A, B, and C concerning school quality?
Compared to others: How did other schools perform?
When other schools are added to the comparison, it becomes clear that not all showed similar levels of success. Schools A, B, and C have a significantly higher percentage of proficiency than the other schools. School B has shown that performance gains can be made, and may serve as a model for Schools D, E, and F.
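The "compared to others" view amounts to ranking the schools and seeing which stand apart from the group. A minimal sketch, again with hypothetical percentages rather than the post's actual chart data:

```python
# "Compared to others": rank schools and flag those above the group average.
# All percentages are assumed values for illustration only.
schools = {"School A": 92, "School B": 70, "School C": 66,
           "School D": 40, "School E": 35, "School F": 30}

ranked = sorted(schools.items(), key=lambda kv: kv[1], reverse=True)
average = sum(schools.values()) / len(schools)

# Schools above the overall average stand out as possible models for the rest.
above_average = [name for name, pct in ranked if pct > average]
print(above_average)  # Schools A, B, and C
```

Ranking alone doesn't explain *why* some schools do better, but it tells you where to look, which is what this comparison is for.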
This example is meant to show how critical it is to ask for deeper context behind each piece of data and each decision we review. The next time you see a statistic presented by itself, don't forget to ask...compared to what?