Image credit: Garbage Collector by clement127 – CC Licensed on Flickr
There’s an expression in computing – Garbage In, Garbage Out. It doesn’t matter how pretty your graphs are: if the data underneath is garbage, you may as well not have bothered.
In all the conversations I’m seeing around rethinking assessment at the moment, the one thing I’d hoped to see more of is discussion of the validity of the data.
Do the scales that we invent actually match how students learn?
If learning is really invisible, are the measures of performance we’re adopting to gauge it by proxy really the best ones we can have?
The detail we use to describe levels of outcome in a single piece of work comes at the cost of transferability to other work. Is the price of losing ‘trackability’ worth paying for better learning in that specific piece? (I’m inclined to think so, but this is early-days thinking.)
No answers yet, but they might start to emerge in future posts.