Knowledge is one of my favourite topics – analysing what people think and how they know it. I remember a paper I read during my MEd by a researcher who purported to be a firm believer in socio-constructivism and who was very critical of TAs who, in his mind, failed to deliver lessons that appropriately embodied socio-constructivist ideals. Interestingly, though, his methods of data collection and his evaluation of the lessons were all built on a positivist stance. So what he believed explicitly and how he practised teaching and learning implicitly were hugely at odds.
This sticks in my mind because I think it's a pervasive problem. We are trying to rebuild and reframe the assessment model and curriculum in my program, but every barrier we come across seems to stem from people fundamentally believing in the old ways of knowing. Even the biggest advocates for the change cling to old assessment practices (how will we _know_ that students have learned this without a final exam?) and old delivery practices (but, for some information, you do just need to lecture).
I think I'm also guilty of this. I find numbers and stats and data comforting in some ways – measuring how long students spend on module X, or how many attempts they needed for question Y. But I believe more and more that learning is messy and cannot be neatly packaged. Which means assessment needs to be messy too.
I position myself as a knowledge translator – I work with many different groups who all speak different (discipline/community) languages, so I'm often explaining the technical to non-techies or the academic to clinicians. In this role, I'm also a generator of new knowledge: the kind that emerges at the intersection of those communities.