And “21st century learning and teaching” has, arguably, made matters worse.
The big challenge facing ICT leaders in schools is often not providing the facilities, but encouraging teachers to use them – and to use them in a meaningful way. We’ve all seen examples where students are allowed to play on the computer if they’ve finished their “real” work, or where students whose regular teacher is not in school, and for whom no work has been set, get to do the same.
I always have the impression – I know not why – that people who educate their children at home (known as “homeschoolers” in the USA) are somehow not regarded as “proper” teachers. Yet if you think about it, they potentially have much less of a support network than teachers in a school, and less guidance on how to do things. If I am correct in such sweeping assumptions, perhaps there is something the rest of us can learn from them in certain areas? After all, if they have had to figure a lot of things out for themselves – to find out what works and what doesn’t in their particular context – it would be a wasted opportunity not to benefit from that in some way.
A case in point is assessing youngsters’ understanding of ICT. It’s a notoriously difficult thing to do. Without going into a lot of detail now (see this article for more, although it needs some updating), the chief issues are the following:
- Is the assessment valid, i.e. does it measure what it purports to measure? You could be measuring literacy, for instance.
- Is it reliable? That is, if you applied the same test to similar pupils elsewhere, or the same pupils tomorrow, would the results come out more or less the same?
- Are you assessing the pupil’s own work, or a joint effort? How do you know what the pupil has done by themselves?
- The nature of the assessment can itself affect the result. If the pupils have learnt something using technology, testing them with a pencil and paper test is not likely to be appropriate. It will almost certainly yield a different outcome than if you used technology for the assessment. Similarly, if the pupils have been learning through scenario/problem-based learning and are tested through multiple choice, there is likely to be a question about validity.
- Rubrics: I am not sure they are ever really valid, and I think they tend to be either too “locked down” or not objective enough.
To borrow a phrase from Howard Gardner, I want to know if our children are reaching a level of "genuine understanding". In other words, I want to see if they have moved beyond basic mastery of the material towards a deeper, richer level of understanding.
This resonates with me. I sometimes meet people who know a lot of stuff and yet have no clue how to apply their knowledge in a real situation. It’s as if they know, but do not truly understand.
Ashley goes on to say that the usual sort of testing regime had unfortunate side effects:
As a matter of fact, our then second-grader directly associated her daily mood with how well she performed on a given test.
As a consequence,
We take a more organic approach versus a rigid, test-driven curriculum. Assessment is often done through formal discussions, projects, and portfolios.
Have the pupils fared badly in compulsory tests? Quite the opposite. Ashley’s inspiring post (do go to it and read it in its entirety) suggests that if you can drag yourself away from checkboxes, point scores and all the rest of it, assessment can be both enjoyable and reasonably accurate.