ICT & Computing in Education


Job-seeking as a metaphor for ICT assessment

When I saw several hundred people lining up for some sort of job registration recently, I immediately thought of the challenges of assessing pupils’ educational technology capability. A bit of a stretch? Not necessarily.

Assessment – any kind of assessment – is hard. A huge challenge is to make sure that what you think you’re assessing actually is what you’re assessing. For example, you may think you’re assessing pupils’ understanding of the subject, when in effect you’re assessing their ability to read and comprehend the questions.

This is known as the validity problem, and with ICT there is another dimension: that of skills. For instance, when I first had a go at the then Teacher Training Agency’s ICT test for trainee teachers, I failed abysmally. But that reflected the fact that (a) it was an unfamiliar environment and (b) I hadn’t bothered to read the instructions. (Well, that’s my story anyway, and I’m sticking to it!)

In line for a job?

So this photo seems to me a good visual metaphor for the validity issue. Here we have a line of people seeking a job, or to register for one, who were prepared to stand there for at least an hour, I should imagine. (The group of people shown in the photograph was a very small subsection of the whole line.) In a sense, these jobs look likely to be allocated at least partly according to whether you have the time and stamina to line up, and how good you are at selling yourself in a face-to-face situation.

Of course, having to apply for a job in the more traditional way also comes up against the validity problem, because some people who are eminently suited for the job don’t get called for interview because they’re not good at selling themselves in writing.

If we apply this thinking to the assessment of ICT capability, tests aren’t the full answer, partly for the reason already given, and partly because the nature of the test itself is important. That is, it will (or should) differ according to whether you’re assessing a practical skill or theoretical understanding. But neither is group project work a panacea, because then you have the problem of sorting out who has done what, and whether you’re (inadvertently) assessing collaboration skills instead of ICT skills.

I don’t think we can ever get away from the validity issue entirely. All you can do, I think, is to use as many different approaches as possible, in the hope that the advantages of some will outweigh the deficiencies of others. Unfortunately, that also means that, to some extent, unbridled confidence in the efficacy of high-stakes testing is likely to be misplaced.

Any thoughts on this?

--

Subscribe to Computers in Classrooms, the free e-newsletter for people with a professional interest in educational ICT.
