Entries in assessing ICT (12)
How would you rate the apple pie shown in the photo? Yes, I know the first thing that comes to mind is probably “Disgusting!”, because my food presentation skills are not what they ought to be. (Believe it or not, the apple pie depicted has not been eaten.) But how you assess my efforts must depend on what exactly you’re looking for. (I realise this is kind of obvious, but please bear with me.)
In the changing room where I go swimming, there’s a machine that does everything. It measures your height, weight, Body Mass Index (BMI), and about half a dozen other things. I’m surprised it doesn’t measure my waist and shoe size as well. Yet, after using it twice, I have given up on it.
When I saw several hundred people lining up for some sort of job registration recently, I immediately thought of the challenges of assessing pupils’ educational technology capability. A bit of a stretch? Not necessarily.
Here are two questions you might like to use in order to get a discussion going with your colleagues. They are both concerned with assessing ICT capability.
As well as assessing students' understanding at any given time, you will also need to record their progress over time. Here are five suggested ways of doing this.
As I said in a previous article about it, Switched-On ICT is the name of the primary (elementary) scheme of work I've been involved with, as Series Editor. That role has entailed advising on assessing pupils' ICT capability, and helping to make sure that the instructions and assessment opportunities and statements are both consistent and accurate.
The text is engaging, with topics such as We Are Explorers, and makes full use of Web 2.0 and other free applications as well as schools' Learning Platforms. Here is a list of what I see as its strengths:
I must not correct that spelling error. I must ignore that apostrophe. I must -- Ah, good day to you; thank you for joining me. You have caught me reminding myself that the role of Series Editor does not include the usual sort of proof-reading. But I'm getting ahead of myself.
I was very privileged to be able to chair, and listen to, an online talk by Ashley Allain today, and I’d like to pick out just a few of the many incisive points Ashley made.
And “21st century learning and teaching” has, arguably, made matters worse.
It is not enough to teach students how to understand information and communications technology. At some point you are going to have to assess their knowledge and understanding.
Here are five broad suggestions for how to do so effectively.
1. Set open-ended tasks rather than closed tasks
For example, say: “Produce a poster” rather than “Produce a poster using Microsoft Publisher”. By the same token, don't be too prescriptive in what needs to be included. Instructions like "include 2 pieces of clip-art" do not easily lend themselves to assessment of much more than the student's ability to select appropriate illustrations. In fact, such a painting-by-numbers approach may be useful as a training exercise, but ultimately all you can really assess is how good the students are at following instructions.
The open-ended approach can be adapted for use with all age groups, in my experience.
2. Use a problem-solving approach rather than a skills-based approach
This suggestion assumes that the course is a problem-solving one rather than one concerned purely with skills. In some circumstances it will be quite appropriate to ask students, say, to create a spreadsheet consisting of 5 worksheets and involving the use of the IF function. However, for the sorts of courses I'm thinking about, a question that requires problem-solving is much better, for these reasons:
- It does not require there to be one right answer.
- It provides an opportunity to discuss with the student why they opted for a particular solution -- and why they did not choose an obvious alternative.
- It provides scope for out-of-the-box thinking. The trouble with telling students they must (to continue with the example) use an IF function is that it precludes them from coming up with a more creative, and potentially better, solution of their own.
3. Watch what students do in the lesson
The finished product indicates very little about ICT capability. In the absence of other information, it’s the process that counts. The biggest problem with making a statement like this is that teachers and others can sometimes extrapolate from it to suggest that the process is all that matters. This is patently not the case, as a simple example will illustrate:
Your boss asks you to prepare a presentation on the subject of what the school offers by way of ed tech facilities, to be shown to prospective parents at a forthcoming Open Day. You prepare a fantastic presentation, using all the bells and whistles (appropriately, of course), on the topic of what ed tech facilities the school will offer in 5 years' time once an impending refurbishment programme has been completed.
The way you prepared it is sheer brilliance: you create an outline in a word processor, import it into a presentation program in a way that automatically creates slides and bullet points, and all your illustrations are original, created by you and your students.
Given the fact that your presentation is actually irrelevant, or at least not what the boss asked for, how likely is it that your boss will congratulate you on your presentation on the grounds that the way you went about preparing it was exemplary?
4. Avoid the temptation to atomise
Do not disassemble the Level Descriptors in the National Curriculum Programme of Study (in England and Wales), or, indeed, any set of national standards. The English and Welsh ones are intended as holistic descriptions rather than atomistic ones, and it is likely that the same is true of other countries' standards (but you will need to verify that, of course).
5. Assess what students say about the work they have done
You may find it useful to use a standardised approach, but I have always found that you can pick up a lot from a fairly open-ended discussion. It's interesting to explore, for example, whether they understand why they have done something. (An answer along the lines of "Because the teacher told me to" is not good enough.)
This article was first published on 2nd January 2008.
I always have the impression – I know not why – that people who educate their children at home (known as “homeschoolers” in the USA) are somehow not regarded as “proper” teachers. Yet if you think about it, they potentially have much less of a support network than teachers in a school, and less guidance on how to do things. If I am correct in such sweeping assumptions, perhaps there is something the rest of us can learn from them in certain areas? I mean, if they have had to do a lot of figuring things out for themselves, to find out what works and what doesn’t work in their particular context, it would be a wasted opportunity to not benefit from that in some way.
A case in point is assessing youngsters’ understanding of ICT. It’s a notoriously difficult thing to do. Without going into a lot of detail now (see this article for more, although it needs some updating), the chief issues are the following:
- Is the assessment valid, i.e. does it measure what it purports to measure? You could be measuring literacy, for instance.
- Is it reliable? That is, if you applied the same test to similar pupils elsewhere, or the same pupils tomorrow, would the results come out more or less the same?
- Are you assessing the pupil’s own work, or a joint effort? How do you know what the pupil has done by themselves?
- The nature of the assessment can itself affect the result. If the pupils have learnt something using technology, testing them with a pencil and paper test is not likely to be appropriate. It will almost certainly yield a different outcome than if you used technology for the assessment. Similarly, if the pupils have been learning through scenario/problem-based learning and are tested through multiple choice, there is likely to be a question about validity.
- Rubrics: I am not sure they are ever really valid, and I think they tend to be either too "locked down" or not objective enough.
To borrow a phrase from Howard Gardner, I want to know if our children are reaching a level of "genuine understanding". In other words, I want to see if they have moved beyond basic mastery of the material towards a deeper, richer level of understanding.
This resonates with me. I sometimes meet people who know a lot of stuff and yet have no clue how to apply their knowledge in a real situation. It’s as if they know, but do not truly understand.
Ashley goes on to say that the usual sort of testing regime has unfortunate side effects:
As a matter of fact, our then second-grader directly associated her daily mood with how well she performed on a given test.
As a consequence,
We take a more organic approach versus a rigid, test-driven curriculum. Assessment is often done through formal discussions, projects, and portfolios.
Have the pupils fared badly in compulsory tests? Quite the opposite. Ashley's inspiring post (do go and read it in its entirety) suggests that if you can drag yourself away from checkboxes, point scores and all the rest of it, assessment can be both enjoyable and reasonably accurate.