I keep coming across articles and research about schools’ AI policies — or the lack of them. It seems to me that we’ve been here before, with policies about teachers’ and departments’ use of technology, and e-safety. There is a familiar pattern:
Someone realises that everything is a bit ad hoc or completely absent, and expresses the need for someone to draft a policy about it.
That “someone” is either the head of computing, ed tech co-ordinator or similar, or the IT technician.
Alternatively, a template policy is downloaded from the internet, and the school puts its name in the appropriate slot.
The policy is then either distributed to all members of staff, or announced in a school bulletin.
It is then filed in the principal's office, the IT technician's office, or numerous waste paper baskets.
Job done, another box is ticked.
Terry, thinking about, and unimpressed by, the typical education policy formation process.
Yes, I am cynical, but tell me I'm wrong. In one of my jobs as Head of Computing, the Ofsted inspector assigned to me asked me what I thought of the school's Equal Opportunities Policy. I replied that unfortunately I hadn't had a chance to read it, since it had only appeared in my pigeon-hole the previous day. He said nothing, but a wry smile flickered across his face. He knew what was going on; he could see right through the bullshit. Why would I wish to be seen to be party to this rubbish?
When I was working as an ICT advisor, the government of the day announced that schools which didn’t have an e-safety policy would be denied technology-designated funding from the government. Many of the schools in the district I worked in had such a policy; several didn’t.
My boss: We have a pro-forma e-safety policy. If you get those schools without a policy to put their name in it and sign it, we can approve the funding.
Me: Well, surely we should withhold the funding until they actually do something themselves?
My boss: Are you going to be the person explaining to a group of ferocious headteachers why they are not getting the funding?
The next year I was working for the Qualifications and Curriculum Authority, and we had a multi-agency meeting about e-safety in schools. A young man from the Department for Education thumped his fist on the table and declared: We need to make sure that only those schools with an e-safety policy in place get the funding. His older colleague said: And who is going to do that? There are 30,000 schools in the country, and only you and me in the office.
It was at that point I realised that the announcements and initiatives emanating from the Department for Education were all smoke and mirrors: there was nothing behind the curtain! Maybe it has all changed since then, but I would need some convincing.
Back to the issue of schools' AI policies. Imposing one from above never really works, in my experience. Setting up a committee turns into a talking shop where nothing gets done, or is like an elephant giving birth: it happens at a high level, with a lot of noise, and takes two years to produce results.
I’m inclined to the view that what senior leadership teams should do is have what I call a very thin policy, or baseline, and then allow each teacher and area in the school to build on that as they wish.
For instance, you might stipulate that if AI is used in the production of, say, a scheme of work, that fact should be stated somewhere. That would set a good example, I think. The policy document might also state that AI shouldn't be used to produce entire documents which are then passed off as the teacher's own work.
There is a more fundamental issue I think: are teachers actually using AI, and if not, why not?
But so what? Well, I think Andrew Ng, the co-founder of Google Brain, was probably correct when he said, “AI won’t replace people, but maybe people that use AI will replace people that don’t.” That would apply to teachers too, in my opinion.
To be implemented and to mean anything, any school policy must:
meet genuine needs;
be easy to implement.
Meeting genuine needs
By “genuine” I mean real, not doing something in order to tick a box or satisfy some artificial requirement that benefits nobody. When it became feasible to have computers in classrooms, there were some headteachers who would walk around the school to check whether the computers were switched on, as if they thought kids would learn stuff through a process of osmosis. The same thing happened when classrooms started to acquire interactive whiteboards. A speaker from Ofsted went even further: he told the assembled group of advisors on a training day that if computers weren't available, the teacher should just mention them. I asked the then head of ICT at Ofsted if that was indeed the official Ofsted stance, and he shook his head and said, “They have had so much training. No, it is absolutely not Ofsted's official policy.”
So what would count as a genuine need? Something that will make the teacher's job easier, more efficient or more effective. That means that teachers need to know what they could do with AI, and I don't mean giving them a long list of ideas. If something looks overwhelming, it will be ignored.
I’ve included in my newsletter, Digital Education, a few things I’ve tried out, which may give you some ideas for whetting teachers’ appetites. See Midweek Meanderings #2.
Being easy to implement
I don't believe that giving people a long list of prompts, each of which is half a page long, is useful. It's too complicated. Teachers are intelligent people. If a prompt doesn't give them quite what they want, they can refine it through an iterative process. Some AI programs, like Google's NotebookLM and ChatGPT, make that process dead easy by suggesting further avenues of exploration (in effect, prompts) themselves.
There are other things you can do to help teachers implement AI if they can see some potential benefits of doing so, and I will come back to this another time.
