Let’s face it: most of us are fairly obsessed with how much influence we have, and we tend to use social media data as a proxy for measuring it. But here, as in other cases, such an approach is a very blunt instrument, for reasons I’ll explain.
Generally speaking, statistics tend to be flawed, even if they have been scrupulously arrived at. The reason, as George Orwell pointed out in one of his essays, is that they don’t tell the whole story.
Orwell gives the example of statistics concerning spending on central heating in Africa. Without additional information, such as the average temperature, you might be inclined to believe that people in Africa were freezing most of the time.
Darrell Huff, in his classic work How to Lie with Statistics, discusses the ‘little figures that are not there’, such as the number of people who took part in a survey, or the range of the numbers above and below the mean.
In other words, when we're presented with statistics, or chasing after higher numbers (of followers, views, retweets or whatever), we should really look further than just the stats. I'll give you a few examples of the kind of thing I mean.
OK, I admit it: I'd love to have more Twitter followers. The number has stayed around 10.5k for some time. I know what I need to do to gain more, but it's hard to find the time. Am I necessarily missing out though?
For a long time now I've sworn by the 1% rule, which is that out of a given population, only around 1% will do something, on average. The 'something' could be retweeting your tweets, or commenting on your blog posts, or something else. It's not a hard and fast rule, but it has been observed by many people for a long time.
However, that is not 1% of your Twitter followers, but 1% of the fraction of those followers who happen to see your tweet. Here's a concrete example. Tom Hodgkinson, founder of The Idler, mentions in his book Business for Bohemians that one of his tweets was retweeted by Stephen Fry. At the time, Fry had 5 million followers. As a result, he sold 8 more tickets to the event he was running – a 'return' of 0.00016%. He also had 100 more people sign up to his newsletter. Not bad, although that represents only a 0.002% 'return'.
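Those ‘return’ figures are easy to sanity-check. Here is a quick sketch of the arithmetic; the function name is mine, and I'm making the generous assumption that all 5 million followers count as the audience, when in reality only a fraction would have seen the retweet at all:

```python
FOLLOWERS = 5_000_000  # Stephen Fry's follower count at the time

def return_rate(conversions: int, audience: int) -> float:
    """Conversions as a percentage of the audience reached."""
    return conversions / audience * 100

tickets = return_rate(8, FOLLOWERS)        # tickets sold
subscribers = return_rate(100, FOLLOWERS)  # newsletter sign-ups

print(f"Tickets: {tickets:.5f}%")       # 0.00016%
print(f"Subscribers: {subscribers:.3f}%")  # 0.002%
```

Both numbers match the ones quoted above, which shows just how small even a celebrity retweet's measurable effect can be.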
A similar experience was reported by Mark Schaefer in The Content Code. I conclude from this that, welcome as it might be to be noticed by a Twitter celebrity and gain an extra 100 subscribers as a result, it's not a viable strategy to adopt in any kind of deliberate way. What is probably better is engagement and loyalty, which are acquired slowly over a long period. I should be interested to learn, for example, how many of those extra 100 subscribers ever open The Idler's newsletter. I should imagine not very many.
The photo below does not, sadly, show the number of retweets of my last comment on Twitter.
The number of views of videos, especially on Facebook, is also a suspect statistic. On YouTube, watching a video for 30 seconds counts as a view. If you post a video that is 30 minutes long, a 30-second view doesn't sound to me like a great result.
Still, at least watching a video for 30 seconds takes some commitment. On Facebook, watching for just 3 seconds counts as a video view, which is about the length of time it takes for the video to travel from the bottom to the top of your screen as you scroll through your news feed. Indeed, it even counts as a view if you have the audio turned off.
So is it worth bothering?
I suppose that when it comes to influence, what matters is not so much the quantity of one's followers as their quality, as defined in terms of their commitment to you, and their influence. To take an extreme example, if you had only one Twitter follower, and that was the Secretary of State for Education, you would probably have more influence than if you were followed by a few hundred thousand people.
Examples from research
The field of education research is replete with examples of where the statistics probably only tell you part of the story. Again, let me give you a couple of examples, taken from research from the British Educational Suppliers Association (BESA).
First, according to BESA's recent research into ICT spending, "only 33% of secondary schools and 60% of primary schools consider that they are sufficiently equipped with ICT infrastructure and devices."
As I pointed out in a talk I gave at a ResearchEd conference a couple of years ago, newspapers tend to seize on statistics like these to promote the view that all the money that schools have been given (in the past) or spent on ICT must have been wasted, otherwise how else can one explain that the schools are not well-enough equipped?
The answer probably lies in a very simple fact. No ICT or Computing leader is going to say that they have enough resources. There is always more one can buy. Indeed, it is perfectly reasonable to argue that the use of education technology is so embedded in the school that demand for it outstrips supply. In other words, not feeling sufficiently well-equipped could be an indicator of something really good going on.
Another example: in BESA's report on ICT in Schools 2017, the following statements are made:
"Overall, there are approximately 316,500 computers in primary schools (20% of the total), which are more than five years old. Across secondary schools the number of older desktop computers is significant at 28% of the total. Much of the reason for this is the continued high level of older desktop computers."
"In recent years only around a quarter of computers in primary schools were likely to be marked as ineffective. The latest results suggest a sharp increase in computers that are deemed ineffective for use in primary schools. The proportion of ineffective computers is also continuing to expand across secondary schools suggesting pressure to make additional provision redundant." (Where 'ineffective' is defined as being due to condition, age or specification.)
On the face of it, these statistics are quite awful. The computers in schools are getting older and, by implication, less and less fit for purpose. However, while not denying the need for more, and more up-to-date, equipment in schools, a couple of good questions to ask here would be:
- What are these computers being used for?
- Where are they being used?
When I was head of ICT and Computing, there were a few things I did with older equipment, especially when I was fortunate enough (or insistent enough) to be given money with which to buy new computers:
- Offered computers to other departments. For example, the Art department acquired an old computer for the purpose of doing word processing, and I gave the D & T department around 8 computers for their students to take apart and put together again, and on which to learn computer programming.
- Used an old computer as a print server, and another as a stand-alone research station.
- Gave the special educational needs department a few computers on which to run some of their games, which didn't require huge amounts of processing power.
My 'takeaway' from all of these examples is simply this: when we see a set of statistics, whether they pertain to equipment in schools, number of followers or anything else, we need to ask a few questions:
- Are there any 'missing' statistics?
- Are these statistics enough to base any conclusions on? That is, what else would it be good to know?
- How should we interpret the statistics?
The links to books in this article are Amazon affiliate links.
I should like to thank BESA for providing me with the statistics I referred to, and for permission to include them here. Obviously, the conclusions I've drawn from them are my own.
This article was originally published in my newsletter, Digital Education. If you liked it, why not subscribe? The information and sign-up form are here: Digital Education.