DIGITAL EDUCATION SUPPLEMENT: 2017 RETROSPECTIVE ISSN 2049-9663

Recipes as algorithms: food for thought

Books:

 Weapons of Math Destruction, by Cathy O’Neil

The Econocracy, by Earle, Moran and Ward-Perkins

 (Amazon affiliate links.)

There really is no need to use the example of a recipe when teaching kids what an algorithm is. Recipes are not a perfect fit, for reasons I’ll explain in a moment.

A much better example is a list.

Many algorithms are, in effect, a black box. Drawing by Terry Freedman

There are two kinds of list: bulleted and numbered. Common convention, enshrined in one of the British Standards (the one for writing computer manuals, if memory serves), has it that items in the former can be addressed in any order. The numbered list, on the other hand, must be dealt with in order, starting (obviously) with number one.

An example might be installing software, in which case the list might look something like this:

1. Back up essential files.

2. Make a restore point.

3. Install the software.

4. Check that it works.

That’s an algorithm by any other name, and it meets the requirement that the order of execution is important. For instance, there’s not much point in making a restore point after the new software has caused damage. At least, not only then.
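To make the point concrete, the four steps above could be sketched in code. This is a toy illustration only: the step functions are invented placeholders, not a real installer.

```python
# Toy sketch of the numbered list as an algorithm.
# All four step functions are invented placeholders for illustration.

def back_up_essential_files(log):
    log.append("back up essential files")

def make_restore_point(log):
    log.append("make a restore point")

def install_software(log):
    log.append("install the software")

def check_it_works(log):
    log.append("check that it works")
    return True

def run_installation():
    """Run the steps strictly in order: a restore point made
    after installation would be too late to be useful."""
    log = []
    back_up_essential_files(log)
    make_restore_point(log)   # must happen BEFORE the install
    install_software(log)
    works = check_it_works(log)
    return log, works
```

The function body enforces exactly what the prose says: the sequence of calls is the algorithm, and reordering them would change (and break) its meaning.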

Nevertheless, there are good reasons to use recipes as an analogy -- not so much because they are a good example, but because of the problems they entail. Indeed, recipes can be used to illustrate two aspects of digital literacy: having a sense of ethics and moral responsibility, and the need for precise language.

The first thing to consider when using recipes as an example is the dish itself. Too often, programs are written that may be excellent in terms of coding but are morally dubious.

For example, there have been suggestions that IBM’s computers were used by the Nazi regime. See The Punch Card Conspiracy for a balanced review of a book making these claims. Even if the claims are untrue, the point stands: a computer program could be written that sorts people into different groups, solely to make it easier to get rid of some of them. Such a program might be brilliant in terms of how it works, but is it something we would want?

Software that analyses crime data may cause black people to be over-represented in the results, as described in this video and transcript by Cathy O’Neil.

In a similar way, when some academics decided to map crime hotspots according to white-collar crime rather than street crime, they came up with a very different picture.

These are examples of programs -- algorithms -- that work well, but are flawed because of their suspect morality or their in-built biases.

In short, although programming itself may be objective, the uses to which it is put may not be.

Weapons of Math Destruction by Cathy O’Neil explores the issue of hidden biases very well. I mentioned it in a previous issue of the newsletter, but it’s worth repeating in this context, I think, especially as I’ve now read more of it.

 

In a series of chapters covering issues such as job recruitment, justice, advertising and other areas, O’Neil shows how flawed programs are leading to unforeseen consequences. Quite often, a positive feedback loop is set up, whereby somebody cannot do or achieve something because of the way the program has been written, and that refusal itself gets into the system and provides a further reason for future refusals.
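The feedback loop described here can be shown with a deliberately simplified toy model (my own invention, not taken from the book): each refusal is written back into the applicant’s record, and lowers the score used for the next decision.

```python
# Toy model of a positive feedback loop in an automated decision system.
# The scores and thresholds are invented for illustration.

def decide(history, base_score=45, threshold=50):
    """Refuse or approve; each past refusal drags the score down further."""
    score = base_score - 10 * history.count("refused")
    decision = "approved" if score >= threshold else "refused"
    history.append(decision)  # the decision itself feeds back into the record
    return decision

history = []
decisions = [decide(history) for _ in range(3)]
# Each refusal makes the next one more likely: the loop locks in.
```

A first score just below the threshold produces a refusal, which lowers the next score, and so on: the applicant can never recover, exactly the self-reinforcing pattern O’Neil describes.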

The only people who really understand how these programs work are the companies that created them. Given that they are earning handsomely from their use through licences, they have no incentive to correct them, much less reveal how they work.

I’ve explained this in a very generic way, but that is pretty much the gist of the matter. It reminds me very much of a short science fiction story by Gordon R. Dickson, called Computers Don’t Argue. In the story, a man who is mistakenly accused of failing to pay for, or return, a book obtained from a book club ends up on Death Row, thanks to the computer system governing the whole process. The story was very prescient: it was originally published in 1965.

The Econocracy deals with a situation which, although not involving technology, is in some respects very similar. When the government declares that the economy is in a mess and the only solution is to tighten our belts (‘austerity measures’), my response to anyone who will listen is that that “solution” is only the “only” one because of the economic model being used to explain how the economy works. If you prefer a different model, you end up with a different solution.

It used to be an in-joke amongst students and teachers of Economics that if you asked ten economists their opinion, you’d get ten different answers -- 11 if one of them went to Harvard. But now, according to this book, only one macroeconomic model is taught at most universities (this is in the UK, but I have the impression that the same is true in other countries).

Another development, which was starting even when I was at university, is that academic Economics has become increasingly mathematical, and even more divorced from the real world than it ever used to be (and that is saying something).

One consequence of these developments is that non-economists cannot understand the models that economists use, and therefore cannot effectively argue with their pronouncements: you have to take it all on trust. Given that economists have a major influence on governmental policy, that is not a great position for society to be in. It means, in effect, that we’re all at the mercy of an underlying model that very few people understand, and that the rest of us are therefore effectively excluded from taking part in any debate. You can see why I regard this as having similar characteristics to the situation described in O’Neil’s book. In each case, decisions are made according to the inner logic of a process that very few people can understand, or perhaps even know about.

How might such topics be approached via recipes? There is no exact match, of course, but here are my ideas about this.

First, your choice of dish to make is influenced by cultural, ethical and other factors. For example, I wouldn’t make a sponge cake because I’ve cut down on that sort of thing.

Secondly, recipes themselves are, or seem to be, influenced by factors other than the dish itself. I came to this conclusion after my wife pointed out to me that if you compare a recipe for a particular dish published today with one published 40 years ago, the modern one has several times more sugar in it.

Even the recipes of canned and packet foods change over time, in response to changing tastes, or to circumstances (see, for example, this article about baked beans).

Another problem with recipes is that they do not use precise language. What does “stir briskly” mean, for example? A recipe can be much looser than a computer program’s code.
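By way of contrast, here is how “stir briskly” might have to be pinned down before a program could act on it. The function and its parameters are invented purely for illustration:

```python
# A program cannot accept "stir briskly": every quantity must be stated.
# The function and parameter names below are invented for illustration.

def stir(speed_rpm, duration_seconds):
    """Stirring, with the vagueness removed."""
    return f"stirred at {speed_rpm} rpm for {duration_seconds} s"

# One cook's interpretation of "briskly":
instruction = stir(speed_rpm=120, duration_seconds=30)
```

Two cooks will read “briskly” differently; two runs of this function, given the same arguments, cannot. That gap is exactly what makes recipes an imperfect stand-in for algorithms.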

In conclusion, I’d prefer to teach algorithms 101 by using a numbered list analogy, and recipes for highlighting other programming-related issues. What’s your opinion of all this? You can let me know simply by emailing me or, better still, posting an article on your own blog and then sending me the link.

Other articles concerning algorithms:

Computational thinking? Algorithms? Why all the jargon?

Why I dread the thought of benign algorithms

Outsourcing assessment to an algorithm

What I'm about to start reading:

I'll be reviewing this book in Digital Education. (That's an Amazon affiliate link, by the way.)

For the full list of articles featured in our 2017 Retrospective, please visit:

2017 Retrospective: Index of featured articles