ICT & Computing in Education


Quick look: The Language of Deception


From a quick perusal (it arrived only recently), and as the title suggests, this book deals with the dangers of artificial intelligence insofar as it can fool us. For example, it shows how a script could be written to make a bot seem so lifelike that it could persuade people to hand over their social security details.

In this context, the Turing Test comes to mind. I’m not at all convinced that being unable to tell the difference between a computer and a person means the computer is intelligent. However, Turing’s original formulation of the ‘imitation game’ asked only whether a machine could pass for a person, that is, whether it could be perceived as being intelligent.

Apparently, people have been using ChatGPT to interact with potential romantic partners. A college used it to generate a letter addressing the grief and trauma caused by a recent mass shooting. The college authorities might have saved themselves from a backlash had they not inserted the words ‘paraphrased from a response by ChatGPT’.

That was bound to be noticed, and noticed it was, which brings us on to our own responsibility to be perceptive. For example, in one ‘deepfaked’ image, part of the text was distorted, potentially giving the game away. A few weeks ago I myself used an AI image generator to create a cover for the book The Girl At The Tram Stop. It’s not a photo and therefore not the same as a deepfake, but someone pointed out to me that one of the people in it has no feet. I hadn’t noticed! In a different experiment, I used an AI image generator to create a photorealistic image. The result was very lifelike, but if you looked closely you would notice that one of the subject’s hands was distorted.

The Language of Deception is very readable, with some great examples.