“His manner was so friendly that I forgot to put on my cockney accent, and he looked closely at me, and said how painful it must be for a man of my stamp, etc. Then he said, ‘I say, you won’t be offended, will you? Do you mind taking this?’ ‘This’ was a shilling, with which we bought some tobacco and had our first smoke that day. This was the only time in the whole journey when we managed to tap money.”
George Orwell, ‘Hop-Picking’, October 1931. Collected Essays, Journalism and Letters of George Orwell, Volume 1: An Age Like This 1920–1940, Penguin, 1968, p. 83
Clearly, the old school tie works, even when it isn’t worn. Incidents like this pop up several times in George Orwell’s writings of the 30s, in articles like “The Spike” and in Down and Out in Paris and London (1933), and they always make him uncomfortable. The reminder of the deference that he was accustomed to, and had been trained for, in his original identity as old Etonian Eric Blair was both welcome and unwelcome. Unwelcome, firstly, because it was embarrassing to be ‘unmasked’ in front of people with whom he had become friends precisely because at the level of society at which they existed – and in these writings it is the poorest of the working class or the unemployed and destitute – there were no class distinctions anymore; as he says in Down and Out in Paris and London, regarding a typical London lodging house:
All races, even black and white, mixed in it on terms of equality. There were Indians there, and when I spoke to one of them in bad Urdu he addressed me as ‘tum’ – a thing to make one shudder, if it had been India. We had got below the range of colour prejudice.
George Orwell, Down and Out in Paris and London, Gollancz 1933, p. 150
Though Orwell was sometimes taken aback by the levelling effect that poverty had, he welcomed it too – his occasional unmasking as a “gentleman” was an unpleasant reminder of his abandoned life as a police officer and tool of colonial oppression in Burma. But it was also useful in a way – not just because money and gentle treatment were welcome after weeks or months of hardship, but because it was a stark and simple illustration of exactly the kind of injustice, inequality and disparity he sought to draw attention to with his writing. Orwell is happy to write openly about his deception, partly because it was essentially harmless and necessary in order to truly experience the kind of life he wanted to write about. But perhaps he was also comfortable doing so because, much as he would have liked to have ‘proletarian’ readers – and probably did have a few – he was mainly writing for an audience of his peers: the political class who could, if they really wanted to, improve the lives of the vast, faceless mass of unemployed and homeless that they were no doubt aware of, but preferred to think of, if at all, as feckless layabouts who probably deserved their lowly status.
There were of course many working-class readers in the 1930s, possibly even more than there are now, given the enormous output of publishers of what Orwell calls “cheap novels” in that era, not to mention the libraries, newspapers and periodicals designed to cater for every possible niche hobby, which he lists in his 1940 essay ‘Boys’ Weeklies’. In fact, he notes in Down and Out in Paris and London that even the unemployed, homeless underclass of itinerant tramps were voracious readers of Buffalo Bill novels and the like, whenever they could get hold of them. Of course, even the most ‘proletarian’ newspapers and publishing houses were owned in the 1930s by people with backgrounds similar to Orwell’s – and by and large they still are. Likewise, at that point it would probably have seemed natural that this same class were to be found running the more recently established broadcasters, notably the BBC. Natural, because before WW2, the role of the upper class was still very much seen as ‘the management’ of the British Empire, with the middle class as administrators, but both far outnumbered by the working class who did the work (well, management and administration are work too, but you know what I mean).
What might – or on reflection, might not – have surprised Orwell is that 70+ years after his death, when class differences have been (or appear to have been) diminished, the leaders, for a while, of the relatively extreme left and right wings of British politics, who appealed openly to the working classes, should have been ex-public schoolboys called Nigel and Jeremy. It might surprise him too, to find that members of the openly elitist, public-school-educated minority to which he belonged would still be going around pretending to be ‘ordinary blokes’,* almost like he did, in newspapers and especially on television. There are differences in the 21st century; the working class, although now interchangeable to a far greater extent with the middle class, are by virtue of numbers the main demographic catered to by TV, and so whereas Orwell was trying to blend in with his social inferiors to prove a point to his peers, undercover toffs today are mostly trying to blend in with them in order to appeal directly to, and ultimately financially benefit from, those working class people.
* even as a working-class person I inwardly cringe writing “ordinary bloke,” but I think it’s the correct phrase in this instance for what these people think they represent. But bloody hell, “ordinary bloke” – from here on in I’ll just write “OB”.
Some observations: the incognito upper class type seems mostly to be a male thing. The female counterparts of these kinds of commentators and presenters are there – Kirstie Allsopp or Mary Berry spring to mind – but unlike the men they seem content to be unselfconsciously posh, which is fair enough.
There are various versions of the type. Some are benign and essentially innocent; people who, one assumes, would have been dropouts whether or not opportunities in TV beckoned, and whose scruffy clothes and sloppy speech were probably originally adopted to annoy their parents, or just as a way of opting out of the expectations that come along with class privilege (but you get to keep the privilege anyway, so…)
Since at least the 1960s, the pop and rock music business has always been full of these kinds of people – and since the 60s too, their opposite has existed: the vastly wealthy who weren’t born into an upper class background. It’s possible that these people – rock stars and entrepreneurs – act in some ways as role models to the posh OBs.
The kind of TV shows made by the benign-dropout demographic tend to reflect a somewhat genteel outsider status* and are often geared towards niche hobbies and interests, so that the whole thing has the aura of the upper class dilettante of the 20s, dabbling in publishing modernist poetry or abstract art. This is a public role in a way, but although it allows the presenter to share his views on the world and life in general, it feels essentially more like a sharing of enthusiasms than anything overtly or covertly patronising or manipulative.
*I do realise that every word of this is probably wildly unfair and doesn’t take into account any of the genuine struggles that come with class expectations etc: oh well.
Where it feels less benign and perhaps more deceptive is when the “OB”-ness of the presenter is an embodiment of what he thinks an actual “ordinary bloke” is like. Perhaps not surprisingly, the evidence suggests that the posh public schoolboy assumes that the OB is what the tabloid press – also, it should be noted, owned by posh ex-public schoolboys – tries to condition them to be. No doubt there are working class people who are old fashioned, conservative, unreconstructedly misogynistic, knee-jerk racist xenophobes, impatient with anything that might seem effete – but it’s also clear that the tabloid press wants them to be that way and does what it can to perpetuate and spread these attitudes. Which is logical enough; the whole point of the class system is to preserve itself and ensure the survival of privilege, blood lines and all that crap. An interesting question – which I don’t know the answer to – is whether it is self-awareness or self-deception that makes the ersatz OB hide his upper-class accent for TV purposes. Either way it’s probably a wise move, because if there’s one thing that seems risibly effete to the kind of proletarian the tabloid press imagines, it’s the particular kind of upper class speech nurtured in the most expensive and exclusive public schools.
It seems that on the whole, the public is pretty much okay with the fake OB as entertainer and cultural commentator; except for those regular instances when he goes “too far.” But the whole raison d’être of this kind of public figure is to test the boundaries of what is acceptable, always with the safety net that the whole persona is so obviously contrived that nothing they say can ever be taken seriously, surely? But it’s notable that the self-consciously “outrageous” incidents that pop up from time to time, that seem to simultaneously mark out where those boundaries are and make reactionary attitudes just a little bit more acceptable, always come from the same place. It’s that sweet spot where the tabloid-owner’s classist projection of the “ordinary bloke” – impatient with having to respect people, constantly at war with ‘political-correctness-gone-mad’ – happens to coincide and blend with the underlying upper class snobbery and prejudice that we aren’t supposed to notice, because of that bluff OB exterior. Class prerogatives, racism, classism, the fear of privilege being eroded, the snooty, outraged ‘don’t-you-know-who-I-am?’ loathing of having to deal with or, god forbid, defer to social or racial inferiors; the fear of change. But never mind, it’s all just a joke, innit, and if you take it seriously then you are a puritanical killjoy and who would ever want to be that? No self-respecting ordinary bloke, anyway.
What was the first thing that scared you? The answer to that question is no doubt buried deep in your subconscious and could be almost anything. What was the first thing you sought out because you wanted to be scared? That should be easier to answer but for me at least, it isn’t really.
Well, there was Halloween, and Guy Fawkes Night still used to have a certain frisson in the days when effigies were burned on communal bonfires; an archaic-sounding memory now that November 5th is marked, if at all, by a few fireworks and now that Guy Fawkes has a new life as the face of anonymous protest, thanks to the weak movie adaptation of David Lloyd and Alan Moore’s classic graphic novel V for Vendetta. Whether many of the people using the likeness of “V” know that the real Fawkes’s aim was to restore an absolutist Catholic monarchy, rather than to restore power to the people, or whether most of them even know who Guy Fawkes was, I can’t say.
At some point in early childhood I became aware – as we all do – of the classic horror villains; Dracula, Frankenstein’s monster, werewolves, the mummy. Those same creatures, in fact, that horror film-loving adults know as ‘the Universal monsters’ – an appropriate/fortuitous name, as they are or at least were a kind of lingua franca for kids in the western world. But at the same time, it’s hard to say when exactly one became aware of them. I was bought (and still own) Dracula’s Spinechillers Annual (more about that here) for Christmas when I was eight – but that was hardly my introduction to Dracula. So what was? The earliest memories of these icons that I can pinpoint are parodies, things like The Munsters which, though already a couple of decades old, was still regularly aired when I was a child. Then there was Carry On Screaming and of course cartoons made specifically for children like the Groovie Goolies – also of a certain vintage by then – and the more up-to-date Drak Pack. But although these were all light and funny, even when watching them as a young child, Dracula/Frankenstein/The Mummy etc remained first and foremost horror characters and the enjoyment of those comical versions depended on knowing about the ‘real’ ones. I remember thinking that Drak Pack wasn’t scary enough. But compared to what?
Dracula’s Spinechillers Annual was surely aimed squarely at the hardback annual audience (was this only a UK thing?), the same kids who bought, or were given, the Grange Hill Annual, the Beano or Dandy or Jackie or the annual Blue Peter book. And yet, in the Dracula annual there are beautifully drawn comic strip adaptations – as faithful as they can be at their brief length – of a couple of classic Hammer horror movies. Dracula (1958) and Twins of Evil (1971) were “X-rated” at the time of their release, but by the 80s would probably have been rated 15 – even so, the comic adaptations come complete with titillating glimpses of nudity and splashes of blood that weren’t typical of kids’ annuals, to say the least. I hadn’t seen the movies at the time, but I remember that even then I was aware of Hammer films, and thought of them as something old and harmless rather than actually scary. I’d seen bits of them late at night on TV, mainly sequels; I saw Dracula, Prince of Darkness and Scars of Dracula years before I ever saw the original, superior 1958 Dracula, but nothing from them sticks out much in my mind, so I can’t imagine I was particularly scared by them.
But at some point, as an older but still pre-teen child, I became a horror fan. While the theory of gateway drugs has been discredited regarding actual drugs, there’s a lot to be said for the idea in other contexts – as a teenage heavy metal fan you (inevitably, it seemed) wanted to find music that was heavier, faster, harsher. As a young reader of what passed for children’s horror fiction (I have the vaguest memories of enjoying Terrance Dicks’s Wereboy! and Cry Vampire! as mentioned here) you equally wanted to find ‘harder stuff’ – if not more scary, then at least more nasty and graphic. Which is not to say that (in either literature or music) you inevitably stick with the hard stuff; my liking for Stephen King long outlasted my liking for Shaun Hutson. In Hutson’s defence, as a teenager I found his books ‘cool’ in a way that Stephen King’s only sporadically were, and although I don’t remember ever being actually scared by a Shaun Hutson book, he had other virtues: the pace, the energy, the humour – and to this day the opening of his 1983 classic Spawn (mentioned in various places, notably here) – my first encounter with his work – is the only time that reading a horror novel has made me feel physically sick. No wonder he became a favourite of my teenage years.
But I’m getting ahead of myself; if Shaun Hutson marked the zenith of my teenage horror addiction, the initial drug that set me on that road to excess came a good few years earlier. There were children’s books borrowed from the library which for the most part didn’t really stay with me, although I vividly remember the cover of a book of ghost stories I read then (surely edited by Peter Haining). As far as being scared goes, the things I remember most from childhood fall into two categories: genuine not-fun fear (fear of older kids, skinheads, stuff like that) and fun real-life fear; walking by a house where a ‘bad man’ lived, being on the streets at Halloween or (to some extent) Guy Fawkes night. The decline of November 5th is often attributed to the tightening of safety rules around fireworks, but I’d say its unique atmosphere actually died out just before that, when the making and burning of effigies (I still knew what “Penny for the Guy” was, but I don’t remember kids of my generation doing it) was replaced by the bigger and more exciting (but less intimate and far less peculiar) spectacle of communal firework displays.
I was still at primary school when I saw the first horror film that seemed genuinely creepy to me, The Omen. But it was essentially a dead end for a few years, as primary school kids then had no way of accessing real horror movies, at least not without the collusion of adults and a budget beyond what I think was normal in my peer group. So my main route to becoming what could be termed a horror fan (though I don’t think it would have occurred to me at that point that it was a specific genre I was drawn towards) was through reading. There’s another story to be told that begins with the hugely popular Fighting Fantasy series of game books, which leads (with some help from Iron Maiden’s mascot Eddie; an important horror icon in his own way) towards HP Lovecraft, but for me, I think the real gateway drug that led me directly to Stephen King and James Herbert was Robert Westall.
Westall is best remembered now as a children’s author who wrote about WW2, and especially the Blitz. His most important book will probably always be his first, the iconic 1975 novel The Machine Gunners, winner of the Carnegie medal, which was made into an equally iconic TV show. And it deserves its fame. It tells the story of a gang of Tyneside (actually, Garside; like most of his books The Machine Gunners is set in the fictional town of Garmouth, standing in for his own home town of Tynemouth) teenagers who ‘liberate’ a machine gun from a crashed German bomber and set up their own fortress to defend themselves and their town against the predicted Nazi invasion, in the face of what they see as the inadequate response of adult society to the situation. It remains both gripping and moving and is expertly told by a writer who had been a child during the war and was able to give a vivid account of the child’s eye view of ‘the home front,’ but who had also been a teacher, with a teacher’s insight into children and their behaviour. Like most of the best children’s fiction it never talks down to its audience, and even allows its protagonists to swear when the realism of the story demands it, which was, quaintly, hugely impressive to children of the ‘80s.
The Machine Gunners TV series was broadcast when I was 9 and I first read the book around that time. It’s not a horror novel in any sense, but there are horrific elements within it. Aside from the general dread and tension of wartime, one scene in particular made a big impression on me, not only because of the gore, but also the subtly ominous build-up to the moment of horror, something which Westall would employ even more effectively in his horror-oriented novels. Near the start of the book, its hero Chas McGill has ventured into “The Wood” which
“was bleak and ugly[…] Some said it was haunted, but Chas had never found anything there but a feeling of cold misery, which wasn’t exciting like headless horsemen. Still, it was an oddly discouraging sort of place” (Machine Gunners, 1975, p.13)
This time though, Chas does find something; the remains of the tail end of a German bomber plane which has been shot down, but which still has its machine gun attached. He climbs the wreckage to get the gun, and the description of what happens next stayed with me for years:
“He peered over the edge of the cockpit. The gunner was sitting there, watching him. One hand, in a soft fur mitt, was stretched up as if to retrieve the gun; the other lay in his overalled lap. … His right eye, pale grey, watched through the goggle-glass tolerantly and a little sadly. He looked a nice man, young. The glass of the other goggle was gone. Its rim was thick with sticky red, and inside was a seething mass of flies, which rose and buzzed angrily at Chas’s arrival, then sank back into the goggle again. For a terrible moment, Chas thought the Nazi was alive, that the mitted hand would reach out and grab him. Then, even worse, he knew he was dead.” (Machine Gunners, 1975, p.15)
After The Machine Gunners, the next Westall book I read was his excellent ‘Brave New 1984’-style dystopia Futuretrack 5 – again, not horror, but often horrifying, especially the scene near the beginning where the narrator Henry Kitson, head boy at an expensive public school, first becomes aware of the very different lives lived beyond the boundaries of his own privileged existence, and which for me entirely overshadowed the whole book when I first read it:
“… Peering through my jungle, I saw a man with no nose. He’d had a nose; I could see where it had been. Now he just had two holes to breathe through. He’d no eyebrows either. Just purple rings around his eyes, making them look tiny and staring.” (Futuretrack 5, 1985, p. 18)
This is Kitson’s first sight of an “Unem”, one of the army of unemployed, who is killed shortly afterwards by the authorities. When Kitson asks his father what an Unem is (children asking adults awkward and difficult questions is a recurring theme throughout Westall’s books for children), the reply is chilling:
‘Shut up,’ shouted my gentle father. ‘All you need to know is this – if you ever tell anybody what happened, you won’t have a home or a father or a mother.’ (Futuretrack 5, 1985, p.19-20)
After Futuretrack 5 I read as many Robert Westall books as I could get my hands on, and four in particular, all of which fit more or less within the horror genre, have stayed with me and at times unnerved me probably as much as any book I’ve ever read has. In fact, they remain creepy now, if read in the right frame of mind, and are for me the most enjoyable of Westall’s many good books. Those four are The Wind Eye (1976), The Watch House (1977, now scandalously out of print), The Devil on the Road (1978; ditto) and The Scarecrows (1981), which, like The Machine Gunners, won the Carnegie medal. The Wind Eye is probably the least good of the four, but it has some powerful scenes. The action, which involves the bleak Northumbrian coastline, time travel, satanic goats and St Cuthbert, takes place when a troubled family (the central characters are three children from two broken marriages, whose incompatible parents have recently married) go to stay in the house of a distant and eccentric relative who has disappeared and been declared dead. But one of the book’s most effective moments comes right at the beginning, before the family even reaches the predictably ramshackle and spooky house:
“‘Oh, I’m shocking our little Christian here. So unlike her beloved Father. Don’t be such a prig, Beth. It doesn’t mean a thing.’ And she placed her blue shoe on the black marble slab. Nothing moved; nothing fell. But in that instant Beth knew that someone had become aware of them.” (The Wind Eye, 1976, p.12)
This anticipates some of Westall’s most creepy moments, especially a key scene in The Scarecrows, but although The Wind Eye builds to an appropriately stormy and tempestuous climax, The Watch House is far more effectively chilling throughout, probably because, like Westall’s later horror-oriented novels, the action revolves around a single, complex and isolated character rather than a group.
The Watch House, which, like The Machine Gunners, was the subject of a TV series – though a sadly inferior and often laughable one – is the most traditional of Westall’s horror novels. The book is a kind of haunted house story, where a troubled teenage girl, away from home while her parents go through a difficult separation, becomes the focus of ghostly activity. The haunting initially centres around the Watch House, the somewhat dilapidated home of the Garmouth Volunteer Life Brigade, a kind of down-at-heel, local RNLI founded when the town was still a busy fishing port.
The atmosphere, landscape and ingredients of the story are established with skillful economy within the first few pages as the heroine Anne, driven by her spoiled and unsympathetic mother, arrives in Garmouth, where she is to be dumped on her mother’s old nanny for the holidays while the separation is hammered out at home. Garmouth, already depicted in The Machine Gunners as a town whose best years perhaps lay behind it, even in the 40s, is seen in more detail here. It’s a typical fishing town, still busy but slightly dowdy in the recession years of 1970s Britain. Decay is everywhere; Anne is introduced early on to the Black Middens, great rocks in the estuary of the Gar, historically the source of the shipwrecks which are at the book’s heart, but now tamed by great concrete piers. A sea wall, begun but discontinued when funding ran out, snakes along the foot of the cliffs on which the Watch House stands. The cliffs are crumbling, as are the ruins of a medieval priory with its slightly dilapidated coastal graveyard; “The sea must eat away the cliff, thought Anne. Some wild nights, bones long buried in earth must receive final burial in sea.” (The Watch House, 1977, p.10)
And then of course there’s the Watch House itself, established almost immediately as a sinister, but fascinating and alluring presence:
“The road ended at the Watch House, which loomed over them as they got out of the car. Built of long white planks, sagging with the years, it had a maritime look. Like a mastless, roofed-in schooner becalmed in a sea of dead grass. Through its windows showed a dark clutter of things that couldn’t be recognised. This clutter and a lack of curtains made the windows look like eyes in a white planked face.” … “The Watch House was well-named. It did seem to watch you. But it was only the effect of dark windows in white walls.” (The Watch House, 1977, p.10-11)
For the first two parts of the novel, the Watch House is at the centre of the supernatural action. A working base for the now-rarely-needed Life Brigade, by this time a group of old, retired men, it also houses their memorabilia. Like the house in The Wind Eye it’s full of fascinating curios. But whereas the house had belonged to one man with a fascination for the past, the Watch House is a repository for generations’ worth of knick-knacks; old photographs, items rescued from shipwrecks, ship’s figureheads, even the bones of the dead found among the Black Middens but never identified. Initially a project for Anne to pass the time, the cleaning, organising and documenting of the Watch House’s contents becomes an obsession and initiates the connection between Anne and a ghostly presence, known affectionately to the members of the Brigade as ‘the Old Feller.’ Hitherto known and only half believed-in as a somewhat playful spirit who knocks things over and leaves messages in the dust, when Anne arrives his messages become frequent and unambiguously urgent and personal; they are a cry for help.
Anne’s status as a sympathetic outsider, as well as her somewhat lonely position, is reinforced throughout the novel, where the other characters are almost all arranged around her in paired opposites. There are Purdie and Arthur, the elderly couple she is staying with, she old fashioned and disapproving, he mischievous and childlike; the friends Anne makes, Pat and Timmo, Pat cosy and docile, the simian Timmo energetic, cerebral and inquisitive; the two clergymen, Father Fletcher – the local Church of England vicar, cheerful, straightforward and relaxed – and Father da Souza, an American Catholic priest, fiery, dynamic and antagonistic. Even Anne’s parents, peripheral but essential elements in the story, fit this pattern: Anne’s mother is fashionable, demanding, cold and impatient, while her father – who barely appears – is warm, caring, disorganised and ultimately, perhaps, a less sympathetic figure than the author intends. Finally, there are the ghosts themselves; the Old Feller, harmless, terrified and childlike, and the real villain, the ghost of a murderous army officer named Hague, who is bullying, menacing and violent. In each of these cases Anne comes between the other characters, at times more-or-less harmoniously (keeping the peace between Purdie and Arthur and Pat and Timmo) and at others inadvertently stoking tension. Anne’s own personality, less flamboyant than most of the cast, is mainly brought out in contrast with the others, and essentially we see her as an ordinary, lonely teenager. She’s clever and industrious, mild-mannered, but also easily bored. There’s a sharper side to her nature too, mainly expressed when her mother is around, which can be surprising and no doubt helped to earn the book its Puffin Plus (older children and teens) status. We meet this side of Anne right at the beginning of the novel, when, approaching Garmouth, her mother warns her about Arthur:
“Never made anything of himself, even by their standards. He takes advantage, given half a chance. You’ll need to watch him.” “What is he – a rapist?” “I wish you wouldn’t talk like that.” (The Watch House, 1977, p.9)
Anne, already not thrilled at this enforced holiday with near-strangers, is clearly trying to antagonise her mother, but as we discover, her cynicism is well-founded, not because of Arthur himself (who is a harmless, if irritatingly childish old man), but because she is used to the unwanted attentions of her mother’s boyfriend, the loathsome “Uncle Monty”. Late in the novel, when her mother threatens to take her home to London:
“‘I don’t want to live with you. I can’t stand having that man around the place the whole time.’ […] ‘You mean Uncle Monty? He’s just a friend, you silly goose. He’s just helping me settle in, that’s all.’ ‘By spending all night in your bedroom while Daddy’s away? […] He can’t keep his hands off me either. He’s always trying to touch me, when you’re not watching. And give me wet open-mouth kisses.’ It was true. So why was it so terrible to say it?” (The Watch House, 1977, p.158)
We are reminded throughout the book that Anne is a teenager and not a child; she is at her most teenager-ish when she goes to the local Youth Club disco in the hope of meeting people her own age:
“She’d thought hard what to wear at the Youth Club, and finally decided on plain Wranglers with a Wrangler top. […] Nothing for little cats to get their tongues around; nothing for them to pick holes in. Course, they’d pick holes anyway. But not such painful ones.” (The Watch House, 1977, p.65)
Initially, all of the ghostly activity happens within the Watch House itself and takes the form of writing in the dust on the display cases and flickering lights, but when, a few years after reading The Watch House, I first read Stephen King’s IT, the scenes where that novel’s young protagonists first encounter Pennywise irresistibly reminded me of Anne’s first unambiguous encounter with ghosts after the Garmouth carnival, a beautifully effective and atmospheric piece of writing:
“As she got further along the pier, and the sky darkened, the family groups thinned out. She passed through the last, and was alone. Except for one small person in Victorian top-hat and frock-coat, hurrying ahead of her towards the lighthouse. Head down and hands behind his back. Alone among the crowds he looked anxious. He kept peering over his shoulder at her, his face a white blur in the dusk. […] Didn’t she know him? Of course not. It was just that he looked like that picture of Isambard Kingdom Brunel, who built the Great Western. Except Brunel had looked so much cockier with that big cigar. Not so scared… And then she knew, quite certainly, that she was looking at a ghost. Because the light on the South Pier came on, and shone right through his face. […] ‘It’s me, Anne,’ she took a step forward. The ghost writhed away. ‘Whatever’s the matter?’ Her voice rose to a scared shriek. This had happened before to her. Where? Where? In the orchard with Cousin Jane. She had walked towards Cousin Jane, and Jane had shrieked with terror. Because Anne, all unknowing, had a spider in her hair, and Jane was terrified of spiders. […] Anne whirled round. Something faded round the curve of the lighthouse. Something red. There was a strong gust of seaweed; the smell of the bottom of a river. […] She tried doubling back. Nothing. The Old Feller was gone. She was alone with something red that stank of the river and had terrified a ghost.” (The Watch House, 1977, p.116-7)
During the first two acts of the novel, Westall expertly raises the tension and confounds expectations, the simple haunting becoming something more complex and less predictable as Anne’s not-always-harmonious relationship with her newfound friends complicates things further. Then, as we enter the novel’s final phase, The Watch House has a feature that I’ve always loved in horror novels and one which I associate with (again) IT in particular – the period of research, usually during a lull in horrific activity after the threat has been established. In The Watch House, Anne initially assumes that the ghost – The Old Feller – is trying to enlist her help to save the Watch House – which he, as founder of the Garmouth Volunteer Life Brigade, had built – from financial and physical ruin, and by extension save the Life Brigade itself. But when, once Anne has helped to secure the future of the Watch House as a museum, the hauntings don’t stop, it becomes clear that more than one spirit is involved.
After a session of hypnosis with her new friends Pat and Timmo proves both disturbing and revealing, it becomes clear that understanding the problem requires more detailed local knowledge than Anne has. She talks to the oldest member of the Life Brigade, the 95-year-old Bosun, who gives her an eyewitness account of events she has previously seen under hypnosis, through the Old Feller’s eyes. She again enlists the help of Timmo. Introduced in the guise of ‘Doctor Death’, an eccentric DJ running the youth club disco, Timmo is an older teenager, a medical student with a huge variety of interests and expertise, but no real attention span. Timmo is knowledgeable and freakishly intelligent, but his interest in the paranormal is playful and skeptical rather than genuine, and after the dramatic first hypnosis session, Anne only reluctantly agrees to do it again. Before that happens, Anne insists on some more concrete research, but as is common during these kinds of interludes in horror fiction, she suffers from a sense of dislocation that makes rational thought difficult:
“Next morning, Timmo had to bully her all the way up the hill to Front Street. If he hadn’t called for her, she would never have got out of bed. Her legs felt like lead; she had hardly slept. Front Street, full of shoppers and red double-decker buses, was insubstantial, like a dream. It was the real world that was ghostly now.” (The Watch House, p.131)
The novel’s final act brings the story to fever pitch as the supernatural events become more deadly and Anne’s mother arrives in Garmouth, threatening to take her back to London. The climax, involving the two priests in an extended exorcism – surely influenced by the final scenes in the movie version of William Peter Blatty’s The Exorcist – is powerful but, like the ending of this article, a little bathetic. Although narratively satisfying, it’s loud and apocalyptic where the novel’s most effectively eerie moments are quiet and understated. The scenes that lingered in my mind – and which remain the most vivid to me decades later – are those when Anne, alone in the Watch House, is menaced by Hague, or when she is stalked by a mangy, grave-digging dog in the old Priory churchyard. As horror fiction, these are among the finest scenes that Westall ever wrote. Anne, too, is a surprisingly vivid and sympathetic character; Westall’s female characters are often on the verge of caricature and his usual (youthful, male) protagonists tend to have a manly impatience with the women in his books. I would hesitate to call Westall’s books misogynistic, but there is sometimes a strain of male chauvinism to them which seems to belong to the author as much as it does to the characters. It’s also an oddity perhaps worth mentioning that of all the books I read as a child – and there were quite a lot of them – Westall’s are the only ones I recall which almost invariably have a flippant reference to rape in them, which definitely feels bizarre in the 21st century. The Watch House itself is very much a product of the 1970s – with much that that entails; chauvinism, mild homophobia, flared trousers – in a way that The Machine Gunners wasn’t, which possibly accounts for its currently out-of-print status.
But that’s a shame; with some kind of preface or disclaimer about its dated attitudes and language, it could easily go on to scare new generations of children, and get them hooked on the mysterious delights of the horror genre.
Imagine a culture so centred on wealth, property and power that it becomes scared of sex and frets endlessly about what it sees as the misuses of sex. A culture that identifies breeding so closely with money, wealth and status, and women so closely with breeding and therefore with sex that, when looking to replace traditional symbols of birth and regeneration, it rejects sex and even nature and, in the end, makes the embodiment of motherhood a virgin and the embodiment of rebirth a dead man. Unhealthy, you might think; misanthropic even – and yet here we are.
But when that culture loses its religious imperative, what should be waiting? Those old symbols of fertility: rabbits and eggs. But whereas Christianity in its pure form found it hard to assimilate these symbols, preferring instead to just impose its own festival of rebirth on top of the pagan one, capitalism, despite being in so many ways compatible with the Judeo-Christian tradition, is essentially uninterested in spiritual matters. So even though it’s mostly pretty okay with Christianity, which creates its own consumer-friendly occasions, it proves to be equally okay with paganism too, as long as it can sell us the pagan symbols back in a lucrative way.
Easter is, after all, a mess to begin with; its name is pagan (Ēostre or Ôstara, Goddess of the spring) and its Christian traditions, even when embodied in the tragic idea of a man being killed by being nailed to a cross, were never entrenched enough to suppress the essentially celebratory, even frivolous feeling that spring traditionally brings. Okay, so Christ ascending to heaven is pretty celebratory without being frivolous; but as represented, in the UK at least, by a hot cross bun – with its cross on the top to represent the crucifix and even, to play up the morbid factor that is so central to Christianity, spices that supposedly allude to the embalming of Christ’s dead body – it’s hardly solemn: it’s a bun.
On the other hand, birth, since the dawn of time and to the present day, has never been just a simple cause for rejoicing, and in that respect the Christian tradition – though it tries to remove the aspects that seem to us most central to birth: women, labour (the word presumably wasn’t chosen accidentally) and procreation – probably tells us more about the seriousness and jeopardy of childbirth than the Easter bunny does.
The patron (matron?) saint of childbirth is no help; St Margaret in herself has nothing to do with birth (although she was presumably born), but becomes its saint through the symbolic act of bursting out of the dragon who ate her – a strange analogy, but one that reflects the hazardous nature of childbirth in medieval times, when mortality rates were high, not just for babies but for their mothers. Rabbits may represent – in ancient cultures across the world, from Europe to Mexico and beyond – fecundity, but theirs is an animal idea of fertility for its own sake, one that has nothing to do with the practical or emotional aspects of producing new human beings.
Pregnancy in Western art was a rarity until fairly recently; and even now, the puritanical ideas inherited from Victorian Christianity mean that the apparently pregnant Eve of Jan Van Eyck’s Ghent Altarpiece (completed in 1432) is a subject of debate: Eve pregnant with humankind makes sense, and the 15th century was certainly more in touch with the realities of human life than the 19th and early 20th century men who codified the canon of Western art history – but maybe she is simply the medieval/gothic ideal of femininity as seen in illuminated manuscripts and carvings; small shoulders, small breasts, big hips and stomach – given an unusually realistic treatment.
As the nineteenth century gave way to the twentieth, Gustav Klimt was able to bring the beauty and wonder of pregnancy and birth to art with Hope I, his beautiful female figure of hope and renewal glowing against a background of death and peril, but it’s only really when women begin painting that the symbolic and magical aspects of motherhood are reconciled with the more sombre, earthly spirituality that Christianity preferred to represent in a dying man, and with the fundamental animal nature of humankind, without that being a negative thing. A painting like Paula Modersohn-Becker’s Reclining Mother and Child II (1907) shows all of the human aspects that were embodied in the contorted Christian images of the Virgin Mary, crucifixion and Christ’s rebirth: human beings that are fragile, loving, vulnerable and dependent on each other, but also the things that were missing; biology and the bonds it creates. The magic of Klimt, but not represented in a titillating way, and depicted in concrete rather than symbolic terms.
For the generation after Paula Modersohn-Becker, everything was seen through the fragmenting prism of World War One, and so Otto Dix, more cynical, less intimately involved, shows us the physical discomfort of pregnancy minus its magic. Dix, despite his famous claim – “I’m not that obsessed with making representations of ugliness. Everything I’ve seen is beautiful.” – took a definite pride in shocking viewers with his art; as he also said: “All art is exorcism. I paint dreams and visions too; the dreams and visions of my time. Painting is the effort to produce order; order in yourself. There is much chaos in me, much chaos in our time.” By the time Dix painted these pictures he was a father himself, but although his paintings of his family reveal a more tender, if just as incisive, aspect to his art, here he paints as a pitiless observer, knowing that his work was challenging and confrontational to the generally conservative audience of his time; a time when, like ours, forces of intolerance and conservatism were closing in on the freedom embodied in art this truthful.
But despite his clinical eye and devotion to the ‘new objectivity’ (“The Neue Sachlichkeit – I invented it“), Dix’s truth is of a dramatic, heightened kind, designed to penetrate the complacency of his era. Meanwhile, his pupil Gussy Hippold-Ahnert tackled the same subject – and almost certainly even the same model – with a realism that is at first less striking but also far less dramatizing. Hippold-Ahnert was of course a woman, and is not showing us, as Dix seems to be, a faceless being representing the eternal, but rarely remarked upon, hardship involved in the joyous business of continuing the species. Instead, she shows us a woman who happens to be pregnant; both paintings are realistic, both are objective and, as with the symbolic sacrifice of Christ and the eternally recurring bunny, both display different aspects of the truth.
*firstly, may change this title as it possibly sounds like I’m saying the opposite of what I’m saying*
That western culture¹ has issues with women’s bodies² is not a new observation. But it feels like the issues are getting stranger. Recently there have been, both on TV (where the time of showing is important) and online (where it isn’t), cancer awareness campaigns where women who have had mastectomies are shown topless (in the daytime). This is definitely progress – but it also seems to simultaneously say two different things with very different implications. On the one hand it’s – I would say obviously – very positive; it is of course normal to have a life-changing (or life-saving) operation and the scars that come with it, and it can only be helpful to minimise the fear surrounding what is a daunting and scary prospect for millions of people. Normalising in the media things that are already within the normal experience of people – especially when those things have tended to be burdened with taboos – is generally the right thing to do. These scars, after all, are nothing to be ashamed of, and nothing that should be glossed over or hidden from view. I hope not many people would argue with that. But at the same time, isn’t it also saying, ‘yes it’s completely normal and fine for a woman to be seen topless on daytime TV, or on popular social media sites, as long as she’s had her breasts³ cut off?’ That seems less positive.
¹I’m sure western culture isn’t alone in this, but ‘write about what you know’ (not always good advice, but still). I’m also aware that this whole article could be seen as a plea for more nudity. I’m not sure that’s what I mean
² might as well say it, this article deals mainly with old fashioned binary distinctions, but misogyny applies equally to trans women and I think what I say about men probably applies equally to trans men.
³ or her nipples, on social media
Looked at this way, this positive and enlightened development seems to be (inadvertently?) reaffirming ancient and (surely!) redundant arguments, but in a completely confused way. Non-sexual nudity, whatever that means, has always been okay with the establishment(s) in some circumstances. Now, one could argue from the context (cancer awareness campaign) that the nudity is desexualised, and I think that’s why it is allowed to be aired at any time of day. In fact, the rules on nudity from Ofcom (the UK TV regulator) – which are aimed at ‘protecting the under 18s’ from nudity, as strange a concept as that has always been* – are pretty simple:
Nudity
1.21: Nudity before the watershed [9 pm in the UK], or when content is likely to be accessed by children (in the case of BBC ODPS), must be justified by the context.
*Interestingly, Ofcom’s rules about nudity are listed between their rules about Sexual behaviour and their rules about Exorcism, the occult and the paranormal
So presumably, Ofcom (rightly) considers this context to be justified, because the naked body is not being presented in a sexual context. But, at the same time, one thing the cancer awareness film demonstrates – and which it seems it’s at least in part supposed to demonstrate – is that there’s nothing undesirable about the female body post-mastectomy. (Admittedly it’s entirely possible that this is just me, projecting the notorious male gaze onto the subject, as if that were the determining factor in what attractiveness is or isn’t*.) But then, the people that devised and created the film are not the same people that determine what can be shown on TV or online and when.
But even accepting that it’s permitted to show a topless woman on TV during the daytime because it’s de-sexualised nudity, why is that better? Two opposing arguments, a puritanical/right-wing one and a feminist one might both be skeptical (*rightly? see above) of me, as a heterosexual male writing about this. But if the price of women being regarded equally, or taken seriously, or not being somehow reduced by the male gaze (but also the child’s gaze, since on TV at least, nudity tends to be fine after children’s standard bedtimes and on the internet is theoretically policed by child locks) is to de-sexualise them, then that is no less problematic – and in a way really not that different – from the traditional, paternalistic Western view which sees the Virgin Mary as the ultimate exemplar of female-kind. And if sex or desire is itself the problem then not allowing female nudity is also, typically, reducing the visibility of women for what is in essence a problem of male behaviour.
It’s worth asking why nudity is even an issue in the first place, considering that we all privately live with it, or in it, every day of our lives. In many world cultures, of course, it isn’t and never has been a problem – unless/until Westerners have interfered with and poisoned those cultures – but the taboo is widespread enough elsewhere to be a human, rather than a purely Western, quirk. It possibly has a little to do with climate, but it definitely has a lot to do with religion.
But the fact is that, in Western culture, even before the era of the Impressionists and their selectively nude women or the (as it now looks, very selectively) permissive society of the 1960s, female nudity has been perfectly acceptable to depict for hundreds of years; as long as the nude female is either mutilated (say, a virtuous martyr like the Roman suicide Lucretia), the victim of alien (non-Christian) assailants (various saints*) or, turning the tables, if she is a heathen herself (various classical figures, plus Biblical villains like Salome; a favourite subject with the same kind of sex & violence frisson as Lucretia).
*I didn’t realise when I posted this article that today (5th February) is the Feast day of St Agatha, the patron saint of – among other things – breast cancer. I’m not a believer in supernatural or supreme beings, but that’s nice.
Even in Reformation Germany – surely one of the least frisky periods in all of western civilisation – in the private chambers of the privileged male viewer, nudity – especially female nudity – was there in abundance, providing it came with various kinds of extenuating nonsense; dressed up (or rather, not dressed up) in the trappings of classical antiquity. Okay, so maybe a woman can’t be flawless like Christ, but she can be nude and beautiful too, as long as she is being murdered, or stabbing herself to preserve her virtue, or is sentenced to everlasting damnation.
Men, of course, could in art – and can on TV or anywhere else – be more or less naked (admittedly with a fig-leaf or something similar) at any time because, I assume, of Jesus. Otherwise how to explain it? The male chest is arguably less aesthetically pleasing than the female one, and certainly less utilitarian in the raising of infants, but in deciding that it is less sexual, our culture follows assumptions and directives that come from religious, patriarchal roots.
The dissonance between the ways that female and male nudity are treated in our culture has its roots in Christianity and its iconography and although in the UK we’re technically the children of the Reformation, what’s striking is how little difference there really was between the way nudity was treated in the Catholic renaissance and the Protestant one.
In both Catholic and Protestant cultures, the art that was not solely designed for the private, (adult) ‘male gaze’ was almost entirely religious. Popes and Puritans both found themselves in the same odd position; Jesus must be perfect and preferably therefore beautiful, whatever that meant at the time – but more than that, it would be blasphemous – literally criminal – not to portray Christ as beautiful.4 But in addition to being perfect, he must, crucially, be human. Understandably, but ironically, it seemed the obvious way to depict human beauty and perfection was without the burden of clothes. The human aspect is after all how the people of the Renaissance could (and I presume people still can) identify with Christ, in a way that they never do with God in other contexts, where that identification would be as blasphemous as a deliberately ugly Christ.
But how was one supposed to regard the nearly nude, technically beautiful body of Christ? With reverence, of course. But revering and worshipping the naked beautiful body of a perfect human being is not something that a misanthropic (or if that’s too strong, homo-skeptic5) religion can do lightly. Helpfully, the part of Christianity that puts the (nearly) naked figure at the centre of our attention is the human sacrifice ritual of the crucifixion and its aftermath. That bloody, pain-filled ritual allows the viewer to look at Jesus with pity and empathy and tempers (one would hope; but who knows?) the quality of desire that the naked beautiful body of a perfect human being might be expected to engender. And to that Renaissance audience, the reason for that desire was another, but far more ambiguous subject for artists; Adam and Eve.
4 There are special cases though, see below re Grunewald
5Doesn’t Alan Partridge call himself homoskeptic at some point? But what I mean is – and I’m sure many Christians would take serious issue with this – that Christianity/the Christian God is in theory all-accepting of humans and their frailties, but somehow humans as they are are never quite good enough to escape negative judgement. Not just for things like murder or adultery that are within their power to not do, but things that are in their nature. And then, making a human being who must be killed for the things that other human beings have done or will by their nature do seems on the one hand not very different from an imaginary pagan blood sacrifice cult in a horror movie and on the other, kind of misanthropic
Adam and Eve were a gift to the Renaissance man seeking pervy thrills from his art collection because they are supposed to be sexy. Here are the first humans, made, like Christ, in God’s image and therefore outwardly perfect; and, to begin with, happily nude. But in almost immediately sullying the human body, Adam and Eve are fallible where Christ is not. But how to depict the people that brought us the concept of desire except as desirable? Because they are not only not our saviours, but the opposite, their nudity can afford to be alluring, as long as the lurking threat of that attraction is acknowledged.
Alongside the problems of the iconography in art came the practical problems of making it; and I think that one of the reasons that, of the main ‘Turtles’ of the Italian Renaissance,6 Raphael was elevated to the status he enjoyed for centuries, is that his nude women suggested that he might actually have seen some nude women. For all their athletic/aesthetic beauty, figures like Michelangelo’s Night (see below) and his Sistine Chapel Sibyls are the product of someone who found that the church’s strictures on female nudity (no nude models) happened to strike a chord with his own ideas of aesthetic perfection. Likewise, Leonardo’s odd hybrid woman, the so-called Monna Vanna (possibly posed for by one of his male assistants) seems to demonstrate an uncharacteristic lack of curiosity on the artist’s part.
6childish
One way around the problem of naked human beauty was – as it seems still to be – to mutilate the body. Paintings like Matthias Grünewald’s agonised, diseased-looking Jesus (perhaps the most moving depiction of Christ, designed to give comfort and empathy to sufferers of skin diseases) and, on (mostly) a slightly shallower level, the myriad Italian paintings of the martyrdom of St Sebastian, do much the same as those Lucretias and St Agathas; they show the ideal of the body as god intended it, while punishing its perfection so we can look at it without guilt.
This feels, for all its beauty, like the art of sickness. What kind of response these St Sebastians are supposed to evoke can only be guessed at; and the guesses are rarely ones the original owners of the paintings would have liked. Empathy with and reverence for the martyred saint, obviously; but while Grunewald’s Christ reflects and gives back this sense of shared humanity with the weight of his tortured body and his human suffering, St Sebastian gives us, what? Hope? Various kinds of spiritual (it’s in the eyes) and earthly (relaxed pose and suggestive loincloth) desire?
There are lots of fascinating themes and sub-themes here, but for now, there you have it; Christ may have, spiritually, redeemed all of humankind, but aesthetically speaking, women remain (as Narnians would say) ‘daughters of Eve’.
Nowadays, tired presumably of the restrictions on their lives, men have liberated themselves enough that we don’t even need St Sebastian’s spiritual gaze, or a hint of damnation, to justify our nudity. In what remains an essentially patriarchal society, just advertising a razor, or underwear, or perfume, or chocolate, or taking part in a swimming event, or even just being outside on a warm day is enough to justify our bodies, as long as they don’t veer too far from that Christlike ideal, and as long as they aren’t visibly excited. But even now, women – who can look like our mother Eve, but not our reborn father Christ – can be more or less naked too, at any time of day they like (on TV or online at least); just as long as they are mutilated.
Tell me now, I beg you, where Flora is, that fair Roman; Archippa, and Thaïs rare, Who the fairer of the twain? Echo too, whose voice each plain, River, lake and valley bore; Lovely these as springtime lane, But where are they, the snows of yore?¹
François Villon, Ballade des dames du temps jadis (1461)¹
My uncle died two years ago now, but his Instagram account is still there. How many dead people live on in their abandoned social media accounts? The future never seems to arrive, never really exists, but history never ends. For over a quarter of a century, social media has mirrored and shaped lives, always evolving, but leaving behind its detritus just like every other phase of civilisation. Where are the people we were sociable with on the forgotten single-community (bands, hobbies, comedy, whatever) forums and message boards of the 90s and 2000s², or the friends we made on MySpace in 2005? Some live on, ageing at an only slightly faster rate than their profile pictures (Dorian Gray would now age privately at home, his picture migrating untouched from MySpace to Facebook to Twitter to Instagram to TikTok etc), others lost, vanished, dead? But still partially living on, like those sunlit American families in the home movies of the 50s and 60s.
Twenty-five years is a long, generation-spanning time, but, just as abstract expressionism essentially still lives on, in almost unaltered forms but no longer radical, long past the lifetimes of Rothko, Jackson Pollock and de Kooning, so the (just) pre-internet countercultural modernity of the late 80s and early 90s, the shock-monster-gender-fluid-glam of Michael Alig and the Club Kids, still prevalent back in the Myspace era³, (captured brilliantly in the 1998 ‘shockumentary’ Party Monster and less brilliantly in the somewhat unsatisfactory 2003 movie Party Monster) lives on and still feels current on Instagram and Tiktok and reality TV and in whatever is left of the top 40. Bulimic pop culture eats reconstituted chunks of itself and just as the 60s haunted the early 90s, bringing genuine creativity (Andrew Weatherall, to pick a name at random) and feeble dayglo pastiche (Candy Flip, to deliberately target a heinous offender), a weird (if you were there) amalgam of the 1980s and 90s haunts the 2020s, informing both the shallow dreck that proliferates everywhere and some of the genuine creativity of today.
‘I’m ready now,’ Piper Hill said, eyes closed, seated on the carpet in a loose approximation of the lotus position. ‘Touch the spread with your left hand.’ Eight slender leads trailed from the sockets behind Piper’s ears to the instrument that lay across her tanned thighs.
entering cyberspace in William Gibson’s Mona Lisa Overdrive (1988) Grafton Books, p.105.
Cyberspace, like any landscape which people have inhabited, has its lost cultures and ruins, becoming ever more remote and unknowable with the passing of years, but, like Machu Picchu or the Broch of Gurness, retaining a sense that it all meant something significant once. The not-quite barren wastelands of Geocities and Xanga, the ruined palace of MySpace, a Rosetta stone partly effaced with dead links and half-forgotten languages; Photobucket, ImageShack, Tripod – what do these mean if you’re 15? Would the old, usable interface of MySpace seem as charmingly quaint and remote now as the penpal columns in the pages of ’80s music magazines do?
But there was a time when Lycos, Alta Vista and Ask Jeeves were peers of Google, and Bebo rivalled Facebook and Twitter, both now seemingly in senile phases of their development. Until very recently Facebook (Meta) and Twitter were brands that were seemingly unassailable, but empires do fall, albeit more slowly than bubbles burst.4 And meanwhile, the users of social networks age and die and give way to generations who remember them, just as the Incas and the Iron Age Orcadians are remembered by their monuments, if nothing else. Depressing, when you think about it; probably won’t write about history next time.
It’s funny. Don’t ever tell anybody anything. If you do, you start missing everybody. JD Salinger, The Catcher In The Rye, Penguin, 1958, p.220
¹ translated by Lewis Wharton in The Poems of François Villon, JM Dent & Sons, 1935, p54. Not reading French – I seem to go on about that a lot – this is my favourite translation I’ve come across, although apparently it’s a pretty free one, judging by the literal – but still quite nice – one here
² the continuing success of Reddit suggests that people never really grew discontented with the interface of the Kiss online fanclub c. 2005 (etc etc)
³It’s weird to note that the Club Kids would be considered – even without the murder etc – just as outrageous today as in the late 80s, even though their aesthetic was itself put together from a mix of Bowie, gore movies, Japanese pop culture etc etc. But then as I think I recently noted, there are people who still find the word fuck outrageous, after something like a millennium.
4Online and mainstream culture, even after this quarter century, remain mysteriously separate. Online news unfolds as it happens, but meanwhile in the daytime world, mainstream culture hangs on to husks even older than Geocities; publicly owned TV news shows don’t look to what’s happening now, but pore over the front pages of newspapers – yesterday’s news… today! – simultaneously being redundant and ensuring that newspaper owners’ views get publicity beyond their dwindling readership and therefore giving them an artificial sense of relevance. Which is really just about money, just as Google and Facebook are; the crumbling aristocracy of print media, its tendrils still entwined with the establishment, versus the new money, steadily buying its way in.
The police in 2020 may feel beleaguered by the pressure to account for their actions and act within the boundaries of the laws that they are supposed to be upholding, but despite the usual complaining from conservative nostalgists about declining standards of respect, the question of ‘who watches the watchmen’ (or ‘who will guard the guards’, or however Quis custodiet ipsos custodes? is best translated) is hardly new, and probably wasn’t new even when that line appeared in Juvenal’s Satires in the 2nd century AD. In fact, in the UK (since I’m here), from their foundations in the 18th century, modern police forces (or quasi-police forces like the Bow Street Runners) were almost always controversial – and not surprisingly so.
It’s probably true that the majority of people have always wanted to live their lives in peace, but ‘law and order’ is not the same thing as peace. The order comes from the enforcement of the law, and ‘the law’ has never been a democratically agreed set of rules. So law and order is always somebody’s law and order; as is often pointed out, most of the things we regard as barbaric in the 21st century, from slavery and torture to child labour and lack of universal suffrage were all technically legal. ‘Respect for the law’ may not just be a different thing from respect for your fellow human beings, it might be (and often has been) the opposite of it; so it’s no wonder that the position of the gatekeepers of the law should often be ambiguous at best.
And, as it tends to do – whether consciously or not – popular culture reflects this situation. Since the advent of film and television, themes of law enforcement and policing have been at the centre of some of the medium’s key genres, but Dixon of Dock Green notwithstanding, the focus is only very rarely on orthodox police officers following the rules faithfully. Drama almost invariably favours the maverick individualist who ‘gets the job done’* over the methodical, ‘by the book’ police officer, who usually becomes a comic foil or worse, while from the Keystone Cops (or sometimes Keystone Kops) in 1912 to the present day, the police in comedies are either inept or crooked (or both; but more of that later).
*typically, the writers of Alan Partridge manage to encapsulate this kind of stereotype while also acknowledging the ambiguity of its appeal to a conservatively-minded public, when Partridge pitches ‘A detective series based in Norwich called “Swallow“. Swallow is a detective who tackles vandalism. Bit of a maverick, not afraid to break the law if he thinks it’s necessary. He’s not a criminal, you know, but he will, perhaps, travel 80mph on the motorway if, for example, he wants to get somewhere quickly.’ i.e. he is in fact a criminal, but one that fits in with the Partridgean world view
But perhaps the police of 2020 should think themselves lucky; they may be enduring one of their periodic crisis points with public opinion, but they aren’t yet (again) a general laughing stock; perhaps because it’s too dangerous for their opponents to laugh at them for now. But almost everyone used to do it. For the generations growing up in the 70s and 80s, whatever their private views, the actual police force as depicted by mainstream (ie American, mostly) popular culture was almost exclusively either comical or the bad guys, or both.
The idiot/yokel/corrupt/redneck cop has an interesting cinematic bloodline, coming into its own in the 60s with ambivalent exploitation films like The Wild Angels (1966) and genuine Vietnam-war-era countercultural artefacts like Easy Rider, but modulating into the mainstream – and the mainstream of kids’ entertainment at that – with the emergence of Roger Moore’s more comedic James Bond in Live and Let Die in 1973. This seems to have influenced tonally similar movies like The Moonrunners (1975; which itself gave birth to the iconic TV show The Dukes of Hazzard, 1979-85), Smokey and the Bandit (1977), Any Which Way You Can (1980) and The Cannonball Run (1981). Variations of these characters, police officers usually concerned more with the relentless pursuit of personal vendettas than actual law enforcement, appeared (sometimes sans the redneck accoutrements) in both dramas (Convoy, 1978) and comedies (The Blues Brothers, 1980), while the more sinister, corrupt but not necessarily inept police that pushed John Rambo to breaking point in First Blood (1982) could also be spotted harassing the (equally, if differently, dysfunctional Vietnam vets of) The A-Team from 1983 to ’85.
In fact, the whole culture of the police force was so obviously beyond redemption as far as the makers of kids’ and teens’ entertainment were concerned that the only cops who could be the good guys were the aforementioned ‘mavericks’; borderline vigilantes who bent or broke or ignored the rules as they saw fit, but who were inevitably guided by a rigid sense of justice and fairness generally unappreciated by their superiors – a tendency that reached some kind of peak in Paul Verhoeven’s masterly Robocop (1987). Here, beneath the surface of straightforward, violent scifi/action entertainment, the director examines serious questions of ‘law’ vs ‘justice’ and the role of human judgement and morality in negotiating between those two hopefully-related things. Robocop himself is, as the tagline says, ‘part man, part machine; all cop’, but the movie also gives us the pure machine-cop in the comical/horrific ED-209, which removes the pesky human element that makes everything so complicated and leaves us with an amoral killing machine. It also gives us the good and the bad human face of law enforcement: the always-great Nancy Allen, whose sense of justice is no less than her robot counterpart’s, but whose power is limited by the machinations of the corrupt hierarchy of the organisation she works for, and who is vulnerable to physical injury; and the brilliant Ronny Cox, very aware of the (practical and moral) problems with law enforcement and more than happy to benefit personally from them.
The following year, Peter Weller (Robocop himself) returned in the vastly inferior Shakedown, worthy of mention because it too features unorthodox/mismatched law enforcers (a classic 80s trope; here it’s Weller’s clean-cut lawyer and Sam Elliott’s scruffy, long-haired cop) teaming up to combat a corrupt police force; indeed the movie’s original tagline was Whatever you do… don’t call the cops. It’s also worthy of mention because its UK (and other territories) title was Blue Jean Cop, though it sadly lacks the ‘part man, part blue jean; all cop’ tagline one would hope for. Into the 90s, this kind of thing seemed hopelessly unsophisticated, but even a ‘crooked cops’ masterpiece like James Mangold’s Cop Land (1997) relies, like Robocop, on the police – this time in the only mildly unconventional form of a good, simple-minded cop (Sylvester Stallone) – to police the bad, corrupt, too-clever police, enforcing the rules that they have broken so cavalierly. The film even ends with the explicit statement (via a voiceover) that crime doesn’t pay; despite having just shown the viewer that if you are the police, it mostly seems to, for years, unless someone on the inside doesn’t like it.
With this focus on ‘the rules’ – whether bending them à la Starsky and Hutch (and the rest), hand-wringing over said rule-bending, like the strait-laced half of many a mismatched partnership (classic examples: Judge Reinhold in 1984’s Beverly Hills Cop or Danny Glover in Lethal Weapon, another famous ‘unorthodox cop’ movie from the same year as Robocop), or disregarding them altogether like Clint Eastwood’s Dirty Harry – it’s no surprise that the training of the police should become the focus of at least one story. Which brings us to Police Academy.
Obviously any serious claim one makes for Police Academy is a claim too far – it’s not, nor was it supposed to be, a serious film, or even possibly a good one, and certainly not one with much of a serious message. But its theme is a time-honoured one, going back to the medieval Feast of Fools and even further to the Roman festival of Saturnalia: the world upside down, Lords of Misrule… And in honouring this tradition, it tells us a lot about the age that spawned it. Police Academy purports to represent the opposite of the approved behaviour of the police in 1984 and yet, despite its (not entirely unfounded) reputation for sexism and crass stereotypes, it remains largely watchable where many similar films do not, while also feeling significantly less reactionary than, say, the previous year’s Dirty Harry opus, Sudden Impact.
While a trivial piece of fluff, Police Academy is notable for – unlike many more enlightened films before and since – passing the Bechdel test (don’t expect anything too deep, though; and not just from the female characters), as well as having noticeably more diversity among its ensemble cast than the Caddyshack/National Lampoon type of films it owed its comedy DNA to. Three prominent African-American characters with more than cameo roles in a mainstream Hollywood movie may not seem like much – and indeed it definitely isn’t – but in an era when the idea for a film where a rich white kid finds the easiest way to get into college is by disguising himself as a black kid not only got picked up by a studio, but actually made it to the screen, it feels almost radical. Those three actors – Marion Ramsey, Michael Winslow and the late Bubba Smith – could look back on a series of movies which may not have been* cinematic masterpieces, but which allowed them to use their formidable comedic talents in a non-token way, without their race being either overlooked (they are definitely Black characters rather than just Black actors playing indeterminate characters) or portrayed in a negative sense. It’s not an enlightened franchise by any means; the whole series essentially runs on stereotypes and bad taste and therefore has the capacity to offend, and although there are almost certainly racial slurs to be found there, alongside gross sexism, homophobia etc, the series is so determined to make fun of every possible point of view that it ends up leaving a far less bad smell behind it than many of its peers; definitely including the aforementioned (or at least alluded to) Soul Man (1986). *ie they definitely aren’t
Despite its good nature though, there is a mild kind of subversion to be found in the Police Academy films. With the Dickensian, broadly-drawn characters comes a mildly rebellious agenda (laughing at authority), but the series also subverts, in a more subtle (and therefore unintentional? who knows) way, the established pattern of how the police were depicted. Yes, they are a gang, and as such stupid and corrupt and vicious and inept, just like the police of Easy Rider, Smokey and the Bandit, The Dukes of Hazzard, et al – but unlike all of those things, Police Academy offers a solution in line with its dorky, good-natured approach; if you don’t want the police to suck, it implies, what you need to do is to recruit people who in the 80s were not considered traditional ‘police material’ – ethnic minorities, women, smartasses, nerds (and at least one dangerous gun-worshipper, albeit one with a sense of right and wrong). So ultimately, like its spiritual ancestors, Saturnalia and the Feast of Fools, Police Academy is more like the safety valve that ensures the survival of the status quo than the wrecking ball that ushers in a new society. Indeed, as with Dickens and his poorhouses and brutal mill owners, the message is not – as you might justifiably expect – ‘we need urgent reform’, but ‘people should be nicer’. Hard to argue with, as far as it goes, but as always seems to be the case*, the police get off lightly in the end.
*there is one brutal exception to this rule, the 1982 Cannon & Ball vehicle The Boys In Blue; after sitting through an impossibly long hour and a half of Tommy and Bobby, the average viewer will want not only to dismantle the police force, but the entire western culture that produced it.
On the rare occasions that anyone asks me anything about my writing, it’s usually about music reviews. The consensus seems to be that a good review (I don’t mean a positive one) should either ‘listen to the music and say if it’s good or bad’, or ‘listen to the music and describe it so that other people can decide whether it’s their cup of tea, but keep your opinion out of it’. As it happens, I’ve given this subject a lot of thought, not only because I write a lot of reviews, but also because I read a lot of reviews, and some of my favourite writers (Charles Shaar Murray is the classic example) manage to make me enjoy reading about music even when it’s music that I either already know I don’t like, or can be fairly certain from reading about it that I won’t like. Because reading a good article about music is first and foremost ‘reading a good article’.
Anyway, over the course of pondering music reviews I have come to several (possibly erroneous) conclusions:
* “star ratings” HAVE TO BE relative, and not all stars have the same value. For instance, one might give a lesser album by a great artist 3 stars, but those are not the same 3 stars one would give a surprisingly okay album by a generally crappy artist.
* Musical taste is, as everyone knows, entirely subjective, but reviewing (for me at least) has to try to be a balance between the objective and the subjective; just listening to something and saying what you think of it is also valid, of course.
* Objective factors alone (see fun pie chart below) can never make an otherwise bad album good, but subjective factors can.
* ‘Classic’ albums make a nonsense of all other rules.
Let’s examine in more detail, with graphs! (are pie charts graphs?):
Objective factors:
Objective factors (see fun pie chart) are really only very important when the reviewer doesn’t like the music: when you love a song, whether or not the people performing it are technically talented musicians/pitch perfect singers etc is entirely irrelevant.
But, when an album or song (or movie, book etc) is dull or just blatantly abysmal, some comfort (or conversely, some outrage and annoyance) can be gained from the knowledge that at least the participants were good at the technical aspects of what they were doing, even if they are ultimately using those skills for evil.
Subjective Factors:
Although there are many subjective factors that may be relevant – nostalgia for the artist/period, personal associations – all of these really amount to either you like it or you don’t; simple, but not necessarily straightforward.
The positive subjective feeling ‘I like it!’ can override all else, so that an album which is badly played, unoriginal, poorly recorded and awful even by the artist’s own standards can receive a favourable review (though the reviewer will hopefully want to point out those things)
Meanwhile the negative subjective feeling ‘I don’t like it’ can’t help but affect a review, but should hopefully be tempered by technical concerns if (an important point) the reviewer feels like being charitable. They may not.
Ideally, to me a review should be something like 50% objective / 50% subjective (as in the examples somewhere below) but in practice it rarely happens.
“Classic” status:
The reviewing of reissued classics can be awkward, as ‘classic’ status in a sense negates reviewing altogether; it is completely separate from all other concerns, and therefore said classic status can affect ratings just because the album is iconic and everyone knows it. Reviews of new editions of acknowledged classics usually become either a review of what’s new (remastered sound, extra tracks etc) or a debunking of the classic status itself; which, as far as I know, has never yet toppled a classic album from its pedestal.
Classic album status is normally determined by popularity as much as any critical factors, but popularity itself shouldn’t play a part in the reviewer’s verdict; just because 30,000,000 people are cloth-eared faeces-consumers, it doesn’t mean the reviewer should respect their opinion, but they should probably acknowledge it, even if incredulously. Sometimes or often, classic status is attained for cultural, rather than (or as well as) musical reasons*, and it should be remembered that albums (is this still true in 2020? I don’t know) are as much a ‘cultural artefact’ (in the sense of being a mirror and/or record of their times) as cinema, TV, magazines or any other zeitgeist-capturing phenomenon.
* in their very different ways, Sgt Pepper’s Lonely Hearts Club Band, Thriller and The Spice Girls’ Spice were all as much ‘cultural phenomena’ as collections of songs
SO ANYWAY; how does this all work? Some examples:
I once offended a Tina Turner fan with an ambivalent review of the 30th anniversary edition of Ms Turner’s 1984 opus Private Dancer.
As a breakdown (of ‘out of 10’s, for simplicity) it would look something like this:
TINA TURNER: PRIVATE DANCER (30TH ANNIVERSARY EDITION)
Objective factors
* musicianship – 9/10 – hard to fault the adaptability or technical skill of her band
* songwriting – 6/10 – in terms of catchy, verse-chorus-verse efficiency & memorableness these are perfectly good songs, if a bit cheesy & shallow & therefore a waste of Tina Turner
* production – 9/10 – no expense was spared in making the album sound good in its extremely shiny, 80s way
* originality – 0/10 – as an album designed to make TT into a successful 80s artist, it wasn’t really supposed to be original, so hard to actually fault it in that respect
* by the standards of the artist – 2/10 – in the 60s/70s Tina Turner made some great, emotionally forceful, musically adventurous records. In 1984 she didn’t.
Overall: 26/50 = 5.2/10
Subjective Factors
* I don’t like it: 1/10 (but not 0, because Tina Turner is a legend and it would be wrong to deny that somehow)
Overall 5.2/10 + 1/10 = 6.2/20 = 3.1/10 = 1.55/5 (round up rather than down, out of respect for Tina) = 2 stars
And in fact I did give the album two stars, though I didn’t actually do any of the calculations above; it’s pleasing to find that the instinctive two stars is justified by fake science.
By way of contrast, a favourite that seems to be an acquired taste at best:
VENUSIAN DEATH CELL: HONEY GIRL (2014)
Objective factors
* musicianship – 1/10 – David Vora’s guitar playing is not very good, plus the guitar is out of tune anyway, and his drumming is oddly rhythm-free
* songwriting – 2/10 – the songs on Honey Girl are not really songs; they may be improvised, and they don’t have actual tunes as such
* production – 0/10 – David pressed ‘record’ on his tape recorder
* originality – 10/10 – Vora doesn’t sound like anyone else, and his songs are mostly not about things other people sing about
* by the standards of the artist – 9/10 – I like all of Venusian Death Cell’s albums; they are mostly kind of interchangeable, but Honey Girl is one of the better ones (chosen here over the equally great Abandonned Race only because of the uncanny similarities between the cover art of Honey Girl and Private Dancer).
Overall: 22/50 = 4.4/10
Subjective Factors
* I like it: 9/10 (but not 10, because if encouraged too much David Vora might give up and rest on his laurels. Though if he did that I’d like to “curate” a box set of his works)
Overall 4.4/10 + 9/10 = 13.4/20 = 6.7/10 = 3.35/5 (round up rather than down, out of sheer fandom) = 4 stars
And in fact I did give Honey Girl four stars, but I’ve yet to hear of anyone else who likes it. Which is of course fuel for the reviewer’s elitist snobbery; win/win
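For anyone who wants to replicate the fake science at home, the whole procedure boils down to a few lines. This is just a sketch of the arithmetic used in the two breakdowns above (the function name is invented, and the round-up-rather-than-down rule is the charitable rounding applied out of respect/fandom):

```python
import math

def star_rating(objective_scores, subjective_score):
    """Five objective scores (each /10) plus one subjective score (/10).

    The objective total (/50) scales down to /10; adding the subjective
    /10 gives a total /20, which scales to /10 and then /5. Round up,
    out of respect (or sheer fandom), rather than down.
    """
    obj = sum(objective_scores) / 5           # e.g. 26/50 -> 5.2/10
    combined = (obj + subjective_score) / 2   # e.g. 6.2/20 -> 3.1/10
    return math.ceil(combined / 2)            # e.g. 1.55/5 -> 2 stars

# Private Dancer: objective 26/50, subjective 1/10
print(star_rating([9, 6, 9, 0, 2], 1))    # 2 stars
# Honey Girl: objective 22/50, subjective 9/10
print(star_rating([1, 2, 0, 10, 9], 9))   # 4 stars
```

Which confirms the instinctive verdicts: two stars for Private Dancer, four for Honey Girl.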
Star Ratings
I’ve used scoring systems above, but the writers I like best rarely use scores or ‘star ratings’. I don’t think anybody (artists least of all) really likes star ratings or scores, because they immediately cause problems; if, for instance, I give the Beach Boys’ Pet Sounds four stars (and the critical consensus says you have to; also, I do love it), then what do I give Wild Honey or Sunflower, two Beach Boys albums that are probably demonstrably ‘less good’, but which I still like more? But at the same time, I suppose scores are handy, especially for people who want to know if something is worth buying but don’t want an essay about it – and who trust the reviewer. The best ‘score’ system I’ve ever seen is in the early-2000s (but may still be going?) fanzine Kentucky Fried Afterbirth, in which the genius who writes the whole thing, Grey, gives albums ratings out of ten ‘cups of tea’ for how much they are or aren’t his cup of tea; this may be the fairest way of grading a subjective art form that there can possibly be.
Critical Consensus
I mentioned the critical consensus above, and there are times when music critics really do all seem to think the same thing, which is how come there’s so much crossover between books like 1001 Albums You Must Hear Before You Die (I always feel like there’s an implied threat in those titles) and The Top 100 Albums of the Sixties etc. I’m not sure exactly how this works, because like most people I know who love music, my favourite albums and songs aren’t always (or even usually) the most highly regarded ones. My favourite Beatles album isn’t the ‘best’ one (Revolver, seems to be the consensus now); Songs in the Key of Life is the Stevie Wonder album, but it’s probably my third or fourth favourite Stevie Wonder album; I agree that Bruce Dickinson is a metal icon, but I kind of prefer Iron Maiden with Paul Di’Anno (granted, Paul wouldn’t be as good as Bruce at things like Rime of the Ancient Mariner, but it’s less often mentioned that Bruce is definitely not as good at singing Wrathchild etc as Paul was). Much as I genuinely love The Velvet Underground and Nico, I genuinely love the critically un-acclaimed Loaded just as much; there are so many examples of this that the idea of an actual critical consensus that means anything seems like nonsense.
I’ve been writing music reviews for many years now, but my own involvement with ‘the consensus’ is rare, and the only solid example I can think of is a negative one. I thought – and I still think – that Land, the fourth album by Faroese progressive metal band Týr, is the best thing they’ve ever done. I gave it a good review, not realising that the critical tide was turning against the band, and, for whatever reason (fun to speculate, but lack of space is as likely as anything), my positive review never appeared in print. It wouldn’t have made any real difference to the band or to the album’s reception in general, but it did make me feel differently about albums that are notoriously bad (or good). Who is deciding these things? I’m a music critic and I’m not. And although I – like, I think, everyone – take reviews with a pinch of salt anyway (someone else liking something is a strange criterion for getting it, when you think about it), I have to admit that if I hadn’t had to listen to Land (which I still listen to every now & then, over a decade later), I wouldn’t have been in a hurry to check it out after reading again and again that it was dull and boring.
Throughout this whole article the elephant in the room is that, at this point, the whole system of reviewing is out of date. You can almost always just listen to pretty much anything for free and decide for yourself whether you like it, rather than acting on someone else’s opinion of it. But in a way that makes the writing more important; again, like most people, I often check things out and stop listening at the intro, or halfway through the first song, if I just don’t like it – except when I’m reviewing. Reviewers have to listen to the whole thing; they have to think about it and say something relevant or contextual or entertaining.* And if the reviewer is a good writer (Lester Bangs is the most famous example, though I prefer Jon Savage or the aforementioned CSM and various nowadays people), their thoughts will entertain you even if the music ultimately doesn’t.
*worth a footnote as an exception which proves the rule is a notorious Charles Shaar Murray one-word review for the Lee Hazlewood album Poet, Fool or Bum: “Bum.”
There are relatively few times in life when it’s possible to switch off your mind and enter a trance-like state without going out of your way to do so; but sitting in a classroom for a period (or better yet, a double period) of whatever subject it is that engages you least is one of those times. When the conditions are right – a sleepy winter afternoon in an overly warm room maybe, with darkness and heavy rain or snow outside and the classroom lights yellow and warm, the smell of damp coats hung over radiators and a particularly boring teacher – the effect can be very little short of hypnotic. The subject will be a matter of taste, for me the obvious one I detested was Maths, but I think that something like Geography or ‘Modern Studies’ (strangely vague subject name), where I wasn’t concerned so much with not understanding and/or hating it, would be the optimum ‘trance class’.
There’s nothing like school for making you examine the apparently stable nature of time; if, as logic (and the clock) states, the 60 or so minutes of hearing about ‘scarp-and-vale topography’ really was about the same length of time as our always-too-short lunch hour, or even as was spent running around the rugby pitch, then clearly logic isn’t everything, as far as the perception of human experience is concerned.
But it would not be true to say that I did nothing during these long, barren stretches of unleavened non-learning. Mostly, I doodled on my school books. Sometimes this was a conscious act, like the altering of maps with tippex to create fun new supercontinents, or the inevitable (in fact, almost ritualistic, after 7 years of Primary school) amending of the fire safety rules that were printed on the back of every jotter produced by The Fife Regional Council Education Committee. Often these were just nonsensical, but even so, favourite patterns emerged. I had a soft spot for “ire! ire! ire! anger! anger! anger!” (in the interests of transparency I should probably point out that I was almost certainly unaware at the time that ire means anger), and the more abstract “fir! fir fir! Dang! Dang! Dang!” (see?), but some things like ‘Remember Eire hunts – Eire kills’ were fairly universal. But also, there was the whiling (or willing) away of time by just doodling, in margins, on covers, or if the books didn’t have to be handed in at the end of the class, just anywhere; band logos and Eddies* and cartoon characters. Later, towards the end of my high school career, there’s a particularly detailed and baroque drawing of a train going over a bridge (something I wouldn’t have had much patience for drawing in an actual art class) which immediately summons up the vivid memory of a particularly long Geography class, and even which pen – a fine felt tip I liked but couldn’t write neatly with** – that I drew it with.
*Eddie = ‘Eddie the head’, Iron Maiden’s beloved zombie mascot, created – and painted best – by Derek Riggs
**i.e. ‘I wrote even less neatly than usual with’
If I could recall the things I was supposed to learn in classes this well I would have done much better at school. But the point of doodling is that it’s whatever it is your hand draws when your brain isn’t engaged; or, as André Breton put it, drawings that are ‘dictated by thought, in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.’*
This is in fact from his definition of what surrealism is; ‘psychic automatism in its pure state’ and later, in The Automatic Message (1933) Breton went further, influenced by his reading of Freud, specifically referencing what would later become known as art brut or ‘outsider art’ – drawings by the mentally ill, visionaries, mediums and children – as ‘surrealist automatism’. Although it might seem to – well, it definitely does – give too much dignity and importance to the time-wasting scrawls of teenagers to consider them anything but ephemeral, the strange faces, swords, cubes, eyes, tornadoes and goats that littered my school books aged 12-14 or so do seem to preserve, not just the kind of pantheon almost every child/teenager has – made up of favourite bands, TV shows, cartoon characters etc – but a kind of landscape of enigmatic symbolism that comes from who-knows-where and perhaps represents nothing more than the imagination crying for help from the heart of a particularly stimulus-free desert. But in the end, that’s still something.
*André Breton, Manifesto of Surrealism 1924, published in Manifestoes of Surrealism, Ann Arbor paperbacks, tr. Richard Seaver and Helen R. Lane, 1972, p.26
I suppose I should warn people: this is pretty much all spoilers.
Television has always had one big advantage over cinema – time – which should really make it the better medium for drama. After all, the novel is almost always superior to the short story for depth, breadth, detail, plot and character development; and yet, there are more of all of those things in, say, the three hours of Scorsese’s Goodfellas than in 60+ years of Coronation Street. What happens in fact – even in shows that only last a few seasons – is more often stagnation, repetition, a growing sense of desperately trying to fight for ratings by increased sensationalism or controversy. But despite the smartass and I’m sure unoriginal title here (I intentionally haven’t checked), I don’t think television needs to be revolutionised, it just needs to act as though its virtues – especially the time and intimacy it has – are virtues, and not try to import the features of a Hollywood blockbuster into a more modestly sized format. But there is one thing that TV could and should learn from cinema; the satisfying (all different kinds of satisfying) ending that is mostly mandatory in film and in most cases isn’t just a tacked-on afterthought.
TV advertising as movie posters; Stranger Things embodying its 80s setting, Dark its disorienting fractured quality
I first saw mention of Dark online just after season one had launched, where it was described as a kind of German Stranger Things. The two shows are almost entirely unalike, but the comparison is a natural one; both belong to the world of the Netflix blockbuster, both are somewhere in the sci-fi/horror genre, both feature young protagonists, both are set (in the case of Dark, only partly) in the 80s. And both seem to owe something to successful movies, but the contrast here is a significant one; Stranger Things (especially in its opening, best season) owes a lot to JJ Abrams’s nostalgic, fun, Spielberg-esque Super 8 (2011), an end-of-the-70s-set movie that is in equal measures a sci-fi adventure movie and a rites of passage film about teenagers and friendship, ET-meets-Stand By Me. Super 8 is essentially a story about young teens trying to find their place in a world/universe that is bigger and scarier than they realised and discovering along the way that ‘the authorities’ aren’t to be trusted and that their parents are really just as in the dark about everything as the kids are themselves. And a space monster. It succeeds because it’s slick and well made and has a lot of heart, but also – especially – because the young cast were great; Stranger Things season one mirrored almost all of those things too.
But there is – thankfully, so far – no sequel to Super 8. In borrowing so heavily from highly cinematic sources, Stranger Things also borrowed the structure – including the big finale – of a Hollywood blockbuster. But like many of those, because it was successful it demanded a sequel that was in no way implied by the original story. So what you had instead was a fairly enjoyable season two, with even more sense of “the 80s” – not the actual 80s experienced by people who were alive then, but endless, not always concurrent pop-cultural references that in the end made it feel as weirdly dislocating as the 60s of a TV show like Heartbeat, where Elvis Presley, the twist, hippies and the summer of love all seem to be happening at the same time. The story of season two did, though, have the authentic-in-a-way feel of an 80s horror movie sequel – a fun but slightly unsatisfactory Freddy’s Revenge, we-made-a-lot-of-money-last-time, what-can-we-do-now type of sequel. And then season three was the inevitable diminishing-returns sequel, only now it didn’t even pretend to be the actual 80s at all, just the 80s that people who have seen cheesy Hollywood movies would experience, where Soviet Russians really were the almost robot-like villains of Rocky IV or Red Dawn. I feel like younger people might want to know that this was American paranoia/propaganda, rather than historical fact. Although I’m sure there really were Soviet spy stations (with people wearing actual military uniforms!) hidden under malls all over the US. This was a disappointingly stupid show and also – inevitably – suffered from the kind of awkwardness that always happens with casts of children as time passes, an issue that dogged the Our Gang and Bowery Boys franchises from the 1930s onwards.
Imagine what it might have been like if they’d made a Goonies sequel a couple of years later with teenage Goonies instead of children – the pre/early teens are very different, friendship-wise from what comes later, and although there’s a lot of bittersweet drama to be found in that, Stranger Things was barely concerned with it at all. But it was successful, so there will be more of it.
This is the downfall of blockbuster TV; whereas movie franchises limp to their inevitable demise, becoming weaker and weaker carbon copies of what went before, TV dramas (and sitcoms too, if they go on too long) devolve into soap operas, concerned more with the relationships between the protagonists instead of putting those characters into meaningful stories. And then, when the viewing figures fall, they get cancelled. Stranger Things 4 may be great – I hope it is – but it might also be a lot of squabbling teenagers in what should probably be the 90s by now but which may be marked – appropriately I guess – by references to Ghostbusters 2, Back To The Future 2 (or Friday the 13th Part 7 and A Nightmare on Elm Street 5), hair metal and whatever commercials, candy and hairstyles the producers think shout ‘late 80s’ most loudly. It would be nice though to have a bit of imagination and a proper ending. In TV terms I’d say it’s far better to have an end in sight and be missed when you go than to be cancelled and remembered as something that was once good but got milked to death; but that’s just me maybe.
Meanwhile Dark felt cinematic too, but in a very different way. Whereas Stranger Things seemed to have its genesis in Super 8, Dark seems to owe some of its ideas and a lot of its atmosphere to Richard Kelly’s Donnie Darko (2001), a very different 80s-set film in which a troubled teenager is caught in a series of strange events caused by a loop in time which must be undone in order to restore equilibrium to his/the world; but at a tragic cost. The basic themes of Donnie Darko are not really a million miles removed from those of Super 8, but whereas that movie’s protagonists are in the awkward, bittersweet children-into-teens phase, discovering the boundaries of their childhood friendships and the awakening of sexual desire etc., Donnie is a depressed, disillusioned but still idealistic 17-year-old, looking for answers to the big questions of life and death but finding that – like the Super 8 kids – no-one, however much authority they seem to have, really knows any more than he does. And it’s also about time travel.
What Dark did (I write this assuming they won’t spoil it with a 4th season) is what TV drama so rarely does, but which cinema almost always does – it had a sense of overall structure, an ending in mind even as it began (more than that, it’s one of the major themes running through the show itself). Unlike with Stranger Things, seasons two and three of Dark were not only implied by the events of season one, they had to happen to bring the story to any kind of satisfactory close. One of the strengths of Stranger Things is that if it had been cancelled after the first season it would have been just as good; but Dark would have been incredibly frustrating. This is quite a fundamental difference; when the plot of a (drama) show becomes secondary to the characters it can absolutely still be great, it’s just that, while it remains popular enough to justify making it, it has no real need to be any good, like the aforementioned Friday the 13ths.
On the other hand, a strength (and I guess from the financial point of view, a weakness) of Dark is that, as it stands now, the show can only be continued by ruining it and undoing the perfectly formed story that was told. That story (as implied from the beginning but explicitly mentioned from season two onwards) was an increasingly complicated knot (the moment where one character was revealed to be her own grandmother and therefore her own granddaughter was perhaps the pinnacle of the show’s brain-hurting complexity) and, in the end, Alexander the Great-like, the writers simply cut through it. But although that sounds disappointing – and initially, the final season felt like a sidestep rather than a continuation – it ultimately made total sense and explained every bizarre and apparently illogical detail of what had come before it, as well as reinforcing the significance of background details that were there from the very beginning of the show, such as the strange trefoil symbol that appeared on the doors to the time portals.
But although I’ve stressed the importance of the plot, where Dark really utilises the virtues of television over film is in the time it spends developing a whole set of characters, at various stages of their lives, in ways that make them feel real and believable. Some of the show’s initially least likeable secondary characters, such as the local policeman Egon Tiedemann, in the end become tragic figures, not because of anything especially dramatic (though lots of dramatic things happen to them) but just because we see them, young, middle-aged and old, repeating their mistakes, invariably making the wrong decisions and never really coming to grips with their own lives before they are over. It also makes us re-evaluate the villains as well as the heroes (sometimes there is no difference between the two). At the beginning of season one it’s immediately obvious that the apparently itinerant preacher Noah is a (slightly cheesy) villain. By the end of season three it turns out he wasn’t any kind of evil mastermind but was no better off than anyone else, a tragic, literally misconceived figure, trapped in circumstances beyond his control, doing horrible things in apparently good faith, to no avail whatsoever.
The representation of the same characters in different time periods is occasionally done in cinema – Richard Linklater and Martin Scorsese spring to mind – but it comes far more naturally to television, with its ability to really stretch out; and yet it hardly ever happens. Soap operas can run literally for decades, with actors ageing in real time, and yet never lose the feeling of utter triviality that separates them from great drama; perhaps because although the characters inevitably end, the show trundles on; like life, arguably, but I’m not going to pursue that metaphor. It’s no coincidence that most soaps (in the UK at least) are named after their location, the one immutable element in the show.
The fact that – as in Donnie Darko – the ‘happy ending’ of Dark involves the death (or in this case the non-existence) of characters whom the viewer has come to like, love, identify with, empathise with etc. – and yet still feels like the right ending – is testament to the skill of the makers of the show. And more importantly – and here it goes beyond Donnie Darko – the final reveal of the origin of the temporal anomaly surrounding the town of Winden was right. Not some random occurrence like the aeroplane engine that ‘should’ have killed Donnie, but an event that logically implies all that follows and explains some of its more enigmatic characters (not least her-own-grandmother-and-granddaughter Charlotte). Written down, the basic theme sounds a bit trite – trying to change the past can destroy the present and future – but onscreen, with well-drawn and (very) well-acted characters, the idea (kind of like in Stephen King’s Pet Sematary) that in trying to bring back the dead you can awaken other things is both powerful and emotionally engaging.
All of which is a very long way around to say that television doesn’t need to be revolutionised, it just needs to be seen for its own virtues and not as a kind of surrogate cinema. Hopefully the makers of Stranger Things get it right next time.
Between the ages of 14 and 16 or thereabouts, the things I probably loved the most – or at least the most consistently – were horror (books and movies) and heavy metal.
These loves changed (and ended, for a long time) at around the same time as each other, in a way that I’m sure is typical of adolescence, but which also seemed to reflect bigger changes in the world. Reading this excellent article that references the end of the 80s horror boom made me think: are these apparent beginnings and endings really mainly internal ones that we only perceive as seismic shifts because of how they relate to us? After all, Stephen King, Clive Barker, James Herbert & co continued to have extremely successful careers after I stopped buying their books, and it’s not like horror movies or heavy metal ground to a halt either. But still; looking back, the turn of the 80s to the 90s still feels like a change of era and of culture in a way that not every decade does (unless you’re a teenager when it happens, perhaps?). But why should 1989/90 be any more different than, say, 85/86? Although time is ‘organised’ in what feels like an arbitrary manner (the time it takes the earth to travel around the sun is something which I don’t think many of us experience instinctively or empirically, as we do with night and day), decades do seem to develop their own identifiable ‘personalities’ somehow, or perhaps we simply sort/filter our memories of the period until they do so.
“The 80s” is a thing that means many different things to different people; but in the western world its iconography and soundtrack have been agreed on and packaged in a way that, if it doesn’t necessarily reflect your own experience, it at least feels familiar if you were there. What the 2010s will look like to posterity is hard to say; but the 2020s seem to have established themselves as something different almost from the start; whether they will end up as homogeneous to future generations as the 1920s seem to us now is impossible to say at this point; based on 2020 so far, hopefully not.
I sometimes feel like my adolescence began at around the age of 11 and ended some time around 25, but still, my taste in music, books, films etc went through a major change in the second half of my teens which was surely not coincidental. But even trying to look at it objectively, it really does seem like everything else was changing too. From the point of view of a teenager, the 80s came to a close in a way that few decades since have done; in world terms, the cold war – something that had always been in the background for my generation – came to an end. Though that was undoubtedly a euphoric moment, 80s pop culture – which had helped to define what ‘the west’ meant during the latter period of that war – seemed simultaneously to be running out of steam.
My generation grew up with a background of brainless action movies starring people like Arnold Schwarzenegger and Sylvester Stallone, who suddenly seemed laughable and obsolete, and teen comedies starring ‘teens’ like Andrew McCarthy and Robert Downey Jr, who were now uneasily in their 20s. Old-fashioned ‘family entertainment’ like Little & Large and Cannon & Ball was, on TV at least, in its dying throes; but then so was the ‘alternative comedy’ boom initiated by The Young Ones, as its stars became the new mainstream. The era-defining franchises we had grown up with – Star Wars, Indiana Jones, Ghostbusters, Back to the Future, Police Academy – seemed to be either finished or on their last legs. Comics were (it seemed) suddenly¹ semi-respectable and re-branded as graphic novels, even if many of the comics themselves remained the same old pulpy nonsense in new, often painted covers. The international success of Katsuhiro Otomo’s Akira in 1988 opened the gates for the manga and anime that would become part of international pop culture from the 90s onwards.
Those aforementioned things I loved the most in the late 80s, aged 14-15 – horror fiction and heavy metal music – were changing too. The age of the blockbuster horror novel wasn’t quite over, but its key figures – Stephen King, James Herbert, Clive Barker², Shaun Hutson – all seemed to be losing interest in the straightforward horror-as-horror novel³, diversifying into more fantastical or subtle, atmospheric or ironic kinds of stories. In movies too, the classic 80s Nightmare on Elm Street and Friday the 13th franchises – as definitively 80s as anything else the decade produced – began to flag in terms of both creativity and popularity. Somewhere between these two models of evolution and stagnation were the metal bands I liked best. These seemed either to be going through a particularly dull patch, with personnel issues (Iron Maiden, Anthrax), or morphing into something softer (Metallica) or funkier (Suicidal Tendencies). As with the influence of Clive Barker in horror, so bands who were only partly connected with metal (Faith No More, Red Hot Chili Peppers) began to shape the genre. All of which occurred as I began to be obsessed with music that had nothing to do with metal at all, whether contemporary (Pixies, Ride, Lush, the Stone Roses, Happy Mondays, Jesus Jones – jesus, the Shamen etc) or older (The Smiths, Jesus and Mary Chain, The Doors⁴, the Velvet Underground).
Still; not many people are into the same things at 18 as they were at 14; and it’s tempting to think that my feelings about the end of the decade had more to do with my age than the times themselves; but they were indeed a-changing, and a certain aspect of the new decade is reflected in editor Peter K. Hogan’s ‘Outro’ to the debut issue of the somewhat psychedelically-inclined comic Revolver (published July 1990):
Why Revolver?
Because what goes around comes around, and looking out my window it appears to be 1966 again (which means – with any luck – we should be in for a couple of good years ahead of us). Because maybe – just maybe – comics might now occupy the slot that rock music used to. Because everything is cyclical and nothing lasts forever (goodbye, Maggie). Because the 90s are the 60s upside down (and let’s do it right, this time). Because love is all and love is everything and this is not dying. Any more stupid questions?
This euphoric vision of the 90s was understandable (when Margaret Thatcher finally resigned in 1990 there was a generation of by now young adults who couldn’t remember any other Prime Minister) but it aged quickly. The ambiguity of the statement ‘the 90s are the 60s upside down’ is embodied in that disclaimer (and let’s do it right, this time) and turned out to be prophetic; within a month of the publication of Revolver issue 1, the Gulf War had begun. Aspects of that lost version of the 90s lived on in rave culture, just as aspects of the summer of love lived on through the 70s in the work of Hawkwind and Gong, but to posterity the 90s definitely did not end up being the 60s vol.2. In the end, like the 80s, the 90s (like every decade?) are defined, depending on your age and point of view, by a series of apparently incompatible things: rave and grunge, Jurassic Park and Trainspotting, Riot Grrrl and the Spice Girls, New Labour and Saddam Hussein.
That tiny oasis of positivity in 1990 – between the Poll Tax Riots on 31st March and the outbreak of the first Gulf War on the 2nd August – is, looking back, even shorter than I remember, and some of the things I loved in that strange interregnum between adolescence and adulthood (which lasted much longer than those few months) – perhaps because they seemed grown up then – are in some ways more remote now than childhood itself. So… conclusions? I don’t know; the times change as we change, and they change us as we change them; a bit too Revolver, a lot too neat. And just as we are something other than the sum of our parents, there’s some part of us too that seems to be independent of the times we happen to exist in. I’ll leave the last words to me, aged 18, not entirely basking in the spirit of peace and love that seemed to be ushered in by the new decade.
¹ in reality this was the result of a decade of quiet progress led by writers like Alan Moore, Neil Gaiman and Frank Miller
² although 100% part of the 80s horror boom, Barker is perhaps more responsible than any other writer for the end of its pure horror phase
³ Stephen King’s Dark Tower series, though dating from earlier in the 80s, appeared in print with much fanfare in the UK in the late 80s and, along with the more sci-fi-inflected The Tommyknockers and the somewhat postmodern The Dark Half, seemed to signal a move away from big, cinematic horror novels like Pet Sematary, Christine, Cujo et al. In fact, looking at his bibliography, there really doesn’t appear to be the big shift around the turn of the 90s that I remember, except that a couple of his new books around that time (Dark Tower III, Needful Things, Gerald’s Game) for one reason or another didn’t have half the impact that It had on me (that’s probably the age thing). James Herbert, more clearly, abandoned the explicit gore of his earlier work for the more or less traditional ghost story Haunted (1988) and the semi-comic horror/thriller Creed (1990) – a misleadingly portentous title which always makes me think of that Peanuts cartoon where Snoopy types This is a story about Greed. Joe Greed lived in a small town in Colorado… Clive Barker, who had already diverged into dark fantasy with Weaveworld, veered further away from straightforward horror with The Great & Secret Show, while reliably fun goremeister Shaun Hutson published the genuinely dark Nemesis, a book with little of the black humour – and only a fraction of the bodycount – of his earlier work.
⁴ the release of Oliver Stone’s The Doors in 1991 is as 90s as the 50s of La Bamba (1987) and Great Balls of Fire (1989) was 80s. Quite a statement.