Judging the Past: Can We Really Afford Not To?

University of Edinburgh historian Donald Bloxham has provided much food for thought in his recent article for the March edition of BBC History Magazine, entitled ‘Why History Must Take a Stance’. In it, he challenges the dogmatic insistence on neutrality that pervades the historical profession. Instead of feigning an unattainable neutrality, he argues, historians should take ownership of the judgements they make and the moral ‘prompts’ that they provide to their readers. Proclaiming neutrality is misleading, and possibly dangerous. I am inclined to agree.

Whilst neutrality is an honourable and necessary ambition for any historian, it is an ideal, and it is folly to suppose otherwise. No morally conscious human being can honestly claim to provide a totally neutral account of British imperialism, for instance. We tell a story in the way that we want to tell it, and there is a plethora of ways of telling that story, all of which have moral implications in the present. Language, as Bloxham observes, is a key factor. Can a historian who writes about the ‘exploitation’ and ‘subjugation’ of millions of human beings as a result of the Atlantic slave trade truly claim that they are providing a ‘neutral’ impression to their reader? These words carry weight, and rightly so. To talk about the past in totally neutral terms is not only impossible, but also heartless. The stories of the people whose lives were torn apart by past injustices deserve to be told, not only out of respect or disengaged interest but because they bear lessons that exert a tangible and morally didactic hold over us in the present.

The Lady of Justice statue outside the Old Bailey. (Credit: Into the Blue)

That is not to say that historical writing should take the form of a moral invective, lambasting the behaviour of dead people whom we can no longer hold to account. Nor is it to argue that historical relativism is not a vitally important and foundational principle of the profession. What I am proposing, however, is that when Richard J. Evans claims, in his otherwise brilliant ‘In Defence of History’, that we should refute E.H. Carr’s argument – that the human cost of collectivisation in the USSR was a necessary evil – in the ‘historian’s way’, by undermining its ‘historical validity’, he seems to be suggesting that we are not doing so with a moral purpose in mind. Indeed, suggesting that the costs outweighed the benefits is itself a moral judgement, for is it not judging the value of people’s lives? Whilst Evans claims that it is the reader who must infer this conclusion, not the historian, his economic argument (that collectivisation was no more successful than the policies that preceded it) is surely intended to ‘prompt’ it.

Evans, like most people, clearly opposes the morality of Carr’s argument, and his way of communicating this is in the (highly effective) ‘historian’s way’. But his purpose nonetheless is to influence the opinion of his readers, not simply to fulfil the role of historical automaton, providing those readers with every fact under the sun. The process of omission and admission is one that, try as we may to temper it, will always involve some degree of value judgement about which facts matter for the purpose of our argument and which do not. Such a value judgement will inevitably, at times, operate on a moral criterion.

This debate may, as is often the case with those that take historiography as their subject, appear somewhat academic. In a world in which our history does so much to define the identities of (and relations between) ethnic, social, cultural and political groups, however, it is anything but. What we can call the ‘neutrality complex’ runs the risk of imbuing the historical profession and its practitioners with a sense of intellectual superiority, forgetting the political consequences of its output. One can find little fault in Bloxham’s assertion that certain histories carry less moral weight, and are therefore more conducive to neutral assessment, but subjects with as much emotional resonance as the history of slavery, the Holocaust or Mao’s Great Famine cannot but be judgemental in nature. 

‘Neutrality’ can be a mask for the covert projection of nefarious ideologies and interpretations. Presenting something simply as ‘fact’ is irresponsible and shows great ignorance of the moral dispositions that influence what we write and how we write it. There is space and need for some degree, however tentative, of self-acknowledged judgement in historical writing. We owe it to our audience to declare our judgement and to justify it. The crimes of imperialism, genocide and slavery are universally evil. The historian has a concern and a duty to show their audience why those that claim otherwise, who hyperinflate relativism and claim neutrality, are guilty both of intellectual hubris and moral cowardice.

Samuel Lake, History in Politics Writer

The Environment Has No Ideology: Debating Which System Works Best is Inherently Flawed

It is often assumed that we in the ‘West’ are the arbiters of environmental policy, that we simply ‘care more’ than the rest of the world. ‘China’, for many, evokes images of flat-pack cities and rapid industrialisation synonymous with the stain left by humanity on the natural world. It is lazily viewed as an outlying hindrance to the global goal of sustainable development, whilst we remain wilfully ignorant of our own shortcomings, both past and present. Instead of viewing Chinese environmental negligence as unique, I argue, within the lingering paradigm of the ‘capitalist good/communist bad’ dichotomy, that a more bipartisan assessment of the root cause of environmental degradation may be in order. Our planet, after all, cares little for politics.

Many of China’s environmental failures have historically been attributed to the communist policies of the ruling party, particularly under Mao, whose ‘ren ding shen jian’, or ‘man must conquer nature’ slogan has been presented by the historian Judith Shapiro as evidence of the Communist Party’s desire to dominate the natural world, even at the expense of its own people and environment. Of course, there is merit to this argument – the collectivisation of land and the Great Leap Forward’s unattainable targets wreaked havoc on the land and contributed in no small part to what Frank Dikötter has termed ‘Mao’s Great Famine’, which is estimated to have killed up to 45 million people between 1958 and 1962. It can be easy, therefore, for us to assume that this environmental exploitation is one peculiar to China’s communist system of government.

A factory in China by the Yangtze River, 2008. (Credit: Wikimedia Commons)

Without excusing the undoubtedly detrimental and inhumane policies of Mao’s government, we should view the environmental impact of the Chinese state’s rapid development in a more contextual manner. After all, did not the rampant capitalism of the Industrial Revolution in the United Kingdom lead to the explosion of soot-filled cities like Manchester, Liverpool and Birmingham, all of which were centres of heightened industrial activity that harmed both their human populations and the surrounding environment? London’s death rate rose 40% during a period of smog in December 1873, and similarly, we can look to the Great Smog of 1952, which the Met Office claims killed at least 4,000 people, possibly many more.

Industrial potteries in North Staffordshire during the nineteenth century. (Credit: StokeonTrent Live)

Geographically closer to China, the Japanese state has also shown in recent years that pointing to ideology might be mistaken. The post-war Japanese growth-first and laissez-faire mentality left the likes of the Chisso Corporation in Minamata to their own devices, and the results were devastating. From 1956 through to the 1970s, first cats, then human residents of Minamata began coming down with a mysterious illness, one that caused ataxia and paralysis in its victims. It would transpire that what came to be known as ‘Minamata disease’ was the result of Chisso’s chemical plant releasing methylmercury into the town’s bay. This was absorbed by algae and passed up the food chain through the fish that local residents (both human and feline) were regularly consuming. The government’s silence was deafening: despite the cause being known since 1959, change only came after it was forced by non-capitalist union pressure in the 1970s. If this seems like a problem confined to the past, one need only cast their mind back to the Fukushima disaster in 2011, ultimately the result of the irresponsible decision to pursue a nuclear energy policy on the disaster-prone Pacific Ring of Fire.

This article does not wish to make the case for either the capitalist or communist system’s superiority in environmental affairs. Rather, it should be clear that the common thread running through all of these disasters – from the Great Smog to the Great Famine and Fukushima – is that a policy emphasising economic growth as the paramount standard of success is a dangerous one that will inevitably lead to environmental destruction. The style and severity of that destruction may be influenced by ideology, but if we are to live in harmony with our environment, we must be willing to abandon the ideals of gain (collective or individual) and competition that have placed us in our current quandary, whatever the tint of our political stripes.

Samuel Lake, History in Politics Writer

Is It Time For An Elected Head of State?

Democracy and equality under the law have increasingly come to be seen as the gold standard for structuring societies ever since the Enlightenment. It may therefore appear odd to some that the United Kingdom, the ‘mother of parliamentary democracy’, is still reigned over by a monarchy. Stranger still is that despite the drastic decline in the number of monarchies worldwide since the start of the 20th century, the British monarchy continues to sit at the heart of a proudly democratic nation and continues to enjoy significant popular support amongst the general public. Perhaps this will change with the passing of our current and longest-serving monarch, Queen Elizabeth II; perhaps the royal family will lose its purpose; or perhaps it will continue to hold steadfast as it has done in the face of major social transformations. But while there may be calls for the monarchy to be replaced by an elected head of state, we should ask ourselves what the monarchy both means to us and offers us, domestically and internationally, before we rush to any conclusions.

Queen Elizabeth II and Prince Philip. (Credit: AY COLLINS – WPA POOL/GETTY IMAGES)

While certainly debatable, I would contend that in its history, structures, and character, Britain is fundamentally a conservative nation. Not conservative in the sense that it strictly aligns with today’s Conservative party, but more in the sense of Burke or Oakeshott; we sacrifice democratic purity on the altar of an electoral system that is more inclined to produce stable and commanding governments; we still retain a strong support for the principle of national sovereignty in a world of increasing interdependence and cooperation; we take pride in our institutions, such as parliamentary democracy and our common law; and as evidenced by our addiction to tea, we value tradition. So is it really surprising that monarchy, the oldest form of government in the United Kingdom, still not only exists but enjoys significant public support? 

The monarchy is intended as a symbol of national identity, unity, pride and duty, and serves, according to its own website, to provide a sense of stability and continuity across the lifespan of the nation. Its whole existence is rooted in the conservative disposition towards tradition, historical continuity, and the notion of collective wisdom across the ages that should not be readily discarded by those in the present. The monarchy is also politically impartial, and so able to provide that sense of unity as a symbol that cuts across factional lines. Finally, the royal family is no longer necessarily an obstacle to democracy; we have a constitutional monarchy, whereby politicians make the decisions without arbitrary sovereign rule. The Sovereign’s role is not to dictate legislation undemocratically; it is to embody the spirit of the nation and exemplify a life of service and duty to country.

Conversely, many may say with good reason that the monarchy is outdated, elitist, and a spanner in the works for democracy. Indeed, monarchies are increasingly becoming a thing of the past, and in today’s world it may seem out of place to see a family living a life of unbounded riches and privileges simply by birthright. This is a view that is becoming increasingly popular among younger Britons. Additionally, one might contend that the monarchy has lost its magic; it no longer inspires the same awe and reverence it once did, and is unable to invoke the sense of service and duty to country that it once could.

Prince Harry and Meghan Markle being interviewed by Oprah Winfrey. (Credit: Marie Claire)

While support for the British monarchy appears to be holding steady, even in the wake of the latest saga with Harry and Meghan, I believe that the monarchy is on thin ice. The age of deference has long since passed, and in an era of materialism and rationality, the ethereal touch of monarchy has arguably lost its draw. Perhaps this is a good thing, or perhaps now more than ever we need a symbol of unity and duty to do our best by our neighbour and country. What is worth pointing out though is that Queen Elizabeth II, our longest serving monarch, has led the country dutifully throughout her life, and it is worth considering deeply whether the alternative (President Boris Johnson?) is really a better option.

Leo Cullis, History in Politics Writer

Who Won the Good Friday Agreement?

The Good Friday Agreement was signed in 1998, heralding a new era of peace after the decades of violence that characterised the Troubles. But who benefitted the most from the signing of the Agreement, and is the answer different today than it was twenty years ago?

For unionists, represented most prominently by the Democratic Unionist Party (DUP) and the Ulster Unionist Party (UUP), the Good Friday Agreement ultimately symbolised the enshrining of the status quo in law: Northern Ireland remained a part of the UK. In addition, the Republic of Ireland renounced articles two and three of its Constitution, which laid claim to the entire island of Ireland. It may seem, then, that unionism was the victor in 1998, but elements of the Good Friday Agreement have been responsible for tectonic shifts in the period since, arguably exposing it, ultimately, as a victory for nationalism.

While Irish republicans in the form of the Provisional IRA were required to put down their weapons and suspend the violent struggle for a united Ireland, there is a compelling argument that the Good Friday Agreement laid the platform for the growth of the movement and perhaps even the fulfilment of the goal of Irish unity. For one, it mandated power-sharing between the two sides of the Northern Irish divide: both unionists and nationalists must share the leadership of government. Since 1998, this has acted to legitimise the nationalist cause, rooting it as a political movement rather than an armed struggle. Sinn Féin, the leading nationalist party in the North, have moved from the leadership of Gerry Adams and Martin McGuinness to new, younger leaders Mary Lou McDonald and Michelle O’Neill. Irish unity propaganda now emphasises the economic sense of a united Ireland, the need to counter the worst effects of Brexit on Northern Ireland and a sensible debate about the constitutional nature of a new country, rather than the adversarial anti-unionist simplicity of the Troubles. Here, nationalism did gain significantly from the Good Friday Agreement because it allowed this transition from the Armalite to the ballot box in return for legitimacy as a movement.

British Prime Minister Tony Blair (left) and Irish Prime Minister Bertie Ahern (right) signing the Good Friday Agreement. (Credit: PA, via BBC)

Most prominent for nationalists, however, is the fact that the Good Friday Agreement spells out the route to a united Ireland, explicitly stating that a ‘majority’ in a referendum would mandate Northern Ireland’s exit from the UK. While unclear on whether majorities would be required both in the Republic and in Northern Ireland, as was sought for the Agreement itself in 1998, or on what criteria would have to be met in order to hold a vote, this gives Irish nationalism a legal and binding route to its ultimate goal of Irish unity, arguably the most prominent victory of the peace process.

Since 1998, Northern Ireland has seen a huge amount of political tension, governmental gridlock and occasional outbreaks of violence, most recently witnessed in the killing of journalist Lyra McKee in 2019. However, the Good Friday Agreement has served crucially to preserve peace on a scale unimaginable in the most intense years of the Troubles. If Irish nationalism achieves its goal of uniting the island, it will come about through a democratic referendum, not through violence. The very existence of the Good Friday Agreement, particularly its survival for over 20 years, is testament to the deep will for peace across the communities of Northern Ireland, forged in decades of conflict; it is this desire being fulfilled, even as the parties squabble in Stormont and the political status of Northern Ireland remains in the balance, that continues to make the biggest difference in the daily lives of both unionists and nationalists.

Joe Rossiter, History in Politics Writer

Margaret Thatcher: A Feminist Icon?

Thatcher’s lasting impact on twenty-first century feminism is widely debated. Whilst her actions have inspired subsequent generations of ambitious young women in all professions, Thatcher was undoubtedly not a feminist. In fact, she actively disliked and directly discouraged feminist movements. Thus, Margaret Thatcher is an apt example of how a successful woman does not always mean a direct step forward for the women’s equality movement. Instead, whilst Thatcher was Britain’s first female Prime Minister, it never occurred to her that she was a woman prime minister; she was, quite simply, a woman who was skilled and successful enough to work her way to the top.

Throughout Thatcher’s eleven-year premiership, she promoted only one woman to her cabinet; she felt men were best for the role. When questioned about this, Thatcher remarked that no other women were experienced or capable enough to rise through the ranks in the same way that she had. Similarly, whilst Thatcher demonstrated that women were now able to get to the top of the UK Government, she certainly did not attempt to make things easier for the women who followed; she pulled the ladder straight up after herself. Thatcher claimed that she ‘did not owe anything to women’s liberation.’ This was reflected in her policy provisions: she ignored almost all fundamental women’s issues. Despite hopeful attempts to raise female concerns with her, childcare provision, positive action and equal pay were not addressed during her years as Prime Minister. One can therefore conclude that Thatcher’s loyalty was almost exclusively to the Conservative Party, and that her vision focused on saving the country, not women.

The May 1989 Cabinet. (Credit: UPPA)

Thatcher resented being defined by her gender, but she served naturally as a role model and continues to do so (despite her policies) on the basis that she was, simply, a woman. She was singled out by feminists (and women generally) in the media simply for being a woman. In this way, examples matter, in the same way that Obama’s presidency matters for young generations of the BAME community. Perhaps Thatcher’s refusal to acknowledge the glass ceiling is precisely what allowed her to smash it so fearlessly: if she couldn’t see it, no one could point it out to her. Amber Rudd has claimed that she is a direct beneficiary of Thatcher’s valiance, as her work debunked the idea that only men could survive, let alone prosper, in political life. Men were undoubtedly perturbed by Thatcher’s female presence and conviction; Jon Snow has gone as far as describing interviewing her as ‘unnerving.’ These predispositions, held by both men and women, were fundamentally and forcefully shifted by Thatcher.

Yet is symbolism alone enough to praise Thatcher as a feminist icon? It is of course clear that being a symbolic role model is incomparable to the work of those who have actively and tirelessly campaigned to seek change for women. The influence she had on the feminist movement was not a result of her own actions or policies. Rather, it is a result of her being the first woman to make the exceptional journey, as a total outsider, of not only becoming Prime Minister but going on to win two successive elections convincingly.

Amelia Crick

Women in Terrorism: An Invisible Threat?

In 1849 the world met its first female doctor, Elizabeth Blackwell. Years later, in 1903, the first woman to win a Nobel Prize, Marie Curie, did so for her outstanding contributions to physics. How, with many more remarkable achievements behind women, does society continue to hold limited expectations of them? Why does the concept of a female terrorist seem so improbable to the vast majority of the Western world?

While this logic perhaps appears perverse, almost rendering terrorism a positive milestone for women, that is certainly not the intention. Instead, I hope to enlighten the reader to the gendered dimensions of terrorism, and to highlight the escalating need to perceive women as potentially equal vessels of terror.

The BBC series Bodyguard focuses on Police Sergeant David Budd’s protection of Home Secretary Julia Montague in a fast-paced drama. The plot twist in the season finale centres on a Muslim woman, who is revealed as the architect and bomb-maker behind the attack. Although some have found this portrayal troublesome, displaying Islamophobic overtones, the actress Anjli Mohindra explains that the role was actually “empowering”. Regardless of these perceptions, it is clear that the ‘surprise’ element manifests itself in the female gender. This sentiment persists outside of the media too, highlighting the potential threat posed by gender limitations.

Anjli Mohindra playing terrorist Nadia in BBC One’s Bodyguard. (Credit: BBC)

There is an undeniable and widespread assumption that terrorists are always male. While this assumption could be ascribed to the smaller numbers of women involved in terrorism, it is more likely attributable to embedded gender stereotypes. Such stereotypes, perceiving women as maternal and nurturing, but also helpless and passive, are irreconcilable with the image of an individual knowingly committing acts that cause death and disruption. In 2015, when women such as Shamima Begum and Kadiza Sultana left East London for the Islamic State, they were depicted as archetypal ‘jihadi bride[s]’ in the media: meek, manipulated and denied any agency in their decision. Yet an accurate representation of women in terrorism needs to transcend the constraints of traditional gender constructs. Although we may be aware of female stereotypes, why do they continue to permeate our understanding of women in terrorism when we claim to be an equal society?

The reality of women in terror is quite contrary to the aforementioned stereotype. In January 2002, Wafa Idris became the first female Palestinian suicide bomber. Since then, women have represented over 50% of successful suicide bombings in the conflicts of Turkey, Sri Lanka and Chechnya. In more recent years, the Global Extremism Monitor recorded 100 distinct suicide attacks conducted by female militants in 2017, constituting 11% of the total incidents that year. Moreover, Boko Haram’s female members have been so effective in their role as suicide bombers that women now comprise close to two-thirds of the group’s suicide attackers.

It is perhaps the dominance of prevailing stereotypes about women that enables them to be so successful in their attacks, presenting terrorist organisations with a strategic advantage. This is illustrated by the astonishing figures showing female suicide attacks to be more lethal on average than those conducted by their male counterparts. According to one study, attacks carried out by women had an average of 8.4 victims, compared to 5.3 for those carried out by men. Weaponising the female body is proving successful as society continues to assume women lack credibility as terrorist actors. Needless to say, remaining shackled to entrenched gender preconceptions will undoubtedly continue to place society at risk of unanticipated terror attacks by women.

Emily Glynn, History in Politics President

Four “Non-feminist” Feminists in British History

When we think of feminism, we think of women holding strongly coloured flags of green, white and gold, or green, white and purple, in historical photos. We think of women and girls who spoke on the news demanding equal opportunities, more provision of pregnancy and abortion advice, and the liberation of women in third world countries. We think of Malala speaking at international conferences, Jessica Chastain acting in Zero Dark Thirty and Facebook executive Sheryl Sandberg on the cover of Time Magazine. They are all strong women, open in declaring themselves feminists, and frequently and publicly involved in feminist organisations and activities.

Yet we often forget to include some women as feminists, either because they do not carry those clear ‘feminist features’ mentioned above, or simply because they did not consider, or even refused to consider, themselves feminists. They are like the Celia Foote character (interestingly also played by Jessica Chastain) in The Help, who was not a feminist in the conventional way of the Skeeter character (played by Emma Stone), and even often doubted herself, but went against the mainstream in thought and action. Such women were undoubtedly feminists, and proved to be great thinkers and writers of their times.

Hildegard of Bingen

Unknown to many, feminism has roots in religious contexts in which women found themselves with the opportunity to express their thoughts as freely as men. Throughout history, especially in Europe, families sent their ‘unmarriable’ daughters away to convents. Some women found the time and quietness there a great opportunity to think, read and write. One of them is our first ‘non-feminist’ feminist, Hildegard of Bingen.

Saint Hildegard. (Credit: Franz Waldhäusl, via imageBROKER – http://www.agefotostock.com.)

Hildegard was born at the end of the 11th century and became a nun when she was a teenager (the exact age of her enclosure is subject to debate). She later became the abbess of a small Rhineland convent. People normally do not consider her a feminist, as she lacked most of the elements contemporarily associated with feminism, yet through her actions she certainly proved that women could do exactly what men could do. Hildegard was considered by many in Europe to be the founder of scientific natural history in Germany. She is also one of the few known chant composers to have written both the music and the words. Hildegard also produced two volumes of material on natural medicine and cures and three great volumes of visionary theology, all of which were widely celebrated and led to her recognition as a Doctor of the Church, one of the highest titles given by the Catholic Church to saints recognised as having made a significant contribution to theology or doctrine through their research, study or writing.

Not only was she deeply involved in what was conventionally thought of as men’s work – music, academia, medicine and religious theory – she also frequently wrote letters to popes, emperors, abbots and abbesses expressing her critical views and thoughts on a wide range of topics. These significant political, social and economic figures often approached her for advice. She even invented a language called the Lingua ignota (‘unknown language’).

Despite her apparently feminist way of doing things, another thing that often dissociates her from feminism is her frequent self-doubt. Often doubtful about her ‘unfeminine’ activities, she was always insecure about being an uneducated woman and was not confident in her ability to write. She once wrote to Bernard of Clairvaux, one of the leading churchmen of the time, asking whether she should continue with her writing and composing, even though by that time she was already widely known and honoured as an incredible writer and musician.

But still, Hildegard was one of the few women in medieval history who wrote so freely and critically on everything, and she was viewed as an impressive writer, musician and religious leader on the basis of her achievements rather than her gender. That made her a feminist.

Margaret Cavendish 

Margaret Cavendish was born in the 17th century into a family of well-established Royalist landowners, and later married the Duke of Newcastle. She was one of the few fortunate women of her time whose husband encouraged her to pursue her literary ambitions.

At first, she wrote on topics mostly associated with women’s internal struggles, ranging from worries about their families to their common fear and sorrow over their children’s sickness and death, though she had not herself experienced what she wrote about. Well received by women, her writings were found to be very moving and unexpectedly understanding of the harsh realities faced by many women at that time.

Margaret Cavendish, the Duchess of Newcastle. (Credit: Kean Collection, via Getty Images)

Later, Cavendish started writing philosophical verse. Though her work was widely recognised, like Hildegard she often had self-doubt about her capacity and duty as a woman when writing. A modern biographer once remarked that Cavendish felt torn between ‘the (feminine and Christian) virtue of modesty’ and her ambitions. On the one hand, she was very serious and confident about her work; on the other, she often deprecated it with defensive and apologetic justifications.

From Cavendish’s deprecation of her own work, it seems that she lacked the confidence of a feminist to denounce the conventional status of women and to loudly declare that her work should be equally recognised. But she wrote like a feminist in the sense that she brought womanly issues – thought to be topics unworthy of writing, of no political, social or any other importance – to public attention. She brought the dark side, the internal perspective of women’s struggles in the household, into the open. She also spoke out against the hostility towards any woman regarded as outspoken or ambitious, which in her time was deemed madness and dangerous. Her writings also encouraged later women to write and urged them to unite together instead of always being jealously critical of other women’s achievements. That made her a feminist.

Dorothy Osborne 

Like Cavendish, Dorothy Osborne was born in the 17th century into a family of Royalists. She is comparatively less well known than our other three ‘non-feminist’ feminists, as she did not produce writings or theories as significant as those of Cavendish or Hildegard. She could even be seen as an anti-feminist by some, as she was one of those critics who heavily denounced the work of Cavendish, a more obvious feminist, as ‘extravagant’ and ‘ridiculous’.

Funnily enough, what put her on this list is her criticism of other people’s work (including Cavendish’s) in the letters she exchanged with her fiancé, later husband, Sir William Temple. She read widely, and in these letters she often offered heavy criticism of what she read. These ‘witty, progressive and socially illuminating’ letters were later published and became large volumes of evidence that Dorothy was a ‘lively, observant, articulate woman’. Even Virginia Woolf later remarked that Dorothy, with her great literary flair, would have been a novelist in another time.

What further marks her out as a true feminist is her resistance to arranged marriage and conventional family expectations. In the 17th century, marriages were usually business arrangements, especially for a rich family like Dorothy’s. In love with Sir William Temple, whom her family refused on financial grounds, she protested by remaining single and rejecting the many suitors her family put forward. At last her struggle was rewarded, and she married Temple. But her feminist acts did not stop there: later references show that she was actively involved in her husband’s diplomatic career and in matters of state, quite contrary to how an ordinary 17th-century wife would have behaved. 

Dorothy never called herself a feminist, nor claimed to act like one. But her thoughts, words and actions clearly show that she lived a feminist life as a free-willed, critical woman. That made her a feminist. 

Mary Shelley 

Mary Shelley, famous above all for Frankenstein, wrote several of the greatest Gothic novels of all time and is considered a pioneer of science fiction. 

Mary Shelley was born in 1797. Her father was the political philosopher William Godwin and her mother was none other than one of the first public feminist activists in British history, Mary Wollstonecraft. Under the influence of two remarkable parents, Mary Shelley was encouraged to read, learn and write, and her father gave her an informal but rich education. 

She later fell in love with one of her father’s followers, the Romantic poet and philosopher Percy Bysshe Shelley, who was then married. After the death of Percy’s first wife, Mary and Percy married. They were a talented and, for a time, happy couple. But Mary’s luck then seemed to run out: three of her four children died prematurely, and Percy later drowned while sailing. In the last decade of her life she was constantly ill. 

Despite her misfortunes, she produced great novels such as Valperga, The Last Man and The Fortunes of Perkin Warbeck. Perhaps because of her own sad story, her novels were strikingly dark, offering little hope for their characters. Mary’s novels, especially her most famous, Frankenstein, contain no strong female characters; ironically, most female characters die. From a conventional perspective her work is not feminist at all. But what she explored was the struggle of women in an age when society was driven by reason, science and patriarchy. As one commentator put it, ‘[t]he death of female characters in the novel is alone to raise enough feminist eyebrows to question how science and development is essentially a masculine enterprise and subjugates women.’ She was radical in thought and critical of society’s norms. Alongside her writing, she devoted herself to raising and educating her only surviving child, Percy Florence Shelley, who later became known for supporting amateur performers and charity performances. 

What made Mary Shelley a legendary woman, beyond her great writings, was her strength in turning misery into energy, persevering as an author and a mother despite everything she had lost. That made her a feminist. 

Chan Stephanie Sheena

Does the Electoral College Serve the Democratic Process?

“It’s got to go,” asserted Democratic presidential candidate Pete Buttigieg, speaking of the electoral college in 2019 – reflecting a growing opposition to the constitutional process, which has only been heightened by the chaotic events of recent weeks. Rather than simply reiterating the same, prosaic arguments for the institution’s removal – the potential subversion of the popular vote, the overwhelming significance of battleground states, the futility of voting for a third party, and so forth – this piece will consider the historical mentalities with which the electoral college was created, in an effort to convey the ludicrous obsolescence of the institution in a twenty-first-century democracy.  

Joe Biden and Kamala Harris preparing to deliver remarks about the U.S. economy in Delaware, 16 November 2020. (Credit: CNN)

In its essence, the system of electors stems from the patrician belief that the population lacked the intellectual capacity necessary for participation in a popular vote – Elbridge Gerry informing the Constitutional Convention that “the people are uninformed, and would be misled by a few designing men.” Over the past two hundred years, the United States has moved away from the early modern principles encouraging indirect systems of voting: the seventeenth amendment, for instance, established the direct election of senators in 1913. It has also seen the electors themselves transition from the noble statesmen of the Framers’ vision to the staunch party loyalists that they so greatly feared. In fact, the very institution of the modern political party had no place in the Framers’ original conception, with Alexander Hamilton articulating a customary opposition to the “tempestuous waves of sedition and party rage.” This optimistic vision of a factionless union soon proved incompatible with the realities of electioneering and required the introduction of the twelfth amendment, ratified in 1804 in response to the factious elections of 1796 and 1800. Yet, while early pragmatism was exercised over the issue of the presidential ticket, the electoral college remains entirely unreformed at a time when two behemothic parties spend billions of dollars to manipulate its outcome in each presidential election cycle. 

The Constitutional Convention was, in part, characterised by a need for compromise and it is these compromises, rooted in the specific political concerns of 1787, that continue to shape the system for electing the nation’s president. With the struggle between the smaller and larger states causing, in the words of James Madison, “more embarrassment, and a greater alarm for the issue of the Convention than all the rest put together,” the electoral college presented a means of placating the smaller states by increasing their proportional influence in presidential elections. While it may have been necessary to appease the smaller states in 1787, the since unmodified system still ensures voters in states with smaller populations and lower turnout rates, such as Oklahoma, hold greater electoral influence than those in states with larger populations and higher rates of turnout, such as Florida. Yet, it was the need for compromise over a more contentious issue – the future of American slavery – that further compelled the adoption of the electoral college. Madison recognised that “suffrage was much more diffusive in the Northern than the Southern States” and that “the substitution of electors obviated this difficulty.” The indirect system of election, combined with a clause that counted three of every five slaves towards the state population, thus granted the slaveholding section of the new republic much greater representation in the election of the president than an alternative, popular vote would have permitted. At a time when the United States’ relationship with its slaveholding past has become the subject of sustained re-evaluation, its means of electing the executive remains steeped in the legacy of American slavery.

It takes only a brief examination, such as this, to reveal the stark contrasts between the historical mentalities with which the electoral college was established and the realities of a modern, democratic state. Further attempts to reform the institution will no doubt continue to come and go, as they have over the past two hundred years. However, when compared with the environment in which it was proposed, it is clear that the unreformed electoral college is no longer fit for purpose and must, eventually, give way to a system in which the president is elected by a popular vote.

Ed Warren

Debates Take On a Different Meaning in the “Worst Year Ever”

The Trump-Biden debates are wrapped up, and for the “Worst Year Ever” they didn’t disappoint. The first debate was widely condemned as the “Worst Debate Ever”. Both candidates talked over each other, and it was near-impossible to understand them. Biden faced calls to boycott the other debates. Trump made this decision for him, falling ill with COVID-19.

President Donald Trump and Democratic candidate Joe Biden take the stage in their final debate of the election campaign, Nashville, Tennessee. (Credit: Reuters)

Anyone who saw even brief highlights of the first debate could be forgiven for giving up on the whole institution of debates. But this would be extremely unwise. Yes, Trump interrupted Joe Biden a staggering 128 times. And admittedly, Joe Biden did reply by telling him to “shut up” and calling him a “clown”. Yet this wasn’t the breakdown of the debate as an institution. Rather, it was an additional insight into who the two candidates are, and how they will act in the face of the adversity that Presidents experience on a daily basis.

The problem with the way we view debates is that we anticipate 90 minutes of detailed and virtuous policy discussion. There is no clearer example of this fantasy than the West Wing episode, in which the two candidates running for president have a high-minded and theoretical exchange of views on what it means to stand as a Republican or a Democrat. In reality, presidential debates have little to do with policy. Most voters are unswayed by the arguments of the candidates; they may have little trust in them, or have made up their minds previously. The one area where debates really count is character.

The focus on character may be why the UK has lacked similar pre-election debates, and why attempts here have enjoyed less success. The presidency is a position uniquely judged by the character of its occupant, and in the build-up to 2020 President Trump’s character – depending on who you ask – has been viewed as his biggest strength or weakness. This really gets to the crux of what debates are, and what they have always been – a blank slate.

The debate is one of the few foreseeable major events in a campaign. But that is all that can be foreseen: the event. Most voters are aware of it, and around 80 million will watch it, but the candidates are under no obligation to make it a debate on the state of America. Like most other political realities, the in-depth policy debate was an unwritten rule, held up by the ‘honour system’, and President Trump lacks this honour.

Using debates for non-policy advantages is as old as the institution itself. In the first ever presidential debate in 1960, Nixon faced off against Kennedy. Nixon turned up looking sickly and sweaty, whilst JFK was the epitome of suave New England style. Accordingly, whilst radio listeners thought Nixon had performed better, TV viewers agreed that Kennedy had won the debate. The echoes of 1960 were clear in Mr Trump’s first performance, in which he waved his hands, stood firm, interrupted, and generally tried to give the impression that he was in control of events on the stage. Yet Mr Biden was not immune from these gimmicks either – he would flash a smile whenever the president made an outrageous claim, as if to say, “look at this clown – does he have what it takes to fill the office?”

Vice President Richard Nixon and Senator John F. Kennedy in their final presidential debate; 21 October, 1960. (Credit: AP Photo)

The pressure of these debates is intense. Each candidate will have three or four separate strategies they’re trying to pursue, and they have to juggle all of them whilst simultaneously readjusting their approach depending on which hits are landing. In the first debate, Mr Trump was trying at once to present Joe Biden as senile, as racist, and yet also as a radical socialist. The president struggled with these conflicting narratives, especially as he hoped that constantly interrupting Mr Biden would force the former vice president into a memorable gaffe. Ultimately, it was Mr Trump’s inability to change his approach in the debate that cost him more than any of his policy errors, and formed the main narrative of the debate in its aftermath.

But there was ultimately something more sinister going on. Donald Trump’s biggest election worry is high turnout – Republicans usually vote reliably, but Democrats are much more vote-shy. This is doubly true of young people. Accordingly, the president may have been playing a deeper game during the first debate, one which he executed outstandingly. President Trump saw an opportunity to portray the debate as an irrelevant contest between two old white men – not dissimilar to how young Americans view the election already. Mr Trump’s constant interruptions made the debate unbearable to watch, but he ultimately wanted that. He may not have done well with the few undecided voters left in the campaign. He will care little. The bigger constituency was voters undecided between voting for Biden or staying home. The first debate looked exactly like two old men bickering, and for Trump that’s as close to a debate win as he can get.

Seth Weisz

Debate: Monarchy, a Relic or Required?

Monarchy and its Political Pomp and Circumstance

The Glorious Revolution of 1688 implemented the constitutional monarchy of the UK that we know today, effectively limiting the political role of the Crown to mere pomp and circumstance. Yet, to this day, certain superfluous political liberties have remained. In practice, the sovereign still gives weekly counsel to the Prime Minister. In practice, the sovereign opens Parliament with their speech, albeit one drafted by the government of the day. In practice, the sovereign must approve all legislation before it can become an act of parliament, although the last bill to be refused in such a manner was vetoed in 1708. While the British political constitution has moved on considerably from its absolute-monarchical days, the monarch’s political role still retains an archaic air, where substance falls short of ceremony. The lack of majority dissent over this archaism can only be explained by the increasing celebrity of the monarchy, fuelled by the tabloid-frenzied consumption of its every move, from wedding dress to baby name. This infatuation with the winners of a ‘genetic lottery’ completely overlooks the fact that these political liberties remain available to be used and abused. Even if the royals choose not to exercise them, the fact that they exist is the point.

This is only the tip of the iceberg, as, ceremonial politics aside, the monarchy can also be utilised by the party in power when wanting to inspire confidence in their abilities. This was evident in the Queen’s recent coronavirus address where she spoke of the need for solidarity, harking back to the Second World War idea of ‘everyone doing their bit’ and quoting Vera Lynn’s song, ‘We’ll Meet Again’. For a more worrying influence we must look back only to August of last year, when Boris Johnson used the Queen’s ability to prorogue parliament to prevent lawmakers from thwarting his Brexit plans. Though the Crown officially adopts an air of impartiality towards partisan politics, it seems the monarchy is still a political tool to be manipulated on a whim. Surely the best way to ensure sovereign impartiality is for the Crown to remain aloof from the political world altogether. Yet while this demands reform, the monarchy need not be abolished for it to take its fingers out of the political pie.

The royal finances, moreover, suggest there would be little harm in taking this next step. With £82.2 million paid by taxpayers in 2019 to form the Sovereign Grant – not including security or ceremonial costs – is it really necessary to keep funding this archaic institution? Popular responses say yes, pointing to tourism revenues of £550 million, and ambassador-generated trade of £150 million. Yet the latter figure barely makes a dent in the sum of UK exports (£543 billion), and as for tourism revenue, the abolition of the monarchy would not stop tourists from frequenting destinations such as Windsor Castle and Buckingham Palace. The question we, the public, should be asking is whether the monarchy is still relevant. The royal family can still exist in celebrity status and tabloid sensationalism without pulling on the drawstrings of the public purse and without being used as a political tool. The political role of the monarchy should be a thing of the past, celebrated and remembered perhaps, but fit for the vault of history.

Melanie Perrin

The current British royal family on Buckingham Palace’s Balcony. (Credit: Chris Jackson, via Getty Images.)

A Defence of the Monarchy

‘Monarchy’: a word that recalls the riches and privileges of fairy-tale princes and princesses, but one that also connotes the existential crises faced by many kingdoms. The twentieth century saw a deadly trend of monarchies coming to an end: most famously, the tragic demise of the Romanovs. However, new monarchies were forged that have remained to this day, such as Bhutan’s Wangchucks, whose popularity in Thailand has even led to a sharp increase in Thai tourism to Bhutan.

Monarchies carry more influence than is recognised in modern society. In Britain, the House of Windsor encourages support for charitable causes. Prince Harry, the Duke of Sussex, has been outspoken about the importance of mental health services, describing his participation in counselling and advocating open discussion of mental health. Alongside the Duke and Duchess of Cambridge, Prince Harry founded ‘Heads Together’, a campaign created to increase the visibility of mental health conditions. Drawing on their royal status, the Cambridges and Sussexes promoted ‘Heads Together’ through royal visits, social media presence and tailored events. It was highly successful, with the foundation announcing it had assisted “millions” in talking more openly about mental health. The British monarchy is still deeply entrenched within our society and culture, engaging with topical issues and promoting causes that its members believe in. The Windsors have become more personal than rulers of the past, and still engage with politics, albeit in different ways. Commentary on social issues is another valid way of engaging with the political constitution. 

Neutrality is the most important characteristic of today’s monarchy, the royal veto having been abandoned for over 300 years. The monarch is now idealised as a leader the public can stand behind, regardless of the political climate. No Prime Minister can command the breadth of support that the monarchy can. According to YouGov in 2018, 69% of people support the monarchy, with 21% opposed and 11% stating no preference. No Prime Minister has ever achieved such a high public majority: Theresa May was the second most popular Conservative leader ever, and still only commanded a positive opinion of 30%. In a turbulent modern society, the British monarchy has been a source of constancy.  

In a politically chaotic decade, Britain has seen three Prime Ministers in three years under Conservative Party leadership, which has been deeply divisive. However, the popularity of the monarchy has been proven time and time again. The wedding of the Cambridges drew 60 million viewers (averaging 22 million across the whole coverage), and sales of the royal issue of Hello! magazine rose by 25%. Globally, there were 29 million viewers of the wedding of Prince Harry to Meghan Markle. Furthermore, the British monarchy unites 2.4 billion people under the Commonwealth, across five continents. 

Neither the world nor British society has relinquished its grasp on the monarchy. It has been steadfast for centuries and, whether or not it is universally accepted, the monarchy occupies a key place in the politics, culture and society of modern Britain. It does not seem as if the world is ready for the monarchy to become a historic concept.

Lorna Cosgrave