Apartheid, literally 'apartness' or 'separateness' in Afrikaans, refers to the policy of enforced racial segregation that defines the history of modern South Africa. From 1948 to 1994, the National Party held power and put into practice a culture of 'baasskap', or white supremacy, and the national programme of apartheid forced black and white citizens apart for nearly fifty years. The first apartheid law, the Prohibition of Mixed Marriages Act of 1949, served as the forerunner of later legislation which sought to prevent interracial relationships and strip black citizens of their political rights. All public facilities, including hospitals and transport, were segregated; moreover, the effects of apartheid split up families and displaced them from their homes.
Yet whilst the political doctrine of apartheid and its segregationist ideology ended in 1994, culminating in the election of Nelson Mandela as President of South Africa, its socio-economic legacy extends into the present day. The apartheid economy was tailored to appeal to, and overwhelmingly benefit, white citizens, and in a nation of significant inequality the after-effects of enforced segregation still pervade twenty-first-century discourse. Today, black citizens remain disadvantaged in the national economy, and in the opportunities afforded to them, compared with their white counterparts. As the Economic Freedom Fighters, a South African left-wing political party, emphasised in 2013, 'political freedom without economic emancipation is meaningless.' Statistical evidence supports the party's observation: in 2011, 54% of black Africans, compared with less than one percent of white citizens, lived in poverty, attesting to the wider culture of division which had served as the central bastion of political authority.
Even in the realm of education – particularly pertinent given the notable involvement of students in the anti-apartheid movement – the effect of segregation is demonstrable in the twenty-first century. Under the National Party, white schools received roughly ten times the funding of black schools, and these historical inequalities have become so deeply embedded in the framework of South Africa's education system that they are perpetuated nearly thirty years after the dissolution of apartheid. From 2015 to 2019, school funding in the province of KwaZulu-Natal, one of the lowest-income communities in the nation, fell by a further 15%. What this evidence highlights is that whilst the official dogma of segregation is no longer woven directly into the fabric of the nation, the ghost of apartheid remains a ubiquitous element of life in South Africa, carving out an enduring and reprehensible modern socio-economic legacy.
Maximus McCabe-Abel, President, History in Politics
You might have heard of the unrest in Hong Kong last year, stemming from the Government's attempt to introduce an extradition agreement with Mainland China and culminating in a full-blown humanitarian crisis with the enactment of the National Security Law (NSL). Why was the extradition agreement met with such fierce opposition? The proposed Bill would have made both foreign nationals residing in Hong Kong and local criminal suspects extraditable to Mainland China, which has a substantially different criminal justice system and a history of breaching fundamental human rights, including arbitrary detention, unfair trials and torture. The Bill's only requirement was that 'prima facie' evidence, which carries a significantly low standard of proof, be provided to the Chief Executive and the courts. Following escalating clashes between the Government, police and citizens, with protests seeing over a million people in attendance and over 10,000 people arrested, the Bill was shelved. But by that time, the damage was done. The Bill had exacerbated the deep fears of local citizens and expats in Hong Kong, who saw it as an early sign of China's tightening grip on the territory and the dark future to come.
Several demands arose from the locals: the formal withdrawal of the Bill; the release and exoneration of those arrested at the protests; the establishment of an independent commission of inquiry into police behaviour; universal suffrage for the Legislative Council and Chief Executive elections, along with the resignation of Chief Executive Carrie Lam; and lastly the retraction of the characterisation of the protests as "riots". Somewhat unsurprisingly, only the first demand was met, which the Hong Kong people saw as highly unsatisfactory, and protests continued with increasing intensity. All this culminated in the Standing Committee of the National People's Congress enacting the NSL, which opened an even bigger can of worms.
Under the Sino-British Joint Declaration of 1984, Britain agreed to hand back control of Hong Kong – ceded in the wake of the First and Second Opium Wars – to China in 1997, on the condition that the "One Country, Two Systems" principle, and freedoms of speech, assembly and religious belief, amongst others, would continue to be enjoyed by Hong Kong until 2047. The NSL contained intentionally vague provisions allowing 'secession, subversion, terrorism and collusion with foreign forces' to be punished by a maximum sentence of life in prison. Having already been used to charge more than 50 individuals, it has naturally given rise to a sense of deep unease in both the domestic and international spheres. As the legislation also allows cases to be tried in Mainland China under its legal system, there is a real risk of criminal suspects being deprived of fundamental human rights, such as being held incommunicado in undisclosed locations for up to six months before being formally arrested or released. Whilst the UK has comparable national security laws, in that suspected terrorists can be detained without charge for up to 28 days, these individuals are nevertheless allowed legal representation within a maximum of 48 hours of arriving at the police station; and compared to Mainland China, the UK is subject to more intense public and legal scrutiny whenever human rights are undermined. The legislation's effect is essentially a complete curtailing of free speech, the press and political dissent in Hong Kong. Critics worldwide have argued that this directly contravenes the Joint Declaration's condition of "One Country, Two Systems", with the NSL also applicable to crimes committed abroad, to non-permanent residents and to people outside of Hong Kong. This means that the reach of the law is far and extensive, essentially subjecting even foreign nationals to the authority of the NSL.
Whilst the realistic probability of extraditing foreign citizens in the West for crimes committed against the Communist Party is relatively slim, the law has already caused a growing reluctance amongst foreign investors to conduct business in Hong Kong for fear of being subject to its extensive powers. After the emergence of Covid-19 and the consequent rising criticism of the Communist Party, it will be a matter of great importance for there to be checks and controls on Mainland China's ever-increasing influence. If that influence is left unchecked, one can only hope to stay out of the line of sight of the Chinese Government – and that is something I concern myself with, as this article has the potential to be considered "subversion" under the draconian National Security Law.
What a strange year. April might seem like an even stranger time to reflect, one month after the anniversary of the first Coronavirus lockdown, but it also seems apt as the easing of lockdown starts to open up our futures. With pubs starting to open, vaccines being delivered, and students officially allowed back to university, there is light at the end of the pandemic tunnel.
Yet, while we've been locked up in our houses, a few things have happened. For one, History in Politics has done two terms as a university society – but you probably don't care much about that. More significant are the huge events seen through the prism of a new post-pandemic world. Britain has finally, properly, left the EU; Boris Johnson lost his most infamous advisor; thousands marched for BLM; and thousands have protested policing in the wake of Sarah Everard's death: 'why are you protecting statues of racists over actual women?', one sign read.
During the pandemic, Britain has been reflecting. We might look back upon our relationship with Europe. We might look at the history of race-relations in the UK, or our colonial legacy. In fact, with books such as Empireland: How Imperialism Has Shaped Modern Britain being released in January by Sathnam Sanghera, it is clear that many have been reflecting on such themes. In doing so, it is hoped that, by having a clear idea of where we’ve come from, we might have a better idea of what we’re meant to do in the future.
Luckily for me, although perhaps less so for my career prospects, I’ve had the privilege of studying such history. I’ve spent a lifetime learning about the British Empire, race-relations, civil rights, and Britain’s relationship with Europe (although, aged 21, a lifetime is quite a melodramatic way of putting it). I have even had time to study the Tudors, which many complain took the place of ‘more relevant’ history. Despite all this history I am still to get the magic key to predicting our future – perhaps that will come tomorrow, or once I’m back in a Durham pub.
Ironically, such historical reflections can be found throughout history. When Edward Colston's statue was raised in Bristol in 1895, for instance, it was already over a century and a half after his death. Those who toppled his statue over a hundred years later certainly wouldn't think that the Victorians' remembrance of Colston was a positive one – although some might suggest it was representative of the Victorians' future: a future of racial inequality.
One thing which we cannot change, regardless of how we might reflect upon it, is what has passed. This might sound obvious, but it is important to hold in mind such ‘objective truths’. They’re the reason people look back, hoping the past truths will unlock future truths. It is in search of the ‘truth’ that we talk, read, and reflect on our past – from empire to race. Last summer, as statues were ripped up and the media exploded into debate, I asked how we might have that conversation in a civil manner. Yet, the ‘culture war’ has continued – regardless of these ‘truths’.
Perhaps it is less talking and more listening which needs to happen. Over lockdown I had the pleasure of listening to Natalie Mears (associate professor in early modern British history) discuss some of these topics. Finally, I could put Tudor history to some use, and the comparisons with our present ‘culture war’ were stark. From powerful political advisors (it is the 500th anniversary of William Cecil’s birth; and a year since ‘that’ trip to Barnard Castle) to our relationship with Europe, some things seemingly haven’t changed. As we reflect upon the past year (and a bit) of COVID-19, one lesson from Elizabethan England sticks out the strongest: that reflections and memories of the past have always been political. At least that is one door to the future which is unlocked. The future is undoubtedly political.
Join Durham University's History in Politics Society for their term's theme of 'Reflections' and find series two of Dead Current, the History in Politics Podcast, on Spotify. The first episode of series two is President Emily Glynn and Events Manager Ed Selwyn Sharpe's interview with Natalie Mears.
Diamonds symbolise love, wealth, and commitment to both the purchaser and the recipient; after all, they are known to be a woman's best friend. Yet the process of retrieving such a valuable commodity remains a battleground for those who work in the diamond mines. Alongside diamonds themselves, the industry generates worker exploitation, violence, and civil war, proving that beauty is, in fact, pain.
The tale of the present-day diamond market began on the African continent – in South Africa, to be precise. Today the Democratic Republic of the Congo ranks fourth in the world for diamond production, with 12 million carats produced in 2020, and the African continent dominates the top ten rankings, with seven of its 54 countries among the world's largest diamond producers.
The diamond trade contributes approximately $8.5 billion per year to Africa, and Nelson Mandela once stated that the industry is "vital to the Southern African economy". The wages of diamond miners, however, do not reflect the value of this work or its contribution to the financial expansion of African countries. An estimated one million diamond diggers in Africa earn less than a dollar a day, an unlivable wage below the extreme poverty line. Despite the significant revenues from the diamond industry, both through taxation and profit-sharing arrangements, governments often fail to re-invest these funds in local communities. The government of Angola receives about $150 million per year in diamond revenues, yet conditions near major diamond mining projects are appalling: public schools, water supply systems and health clinics are near non-existent. Many African countries are still healing from the impact of colonisation and are dealing with corruption, incompetence and weak political systems. It therefore comes as no surprise that governments fail to invest their diamond revenues productively.
In addition to being excessively underpaid and overworked, miners endure exceptionally hazardous conditions, often lacking safety equipment and adequate tools for their role. Injuries, sometimes fatal, are an everyday possibility in the life of a miner, and the risk of landslides, mine collapses and a variety of other accidents is a constant fear. Diamond mining also contributes to public health problems, since the sex trade flourishes in many diamond mining towns, leading to the spread of sexually transmitted diseases.
Children are considered an easy source of cheap labour, so they tend to be regularly employed in the diamond mining industry. One survey of diamond miners in the Lunda Norte province of Angola found that 46% of miners were between the ages of 5 and 16. Life as a diamond miner is full of hardship, and this appalling way of living is only harsher for young children, who are more prone to injuries and accidents. Since most of these children do not attend school, they tend to be pigeonholed into this way of life throughout adulthood, robbing them of their childhoods and bright futures.
African countries such as Sierra Leone, Liberia, Angola and Côte d'Ivoire have endured ferocious civil conflicts fuelled by diamonds. Diamonds that instigate such conflicts are often called "blood" diamonds, as they intensify civil wars by financing militaries and rebel militias. Control over diamond-rich territories causes rival groups to fight, resulting in bloodshed, loss of life and disturbing human rights abuses.
Whilst purchasing diamonds from a conflict-free country such as Canada can buy you a clean conscience, you must not forget the miners being violated every day for the benefit of others but never themselves. Just as we can choose fair trade foods that benefit their producers, consumers of one of the most valuable products they may ever own should not be left in the dark about the strenuous digging miners do behind the stage of glamour and wealth. A true fair trade certification process must be put in place through which miners are adequately rewarded for their dedication and commitment to such a relentless industry, especially in countries still processing the generational trauma caused by dominating nations.
Modern aversion to human rights protection in the United Kingdom can be seen on the surface of British politics, from Theresa May's indignation that she was unable to deport an illegal immigrant because he had a pet cat, to Nigel Farage's demand that we "scrap the EU Human Rights Act". The modern rallying cry of human rights sceptics often takes the form that we, the British, have no need for coffee-drinking, baguette-eating Europeans to tell us how to do human rights. The problem, though, is that rights protection is not that simple, and never has been, so why do we let human rights be disparaged by these tabloid-level soundbites and mantras?
When bashing human rights, politicians are most commonly referring to the European Convention on Human Rights (ECHR), which was created by the Council of Europe – a body which, for clarity's sake, has nothing to do with the European Union, incidentally making Farage's claims about the "EU Human Rights Act" farcical. The ECHR established the European Court of Human Rights (ECtHR), frequently referred to as 'Strasbourg' after its location. These elements of European influence running through our human rights framework make it an easy target for politicians wanting to exploit the Euroscepticism rife in modern Britain. This is why, perhaps unsurprisingly, key features of the ECHR's history are left out by its critics.
The ECHR finds its origins in the ruins of Europe following the Second World War and the Holocaust, when it was decided that action must be taken to prevent future human rights abuses across the continent. Winston Churchill and the British delegation played a key role in the drafting of the Convention and were ardent supporters of it. Since then, the Convention has evolved with the changing nature of Europe and has helped to reduce discrimination and expose rights abuses.
Looking at the face of modern Europe, the significant role of the ECHR may seem unnecessary, even an inconvenience to public protection and government policy. However, the Convention should not be perceived as existing only to prevent grave breaches of human rights, but also to remedy all forms of discrimination, such as upholding the right of LGBT couples to adopt or of people of all faiths to have the same job opportunities. These breaches still occur across Europe, with significant examples appearing in Poland and Hungary, where radical anti-LGBT policies such as 'LGBT-free zones' are being enacted.
This is one reason why it is more crucial than ever that the other States party to the Convention continue to believe in and abide by it. If disagreements with Strasbourg start to appear over issues such as prisoner voting rights, this waters down the impact of the ECtHR's rulings and raises the risk that judgments given by the Court on clear breaches (such as those appearing in Eastern Europe) are swept aside and ignored by the violating State. Arguing and fighting for proper rights protection is not done only for yourself or your group, but for the wider community.
This is why the rhetoric used in modern politics about seemingly humorous issues, like the failed deportation of a man because of his cat, and other false remarks which attempt to sabotage the mechanisms for rights protection, are dangerous. They are dangerous not only for the prevention of the relatively benign issues that may appear in nations like the UK, but also for those who rely on the Convention to prevent fundamental breaches of human rights – breaches of the very kind whose occurrence led to the creation of the ECHR in the first place.
When we think of feminism, we think of women holding brightly coloured flags of green, white and gold, or green, white and purple, in historical photographs. We think of women and girls who spoke on the news demanding equal opportunities, better provision of pregnancy and abortion advice, and the liberation of women in developing countries. We think of Malala speaking at international conferences, Jessica Chastain acting in Zero Dark Thirty and Facebook executive Sheryl Sandberg on the cover of Time Magazine. They are all strong women who openly declare themselves feminists and are frequently and publicly involved in feminist organisations and activities.
Yet we often forget to include some women as feminists, either because they do not carry those clear 'feminist features' mentioned above, or simply because they did not consider, or even refused to consider, themselves feminists. They are like the Celia Foote character in The Help (interestingly also played by Jessica Chastain): not a feminist in the conventional way of the Skeeter character (played by Emma Stone), and often full of self-doubt, but going against the mainstream in thought and action. Such women are undoubtedly feminists, and some proved to be among the great thinkers and writers of all time.
Hildegard of Bingen
Unknown to many, feminism had roots in religious contexts in which women found the opportunity to express their thoughts as freely as men. For centuries, especially in Europe, families sent their 'unmarriable' daughters away to convents, and some of these women found the time and quietness a great opportunity to think, read and write. One of them is our first 'non-feminist' feminist, Hildegard of Bingen.
Hildegard was born in the 11th century and became a nun as a teenager (the exact age of her enclosure is subject to debate). She later became the abbess of a small Rhineland convent. People do not normally consider her a feminist, as she lacked most of the elements now associated with feminism, yet through her actions she was certainly a pioneer in proving that women can do exactly what men can do. Hildegard was considered by many in Europe to be the founder of scientific natural history in Germany. She is also one of the few known chant composers to have written both the music and the words. She produced two volumes of material on natural medicine and cures, and three great volumes of visionary theology, all well celebrated, which led to her recognition as a Doctor of the Church – one of the highest titles given by the Catholic Church to saints recognised as having made a significant contribution to theology or doctrine through their research, study or writing.
Not only was she deeply involved in what was conventionally thought of as men's work – music, scholarship, medicine and religious theory – she also frequently wrote letters to popes, emperors, abbots and abbesses expressing her critical views on a wide range of topics, and these significant political, social and religious figures often approached her for advice. She even invented a language, the Lingua ignota ("unknown language").
Despite her apparently feminist way of doing things, another thing that often dissociates her from feminism is her frequent self-doubt. Often doubtful about her 'unfeminine' activities, she was always insecure about being an uneducated woman and was not confident in her ability to write. She once wrote to Bernard of Clairvaux, one of the leading churchmen of the time, asking whether she should continue with her writing and composing, even though by that time she was already widely known and honoured as an incredible writer and musician.
But still, Hildegard was one of the few women in medieval history who wrote so freely and critically on everything, and she was viewed as an impressive writer, musician and religious leader on the basis of her achievements rather than her gender. That made her a feminist.
Margaret Cavendish

Margaret Cavendish was born in the 17th century into a family of well-established Royalist landowners, and later married the Duke of Newcastle. She was one of the few fortunate women of her time whose husband encouraged her to pursue her literary ambitions.
At first she wrote on topics mostly associated with women's internal struggles, ranging from worries about their families to the common fear and sorrow over children's sickness and death, though she had not herself experienced much of what she wrote about. Well received by women, her writings were found to be very moving and unexpectedly understanding of the harsh realities faced by many women at that time.
Later, Cavendish began writing philosophical verse. Though her work was widely recognised, like Hildegard she often doubted her capacity, and her duty as a woman, when writing. A modern biographer once remarked that Cavendish felt torn between 'the (feminine and Christian) virtue of modesty' and her ambitions. On the one hand, she was very serious and confident about her work; on the other, she often depreciated it and defended it with apologetic justifications.
From Cavendish's degradation of her own work, it seems she lacked the confidence of later feminists to denounce the conventional status of women and to declare loudly that her work should be equally recognised. But she wrote like a feminist in the sense that she brought womanly issues – thought to be unworthy of writing and of no political, social or any other importance – to public attention. She brought the dark side, the internal perspective of women's struggles in the household, into the open. She also spoke out against the hostility towards any woman regarded as outspoken or ambitious, which in her time was deemed madness and dangerous. Her writings encouraged later women to write, and urged them to unite rather than be jealously critical of other women's achievements. That made her a feminist.
Dorothy Osborne

Like Cavendish, Dorothy Osborne was born in the 17th century into a family of Royalists. She is comparatively less well known than our other three 'non-feminist' feminists, as she did not produce writings or theories as significant as those of Cavendish or Hildegard. She could even be seen as an anti-feminist by some, as she was one of the critics who heavily denounced the work of Cavendish, a more obvious feminist, as 'extravagant' and 'ridiculous'.
Funnily enough, what put her on this list is her criticism of other people's work (including Cavendish's) in letters exchanged between her and her husband. She read widely, often criticised other people's work heavily, and exchanged her thoughts in letters with her fiancé, later husband, Sir William Temple. These 'witty, progressive and socially illuminating' letters were later published and became large volumes of evidence that Dorothy was a 'lively, observant, articulate woman'. Even Virginia Woolf later remarked that Dorothy, with her great literary fashion, would have been a novelist in another time.
An additional point that made her a true feminist was her stand against arranged marriage and conventional family mindsets. In the 17th century, marriages were usually business arrangements, especially for a rich family like Dorothy's. In love with Sir William Temple, whom her family refused for financial reasons, she protested by remaining single and rejecting the multiple suitors put forward by her family. At last her struggle was rewarded, and she finally married Temple. But her feminist acts did not stop there: later references show that she was actively involved in her husband's diplomatic career and in matters of state, quite contrary to how an ordinary wife was expected to behave in the 17th century.
Dorothy never said that she was a feminist, or that she intended to act like one. But her thoughts, words and actions clearly show that she lived a feminist life as a free-willed, critical woman. That made her a feminist.
Mary Shelley

Mary Shelley, famous for Frankenstein, wrote several of the greatest Gothic novels of all time and is considered a pioneer of science fiction.
Mary Shelley was born in 1797. Her father was the political philosopher William Godwin, and her mother was none other than one of the first public feminist activists in British history, Mary Wollstonecraft. Under the influence of two great parents, Mary was encouraged to read, learn and write, and her father gave her an informal but nonetheless rich education.
She later fell in love with one of her father's followers, the Romantic poet and philosopher Percy Bysshe Shelley, who was then married. After the death of Percy's first wife, Mary and Percy married. They were certainly two talented, happy people, but Mary's luck then seemed to run out: three of her four children died prematurely, and Percy later drowned while sailing. In the last decade of her life, she was constantly sick.
Despite her miserable life, she produced great novels such as Valperga, The Last Man and The Fortunes of Perkin Warbeck. Perhaps because of her own sad story, her novels were strikingly dark, in the sense that they offered no hope for their characters. Mary's novels, especially her most famous, Frankenstein, did not have any strong female characters; ironically, most female characters died. From a conventional perspective her work is not feminist at all. But what she explored was the struggle of women in an age in which society was driven by reason, science and patriarchy. One commentator once said that '[t]he death of female characters in the novel is alone to raise enough feminist eyebrows to question how science and development is essentially a masculine enterprise and subjugates women.' She was radical in thought and critical of society's norms. Alongside writing, she devoted herself to raising and educating her only surviving child, Percy Florence Shelley, who later famously supported amateur performers and charity performances.
What made Mary Shelley a legendary woman, in addition to her great writings, was her strength in turning misery into energy and striving as an author and a mother despite her miserable life. That made her a feminist.
This article will use Russian spellings of Belarusian names for the sake of consistency.
When comparing the situation in Belarus today to the revolutions of 1989, we have to note that each country experienced a different revolution. Estonians, Latvians and Lithuanians in the Baltics were trying to reverse the fifty-year annexation of their nations that followed the Nazi-Soviet Pact of 1939. The Baltic protest movement also placed an emphasis on salvaging national cultures – particularly language. Poland's revolution was the result of a longer-term protest movement that began in the shipyards of Gdansk in the early 1980s under the helm of Lech Walesa. Romania saw the violent overthrow of the maverick, megalomaniac dictator Nicolae Ceausescu.
What we are seeing in Belarus is a combination of all three. The protest movement is fundamentally against a long-serving authoritarian dictator whose foreign policy modus operandi is to play east off against west, like Ceausescu. As in Poland, the Belarusian protest movement is spearheaded by striking workers. Finally, there is an element of the movement that campaigns for the revival of Belarusian national customs in place of the more 'Russified' and 'Sovietised' ones pushed by the incumbent system. Opposition leader Svetlana Tikhanovskaya seemed to suggest a blend of these three aspects in her interview with the independent Russian news site Meduza.
It must be said that Tikhanovskaya is not Lech Walesa, Lukashenko is not Ceausescu and Belarus is not the Baltic States. Nonetheless, we still see aspects of 1989 permeate the Belarusian protest movement.
The one aspect that is very different from 1989 is Moscow's willingness to intervene in Belarus. In the late 1980s, Mikhail Gorbachev rescinded the Brezhnev Doctrine – the idea that if a country in the Warsaw Pact tried to break away from the USSR's control, the other Warsaw Pact nations would intervene to quell the political dissent. In an interview with Russian state television on 27th August, Vladimir Putin essentially came up with his own version of the Brezhnev Doctrine. He said that Russian police forces would come into Belarus in the event that "extremist elements, using political slogans as cover, overstep a certain boundary." The fact that Putin publicly admits that Russian forces could be used in Belarus is a reassertion of the Brezhnev Doctrine in a more subtle form – in contrast to Russia's intervention in Eastern Ukraine, where the Russian government denies that its military is present. Putin's initiative is bold and risky, but if that is what it takes, in the view of the Russian leadership, to keep NATO out of Belarus, then so be it.
Russian support is the best chance Alexander Lukashenko has of surviving. Beyond the security services and the highest echelons of the Belarusian leadership, Lukashenko has little or no support in wider Belarusian society. The price he will pay for keeping himself in power, thereby protecting his own security and finances, is outsourcing more of his nation’s sovereignty to Russia.
Belarus’ protest movement does share some similarities with the 1989 revolutions in Eastern Europe if we look at some of its aims and the demographics of the opposition. However, Russia is more willing to intervene in the post-Soviet sphere than it was in 1989. It is therefore highly likely that, instead of moving away from Moscow’s sphere of influence, Belarus will end up much nearer to it.
When we planned the third episode of the first series of History in Politics’ podcast, Dead Current, on peacebuilding and memory politics, we decided to begin by asking a broad question: how does peace in the world today compare with peace in the world historically? Whilst we knew this question was challenging, in that it is very difficult to quantify an abstract concept such as peace, we overlooked the definitional complexities associated with the term.
The Cambridge Dictionary online defines peace as “freedom from war and violence, especially when people live and work together happily without disagreements”, but during our recording Dr Stefanie Kappler reminded us that peace is a deeply contested term: ‘peace’ can mean different things to different people and within different contexts. For example, if peace is understood simply as the absence of an official declaration of war, this may misrepresent actual relations. Dr Kappler explained to us that peace can be a loaded term, manipulated and used to pacify; she pointed to examples in history which have not been represented as war yet were very violent.
Dr Kappler referenced a difference between positive and negative peace, which adds a further layer to defining and analysing the concept. Johan Galtung, a foundational figure in Peace and Conflict Studies, distinguished between negative peace, the absence of large-scale violence, and positive peace, which goes beyond that to include provisions against structural violence that hinders, among other things, democratic processes and social mobility. Often when we refer to ‘peace’ we are only using the definition of negative peace, limiting our understanding.
Peace and its representation are inherently political concepts. When peace is discussed, a number of assumptions and biases interact with the use of the term. Perhaps, then, there is scope to go back through history and re-evaluate periods which have been classed as peaceful. To truly understand and apply the concept of peace to an event or time frame, the political context surrounding the use of the term, and who is using it, must be considered. Who stands to gain from a certain circumstance being labelled peaceful, and is there anybody whose experience is misrepresented by affixing this label? These are questions which should be asked in any consideration of conflict and peace.
Would this re-evaluation affect how we view contemporary occurrences and scenarios around the world today? And would it affect how we represent and interact with our own histories? Which aspects of modern politics could this pose a challenge to today?
So many political statements and beliefs rest on a certain narrative of history, and by challenging standard historical narratives we begin to challenge the foundations on which they are built.
To hear more of our discussion with Dr Kappler and Dr Olga Demetriou, and the other topics we covered, the full episode can be accessed here.
In many countries it is a criminal offence to deny the Holocaust; yet many historians have argued strongly against such laws. Do laws like these, passed by parliaments, unjustifiably limit freedom of expression? Or are they necessary for the remembrance of genocides such as the Holocaust or the Armenian genocide?
Holocaust deniers either state that the Jews were not killed in a systematic genocide or minimise its extent; some claims suggest they were instead victims of disease or other forms of indiscriminate hardship. The reality, as we well know, was “the most documented tragedy in recorded history”, as Holocaust survivor and Nobel Prize winner Elie Wiesel declared during a discussion at the White House in 1999. In response to the indescribable suffering inflicted by the Nazi regime, many countries have passed Anti-Denial laws, which criminalise both the promotion of Nazi ideology and the denial of the Holocaust. In France, there is a more general law on genocide denial, geared perhaps towards the Armenian genocide, which France commemorated formally for the first time in 2019. President Macron said during his 2017 presidential campaign, “France is, first and foremost, the country that knows how to look history in the face”, perhaps setting a precedent for other countries not only to pass Anti-Denial laws but also to commemorate such genocides.
However, historians have protested strongly against the more general French law on genocide denial, and against the concept more broadly. As Garton Ash writes for The Guardian, such laws “curtail free expression”. By restricting free expression by law, regardless of good intentions, the other freedoms it sustains are suffocated. Although former German justice minister Brigitte Zypries argues that “this historical experience puts Germany under a permanent obligation to combat systematically every form of racism, anti-Semitism and xenophobia”, Garton Ash contends there is no evidence that such a ban makes any significant difference. Many of the countries with laws against Holocaust denial (such as France, Germany, Lithuania, Romania, and Belgium) also happen to be countries with particularly strong right-wing xenophobic parties. These parties do not, of course, exist because of Anti-Denial laws; they persist independently of them.
When the French National Assembly passed a bill criminalising denial of the Armenian genocide in 2006, many felt, again, that this was a repression of free expression. Even the renowned Turkish-Armenian editor Hrant Dink passionately opposed such laws, as they placed limitations on discussion of what happened to the Armenians in 1915. In Turkey, meanwhile, it was illegal for Dink to describe these events as ‘genocide’, for which he was tried. Before his death, Dink responded to the first proposal of such a law in France: “I cannot accept that in France you could possibly now be tried for denying the Armenian genocide. If this bill becomes law, I will be among the first to head for France and break the law.” He continued, somewhat humorously, that we could then all watch whether the Turkish Republic or the French government would condemn him first.
Anti-Denial laws, while necessary in the remembrance of genocides, have proven a particularly contentious topic for historians. Although we promote free speech in society, there have to be limits. Having discussed both views, I would argue that the promotion of free speech should not act as a gateway to hate speech in any form.
Throughout history, journalism has served many different purposes. It has been used to promote public morale, to provide an antidote to social depression, and to expose injustice by amplifying the voices of the oppressed. However, in a world of fake news, important stories have been lost amid the rapid pace of today’s journalism. We can all remember the famous image of Alan Kurdi, the three-year-old Syrian boy who washed up on a Turkish beach. He drowned attempting to flee the conflict, yet today the crisis infrequently makes headlines, despite the fact that, to date, the war has displaced around 13 million people (NY Times).
The ability to shock and to expose the injustices of governments and societies is not new to journalism. William Howard Russell, often cited as the first war correspondent, exposed the British government’s disastrous handling of the Crimean War (1853-1856) and the devastating number of fatalities that ensued. This sparked a gripping new kind of journalism – one dedicated to precision, truth and immediacy.
There are cases throughout history of heroic individuals using journalism as a provocative agent of justice and a vehicle for truth. Nellie Bly, a journalist for the New York World, faked mental illness in order to be admitted to an asylum in New York City in 1887. The abusive treatment of the patients and the lack of sanitation, caused by government cuts, provoked outrage and forced the government to grant almost $1 million in new funds for the institution. Similarly, in 2012, investigative journalist Katherine Boo released a book titled Behind the Beautiful Forevers: Life, Death and Hope in a Mumbai Undercity, documenting the people she encountered in the three years she spent in the slums of Mumbai. These are clear examples of journalism promoting the voices of those often passed over in society.
Yet, with the changing mediums of journalism over time, today’s stories seem to have increasingly little stamina. From the initial spread of news through newspapers, to television and now online, there is a larger variety of platforms on which to receive news than ever before. With this increase in platforms, combined with online algorithms predicting what we want to read, logic suggests that stories need more stamina to penetrate the collective public conscience. An Ofcom report in 2018 found that 44% of adults consume online news via social media as well as TV, magazines and newspapers. However, social media was also one of the least trusted mediums of news.
This lack of trust hints at the modern socio-political ‘fake news’ crisis, which has become particularly potent through Twitter’s narration of the coronavirus pandemic (case in point: Trump advocating injecting disinfectant as a cure). With 145 million daily users on Twitter, there can be too much information and not enough clarity or importance given to the most groundbreaking stories, such as the rise in domestic violence in the UK during lockdown. Research from Google Trends has suggested that today a news story stays in the headlines for only seven days, likely a far shorter lifespan than in the era when newspapers were the only source of journalism.
Former journalist Alastair Campbell has emphasised the individualistic nature of modern journalism. In contrast to the era of early newspaper journalism, Campbell argues that today “everyone should think of themselves as a brand”, able to perpetuate either reputable, true news or fake news. News agencies now have teams who specialise in cracking down on the spread of disinformation. James Hamilton, a professor of communication at Stanford University, recently said in an interview, “journalism is said to be the first rough draft of history.” If this is true, then our greatest challenge is to give the most important and true stories the stamina to penetrate our conscience and make history, without the risk of disinformation from fake news, and to keep the tradition of provocative journalism alive so that history rings true.
To hear more about the risks fake news poses to journalism today, and the changing landscape of conflict, listen to our new podcast with former Guardian foreign correspondent Dr Rory McCarthy, available on Spotify via ‘Purple on Demand.’