How an overlooked conflict in the Vietnam War shaped the CIA

General Vang Pao, enlisted by the CIA to lead the Hmong soldiers (Credit: The Guardian, accessed on 28 November 2023, https://www.theguardian.com/world/2011/feb/22/vang-pao-obituary)

In north-central Indochina lies a sleepy nation, the land of a thousand elephants and white parasols – Laos. This idyllic Shangri-La, with its verdant peaks to the north and the broad Mekong snaking lazily south, became the site of some of the CIA’s formative operations, serving as a testbed for clandestine activity and shaping the organisation’s conduct for decades to come.

Why Laos:

Strategically situated on the road to Communist China, with several roads crossing the highlands into Vietnam and the easily navigable Mekong flowing down its spine towards Cambodia and South Vietnam, Laos was an essential component of the Ho Chi Minh Trail (HCMT) and the North Vietnamese war effort. Despite assurances of Lao (and Cambodian) neutrality under the Geneva Accords of 1954 that followed the French defeat at Dien Bien Phu, the North Vietnamese Army (NVA) had cells in northern Laos dating back as far as the 1930s. Now, as a power vacuum formed, they activated: 40,000 NVA soldiers and 2,000 fighters of the nascent Pathet Lao began the insurgency, while the French, supported by American funds, were overwhelmed.

The significance of Domino Theory within the White House should not be underestimated. It had already led the Americans to fight in Korea and Vietnam; now it would lead them to act in Laos and Cambodia to protect Thailand and prevent communism from taking hold in Southeast Asia. Aircraft bombing North Vietnam under Operation Rolling Thunder would drop any remaining bombs over Laos on the return leg. This, combined with three separate major air operations by the United States Air Force (USAF), means that Laos remains the most heavily bombed country per capita.

Covert Activities:

The USAF were not the only ones flying missions. American intelligence and special forces personnel were also being ‘sheep-dipped’, a process by which the army transfers a soldier to clandestine operations and so must provide cover for their ‘departure’ from the army and a plausible civilian job. These soldiers fell into two camps. The first were the Ravens, lightly armed or unarmed Forward Air Controllers (FACs) who, due to technological limitations, would identify targets for the USAF, flying low and fast between mountains and valleys under enemy fire. The second were the Air America and CASI pilots, who flew ‘logistical’ operations that remain the subject of much conjecture. Many claim that these pilots would transport the only cash crop the Hmong had to exchange – opium.

The claims suggest that while the trade did not personally enrich American pilots, they did turn a blind eye to it, inspecting only large packages and not smaller ones. This tacit approval would become the modus operandi for the CIA (who handled all clandestine activity in Laos) when dealing with cash-strapped guerrilla armies. The US was similarly culpable for allowing opium production to proliferate under the Mujahideen during the Soviet-Afghan War; Barry Seal was a CIA asset whilst transporting cocaine for the Medellín cartel; and then there was the infamous Contras case in Nicaragua – suggesting a pattern of using the drug industry to launder and move cash in exchange for goods and information between seedy individuals and the US Government.

Moreover, the use of proxy guerrilla armies supported by American weapons and a handful of highly effective personnel, rather than committing the entire military, proved an effective way to wage this war, and one that would be repeated. By enlisting the help of General Vang Pao, a messianic figure among the Hmong, the CIA assured their loyalty to America, and Hmong fighters of all ages fought valiantly, bearing the brunt of the conflict on America’s behalf. Cruelly, this also meant they were highly expendable. Following their defeat at Long Tieng, Vang Pao would be airlifted out, eventually to California, while his people were persecuted and forced into Thailand, where many remain as refugees five decades later. This model of using an expendable local force by forming an alliance with a key local powerbroker or warlord was especially effective during the Cold War, when victory in proxy wars was one of the only ways to inflict damage on the USSR. Support for UNITA in Angola, for tribal leaders in Afghanistan, and even attempts to invade Cuba such as the Bay of Pigs all used local forces backed by Americans without risking the lives of American soldiers. This was essential to the fulfilment of the Reagan Doctrine, as it meant the US could continue to fight Communism everywhere it reared its head, without going to Congress for funding or facing the war-weariness that infected the public during Vietnam, simply by using its financial heft and transferring some of its military might into local hands. In the case of the Mujahideen, this would backfire spectacularly.

The Laos Secret War was therefore one of the most essential and underrepresented theatres of the Vietnam War, principally because of its influence on the outcome of the broader conflict: Laos was a key nexus through which the NVA smuggled supplies, trained fighters, and resupplied their Viet Cong brothers in South Vietnam. More significant still is the legacy of the conflict, enormous for what was a sideshow to the main event. Not only were the people of Laos ‘bomb[ed] back to the Stone Age’, as USAF Chief Curtis LeMay intended, devastating the country even today, but the clandestine work that the CIA pioneered there would influence every facet of the organisation and its dealings for over four decades.

Rahul Bhatia

The Double-Edged Cutlass of the Grenada Revolution 

The City of St George’s, Grenada (P.C. Martin Falbisoner, Wikimedia)

Cuba was not the only Caribbean nation to experience a revolution and adopt communist ideologies in the twentieth century. The February Revolution of 1970 in southern neighbour Trinidad and Tobago proved the desire of many young Caribbeans to demolish colonial institutions and Euro-American capitalist power systems. It was Grenada’s revolution from 1979-1983, however, that truly captured the attention of the world, especially the United States.

The Grenada Revolution began on 13th March 1979 with the deposition of the first Prime Minister, Eric Gairy, by revolutionary Maurice Bishop, leader of the Marxist-Leninist New JEWEL Movement (NJM), which advocated for educational and welfare reforms along with political liberation. The NJM quickly established the People’s Revolutionary Government (PRG) with Bishop as the new Prime Minister and suspended the post-independence 1974 constitution, enabling them to rule by decree. Simultaneously, the NJM utilised their National Liberation Army to conduct armed takeovers of vital strategic locations such as police barracks and the national radio station.

Since 1974, Prime Minister Gairy had increasingly restricted the rights of political expression and attempted to quell all forms of opposition. On 21st January 1974, known to witnesses as “Bloody Monday”, Gairy’s private paramilitary, nicknamed the ‘Mongoose Gang’, was mobilised to subdue a mass protest, resulting in the death of Bishop’s father, injuries to numerous others, and a drastic gain in support for the NJM. Gairy’s fraudulent 1976 re-election, secured through the continued deployment of the infamous paramilitary, and rumours in 1979 of his plans to assassinate NJM leaders created the perfect storm for Bishop’s party to gain traction.

By March of that year, most Grenadians supported the “Revo” (revolution) and, once it was in full swing, Bishop and the PRG implemented social and political changes up until its end. The new government prioritised improvements to the education system, introduced a national literacy scheme, expanded free healthcare provisions, and encouraged voluntary labour schemes to construct housing and other physical infrastructure, such as the new international airport built with Cuba’s assistance. Additionally, the PRG authorised the creation of farming co-operatives to maximise the islands’ limited agricultural resources, decriminalised labour unions, and established the People’s Revolutionary Army (PRA). The domestic finances of Grenada would only stretch so far to fund these projects, since sanctions had meant diminished access to the IMF and the World Bank. Inevitably, this led to international economic support from Cuba, the Soviet Union, and other left-wing allies, much to the frustration of the Reagan administration.

However, a few of Bishop’s key decisions seemed contrary to the ideals of fellow NJM members and supporters. Firstly, the decision to retain the British monarch, Elizabeth II, as Head of State seemed paradoxical. After all, the monarch was a figure of the imperialism and colonialism that had once afflicted Grenada and from which revolutionary Grenadians wished to distance themselves. Nonetheless, Bishop allowed Elizabeth II to remain Head of State and kept the Crown’s representative, Governor-General Paul Scoon, in his position, believing that Grenada’s ties to Britain would give it stronger legitimacy and help it develop stronger international trade links. It should be noted that while Grenada’s relationship with the Crown did not change, the country was nonetheless sanctioned by Prime Minister Margaret Thatcher.

Secondly, in the autumn of 1983, the central committee of the PRG attempted to convince Bishop to enter into a power-sharing agreement with Finance Minister and Deputy Prime Minister Bernard Coard to ensure more focus on political as well as social stability for Grenada’s future. This proposition caused a massive internal fissure within the PRG, with accusations that Bishop was deviating from the ideological prerogatives of the NJM and had no coherent vision. Following Bishop’s rejection of the proposal in October, he was placed under house arrest, leading to demonstrations flaring up across the islands as Coard assumed leadership of the PRG. Despite being freed by demonstrators and attempting to evade recapture, Bishop and seven others were executed by a PRA firing squad on 19th October. What followed was the rapid collapse of the PRG and a six-day military junta led by former PRA General Hudson Austin. This, too, came to a grinding halt with the invasion of Grenada by the coalition forces of the US and six Caribbean nations on 25th October 1983. Overall, from 1979 to 1983, Grenada occupied a unique position as a pro-monarchy, (at times) stable, socialist Caribbean nation, and a huge concern to the United States.

Stephanie Ormond, Summer Writer

Sport as Conflict

Sport fulfils a number of roles in society: it unifies people and nations behind a team, it provides children with role models, and it often brings the international community together through tournaments. It could be said that it has another role: a less violent alternative to war. Nuclear deterrents, increased globalisation, effective international organisations, and a desire not to repeat the horrors of the World Wars have made nations unprecedentedly reluctant to engage in warfare – so much so that a border skirmish or an invasion of national airspace causes anxiety. And so, nations have had to find new ways to obtain the ends traditionally achieved by large-scale war. Proxy wars, economic sanctions, and cyberwarfare generally fill this void, but a more peaceful alternative is sport. Usually, international competitions are played in good spirit between friendly nations, but they are often politicised by nations seeking to dominate or get back at their rivals, demonstrate supposed superiority, or boost national pride – ends traditionally achieved by warfare.

The Olympics have repeatedly been used as an outlet for nations to demonstrate supposed superiority. Infamously, Nazi Germany tried and failed to use the 1936 Berlin Olympics to showcase ‘Aryan dominance’, only for African American Jesse Owens to win four gold medals. More recently, Communist China seemingly viewed the 2020 Tokyo Olympics as an outlet to demonstrate national and ideological superiority. The ideological aspect was plain to see in the furore caused by two Chinese cyclists wearing pins of Mao Zedong after winning gold. In wearing those pins, the cyclists made clear for what and for whom they were winning their medals.

The Olympics have also been used by major powers in conflict. During the Cold War, the US and its allies famously boycotted Moscow 1980, to which the USSR and its allies retaliated by not attending Los Angeles 1984. In organising boycotts, the two superpowers both voiced their opposition to their rival and showcased their global influence through the number of countries they got on board. With the 2022 Beijing Winter Olympics approaching, worsening relations between China and the West have led some figures in the US, UK, and Australia to call for a boycott of those games. Once again, conflict between major powers is accompanied by Olympic boycotts.

The 1980 Moscow Olympics. (Credit)

Fortunes on the battlefield were historically tied to national pride. Trafalgar made Nelson a national hero in Britain; Ataturk’s victories mean he is still revered in Turkey. Nowadays, sport has a similar, if not quite as strong, link. Italy’s victory at the Euros feeds perceptions of 2021 being a good year for them: they have also won Eurovision and are undergoing a political and economic turnaround under new Prime Minister Mario Draghi. Likewise, England reaching the final of the Euros for the first time coincided with the end of lockdown. The UK was seen as being on an uptick – a reversal of fortunes after a devastating pandemic and decades of sporting failure. The unifying effect the football tournament had, and how many commentators argued that the England team came to ‘embody Englishness’, is testimony to the powerful effect of sport. England’s national pride was inextricably tied to football.

On the flip side, losing competitions to major rivals can wound national pride. The Tokyo Olympics once again provide an example of this when the Taiwanese badminton team beat the reigning Chinese champions. This provoked outrage among Chinese nationalists at the thought of being beaten by a country which they perceive as a breakaway province.

On that note, sporting competitions provide an opportunity for revanchism, a chance to ‘get back’ at countries with which players’ nations have been in conflict. Famously, this occurred at the 1956 Olympics during a water polo match between Hungary and the USSR. Hungary had recently been invaded by Soviet forces, something the Hungarian water polo team had witnessed. When the competition went ahead, the match turned violent and a Hungarian player left with a bloody gash on his head, leading to it being dubbed the ‘Blood in the Water’ match. Controversy also abounded at the recent Euros when Serbian-Austrian striker Marko Arnautovic made derogatory comments to the North Macedonian football team. The Balkan country has recently had tense relations with Serbia over its decision to support Kosovo.

And so, there are clearly ways in which nations use sport to fulfil the traditional role of conflict. Sport allows nations to best their rivals in a way that they cannot with other alternatives to conflict (such as proxy wars, cyberwarfare or sanctions). Likewise, sporting dominance is an alternative way to boost national pride, much as military dominance was in previous centuries. Nonetheless, sport between actual rival nations does not negate conflict, but rather creates another outlet for it. Taiwan may have beaten the PRC in a badminton match, but this does not mean war between the two countries cannot break out. The US and USSR did boycott each other’s Olympics, but this did not change the fact that they were always on the brink of war. International tournaments, in many scenarios, are simply the modern, peaceful rendition of the age-old desire for nations to best each other.

Jonas Balkus, Summer Writer

Has the United States Fallen Victim to the Graveyard of Empires?

Recent events in Afghanistan have been tinged with a saddening inevitability. After twenty years, the United States is withdrawing from Afghanistan, and with it the Afghan government it propped up has fallen to the Taliban. It seems that everything the US had set out to do in Afghanistan has been undone, and the US’ worldwide reputation has been gravely damaged. A look at Afghanistan’s history makes the events of the last few months seem like a familiar story in the so-called ‘Graveyard of Empires’. This sobriquet has been applied to Afghanistan due to the immense difficulty foreign powers face in trying to completely conquer and administer it.

An American soldier in combat gear. (Credit: The US Army via Creative Commons)

Afghanistan has proven a difficult challenge for many empires. Alexander the Great’s army suffered immense casualties in taking Afghanistan. During the Muslim conquests, it took two centuries for Arab Muslims to defeat the Zunbils of Afghanistan whereas it had taken only decades for them to bring down the empires which had dominated the Middle East since antiquity. When the Mongols swept through Asia and Europe, they faced strong and effective resistance from the groups living in Afghanistan. The casualties inflicted upon the Mongols included Genghis Khan’s favourite grandson. 

In more modern times, the British waged three disastrous Anglo-Afghan wars between 1839 and 1919 while competing with Russia for influence in Central Asia. The most infamous of these conflicts was the First Anglo-Afghan War, in which the entire British force in Afghanistan – totalling about 16,000 people – was wiped out in its retreat, bar one survivor. In the end, the British failed to take Afghanistan, making only pyrrhic gains which were undone following the Third War in 1919. The Russians subsequently also suffered defeat in Afghanistan. In 1979, the USSR invaded Afghanistan to support the newly-established Communist government against insurgents. Like the British, the Soviets suffered heavy casualties against local guerrilla fighters, with their death toll reaching 15,000, with 35,000 wounded, by the end of the war. In both instances, the invading forces heavily outnumbered the local fighters and yet still took huge casualties.

Similar patterns played out in 2001, when a coalition of Western powers led by the United States invaded Afghanistan to topple the Taliban government, which had been hosting terrorist groups including Al-Qaeda. Western troops faced stiff resistance from the Taliban, who, once routed, waged war from the shelter of fortified Afghan hills, from which Western forces were ultimately unable to dislodge them. And yet the US, unlike the British or the Russians, was able to take Afghanistan and establish a new government there. Yet this government eventually collapsed, and was only ever nominally in control of the entire country. So, determining whether the US’ defeat was simply due to the factors which plagued previous invading powers – whether it was a victim of the ‘Graveyard of Empires’ – warrants an examination of why Afghanistan has been awarded that title.

In conquering and administering Afghanistan, the key challenge is terrain. Afghanistan hosts some of the highest mountains in the world. The Hindu Kush mountain range runs through the country, isolating communities and dotting the map with caves and easily fortifiable positions. This isolation makes it difficult for a central government to have control over local areas. Communities have tribal power structures and local tribal leaders are the power brokers. But this Afghan tribalism is primarily a product of Afghanistan’s vast diversity. 

Through the centuries, Pashtuns, Tajiks, Hazaras, Uzbeks and other ethnic groups have all settled in Afghanistan, producing a very divided nation. Many Afghans identify with their local ethno-cultural group rather than the nation of Afghanistan, which grew out of an eighteenth-century empire rather than being formed from nationalist ideas and a shared culture, as Italy or Germany were. In earlier centuries, this diversity resulted in a great deal of conflict, and so much of rural Afghanistan is heavily fortified. This has been exacerbated by Afghanistan being in a constant state of conflict since the 1970s. And so, Afghanistan is hard to administer, traverse, and conquer, and its people are hard to unite, especially behind a central government such as the US-backed Republic.

These factors may suggest that American failure was inevitable and that a prolonged presence in Afghanistan was folly. Yet this is not necessarily the case. Many empires in the past did manage to conquer and administer Afghanistan; indeed, for most of its history, it has been part of a foreign empire. Its more successful rulers, such as the Mughals, understood and took advantage of local culture and power relations and governed Afghanistan quite loosely. While the United States did attempt to use local power relations to its advantage, these attempts, such as supporting local warlords, often backfired and undermined confidence in the central government.

But more damagingly, the US’ main focus was to transplant an American-style political system and social structure to a country where they were fundamentally alien. The recent report published by SIGAR – the body which oversaw the US’ reconstruction efforts in Afghanistan – details numerous instances where the Americans, in dealing with corruption, solving local disputes, and training the army, among other things, assumed that such issues could be dealt with in Afghanistan in the same manner as they would be in the United States. The US did construct a more democratic Afghan state, and there were improvements under this state, but it was dysfunctional and would have been perpetually reliant on the US. Had the US been more attuned to local culture, it would still have made an improvement in Afghanistan, one which, importantly, would have been far more sustainable. The general difficulties encountered by foreign states in Afghanistan did make administering the country a hard task for the US, but it was its own approach to doing so which led to its failure. The United States has not fallen victim to ‘the Graveyard of Empires’ but rather to a serious deficit in pragmatism.

Jonas Balkus

“It’s not the Virgin Mary. It’s a painting.” Whiteness’ politicised grip on Iconography explored around Ofili’s depiction of the Holy Virgin Mary.

16 December 1999. New York City. With his hands as his weapon of choice, Dennis Heiner waltzes into the Brooklyn Museum with vengeance, walking purposefully to the corner of the Sensation exhibit in which his victim awaits. Dipped in white paint smuggled inside in a hand sanitizer bottle, Heiner’s hand meets its target: the ‘blasphemous’ depiction of the Virgin Mary painted by British artist Chris Ofili in 1996. Having failed to prevent the attack, the guards protecting the painting reportedly state, “it’s not the Virgin Mary. It’s a painting”.

Iconographic imagery proliferates in Western art history. Whether in the form of paintings or sculptures, from Da Vinci’s The Last Supper to Duccio’s Madonna and Child, one theme is consistent throughout the canon: the whiteness of its figures. Ofili’s Black Virgin Mary embodies an attempt to broaden the canon in line with non-Western expressions of religiosity, and Heiner’s vandalism embodies the Western canon’s resistance to change. If Heiner was supposedly incensed to violence by Ofili’s use of pornographic cut-outs surrounding the Madonna, why then did he smother her face, rather than her surroundings, with white paint? This was nothing short of a political, defensive and violent display of whiteness. Ultimately, Heiner’s smearing of white paint on the figure of the Black Madonna symbolises the whiteness which has historically been imposed onto Western understandings of Christianity and continues to mark both art-political and theological discourse, as is explored here.

The Ofili piece being vandalised. (Credit: Philip Jones Griffiths, Magnum Photos, accessed here)

Upon a yellow-gold background which harks back to medieval iconographic trends, Ofili’s Madonna features a breast moulded from elephant faeces and a beautiful blue outfit interwoven with the contours of her body. Inspired by his time in Zimbabwe and appreciation of artistic technique in the region, Ofili’s work mixes European and African tradition with an expert experimentalist hand. However, its beauty has often been overlooked. 

Chris Ofili, The Holy Virgin Mary (1996) (Credit: MoMA)

The controversy surrounding the painting was not limited to the violence of Heiner on that December afternoon. Upon its showcase in galleries, lawsuits abounded, including from the Mayor’s office of New York, in which Ofili’s use of pornographic clippings and elephant faeces was labelled ‘disgusting’ and ‘sick’. In a seeming attempt to find a middle ground, some commentators have described the piece as a ‘juxtaposition between the sacred and the profane’, but this is a misguided conclusion which reinforces the hegemony of whiteness’ grip on iconography. Within Africa, the piece was interpreted very differently. Nigerian art historian Moyosore B. Okediji wrote that elephant dung was a material used in both art and architecture in Yorubaland, also commenting that in artistic depictions of indigenous deities called Orisha, the nude female form was commonplace. In specific relation to Western reactions to the piece, he exclaimed, “the learned West always fails to understand Africa”.

Ultimately, the use of elephant dung was not an act of defamation but a display of Christian identity based in African tradition. Born in Britain and of Nigerian heritage – a country which now has a Christian population of around 102 million people – Ofili’s aim with this project was to display an African form of Christianity, rather than mount an anti-Christian attack. Whilst some commentators have described the sacred intentions of Ofili’s painting as ‘ironic’ in relation to the reaction it garnered, we must acknowledge that his use of images of the body and earthly materials is not objectively irreligious. In fact, there is nothing ‘ironic’ about a painting espousing a form of Afro-centric religiosity. Rather than conveying Ofili’s intentions, the label of ‘irony’ imposed onto his combination of African tradition and Christian ideas instead displays the extent to which whiteness has been defensively protected in Western Christian expression.

Therefore, it is vital that we acknowledge that the lawsuits and physical assaults inflicted on Ofili’s depiction of ‘The Holy Virgin Mary’ are innately political and are the product of centuries of European Christian history in which whiteness was not just centred but expected when speaking of divinity. The piece being deemed ‘blasphemous’ by a plethora of white Americans, art commentators and religious leaders at the turn of the twenty-first century shows how Christianity’s global reach remains overlooked in favour of centring whiteness. Ultimately, it must be said that the security guards tasked with her protection stating “It’s not the Virgin Mary, it’s a painting” embodies a reluctance to associate holiness with anything other than white skin. 

By way of a conclusion, Ofili’s depiction of the Holy Virgin Mary is nothing short of an emblematic display of Afro-centric Christian theology. The reaction to the piece both within and beyond the art and theological worlds reveals the seemingly inextricable link between whiteness and holiness in Western thought. As Black theologians both in the West and throughout the world challenge these ideas, and artists such as Ofili bring the debate into the public eye, the politics of race and theology will no doubt continue to be a much-needed area of inquiry.

George R. Evans, History in Politics’ Summer Writer

10 Movies Showing the Evolution of Gender Equality in Hollywood

Films are often said to be depictions of the society and time in which they were made. This is especially true of society’s view of women: how a particular film depicts women reveals how people at that time embraced feminist ideas and, more generally, regarded women. In this article, I am going to introduce ten films that show how audiences, and Hollywood as an institution, think about female characters.

Legally Blonde (2001) 

The first film is ‘Legally Blonde’, starring Reese Witherspoon in the lead role. Many people watched this film as teenagers, especially women aspiring to become lawyers. Many would say that this is clearly a feminist film, and a huge step for Hollywood in depicting women as strong, independent and critically minded.

Reese Witherspoon in Legally Blonde (2001). (Source: IMDB)

However, when we pay closer attention to the details of the film, we see that it is actually filled with stereotypes about women: for example, that women like the colour pink, that chasing love is the main goal in a woman’s life, and that investing in fashion and beauty is the second. Although the film did try to attack the ‘dumb blonde’ stereotype, the other stereotypes demonstrate that audiences and Hollywood still depict women as very different from men. This clearly shows that Hollywood is still a long way from achieving gender equality in script writing and character creation.

Iron Jawed Angels (2004), The Stepford Wives (1975, 2004), North Country (2005)

Then we have ‘Iron Jawed Angels’, starring Hilary Swank, ‘The Stepford Wives’, starring Nicole Kidman, and ‘North Country’, starring Charlize Theron in the lead. I expect fewer young people to have watched these films, as they are made in a nostalgic style, reminiscent of post-World War Two films.

A common feature of these films is that, while Hollywood was placing more attention on women’s issues, these issues mostly centred on women being suppressed. ‘The Stepford Wives’ featured women being controlled by men and technology, forced into conforming to traditional stereotypes of the good wife and mother. Both ‘Iron Jawed Angels’ and ‘North Country’ dramatise real events: the former featured female suffragists’ struggles in the early twentieth century, the latter featured women being harassed and discriminated against in the workplace at a time when people had started hiring women for traditionally male-dominated jobs. Whether the suppression comes from men, the system or politics, these films share a common premise: if a film is about women, it should be about how women are suppressed and how they fight against that suppression. While this was a good start in drawing attention to women’s oppression, from a modern feminist perspective these films inevitably depicted women as defined by their disadvantage, and therefore as victims.

Hidden Figures (2016) 

Then comes ‘Hidden Figures’. This is said to be a huge breakthrough for female characters: even though they are still discriminated against and suppressed, female characters are finally depicted as having the same, or even a higher, intellectual level than the other sex. Another breakthrough is that this ‘female-centered’ film is made in a way that targets both male and female audiences. ‘Hidden Figures’ demonstrates that ‘female-centered’ films can be shown on the big screen and their characters taken seriously; female characters can be judged by the same standards and for the same qualities as their male counterparts. The film also begins to explore the duality of oppression through its discussion of African-American women, something rarely depicted in Hollywood films.

Wonder Woman (2017) 

‘Wonder Woman’ is always included when talking about feminist film, and this is justified. The character and plot of Wonder Woman are empowering and encouraging for anyone. More importantly, Wonder Woman is the first female hero in a male-dominated ‘universe’ to be taken as seriously as lead male heroes such as Superman and Batman. Most DC or Marvel female characters, such as Harley Quinn, Catwoman and Batwoman, derive their value largely from their male counterparts, being interesting mostly because of their relationships with the Joker and Batman. Unlike them, Wonder Woman is an icon and an interesting character in her own right. Also, the fact that ‘Wonder Woman’ is filmed in a way that does not make a huge stir about its hero being a woman shows that Hollywood is depicting female leads with significant attributes other than their gender.

Bombshell (2019) 

Interestingly, ‘Bombshell’ stars Charlize Theron, who also played the suppressed, harassed female lead in ‘North Country’, included above. Though both ‘Bombshell’ and ‘North Country’ featured women being sexually harassed and discriminated against in the workplace, obvious contrasts can be observed. For example, support in society for the harassed female characters is more available in ‘Bombshell’, and people are less uncomfortable with the characters’ claims of harassment. The most obvious improvement is that women are finally depicted not as weak, but as strong, career-driven, confident characters. The power to make a change and to help others is also placed entirely in the hands of women themselves, instead of depending on the benevolence of male characters, such as the lawyer and one of the co-workers in ‘North Country’. Comparing the two films, both dramatisations of real events from their respective times, also shows clearly how women’s status has grown over the years.

I Care a Lot (2020), Promising Young Woman (2020), Pieces of a Woman (2021)

With the Marvel trend quieting down after ‘Avengers: Endgame’ in 2019, a film trend featuring smaller productions, calmer plots and women has started. There are films like ‘Birds of Prey’, starring Margot Robbie as a Harley Quinn who finally gets rid of her ties to the Joker, ‘I Care a Lot’, starring Rosamund Pike, who also starred in ‘Gone Girl’ (2014), and ‘Promising Young Woman’, starring Carey Mulligan. These are, I would say, films with a more overt feminist message, compared with the more implicit messages of earlier films such as ‘Zero Dark Thirty’ (2012), ‘Miss Sloane’ (2016) and ‘Lady Bird’ (2017), which had feminist lead characters but not feminist plots. A trend of overtly feminist films, all of which received huge accolades, shows that Hollywood is more confident in making gender-equality films, and that audiences are more accepting of strong feminists on screen than in the past. I am confident that had these films been released 10 to 20 years ago, they would have been criticised as “too radical”. Still, Hollywood has a long way to go before reaching true gender equality and recognising the issues that intersect with feminism, such as sexuality, gender identity, race and class.

Chan Stephanie Sheena

Diamonds, Best Friend or Mortal Enemy?

Diamonds symbolise love, wealth, and commitment to both the purchaser and the recipient; after all, they are known to be a woman’s best friend. Yet the process of retrieving such a valuable commodity remains a battleground for those who work in the diamond mines. Alongside diamond production come worker exploitation, violence, and civil war, proving that beauty is, in fact, pain.

The tale of the present-day diamond market began on the African continent, in South Africa to be precise. The Democratic Republic of the Congo now ranks fourth in the world for diamond production, with 12 million carats produced in 2020, and the African continent dominates the top 10 rankings, with seven of its 54 countries among the world’s largest diamond producers.

Congolese workers searching for rough diamonds in mines in the south west region of Kasai in the Democratic Republic of Congo, August 9, 2015. (Credit: Lynsey Addario, Getty Images Reportage for Time Magazine)

The diamond trade contributes approximately $8.5 billion per year to Africa, and Nelson Mandela previously stated that the industry is “vital to the Southern African economy”. The wages of diamond miners, however, do not reflect the value of this work or its contribution to the financial expansion of African countries. An estimated one million diamond diggers in Africa earn less than a dollar a day, an unlivable wage falling below the extreme poverty line. Despite the significant revenues from the diamond industry, both through taxation and profit-sharing arrangements, governments often fail to re-invest these funds in local communities. The government in Angola receives about $150 million per year in diamond revenues, yet conditions near major diamond mining projects are appalling: public schools, water supply systems and health clinics are nearly non-existent. Many African countries are still healing from the impact of colonisation and are dealing with corruption, incompetence and weak political systems. It therefore comes as no surprise that governments fail to invest their diamond revenues productively.

In addition to being excessively underpaid and overworked, miners endure exceptionally hazardous conditions, often lacking safety equipment and tools adequate for their role. Injuries are an everyday possibility for miners, sometimes proving fatal, and the risk of landslides, mine collapses and a variety of other accidents is a constant fear. Diamond mining also contributes to public health problems, since the sex trade flourishes in many diamond mining towns, leading to the spread of sexually transmitted diseases.

Children are considered an easy source of cheap labour, and so they are regularly employed in the diamond mining industry. One survey of diamond miners in the Lunda Norte province of Angola found that 46% of miners were between the ages of 5 and 16. Life as a diamond miner is full of hardship, and this appalling way of living is only made harsher for children, who are more prone to injuries and accidents. Since most of these children do not attend school, they tend to be pigeonholed into this way of life throughout adulthood, robbing them of their childhood and of bright futures.

African countries such as Sierra Leone, Liberia, Angola and Côte d’Ivoire have endured ferocious civil conflicts fuelled by diamonds. Diamonds that instigate such wars are often called “blood” diamonds, as they intensify civil wars by financing militaries and rebel militias. Control over diamond-rich territories causes rival groups to fight, resulting in bloodshed, loss of life and disturbing human rights abuses.

Whilst purchasing diamonds from a conflict-free country such as Canada can buy you a clean conscience, you must not forget the miners being violated every day for the benefit of others but never themselves. Just as we have the opportunity to choose fair trade foods that benefit their producers, consumers of one of the most valuable products they may ever own should not be left in the dark about the strenuous digging miners do behind the stage of glamour and wealth. A true fair trade certification process must be put in place through which miners are adequately rewarded for their dedication and commitment to such a relentless industry, especially in countries still processing the generational trauma caused by dominating nations.

Lydia Benaicha, History in Politics Contributor

Does the Electoral College Serve the Democratic Process?

“It’s got to go,” asserted Democratic presidential candidate Pete Buttigieg when speaking of the electoral college in 2019 – reflecting a growing opposition to the constitutional process, which has only been heightened by the chaotic events of the past weeks. Rather than simply reiterating the same prosaic arguments for the institution’s removal – the potential subversion of the popular vote, the overwhelming significance of battleground states, the futility of voting for a third party, and so forth – this piece will consider the historical mentalities with which the electoral college was created, in an effort to convey the ludicrous obsolescence of the institution in a twenty-first-century democracy.

Joe Biden and Kamala Harris preparing to deliver remarks about the U.S. economy in Delaware, 16 November 2020. (Credit: CNN)

In its essence, the system of electors stems from the patrician belief that the population lacked the intellectual capacity necessary for participation in a popular vote – Elbridge Gerry informing the Constitutional Convention, “the people are uninformed, and would be misled by a few designing men.” Over the past two hundred years, the United States has moved away from the early modern principles encouraging indirect systems of voting: for instance, the seventeenth amendment established the direct election of senators in 1913. It has also seen the electors themselves transition from the noble statesmen of the Framers’ vision to the staunch party loyalists that they so greatly feared. In fact, the very institution of modern political parties had no place in the Framers’ original conception, with Alexander Hamilton articulating a customary opposition to the “tempestuous waves of sedition and party rage.” This optimistic visualisation of a factionless union soon proved incompatible with the realities of electioneering and required the introduction of the twelfth amendment in 1803, a response to the factious elections of 1796 and 1800. Yet, while early pragmatism was exercised over the issue of the presidential ticket, the electoral college remains entirely unreformed at a time when two behemothic parties spend billions of dollars to manipulate its outcome in each presidential election cycle.

The Constitutional Convention was, in part, characterised by a need for compromise, and it is these compromises, rooted in the specific political concerns of 1787, that continue to shape the system for electing the nation’s president. With the struggle between the smaller and larger states causing, in the words of James Madison, “more embarrassment, and a greater alarm for the issue of the Convention than all the rest put together,” the electoral college presented a means of placating the smaller states by increasing their proportional influence in presidential elections. While it may have been necessary to appease the smaller states in 1787, the since unmodified system still ensures that voters in states with smaller populations and lower turnout rates, such as Oklahoma, hold greater electoral influence than those in states with larger populations and higher rates of turnout, such as Florida. Yet it was the need for compromise over a more contentious issue – the future of American slavery – that compelled the introduction of the electoral college further still. Madison recognised that “suffrage was much more diffusive in the Northern than the Southern States” and that “the substitution of electors obviated this difficulty.” The indirect system of election, combined with a clause that counted three of every five slaves towards the state population, thus granted the slaveholding section of the new republic much greater representation in the election of the president than an alternative, popular vote would have permitted. At a time when the United States’ relationship with its slaveholding past has become the subject of sustained re-evaluation, its means of electing the executive remains steeped in the legacy of American slavery.

It takes only a brief examination, such as this, to reveal the stark contrasts between the historical mentalities with which the electoral college was established and the realities of a modern, democratic state. Further attempts to reform the institution will no doubt continue to come and go, as they have over the past two hundred years. However, when compared with the environment in which it was proposed, it is clear that the unreformed electoral college is no longer fit for purpose and must, eventually, give way to a system in which the president is elected by a popular vote.

Ed Warren

Debates Take On a Different Meaning in the “Worst Year Ever”

The Trump-Biden debates are wrapped up, and for the “Worst Year Ever” they didn’t disappoint. The first debate was widely condemned as the “Worst Debate Ever”. Both candidates talked over each other, and it was near-impossible to understand them. Biden faced calls to boycott the other debates. Trump made this decision for him, falling ill with COVID-19.

President Donald Trump and Democratic candidate Joe Biden take the stage in their final debate of the election campaign, Nashville, Tennessee. (Credit: Reuters)

Anyone who saw even brief highlights of the first debate could be forgiven for giving up on the whole institution of debates. But this would be extremely unwise. Yes, Trump interrupted Joe Biden a staggering 128 times. And admittedly, Joe Biden did reply by telling him to “shut up” and calling him a “clown”. Yet this wasn’t the breakdown of the debate as an institution. Rather, it was an additional insight into who the two candidates are, and how they will act in the face of the adversity that Presidents experience on a daily basis.

The problem with the way we view debates is that we anticipate 90 minutes of detailed and virtuous policy discussion. There is no clearer example of this fantasy than the West Wing episode, in which the two candidates running for president have a high-minded and theoretical exchange of views on what it means to stand as a Republican or a Democrat. In reality, presidential debates have little to do with policy. Most voters are unswayed by the arguments of the candidates; they may have little trust in them, or have made up their minds previously. The one area where debates really count is character.

The focus on character may be why the UK has lacked similar-style pre-election debates, and why attempts here have enjoyed less success. The presidency is a position uniquely judged by the character of its occupant, and in the build-up to 2020 President Trump’s character – depending on who you ask – has been viewed as his biggest strength or weakness. This really gets to the crux of what debates are, and what they have always been – a blank slate.

The debate is one of the few foreseeable major events in a campaign. But that is all that can be foreseen: the event. Most voters are aware of it, and around 80 million will watch it, but the candidates are under no obligation to make it a debate on the state of America. Like most other political realities, the in-depth policy debate was an unwritten rule, held up by the ‘honour system’, and President Trump lacks this honour.

Using debates for non-policy advantages is as old as the institution itself. In the first ever presidential debate in 1960, Nixon faced off against Kennedy. Nixon turned up looking sickly and sweaty, whilst JFK was the epitome of suave New England style. Accordingly, whilst radio listeners thought Nixon had performed better, TV viewers agreed that Kennedy had won the debate. The echoes of 1960 were clear in Mr Trump’s first performance, in which he waved his hands, stood firm, interrupted, and generally tried to give the impression that he was in control of events on the stage. Yet Mr Biden was not immune from these gimmicks either – he would flash a smile whenever the president made an outrageous claim, as if to say, “look at this clown – does he have what it takes to fill the office?”

Vice President Richard Nixon and Senator John F. Kennedy in their final presidential debate; 21 October, 1960. (Credit: AP Photo)

The pressure of these debates is intense. Each given candidate will have three or four separate strategies they’re trying to pursue, and they have to juggle all of them whilst simultaneously readjusting their approach depending on which hits are landing. In the first debate, Mr Trump was balancing trying to present Joe Biden as senile, racist, and yet also a radical socialist. The president struggled with these conflicting narratives, especially as he hoped that constantly interrupting Mr Biden would force the former vice president into a memorable gaffe. Ultimately, it was Mr Trump’s inability to change his approach in the debate that cost him more than any of his policy errors, and formed the main narrative of the debate in its aftermath.

But there was ultimately something more sinister going on. Donald Trump’s biggest election worry is high turnout – Republicans usually vote reliably, but Democrats are much more vote-shy. This is doubly true of young people. Accordingly, the president may have been playing a deeper game during the first debate, one which he executed outstandingly. President Trump saw an opportunity to portray the debate as an irrelevant contest between two old white men – not dissimilar to how young Americans view the election already. Mr Trump’s constant interruptions made the debate unbearable to watch, but he ultimately wanted that. He may not have done well with the few undecided voters left in the campaign. He will care little. The bigger constituency was voters undecided between voting for Biden or staying home. The first debate looked exactly like two old men bickering, and for Trump that’s as close to a debate win as he can get.

Seth Weisz

What Will Happen Now Ruth Bader Ginsburg’s Dead?

You cannot understand the confirmation process of Amy Coney Barrett without understanding that of Robert Bork. Nominated by Ronald Reagan in 1987, Bork was a polarising figure, known for his disdain for the supposed liberal activism of the court. Ted Kennedy of Massachusetts, deeming Bork to be too radical for the court, turned away from the bipartisan tradition of assessing a nominee’s qualifications rather than values. The Judiciary Committee hearings featured hostile questioning, and Bork was ultimately rejected by 58-42 in a Democratic-majority Senate. The events produced the term “borked,” referring to the vigorous questioning of the legal philosophy and political views of judges in an effort to derail their nomination. The legacy of Bork lives on today.

The death of Supreme Court Justice Ruth Bader Ginsburg (RBG) has triggered a high-stakes nomination process just weeks before the election. The Supreme Court is the highest level of the judicial branch in the US, with Justices nominated by the President and voted on by the Senate. The process usually takes a few months, with nominees being interviewed privately by senators, and then publicly by the Senate Judiciary Committee, before being forwarded by the committee to be voted on in the Senate. 

Ruth Bader Ginsburg, 2014. (Credit: Ruven Afanador)

However, Barack Obama’s final year in office altered the traditional conception of nominating Supreme Court Justices. With the death of Justice Scalia in 2016, Obama, in alignment with the Constitution, nominated Merrick Garland to fill the seat. However, in what political scientists Steven Levitsky and Daniel Ziblatt deemed “an extraordinary instance of norm breaking,” the Republican-controlled Senate refused to hold hearings. Senate majority leader Mitch McConnell argued that in an election year the Senate should wait until a new President had been elected, thus giving “the people” a say in the nomination process.

His position proved polarising. The practice of the Senate blocking a specific nominee (as in the case of Bork) would usually be fairly uncontroversial, having even happened to George Washington in 1795. The issue was McConnell preventing an elected President from filling the seat at all, something that had never happened in post-Reconstruction US politics.

Yet the death of RBG has shown this precedent to be short-lived. Despite a Court seat opening up even closer to the election, the vast majority of Republicans have accepted McConnell’s present claim that his own precedent doesn’t apply in an election year if the same party holds both the Senate and the Presidency. Thus, President Trump’s nominee, Amy Coney Barrett, looks set to be confirmed.

It’s unknown how polarising her confirmation will be. The hearings of Clarence Thomas in 1991 were dominated by the questioning of Anita Hill over her allegations of sexual harassment against the then-nominee, with Thomas accusing the Democrat-led hearing of being a “high-tech lynching for uppity blacks who in any way deign to think for themselves.” The 2018 Kavanaugh hearings echoed this process, with the then-nominee accused of attempted rape in a widely-viewed public hearing. Although the Barrett hearings are unlikely to prove as sinister, it’s likely the Republicans will accuse the Democrats of finding any means possible to block a conservative justice, as was seen in the Thomas and Kavanaugh hearings.

Barrett is set to be ‘borked’. Her views have been well documented over her career, and, most notably, Republican Senators seem confident she’ll vote to overturn Roe vs Wade, the 1973 ruling that protected a woman’s liberty to have an abortion without excessive government restriction. The Committee hearings will likely rally each party’s base going into the election, but the long-term implications for civil rights and the legitimacy of the Court have yet to be determined.

Sam Lazenby


Bibliography

The Economist. “Courting trouble: The knife fight over Ruth Bader Ginsburg’s replacement.” (26 Sep 2020) https://www.economist.com/united-states/2020/09/26/the-knife-fight-over-ruth-bader-ginsburgs-replacement

The Economist. “What does Amy Coney Barrett think?” (26 Sep 2020) https://www.economist.com/united-states/2020/09/26/what-does-amy-coney-barrett-think

Levitsky, S. and Ziblatt, D. (2019) “How Democracies Die.” Great Britain: Penguin

Liptak, A. “Barrett’s Record: A Conservative Who Would Push the Supreme Court to the Right,” New York Times (26 Sep 2020). https://www.nytimes.com/2020/09/26/us/amy-coney-barrett-views-abortion-health-care.html

Pruitt, S. “How Robert Bork’s Failed Nomination Led to a Changed Supreme Court,” History (28 Oct 2018). https://www.history.com/news/robert-bork-ronald-reagan-supreme-court-nominations

Siddiqui, S. “Kavanaugh hearing recalls Clarence Thomas case,” The Guardian, (27 Sep 2018). https://www.theguardian.com/us-news/2018/sep/27/brett-kavanaugh-clarence-thomas-anita-hill-hearings

Victor, D. “How a Supreme Court Justice Is (Usually) Appointed,” The New York Times (26 Sep 2020).