Merkel’s Immigration Policy: A Failure?

Many highlight Angela Merkel’s handling of the 2015 migrant crisis as the beginning of her downfall. In that year, Germany’s net migration figure reached 1.1 million, just under double the previous year’s total. As Europe struggled to cope with refugees, Merkel made her country the continent’s biggest destination, despite the Dublin Regulation mandating that refugees seek asylum in the first EU country they enter. Even today, the refugee crisis is pointed to as a key moment in Merkel’s chancellorship, and the moment she began to lose her grip on power. But was her immigration policy actually a failure?

When thinking of Germany and immigration, a notable example is the Gastarbeiter, or guest worker, programme, which began in the 1950s. These guest workers were invited to help rebuild West Germany after the Second World War, with the largest number arriving from Turkey. Many never returned home, remaining in Germany with their families. Such people are no longer known as guests, but as Germans.

As Chancellor, Angela Merkel has spoken at length about her view of immigration. She has a markedly positive attitude towards the subject, at odds with many in her Christian Democratic Union (CDU) party, describing those who come to Germany as “enriching” society. She has always been clear on her approach to refugees: that there is a “moral obligation” to help those fleeing war, persecution or terror. It was this positive approach to immigration that shaped Germany’s policy during the 2015 refugee crisis.

German Chancellor Angela Merkel receiving flowers from a Lebanese refugee; Migration Commissioner Annette Widmann-Mauz looks on from the right, June 2018. (Credit: Reuters)

Since then, Germany has housed over 1.5 million refugees, compared with some 450,000 in France and 300,000 in Italy. This influx presented an enormous task for both local and national government, from providing German language classes and wider education to integrating the new arrivals into German society. As a logistical challenge, it is clear that Merkel and her government, together with state bodies, handled the refugee crisis robustly and commendably. Language classes were provided by the government, and an effective programme of enrolling young children into nurseries to begin their education was introduced.

There is, however, a darker side to the consequences of this policy. On a national level, the most obvious has been the rise of the far-right and explicitly anti-immigration Alternative for Germany (AfD), which surged in the 2017 Bundestag elections and in state elections since the crisis. This has eroded CDU support and, arguably, created the circumstances in which Merkel stood down as her party’s leader in 2018. The future of the party is now in doubt, with the right-wing Friedrich Merz, a fierce critic of Merkel and of her immigration policy in particular, narrowly finishing second in both the 2018 and 2021 leadership elections.

Merkel’s successor as CDU leader, Annegret Kramp-Karrenbauer, was forced to resign after CDU members of the Thuringian state parliament defied her authority to vote with the AfD. Armin Laschet, the new leader, is seen as a moderate in comparison to Merz, but ran alongside health minister Jens Spahn, labelled the “anti-Merkel” for his fierce criticism of her handling of the 2015 crisis. Her moderate legacy in the CDU may be safe for now, but the future remains uncertain, with fresh federal elections to contest in September in the aftermath of disappointing results in March’s state elections.

Merkel’s immigration policy was a divisive path which sowed the seeds of her downfall, even as it provided refuge for millions fleeing war. In the short term, it has arguably been pivotal in her resignation as CDU leader and her decision to stand down as Chancellor, while fracturing her party as it struggles with internal battles and the imposing presence of the AfD. In the long term, however, history will surely look kindly on Merkel: the Chancellor who brought millions in from the cold despite the political consequences and remained steadfast in her convictions. A political failure but a moral success, and one which may be remembered as positively as the Gastarbeiter are today.

Joe Rossiter, History in Politics Writer

Hong Kong’s National Security Law: Power Not To The People

You might have heard of the unrest in Hong Kong last year, stemming from the Government’s attempt to introduce an extradition agreement with Mainland China and culminating in a full-blown humanitarian crisis with the enactment of the National Security Law (NSL). Why was the extradition bill met with such fierce resistance? The proposed Bill would have made both foreign nationals residing in Hong Kong and local criminal suspects extraditable to mainland China, which has a substantially different criminal justice system and a history of breaching fundamental human rights, including arbitrary detention, unfair trials and torture. The only requirement for extradition would have been “prima facie” evidence, a notably low standard of proof, presented to the Chief Executive and the courts. Following escalating clashes between the Government, police and citizens, with protests drawing over a million people and over 10,000 arrests, the Bill was shelved. But by that time, the damage was done. The Bill had exacerbated the deep fears of local citizens and expats in Hong Kong, who saw it as an early sign of China’s descent upon the territory and the dark future to come.

Several demands arose from the locals: the formal withdrawal of the Bill; the release and exoneration of those arrested during the protests; the establishment of an independent commission of inquiry into police behaviour; universal suffrage for Legislative Council and Chief Executive elections, in addition to the resignation of Chief Executive Carrie Lam; and lastly the retraction of the characterisation of the protests as “riots”. Somewhat unsurprisingly, only the first demand was met, which the Hong Kong people saw as highly unsatisfactory, and protests continued with increasing intensity. All this culminated in the Standing Committee of the National People’s Congress enacting the NSL, which opened an even bigger can of worms.

Protesters marching at the “Stop Police Violence, Defend Press Freedom” silent march called after media professionals were insulted by police officers when covering protests against the extradition law to China. (Credit: Ivan Abreu, SOPA Images, Sipa via AP Images.)

Under the Sino-British Joint Declaration, signed in 1984, Britain agreed to return Hong Kong, acquired in the wake of the First and Second Opium Wars, to China in 1997, on the condition that the “One Country, Two Systems” principle and the freedoms of speech, assembly and religious belief, amongst others, would continue to be enjoyed by the territory until 2047. The NSL contains intentionally vague provisions which allow ‘secession, subversion, terrorism and collusion with foreign forces’ to be punished by a maximum sentence of life in prison. Having already been used to charge more than 50 individuals, it has naturally given rise to a sense of deep unease in both the domestic and international spheres. As the legislation also allows cases to be tried in Mainland China under its legal system, there is a real risk of criminal suspects being deprived of fundamental human rights, such as being held incommunicado in undisclosed locations for up to six months before being formally arrested or released.

Whilst the UK has comparable national security laws, in that suspected terrorists can be detained without charge for up to 28 days, these individuals are nevertheless allowed legal representation within a maximum of 48 hours of arriving at the police station; and, compared to Mainland China, the UK is subject to far more intense public and legal scrutiny whenever human rights are undermined. The legislation’s effect is essentially a complete curtailment of free speech, press freedom and political dissent in Hong Kong. Critics worldwide have argued that this directly contravenes the Joint Declaration’s condition of “One Country, Two Systems”, particularly as the NSL also applies to crimes committed abroad, to non-permanent residents and to people outside of Hong Kong. The reach of the law is thus far and extensive, essentially subjecting even foreign nationals to its authority.

Whilst the realistic probability of foreign citizens in the West being extradited for crimes committed against the Communist Party is relatively slim, the law has already caused a growing reluctance amongst foreign investors to conduct business in Hong Kong, for fear of being subject to the extensive powers of the NSL. After the emergence of Covid-19 and the consequent rise in criticism of the Communist Party, it will be of great importance that checks and controls exist to restrain Mainland China’s ever-increasing influence. If it is left unchecked, one can only hope to stay out of the line of sight of the Chinese Government, and that is something I concern myself with, as this article has the potential to be considered “subversion” under the draconian National Security Law.

May Lam, History in Politics Contributor

The Environment Has No Ideology: Debating Which System Works Best is Inherently Flawed

It is often assumed that we in the ‘West’ are the arbiters of environmental policy, that we simply ‘care more’ than the rest of the world. ‘China’, for many, evokes images of flat-pack cities and rapid industrialisation synonymous with the stain left by humanity on the natural world. It is lazily viewed as an outlying hindrance to the global goal of sustainable development, whilst we remain wilfully ignorant of our own shortcomings, both past and present. Instead of viewing Chinese environmental negligence as unique, as the lingering paradigm of the ‘capitalist good/communist bad’ dichotomy would have it, I argue that a more bipartisan assessment of the root cause of environmental degradation may be in order. Our planet, after all, cares little for politics.

Many of China’s environmental failures have historically been attributed to the communist policies of the ruling party, particularly under Mao, whose slogan ‘ren ding sheng tian’, or ‘man must conquer nature’, has been presented by the historian Judith Shapiro as evidence of the Communist Party’s desire to dominate the natural world, even at the expense of its own people and environment. Of course, there is merit to this argument – the collectivisation of land and the Great Leap Forward’s unattainable targets wreaked havoc on the land and contributed in no small part to what Frank Dikötter has termed ‘Mao’s Great Famine’, which is estimated to have killed up to 45 million people between 1958 and 1962. It can be easy, therefore, to assume that such environmental exploitation is peculiar to China’s communist system of government.

A factory in China by the Yangtze River, 2008. (Credit: Wikimedia Commons)

Without excusing the undoubtedly detrimental and inhumane policies of Mao’s government, we should view the environmental impact of the Chinese state’s rapid development in a more contextual manner. After all, did not the rampant capitalism of the Industrial Revolution in the United Kingdom lead to the explosion of soot-filled cities like Manchester, Liverpool and Birmingham, all of them centres of heightened industrial activity that harmed both their human populations and the surrounding environment? London’s death rate rose 40% during a period of smog in December 1873, and similarly we can look to the Great Smog of 1952, which the Met Office claims killed at least 4,000 people, possibly many more.

Industrial potteries in North Staffordshire during the nineteenth century. (Credit: StokeonTrent Live)

Geographically closer to China, the Japanese state has also shown in recent years that pointing to ideology might be mistaken. The post-war Japanese growth-first, laissez-faire mentality left the likes of the Chisso Corporation in Minamata to their own devices, and the results were devastating. From 1956 through to the 1970s, first cats, then human residents of Minamata began coming down with a mysterious illness, one that caused ataxia and paralysis in its victims. It would transpire that what came to be known as ‘Minamata disease’ was the result of Chisso’s chemical plant releasing methylmercury into the town’s bay, where it was absorbed by algae and passed up the food chain through the fish that local residents (both human and feline) were regularly consuming. The government’s inaction was deafening: despite the cause being known since 1959, change only came after it was forced by union pressure in the 1970s. If this seems like a problem confined to the past, one need only cast their mind back to the Fukushima disaster of 2011, ultimately the result of the irresponsible decision to pursue a nuclear energy policy on the disaster-prone Pacific Ring of Fire.

This article does not wish to make the case for either the capitalist or the communist system’s superiority in environmental affairs. Rather, it should be clear that the common thread running through all of these disasters – from the Great Smog to the Great Famine and Fukushima – is a policy emphasising economic growth as the paramount standard of success, an approach that will inevitably lead to environmental destruction. The style and severity of that destruction may be influenced by ideology, but if we are to live in harmony with our environment, we must be willing to abandon the ideals of gain (collective or individual) and competition that have placed us in our current quandary, whatever the tint of our political stripes.

Samuel Lake, History in Politics Writer

The Cost of Casual Scepticism to Human Rights

Modern aversion to human rights protection in the United Kingdom can be seen on the surface of British politics, from Theresa May’s indignation that she was unable to deport an illegal immigrant because he had a pet cat, to Nigel Farage’s demand that we “scrap the EU Human Rights Act”. The modern rallying cry of human rights sceptics often takes the form that we, the British, have no need for coffee-drinking, baguette-eating Europeans to tell us how to do human rights. The problem, though, is that rights protection is not that simple, and never has been, so why do we let human rights be disparaged by these tabloid-level soundbites and mantras?

The European Court of Human Rights in Strasbourg. (Credit: Europe’s Human Rights Watchdog)

When bashing human rights, politicians are most commonly referring to the European Convention on Human Rights (ECHR), which was created by the Council of Europe – a body which, for clarity’s sake, has nothing to do with the European Union, incidentally making Farage’s talk of an “EU Human Rights Act” farcical. The ECHR established the European Court of Human Rights (ECtHR), frequently referred to simply as ‘Strasbourg’ after its location. These threads of European influence running through our human rights framework make it an easy target for politicians wanting to exploit the Euroscepticism rife in modern Britain. This is why, perhaps unsurprisingly, key features of the ECHR’s history are left out by its critics.

The ECHR has its origins in the ruins of Europe following the Second World War and the Holocaust, when it was decided that action must be taken to prevent future human rights abuses across the continent. Winston Churchill and the British delegation played a key role in the drafting of the Convention and were ardent supporters of it. Since then, the Convention has evolved with the changing nature of Europe and has helped to reduce discrimination and expose rights abuses.

Looking at the face of modern Europe, the significant role of the ECHR may seem unnecessary, even an inconvenience to public protection and government policy. However, the Convention should not be perceived as existing only to prevent grave breaches of human rights, but also to remedy all forms of discrimination, such as securing the right of LGBT couples to adopt or of people of all faiths to enjoy the same job opportunities. Such breaches still occur across Europe, with significant examples in Poland and Hungary, where radical anti-LGBT policies such as ‘LGBT-free zones’ are being enacted.

This is one reason why it is more crucial than ever that the States party to the Convention continue to believe in and abide by it. If disagreements with Strasbourg start to appear over issues such as prisoner voting rights, they water down the impact of the ECtHR’s rulings and raise the threat that judgments on clear breaches (such as those appearing in Eastern Europe) are swept aside and ignored by the violating State. Arguing and fighting for proper rights protection is done not only for yourself or your group, but for the wider community.

This is why the rhetoric used in modern politics about seemingly humorous issues, like the failed deportation of a man because of his cat, and other false remarks which attempt to sabotage the mechanisms for rights protection, are dangerous. They are dangerous not only for the resolution of relatively benign issues of the kind that arise in nations like the UK, but also for those who rely on the Convention to prevent fundamental breaches of human rights – breaches that occur on a daily basis and recall the very events that led to the creation of the ECHR.

Aidan Taylor, History in Politics Contributor

Does the Electoral College Serve the Democratic Process?

“It’s got to go,” asserted Democratic presidential candidate Pete Buttigieg, speaking of the electoral college in 2019 – reflecting a growing opposition to the constitutional process which has only been heightened by the chaotic events of the past weeks. Rather than simply reiterating the same prosaic arguments for the institution’s removal – the potential subversion of the popular vote, the overwhelming significance of battleground states, the futility of voting for a third party, and so forth – this piece will consider the historical mentalities with which the electoral college was created, in an effort to convey the institution’s ludicrous obsolescence in a twenty-first-century democracy.

Joe Biden and Kamala Harris preparing to deliver remarks about the U.S. economy in Delaware, 16 November 2020. (Credit: CNN)

In its essence, the system of electors stems from the patrician belief that the population lacked the intellectual capacity necessary for participation in a popular vote – Elbridge Gerry informing the Constitutional Convention that “the people are uninformed, and would be misled by a few designing men.” Over the past two hundred years, the United States has moved away from the early modern principles encouraging indirect systems of voting: for instance, the seventeenth amendment normalised the direct election of senators in 1913. It has also seen the electors themselves transition from the noble statesmen of the Framers’ vision to the staunch party loyalists that they so greatly feared. In fact, the very institution of modern political parties had no place in the Framers’ original conception, with Alexander Hamilton articulating a customary opposition to the “tempestuous waves of sedition and party rage.” This optimistic visualisation of a factionless union soon proved incompatible with the realities of electioneering and required the introduction of the twelfth amendment in 1803, a response to the factious elections of 1796 and 1800. Yet, while early pragmatism was exercised over the issue of the presidential ticket, the electoral college remains entirely unreformed at a time when two behemothic parties spend billions of dollars to manipulate its outcome in each presidential election cycle.

The Constitutional Convention was, in part, characterised by a need for compromise, and it is these compromises, rooted in the specific political concerns of 1787, that continue to shape the system for electing the nation’s president. With the struggle between the smaller and larger states causing, in the words of James Madison, “more embarrassment, and a greater alarm for the issue of the Convention than all the rest put together,” the electoral college presented a means of placating the smaller states by increasing their proportional influence in presidential elections. While it may have been necessary to appease the smaller states in 1787, the since unmodified system still ensures that voters in states with smaller populations and lower turnout rates, such as Oklahoma, hold greater electoral influence than those in states with larger populations and higher rates of turnout, such as Florida. Yet it was the need for compromise over a more contentious issue – the future of American slavery – that compelled the adoption of the electoral college further still. Madison recognised that “suffrage was much more diffusive in the Northern than the Southern States” and that “the substitution of electors obviated this difficulty.” The indirect system of election, combined with a clause that counted three of every five slaves towards the state population, thus granted the slaveholding section of the new republic much greater representation in the election of the president than an alternative, popular vote would have permitted. At a time when the United States’ relationship with its slaveholding past has become the subject of sustained re-evaluation, its means of electing the executive remains steeped in the legacy of American slavery.

It takes only a brief examination, such as this, to reveal the stark contrasts between the historical mentalities with which the electoral college was established and the realities of a modern, democratic state. Further attempts to reform the institution will no doubt continue to come and go, as they have over the past two hundred years. However, when compared with the environment in which it was proposed, it is clear that the unreformed electoral college is no longer fit for purpose and must, eventually, give way to a system in which the president is elected by a popular vote.

Ed Warren

What Will Happen Now Ruth Bader Ginsburg’s Dead?

You cannot understand the confirmation process of Amy Coney Barrett without understanding that of Robert Bork. Nominated by Ronald Reagan in 1987, Bork was a polarising figure, known for his disdain for the supposed liberal activism of the court. Ted Kennedy of Massachusetts, deeming Bork to be too radical for the court, turned away from the bipartisan tradition of assessing a nominee’s qualifications rather than values. The Judiciary Committee hearings featured hostile questioning, and Bork was ultimately rejected, 58-42, in a Democratic-majority Senate. The events produced the term “borked,” referring to the vigorous questioning of the legal philosophy and political views of judges in an effort to derail their nomination. The legacy of Bork lives on today.

The death of Supreme Court Justice Ruth Bader Ginsburg (RBG) has triggered a high-stakes nomination process just weeks before the election. The Supreme Court is the highest level of the judicial branch in the US, with Justices nominated by the President and voted on by the Senate. The process usually takes a few months, with nominees being interviewed privately by senators, and then publicly by the Senate Judiciary Committee, before being forwarded by the committee to be voted on in the Senate. 

Ruth Bader Ginsburg, 2014. (Credit: Ruven Afanador)

However, Barack Obama’s final year in office altered the traditional conception of nominating Supreme Court Justices. With the death of Justice Scalia in 2016, Obama, in accordance with the Constitution, nominated Merrick Garland to fill the seat. Yet, in what political scientists Steven Levitsky and Daniel Ziblatt deemed “an extraordinary instance of norm breaking,” the Republican-controlled Senate refused to hold hearings. Senate majority leader Mitch McConnell argued that in an election year the Senate should wait until a new President has been elected, thus giving “the people” a say in the nomination process.

His position proved polarising. The practice of the Senate blocking a specific nominee (as in the case of Bork) would usually be fairly uncontroversial, having happened even to George Washington in 1795. The issue was McConnell preventing an elected President from filling the seat at all, something that had never happened in post-Reconstruction US politics.

Yet the death of RBG has shown this precedent to be short-lived. Despite a Court seat opening up even closer to the election, the vast majority of Republicans have accepted McConnell’s present claim that his own precedent does not apply in an election year if the same party holds both the Senate and the Presidency. Thus, President Trump’s nominee, Amy Coney Barrett, looks set to be confirmed.

It’s unknown how polarising her confirmation will be. The hearings of Clarence Thomas in 1991 were dominated by the questioning of Anita Hill over her allegations of sexual harassment against the then-nominee, with Thomas accusing the Democrat-led hearing of being “a high-tech lynching for uppity blacks who in any way deign to think for themselves.” The 2018 Kavanaugh hearings echoed this process, with the then-nominee accused of attempted rape in a widely viewed public hearing. Although the Barrett hearings are unlikely to prove as sinister, it’s likely the Republicans will accuse the Democrats of finding any means possible to block a conservative justice, as was seen in the Thomas and Kavanaugh hearings.

Barrett is set to be ‘borked’. Her views have been well documented over her career and, most notably, Republican Senators seem confident she will vote to overturn Roe v. Wade, the 1973 ruling that protected a woman’s liberty to have an abortion without excessive government restriction. The Committee hearings will likely rally each party’s base going into the election, but the long-term implications for civil rights and the legitimacy of the Court have yet to be determined.

Sam Lazenby


Bibliography

The Economist. “Courting trouble: The knife fight over Ruth Bader Ginsburg’s replacement.” (26 Sep 2020) https://www.economist.com/united-states/2020/09/26/the-knife-fight-over-ruth-bader-ginsburgs-replacement

The Economist. “What does Amy Coney Barrett think?” (26 Sep 2020) https://www.economist.com/united-states/2020/09/26/what-does-amy-coney-barrett-think

Levitsky, S. and Ziblatt, D. (2019) “How Democracies Die.” Great Britain: Penguin

Liptak, A. “Barrett’s Record: A Conservative Who Would Push the Supreme Court to the Right.,” New York Times (26 Sep 2020). https://www.nytimes.com/2020/09/26/us/amy-coney-barrett-views-abortion-health-care.html

Pruitt, S. “How Robert Bork’s Failed Nomination Led to a Changed Supreme Court,” History (28 Oct 2018). https://www.history.com/news/robert-bork-ronald-reagan-supreme-court-nominations

Siddiqui, S. “Kavanaugh hearing recalls Clarence Thomas case,” The Guardian, (27 Sep 2018). https://www.theguardian.com/us-news/2018/sep/27/brett-kavanaugh-clarence-thomas-anita-hill-hearings

Victor, D. “How a Supreme Court Justice Is (Usually) Appointed,” The New York Times, (26 Sep 2020).

‘You Can Get Rid of the Mines, But You Can’t Get Rid of the Miners’: Industrial Legacy and Contemporary Identity in Durham

Durham’s coal mines closed throughout the 1980s, despite dissent from local communities and mining unions. This was not an anomaly – under Conservative rule, mines were shut throughout the nation, though the closures were largely concentrated in the North. As a result, a significant regional divide in unemployment, poverty, and general desolation emerged. And yet, although the mines are most certainly shut, the culture and identity of the miners, and of a mining region, live on. In Durham, mining is deeply bound up with local identity, and a celebration of this shared history occurs every year through the Miners’ Gala. This consists of a loud and proud parade through the city, in which each mining village sends a colliery band and banners. Once the city parade is finished, all the mining lodges meet on the cricket field for a large party for all ages. Despite the closure of the mines, their economic hardship and proud history continue to be entwined with present-day understandings and contemporary identity; a common phrase heard at the gala is ‘You can get rid of the mines, but you can’t get rid of the miners’.

The 135th Durham Miners’ Gala, 2019. (Credit: The Northern Echo)

The first Durham Miners’ Gala was organised by the Durham Miners’ Association in 1871 and was held on the outskirts of the city in Wharton Park. Despite the demise of the mining industry, the gala has survived and continues to be integral to local identity. It is no longer an example of political mass assembly but, as Jack Lawson, a Durham miner and later Labour MP and minister in the Attlee government, said of it, less a political demonstration and more “the spontaneous expression of their [the miners’] communal life”. The gala is an example of intangible cultural heritage, of an identity rooted in a specific place. Some have dismissed occasions like the gala as simple reminiscence – the journalist James Bloodworth, who visited in 2016, saw the Durham Miners’ Gala as a “carnival of nostalgia” and “something like a historical re-enactment society”. However, it is much, much more. It is a living history, a continued solidarity with the working class and with the loss of jobs caused by a deep deindustrialisation which continues today in the decline, if not disappearance, of heavy manufacturing industries such as shipbuilding.

Labour’s Green New Deal appears to draw upon this history and to empathise with the loss of industry and employment in the North. The deal sets out to rebuild industry, jobs, and pride in these towns, with more “rewarding, well-paid jobs, lower energy bills”, and “whole new industries to revive parts of our country that have been neglected for too long”. As the Industrial Revolution brought jobs and pride to the North, so the ‘Green Industrial Revolution’ hopes to provide funding to restore them. Furthermore, the Labour Party recognises that for some ‘industrial transition’ has become a “byword for devastation”, and blames successive Conservative governments for this continued disregard of whole industries and communities. The Green Industrial Revolution manifesto states that the “Tories wasted a decade serving the interests of big polluters”, echoing the sentiment of many speakers at the Durham Miners’ Gala. In 2017, for example, one speaker exclaimed that the lesson of the 1984-5 strikes should be drawn upon today: that if “on the verge of achieving real change to working class people, the establishment will try to crush you”. Labour’s plans for a Green New Deal show not only the impact of economics on identity, but also highlight the scars of neglect at the hands of a Tory government.

James Bloodworth also claimed in his somewhat scathing review of the Durham Miners’ Gala that “when the past becomes an obsession, it can act as a dead weight on meaningful action in the present”. Is Labour’s Green New Deal an example of being too preoccupied with the past? Or should we be looking to it? Perhaps an eye to the past is not necessarily the dead weight Bloodworth describes, but instead a chance to rectify past mistakes.

Emily Glynn


Bibliography

Bloodworth, James, ‘Labour is becoming a historical re-enactment society’, International Business Times, 11 July 2017, https://www.ibtimes.co.uk/jeremy-corbyns-labour-tribute-act-socialism-trade-unions-back-nostalgic-leader-1570061 

Lawson, Jack, Peter Lee (London, 1949)

Labour Party, ‘A Green Industrial Revolution’, Manifesto, 2020

Housing Reform: Not the Solution to a Prominent Problem

The government’s slogan of “Build, Build, Build”, coupled with radical reforms to the planning system, promises a utopia it will struggle to deliver. 

Modern, red brick houses viewed from above. (Credit: Anthony Brown.)

The reforms centre on deregulation, the aim being to make it easier to build homes where people want to live. Housing Secretary Robert Jenrick wrote in the Telegraph of the reforms building houses “with green spaces”, “new parks at close hand”, “tree-lined streets” and neighbours who are not strangers.

The reality of these reforms will most likely be very different, jeopardising the livelihoods of some of society’s most vulnerable.

The latest proposals include dividing England’s land into three categories: growth, renewal, or protection. Development meeting pre-approved “design codes” would automatically be allowed in “growth” areas, and granted “permission in principle” in “renewal” areas.

Additionally, the recent extension of permitted development rights allows buildings such as offices and shops to be converted into housing without the need for planning permission. 

A combination of these policies presents the threat of low-quality slum housing. Only 22% of permitted development homes meet nationally accepted space standards, and less than 5% have access to an outdoor area. Automatic planning permission without proper checks, as seen under permitted development, compromises the quality of homes and the livelihoods of their residents. With deregulation of the planning system, agreed “design codes” could be drawn up to a low standard for the sake of developers building quickly and on a large scale.

As the government aims to fulfil its pledge of 300,000 new houses a year through these proposals, there is a danger that the safety and wellbeing of the dwellings’ future inhabitants will be put at risk. Whilst reform of the system is necessary to build the sustainable, environmentally sound homes of the future, simply cutting quality and safety checks whilst sidelining local voices is not the solution.

Further, charities aimed at ending homelessness, such as Shelter, have voiced concerns over the schemes. To encourage small developers to build, the proposal is to end “section 106” payments for small sites – the mechanism by which developers are required to provide affordable and social housing. Such payments are also bypassed under permitted development, which requires no planning permission. Without this requirement, social housing – already underfunded and underproduced – will continue to suffer.

The government needs to reconsider proposals and schemes that serve developers’ best interests but, without adequate checks, threaten inhabitants with low-quality housing whilst neglecting the desperate need for social housing.

Maddy Burt

Why Events in the Gulf Still Matter: Implications of Peace Between Israel and the UAE

There’s a joke that goes as follows: ‘…and on the eighth day, God created the Middle East, and said “let there be breaking news”’. In this constant stream of events it can be hard to distinguish between the important and irrelevant – but make no mistake, mutual recognition between Israel and the United Arab Emirates is as important as it gets.

Israeli Prime Minister Benjamin Netanyahu making a joint statement with Senior US Presidential Adviser Jared Kushner about the Israeli-United Arab Emirates peace accords, Jerusalem, 30 August 2020. (Credit: Reuters)

With the exception of Israel, every Middle Eastern country is majority-Muslim. More importantly, with the further exceptions of Iran and Turkey, every country is Arab. In the early years of the 20th century, this relationship wasn’t contentious – indeed, the first Iraqi Minister of Finance was Jewish. However, Zionism and the Arab reaction to it, in concert with the destabilising effects of latter-stage colonialism, fuelled a rise in animosity, and Israel’s declaration of independence in 1948 was met by a declaration of war from its Arab neighbours.

The next 25 years saw two more wars and, in the midst of the Cold War, the US formed a strategy to protect what it viewed as an outpost of Western liberalism. American foreign policy united around providing Israel with a qualitative military edge over other Middle Eastern states. Accordingly, Israel won every major Cold War conflict, and the territorial gains it made in these wars forced its Arab neighbours to coalesce around a new strategy of ‘land for peace’. This saw Israel agree to return the Sinai to Egypt in exchange for recognition in their 1979 peace treaty, and to grant limited Palestinian autonomy in 1993, paving the way for peace with Jordan in 1994. Eight years later, the Arab League declared that its members would collectively recognise the State of Israel in exchange for the establishment of a Palestinian state.

At the same time, two key events took place. The 1979 revolution in Iran turned a staunch American and Israeli ally into an anti-Western, anti-Arab power and the American invasion of Iraq in 2003 created a regional power vacuum. The last two decades have seen an Iran-Arab cold war across the region. Saudi Arabia, along with its Arab allies, is currently waging a war of influence against Iran across the Middle East. Yemen, Syria and Lebanon in particular bear the fingerprints of this struggle.

Now for the coup de grâce. In 2015, the US signed a deal with Iran, trading sanctions relief for Iran scaling back its nuclear programme. Israel and the Arab world were united in their fear of Iran and animosity towards the deal, which allowed Iran to funnel more money to proxy groups in the region. President Trump upended America’s approach, seeking to unite Israel and the Arab states in opposition to Iranian regional influence. This bipolar strategy enabled the US to bring Israel and the UAE closer together, and on August 13th the two nations announced a deal mutually recognising each other’s existence.

So why the UAE, of all Arab states? In one respect, the Emirates are keen to bolster their military position. The US may be more willing to sell technologically advanced weapons, including the coveted F-35, to seemingly less belligerent Arab powers. Israel is also a regional leader in technology, from which the UAE stands to benefit.

Yet the UAE also benefits from its demographics. Nearly 60% of its population are South Asian foreign workers, employed in massive construction projects in Dubai; only 11% are Arab Emirati citizens. This corporate state structure makes the Emirati monarchy highly stable in comparison to its Arab neighbours, whose citizenries are generally hostile towards Israel.

Saudi Arabia, in particular, will likely wait and see if other Gulf States follow the UAE’s lead before making its own peace deal with Israel. The primary objective of all Arab autocracies is domestic stability, and Saudi Arabia’s conservative Muslim population might view public overtures towards Israel as a sell-out by the state’s monarchy. The Arab populations in Africa are generally less conservative but they make up for it with anti-imperialist sentiment, and would be unlikely to recognise Israel whilst the occupation continues.

This brings us to the one Arab entity that will not be making peace in the near future – Palestine. Arab states have largely given up on the Palestinian cause and instead come to fear Palestinian freedom, lest it bring to power a people’s government that undermines their own fragile authoritarian legitimacy. Until recently, the Palestinians still had one bargaining chip: the Arab League had almost unanimously withheld recognition of Israel, and when recognition did come, as in the cases of Egypt and Jordan, it was in exchange for significant concessions. Now that the UAE has agreed to recognise Israel with no significant conditions, Palestinian leaders will feel as if the rug has been pulled out from under their feet. The UAE has given an official seal of approval to the occupation; expect to see it remain for a long time.

Seth Weisz 

The Politics of Food: Modern Sumptuary Law?

The politicisation of food by Boris Johnson’s government has proved a highly controversial issue. Whilst the necessity of what has been described as an “obesity crackdown” has been supported by Public Health England, there has been backlash against the government’s strategy. In particular, the lack of meaningful support for the most financially disadvantaged has led to accusations that the government is tone-deaf in its approach. The abandonment of this group by food policy is far from a new phenomenon, and the parallels with early modern sumptuary law are compelling.

A family watching British Prime Minister Boris Johnson’s coronavirus address. (Credit: AFP.)

Sumptuary law is defined by Black’s Law Dictionary as “laws made for the purpose of restraining luxury or extravagance, particularly against inordinate expenditures for apparel, food, furniture, etc.” These laws impacted everyone in society and were considered vital for maintaining the social hierarchy in the face of increased social mobility. They had a strong moral element: it was believed this hierarchy was mandated by God, and refusing to adhere to it was therefore defying Him. These laws were thus seen as essential for the good of society, just as the current government policy on obesity aims to lessen the impact of Covid-19 and the strain on the NHS.

Whilst sumptuary laws set limits upon all, just as we are all to be affected by this new health drive, it was – and is – the financially disadvantaged who experience the greatest restrictions. Legislation and poverty meant the poor of Tudor England were allowed no more than pottage, vegetables, and bread. Given the rigours of life at that time, the number of calories that needed to be consumed was much higher than the 2,000-2,500 recommended today, though even this would have been out of reach for many.

Today, the converse seems to be true: it is far easier to over-consume on a lower budget. According to the statistics, half of the 10 worst areas for childhood obesity in the UK are also among the 10 poorest. Yet the government seems to think the solution is as simple as signposting the healthy options and restricting those high in calories, going so far as to remove multi-buy offers. The government does not seem to recognise that choice and control over diet is often a privilege. As Kieran Morris writes in The Guardian, the ability to eat healthily and exercise requires “time, money and space”, all of which have become increasingly inaccessible.

Elizabeth I wearing the “superfluities of silks, cloths of gold, [and] silver” many were not allowed to. (Credit: the ‘Darnley Portrait’, Public Domain.)

Annunziata Rees-Mogg typified this privilege in her tweet of July 27, in which she stated: “Tesco 1kg potatoes = 83p, 950g own brand chips = £1.35”. What the daughter of Baron William Rees-Mogg fails to recognise is that the problem is not ignorance or laziness. According to the Trussell Trust, between April 2019 and March 2020 an estimated 1.6 million people used a food bank, and food banks are only able to provide non-perishable, carbohydrate-heavy foods. Full-time employees in Britain work an average of 42 hours a week, which the TUC claims is “robbing workers of a decent home life and time with their loved ones”. To claim that personal choice is the sole reason behind the obesity epidemic ignores all of this.

So, just like sumptuary law, we are all being asked to do our bit for the good of the nation. But this policy, just like the legislation of 500 years ago, will disproportionately affect the poorest in society. Whilst we do need to tackle obesity, the government must provide support for these people, rather than remove and criticise their already limited choices.

Georgia Greatrex