The Glasgow Lock Hospital for Unfortunate Females was supposed to help vulnerable girls and women in the 19th century – instead it continued their abuse

Nine-year-old Annie McGuire and seven-year-old Elizabeth Martin were among the thousands of women and girls admitted between 1846 and 1947 to the Glasgow Lock Hospital for Unfortunate Females, one of a network of Lock hospitals built across Britain and its colonies in the 19th century.

Established to diagnose and treat women and girls for venereal diseases, the hospital lay at the heart of the notorious Glasgow System – the Scottish city’s response to the Contagious Diseases Act (CDA).

The CDA, implemented in garrison towns and naval ports across England and Wales in the 1860s, aimed to eradicate sexually transmitted diseases among the armed forces. The Glasgow Lock played a key role in the Glasgow System’s unprecedented coalition of police, courts, prisons, medical authorities and the city’s Magdalene institutions.

Annie and Elizabeth were admitted to the hospital with gonorrhoea – and died there three years later. They, like all those who died in the Lock, were buried in unmarked graves. These sparse details are all that remain of the short but brutal lives of these two little girls.

Although the hospital’s records reveal little about its patients’ personal stories, they have allowed me, as part of my research on gender-based violence, to glimpse the social repression of working-class women and girls in 19th-century Glasgow.

Here was a city with a mission to end women’s supposedly evil ways and eradicate the “social evil” of prostitution and venereal disease. Vulnerable young girls like Annie and Elizabeth were victims of these attitudes, and blamed for the sexual violence they’d experienced. As a “treatment”, they were then subjected to more abuse – at the institution that was supposed to protect them.

‘Dangerous sexualities’

Alexander Patterson, a surgeon at the Lock in the 1860s, advocated the creation of the Glasgow System to eradicate prostitution and venereal diseases – by focusing on women as the cause of both.

Patterson’s suggestion was well received by the city’s leaders, who regarded prostitution and venereal diseases as “highly visible symbols of the social dislocation attendant upon the industrial era”. Women and their “dangerous sexualities” were supposedly responsible for a “social evil which disgraces the land”.

In their eyes, any woman was a “prostitute” if her behaviour, speech, dress or lifestyle defied Victorian social or sexual norms. Prostitution included what contemporary moralists regarded as degenerate working-class culture, whereby women were free to roam the streets creating a “dangerous temptation” to men and a threat to public morality and health.

The Glasgow Police Act 1866 granted officers full discretionary powers to apprehend any woman or girl they thought was, or risked becoming, a prostitute. According to Glasgow’s chief constable Alexander McCall, this meant any woman found on the street who was unable to account for how she made her living.

Thousands were apprehended based on this definition (or on the whim of individual policemen), including homeless, destitute or unemployed women, separated women and single mothers, and part-time and casual female labourers.

Women and girls were taken first to the Lock for a compulsory, brutal internal examination. Its registers show ballerinas, actresses, shop girls, unemployed mill girls, domestic servants, farm workers, the wives of soldiers and tradesmen, and children all being admitted.

The women entered the Lock’s doors under a sign proclaiming its aims: TREATMENT – KNOWLEDGE – REFORMATION. They were categorised as one of the following:

“Wanderer” – a homeless, destitute young girl;
“Fallen” – a young woman who had become a known prostitute;
“Newly fallen” – a young woman or girl who had only recently become involved in prostitution; or
“Hardened” – an adult woman working as a prostitute who was known to the police or who had previous convictions.

By 1910, the Lock was admitting around 300 women a year, housing around 50 patients at any one time. With shaved heads and regulation brown uniforms, these patients were highly visible and stigmatised outside the walls of the hospital.

When well enough, the women worked in the laundry, kitchens and mortuary, and attended classes provided by middle-class women volunteers, designed to train patients in “acceptable” female behaviour.

A (short) life of incarceration

The Glasgow Lock’s records show many children under 13, like Annie and Elizabeth, contracted gonorrhoea while incarcerated at other institutions and reformatories – possibly after being raped by men seeking the “virgin cure” for venereal disease.

The girls’ bodies, already severely damaged by sexual violence, often struggled to survive the punishing conditions – and brutal but largely ineffective treatments – inflicted on them in the Lock.

By the late 19th century, the Glasgow System, incorporating Duke Street women’s jail, the Lock, and Glasgow’s Magdalene institutions, was incarcerating, treating and attempting to reform thousands of women and girls.

Once treated, girls and young women considered to be in imminent danger of becoming prostitutes were admitted to the Magdalene, where they were subjected to a harsh regime of religious instruction and hard labour in its commercial laundry. Designed to divert them from prostitution and turn them into obedient wives and compliant workers, the institution remained in operation until 1960.

While the scandal of the Magdalene laundries in Ireland is notorious, Scotland’s Magdalene asylums are less well known. Glasgow’s Magdalene Institution for the Repression of Vice and Rehabilitation of Penitent Females was closely associated with the Glasgow Lock.

The hospital closed its doors in 1947, as Britain’s new National Health Service was being established. The Royal College of Physicians and Surgeons of Glasgow now has a dedicated Lock Room, telling its story through powerful visual displays. The original Glasgow Lock Hospital was demolished in 1955, and its site now lies within the University of Strathclyde’s campus.

HIV stigma is now more dangerous than the virus – my research shows how to address this

Speaking at the 16th International Aids Conference in 2006, the then UNAids executive director, Peter Piot, remarked: “Since the beginning of the epidemic, stigma, discrimination and gender inequality have been identified as major causes of personal suffering, and as major obstacles to effective responses to HIV.”

Now, in the fourth decade of the HIV crisis, this statement remains largely true. Despite the leaps and bounds that have been made in the treatment and prevention of HIV, stigma and discrimination continue to harm the lives of people living with HIV, and hinder efforts to stem the epidemic globally.

HIV today: a changed landscape

HIV is now an extremely treatable condition with excellent outcomes. Those living with HIV who are on treatment can expect to live a healthy life with a normal lifespan.

Crucially, medications are so effective that they reduce the amount of the virus in the body to undetectable levels. This means people living with HIV cannot pass on the virus to others – a principle known as “U=U”, or undetectable equals untransmittable. The evidence supporting this is robust and spans over 20 years.

The most comprehensive evidence for U=U stems from the Partner and Partner II studies, which aimed to determine the risk of transmission between HIV positive and HIV negative partners when the positive partner is virally suppressed.

These large observational studies tracked a combined 2,020 couples – both heterosexual and gay – over several years of follow-up. Between the two studies, participants reported 134,000 acts of condomless sex, yet no HIV transmissions between partners were recorded.

The evidence was incontrovertible and study authors concluded the risk of transmission when a person is virally suppressed is zero. This is extremely reassuring for people living with HIV, who can feel confident that they cannot pass the virus on to their partners.
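
One simplified way to see why zero observed transmissions is so reassuring is the statistical “rule of three”: when no events are observed in n independent trials, the upper limit of the 95% confidence interval for the event rate is roughly 3/n. Treating each reported act as an independent trial – a rough assumption for illustration only, since the published analyses report transmission rates per couple-year of follow-up rather than per act – gives:

\[
\text{upper 95\% limit} \approx \frac{3}{134{,}000} \approx 0.0022\% \text{ per act}
\]

Even this most cautious statistical reading, in other words, puts the risk vanishingly close to zero.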

It is also good news for public health, with the World Health Organization endorsing the U=U message and affirming the importance of treatment as a vital preventative tool in the ongoing HIV epidemic.

Stigma: a persistent obstacle

Despite these extraordinary medical advancements, HIV-related stigma remains a persistent obstacle that negatively affects the health and wellbeing of people living with HIV.

The first European-wide community survey of people living with HIV, published by the European Centre for Disease Prevention and Control (ECDC) in 2023, reported that among the 3,272 respondents, one in three had not told a single family member their diagnosis for fear of rejection.

In the same study, a third of respondents also reported experiencing stigma in healthcare settings, with almost a quarter reporting having healthcare refused or delayed as a result of their HIV status.

A subsequent ECDC study of HIV-related stigma among over 18,000 European healthcare workers, published in July 2024, found that almost two-thirds worried about drawing blood from patients with HIV and a quarter reported using double gloves with such patients.

That HIV-related stigma in healthcare settings is so commonplace is particularly troubling. Such experiences create distrust between people living with HIV and their healthcare providers. This can lead people living with HIV to avoid attending healthcare services, which has obvious knock-on effects for health.

Moreover, there is evidence that experiences of HIV-related stigma in healthcare settings are associated with poor adherence to antiretroviral medication and treatment failure.

This is bad for individual health, but also may have negative impacts on public health, given that treatment is such a vital prevention tool.

Reducing HIV stigma in healthcare settings

A range of factors including fear of HIV, negative attitudes, lack of policies and a lack of training of healthcare workers can drive HIV-related stigma in healthcare settings.

In Ireland, where my research is based, we carried out a survey of 295 healthcare workers in 2022, as part of a wider project, to measure stigma in Irish healthcare settings. Our findings, published earlier this year, provide tentative new evidence for the potential of the U=U message to reduce HIV-related stigma.

Like other studies, we found that fear of HIV was a major driver of stigma in healthcare settings in Ireland. Just over half of our survey respondents said they would worry about drawing blood from a patient living with HIV, and over a quarter reported using special measures they would not use with other patients, such as double-gloving.

What was unique about our study was that, for the first time, we also measured knowledge of U=U among healthcare workers. We found that, while other factors were relevant, knowledge of U=U was the most powerful protective factor against fear of HIV and stigmatising behaviour.

In other words, healthcare workers who are confident in the U=U message are much less likely to stigmatise people living with HIV.

Stigma is a complex social phenomenon, and no single intervention will be fully effective against it. However, our research suggests that increasing awareness and acceptance of U=U would be an effective, low-cost and scalable action that might inch us closer to a stigma-free future.

What do insects do all winter?

You are standing in a forest in the middle of winter and the temperature has dropped below zero. The ground is covered in snow and the trees and bushes are naked. The insects that normally fly or crawl in warmer weather are nowhere to be seen.

You might assume that insects do not survive the seasonal shift. After all, temperatures are too low for them to forage and the plants or other insects they’d eat are scarce anyway.

But that is not the case. Actually, they are still all around you: in the bark of the trees and bushes, in the soil, and some may even be attached to plants underneath the snow. Snow, as it turns out, is a rather good insulator – almost like a blanket.

The insects are hibernating. Scientists call this a “diapause” and it is how insects, which in most cases cannot generate their own heat like we mammals can, survive the cold winter months.

Winter is coming…

Insects have to prepare for winter before the temperature gets too low. For some species, hibernation is a part of life. These species have one generation a year, and every individual will experience winter and hibernate no matter the conditions.

However, most insects only get the cue to hibernate from their environment. This allows a species to have several generations a year, of which only one experiences winter. Those species must somehow foresee winter’s approach.

So how do they do it? Temperature is not a particularly reliable signal. Although temperatures get colder towards winter, they can vary a lot from week to week. Another environmental factor can be trusted to be the same every year: day length.

Insects are highly attuned to day length as a seasonal marker.
Flystock/Shutterstock

A great variety of insects interpret the shortening days as their cue to prepare for hibernation, unless there is still time for another generation before winter descends. Take the speckled wood butterfly. This butterfly can sense day length as a larva (it is still not fully known how) and, if the days are suitably short, it gains extra weight and hibernates as a pupa (or chrysalis).

Correctly assessing when winter will arrive is crucial for survival. If an insect fails to make the right decision in time, it may freeze or starve, or spend all of its hard-earned energy before it can safely emerge from hibernation.

A long winter’s gnat

Hibernation entails several strategies that have allowed this vast class of animals, comprising approximately 5.5 million species, to cope with the cold far from Earth’s balmy equator.

Some insects hibernate in places that conceal them from low temperatures while others undergo changes within their bodies to either avoid or tolerate freezing. Our friend the speckled wood butterfly, after gaining weight as a larva, will search for a suitably sheltered spot to bed down in its forest habitat – perhaps on the grass (which it eats the rest of the year) that will become covered in snow.

There is almost no food available at this time of year and insects generally do not eat during their hibernation. Winter can last for months, so insects have evolved two strategies: gain additional weight before winter and consume this energy store slowly by lowering their metabolic rate.

Many insects live their entire life cycle (from egg to larva, pupa and adult) within a few months to a year. Losing months during winter is significant. And so, insects simply pause their development during hibernation. The life stage in which they hibernate differs from species to species. The speckled wood butterfly, for example, found across Europe and North Africa, turns into a pupa just before winter and develops into a butterfly several months later in spring.

A speckled wood in spring.
Nigel Jarvis/Shutterstock

Change is in the air

Rising global temperatures caused by burning fossil fuels, animal agriculture and deforestation, among other things, have resulted in shorter and warmer winters.

For insects that can adapt to these changes fast enough, this creates an opportunity to expand northwards and to produce more generations per year where they already live. Some species have managed to do this while others can’t – entomologists are expending great effort to understand why.

The challenges of adapting to warmer winters are manifold. Temperatures drop later and later in the season, but days shorten as consistently as they ever did. This mismatch can trick insects into making the wrong decision. If this happens to too many insects, a species can go locally extinct.

Studies suggest that some insects can change the day length they use to diagnose winter’s approach relatively quickly. However, it is not known if all species will be so capable.

Energy consumption in insects is also temperature-dependent. As winters warm, an insect risks depleting its energy stores before it can terminate hibernation.

Higher temperatures during winter also mean less snow, which, somewhat ironically, means that some species cannot conceal themselves from the cold.

Expanding northwards can be a somewhat limited opportunity. The food source or habitat an insect relies on may not be available in its new home, especially if the species lives off just a few plants or its habitat is not found further north.

With more research into the factors influencing how different insects adapt to higher temperatures during winter, scientists could predict which species will need more urgent help from conservationists – and what form that help should take.

Next time you are standing in a forest on a cold winter day, think about how amazing it is that hibernating insects are surviving, for several months at a time, in a climate where they would otherwise perish.


China has banned US exports of key minerals for computer chips – leaving Washington with limited options

China recently banned the export of the minerals gallium and germanium to the US amid growing tensions between the two countries on trade.

The minerals are of critical economic value because they are used in computer chips, in military technology such as night vision goggles, and in the renewable energy industry, where they are important for manufacturing electric vehicles and solar cells. All of these areas are very sensitive sectors for the US and EU.

China has overwhelming market power over supply, because it is the source of 98% of primary gallium and 91% of primary germanium. Primary refers to “raw” sources such as mineral ore. In several sectors where the minerals are used, there are no substitutes for them.

Gallium and germanium are present in very low concentration as byproducts of major minerals – they’re known as trace minerals. Germanium’s primary source is the residue from zinc refineries and coal fly ash (a powdered residue produced when coal is burnt in power plants).

Gallium is mainly produced as a byproduct of mining bauxite ore (the main source of aluminium) and of the processing stage that extracts aluminium from bauxite.

The Chinese ban on exports of these minerals to the US closely followed Washington’s third crackdown in three years on China’s semiconductor (computer chip) industry. The US wants to curb exports of advanced chips to China that could be used in applications that threaten America’s security.

Gallium melts at slightly above room temperature.
E-Rik / Shutterstock

For example, advanced chips could be used in electronic warfare applications that make use of artificial intelligence (AI), or in advanced weapons systems such as hypersonic missiles. China said its ban on gallium and germanium was because of the minerals’ “dual military and civilian uses”.

According to a report in Reuters in 2023, the US Department of Defense holds a strategic stockpile of germanium, but no reserves of gallium. In October 2024, the US Geological Survey (USGS) estimated that a total ban on the export of gallium and germanium could result in a US$3.4 billion loss to US GDP.

The minerals’ uses extend far beyond national security applications. Gallium is used in solid-state lighting devices, including light-emitting diodes (LEDs). Germanium is used in optical fibres and as a catalyst to speed up the reactions used in manufacturing polyester and PLA (a bioplastic). The minerals are vital for making the electronic devices we depend on every day, such as smartphones, displays and laptops.

Germanium is used in optical fibres, among many other applications.
Asharkyu / Shutterstock

So what can the US do to circumvent the effects of the ban, given China’s near monopoly on the primary production of these critical minerals?

One route is for the US to re-start and expand domestic mining of these minerals. Indeed, the Pentagon has already indicated that this is being explored.

As previously mentioned, gallium is mainly recovered as a byproduct of processing aluminium or zinc ores. The USGS says that some US zinc deposits contain up to 50 parts per million of gallium, but the mineral is not currently recovered from these deposits.

Washington is concerned about the export of advanced computer chips to China, which could be used in weapons such as hypersonic missiles.
US Army

Historically, reported production of germanium in the US has been limited to one site, the Apex mine in Washington County, Utah. The Apex mine produced both gallium and germanium as primary products during the mid-1980s, but it has since closed.

Another option for the US is to diversify the primary production of these minerals by investing in zinc, coal and bauxite refineries in other, friendly countries – currently, for instance, only 3-5% of the germanium present in zinc ores and coal is recovered during refining. Canada’s Teck Resources is the biggest supplier of germanium in North America, extracting the mineral from its Trail smelter in British Columbia.

An alternative would be to step up extraction from so-called secondary sources, which primarily means recycling old electronic devices and other hardware that has reached the end of its useful life. There are no official statistics on secondary supply, but some reports estimate that no more than 10% of the total gallium supply comes from secondary sources. This share reaches 30% in the case of germanium.

However, there are important barriers to increasing the secondary production of these minerals. The process for recovery through recycling is very complex since, in hardware such as computer chips, the minerals are usually combined with other materials. This makes isolating the minerals difficult.

Consequently, the Chinese ban represents a major supply chain disruption for these minerals. The lower primary supply cannot be offset by secondary supply (recycling) in the short term, since the recovery yield is still low and its cost is not competitive.

In the long term, technological advances in this recovery process for both minerals could reduce its cost and increase the supply, thus reducing the dependence on Chinese mineral ores.

The Hebrew Hammer: a Hanukah film that mocks antisemitic stereotypes through its butt-kicking Jewish hero

If you watch one Hanukah film this festive season, may I suggest the 2003 film The Hebrew Hammer. I am particularly partial to this film – it featured heavily in my book, The New Jew in Film, for its self-conscious reversal of cinematic stereotypes of Jews.

Starring Adam Goldberg (fresh from Saving Private Ryan), Andy Dick and Judy Greer, The Hebrew Hammer features an orthodox crime-fighting Jewish hero, Mordechai Jefferson Carver, who saves Hanukah from the clutches of Santa Claus’s evil son, who wants to make everyone celebrate Christmas.

The Hebrew Hammer has been voted among the top holiday movies by the New York Times and Vanity Fair. Moment Magazine listed it among its “Top 100 Most Influential Films in the History of Jewish Cinema” alongside such great films as The Graduate, Schindler’s List and Annie Hall.

The Hebrew Hammer bills itself as the first “Jewsploitation” film, since it is self-consciously based on the blaxploitation subgenre of American film. A portmanteau of the words “black” and “exploitation”, the genre emerged in the 1970s and was characterised by its controversial portrayal of Blackness, graphic violence and frequent female nudity.


When I spoke to The Hebrew Hammer’s director, Jonathan Kesselman, about how he crafted the film, he mentioned that he rented all the blaxploitation movies he could get his hands on to get a sense of the genre and how it works. Inspired by this movie marathon, he wrote The Hebrew Hammer in a month, and there are clear influences to be spotted throughout.

The eponymous Brooklyn-based Haredi crime fighter is not so much a Jewish James Bond as a Semitic Shaft (a classic of blaxploitation from 1971) – “the kike who won’t cop out when Gentiles are all about”, as the theme tune tells us. He is a tough Yiddish-speaking action hero modelled on the Black Panthers. As “the baddest Heeb this side of Tel Aviv”, he is also tattooed and muscled – what in Yiddish would be called a shtarker.

Then there is his whole look. Carver is dressed as a cross between the fictional private investigator Shaft and a Haredi Jew. He wears a black trench coat and cowboy boots but with Star of David-shaped spurs and belt buckle, two exaggeratedly large gold chai (the Hebrew word for “life”, which also signifies the number 18) neck chains and a tallit (a traditional prayer shawl) as a scarf. He drives a white Cadillac with Star of David ornamentation and two furry dreidels (spinning tops used during the festival of Hanukah) hanging from his rear-view mirror. His registration plate also reads L’chaim (Hebrew for “to life” or “cheers”).

Undermining stereotypes

Blaxploitation films have a complicated legacy: some celebrate them as a revolution in representations of black empowerment, while others see them as pandering to longstanding and harmful racial stereotypes. For those who celebrate these films, however, they are seen as countering and mocking stereotypes rather than reinforcing them. The Hebrew Hammer can be seen as doing very much the same for Jewish stereotypes.

Carver is recruited by the Jewish Justice League (JJL), which is housed in a building modelled on the Pentagon but in a Star of David shape. The JJL is an umbrella organisation for such groups as “The Anti-Denigration League”, “The Worldwide Jewish Media Conspiracy” and “the Coalition of Jewish Athletes” (whose delegate is, in another dig at anti-Jewish stereotypes, predictably absent).

Carver’s mother is overbearing, his girlfriend is a Jewish American Princess – a spoiled and entitled whiny woman – and her father resembles the Israeli general Moshe Dayan. Carver also manifests every Jewish neurosis: he is allergic to dust, has a taste for Manischewitz wine (Black Label) and cannot handle too much pressure or expectation. When his enemies seek to distract him they do so by throwing money on the ground.

The Hebrew Hammer is less of a Jewish James Bond and more of a Semitic Shaft.
Jonathan Kesselman

As in blaxploitation, these are all harmful stereotypes – in this case, of Jewish people. In The Hebrew Hammer, however, the aim is not to bolster them but to mock and therefore undermine them in a self-conscious way.

As well as hyperbolic representations of stereotypes, The Hebrew Hammer reverses the antisemitic trope that Jews are physically weak and cowardly. “We’re often depicted as intellectual, but weak and uncool,” Kesselman said. “It’s important to take back these stereotypes and own them.”

“When I made it, I didn’t think I was making a holiday movie,” Kesselman told me, but noted that “it survives because it’s a holiday movie.”

Computer models are vital for studying everything from climate change to disease – here’s how AI could make them even better

Here’s one definition of science: it’s essentially an iterative process of building models with ever-greater explanatory power.

A model is just an approximation or simplification of how we think the world works. In the past, these models could be very simple, as simple in fact as a mathematical formula. But over time, they have evolved and scientists have built increasingly sophisticated simulations of the world as new data has become available.

A computer model of the Earth’s climate can show us that temperatures will rise as we continue to release greenhouse gases into the atmosphere. Models can also predict how an infectious disease will spread through a population, for example.

Computer models can be rejected if experimental evidence does not support them. So there’s a kind of arms race to keep models competitive as new data appears. And the revolution occurring in the field of artificial intelligence (AI) could make these vital tools even better.

Take weather and climate forecasting. The numerical models used to predict weather are large, complex and demanding in terms of the amount of computing power needed to run them.

They are also unable to learn from past weather patterns. However, methods based around artificial intelligence, including a subset of AI known as machine learning, have shown huge potential to improve on what we currently have.

Machine learning involves creating algorithms (sets of mathematical rules to perform particular tasks) that can learn from data and apply these lessons to unseen data.
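
To make that definition concrete, here is a minimal sketch in Python – not taken from any of the models discussed in this article – of what “learning from data and applying it to unseen data” looks like in practice, using the widely used scikit-learn library. The data and variable names are invented purely for illustration.

```python
# A toy sketch of the idea described above: a model "learns" a relationship
# from past data, then applies what it learned to data it has never seen.
# The numbers below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical past observations: daily hours of sunshine vs. afternoon temperature (°C).
sunshine_hours = np.array([[1.0], [2.0], [4.0], [6.0], [8.0]])
temperature_c = np.array([11.0, 13.0, 17.0, 21.0, 25.0])

model = LinearRegression()
model.fit(sunshine_hours, temperature_c)   # training: learn the pattern from the data

# Prediction: apply the learned pattern to a value the model has never seen.
print(model.predict(np.array([[5.0]])))    # roughly 19°C for five hours of sunshine
```

Weather and climate models that use machine learning apply the same principle, only with vastly more data, parameters and computing power.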

But until recently, weather models that incorporated machine learning techniques weren’t considered suitable for what’s called ensemble forecasting, a set of forecasts that shows the range of possible future weather conditions. Nor were they useful for weather and climate simulations over the longer term, as opposed to near-term forecasts.

Models that use machine learning show huge promise in weather forecasting.
ChickenWing Jackson / Shutterstock

However, a recent study published in Nature showed that a machine learning model called NeuralGCM produces ensemble forecasts that were just as good as the leading models. It could also produce realistic long-term forecasts of changes in the climate.

Machine learning models have to be “trained” by feeding them lots of data, from which they learn and improve at what they do. The training process is expensive and requires a lot of computer power.

However, after a model is trained, using it to make predictions is comparatively fast and cheap. The results show that AI can enhance the large-scale physical simulations that are essential for understanding and predicting the climate system.

Big data

As the British statistician George E.P. Box said, “All models are wrong but some models are useful.” We must also remember that all measurements are wrong. There is always some noise present in our data and it is not an entirely accurate reflection of the state of the world.

But models employing machine learning are enabled by “big data”. Vast stores of information and measurements can be used to train these models, giving them ever-greater predictive power. In general, big data is characterised by the three v’s: volume, velocity and variety.

Data now arrives in bigger volumes, at greater velocity, and with increased variety. This is partly due to the way that different electronic devices can be connected, via what’s called the “Internet of Things”.

Improving our understanding of how the Earth’s climate system will evolve in coming decades will be vital for informing efforts to tackle greenhouse gas emissions. It will also help us adapt to the effects of global warming.

Models using machine learning – including a powerful variant known as deep learning – have been used to detect and track COVID-19. Researchers have developed machine learning models that incorporate clinical, genetic and lifestyle factors to predict an individual’s risk of developing cardiovascular disease.

Scientists have also used the AI technique of deep reinforcement learning to develop tools that allow them to control the hot plasma necessary to generate nuclear fusion reactions.

In the past, AI was a fairly narrow field with very specific applications, such as playing chess. With the dawn of generative AI, its uses are much broader, with the technology able to create new content such as text, images and video.

This moves us closer to the goal of general artificial intelligence, where the technology becomes capable of carrying out any task a human could do. Building computer-based models of the world with the help of artificial intelligence represents another major milestone.

The world of science is starting to recognise the power of AI, as seen in the award this year of two Nobel prizes for work involving artificial intelligence. Maybe we are not that far away from a Nobel prize being awarded to an AI – or even a situation where a machine decides who to award the prizes to.

My grandfather was ‘Mr Health and Safety’. His life’s quest to make work safer has been ridiculed – but the rise of the gig economy shows it’s no joke

“I hope we have persuaded [the minister] of the need to do more, not merely to reduce industrial noise but to help people who have suffered industrial deafness. I confess to having some direct and personal interest in the subject … I am one of those who has to pin his ear to the amplifier as I am partially deaf … I attribute this not to advancing years, but to my former industrial experience in the engineering industry.”

Harold Walker was a man obsessed with making work safer. As this extract from a 1983 House of Commons debate on industrial noise shows, the longtime Labour MP for Doncaster did not stop campaigning even after his greatest professional achievement, the Health and Safety at Work Act, passed into UK law 50 years ago.

Walker was a “no nonsense” politician who brought his own experiences as a toolmaker and shop steward to inform legislation that is estimated to have saved at least 14,000 lives since its introduction in 1974.

But despite this, the act has for nearly as long been a source of comedic frustration at perceived bureaucratic overreach and unnecessary red tape. The phrase “health and safety gone mad” was common parlance in the late 20th century, and in 2008, Conservative leader David Cameron told his party conference: “This whole health and safety, Human Rights Act culture has infected every part of our life … Teachers can’t put a plaster on a child’s grazed knee without calling a first aid officer.”


Such criticisms feel like a precursor to today’s torrent of complaints aimed at “woke culture”. But as a senior occupational health researcher and member of the University of Glasgow’s Healthy Working Lives Group, I want to stand up for this landmark piece of legislation, which underpins much of our work on extending and improving working lives – ranging from the needs of ageing workers and the prevention of early retirement due to ill health, to the effects of Long COVID on worker efficiency, and mental health and suicide prevention at work.

Sometimes, our group’s research highlights changes to the act that are needed to fit today’s (and tomorrow’s) ways of working. But its fundamental principles of employer responsibility for safety in the workplace have stood the test of time – and feel more relevant than ever amid the increase of insecure work created by today’s zero hours “gig economy”.

Harold Walker with his grandson, Simon, in Rome.
Simon Harold Walker, Author provided (no reuse)

But my reasons for writing about the act are personal as well as professional, because Harold Walker – who in 1997 became Baron Walker of Doncaster – was my grandfather. As a boy, he would tell me terrifying stories about his time working in a factory. I did not realise that one of them would prove to be a seminal moment in the formulation of Britain’s modern health and safety legislation.

‘It happened both quick and slow’

The details always remained the same, however many times he told me the story. His friend and workmate had been trying to fix a pressing machine in the toolmaking factory where they both worked. My grandfather vividly painted a scene of extreme noise, sweating bodies, rumbling machines and dirty faces.

The machine lacked guards and suddenly, as his friend worked, his sleeve got caught in the cogs. “It happened both quick and slow,” my grandfather recalled. He described the man being pulled into the machine, screaming, while he and other colleagues tried, unsuccessfully, to wrench him free. Too late, the machine was shut down. By this point, the cogs were red with blood, the man’s arm crushed beyond repair.

Despite its obvious culpability, my grandfather said their employer offered his friend no support, made no reforms, and simply moved on as if nothing had happened. My grandfather was deeply affected by the incident, and I believe it played a key role in shaping the rest of his political career.

A British tool-making factory in the early 1950s.
Allan Cash Picture Library / Alamy Stock Photo

Walker would serve as a Labour MP for 33 years, holding roles including minister of state for employment (1974-79) and deputy speaker of the House of Commons (1983-92). His fierce support for Doncaster and the city’s main industry, mining, earned him a reputation as a key figure in industrial relations – and saw him struggle to maintain neutrality during the 1984 miners’ strike.

As deputy speaker, no one appears to have been safe from his charismatic if somewhat tyrannical rule – he once reportedly chided the prime minister, Margaret Thatcher, for straying off-topic. Fellow Labour MP Barbara Castle described him as “a short, stocky man who looked like a boxer and behaved like one at the despatch box – very effectively”.

The act he championed was born from anger and his unshakable belief that workers’ lives were worth protecting. As a result of his impoverished upbringing – born in Audenshaw near Manchester, his father had variously been a hat-maker, rat-catcher and dustman – my grandfather was obsessed with making sure I had properly fitting shoes, since ill-fitting hand-me-downs as a child had left his feet permanently damaged. This belief in small but essential acts of care underpinned his approach to both life and politics.

How the act was created

For decades after the second world war, as Britain sought to rebuild its shattered economy and infrastructure, some of its workplaces remained death traps. In 1969, 649 workers were killed and around 322,000 suffered severe injuries. This reflected, at best, a lack of knowledge about the need for enhanced worker safety – and at worst, a blatant disregard for workers’ health on the part of businesses and the UK government.

For centuries, coal mining had been one of Britain’s most hazardous industries. Miners faced the constant threat of mine collapses, gas explosions and flooding, and disasters such as at Aberfan in 1966, where a coal spoil tip collapsed onto a school killing 116 children, became tragic markers of the industry’s dangers. Older miners would often succumb to diseases such as pneumoconiosis (black lung), caused by inhaling coal dust day after day.

Likewise, unprotected shipbuilders and dockers, construction workers, steelworkers and railway employees all faced a threat to life merely for doing their job, including falls from unsafe scaffolding, molten metal accidents and train collisions.

An unprotected construction worker on Westminster’s Big Ben clock tower in 1948.
Dave Bagnall Collection/Alamy Stock Photo

In the textile mills of Manchester and beyond, workers toiled in poorly ventilated factories, inhaling cotton dust that led to byssinosis, or “Monday fever”. Poisoning and exposure to dangerous chemicals plagued workers in tanneries and chemical plants.

While the 1952 London smog had brought nationwide attention to large-scale respiratory diseases when some 4,000 excess deaths were recorded over a fortnight (leading to the first effective Clean Air Act in 1956), respiratory ailments in the workplace were typically considered just another occupational hazard in any industry that involved dust or toxic fumes, from mining to shipbuilding to textiles. Meanwhile another lurking, silent killer – asbestos in buildings – would only be recognised in the late 20th century as the cause of deadly conditions such as mesothelioma.


Historian David Walker has argued that many British companies favoured reparations over prevention because it was more cost-effective, resulting in thousands of workers being permanently disabled and unable to work. Determined to change this systemic injustice, my grandfather first revealed his ambition to bring in new protections for workers in 1966, when he proposed “some necessary amendments to the Factories Acts”. But progress was slow and difficult.

Over the years, whenever my grandfather told me about those parliamentary debates and private arguments while digging his garden in Doncaster, I swear he would stick his fork into the earth more forcefully. His stories were full of names like Barbara Castle, Michael Foot, Philip Holland and Alfred Robens that meant little to me as a boy. But even to my young self, it was obvious the creation of the act required extraordinary levels of determination and resolve.


In 1972, the Robens Report – commissioned by Castle, then Labour’s secretary of state for employment and productivity – laid more concrete groundwork by calling for a single, streamlined framework for workplace safety. Working alongside legal experts, industry leaders and union representatives, my grandfather was a central part of the team attempting to translate these recommendations into practical law. A lifelong smoker, he painted vivid pictures to me of the endless debates in smoky committee rooms, poring over drafts late into the night.

This was a cross-party effort involving MPs and peers from all sides, united by a shared determination to tackle appalling workplace conditions but facing significant opposition both within and outside the Commons. Despite that shared determination, the act’s journey through parliament was anything but smooth – beset by disagreements over small details or questions of process, such as when a 1972 debate veered off course to discuss the nuances between “occupational health” and an “occupational health service”.

In May 1973, Liberal leader Jeremy Thorpe taunted his parliamentary colleagues about the lack of progress on the act, shouting across the floor at Dudley Smith, the Conservative under-secretary of state: “No one would wish the honourable gentleman to sit there pregnant with ideas, but constipated about giving us any indication of what was intended.”

One of the thorniest issues was whether the law should prescribe specific rules or take a broader, principle-based approach. The latter would ultimately win out, reflected in the now-famous phrase “so far as is reasonably practicable”, meaning employers must do what is feasible and sensible to ensure the health and safety of their workers. As the bill was read yet again in April 1974, Walker – by now the under-secretary of state for employment – was congratulated on his dogged determination by opposition Tory MP Holland:

“The honourable member for Doncaster has spent so much time and effort on this subject in previous debates that it is fitting he should now be in his present position to see this bill through its various stages … In this bill, we have such a broad approach to the subject, and I welcome it on those grounds. The fact the last two general elections interrupted the promotion of safety legislation causes me a little trepidation about the fate of this bill. But I hope this parliament will last long enough to see it reach the statute book.”

Health and safety ‘goes mad’

When it finally passed into law in July 1974, the Health and Safety at Work Act aimed to ensure every worker had the right to a safe environment. It introduced a fundamental shift in accountability, making employers responsible for their employees’ welfare through financial penalties and public discourse, and established an independent national regulator for workplace safety, the Health and Safety Executive.

Two years later, when asked in parliament if he was satisfied with how the act was operating, Walker replied: “I shall be satisfied with the operation of the act when I am sure that all people at work – employers, employees and the self-employed – are taking all necessary measures for their own and others’ health and safety.”

Since the act’s inception, the number of workplace deaths in Britain has fallen from nearly 650 a year in the 1960s to 135 in 2023. This decline is even more striking when considering the parallel growth in the UK workforce, from 25 million in 1974 to more than 33 million today.
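
A rough back-of-the-envelope comparison makes the point explicit. Using the figures above – and treating the late-1960s death toll and the 1974 workforce as roughly contemporaneous – the fatality rate works out at:

\[
\frac{649}{25\ \text{million}} \approx 26 \text{ deaths per million workers, against } \frac{135}{33\ \text{million}} \approx 4 \text{ per million in 2023}
\]

That is roughly a sixfold fall in the rate at which workers are killed, on top of the fall in absolute numbers.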

Number of fatal injuries to workers in Great Britain, 1974-2024:

D Clark/statista.com, CC BY-NC-ND

Severely injured workers could now receive disability benefits or compensation funded both by the state and their employers. The act replaced the Workmen’s Compensation Act (1923), which had favoured employers, but it still placed a heavy burden on workers to prove their illness was caused by their work. As a result, the outcomes of claims varied significantly depending on individual circumstances.

Almost immediately, the act attracted criticism both for going too far and for not going far enough. In March 1975, the Daily Mail published a full-page “advertorial”, issued by the Health and Safety Commission, headlined “A Great New Chance to Make Work a Lot Safer and Healthier – for everyone in Britain”. But by September, the same newspaper was mounting a scathing attack on the new Health and Safety Executive (HSE) under the headline “The Great Red Tape Plague”, which ended with a quote from an employer saying: “New rules are made so fast these days that it takes me all my time to enforce them. I’m seriously considering giving up.”

The HSE, a largely bureaucratic body, quickly introduced an increasingly complicated system of checks and balances through which employers were questioned by inspectors in order to protect workers – a system for which, in time, the act became heavily criticised.

Contrasting perspectives about the new Health and Safety at Work Act in the Daily Mail, 1975.

High-profile cases of seemingly trivial bans or extreme precautions – such as forbidding conker games without goggles in schools and cancelling local traditions over liability fears – were often ridiculed in the media and by critics of bureaucracy, who framed them as evidence of a creeping “nanny state” culture that stifled common sense and personal responsibility. This extended to public health campaigns such as the push for mandatory seatbelt-wearing, which caused the Daily Mail to quip in 1977: “Those who want to enslave us all, to the Left …”

The “nanny state” criticism stuck, and has become lodged in our popular culture. Decades later, Conservative employment minister Chris Grayling compiled a list of the top ten “most bizarre health and safety bans”. Top of his list of examples of “health and safety gone mad” was an incident in which Wimbledon tennis officials cited health and safety concerns as the reason for closing the grassy spectator hill, Murray Mount, when it was wet. Grayling complained:

“We have seen an epidemic of excuses wrongly citing health and safety as a reason to prevent people from doing pretty harmless things with only very minor risks attached.”

Terms like nanny state, woke and cancel culture are now used interchangeably to criticise perceived over-sensitivity, entitlement, or self-serving and inauthentic forms of activism. In November 2024, the GB News website published an article referring to the installation of safety warnings on staircases at Cambridge University as “utter woke gibberish!”

But defenders of health and safety point out that many of these examples are, in fact, misinterpretations of the act, driven more by organisations’ fears of litigation than by government regulation. In the wake of Grayling’s top ten list, Channel 4 debunked the claims about sponge footballs and banned sack races, noting that such stories were rarely linked to the act itself.

The tension between necessary protections and perceived overreach has become emblematic of broader debates about state intervention, responsibility and risk in modern Britain. In June 2024, a survey of more than 1,200 frontline workers across six sectors found that seven in ten workers think regulations slow them down. Yet more than half (56%) agreed that they had risked their health and safety at work, with a quarter (27%) having done so “several times”.

The rise of the gig economy

The Health and Safety at Work Act, though updated several times – and with a new amendment on violence and harassment in the workplace now being debated – is continually tested by the emergence of new types of job and ways of working.

The rise of gig work, automation and precarious contracts has complicated traditional notions of workplace safety. The gig or freelance economy has created a culture of temporary, unaffiliated workers ranging from cleaners to academic tutors to e-bike delivery riders. These workers typically lack any form of occupational health support. Sick days mean lost wages and the potential loss of future work.

The rise of the gig economy has brought new challenges for health and safety legislation.
Andy Gibson/Alamy Stock Photo

During the COVID pandemic, a survey of 18,317 people in Japan found that gig workers had a significantly higher incidence of minor occupational and activity-limiting injuries than their non-gig working counterparts.

A big question for regulators such as the HSE to tackle is around responsibility. Since gig workers are regarded as self-employed, they typically bear the health and safety responsibility for themselves and anyone who is affected by their work, rather than their employer. In the UK, their rights were somewhat strengthened by a 2020 high court ruling that found the government had failed to implement EU health and safety protections for temporary (gig) workers.

Health and safety legislation has been made even more complex by the adoption of new technologies and, now, artificial intelligence. In 1974, when the UK act was introduced, computers were very much in their infancy, whereas in 2024, millions of workers have exchanged desk dependency for the apparent freedom to work anywhere through phones, tablets and computers.

A particular issue for the UK government (and all sectors of the economy) is why so many people, particularly older workers, are inactive due to long-term sickness. Not only is this bad for business, it’s bad for society too. Waddell and Burton’s influential 2006 report, Is Work Good for Your Health and Well-being?, made the case that the risks of being out of work were far higher than the risks of being in work.

A growing body of evidence suggests health interventions should go beyond the remit of the original act by engaging with workers more holistically, both within and outside traditional workplaces, to help them stay in work. One intervention tested by our Healthy Working Lives Group found that sickness absence could be reduced by a fifth when a telephone-based support programme for recovery was introduced for workers who were off sick.

The modern flexibility of working locations has blurred the boundaries between work and private spheres in ways that can create additional stress and health risks – for example, for workers who never fully stop working thanks to email tethers. This is the world of work which future versions of the act must address.

My grandfather’s legacy

Harold Walker with his grandson Simon in Pisa, Italy.
Simon Harold Walker, Author provided (no reuse)

My grandfather died in November 2003 when I was 19, so my reflections on why he made health and safety his life’s work come in part from unpublished papers I found after his death, as well as Hansard’s reports of his robust debates in the House of Commons. For him, it was a moral imperative – demonstrated by the fact that he continued to challenge the very act he had helped create, critiquing it for not doing enough to protect workers.

Walker’s regrets over the act were poignantly revealed in December 2002, in one of his last contributions to the House of Lords before his death, when he drew from his pre-politics experiences to challenge calls to ease regulations on asbestos at work:

“I have asbestos on my chest – for most of my adult life prior to entering parliament in the 1960s, I worked in industry with asbestos, mostly white asbestos. In the 1960s, when I was a junior minister, I was involved in the discussion of the regulations relating to asbestos. The debate so far today has carried echoes of [those] discussions, when I was persuaded we should not legislate as rigorously as we might have done because the dangers had not been fully assessed. Are we going down that road again?”

Perhaps predictably, tributes after my grandfather’s death reflected the curious confusion that his health and safety legislation had aroused in British society. Labour MP Tam Dalyell praised him as “the most effective champion of workplace safety in modern British history”. In contrast, parliamentary journalist Quentin Letts ranked him 46th among people to have “buggered up Britain” – he was demoted to 53rd in Letts’s subsequent book – as the architect of a regulatory system that destroyed the industries it sought to protect. In his obituary, Letts wrote:

“Harold Walker, the grandfather of the HSE, often meant well. But that is not quite the same as saying that he achieved good things. Not the same thing at all.”

My grandfather would have been the first to argue that the act was not perfect, because compromise does not lend itself to perfection. But as I see in my research today, it changed the world of work and health by shining a critical light on workers and employers. In many ways, it led the charge for workers’ safety – the European Framework Directive on Safety and Health at Work was only adopted much later, in 1989.

Were he still alive, I think he would agree with Letts about some of the unintended consequences of the act. But I also think he would give short shrift to modern debates about the nanny state and wokish over-interference – dismissing these as “folk whinging”, accompanied by a characteristic roll of his eyes.

My grandfather believed his legislation was not merely about compliance, but about fostering a culture where safety became systemic and instinctive. Tales of “health and safety gone mad” have obscured his act’s true purpose – to change how we all think about our responsibilities to one another. I hope we are not losing sight of that.


Gift-giving was practised by early humans in Africa – how it spread and evolved

For many countries around the world, December is an intense, commercialised period of gift-giving. Not just within families but across all sorts of relationships, such as gifts between buyers and service providers.

Gift-giving, the act of presenting someone with a gift, is intended to convey thoughtfulness, appreciation or goodwill. The gift can be a tangible item, an experience, personal time or a gesture. It’s an age-old tradition found across cultures and societies, carrying various meanings and functions that help shape human relationships.

I’m the university chair in African philanthropy at the Centre on African Philanthropy and Social Investment at Wits Business School in South Africa. The centre is Africa’s first and only place of scholarship, teaching and research in this field. I’ve undertaken various studies looking at where gifting came from as a human behaviour, and its history in Africa.

Gifting began in Africa, when the first humans like us emerged. It then evolved as people migrated and was adapted to fit different cultures. Early examples involved the transfer of cattle or women to seal relationships between groups.

Today, it is exemplified by exchanges of gifts between countries during state visits and has evolved into practices like philanthropy. Giving is something that takes place outside households and celebrations, typically to create or seal relationships.

By examining the full history of giving, we’re able to trace its fascinating evolution and the many ways of showing generosity.

Human cognition

Today’s humans originated in Africa about 200,000 years ago, developing unique mental (cognitive) abilities as part of their evolution. These governed the way humans interact with each other. Giving complemented other survival mechanisms, like the instinctive “fight or flight” response.

Research shows that three types of interactive human socioeconomic behaviours evolved together: selfish, cooperative and selfless. Collectively applied, they enabled groups of hunter-gatherers to survive, flourish and grow in numbers. These behaviours appeared in a ratio of about 20% selfish, 63% cooperative and 13% selfless. This relative ratio endures today.

Gifting is similar to the instinct to cooperate, but it does not necessarily imply that something is expected in return. In other words, giving gifts started as a way of sharing that showed selflessness.

As people migrated around the world, their societies adapted to the conditions they encountered. The mix of selfishness, cooperation and selflessness became woven into diverse cultures.

Emergence of gifting

As humans evolved, more and more complex social relationships developed in bounded territorial spaces. Within Africa, groups became clans and clans became tribes, developing into chieftainships, kingdoms and other types of organised areas.

Here, gifts were important for two reasons.

First, within groups, gifts were structured ways of caring for each other and ensured mutual well-being and growth. Gifts were used to build friendships and connections among equals (horizontal relationships). Gifts also helped create loyalty and respect in relationships with leaders or people in power (vertical relationships). Here, gifts were often equated with an expected “deal”: a gift might earn the support and protection of those in authority, while gifts during ceremonies secured one’s place within the group.

Second, between separate identity groups, gifts also functioned as a (symbolic) instrument to negotiate and prevent what might otherwise be hostile relationships.

Shift in practices

Islamic expansion in northern Africa and the imposition of rules by European colonisers everywhere altered this landscape. Gifting started to function in different, notable ways.

Islam came into ancient Africa around the seventh century while Christianity spread from what is now Egypt in the first century AD. Each faith recognised an obligation to gift. They introduced new, formalised and institutionalised forms of giving, such as caritas, or Christian charity, and zakat, a Muslim obligation of giving for the needy.

Early in the past millennium, as resistance to colonisation gained traction, gifting practices transformed into a self-defensive strategy. Gifting became one tool to cope and survive under difficult conditions. For example, in east Africa people would exchange food, money and other resources to support both families and the communities they were part of. In Kenya the community practice of harambee (pulling together) sponsored expansion of access to education: an example of horizontal gifting.

End of colonial rule

Colonial rule ended after about 300 years. In the post-colonial era, gifting can be divided into two periods. One can be referred to as “traditional”, dating from about 1960 to 2000. The second, from 2000 onwards, can be called “new age”.

UK’s Prince Charles (future King Charles III) receives a gift from a member of the Ashanti tribe in Kumasi, Ghana. March 22, 1977.
John Scott/Princess Diana Museum/Getty Images

The traditional era loosely corresponded to the period when many African countries gained political independence and called for a return to traditional values, societal norms and self-determination.

African leaders inherited borders that forced together diverse ethnic and language groups, each with different relationships to colonial powers that had to be managed. In many ways, this laid the foundation for the prevalence of Africa’s ethnic patronage in politics today.

For the first 30 or so years of independence, many countries were under single-party rule, with politicians relying on vertical, gift-like handouts extracted from public resources to manage internal political tensions. Even after multi-party systems were introduced, this practice continued as a form of political dispensation.

Independence allowed many non-governmental organisations (NGOs), or “givers”, to become involved in development. Aid was often framed as charity rather than being focused on people’s rights, and NGOs used professional, one-way donation models. Despite good intentions, this shift weakened traditional, community-based contributions as local communities became reliant on external gifts.

Alongside NGOs, big private donors and foundations introduced the idea of “philanthropy” to Africa. This popularised a type of giving that can make traditional, smaller-scale generosity feel less important. It potentially discourages those who can’t give as much.

New era of gifting

This millennium has introduced a new age of African giving, driven by three key factors: dissatisfaction with traditional grant methods, a variety of funding sources, and different approaches to measuring success.

One force is a fast-moving diversification of gift-givers. Examples include corporate social investment as well as “philanthrocapitalism” – large-scale donations or investment by very wealthy individuals or private organisations. They typically use business strategies to tackle social issues.

Another force is innovation in the design of gift-giving practices. One example is trust-based philanthropy, where funders support recipients they trust without requiring strict contracts or detailed periodic reporting before the next tranche is paid. Another is effective altruism, a type of giving that focuses on making rational, evidence-based investments to create measurable solutions for social problems.

A third is the promotion of domestic resource mobilisation. This is the application of Africa’s own assets for its development, including diaspora remittances.

Looking back, it’s clear that those giving gifts – in whatever form – should take a more reflective and balanced approach to understanding the role of giving in Africa’s communities and societies, especially as a political tool. Doing so can help bring greater accountability of leaders to their citizens.

Europe’s microstates: the medieval monarchies that survive in our midst

Continental Europe is home to four microstates with populations of between 30,000 and 80,000 people: Andorra, on the border between France and Spain; Liechtenstein, nestled between Switzerland and Austria; Monaco, which sits on the French Riviera; and San Marino, which is surrounded by northern Italy.

These states have existed since the medieval period and their tiny size has enabled them to develop and maintain singular constitutional arrangements. They have all developed original solutions to the problems of state architecture, many of which survive today.

All four of these microstates participate in the Council of Europe (Europe’s human rights organisation) and have therefore had to modernise to meet international standards of governance. This includes the independence of the judiciary.

However, all four have also implemented these reforms without altering their institutional identity. Their commitment to preserving their distinctiveness from other countries prevents wider reform to their institutions. For them, the protection of national tradition and identity is a form of self-preservation rather than a mere expression of ideology.

The distinctiveness of the four microstates lies in the survival of institutional arrangements that can no longer be found practically anywhere else in the world. In the principalities of Liechtenstein and Monaco, for example, the monarchy still has a central role in the constitution.

Unlike in most European states with a monarchy, in Liechtenstein and Monaco, the royal head of state continues to exercise meaningful power. Andorra and San Marino, meanwhile, operate under a dual head of state arrangement. They effectively have two monarchs.

The populations of Europe’s medieval microstates.
World Bank/Data Commons, CC BY-ND

Institutional arrangements in these microstates have been shaped by their diminutive size, in terms of both territory and population, and by their geographical location. These arrangements have survived since the middle ages because they have become part of each state’s identity. While national tradition is the subject of ideological debate in other nations, in these states preserving the past is a survival mechanism.

Liechtenstein and Monaco

Liechtenstein and Monaco are constitutional monarchies of the kind that offer substantial power to the royal family. Everything is organised around a prince, who exercises the executive power. Contemporary monarchies in the western legal tradition generally have a ceremonial king or queen but the executive power is held by an elected government. Liechtenstein and Monaco have maintained their historical organisation of government, centred on a very powerful monarch.

Monaco’s royals mark their national day in 2023.
EPA

In Monaco, although the prince’s powers are not unlimited, he is not even accountable to parliament for the powers he does hold. Liechtenstein’s prince enjoys even more powers, including the right to appoint half of the members of the constitutional court.

However, the prince of Liechtenstein’s sovereign power is held in partnership with the people of Liechtenstein. The institutional architecture is built so as to allow a system of checks and balances between the prince and the people.

Since a 2003 constitutional amendment, for example, the people can table a motion of no confidence in the prince if more than 1,500 citizens agree to do so, which triggers a referendum on confidence in him. The same number of citizens can mount an initiative to abolish the monarchy entirely, should they choose to do so.

Andorra and San Marino

The principality of Andorra should more properly be called a co-principality, because of its arrangement of two co-princes. One of the princes is the bishop of Urgell – from Catalonia – and the other is the president of the French Republic (and previously the French king or emperor). So another Andorran peculiarity is that neither of the princes is an Andorran national.

Emmanuel Macron and Joan Enric Vives, the co-princes of Andorra.
EPA

Following a 1993 reform that established a fully fledged constitution, neither prince holds sovereign power. Their present constitutional role is almost entirely ceremonial. However, concerns remain over the fact that they are not nationals of the state and that the heads of state are selected neither by the Andorran people nor by their representatives. The historical reason for a foreign head of state is the geographical location of Andorra – wedged between Catalonia and France. Allowing itself to be put under this double sovereignty was a guarantee of survival.

San Marino also has a two-headed state but both leaders, called the Captains Regent, are Sammarinese nationals. They are elected by the Grand and General Council (the Sammarinese legislative body) and their distinctive trait is that they serve only a six-month term of office.

The reason for such a short tenure is that San Marino has a population of just under 34,000 people. Everyone knows everyone else, which is a situation that can be detrimental to the independence of elective offices.

The Captains Regent cannot accumulate enough power in their short time in office to overthrow the republic. The office was first established in 1243, shortly before a number of Italian republics were overthrown by wealthy families. One of the reasons San Marino has been able to survive is that, for centuries, it has prevented any one family from becoming more powerful than the others.

Microstates are, therefore, not like Europe’s regular-sized states. They have distinctive institutional architectures – and often for understandable reasons.

Eating red meat may increase your risk of type 2 diabetes – not a lot of people know that

Red meat has been part of diets worldwide since the days of early humans. It is an excellent source of protein, vitamins (such as B vitamins) and minerals (such as iron and zinc).

However, red meat has long been associated with increasing the risk of heart disease, cancer and early death. What may not be so well known is the link between red meat consumption and type 2 diabetes.

A paper published in the Lancet in September 2024 highlighted this link to type 2 diabetes using data from the Americas, the Mediterranean, Europe, south-east Asia and the Western Pacific (20 countries included).

This recent study, with nearly 2 million participants, found that high consumption of unprocessed red meat, such as beef, lamb and pork, and processed meat, such as bacon, salami and chorizo, increased the incidence of type 2 diabetes.

The researchers also highlighted a link between the consumption of poultry and the incidence of type 2 diabetes, but the link was weaker and varied across the populations.

Type 2 diabetes is a serious public health issue affecting 462 million people globally. It occurs when our bodies don’t make enough insulin or can’t use insulin well.

Nearly half a billion people globally have type 2 diabetes.
Trevor Smith / Alamy Stock Photo

Insulin is a hormone produced by the pancreas, a small leaf-shaped gland that sits behind the stomach and just in front of the spine. Insulin helps blood glucose enter cells, which stops levels from rising in the blood.

In type 2 diabetes, because the body does not make enough insulin or cannot use it effectively (also referred to as “insulin resistance” or “impaired insulin sensitivity”), blood glucose reaches high levels, causing symptoms such as extreme thirst, an increased need to pass urine and feelings of tiredness. Long-term health issues include nerve damage, foot problems and heart disease.

The underlying mechanisms linking red meat intake with type 2 diabetes are unclear. Mechanisms could relate to the function of the pancreas, insulin sensitivity or a combination of the two.

Possible mechanisms

Red meat has high levels of saturated fat and is low in polyunsaturated fats, which could disrupt insulin sensitivity.

Research has also shown that a high protein intake from animal sources (compared to vegetarian sources) can increase the risk of type 2 diabetes, possibly due to the high levels of branched-chain amino acids (BCAA) in animal protein.

BCAA include the amino acids leucine, isoleucine and valine. In a small study, short-term BCAA infusions increased insulin resistance in humans. Similar findings were shown in larger human studies.

High levels of plasma BCAA can have various origins. These connections between red meat, BCAA, insulin resistance and type 2 diabetes are worth exploring further.

Another potential mechanism involves gut microbiota, the collection of microbes in our gut.

Our microbiota metabolises choline (a water-soluble essential nutrient) and L-carnitine (an amino acid derivative found naturally in food), both of which are abundant in red meat, producing trimethylamine. Increased trimethylamine has been associated with a higher risk of developing type 2 diabetes.

How we cook meat may also add to this conundrum. Cooking meat at high temperatures, such as grilling and barbecuing, can produce harmful compounds called “advanced glycation end products”.

These compounds can damage cells through oxidative stress (caused by unstable atoms or molecules called free radicals), trigger inflammation (which can be damaging if it occurs in healthy tissues or lasts too long) and contribute to insulin resistance.

Red meat is a great source of iron. But some studies have shown that high long-term iron intake or iron overload, particularly of haem iron (iron from animal-based sources), may increase the risk of type 2 diabetes.

Eat less red meat

According to a World Health Organization report, global consumption of all types of meat has increased in the last 50 years. In some wealthy countries, such as the UK, red meat consumption appears to be stable or declining, although there is a lot of variation in meat consumption between and within countries.

In the UK, people are advised to consume no more than 70g (cooked weight) of red meat per day and to avoid eating processed meat. A similar recommendation is given across many countries.

With the winter holidays around the corner and the festive gatherings in full swing, reducing red meat consumption will be difficult, especially for those who really like the taste. So enjoy these moments without worrying, and where possible, try to consume fibre-rich vegetables with red meat.

Small steps can be taken to reduce your red meat intake: have smaller portions, choose one meat-free day a week (meat-free Mondays, say), or substitute some (or all) of the meat in recipes with chicken, fish, beans, lentils or the like.

And for those days you do eat red meat, try poaching, steaming or stewing it – it’s healthier than grilling or barbecuing.