The psychology behind anti-trans legislation: How cognitive biases shape thoughts and policy

A state law signed Feb. 28, 2025, removes gender identity as a protected status from the Iowa Civil Rights Act, leaving transgender people vulnerable to discrimination. The rights of transgender people – those who present gender characteristics that differ from what has historically been expected of someone based on their biological sex traits – are under political attack across the United States. There are now hundreds of anti-trans bills at various points in the legislative process.

But why?

Reasons given usually center on protecting children, protecting cisgender women’s rights in bathrooms and sports competitions, and removing funding for gender-affirming care. Some efforts appear to stem from fear-driven motives that are not supported by evidence.

Bias against trans people may not always feel like bias. For someone who believes it to be true, saying there can only be biological men who identify as men and biological women who identify as women may feel like a statement of fact. But research shows that gender is a spectrum, separate from biological sex, which is also more complex than the common male-female binary.

We are social psychologists who study and teach about the basic social, cognitive and emotion-based processes people use to make sense of themselves and the world. Research reveals psychological processes that bias people in ways they usually aren’t aware of. These common human tendencies can influence what we think about a particular group, influence how we act toward them, and prompt legislators to pass biased laws.

Root of negative views of transgender people

Social psychology theory and research point to several possible sources of negative views of transgender people.

Part of forming your own identity is defining yourself by the traits that make you unique. To do this, you categorize others as belonging to your group – based on characteristics that matter to you, such as race, age, culture or gender – or not. Psychologists call these categories in-groups and out-groups.

There is a natural human tendency to have inherent negative feelings toward people who aren’t part of your in-group. The bias you might feel against fans of a rival sports team is an example. This tendency may be rooted deep in evolutionary history, when favoring your own safe group over unknown outsiders would have been a survival advantage.

A trans person’s status as transgender may be the most salient thing about them to an observer, overshadowing other characteristics such as their height, race, profession, parental status and so on. As a small minority, transgender people are an out-group from the mainstream – making it likely out-group bias will be directed their way.

Anti-trans feeling may also result from fear that transgender people pose threats to one’s personal or group identity. Gender is part of everyone’s identity. If someone perceives their own gender to be determined by their biological sex, they may perceive other people who violate that “rule” as a threat to their own gender identity. Part of identity formation is not just out-group derogation but in-group favoritism. A cisgender person may engage in “in-group boundary protection” by making sure the parameters of “gender” are well defined and match their own beliefs.

Once you hold negative feelings about someone in an out-group, there are other social psychological processes that may solidify and amplify them in your mind.

The illusion of a causal connection

People tend to form illusory correlations between objects, people, occurrences or behaviors, particularly when those things are infrequently encountered. When two distinctive things happen at the same time, people may come to believe that one is causing the other.

Some superstitions result from this phenomenon. For example, you might attribute an unusual success such as winning money to wearing a particular shirt, which you now think of as your lucky shirt.

If a person only ever hears about negative events when they see or hear about a transgender person, an immigrant or a member of some other minority group, then an illusory correlation can form between the negative events and the minority group. That connection is the starting point for prejudice: automatic, negative feelings toward a group of people without justification.

Of course, it is possible that individuals from the group in question have committed some offense. But to take one individual’s bad deed and attribute it to an entire group of people isn’t justified. This kind of extrapolation is the natural human tendency of stereotyping, which can bias people’s actions.

‘That’s exactly what I thought’

Human minds are biased to confirm the beliefs they already hold, including stereotypes about trans people. A few interconnected processes are at play in what psychologists call confirmation bias.

First, there’s a natural tendency to seek out information that fits with what you already believe. If you think a shirt is lucky, then you’re more likely to look for positive things that happen when you wear it than you are to look for negative events that would seem to disconfirm its luckiness.

If you think transgender people are dangerous, you are more likely to conduct an internet search for “transgender people who are dangerous” than “transgender people are victims of crime.”

There’s a second, more passive process in play as well. Rather than actively seeking out confirming information, people also simply pay attention to information that confirms what they thought in the first place and ignore contradictory information. This can happen without you even realizing.

People also tend to interpret ambiguous events in line with their beliefs – “I must be having a good day, despite some setbacks, because I’m wearing my lucky shirt.” That confirmation bias could explain someone with anti-trans attitudes thinking “that transgender person holding hands with a child must be a pedophile” instead of “that transgender mother is showing love and care for her kid.”

Finally, people tend to remember things that confirm their beliefs better than things that challenge them.

Confirmation bias can strengthen an illusory correlation, making it even more likely to influence subsequent actions – whether compulsively wearing a lucky shirt to an anxiety-inducing appointment or not hiring someone because of discriminatory thoughts about the group they belong to.

Moving past biases

Awareness of biases is the first step in avoiding them. Setting bias aside allows people to make fair decisions, based on accurate information, and in line with their values.

However, this is not an easy task in the face of another social psychological process called group polarization. This phenomenon occurs when individuals’ beliefs become more extreme as they talk and listen only to people who hold the same beliefs they do. Think of the social media bubbles that result from interacting only with people who share your perspective.

Efforts to stifle or prohibit educators’ and librarians’ ability to teach and discuss gender and sexuality topics, openly and fairly, add another challenge. Education through access to impartial, evidence-based information can be one way to help neutralize inherent bias.

Montana state Rep. Zooey Zephyr, who is transgender, in discussion with a colleague.
AP Photo/Tommy Martino

As a final, hopeful point, social psychological research has identified one strategy for overcoming intergroup conflict: forming close contacts with individuals from the “other” group. Having a friend, loved one or trusted and valued colleague who belongs to the out-group can help you recognize their humanity and overcome the biases you hold against that out-group as a whole.

A relevant and recent example of this scenario came when two transgender state representatives convinced their fellow lawmakers to vote against two extreme anti-trans bills in Montana by making the issue personal.

All of these decision-making biases influence everyone, not just the lawmakers currently in power. And they can be quite complex, with particular in-group and out-group memberships being hard to define – for instance, factions within religious groups who disagree on particular political issues.

But understanding and overcoming the biases everyone falls prey to means that optimal decisions can be made for everyone’s well-being and economic vitality. After all, psychology research has repeatedly demonstrated that diversity is good for the bottom line while it simultaneously promotes an equitable and inclusive society. Even from a solely financial perspective, discrimination is bad for all Americans.

Radioisotope generators – inside the ‘nuclear batteries’ that power faraway spacecraft

Powering spacecraft with solar energy may not seem like a challenge, given how intense the Sun’s light can feel on Earth. Spacecraft near the Earth use large solar panels to harness the Sun for the electricity needed to run their communications systems and science instruments.

However, the farther into space you go, the weaker the Sun’s light becomes and the less useful it is for powering systems with solar panels. Even in the inner solar system, spacecraft such as lunar or Mars rovers need alternative power sources.

As an astrophysicist and professor of physics, I teach a senior-level aerospace engineering course on the space environment. One of the key lessons I emphasize to my students is just how unforgiving space can be. In this extreme environment where spacecraft must withstand intense solar flares, radiation and temperature swings from hundreds of degrees below zero to hundreds of degrees above zero, engineers have developed innovative solutions to power some of the most remote and isolated space missions.

So how do engineers power missions in the outer reaches of our solar system and beyond? The solution is technology developed in the 1960s based on scientific principles discovered two centuries ago: radioisotope thermoelectric generators, or RTGs.

RTGs are essentially nuclear-powered batteries. But unlike the AAA batteries in your TV remote, RTGs can provide power for decades while hundreds of millions to billions of miles from Earth.

Nuclear power

Radioisotope thermoelectric generators do not rely on chemical reactions like the batteries in your phone. Instead, they rely on the radioactive decay of elements to produce heat and eventually electricity. While this concept sounds similar to that of a nuclear power plant, RTGs work on a different principle.

Most RTGs are built using plutonium-238 as their source of energy. This isotope is not usable in nuclear power plants because it cannot sustain a fission chain reaction. Instead, plutonium-238 is an unstable element that will undergo radioactive decay.

Radioactive decay, or nuclear decay, happens when an unstable atomic nucleus spontaneously and randomly emits particles and energy to reach a more stable configuration. This process often causes the element to change into another element, since the nucleus can lose protons.

Plutonium-238 decays into uranium-234 and emits an alpha particle, made of two protons and two neutrons.
NASA

When plutonium-238 decays, it emits alpha particles, which consist of two protons and two neutrons. When the plutonium-238, which starts with 94 protons, releases an alpha particle, it loses two protons and turns into uranium-234, which has 92 protons.

These alpha particles interact with and transfer energy into the material surrounding the plutonium, which heats up that material. The radioactive decay of plutonium-238 releases enough energy that it can glow red from its own heat, and it is this powerful heat that is the energy source to power an RTG.

The nuclear heat source for the Mars Curiosity rover is encased in a graphite shell. The fuel glows red hot because of the radioactive decay of plutonium-238.
Idaho National Laboratory, CC BY

Heat as power

Radioisotope thermoelectric generators can turn heat into electricity using a principle called the Seebeck effect, discovered by German scientist Thomas Seebeck in 1821. As an added benefit, the heat from some types of RTGs can help keep electronics and the other components of a deep-space mission warm and working well.

In its basic form, the Seebeck effect describes how two wires of different conducting materials joined in a loop produce a current in that loop when exposed to a temperature difference.

The Seebeck effect is the principle behind RTGs.

Devices that use this principle are called thermoelectric couples, or thermocouples. These thermocouples allow RTGs to produce electricity from the difference in temperature created by the heat of plutonium-238 decay and the frigid cold of space.
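To get a sense of the scale involved, here is a minimal sketch of the underlying relationship, voltage = Seebeck coefficient × temperature difference. The coefficient and the cold-side temperature below are assumed, illustrative values, not specifications of any actual RTG.

```python
# Minimal sketch of the Seebeck effect: a voltage proportional to the
# temperature difference across a thermoelectric couple.
# The Seebeck coefficient and cold-side temperature are assumed values
# for illustration only, not figures for a real RTG design.

seebeck_coefficient = 200e-6   # volts per kelvin (assumed ~200 microvolts/K)
hot_side_k = 810.0             # roughly 1,000 degrees Fahrenheit in kelvin
cold_side_k = 200.0            # rough space-facing temperature (assumed)

delta_t = hot_side_k - cold_side_k
voltage_per_couple = seebeck_coefficient * delta_t

print(f"Temperature difference: {delta_t:.0f} K")
print(f"Voltage from one thermocouple: {voltage_per_couple * 1000:.0f} mV")
```

A single couple produces only a fraction of a volt, which is one reason RTGs wire many thermocouples together in series.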

Radioisotope thermoelectric generator design

In a basic radioisotope thermoelectric generator, you have a container of plutonium-238, stored in the form of plutonium dioxide, often in a solid ceramic state that provides extra safety in the event of an accident. The plutonium material is surrounded by a protective layer of foil insulation to which a large array of thermocouples is attached. The whole assembly is inside a protective aluminum casing.

An RTG has decaying material in its core, which generates heat that it converts to electricity.
U.S. Department of Energy

The interior of the RTG and one side of the thermocouples is kept hot – close to 1,000 degrees Fahrenheit (538 degrees Celsius) – while the outside of the RTG and the other side of the thermocouples are exposed to space. This outside, space-facing layer can be as cold as a few hundred degrees Fahrenheit below zero.

This strong temperature difference allows an RTG to turn the heat from radioactive decay into electricity. That electricity powers all kinds of spacecraft, from communications systems to science instruments to rovers on Mars, including five current NASA missions.

But don’t get too excited about buying an RTG for your house. With the current technology, they can produce only a few hundred watts of power. That may be enough to power a standard laptop, but not enough to play video games with a powerful GPU.

For deep-space missions, however, those couple hundred watts are more than enough.

The real benefit of RTGs is their ability to provide predictable, consistent power. The radioactive decay of plutonium is constant – every second of every day for decades. Over the course of about 90 years, only half the plutonium in an RTG will have decayed away. RTGs require no moving parts to generate electricity, which makes them much less likely to break down or stop working.
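As a rough illustration of that steadiness, here is a minimal sketch of exponential radioactive decay, assuming the commonly cited half-life of about 88 years for plutonium-238. It models only the decline in radioisotope heat; the electrical output of a real RTG falls faster because its thermocouples also degrade over time.

```python
import math

# Rough sketch: fraction of the original plutonium-238 heat output remaining
# over time, using simple exponential decay with an assumed ~87.7-year half-life.
# Real RTG electrical power declines faster because thermocouples degrade too.

HALF_LIFE_YEARS = 87.7

def fraction_remaining(years: float) -> float:
    """Fraction of the original radioisotope (and its heat) left after `years`."""
    return math.exp(-math.log(2) * years / HALF_LIFE_YEARS)

for years in (10, 25, 50, 90):
    print(f"After {years:3d} years: {fraction_remaining(years):.0%} of the heat remains")
```

By this arithmetic, a spacecraft launched in 1977 would still be receiving roughly two-thirds of its original radioisotope heat today.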

Additionally, they have an excellent safety record, and they’re designed to survive their normal use and also be safe in the event of an accident.

RTGs in action

RTGs have been key to the success of many of NASA’s solar system and deep-space missions. The Mars Curiosity and Perseverance rovers and the New Horizons spacecraft that visited Pluto in 2015 have all used RTGs. New Horizons is traveling out of the solar system, where its RTGs will provide power where solar panels could not.

However, no missions capture the power of RTGs quite like the Voyager missions. NASA launched the twin spacecraft Voyager 1 and Voyager 2 in 1977 to take a tour of the outer solar system and then journey beyond it.

The RTGs on the Voyager probes have allowed the spacecraft to stay powered up while they collect data.
NASA/JPL-Caltech

Each craft was equipped with three RTGs, providing a total of 470 watts of power at launch. It has been almost 50 years since the launch of the Voyager probes, and both are still active science missions, collecting and sending data back to Earth.

Voyager 1 and Voyager 2 are about 15.5 billion miles and 13 billion miles (nearly 25 billion kilometers and 21 billion kilometers) from the Earth, respectively, making them the most distant human-made objects ever. Even at these extreme distances, their RTGs are still providing them consistent power.

These spacecraft are a testament to the ingenuity of the engineers who first designed RTGs in the early 1960s.

When humans use AI to earn patents, who is doing the inventing?

The advent of generative artificial intelligence has sent shock waves across industries, from the technical to the creative. AI systems that can generate viable computer code, write news stories and spin up professional-looking graphics have inspired countless headlines asking whether they will take away jobs in technology, journalism and design, among many other fields.

And these new ways of doing work and making things raise another question: In the era of AI, what does it mean to be an inventor?

Among technologists who build digital tools or programs, it is increasingly common to use AI as part of design and development processes. But as deep learning models flex their technical muscles more and more, even highly skilled researchers who are using AI in their work have begun to express concerns about becoming obsolete.

There is much debate about whether AI can augment human creativity, but emerging data suggests that the technology can boost research and development where creativity typically plays an important role. A recent study by MIT economics doctoral student Aidan Toner-Rodgers found that scientists using AI tools increased their patent filings by 39% and created 17% more prototypes than when they worked without such tools.

While this study indicates that AI seemed to help humans be more productive, it also showed there was a downside: 82% of the surveyed researchers felt less satisfied with their jobs since implementing AI in their workflows. “I couldn’t help feeling that much of my education is now worthless,” one researcher said.

This emerging dynamic leads to a related question: If a scientist uses AI in order to build something new, does the output still qualify as an invention? As a legal scholar who studies technology and intellectual property law, I see the growing power of AI shifting the legal landscape.

Natural persons

In 2020, the United States Patent and Trademark Office refused to list the AI system DABUS, which purportedly designed a food container and a flashing emergency beacon, as an inventor on patent applications. Subsequent court rulings clarified that under current U.S. law, only humans can be listed as inventors, but they left open the question of whether inventions developed by scientists with the help of AI qualify for patent protection.

The concept of inventorship and legal protections for inventions have deep roots in the U.S. The Constitution explicitly protects the “exclusive rights” of authors and inventors “to their respective writings and discoveries,” reflecting the framers’ strong conviction that the state should protect and encourage original ideas.

The first U.S. patent, granted in 1790 and signed by George Washington.
United States Patent and Trademark Office

U.S. law today defines an inventor as a natural person who has conceived of a complete and operative invention that can be used without extensive research or experimentation. An inventor must do more than follow routine instructions – they must make an intellectual contribution in producing something novel.

That contribution can be a key idea that sparks the invention or a crucial insight that turns the concept into a working product. If a person’s input is routine or just explains what’s already known, they are not an inventor.

Role of AI

To what extent can or should AI become part of the invention process? The release of AI applications such as ChatGPT in 2022 introduced the public to large language models and sparked renewed debate about whether and how AI should be used in the inventive process. That same year, the U.S. Court of Appeals for the Federal Circuit heard a case that tested whether AI could be named as an inventor on a patent application.

The court concluded that under U.S. law, inventors must be human beings. The ruling reaffirmed the idea that Congress intended to encourage human beings, not machines, to invent. This idea remains foundational to current patent policy.

In light of the court’s decision, in 2024 the United States Patent and Trademark Office updated its guidance to clarify the role of AI in the inventive process. The guidance reaffirms that an inventor must be human. However, the Patent and Trademark Office explained that the policy did not preclude inventors from using AI tools to assist in the research and development of inventions. This approach acknowledges how the rapid development of AI technologies has allowed researchers to make exciting breakthroughs.

Policymakers seem to understand that if the U.S. is to continue to lead the world in innovation, the mythology of a sole inventor toiling away in a garage and relying on pure intellect must evolve to account for the value of AI tools that research has proven make humans more productive.

Nevertheless, since only human beings can be named as inventors on a patent, current policy does not quite answer the question of who or what should get credit for doing the work. Despite a growing trend where researchers are expected to disclose whether they’ve used AI tools, for example in academic papers, the U.S. patent system makes no such demand.

Regardless of AI’s role in the research and development process, a U.S. patent will list only the names of human inventors so long as those humans made a significant contribution to the invention. As a result, current policy is not concerned with how to recognize the contributions of AI. AI is considered a tool like a microscope or a Bunsen burner.

Personal ingenuity in the age of AI

Given this shifting legal landscape, I see that U.S. innovation policy is at a crossroads. The Patent and Trademark Office’s guidance reaffirming human inventorship and simultaneously embracing AI as an innovation tool is only a year old. It is unclear how the Trump administration’s forthcoming action plan to “enhance America’s global AI dominance” will affect this guidance.

Some observers expect the rate of scientific discovery to increase dramatically with the assistance of AI tools. But if the majority of those same productive researchers enjoy their jobs less, is the act of inventing being encouraged as the framers envisioned?

Current U.S. policy attempts to strike a balance and recognize the concept of personal ingenuity, stemming from the principle that for an invention to be patented in the U.S., a human must have led the way. Yet the guidance also implicitly acknowledges that AI can lend a helping hand in modern research and development. Whether and how policymakers maintain this balance – and how leaders in industry and science respond – will help shape the next chapter of American innovation.

The push to restore semiconductor manufacturing faces a labor crisis – can the US train enough workers in time?

Semiconductors power nearly every aspect of modern life – cars, smartphones, medical devices and even national defense systems. These tiny but essential components make the information age possible, whether they’re supporting lifesaving hospital equipment or facilitating the latest advances in artificial intelligence.

It’s easy to take them for granted, until something goes wrong. That’s exactly what happened when the COVID-19 pandemic exposed major weaknesses in the global semiconductor supply chain. Suddenly, to name just one consequence, new vehicles couldn’t be finished because chips produced abroad weren’t being delivered. The semiconductor supply crunch disrupted entire industries and cost hundreds of billions of dollars.

The crisis underscored a hard reality: The U.S. depends heavily on foreign countries – including China, a geopolitical rival – to manufacture semiconductors. This isn’t just an economic concern; it’s widely recognized as a national security risk.

That’s why the U.S. government has taken steps to invest in semiconductor production through initiatives such as the CHIPS and Science Act, which aims to revitalize American manufacturing and was passed with bipartisan support in 2022. While President Donald Trump has criticized the CHIPS and Science Act recently, both he and his predecessor, Joe Biden, have touted their efforts to expand domestic chip manufacturing in recent years.

Yet, even with bipartisan support for new chip plants, a major challenge remains: Who will operate them?

Minding the workforce gap

The push to bring semiconductor manufacturing back to the U.S. faces a significant hurdle: a shortage of skilled workers. The semiconductor industry is expected to need 300,000 engineers by 2030 as new plants are built. Without a well-trained workforce, these efforts will fall short, and the U.S. will remain dependent on foreign suppliers.

This isn’t just a problem for the tech sector – it affects every industry that relies on semiconductors, from auto manufacturing to defense contractors. Virtually every military communication, monitoring and advanced weapon system relies on microchips. It’s not sustainable or safe for the U.S. to rely on foreign nations – especially adversaries – for the technology that powers its military.

For the U.S. to secure supply chains and maintain technological leadership, I believe it would be wise to invest in education and workforce development alongside manufacturing expansion.

Building the next generation of semiconductor engineers

Filling this labor gap will require a nationwide effort to train engineers and technicians in semiconductor research, design and fabrication. Engineering programs across the country are taking up this challenge by introducing specialized curricula that combine hands-on training with industry-focused coursework.

Clean rooms, a vital part of semiconductor factories, are also where the next generation of tech innovators conduct research. Here, a Ph.D. candidate is seen in an air shower room before entering a clean room at Tokyo University on May 1, 2024.
Yuichi Yamazaki/Getty Images

Future semiconductor workers will need expertise in chip design and microelectronics, materials science and process engineering, and advanced manufacturing and clean room operations. To meet this demand, it will be important for universities and colleges to work alongside industry leaders to ensure students graduate with the skills employers need. Offering hands-on experience in semiconductor fabrication, clean-room-based labs and advanced process design will be essential for preparing a workforce that’s ready to contribute from Day 1.

At the Missouri University of Science and Technology, where I am the chair of the materials science and engineering department, we’re launching a multidisciplinary bachelor’s degree in semiconductor engineering this fall. Other universities across the U.S. are also expanding their semiconductor engineering options amid strong demand from both industry and students.

A historic opportunity for economic growth

Rebuilding domestic semiconductor manufacturing isn’t just about national security – it’s an economic opportunity that could benefit millions of Americans. By expanding training programs and workforce pipelines, the U.S. can create tens of thousands of high-paying jobs, strengthening the economy and reducing reliance on foreign supply chains.

And the race to secure semiconductor supply chains isn’t just about stability – it’s about innovation. The U.S. has long been a global leader in semiconductor research and development, but recent supply chain disruptions have shown the risks of allowing manufacturing to move overseas.

If the U.S. wants to remain at the forefront of technological advancement in artificial intelligence, quantum computing and next-generation communication systems, it seems clear to me it will need new workers – not just new factories – to gain control of its semiconductor production.

Simple strategies can boost vaccination rates for adults over 65 – new study

Knowing which vaccines they should get, and hearing a clear recommendation from their health care provider about why a particular vaccine is important, strongly motivated older adults to get vaccinated. That’s a key finding in a recent study I co-authored in the journal Open Forum Infectious Diseases.

Adults over 65 have a higher risk of severe infections, but they receive routine vaccinations at lower rates than do other groups. My colleagues and I collaborated with six primary care clinics across the U.S. to test two approaches for increasing vaccination rates for older adults.

In all, 249 patients who were visiting their primary care providers participated in the study. Of these, 116 patients received a two-page vaccine discussion guide to read in the waiting room before their visit. Another 133 patients received invitations to attend a one-hour education session after their visit.

The guide, which we created for the study, was designed to help people start a conversation about vaccines with their providers. It included checkboxes for marking what made it hard for them to get vaccinated and which vaccines they wanted to know more about, as well as space to write down any questions they had. The guide also featured a chart listing recommended vaccines for older adults, with boxes where people could check off ones they had already received.

In the sessions, providers shared in-depth information about vaccines and vaccine-preventable diseases and facilitated a discussion to address vaccine hesitancy.

In a follow-up survey two months later, patients reported that the most significant barriers they faced were knowing when they should receive a particular vaccine, having concerns about side effects and securing transportation to a vaccination appointment.

The percentage of patients who said they wanted to get a vaccine increased from 68% to 79% after using the vaccine guide. Following each intervention, 80% of patients reported they discussed vaccines more in that visit than they had in prior visits.

Of the 14 health care providers who completed the follow-up survey, 57% reported increased vaccination rates following each approach. Half of the providers felt that the use of the vaccine guide was an effective strategy in guiding conversations with their patients.

A pamphlet at the doctor’s office can empower older patients to ask about vaccines.

Why it matters

Only about 15% of adults ages 60-64 and 26% of adults 65 and older are up to date on all the vaccines recommended for their age, according to CDC data from 2022. These include vaccines for COVID-19, influenza, tetanus, pneumococcal disease and shingles.

Yet studies consistently show that getting vaccinated reduces the risk of complications from these conditions in this age group.

My research shows that strategies that equip older adults with personalized information about vaccines empower them to start the conversation about vaccines with their clinicians and enable them to be active participants in their health care.

What’s next

In the future, we will explore whether engaging patients on this topic earlier is even more helpful than doing so in the waiting room before their visit.

This might involve having clinical team members or care coordinators connect with patients ahead of their visit, either by phone or through telemedicine that is designed specifically for older adults.

My research team plans to conduct a pilot study that tests this approach. We hope to learn whether reaching out to these patients before their clinic visits and helping them think through their vaccination status, which vaccines their provider recommends and what barriers they face in getting vaccinated will improve vaccination rates for this population.

The Research Brief is a short take on interesting academic work.

When algorithms take the field – inside MLB’s robo-umping experiment

Baseball fans tuning into spring training games may have noticed another new wrinkle in a sport that’s experienced a host of changes in recent years.

Batters, pitchers and catchers can challenge a home plate umpire’s ball or strike call. Powered by Hawk-Eye ball-tracking technology, the automated ball-strike system replays the pitch trajectory to determine whether the umpire’s call was correct.

To minimize disruptions, Major League Baseball permits each team a maximum of two failed challenges per game but allows unlimited challenges as long as they’re successful. For now, the technology will be limited to the spring exhibition games. But it could be implemented in the regular season as soon as 2026.

Count future Hall of Famer Max Scherzer among the skeptics.

“We’re humans,” the Toronto Blue Jays hurler said after a spring training game in which he challenged two calls and lost both to the robo umps. “Can we just be judged by humans?”

Technological advances that lead to fairer, more accurate calls are often seen as triumphs. But as co-editors of the recently published volume “Inventing for Sports,” which includes case studies of over 20 sports inventions, we find that new technology doesn’t mean perfect precision – nor does it necessarily lead to better competition from the fan perspective.

Cue the cameras

While playing in a cricket match in the 1990s, British computer scientist Paul Hawkins fumed over a bad call. He decided to make sure the same mistake wouldn’t happen again.

Drawing on his doctoral training in artificial intelligence, he designed an array of high-speed cameras to capture a ball’s flight path and velocity, and a software algorithm that used the data to predict the ball’s likely future path.
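As a toy illustration of that track-and-predict idea, here is a minimal sketch that fits a simple constant-acceleration (parabolic) model to a few observed ball heights and extrapolates it forward. The numbers are invented for illustration; Hawk-Eye’s actual multi-camera models and calibration are far more sophisticated.

```python
import numpy as np

# Toy version of trajectory prediction: fit a parabola (constant-acceleration
# model) to tracked ball heights, then extrapolate the likely future path.
# Observation values are made up for illustration.

t_obs = np.array([0.00, 0.02, 0.04, 0.06, 0.08])    # seconds since first frame
z_obs = np.array([1.00, 1.03, 1.04, 1.03, 1.00])    # ball height in meters

coeffs = np.polyfit(t_obs, z_obs, deg=2)             # least-squares parabola fit
t_future = 0.15
z_pred = np.polyval(coeffs, t_future)

print(f"Predicted height at t = {t_future:.2f} s: {z_pred:.2f} m")
```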

He founded Hawk-Eye Innovations Ltd. in 2001, and his first clients were cricket broadcasters who used the technology’s trajectory graphics to enhance their telecasts.

By 2006, professional tennis leagues began deploying Hawk-Eye to help officials adjudicate line calls. Cricket leagues followed in 2009, incorporating it to help umpires make what are known as “leg before wicket” calls, among others. And professional soccer leagues started using the technology in 2012 to determine whether balls cross the goal line.

A technician uses the Hawk-Eye system as part of a broadcast trial for the technology during the 2005 Masters Tennis tournament in London.
Julian Finney/Getty Images

Reaction to Hawk-Eye has been mixed. In tennis, players, fans and broadcasters have generally embraced the technology. During a challenge, spectators often clap rhythmically in anticipation as the Hawk-Eye official cues up the replayed trajectory.

“As a player, and now as a TV commentator,” tennis legend Pam Shriver said in 2006, “I dreamed of the day when technology would take the accuracy of line calling to the next level. That day has now arrived.”

But Hawk-Eye isn’t perfect. In 2020 and 2022, the firm publicly apologized to fans of professional soccer clubs after its goal-line technology made errant calls after players congregated in the goal box and obstructed key camera sight lines.

Perfection isn’t possible

Critics have also raised more fundamental concerns.

In their 2016 book “Bad Call,” researchers Harry Collins, Robert Evans and Christopher Higgins reminded readers that Hawk-Eye is not a replay of the ball’s actual position; rather, it produces a prediction of a trajectory, based on the ball’s prior velocity, rotation and position.

The authors lament that Hawk-Eye and what they term “decision aids” have undermined the authority of referees and umpires, which they consider bad for the games.

Ultimately, there are no purely objective standards for fairness and accuracy in technological officiating. They are always negotiated. Even the most precise officiating innovations require human consensus to define and validate their role. Technologies like photo-finish cameras, instant replay and ball-tracking systems have improved the precision of officiating, but their deployment is shaped – and often limited – by human judgment and institutional decisions.

For example, today’s best race timing systems are accurate to 0.0001 seconds, yet Olympic sports such as swimming, track and field, and alpine skiing report results in increments of only 0.01 seconds. This can lead to situations – such as Dominique Gisin and Tina Maze’s gold medal tie in the women’s downhill ski race at the 2014 Sochi Olympics – in which the timing officials admitted that their equipment could have revealed the actual winner. But they were forced to report a dead heat under the rules established by the ski federation.

With slow-motion instant replay, judgments such as whether a player made a catch or intended to commit a personal foul can actually be distorted by the low-speed footage, since humans aren’t adept at adjusting to shifting replay speeds.

One of the big issues with baseball’s automated ball-strike system has to do with the strike zone itself.

MLB’s rule book defines the strike zone as the depth and width of home plate and the vertical distance from the midpoint of a player’s torso to a point just below his knees. The interpretation of the strike zone is notoriously subjective and varies with each umpire. For example, human umpires often call a strike if the ball crosses the plate in the rear corner. However, the automated ball-strike system uses an imaginary plane that bisects the middle – not the front or the rear – of home plate.

There are more complications. Since every player has a unique height, each has a unique strike zone. At the outset of spring training, each player’s height was measured – standing up without cleats – and then confirmed through a biomechanical analysis.
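As a rough illustration of the kind of geometry an automated system has to encode, here is a minimal sketch of a simplified ball-strike check. The height-based zone proportions are assumptions for illustration, not MLB’s actual parameters, and the real system’s measurement pipeline is far more involved.

```python
# Simplified sketch of an automated ball-strike call.
# Assumptions (for illustration, not MLB's actual parameters):
#   - the call is made on a single vertical plane bisecting home plate,
#   - horizontal limits come from the 17-inch plate width,
#   - vertical limits are rough fractions of the batter's height.

PLATE_HALF_WIDTH_FT = (17 / 12) / 2  # home plate is 17 inches wide

def strike_zone_for(height_ft: float) -> tuple:
    """Return (bottom, top) of an assumed zone as fractions of batter height."""
    return 0.27 * height_ft, 0.53 * height_ft

def is_strike(x_ft: float, z_ft: float, batter_height_ft: float) -> bool:
    """x_ft: horizontal offset from plate center; z_ft: height above the ground,
    both measured where the pitch crosses the mid-plate plane."""
    bottom, top = strike_zone_for(batter_height_ft)
    return abs(x_ft) <= PLATE_HALF_WIDTH_FT and bottom <= z_ft <= top

# Example: a pitch 0.5 feet off center, 2.5 feet high, to a 6-foot batter.
print(is_strike(0.5, 2.5, 6.0))   # True under these assumed parameters
```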

Eddie Gaedel, the shortest player in major league baseball history, had a much smaller strike zone than his peers. He drew a walk in his only at-bat.
Bettmann/Getty Images

But what if a player changes their batting stance and decides to crouch? What if they change their cleats and raise their strike zone by an extra quarter-inch?

Of course, as has been the case in tennis, soccer and other sports, Hawk-Eye can help rectify genuinely bad calls. By allowing teams to correct the most disputed calls without eliminating the human element of umpiring, MLB hopes to strike a balance between tradition and change.

Fans have the final say

Finding a balance between machine precision and the human element of baseball is crucial.

Players’ and managers’ efforts to work the umpires to contract or expand the strike zone have long been a part of the game. And fans eagerly cheer or jeer players and managers who argue with the umpires. When ejections take place, more yelling and taunting ensues.

Though often unacknowledged in negotiations between leagues and athletes, fan enthusiasm is a key component of whether to adopt new technology.

For example, innovative “full-body” swimsuits contributed to a wave of record-breaking finishes in the sport between 2000 and 2009. But uneven access to the newest gear raised the specter of what some called “technological doping.” World Aquatics worried that as records fell simply due to equipment innovations, spectators would stop watching and broadcast and sponsorship revenue would dry up. The swimming federation ended up banning full-body swimsuits.

When managers argue balls and strikes, it can make for great TV.

Of course, algorithmic officiating differs from technologies that enhance performance and speed. But it runs a similar risk of turning off fans. So MLB, like other sports leagues, is being thrust into the role of managing technological change.

Assessing technologies for their immediate and long-term impact is difficult enough for large government agencies. Sports leagues lack those resources, yet are nonetheless being forced to carefully consider how they introduce and regulate various innovations.

MLB, to its credit, is proceeding incrementally. While the logical conclusion to the current automated ball-strike experiment would be fully electronic officiating, we think fans and players will resist going that far.

The league’s challenge system is a test. But the real umpires will ultimately be the fans.

Big cuts at the Education Department’s civil rights office will affect vulnerable students for years to come

The U.S. Department of Education cut its workforce by nearly 50% on March 11, 2025, when it laid off about 1,315 employees. The move follows several recent directives targeting the Cabinet-level agency.

Within the department, the Office for Civil Rights – which already experienced layoffs in February – was especially hard hit by cuts.

The details remain unclear, but reports suggest that staffs at six of the 12 regional OCR offices were laid off. Because of the office’s role in enforcing civil rights laws in schools and universities, the cuts will affect students across the country.

As education policy scholars who study how laws and policies shape educational inequities, we believe the Office for Civil Rights has played an important role in facilitating equitable education for all students.

The latest cuts further compound funding and staffing shortages that have plagued the office. The full effects of these changes on the most vulnerable public school students will likely be felt for many years.

Few staff members

The Education Department, already the smallest Cabinet-level agency before the recent layoffs, distributed roughly US$242 billion to students, K-12 schools and universities in the 2024 fiscal year.

About $160 billion of that money went to student aid for higher education. The department’s discretionary budget was just under $80 billion, a sliver compared with other agencies.

By comparison, the Department of Health and Human Services received nearly $2.9 trillion in fiscal year 2024.

Within the Education Department, the Office for Civil Rights had a $140 million budget for fiscal year 2024, less than 0.2% of discretionary funding, which requires annual congressional approval.

It has lacked the financial support to effectively carry out its duties. For example, even as complaints filed by students and their families have piled up, the OCR has not had an increase in staff. That leaves thousands of complaints unresolved.

The office’s appropriated budget in fiscal year 2017 was one-third of the budget of the Equal Employment Opportunity Commission – a federal agency responsible for civil rights protection in the workplace – despite the high number of discrimination complaints that OCR handles.

Support for OCR

Despite this underfunding, the office has traditionally received bipartisan support.

Former Secretary of Education Betsy DeVos, for example, requested a funding decrease for the office during the first Trump administration. Congress, however, overrode her budget request and increased appropriations.

Likewise, regardless of changing administrations, the office’s budget has remained fairly unchanged since 2001.

It garners attention for investigating and resolving discrimination-related complaints in K-12 and higher education. And while administrations have had different priorities in how to investigate these complaints, the office has remained an important resource for students for decades.

But a key function that often goes unnoticed is its collection and release of data through the Civil Rights Data Collection.

The CRDC is a national database that collects information on various indicators of student access and barriers to educational opportunity. Historically, only 5% of the OCR’s budget appropriations has been allocated for the CRDC.

Yet, there are concerns among academic scholars that the continued collection and dissemination of the CRDC might be affected by staff cuts and contract cancellations worth $900 million at the Department of Education’s research arm, the Institute of Education Sciences.

That’s because the CRDC often relies on data infrastructure that is shared with the institute.

The history of the CRDC

The CRDC originated in the late 1960s as required by the Civil Rights Act of 1964. The data questionnaire, which poses questions about civil rights concerns, is usually administered to U.S. public school districts every two years.

It provides indicators on student experiences in public preschools and K-12 schools. That includes participation rates in curricular opportunities like Advanced Placement courses and extracurricular activities. It also provides data on 504 plans for students with disabilities and English-learner instruction.

Although some questions have changed over the years, others have been consistent for 50 years to allow for examining changes over time. Some examples are counts of students disciplined by schools’ use of corporal punishment or out-of-school suspension.

The U.S. Department of Education building is seen in Washington on Dec. 3, 2024.
AP Photo/Jose Luis Magana

During the Obama administration, the Office for Civil Rights prioritized making the CRDC more accessible to the public. The administration created a website that allows the public to view information for particular schools or districts, or to download data to analyze.

Why the CRDC matters

Our research focuses on how the CRDC has been used and how it could be improved. In an ongoing research project, we identified 221 peer-reviewed publications that have analyzed the CRDC.

Articles focusing on school discipline – out-of-school suspensions, for example – are the most common. But there are many other topics that would be difficult to study without the CRDC.

That’s especially true when making comparisons between districts and states, such as whether students have access to advanced coursework or participation in gifted and talented programs.

The data has also inspired policy changes.

The Obama administration, informed by the data on the use of seclusion and restraint to discipline students, issued a policy guidance document in 2016 regarding its overuse for students with disabilities.

Additionally, the data helps examine the effects of judicial decisions and laws – desegregation laws in the South, for example – that have improved educational opportunities for many vulnerable students.

Amid the Education Department’s continued cancellation of contracts of federally funded equity assistance centers, we believe research partnerships with policymakers and practitioners drawing on CRDC data will be more important than ever.

Why parents of ‘twice-exceptional’ children choose homeschooling over public school

Homeschooling has exploded in popularity in recent years, particularly since the pandemic. But researchers are still exploring why parents choose to homeschool their children.

While the decision to homeschool is often associated with religion, a 2023 survey found that the two top reasons people cited as most important were a concern about the school environment, such as safety and drugs, and a dissatisfaction with academic instruction.

I studied giftedness, creativity and talent as part of my Ph.D. program focusing on students who are “twice exceptional” – that is, they have both learning challenges, such as autism or attention-deficit/hyperactivity disorder, and advanced skills. A better understanding of why parents choose homeschooling can help identify ways to improve the public education system. I believe focusing on twice-exceptional students can offer insights beyond this subset of the homeschooled population.

What we know about homeschooling

The truth is researchers don’t know much about homeschooling and homeschoolers.

One problem is regulations involving homeschooling differ dramatically among states, so it is often hard to determine who is being instructed at home. And many families are unwilling to talk about their experiences homeschooling and their reasons for doing so.

But here’s what we do know.

The share of children being homeschooled has surged since 2020, rising from 3.7% in the 2018-2019 school year to 5.2% in 2022-2023 – the latest data available from the National Center for Education Statistics. Over 3 million students were homeschooled in 2021-22, according to the National Home Education Research Institute.

And the population of homeschoolers is becoming increasingly diverse, with about half of families reporting as nonwhite in a 2023 Washington Post-Schar School poll. In addition, homeschooling families are just as likely to be Democrat as Republican, according to that same Post-Schar survey, a sharp shift from previous surveys that suggested Republicans were much more likely to homeschool.

As for why parents homeschool, 28% of those surveyed in 2023 by the Institute of Education Sciences said the school environment was their biggest reason, followed by 17% that cited concerns about academic instruction. Another 17% said providing their kids with moral or religious instruction was most important.

But not far behind at 12% was a group of parents who prioritized homeschooling for a different reason: They have a child with physical or mental health problems or other special needs.

This group would include parents of twice-exceptional children, who may be especially interested in pursuing homeschooling as an alternative method of education for three reasons in particular.

Some families have devoted significant resources, such as by creating home libraries, to homeschool their children.
AP Photo/Charles Krupa

1. The ‘masking’ problem

These parents may notice that their child’s needs are being overlooked in the public education system and may view homeschooling as a way to provide better individualized instruction.

Students who are twice exceptional often experience what researchers call the “masking” phenomenon. This can occur when a child’s disabilities hide their giftedness. When this occurs, teachers tend to provide academic support but hesitate to give these children the challenging material they may require.

Masking can also occur in reverse, when a student’s gifts tend to hide disabilities. In these cases, teachers provide challenging material, but they do not provide the needed accommodations that allow the gifted child to access the materials. Either way, masking can be a problem for students and parents who must advocate for teachers to address their unique range of academic needs.

While either type of masking is challenging for the student, it may be particularly frustrating for parents of twice-exceptional students to watch classroom teachers focus only on their child’s weaknesses rather than helping them develop their advanced abilities.

2. Individualized instruction

By the time a child enters school, parents have spent years observing their child’s development, comparing their progress with that of others their age. They’re also likely to be aware of their child’s unique interests.

While this may not be true for all parents, those who choose to homeschool may do so because they feel they have more of an ability and interest in catering to their child’s unique needs than a classroom teacher who is tasked with teaching many students simultaneously. Parents of students who demonstrate exceptional ability have expressed concerns about their child’s future educational opportunities in a public school setting.

Additionally, parents may become exhausted by their efforts to advocate for their child’s unique needs in the school system. Parents of students who demonstrate advanced abilities often pull their children out of public school after repeated efforts to improve communication between home and school.

3. Behavioral and emotional needs

Gifted students who have emotional or behavioral disabilities may find it difficult to demonstrate their abilities in the classroom.

All too often, teachers may be more focused on disciplining these students rather than addressing their academic needs. For example, a child who is bored with the class material may be loud and attempt to distract others as well.

Rather than recognizing this as signaling a need for more advanced material, the teacher might send the child to a separate area in the classroom or in the school to refocus or as punishment. Parents may feel better equipped than teachers to address both their child’s challenging behaviors and their gifted abilities, given the knowledge they have about their child’s history, interests, strengths and areas needing improvement.

Supporting students’ needs

Gaining a better understanding of the motivations driving parents to take their children out of the public school system is an important step toward improving schools so that fewer will feel the need to take this path.

Additionally, strengthening educators’ and policymakers’ understanding about twice-exceptional homeschooled students may help communities provide more support to their families – who then may not feel homeschooling is the only or best option. My research shows that many schools can do a better job providing these types of students and their parents with the support they need to thrive.

What food did the real St Patrick eat? Less corned beef and cabbage, more oats and stinky cheese

Every St Patrick’s Day, thousands of Americans eat corned beef and cabbage as a way of connecting to Ireland. But this association sits uncomfortably with many Irish people.

That’s because the dish, while popular in the past, has nothing to do with St Patrick himself. St Patrick (also known as Patricius or Pádraig) was born in Roman Britain in the 5th century. He is the patron saint of Ireland and in later biographies, legend and folklore, he is depicted as almost single-handedly converting the Irish to Christianity, and breaking the power of the druids.

The entangled mix of history, myth and folklore that has been attached to the saint makes it difficult to isolate historical fact from hagiographical and folklore embellishments. So what, if anything, do the celebratory foods of today have to do with the real St Patrick? And would he have eaten any of those same foods himself?


The real St Patrick

The little we know about the real Patrick comes from two, probably 5th-century, short Latin texts written by the saint himself. Those are the Confessio, which is believed to be Patrick’s autobiography, and the Epistola, a letter of excommunication to the soldiers of a British king Coroticus, after they killed and enslaved some of his converts.

A St Patrick’s Day greeting card from 1909.
Missouri History Museum

In these texts, food is only mentioned in the context of hunger and the miraculous appearance of pigs that are slaughtered to sustain starving travellers.

Other important biographies of St Patrick were written in the 7th century and sometime between the 9th and 12th centuries. The two 7th-century Latin texts were written by churchmen, Muirchú and Tírechán. The author of the later biography, The Tripartite Life of Saint Patrick, is not known, but it was written partly in Latin and partly in Irish. These hagiographies (writing on the lives of saints) were works in legend-building with little connection to the real Patrick.

They do, however, give us a glimpse of the food culture of early medieval Ireland, when Patrick lived. They make references to dairy produce, salmon, bread, honey and meats, including beef, goat and a “ram for a king’s feast”.

Herb gardens are discussed alongside details of the cooking culture, with mention of copper cauldrons, kitchens and cooking women. Grain and dairy foods would have been most common, with white meats abundant in summer, and grain – especially oats – associated with the winter and early spring.

It is these foods, along with cultivated cabbage and onion-type vegetables and wild greens and fruit, that most likely would have sustained Patrick.

Delicious miracles

Food is frequently the subject of Saint Patrick’s miracles. As a child, he is said to have turned snow into butter and curds. On his missionary work, he was said to have changed water to honey, and cheese into stone and back to cheese again. In another miracle, he turned rushes into chives to satisfy a pregnant woman’s craving.

The bountiful fish stocks of certain rivers are also attributed to the saint’s blessing. One such example is the River Bann in Northern Ireland which was known for its salmon.

The food in Patrick’s world had a defined Irish signature. There is an emphasis in the hagiographies on a range of fresh, cultured and preserved dairy produce and the use of byproducts such as whey-water.

Corned beef and cabbage has become a popular St Patrick’s Day meal, but bears little connection to the real Patrick.
Brent Hofacker/Shutterstock

The extensive and later abandoned Irish cheese-making tradition is referenced in mention of curds and fáiscre grotha (pressed curds). The differentiation between new milk and milk may indicate a skills-based culture of working with dairy in the preparation of a family of thickened, soured and fermented milks. The associated communities, of which Patrick would have been part, probably had a taste for highly flavoured and cultured milk and cheese products.

These foods are typical of a self-sufficient agrarian economy, producing food that was suited to Irish soil and climatic conditions including wild and managed woodland, coastline and farmland. It is this vision of an untouched Ireland that continues to inspire Irish food culture today.

Four small planets discovered around one of the closest stars to Earth – an expert explains what we know

Barnard’s Star is a small, dim star, of the type that astronomers call red dwarfs. Consequently, even though it is one of the closest stars to Earth, such that its light takes only six years to get here, it is too faint to be seen with the naked eye. Now, four small planets have been found orbiting the star. Teams in America and Europe achieved this challenging detection by exploiting precision instruments on the world’s largest telescopes.

Diminutive Barnard’s Star is closer in size to Jupiter than to the Sun. Only the three stars that make up the Alpha Centauri system lie closer to us.

The planets newly discovered around Barnard’s Star are much too faint to be seen directly, so how were they found? The answer lies in the effect of their gravity on the star. The mutual gravitational attraction keeps the planets in their orbits, but also tugs on the star, moving it in a rhythmic dance that can be detected by sensitive spectrograph instruments. Spectrographs split up the star’s light into its component wavelengths. They can be used to measure the star’s motion.
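To see why this demands such precision, here is a minimal sketch estimating the reflex velocity a small, close-in planet induces on a low-mass star, using the standard radial-velocity formula for a circular, edge-on orbit. The stellar and planetary values are illustrative assumptions (Barnard’s Star is commonly estimated at roughly 0.16 solar masses), not parameters from the discovery papers.

```python
import math

# Rough radial-velocity semi-amplitude K for a circular, edge-on orbit:
#   K ≈ (2 * pi * G / P)^(1/3) * m_planet / M_star^(2/3)
# All input values below are assumed, illustrative numbers.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_EARTH = 5.972e24       # kg

star_mass = 0.16 * M_SUN          # rough mass of a red dwarf like Barnard's Star
planet_mass = 0.3 * M_EARTH       # a sub-Earth-mass planet (assumed)
period_s = 3.0 * 86400            # a three-day orbit (assumed)

k = (2 * math.pi * G / period_s) ** (1 / 3) * planet_mass / star_mass ** (2 / 3)
print(f"Stellar wobble: about {k:.2f} metres per second")
```

A wobble of well under a metre per second – slower than walking pace – is the kind of signal these spectrographs have to pick out of the starlight.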

A significant challenge for detection, however, is the star’s own behaviour. Stars are fluid, with the nuclear furnace at their core driving churning motions that generate a magnetic field (just as the churning of Earth’s molten core produces Earth’s magnetic field). The surfaces of red dwarf stars are rife with magnetic storms. This activity can mimic the signature of a planet when there isn’t one there.

The Maroon-X instrument installed at the Gemini North telescope.
International Gemini Observatory/NOIRLab/NSF/AURA/J. Bean, Author provided (no reuse)

The task of finding planets by this method starts with building highly sensitive spectrograph instruments. They are mounted on telescopes large enough to capture sufficient light from the star. The light is then sent to the spectrograph which records the data. The astronomers then observe a star over months or years. After carefully calibrating the resulting data, and accounting for stellar magnetic activity, one can then scrutinise the data for the tiny signals that reveal orbiting planets.

In 2024, a team led by Jonay González Hernández from the Canary Islands Astrophysics Institute reported on four years of monitoring of Barnard’s Star with the Espresso spectrograph on the European Southern Observatory’s Very Large Telescope in Chile. They found one definite planet and reported tentative signals that indicated three more planets.

Now a team led by Ritvik Basant from the University of Chicago, in a paper just published in the Astrophysical Journal Letters, has added three years of monitoring with the Maroon-X instrument on the Gemini North telescope. Analysing their data confirmed the existence of three of the four planets, while combining both datasets showed that all four planets are real.

Often in science, when detections push the limits of current capabilities, one needs to ponder the reliability of the findings. Are there spurious instrumental effects that the teams haven’t accounted for? Hence it is reassuring when independent teams, using different telescopes, instruments and computer codes, arrive at the same conclusions.

The Gemini North telescope is located on Maunakea in Hawaii.
MarkoBeg / Shutterstock

The planets form a tightly packed, close-in system, having short orbital periods of between two and seven Earth days (for comparison, our Sun’s closest planet, Mercury, orbits in 88 days). It is likely they all have masses less than Earth’s. They’re probably rocky planets, with bare-rock surfaces blasted by their star’s radiation. They’ll be too hot to hold liquid water, and any atmosphere is likely to have been stripped away.
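For a sense of just how close in these orbits are, here is a minimal sketch applying Kepler’s third law, again assuming roughly 0.16 solar masses for Barnard’s Star; the resulting distances are illustrative estimates rather than figures from the discovery papers.

```python
import math

# Kepler's third law for a circular orbit: a^3 = G * M_star * P^2 / (4 * pi^2)
# Assumes ~0.16 solar masses for Barnard's Star (a commonly quoted estimate).

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # metres (Earth-Sun distance)

star_mass = 0.16 * M_SUN

def semi_major_axis_au(period_days: float) -> float:
    period_s = period_days * 86400
    a_metres = (G * star_mass * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    return a_metres / AU

for days in (2, 7):
    print(f"{days}-day orbit: about {semi_major_axis_au(days):.3f} AU from the star")
```

Both distances come out at a few hundredths of the Earth-Sun distance, far closer in than Mercury is to the Sun.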

The teams looked for longer-period planets, further out in the star’s habitable zone, but didn’t find any. We don’t know much else about the new planets, such as their estimated sizes. The best way of figuring that out would be to watch for transits, when planets pass in front of their star, and then measure how much starlight they block. But the Barnard’s Star planets are not orientated in such a way that we see them “edge on” from our perspective. This means that the planets don’t transit, making them harder to study.

Nevertheless, the Barnard’s Star planets tell us about planetary formation. They’ll have formed in a protoplanetary disk of material that swirled around the star when it was young. Particles of dust will have stuck together, and gradually built up into rocks that aggregated into planets. Red dwarfs are the most common type of star, and most of them seem to have planets. Whenever we have sufficient observations of such stars we find planets, so there are likely to be far more planets in our galaxy than there are stars.

Most of the planets that have been discovered are close to their star, well inside the habitable zone (where liquid water could survive on the planet’s surface), but that’s largely because their proximity makes them much easier to find. Being closer in means that their gravitational tug is bigger, and it means that they have shorter orbital periods (so we don’t have to monitor the star for as long). It also increases their likelihood of transiting, and thus of being found in transit surveys.

The European Space Agency’s Plato mission, to be launched in 2026, is designed to find planets further from their stars. This should produce many more planets in their habitable zones, and should begin to tell us whether our own solar system, which has no close-in planets, is unusual.