
The best boards meet often. Thomas Barwick/DigitalVision via Getty Images

The people who serve on a nonprofit’s board of directors are legally responsible for its performance. Despite their importance, board members are rarely in the news. When they do make headlines, it’s often because they messed up.

Perhaps the most spectacular example is what happened to Donald Trump’s now-defunct charity. While he was a sitting president, Trump was forced to dissolve his foundation and pay US$2 million to other causes after New York state authorities found that the Trump Foundation had violated numerous state and federal laws.

Among other lapses, his foundation inappropriately coordinated with his political campaign and engaged in self-dealing – using charitable money for his own personal benefit. In addition, state authorities determined that the foundation’s board members had failed to fulfill their duties.

In fact, that board allegedly hadn’t even held a meeting for two decades.

Fortunately, such cases are rare. But as a nonprofit management professor, I find that extreme tales of board failure can help illustrate what boards are actually supposed to do and why it’s so important to get it right. Public trust in charities is at stake.

Doing more than the minimum

To perform their jobs at a minimal level, boards of directors have to meet legal requirements, such as convening at least once a year and supervising an organization’s top leader.

But board members must do more than that if they are to meet the expectations of the donors, volunteers, staff and other stakeholders of the nonprofit they oversee. Let’s call these the “necessary” versus the “legal” obligations. While nonprofits’ tax-exempt status requires them to show the public they perform some community benefit, stakeholders who are supporting the organization may demand more.

Fortunately, board members can turn to organizations like the Council of Nonprofits and other sources trusted by experts for excellent guidance. Let’s take these expectations one by one.

The basics

Most states require nonprofit boards to have at least one to three members. Experts believe that groups make better decisions than individuals acting alone.

So donors are wise to insist that any charity they fund have more board members than the minimum for better oversight. Larger boards – but not too large – perform better. Most have somewhere between eight and 14 members; newer organizations may have fewer.

Nonprofit boards that do more than the law requires are more likely to succeed.
Beth Gazley and Colin Kulpa, CC BY-NC-SA

There are an estimated 1.5 million registered nonprofits in the U.S., with staffs that may range from a single unpaid founder to thousands of employees. These groups carry out a dizzying array of missions, ranging from community health care to boosting support for national parks.

Because of that diversity, experts will never agree on a single job description for nonprofit board members. Nor would they agree on a single recipe for who should sit on a board, although experts think nonprofits should pay more attention to diversity and representation.

Nonprofit boards typically recruit people who can represent the people served and who bring a range of skills and expertise in such areas as finance, communications and management, along with a connection to the organization’s mission. Most nonprofits also expect board members to make a meaningful financial contribution to the organization themselves.

Although it’s legal for nonprofits to pay board members, most are volunteers.

Care, loyalty and obedience

Legal expectations of boards come from both the states and the federal government. For the most part, a board’s legal responsibilities fall into three categories: a duty of care, a duty of loyalty and a duty of obedience.

Care means board members must meet regularly enough and provide enough oversight to ensure that a nonprofit’s staff, budget and other resources are furthering the mission, rather than being squandered or diverted into personal expenditures.

Loyalty means they must act in the organization’s best interest, rather than their own, avoiding conflicts of interest.

Obedience has to do with ensuring that the group follows all applicable laws and regulations while acting in accordance with its own policies and mission.

Those three obligations add up to what’s known as a board’s fiduciary duties.

These duties encompass most of what boards do: approve budgets and expenditures; ensure that audits are conducted; hire the nonprofit’s chief executive and set that person’s compensation; and ensure that required public reporting happens, such as submitting a 990 information return to the Internal Revenue Service every year.

Nonprofit board members have many legal obligations.

Nobody’s business

Unlike a business, a nonprofit has no owners.

Instead, nonprofits essentially belong to themselves. Since they are mostly tax-exempt, they operate under the distant supervision of public officials such as a state’s attorney general. The board acts as agent of the state to ensure the public trust is not broken.

That’s why board members are often called “trustees.”

And on the rare occasions when that trust is broken, state officials will exercise their authority to step in, as they did with the Trump Foundation.

States set minimal standards for what boards need to do. Often, minimal compliance with those regulations does not suffice for an organization to thrive. For example, most states require boards to meet at least once every year.

Yet most boards meet five to eight times per year, since nonprofit experts agree that multiple meetings are needed to keep board members sufficiently informed and engaged. Additionally, regulators don’t require a conflict-of-interest policy, but stakeholders would be wise to insist on one.

Board cultures

A board’s structure – how big it is and how often it meets – is fairly easy to observe and measure. But how a board does its work matters at least as much as what it does.

I’ve identified three kinds of cultures that help a board stay focused on what matters: a culture of learning, a culture of assessment and a strategic culture.

First, boards have to be willing to learn how to govern well, for example by seeking training. One common practice is an orientation for new members.

Members also need to assess not only the organization’s financial health but the health of the board itself. Although a board self-assessment is a recommended practice, since it ensures board members understand their job, it is practiced by only 4 out of 10 nonprofits.

Finally, boards need to devote sufficient time to planning for the nonprofit’s future. Strategic boards that do this may in turn support organizations that are more resilient – such as those that can withstand crises like the COVID-19 pandemic.

As for who can serve on a board, the honest answer is that just about any adult can. I would encourage anyone who is passionate and knowledgeable about a cause to look for leadership opportunities on a nonprofit board. If you have children enrolled at a school, how about becoming a member of its PTA board? If you enjoy shopping for fresh produce, perhaps you can join a board that manages your local farmers market.

Just be ready for the legal responsibility of being a trustee.

Beth Gazley receives funding from no organizations referenced in this article.

Clowns in American circuses were once considered a form of adult entertainment. ArtMarie/E+ via Getty Images

The scary clown has become a horror staple.

Featuring Art the Clown as the main villain, Damien Leone’s new film “Terrifier 2” is so gruesome that there are reports of viewers vomiting and passing out in the theater. And every Halloween, you’ll see vicious clowns stalking haunted house attractions or trick-or-treaters dressed as Pennywise, the evil clown from Stephen King’s “It.”

It can be hard to imagine a time when clowns were regularly invited to children’s birthday parties and hospital wards – not to terrorize, but to delight and entertain. For much of the 20th century, this was the standard role of the clown.

However, clowns have always had a dark side. Before the 20th century, clowns in American circuses were largely considered a form of adult entertainment.

In my own research on the history of the 19th-century circus, I spend a lot of time in archives where I regularly come across vintage photos of clowns.

Now, I don’t consider myself afraid of clowns. In fact, I always try to remind folks that today’s clowns are serious artists with an enormous amount of training in their craft. But even I have to admit that the clowns I come across from old circuses give me the heebie-jeebies.

Drunken, lewd clowns in drag

For most of the 19th century, circuses were relatively small, one-ring events where audiences could hear performers speak.

These shows were rowdy affairs in which audiences felt free to yell, boo and hiss at performers. Typically, clowns would engage in banter with the stoic ringmaster, who was often the target for the clowns’ pranks. Borrowing comedic traditions from the blackface minstrel show, circus clowns used puns, non sequiturs and exaggerated burlesque humor.

One very popular clown act, which Mark Twain depicted in “The Adventures of Huckleberry Finn,” involved a performer disguised as a drunken circus patron who shocked the audience by entering the ring and clumsily attempting to ride one of the show’s horses before dramatically revealing himself to be part of the show. Famous 19th-century clown Dan Rice was known for including local gossip and political commentary in his performances and impersonating prominent figures in each town he visited.

The jokes they told were often misogynistic and full of sexual double-entendres, which wasn’t a problem because circus audiences at this time were mostly adult and male. Back then, circuses were a stigmatized form of entertainment in the U.S., considered disreputable for their association with gambling, grift, scantily clad female performers, profanity and alcohol. Church leaders regularly warned their congregations not to attend the circus. Some states even had laws banning circuses altogether.

Clowns in the 19th century were often sinister, vulgar characters.
Library of Congress

Clowns played a part in the circus’ seedy reputation.

Showman P.T. Barnum noted that part of the appeal of the circuses “consisted of the clown’s vulgar jests, emphasized with still more vulgar and suggestive gestures.” Clowns also subverted gender norms, with many appearing in drag, often exaggerating the female figure with cartoonishly big fake breasts.

In the early 19th century, some circuses also featured a separate tent that contained a “cooch show.” Male patrons were invited, for a fee, to watch women dance and strip.

Circus historian Janet Davis notes that some of these performances included clowns in drag “playing gender-bending pranks on dumbfounded men who expected to see nude women.” In a shocking revelation, Davis also notes that at some cooch show performances, gay clowns had sexual encounters with male audience members “during and after anonymously crowded scenes.”

These clowns, suffice it to say, weren’t for kids.

Clowns clean up their act

It wasn’t really until the 1880s and 1890s, when entertainment impresarios like Barnum made efforts to “clean up” the circus to draw in a larger audience, that clowns truly became associated with children.

After circuses started traveling by railroad, they could carry more equipment, allowing them to expand from one ring to three. Audiences could no longer hear performers, so the clown became a pantomime comedian, eliminating any potentially vulgar or suggestive language.

Circus owners, aiming to make as much money as possible, tried to court a broader audience, including women and children. That necessitated the removal of any scandalous acts and strict monitoring of their employees’ behavior.

At the directive of P.T. Barnum, clowns became palatable to families with young kids.
Bettmann/Getty Images

The shows with the most staying power, like Barnum & Bailey’s Greatest Show on Earth, were known as “Sunday school” shows, free of any objectionable content. They successfully portrayed themselves as the purveyors of good, clean fun.

Clowns played a role in this transformation. With now-silent acts focused on physical comedy, their performances were easy for children to understand. Clowns remained tricksters, but their slapstick comedy was seen as all in good fun.

This had a lasting effect. Clowns entertained families at the circus, and, as entertainment moved to film and television, child-friendly clowns followed there too. Clowns became staples of children’s entertainment in the 20th century. A popular television program featuring Bozo the Clown ran for 40 years, from 1960 to 2001. Beginning in the 1980s, clowns became regular visitors to children’s hospitals to cheer up young patients. And companies like McDonald’s used clowns as mascots to make their brands appealing to children.

But in the 21st century, there’s been a sharp turnaround. A 2008 study concluded that “clowns are universally disliked” by children. Some point to clown-turned-serial killer John Wayne Gacy as the turning point, while others blame Stephen King’s “It” for yoking clowns to horror.

Upon examining the history of the American circus, it almost seems as if the period in the 20th century when clowns were beloved by children deviated from the norm. Today’s scary clowns are not a divergence from tradition, but a return to it.

Madeline Steiner does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This illustration shows the lack of civility in American politics. Getty Images

When former Vice President Mike Pence declared, in a speech to a conservative group, that “democracy depends on heavy doses of civility,” several attendees stood up and walked out of the Georgetown University auditorium.

That speech came just three weeks before the midterm elections as commentators and candidates around the country were calling for greater civility in politics.

This is no surprise.

Civility is popular with the American people. Across the political spectrum, citizens agree that politics has become dangerously toxic, and they think the problem is worsening.

That is one political issue we all agree on – democracy needs to regain civility. If it’s going to, the effort has to start with each of us individually, rather than waiting for someone else to make the first move.

Bipartisan hypocrisy

This unanimity that more civility is needed in politics may be an illusion.

Citizens tend to lay the blame for political incivility solely on their political opponents. They want civility in politics, but treat compromise as a one-way street.

They want politicians to work together, but also want the opposition to capitulate.

Former Vice President Mike Pence visits Fox News on Oct. 19, 2022.
Shannon Finney/Getty Images

They value civility, but hold that their partisan rivals are uniformly immoral, dishonest and close-minded.

Pence reflected these us-versus-them attitudes himself during his Georgetown speech when he claimed that powerful institutions have “locked arms to advance a woke agenda designed to advance the policies and beliefs of the American left.”

Defining civility

Despite the multiple pleas for civility, little is said about what civility is.

That probably explains why civility is so popular.

Each citizen gets to define the term in their own way, and no one believes their own side to be uncivil. But if we believe that the U.S. needs to restore civility, we must define it.

It cannot be the demand to always remain calm in political debate. It’s generally good to keep one’s cool, of course. But when engaging in political disagreement, it’s not always possible to do so.

Our political opinions typically reflect deeply held values and commitments about justice. We tend to regard those who disagree with us about such matters as not merely on the other side of the issue, but on the wrong side. We should expect disagreements about important matters to get heated.

Civility might be better understood as the avoidance of undue hostility and gratuitous animosity in political debate. This could be something as simple as calling out inflated rhetoric, as John McCain famously did during his presidential campaign when his supporters claimed that Barack Obama was untrustworthy and not an American.

This idea acknowledges that heated debates can be appropriate within reason. It allows for some degree of antagonism, while at the same time prohibiting unnecessary vitriol.

In a sense, this makes civility a matter of judging whether our opponent’s behavior calls for an escalation of hostility. The problem is that, when it comes to evaluating the behavior of our opponents, we are remarkably poor judges.

Partisan civility

Americans’ assessment of political behavior tightly tracks our partisan allegiances.

We cut our allies slack while holding our opponents to very high standards. When our allies engage in objectionable behavior, we excuse them. But when members of the opposition engage in the same behavior, we condemn them. In one experiment, when partisans were told of an ally stealing an opposing candidate’s campaign signs off neighborhood lawns, they chalked it up to political integrity. But when those same partisans were told that an opponent had stolen their signs, they condemned the act as undemocratic.

We over-ascribe hostility, dishonesty and untrustworthiness to our political opponents. Consequently, we will almost always see fit to escalate hostility when interacting with them. Civility understood merely as the avoidance of unnecessary rancor thus fails in practice.

I’ve argued in my recent book “Sustaining Democracy” that civility isn’t really about how we conduct disagreements with political opponents.

Instead, civility has to do with how people formulate their own political ideas.

The GOP elephant and Democratic donkey are going toe-to-toe on Election Day.
Getty Images

We are uncivil when our political opinions do not take due account of the perspectives, priorities and concerns of our fellow citizens.

To better understand this idea, consider that in a democratic society, citizens share political power as political equals. As democratic citizens, we have the responsibility to act in ways that respect the equality of our fellow citizens, even when we disagree with their politics.

In my view, one way to respect their equality is to give due consideration to their values and preferences.

Of course, this does not require that we water down our own political commitments – or always try to meet our opponents halfway.

It calls only for a sincere attempt to consider their perspectives when devising our own.

People are civil when we can explain our political opinions to our political opponents in ways that are responsive to their rival ideas.

A civility test

Here is a simple three-part test for civility:

First, take one of your strongest political views, and then try to figure out what your smartest partisan opponent might say about it.

Second, identify a political idea that is key to your opponent and then develop a lucid argument that supports it.

Third, identify a major policy favored by the other side that you could regard as permissible for government – despite your opposition.

If you struggle to perform those tasks, you likely have a feeble grasp on the range of responsible political opinion.

When we cannot even imagine a cogent political perspective that stands in opposition to our own, we can’t engage civilly with our fellow citizens.

Robert B. Talisse does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Women are more likely to vote than men, but white women have different voting tendencies than women of color. Scott Eisen/Getty Images

Who shows up to cast a ballot and who is allowed to mark a ballot and have it counted will determine which candidates take office and what issues they focus on.

The Conversation asked three scholars of different aspects of voter turnout for their insights as the election approaches.

More women vote, and white women vote differently

Jane Junn, USC Dornsife College of Letters, Arts and Sciences

As the 2022 midterm elections approach, and in the wake of the U.S. Supreme Court overturning Roe v. Wade, new attention is focused on the role of women voters in U.S. elections. Regarding their turnout, three facts are important to keep in mind.

First, women outnumber men in the electorate. In the 2020 presidential election, women made up 53.1% of voters and men 46.9%. This has been a consistent pattern for decades.

Second, the gender gap is also a race gap. Women are more likely than men to support Democratic candidates, but there are racial and ethnic differences in that overall trend. While Black, Latina, Asian American and other women voters of color are strong supporters of Democrats, most white female voters have consistently supported Republican Party candidates.

For example, in 2020, 53% of white women voted for Donald Trump – compared with 46% who supported President Joe Biden.

Third, every election has a unique electorate. So it’s important to distinguish between voter turnout, where mobilization is key, and the patterns of partisan candidate choice. National patterns of voting in presidential elections are different from state and local election trends. And the contours of the voting public change over time, as young people turn 18 and new citizens register to vote.

Young voter turnout is low

John Holbein, University of Virginia

The United States has some of the lowest rates of youth voter turnout in the world. That’s despite the fact that a large majority of young people 18 to 24 years old care about politics and public affairs and want to participate in politics.

As my collaborator, political scientist D. Sunshine Hillygus, and I describe in our book “Making Young Voters,” many young people find the process of registering and voting too complex.

There are two ways to address this problem. The first is to revamp civics education to teach young people the skills they need to overcome voting obstacles. The Democracy Prep Charter School Network is a group of schools that structures students’ entire educational experience around “educating responsible citizen scholars for success in the college of their choice and a life of active citizenship.”

The other way is to reform laws to make registration easier and less complex, such as enabling online registration, preregistration of 16- and 17-year-olds and same-day registration and voting.

Both approaches meaningfully increase youth turnout and would help the next generation of young voters.

Voter ID laws affect turnout unequally

Nazita Lajevardi, Michigan State University

In 35 states, voters must provide some form of physical identification when they arrive to cast a ballot. In eight of those states, the strictest rules apply, typically requiring voters who arrive without a proper photo ID to take additional action, such as bringing one to the polling place later in the day, before their vote will be counted.

These laws make it more difficult for all people to vote, but they do so unequally. Black and other voters of color are less likely than whites to be able to afford the material burdens of securing qualifying identification, such as simply getting to a motor vehicles office to obtain the ID required to vote.

The strictest forms of these laws appear to disproportionately affect minority voter turnout.

Further, research shows that minorities are more likely than whites to be asked to actually present their ID at the polls.

And finally, even if voter ID laws are repealed, studies show that their effects last: People who were less likely to have proper ID still don’t show up, even if they don’t need those IDs anymore. That signals voters remain confused about whether they are allowed to vote, even when the law is clear that they can.

John Holbein receives funding from the National Science Foundation

Jane Junn and Nazita Lajevardi do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

St. Louis’ Central Visual and Performing Arts High School – the latest scene of school gun violence. AP Photo/Jeff Roberson

As a Michigan teen pleaded guilty on Oct. 24, 2022, to killing four students in a December 2021 attack, America was learning of yet another school shooting. This time, it was a performing arts high school in St. Louis, where a former student opened fire, killing two and injuring at least seven others before dying in a shootout with police.

The fact that yet another school shooting took place within hours of a gunman in a separate case appearing in court underscores how often these events take place in the U.S. As criminologists who have built a comprehensive database to log all school shootings in the U.S., we know that deadly school gun violence in America is now a regular occurrence – with incidents only becoming more frequent and deadlier.

Our records show that 52 people died in mass shootings at U.S. schools between 2018 and 2022 – seven more than in the previous 18 years combined, dating back to the watershed 1999 Columbine High School massacre.

Moreover, since the February 2018 mass shooting at Marjory Stoneman Douglas High School in Florida, more than 700 people have been shot at U.S. schools – on football fields and in classrooms, hallways, cafeterias and parking lots.

Many of these shootings were not the mass killing events that schools typically drill for. Rather, they were an extension of rising everyday gun violence.

More frequent and deadlier

There have been shootings at U.S. schools almost every year since 1966, but in 2021 there were a record 250 shooting incidents – including any occurrence of a firearm being discharged, be it related to suicides, accidental shootings, gang-related violence or incidents at after-hours school events.

That’s double the annual number of shooting incidents recorded in the previous three years – in both 2018 and 2019, 119 shootings were logged, and there were 114 incidents in 2020.

With more than two months left, 2022 is already the worst year on record. As of Oct. 24, there have been 257 shootings on school campuses – passing the 250 total for all of 2021.

Many of these incidents have been simple disputes turned deadly because teenagers came to school angry and armed. At East High in Des Moines, Iowa, in March 2022, for example, six teens allegedly fired 42 shots in an incident that took place during school dismissal time. The hail of gunfire killed one boy and critically injured two female bystanders. The district attorney described the case as one of the most complex murder investigations their office has ever conducted, partly because six handguns were used.

At Miami Gardens High in Florida that same month, two teens are alleged to have sprayed more than 100 rounds with a rifle and handgun modified for fully automatic fire. They targeted a student standing in front of the school, but bullets penetrated the building, striking two students sitting inside.

A similar situation unfolded outside Roxborough High in Philadelphia in October. A lunchtime dispute among students allegedly turned into a targeted shooting after a football scrimmage. Five teenage shooters are believed to have fired 60 shots at five classmates leaving the game, killing a 15-year-old.

In each of these cases, multiple student shooters fired dozens of shots.

The tally for 2022 also includes incidents involving lone shooters.

In April, a sniper with 1,000 rounds of ammunition and six semiautomatic rifles fired from a fifth-floor window overlooking the Edmund Burke School in Washington, D.C., at dismissal time. A student, a parent, a school security officer and a bystander were wounded before the shooter died by suicide.

Threats, hoaxes and false alarms

The increase in shootings in and around school buildings has many parents, students and teachers on edge. An October 2022 Pew Research survey found that one-third of parents report being “very worried” or “extremely worried” about a shooting at their child’s school.

Aside from the near daily occurrences of actual school shootings, there are also the near misses and false alarms that only add to the heightened sense of threat.

In September, a potential attack was averted in Houston when police got a tip that a student planned to chain the cafeteria doors shut and shoot students who were trapped inside. The following day near Dallas, another tip sent police scrambling to stop a vehicle on its way to a high school homecoming football game. The two teens inside allegedly had a loaded semiautomatic rifle and planned to commit a mass shooting at the stadium.

There have also been thousands of false reports of shootings this year. Hoaxes, swatting calls, even a viral TikTok school shooting challenge have sent schools across the nation into lockdown. Dozens, possibly hundreds, of these threats are automated 911 calls from overseas, but police have no choice but to respond.

People are so much on edge that a popped balloon at one California school in September led to an active shooter response from police. The sound of a metal pipe banging in August caused thousands of people to flee an Arkansas high school football stadium for fear of being shot. And a loud bang from a thrown chair triggered a code red lockdown at a Florida high school, sending parents rushing to the scene.

A better way?

The rising annual tally of school shootings has occurred despite enhanced school security in the two decades since the Columbine massacre. Metal detectors, clear backpacks, bulletproof chalkboards, lockdown apps, automatic door locks and cameras have not stopped the rise in school shootings. In fact, the May 2022 mass shooting at Robb Elementary School in Uvalde, Texas, provides a case study in systemic failure across the school safety enterprise.

Federal legislation passed in the wake of Uvalde will provide districts with money to hire additional school social workers, or pay for better communication mechanisms in school buildings to address the warning signs of violence missed in dozens of high-profile attacks.

It is aimed at better identifying and helping at-risk students before they turn to violence. However, another area that needs attention is students’ ready access to firearms.

Some school shooters, like the perpetrator in Uvalde, are young adults old enough to get their guns legally from gun stores, prompting questions over whether some states need to reconsider a minimum age for firearms sales.

Meanwhile, most school shooters get their guns from home, making safe storage of firearms a public health priority.

But many children get their guns from the streets. Preventing weapons from getting into the hands of potential school shooters will require police and policymakers to devote resources toward cracking down on straw purchasers – those who buy firearms for someone else – and getting stolen weapons, unserialized ghost guns and guns modified with auto-sears to make them fully automatic off the streets.

Such measures could be what it takes to stop the tragic normalization of school shootings.

James Densley receives funding from the National Institute of Justice and the Bureau of Justice Assistance.

Jillian Peterson receives funding from the National Institute of Justice and the Bureau of Justice Assistance.

David Riedman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The right to abortion is among the top issues on the ballot in several states. AP Photo/Jacquelyn Martin, File

Following the U.S. Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization and the wave of state-level abortion bans that followed, it might appear that anti-abortion activists could declare victory and go home.

However, from their perspective, a major threat still looms: Their tax dollars may be used to fund abortion in states where abortion is legal.

As it currently stands, several policies are in place that almost entirely prevent federal funds from being used to directly pay for abortion services. Since 1976, the Hyde Amendment has prohibited the public funding of abortion through Medicaid, with only rare exceptions. In the years since, “Hyde-like restrictions” have been added to other federal healthcare programs, as well as to private insurance plans purchased through the health insurance exchanges established by the Affordable Care Act.

There are also restrictions on federal funds granted to organizations that provide reproductive healthcare for low-income women, like Planned Parenthood, such that these funds cannot be used for abortion services. Even so, anti-abortion activists insist that because money is fungible, any federal support for organizations that provide abortion services or counseling represents an indirect taxpayer subsidy to the “abortion industry.”

Despite the multitude of restrictions currently in place, anti-abortion activists promote the idea that Americans are nonetheless being forced to pay for abortions. When the Democratic Party declared in 2016 its intention to roll back these restrictions, framing them as unjust barriers to abortion access, anti-abortion activists only ramped up this rhetoric.

In the post-Dobbs world of the 2022 midterms, abortion debates are primarily focused on whether abortion will be legal, but anti-abortion leaders are also highlighting the implications of these laws for voters’ tax dollars.

This should not be surprising. In the course of my research on debates about taxpayer-funded abortion, I found that this threat has historically been used to motivate and mobilize anti-abortion voters. This message has resonated especially with conservative evangelical Christians and Catholics who believe that when abortion is funded using their tax dollars, it makes them personally complicit in sin.

Opposition to public funding

The U.S. Conference of Catholic Bishops has long been a central player in advocacy campaigns to “stop taxpayer funding of abortion.” As one message encouraging voters to support this advocacy puts it, “Don’t let our government force you to pay for the deaths of unborn children.”

This concern resonates with Catholic Republicans, more than 7 in 10 of whom oppose the use of public funds for abortion, according to an analysis of national survey data that I conducted in 2021 with scholars Andrew Whitehead and Ryan Burge. This opposition is even stronger among Republicans who identify as born-again or evangelical Christian – between 84% and 90%.

But abortion funding bans also appeal to fiscally conservative voters who oppose welfare spending in general, whether or not they are morally opposed to abortion. Since the 1970s, anti-abortion leaders have argued that “funding bans protected taxpayers’ wallets as well as their consciences,” according to the legal historian Mary Ziegler. National survey data my colleagues and I analyzed suggests that this argument continues to resonate. Six out of 10 Republicans with no religious affiliation support abortion funding bans; so do between 14% and 17% of Republicans who support legal abortion.

Opposition to taxpayer-funded abortion, even more than abortion itself, is a thread connecting religious and fiscal conservatives within the Republican coalition.

A winning strategy

Campaigns to prevent tax dollars from funding abortion have kept these anti-abortion activists and other Republican voters engaged and mobilized for decades, even when a ban on legal abortion itself seemed unlikely.

As one leader of an anti-abortion organization told me in a 2021 interview: “Ultimately, I think our focus should still remain on criminalizing [abortion]. … But I think in the meantime we also should oppose the taxpayer funding of it … just because it’s a winning strategy.”

This seems no less true post-Dobbs. As the midterms approach, I have found that Republican candidates and movement leaders are continuing to stoke fear about taxpayer-funded abortion in order to mobilize voters, especially religious conservatives.

Bill codifying federal abortion rights

A major issue energizing voters this cycle is the possibility that Congress might pass a bill codifying abortion rights. While the primary issue at stake is whether abortions would be legal nationwide, abortion opponents are quick to note that such a bill would also “force taxpayers to pay for them,” as the anti-abortion news website LifeNews.com put it.

Anti-abortion activists are motivating voters by saying that they would be forced to pay for abortions through their tax dollars.
AP Photo/Jose Luis Magana

National Right to Life president Carol Tobias warned in the September 2022 issue of the organization’s newsletter that “Proponents of the bill claim this bill would codify Roe v Wade. Don’t be fooled – it goes much, much further.” In particular, she writes, “tax dollars would flow to pay for abortion.” Jeanne F. Mancini, president of March for Life, warned that if this “extreme bill” is passed, it would “force taxpayers to pay for abortions nationwide — up until birth.”

Even in the absence of such a bill, abortion opponents are raising the alarm about existing Biden administration policies that allow public funds to be used for abortion services, like a new Pentagon policy that would “pay for service members to travel for abortion care.”

As reported by the Baptist Press, the Southern Baptist Ethics & Religious Liberty Commission raised concerns that “the interim rule forces taxpayers to fund the taking of preborn human lives.” Meanwhile, the Christian Right organization Concerned Women for America warned, “A baby has already been killed under this cruel ploy. … Not only that, but the Administration wants Americans to pay for it.”

Abortion on state-level ballots

Voters in several states are also directly deciding the fate of their states’ abortion laws in November 2022. In at least two of these states, anti-abortion leaders are highlighting the implications for voters’ tax dollars.

For example, in Kentucky, where a near-total abortion ban went into effect shortly after Dobbs, voters will decide whether to amend the state constitution to say, “To protect human life, nothing in this Constitution shall be construed to secure or protect a right to abortion or require the funding of abortion.”

Explaining why voters should vote “Yes for Life,” the chair of the campaign supporting the amendment led with its implications for taxpayers: “The constitutional amendment is very clear. It protects taxpayer dollars, and it makes sure there is not an interpreted right of abortion in the constitution.”

In Michigan, where a ballot measure called Proposal 3 would enshrine abortion rights, backlash from anti-abortion activists led by local Catholic organizations prominently features the claim that “If passed, Proposal 3 would result in taxpayer-funded abortion.”

Municipal politics

Cities dedicating public funds to abortion post-Dobbs have also faced scrutiny in the lead-up to the midterms, especially from conservative religious groups.

In Philadelphia, for example, anti-abortion activists represented by the conservative Catholic Thomas More Society have filed suit against city leaders “for illegally using taxpayer money to pay for abortions.” Only weeks before the election, the Pro-Life Union of Greater Philadelphia rallied supporters to a hearing on the case, pleading “Don’t let Mayor (Jim) Kenney get away with it!”

Abortion debates are certainly not only about how abortions will be paid for. But journalists and scholars often pay far too little attention to anti-abortion activists’ persistent focus on the possibility that some abortions will be paid for with their tax dollars. If history and current research are any guide, this threat resonates with a diverse array of Republicans and will be used to mobilize voters in 2022 and beyond.

Gloria Dickson and Brianna Monte, undergraduate research assistants at the University of Connecticut, contributed research to this piece.

Ruth Braunstein has received funding from the Louisville Institute, the Society for the Scientific Study of Religion (SSSR), and the University of Connecticut. She serves on the Board of Directors of PRRI (Public Religion Research Institute).

Above it, only skies. Inside, very few nonbelievers. AP Photo/Jacquelyn Martin, File

The midterm elections are likely to return to Congress elected representatives who hold a range of religious beliefs.

But while self-identified Christians, Muslims, Jews, Buddhists and Hindus currently rub shoulders in the corridors of power, one group is noticeably absent: atheists. And despite a growing number of openly nonreligious candidates running for office, it remains difficult for atheists to get a foothold in Congress.

Of the 531 members of Congress included in a 2021 survey (at the time, four seats were unfilled), 88% identified as Christian, with members of the Jewish faith second, at 6%. Indeed, according to that survey, only two people in Congress don’t openly identify with any mainstream religion: Rep. Jared Huffman, a California Democrat, who identifies as a “humanist,” and Sen. Kyrsten Sinema, who describes herself as religiously unaffiliated. But neither has self-identified as an “atheist.” A list compiled by the Freethought Equality Fund Political Action Committee indicates that atheists are running for a few seats in the U.S. Congress, and many more are doing so at the state level.

But throughout history, only one self-identified atheist in the U.S. Congress comes to mind, the late California Democrat Peter Stark.

‘In atheists, they don’t trust’

This puts the country at odds with democracies the world over that have elected openly godless – or at least openly skeptical – leaders who went on to become revered national figures, such as Jawaharlal Nehru in India, Sweden’s Olof Palme, Jose Mujica in Uruguay and Israel’s Golda Meir. New Zealand’s Jacinda Ardern, arguably the global leader who has navigated the coronavirus crisis most successfully, says she is agnostic.

But in the United States, self-identified nonbelievers are at a distinct disadvantage. A 2019 poll asking Americans who they were willing to vote for in a hypothetical presidential election found that 96% would vote for a candidate who is Black, 94% for a woman, 95% for a Hispanic candidate, 93% for a Jew, 76% for a gay or lesbian candidate and 66% for a Muslim – but atheists fall below all of these, down at 60%. That is a sizable chunk who would not vote for a candidate simply on the basis of their nonreligion.

In fact, a 2014 survey found Americans would be more willing to vote for a presidential candidate who had never held office before, or who had extramarital affairs, than for an atheist.

In a country that changed its original national motto in 1956 from the secular “e pluribus unum” – “out of many, one” – to the faithful “in God we trust,” it seems people don’t trust someone who doesn’t believe in God.

As a scholar who studies atheism in the U.S., I have long sought to understand what is behind such antipathy toward nonbelievers seeking office.

Branding issue?

There appear to be two primary reasons atheism remains the kiss of death for aspiring politicians in the U.S. – one is rooted in a reaction to historical and political events, while the other is rooted in baseless bigotry.

Let’s start with the first: atheism’s prominence within communist regimes. Some of the most murderous dictatorships of the 20th century – including Stalin’s Soviet Union and Pol Pot’s Cambodia – were explicitly atheistic. Bulldozing human rights and persecuting religious believers were fundamental to their oppressive agendas. Talk about a branding problem for atheists.

For those who considered themselves lovers of liberty, democracy and the First Amendment guarantee of the free exercise of religion, it made sense to develop fearful distrust of atheism, given its association with such brutal dictatorships.

And even though such regimes have long since met their demise, the association of atheism with a lack of freedom lingered long after.

The second reason atheists find it hard to get elected in America, however, is the result of an irrational linkage in many people’s minds between atheism and immorality. Some assume that because atheists don’t believe in a deity watching and judging their every move, they must be more likely to murder, steal, lie and cheat. One recent study, for example, found that Americans even intuitively link atheism with necrobestiality and cannibalism.

Such bigoted associations between atheism and immorality do not align with reality. There is simply no empirical evidence that most people who lack a belief in God are immoral. If anything, the evidence points in the other direction. Research has shown that atheists tend to be less racist, less homophobic and less misogynistic than those professing a belief in God.

Most atheists subscribe to humanistic ethics based on compassion and a desire to alleviate suffering. This may help explain why atheists have been found to be more supportive of efforts to fight climate change, as well as more supportive of refugees and of the right to die.

This may also explain why, according to my research, those states within the U.S. with the least religious populations – as well as democratic nations with the most secular citizens – tend to be the most humane, safe, peaceful and prosperous.

Freethought Caucus

Although the rivers of anti-atheism run deep throughout the American political landscape, they are beginning to recede. More and more nonbelievers are openly expressing their godlessness, and swelling numbers of Americans are becoming secular: In the past 15 years, the percentage of Americans claiming no religious affiliation has risen from 16% to 26%. Meanwhile, some find the image of a Bible-wielding Trump troubling, opening up the possibility that Christianity may suddenly be contending with a branding problem of its own, especially in the skeptical eyes of younger Americans.

In 2018, a new group emerged in Washington, D.C.: The Congressional Freethought Caucus. Although it has only 16 members, it portends a significant shift in which some elected members of Congress are no longer afraid of being identified as, at the very least, agnostic. Given this development, as well as the growing number of nonreligious Americans, it shouldn’t be a surprise if one day a self-identified atheist makes it to the White House.

Will that day come sooner rather than later? God only knows. Or rather, only time will tell.

Editor’s note: This is an updated version of an article that was originally published on Oct. 5, 2020.

Phil Zuckerman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

In Maine’s 2020 Senate race, not one poll showed the GOP incumbent, Susan Collins, in the lead. But she trounced her Democratic challenger by 9 points. AP Photo/Robert F. Bukaty

When it became clear his poll had erred in the 2021 New Jersey governor’s race, Patrick Murray, director of the Monmouth University Polling Institute, acknowledged:

“I blew it.”

The campaign’s final Monmouth poll estimated Gov. Phil Murphy’s lead over Republican foe Jack Ciattarelli at 11 percentage points – a margin that “did not provide an accurate picture of the state of the governor’s race,” Murray later said in a newspaper commentary. Murphy won by 3.2 points.

It was a refreshingly candid acknowledgment by an election pollster.

More broadly, the error was one of several in the recent past and stands among the disquieting omens confronting pollsters in the 2022 midterm elections. Will they be embarrassed again? Will their polls in high-profile U.S. Senate and gubernatorial races produce misleading indications of election outcomes?

Such questions are hardly far-fetched or irrelevant, given election polling’s tattered recent record. A few prominent survey organizations in recent years have given up on election polling, with no signs of returning.

Worried supporters of Democratic incumbent New Jersey Gov. Phil Murphy at an election night event in 2021. Murphy, who one state poll estimated was leading his GOP challenger by 11 percentage points, won by 3.2 points.
Mark Makela/Getty Images

Treat polls warily

It is important to keep in mind that polls are not always in error, a point noted in my 2020 book, “Lost in a Gallup: Polling Failure in U.S. Presidential Elections.” But polls have been wrong often enough over the years that they deserve to be treated warily and with skepticism.

For a reminder, one need look no further than New Jersey in 2021 or, more expansively, to the 2020 presidential election. The polls pointed to Democrat Joe Biden’s winning the presidency but underestimated popular support for President Donald Trump by nearly 4 percentage points overall.

That made for polling’s worst collective performance in a presidential campaign in 40 years, and post-election analyses were at a loss to explain the misfire. One theory was that Trump’s hostility to election surveys dissuaded supporters from answering pollsters’ questions.

In any case, polling troubles in 2020 were not confined to the presidential race: In several Senate and gubernatorial campaigns, polls also overstated support for Democratic candidates. Among the notable flubs was the U.S. Senate race in Maine, where polls signaled defeat for the Republican incumbent, Susan Collins. Not one survey in the weeks before the election placed Collins in the lead.

She won reelection by nearly 9 points.

Recalling the shock of 2016

The embarrassing outcomes of 2020 followed a stunning failure in 2016, when off-target polls in key Great Lakes states confounded expectations of Hillary Clinton’s election to the presidency. They largely failed to detect late-campaign shifts in support to Trump, who won a clear Electoral College victory despite losing the national popular vote.

Past performance is not always prologue in election surveys; polling failures are seldom alike. Even so, qualms about a misfire akin to those of the recent past have emerged during this campaign.

In September 2022, Nate Cohn, chief political analyst for The New York Times, cited the possibility of misleading polls in key races, writing that “the warning sign is flashing again: Democratic Senate candidates are outrunning expectations in the same places where the polls overestimated Mr. Biden in 2020 and Mrs. Clinton in 2016.”

There has been some shifting in Senate polls since then, and surely there will be more before Nov. 8. In Wisconsin, for example, recent surveys suggest Republican incumbent Ron Johnson has opened a lead over Democratic challenger Mandela Barnes. Johnson’s advantage was estimated at 6 percentage points not long ago in a Marquette Law School Poll.

The spotlight on polling this election season is unsurprising, given that key Senate races – including those featuring flawed candidates in Pennsylvania and Georgia – will determine partisan control of the upper house of Congress.

Worth doing?

Polling is neither easy nor cheap if done well, and the field’s persistent troubles have even prompted the question of whether election surveys are worth the bother.

Monmouth’s Murray spoke to that sentiment, stating: “If we cannot be certain that these polling misses are anomalies then we have a responsibility to consider whether releasing horse race numbers in close proximity to an election is making a positive or negative contribution to the political discourse.”

He noted that prominent survey organizations such as Pew Research and Gallup quit election polls several years ago to focus on issue-oriented survey research. “Perhaps,” Murray wrote, “that is a wise move.”

Questions about the value of election polling run through the history of survey research and never have been fully settled. Early pollsters such as George Gallup and Elmo Roper were at odds about such matters.

Gallup used to argue that election polls were acid tests, proxies for measuring the effectiveness of surveys of all types. Roper equated election polling to stunts like “tearing a telephone book in two” – impressive, but not all that consequential.

Pollsters in 2016 predicted Democratic presidential candidate Hillary Clinton would win some states that she actually lost.
Screenshot, New York Times

Who is and isn’t responding

Experimentation, meanwhile, has swept the field, as contemporary pollsters seek new ways of reaching participants and gathering data.

Placing calls to landlines and cellphones – once polling’s gold standard methodology – is expensive and not always effective, as completion rates in such polls tend to hover in the low single digits. Many people ignore calls from numbers they do not recognize, or decline to participate when they do answer.

Some polling organizations have adopted a blend of survey techniques, an approach known as “methodological diversity.” CNN announced in 2021, for example, that it would include online interviews with phone-based samples in polls that it commissions. A blended approach, the cable network said, should allow “the researchers behind the CNN poll to have a better understanding of who is and who is not responding.”

During an online discussion last year, Scott Keeter of Pew Research said “methodological diversity is absolutely critical” for pollsters at a time when “cooperation is going down [and] distrust of institutions is going up. We need to figure out lots of ways to get at our subjects and to gather information from them.”

So what lies immediately ahead for election polling and the 2022 midterms?

Some polls of prominent races may well misfire. Such errors could even be eye-catching.

But will the news media continue to report frequently on polls in election cycles ahead?

Undoubtedly.

After all, leading media outlets, both national and regional, have been survey contributors for years, conducting or commissioning – and publicizing – election polls of their own.

W. Joseph Campbell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Republican candidate for U.S. Senate Mehmet Oz has talked a lot about the crime rate during his campaign in Pennsylvania. AP Photo/Gene J. Puskar

In the lead-up to the 2022 midterm elections, Republican candidates across the nation are blaming Democrats for an increase in crime.

But as a scholar of criminology and criminal justice, I believe it’s important to note that, despite the apparently confident assertions of politicians, it’s not so easy to make sense of fluctuations in the crime rate. And whether it’s going up or down depends on a few key questions:

What you mean by “crime,”
What the “up” or “down” comparisons are in reference to, and
The location or area being examined.

Here’s an explanation of those elements – and why there is no one answer to whether crime has increased in the past year, or over the past decade.

What is ‘crime,’ anyway?

Republican politicians across the nation, including Cicely Davis in Minnesota, are working to get voters concerned about crime.
Cicely Davis campaign email

Usually when politicians, public officials and scholars talk about crime statistics, they’re referring to the most serious crimes, which the FBI officially calls “index” or “Part 1” offenses: criminal homicide, rape, robbery, aggravated assault, burglary, larceny, motor vehicle theft and arson.

Because these crimes vary a great deal in terms of seriousness, experts break this list up into “violent” and “property” offenses, so as not to confuse a surge in thefts with an increase in killings.

Each month, state and local police departments tally up the crimes they have handled and send the data to the FBI for inclusion in the nation’s annual Uniform Crime Report.

But that system has limitations. According to the U.S. Bureau of Justice Statistics, fewer than half of all events that could count as crimes actually get reported to police in the first place. And police departments are not required to send information about known crimes to the FBI. So each year what are presented as national crime statistics are derived from whichever of the roughly 17,000 police departments across the country decide to send in their data.

In 2021, the optional nature of reporting crime statistics was a particular problem, because the FBI asked for more detailed information than it had in the past. Historically, the bureau received data from police departments covering about 90% of the U.S. population. But fewer agencies supplied the more detailed data requested in 2021. That data covered only 66% of the nation’s population. And the patchwork wasn’t even: In some states, such as Texas, Ohio and South Carolina, nearly all agencies reported. But in other states, such as Florida, California and New York, participation was abysmal.

With those caveats in mind, the 2021 estimates suggest that criminal homicide rose about 4% nationally from 2020 levels. Robberies were down 9%, and aggravated assaults remained relatively unchanged.

Rapes are notoriously underreported to police, but the 2021 National Crime Victimization Survey suggests there was no significant change from 2020.

What’s the benchmark?

Those comparisons look at the prior year to assess whether certain types of crime are up or down. Such comparisons may seem straightforward, but violent crime, particularly homicide, is statistically rare enough that a rise or fall from one year to the next doesn’t necessarily mean there is reason to panic or celebrate.

Another way to assess trends is to look at as much data as possible. Over the past 36 years, clear trends have emerged. The national homicide rate in 2021 wasn’t as high as it was in the early 1990s, but 2021’s figure is the highest in nearly 25 years.

Meanwhile, robberies have been trending steadily downward for the better part of 30 years. And though the aggravated assault rate didn’t change much from 2020 to 2021, it is clearly higher now than at any time during the 2010s.

Crime is highly localized

These figures are imperfect in other ways, too. The data being used in today’s assertions about crime rates is more than 10 months old and presents national figures that mask a substantial amount of local variation. The FBI won’t release 2022 crime data until the fall of 2023.

But there is more current data available: The consulting firm AH Datalytics has a free dashboard that compiles more up-to-date murder data from 99 big cities.

As of October 2022, it indicates that murder in big cities is down about 5% in 2022 when compared with the first 10 months of 2021. But this aggregate change masks the fact that murder is up 85% in Colorado Springs, Colo.; 33% in Birmingham, Ala.; 28% in New Orleans; and 27% in Charlotte, N.C. Meanwhile, murder is down 38% in Columbus, Ohio; 29% in Richmond, Va.; and 18% in Chicago.

Even these city-level statistics don’t tell the whole story. It is now well established that crime is not randomly distributed across communities. Instead, it clusters in small areas that criminologists and police departments often refer to as “hot spots.” What this means is that regardless of whether crime is up or down in cities, a handful of neighborhoods in those cities are likely still significantly and disproportionately affected by violence.

Justin Nix does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Soviet leader Nikita Khrushchev, left, met with U.S. President John F. Kennedy in Vienna in 1961. Universal History Archive/Universal Images Group via Getty Images

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to [email protected].

In the Cold War, was there any actual war going on? Like, with armies? Or was it mostly about space? – Leia K., age 10, Redmond, Washington

“I am getting confused about all these wars we are studying,” one of my college students confessed to me years ago. After we discussed the various nations that fought in World Wars I and II, she asked: “Now, who fought in the Cold War?”

I told her the Cold War was not an actual war. Unlike the two world wars, there were no physical battles between the major adversaries. It was, instead, an extended competition between the United States and the Soviet Union, along with their respective allies. In 1991, the Soviet Union split up into 15 countries, the largest of which is Russia.

But back then, both of these two so-called superpowers wanted to be the most powerful nation in the world, building themselves up while simultaneously trying to reduce the power and influence of the other. Washington and Moscow competed in numerous ways: over money and natural resources like oil, over allies, over weapons technology, over influence and prestige, over space exploration, over ideas.

The Cold War relationship between the two rival nations was often tense. Once, in 1962, it led to the threat of nuclear war breaking out because the Soviet Union wanted to place nuclear missiles in Cuba, very close to the U.S. That brought the world to the brink of what would have been a catastrophic conflict.

But through skill, prudence or luck – or all three – American and Soviet leaders managed to avoid direct combat with each other from 1945 to 1989, the basic period of the Cold War.

A war without fighting?

My student could be forgiven for her confusion. The very term “Cold War” is contradictory and confusing. It came into wide use in 1947. By using the word “war,” it captured the seemingly life-or-death struggle between the United States and the Soviet Union and between capitalism and communism. But by describing this war as “cold,” it indicated the struggle did not involve weapons and did not result in rival armies seeking to destroy each other.

How could a war be cold? Essentially, by being fought not in the traditional manner of clashing armies, but by all other means short of actual combat.

The Cold War stayed cold for a variety of reasons. Most importantly, the advent of nuclear weapons meant that any conflict between the superpowers risked a nuclear exchange that could have claimed tens of millions of lives and left a swath of destruction in both the Soviet and American homelands.

To avoid such a cataclysmic outcome, policymakers in Moscow and Washington were highly sensitive to the risks of any conflict. They worked hard to find peaceful resolutions to the multiple confrontations and crises they faced between the end of World War II in 1945 and the fall of the Berlin Wall in 1989.

Each superpower also believed that it was engaged in a long-term struggle. Each was convinced that the superiority of its social, political and economic systems would ultimately bring victory in the competition, through peaceful means.

Think about it: Why resort to war, with all the death, devastation and uncertainty it would bring, if you sincerely believe that time – and history – is on your side?

Not a time of peace

Yet the Cold War era was hardly peaceful.

U.S. and Soviet troops never fought each other directly during those years. But numerous Cold War-related conflicts raged across the globe from the 1940s to the 1980s.

The vast bulk of those conflicts occurred in the developing countries of Asia, Africa and the Middle East – the so-called Third World or Global South. In fact, as many as 20 million people died in wars fought between 1945 and 1989. Only 1% of those lost their lives in Europe, the original area of Cold War confrontation. The other 99% died on battlefields of developing nations.

Those conflicts took many forms, including rebellions against colonial powers, civil wars, invasions and revolutions. They also had many different causes. Yet nearly all were affected by the wider Soviet-American struggle for power and influence. And nearly all were intensified and made bloodier and more costly by it.

The era’s most deadly conflicts were the Korean War, from 1950 to 1953, and the Vietnam War, from 1961 to 1975, each of which claimed millions of lives. The United States deployed troops to both of those conflicts, largely because it was determined to contain the expansion of communism.

For its part, the Soviet Union did not participate directly in either war. But it provided aid and support to its communist allies in North Korea and North Vietnam. The Soviet Union’s major ally during the first half of the Cold War, the communist-led People’s Republic of China, contributed massive numbers of troops to the conflict in Korea and provided hundreds of thousands of support troops to the conflict in Vietnam.

In sum, “Cold War” remains a somewhat contradictory term for, and description of, the period from 1945 to 1989. It correctly reflects the crucial fact that the struggle for global supremacy between the United States and the Soviet Union never involved direct combat between the two nations’ military forces. But it minimizes the extensive and bloody litany of conflicts that raged throughout those years, nearly all of which were caused by or intensified by that rivalry.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to [email protected]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Robert J. McMahon does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.