Diwali: A celebration of the goddess Lakshmi, and her promise of prosperity and good fortune
This year, Diwali, a popular festival for Hindus, Jains, Buddhists and Sikhs, will be celebrated on Oct. 24 – the Amavasya, or new moon day, of the month of Kartik in the traditional Indian lunar calendar.
Devotees around the world will bring festivities into their homes by lighting earthen lamps called diyas, setting off fireworks, displaying colored electric lights and exchanging gifts. In northern India, this date also marks the beginning of the new year.
The day is specially dedicated to the worship of Lakshmi, the Hindu goddess of prosperity and good fortune.
Who is Lakshmi?
In modern images, Lakshmi is typically depicted wearing either a red or a green sari. The upper two of her four hands are holding lotus flowers, while her lower right hand is upraised in the “do not be afraid” gesture, or abhaya mudra.
Her lower left hand is pointed downward with her palm facing out and golden coins are falling from it. She sits or stands upon a large red lotus flower. Often, there are two elephants behind her with their trunks upraised. As poet Patricia Monaghan writes, sometimes these elephants “shower her with water from belly-round urns.”
Lakshmi is believed to be the consort of Vishnu, who is the preserver of the cosmic order, or dharma. As Vishnu’s shakti, or power, Lakshmi is his equal and an integral part of his being.
In the Srivaishnava tradition of Hinduism, Lakshmi and Vishnu make up a single deity, known as Lakshmi Narayana. Also known as Shri, Lakshmi is believed to mediate between her human devotees and Vishnu.
Origins of Lakshmi
According to the sources I have studied as a scholar of Hindu, Jain and Buddhist traditions, Shri in fact seems to be the earliest name given to this goddess in Hindu texts. The word originally meant splendor, and it refers to all that is auspicious: all the good and beautiful things in life. The name Lakshmi, on the other hand, refers to a sign, imprint or manifestation of Shri. These two words seem to refer to two distinct goddesses in the earliest Hindu literature, the Vedas.
By the first century, however – the period when the “Puranas,” or the ancient lore of the Hindu deities, were written – these two deities appear to have merged into a single goddess, known as Shri, Lakshmi or Shri Lakshmi.
There are many stories of Lakshmi’s origins. In the most popular of these, from the fifth-century Vishnu Purana, she emerges from the ocean when the Devas and Asuras, the gods and the anti-gods, churn it to acquire amrita, the elixir of immortality. In another source – the Garuda Purana, a ninth-century text – she is said to be the daughter of the Vedic sage Bhrigu and his wife, Khyati.
Those who wish for prosperity in the new year say special prayers to Lakshmi and light diyas in their homes so the goddess will visit and bless them.
Jeffery D. Long does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Iranian drones used by Russia in Ukraine show that there’s already one victor in that war: Iran
The war in Ukraine is helping one country achieve its foreign policy and national security objectives, but it’s neither Russia nor Ukraine.
It’s Iran.
That was starkly clear on the morning of Oct. 17, 2022, as Iranian-made drones attacked civilian targets in Ukraine’s capital, Kyiv. Russia used the Iranian drones to inflict damage on Ukraine’s national energy company headquarters, and the drones also killed four civilians.
Iran is among Russia’s most vocal supporters in the war. As a military analyst who specializes in Iranian national security strategy, I see this as having little to do with Ukraine and everything to do with Iran’s long-term strategy vis-à-vis the United States.
As Russia’s war on Ukraine passed six months and continued eroding Russia’s manpower, military stores, economy and diplomatic connections, leader Vladimir Putin opted for an unlikely but necessary Iranian lifeline to salvage victory in Ukraine and also in Syria where, since 2015, Russian soldiers have been fighting to keep Bashar al-Assad’s government in power.
And at a time when the Islamic Republic of Iran’s government is facing growing citizen protests against its autocratic rule, Putin’s move has, in turn, helped Iran make progress in promoting its national interests, as defined by its leadership.
Opposing the US everywhere
Since the Islamic Revolution of 1979, Iran’s leaders have believed the United States is constantly scheming to topple Iran’s government. They view leaders in Washington as the greatest threat and obstacle to promoting Iranian national interests – achieving economic self-sufficiency, international legitimacy, regional security, power and influence.
The fears of Iran’s leaders are not irrational – the long history of U.S. meddling in Iranian affairs, continuous open hostility between the two countries and decades of U.S. military buildup in close proximity to Iran greatly concern leaders in Tehran.
The U.S. has military forces in many Middle Eastern countries, with or without invitation. To promote its national interests, Iran is working to force the U.S. military out of the region and reduce U.S. political influence there.
Iran has an even bigger aim: to overthrow what it sees as the U.S.-dominated global political order.
Iran counters U.S. influence by maintaining partnerships with an assortment of nonstate militias and governments united by their fierce anti-U.S. hostility. The country nurtures a network of militant partner and proxy groups, whose own political preferences and ambitions align with Iran’s objectives, by providing weapons, training, funds – and, in some cases, direction. Among the recipients are Hezbollah, Hamas and Palestinian Islamic Jihad, friendly Iraqi militias and Ansar Allah in Yemen, better known as the Houthis or the Houthi rebels.
Through these militias and their political arms, Iran extends its influence and works to shape an Iran-friendly government in states like Lebanon, Syria, Iraq and Yemen. It threatens U.S. forces and antagonizes Western-allied governments in states such as Israel, Jordan, Saudi Arabia, Kuwait, Bahrain and the United Arab Emirates.
At the national level, Iran maintains no permanent mutual defense treaties. Its closest strategic partners include Syria, Venezuela, North Korea, China and Russia. They cooperate politically, economically and militarily to create an alternative to what their leaders perceive as the U.S.-led world political order.
That cooperation includes undermining U.S. national interests and helping ease or circumvent Western political pressure and economic sanctions.
Tehran to the rescue
Russia’s current war in Ukraine has left Moscow with only a handful of sympathetic friends.
Few political leaders understand Putin’s newfound political isolation and related animosity toward the United States better than Iranian leader Ayatollah Ali Khamenei. But Iran-Russia relations are complicated.
The two countries found common cause in helping Syrian strongman Assad defeat his country’s opposition forces, but for different national interests.
Saving Assad helps Russia reassert itself as a major power in the Middle East. For Iran, a friendly Syria is a critical link in Iran’s anti-U.S., anti-Israel coalition.
As Russia and Iran fought to sustain Assad, they also competed for lucrative postwar reconstruction and infrastructure contracts in that country, and to shape the post-civil war political environment to their advantage.
But neither country was bold enough to influence the way the other operated in Syria. Consequently, sometimes Iranian-backed and Russian forces cooperated, and at other times they squabbled. Mostly they left each other alone.
Ultimately, though, Russia’s plight in Ukraine compelled its leader to solicit Iran’s help in two ways.
First, the Islamic Revolutionary Guards Corps, a branch of the Iranian military, provided supplementary manpower to fill the void left when Russia reallocated troops from Syria to its Ukraine campaign.
Second, Russia has used Iran’s low-cost and battle-proven unmanned aerial vehicles, commonly known as drones, to counter Kyiv’s Western-supported arsenal and buttress its own struggling forces and surprisingly inept warfighting capabilities.
In July, Iran hosted numerous Russian officers and conducted training on Iranian Shahed-129 and Shahed-191 drone operations. As of early August 2022, anonymous intelligence sources and Ukrainian officials indicated that Russia had obtained and used Iranian drones in Ukraine.
Since acquiring Iranian drones in early September, Russia has launched over 100 Iranian Shahed-136 and Mohajer-6 attack and reconnaissance drones in over a dozen attacks against a wide range of targets: Ukrainian special forces, armor and artillery units, air defense and fuel storage facilities, military and energy infrastructure, and civilian targets, including a recent series of drone and missile attacks against Kyiv.
Russia is expected to soon rely on Iran further to supplement its dwindling weapons supplies by acquiring two types of Iranian-made short-range ballistic missiles for use in Ukraine, according to U.S. and allied security officials.
Ukraine war promotes Iran’s interests
This warming alliance may not help Russia defeat Ukraine, but it will promote Iran’s national interests.
Russia’s drawdown in Syria brought additional Iranian soldiers there, letting them further prove their fighting abilities and entrench themselves in the country. That, in turn, allows Iran to control territory threatened by anti-Assad forces and to maintain an open corridor, or “land bridge,” by which Iran extends support to its network of anti-America and anti-Israel partners and proxies.
Russia’s acquisition of Iranian arms will significantly boost Iran’s weapons industry, whose primary clientele right now is its own militias. Iran’s recent efforts to expand drone manufacturing and exports yielded limited success in small, mostly peripheral markets of Ethiopia, Sudan, Tajikistan and Venezuela.
Moscow is the second-largest global arms exporter, and its surprising transformation into an importer of Iranian arms signals the seriousness of Russia’s problems. It also legitimizes and expands Tehran’s weapons industry beyond arms production for the purpose of self-sufficiency. This one alliance moves Iran toward a more prominent role as a major arms exporter.
Lastly, Russia’s war in Ukraine opens a new avenue by which Iran might directly counter U.S.-provided weapons, as well as an opportunity to undermine U.S. and NATO influence in Eurasia. Iran’s drones could afford Moscow an effective and desperately needed response to U.S. weapons wreaking havoc against Russian forces in Ukraine.
Iranian weapons may force Ukraine’s Western benefactors to allocate additional billions for counter-drone or air defense systems, or aid to replace assets that Iranian weapons potentially neutralize.
Zero-sum game
The introduction of Iranian ballistic missiles to Ukraine would compound the limited tactical victories scored by Iranian drones. They would bring further unnecessary suffering, prolonging and further destabilizing the conflict, but I don’t believe they would tip the scales in Russia’s favor.
Their greater contribution is to Iran’s national interests: They allow Iran to directly check and undermine the U.S. and NATO outside of Iran’s usual regional area of operations. They boost Iran’s profile among countries that also wish to challenge the United States and NATO’s political, military and economic power. And they strengthen solidarity among those countries.
As Iran’s fighters, advisers and weapons proliferate to new areas and empower U.S. adversaries, Iran further promotes its national interests at the expense of U.S. national interests.
This is an updated version of a story originally published Aug. 30, 2022.
Aaron Pilkington is a U.S. Air Force analyst of Middle East affairs now studying at the University of Denver, conducting research on Iranian national security strategy. He will later join the Military & Strategic Studies department at the U.S. Air Force Academy. The views expressed are those of the author and do not reflect the official position of the Department of Defense, Department of the Air Force, the United States Air Force Academy, or any other organizational affiliation.
HBO’s ‘House of the Dragon’ was inspired by a real medieval dynastic struggle over a female ruler
In three decades of teaching medieval European history, I’ve noticed my students are especially curious about the intersection of the stories told in class and the depictions of the Middle Ages they see in movies and television.
Judged by their historical accuracy, cinematic portrayals are a mixed bag.
However, popular fantasy, unencumbered by the competing priority of “getting it right,” can, in broad strokes, reflect the values of the medieval society that inspires it.
“House of the Dragon” is one of those TV shows. A king, lacking a male heir to his throne, elevates his teenage daughter to be his named successor, and a complex dynastic drama ensues.
This storyline reflects the real obstacles facing women who aspired to exercise royal authority in medieval society.
The queen as a conduit to power
George R. R. Martin, whose novels were the foundation for the HBO series “Game of Thrones,” has made no secret of his inspiration for “House of the Dragon”: the Anarchy, a two-decade period, from 1135 to 1154, when a man and a woman vied with each other for the English throne.
The story went like this: Henry I sired two dozen or more children out of wedlock. But with his queen, Matilda, he had only a daughter, the future “Empress” Matilda, and a son, William. With William’s birth, the foremost responsibility of medieval queenship was fulfilled: There would be a male heir.
Then tragedy struck. In 1120, a drunken 17-year-old William attempted a nighttime channel crossing. When his also-inebriated helmsman hit a rock, the prince drowned.
The queen had died two years earlier, so Henry I remarried – Adeliza of Louvain – but they had no children together. The cradle sat empty and the sands in Henry I’s hourglass ran low, so he resolved that his lone legitimate child, Matilda, would have the throne as a ruling queen.
The move was unprecedented in medieval England. A queen could exert influence in her husband’s physical absence or when, after a king’s death, their son was a minor. Her role, moreover, as an intimate confidant and counselor could be consequential.
But a queen was not expected to swing a sword or lead troops into battle and forge the personal loyalties on which kingship rested, to say nothing of the misogyny inherent to medieval English society. The queen was the conduit through which power was transferred by marriage and childbirth, not its exclusive wielder.
Viserys and Henry I share the same plight
A similar scenario drives the plot of “House of the Dragon.” The absolute preference in the fictional kingdom Westeros for a male ruler is expressed in the series’ opening scene.
The old king, having outlived his sons, empowers a council of nobles to choose his successor between two of his grandchildren, the cousins Rhaenys and Viserys. Rhaenys, a female, is the older of the two.
Yet the male Viserys becomes king and Rhaenys, “the queen who never was,” later ruefully concedes that this represented “the order of things.”
Once installed, however, Westeros’ new king would have understood the plight of England’s Henry I.
Aemma, Viserys’ queen, suffers stillbirths and miscarriages and produces only a daughter, Rhaenyra. A fading hope for a son is dashed when a breech birth and a brutal Caesarean section, intended to save the child, end up killing Aemma. The boy – the desperately desired heir – doesn’t live out the day.
Without a son, Viserys names as his heir his younger brother, the debauched, sinister Daemon. When Daemon’s conduct becomes intolerable, Viserys disinherits and banishes him. Left with his young daughter Rhaenyra, he decides to make her a ruling queen, a role the girl relishes as she seeks to change “the order of things.”
Building support for a ruling queen
The challenge for a medieval king, whether Henry I or the fictional Viserys, was to persuade the nobles to overcome their prejudices and not just accept but actively support a woman’s ascension to power.
Henry I pursued measures to make his daughter palatable to them. Matilda, who had married the Holy Roman Emperor Henry V in 1114, returned to England a widow in 1125. Henry I, determined to forge a sacramental bond between his daughter and England’s magnates, compelled his barons in 1127 to swear their support for her as his successor. Henry I then turned to arranging a marriage for Matilda so she could give birth to a grandson and buttress her position.
After Matilda’s nuptials with Geoffrey, count of Anjou, the barons were summoned to renew their oath to her in 1131. A son, Henry, was born two years later, and a third pledge followed. Henry I died two years later of food poisoning after eating eels, a favorite dish of his.
The durability of his arrangements for Matilda’s rise to authority was immediately tested.
Viserys in “House of the Dragon” works from a similar playbook. The worthies of Westeros vow their loyalty to Rhaenyra as royal successor. Once Rhaenyra becomes marriageable, Viserys fields a plethora of suitors for her hand. A reluctant bride, Rhaenyra finally accedes to a union in which she would “dutifully” produce a male heir but then let her heart have what it wanted.
The unfortunate result is her inability to conceive with her husband while having three sons by a lover. Her situation is further complicated by Viserys’ remarriage to the lady Alicent, who gives him sons. Dangers stalk Rhaenyra’s path to power. In Westeros, as in England, a princess is expected to guard her chastity closely until marriage and, once wed, to be monogamous and not to “sully” herself in order to ensure the legitimacy of her children – a blatant double standard when noblemen frequently had children out of wedlock.
Yet even rumors of female infidelity could threaten succession. Lineage matters. Blood binds, as evident in the streams of it running from family crest to family crest in the series’ opening credits.
War ensues
Did these strategies work?
Not for Matilda. Stephen of Blois, a son from the marriage of Henry I’s sister Adela to a French count, aggressively registered a claim to the crown after Henry I’s death. Many English magnates conveniently forgot their oaths to Matilda, and Stephen became king.
Matilda was not without supporters – her half-brother Robert, earl of Gloucester; her husband, the count of Anjou; nobles disaffected by Stephen’s rule; and opportunists seeking personal gain from the conflict. Matilda resisted and the Anarchy ensued.
Forces supporting Matilda invaded England in 1139 but, save for a moment in 1141, she never ruled. She then focused instead on elevating her son to the crown.
Prosecution of the war ultimately passed to the young Henry. His mounting military successes jogged the barons’ memory of their past commitments, and the contending parties reached a settlement: Henry would succeed Stephen. With Stephen’s death, Henry became Henry II. England wouldn’t have another ruling queen until the accession of Queen Mary I in 1553, nearly four centuries later.
But what of Rhaenyra?
Westeros is not 12th-century England. For Martin, the author, the Anarchy does not serve to establish historical fact but is a wellspring for his creative vision. The fire-breathing dragon – that denizen of the medieval imagination – exists in Westeros. Rhaenyra’s pursuit of the throne may be fraught with difficulties, but she is a dragon-rider, and dragons were the most fearsome military asset in the kingdom.
This makes her dangerous in a way Matilda of England could hardly have conceived. Nonetheless, “House of the Dragon,” through the lens of fantasy, reflects a slice of the English medieval experience.
David Routt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Colonoscopy is still the most recommended screening for colorectal cancer, despite conflicting headlines and flawed interpretations of a new study
A recently published study in a high-profile medical journal appeared to call into question the efficacy of colonoscopy, a proven and widely utilized strategy for the screening and prevention of colorectal cancer.
News headlines were striking: “Disappointing results on colonoscopy benefits”; “New study suggests benefits of colonoscopies may be overestimated”; “In gold-standard trial, invitation to colonoscopy reduced cancer incidence but not death.”
Such news coverage has ignited controversy and created some confusion about the study and its implications, leading people to question whether the results suggest that reevaluation of the utility and need for a colonoscopy is warranted.
As a cancer research scientist with over 20 years of experience studying colorectal cancer screening and prevention, I am confident that colonoscopy remains one of the most critical and effective tools to screen for, detect and prevent this prevalent and lethal form of cancer.
Colorectal cancer is the fourth-most common cancer and the second-leading cause of cancer deaths in the U.S. The American Cancer Society estimates that there will be 151,000 new cases of colorectal cancer diagnosed in 2022 and nearly 53,000 deaths. Screening has contributed markedly to a decline in colorectal cancer cases and deaths over the past several decades.
Current U.S. Preventive Services Task Force guidelines recommend that people with average risk begin screening for colorectal cancer at age 45. This recommendation was lowered from age 50 in 2021 due to the recent increase in colorectal cancer among young adults.
Unpacking the new study
Several investigations have shown that colonoscopy screening is highly effective in the detection and removal of precancerous polyps before they progress to cancer.
That is why media coverage of the new study published in the New England Journal of Medicine prompted confusion and concern among health care experts and the public. Many of these news reports mistakenly interpreted the study as showing that colonoscopies have only a small effect on the incidence of colorectal cancer and are ineffective at reducing deaths. Such misinterpretations could have grave consequences for efforts aimed at screening for and preventing a form of cancer that affects the health and well-being of so many.
In the study, a team of European researchers performed a randomized clinical trial that examined the risk of colorectal cancer and death in healthy men and women between the ages of 55 and 64. Study participants, who were recruited from population registries in Norway, Sweden, Poland and the Netherlands, were either invited to undergo a colonoscopy or were not invited and received usual care.
After approximately 10 years, the research team gathered information on colorectal cancer incidence and deaths among the 28,220 people in the invited group and the 56,365 people in the uninvited group. They found that those in the invited group had a mere 18% decrease in cases of colorectal cancer relative to those in the uninvited group. They also found that there was no significant reduction in deaths in the invited group. This seemingly disappointing result drove many of the more misleading headlines in the media.
But there is a critical caveat in all this that bears explaining. Only 42% of the participants who were invited to receive a colonoscopy actually did so. This percentage ranged from 33% among those from Poland, where most of the participants were recruited, to 60.7% among those from Norway.
When the researchers determined the benefit among those who actually underwent a colonoscopy, they found that the incidence of colorectal cancer decreased by 31% and deaths decreased by 50% – results that are much closer to those expected from other studies.
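To see why low participation dilutes the headline result, a back-of-the-envelope calculation helps. The sketch below is a toy model, not the study’s actual statistical adjustment, which also accounts for differences between those who did and did not participate; it simply assumes that non-participants receive no benefit.

```python
# Toy dilution model: if only 42% of invitees actually undergo colonoscopy,
# and the procedure cuts colorectal cancer risk by ~31% for those who do,
# the average effect across the whole invited group is much smaller.
adherence = 0.42            # share of invitees who underwent colonoscopy
effect_if_screened = 0.31   # assumed risk reduction among the screened

diluted_effect = adherence * effect_if_screened
print(f"Apparent effect across all invitees: {diluted_effect:.0%}")  # ~13%
```

The toy numbers do not match the study’s 18% intention-to-treat estimate exactly, because the published analysis is more sophisticated, but the arithmetic shows the direction of the distortion: when most invitees skip the procedure, the group-level figure understates the benefit to those actually screened.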
Another shortcoming of the study is the time between recruitment and screening of the participants. Colorectal cancer is typically slow to develop, taking 10 or more years to progress from precancerous polyps to cancer. Thus, the 10-year window used in the study may be too short to measure the full impact of colonoscopy screening. The authors recognize this and indicate that they will be doing an analysis at 15 years.
These and other issues have been clearly outlined in responses to the study by several medical and advocacy groups composed of experts with long-standing experience in colorectal cancer and its screening. These include the National Colorectal Cancer Roundtable, the Colorectal Cancer Alliance, the American Cancer Society and the American Society for Gastrointestinal Endoscopy, among others.
All of the responses emphasize that, despite the tone of much of the media coverage, nothing in the study changes the recognized reliability or efficacy of colonoscopy screening. At best, the findings confirm that for many, a simple invitation to screening does not necessarily promote participation in screening.
Colonoscopy remains the ‘gold standard’
During a colonoscopy, a long flexible tube is inserted into the rectum and moved through the colon to allow the direct viewing, identification, imaging and removal of abnormal tissues such as precancerous polyps that could progress into colorectal cancer. As such, for quite some time, colonoscopies have been considered the “gold standard” for colorectal cancer screening and prevention, and still are.
However, there are several features of the procedure that can deter people from choosing it. It is invasive, and there is risk – though small – of complications. In addition, for the procedure to be effective, the colon must be cleared of any stool, requiring a protocol that many find distasteful and uncomfortable. Finally, it can be expensive, creating barriers for those who lack adequate insurance coverage.
Though not as sensitive as a colonoscopy, there are a number of noninvasive alternatives for colorectal cancer screening that are currently available and recommended by the U.S. Preventive Services Task Force for people with normal risk levels. Such alternatives include stool tests such as high-sensitivity guaiac fecal occult blood tests, fecal immunochemical tests and multitarget stool DNA tests.
These methods vary in effectiveness, and each has advantages and disadvantages. The option of choice is based upon patient preference, determined with input from the medical provider. But those at higher risk – such as people with a family history of colorectal cancer, certain symptoms such as blood in the stool, or a history of polyps – are advised to get screened by colonoscopy.
Importantly, noninvasive screening tests do not on their own prevent the disease. Rather, they raise the possibility that a benign polyp or tumor may exist, and must therefore be followed up with a colonoscopy to confirm the presence of, and remove, any abnormal lesions.
New directions for cancer screening
Most recently, researchers have made significant progress in the development of liquid biopsies, which involve the profiling of informative biomarkers in fluids such as blood. This type of profiling identifies signals for detecting and monitoring numerous cancers, including colorectal cancer.
There is particular enthusiasm in the scientific and medical communities around liquid biopsies that can aid in multi-cancer early detection. This approach offers great potential in the early detection of colorectal cancer as well as numerous other cancers for which there are currently no effective screening methods. Multi-cancer early detection tests are under development by many companies and are not yet approved by the Food and Drug Administration. Several are currently available by prescription as laboratory-developed tests.
As with all noninvasive tests, liquid biopsies must be appropriately followed up to verify, remove and/or treat any identified lesions. Extensive research on liquid biopsies is ongoing, and results suggest that a new generation of highly sensitive, readily available and patient-friendly modes of cancer screening will emerge in the next few years.
Over the past several decades, screening has contributed significantly to a marked reduction in the incidence and mortality of colorectal cancer. Given the aging of the population, as well as the recent rise in colorectal cancer among young adults, detecting the disease sensitively and in its earliest stages is more important than ever.
Franklin G. Berger receives funding from the Centers for Disease Control and Prevention.
Why the GOP’s battle for the soul of ‘character conservatives’ in these midterms may center on Utah and its Latter-day Saint voters
U.S. Sen. Mike Lee is seeking reelection in Utah – a typically uneventful undertaking for an incumbent Republican in a state that hasn’t had a Democratic senator since 1977. But he faces a unique challenger: Evan McMullin.
The former CIA operative, investment banker and Republican policy adviser left the GOP in 2016 because of Donald Trump. McMullin then ran for president as an independent, styling himself as a principled conservative, and won 21% of Utahans’ votes.
Lee himself voted for McMullin in 2016, saying Trump was “wildly unpopular” in Utah because of “religiously intolerant” statements about Muslims. Some 62% of the state’s residents belong to the Church of Jesus Christ of Latter-day Saints, which has its own history of suffering persecution. Yet Lee embraced Trump after his election, and now McMullin is trying to upend him.
Both men are devoted members of the Church of Jesus Christ of Latter-day Saints, often known as the Mormon church or the LDS church. As a scholar of U.S. elections and author of two books on LDS politics, I see their November face-off as part of a larger fight over what it means to be a “character conservative.” This battle has been raging around the country, not only in Utah, but LDS voters have become an especially interesting example since Trump’s rise.
Road to acceptance
Over two centuries, Latter-day Saints have transformed themselves from among the most persecuted religious groups in U.S. history to a global religion of almost 17 million members, by their own count, with an estimated US$100 billion in resources.
Politics has always been woven into this history. Early Latter-day Saints were forced gradually westward from state to state because of neighbors’ distrust, mob justice and government oppression – most notably, an extermination order was issued by the state of Missouri in 1838. The church ultimately fled the U.S. after founder Joseph Smith was killed and settled around Salt Lake, which was a Mexican territory when church members first arrived.
Utah was granted statehood in 1896, and the Senate provided a building block for increased LDS immersion into American culture – though it didn’t look that way at first. In the early 1900s, the church was so widely reviled that Sen.-elect Reed Smoot was blocked from taking his seat over accusations that his role in the church made him inherently hostile to the government.
Yet Smoot was exonerated, and his three-decade tenure significantly enhanced the church’s acceptance in national politics. The soft-spoken senator became a leading voice of conservative morality and embodiment of Mormonism in wider American culture, replacing Brigham Young, the bearded patriarch with multiple wives.
LDS ascendance throughout the 20th century culminated in Mitt Romney’s 2012 presidential nomination and wider cultural attention dubbed “the Mormon moment.” Some LDS beliefs and practices – such as the teaching that Smith discovered scripture on golden plates buried in upstate New York – have long generated curiosity, if not derision, from other Americans. Many Latter-day Saints and observers felt Romney’s nomination suggested greater acceptance of the religion.
In particular, LDS conservatives have become political allies with white evangelicals when it comes to social issues such as opposing gay marriage. In popular culture, Latter-day Saints are often seen as the embodiment of 1950s conservative Americana. LDS cultural norms such as patriotism, abstinence from tobacco and alcohol and prioritizing child rearing, family life and devotion to service have forged a conception of character widely embraced by conservatives.
This all helped position Latter-day Saints as a small but influential group within the Christian right.
And then Trump decided to run for president.
An inconvenient candidate
Trump galvanized parts of the Republican Party. Yet conservatives were divided over the candidate’s character – especially his unorthodox attacks on primary rivals and former GOP presidential candidates, the “Access Hollywood” video in which he bragged about groping women, and numerous allegations of sexual assault.
Latter-day Saints are the most Republican religious group in the country, making them a particularly interesting case study of character conservatism. Trump’s overlap with the LDS community “starts and stops” with his GOP affiliation, as Brigham Young University political scientist Quin Monson told the Los Angeles Times in 2016.
Romney thoroughly criticized Trump and encouraged Republicans to vote for any other primary candidate. Grounded in his LDS faith, which prioritizes family on Earth and for eternity, Romney urged Utahans: “Think of Donald Trump’s personal qualities. The bullying, the greed, the showing off, the misogyny, the absurd third-grade theatrics. … Imagine your children and your grandchildren acting the way he does.”
Deseret News, the church-owned newspaper in Salt Lake, opposed Trump for not upholding “the ideals and values of this community.” Just 16% of Latter-day Saints thought he was a moral person.
When McMullin ran in 2016, Trump still won Utah, but with 45% – the lowest for a Republican nominee there since 1992. Nationwide, just over 50% of Latter-day Saints voted for Trump in 2016, almost 30 percentage points lower than white evangelicals. The second time around, he won over 60% of the LDS vote, but most church members who are people of color or are under 40 did not vote for him.
GOP soul-searching
Jan. 6, 2021, was a pivotal moment for the Trump presidency and character conservatives. Half of Republicans believed Trump bore at least some responsibility for what happened. Voters’ disapproval was compounded by subsequent revelations, such as Trump’s efforts to overturn the 2020 election and his removal of highly classified documents. Still, GOP candidates face strategic pressure to pledge allegiance to Trump: The Republican National Committee, for example, has directed millions of dollars to his legal defense.
Character conservatives are reckoning with two different impulses. Trump is not a role model, but he has demonstrated willingness to fight for some religious-conservative values, such as reconfiguring the Supreme Court to enable the overturning of Roe v. Wade. Some character conservatives support Trump, believing the ends justify the means. Others reject Trump’s behavior as immoral and unacceptable for democracy – and the majority are probably somewhere in the middle.
The Utah Senate contest will provide some clarity to these countervailing trends. Lee has previously compared Trump with Captain Moroni, a hero from LDS scripture. McMullin, meanwhile, contends that Lee’s efforts to overturn the 2020 election results were “brazen treachery.”
Independent polling has Lee and McMullin in a virtual tie. Incumbency advantage is powerful, but Utah’s Democratic Party has uncharacteristically decided to support McMullin rather than field its own candidate.
The character divide between Trump-supporting candidates and McMullin raises the question of how far LDS values, and the carefully crafted public identity of the church, can be disentangled from the modern Republican Party. Lee remains the favorite, but the fact that this is a competitive race speaks to how concerns about character continue to trouble the former president’s party, even in deeply red Utah.
Luke Perry does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
AI is changing scientists’ understanding of language learning – and raising questions about an innate grammar
Unlike the carefully scripted dialogue found in most books and movies, the language of everyday interaction tends to be messy and incomplete, full of false starts, interruptions and people talking over each other. From casual conversations between friends, to bickering between siblings, to formal discussions in a boardroom, authentic conversation is chaotic. It seems miraculous that anyone can learn language at all given the haphazard nature of the linguistic experience.
For this reason, many language scientists – including Noam Chomsky, a founder of modern linguistics – believe that language learners require a kind of glue to rein in the unruly nature of everyday language. And that glue is grammar: a system of rules for generating grammatical sentences.
Children must have a grammar template wired into their brains to help them overcome the limitations of their language experience – or so the thinking goes.
This template, for example, might contain a “super-rule” that dictates how new pieces are added to existing phrases. Children then only need to learn whether their native language is one, like English, where the verb goes before the object (as in “I eat sushi”), or one like Japanese, where the verb goes after the object (in Japanese, the same sentence is structured as “I sushi eat”).
But new insights into language learning are coming from an unlikely source: artificial intelligence. A new breed of large AI language models can write newspaper articles, poetry and computer code and answer questions truthfully after being exposed to vast amounts of language input. And even more astonishingly, they all do it without the help of grammar.
Grammatical language without a grammar
Even if their choice of words is sometimes strange or nonsensical, or reflects racist, sexist and other harmful biases, one thing is very clear: The overwhelming majority of the output of these AI language models is grammatically correct. And yet, there are no grammar templates or rules hardwired into them – they rely on linguistic experience alone, messy as it may be.
GPT-3, arguably the most well-known of these models, is a gigantic deep-learning neural network with 175 billion parameters. It was trained to predict the next word in a sentence given what came before across hundreds of billions of words from the internet, books and Wikipedia. When it made a wrong prediction, its parameters were adjusted using an automatic learning algorithm.
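To make that training objective concrete, here is a minimal sketch of next-word prediction, written in PyTorch as an assumption for illustration. It is a toy with a 13-word corpus, not GPT-3’s transformer architecture or training pipeline, but the loop is the same in spirit: guess the next word, measure the error, adjust the parameters.

```python
import torch
import torch.nn as nn

corpus = "the cat sat on the mat and the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Training pairs: each word is used to predict the word that follows it.
xs = torch.tensor([idx[w] for w in corpus[:-1]])
ys = torch.tensor([idx[w] for w in corpus[1:]])

# A tiny model: word embedding followed by a linear layer over the vocabulary.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)  # penalize wrong next-word guesses
    loss.backward()                # compute how to adjust each parameter
    opt.step()                     # apply the adjustments

# After training, ask for the most likely word to follow "the".
probe = torch.tensor([idx["the"]])
print(vocab[model(probe).argmax().item()])  # one of: cat, dog, mat, rug
```

Scaled up by many orders of magnitude – 175 billion parameters instead of a few hundred, and hundreds of billions of words instead of 13 – this is essentially the procedure behind the fluent output described below.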
Remarkably, GPT-3 can generate believable text reacting to prompts such as “A summary of the last ‘Fast and Furious’ movie is…” or “Write a poem in the style of Emily Dickinson.” Moreover, GPT-3 can answer SAT-level analogy and reading comprehension questions, and even solve simple arithmetic problems – all from learning how to predict the next word.
Comparing AI models and human brains
The similarity with human language doesn’t stop here, however. Research published in Nature Neuroscience demonstrated that these artificial deep-learning networks seem to use the same computational principles as the human brain. The research group, led by neuroscientist Uri Hasson, first compared how well GPT-2 – a “little brother” of GPT-3 – and humans could predict the next word in a story taken from the podcast “This American Life”: people and the AI predicted the exact same word nearly 50% of the time.
The researchers recorded volunteers’ brain activity while they listened to the story. The best explanation for the patterns of activation they observed was that people’s brains – like GPT-2 – were not just using the preceding one or two words when making predictions but relied on the accumulated context of up to 100 previous words. Altogether, the authors conclude: “Our finding of spontaneous predictive neural signals as participants listen to natural speech suggests that active prediction may underlie humans’ lifelong language learning.”
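As a rough illustration of that comparison, the snippet below computes top-guess agreement between model and human predictions. The word lists are hypothetical; the actual study used a story from “This American Life” and far more fine-grained analyses.

```python
# Hypothetical next-word guesses at five points in a story.
model_guesses = ["the", "night", "was", "very", "cold"]
human_guesses = ["the", "night", "felt", "very", "dark"]

matches = sum(m == h for m, h in zip(model_guesses, human_guesses))
print(f"Model-human agreement: {matches / len(human_guesses):.0%}")  # 60%
```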
A possible concern is that these new AI language models are fed a lot of input: GPT-3 was trained on linguistic experience equivalent to 20,000 human years. But a preliminary study that has not yet been peer-reviewed found that GPT-2 can still model human next-word predictions and brain activations even when trained on just 100 million words. That’s well within the amount of linguistic input that an average child might hear during the first 10 years of life.
We are not suggesting that GPT-3 or GPT-2 learn language exactly like children do. Indeed, these AI models do not appear to comprehend much, if anything, of what they are saying, whereas understanding is fundamental to human language use. Still, what these models prove is that a learner – albeit a silicon one – can learn language well enough from mere exposure to produce perfectly good grammatical sentences and do so in a way that resembles human brain processing.
Rethinking language learning
For years, many linguists have believed that learning language is impossible without a built-in grammar template. The new AI models prove otherwise. They demonstrate that the ability to produce grammatical language can be learned from linguistic experience alone. Likewise, we suggest that children do not need an innate grammar to learn language.
“Children should be seen, not heard” goes the old saying, but the latest AI language models suggest that nothing could be further from the truth. Instead, children need to be engaged in the back-and-forth of conversation as much as possible to help them develop their language skills. Linguistic experience – not grammar – is key to becoming a competent language user.
Morten H. Christiansen receives funding from the A&S New Frontier Grant Program at Cornell University. He is affiliated with the School of Communication and Culture and Interacting Minds Centre at Aarhus University, Denmark, as well as the Haskins Labs, New Haven, CT.
Pablo Contreras Kallens received funding from the A&S New Frontier Grant Program at Cornell University.
U-turns, economic turmoil and an occasionally absent prime minister: The (latest) UK political crisis explained
The U.K. government – and its leader, Prime Minister Liz Truss – appears to be in a spot of trouble, to use a typically British understatement. An economic mess largely of its own making has resulted in U-turns, a high-profile firing, curious absences and plummeting support.
Indeed, just weeks into the job, Truss appears in danger of becoming the shortest-serving U.K. prime minister in history.
So what exactly has gone wrong, and what happens next? The Conversation asked Garret Martin, an expert on U.K. politics at American University School of International Service, to explain all.
Who is Liz Truss and how did she become prime minister?
Liz Truss is both the leader of the Conservative Party and the nation’s political leader – albeit not one put in place by the electorate. In early July 2022, then-U.K. Prime Minister Boris Johnson, having lost the support of his party after a series of scandals, resigned as leader of the Conservatives. Instead of stepping down immediately as prime minister, Johnson announced that he would stay on until his party had selected a successor.
The leadership election proceeded in two distinct steps over the course of the summer. Through a series of votes, Conservative members of Parliament whittled down the list of candidates to two finalists: Truss, then serving as foreign secretary, and former Chancellor of the Exchequer Rishi Sunak. It was then up to the wider membership of the Conservative Party to pick between the top two. On Sept. 5, Truss was formally announced as the winner, with 57.4% of the votes, paving her way to become the new prime minister.
Why is she in trouble?
Truss came to office amid extremely difficult circumstances. Queen Elizabeth II died within a few days of her taking over from Johnson. That removed the promise of any new leadership “bounce,” as the nation was plunged into an official period of mourning.
Overseeing the transition to a new monarch only added to the plethora of thorny challenges affecting the government, including the war in Ukraine and the threat of Scottish secession, as well as the severe energy and inflation crises.
But if any observers expected caution from Truss, they were rapidly corrected. On Sept. 23, then-Chancellor of the Exchequer Kwasi Kwarteng outlined a bold “mini-budget” to Parliament. This new plan promised growth for a struggling U.K. economy, relying on a massive package of tax cuts. It would have represented the biggest tax cut in half a century, with benefits predominantly for richer segments of the population.
This was not a complete surprise, since Truss had campaigned on such a platform during the leadership election. Yet the scale and speed of the announcement were stunning, an example of what BBC journalist Nicholas Watt referred to as “shock and awe” tactics.
It was an audacious gamble by Truss – and one that completely failed to convince the markets. Within days of Kwarteng’s announcements, the pound had plummeted in value, sending British borrowing costs shooting up. Meanwhile, soaring interest rates piled misery on millions in the U.K. in the shape of higher mortgage payments.
The International Monetary Fund piled on as well, urging the U.K. government to “reevaluate” the planned tax cuts because of how they might “stoke soaring inflation.” And the Bank of England was forced to take drastic measures, including buying an unlimited quantity of government bonds, to protect the U.K. economy from crashing even further.
How has she responded?
With pressure mounting and growing disquiet among the wider public and members of her own party, Truss resorted yet again to drastic measures. She sacked Kwarteng unceremoniously on Oct. 14, meaning he had lasted only 38 days on the job.
Jeremy Hunt, a former foreign secretary, stepped in to replace Kwarteng – the fourth chancellor in less than four months. He immediately proceeded to roll back nearly all the measures promised in Kwarteng’s mini-budget. Hunt emphasized that this was necessary to restore confidence in the U.K. economy, but it was also an unmistakable and stunning rebuke of the prime minister. Her absence from Parliament during an “urgent question” on the dismissal of Kwarteng and subsequent ducking out of a planned media event have done little to instill confidence in her handling of a political crisis. And that crisis only worsened on Oct. 19 with the announcement that the U.K.‘s home secretary had resigned over an apparent security breach.
Truss, for her part, is now trying to salvage what is left of her authority. In a recent BBC interview she confessed to mistakes but remained adamant that she would lead her party in the next elections. However, that decision will be in the hands of the party.
Can she cling on to her job?
Truss’ future will depend on how the Conservative Party navigates a difficult dilemma. It could try to stick with Truss, in the hope that there is enough time for her to recover. After all, the next election could be as far away as January 2025.
Yet the prime minister is deeply wounded and will face a major challenge to recover her credibility. As it stands, only 10% of voters approve of her leadership, with 80% having an unfavorable view – a significantly worse score than even Boris Johnson’s when he resigned. Within her own party, a whopping 55% want Truss to leave.
The Conservative Party could try to ditch Truss, but the various paths to achieve that have drawbacks as well. The prime minister could resign of her own accord, seeing the writing on the wall. But she has not shown any inclination to do so as of now and told Parliament on Oct. 19, 2022, that she is a “fighter, not a quitter.”
The Conservatives could try to revise their current internal rules, which protect any new leader from facing a confidence vote within their first year in office. That is a feasible step if enough members of the party support it, but it would trigger yet another long and divisive leadership contest mere months after the last one.
The Conservatives could also try to pass a motion of no confidence in the government, triggering a new general election. Yet that would be an extremely risky strategy, considering the latest polls show the opposition Labour Party with a dramatic 29 percentage-point lead.
What are the options to replace her?
Were Truss to leave office, there would be several possible candidates to replace her.
These include figures like Rishi Sunak; Leader of the House of Commons Penny Mordaunt; or Jeremy Hunt – all of whom ran against Truss in July. Boris Johnson might even try a daring comeback, although that remains a stretch, considering the circumstances in which he left office.
But whoever is in office, whether Truss or someone else, will face a steep climb to regain the confidence and support of voters.
Garret Martin receives funding from the European Union for the research center – The Transatlantic Policy Center – that he co-directs at American University.
When it comes to education in prison, policy and research often focus on how it benefits society or improves the life circumstances of those who are serving time.
But as I point out in my new edited volume, “Education Behind the Wall: Why and How We Teach College in Prison,” education in prison is doing more than changing the lives of those who have been locked up as punishment for crimes – it is also changing the lives of those doing the teaching.
As director of a college program in prisons and as a researcher and professor who teaches in both colleges and prisons, I know that the experience of teaching in a correctional facility makes educators question and reexamine much of what we do.
My book collects experiences of college professors who teach in prison. A common thread is that we all went into education behind the wall thinking about ourselves to some extent as experts but have since critically reflected on what we know through interactions with incarcerated students and the institutions that hold them.
Rewriting the book
One semester in 2020, I volunteered to tutor for a class on something that occurs frequently behind prison walls: conflict and negotiation. The class featured two books that are considered essential to the field. The first is “Interpersonal Conflict,” a 2014 text that invites readers to reflect on how conflict has played out in their personal lives. The second is “Getting to Yes,” a 2011 text described by its publisher as a “universally applicable method for negotiating personal and professional disputes without getting angry – or getting taken.”
“You know, I know these are very important books and all, but this isn’t really what would work in here,” one incarcerated student said after a few class meetings, gesturing to the prison walls. “Here, you can’t talk openly about your feelings like the authors want us to, and the rules of relating to people are different.”
I responded that his observation was astute, and that knowing both sets of rules – and how to switch between them – could be profoundly useful. For example, I theorized, he likely behaves differently during yard time than on a phone call with a family member on the outside. If the textbooks about conflict on the outside didn’t adequately address how to handle conflict in prison, I suggested he write an equivalent book for conflict negotiation in prison.
“Maybe I should,” he chuckled, and looked around to his classmates. “Maybe we should.”
The experience showed me that even when a textbook is considered “universal,” its universality may not extend to correctional institutions.
A new understanding of status
As a full professor and chair of the sociology department at Clark University, a small, private university in Worcester, Massachusetts, Shelly Tenenbaum is used to being accorded a certain degree of respect for her professional accomplishments and credentials. But none of those things mattered once she passed through the gates of the medium-security prisons for men in Massachusetts where she teaches.
“Status that I might have as a scholar, full professor, department chair … is rendered invisible as we enter prison,” Tenenbaum writes. When passing through security, “I have been abruptly instructed to obey commands and my questions are ignored.”
Encounters with correctional officers are frequently unnerving for educators, particularly at the entrance gates.
“I find myself in the position of needing to second-guess what I may (or may not) have done wrong and defer to people who are considerably younger than I am,” Tenenbaum continues. “There were times that I followed rules only to be scolded when the rules appeared to be differently interpreted from one day to the next. To be in the subordinate role of a power dynamic is a humbling experience. … It takes having expectations defied to realize that they even existed.”
Whether the rules are about clothing faculty members are allowed to wear or the number of pieces of paper we can carry in, the decisions are frequently about power. In her chapter, Tenenbaum writes that having had her status questioned has led to a new sense of humility and altered the power dynamics in her professional world. She does not take it for granted that her expertise is currency for respect.
Modeling apology
When an incarcerated student told Bill Littlefield, a retired English professor, that the novel “Frankenstein” had no relevance to his experience or life, Littlefield’s first reaction was to push back.
“‘Good writing is always relevant,’ I said, ever the professor,” writes Littlefield. Littlefield tutors and teaches at the Massachusetts Correctional Institution in Concord and Northeastern Correctional Center. He is also the author of the newly released book “Mercy” and the longtime host of WBUR’s sports radio show “Only A Game.”
“He said he would read it, certainly … even though he knew that the story of the lonely, ultimately vengeful monster created by the gentleman scientist’s preposterous, insane overreach would have nothing to say to him,” Littlefield writes. “I argued that he was wrong.”
But in the week that followed, Littlefield said he came to see his own reaction as a mistake and an act of arrogance.
“When we met again, I made a point of apologizing to the student, in front of his classmates,” Littlefield writes. “I told him that I’d realized it was no business of mine to tell him what was relevant to his life. If he did the reading, he’d decide for himself.” The student thanked him.
More college in prison
As college programs in prison become more prevalent, I fully expect that in the coming years there will be more and more college professors being transformed by the powerful experience of teaching behind bars. This is especially so given that Congress has lifted a long-standing ban on federal financial aid, namely, Pell Grants, for people who are incarcerated.
In 2022, there are 374 prison education programs run by 420 institutions of higher education operating in 520 facilities, according to the National Directory maintained by the Alliance for Higher Education in Prison.
Collectively, college programs in prison have been shown to lower the odds that a person who participates in them will return to prison after being released. But as I show in my book, the programs are also dramatically changing the perspective of the college professors who teach them.
Mneesha Gellman is affiliated with the Emerson Prison Initiative.
In the arid American West, wildfires now define summer. Recent years have seen some of the worst wildfires in recorded history. Climate change, the loss of Indigenous burning practices and a century of fire suppression are increasing the risk of larger, hotter and more frequent wildfires.
I’m a wildlife ecologist studying how the presence of wolves and other predators is affecting deer and elk in Washington state. I’m particularly interested in understanding how these species interact in changing landscapes.
Habitat degradation and other factors have caused populations of mule deer, a common species in many parts of the West, to decline across much of their native range. My collaborators and I recently published a study examining how mule deer use forests that have burned, and how wildfires affect deer interactions with cougars and wolves.
We found that mule deer use these burns in summer but avoid them in winter. Deer also adjust their movements to reduce predation risk in burned landscapes, in ways that vary depending on whether cougars or wolves are the threat.
Understanding how mule deer respond to burns and interact with predators in burned areas may be essential for conserving and restoring wildlife communities. Our findings could help land managers and policymakers balance the needs of wildlife with those of humans as they evaluate wildfire impacts and create policies to address future wildfires.
Long-term effects of wildfires
Many forests in western North America have trees that have evolved to withstand fire. Some even depend on burning to disperse seeds. Herbivores can thrive on the lush vegetation that grows after a blaze – so much so that burned areas have a “magnet effect” on deer, attracting them from surrounding areas.
But as fires trigger forest regeneration, they also restructure landscapes. And this process is influencing interactions between predators and prey.
Wildfires have had major impacts in recent decades in the Methow Valley of Okanogan County in northern Washington, where my collaborators on the Washington Predator-Prey Project and I focus our research. Wolves recolonized this area over the past 15 years, and researchers, land managers and the public want to know how the presence of wolves is affecting the ecosystem.
Fires have burned nearly 40% of this region since 1985, with more than half of those burns in the past decade. As in much of the West, low-severity fires historically were frequent here, burning every one to 25 years, with mixed-severity fires burning every 25 to 100 years. But now the area is seeing larger and more frequent fires.
Fire reshapes forests and wildlife behavior
In northern Washington and much of the American West, fires clear the forest understory and burn away the shrubs and small trees that grow there. In more severe fires, flames reach treetops and burn away the upper branches of the forest. More light reaches the forest floor post-fire, and fire-adapted plants regenerate.
After a fire, burned forests can be lush with shrubs and other vegetation that deer favor as summer forage. In our study, deer generally preferred burned areas for about 20 years post-fire, which is the time it takes for the forest to move beyond the initial regrowth stage.
Fires also affect deer behavior in winter. In unburned evergreen forests, trees’ upper branches intercept much of the falling snow before it builds up on the forest floor. Where fires have removed these upper branches, snow is often deeper than in unburned forests.
Deep snow buries forage, preventing deer from feeding. It also makes deer more vulnerable to carnivores: their hooves sink into the snow, while predators like wolves and cougars have wide paws that help them walk over it. For these reasons, the mule deer we tracked avoided burns in the winter.
Cougars and wolves prey on mule deer in different ways. Cougars, like nearly all cats, hunt by stalking and ambushing their prey. Often they rely on shrubs and complex terrain to approach deer undetected.
In contrast, wolves hunt by chasing their prey over longer distances. This strategy works best in open terrain.
After fires, vegetation growth and the accumulation of fallen trees and branches can create stalking cover for cougars and also provide refuge for deer to hide from wolves. In Washington, we found that deer were generally less likely to use burned forests in areas of high cougar activity, although their response also depended on the severity of the fire and the time that had elapsed since the fire.
Deer had to balance the availability of improved summer forage in burns with increased predation risk from cougars. In areas heavily used by wolves, however, burns created a win-win for deer: more food and less risk of being detected by a predator.
Mapping fires, deer and predators
To assess how wildfires altered forests in our study area, we used satellite data to map 35 years of impacts from fires that occurred between 1985 and 2019. This data set represents one of the widest ranges of fire histories yet examined by wildlife researchers.
To investigate how deer navigated burns and avoided predators, we captured 150 mule deer and fit them with GPS collars programmed to record a location every four hours. We also caught and GPS-collared five wolves and 24 cougars to map the areas those species used most heavily.
Putting all of this information together, we examined burn history, wolf activity and cougar activity at the locations that mule deer used and compared the results with locations the deer could have reached but did not use. This approach measured how strongly mule deer selected or avoided burned areas with varying levels of cougar and wolf activity.
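Analyses like this are commonly fit as a resource selection function – in its simplest form, a logistic regression that contrasts locations an animal used with locations that were available to it. The short Python sketch below illustrates the general idea on simulated data; the covariate names, effect sizes and simple model structure here are illustrative assumptions, not our study’s actual analysis.

```python
# Minimal sketch of a resource selection function (RSF): logistic
# regression contrasting locations a deer used (1) with available
# locations it could have reached but did not use (0).
# All data here are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 2000

# Hypothetical covariates at each candidate location
burned = rng.integers(0, 2, n).astype(float)  # 1 if inside a burn
cougar = rng.random(n)                        # relative cougar activity, 0-1
wolf = rng.random(n)                          # relative wolf activity, 0-1

# Simulate use: deer favor burns and avoid cougar hotspots, and the
# benefit of a burn shrinks where cougars are active (interaction)
logit = 0.8 * burned - 1.5 * cougar - 0.4 * wolf - 1.2 * burned * cougar
used = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([burned, cougar, wolf, burned * cougar])
rsf = LogisticRegression().fit(X, used)

# Positive coefficients indicate selection, negative indicate avoidance
for name, coef in zip(["burned", "cougar", "wolf", "burned x cougar"],
                      rsf.coef_[0]):
    print(f"{name:16s}{coef:+.2f}")
```

Fit this way, a negative coefficient on the burn-by-cougar interaction would echo our finding that deer were less likely to use burned forests where cougar activity was high.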
Wildlife is part of healthy forests
Our study and others show that deer and other wildlife use burned areas after wildfires, even when these zones have been intensely burned. But these fires bring both costs and benefits to wildlife.
Mule deer may benefit from the opportunity to feed on better summer forage. But avoiding burns in the winter, when the ground is covered with snow, could reduce the deer’s range at a time when the animals already gather at lower elevations to avoid the deepest snow.
Our research suggests that in fire-affected areas, scientists and land managers who want to predict how burns could affect wildlife need to account for interactions between species, as well as how fires affect food supplies for herbivores such as deer. As policymakers debate suppressing wildfires, treating forests to reduce fuels and logging after fires, I believe they should consider how these strategies will affect wildlife – a key part of biodiverse, resilient landscapes.
Taylor Ganz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A new type of material called a mechanical neural network can learn and change its physical properties to create adaptable, strong structures
Jonathan Hopkins, CC BY-ND
The Research Brief is a short take about interesting academic work.
The big idea
A new type of material can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper by my colleagues and me.
Ryan Lee, CC BY-ND
The new material is a type of architected material, which gets its properties mainly from the geometry and specific traits of its design rather than what it is made out of. Take hook-and-loop fabric closures like Velcro, for example. It doesn’t matter whether it is made from cotton, plastic or any other substance. As long as one side is a fabric with stiff hooks and the other side has fluffy loops, the material will have the sticky properties of Velcro.
My colleagues and I based our new material’s architecture on that of an artificial neural network – layers of interconnected nodes that can learn to do tasks by changing how much importance, or weight, they place on each connection. We hypothesized that a mechanical lattice with physical nodes could be trained to take on certain mechanical properties by adjusting each connection’s rigidity.
To find out if a mechanical lattice would be able to adopt and maintain new properties – like taking on a new shape or changing directional strength – we started off by building a computer model. We then selected a desired shape for the material as well as input forces and had a computer algorithm tune the tensions of the connections so that the input forces would produce the desired shape. We did this training on 200 different lattice structures and found that a triangular lattice was best at achieving all of the shapes we tested.
Once the many connections are tuned to achieve a set of tasks, the material will continue to react in the desired way. The training is – in a sense – remembered in the structure of the material itself.
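To make that training process concrete, here is a deliberately stripped-down analogue: a one-dimensional chain of springs anchored to a wall, with a force pulling on its free end, where a simple gradient-descent loop tunes each spring’s compliance (the inverse of stiffness) until the chain deforms into a target shape. The 1D chain, the optimizer and all numbers are illustrative assumptions only – our actual work trained full 2D lattices with a more capable algorithm.

```python
# Toy 1D analogue of training a mechanical lattice: a chain of n springs
# anchored to a wall, with a force applied at the free end. Gradient
# descent tunes each spring's compliance (1/stiffness) until the chain
# deforms into a target shape. Illustrative sketch only.
import numpy as np

n = 5                                  # springs in the chain
force = 1.0                            # input force at the free end
target = np.linspace(0.3, 1.2, n)      # desired displacement of each node

c = np.ones(n)                         # compliances (1/stiffness)
lr = 0.02                              # gradient-descent step size
for _ in range(5000):
    # Springs in series: node j moves by force * (sum of compliances up to j)
    u = force * np.cumsum(c)
    err = u - target
    # dLoss/dc_i = 2 * force * (sum of errors at nodes at or beyond spring i)
    grad = 2.0 * force * np.cumsum(err[::-1])[::-1]
    c -= lr * grad
    c = np.clip(c, 1e-3, None)         # compliance must stay positive

print("trained stiffnesses:", np.round(1.0 / c, 2))
print("achieved shape:", np.round(force * np.cumsum(c), 3))
print("target shape:  ", target)
```

After training, the chain “remembers” the task in its tuned stiffnesses: apply the same force and the target shape reappears, much as a trained lattice retains its behavior in the pattern of its connections.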
We then built a physical prototype with adjustable electromechanical springs arranged in a triangular lattice. The prototype is made of 6-inch connections and is about 2 feet long by 1½ feet wide. And it worked. When the lattice and algorithm worked together, the material was able to learn and change shape in particular ways when subjected to different forces. We call this new material a mechanical neural network.
Jonathan Hopkins, CC BY-ND
Why it matters
Besides some living tissues, very few materials can learn to be better at dealing with unanticipated loads. Imagine a plane wing that suddenly catches a gust of wind and is forced in an unanticipated direction. The wing can’t change its design to be stronger in that direction.
The prototype lattice material we designed can adapt to changing or unknown conditions. In a wing, for example, these changes could be the accumulation of internal damage, changes in how the wing is attached to a craft or fluctuating external loads. Every time a wing made out of a mechanical neural network experienced one of these scenarios, it could strengthen and soften its connections to maintain desired attributes like directional strength. Over time, through successive adjustments made by the algorithm, the wing adopts and maintains new properties, adding each behavior to the rest as a sort of muscle memory.
This type of material could have far-reaching applications for the longevity and efficiency of built structures. Not only could a wing made of a mechanical neural network material be stronger, it could also be trained to morph into shapes that maximize fuel efficiency in response to changing conditions around it.
What’s still not known
So far, our team has worked only with 2D lattices. But using computer modeling, we predict that 3D lattices would have a much larger capacity for learning and adaptation, because a 3D structure could have tens of times more connections, or springs, that don’t intersect with one another. However, the mechanisms we used in our first model are far too complex to support in a large 3D structure.
What’s next
The material my colleagues and I created is a proof of concept and shows the potential of mechanical neural networks. But to bring this idea into the real world will require figuring out how to make the individual pieces smaller and with precise properties of flex and tension.
We hope new research in the manufacturing of materials at the micron scale, as well as work on new materials with adjustable stiffness, will lead to advances that make powerful smart mechanical neural networks with micron-scale elements and dense 3D connections a ubiquitous reality in the near future.
Ryan Lee has received funding from the Air Force Office of Scientific Research.