Channel: Eurasia Review

Handgun Purchaser Licensing Laws Linked To Fewer Firearm Homicides In Large, Urban Areas


State laws that require gun purchasers to obtain a license contingent on passing a background check performed by state or local law enforcement are associated with a 14 percent reduction in firearm homicides in large, urban counties, a new study from the Johns Hopkins Bloomberg School of Public Health found.

Studies have shown that these laws, which are sometimes called permit-to-purchase licensing laws, are associated with fewer firearm homicides at the state level. This is the first study to measure the impact of licensing laws on firearm homicides in large, urban counties, where close to two-thirds of all gun deaths in the U.S. occur.

The study was published online in the Journal of Urban Health and was written by researchers at the Johns Hopkins Center for Gun Policy and Research, based at the Johns Hopkins Bloomberg School of Public Health, and the Violence Prevention Research Program at the University of California, Davis.

Handgun licensing laws typically require prospective gun purchasers to apply directly to a state or local law enforcement agency to obtain a purchase permit, which is dependent on passing a background check, prior to approaching a seller. Many state licensing laws also require applicants to submit fingerprints.

The study also found that states requiring only so-called comprehensive background checks (CBCs), without additional licensing requirements, were associated with a 16 percent increase in firearm homicides in large, urban counties. In states that require only a CBC, the gun seller or dealer, not law enforcement, typically carries out the background check.

“Background checks are intended to screen out prohibited individuals, and serve as the foundation upon which other gun laws are built, but they may not be sufficient on their own to decrease gun homicides,” said Cassandra Crifasi, PhD, MPH, assistant professor with the Johns Hopkins Center for Gun Policy and Research and the paper’s lead author. “This study extends what we know about the beneficial effects of a licensing system on gun homicides to large, urban counties across the United States.”

In addition to sending potential purchasers to law enforcement and requiring fingerprints, state licensing laws provide a longer period for law enforcement to conduct background checks. These checks may have access to more records, increasing the likelihood that law enforcement can identify and screen out those with a prohibiting condition. Surveys from the Johns Hopkins Center for Gun Policy and Research find that the majority of both gun owners and non-gun owners support this policy.

CBC laws generally depend upon the use of the National Instant Criminal Background Check System (NICS), but problems with the NICS include incomplete records and the quality and timeliness of reported records. Permit-to-purchase laws provide a longer period for law enforcement to conduct background checks at the local level, and these checks may have access to additional records.

Previous research examining the impact of CBC-only laws has documented the importance of enforcement and compliance. Licensing laws, however, have consistently been associated with reductions in gun homicide.

For the study, the researchers assembled a sample of 136 of the largest urban counties in the U.S. covering the years 1984-2015 and conducted analyses to assess the effects of changes to the policies over time.

The study also examined the impact of right-to-carry (RTC) and stand-your-ground (SYG) laws. SYG laws give individuals expanded protections for use of lethal force in response to a perceived threat, and RTC laws make it easier for people to carry loaded, concealed firearms in public spaces.

The researchers found that counties in states that adopted SYG laws experienced a seven percent increase in firearm homicide, and counties in states with RTC laws experienced a four percent increase in firearm homicide after the state’s adoption of the RTC law.

“Our research finds that state laws that encourage more public gun carrying with fewer restrictions on who can carry experience more gun homicides in the state’s large, urban counties than would have been expected had the law not been implemented,” said Crifasi. “Similarly, stand-your-ground laws appear to make otherwise non-lethal encounters deadly if people who are carrying loaded weapons feel emboldened to use their weapons versus de-escalating a volatile situation.”


Sleep Loss Linked With Nighttime Snacking, Junk Food Cravings, Obesity, Diabetes


Nighttime snacking and junk food cravings may contribute to unhealthy eating behaviors and represent a potential link between poor sleep and obesity, according to a study by University of Arizona Health Sciences sleep researchers.

The study was conducted via a nationwide, phone-based survey of 3,105 adults from 23 U.S. metropolitan areas. Participants were asked if they regularly consumed a nighttime snack and whether lack of sleep led them to crave junk food. They also were asked about their sleep quality and existing health problems.

About 60 percent of participants reported regular nighttime snacking and two-thirds reported that lack of sleep led them to crave more junk food.

The researchers found that junk food cravings were associated with double the likelihood of nighttime snacking, which in turn was associated with an increased risk for diabetes. They also found that poor sleep quality was a major predictor of junk food cravings, and that junk food cravings were associated with a greater likelihood of participants reporting obesity, diabetes and other health problems.

“Laboratory studies suggest that sleep deprivation can lead to junk food cravings at night, which leads to increased unhealthy snacking at night, which then leads to weight gain. This study provides important information about the process, that these laboratory findings may actually translate to the real world,” noted Michael A. Grandner, PhD, MTR, UA assistant professor of psychiatry and director of the UA Sleep and Health Research Program and the UA Behavioral Sleep Medicine Clinic. “This connection between poor sleep, junk food cravings and unhealthy nighttime snacking may represent an important way that sleep helps regulate metabolism.”

“Sleep is increasingly recognized as an important factor in health, alongside nutrition,” said Christopher Sanchez, UA undergraduate nutrition and dietetics major, who is the lead author of the study and a student research assistant in the Sleep and Health Research Program directed by Dr. Grandner. “This study shows how sleep and eating patterns are linked and work together to promote health.”

William D. “Scott” Killgore, PhD, UA professor of psychiatry, medical imaging and psychology, and director of the UA Social, Cognitive and Affective Neuroscience (SCAN) Lab, also contributed to the study.

UA Health Sciences sleep researchers work as interdisciplinary teams, conducting research and leading clinical trials to assess how sleep affects memory, mental health, stress, alertness and decision-making, and how environmental factors affect sleep. Sleep and wakefulness disorders affect an estimated 15 to 20 percent of U.S. adults, according to the U.S. Department of Health and Human Services.

The research abstract, “Nighttime Snacking: Prevalence and Associations With Poor Sleep, Health, Obesity, and Diabetes,” will be presented at SLEEP 2018, the 32nd annual meeting of the Associated Professional Sleep Societies LLC (APSS), a joint venture of the American Academy of Sleep Medicine and the Sleep Research Society, held June 2-6 in Baltimore. The meeting is the world’s premier forum for presenting and discussing the latest developments in clinical sleep medicine and circadian science, the study of the roughly 24-hour cycle that influences physiology and behavior.

Are The Culture Wars Unique To Our Times? – OpEd


By John D. Wilsey*

For the past five decades, Americans have waged what has been commonly referred to as a “culture war.” A number of authors have examined the culture wars from philosophical, historical, and sociological standpoints, especially since the early 1990s—Charles Murray, Robert Putnam, James Davison Hunter, Philip Gorski, and Andrew Hartman to name a few. It is tempting to see the culture wars as being the defining characteristic of American history since the 1960s. How central are the culture wars to American life at the beginning of the twenty-first century, and is there anything unusual about the presence of cultural conflict in America when we think historically about the subject?

I argue that there is not much qualitatively unique about the culture wars of the last fifty or so years, especially as long as Americans have the intellectual resources from which to draw in order to bridge the partisan divides—and I think that they do. I also argue that it is necessary to broaden the accepted paradigm of “culture wars”—a paradigm useful only to contemporary history—to what I think is a more sustainable historical understanding of domestic social and political conflict over time. Alexis de Tocqueville gives us important guidance as we consider cultural conflict in American history more broadly. His treatment of the irrepressible conflict between the majority and the minority in America is applicable to every cultural divide in American history, and also offers a more helpful paradigm in terms of plotting a way forward across the crevasse-rent landscape of today’s cultural and political discourse. When authors treat the culture wars of the past five decades, they often ask, “what can we do?” When we take the long view of cultural conflict in America, we are in a position to gain wisdom in answering this question.

Let’s remember that the culture wars are incredibly complex, with overlapping conflicts that are often confused and conflated with one another. The participants in the culture wars come from an enormous variety of political and religious persuasions, socio-economic classes, races and ethnicities, and sexual orientations and gender identities. Over the past few decades, we can describe the culture wars as embracing a number of distinct binaries simultaneously: Romanticism vs. Enlightenment; intellectualism vs. volitionalism; modernism vs. postmodernism; deconstructionism vs. absolutism; elitism vs. populism; therapeutic vs. classical liberalism; individualism vs. communitarianism; nationalism vs. globalism or identity politics; consolidation vs. disaggregation; progressivism vs. orthodoxy; and radical secularism vs. religious nationalism. Many of these binary conflicts have arisen in contemporary times, but some are leftover conflicts from earlier periods in American history, going back as far as the colonial period. Some of these conflicts have arisen only recently, but are intensified by the older debates. Giving room for the complexity of American cultural conflict helps us to see that the present-day culture wars are not simply a matter of left versus right. And as long as Americans have cultural, historical, aesthetic, religious, philosophical, and political resources from which to draw, there is hope for bridging those divides. The Civil War, for example, was fought partly because those resources were no longer commonly accessible to Americans. Americans today do not appear to be cut off from our shared resources as they were in 1861. The pool of available cultural resources is much deeper now than then, and more accessible to more people from more diverse backgrounds than ever. Philip Gorski’s recent book, American Covenant: A History of Civil Religion From the Puritans to the Present, powerfully demonstrates this point.

Instead of thinking of the culture wars as unique to our times, let’s look to Tocqueville for guidance. In chapter fifteen of volume one in Democracy in America, Tocqueville writes that as long as sovereignty rests with the people in America—in other words, as long as America remains a democracy—the majority of the people rule, both politically and culturally. The power of the majority is absolute, and anytime power is absolute, grave danger to freedom exists. Human nature tends toward abuse of power, and the majority tends to tyrannize the minority.

An excellent example of how the tyranny of the majority works, Tocqueville wrote, is in the way free speech operates in America. In American democracy, in which the majority rules, free speech is never absolute. For example, while the people consider a particular issue—take gay marriage—speech is completely free. Both sides, those opposed and those in favor, make their arguments. The majority is coalescing around a particular position, and during that time, both sides are free to speak up and make their arguments. But once the majority view is settled, speech on the issue is no longer free. A line is drawn, so to speak, around the issue defining what is acceptable to the majority and what is off limits. Those who are in the minority quickly learn to keep their opinions and arguments to themselves, or face dire consequences.

Tocqueville observed that one can know Americans have lost their freedom when the minority no longer believes that there is any common intellectual or cultural resource to access in order to bridge partisan divides, and all that is left to them is violence. Indeed, we can look to numerous episodes in American history when minorities representing various groups have done just that.

Both sides, left and right, have a tendency to view their visions for America in normative terms. Both sides are guilty of reckoning themselves champions of righteousness and the other as paragons of evil to be totally annihilated. If we adhere to a broader paradigm like the one Tocqueville offers—the conflict between a (potentially) tyrannical majority and a marginalized minority—perhaps left and right can avoid normativizing their positions and pitting one side against the other, with unconditional surrender as the only resolution to the conflict. If we recognize that conflicts between majorities and minorities are common, even somewhat ordinary, in our democracy, perhaps we can have a basis for dialogue that is less martial, more civil, and thus draws on resources that contribute to the bridging of divides.

About the author:
*John D. Wilsey is Affiliate Scholar in Theology and History at the Acton Institute. He is Associate Professor of Church History at The Southern Baptist Theological Seminary and author of One Nation Under God: An Evangelical Critique of Christian America (Pickwick, 2011) and American Exceptionalism and Civil Religion: Reassessing the History of an Idea (IVP Academic, 2015); he also edited Alexis de Tocqueville’s famous work, which recently appeared under the title Democracy in America: A New Abridgment for Students (Lexham, 2016). Wilsey is 2017-18 William E. Simon Visiting Fellow in Religion and Public Life with the James Madison Program in American Ideals and Institutions at Princeton University. He is doing research for a new biography of John Foster Dulles, scheduled to appear in Eerdmans’ Library of Religious Biography series.

Source:
This article was published by the Acton Institute.

Survey Shows Collapse Of Moral Values – OpEd


Half the nation, 49%, say moral values in the U.S. are “poor.” This is the highest percentage ever recorded on this issue since Gallup first asked about it in 2002. Only 37% say moral values are “fair,” and a mere 14% say they are “good.” Moreover, 77% say moral values in the U.S. are getting worse; 18% think they are improving.

The American people are conflicted on moral issues. Even though more than three in four say our moral values are collapsing, a Gallup survey taken less than a month ago found that “Americans Hold Liberal Views on Most Moral Issues.” Consider the following.

More Americans find morally contentious practices acceptable today than ever before. For example, sex between an unmarried man and woman is now found to be morally acceptable by 7 in 10 Americans (69%); gay or lesbian relations register a 63% approval; having a baby outside of marriage is at a record high approval rating (62%). Pornography is gaining acceptability—the 36% figure has never been higher—and the same can be said of polygamy, which now receives an acceptability rating of 17%.

What does this tell us? It tells us that Americans know in their hearts that some behaviors are morally wrong, but they have a hard time passing judgment on them. In other words, they know in their gut that the state of our moral values is getting worse, but they also feel the pinch of the dominant culture’s embrace of moral nonjudgmentalism.

Here’s the rub: The more we find morally contentious behaviors acceptable, the more likely we are to conclude that our moral values are deteriorating. This paradox is a function of immaturity: We refuse to stigmatize the very behaviors (e.g., having kids out-of-wedlock) that convince us that our moral values are collapsing.

It would be wrong to say that we are opposed to stigmatization. We are not. Ask smokers. Did stigmatizing smokers work? Yes, smoking has declined dramatically. But when it comes to other behaviors, we wimp out, following the lead of elites in the dominant culture. So we lose.

Gallup needs to broaden its questioning. It needs to ask the American people how they think people like Samantha Bee are helping to drive our morals south. Indeed, Hollywood merits its own survey—it has had more to do with crafting our morally debased culture than any other factor.

Russia’s Lavrov Holds Talks With Mozambique’s Minister Pacheco – OpEd


Russia’s Foreign Minister Sergey Lavrov held diplomatic talks with Minister of Foreign Affairs and Cooperation of Mozambique Jose Pacheco, who arrived in Moscow on a visit following his participation in the St Petersburg International Economic Forum.

The two ministers discussed a whole range of issues related to the further development and strengthening of the traditionally friendly relations between Russia and Mozambique, including the maintenance of an active political dialogue.

They paid special attention to improving mutually beneficial partnership in various areas with an emphasis on making use of the potential of the Russia-Mozambique Intergovernmental Commission on Economic, Scientific and Technical Cooperation, whose first meeting was held on April 24-25 in Maputo.

Foreign Minister of Mozambique Jose Pacheco said that Mozambique plans to sign an agreement with Rosneft and ExxonMobil on gas field exploration in the north of the country by the end of 2018. The plan is to create a consortium with the participation of a Mozambican company, Rosneft and ExxonMobil to develop offshore hydrocarbon fields near Mozambique.

“The project to develop gas fields in the north of Mozambique is under discussion now. The plan is to sign an agreement this year and launch the project on field development in Mozambique with participation of Rosneft and ExxonMobil,” the Minister said.

“We had an opportunity to speak with Rosneft’s management at the St. Petersburg International Economic Forum 2018, our delegation also included experts in the field, we are actively working and discussing, and are hoping to get a positive result,” Pacheco said.

There is increasing interest in the Russian business community in building a partnership with Mozambique, which matches Maputo’s intention to attract Russian investment and technical assistance. “We have reaffirmed our mutual commitment to promoting trade and economic cooperation, and believe that joint efforts in geological exploration and mineral extraction, as well as telecommunications, energy and agriculture, are the main priorities,” according to Lavrov.

There has been a long-standing tradition of Mozambicans receiving a higher education in Russia. While expressing deep satisfaction with the first meeting of the Intergovernmental Commission held in Maputo in late April, Lavrov pointed to the training of Mozambican specialists in civilian professions at Russian universities and of law enforcement officers at the educational institutions of the Russian Ministry of Defense and Ministry of the Interior.

“We are ready to consider the possibility of increasing the number of students in these areas. I firmly believe that Russia and Mozambique will achieve new results in its economic and political dialogue as well as in the humanitarian, cultural and education areas,” Lavrov told his visiting colleague in Moscow.

The two diplomats also discussed effective counterterrorism cooperation, the multilateral and sustainable development of Africa, and the settlement of internal political crises and armed conflicts on the continent.

In fact, at present, Russia’s relations with African countries are progressing both on a bilateral basis and along the line of African regional organisations, primarily the African Union and the Southern African Development Community. Russia’s support for Maputo’s constructive commitment to developing regional integration processes was confirmed, as was its intention to assist the African community in the search for consensus solutions to the challenges facing the continent.

Speaking about the international agenda, Lavrov said “we have the identical view on the need to build international relations on the basis of international law, respect for national identity and the wish and right of nations to determine their destiny. We noted the high level of our efforts’ coordination in the UN and other multilateral platforms. We agreed to develop and strengthen this coordination.”

In Lavrov’s opinion, Russia and Mozambique have consistently maintained that the problems, conflicts and crises that unfortunately still remain on the continent should be resolved on the basis of approaches devised by Africans themselves, with moral, political and material support from the UN and the UN Security Council.

*Report by Kester Kenn Klomegah in Moscow.

Sri Lanka And the World: Whither Political Prudence? – Analysis


By Asanga Abeyagoonasekera

There is a piercing hopelessness for the future when listening to certain political rhetoric in Sri Lanka. International Workers’ Day, a day to remember workers’ rights, originated in the 1886 Haymarket affair in Chicago. It grew out of a general strike for the eight-hour workday and has over time developed into a showcase of political muscle at the May Day rally. The competition among political parties is now to draw the largest crowd, a far cry from the original idea.

The line that divides the opposition and the government in Sri Lankan politics has been blurred by a bipartisan mechanism introduced by the incumbent government. It is further blurred by the 16 Sri Lanka Freedom Party ministers who have become part of the joint opposition or remain ambiguous in their political affiliation. Political party loyalty and discipline have reached their lowest ebb in Sri Lankan politics.

The appointment of cabinet ministers at the beginning of this month to what is perhaps the largest cabinet in the world has been questioned by members of the former government, who claim there is no scientific basis to the allocation of ministries to particular individuals. Yet such an accusation leaves the general public to wonder whether the previous government itself had a mechanism for selecting its cabinet ministers. Although certain politicians, including a senior cabinet minister, assured the public that the “cabinet reshuffle will take place in a scientific manner” and that a “scientific formula” was used to allocate ministries, the substance of the formula was never revealed. A government should allocate cabinet portfolios based on merit and achievement in the relevant areas of expertise, even though party leaders are limited to selecting only 25 of the 225 members. Whatever the “scientific formula” was, it will not deliver results, because most members of parliament were elected through grave miscalculation and not on merit.

Sri Lankan political scientist Dr Jayadeva Uyangoda rightly calls for academic scrutiny of this changing behavior of political party members towards their leadership, especially on the question of party discipline/indiscipline changing the dynamics of Sri Lanka’s political party system. According to Dr Uyangoda, “Sri Lanka’s political parties have become new creatures with some unusually new characteristics. Monitoring these new changes requires not only scholarly vigilance, but also detachment from our old images of what democratic political institutions are.” The new creature created by the system sows confusion for political society.

Moving from domestic political society to the international, China’s President Xi Jinping recently spoke of Karl Marx’s idea of the struggle of the proletariat, the ideal that underpins some international workers’ movements. Xi said, “Writing Marxism onto the flag of the Chinese Communist party was totally correct.” Two centuries after Marx’s birth, the leader of the world’s second largest economy, while presiding over a much more open economic system, called Marx “the greatest thinker of modern times.” As a rising power, China liberalised its economy and embraced globalisation to move millions out of poverty.

Closer to home in South Asia, India took up a somewhat similar formula. As China and India underwent these changes, their respective foreign policies were impacted. After 1990, with the end of the Cold War, India made two important adjustments to its foreign policy: the first was economic liberalisation and deregulation; the second was its changing relationship with the US. In their book, India at the Global High Table: The Quest for Regional Primacy and Strategic Autonomy, ambassadors Teresita and Howard Schaffer correctly identify this phenomenon. Over 50 years ago, the classical realist international relations theorist Hans Morgenthau explained that “The character of foreign policy can be ascertained only through the examination of the political acts performed and of the foreseeable consequences of these acts.” By assessing given actions, one can evaluate what statesmen have actually done for the foreign policies of their countries and the profound impact on their outlook towards the world outside. Sri Lanka, with its idealistic middle-path foreign policy, remains stuck in the non-aligned past, and needs re-calibration towards a more realistic approach in this century. The idealistic view adopted in the past could be due to the influence of Buddhist values on our leaders.

Today, China has expanded its trade with the US. Nonetheless, in the past few weeks, Beijing has faced a serious trade war with Washington over the US Department of Commerce ban on ZTE, one of the largest Chinese telecommunications companies operating in the US. In February 2018, US intelligence agencies warned Americans against buying products from ZTE and Huawei, another Chinese telecom company, claiming that the companies posed a security threat to American customers. The chairman of ZTE called this “unfair and unacceptable,” decrying the US export ban as a massive disruption to its business, since the company relies on US firms for key smartphone components.

This incident is a clear indication of how national security concerns play out in the present context despite an open trade policy. Meanwhile, Sri Lanka has opened its gates to the lowest bidder, with a high percentage of its telecommunication infrastructure based on ZTE and Huawei products. Such a predicament was anticipated and highlighted during a discussion among experts from national security think tanks in Sri Lanka a year ago, and its outcome was circulated among the highest policy makers.

Should such warnings from experts go unheeded?

Bangladesh And Nuclear Power: Significance For India – Analysis


By Tarika Rastogi

The Rooppur Nuclear Power Plant (RNPP) in Bangladesh has the potential to re-energise India-Russia cooperation and significantly enhance India’s geopolitical clout and standing in the nuclear community.

The RNPP (whose construction began in 2013) will be Bangladesh’s first nuclear power plant and is being constructed by Rosatom (State Atomic Energy Corporation, Russia). It is the first atomic energy project in a third country under an India-Russia deal in which Indian companies train the workforce while Russia builds the reactor. The RNPP involves two units, each with a capacity of 1200MW. Built on a turnkey basis, the project will be managed entirely by Rosatom, which will be liable for any complications that arise in the plant. The estimated cost of the project is US$12.65 billion, of which the Russian government will provide 90 per cent at 1.75 per cent interest; the Bangladeshi government will arrange the remaining 10 per cent. The loan will be settled 28 years after the plant becomes operational and, if required, a grace period of an additional 10 years would be provided.

For India, this is important for a variety of reasons, foremost of which is the exposure to international project management. This involves being closely associated with all stages of construction, albeit on an observer basis, to gain construction knowledge, and then acting as an interface between the Russian engineers and the Bangladeshi operators. This is an important role of understanding, translating and transmitting information, the lack of which can result in severe cost overruns, as was seen at the Olkiluoto 3 reactor in Eurajoki, Finland. Moreover, it will mean the adaptation or development of a new set of standards, guidelines and legal frameworks to deal with holistic nuclear plant management: safety (from sabotage or terrorism, whether static or during transport); countering the possibility of pilferage and smuggling; setting up emergency response centres; and cyber security, to name but a few. It will give India the opportunity to stress-test the frameworks it develops from this and similar projects in Vietnam and Sri Lanka. The knowledge gained through these projects enhances India’s credibility and enables it to undertake further such projects as a knowledge partner.

The knowledge and human aspects here are particularly important, as the project leverages India’s experience with several generations of Russian reactors. Training a diverse set of Bangladeshi experts, from operational to supervisory and regulatory staff, creates a point of reference and the peer approval of an alumni network, in much the same way that graduating from Western universities does; and such training essentially makes Bangladesh’s emerging nuclear experts dependent on, and therefore closely linked to, the Indian nuclear ecosystem. What is particularly important here is that this will be the first third-generation Russian reactor that India has dealt with, as opposed to India’s own Russian reactors, which are second-generation. This means India has internalised the operational philosophy of Russian reactors and can provide culturally sensitive training to third countries on a brand new design from its construction stage; as such, this marks a significant increase in Indian human capital generation.

All of this required sustained effort from the Russian and Indian sides; it did not develop in a vacuum but out of a deliberate plan. In 2014, Russia and India signed a Strategic Vision for Strengthening Cooperation in Peaceful Uses of Atomic Energy, under which both countries will explore opportunities for sourcing materials, equipment and services from Indian industry for the construction of Russian-designed nuclear power plants in third countries. In this regard, the Russian nuclear agency, Rosatom, opened a regional centre in Mumbai in 2016 to support projects in the region and facilitate greater Indian inputs and suppliers. This is significant because it represents the first seemingly successful attempt by both countries to break out of a traditional relationship focused on fossil fuel and military sales.

This is also good for India's reputation on the international stage. First, it validates the 2008 International Atomic Energy Agency (IAEA) exception for India, showcasing its credentials as a responsible country in the international nuclear market. Moreover, given the visible lack of an export-worthy reactor in India, the provision of knowledge services to third-party nuclear reactors strengthens India's case for membership of the Nuclear Suppliers Group (NSG). The very fact that this is being carried out in partnership with Russia, a country that China is increasingly drawing closer to and one that disagrees with China on India's NSG membership, is significant, as it may create a point of friction between China and Russia. This becomes even more important given how Chinese projects in the region are either extractive or financially unsustainable, whereas the RNPP is designed to give Bangladesh clean energy and energy security on a financially sustainable basis.

India should consider this project a stepping-stone to becoming an international knowledge partner for third-country reactors. Such projects are a low-risk, high-yield way of gaining institutional links with other countries, and they greatly enhance the capability and credibility of India's nuclear industry as well as its standards, procedures, safety and security.

Afghanistan: Taliban, ‘Talks’ And Blind Optimism – Analysis


By Ajit Kumar Singh*

The latest round of efforts to bring the Taliban to the ‘talks table’ has commenced. Sounding optimistic, Atturahman Saleem, Deputy of Afghanistan’s High Peace Council (HPC), when asked on May 20, 2018, whether the Council would wait for a Taliban response, stated, “Absolutely. Necessary pressures, military, political and religious, should be built on the Taliban. Sooner or later it will work, and the Taliban would have no other option but to join the peace process.”

Notably, Afghanistan President Ashraf Ghani, addressing the second Kabul Process conference on February 28, 2018, offered unconditional peace talks to the Taliban. Ghani stated, unequivocally, “We are making this offer without preconditions in order to lead to a peace agreement. The Taliban are expected to give input to the peace-making process, the goal of which is to draw the Taliban, as an organisation, to peace talks.”

Before the beginning of the conference, the Afghanistan Government released a document titled “Offering Peace: Framing the Kabul Conference of February 28, 2018” where it suggested “seven building blocks for peace-making”, including “a political process: ceasefire, recognition as political party, transitional confidence building arrangements, and inclusive, credible, free and fair elections”. The first round of the Kabul Process had been held on June 6, 2017.

The developments between February 28, 2018, and May 20, 2018, and thereafter clearly suggest that Atturahman Saleem’s optimism is entirely misplaced.

To begin with, in the absence of any ‘official’ response from the Taliban, the HPC on March 5, 2018, warned the Taliban of consequences if the group rejected Ghani’s peace proposal. Despite the warning, the Taliban indicated, on numerous occasions, that it would not accept the proposal. In an ‘article’ posted on its website, Alemarah, on March 6, 2018, it stated, “the Americans gave the Kabul regime orders to start the calls for peace and negotiations… in order to make up for their defeat in the military and political arena which they have been facing constantly.” Again, in a statement released on March 10, 2018, the Taliban called the Kabul administration “illegitimate” and described the peace process as “propaganda and deceptive”.

Further, on April 14, 2018, when Ghani reiterated the offer, asking the Taliban “to act as a political party and participate in the elections while utilizing the prevailing opportunity and the peace offer”, the Taliban was quite clear in its response. A day later, on April 15, it stated that Afghanistan is occupied, with thousands of foreign troops in the country, and that major political and military decisions are “taken by the occupiers”. “We have seen in past elections that people have been cheated and the final decision was taken by John Kerry (former US Secretary of State), and the National Unity Government was created at the US embassy in Kabul,” the statement read.

A relatively peaceful conduct of the Parliamentary and District Council elections scheduled to be held on October 20, 2018, is essential for Afghanistan, as indefinite delay in the democratic process can only add to the country’s present misery. However, the challenges of holding this massive electoral exercise within circumstances of enveloping violence across wide areas of the country remain overwhelming.

And there are no indications that violence will diminish, as the Taliban went ahead with its declaration of the ‘annual spring offensive’. In a statement released on April 25, 2018, in which it announced the launch of its ‘Al Khandaq (the Trench)’ campaign, the Taliban stated,

On top of the… aggressive, ideological and licentious ambitions of their American masters, the deceptive efforts launched by the ineffectual and corrupt officials of the puppet regime inside and outside the country are nothing but a conspiracy orchestrated by the foreign occupiers for enervating, crushing and eventually pacifying the ongoing legitimate Afghan resistance and are not efforts for ending the war and restoring lasting peace…

Partial data compiled by the South Asia Terrorism Portal (SATP) indicates that, since April 25, 2018, Afghanistan has recorded at least 2,145 terrorism-linked fatalities (data till May 27, 2018), including 178 civilians, 157 Security Force (SF) personnel, and 1,810 militants.

There has, in fact, been no respite from terror in the country for years. According to the United Nations Assistance Mission in Afghanistan (UNAMA), civilian casualties were in four digits till 2013, but entered five digits in 2014 for the first time since 2009, when UNAMA started documenting civilian casualties, and have since remained in five digits. There were 10,453 casualties (3,438 deaths and 7,015 injured) in 2017. In the current year, as of March 30, 2018, such casualties had already reached 2,258 (763 deaths and 1,495 injured).

The Taliban remains a resurgent force. According to the Special Inspector General for Afghanistan Reconstruction (SIGAR)’s latest quarterly report released April 30, 2018,

Since SIGAR began receiving population-control data in August 2016, Afghan government control has decreased by roughly four percentage points, and the overall trend for the insurgency is rising control over the population (from 9% in August 2016 to 12% in January 2018)… Using Afghanistan’s 407 districts as the unit of assessment, as of January 31, 2018, 229 districts were under Afghan government control (73 districts) or influence (156) — an increase of two districts under government influence since last quarter. This brings Afghan government control or influence to 56.3% of Afghanistan’s total districts. There were 59 districts under insurgent control (13) or influence (46), an increase of one district under insurgent influence since last quarter. Therefore, 14.5% of the country’s total districts are now under insurgent control or influence, only a slight increase from last quarter, but a more than three percentage point increase from the same period in 2016. The remaining 119 districts (29.2%) are contested—controlled by neither the Afghan government nor the insurgency. The Afghan government’s control of districts is at its second lowest level, and the insurgency’s at its highest level, since SIGAR began receiving district control data in November 2015.

It is imprudent on the part of the Ghani Government to invite the Taliban to peace talks at this juncture. Past experience clearly demonstrates that terrorist formations never join peace talks in good faith while their influence and capacities on the ground are on an upswing. Ignoring the cumulative experience of counter-insurgency within Afghanistan and in theatres across the world, Ghani has chosen to push his ‘peace proposal’ forward, giving some credence to the Taliban’s assertion that the Afghan Government and its ‘foreign masters’ are losing the battle. The rising desperation visible in the offer of talks can only further embolden the Taliban, just as the hasty US decision of premature withdrawal did. Nothing positive could possibly be achieved as a result of either decision. Indeed, Afghanistan’s Second Vice President General Abdul Rashid Dostum stated on May 14, 2018,

Reconciliation with the Taliban should be handled from the position of strength, and not from the position of appeasement and weakness… We can reach a peaceful settlement with Taliban, if we are determined to win the war. So by softening the tone asking Taliban for reconciliation, we failed. And neither will it work in the future…

The Kabul Process was initiated to confer a pivotal role in the peace talks on Afghanistan itself. Almost all earlier initiatives, including the Qatar Process and the Quadrilateral Coordination Group (QCG) process, had given centrality to Pakistan, largely because Pakistan had deceived a willfully gullible international community into believing that peace could only be achieved by making the Taliban the ‘principal stakeholder’ in the talks process. Now it appears that the Kabul Process has also lost its way. It would be wise for the Ghani Government to initiate immediate course correction.

*Ajit Kumar Singh

Research Fellow, Institute for Conflict Management


India: ‘South Bastar Division’ The Last Maoist Bastion – Analysis


By Deepak Kumar Nayak*

On May 27, 2018, a 45-year-old villager, identified as Vanjami Sukda, was hacked to death by cadres of the Communist Party of India-Maoist (CPI-Maoist) in Punpalli village under Dornapal Police Station limits in Sukma District. In a pamphlet recovered from the spot, the Maoists accused the deceased of being a ‘Police informer’.

On May 24, 2018, Sub-Inspector Rajesh Kumar was killed and Constable Manik Tinpare sustained injuries when a pressure bomb planted by CPI-Maoist cadres went off in a forest near Puswada village in Sukma District. Both belonged to a CoBRA (Commando Battalion for Resolute Action) unit, a specialized force within the Central Reserve Police Force (CRPF).

On May 20, 2018, at least seven personnel of the Chhattisgarh Police [four personnel of the Chhattisgarh Armed Force (CAF) and three of the District Forces (DFs)] were killed and one was critically injured when CPI-Maoist cadres blew up their vehicle, triggering an Improvised Explosive Device (IED) blast at Cholnar village in Dantewada District. Special Director General of Police (SDGP), anti-Naxal [Left Wing Extremism, LWE] operations, D.M. Awasthi disclosed, “For the last one year, construction of the Kirandul and Arangpur road was stopped but we have restarted it just a few days ago. The jawans were escorting a truck which was laden with building material when the Maoists targeted them at a culvert between Cholnar and Kirandul village.” Officials disclosed that the Maoists also looted two INSAS (Indian Small Arms System) assault rifles, two SLRs (Self Loading Rifles), and two AK-47 rifles from the possession of the deceased SF personnel.

On the same day, CPI-Maoist cadres opened fire on Security Forces (SFs) at an unspecified location in Sukma District. In retaliatory fire, SFs killed a woman Maoist cadre, while some 15 Maoists managed to escape from the encounter site, several of them with injuries.

According to partial data collated by the South Asia Terrorism Portal (SATP), Dantewada and Sukma Districts, which fall under the Maoists’ ‘operational zone’ of ‘South Bastar division’ have accounted for at least 42 Maoist-linked fatalities (12 civilians, 20 SFs, and 10 Maoists) in the current year, thus far (data till May 25, 2018). During the corresponding period in 2017, the ‘division’ had recorded 58 fatalities (six civilians, 40 SFs, and 12 Maoists). Through 2017, Maoist-linked fatalities in the ‘division’ stood at 78 [10 civilians, 44 SF personnel, and 24 Maoists].

Since March 23, 2005, the ‘South Bastar division’ has accounted for at least 1,446 fatalities, including 407 civilians, 593 SF personnel, and 446 Maoists (data till May 27, 2018). The first fatality in the ‘division’ was registered on March 23, 2005, when a group of suspected Maoists killed one Policeman and injured another during an attack on the helipad at Danteguda village in Dantewada District. Since March 23, 2005, and till May 27, 2018, Chhattisgarh State accounted for a total of 2,797 fatalities (800 civilians, 997 SF personnel, and 1,000 Maoists). Thus, the ‘South Bastar division’ alone accounted for 51.69 per cent of total-Maoist linked fatalities recorded in Chhattisgarh. The ‘South Bastar division’ tallies 18.74 per cent of total Maoist-linked fatalities recorded across India during this period, with all-India fatalities at 7,713 (3,061 civilians, 1,924 SF personnel, and 2,728 Maoists).

Fatalities in ‘South Bastar division’ and Chhattisgarh: 2005*-2018**

Year     ‘South Bastar division’             Chhattisgarh
         Civilians  SFs   LWEs  Total       Civilians  SFs   LWEs  Total
2005     38         37    18    93          52         42    26    120
2006     161        47    72    280         189        55    117   361
2007     41         133   45    219         95         182   73    350
2008     10         5     9     24          35         67    66    168
2009     25         50    89    164         87         121   137   345
2010     34         100   46    180         72         153   102   327
2011     4          36    41    81          39         67    70    176
2012     7          20    2     29          26         36    46    108
2013     28         20    13    61          48         45    35    128
2014     8          40    3     51          25         55    33    113
2015     16         21    16    53          34         41    45    120
2016     12         20    58    90          38         36    133   207
2017     10         44    24    78          32         59    78    169
2018**   13         20    10    43          28         38    39    105
Total    407        593   446   1446        800        997   1000  2797

Source: SATP; *Data since March 23, 2005; **Data till May 27, 2018

The SF:Maoist kill ratio in South Bastar has been overwhelmingly in favour of the Maoists for nine years [2005 (2.05:1); 2007 (2.95:1); 2010 (2.17:1); 2012 (10:1); 2013 (1.53:1); 2014 (13.33:1); 2015 (1.31:1); 2017 (1.83:1) and 2018 (2:1)]. The ratio stood in favour of the SFs in five years [2006 (1:1.53); 2008 (1:1.8); 2009 (1:1.78); 2011 (1:1.13) and 2016 (1:2.9)]. Disturbingly, the cumulative SF:Maoist kill ratio in the ‘division’ favours the Maoists, at 1.33:1 (593 SF personnel killed against 446 Maoists). The SF:Maoist kill ratio in Chhattisgarh as a whole has been marginally in favour of the SFs, at 1:1.003, over the period 2005-2018 (data till May 27, 2018).
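These cumulative ratios and shares follow directly from the table totals. The short Python sketch below is purely illustrative (the helper names are not part of any SATP tooling); it normalises one side of each ratio to 1, following the convention used above.

```python
# Cumulative fatality figures quoted from SATP above
# (March 23, 2005 - May 27, 2018).
SOUTH_BASTAR = {"civilians": 407, "sf": 593, "maoists": 446}
CHHATTISGARH = {"civilians": 800, "sf": 997, "maoists": 1000}

def kill_ratio(sf_deaths: int, maoist_deaths: int) -> str:
    """Express the SF:Maoist kill ratio with the smaller side normalised to 1.

    A value above 1 on the SF side means more SF personnel than Maoists
    were killed, i.e. the exchange favoured the Maoists.
    """
    if sf_deaths >= maoist_deaths:
        return f"{sf_deaths / maoist_deaths:.3f}:1"   # favours the Maoists
    return f"1:{maoist_deaths / sf_deaths:.3f}"       # favours the SFs

def share_percent(part: int, whole: int) -> float:
    """Percentage share of `part` in `whole`."""
    return 100 * part / whole

print(kill_ratio(SOUTH_BASTAR["sf"], SOUTH_BASTAR["maoists"]))    # 1.330:1
print(kill_ratio(CHHATTISGARH["sf"], CHHATTISGARH["maoists"]))    # 1:1.003
# South Bastar's share of all Maoist-linked fatalities in Chhattisgarh,
# ~51.7 per cent:
print(f"{share_percent(sum(SOUTH_BASTAR.values()), sum(CHHATTISGARH.values())):.2f}%")
```

The same helpers reproduce the per-year entries, e.g. 2005 (37 SF against 18 Maoists) gives roughly 2.06:1, matching the 2.05:1 quoted above up to rounding.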

‘South Bastar division’ has also accounted for a large number of civilian killings. Between March 23, 2005, and May 27, 2018, the ‘division’ accounted for 407 civilian fatalities, around 50.87 per cent of total civilian fatalities recorded in the State.

Since March 23, 2005, the ‘South Bastar division’ has also accounted for at least 112 major incidents (each involving three or more fatalities), resulting in 964 fatalities (240 civilians, 434 SF personnel, and 290 Maoists). In the current year, two such incidents have already been recorded resulting in 16 fatalities (all SF personnel). Apart from the May 20 incident at Cholnar, the ‘division’ witnessed an incident in which at least nine CRPF personnel were killed and two were critically injured, when CPI-Maoist cadres blew up a Mine-Protected Vehicle (MPV) in Kistaram jungle area of Sukma District on March 13, 2018. Significantly, the worst ever Maoist attack, across all regions, targeting SFs took place at the Tarmetla village near Chintalnad in the Dantewada District on April 6, 2010, when 75 CRPF troopers and one Chhattisgarh Police trooper were slaughtered by the Maoists.

The ‘South Bastar division’ is reportedly headed by Maoist ‘commander’ Raghu aka Pungar Mandavi, while the ‘secretary’ of the ‘division’ is Vikas. The two Districts (Dantewada and Sukma) of the ‘division’ also come under the ‘purview’ of the ‘Darbha division’ of the CPI-Maoist, which ‘looks after’ these two Districts, along with Bastar. The ‘Darbha division’ is led by a senior Maoist Surinder aka Kabir. He is also reportedly ‘heading’ the new ‘MMC (Madhya Pradesh, Maharashtra and Chhattisgarh) region’.

The Maoists have, however, suffered significant losses at leadership level in the ‘division’. The former ‘South Bastar division’ ‘secretary’, Kurasam Mothi Bai aka Madhavi (40), was arrested in Vijayawada city in Krishna District of Andhra Pradesh on June 3, 2013. Madhavi was said to have danced around the slain body of Mahendra Karma, founder of Salwa Judum (an anti-Maoist vigilante group) after the Maoists killed 28 persons including Karma, and injured another 30, some of them critically, in a swarming attack in the Darbha Ghati region of the Sukma District on May 25, 2013. At least 25 Maoist ‘leaders’ operating in the ‘division’ have been killed so far [between March 23, 2005 and May 27, 2018]. These include at least 13 ‘commander’ level leaders. Another 41 ‘commander’ level leaders have been arrested. Also, the mounting pressure of the SFs has forced at least 30 ‘commander’ level leaders to surrender.

The two Districts (Dantewada and Sukma) falling under the ‘South Bastar division’ are spread over a geographical area of 11,132.91 square kilometres and offer immense tactical advantages to the Maoists. The forest cover of the ‘division’ is 8,516.21 square kilometres, about 76.49 per cent of the total area. The ‘division’ lies in the south of the State and is surrounded mostly by currently or erstwhile Maoist-affected Districts of the State and of the neighbouring States of Odisha and Telangana. To the south, the ‘South Bastar division’ shares borders with Khammam in Telangana; to the north, with Bastar in Chhattisgarh; to the east, with Malkangiri in Odisha; and to the west, with Bijapur in Chhattisgarh. All these adjacent Districts are among the 30, across seven States, identified by the Ministry of Home Affairs (MHA) as “worst-affected” by Maoist violence. The two Districts of the ‘division’, Dantewada and Sukma, are themselves among these 30.

On April 17, 2018, Chhattisgarh Chief Minister Raman Singh stated in an interview that the South Bastar region, by virtue of being surrounded by four inter-State borders with Maharashtra, Telangana, Andhra Pradesh and Odisha, is subject to Left-wing extremism perpetrated by those who come from outside Chhattisgarh. He thus noted,

…The leaders are not even locals, they come from Andhra. Some fighters live in Odisha, strike here and leave. That is why it is harder to control Naxalism in the South Bastar region, because it is surrounded by inter-state borders.

Moreover, the two Districts of the ‘division’ score relatively low on all human development indicators. A report released by the Government of India listed Dantewada and Sukma among 115 ‘backward districts’ of India, identified on the basis of select indicators of backwardness (poverty, health, and education) and the prevalence of Left Wing Extremism.

Despite suffering losses, SFs have succeeded in increasing their pressure on the Maoists in the region. According to a report dated April 24, 2018, a letter recovered by SFs in February 2017 from Bastar points to increasing SF operations in the region forcing the Maoists to shift to the ‘MMC region’, of which Gondia and a tip of Gadchiroli (both in Maharashtra) are part. The letter, written to a Maoist rebel identified as comrade Surendra, from comrade Somru read: “Oppression is rising. The enemy is opening camps. Villagers are fleeing from the area and we are working in difficult conditions.”

Significantly, on May 21, 2018, the 241st battalion of the CRPF was commissioned at the Force’s anti-Naxal training school in Chhattisgarh. The battalion, named “Bastariya Warriors”, comprises personnel drawn largely from the Bastar region, and is tasked with carrying out anti-Naxal operations in that area. With 534 personnel, including 189 women, the battalion has been raised with the specific purpose of strengthening SF operations in areas where they have suffered their greatest reverses owing to a lack of concrete intelligence, and familiarity with locals and topography.

On April 30, 2018, the Chhattisgarh Government extended the ban on the CPI-Maoist and six of its front organisations [Dandakaranya Adivasi Kisan Mazdoor Sangh, Krantikari Adivasi Mahila Sangh, Krantikari Adivasi Balak Sangh, Krantikari Kisan Committee, Mahila Mukti Manch and Janatana Sarkar (‘people’s government’ unit)] for one more year. The CPI-Maoist was first banned, along with its affiliates, in the State under provisions of the Chhattisgarh Special Public Security Act-2005 in April 2006.

The Maoists’ ‘South Bastar division’ remains a major challenge for the State. The ‘final battle’ against the Maoists is likely to be decided in this ‘division’, where they continue to wield significant influence and retain disturbing levels of operational capability.

*Deepak Kumar Nayak

Research Assistant, Institute for Conflict Management

Gaza’s Regrettable Catastrophe – OpEd


The Gaza Strip is a territory that no faction wants to take responsibility for, not even the Palestinian Authority. Most of the time, the global community prefers to ignore it until some incident forces the world to take note. The latest repetition of this cycle of violence occurred on May 14th, when thousands of Palestinians tried to storm the Gaza-Israel border fence. The Palestinian organizers, who call the series of protests the ‘great march of return’, seek the return of their kin to what is now Israel.

For the past seven weeks, Palestinians have protested at twelve locations along the border fence separating Israel and the Gaza Strip. Israeli troops responded with non-lethal as well as excessive lethal force, including drones that dropped tear gas and sharpshooters firing live ammunition. Since the start of the events, Palestinian casualties have mounted to well over a hundred; some were members of militant groups like Hamas carrying lethal weapons, while others were unarmed ordinary civilians despairing at the circumstances inside Gaza. Regardless of the details, human rights organizations such as Amnesty International and Human Rights Watch immediately condemned the Israeli Government's excessive use of force, arguing there had been ample time to devise alternative methods of containing the situation.

Meanwhile, at the peak of the violence, American and Israeli officials celebrated the relocation of the U.S. Embassy from Tel Aviv to Jerusalem. The split-screen moment captured the surreal geopolitics of the region, with Prime Minister Netanyahu calling it a ‘great day for peace’. Although President Trump's decision to move the embassy has certainly aggravated unrest in the region, the blame for the current crisis falls on multiple factors.

For most, Gaza is no ordinary piece of land. It is practically a large open-air prison, measuring around 365 square kilometers and home to nearly two million people, making it one of the most crowded places in the world. Gaza also faces dire shortages of food, water, power, medicine, and other basic necessities. The tap water is undrinkable, and power outages usually last most of the day; as a result, hospitals struggle to function properly, and most people have access to electricity for only four hours daily. There is also a shortage of schools, nearly half of Gaza's residents are unemployed, and almost half of its children suffer from malnourishment. Under these circumstances, about half of the residents have expressed no will to live.

The blockade of the Gaza Strip is the catalyst of the region's despair. Israel, having withdrawn its troops in 2005, insists that it is not to blame. Yet the Israeli Government still retains control over Gaza's land, sea, and airspace to the north and east, and determines what goods get in and out. Israeli officials argue that these measures are necessary because Hamas, the militant group that controls the Gaza Strip, is likely to use the materials to construct rockets, bombs, and smuggling tunnels. Either way, the lack of freedom of movement means that people cannot leave the region even if they want to (only a small number can leave for exceptional reasons).

Egypt also contributes to the despair in Gaza. The sole crossing point, at Rafah to the south, has been practically closed for years due to security concerns in the Sinai Peninsula.

Meanwhile, Hamas, which won the elections in 2006, has proven corrupt, incompetent, and oppressive. Last year, the militant group essentially admitted that it is not up to the task of governing the Strip and tried to concede administrative tasks to Fatah as part of a reconciliation pact. The religious-based Hamas and the nationalist-based Fatah have been at odds for over a decade, since the former expelled the latter from Gaza. Since then, Fatah has been restricted to the West Bank, but for the past few years the two factions have been trying to reach an agreement to unify the Palestinian Authority.

Thus far, however, the deal has failed to produce a functioning government because Hamas is not yet prepared to give up its weapons. In response, Fatah has withheld the salaries of civil servants in the Gaza Strip as a means of pressuring Hamas into political concessions. At the same time, the Israeli side, led by the Netanyahu government, rejects the idea of a Palestinian state and has encouraged the construction of settlements in the West Bank. With Israel's booming economy, it is hard to convince the Israeli leadership to change course. As such, Netanyahu prefers to manage the conflict rather than try to solve it.

Moreover, since the 2015 legislative elections, Prime Minister Netanyahu has faced several predicaments including criminal investigations and ministerial resignations. In February, Israeli law enforcement agencies recommended that the Attorney General indict Netanyahu for multiple cases of corruption.

Then, at the right moment, protests erupted in Iran, the situation in Gaza festered, and war with Lebanon seemed imminent. With so many conflicts in and around Israel, Netanyahu's diplomatic and military knowhow complicated his image in Israeli domestic politics. For nearly a decade, the status quo has allowed the Israeli Government to keep Palestinian violence in check; whenever a skirmish erupts, it damages Israel's international reputation, but not much more. All these political events, as well as the socioeconomic conditions, have placed Gaza on the brink of eruption.

What has changed recently is that the flow of financial aid to Gaza has diminished. Even though much of the global community boycotts Hamas, hundreds of millions of dollars in humanitarian assistance still pour into Gaza annually; this is essentially what has kept the region from total economic meltdown. But since the start of the year, foreign donors have scaled back their aid because there is too little to show for it. This includes the United States, which cut around $300 million in contributions to the United Nations Relief and Works Agency (UNRWA).

In addition, roughly a million residents of the Gaza Strip rely on UNRWA for assistance. Now, however, with depleted funds, the agency is unable to fulfill its tasks, leaving half of Gaza's population to fend for themselves. Since the reduction in international aid, Gaza has struggled with cash shortages, and the purchasing power of Palestinians in the region has plummeted by half. This has forced people to seek credit wherever they can, but the rising level of debt has triggered a domino effect that has crippled the local economy. For Hamas, the recent border crisis is a means of diverting attention from this internal socioeconomic deterioration. None of this will kick-start the economy, but it does help bolster Hamas's legitimacy in the Gaza Strip.

The Palestinian cause raises questions on a broader scale. A growing number of regional governments are losing patience with the Palestinian crisis, particularly the Arab states, whose citizens feel strongly about the issue. For these governments, the Palestinian crisis is an instrument to shift citizens' attitudes and shape electoral outcomes, as well as a means of gaining regional influence. How exactly regional players exploit the Palestinian cause differs from state to state.

Iran, for example, uses Hamas to undermine President Abbas of the Palestinian Authority, who has kept Iranian influence in check in the West Bank. Ultimately, this has strengthened Iran's hand in the area.

Another state that uses the Palestinian question to enhance its regional role is Turkey. In response to the recent protests, the Israeli Ambassador to Turkey was expelled from the country. In practice, the decision owes much to the upcoming Turkish elections, scheduled for June 2018: the ruling AK Party hopes to use the Gaza crisis to rally voters in its favor.

Other powers, like Saudi Arabia, face a complicated political reality. The Saudis condemned Israel's use of force against Palestinian civilians, but government officials in Riyadh consider Iran's actions a far more pressing issue than the Palestinian question. Saudi policymakers must therefore navigate between traditional sentiments and the geopolitical need to forge better relations with Israel.

Every state deals with the Palestinian question differently, but the recent crisis in Gaza highlights to what extent the Palestinian people have been abandoned by the world, even by their own leaders. Considering all of these social, political, and economic circumstances, one thing is certain. There are no innocent actors.

Russia, Israel Agree To Limit Iran’s Presence In South Syria


Moscow and Tel Aviv held advanced talks Thursday concerning Syria and agreed to “limit” the presence of Tehran in the south of the country.

Russian Defense Minister Sergei Shoigu met in Moscow with his Israeli counterpart Avigdor Lieberman, while President Vladimir Putin and Israeli Prime Minister Benjamin Netanyahu discussed the Syrian file in a phone call.

The two sides agreed to “limit” Iran, keep its forces away from the south, and allow Tel Aviv to target threatening bases deep inside Syrian territory.

The Kremlin said the Putin-Netanyahu conversation focused on “some aspects of the Syrian settlement,” which it didn’t specify.

A Russian source said that Moscow refuses to offer details about the understanding with Tel Aviv, in order to preserve the balance in its separate relationships with Israel and Iran.

“Russia is somehow embarrassed because talks with the Israelis mainly focus on a plan to remove Iran and its forces from southern Syria,” the source added.

On Thursday, Lieberman informed Shoigu that “Israel greatly appreciates Russia’s understanding of our security needs, especially regarding the situation on our northern border.”

The Israeli Defense Ministry issued a statement saying the two ministers discussed “the Israeli campaign to prevent Iranian entrenchment in Syria.”

Russian sources said the talks produced agreements concerning southern Syria, stipulating the withdrawal of Iranian-linked forces from the area and giving Israel a green light to launch military operations against any threatening target, except positions of regime forces.

Later, Israeli sources confirmed reaching such an understanding with the Russians.

The meeting in Moscow came as the Syrian Observatory for Human Rights said that Iranian troops and “Hezbollah” appear to be getting ready to withdraw from southern Syria near the Israeli-occupied Golan Heights.

Separately, Syrian regime forces launched a security campaign in the Latakia Governorate to arrest several wanted individuals across the province.

In the past two days, security personnel have arrested a large number of wanted individuals in the provincial capital and in the port city of Jableh.

Earliest European Evidence Of Lead Pollution Uncovered In The Balkans


New research from Northumbria University has revealed that metal-related pollution began in the Balkans more than 500 years before it appeared in western Europe, and persisted throughout the Dark Ages and Medieval Period, meaning the region played a far bigger role in mineral exploitation than previously believed.

The study provides a new perspective on both the timing and extent of metallurgy – the technique of extracting metal from ores and then heating or working it into a desired shape – in the Balkans, and on the associated economic changes this brought to the region, such as the onset of the Metal Ages.

The findings are published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

The exploitation of mineral resources has a broad range of environmental impacts, including metal-contaminated wastewater and the release of microscopic chemical particles into the atmosphere from mining and smelting. As these particles settle on to the surface of a peat bog, an environment in which the sediment develops year on year, a clear history of the bog’s development may be established.

Samples recovered from the Crveni Potok peat bog, located on the Serbia/Montenegro border, were geochemically examined by researchers from Northumbria’s Department of Geography and Environmental Sciences alongside colleagues from the University of Montpellier and the Romanian Academy. They found the first clear evidence of metal pollution originating from lead in the region dating back to approximately 3600 BC.

The evidence is supported by a concurrent rise in charcoal concentration, suggesting an increase in biomass burning potentially related to a broad range of economic activities, including fuel production for metal smelting.

Previously, the oldest European environmental pollution dating to circa 3000 BC had been found in southern Spain, but the new data from Crveni Potok show that metal pollution was evident in this region of eastern Europe more than 500 years earlier. This evidence is the earliest documented in European environmental records and indicates environmental pollution from metallurgy at a time when Britons were still in the Stone Age. This confirms that the Balkans were not only the birthplace of metallurgy in Europe, but also of metallic pollution.

Furthermore, levels of lead pollution decreased dramatically in western Europe after the collapse of the Roman Empire, a feature not observed in this Balkan record. This suggests that the region – which is metal rich – should be considered more of a major player in environmental metal pollution through the Dark and Middle Ages than previously thought.

This contrast between eastern and western Europe indicates that while western Europe was in the ‘Dark Ages’ there was significant economic development in the Balkan area with high levels of metal environmental pollution throughout the Medieval period. This confirms the large extent and size of the metalworking industry in the Balkans during this era.

As part of his PhD research Jack Longman uncovered these findings supervised by Dr Vasile Ersek, Senior Lecturer in Physical Geography in the University’s Department of Geography and Environmental Sciences.

Dr Ersek explained: “Much of the focus in determining sources of ancient pollution has been on established sources such as the Romans or ancient Greeks, but these findings highlight the crucial role that the Balkan metallurgy has played in the economic development of the area.

“Metallurgy and mining is intimately linked to socioeconomic development, therefore improving our knowledge of how these resources were exploited in the past can help us understand better how societies developed over time. In this respect, the peat bog record from Crveni Potok provides a fascinating history of pollution from the early Bronze Age through to the Industrial Revolution.”

Dr Longman added: “What is most interesting is that after the Roman Empire falls in the third and fourth centuries AD, lead pollution continues and even increases, indicating that the strong mining and smelting culture developed by the Romans was continued by the local population. This goes against the long-held view of barbaric hordes with little technological know-how ousting the Romans leading to the Dark Ages – as we term the 1,000 years following the Roman period. These Dark Ages may well have been true in much of western Europe, but in the Balkans, it seems that this period was, in fact, rather ‘well-lit’.”

Ancient Boulders Provide Clues About Human Migration To The Americas


When and how did the first people come to the Americas?

The conventional story says that the earliest settlers came via Siberia, crossing the now-submerged Bering land bridge on foot and trekking through Canada when an ice-free corridor opened up between massive ice sheets toward the end of the last ice age.

But with recent archaeological evidence casting doubt on this thinking, scientists are seeking new explanations. One dominant, new theory: The first Americans took a coastal route along Alaska’s Pacific border to enter the continent.

A new geological study provides compelling evidence to support this hypothesis.

By analyzing boulders and bedrock, a research team led by the University at Buffalo shows that part of a coastal migration route became accessible to humans 17,000 years ago. During this period, ancient glaciers receded, exposing islands of southern Alaska’s Alexander Archipelago to air and sun — and, possibly, to human migration.

The timing of these events is key: Recent genetic and archaeological estimates suggest that settlers may have begun traveling deeper into the Americas some 16,000 years ago, soon after the coastal gateway opened up.

The research will be published online on May 30 in the journal Science Advances.

“People are fascinated by these questions of where they come from and how they got there,” said lead scientist Jason Briner, PhD, professor of geology in UB’s College of Arts and Sciences. “Our research contributes to the debate about how humans came to the Americas. It’s potentially adding to what we know about our ancestry and how we colonized our planet.”

“Our study provides some of the first geologic evidence that a coastal migration route was available for early humans as they colonized the New World,” said UB geology PhD candidate Alia Lesnek, the study’s first author. “There was a coastal route available, and the appearance of this newly ice-free terrain may have spurred early humans to migrate southward.”

The findings do not mean that early settlers definitely traversed Alaska’s southern coast to spread into the Americas: The project examined just one section of the coast, and scientists would need to study multiple locations up and down the coastline to draw firmer conclusions.

Still, the work is exciting because it hints that the seafaring theory of migration is viable.

The bones of an ancient ringed seal — previously discovered in a nearby cave by other researchers — provide further, tantalizing clues. They hint that the area was capable of supporting human life at the time that early settlers may have been passing through, Briner says. The new study calculates that the seal bones are about 17,000 years old. This indicates that the region was ecologically vibrant soon after the ice retreated, with resources including food becoming available.

Co-authors on the research included Briner; Lesnek; Charlotte Lindqvist, PhD, an associate professor of biological sciences at UB and a visiting associate professor at Nanyang Technological University; James Baichtal of Tongass National Forest; and Timothy Heaton, PhD, of the University of South Dakota.

A landscape, touched by ice, that tells a story

To conduct their study, the scientists journeyed to four islands within the Alexander Archipelago that lie about 200 miles south-southeast of Juneau.

The team traveled by helicopter to reach these remote destinations. As soon as the researchers arrived, Briner knew that the islands had once been covered by ice.

“The landscape is glacial,” he said. “The rock surfaces are smooth and scratched from when the ice moved over it, and there are erratic boulders everywhere. When you are a geologist, it hits you in the face. You know it immediately: The glacier was here.”

To pinpoint when the ice receded from the region, the team collected bits of rock from the surfaces of boulders and bedrock. Later, the scientists ran tests to figure out how long the samples — and thus the islands as a whole — had been free of ice.

The researchers used a method called surface exposure dating. As Lesnek explained, “When land is covered by a glacier, the bedrock in the area is hidden under ice. As soon as the ice disappears, however, the bedrock is exposed to cosmic radiation from space, which causes it to accumulate certain chemicals on its surface. The longer the surface has been exposed, the more of these chemicals you get. By testing for these chemicals, we were able to determine when our rock surfaces were exposed, which tells us when the ice retreated.

“We use the same dating method for huge boulders called erratics. These are big rocks that are plucked from the Earth and carried to new locations by glaciers, which actually consist of moving ice. When glaciers melt and disappear from a specific region, they leave these erratics behind, and surface exposure dating can tell us when the ice retreated.”

For the region that was studied, this happened roughly 17,000 years ago.
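Conceptually, surface exposure dating inverts a nuclide build-up equation, N = (P/λ)(1 − e^(−λt)). A minimal sketch of that inversion, using constants loosely typical of beryllium-10 dating – the production rate, half-life, and measured concentration below are illustrative assumptions, not values from the study:

```python
import math

def exposure_age(concentration, production_rate, half_life):
    """Invert N = (P / lam) * (1 - exp(-lam * t)) for the exposure age t (years)."""
    lam = math.log(2) / half_life  # decay constant, per year
    return -math.log(1.0 - concentration * lam / production_rate) / lam

# Illustrative Be-10 numbers (assumed): production rate ~4 atoms/g/yr,
# half-life ~1.387 million years, and a hypothetical measured concentration.
N = 68_200  # atoms per gram of quartz
age = exposure_age(N, production_rate=4.0, half_life=1.387e6)
print(round(age))  # on the order of 17,000 years
```

For young surfaces like these, decay barely matters and the age is nearly N divided by the production rate; the decay term becomes important only for exposures approaching the nuclide's half-life.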

The case for a coastal migration route

In recent years, evidence has mounted against the conventional thinking that humans populated North America by taking an inland route through Canada. To do so, they would have needed to walk through a narrow, ice-free ribbon of terrain that appeared when two major ice sheets started to separate. But recent research suggests that while this path may have opened up more than 14,000 years ago, it did not develop enough biological diversity to support human life until about 13,000 years ago, Briner said.

That clashes with archaeological findings that suggest humans were already living in Chile about 15,000 years ago or more and in Florida 14,500 years ago.

The coastal migration theory provides an alternative narrative, and the new study may mark a step toward solving the mystery of how humans came to the Americas.

“Where we looked at it, the coastal route was not only open — it opened at just the right time,” Lindqvist said. “The timing coincides almost exactly with the time in human history that the migration into the Americas is thought to have occurred.”

Global Warming Hits Poorest Hardest


The wealthiest areas of the world will experience fewer changes in local climate compared to the poorest regions if global average surface temperatures reach the 1.5°C or 2°C limit set by the Paris Agreement, according to new research.

The new study, published today in Geophysical Research Letters, a journal of the American Geophysical Union, compares the difference between climate change impacts for wealthy and poor nations.

“The results are a stark example of the inequalities that come with global warming,” said lead author Andrew King from the ARC Centre of Excellence for Climate Extremes at the University of Melbourne in Australia.

“The richest countries that produced the most emissions are the least affected by heat when average temperatures climb to just 2°C, while poorer nations bear the brunt of changing local climates and the consequences that come with them.”

The least affected countries include most temperate nations, with the United Kingdom coming out ahead of all others. By contrast, the worst affected are in the Equatorial regions, including countries like the Democratic Republic of Congo.

This pattern holds true even if global average surface temperatures only reach 1.5°C above pre-industrial levels.

To get their results the researchers used a simple metric – the signal to noise ratio. The signal in this case is the local change in average temperatures caused by climate change. The noise is how variable the temperature is for that region.

In places outside the tropics, where year-to-year variability is greater and populations are adapted to a wide range of temperatures, the warming will be less noticeable.

But in Equatorial regions, where there is already a very high average temperature and less variation through the year, a small rise in temperatures due to climate change will be distinctly felt and have immediate impacts.
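The signal-to-noise metric can be sketched in a few lines. The temperature records below are hypothetical illustrations, not the study's data:

```python
import statistics

def signal_to_noise(warming, annual_temps):
    """Local warming signal divided by interannual variability (std dev)."""
    return warming / statistics.stdev(annual_temps)

# Hypothetical annual mean temperatures (degrees C) for two locations.
tropical = [26.0, 26.2, 25.9, 26.1, 26.0, 26.1, 25.9, 26.0]   # small spread
temperate = [10.5, 8.1, 11.9, 9.4, 12.3, 7.8, 10.9, 9.1]      # large spread

# The same 1.5 degC of warming stands out against the tropical record
# but is buried in the noise of the temperate one.
print(signal_to_noise(1.5, tropical))
print(signal_to_noise(1.5, temperate))
```

The point of the metric is exactly this asymmetry: an identical warming signal yields a far larger ratio where the year-to-year spread is small.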

This difference in experienced temperature, combined with the global distribution of wealth – richer nations tending to lie in temperate regions and poorer nations in the tropics – adds to the future climate change burden of developing nations.

“Economically powerful nations, who are most responsible for the emissions that led to global warming, are going to have to pick up the slack if they want to maintain economic growth in developing countries,” said co-author Luke Harrington from the University of Oxford.

“It’s why we need to invest in limiting the worst impacts of climate change for developing nations today. By assisting developing nations to meet these challenges we help maintain their economic stability and security into the future and by extension, our own as well.”

Increased Death Rate In Puerto Rico In Months After Hurricane Maria


The mortality rate in Puerto Rico rose by 62% [95% confidence interval (CI) 11% to 114%] after Hurricane Maria, according to a new study led by researchers from Harvard T.H. Chan School of Public Health. The study was conducted in January and February 2018, in collaboration with colleagues from Carlos Albizu University in Puerto Rico and the University of Colorado School of Medicine.

The researchers concluded that the original estimate of 64 excess deaths due to Hurricane Maria is likely to be a substantial underestimate. The study estimates a death rate of 14.3 deaths per thousand [95% CI 9.8 to 18.9] between September 20 (date of Hurricane Maria) and December 31, 2017, up from a rate of 8.8 deaths per thousand at the same time in 2016. About one third of the reported deaths in the households surveyed in the study were attributed to delayed or prevented access to medical care.

The study was published online in the New England Journal of Medicine.

Hurricane Maria made landfall in Puerto Rico on September 20, 2017, inflicting approximately $90 billion worth of damage and displacing thousands of residents. The storm disrupted medical services across the island, and many households were left for weeks without water, electricity, or cell phone coverage.

As with any major natural disaster, assessing the loss of life caused by Hurricane Maria was difficult and contentious. For disaster-related deaths to be confirmed in Puerto Rico, bodies must be transported to San Juan or a medical examiner must travel to the region to verify the death. This makes it difficult to log deaths that were caused by delays in treatment or chronic conditions that worsened in the aftermath of the storm. In December 2017, media reports suggested that the official death toll was significantly underestimated.

To produce an independent estimate of lives lost as a result of the storm, the researchers surveyed 3,299 randomly chosen households across Puerto Rico. Participants were asked about infrastructure damage, displacement, and deaths. Results from the survey showed that there were an estimated 14.3 deaths per 1,000 people between September 20 and December 31, 2017. By comparing this post-hurricane mortality rate with the same time period in 2016, the researchers estimated that there were 4,645 [95% CI, 793 to 8498] additional deaths in the three-month period following Hurricane Maria.
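The excess-death arithmetic can be roughly cross-checked. A minimal sketch, assuming the rates are annualized, a pre-storm population of about 3.3 million, and the 102-day window (all assumptions of this illustration; the published 4,645 figure comes from the survey-weighted analysis, so only the order of magnitude should match):

```python
# Rough cross-check of the excess-death estimate, under assumed inputs.
rate_2017 = 14.3 / 1000    # post-hurricane mortality rate, per person per year
rate_2016 = 8.8 / 1000     # baseline rate for the same window in 2016
population = 3_327_917     # approximate population of Puerto Rico (assumed)
days = 102                 # September 20 through December 31, 2017

excess = (rate_2017 - rate_2016) * population * days / 365
print(round(excess))  # roughly 5,000 -- the same order as the published 4,645
```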

In addition to a significantly higher death toll, the study showed that the average household went approximately 41 days without cell phone service, 68 days without water, and 84 days without electricity following the storm. More than 30% of surveyed households reported interruptions to medical care, with trouble accessing medications and powering respiratory equipment being the most frequently cited challenges.

Household-based surveys such as these are well studied in the scientific literature and offer a cost-effective, rapid approach in the aftermath of a disaster. The researchers have made all of their anonymized data, analysis, and code publicly available for review.


From Face Recognition To Phase Recognition


If you want to understand how a material changes from one atomic-level configuration to another, it’s not enough to capture snapshots of before-and-after structures. It’d be better to track details of the transition as it happens. Same goes for studying catalysts, materials that speed up chemical reactions by bringing key ingredients together; the crucial action is often triggered by subtle atomic-scale shifts at intermediate stages.

“To understand the structure of these transitional states, we need tools to both measure and identify what happens during the transition,” said Anatoly Frenkel, a physicist with a joint appointment at the U.S. Department of Energy’s Brookhaven National Laboratory and Stony Brook University.

Frenkel and his collaborators have now developed such a “phase-recognition” tool – or more precisely, a way to extract “hidden” signatures of an unknown structure from measurements made by existing tools. In a paper just published in Physical Review Letters, they describe how they trained a neural network to recognize features in a material’s x-ray absorption spectrum that are sensitive to the arrangement of atoms at a very fine scale. The method helped reveal details of the atomic-scale rearrangements iron undergoes during an important but poorly understood phase change.

“This network training is similar to how machine learning is used in facial-recognition technology,” Frenkel explained. In that technology, computers analyze thousands of images of faces and learn to recognize key features, or descriptors, and the differences that tell individuals apart. “There is a correlation between some features of the data,” Frenkel explained. “In the language of our x-ray data, the correlations exist between the intensity of different regions of the spectra that also have direct relevance to the underlying structure and the corresponding phase.”

Network training

To get the neural network ready for “phase recognition” – that is, to be able to recognize the key spectral features – the scientists needed a training set of spectra.

Janis Timoshenko, a postdoctoral fellow working with Frenkel at Stony Brook and lead author on the paper, tackled that challenge. First, he used molecular dynamics simulations to create 3,000 realistic structure models corresponding to different phases of iron and different degrees of disorder.

“In these models, we wanted to account for the dynamic effects, so we define the forces that act between different atoms and we allow the atoms to move around as influenced by these forces,” Timoshenko said. Then, using well-established approaches, he calculated the x-ray absorption spectra that would be obtained from each of these 3,000 structures.

“It’s not a problem to simulate a spectrum,” Timoshenko said, “it’s a problem to understand them in the backwards direction–start with the spectrum to get to the structure–which is why we need the neural network!”
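The simulate-forward, learn-the-inverse workflow can be sketched with a toy model. Everything below is illustrative – the damped-sine “spectrum”, the network size, and the training details are stand-ins for this sketch, not the study’s actual forward model or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: an EXAFS-like damped oscillation whose period
# encodes an interatomic distance r (a stand-in for the real
# simulated x-ray absorption spectra).
k = np.linspace(2.0, 10.0, 40)          # photoelectron wavenumber grid

def spectrum(r):
    return np.sin(2.0 * k * r) * np.exp(-0.02 * k**2)

# Training set: known distances and their simulated spectra.
r_train = rng.uniform(2.0, 3.0, size=2000)
X = np.stack([spectrum(r) for r in r_train])
y = (r_train - 2.5)[:, None]            # centred regression target

# One hidden layer, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.3, (40, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.3, (16, 1));  b2 = np.zeros(1)
lr = 0.1
for step in range(3000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    loss = float(np.mean(err ** 2))
    g_pred = 2.0 * err / len(X)         # dLoss/dpred
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    g_H = (g_pred @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ g_H; gb1 = g_H.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# "Phase recognition" in miniature: invert an unseen spectrum.
r_true = 2.71
r_hat = (np.tanh(spectrum(r_true) @ W1 + b1) @ W2 + b2).item() + 2.5
print(round(loss, 4), round(r_hat, 2))
```

The essential idea carries over: the forward direction (structure to spectrum) is cheap to simulate, so thousands of simulated pairs can teach a network the hard inverse direction (spectrum to structure).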

After using Timoshenko’s modeled spectral data to train the network, the scientists put their method to the test using real spectral data collected as iron underwent the phase transition.

“There are not a lot of experimental methods to monitor this transition, which happens at quite high temperatures,” Timoshenko said. “But our collaborators – Alexei Kuzmin, Juris Purans, Arturs Cintins, and Andris Anspoks from the Institute of Solid State Physics of the University of Latvia, my former institution – performed this really nice experiment at the ELETTRA synchrotron in Italy to collect x-ray absorption data on this phase transition for the first time.”

The neural network was able to extract the relevant structural information from the x-ray absorption spectrum of iron – in particular, the radial distribution function, which is a measure of the separations between atoms and how likely the various separations are. This function, unique for any material, is the key that can unlock the hidden details of the structure, according to Frenkel. It allowed scientists to quantify changes in the density and coordination of iron atoms in the process of their transition from one atomic arrangement to another.

Additional applications

In addition to being useful for studying the dynamics of phase changes, this method could be used to monitor the arrangements of nanoparticles in catalysts and other materials, the scientists say.

“We know that nanoparticles in catalytic materials change their structure in reaction conditions. It’s really important to understand the transitional structure–why it changes, and how that affects catalytic properties and processes,” Timoshenko said.

Nanoparticles also often take on structures that lie somewhere between crystalline and amorphous, with structural variations between the surface and the bulk. This method should be able to tease apart those differences so scientists can assess their relevance for material performance.

The method would also be useful for studying heterogeneous materials (which are made from a combination of particles with different sizes and shapes) and isomers of the same particle (which contain the same number of atoms but differ in their arrangements).

“No technique can image positions of atoms in three dimensions with such precision to tell what’s the difference between their shapes. But if we measure this radial distribution function, there is a chance to tell them apart–and address important questions about the role of heterogeneity in catalysis,” Frenkel said.

Study Finds Two Ancient Ancestries ‘Reconverged’ With Settling Of South America


Recent research has suggested that the first people to enter the Americas split into two ancestral branches, the northern and southern, and that the “southern branch” gave rise to all populations in Central and South America.

Now, a study shows for the first time that, deep in their genetic history, the majority – if not all – of the Indigenous peoples of the southern continent retain at least some DNA from the “northern branch”: the direct ancestors of many Native communities living today in the Canadian east.

The latest findings, published today in the journal Science, reveal that, while these two populations may have remained separate for millennia – long enough for distinct genetic ancestries to emerge – they came back together before or during the expansion of people into South America.

The new analyses of 91 ancient genomes from sites in California and Canada also provide further evidence that the first peoples separated into two populations between 18,000 and 15,000 years ago. This would have been during or after migrating across the now-submerged land bridge from Siberia along the coast.

Ancient genomes from sites in Southwest Ontario show that, after the split, Indigenous ancestors representing the northern branch migrated eastwards to the Great Lakes region. This population may have followed the retreating glacial edges as the Ice Age began to thaw, say researchers.

The study also adds to evidence that the prehistoric people associated with Clovis culture – named for 13,000-year-old stone tools found near Clovis, New Mexico, and once believed to be ancestral to all Native Americans – originated from ancient peoples representing the southern branch.

This southern population likely continued down the Pacific coast, inhabiting islands along the way. Ancient DNA from the Californian Channel Islands shows that initial populations were closely related to the Clovis people.

Yet contemporary Central and South American genomes reveal a “reconvergence” of these two branches deep in time. The scientific team, led by the universities of Cambridge, UK, and Illinois Urbana-Champaign, US, say there must have been one or a number of “admixture” events between the two populations around 13,000 years ago.

They say that the blending of lineages occurred either in North America – prior to expansion south – or as people migrated ever deeper into the southern continent, most likely following the western coast down.

“It was previously thought that South Americans, and indeed most Native Americans, derived from one ancestry related to the Clovis people,” said Dr Toomas Kivisild, co-senior author of the study from Cambridge’s Department of Archaeology.

“We now find that all native populations in North, Central and South America also draw genetic ancestry from a northern branch most closely related to Indigenous peoples of eastern Canada. This cannot be explained by activity in the last few thousand years. It is something altogether more ancient,” he said.

Dr Ripan S. Malhi, co-senior author from Illinois Urbana-Champaign, said: “Working in partnership with Indigenous communities, we can now learn more about the intricacies of ancestral histories in the Americas through advances in paleogenomic technologies. We are starting to see that previous models of ancient populations were unrealistically simple.”

Present-day Central and South American populations analysed in the study were found to have a genetic contribution from the northern branch ranging from 42% to as high as 71% of the genome.

Surprisingly, the highest proportion of northern branch genetics in South America was found way down in southern Chile, in the same area as the Monte Verde archeological site – one of the oldest known human settlements in the Americas (over 14,500 years old).

“It’s certainly an intriguing finding, although currently circumstantial – we don’t have ancient DNA to corroborate how early this northern ancestral branch arrived,” said Dr Christiana Scheib, first author of the study, who conducted the work while at the University of Cambridge.

“It could be evidence for a vanguard population from the northern branch deep in the southern continent that became isolated for a long time – preserving a genetic continuity.

“Prior to 13,000 years ago, expansion into the tip of South America would have been difficult due to massive ice sheets blocking the way. However, the area in Chile where the Monte Verde site is located was not covered in ice at this time,” she said.

“In populations living today across both continents we see much higher genetic proportions of the southern, Clovis-related branch. Perhaps they had some technology or cultural practice that allowed for faster expansion. This may have pushed the northern branch to the edges of the landmass, as well as leading to admixture encounters.”

While consultation efforts varied in this study from community-based partnerships to more limited engagement, the researchers argue that more must be done to include Indigenous communities in ancient DNA studies in the Americas.

The researchers say that genomic analysis of ancient people can have adverse consequences for linked Indigenous communities. Engagement work can help avoid unintended harm to the community and ensure that Indigenous peoples have a voice in research.

“The lab-based science should only be a part of the research. We need to work with Indigenous communities in a more holistic way,” added Scheib, who has recently joined the University of Tartu’s Institute of Genomics, where Kivisild also holds an affiliation.

“From the analysis of a single tooth, paleogenomics research can now offer information on ancient diet and disease as well as migration. By developing partnerships that incorporate ideas from Native communities, we can potentially generate results that are of direct interest and use to the Indigenous peoples involved,” she said.

Same Foods Create Markedly Different Environmental Impacts


Researchers at Oxford University and the Swiss agricultural research institute Agroscope have created the most comprehensive database yet on the environmental impacts of nearly 40,000 farms and 1,600 processors, packaging types, and retailers. This allows them to assess how different production practices and geographies lead to different environmental impacts for 40 major foods.

They found large differences in environmental impact between producers of the same product. High-impact beef producers create 105 kg of CO2 equivalents and use 370 m2 of land per 100 grams of protein – 12 and 50 times more, respectively, than low-impact beef producers. Even low-impact beef producers use 36 times more land and create six times more emissions than pea producers.
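Working backwards from the ratios quoted above gives the implied low-impact figures. This is simple arithmetic on the article's numbers; the study's actual percentile values may differ slightly from these back-calculations:

```python
# Back-calculating the implied figures from the ratios quoted in the article.
high_ghg, high_land = 105.0, 370.0   # kg CO2e and m2 of land per 100 g protein
low_ghg = high_ghg / 12              # high-impact GHG is "12 ... times greater"
low_land = high_land / 50            # high-impact land use is "50 times greater"
peas_land = low_land / 36            # low-impact beef uses "36 times more land" than peas

print(low_ghg, low_land, round(peas_land, 2))  # 8.75 7.4 0.21
```

So even the cleanest beef in the dataset implies roughly 8.75 kg CO2e and 7.4 m2 of land per 100 g of protein, against a fraction of a square metre for peas.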

Aquaculture, often assumed to produce relatively low emissions, can emit more methane and create more greenhouse gases per kilogram of liveweight than cattle. One pint of beer, for example, can create three times more emissions and use four times more land than another. This variation in impacts is observed across all five indicators the researchers assess, including water use, eutrophication, and acidification.

“Two things that look the same in the shops can have very different impacts on the planet. We currently don’t know this when we make choices about what to eat. Further, this variability isn’t fully recognised in strategies and policy aimed at reducing the impacts of farmers.” says Joseph Poore from the Department of Zoology and the School of Geography and Environment.

A small number of producers create much of the impact. Just 15% of beef production creates ~1.3 billion tonnes of CO2 equivalents and uses ~950 million hectares of land. Across all products, 25% of producers contribute on average 53% of each product’s environmental impacts. This variation and skew highlights potential to reduce impacts and enhance productivity in the food system.

“Food production creates immense environmental burdens, but these are not a necessary consequence of our needs. They can be significantly reduced by changing how we produce and what we consume” says Joseph Poore.

“One of the key challenges is finding solutions that are effective across the millions of diverse producers unique to agriculture. An approach to reduce environmental impacts or enhance productivity that is effective for one producer can be ineffective or create trade-offs for another. This is a sector where we require many different solutions delivered to many millions of different producers.”

For producers, the researchers present evidence in favour of using new technology. This technology often works on mobile devices, taking information on inputs, outputs, climate, and soil, to quantify environmental impacts. The technology then provides recommendations on how to reduce these impacts and increase productivity.

However, producers have limits on how far they can reduce their impacts. Specifically, the researchers found that the variability in the food system fails to translate into animal products with lower impacts than vegetable equivalents. For example, a low-impact (10th percentile) litre of cow’s milk uses almost two times as much land and creates almost double the emissions as an average litre of soymilk.

Diet change, therefore, delivers greater environmental benefits than purchasing sustainable meat or dairy.

Further, without major changes in technology that disproportionately target animal products, the researchers show that animal product-free diets are likely to deliver greater environmental benefits than changing production practices both today and in the future.

Specifically, plant-based diets reduce food’s emissions by up to 73% depending where you live. Staggeringly, global agricultural land would also be reduced by ~3.1 billion hectares (76%). “This would take pressure off the world’s tropical forests and release land back to nature” says Joseph Poore.

The researchers show that this variability in environmental impacts can be used to access a second scenario. For example, reducing consumption of animal products by 50% by avoiding the highest-impact producers achieves 73% of the previous scenario’s GHG emission reduction. Further, lowering consumption of discretionary products (oils, alcohol, sugar, and stimulants) by 20% by avoiding high-impact producers reduces the greenhouse gas emissions of these products by 43%.

This creates a multiplier effect, where small behavioural changes have large consequences for the environment. However, this scenario requires communicating producer (not just product) environmental impacts to consumers. This could be through environmental labels in combination with taxes and subsidies.
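The multiplier effect described above can be sketched numerically. The per-unit impact figures below are invented for illustration; the point is only that, when impacts are skewed, halving consumption by avoiding the highest-impact producers removes far more than half of the emissions.

```python
# Sketch of the "multiplier effect" (hypothetical numbers, not the
# paper's data): cutting consumption achieves outsized emissions
# reductions if the cut avoids the highest-impact producers first.

def emissions_after_cut(impacts, cut_fraction, worst_first=True):
    """Total emissions remaining after removing `cut_fraction` of units."""
    ordered = sorted(impacts, reverse=worst_first)
    k = int(len(ordered) * cut_fraction)
    return sum(ordered[k:])

impacts = [50, 30, 20, 10, 5, 4, 3, 2, 1, 1]  # per-unit emissions, skewed
total = sum(impacts)
remaining = emissions_after_cut(impacts, 0.5)
print(f"Halving consumption (worst first) cuts {1 - remaining / total:.0%} of emissions")
```

Here a 50% cut in consumption removes about 91% of emissions, because the avoided units are exactly the highest-impact ones.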

“We need to find ways to slightly change the conditions so it’s better for producers and consumers to act in favour of the environment” says Joseph Poore. “Environmental labels and financial incentives would support more sustainable consumption, while creating a positive loop: Farmers would need to monitor their impacts, encouraging better decision making; and communicate their impacts to suppliers, encouraging better sourcing”.

Paving The Way For Safer, Smaller Batteries And Fuel Cells

Fuel cells and batteries provide electricity by generating positively charged ions and coaxing them from a positive to a negative terminal, which frees negatively charged electrons to power cellphones, cars, satellites, or whatever else the device is connected to. A critical part of these devices is the barrier between the two terminals, which must remain separated for electricity to flow.

Improvements to that barrier, known as an electrolyte, are needed to make energy storage devices thinner, more efficient, safer, and faster to recharge. Commonly used liquid electrolytes are bulky and prone to shorts, and can present a fire or explosion risk if they’re punctured.

Research led by University of Pennsylvania engineers suggests a different way forward: a new and versatile kind of solid polymer electrolyte (SPE) that has twice the proton conductivity of the current state-of-the-art material. Such SPEs are currently found in proton-exchange membrane fuel cells, but the researchers’ new design could also be adapted to work for the lithium-ion or sodium-ion batteries found in consumer electronics.

The study, published in Nature Materials, was led by Karen I. Winey, TowerBrook Foundation Faculty Fellow, professor and chair of the Department of Materials Science and Engineering, and Edward B. Trigg, then a doctoral student in her lab. Demi E. Moed, an undergraduate member of the Winey lab, was a coauthor.

They collaborated with Kenneth B. Wagener, George B. Butler Professor of Polymer Chemistry at the University of Florida, Gainesville, and Taylor W. Gaines, a graduate student in his group. Mark J. Stevens, of Sandia National Laboratories, also contributed to this study, as well as Manuel Maréchal and Patrice Rannou, of the French National Center for Scientific Research, the French Alternative Energies and Atomic Energy Commission, and the Université Grenoble Alpes.

A variety of SPEs already exists. Nafion, which is widely used in proton-exchange membrane fuel cells, is a sheet of flexible plastic that is permeable to protons and impermeable to electrons. After the film absorbs water, protons can flow through microscopic channels that span it.

A thin SPE like Nafion is especially enticing for fuel cells in aerospace applications, where every kilogram counts. Much of the bulk of portable batteries comes from shielding designed to protect liquid electrolytes from punctures. Systems using liquid electrolytes must also keep the electrodes further apart than their solid-electrolyte counterparts, as metal build-up on the electrodes can eventually cross the channel and cause a short.

Nafion addresses those problems, but there is still much room for improvement.

“Nafion is something of a fluke,” Winey said. “Its structure has been the subject of debate for decades, and will likely never be fully understood or controlled.”

Nafion is hard to study because its structure is random and disordered. This fluorinated polymer occasionally branches off into side chains that end with sulfonic acid groups. It’s these sulfonic acids that draw in water and form the channels that allow for proton transport from one side of the film to the other. But because these side chains occur at random positions and are of different lengths, the resulting channels through the disordered polymer are a twisty maze that transports ions.

With an eye toward cutting through this maze, Winey’s group recently collaborated with Stevens to discover a new proton-transporting structure that has ordered layers. These layers feature many parallel acid-lined channels through which protons can quickly flow.

“It’s like superhighways versus the country roads of Provence,” Winey said.

This new structure is the result of a special chemical synthesis route developed by Wagener’s group at the University of Florida. This route evenly places the acid groups along a polymer chain such that the spacing between the functional groups is long enough to crystallize. The most detailed structural analysis to date was on a polymer with exactly 21 carbon atoms between carboxylic acid groups, the polymer that initiated the Penn-Florida collaboration a decade ago.

While Winey’s group and Stevens were working out the structure and noting its potential for transporting ions, Wagener’s group was working to incorporate sulfonic acid groups to demonstrate the diversity of chemical groups that could be attached to polyethylenes. Both teams realized that proton conductivity would require the stronger acid.

“Precisely placing the sulfonic acid groups along polyethylene proved to be our biggest synthetic challenge,” Wagener said. “Success finally happened in the hands of Taylor Gaines, who devised a scheme we call ‘heterogeneous to homogeneous deprotection’ of the sulfonic acid group ester. It was this synthetic process which finally led to the formation of the precision sulfonic acid polymers.”

The details of this process were also recently published in the journal Macromolecular Chemistry and Physics.

With the chains forming a series of hairpin shapes with a sulfonic acid group at each turn, the polymer assembles into orderly layers, forming straight channels instead of the tortuous maze found in Nafion.

There are, literally, still some kinks to work out. The group’s next step is to orient these layers in the same direction throughout the film.

“We’re already faster than Nafion by a factor of two, but we could be even faster if we aligned all of those layers straight across the electrolyte membrane,” Winey said.

More than improving fuel cells where Nafion is currently employed, the crystallization-induced layers described in the researchers’ study could be extended to work with functional groups compatible with other kinds of ions.

“Better proton conduction is definitely valuable, but I think the versatility of our approach is what is ultimately most important,” Winey said. “There’s still no sufficiently good solid electrolyte for lithium or for hydroxide, another common fuel cell ion, and everyone who is trying to design new SPEs is using a very different approach than ours.”

Cellphone batteries made with this type of SPE could be thinner and safer and, with the superhighway-style ion channels enabled by the researchers’ design, recharge much faster.

“Precision synthesis has been one of the grand challenges in polymer science, and this remarkable work demonstrates how it can open doors to novel materials of great promise,” said Linda Sapochak, director of the National Science Foundation’s Division of Materials Research. “NSF is excited to see that its support at both universities for this integrative collaboration has led to a synergistic breakthrough.”

Blood Test Shows Potential For Early Detection Of Lung Cancer

A test that analyzes free-floating DNA in the blood may be able to detect early-stage lung cancer, a preliminary report from the ongoing Circulating Cell-Free Genome Atlas (CCGA) study suggests.

The findings come from one of the first studies to explore whether sequencing blood-borne DNA is a feasible approach to early cancer detection.

“We’re excited that the initial results from the CCGA study show it is possible to detect early-stage lung cancer from blood samples using genome sequencing,” said lead study author Geoffrey R. Oxnard, MD, of Dana-Farber Cancer Institute. “There is an unmet need globally for early-detection tests for lung cancer that can be easily implemented by health-care systems. These are promising early results and the next steps are to further optimize the assays and validate the results in a larger group of people.”

Early diagnosis is key to improving survival rates for lung cancer. A blood test that could be done through a simple blood draw at the doctor’s office could potentially have a major impact on survival, but before such a test could be widely used, additional validation in larger data sets and in studies involving people who have not been diagnosed with cancer would be needed, researchers say.

Tests that analyze cell-free DNA in blood, known as “liquid biopsies,” are already used to help choose targeted therapies for people already diagnosed with lung cancer. Until recently, there has been limited evidence to indicate that cell-free DNA analysis may be feasible for early detection of the disease.

The CCGA study has enrolled more than 12,000 of the planned 15,000 participants (70 percent with cancer, 30 percent without cancer) across 141 sites in the United States and Canada.

The new report is from the first sub-study from the CCGA, in which three prototype sequencing assays were performed on blood samples from approximately 1,700 participants. Twenty different cancer types of all stages were included in the sub-study (additional early results from the sub-study, including breast, gastrointestinal, gynecologic, blood and other cancers will be presented separately at the 2018 ASCO Annual Meeting).

In this initial analysis, researchers explored the ability of the three assays to detect cancer in 127 people with stage I-IV lung cancer. The assays were designed to detect cancer-defining signals (mutations and other genomic changes) that could be used in an early cancer detection test:

  • Targeted sequencing to detect non-inherited (somatic) mutations, such as single nucleotide variants and small insertions and/or deletions;
  • Whole-genome sequencing (WGS) to detect somatic gene copy number changes;
  • Whole-genome bisulfite sequencing (WGBS) of cell-free DNA to detect epigenetic changes.

At 98 percent specificity, the WGBS assay detected 41 percent of early-stage (stage I-IIIA) lung cancers and 89 percent of late-stage (stage IIIB-IV) lung cancers. The WGS assay was similarly effective, detecting 38 percent of early-stage cancers and 87 percent of late-stage cancers, whereas the targeted assay detected 51 percent of early-stage cancers and 89 percent of late-stage cancers.

The initial results showed that all three assays could detect lung cancer with a low rate of false positives (in which a test indicates a person has cancer when there is no cancer). Of the 580 samples from people without cancer at the time of enrollment in the sub-study, five (less than 1 percent) had a cancer-like signal across all three assays. Of those five participants, two were subsequently diagnosed with cancer (one with stage III ovarian cancer and one with stage II endometrial cancer) – highlighting the potential for such tests to identify early-stage cancers.
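The specificity and false-positive figures quoted above are related by a simple calculation. The sketch below is a hypothetical helper, not the CCGA analysis code; it shows the arithmetic using the reported 5 flagged samples out of 580 cancer-free participants.

```python
# Minimal sketch of the two quantities used in the report: specificity
# (from the cancer-free samples) and sensitivity (the detection rate
# among confirmed cancers). Figures below are those quoted in the text.

def specificity(true_negatives, false_positives):
    """Fraction of cancer-free samples correctly not flagged."""
    return true_negatives / (true_negatives + false_positives)

def sensitivity(true_positives, false_negatives):
    """Fraction of confirmed cancers the assay detects."""
    return true_positives / (true_positives + false_negatives)

# 5 of 580 cancer-free samples showed a cancer-like signal.
print(f"Specificity: {specificity(575, 5):.1%}")
```

With 5 flagged samples out of 580, the observed specificity is about 99.1%, consistent with the "less than 1 percent" false-positive rate described above.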

Among participants with lung cancer, the study found that more than 54 percent of the somatic (non-inherited) mutations detected in blood samples were derived from white blood cells and not from tumors. These mutations are likely the result of natural aging processes (so-called clonal hematopoiesis of indeterminate potential, or CHIP) and will need to be taken into account when developing blood tests for early detection of cancer, noted Oxnard.

The researchers are verifying these results in an independent group of approximately 1,000 participants from CCGA as part of the same sub-study. Following this, they will continue to optimize the assays, then validate them in an even larger data set from CCGA. With increased sample sizes, machine learning approaches are expected to improve assay performance, Oxnard noted.
