Endangered Bornean orangutans survive in managed forest, decline near oil palm plantations

Recent surveys of the population of endangered Bornean orangutans in Sabah, the Malaysian state in the north-east of Borneo, show mixed results. Populations have remained stable within well-managed forests, where there is little hunting, but declined in landscapes comprising extensive oil palm plantations, according to a new study in the open-access journal PLOS ONE by Donna Simon of the World Wide Fund for Nature — Malaysia, and colleagues. The study is the largest and most complete population survey of orangutans on Borneo, home to this endangered and endemic species.

Lowland forest is the most important habitat for orangutans in Sabah. Over the past 50 years, however, extensive logging and land clearance for agriculture have caused habitat loss and fragmentation, leading to a drastic decline in orangutan numbers, but the full extent of these effects on the population has been difficult to estimate.

In the current study, the authors conducted aerial transects totaling nearly 5,500 kilometers across Sabah state, almost three times the length of a previous survey done in 2002-2003. Based on the number of nests, they calculated a population of 9,558 orangutans, including a previously unknown population of about 1,770 orangutans in many widely dispersed sub-populations.

The largest populations of orangutans, numbering about 5,500, were within forests that are either sustainably managed or unlogged in the central uplands of the State. In this area, the population has been stable since the 2002-2003 survey. In contrast, in fragmented forest areas surrounded by extensive areas of oil palm plantations, orangutan populations have declined by as much as 30% since the earlier study. These data are expected to be used by the government of Sabah to shape environmental policies to sustain these important Malaysian orangutan populations.

Simon adds: “A recent survey on orangutan populations in Sabah, North-east Borneo showed a mixed picture from different regions. However, overall the research shows that they have maintained the same numbers over the last 15 years and can remain so as long as proper conservation management measures continue to be put in place.”

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.


Healthy lifestyle may offset genetic risk of dementia

Living a healthy lifestyle may help offset a person’s genetic risk of dementia, according to new research.

The study, led by the University of Exeter, was simultaneously published today in JAMA and presented at the Alzheimer’s Association International Conference 2019 in Los Angeles. The research found that the risk of dementia was 32 per cent lower in people with a high genetic risk if they followed a healthy lifestyle, compared to those with an unhealthy lifestyle.

Participants with high genetic risk and an unfavourable lifestyle were almost three times more likely to develop dementia compared to those with a low genetic risk and favourable lifestyle.

Joint lead author Dr Elżbieta Kuźma, at the University of Exeter Medical School, said: “This is the first study to analyse the extent to which you may offset your genetic risk of dementia by living a healthy lifestyle. Our findings are exciting as they show that we can take action to try to offset our genetic risk for dementia. Sticking to a healthy lifestyle was associated with a reduced risk of dementia, regardless of the genetic risk.”

The study analysed data from 196,383 adults of European ancestry aged 60 and older from UK Biobank. The researchers identified 1,769 cases of dementia over a follow-up period of eight years. The team grouped the participants into those with high, intermediate and low genetic risk for dementia.

To assess genetic risk, the researchers looked at previously published data and identified all known genetic risk factors for Alzheimer’s disease. Each genetic risk factor was weighted according to the strength of its association with Alzheimer’s disease.

To assess lifestyle, researchers grouped participants into favourable, intermediate and unfavourable categories based on their self-reported diet, physical activity, smoking and alcohol consumption. The researchers considered no current smoking, regular physical activity, healthy diet and moderate alcohol consumption as healthy behaviours. The team found that living a healthy lifestyle was associated with a reduced dementia risk across all genetic risk groups.

Joint lead author Dr David Llewellyn, from the University of Exeter Medical School and the Alan Turing Institute, said: “This research delivers a really important message that undermines a fatalistic view of dementia. Some people believe it’s inevitable they’ll develop dementia because of their genetics. However it appears that you may be able to substantially reduce your dementia risk by living a healthy lifestyle.”

The study was led by the University of Exeter in collaboration with researchers from the University of Michigan, the University of Oxford, and the University of South Australia.

Story Source:

Materials provided by University of Exeter.


New technology improves atrial fibrillation detection after stroke

A new method of evaluating irregular heartbeats outperformed the approach that’s currently used widely in stroke units to detect instances of atrial fibrillation.

The technology, called electrocardiomatrix, goes further than standard cardiac telemetry by examining large amounts of telemetry data in a way that’s so detailed it’s impractical for individual clinicians to attempt.

Co-inventor Jimo Borjigin, Ph.D., recently published the latest results from her electrocardiomatrix technology in Stroke. Among stroke patients with usable data (260 of 265), electrocardiomatrix was highly accurate in identifying those with Afib.

“We validated the use of our technology in a clinical setting, finding the electrocardiomatrix was an accurate method to determine whether a stroke survivor had Afib,” says Borjigin, an associate professor of neurology and molecular and integrative physiology at Michigan Medicine.

A crucial metric

After a stroke, neurologists are tasked with identifying which risk factors may have contributed in order to do everything possible to prevent another event.

That makes detecting irregular heartbeat an urgent concern for these patients, explains first author Devin Brown, M.D., professor of neurology and a stroke neurologist at Michigan Medicine.

“Atrial fibrillation is a very important and modifiable risk factor for stroke,” Brown says.

Importantly, the electrocardiomatrix identification method was highly accurate for the 212 patients who did not have a history of Afib, Borjigin says. She says this group is most clinically relevant, because of the importance of determining whether stroke patients have previously undetected Afib.

When a patient has Afib, their irregular heartbeat can lead to blood collecting in their heart, which can form a stroke-causing clot. Many different blood thinners are on the market today, making it easier for clinicians to get their patients on an anticoagulant they’ll take as directed.

The most important part is determining Afib’s presence in the first place.

Much-needed improvement

Brown says challenges persist in detecting intermittent Afib during stroke hospitalization.

“More accurate identification of Afib should translate into more strokes prevented,” she says.

Once hospitalized in the stroke unit, patients are typically placed on continuous heart rhythm monitoring. Stroke neurologists want to detect possible intermittent Afib that initial monitoring like an electrocardiogram, or ECG, would have missed.

Because a physician can’t reasonably review every single heartbeat, current monitoring technology flags heart rates that are too high, Brown says. The neurologist then reviews these flagged events, which researchers say could lead to some missed Afib occurrences, or false positives in patients with different heart rhythm issues.

In contrast, Borjigin’s electrocardiomatrix converts two-dimensional signals from the ECG into a three-dimensional heatmap that allows for rapid inspection of all collected heartbeats. Borjigin says this method permits fast, accurate and intuitive detection of cardiac arrhythmias. It also minimizes false positive as well as false negative detection of arrhythmias.
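The article does not spell out the algorithm’s internals, but the core beat-stacking idea can be sketched: cut a fixed window around each detected heartbeat and stack the windows into a matrix, so that a regular rhythm renders as smooth vertical bands in a heatmap while an irregular rhythm such as Afib breaks the alignment. The function and the synthetic Gaussian “beats” below are illustrative assumptions, not Borjigin’s published method.

```python
import numpy as np

def ecg_to_matrix(signal, peaks, half_window):
    """Stack a fixed-length window around each detected R-peak.

    Each row is one heartbeat; in a regular rhythm the rows line up into
    smooth vertical bands when drawn as a heatmap, while an irregular
    rhythm produces visibly misaligned rows.
    """
    rows = []
    for p in peaks:
        if p - half_window >= 0 and p + half_window <= len(signal):
            rows.append(signal[p - half_window:p + half_window])
    return np.array(rows)

# Synthetic example: a regular train of Gaussian "beats" at 1 Hz, 250 Hz sampling.
fs = 250
t = np.arange(0, 10, 1 / fs)
peaks = np.arange(fs, len(t) - fs, fs)   # one beat per second
signal = np.zeros_like(t)
for p in peaks:
    signal += np.exp(-0.5 * ((np.arange(len(t)) - p) / 5.0) ** 2)

matrix = ecg_to_matrix(signal, peaks, half_window=fs // 2)
print(matrix.shape)   # (8, 250): one row per beat, one column per sample
```

Rendering `matrix` with any heatmap plotter gives the “three-dimensional” view described above: beat index on one axis, time-within-beat on the other, and voltage as color.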

“We originally noted five false positives and five false negatives in the study,” Borjigin says, “but expert review actually found the electrocardiomatrix was correct instead of the clinical documentation we were comparing it to.”

More applications

The Borjigin lab also recently demonstrated the usefulness of the electrocardiomatrix to differentiate between Afib and atrial flutter. In addition, the lab has shown the ability of electrocardiomatrix to capture reduced heart-rate variability in critical care patients.

Borjigin says she envisions electrocardiomatrix technology will one day be used to assist the detection of all cardiac arrhythmias online or offline and side-by-side with the use of ECG.

“I believe that sooner or later, electrocardiomatrix will be used in clinical practice to benefit patients,” she says.

Disclosure: Borjigin is the co-inventor of electrocardiomatrix, for which she holds a patent in Japan (6348226); and for which the Regents of the University of Michigan hold the patent in the United States (9918651).


The parallel ecomorph evolution of scorpionflies: The evidence is in the DNA

Carrying small cases of ethanol to preserve tissue samples for total genomic DNA analysis, a trio of researchers covered much ground in the mountains of Japan and Korea to elucidate the evolution of the scorpionfly. The rugged scientists set out to use molecular phylogenetic analysis to show that the “alpine” and “general” types of scorpionfly must be different species. After all, the alpine type exhibits shorter wings than the general type, and alpine-type females also have very dark, distinct markings on their wings.

However, what they found in the DNA surprised them.

Casually called the scorpionfly because males have upward-curving abdomens shaped like a scorpion’s stinger, Panorpodes paradoxus does not sting. Tomoya Suzuki, a postdoctoral research fellow in the Faculty of Science at Shinshu University; his father Nobuo Suzuki, an expert on scorpionflies and professor at the Japan Women’s College of Physical Education; and Koji Tojo, professor at Shinshu University’s Institute for Mountain Science, the only institute of its kind in Japan, were able to demonstrate parallel evolution of Japanese scorpionflies through Bayesian simulations and phylogenetic analyses.

Insects are among the most diverse organisms on Earth, and many fall captive to their elegant beauty, as did the scientists dedicated to their study. Insects are very adaptive to their habitat environments, making them excellent subjects for studying ecology, evolution and morphology. Phylogenetics is the study of evolutionary history, often visualized in the form of ancestral trees. The team studied the Japanese scorpionfly by collecting samples of Panorpodes paradoxus throughout Japan and parts of the Korean peninsula, searching for specimens at altitudes of up to 3,033 meters.

In a previous study, Professor Tojo was able to correlate plate tectonic geological events in Japan by studying the DNA of insects from a relatively small area of Nagano prefecture. By testing DNA, they discovered the different lineages align with how the land formations occurred in Japan, with some insect types having a more similar background to those on the Asian continent.

The Japanese archipelago used to be a part of mainland East Asia. About 20 million years ago, the movement of the tectonic plates caused the Japanese land mass to tear away from the continent. By around 15 million years ago, the Japanese islands were completely detached and isolated from the mainland. Ancestral lineages of the Japanese Panorpodes therefore diverged from the continental types around this time. There are two major phenotypes of scorpionflies in Japan: the “alpine” type, which lives at higher altitudes and has shorter wings, and its “general” type counterpart. It is hypothesized that the shorter wings are better suited to the colder climate of higher elevations. The alpine and general types also have slightly different seasonal periods when they can be observed in the wild.

Using Bayesian simulations, which estimate parameters through probability, the team estimated the divergence times of the genealogical lineages. Simulations were run for over 100 million generations. The divergence time of the continental and Japanese Panorpodes was estimated at 8.44 million years ago. The formation of the mountains in the Japanese archipelago, which began around 5 million years ago, can be seen in the estimated evolution of the alpine type of P. paradoxus. Another estimated evolution time coincided with periods of climatic cooling. Cool weather is a tough environment for insects and serves as a genetic selection process. The cool glacial periods encouraged local adaptation of the scorpionflies in the northeast part of the island of Honshu.
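The Bayesian machinery itself cannot be reproduced from a news summary, but the underlying logic of dating a split can be shown with the simplest strict molecular clock. The distance and rate values below are illustrative assumptions chosen to land near the article’s 8.44 Ma figure, not the study’s actual data.

```python
# Minimal strict molecular-clock sketch (illustrative values, not the
# study's data): two lineages accumulate substitutions independently, so
# divergence time T = d / (2 * mu), where d is pairwise genetic distance
# per site and mu is the substitution rate per site per million years.
d = 0.169   # hypothetical pairwise distance between lineages
mu = 0.01   # hypothetical rate: 1% per site per million years
T = d / (2 * mu)
print(T)    # 8.45 million years, close to the article's 8.44 Ma estimate
```

Bayesian dating methods replace these point values with prior distributions over rates and distances and integrate over them, which is why the published analysis required millions of simulated generations.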

With DNA tests of the various scorpionfly specimens, the group was able to show how P. paradoxus “ecomorphed,” or evolved forms and structural features adapted to its ecology. This parallel evolution started about 5 million years ago, when the mountain ranges in central Japan formed. Gene flow between the samples collected on different mountains was not detected, evidence of parallel evolution. Interestingly, however, gene flow between the general and alpine types might be occurring, one indicator that they are not different species.

In conclusion, the alpine and general types were not separate species as the researchers had suspected; rather, the alpine scorpionfly ecomorphed, which explains why it looks different. Using a next-generation sequencer, the team hopes to pinpoint exactly where the two forms diverge. What sort of genetic basis underlies the alpine ecomorph? What type of genes emerged to facilitate the shortening of the wings? To study the genetic basis of the ecomorph, Dr. Suzuki wishes to breed scorpionflies and compare gene expression between the alpine and general types. Breeding is necessary for next-generation sequencing, but what the larvae feed on and other rearing conditions remain a mystery. The trio hope to unlock each of these steps to further identify the unknown aspects of the Japanese scorpionfly, and to continue cutting-edge research at the Institute for Mountain Science at Shinshu University, which is fortunate to be surrounded by the Alps in the heart of Japan.

Story Source:

Materials provided by Shinshu University.


AI-designed heat pumps consume less energy

Researchers at EPFL have developed a method that uses artificial intelligence to design next-generation heat-pump compressors. Their method can cut the pumps’ power requirement by around 25%.

In Switzerland, 50-60% of new homes are equipped with heat pumps. These systems draw in thermal energy from the surrounding environment — such as from the ground, air, or a nearby lake or river — and turn it into heat for buildings.

While today’s heat pumps generally work well and are environmentally friendly, they still have substantial room for improvement. For example, by using microturbocompressors instead of conventional compression systems, engineers can reduce heat pumps’ power requirement by 20-25% (see inset) as well as their impact on the environment. That’s because turbocompressors are more efficient and ten times smaller than piston devices. But incorporating these mini components into heat pumps’ designs is not easy; complications arise from their tiny diameters (<20 mm) and fast rotation speeds (>200,000 rpm).

At EPFL’s Laboratory for Applied Mechanical Design on the Microcity campus, a team of researchers led by Jürg Schiffmann has developed a method that makes it easier and faster to add turbocompressors to heat pumps. Using a machine-learning process called symbolic regression, the researchers came up with simple equations for quickly calculating the optimal dimensions of a turbocompressor for a given heat pump. Their research just won the Best Paper Award at the 2019 Turbo Expo Conference held by the American Society of Mechanical Engineers.

1,500 times faster

The researchers’ method drastically simplifies the first step in designing turbocompressors. This step — which involves roughly calculating the ideal size and rotation speed for the desired heat pump — is extremely important because a good initial estimate can considerably shorten the overall design time. Until now, engineers have been using design charts to size their turbocompressors, but these charts become increasingly inaccurate the smaller the equipment, and they have not kept pace with the latest technology.

That’s why two EPFL PhD students — Violette Mounier and Cyril Picard — worked on developing an alternative. They fed the results of 500,000 simulations into machine-learning algorithms and generated equations that replicate the charts but with several advantages: they are reliable even at small turbocompressor sizes; they are just as detailed as more complicated simulations; and they are 1,500 times faster. The researchers’ method also lets engineers skip some of the steps in conventional design processes. It paves the way to easier implementation and more widespread use of microturbochargers in heat pumps.
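The EPFL equations themselves are not given in this article, but the chart-to-equation idea can be illustrated with a much simpler stand-in for symbolic regression: generate synthetic “simulation” data from a known relation, then recover a closed-form power law by least squares in log space. All variable names, values and the power-law form here are hypothetical.

```python
import numpy as np

# Toy stand-in for replacing a design chart with an equation (not the
# EPFL team's method or data): assume the simulated optimum obeys
# speed = C * flow^a * pressure^b, sample it, and recover the closed form.
rng = np.random.default_rng(0)
flow = rng.uniform(0.1, 1.0, 500)
pressure = rng.uniform(1.5, 4.0, 500)
speed = 2.0e5 * flow ** -0.5 * pressure ** 0.75   # hypothetical "simulations"

# Linear least squares on log(speed) = log(C) + a*log(flow) + b*log(pressure)
X = np.column_stack([np.ones_like(flow), np.log(flow), np.log(pressure)])
coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
logC, a, b = coef
print(round(float(a), 3), round(float(b), 3))   # recovers -0.5 and 0.75
```

Once such an equation is in hand, evaluating it is a handful of arithmetic operations per design point, which is why a fitted closed form can be orders of magnitude faster than interpolating charts or rerunning simulations. True symbolic regression goes further by searching over the functional form itself rather than assuming it.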

The benefits of microturbocompressors

Conventional heat pumps use pistons to compress a fluid, called a refrigerant, and drive a vapor-compression cycle. The pistons need to be well-oiled to function properly, but the oil can stick to the heat exchanger walls and impair the heat transfer process. Microturbocompressors — which have diameters of just a few dozen millimeters — can, by contrast, run without oil; they rotate on gas bearings at speeds of hundreds of thousands of rpm. The rotating movement and the gas layers between components mean there is almost no friction. As a result, these miniature systems can boost heat pumps’ heat transfer coefficients by 20-30%.

This microturbocharger technology has been in development for several years and is now mature. “We have already been contacted by several companies that are interested in using our method,” says Schiffmann. Thanks to the researchers’ work, companies will have an easier time incorporating the microturbocharger technology into their heat pumps.

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne.


Pain signaling in humans more rapid than previously known

Pain signals can travel as fast as touch signals, according to a new study from researchers at Linköping University in Sweden, Liverpool John Moores University in the UK, and the National Institutes of Health (NIH) in the US. The discovery of a rapid pain-signalling system challenges our current understanding of pain. The study is published in the scientific journal Science Advances.

It has until now been believed that nerve signals for pain are always conducted more slowly than those for touch. The latter signals, which allow us to determine where we are being touched, are conducted by nerves that have a fatty sheath of myelin that insulates the nerve. Nerves with a thick layer of myelin conduct signals more rapidly than unmyelinated nerves. In contrast, the signalling of pain in humans has been thought to be considerably slower and carried out by nerves that have only a thin layer of myelin, or none at all.

In monkeys and many other mammals, on the other hand, part of the pain-signalling system can conduct nerve signals just as fast as the system that signals touch. The scientists speculated whether such a system is also present in humans.

“The ability to feel pain is vital to our survival, so why should our pain-signalling system be so much slower than the system used for touch, and so much slower than it could be?” asks Saad Nagi, principal research engineer of the Department of Clinical and Experimental Medicine and the Center for Social and Affective Neuroscience (CSAN) at Linköping University.

To answer this, the scientists used a technique that allowed them to detect the signals in the nerve fibres from a single nerve cell. They examined 100 healthy volunteers and looked for nerve cells that conducted signals as rapidly as the nerve cells that detect touch, but that had the properties of pain receptors, otherwise known as nociceptors. Pain receptors are characterised by the ability to detect noxious stimuli, such as pinching and abrasion of the skin, while not reacting to light touch. The researchers found that 12% of thickly myelinated nerve cells had the same properties as pain receptors, and in these nerve cells the conduction speed was as high as in touch-sensitive nerve cells.

The next step of the scientists’ research was to determine the function of these ultrafast pain receptors. By applying short electrical pulses through the measurement electrodes, they could stimulate individual nerve cells. The volunteers described that they experienced sharp or pinprick pain.

“When we activated an individual nerve cell, it caused a perception of pain, so we conclude that these nerve cells are connected to pain centres in the brain,” says Saad Nagi.

The research team also investigated patients with various rare neurological conditions. One group of people had, as adults, acquired nerve damage that led to the thickly myelinated nerve fibres being destroyed, while the small fibres were spared. These patients cannot detect light touch. The scientists predicted that the loss of myelinated nerve fibres should also affect the rapidly conducting pain system they had identified. It turned out that these people had an impaired ability to experience mechanical pain. Examination of patients with two other rare neurological conditions gave similar results. These results may be highly significant for pain research, and for the diagnosis and care of patients with pain.

“It’s becoming evident that thickly myelinated nerve fibres contribute to the experience of pain when it has a mechanical cause. Our results challenge the textbook description of a rapid system for signalling touch and a slower system for signalling pain. We suggest that pain can be signalled just as rapidly as touch,” says Saad Nagi.

Story Source:

Materials provided by Linköping University.


Superhydrophobic ‘nanoflower’ for biomedical applications

Plant leaves have a natural superpower — they’re designed with water-repelling characteristics. Called a superhydrophobic surface, this trait allows leaves to cleanse themselves of dust particles. Inspired by such natural designs, a team of researchers at Texas A&M University has developed an innovative way to control the hydrophobicity of a surface to benefit the biomedical field.

Researchers in Dr. Akhilesh K. Gaharwar’s lab in the Department of Biomedical Engineering have developed a “lotus effect” by incorporating atomic defects in nanomaterials, which could have widespread applications in the biomedical field including biosensing, lab-on-a-chip, blood-repellent, anti-fouling and self-cleaning applications.

Superhydrophobic materials are used extensively for the self-cleaning characteristics they lend devices. However, current materials require alteration of the surface chemistry or topography to work, which limits their use.

“Designing hydrophobic surfaces and controlling the wetting behavior has long been of great interest, as it plays a crucial role in accomplishing self-cleaning ability,” Gaharwar said. “However, there are limited biocompatible approaches to control the wetting behavior of the surface as desired in several biomedical and biotechnological applications.”

The Texas A&M design adopts a “nanoflower-like” assembly of two-dimensional (2D) atomic layers to protect the surface from wetting; the team recently published the study in Chemical Communications. 2D nanomaterials are an ultrathin class of nanomaterials that have received considerable attention in research. Gaharwar’s lab used 2D molybdenum disulfide (MoS2), a new class of 2D nanomaterials that has shown enormous potential in nanoelectronics, optical sensors, renewable energy sources, catalysis and lubrication, but has not been investigated for biomedical applications. This innovative approach demonstrates applications of this unique class of materials to the biomedical industry.

“These 2D nanomaterials, with their hexagonally packed layers, repel water adherence. However, a missing atom from the top layer can give water molecules easy access to the next layer of atoms underneath, making the surface transition from hydrophobic to hydrophilic,” said lead author of the study, Dr. Manish Jaiswal, a senior research associate in Gaharwar’s lab.

This innovative technique opens many doors for expanded applications in several scientific and technological areas. The superhydrophobic coating can be easily applied over various substrates such as glass, tissue paper, rubber or silica using the solvent evaporation method. These superhydrophobic coatings have widespread applications, not only in developing self-cleaning surfaces for nanoelectronic devices, but also in biomedicine. Specifically, the study demonstrated that blood and protein-containing cell culture media do not adhere to the surface, which is very promising. In addition, the team is currently exploring the potential applications of controlled hydrophobicity in stem cell fate.

Story Source:

Materials provided by Texas A&M University.


Atomic ‘patchwork’ using heteroepitaxy for next generation semiconductor devices

Researchers from Tokyo Metropolitan University have grown atomically thin crystalline layers of transition metal dichalcogenides (TMDCs) with varying composition over space, continuously feeding in different types of TMDC to a growth chamber to tailor changes in properties. Examples include 20nm strips surrounded by different TMDCs with atomically straight interfaces, and layered structures. They also directly probed the electronic properties of these heterostructures; potential applications include electronics with unparalleled power efficiency.

Semiconductors are indispensable in the modern age; silicon-based integrated circuits underpin the operation of all things digital, from discrete devices like computers, smartphones and home appliances to control components for every possible industrial application. A broad range of scientific research has been directed to the next steps in semiconductor design, particularly the application of novel materials to engineer more compact, efficient circuitry which leverages the quantum mechanical behavior of materials at the nanometer length scale. Of special interest are materials with a fundamentally different dimensionality; the most famous example is graphene, a two-dimensional lattice of carbon atoms which is atomically thin.

Transition metal dichalcogenides (or TMDCs) are promising candidates for incorporation into new semiconductor devices. Composed of transition metals like molybdenum and tungsten and a chalcogen (or Group 16 element) like sulfur or selenium, they can form layered crystalline structures whose properties change drastically when the metallic element is changed, from normal metals to semiconductors, even to superconductors. By controllably weaving domains of different TMDCs into a single heterostructure (made of domains with different composition), it may be possible to produce atomically thin electronics with distinct, superior properties to existing devices.

A team led by Dr. Yu Kobayashi and Associate Professor Yasumitsu Miyata from Tokyo Metropolitan University has been at the cutting edge of efforts to create two-dimensional heterostructures with different TMDCs using vapor-phase deposition, the deposition of precursor material in a vapor state onto a surface to make atomically flat crystalline layers. One of the biggest challenges they faced was creating a perfectly flat interface between different domains, an essential feature for getting the most out of these devices. Now, they have succeeded in engineering a continuous process to grow well-defined crystalline strips of different TMDCs at the edge of existing domains, creating strips as thin as 20nm with a different composition. Their new process uses liquid precursors which can be sequentially fed into a growth chamber; by optimizing the growth rate, they were able to grow heterostructures with distinct domains linked perfectly over atomically straight edges. They directly imaged the linkage using scanning tunneling microscopy (STM), finding excellent agreement with first-principles numerical simulations of what an ideal interface should look like. The team used four different TMDCs, and also realized a layer-on-layer heterostructure.

By creating atomically sharp interfaces, electrons may be effectively confined to one-dimensional spaces on these 2D devices, for exquisite control of electron transport and resistivity as well as optical properties. The team hopes that this may pave the way to devices with unparalleled energy efficiency and novel optical properties.

Story Source:

Materials provided by Tokyo Metropolitan University.


Researchers validate optimum composites structure created with additive manufacturing

Additive manufacturing built an early following with 3D printers using polymers to create a solid object from a Computer-Aided Design model. The materials used were neat polymers — perfect for a rapid prototype, but not commonly used as structural materials.

A new wave of additive manufacturing uses polymer composites that are extruded from a nozzle as an epoxy resin, but reinforced with short, chopped carbon fibers. The fibers make the material stronger, much like rebar in a cement sidewalk. The resulting object is much stiffer and stronger than a resin on its own.

The question a recent University of Illinois at Urbana-Champaign study set out to answer concerns which configuration or pattern of carbon fibers in the layers of extruded resin will result in the stiffest material.

John Lambros, Willett Professor in the Department of Aerospace Engineering and director of the Advanced Materials Testing and Evaluation Laboratory at U of I, was approached by an additive manufacturing research group at Lawrence Livermore National Laboratory to test composite parts they had created using a direct ink writing technique.

“The carbon fibers are small, about seven microns in diameter and 500 microns in length,” Lambros said. “It’s easier with a microscope but you can certainly see a bundle with the naked eye. The fibers are mostly aligned in the extruded resin, which is like a glue that holds the fibers in place. The Lawrence Livermore group provided the parts, created with several different configurations and one made without any embedded fibers as a control. One of the parts had been theoretically optimized for maximum stiffness, but the group wanted definitive experimental corroboration of the optimization process.”

Lambros said that while waiting for the actual additively manufactured composite samples, he and his student made their own “dummy” samples out of Plexiglas so that they could begin testing.

In this case, the shape being tested was a clevis joint — a small, oval-shaped plate with two holes used to connect two other surfaces. For each different sample shape, Lambros’ lab must create a unique loading fixture to test it.

“We create the stands, the grips, and everything — how they’ll be painted, how the cameras will record the tests, and so on,” Lambros said. “When we got the real samples, they weren’t exactly the same shape. The thickness was a bit different than our Plexiglas ones, so we made new spacers and worked it out in the end. From the mechanics side, we must be very cautious. It’s necessary to use precision so as to be confident that any eventual certification of additively manufactured parts is done properly.”

“We created an experimental framework to validate the optimal pattern of the short-fiber reinforced composite material,” Lambros said. “As the loading machine strained the clevis joint plates, we used a digital image correlation technique to measure the displacement field across the surface of each sample by tracking the motion in the pixel intensity values of a series of digital images taken as the sample deforms. A random speckle pattern is applied to the sample surface and serves to identify subsets of the digital images in a unique fashion so they can be tracked during deformation.”
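The subset-tracking idea behind digital image correlation can be illustrated with a short sketch. The code below is a minimal, integer-pixel version using zero-normalized cross-correlation on a synthetic speckle image; it is an illustration of the principle, not the lab's actual software, and real DIC codes add sub-pixel interpolation and full-field processing.

```python
import numpy as np

def ncc(a, b):
    # zero-normalized cross-correlation between two equal-size patches
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_subset(ref, cur, top, left, size, search=5):
    # find the integer-pixel displacement of one speckle subset by
    # exhaustively maximizing NCC over a small search window
    patch = ref[top:top + size, left:left + size]
    best, best_uv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = cur[top + du:top + du + size, left + dv:left + dv + size]
            if cand.shape != patch.shape:
                continue  # skip windows that fall off the image
            score = ncc(patch, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv

# synthetic demo: a random speckle pattern rigidly shifted by (3, -2) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(track_subset(ref, cur, top=20, left=20, size=16))
```

Tracking many such subsets across the sample surface yields the displacement field the quote describes.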

They tested one control sample and four different configurations, including the one believed to be optimized for stiffness, which had a wavy fiber pattern rather than one oriented along horizontal or vertical lines.

“Each sample clevis joint plate had 12 layers in a stack. The optimized one had curved deposition lines and gaps between them,” Lambros said. “According to the Livermore group’s predictions, the gaps are there by design, because you don’t need more material than this to provide the optimal stiffness. That’s what we tested. We passed loading pins through the holes, then pulled each sample to the point of breaking, recording the amount of load and the displacement.

“The configuration that they predicted would be optimal was indeed optimal. The least optimal was the control sample, which is just resin — as you would expect, because there are no fibers in it.”

Lambros said the analysis rests on the premise that this is a global optimum: that this is the absolute best possible build for stiffness, and that no other build pattern is better than this one.

“Although of course we only tested four configurations, it does look like the optimized configuration may be the absolute best in practice because the configurations that would most commonly be used in design, such as 0°-90° or ±45° alignments, were more compliant, meaning less stiff, than this one,” Lambros said. “The interesting thing that we found is that the sample optimized to be the stiffest also turned out to be the strongest. So, if you look at where they break, this one is at the highest load. This was somewhat unexpected in the sense that they had not optimized for this feature. In fact, the optimized sample was also a bit lighter than the others, so if you look at specific load, the failure load per unit weight, it’s a lot higher. It’s quite a bit stronger than the other ones. And why that is the case is something that we’re going to investigate next.”
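The “specific load” comparison is simply failure load normalized by sample weight. A toy calculation makes the point; the loads and masses below are hypothetical stand-ins, since the study's actual numbers are not given in the article.

```python
# hypothetical failure loads and masses, for illustration only
samples = {
    "optimized (wavy fibers)": {"failure_load_N": 5200.0, "mass_g": 11.0},
    "0/90 layup":              {"failure_load_N": 4800.0, "mass_g": 12.5},
}

for name, s in samples.items():
    specific_load = s["failure_load_N"] / s["mass_g"]  # newtons per gram
    print(f"{name}: {specific_load:.1f} N/g")
```

A lighter sample can therefore come out well ahead on specific load even when raw failure loads are closer together.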

Lambros said there may be more testing done in the future, but for now, his team successfully demonstrated that they could provide a validation for the optimized additive composite build.

A further step towards reliable quantum computation

Quantum computation has been drawing the attention of many scientists because of its potential to outperform the capabilities of standard computers for certain tasks. For the realization of a quantum computer, one of the most essential features is quantum entanglement. This describes an effect in which several quantum particles are interconnected in a complex way. If one of the entangled particles is influenced by an external measurement, the state of the other entangled particle changes as well, no matter how far apart they may be from one another. Many scientists are developing new techniques to verify the presence of this essential quantum feature in quantum systems. Efficient methods have been tested for systems containing only a few qubits, the basic units of quantum information. However, the physical implementation of a quantum computer would involve much larger quantum systems. Yet, with conventional methods, verifying entanglement in large systems becomes challenging and time-consuming, since many repeated experimental runs are required.

Building on a recent theoretical scheme, a team of experimental and theoretical physicists from the University of Vienna and the ÖAW led by Philip Walther and Borivoje Dakić, together with colleagues from the University of Belgrade, successfully demonstrated that entanglement verification can be undertaken in a surprisingly efficient way and in a very short time, thus making this task applicable also to large-scale quantum systems. To test their new method, they experimentally produced a quantum system composed of six entangled photons. The results show that only a few experimental runs suffice to confirm the presence of entanglement with extremely high confidence, up to 99.99%.

The verified method can be understood in a rather simple way. After a quantum system has been generated in the laboratory, the scientists carefully choose specific quantum measurements which are then applied to the system. The results of these measurements lead to either confirming or denying the presence of entanglement. “It is somehow similar to asking certain yes-no questions to the quantum system and noting down the given answers. The more positive answers are given, the higher the probability that the system exhibits entanglement,” says Valeria Saggio, first author of the publication in Nature Physics. Surprisingly, the number of questions and answers needed is extremely small. The new technique proves to be orders of magnitude more efficient compared to conventional methods.
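The statistics behind the “yes-no questions” picture can be sketched with elementary probability. Assume, as a deliberate simplification of the actual protocol, that any unentangled (separable) state can pass a single randomly chosen test with probability at most p_max < 1; the value 0.75 below is an illustrative assumption, not a figure from the paper. Then N consecutive passes bound the false-positive probability by p_max**N:

```python
import math

def runs_needed(p_max, confidence):
    # smallest N such that 1 - p_max**N >= confidence, i.e. N consecutive
    # "yes" answers leave at most a (1 - confidence) chance that the
    # state was separable all along
    return math.ceil(math.log(1.0 - confidence) / math.log(p_max))

# how many consecutive passes give 99.99% confidence if a separable
# state passes each test with probability at most 0.75?
print(runs_needed(p_max=0.75, confidence=0.9999))
```

Because the bound tightens exponentially in N, a few dozen runs already yield very high confidence, which is consistent with the article's point that only a few experimental runs suffice.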

Moreover, in certain cases the number of questions needed is even independent of the size of the system, thus confirming the power of the new method for future quantum experiments.

While the physical implementation of a quantum computer is still facing various challenges, new advances like efficient entanglement verification could move the field a step forward, thus contributing to the progress of quantum technologies.

Story Source:

Materials provided by University of Vienna. Note: Content may be edited for style and length.

More than 5 million cancer survivors experience chronic pain, twice the rate of the general population

More than 5 million cancer survivors in the United States experience chronic pain, almost twice the rate in the general population, according to a study published by Mount Sinai researchers in JAMA Oncology in June.

Researchers used the National Health Interview Survey, a large national representative dataset from the U.S. Centers for Disease Control and Prevention, to estimate the prevalence of chronic pain among cancer survivors. They found that about 35 percent of cancer survivors have chronic pain, representing 5.39 million patients in the United States.
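The two reported figures are mutually consistent, as a back-of-envelope check shows. Note that the ~15.4 million survivor total below is inferred here by dividing, not stated in the article.

```python
# back-of-envelope check of the reported figures: ~35% of an estimated
# ~15.4 million U.S. cancer survivors (assumed total, inferred from the
# article's numbers) comes to ~5.39 million people with chronic pain
prevalence = 0.35
survivors_millions = 15.4  # assumption; the article reports only the product
affected_millions = prevalence * survivors_millions
print(round(affected_millions, 2))
```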

“This study provided the first comprehensive estimate of chronic pain prevalence among cancer survivors,” said corresponding author Changchuan Jiang, MD, MPH, a medical resident at Mount Sinai St. Luke’s and Mount Sinai West. “These results highlight the important unmet needs of pain management in the large and growing cancer survivorship community.”

Specific types of cancer — such as bone, kidney, throat, and uterine — also had a higher incidence of chronic and severe pain that restricted daily activity. Chronic pain was more prevalent in survivors who were unemployed and had inadequate insurance.

Chronic pain is one of the most common long-term effects of cancer treatment and has been linked with an impaired quality of life, lower adherence to treatment, and higher health care costs. This study is important because a better understanding of the epidemiology of pain in cancer survivors can help inform future health care educational priorities and policies.

Researchers from the American Cancer Society, Memorial Sloan Kettering Cancer Center, and the University of Virginia were part of the study team.

Story Source:

Materials provided by The Mount Sinai Hospital / Mount Sinai School of Medicine.

Spiders risk everything for love

University of Cincinnati biologist George Uetz long suspected the extravagant courtship dance of wolf spiders made them an easy mark for birds and other predators.

But it was only when he and colleague Dave Clark from Alma College teamed up with former University of Minnesota researcher Tricia Rubi and her captive colony of blue jays that he could prove it.

For a study published in May in the journal Behavioural Processes, Rubi trained a captive colony of blue jays to peck at buttons to indicate whether or not they saw wolf spiders (Schizocosa ocreata) on video screens.

Clark made videos superimposing images of courting, walking and stationary male spiders on a leaf litter background. Rubi presented the videos to blue jays on a flat display screen on the ground.

When viewed from above, the brindled black and brown spiders disappear amid the dead leaves.

The jays had trouble finding spiders that stayed motionless in the videos. This confirmed the adaptive value of the anti-predator “freeze” behavior.

The jays had less trouble seeing spiders that walked in the video. And the jays were especially quick to find male spiders engaged in ritual courtship behavior, in which they wave their furry forelegs in the air like overzealous orchestra conductors.

“By courting the way they do, they are clearly putting themselves at risk of bird predation,” UC’s Uetz said.

His lab studies the complementary methods spiders employ to communicate with each other, called multimodal communication. Female spiders leave a pheromone trail behind them in their silk and by rubbing their abdomens on the ground, Uetz said.

And when male spiders come into visual range, they bounce and rattle their legs on the leaf litter to create vibrations that can travel some considerable distance to the legs of potential mates. The males also wave their front legs in a unique pattern to captivate females.

The males of the species have especially furry front legs that look like black woolen leg warmers. The combination of thick fur and vigorous dancing are indicators that the male is fit and healthy, Uetz said.

“The displays and the decorations show off male quality,” Uetz said. “The males that display vigorous courtship and robust leg tufts are showing off their immune competence and overall health. They, in turn, will have sons that have those qualities.”

That is, if they live so long. Many birds, including blue jays, find spiders to be tasty. And a chemical called taurine found in spiders is especially important for the neurological development of baby birds, he said.

Spiders instinctively fear predatory birds. In previous studies, Uetz, Clark and UC student Anne Lohrey determined that wolf spiders would freeze in place when they detected the sharp, loud calls of blue jays, cardinals and other insect-eating birds. By comparison, they ignored the calls of seed-eating birds such as mourning doves along with background forest noises such as the creak of katydids.

“They clearly recognized these birds as some kind of threat,” Uetz said. “Robins will hunt them on the ground. Lots of other birds do, too. Turkeys will snap them up.”

When Uetz proposed a spider experiment, Rubi said it wasn’t hard to train her colony of blue jays. The jays quickly learned to peck at different buttons when they either observed a spider or didn’t see one on a video screen.

“Birds are super visual. They have excellent color vision and good visual acuity. It’s not surprising they would have no trouble seeing spiders in motion,” she said.

Rubi now studies genetic evolution at the University of Victoria in British Columbia.

If natural selection means the most avid courting spiders are also most likely to get eaten, why does this behavior persist across generations? Wouldn’t meeker spiders survive to pass along their genes?

Rubi said the explanation lies in another selective force.

“Natural selection is selection for survival, which would lead to spiders that are less conspicuous to predators,” she said. “But sexual selection is driven by females. And they select for a more conspicuous display.”

In genetic terms, Rubi said, fitness is measured in the number of healthy offspring produced. So while staying alive by minimizing risk is a good strategy for the individual, it’s not a viable strategy for the species.

“The longest-lived male can still have a fitness of ‘zero’ if he never mates,” Rubi said. “So there appears to be a trade-off between being safe and being sexy. That balance is what shapes these courtship displays.”

Uetz said female wolf spiders can be very choosy about the qualities they value in a mate.

“The tufts on their forelegs are very important. Their size and symmetry play a big role,” he said. “They’re so tiny and have brains the size of a poppy seed. You wouldn’t think they could discriminate, but they do.”

And at least for successful male wolf spiders living in a hostile world, this means love wins over fear.

Shedding light on ‘black box’ of inpatient opioid use

People who receive opioids for the first time while hospitalized have double the risk of continuing to receive opioids for months after discharge compared with their hospitalized peers who are not given opioids, according to research led by scientists at the University of Pittsburgh Graduate School of Public Health.

The findings, published today in the Annals of Internal Medicine, are among the first to shed light on the little-studied causes and consequences of inpatient opioid prescribing.

“I was surprised by the level of opioid prescribing to patients without a history of opioid use,” said lead author Julie Donohue, Ph.D., professor in Pitt Public Health’s Department of Health Policy and Management. “About half of the people admitted to the hospital for a wide variety of medical conditions were given opioids. The stability of this prescribing also was surprising. Nationally and regionally, as people have become more aware of how addictive opioids can be, we’ve seen declines in outpatient opioid prescribing. But we didn’t see that in inpatient prescribing.”

Previous studies have shown that some surgical and medical patients who fill opioid prescriptions immediately after leaving the hospital go on to have chronic opioid use. Until this study, however, little was known about how and if those patients were being introduced to the opioids while in the hospital.

Donohue and her colleagues reviewed the electronic health records of 191,249 hospital admissions of patients who had not been prescribed opioids in the prior year and were admitted to a community or academic hospital in Pennsylvania between 2010 and 2014.

Opioids were prescribed in 48% of the admissions, with those patients being given opioids for a little more than two-thirds of their hospital stay, on average.

Almost 6% of patients receiving opioids during their hospital stay were still being prescribed opioids three months later, compared with 3% of those without inpatient opioid use. And 7.5% of patients who received opioids less than 12 hours before discharge were still receiving opioids 90 days later, compared with 3.9% of their peers who were free of opioids for at least 24 hours prior to discharge.

Additionally, non-opioid painkillers and anti-inflammatory medications, such as ibuprofen, aspirin or naproxen, were rarely tried before an opioid was administered — as little as 7.9% of the time for some conditions.

“Inpatient opioid use has been something of a black box,” Donohue said. “And, while our study could not assess the appropriateness of opioid administration, we identified several practices — low use of non-opioid painkillers, continuous use of opioids while hospitalized, opioid use shortly before discharge — which may be opportunities to reduce risk of outpatient opioid use and warrant further study.”

Additional study co-authors include Kennedy, M.S., Christopher Seymour, M.D., M.Sc., Timothy Girard, M.D., M.S.C.I., Oscar Marroquin, M.D., F.A.C.C., and Chung-Chou H. Chang, Ph.D., all of Pitt; Wei-Hsuan Lo-Ciganic, Ph.D., M.S.Pharm., of the University of Florida; Catherine H. Kim, Pharm.D., of UPMC; and Patience Moyo, Ph.D., of Brown University.

This research was funded by UPMC and Pitt.

‘Power shift’ needed to improve gender balance in energy research

Women still face significant barriers in forging successful and influential careers in UK energy research, a new high-level report has revealed.

A team of experts from the University of Exeter’s Energy Policy Group has analysed gender balance within the crucial field of energy research and spoken to female researchers about their experiences of academic life. The study, launched today (14th June 2019), sets out how research funders and universities can ensure female talent and expertise is mobilised in transforming our energy systems.

The report is particularly timely as the UK parliament declares a climate emergency and the government commits to legislate for a 2050 net-zero greenhouse gas emissions target. It is clear that energy research needs to harness 100 per cent of available talent in order to meet the challenge of rapidly decarbonising energy systems.

The study revealed that women are still significantly under-represented in energy research and that application rates from women are low. It also found that grants applied for by and awarded to women tend to be of smaller value, although when they do apply, female academics are equally, and sometimes more, likely to be funded than male academics.

The report also highlighted the ‘significant drop-off’ between the number of female PhD students and funded researchers — meaning the sector loses a substantial pool of potential talent at an early stage.

The research presents four key ways in which funders and universities can work together to improve gender balance: look at the data, fund more women, stimulate career progression for female energy academics, and build on what’s already working.

Jess Britton, a Postdoctoral Research Fellow at the University of Exeter and co-author of the report said: “Progress on gender balance in research has been too slow for too long, but we think now is the time to bring together action across funders and universities to ensure that female talent is capitalised on. Taking action across the funding, institutional and systemic issues we identify could drive a real shift in inclusion in the sector.”

The new report, commissioned by the UK Energy Research Centre (UKERC) and funded by the Engineering and Physical Sciences Research Council (EPSRC), saw the researchers speak to 59 female academics conducting energy research, drawn from various disciplines, institutions and career stages. They also analysed available data on gender and energy research funding.

Crucially, interviews with the researchers unearthed an array of issues that were felt to be holding women back from career progression — including the detrimental impact of part-time work or maternity leave, and inherent institutional and funding bias towards established, male academics.

While the report recognised that there has been some progress since 2017, including improvements in the gender balance of Peer Review Panel Members and small increases in awards granted to female researchers, progress has remained slow.

The study suggests that any progress should be accompanied by systemic change within the institutional structures and cultural environment of institutions involved with energy research.

Jim Watson, Director of UKERC added: “This report shows that there is an urgent need to address the poor gender balance within the UK energy research community — particularly with respect to leadership of grants and career progression.

“It not only reveals the extent of the problem with new evidence, but makes a series of practical recommendations that should be required reading for funders and universities alike.”

The research identified four key ways in which UKRI, other funders and universities can work to improve gender balance. They are:

Look at the data — There remain significant difficulties in accessing meaningful data on gender balance in energy research. Data should be published and used to set targets, monitor progress and provide annual updates. The report also suggested using quantitative and qualitative data to identify key intervention points, speaking to more female energy academics to identify biases and barriers, and continuing to improve gender balance in funding review processes.

Fund more women — the report identified that funding structures can be a barrier, and that both part-time working and career breaks are perceived to slow progress. It suggests that the assessment of part-time working and maternity leave needs to be standardised across funder eligibility criteria and in the review process. It also identified that a lack of diversity of funding types impacts on women, and suggested trialling innovative approaches to allocating funding and supporting early career researchers.

Stimulate career progression for female energy academics — The report highlighted the need to acknowledge and take action on the individualistic, long hours culture of academia and also overhaul existing institutional structures and cultures. Early career stages are often characterised by precarious fixed-term contracts and over reliance on quantitative measures of progress. It also recommended building suitable training, mentoring and support networks to help more women progress and ensure the visibility of female researchers.

Build on what is working — The study recommended identifying key points of engagement to build gender balance: combine specific targeted actions, such as UKRI and university frameworks and targeted funding initiatives, with long-term action on structural issues that promote cultural change in our institutions. It also identified the need to ensure equality of voice — so that female academic voices are heard.

Alison Wall, Deputy Director for Equality, Diversity and Inclusion at EPSRC said: “We welcome this report, its findings and recommendations. Many of the issues raised are ones we recognise more widely in our research community.

“Enhancing diversity and inclusion is one of the priorities in our new Delivery Plan. For example, we plan to make further progress on embedding EDI into the grant application process, developing our peer review processes, provision of further data and increased flexibility in our funding.”
