Research News

Scientists discover how birds navigate crosswinds

Study of lovebirds’ flight could lead to better visual control algorithms for autonomous aerial robots

Ferrari the lovebird during flight training in a laboratory bird wind tunnel.

July 16, 2019

While pilots rely on radio signals, advanced computations and other tools to keep them on course during strong crosswinds, birds can naturally navigate these demanding conditions, even in environments with little visibility.

To understand how, Stanford University mechanical engineer David Lentink and colleagues studied lovebirds flying in a crosswind tunnel that features customizable wind and light settings.

The results, published in Proceedings of the National Academy of Sciences, could inspire more robust and computationally efficient visual control algorithms for autonomous aerial robots.

This is the first study of how birds orient their bodies, necks and heads to fly through extreme 45-degree crosswinds over short ranges — both in bright visual environments and in dark, cave-like environments, where faint points of light are the only beacons. The lovebirds navigated all environments equally well.

The researchers found that lovebirds navigate by stabilizing and fixating their gaze on the goal, while yawing their bodies into a crosswind. Staying on course requires them to contort their necks by 30 degrees or more. A computer-simulated model indicated that, while neck control is active, body reorientation into the wind is achieved passively.

“Airplanes have a vertical tail to orient stably into the wind,” said Lentink. “We discovered why birds don’t need one: their flapping wings don’t only offer propulsion — they also orient into the wind passively like a weathervane.”
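The weathervane analogy can be captured in a toy yaw model: a restoring torque that pulls the body toward the wind direction plus aerodynamic damping. The sketch below is purely illustrative and is not the authors' computer-simulated model; the constants are arbitrary.

```python
import numpy as np

# Toy weathervane model of passive yaw stabilization (illustrative only,
# not the study's simulation). psi is the yaw angle relative to the wind.
def simulate_yaw(psi0, k=4.0, c=1.0, dt=0.01, steps=2000):
    psi, dpsi = psi0, 0.0
    for _ in range(steps):
        ddpsi = -k * np.sin(psi) - c * dpsi   # restoring torque + damping
        dpsi += ddpsi * dt
        psi += dpsi * dt
    return psi

# A body yawed 45 degrees off the wind settles back into alignment.
print(np.degrees(simulate_yaw(np.radians(45.0))))   # ~0
```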

Added Kathy Dickson, a program officer in NSF’s Division of Integrative Organismal Systems, which funded the study, “This integrative research used innovative technological advancements to bring studies of animal movement from closely controlled conditions in the laboratory into the field, where unsteady and intermittent flows are more the norm. The work provides unexpected insights into how birds adjust their bodies when encountering crosswinds and navigate through unstable air flows, even at night with limited visual cues.”

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

Endangered Bornean orangutans survive in managed forest, decline near oil palm plantations

Recent surveys of the population of endangered Bornean orangutans in Sabah, the Malaysian state in the north-east of Borneo, show mixed results. Populations have remained stable within well-managed forests, where there is little hunting, but declined in landscapes comprising extensive oil palm plantations, according to a new study in the open-access journal PLOS ONE by Donna Simon of the World Wide Fund for Nature — Malaysia, and colleagues. The study is the largest and most complete population survey of orangutans on Borneo, home to this endangered and endemic species.

Lowland forest is the most important habitat for orangutans in Sabah. Over the past 50 years, however, extensive logging and land clearance for agriculture caused habitat loss and fragmentation, which led to a drastic decline in their numbers, but the full extent of the effects on orangutan populations has been difficult to estimate.

In the current study, the authors conducted aerial transects totaling nearly 5,500 kilometers across Sabah state, almost three times the length of a previous survey done in 2002-2003. Based on the number of nests, they calculated a population of 9,558 orangutans, including a previously unknown population of about 1,770 orangutans in many widely dispersed sub-populations.
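The paper details the survey methodology; as a hedged illustration of the general approach, great ape surveys conventionally convert observed nest density into individuals by dividing by the nest production rate and the time a nest remains visible. Every number below is a placeholder, not a figure from the study.

```python
# Hedged sketch of the standard nest-to-individual conversion used in
# great ape surveys. None of these values come from the PLOS ONE study.
nest_density = 250.0  # nests per km^2, hypothetical transect estimate
p = 0.9               # proportion of individuals that build nests (assumed)
r = 1.0               # nests built per individual per day (assumed)
t = 250.0             # days a nest stays visible before decaying (assumed)

orangutan_density = nest_density / (p * r * t)  # individuals per km^2
print(round(orangutan_density, 2))              # ~1.11
```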

The largest populations of orangutans, numbering about 5,500, were within forests that are either sustainably managed or unlogged in the central uplands of the State. In this area, the population has been stable since the 2002-2003 survey. In contrast, in fragmented forest areas surrounded by extensive areas of oil palm plantations, orangutan populations have declined by as much as 30% since the earlier study. These data are expected to be used by the government of Sabah to shape environmental policies to sustain these important Malaysian orangutan populations.

Simon adds: “A recent survey of orangutan populations in Sabah, north-east Borneo, showed a mixed picture across regions. However, overall the research shows that they have maintained the same numbers over the last 15 years and can remain stable as long as proper conservation management measures continue to be put in place.”

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

Source

Research Headlines – How to protect privacy in the age of big data


When companies collect and analyse data about consumer behaviour, they provoke profound questions about privacy rights. But a group of EU-funded researchers has struck a balance between privacy and the private sector. Their goal is to allow consumers to select the level of privacy protection that suits them best.

Image: © metamorworks #182361404, source: stock.adobe.com 2019

‘Big data’ – the immense trove of information that corporations collect about customer behaviour – has the potential to offer customers exactly what they want, sometimes before they know they want it. But often, that data can be analysed to identify individuals and even to reveal the specifics of individual behaviour in ways that many people find alarming.

In response to concerns from privacy activists, and after years of preparation, the European Union began implementing the General Data Protection Regulation (GDPR) on 25 May 2018. GDPR is designed to preserve privacy rights by requiring companies to obtain a customer’s consent about how they use consumer data. Public confusion lingers, however, about how much information companies collect and how they use it.

An EU-funded research project called DAPPER has developed a method that could help solve this complex issue. The key is choice. Among other results, the DAPPER method allows companies to analyse consumer information, but only at a level of detail that each consumer can select.

‘A capstone result of the project is the development of methods to capture information about correlations within data and use them to accurately reveal information about behaviour of users,’ says principal investigator Graham Cormode of the University of Warwick in the UK. ‘For example, results could be used to gather information about the correlation between smoking and lung disease, without revealing the smoking status or disease status of any individual.’

Mixing randomness with choice

One widely accepted method for guaranteeing strong privacy rights in big data analysis is called differential privacy. This introduces a random element into how an organisation accesses a client’s data, making it nearly impossible to reconstruct individual identity after analysing group behaviour.

The problem is that differential privacy assumes that all individuals have the same preferences. Some might allow for less privacy, if that meant better choices; some might demand total privacy.

Enter DAPPER. The project focused on four areas of research. The first, synthetic private data, proposes a new definition for digital privacy: personalised differential privacy, in which users specify a personal privacy requirement for their data.

Organisations – whether corporations, governments or university researchers – could analyse subject behaviour, but only using parameters that the subjects set themselves. The result allows customers to make their own privacy choices while giving companies the insight they need to offer better products.
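As a rough illustration of the underlying idea, the classic binary randomized-response mechanism satisfies differential privacy, and “personalising” it amounts to letting each user pick their own privacy parameter epsilon. This is a hedged toy sketch, not DAPPER's published method; the user data shown is invented.

```python
import math
import random

def randomized_response(value: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1);
    otherwise report its flip. Smaller epsilon = stronger privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return value if random.random() < p_truth else not value

# Personalised twist: each user supplies their own epsilon.
users = [(True, 0.1), (False, 2.0), (True, 5.0)]  # (smoker?, chosen epsilon)
reports = [randomized_response(v, eps) for v, eps in users]
print(reports)
```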

Other research areas included correlated data modelling, which provides algorithms for analysing statistics while respecting privacy safeguards; data utility enhancement, which helps construct accurate graph-structured data while protecting privacy; and trajectory data, which developed a method for analysing GPS data about users while protecting information about an individual’s location.

A better balance

Project results should soon find their way into the private sector. ‘Methods for collecting data have been deployed by Google, Microsoft and Apple in recent years,’ Cormode says. ‘The methods we developed in this project have the potential to be incorporated into these systems, allowing the gathering of more sophisticated data on user activity while preserving privacy.’

Most of the project’s funds supported the research of two PhD candidates: Tejas Kulkarni at the University of Warwick, and Jun Zhang at the National University of Singapore. After completing his dissertation, Kulkarni will explore ways to safeguard privacy in machine learning.

DAPPER received funding through the EU’s Marie Skłodowska-Curie Actions programme.

Project details

  • Project acronym: DAPPER
  • Participants: United Kingdom (Coordinator)
  • Project N°: 618202
  • Total costs: € 100 000
  • EU contribution: € 100 000
  • Duration: April 2014 to March 2018


Source

Events – 3rd HBP Curriculum Workshop Series – New Horizons in Brain Medicine: From Research to Clinics – 3-5 July 2019, Innsbruck, Austria

The aim of this interactive workshop is to introduce non-specialists to brain medicine and to deepen their understanding of the most recent advances in research on neurodevelopmental, neurodegenerative and neuropsychiatric disorders.

Lectures and tutorials by international experts will report the state of the art of research and treatment of brain diseases.

Hands-on examples and practical tools and methodologies will be presented during a visit to leading laboratories at the Medical University of Innsbruck.

A student brainstorming session will be organised to allow exchange about concepts and methods. Application is open to the entire student community and early-career researchers, whether or not they are affiliated with the Human Brain Project.

All early-career scientists are encouraged to participate, and the organisers aim to achieve equal gender representation.

Application deadline: 29 May 2019

Source

How does your productivity stack up?

You know that person who always seems to be ahead of their deadlines, despite being swamped? Do you look at them with envy and wonder how they do it?

“Regardless of location, industry, or occupation, productivity is a challenge faced by every professional,” says Robert Pozen, senior lecturer at the MIT Sloan School of Management.

As part of his ongoing research and aided by MIT undergraduate Kevin Downey, Pozen surveyed 20,000 self-selected individuals in management from six continents to learn why some people are more productive than others.

The survey tool, dubbed the Pozen Productivity Rating, consists of 21 questions divided into seven categories: planning your schedule, developing daily routines, coping with your messages, getting a lot done, improving your communication skills, running effective meetings, and delegating to others. These particular habits and skills are core to Pozen’s MIT Sloan Executive Education program, Maximizing Your Productivity: How to Become an Efficient and Effective Executive, and his bestselling book, “Extreme Productivity: Boost Your Results, Reduce Your Hours.”

After cleaning up the data, Pozen and Downey obtained a complete set of answers from 19,957 respondents. Roughly half were residents of North America; another 21 percent were residents of Europe, and 19 percent were residents of Asia. The remaining 10 percent included residents of Australia, South America, and Africa.

They identified the groups of people with the highest productivity ratings and found that professionals with the highest scores tended to do well on the same clusters of habits:

  • They planned their work based on their top priorities and then acted with a definite objective;
  • they developed effective techniques for managing a high volume of information and tasks; and
  • they understood the needs of their colleagues, enabling short meetings, responsive communications, and clear directions.

The results were also interesting when parsed by the demographics of the survey participants.

Geographically, the average productivity score for respondents from North America was in the middle of the pack, even though Americans tend to work longer hours. In fact, the North American score was significantly lower than the average productivity scores for respondents from Europe, Asia, and Australia.

Age and seniority were highly correlated with personal productivity — older and more senior professionals recorded higher scores than younger and more junior colleagues. Habits of these more senior respondents included developing routines for low-value activities, managing message flow, running effective meetings, and delegating tasks to others.

While the overall productivity scores of male and female professionals were almost the same, there were some noteworthy differences in how women and men managed to be so productive. For example, women tended to score particularly high when it came to running effective meetings — keeping meetings to less than 90 minutes and finishing with an agreement of next steps. By contrast, men did particularly well at coping with high message volume — not looking at their emails too frequently and skipping over the messages of low value.

Coping with your daily flood of messages

While it’s clear that the ability to deal with inbox overload is key to productivity, how that’s accomplished may be less clear to many of us who shudder at our continuous backlog of emails.

“We all have so much small stuff, like email, that overwhelms us, and we wind up dedicating precious time to it,” says Pozen. “Most of us look at email every three to five minutes. Instead, look every hour or two, and when you do look, look only at subject matter and sender, and essentially skip over 60-80 percent of it, because most emails you get aren’t very useful.” Pozen also encourages answering important emails immediately instead of flagging them and then finding them again later (or forgetting altogether), as well as flagging important contacts and making ample use of email filters.

However, Pozen stresses that managing incoming emails, while an important skill, needs to be paired with other, more big-picture habits in order to be effective, such as defining your highest priorities. He warns that without a specific set of goals to pursue — both personal and professional — many ambitious people devote insufficient time to activities that actually support their top goals.

More tips for maximizing your productivity

If you want to become more productive, try developing the “habit clusters” demonstrated in Pozen’s survey results and possessed by the most productive professionals. These include:

  • Focusing on your primary objectives: Every night, revise your next day’s schedule to stress your top priorities. Decide your purpose for reading any lengthy material, before you start.
  • Managing your work overload: Skip over 50-80 percent of your emails based on the sender and the subject. Break large projects into small steps — and start with step one.
  • Supporting your colleagues: Limit any meeting to 90 minutes or less and end each meeting with clearly defined next steps. Agree on success metrics with your team.

Pozen’s survey tool is still available online. Those completing it will receive a feedback report offering practical tips for improving productivity. You can also learn from Pozen firsthand in his MIT Executive Education program, Maximizing Your Personal Productivity.


Source

Professor Emeritus Sylvain Bromberger, philosopher of language and science, dies at 94

Professor Emeritus Sylvain Bromberger, a philosopher of language and of science who played a pivotal role in establishing MIT’s Department of Linguistics and Philosophy, died on Sept. 16 in Cambridge, Massachusetts. He was 94.

A faculty member for more than 50 years, Bromberger helped found the department in 1977 and headed the philosophy section for several years. He officially retired in 1993 but remained very active at MIT until his death.

Kindness and intellectual generosity

“Although he officially retired 25 years ago, Sylvain was an active and valued member of the department up to the very end,” said Alex Byrne, head of the Department of Linguistics and Philosophy. “He made enduring contributions to philosophy and linguistics, and his colleagues and students were frequent beneficiaries of his kindness and intellectual generosity. He had an amazing life in so many ways, and MIT is all the better for having been a part of it.”

Paul Egré, director of research at the French National Center for Scientific Research (CNRS) and a former visiting scholar at MIT, said, “Those of us who were lucky enough to know Sylvain have lost the dearest of friends, a unique voice, a distinctive smile and laugh, someone who yet seemed to know that life is vain and fragile in unsuspected ways, but also invaluable in others.”

Enduring contribution to fundamental issues about knowledge

Bromberger’s work centered largely on fundamental issues in epistemology, namely the theory of knowledge and the conditions that make knowledge possible or impossible. During the course of his career, he devoted a substantial part of his thinking to an examination of the ways in which we come to apprehend unsolved questions. His research in the philosophy of linguistics, carried out in part with the late Institute Professor Morris Halle of the linguistics section, included investigations into the foundations of phonology and of morphology.

Born in 1924 in Antwerp to a French-speaking Jewish family, Bromberger escaped the German invasion of Belgium with his parents and two brothers on May 10, 1940. After reaching Paris, then Bordeaux, his family obtained one of the last visas issued by the Portuguese consul Aristides de Sousa Mendes in Bayonne. Bromberger later dedicated the volume of his collected papers “On What We Know We Don’t Know: Explanation, Theory, Linguistics, and How Questions Shape Them” (University of Chicago Press, 1992) to Sousa Mendes.

The family fled to New York, and Bromberger was admitted to Columbia University. However, he chose to join the U.S. Army in 1942, and he went on to serve three years in the infantry. He took part in the liberation of Europe as a member of the 405th Regiment, 102nd Infantry Division. He was wounded during the invasion of Germany in 1945.

After leaving the Army, Bromberger studied physics and the philosophy of science at Columbia University, earning his bachelor’s degree in 1948. He received his PhD in philosophy from Harvard University in 1961.
 
Research and teaching at MIT

He served on the philosophy faculties at Princeton University and at the University of Chicago before joining MIT in 1966. Over the years, he trained many generations of MIT students, teaching alongside such notables as Halle, Noam Chomsky, Thomas Kuhn, and Ken Hale.

In the early part of his career, Bromberger focused on critiquing the so-called deductive-nomological model of explanation, which says that to explain a phenomenon is to deductively derive the statement reporting that phenomenon from laws (universal generalizations) and antecedent conditions. For example, we can explain that this water boils by deriving that statement from the law that all water boils at 100 degrees C and the antecedent condition that the temperature of this water was raised to exactly 100 C.

An influential article: Why-questions

One simple though key observation made by Bromberger in his analysis was that we may not only explain that the water boils at 100 C, but also how it boils, and even why it boils when heated up. This feature gradually led Bromberger to think about the semantics and pragmatics of questions and their answers.

Bromberger’s 1966 “Why-questions” paper was probably his most influential article. In it, he highlights the fact that most scientifically valid questions put us at first in a state in which we know all actual answers to the question to be false, but in which we can nevertheless recognize the question to have a correct answer (a state he calls “p-predicament,” with “p” for “puzzle”). According to Bromberger, why-questions are particularly emblematic of this state of p-predicament, because in order to ask a why-question rationally, a number of felicity conditions (or presuppositions) must be satisfied, which are discussed in his work.

The paper influenced later accounts of explanation, notably Bas van Fraassen’s discussion of the semantic theory of contrastivism in his book “The Scientific Image” (to explain a phenomenon is to answer a why-question with a contrast class in mind). Still today, why-questions are recognized as questions whose semantics is hard to specify, in part for reasons Bromberger discussed.

In addition to investigating the syntactic, semantic, and pragmatic analysis of interrogatives, Bromberger also immersed himself in generative linguistics, with a particular interest in generative phonology, and the methodology of linguistic theory, teaching a seminar on the latter with Thomas Kuhn.

A lifelong engagement with new ideas

In 1993, the MIT Press published a collection of essays in linguistics to honor Bromberger on the occasion of his retirement. “The View From Building 20,” edited by Ken Hale and Jay Keyser, featured essays by Chomsky, Halle, Alec Marantz, and other distinguished colleagues.

In 2017, Egré and Robert May put together a workshop honoring Bromberger at the Ecole Normale Supérieure in Paris. Talks there centered on themes from Bromberger’s work, including metacognition, questions, linguistic theory, and problems concerning word individuation.

Tributes were read, notably this one from Chomsky, who used to take walks with Bromberger when they taught together:

“Those walks were a high point of the day for many years … almost always leaving me with the same challenging question: Why? Which I’ve come to think of as Sylvain’s question. And leaving me with the understanding that it is a question we should always ask when we have surmounted some barrier in inquiry and think we have an answer, only to realize that we are like mountain climbers who think they see the peak but when they approach it find that it still lies tantalizingly beyond.”

Egré noted that even when Bromberger was in his 90s, he had a “constant appetite for new ideas. He would always ask what your latest project was about, why it was interesting, and how you would deal with a specific problem,” Egré said. “His hope was that philosophy, linguistics, and the brain sciences would eventually join forces to uncover unprecedented dimensions of the human mind, erasing at least some of our ignorance.”

Bromberger’s wife of 64 years, Nancy, died in 2014. He is survived by two sons, Allen and Daniel; and three grandchildren, Michael Barrows, Abigail Bromberger, and Eliza Bromberger.
 

Written by Paul Egré and Kathryn O’Neill, with contributions from Daniel Bromberger, Allen Bromberger, Samuel Jay Keyser, Robert May, Agustin Rayo, Philippe Schlenker, and Benjamin Spector
 

Source

Artificial “muscles” achieve powerful pulling force

As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
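For a feel of the physics, the classical Timoshenko bi-strip result gives the curvature of two bonded layers under uniform heating; with equal layer thicknesses and moduli it reduces to kappa = (3/2)(alpha2 - alpha1)dT / h. The material constants below are illustrative guesses, not the paper's measured values.

```python
# Curvature of a two-layer strip under uniform heating (Timoshenko bimorph),
# simplified to equal layer thicknesses and equal elastic moduli.
alpha_elastomer = 200e-6  # 1/K, illustrative value for the stretchy polymer
alpha_pe = 120e-6         # 1/K, illustrative value for polyethylene
h = 0.5e-3                # m, total strip thickness (assumed)
dT = 1.0                  # K, the ~1 degree C activation the article mentions

kappa = 1.5 * (alpha_elastomer - alpha_pe) * dT / h  # curvature, 1/m
print(f"curvature {kappa:.3f} 1/m, bend radius {1 / kappa:.1f} m")
```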

Credit: Courtesy of the researchers

Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. What happened next came as a surprise. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.

Credit: Courtesy of the researchers

One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.

Credit: Courtesy of the researchers

The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating it internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, such as “a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.

Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintlio. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.


Topics: Research, Materials Science and Engineering, DMSE, Mechanical engineering, Nanoscience and nanotechnology, Research Laboratory of Electronics, McGovern Institute, Brain and cognitive sciences, School of Science, School of Engineering, National Science Foundation (NSF)

Source

Research News

Instability in Antarctic ice projected to increase likelihood of worst-case sea level rise projections

Thwaites Glacier, modeled for new study, likely to succumb to instability

Instability in Antarctic ice is likely to rapidly increase sea level rise.

July 12, 2019

Images of vanishing Arctic ice are jarring, but the region’s potential contributions to sea level rise are no match for Antarctica’s. Now, a study says that instability hidden in Antarctic ice increases the likelihood of worst-case scenarios for the continent’s contribution to global sea level.

In the last six years, five closely observed Antarctic glaciers have doubled their rate of ice loss. At least one, Thwaites Glacier, modeled for the new study, will likely succumb to this instability, a volatile process that pushes ice into the ocean fast.

How much ice the glacier will shed in the coming 50 to 800 years can’t be projected exactly, scientists say, due to unpredictable fluctuations in climate and the need for more data. But researchers at the Georgia Institute of Technology and other institutions have factored the instability into 500 ice flow simulations for Thwaites.

The scenarios together point to the eventual triggering of the instability. Even if global warming were to stop, the instability would keep pushing ice out to sea at an accelerated rate over the coming centuries.

“This study underscores the sensitivity of key Antarctic glaciers to instability,” says Paul Cutler, director of NSF’s Antarctic Glaciology Program. “The warping influence of these instabilities on the uncertainty distribution for sea level predictions is of great concern, and is a strong motivator for research that will tighten the error bars on predictions of Antarctica’s contribution to sea level rise.”

The research was funded by NSF’s Office of Polar Programs.

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

Healthy lifestyle may offset genetic risk of dementia

Living a healthy lifestyle may help offset a person’s genetic risk of dementia, according to new research.

The study, led by the University of Exeter, was published today in JAMA and presented simultaneously at the Alzheimer’s Association International Conference 2019 in Los Angeles. The research found that the risk of dementia was 32 per cent lower in people with a high genetic risk if they had followed a healthy lifestyle, compared to those who had an unhealthy lifestyle.

Participants with high genetic risk and an unfavourable lifestyle were almost three times more likely to develop dementia compared to those with a low genetic risk and favourable lifestyle.

Joint lead author Dr Elżbieta Kuźma, at the University of Exeter Medical School, said: “This is the first study to analyse the extent to which you may offset your genetic risk of dementia by living a healthy lifestyle. Our findings are exciting as they show that we can take action to try to offset our genetic risk for dementia. Sticking to a healthy lifestyle was associated with a reduced risk of dementia, regardless of the genetic risk.”

The study analysed data from 196,383 adults of European ancestry aged 60 and older from UK Biobank. The researchers identified 1,769 cases of dementia over a follow-up period of eight years. The team grouped the participants into those with high, intermediate and low genetic risk for dementia.

To assess genetic risk, the researchers looked at previously published data and identified all known genetic risk factors for Alzheimer’s disease. Each genetic risk factor was weighted according to the strength of its association with Alzheimer’s disease.
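This weighting scheme is a polygenic risk score. As a hedged sketch of the general recipe, each risk allele count is multiplied by its published effect size and summed; the variant names and weights below are invented, not those used in the study.

```python
# Toy polygenic risk score: weight each variant's allele count (0, 1 or 2)
# by its published effect size (log odds ratio). All values are invented.
weights = {"SNP_A": 0.30, "SNP_B": 0.12, "SNP_C": 0.05}  # hypothetical log-ORs
genotype = {"SNP_A": 1, "SNP_B": 0, "SNP_C": 2}          # one participant

score = sum(weights[snp] * genotype[snp] for snp in weights)
print(score)  # participants are then binned into low/intermediate/high risk
```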

To assess lifestyle, researchers grouped participants into favourable, intermediate and unfavourable categories based on their self-reported diet, physical activity, smoking and alcohol consumption. The researchers considered no current smoking, regular physical activity, healthy diet and moderate alcohol consumption as healthy behaviours. The team found that living a healthy lifestyle was associated with a reduced dementia risk across all genetic risk groups.

Joint lead author Dr David Llewellyn, from the University of Exeter Medical School and the Alan Turing Institute, said: “This research delivers a really important message that undermines a fatalistic view of dementia. Some people believe it’s inevitable they’ll develop dementia because of their genetics. However it appears that you may be able to substantially reduce your dementia risk by living a healthy lifestyle.”

The study was led by the University of Exeter in collaboration with researchers from the University of Michigan, the University of Oxford, and the University of South Australia.

Story Source:

Materials provided by University of Exeter. Note: Content may be edited for style and length.

Source

Research Headlines – Microorganisms to clean up environmental methane


Methane has a global warming impact 25 times higher than that of carbon dioxide and is the world’s second most emitted greenhouse gas. An EU-funded project is developing new strains of microorganisms that can transform methane into useful and bio-friendly materials.

Image: © vchalup #188138761, source: stock.adobe.com 2019

Methanotrophs are microorganisms that metabolise methane. They are a subject of great interest in the environmental sector, where the emission of harmful greenhouse gases is a major concern.

The EU-funded CH4BIOVAL project is working to develop new methanotroph strains that can more readily transform methane from the atmosphere into valuable products. The CH4BIOVAL team is particularly interested in the potential of methanotrophs to produce large amounts of bio-polymers known as polyhydroxyalkanoates (PHAs).

PHAs include a wide range of materials with different physical properties. Some of them are biodegradable and can be used in the production of bioplastics. The mechanical properties and biocompatibility of PHAs can be changed by modifying their surfaces or by combining them with other polymers, enzymes and inorganic materials. This makes possible an even wider range of applications.

CH4BIOVAL researchers are also interested in another methanotroph by-product called ectoine. This is a natural compound produced by several species of bacteria. It is what’s known as a compatible solute, which can be useful as a protective substance. For example, ectoine is used as an active ingredient in skincare and sun protection products, stabilising proteins and other cellular structures and protecting the skin from dryness and UV radiation.

The CH4BIOVAL project is isolating useful methanotroph strains through conventional genetic selection techniques as well as state-of-the-art bioinformatic techniques. The latter involve the detailed analysis and modification of complex biological features based on an in-depth understanding of the genetic codes of selected strains.

By closely studying the metabolic characteristics of specific methanotroph strains, CH4BIOVAL scientists are identifying key genetic modifications that can improve their performance. Thus, the project is enabling both the abatement of an important greenhouse gas and the production of useful bio-consumables.

The project received funding from the EU’s Marie Skłodowska-Curie Actions programme.

Project details

  • Project acronym: CH4BIOVAL
  • Participants: Spain (Coordinator)
  • Project N°: 750126
  • Total costs: € 170 121
  • EU contribution: € 170 121
  • Duration: September 2017 to September 2019


Source

Research Headlines – New cameras to make X-rays safer for patients


CT scans have revolutionised the fight against human illness by creating three-dimensional images of the body’s inner workings. Such scans, however, can deliver high doses of radiation. Now EU-funded researchers have built special cameras that limit radiation while delivering images vital for patient health.

Image: © Blue Planet Studio #162096679, source: stock.adobe.com 2019

Doctors have used computed tomography scans, or CT scans, to greatly improve the diagnosis and treatment of illnesses such as cancer and cardiovascular disease. But a major problem limits their use: they deliver high doses of radiation that can harm patients nearly as much as their ailment.

Enter the EU-funded VOXEL project which set out to develop an innovative way to create three-dimensional imaging. The result is special cameras that can deliver 3D images but without the high doses of radiation.

‘Reports show that in Germany in 2013, although CT scans only represented 7 % of all X-rays performed, they conveyed 60 % of the radiation that patients received,’ says Marta Fajardo, project coordinator and assistant professor at the Instituto Superior Técnico in Lisbon, Portugal. ‘We built several prototype cameras. As an alternative to CT, they enable 3D X-ray images in very few exposures, meaning less radiation for the patient.’

New perspective on 3D imaging

CT scans make images by taking thousands of flat, two-dimensional photos in order to reconstruct a 3D image. The problem is that each photo injects ionising radiation into the patient. As photos multiply, radiation levels rise.

To counter this, VOXEL’s breakthrough idea was to adapt a technique called plenoptic imaging to X-ray radiation. Plenoptic cameras capture information about the direction that light rays, including X-rays, are travelling in space, as opposed to a normal camera that captures only light intensity.

Because researchers can use the information about light direction captured by plenoptic cameras to reconstruct 3D images, there is no need to take thousands of 2D photos. Images of important structures like blood vessels can be made from a single exposure, lowering the average radiation dose significantly.
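A standard way to exploit that directional information is shift-and-sum refocusing: each sub-aperture view is shifted in proportion to its angular offset and the views are averaged, synthesising a new focal plane from a single exposure. The sketch below illustrates that general plenoptic principle only; it is not the VOXEL project's reconstruction code, and the light-field data is a random stand-in.

```python
import numpy as np

# Minimal shift-and-sum refocusing over a toy 4D light field
# L[u, v, y, x]: ray through sub-aperture (u, v) hitting pixel (y, x).
U = V = 5
H = W = 64
rng = np.random.default_rng(0)
light_field = rng.random((U, V, H, W))  # stand-in for captured data

def refocus(lf, alpha):
    """Average sub-aperture images after shifting each by alpha times
    its angular offset; different alpha values focus at different depths."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = round(alpha * (u - U // 2))
            dx = round(alpha * (v - V // 2))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

image = refocus(light_field, alpha=1.0)  # one focal plane, no extra exposure
```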

A major part of the work was using the right algorithms to manipulate the captured information. ‘First, we demonstrated that plenoptic imaging is mathematically equivalent to a limited-angle tomography problem,’ Fajardo says. ‘Then we could simply reformat plenoptic imaging as tomography data and apply image reconstruction algorithms to obtain much better images.’

But the biggest challenge remained engineering the cameras. ‘The higher the photon energy, the harder it is to manufacture the optics for a plenoptic camera,’ she says. ‘You need X-rays of different energies for different tasks.’ The solution was to develop one camera prototype that used lower-energy X-rays for tiny structures like cells and another that used higher-energy X-rays for larger objects, such as small animals or human organs.

Less radiation, healthier patients

While Fajardo is encouraged by the project’s results, work remains to be done. ‘The low-energy X-ray camera belongs to a niche market,’ she explains. ‘But the high-energy X-ray prototype has huge medical potential, although it still requires some development.’

Results from the project, which was awarded a Future Emerging Technologies grant, will soon be submitted for publication in the international science journal Nature Photonics.


Project details

  • Project acronym: VOXEL
  • Participants: Portugal (Coordinator), France, Spain, Netherlands, Italy
  • Project N°: 665207
  • Total costs: € 3 996 875
  • EU contribution: € 3 996 875
  • Duration: June 2015 to May 2019



Source

Model paves way for faster, more efficient translations of more languages

MIT researchers have developed a novel “unsupervised” language translation model — meaning it runs without the need for human annotations and guidance — that could lead to faster, more efficient computer-based translations of far more languages.

Translation systems from Google, Facebook, and Amazon require training models to look for patterns in millions of documents — such as legal and political documents, or news articles — that have been translated into various languages by humans. Given new words in one language, they can then find the matching words and phrases in the other language.

But this translational data is time consuming and difficult to gather, and simply may not exist for many of the 7,000 languages spoken worldwide. Recently, researchers have been developing “monolingual” models that make translations between texts in two languages, but without direct translational information between the two.

In a paper being presented this week at the Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a model that runs faster and more efficiently than these monolingual models.

The model leverages a metric in statistics, called Gromov-Wasserstein distance, that essentially measures distances between points in one computational space and matches them to similarly distanced points in another space. They apply that technique to “word embeddings” of two languages, which are words represented as vectors — basically, arrays of numbers — with words of similar meanings clustered closer together. In doing so, the model quickly aligns the words, or vectors, in both embeddings that are most closely correlated by relative distances, meaning they’re likely to be direct translations.

In experiments, the researchers’ model performed as accurately as state-of-the-art monolingual models — and sometimes more accurately — but much more quickly and using only a fraction of the computation power.

“The model sees the words in the two languages as sets of vectors, and maps [those vectors] from one set to the other by essentially preserving relationships,” says the paper’s co-author Tommi Jaakkola, a CSAIL researcher and the Thomas Siebel Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. “The approach could help translate low-resource languages or dialects, so long as they come with enough monolingual content.”

The model represents a step toward one of the major goals of machine translation, which is fully unsupervised word alignment, says first author David Alvarez-Melis, a CSAIL PhD student: “If you don’t have any data that matches two languages … you can map two languages and, using these distance measurements, align them.”

Relationships matter most

Aligning word embeddings for unsupervised machine translation isn’t a new concept. Recent work trains neural networks to directly match vectors between the word embeddings, or matrices, of two languages. But these methods require a lot of tweaking during training to get the alignments exactly right, which is inefficient and time consuming.

Measuring and matching vectors based on relational distances, on the other hand, is a far more efficient method that doesn’t require much fine-tuning. No matter where word vectors fall in a given matrix, the relationship between the words, meaning their distances, will remain the same. For instance, the vector for “father” may fall in completely different areas in two matrices. But vectors for “father” and “mother” will most likely always be close together.

“Those distances are invariant,” Alvarez-Melis says. “By looking at distance, and not the absolute positions of vectors, then you can skip the alignment and go directly to matching the correspondences between vectors.”

That’s where Gromov-Wasserstein comes in handy. The technique has been used in computer science for, say, helping align image pixels in graphic design. But the metric seemed “tailor made” for word alignment, Alvarez-Melis says: “If there are points, or words, that are close together in one space, Gromov-Wasserstein is automatically going to try to find the corresponding cluster of points in the other space.”
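In miniature, the relational matching works like this: build the matrix of within-language distances for each vocabulary, then search for the correspondence that makes the two matrices agree. The brute-force toy below, with invented four-word "languages" in pure NumPy, illustrates the objective only; the actual paper solves a continuous optimal-transport relaxation rather than enumerating permutations.

```python
import numpy as np
from itertools import permutations

# Language B is language A rotated and translated: absolute positions
# differ, but all within-language distances are preserved.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]])  # "words" in A
R = np.array([[0.0, -1.0], [1.0, 0.0]])                          # 90-degree rotation
Y = X @ R.T + 3.0                                                # "words" in B

def pairwise(M):
    return np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)

Dx, Dy = pairwise(X), pairwise(Y)

def gw_cost(perm):
    """Penalize word pairs whose within-language distances disagree."""
    p = np.array(perm)
    return ((Dx - Dy[np.ix_(p, p)]) ** 2).sum()

best = min(permutations(range(4)), key=gw_cost)
print(best, gw_cost(best))  # (0, 1, 2, 3) with cost ~0: correct alignment
```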

For training and testing, the researchers used a dataset of publicly available word embeddings, called FASTTEXT, with 110 language pairs. In these embeddings, and others, words that appear more and more frequently in similar contexts have closely matching vectors. “Mother” and “father” will usually be close together but both farther away from, say, “house.”

Providing a “soft translation”

The model notes vectors that are closely related yet different from the others, and assigns a probability that similarly distanced vectors in the other embedding will correspond. It’s kind of like a “soft translation,” Alvarez-Melis says, “because instead of just returning a single word translation, it tells you ‘this vector, or word, has a strong correspondence with this word, or words, in the other language.’”

An example would be in the months of the year, which appear closely together in many languages. The model will see a cluster of 12 vectors that are clustered in one embedding and a remarkably similar cluster in the other embedding. “The model doesn’t know these are months,” Alvarez-Melis says. “It just knows there is a cluster of 12 points that aligns with a cluster of 12 points in the other language, but they’re different to the rest of the words, so they probably go together well. By finding these correspondences for each word, it then aligns the whole space simultaneously.”

The researchers hope the work serves as a “feasibility check,” Jaakkola says, for applying the Gromov-Wasserstein method to machine-translation systems so they can run faster and more efficiently, and gain access to many more languages.

Additionally, a possible perk of the model is that it automatically produces a value that can be interpreted as quantifying, on a numerical scale, the similarity between languages. This may be useful for linguistics studies, the researchers say. The model calculates how distant all vectors are from one another in two embeddings, which depends on sentence structure and other factors. If vectors are all really close, they’ll score closer to 0, and the farther apart they are, the higher the score. Similar Romance languages such as French and Italian, for instance, score close to 1, while classic Chinese scores between 6 and 9 when paired with other major languages.

“This gives you a nice, simple number for how similar languages are … and can be used to draw insights about the relationships between languages,” Alvarez-Melis says.


Topics: Research, Language, Machine learning, Artificial intelligence, Data, Algorithms, Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), IDSS, Electrical Engineering & Computer Science (eecs), School of Engineering

Source

Automated system generates robotic parts for novel tasks

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.  

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.  

Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.

In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels. Actuators can be fabricated with more than 100 voxel layers. Columns can contain more than 100 voxels, with different sequences of the materials that radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
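The loop Sundaram describes amounts to a stochastic local search. A drastically scaled-down sketch follows, with a toy renderer, invented opacities, and a single target image instead of two views; the real system optimizes roughly 5.5 million voxels against ray-traced images at two angles.

```python
import numpy as np

# Toy voxel search: a column's gray tone is its mean opacity, and we
# hill-climb material assignments until rendered tones match a target.
rng = np.random.default_rng(1)
H, W, DEPTH = 8, 8, 10                       # tiny 8x8 actuator, 10 voxels deep
OPACITY = np.array([0.0, 0.5, 1.0])          # clear, hinge, magnetic (invented)

target = rng.random((H, W))                  # stand-in for a grayscale target
voxels = rng.integers(0, 3, (H, W, DEPTH))   # random initial materials

def render(v):
    return OPACITY[v].mean(axis=-1)          # crude stand-in for ray tracing

err = np.abs(render(voxels) - target).sum()
for _ in range(20000):
    y, x, z = rng.integers(H), rng.integers(W), rng.integers(DEPTH)
    old = voxels[y, x, z]
    voxels[y, x, z] = rng.integers(0, 3)     # propose a material swap
    new_err = np.abs(render(voxels) - target).sum()
    if new_err <= err:
        err = new_err                        # keep improvements
    else:
        voxels[y, x, z] = old                # revert worse proposals
print(err)
```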

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight, lift and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.



Source

Coral reefs shifting away from equatorial waters

Research News

Coral reefs shifting away from equatorial waters

Number of young corals on tropical reefs has declined

Corals settled on a reef

In the study, researchers counted the number of baby corals that settled on a reef.

July 12, 2019

Coral reefs are retreating from equatorial waters and establishing new reefs in more temperate regions, according to NSF-funded research published in the journal Marine Ecology Progress Series. Scientists found that the number of young corals on tropical reefs has declined by 85 percent – and doubled on subtropical reefs – over the last four decades.

The research was conducted in part at NSF’s Moorea Coral Reef Long-Term Ecological Research site in French Polynesia, one of 28 such NSF long-term research sites across the country and around the globe.

“Climate change seems to be redistributing coral reefs, the same way it is shifting many other marine species,” said Nichole Price, a senior research scientist at the Bigelow Laboratory for Ocean Sciences and lead author of the paper. “The clarity in this trend is stunning, but we don’t yet know whether the new reefs can support the incredible diversity of tropical systems.”

As oceans warm, cooler subtropical environments are becoming more favorable for corals than the equatorial waters where they traditionally thrived. That’s allowing drifting coral larvae to settle and grow in new regions.

The scientists, who are affiliated with more than a dozen institutions, believe that only certain types of coral are able to reach these new locations, based on how far their microscopic larvae can drift on currents before they run out of limited fat stores.

“This report addresses the important question of whether warming waters have resulted in increases in coral populations in new locations,” said David Garrison, a program director in NSF’s Division of Ocean Sciences, which funded the research. “Whether it offers hope for the sustainability of coral reefs requires more research and monitoring.”

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

New technology improves atrial fibrillation detection after stroke

A new method of evaluating irregular heartbeats outperformed the approach currently in wide use in stroke units for detecting atrial fibrillation.

The technology, called electrocardiomatrix, goes further than standard cardiac telemetry by examining large amounts of telemetry data in a way that’s so detailed it’s impractical for individual clinicians to attempt.

Co-inventor Jimo Borjigin, Ph.D., recently published the latest results from her electrocardiomatrix technology in Stroke. Among stroke patients with usable data (260 of 265), electrocardiomatrix was highly accurate in identifying those with Afib.

“We validated the use of our technology in a clinical setting, finding the electrocardiomatrix was an accurate method to determine whether a stroke survivor had an Afib,” says Borjigin, an associate professor of neurology and molecular and integrative physiology at Michigan Medicine.

A crucial metric

After a stroke, neurologists are tasked with identifying which risk factors may have contributed in order to do everything possible to prevent another event.

That makes detecting irregular heartbeat an urgent concern for these patients, explains first author Devin Brown, M.D., professor of neurology and a stroke neurologist at Michigan Medicine.

“Atrial fibrillation is a very important and modifiable risk factor for stroke,” Brown says.

Importantly, the electrocardiomatrix identification method was highly accurate for the 212 patients who did not have a history of Afib, Borjigin says. She says this group is most clinically relevant, because of the importance of determining whether stroke patients have previously undetected Afib.

When a patient has Afib, their irregular heartbeat can lead to blood collecting in their heart, which can form a stroke-causing clot. Many different blood thinners are on the market today, making it easier for clinicians to get their patients on an anticoagulant they’ll take as directed.

The most important part is determining Afib’s presence in the first place.

Much-needed improvement

Brown says challenges persist in detecting intermittent Afib during stroke hospitalization.

“More accurate identification of Afib should translate into more strokes prevented,” she says.

Once hospitalized in the stroke unit, patients are typically placed on continuous heart rhythm monitoring. Stroke neurologists want to detect possible intermittent Afib that initial monitoring like an electrocardiogram, or ECG, would have missed.

Because a physician can’t reasonably review every single heartbeat, current monitoring technology flags heart rates that are too high, Brown says. The neurologist then reviews these flagged events, which researchers say could lead to some missed Afib occurrences, or false positives in patients with different heart rhythm issues.

In contrast, Borjigin’s electrocardiomatrix converts two-dimensional signals from the ECG into a three-dimensional heatmap that allows for rapid inspection of all collected heartbeats. Borjigin says this method permits fast, accurate and intuitive detection of cardiac arrhythmias, and minimizes both false positive and false negative detections.
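The published method is more involved, but the core move (cutting the long ECG trace into beat-aligned rows and viewing the stack as a heatmap) can be sketched roughly as follows. The fixed window, synthetic signal, and peak positions are illustrative stand-ins, not the patented algorithm.

```python
import numpy as np
import matplotlib.pyplot as plt

def ecg_matrix(signal, r_peaks, window=300):
    """Stack a fixed window around each detected R peak so that every
    row is one heartbeat; regular beats line up as smooth vertical
    bands, while arrhythmic beats stand out as broken rows."""
    lo, hi = window // 3, 2 * window // 3
    rows = [signal[p - lo : p + hi] for p in r_peaks
            if lo <= p <= len(signal) - hi]
    return np.vstack(rows)

# Toy demo: a synthetic "ECG" with one spike every 250 samples
beat = np.concatenate([np.zeros(240), np.hanning(10)])
signal = np.tile(beat, 40)
r_peaks = np.arange(245, len(signal) - 200, 250)

ecm = ecg_matrix(signal, r_peaks)
plt.imshow(ecm, aspect="auto", cmap="gray_r")   # rows: beats; columns: time
plt.xlabel("time within beat")
plt.ylabel("beat number")
plt.show()
```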

“We originally noted five false positives and five false negatives in the study,” Borjigin says, “but expert review actually found the electrocardiomatrix was correct instead of the clinical documentation we were comparing it to.”

More applications

The Borjigin lab also recently demonstrated the usefulness of the electrocardiomatrix to differentiate between Afib and atrial flutter. In addition, the lab has shown the ability of electrocardiomatrix to capture reduced heart-rate variability in critical care patients.

Borjigin says she envisions electrocardiomatrix technology one day assisting in the detection of all cardiac arrhythmias, online or offline, alongside standard ECG review.

“I believe that sooner or later, electrocardiomatrix will be used in clinical practice to benefit patients,” she says.

Disclosure: Borjigin is the co-inventor of electrocardiomatrix, for which she holds a patent in Japan (6348226); and for which the Regents of the University of Michigan hold the patent in the United States (9918651).

Source