In search of an undersea kelp forest’s missing nitrogen

Research News


Ocean plants need nutrients to grow

California spiny lobster

Lobsters and sea stars are important contributors to the kelp forest nitrogen cycle.

June 24, 2019

Plants need nutrients to grow. So scientists were surprised to learn that giant kelp maintains its impressive growth rates year-round, even in summer and early fall when ocean currents along the California coast stop delivering nutrients. Clearly something else is nourishing the kelp, but what?

A team of NSF-supported scientists at UC Santa Barbara has made a breakthrough in identifying one of those sources. Their research suggests that the invertebrate residents of kelp forests provide at least some of the nutrients the giant algae need. The findings appear in the journal Global Change Biology.

To sustain growth rates, kelp requires many nutrients, especially nitrogen, but changes in ocean currents reduce the availability of such nutrients each year beginning in May. As a result, kelp forests face a potential shortage of nitrogen just as long summer days are poised to fuel algal growth, said lead author Joey Peters.

Peters and his co-authors, UC Santa Barbara marine ecologists Dan Reed and Deron Burkepile, saw the local community of sea-bottom invertebrates as a likely additional nitrogen source. Indeed, it turned out that these invertebrates, especially lobsters and sea stars, are an important part of the nitrogen cycle in coastal ecosystems. Waste from the invertebrates is a consistent component of the “missing nitrogen.”

The scientists made the discovery thanks to nearly two decades of data from the Santa Barbara Coastal Long-Term Ecological Research site, part of a network of sites funded by NSF to conduct long-term ecological research.

“This study reveals how environmental change can affect subtle ecosystem dynamics in kelp forests,” said David Garrison, a program director in NSF’s Division of Ocean Sciences, which funded the research. “This work would only be possible where long-term studies are underway.”

—  NSF Public Affairs, (703) 292-8070


A further step towards reliable quantum computation

Quantum computation has been drawing the attention of many scientists because of its potential to outperform the capabilities of standard computers for certain tasks. For the realization of a quantum computer, one of the most essential features is quantum entanglement. This describes an effect in which several quantum particles are interconnected in a complex way. If one of the entangled particles is influenced by an external measurement, the state of the other entangled particle changes as well, no matter how far apart they may be from one another. Many scientists are developing new techniques to verify the presence of this essential quantum feature in quantum systems. Efficient methods have been tested for systems containing only a few qubits, the basic units of quantum information. However, the physical implementation of a quantum computer would involve much larger quantum systems. Yet, with conventional methods, verifying entanglement in large systems becomes challenging and time-consuming, since many repeated experimental runs are required.

Building on a recent theoretical scheme, a team of experimental and theoretical physicists from the University of Vienna and the ÖAW led by Philip Walther and Borivoje Dakić, together with colleagues from the University of Belgrade, successfully demonstrated that entanglement verification can be undertaken in a surprisingly efficient way and in a very short time, thus making this task applicable also to large-scale quantum systems. To test their new method, they experimentally produced a quantum system composed of six entangled photons. The results show that only a few experimental runs suffice to confirm the presence of entanglement with extremely high confidence, up to 99.99%.

The verified method can be understood in a rather simple way. After a quantum system has been generated in the laboratory, the scientists carefully choose specific quantum measurements which are then applied to the system. The results of these measurements lead to either confirming or denying the presence of entanglement. “It is somehow similar to asking certain yes-no questions to the quantum system and noting down the given answers. The more positive answers are given, the higher the probability that the system exhibits entanglement,” says Valeria Saggio, first author of the publication in Nature Physics. Surprisingly, the amount of needed questions and answers is extremely low. The new technique proves to be orders of magnitude more efficient compared to conventional methods.
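The statistics behind those yes-no questions can be sketched with a simple bound. Assuming, purely as an illustration and not as the paper's exact protocol, that an unentangled state can pass any single randomly chosen test with probability at most p < 1, then k consecutive "yes" answers raise the confidence in entanglement to at least 1 − p^k:

```python
# Illustrative bound for probabilistic entanglement verification.
# Assumption (not from the paper): a separable state passes each
# randomly chosen test with probability at most p < 1.

import math

def confidence(k, p):
    """Lower bound on confidence after k consecutive passed tests."""
    return 1.0 - p ** k

def runs_needed(target, p):
    """Smallest k with confidence(k, p) >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(p))

print(runs_needed(0.9999, 0.5))  # 14
```

With p = 0.5, fourteen runs already push the confidence past 99.99%, which is why so few repetitions can suffice.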

Moreover, in certain cases the number of questions needed is even independent of the size of the system, thus confirming the power of the new method for future quantum experiments.

While the physical implementation of a quantum computer is still facing various challenges, new advances like efficient entanglement verification could move the field a step forward, thus contributing to the progress of quantum technologies.

Story Source:

Materials provided by University of Vienna. Note: Content may be edited for style and length.


Research Headlines – Smart factory robots with collaborative skills


An EU-funded project is developing a new industrial shop floor concept comprising mobile robots that can perceive their surroundings and collaborate with people. The expected efficiency boost could benefit the EU economy and society in general.



Industrial robots save time and money and deliver key benefits in the areas of productivity and workplace safety. However, most industrial robots are quite limited in their capabilities. It is generally acknowledged that the full potential represented by leading-edge robotics technologies is not being exploited in today’s European production plants.

The aim of the EU-funded THOMAS project is to create a dynamically reconfigurable shop floor, utilizing autonomous, mobile, two-armed robots. The robots are able to perceive the environment around them and, through reasoning, cooperate with each other and with other production resources – including human operators.

Manufacturers know that robot performance is highly accurate and very consistent over time, yet most robots have trouble handling unexpected events. When it comes to introducing new or modified products, the efficiency of the robotic, serial-production model is quickly compromised. Production equipment that cannot support variable operations in dynamic environments needs to be modified to carry out new tasks or replaced altogether.

Mobile multi-taskers

For the THOMAS project, mobility is a key shop floor feature. Mobile and highly dextrous robots can navigate in the environment autonomously and perform multiple operations. These robots perceive the environment using their own sensors but also through collaborative perception, combining the sensors of multiple robots.

As multiple robots communicate over a common network, they can automatically adjust their behaviour to share or reallocate tasks. New tasks can be programmed and executed quickly and automatically.
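As a hypothetical sketch of what such networked reallocation might look like (the THOMAS project's actual scheduling logic is not described here), a greedy allocator can assign each incoming task to whichever robot will be free soonest:

```python
# Hypothetical task-reallocation sketch, not the THOMAS implementation:
# each robot advertises when it becomes free, and every task goes to
# the robot with the earliest availability.

from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    busy_until: float = 0.0  # time at which the robot becomes free

def allocate(tasks, robots):
    """Assign each (task, duration) pair to the soonest-free robot."""
    plan = []
    for task, duration in tasks:
        robot = min(robots, key=lambda r: r.busy_until)
        start = robot.busy_until
        robot.busy_until = start + duration
        plan.append((task, robot.name, start))
    return plan

robots = [Robot("R1"), Robot("R2")]
tasks = [("pick", 3.0), ("place", 2.0), ("inspect", 1.0)]
print(allocate(tasks, robots))
```

In this toy model, adding a new task only requires appending it to the shared list; the next allocation pass absorbs it automatically.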

Safe human-robot collaboration is also a key feature of the THOMAS concept, where robots possessing true cognitive abilities can detect humans in the work space and understand their intentions.

THOMAS is validating its new concept in the automotive and aeronautics sectors. Both are highly valuable to the European economy, and any efficiency gains the project delivers there could have a positive impact on European society as a whole.

Project details

  • Project acronym: THOMAS
  • Participants: Greece (Coordinator), France, Germany, Spain, Luxembourg
  • Project N°: 723616
  • Total costs: € 5 624 225
  • EU contribution: € 4 510 700
  • Duration: October 2016 to September 2020



Events – Young Researchers Event: Ebrains – A platform for collaboration in digital neuroscience – 9 July 2019, Belgrade, Serbia

The human brain is a multi-level and highly complex system that produces, processes and transmits information in an incomparable manner.
Researchers and scientists from all over the world strive to decode the mechanisms underlying this unique system.
The European Human Brain Project plays a pioneering role in neuroscience research, uniting experts from various fields with the aim of building a collaborative platform for computational neuroscience.

This event is open to the entire scientific community but especially targets early-career researchers.

The programme provides an overview of interdisciplinary efforts towards collaboration for the advance of digital neuroscience and offers ample opportunities for participants to exchange information and knowledge with peers as well as renowned experts.

Registration deadline: 14 June 2019


3 Questions: What is linguistics?

For decades, MIT has been widely held to have one of the best linguistics programs in the world. But what is linguistics and what does it teach us about human language? To learn more about the ways linguists help make a better world, SHASS Communications recently spoke with David Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics at MIT. A Margaret MacVicar Faculty Fellow (MIT’s highest undergraduate teaching award), Pesetsky focuses his research on syntax and the implications of syntactic theory for language acquisition, semantics, phonology, and morphology (word-structure). He is a fellow of the American Association for the Advancement of Science and a fellow of the Linguistic Society of America.

In collaboration with Pesetsky, SHASS Communications also developed a companion piece to his interview, titled “The Building Blocks of Linguistics.” This brisk overview of basic information about the field includes entries such as: “Make Your Own Personal Dialect Map,” “Know Your Linguistics Subfields,” and “Top 10 Ways Linguists Help Make a Better World.”

Q: Linguistics, the science of language, is often a challenging discipline for those outside the field to understand. Can you comment on why that might be?

A: Linguistics is the field that tries to figure out how human language works — for example: how the languages of the world differ, how they are the same, and why; how children acquire language; how languages change over time and why; how we produce and understand language in real time; and how language is processed by the brain.

These are all very challenging questions, and the linguistic ideas and hypotheses about them are sometimes intricate and highly structured. Still, I doubt that linguistics is intrinsically more daunting than other fields explored at MIT — though it is certainly just as exciting.

The problems that linguists face in communicating about our discipline mostly arise, I think, from the absence of any foundational teaching about linguistics in our elementary and middle schools. This means that the most basic facts about language — including the building blocks of language and how they combine — remain unknown, even to most well-educated people.

While it’s a challenge for scholars in other major fields to explain cutting-edge discoveries to others, they don’t typically have to start by explaining first principles. A biologist or astronomer speaking to educated adults, for example, can assume they know that the heart pumps blood and that the Earth goes around the sun. 

Linguistics has equivalent facts to those examples, among them: how speech sounds are produced by the vocal tract, and the hierarchical organization of words in a sentence. Our research builds on these fundamentals when phonologists study the complex ways in which languages organize their speech sounds, for example; or when semanticists and syntacticians (like me) study how the structure of a sentence constrains its meaning.

Unlike our physicist or biologist colleagues, however, we really have to start from scratch each time we discuss our work. That is a challenge that we will continue to face for a while yet, I fear. But there is one silver lining: watching the eyes of our students and colleagues grow wide with excitement when they do learn what’s been going on in their own use of language — in their own linguistic heads — all these years. This reliable phenomenon makes 24.900, MIT’s very popular introductory linguistics undergraduate class, one of my favorite classes to teach. (24.900 is also available via MIT OpenCourseWare.)

Q: Can you describe the kinds of questions linguistic scholars explore and why they are important?

A: Linguists study the puzzles of human language from just about every possible angle — its form, its meanings, sound, gesture, change over time, acquisition by children, processing by the brain, role in social interaction, and much more. Here at MIT Linguistics, our research tends to focus on the structural aspects of language, the logic by which its inner workings are organized.

Our methodologies are diverse. Many of us work closely with speakers of other languages not only to learn about the languages themselves, but also to test hypotheses about language in general. There are also active programs of laboratory research in our department, on language acquisition in children, the online processing of semantics and syntax, phonetics, and more.

My own current work focuses on a fact about language that looks like the most minor of details — until you learn that the very same fact shows up in language after language, all around the globe!

The fact is the strange, obligatory shrinkage in the size of a clause when its subject is extracted to another position in the sentence. In English, for example, the subordinating conjunction “that” — which is normally used to introduce a sentence embedded in a larger sentence (linguists call it a “complementizer”) — is omitted when the subject is questioned.

For example, we say “Who are you sure will smile?” not “Who are you sure that will smile?”

Something very similar happens in languages all over the globe. We find it in Bùlì, for example, a language of Ghana; and in dialects of Arabic; and in the Mayan language Kaqchikel. Adding to the significance of this finding: MIT alumnus Colin Phillips PhD ’96 has shown that, in English at least, this language protocol is acquired by children without any statistically usable evidence for it from the speech they hear around them.

A phenomenon like this one, found all over the globe and clearly not directly learned from experience, cannot be an accident — but must be a by-product of some deeper general property of the human language faculty, and of the human mind. I am now developing and testing a hypothesis about what this deeper property might be.

This example also points to one reason linguistics research is exciting. Language is the defining property of our species and to understand how language works is to better understand ourselves. Linguistic research sheds light on many dimensions of the human experience.

And yet, for all the great advances that my field has made, there are so many fundamental aspects of the human language capacity that we do not properly understand yet. I do not believe that genuine progress can be made on a whole host of language-related problems until we broaden and deepen our understanding of how language works — whether the problem is teaching computers to understand us, teaching children to read, or figuring out the most effective way to learn a second language.

Q: What is the historical relationship between research in linguistics and artificial intelligence (AI), and what roles might linguistics scholarship play in the next era of AI research?

A: The relation between linguistic research and language-related research on AI has been less close than one might expect. One reason might be the different goals of the scholars involved. Historically, the questions about language viewed as most urgent by linguists and AI researchers have not been the same. Consequently, language-related AI has tended to favor end-runs around the findings of linguistics concerning how human language works.

In recent years, however, the tide has been turning, and one sees more and more interaction and collaboration between the two domains of research, including here at MIT. Under the aegis of the MIT Quest for Intelligence, for example, I’ve been meeting regularly with a colleague from Electrical Engineering and Computer Science and a colleague from Brain and Cognitive Sciences to explore ways in which research on syntax can inform machine learning for languages that lack extensive bodies of textual material — a precondition for training existing kinds of systems.

A child acquiring language does this without the aid of the thousands of annotated sentences that machine systems require. An intriguing question, then, is whether we can build machines with some of the capabilities of human children, machines that might not need such aids.

I am looking forward to seeing what progress we can make together.

Story prepared by MIT SHASS Communications


A chemical approach to imaging cells from the inside

The following press release was issued today by the Broad Institute of MIT and Harvard.

A team of researchers at the McGovern Institute and Broad Institute of MIT and Harvard has developed a new technique for mapping cells. The approach, called DNA microscopy, shows how biomolecules such as DNA and RNA are organized in cells and tissues, revealing spatial and molecular information that is not easily accessible through other microscopy methods. DNA microscopy also does not require specialized equipment, enabling large numbers of samples to be processed simultaneously.

“DNA microscopy is an entirely new way of visualizing cells that captures both spatial and genetic information simultaneously from a single specimen,” says first author Joshua Weinstein, a postdoctoral associate at the Broad Institute. “It will allow us to see how genetically unique cells — those comprising the immune system, cancer, or the gut, for instance — interact with one another and give rise to complex multicellular life.”

The new technique is described in Cell. Aviv Regev, core institute member and director of the Klarman Cell Observatory at the Broad Institute and professor of biology at MIT, and Feng Zhang, core institute member of the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, and the James and Patricia Poitras Professor of Neuroscience at MIT, are co-authors. Regev and Zhang are also Howard Hughes Medical Institute Investigators.

The evolution of biological imaging

In recent decades, researchers have developed tools to collect molecular information from tissue samples, data that cannot be captured by either light or electron microscopes. However, attempts to couple this molecular information with spatial data — to see how it is naturally arranged in a sample — are often machinery-intensive, with limited scalability.

DNA microscopy takes a new approach to combining molecular information with spatial data, using DNA itself as a tool.

To visualize a tissue sample, researchers first add small synthetic DNA tags, which latch on to molecules of genetic material inside cells. The tags are then replicated, diffusing in “clouds” across cells and chemically reacting with each other, further combining and creating more unique DNA labels. The labeled biomolecules are collected, sequenced, and computationally decoded to reconstruct their relative positions and a physical image of the sample.

The interactions between these DNA tags enable researchers to calculate the locations of the different molecules — somewhat analogous to cell phone towers triangulating the locations of different cell phones in their vicinity. Because the process only requires standard lab tools, it is efficient and scalable.
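As a rough illustration of that cell-phone-tower analogy, the decoding step can be mimicked with classical multidimensional scaling: pairwise interaction strengths are converted into distance estimates, from which relative positions are recovered. The Gaussian decay model and all names below are assumptions made for this sketch, not details taken from the paper.

```python
# Toy position-reconstruction sketch, loosely analogous to DNA
# microscopy's decoding step. Assumption: interaction counts decay
# as a Gaussian of distance; positions are then recovered (up to
# rotation, reflection, and translation) with classical MDS.

import numpy as np

rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 10, size=(5, 2))  # 5 molecules in 2-D

# Simulated interaction counts: closer pairs react more often.
d = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)
counts = np.exp(-(d ** 2) / 20.0)

# Invert the decay model to estimate distances, then classical MDS.
d_est = np.sqrt(-20.0 * np.log(np.clip(counts, 1e-12, 1.0)))
n = d_est.shape[0]
J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
B = -0.5 * J @ (d_est ** 2) @ J                  # Gram matrix
w, v = np.linalg.eigh(B)                         # ascending eigenvalues
coords = v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))

# Relative geometry is recovered: pairwise distances match the originals.
recovered = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(np.allclose(recovered, d, atol=1e-6))
```

The recovered coordinates carry no absolute frame, which mirrors the point in the quote below: the image emerges from relationships between molecules rather than from a microscope-defined coordinate system.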

In this study, the authors demonstrate the ability to molecularly map the locations of individual human cancer cells in a sample by tagging RNA molecules. DNA microscopy could be used to map any group of molecules that will interact with the synthetic DNA tags, including cellular genomes, RNA, or proteins with DNA-labeled antibodies, according to the team.

“DNA microscopy gives us microscopic information without a microscope-defined coordinate system,” says Weinstein. “We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does. We’re excited to use this tool in expanding our understanding of genetic and molecular complexity.”

Funding for this study was provided by the Simons Foundation, Klarman Cell Observatory, NIH (R01HG009276, 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), New York Stem Cell Foundation, Paul G. Allen Family Foundation, Vallee Foundation, the Poitras Center for Affective Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, J. and P. Poitras, and R. Metcalfe.

The authors have applied for a patent on this technology.

Topics: Research, RNA, DNA, Broad Institute, Biological engineering, Brain and cognitive sciences, McGovern Institute, School of Science, School of Engineering, Imaging, Microscopy


Fishing communities from Maine to North Carolina may need to change catch species

Research News


Fishing communities require new approaches to fishing, studies find

fishing harbor at Matinicus Isle, Maine

The fishing harbor at Matinicus Isle, Maine. Some communities risk losing current fishing options.

June 19, 2019

Most fishing communities from Maine to North Carolina are projected to face declining fishing options unless they adapt to climate change by catching different species or fishing in different areas, according to a study in the journal Nature Climate Change.

Some Maine fishing communities are at greatest risk of losing their current fishing options, according to the work by scientists at Rutgers University and other institutions.

Communities like Portland, Maine, are on track to lose out, while others like Mattituck, New York, or Sandwich, Massachusetts, may do better as waters warm, the scientists said. Adapting to climate change for many communities will require new approaches to fishing, the researchers believe.

Fishing has been the economic and cultural lifeblood for many coastal towns and cities along the Northeast coast, in some cases for hundreds of years. But climate change is expected to have a major impact on the distribution, abundance and diversity of marine species worldwide, the study notes.

The researchers used 13 global climate models to project how ocean temperatures are likely to change. They also looked at whether the species caught by fishing communities are likely to become more, or less, abundant in the ocean regions where they typically fish.
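A toy illustration of the ensemble idea, with made-up numbers rather than the study's actual projections: average the models' projected temperature changes for a region and compare the result against a species' thermal tolerance.

```python
# Illustrative ensemble sketch only; the 13 values are synthetic and
# the thermal-tolerance rule is a simplification of the study's method.

import numpy as np

rng = np.random.default_rng(1)
# 13 hypothetical model projections of warming (deg C) for one region
projections = rng.normal(loc=2.0, scale=0.5, size=13)

ensemble_mean = projections.mean()
ensemble_spread = projections.std()

# Flag a species whose upper thermal tolerance would be exceeded.
current_temp, tolerance_max = 12.0, 13.5
at_risk = current_temp + ensemble_mean > tolerance_max
print(round(ensemble_mean, 2), round(ensemble_spread, 2), bool(at_risk))
```

The spread across models is as informative as the mean: it bounds how confident a community can be in any single scenario.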

“This new approach to mathematical modeling — using vast data sets of fishing practices, fish populations, and projected ocean change — can build scenarios that enable fishing communities to think about and plan for adapting to possible futures,” says Mike Sieracki, a program director in NSF’s Division of Ocean Sciences, which funded the research.

—  NSF Public Affairs, (703) 292-8070


More than 5 million cancer survivors experience chronic pain, twice the rate of the general population

More than 5 million cancer survivors in the United States experience chronic pain, almost twice the rate in the general population, according to a study published by Mount Sinai researchers in JAMA Oncology in June.

Researchers used the National Health Interview Survey, a large national representative dataset from the U.S. Centers for Disease Control and Prevention, to estimate the prevalence of chronic pain among cancer survivors. They found that about 35 percent of cancer survivors have chronic pain, representing 5.39 million patients in the United States.
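For a rough sanity check of those figures (both are rounded, so the result is only approximate), the implied size of the survivor population follows directly:

```python
# Back-of-the-envelope check of the reported numbers: if about 35% of
# cancer survivors corresponds to 5.39 million people, the implied
# survivor population is 5.39 / 0.35, or roughly 15.4 million.

chronic_pain_share = 0.35
chronic_pain_count_millions = 5.39
survivors_millions = chronic_pain_count_millions / chronic_pain_share
print(round(survivors_millions, 1))  # 15.4
```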

“This study provided the first comprehensive estimate of chronic pain prevalence among cancer survivors,” said corresponding author Changchuan Jiang, MD, MPH, a medical resident at Mount Sinai St. Luke’s and Mount Sinai West. “These results highlight the important unmet needs of pain management in the large and growing cancer survivorship community.”

Specific types of cancer — such as bone, kidney, throat, and uterine — also had a higher incidence of chronic and severe pain that restricted daily activity. Chronic pain was more prevalent in survivors who were unemployed and had inadequate insurance.

Chronic pain is one of the most common long-term effects of cancer treatment and has been linked with an impaired quality of life, lower adherence to treatment, and higher health care costs. This study is important because a better understanding of the epidemiology of pain in cancer survivors can help inform future health care educational priorities and policies.

Researchers from the American Cancer Society, Memorial Sloan Kettering Cancer Center, and the University of Virginia were part of the study team.

Story Source:

Materials provided by The Mount Sinai Hospital / Mount Sinai School of Medicine. Note: Content may be edited for style and length.


Research Headlines – Helping developing countries preserve their fish hauls


Without access to modern technology like refrigeration, people in developing countries often have to throw away a significant proportion of the fish they catch. EU-funded researchers have delivered innovative, low-cost solutions to help such communities around the world make their fish stocks go further.



In a bid to bolster food security and help tackle poverty, the EU-funded SECUREFISH project aimed to improve the preservation of fish supplies, utilise waste fish matter and develop products that could be sold in markets.

This resulted in the development of a range of technologies based on traditional approaches that harnessed renewable energy sources to keep costs down.

The project also led to processed goods, with research revealing that product shelf-life can be extended by using natural antioxidants sourced from local plants such as water hyacinth, which is a nuisance to fishermen working the waters of Africa’s Lake Victoria. Packaging options were also assessed, with vacuum packaging offering the greatest potential.

‘Our technologies were successfully tested and delivered quality fish products to local and regional markets,’ says SECUREFISH project coordinator Nazlin Howell of the University of Surrey in the UK.

Creative fixes

The technologies developed included a hybrid wind and solar tunnel drier, a modified solar energy-assisted extruder and an atmospheric fast-freeze drier. These innovations were used to dry and preserve whole fish and fillets in Kenya, Ghana, Namibia, India and Argentina.

In Kenya, for example, Kipini fisherwomen produced dried fish and dried fish fillets using the solar tunnel drier, says Howell, adding that the group went on to sell their products in local supermarkets.

Using the extruder, the project tested fish and other foodstuffs such as flour, chickpeas and rice together to make a range of processed goods, including soups and snacks.

Consumer surveys and testing were done to ensure products would be popular with the local population. SECUREFISH meetings also provided information on nutrition and food safety to local communities.

In addition, the project developed a way of recovering nutrients from fish-processing waste-water and fish skin. This breakthrough can be used to recover oils rich in Omega-3 fatty acids and proteins that are essential for a healthy diet.

Quality control

The project developed a quality management tool (QMT) covering issues such as food safety and risk assessment, traceability, nutritional quality and the carbon footprint of production processes.

The QMT matches European standards and seeks to ensure best practice in handling and storing food products. It also guarantees high product quality at a sustainable cost. While the tool was designed to cover the production of dried, extruded and frozen fish products in the partner countries, it can also be used for other types of food.

‘SECUREFISH food chains involved food manufacturers, processors, retailers and consumers,’ says Howell. ‘We tested the new processing and quality management tools in real developing country situations and in collaboration with SMEs. This was an innovative step to take and helped us develop products that have a market value and are of direct benefit to consumers and processors.’

Project details

  • Project acronym: SECUREFISH
  • Participants: United Kingdom (Coordinator), Netherlands, Portugal, Argentina, Ghana, India, Kenya, Malaysia, Namibia
  • Project N°: 289282
  • Total costs: € 3 965 592
  • EU contribution: € 2 997 422
  • Duration: January 2012 to December 2014



Events – PECUNIA Satellite Workshop – registration open – 17 July 2019, Basel, Switzerland

The H2020 project PECUNIA (ProgrammE in Costing, resource use measurement and outcome valuation for Use in multi-sectoral National and International health economic evaluAtions) is organising an interactive workshop on validating the methodological approach developed in the course of the project and on drawing up a roadmap for the further work plan. Health economists with interest and expertise in economic evaluations and costing studies are invited to join this Satellite Workshop.

PECUNIA is a collaborative effort to develop standardised multi-sectoral costing and outcome assessment methods and tools for health economic evaluations that are harmonized across different countries and economic sectors.

It aims to tackle the healthcare challenges of an ever-growing and rapidly ageing population in the EU by developing new standardised, harmonised and validated methods and tools for the assessment of costs and outcomes in European healthcare systems. 

Comparing and exploiting data across different countries and sectors, PECUNIA aims to provide directly comparable solutions to improve chronic and mental healthcare in all EU health systems.



Can science writing be automated?

The work of a science writer, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
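The vector-rotation idea can be sketched in a few lines of NumPy. This is a toy illustration of the general mechanism, not the authors' RUM implementation: the word embeddings and the fixed "target" direction below are random stand-ins for quantities the real model learns.

```python
import numpy as np

def rotation_matrix(a, b):
    """Orthogonal matrix rotating unit(a) onto unit(b) in the plane
    they span, leaving the orthogonal complement of that plane fixed."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    c = float(a @ b)               # cos(theta)
    w = b - c * a                  # component of b orthogonal to a
    s = np.linalg.norm(w)          # sin(theta)
    if s < 1e-12:                  # (anti)parallel vectors: no unique plane
        return np.eye(len(a))
    w = w / s
    return (np.eye(len(a))
            + s * (np.outer(w, a) - np.outer(a, w))
            + (c - 1.0) * (np.outer(a, a) + np.outer(w, w)))

rng = np.random.default_rng(0)
d = 16                                    # dimensionality of the embedding space
words = "kelp forests need nitrogen".split()
embed = {w: rng.standard_normal(d) for w in words}   # made-up word vectors
target = rng.standard_normal(d)           # fixed stand-in for a learned target

state = rng.standard_normal(d)
norm0 = np.linalg.norm(state)
for word in words:
    # each word contributes an orthogonal rotation, so the state is
    # swung around in d-dimensional space but never stretched or shrunk
    state = rotation_matrix(embed[word], target) @ state
```

Because every update is an orthogonal rotation, the state vector keeps its length no matter how long the word sequence is — the geometric intuition behind why rotational updates resist the vanishing and exploding signals that plague long sequences.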

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić, recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company DeepMind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks: relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and the model proposed in this paper are released as open source on GitHub, many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

Topics: Research, Physics, Artificial intelligence, Machine learning, Language, Algorithms, Knight fellowship, Science writing, Science communications, Technology and society, National Science Foundation (NSF), School of Science, School of Humanities, Arts, and Social Sciences


“Nanoemulsion” gels offer new way to deliver drugs through the skin

MIT chemical engineers have devised a new way to create very tiny droplets of one liquid suspended within another liquid, known as nanoemulsions. Such emulsions are similar to the mixture that forms when you shake an oil-and-vinegar salad dressing, but with much smaller droplets. Their tiny size allows them to remain stable for relatively long periods of time.

The researchers also found a way to easily convert the liquid nanoemulsions to a gel when they reach body temperature (37 degrees Celsius), which could be useful for developing materials that can deliver medication when rubbed on the skin or injected into the body.

“The pharmaceutical industry is hugely interested in nanoemulsions as a way of delivering small molecule therapeutics. That could be topically, through ingestion, or by spraying into the nose, because once you start getting into the size range of hundreds of nanometers you can permeate much more effectively into the skin,” says Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering and the senior author of the study.

In their new study, which appears in the June 21 issue of Nature Communications, the researchers created nanoemulsions that were stable for more than a year. To demonstrate the emulsions’ potential usefulness for delivering drugs, the researchers showed that they could incorporate ibuprofen into the droplets.

Seyed Meysam Hashemnejad, a former MIT postdoc, is the first author of the study. Other authors include former postdoc Abu Zayed Badruddoza, L’Oréal senior scientist Brady Zarket, and former MIT summer research intern Carlos Ricardo Castaneda.

Energy reduction

One of the easiest ways to create an emulsion is to add energy — by shaking your salad dressing, for example, or using a homogenizer to break down fat globules in milk. The more energy that goes in, the smaller the droplets, and the more stable they are.

Nanoemulsions, which contain droplets with a diameter of 200 nanometers or smaller, are desirable not only because they are more stable but also because they have a higher ratio of surface area to volume, which allows them to carry larger payloads of active ingredients such as drugs or sunscreens.

Over the past few years, Doyle’s lab has been working on lower-energy strategies for making nanoemulsions, which could make the process easier to adapt for large-scale industrial manufacturing.

Detergent-like chemicals called surfactants can speed up the formation of emulsions, but many of the surfactants that have previously been used for creating nanoemulsions are not FDA-approved for use in humans. Doyle and his students chose two surfactants that are uncharged, which makes them less likely to irritate the skin, and are already FDA-approved as food or cosmetic additives. They also added a small amount of polyethylene glycol (PEG), a biocompatible polymer used for drug delivery that helps the solution to form even smaller droplets, down to about 50 nanometers in diameter.

“With this approach, you don’t have to put in much energy at all,” Doyle says. “In fact, a slow stirring bar almost spontaneously creates these super small emulsions.”

Active ingredients can be mixed into the oil phase before the emulsion is formed, so they end up loaded into the droplets of the emulsion.

Once they had developed a low-energy way to create nanoemulsions, using nontoxic ingredients, the researchers added a step that would allow the emulsions to be easily converted to gels when they reach body temperature. They achieved this by incorporating heat-sensitive polymers called poloxamers, or Pluronics, which are already FDA-approved and used in some drugs and cosmetics.

Pluronics contain three “blocks” of polymers: The outer two regions are hydrophilic, while the middle region is slightly hydrophobic. At room temperature, these molecules dissolve in water but do not interact much with the droplets that form the emulsion. However, when heated, the hydrophobic regions attach to the droplets, forcing them to pack together more tightly and creating a jelly-like solid. This process happens within seconds of heating the emulsion to the necessary temperature.

MIT chemical engineers have devised a way to convert liquid nanoemulsions into solid gels. These gels (red) form almost instantaneously when drops of the liquid emulsion enter warm water.

Tunable properties

The researchers found that they could tune the properties of the gels, including the temperature at which the material becomes a gel, by changing the size of the emulsion droplets and the concentration and structure of the Pluronics that they added to the emulsion. They can also alter traits such as elasticity and yield stress, which is a measure of how much force is needed to spread the gel.

Doyle is now exploring ways to incorporate a variety of active pharmaceutical ingredients into this type of gel. Such products could be useful for delivering topical medications to help heal burns or other types of injuries, or could be injected to form a “drug depot” that would solidify inside the body and release drugs over an extended period of time. These droplets could also be made small enough that they could be used in nasal sprays for delivering inhalable drugs, Doyle says.

For cosmetic applications, this approach could be used to create moisturizers or other products that are more shelf-stable and feel smoother on the skin.

The research was funded by L’Oréal.


NSF-supported Frontera named 5th fastest supercomputer in the world

Research News

NSF-supported Frontera named 5th fastest supercomputer in the world

Leadership-class system tops all academic supercomputers

view between two rows of Frontera servers in the TACC data center

A view between two rows of Frontera servers in the TACC data center.

June 19, 2019

The NSF-supported Frontera supercomputer at the Texas Advanced Computing Center earned the number-five spot on the Top500 list, which ranks the world’s most powerful non-distributed computer systems twice a year.

Located at The University of Texas at Austin, Frontera is the fastest university supercomputer in the world. To match what Frontera can compute in just one second, a person would have to perform one calculation every second for about a billion years.
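That comparison is easy to sanity-check with back-of-the-envelope arithmetic, assuming Frontera's reported benchmark rate of roughly 23.5 petaflops (the exact figure does not change the conclusion):

```python
# A person doing one calculation per second vs. one second of Frontera,
# assuming a benchmark rate of roughly 23.5 petaflops (23.5e15 calc/s).
flops = 23.5e15
seconds_per_year = 365.25 * 24 * 3600        # ~3.16e7 seconds in a year

human_years = flops / seconds_per_year       # years at 1 calculation per second
print(f"{human_years:.2e} years")            # on the order of 1e9 -- "about a billion"
```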

“Many of the frontiers of research today can only be advanced using computing,” said TACC Executive Director Dan Stanzione. “Frontera will be an important tool to solve Grand Challenges that will improve our nation’s health, well-being, competitiveness, and security.”

Supported by a $60 million NSF award, Frontera will provide researchers with the most advanced capabilities for science and engineering when it goes into full operation later this summer.

“Frontera will provide scientists across the country with access to unprecedented computational modeling, simulation, and data analytics capabilities,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering. “Frontera represents the next step in NSF’s more than three decades of support for advanced computing capabilities to ensure that the U.S. retains its global leadership in research frontiers.”

—  NSF Public Affairs, (703) 292-8070


Spiders risk everything for love

University of Cincinnati biologist George Uetz long suspected the extravagant courtship dance of wolf spiders made them an easy mark for birds and other predators.

But it was only when he and colleague Dave Clark from Alma College teamed up with former University of Minnesota researcher Tricia Rubi and her captive colony of blue jays that he could prove it.

For a study published in May in the journal Behavioural Processes, Rubi trained a captive colony of blue jays to peck at buttons to indicate whether or not they saw wolf spiders (Schizocosa ocreata) on video screens.

Clark made videos superimposing images of courting, walking and stationary male spiders on a leaf litter background. Rubi presented the videos to blue jays on a flat display screen on the ground.

When viewed from above, the brindled black and brown spiders disappear amid the dead leaves.

The jays had trouble finding spiders that stayed motionless in the videos. This confirmed the adaptive value of the anti-predator “freeze” behavior.

The jays had less trouble seeing spiders that walked in the video. And the jays were especially quick to find male spiders engaged in ritual courtship behavior, in which they wave their furry forelegs in the air like overzealous orchestra conductors.

“By courting the way they do, they are clearly putting themselves at risk of bird predation,” UC’s Uetz said.

His lab studies the complementary methods spiders employ to communicate with each other, called multimodal communication. Female spiders leave a pheromone trail behind them in their silk and by rubbing their abdomens on the ground, Uetz said.

And when male spiders come into visual range, they bounce and rattle their legs on the leaf litter to create vibrations that can travel some considerable distance to the legs of potential mates. The males also wave their front legs in a unique pattern to captivate females.

The males of the species have especially furry front legs that look like black woolen leg warmers. The combination of thick fur and vigorous dancing are indicators that the male is fit and healthy, Uetz said.

“The displays and the decorations show off male quality,” Uetz said. “The males that display vigorous courtship and robust leg tufts are showing off their immune competence and overall health. They, in turn, will have sons that have those qualities.”

That is, if they live so long. Many birds, including blue jays, find spiders to be tasty. And a chemical called taurine found in spiders is especially important for the neurological development of baby birds, he said.

Spiders instinctively fear predatory birds. In previous studies, Uetz, Clark and UC student Anne Lohrey determined that wolf spiders would freeze in place when they detected the sharp, loud calls of blue jays, cardinals and other insect-eating birds. By comparison, they ignored the calls of seed-eating birds such as mourning doves along with background forest noises such as the creak of katydids.

“They clearly recognized these birds as some kind of threat,” Uetz said. “Robins will hunt them on the ground. Lots of other birds do, too. Turkeys will snap them up.”

When Uetz proposed a spider experiment, Rubi said it wasn’t hard to train her colony of blue jays. The jays quickly learned to peck at different buttons when they either observed a spider or didn’t see one on a video screen.

“Birds are super visual. They have excellent color vision and good visual acuity. It’s not surprising they would have no trouble seeing spiders in motion,” she said.

Rubi now studies genetic evolution at the University of Victoria in British Columbia.

If natural selection means the most avid courting spiders are also most likely to get eaten, why does this behavior persist across generations? Wouldn’t meeker spiders survive to pass along their genes?

Rubi said the explanation lies in another selective force.

“Natural selection is selection for survival, which would lead to spiders that are less conspicuous to predators,” she said. “But sexual selection is driven by females. And they select for a more conspicuous display.”

In genetic terms, Rubi said, fitness is measured in the number of healthy offspring produced. So while staying alive by minimizing risk is a good strategy for the individual, it’s not a viable strategy for the species.

“The longest-lived male can still have a fitness of ‘zero’ if he never mates,” Rubi said. “So there appears to be a trade-off between being safe and being sexy. That balance is what shapes these courtship displays.”

Uetz said female wolf spiders can be very choosy about the qualities they value in a mate.

“The tufts on their forelegs are very important. Their size and symmetry play a big role,” he said. “They’re so tiny and have brains the size of a poppy seed. You wouldn’t think they could discriminate, but they do.”

And at least for successful male wolf spiders living in a hostile world, this means love wins over fear.


11th World conference of Science Journalists – 1-5 July 2019, Lausanne, Switzerland

The European Union’s stand number 2 is the place to learn more about the role of science and research in the EU. From “Dieselgate” to terrorist attacks, from climate change to regulating social media and the digital economy – scientific research underpins EU policies across the board.

With its EUR 10 billion annual research budget and its in-house research institute, the EU is one of the largest sponsors, producers, and users of policy-relevant science in the world.

Come along and say hello to our team, and spend some time with our hands-on research tools and our Virtual Reality experience. We also run regular info sessions on various topics at the stand daily – look for the schedule in the daily newsletters.