Drag-and-drop data analytics

In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.

For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.

In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on a user’s dataset. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. Users of an interactive whiteboard can also collaborate in real time.

The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.

“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data System and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”

VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.    

Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.

An “unbounded canvas” for analytics

The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.

Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.

The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. When users circle an age range, the algorithm immediately computes the co-occurrence of the three diseases within it.
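In conventional code, that linked-boxes interaction amounts to a filter followed by an aggregate. A minimal sketch in pandas, using a made-up toy table (the column names and values here are illustrative assumptions, not Northstar’s actual schema):

```python
import pandas as pd

# Hypothetical ICU table with binary disease-indicator columns.
patients = pd.DataFrame({
    "age":        [34, 67, 45, 71, 52, 80, 29, 63],
    "blood":      [1, 1, 0, 1, 0, 1, 0, 1],
    "infectious": [0, 1, 1, 1, 0, 1, 0, 0],
    "metabolic":  [0, 1, 0, 1, 1, 1, 0, 1],
})

# "Circling" an age range on the bar chart corresponds to a row filter.
selected = patients[(patients["age"] >= 60) & (patients["age"] <= 85)]

# Co-occurrence of all three diseases within the selected range.
co_occurrence = (selected[["blood", "infectious", "metabolic"]]
                 .all(axis=1).mean())
print(f"{co_occurrence:.0%} of selected patients have all three diseases")
```

The point of the interface is that the filter and the aggregate are expressed by drawing, not by writing this script.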

“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”

Approximating AutoML

With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.

Using the above example, say the medical researchers want to predict which patients may have a blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It first produces a blank box with a “target” tab, under which they drop the “blood” feature. The system automatically finds the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other characteristics.

According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage, and automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds.

“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.
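The progressive-sampling idea can be illustrated with a toy sketch: score a few candidate models on increasingly large samples, so rough accuracy estimates appear after a cheap first pass and are refined as more data are processed. This is only an illustration in scikit-learn, not the VDS engine itself; its candidate list, pruning rules, and sampling strategy are far richer:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Two stand-in "pipelines" (VDS draws from a much larger, rule-filtered list).
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree":   DecisionTreeClassifier(max_depth=5),
}

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Progressive processing: fit on growing samples, so approximate
# accuracies are available long before the full-data run finishes.
for n in (250, 1000, len(X_train)):
    for name, model in candidates.items():
        model.fit(X_train[:n], y_train[:n])
        acc = model.score(X_test, y_test)
        print(f"n={n:>5}  {name}: {acc:.3f}")
```

The early, small-sample passes correspond to the interface’s “constantly updated accuracy percentages”; the full-sample run corresponds to the back-end refinement.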

“When using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, “show that the moment you delay giving users results, they start to lose engagement with the system.”

The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’s approximations were just as accurate but were generated within seconds, whereas the other tools took minutes to hours.

Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.  

“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”



New research shows an iceless Greenland may be in the future

Island could be ice-free by year 3000, says new estimate

June 26, 2019

New research shows that an iceless Greenland may be in the future. If worldwide greenhouse gas emissions remain on their current trajectory, Greenland may be ice-free by the year 3000. By the end of the century, the island could lose 4.5% of its ice, contributing up to 13 inches of sea level rise.

“How Greenland will look in the future — in a couple hundred years or in 1,000 years — whether there will be Greenland, or at least a Greenland similar to today, is up to us,” said NSF-supported researcher Andy Aschwanden of the University of Alaska Fairbanks Geophysical Institute.

Aschwanden is lead author of a new study published in the June issue of Science Advances.

Greenland’s ice sheet is huge, spanning more than 660,000 square miles. Today, the ice sheet covers 81% of Greenland and contains 8% of Earth’s fresh water.

If greenhouse gas concentrations remain on their current trajectory, melting ice from Greenland alone could contribute as much as 24 feet to global sea level rise by the year 3000, which would place much of San Francisco, Los Angeles, New Orleans and other coastal cities underwater.

However, if greenhouse gas emissions are cut significantly, ice losses would be reduced. Instead, by 3000, Greenland may lose 8% to 25% of its ice and contribute up to 6.5 feet of sea level rise.
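As a quick consistency check of these figures (illustrative arithmetic only, not the study’s ice-sheet model): if the sea-level contribution scales roughly with the fraction of ice lost, the full-melt figure implies the reduced-emissions one.

```python
# Rough proportionality check of the article's numbers. The linear
# scaling is an assumption, not how the ice-sheet model works.
full_melt_rise_ft = 24.0     # rise if Greenland's ice melts entirely
loss_fraction_high = 0.25    # upper end of the 8-25% loss scenario

estimated_rise_ft = full_melt_rise_ft * loss_fraction_high
print(f"~{estimated_rise_ft:.1f} ft")   # in line with the quoted 6.5 ft
```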

“Modeling studies like this show us the future of the Greenland Ice Sheet, which in turn can help us predict and plan for rising sea levels,” said Cynthia Suchman, Arctic Natural Sciences Program Director in NSF’s Office of Polar Programs, which funded the research.

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov


Atomic ‘patchwork’ using heteroepitaxy for next generation semiconductor devices

Researchers from Tokyo Metropolitan University have grown atomically thin crystalline layers of transition metal dichalcogenides (TMDCs) with varying composition over space, continuously feeding in different types of TMDC to a growth chamber to tailor changes in properties. Examples include 20nm strips surrounded by different TMDCs with atomically straight interfaces, and layered structures. They also directly probed the electronic properties of these heterostructures; potential applications include electronics with unparalleled power efficiency.

Semiconductors are indispensable in the modern age; silicon-based integrated circuits underpin the operation of all things digital, from discrete devices like computers, smartphones and home appliances to control components for every possible industrial application. A broad range of scientific research has been directed to the next steps in semiconductor design, particularly the application of novel materials to engineer more compact, efficient circuitry which leverages the quantum mechanical behavior of materials at the nanometer length scale. Of special interest are materials with a fundamentally different dimensionality; the most famous example is graphene, a two-dimensional lattice of carbon atoms which is atomically thin.

Transition metal dichalcogenides (or TMDCs) are promising candidates for incorporation into new semiconductor devices. Composed of transition metals like molybdenum and tungsten and a chalcogen (or Group 16 element) like sulfur or selenium, they can form layered crystalline structures whose properties change drastically when the metallic element is changed, from normal metals to semiconductors, even to superconductors. By controllably weaving domains of different TMDCs into a single heterostructure (made of domains with different composition), it may be possible to produce atomically thin electronics with distinct, superior properties to existing devices.

A team led by Dr. Yu Kobayashi and Associate Professor Yasumitsu Miyata from Tokyo Metropolitan University has been at the cutting edge of efforts to create two-dimensional heterostructures with different TMDCs using vapor-phase deposition, the deposition of precursor material in a vapor state onto a surface to make atomically flat crystalline layers. One of the biggest challenges they faced was creating a perfectly flat interface between different domains, an essential feature for getting the most out of these devices. Now, they have succeeded in engineering a continuous process to grow well-defined crystalline strips of different TMDCs at the edge of existing domains, creating strips as thin as 20nm with a different composition. Their new process uses liquid precursors which can be sequentially fed into a growth chamber; by optimizing the growth rate, they were able to grow heterostructures with distinct domains linked perfectly over atomically straight edges. They directly imaged the linkage using scanning tunneling microscopy (STM), finding excellent agreement with first-principles numerical simulations of what an ideal interface should look like. The team used four different TMDCs, and also realized a layer-on-layer heterostructure.

By creating atomically sharp interfaces, electrons may be effectively confined to one-dimensional spaces on these 2D devices, for exquisite control of electron transport and resistivity as well as optical properties. The team hopes that this may pave the way to devices with unparalleled energy efficiency and novel optical properties.

Story Source:

Materials provided by Tokyo Metropolitan University. Note: Content may be edited for style and length.


Horizon Europe – Have your say on future objectives for EU-funded research and innovation

The Commission is preparing the implementation of Horizon Europe, the next and most ambitious EU research and innovation programme (2021-2027) with a proposed budget of €100 billion, in an intensive co-design process. The process will help shape European research and innovation investments in the coming years. As part of the process, the Commission has launched an online consultation.

Carlos Moedas, Commissioner for Research, Science and Innovation, said:

Our common future will to a large degree depend on how successfully we work together to create valuable research and innovation. I am pleased to see that we are practicing what we preach as we now kick off the consultation on Horizon Europe with a period of unprecedented co-design including all interested parties.

The consultation will collect input from across Europe and beyond. The inputs received will inform the work to prepare a ‘Strategic Plan’ for Horizon Europe, which will then guide the work programmes and calls for proposals for Horizon Europe’s first four years (2021-2024). Overall, the consultation will help identify impacts, spark debate and new ideas. A key event in this co-design process will be the European Research and Innovation Days in Brussels from 24 to 26 September 2019.

The co-design process ensures that Horizon Europe is directed towards what matters most, improves our daily lives and helps turn big societal challenges such as climate change into innovation opportunities and solutions for a sustainable future.

The Commission invites anyone with an interest in future EU research and innovation priorities, anywhere in the world, to participate in the consultation, which will close on 8 September 2019.


The European Parliament and Council reached a political agreement on Horizon Europe in April 2019, on the basis of which the Commission has started to prepare the programme’s implementation, including through the strategic planning process. This process, focused in particular on Horizon Europe’s second pillar: ‘Global Challenges and European Industrial Competitiveness’, will develop the first ‘Horizon Europe Strategic Plan (2021-2024)’. The plan will identify major policy drivers, strategic policy priorities, and targeted impacts to be achieved as well as identify missions and European Partnerships. The first ‘Horizon Europe Strategic Plan’ is planned to be endorsed by the next Commission towards the end of 2019, subject to agreement between the European Parliament and Council on the EU’s long-term budget (2021-2027) and its related horizontal provisions. 



Research Headlines – Applied nanotechnology to improve cancer monitoring


An EU-funded project is supporting innovative nanotechnology research, with a platform for improved cancer monitoring among its successes. This promises earlier and more cost-effective detection of cancers and can guide individualised therapeutic approaches.



Liquid biopsy is a non-invasive technique for studying cancer biomarkers present in body fluids. However, biomarkers can be hard to find using current methods. Advances in nanotechnology and microfluidics – processing small quantities of fluids using tiny channels – could lead to the earlier detection of cancer metastasis and relapse, to enable better and more cost-effective monitoring.

This was just one challenge addressed by ground-breaking research funded through the NANOTRAINFORGROWTH II project. The project supported fellowships for experienced researchers at the International Iberian Nanotechnology Laboratory (INL), Europe’s first research organisation dedicated to nanoscience and nanotechnology.

‘Within the context of the NANOTRAINFORGROWTH II grant, nanotechnology and microfluidics are being applied to develop a real-time, high-throughput and multiplex cancer monitoring platform,’ says project lead Sara Abalde-Cela, of INL in Braga, Portugal. Thanks to the project, a device for detecting cancer biomarkers is being prepared for the market.

Detecting cancer biomarkers

One group of cancer biomarkers targeted by liquid biopsy is circulating tumour cells (CTCs). These travel from the primary tumour and invade other organs, causing metastasis, or the development of secondary malignant growths. The Medical Devices group at INL recently developed a microfluidic platform that can isolate these very rare cells, which are present in the blood of cancer patients.

However, several challenges remained. One was to extract much more information from these cells, which called for advanced interrogation techniques such as Surface-enhanced Raman scattering (SERS) spectroscopy, a powerful and sensitive detection method.

Abalde-Cela believes the novel combination of SERS spectroscopy with nanotechnology and microfluidics will overcome current bottlenecks faced by researchers in the field of liquid biopsy.

A key innovation involved producing engineered nanoparticles that act as ‘barcodes’ for different cell membrane receptors. This enables improved analysis of CTCs and their various surface proteins simultaneously. Furthermore, CTCs have been encapsulated in microdroplets for single-cell analysis and metabolite tracking, as a way of predicting tumour growth.

‘In the future, all these methods will be integrated into a single medical device for the analysis of real samples,’ says Abalde-Cela. This development would open up the possibility of transferring these methods to the clinic.

Personalised patient care

The platform allows for the monitoring of cancer during treatment, using advanced liquid biopsy. It also provides physicians with more information for adapting therapies to treat patients in a personalised way.

‘Traditional biopsies can only be done at the moment of diagnosis and surgery, while imaging techniques to monitor relapse cannot be performed as often as needed and they usually miss early spreading,’ Abalde-Cela says. ‘This platform will allow for a non-invasive, fast and more sensitive diagnosis of metastasis evolution and relapse in patients who might otherwise be considered cured. Each analysis will cost approximately EUR 500.’

Thanks to the advances that Abalde-Cela’s fellowship at INL helped to achieve, a start-up company – RUBYnanomed – was created in January 2018. The first product the company hopes to bring to market is the RUBYchip device for isolating CTCs from blood. A prototype device is being validated at hospitals in Spain and Portugal, for five types of cancer, and the company has approximately 1 500 pre-orders for 2019.

NANOTRAINFORGROWTH II received funding from the EU’s Marie Skłodowska-Curie Actions programme. Other fellowships funded by the project at INL are advancing innovative applications of nanotechnology in fields such as energy, food and the environment.

Project details

  • Project acronym: NANOTRAINFORGROWTH II
  • Participants: Portugal (Coordinator)
  • Project N°: 713640
  • Total costs: € 3 398 400
  • EU contribution: € 1 699 200
  • Duration: June 2016 to May 2021



Bridging the gap between research and the classroom

In a moment more reminiscent of a Comic-Con event than a typical MIT symposium, Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever MIT Science of Reading event dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia. 

The event, co-sponsored by the MIT Integrated Learning Initiative (MITili) and the McGovern Institute for Brain Research at MIT, took place earlier this month and brought together researchers, educators, administrators, parents, and students to explore how scientific research can better inform educational practices and policies — equipping teachers with scientifically-based strategies that may lead to better outcomes for students.

Professor John Gabrieli, MITili director, explained the great need to focus the collective efforts of educators and researchers on literacy.

“Reading is critical to all learning and all areas of knowledge. It is the first great educational experience for all children, and can shape a child’s first sense of self,” he said. “If reading is a challenge or a burden, it affects children’s social and emotional core.”

A great divide

Reading is also a particularly important area to address because so many American students struggle with this fundamental skill. More than six out of every 10 fourth graders in the United States are not proficient readers, and reading scores for fourth and eighth graders have increased only slightly since 1992, according to the National Assessment of Educational Progress.

Gabrieli explained that, just as with biomedical research, where there can be a “valley of death” between basic research and clinical application, the same seems to apply to education. Although there is substantial current research aiming to better understand why students might have difficulty reading in the ways they are currently taught, the research often does not necessarily shape the practices of teachers — or how the teachers themselves are trained to teach. 

This divide between research and practical applications in the classroom might stem from a variety of factors. One issue might be the inaccessibility of research publications, many of which are not freely available to all, as well as the general need for scientific findings to be communicated in a clear, accessible, engaging way that can lead to actual implementation. Another challenge is the stark difference in pacing between scientific research and classroom teaching. While research can take years to complete and publish, teachers have classrooms full of students — all with different strengths and challenges — who urgently need to learn in real time.

Natalie Wexler, author of “The Knowledge Gap,” described some of the obstacles to getting the findings of cognitive science integrated into the classroom as matters of “head, heart, and habit.” Teacher education programs tend to focus more on some of the outdated psychological models, like Piaget’s theory of cognitive development, and less on recent cognitive science research. Teachers also have to face the emotional realities of working with their students, and might be concerned that a new approach would cause students to feel bored or frustrated. In terms of habit, some new, evidence-based approaches may be, in a practical sense, difficult for teachers to incorporate into the classroom.

“Teaching is an incredibly complex activity,” noted Wexler.

From labs to classrooms

Throughout the day, speakers and panelists highlighted some key insights gained from literacy research, along with some of the implications these might have on education.

Mark Seidenberg, professor of psychology at the University of Wisconsin at Madison and author of “Language at the Speed of Sight,” discussed studies indicating the strong connection between spoken and printed language. 

“Reading depends on speech,” said Seidenberg. “Writing systems are codes for expressing spoken language … Spoken language deficits have an enormous impact on children’s reading.”

The integration of speech and reading in the brain increases with reading skill. For skilled readers, the patterns of brain activity (measured using functional magnetic resonance imaging) while comprehending spoken and written language are very similar. Becoming literate affects the neural representation of speech, and knowledge of speech affects the representation of print — thus the two become deeply intertwined. 

In addition, researchers have found that the language of books, even those for young children, includes words and expressions that are rarely encountered in speech to children. Reading aloud to children therefore exposes them to a broader range of linguistic expressions, including more complex ones that are usually only taught much later. Reading to children can thus be especially important, as research indicates that better knowledge of spoken language facilitates learning to read.

Although behavior and performance on tests are often used as indicators of how well a student can read, neuroscience data can now provide additional information. Neuroimaging of children and young adults identifies brain regions that are critical for integrating speech and print, and can spot differences in the brain activity of a child who might be especially at-risk for reading difficulties. Brain imaging can also show how readers’ brains respond to certain reading and comprehension tasks, and how they adapt to different circumstances and challenges.

“Brain measures can be more sensitive than behavioral measures in identifying true risk,” said Ola Ozernov-Palchik, a postdoc at the McGovern Institute. 

Ozernov-Palchik hopes to apply what her team is learning in their current studies to predict reading outcomes for other children, as well as continue to investigate individual differences in dyslexia and dyslexia-risk using behavior and neuroimaging methods.

Identifying certain differences early on can be tremendously helpful in providing much-needed early interventions and tailored solutions. Many speakers noted the problem with the current “wait-to-fail” model of noticing that a child has a difficult time reading in second or third grade, and then intervening. Research suggests that earlier intervention could help the child succeed much more than later intervention.

Speakers and panelists spoke about current efforts, including Reach Every Reader (a collaboration between MITili, the Harvard Graduate School of Education, and the Florida Center for Reading Research), that seek to provide support to students by bringing together education practitioners and scientists. 

“We have a lot of information, but we have the challenge of how to enact it in the real world,” said Gabrieli, noting that he is optimistic about the potential for the additional conversations and collaborations that might grow out of the discussions of the Science of Reading event. “We know a lot of things can be better and will require partnerships, but there is a path forward.”


A new way to make droplets bounce away

In many situations, engineers want to minimize the contact of droplets of water or other liquids with surfaces they fall onto. Whether the goal is keeping ice from building up on an airplane wing or a wind turbine blade, or preventing heat loss from a surface during rainfall, or preventing salt buildup on surfaces exposed to ocean spray, making droplets bounce away as fast as possible and minimizing the amount of contact with the surface can be key to keeping systems functioning properly.

Now, a study by researchers at MIT demonstrates a new approach to minimizing the contact between droplets and surfaces. While previous attempts, including by members of the same team, have focused on minimizing the amount of time the droplet spends in contact with the surface, the new method instead focuses on the spatial extent of the contact, trying to minimize how far a droplet spreads out before bouncing away.

The new findings are described in the journal ACS Nano in a paper by MIT graduate student Henri-Louis Girard, postdoc Dan Soto, and professor of mechanical engineering Kripa Varanasi. The key to the process, they explain, is creating a series of raised ring shapes on the material’s surface, which cause the falling droplet to splash upward in a bowl-shaped pattern instead of flowing out flat across the surface.

The work is a follow-up to an earlier project by Varanasi and his team, in which they were able to reduce the contact time of droplets on a surface by creating raised ridges that disrupted the spreading pattern of impacting droplets. But the new work takes this further, achieving a much greater reduction in the combination of contact time and contact area of a droplet.

In order to prevent icing on an airplane wing, for example, it is essential to get the droplets of impacting water to bounce away in less time than it takes for the water to freeze. The earlier ridged surface did succeed in reducing the contact time, but Varanasi says “since then, we found there’s another thing at play here,” which is how far the drop spreads out before rebounding and bouncing off. “Reducing the contact area of the impacting droplet should also have a dramatic impact on transfer properties of the interaction,” Varanasi says.

The team initiated a series of experiments that demonstrated that raised rings of just the right size, covering the surface, would cause the water spreading out from an impacting droplet to splash upward instead, forming a bowl-shaped splash, and that the angle of that upward splash could be controlled by adjusting the height and profile of those rings. If the rings are too large or too small compared to the size of the droplets, the system becomes less effective or doesn’t work at all, but when the size is right, the effect is dramatic.

It turns out that reducing the contact time alone is not sufficient to achieve the greatest reduction in contact; it’s the combination of the time and area of contact that’s critical. In a graph of the time of contact on one axis, and the area of contact on the other axis, what really matters is the total area under the curve — that is, the product of the time and the extent of contact. The area of the spreading “was another axis that no one has touched” in previous research, Girard says. “When we started doing so, we saw a drastic reaction,” reducing the total time-and-area contact of the droplet by 90 percent. “The idea of reducing contact area by forming ‘waterbowls’ has far greater effect on reducing the overall interaction than by reducing contact time alone,” Varanasi says.
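The combined metric can be made concrete with a toy model: integrate contact area over contact time for a droplet that spreads and retracts, with and without a ring that caps the spreading radius. The 10 ms contact, the radii, and the sinusoidal spreading profile are all illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy time-area contact metric: integral of footprint area over time.
t = np.linspace(0.0, 10e-3, 1000)   # a 10 ms contact event, in seconds
dt = t[1] - t[0]

def contact_area(t, r_max):
    """Droplet footprint grows then retracts; peak radius r_max (m)."""
    r = r_max * np.sin(np.pi * t / t[-1])
    return np.pi * r**2

# Flat surface: droplet spreads to a 3 mm radius.
flat = np.sum(contact_area(t, 3e-3)) * dt
# Ringed surface: the rim redirects the lamella upward at 1 mm.
ringed = np.sum(contact_area(t, 1e-3)) * dt

reduction = 1 - ringed / flat       # the integral scales as r_max**2
print(f"time-area contact reduced by {reduction:.0%}")
```

Because the integral scales with the square of the peak radius, capping the spread at one third of its free value already cuts the time-area product by roughly 90 percent in this toy model, the same order as the reduction the team reports.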

As the droplet starts to spread out within the raised circle, as soon as it hits the circle’s edge it begins to deflect. “Its momentum is redirected upward,” Girard says, and although it ends up spreading outward about as far as it would have otherwise, it is no longer on the surface, and therefore not cooling the surface off, or leading to icing, or blocking the pores on a “waterproof” fabric.

Credit: Henri-Louis Girard, Dan Soto, and Kripa Varanasi

The rings themselves can be made in different ways and from different materials, the researchers say — it’s just the size and spacing that matters. For some tests, they used rings 3-D printed on a substrate, and for others they used a surface with a pattern created through an etching process similar to that used in microchip manufacturing. Other rings were made through computer-controlled milling of plastic.

While higher-velocity droplet impacts generally can be more damaging to a surface, with this system the higher velocities actually improve the effectiveness of the redirection, clearing even more of the liquid than at slower speeds. That’s good news for practical applications, for example in dealing with rain, which has relatively high velocity, Girard says. “It actually works better the faster you go,” he says.

In addition to keeping ice off airplane wings, the new system could have a wide variety of applications, the researchers say. For example, “waterproof” fabrics can become saturated and begin to leak when water fills up the spaces between the fibers, but when treated with the surface rings, fabrics kept their ability to shed water for longer, and performed better overall, Girard says. “There was a 50 percent improvement by using the ring structures,” he says.

The research was supported by MIT’s Deshpande Center for Technological Innovation.


Scientists find no direct link between North Atlantic Ocean currents, New England coast sea level

New study clarifies influence of major ocean currents on sea level

Scientists report new results on the link between ocean currents and sea level in New England.

June 26, 2019

A new study by NSF-funded scientists at the Woods Hole Oceanographic Institution clarifies the influence of major currents in the North Atlantic Ocean on sea level along the coast of the northeastern United States.

The results, published in the American Geophysical Union journal Geophysical Research Letters, consider the strength of the Atlantic Meridional Overturning Circulation — a conveyor belt of currents that moves warmer waters north and cooler waters south in the Atlantic — and historical records of sea level in coastal New England.

“Scientists had previously noticed that if the AMOC is stronger in a given season or year, sea levels in the northeast U.S. go down, but if the AMOC weakens, average sea levels rise considerably,” says Chris Piecuch, a physical oceanographer at WHOI and lead author of the paper. “A half-foot of sea level rise, held for months, can have serious coastal impacts. It’s been unclear whether those two things—coastal sea level and the AMOC—are linked by cause and effect.” 

Although the study confirmed that AMOC intensity and sea level seem to change at the same time, it found that neither directly causes changes in the behavior of the other. According to Piecuch, a study like this was not possible until recently. In 2004, an international team of scientists began maintaining a chain of instruments that stretch across the Atlantic. The instruments, which are collectively called the RAPID array, hold sensors that measure currents, salinity, and temperature.
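A standard first look at whether two time series such as AMOC strength and coastal sea level merely co-vary or actually lead one another is a lagged correlation analysis. A minimal sketch on synthetic data (illustrative only; this is not the study's actual methodology, and correlation alone never proves causation):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def lagged_correlation(a, b, max_lag):
    """Correlate a[t] with b[t + lag] for each lag in [-max_lag, max_lag].

    A correlation peak at a nonzero lag hints that one series leads
    the other; a peak at lag 0 is consistent with a common driver.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = a[:len(a) - lag], b[lag:]
        else:
            xs, ys = a[-lag:], b[:len(b) + lag]
        out[lag] = pearson(xs, ys)
    return out

# Synthetic example: b is a delayed by two steps, so the peak sits at +2.
a = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
b = [0.0, 0.0] + a[:-2]
corr = lagged_correlation(a, b, max_lag=3)
```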

“This study looked at variability in water sloshing around the Atlantic Basin, which is limited to several inches, whereas the melting of glaciers could add many feet of new water,” said Mete Uz, a program director in NSF’s Division of Ocean Sciences, which funded the research. “Still it is very important to understand this signal. Melting will happen over many years, during which we will constantly be watching for signs of acceleration or slow down. Understanding what drives local sea level variability will help us avoid misinterpreting limited observations.”

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov


Researchers validate optimum composites structure created with additive manufacturing

Additive manufacturing built an early following with 3D printers using polymers to create a solid object from a Computer-Aided Design model. The materials used were neat polymers — perfect for a rapid prototype, but not commonly used as structural materials.

A new wave of additive manufacturing uses polymer composites that are extruded from a nozzle as an epoxy resin reinforced with short, chopped carbon fibers. The fibers make the material stronger, much like rebar in a concrete sidewalk. The resulting object is much stiffer and stronger than the resin on its own.
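The stiffening effect of aligned short fibers is commonly estimated with a modified rule of mixtures, in which an efficiency factor discounts the fiber contribution for finite fiber length and imperfect alignment. A rough sketch (the moduli, volume fraction, and efficiency value below are illustrative assumptions, not numbers from the study):

```python
def composite_modulus(E_fiber, E_matrix, vol_frac_fiber, efficiency=1.0):
    """Modified rule of mixtures for aligned short-fiber composites.

    E_fiber, E_matrix: elastic moduli (GPa); vol_frac_fiber in [0, 1].
    efficiency < 1 accounts for finite fiber length and imperfect
    alignment -- a common engineering simplification.
    """
    return (efficiency * vol_frac_fiber * E_fiber
            + (1 - vol_frac_fiber) * E_matrix)

# Illustrative values: carbon fiber ~230 GPa, epoxy ~3 GPa.
E_c = composite_modulus(E_fiber=230.0, E_matrix=3.0,
                        vol_frac_fiber=0.2, efficiency=0.4)
```

Even with a modest fiber fraction and efficiency, the estimated composite modulus is several times that of the neat resin, which is why the fiber deposition pattern matters so much.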

A recent University of Illinois at Urbana-Champaign study set out to answer which configuration, or pattern, of carbon fibers in the layers of extruded resin results in the stiffest material.

John Lambros, Willett Professor in the Department of Aerospace Engineering and director of the Advanced Materials Testing and Evaluation Laboratory at U of I, was approached by an additive manufacturing research group at Lawrence Livermore National Laboratory to test composite parts that they had created using a direct ink writing technique.

“The carbon fibers are small, about seven microns in diameter and 500 microns in length,” Lambros said. “It’s easier with a microscope but you can certainly see a bundle with the naked eye. The fibers are mostly aligned in the extruded resin, which is like a glue that holds the fibers in place. The Lawrence Livermore group provided the parts, created with several different configurations and one made without any embedded fibers as a control. One of the parts had been theoretically optimized for maximum stiffness, but the group wanted definitive experimental corroboration of the optimization process.”

While waiting for the actual additively manufactured composite samples, Lambros and his student made their own “dummy” samples out of Plexiglas so they could begin testing.

In this case, the shape being tested was a clevis joint — a small, oval-shaped plate with two holes used to connect two other surfaces. For each different sample shape, Lambros’ lab must create a unique loading fixture to test it.

“We create the stands, the grips, and everything — how they’ll be painted, how the cameras will record the tests, and so on,” Lambros said. “When we got the real samples, they weren’t exactly the same shape. The thickness was a bit different than our Plexiglas ones, so we made new spacers and worked it out in the end. From the mechanics side, we must be very cautious. It’s necessary to use precision so as to be confident that any eventual certification of additively manufactured parts is done properly.”

“We created an experimental framework to validate the optimal pattern of the short-fiber reinforced composite material,” Lambros said. “As the loading machine strained the clevis joint plates, we used a digital image correlation technique to measure the displacement field across the surface of each sample by tracking the motion in the pixel intensity values of a series of digital images taken as the sample deforms. A random speckle pattern is applied to the sample surface and serves to identify subsets of the digital images in a unique fashion so they can be tracked during deformation.”
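The subset-tracking idea behind digital image correlation can be sketched in a few lines. This toy version finds only integer-pixel displacements by minimizing a sum of squared differences; production DIC codes add interpolation for sub-pixel accuracy (the function and the synthetic speckle pattern are our own illustration, not the lab's software):

```python
def track_subset(ref, cur, top, left, size, search=5):
    """Find the integer displacement of a size x size pixel subset.

    ref, cur: 2-D grayscale images as lists of lists.
    (top, left): subset corner in the reference image.
    Searches +/-search pixels and returns the (dy, dx) that minimizes
    the sum of squared differences -- the core matching step in DIC.
    """
    def ssd(dy, dx):
        s = 0
        for y in range(size):
            for x in range(size):
                d = ref[top + y][left + x] - cur[top + dy + y][left + dx + x]
                s += d * d
        return s

    candidates = [(dy, dx)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= top + dy and top + dy + size <= len(cur)
                  and 0 <= left + dx and left + dx + size <= len(cur[0])]
    return min(candidates, key=lambda d: ssd(*d))

# Synthetic speckle image, then a copy shifted down 2 and right 1 pixel.
ref = [[(y * 13 + x * 7) ** 2 % 101 for x in range(20)] for y in range(20)]
cur = [[0] * 20 for _ in range(20)]
for y in range(20):
    for x in range(20):
        if 0 <= y - 2 < 20 and 0 <= x - 1 < 20:
            cur[y][x] = ref[y - 2][x - 1]
found = track_subset(ref, cur, top=8, left=8, size=5, search=4)
```

Repeating this search for a grid of subsets yields the full displacement field the researchers describe.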

They tested one control sample and four different configurations, including the one believed to be optimized for stiffness, which had a wavy fiber pattern rather than one oriented along horizontal or vertical lines.

“Each sample clevis joint plate had 12 layers in a stack. The optimized one had curved deposition lines and gaps between them,” Lambros said. “According to the Livermore group’s predictions, the gaps are there by design, because you don’t need more material than this to provide the optimal stiffness. That’s what we tested. We passed loading pins through the holes, then pulled each sample to the point of breaking, recording the amount of load and the displacement.

“The configuration that they predicted would be optimal, was indeed optimal. The least optimal was the control sample, which is just resin — as you would expect because there are no fibers in it.”

Lambros said the analysis rests on the premise that this is a global optimum: that it is the absolute best possible sample built for stiffness, and that no other build pattern is better.

“Although of course we only tested four configurations, it does look like the optimized configuration may be the absolute best in practice because the configurations that would most commonly be used in design, such as 0°-90° or ±45° alignments, were more compliant, or less stiff, than this one,” Lambros said. “The interesting thing that we found is that the sample optimized to be the stiffest also turned out to be the strongest. So, if you look at where they break, this one is at the highest load. This was somewhat unexpected in the sense that they had not optimized for this feature. In fact, the optimized sample was also a bit lighter than the others, so if you look at specific load, the failure load per unit weight, it’s a lot higher. It’s quite a bit stronger than the other ones. And why that is the case is something that we’re going to investigate next.”
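The “specific load” Lambros mentions is simply failure load normalized by weight. A trivial sketch with hypothetical numbers (not the study's measurements):

```python
def specific_load(failure_load_n, mass_g):
    """Failure load per unit weight (N/g): a lighter sample carrying
    the same load scores higher on this metric."""
    return failure_load_n / mass_g

# Hypothetical numbers: an optimized sample that is both stronger and
# lighter widens its advantage once loads are taken per unit weight.
optimized = specific_load(1200.0, 10.0)
baseline = specific_load(1000.0, 12.0)
```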

Lambros said there may be more testing done in the future, but for now, his team successfully demonstrated that they could provide a validation for the optimized additive composite build.


European Commission appoints top innovation leaders to guide the European Innovation Council

Today, the Commission appointed 22 exceptional innovators from the worlds of entrepreneurship, venture capital, science and technology to the European Innovation Council (EIC) Advisory Board, which will provide strategic leadership to the EIC. The Board will oversee the roll-out of the current pilot and lead the strategy and design of the EIC under Horizon Europe.
Full list of members.

The Commission also published today a vacancy notice to recruit the first EIC programme managers. Inspired by the renowned US agency DARPA, the EIC programme managers will be experts in their fields, able to work closely with fast-moving technology projects and open doors to the wider ecosystem. Applications can be sent until 31 July 2019 at 12.00 CET.

Carlos Moedas, the EU Commissioner for Research, Science and Innovation said:

With the EIC, we are filling a critical funding gap in the innovation ecosystem and putting Europe at the forefront of market creating innovation. I am delighted that the EIC will be advised by some of Europe’s most accomplished innovators and investors, and that we will be bringing in talented programme managers to get the work off the ground.

Also today, the Commission announced €149 million for the latest round of 83 SMEs and startups to be awarded EIC Accelerator Pilot grants (previously known as the SME Instrument Phase 2). The SMEs and startups are all developing potentially game-changing innovations, such as: the next generation of safe and environmentally friendly light aircraft; anti-bacterial textiles for hospitals; 3D audio software; motion-planning technology for autonomous driving; and a superbot for audio calls.
List of the companies from 17 countries across the EU and from countries associated to Horizon 2020.

In addition, the Commission announced €164 million for 53 new EIC Pathfinder pilot grants for bottom-up, high-risk, high-impact research ideas (previously known as FET Open). Projects include metal-free MRI contrast agents; treatments to replace antibiotics in lung infections; custom-crafted graphene nanostructures; precise measurement and monitoring of highly penetrating particles in deep space; artificial proteins for biological light-emitting diodes; and many other ideas.
Full list.

Members of EIC Advisory Board*

  • Mark Ferguson, Entrepreneur, Science Foundation Ireland (Chair)
  • Hermann Hauser, Co-founder of Amadeus Capital Partners (Vice-chair)
  • Kerstin Bock, CEO of Openers
  • Jo Bury, Managing Director of Flanders Institute of Biotechnology
  • Dermot Diamond, Principal Investigator: INSIGHT Centre for Data Analytics, Dublin City University
  • Laura González Estéfani, Founder and CEO at TheVentureCity
  • Jim Hagemann Snabe, Chair Siemens AG, Chair A P Moller Maersk A/S
  • Ingmar Hoerr, Founder and Chairman of the Supervisory Board of CureVac AG
  • Fredrik Horstedt, Vice President of Utilisation, Chalmers University of Technology
  • Heidi Kakko, Partner of BaltCap Growth Fund
  • Bindi Karia, European Innovation Expert + Advisor, Connector of People and Businesses
  • Anita Krohn Traaseth, Former CEO Innovation Norway
  • Jerzy M. Langer, Physicist, Emeritus Professor at the Institute of Physics of the Polish Academy of Sciences
  • Ana Maiques, Chief Executive Officer, Neuroelectrics
  • Marja Makarow, Biochemistry/molecular biology, director of Biocenter Finland
  • Valeria Nicolosi, Chair of Nanomaterials and Advanced Microscopy
  • Carlos Oliveira, Serial Entrepreneur, Innovator, Executive President of José Neves Foundation
  • Bruno Sportisse, Chair and CEO at INRIA
  • Kinga Stanislawska, Managing Partner and Founder of Experior Venture Fund
  • Roberto Verganti, Innovation academic, former RISE group
  • Martin Villig, Co-founder of Bolt (formerly Taxify)
  • Yousef Yousef, CEO of LG Sonic

*official membership is subject to finalisation of internal procedures

All member biographies


In June 2018, the Commission proposed the most ambitious Research and Innovation programme yet, Horizon Europe, with a proposed budget of €100 billion for 2021-2027. In March 2019 the European Parliament and the Council of the EU reached a provisional agreement on Horizon Europe. A key novelty of Horizon Europe is the establishment of a European Innovation Council with a proposed budget of €10 billion. An agreement of the European Parliament and Council on the EIC was reached as part of the Common Understanding on Horizon Europe.

A first pilot phase of the EIC was launched in October 2017, followed by a reinforced pilot in March 2019, with an overall budget of €3 billion and the objective of funding Europe’s most exciting innovations from its most talented innovators. The new Advisory Board is part of the reinforced pilot. Members were selected following an open call for expressions of interest, which resulted in over 600 applications.

The European Innovation Council is part of a wider ecosystem that the EU is putting in place to give Europe’s many entrepreneurs every opportunity to become world-leading companies. Other initiatives include a Pan-European Venture Capital Funds-of-Funds programme (VentureEU), the Investment Plan for Europe (EFSI), the work of the European Institute of Innovation and Technology, the Capital Markets Union Action Plan to improve access to finance, and the proposal for a Directive on business insolvency.

More information

EIC website

Press release: launch of the pilot phase of the European Innovation Council

Press release: pilot phase of the European Innovation Council

Press release: provisional agreement on Horizon Europe

Statement by Commissioner Moedas on the European Parliament’s vote on Horizon Europe

Factsheet on Horizon Europe


Research Headlines – Understanding human and parasite interactions


Whipworms are soil-transmitted parasitic worms that infect about 700 million people in the tropics and sub-tropics. An EU-funded project worked to better understand their interactions with human epithelial and immune cells, in the hope of identifying new treatment possibilities and alleviating suffering.



Whipworms are parasitic roundworms that live preferentially in the human cecum, the blind pouch at the beginning of the large intestine. They tunnel through epithelial cells and cause inflammation, potentially resulting in trichuriasis, an infection similar to colitis.

Despite extensive research, the role of whipworm interactions with host epithelial and immune cells in triggering parasite expulsion remains unclear. This has hindered the development of anti-parasite therapies.

The goal of the EU-funded GUTWORM project was to investigate and understand the interaction between whipworms and host cells. To achieve this, project researchers used Trichuris muris, the whipworm species that naturally infects mice, as a mouse model for whipworm infection in humans.

The GUTWORM project had several aims. First, the team set out to identify new parasite and host genes that interact to modulate immunological outcomes.

It also characterised the role of host genes in whipworm infection and immunity. Here, novel and known candidate genetic mutations conferring susceptibility to colitis were targeted. GUTWORM researchers tested mice with particular mutations to evaluate the influence of these on anti-parasite immunity and expulsion.

Finally, after identifying key genes regulating the immune response to whipworms, the team explored the precise mechanisms of these genes to help them understand their effect on the parasite.

The GUTWORM project has generated a wealth of fundamental data on host-whipworm interactions. Ultimately, this will provide tools for future efforts to control these parasites, identifying potential new therapeutic targets for diseases that cause suffering in people living in tropical and sub-tropical regions.

The resulting knowledge of the parasite-immunological interplay could also help scientists understand other intestinal inflammatory diseases such as ulcerative colitis.

Project details

  • Project acronym: GUTWORM
  • Participants: United Kingdom (Coordinator)
  • Project N°: 656347
  • Total costs: € 183 455
  • EU contribution: € 183 455
  • Duration: October 2015 to November 2017



QS World University Rankings rates MIT No. 1 in 11 subjects for 2019

MIT has been honored with 11 No. 1 subject rankings in the QS World University Rankings for 2019.

The Institute received a No. 1 ranking in the following QS subject areas: Chemistry; Computer Science and Information Systems; Chemical Engineering; Civil and Structural Engineering; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Linguistics; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research.

MIT also placed second in six subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Earth and Marine Sciences; Economics and Econometrics; and Environmental Sciences.

Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.

MIT has been ranked as the No. 1 university in the world by QS World University Rankings for seven straight years.



Study: Social robots can benefit hospitalized children

A new study demonstrates, for the first time, that “social robots” used in support sessions held in pediatric units at hospitals can lead to more positive emotions in sick children.

Many hospitals host interventions in pediatric units, where child life specialists provide clinical interventions to hospitalized children for developmental and coping support. This involves play, preparation, education, and behavioral distraction for routine medical care, as well as before, during, and after difficult procedures. Traditional interventions include therapeutic medical play and normalizing the environment through activities such as arts and crafts, games, and celebrations.

For the study, published today in the journal Pediatrics, researchers from the MIT Media Lab, Boston Children’s Hospital, and Northeastern University deployed a robotic teddy bear, “Huggable,” across several pediatric units at Boston Children’s Hospital. More than 50 hospitalized children were randomly split into three groups of interventions that involved Huggable, a tablet-based virtual Huggable, or a traditional plush teddy bear. In general, Huggable improved various patient outcomes over those other two options.  

The study primarily demonstrated the feasibility of integrating Huggable into the interventions. But results also indicated that children playing with Huggable experienced more positive emotions overall. They also got out of bed and moved around more, and emotionally connected with the robot, asking it personal questions and inviting it to come back later to meet their families. “Such improved emotional, physical, and verbal outcomes are all positive factors that could contribute to better and faster recovery in hospitalized children,” the researchers write in their study.

Although it is a small study, it is the first to explore social robotics in a real-world inpatient pediatric setting with ill children, the researchers say. Other studies have been conducted in labs, have studied very few children, or were conducted in public settings without any patient identification.

But Huggable is designed only to assist health care specialists — not replace them, the researchers stress. “It’s a companion,” says co-author Cynthia Breazeal, an associate professor of media arts and sciences and founding director of the Personal Robots group. “Our group designs technologies with the mindset that they’re teammates. We don’t just look at the child-robot interaction. It’s about [helping] specialists and parents, because we want technology to support everyone who’s invested in the quality care of a child.”

“Child life staff provide a lot of human interaction to help normalize the hospital experience, but they can’t be with every kid, all the time. Social robots create a more consistent presence throughout the day,” adds first author Deirdre Logan, a pediatric psychologist at Boston Children’s Hospital. “There may also be kids who don’t always want to talk to people, and respond better to having a robotic stuffed animal with them. It’s exciting knowing what types of support we can provide kids who may feel isolated or scared about what they’re going through.”

Joining Breazeal and Logan on the paper are: Sooyeon Jeong, a PhD student in the Personal Robots group; Brianna O’Connell, Duncan Smith-Freedman, and Peter Weinstock, all of Boston Children’s Hospital; and Matthew Goodwin and James Heathers, both of Northeastern University.

Boosting mood

First prototyped in 2006, Huggable is a plush teddy bear with a screen depicting animated eyes. While the eventual goal is to make the robot fully autonomous, it is currently operated remotely by a specialist in the hall outside a child’s room. Through custom software, a specialist can control the robot’s facial expressions and body actions, and direct its gaze. The specialists could also talk through a speaker — with their voice automatically shifted to a higher pitch to sound more childlike — and monitor the participants via camera feed. The tablet-based avatar of the bear had identical gestures and was also remotely operated.

During the interventions involving Huggable, with kids ages 3 to 10 years, a specialist would sing nursery rhymes to younger children through the robot and move its arms during the song. Older kids would play the I Spy game, where they have to guess an object in the room described by the specialist through Huggable.

Through self-reports and questionnaires, the researchers recorded how much the patients and families liked interacting with Huggable. Additional questionnaires assessed patients’ positive moods, as well as anxiety and perceived pain levels. The researchers also used cameras mounted in the child’s room to capture and analyze speech patterns, characterizing them as joyful or sad using software.

A greater percentage of children and their parents reported that the children enjoyed playing with Huggable more than with the avatar or traditional teddy bear. Speech analysis backed up that result, detecting significantly more joyful expressions among the children during robotic interventions. Additionally, parents noted lower levels of perceived pain among their children.

The researchers noted that 93 percent of patients completed the Huggable-based interventions, and found few barriers to practical implementation, as determined by comments from the specialists.

A previous paper based on the same study found that the robot also seemed to facilitate greater family involvement in the interventions, compared to the other two methods, which improved the intervention overall. “Those are findings we didn’t necessarily expect in the beginning,” says Jeong, also a co-author on the previous paper. “We didn’t tell families to join any of the play sessions — it just happened naturally. When the robot came in, the child and robot and parents all interacted more, playing games or introducing the robot.”

An automated, take-home bot

The study also generated valuable insights for developing a fully autonomous Huggable robot, which is the researchers’ ultimate goal. They were able to determine which physical gestures are used most and least often, and which features specialists may want for future iterations. Huggable, for instance, could introduce doctors before they enter a child’s room or learn a child’s interests and share that information with specialists. The researchers may also equip the robot with computer vision, so it can detect certain objects in a room to talk about those with children.

“In these early studies, we capture data … to wrap our heads around an authentic use-case scenario where, if the bear was automated, what does it need to do to provide high-quality standard of care,” Breazeal says.

In the future, that automated robot could be used to improve continuity of care. A child would take home a robot after a hospital visit to further support engagement, adherence to care regimens, and monitoring well-being.

“We want to continue thinking about how robots can become part of the whole clinical team and help everyone,” Jeong says. “When the robot goes home, we want to see the robot monitor a child’s progress. … If there’s something clinicians need to know earlier, the robot can let the clinicians know, so [they’re not] surprised at the next appointment that the child hasn’t been doing well.”

Next, the researchers are hoping to zero in on which specific patient populations may benefit the most from the Huggable interventions. “We want to find the sweet spot for the children who need this type of extra support,” Logan says.


In search of an undersea kelp forest’s missing nitrogen

Ocean plants need nutrients to grow

Lobsters and sea stars are important contributors to the kelp forest nitrogen cycle.

June 24, 2019

Plants need nutrients to grow. So scientists were surprised to learn that giant kelp maintains its impressive growth rates year-round, even in summer and early fall when ocean currents along the California coast stop delivering nutrients. Clearly something else is nourishing the kelp, but what?

A team of NSF-supported scientists at UC Santa Barbara has made a breakthrough in identifying one of those sources. Their research suggests that the invertebrate residents of kelp forests provide at least some of the nutrients the giant algae need. The findings appear in the journal Global Change Biology.

To sustain growth rates, kelp requires many nutrients, especially nitrogen, but changes in ocean currents reduce the availability of such nutrients each year beginning in May. As a result, kelp forests face a potential shortage of nitrogen just as long summer days are poised to fuel algal growth, said lead author Joey Peters.

Peters and his co-authors, UC Santa Barbara marine ecologists Dan Reed and Deron Burkepile, saw the local community of sea-bottom invertebrates as a likely additional nitrogen source. Indeed, it turned out that these invertebrates, especially lobsters and sea stars, are an important part of the nitrogen cycle in coastal ecosystems. Waste from the invertebrates is a consistent component of the “missing nitrogen.”

The scientists made the discovery thanks to nearly two decades of data from the Santa Barbara Coastal Long-Term Ecological Research site, part of a network of sites funded by NSF to conduct long-term ecological research.

“This study reveals how environmental change can affect subtle ecosystem dynamics in kelp forests,” said David Garrison, a program director in NSF’s Division of Ocean Sciences, which funded the research. “This work would only be possible where long-term studies are underway.”


A further step towards reliable quantum computation

Quantum computation has been drawing the attention of many scientists because of its potential to outperform the capabilities of standard computers for certain tasks. For the realization of a quantum computer, one of the most essential features is quantum entanglement. This describes an effect in which several quantum particles are interconnected in a complex way. If one of the entangled particles is influenced by an external measurement, the state of the other entangled particle changes as well, no matter how far apart they may be from one another. Many scientists are developing new techniques to verify the presence of this essential quantum feature in quantum systems. Efficient methods have been tested for systems containing only a few qubits, the basic units of quantum information. However, the physical implementation of a quantum computer would involve much larger quantum systems. Yet, with conventional methods, verifying entanglement in large systems becomes challenging and time-consuming, since many repeated experimental runs are required.

Building on a recent theoretical scheme, a team of experimental and theoretical physicists from the University of Vienna and the ÖAW led by Philip Walther and Borivoje Dakić, together with colleagues from the University of Belgrade, successfully demonstrated that entanglement verification can be undertaken in a surprisingly efficient way and in a very short time, thus making this task applicable also to large-scale quantum systems. To test their new method, they experimentally produced a quantum system composed of six entangled photons. The results show that only a few experimental runs suffice to confirm the presence of entanglement with extremely high confidence, up to 99.99%.

The verification method can be understood in a rather simple way. After a quantum system has been generated in the laboratory, the scientists carefully choose specific quantum measurements which are then applied to the system. The results of these measurements lead to either confirming or denying the presence of entanglement. “It is somehow similar to asking certain yes-no questions to the quantum system and noting down the given answers. The more positive answers are given, the higher the probability that the system exhibits entanglement,” says Valeria Saggio, first author of the publication in Nature Physics. Surprisingly, the number of questions and answers needed is extremely low. The new technique proves to be orders of magnitude more efficient compared to conventional methods.
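The efficiency Saggio describes can be illustrated with a simple confidence bound: if any non-entangled state can pass a single randomly chosen test with probability at most p, then N consecutive passes leave at most p^N chance of a false positive. A sketch of that arithmetic (the value p = 1/2 is an illustrative assumption, not the paper's figure):

```python
import math

def runs_needed(pass_prob_separable, confidence):
    """Smallest N such that pass_prob_separable**N <= 1 - confidence,
    i.e. N consecutive passing measurements certify entanglement with
    at least the requested confidence."""
    delta = 1 - confidence
    return math.ceil(math.log(delta) / math.log(pass_prob_separable))

# If a separable state passes each test with probability at most 1/2,
# a handful of consecutive passes already gives 99.99% confidence.
n = runs_needed(0.5, 0.9999)
```

Because the required number of runs grows only logarithmically with the desired confidence, a few experimental runs suffice even for very stringent thresholds.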

Moreover, in certain cases the number of questions needed is even independent of the size of the system, thus confirming the power of the new method for future quantum experiments.

While the physical implementation of a quantum computer is still facing various challenges, new advances like efficient entanglement verification could move the field a step forward, thus contributing to the progress of quantum technologies.

Story Source:

Materials provided by University of Vienna. Note: Content may be edited for style and length.