Artificial “muscles” achieve powerful pulling force

As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
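To make that intuition quantitative, here is a back-of-the-envelope sketch using Timoshenko's classical bimetal-strip formula for the curvature of a bonded two-layer strip. All material values are invented toy numbers, not measurements of the fibers in this study.

```python
# A back-of-the-envelope sketch of the bimorph principle described above, using
# Timoshenko's classical bimetal-strip formula. All material values are invented
# toy numbers, not the properties of the fibers in this study.

def bimorph_curvature(alpha1, alpha2, E1, E2, t1, t2, dT):
    """Curvature (1/m) of a bonded two-layer strip after heating by dT.
    alpha: thermal expansion coefficients (1/K) of layers 1 and 2;
    E: elastic moduli (Pa); t: layer thicknesses (m)."""
    h = t1 + t2
    m = t1 / t2
    n = E1 / E2
    num = 6 * (alpha2 - alpha1) * dT * (1 + m) ** 2
    den = h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    return num / den  # the strip bends toward the slower-expanding layer

# Toy case: a soft, high-expansion elastomer bonded to a stiffer,
# low-expansion thermoplastic, heated by 10 K.
kappa = bimorph_curvature(alpha1=3e-4, alpha2=1e-4, E1=1e6, E2=1e9,
                          t1=0.5e-3, t2=0.5e-3, dT=10)
print(f"curvature: {kappa:.3f} 1/m (radius {1 / abs(kappa):.1f} m)")
```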

Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next came as a surprise the first time the researchers handled the coiled fibers. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.

One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.

The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating the fibers internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their light weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, such as “a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.

Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintilo. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.


Topics: Research, Materials Science and Engineering, DMSE, Mechanical engineering, Nanoscience and nanotechnology, Research Laboratory of Electronics, McGovern Institute, Brain and cognitive sciences, School of Science, School of Engineering, National Science Foundation (NSF)

Source

Instability in Antarctic ice projected to increase likelihood of worst-case sea level rise projections

Research News

Thwaites Glacier, modeled for new study, likely to succumb to instability

Instability in Antarctic ice is likely to rapidly increase sea level rise.

July 12, 2019

Images of vanishing Arctic ice are jarring, but the region’s potential contributions to sea level rise are no match for Antarctica’s. Now, a study says that instability hidden in Antarctic ice increases the likelihood of worst-case scenarios for the continent’s contribution to global sea level.

In the last six years, five closely observed Antarctic glaciers have doubled their rate of ice loss. At least one, Thwaites Glacier, modeled for the new study, will likely succumb to this instability, a volatile process that pushes ice into the ocean fast.

How much ice the glacier will shed in the coming 50 to 800 years can’t be projected exactly, scientists say, due to unpredictable fluctuations in climate and the need for more data. But researchers at the Georgia Institute of Technology and other institutions have factored the instability into 500 ice flow simulations for Thwaites.

The scenarios together point to the eventual triggering of the instability. Even if global warming were to stop, the instability would keep pushing ice out to sea at an accelerated rate over the coming centuries.

“This study underscores the sensitivity of key Antarctic glaciers to instability,” says Paul Cutler, director of NSF’s Antarctic Glaciology Program. “The warping influence of these instabilities on the uncertainty distribution for sea level predictions is of great concern, and is a strong motivator for research that will tighten the error bars on predictions of Antarctica’s contribution to sea level rise.”

The research was funded by NSF’s Office of Polar Programs.

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

Healthy lifestyle may offset genetic risk of dementia

Living a healthy lifestyle may help offset a person’s genetic risk of dementia, according to new research.

The study, led by the University of Exeter, was simultaneously published today in JAMA and presented at the Alzheimer’s Association International Conference 2019 in Los Angeles. It found that the risk of dementia was 32 per cent lower in people with a high genetic risk if they had followed a healthy lifestyle, compared with those who had an unhealthy lifestyle.

Participants with high genetic risk and an unfavourable lifestyle were almost three times more likely to develop dementia compared to those with a low genetic risk and favourable lifestyle.

Joint lead author Dr Elżbieta Kuźma, at the University of Exeter Medical School, said: “This is the first study to analyse the extent to which you may offset your genetic risk of dementia by living a healthy lifestyle. Our findings are exciting as they show that we can take action to try to offset our genetic risk for dementia. Sticking to a healthy lifestyle was associated with a reduced risk of dementia, regardless of the genetic risk.”

The study analysed data from 196,383 adults of European ancestry aged 60 and older from UK Biobank. The researchers identified 1,769 cases of dementia over a follow-up period of eight years. The team grouped the participants into those with high, intermediate and low genetic risk for dementia.

To assess genetic risk, the researchers looked at previously published data and identified all known genetic risk factors for Alzheimer’s disease. Each genetic risk factor was weighted according to the strength of its association with Alzheimer’s disease.
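For illustration, the sketch below shows how such a weighted polygenic risk score is typically computed; the effect sizes and allele counts are invented toy numbers, not the study's data.

```python
# Minimal polygenic risk score sketch: each variant contributes its risk-allele
# count times a weight (log odds ratio) from earlier association studies.
# All numbers are invented for illustration.
import numpy as np

log_odds = np.array([0.25, 0.10, 0.05])   # per-variant weights (toy values)
alleles = np.array([[2, 1, 0],            # participant 1: risk-allele counts
                    [0, 1, 1],            # participant 2
                    [1, 2, 2]])           # participant 3

scores = alleles @ log_odds               # weighted sum per participant

# Split into low / intermediate / high genetic risk by score quantiles.
low, high = np.quantile(scores, [0.2, 0.8])
groups = np.where(scores <= low, "low",
                  np.where(scores >= high, "high", "intermediate"))
print(scores)   # [0.6  0.15 0.55]
print(groups)   # ['high' 'low' 'intermediate']
```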

To assess lifestyle, researchers grouped participants into favourable, intermediate and unfavourable categories based on their self-reported diet, physical activity, smoking and alcohol consumption. The researchers considered no current smoking, regular physical activity, healthy diet and moderate alcohol consumption as healthy behaviours. The team found that living a healthy lifestyle was associated with a reduced dementia risk across all genetic risk groups.

Joint lead author Dr David Llewellyn, from the University of Exeter Medical School and the Alan Turing Institute, said: “This research delivers a really important message that undermines a fatalistic view of dementia. Some people believe it’s inevitable they’ll develop dementia because of their genetics. However it appears that you may be able to substantially reduce your dementia risk by living a healthy lifestyle.”

The study was led by the University of Exeter in collaboration with researchers from the University of Michigan, the University of Oxford, and the University of South Australia.

Story Source:

Materials provided by the University of Exeter.

Source

Research Headlines – Microorganisms to clean up environmental methane

Methane has a global warming impact 25 times higher than that of carbon dioxide and is the world’s second most emitted greenhouse gas. An EU-funded project is developing new strains of microorganisms that can transform methane into useful and bio-friendly materials.

Methanotrophs are microorganisms that metabolise methane. They are a subject of great interest in the environmental sector, where the emission of harmful greenhouse gases is a major concern.

The EU-funded CH4BIOVAL project is working to develop new methanotroph strains that can more readily transform methane from the atmosphere into valuable user products. The CH4BIOVAL team is particularly interested in the potential of methanotrophs to produce large amounts of bio-polymers known as polyhydroxyalkanoates (PHAs).

PHAs include a wide range of materials with different physical properties. Some of them are biodegradable and can be used in the production of bioplastics. The mechanical properties and biocompatibility of PHAs can be changed by modifying their surfaces or by combining them with other polymers, enzymes and inorganic materials, opening up an even wider range of applications.

CH4BIOVAL researchers are also interested in another methanotroph by-product called ectoine. This is a natural compound produced by several species of bacteria. It is what’s known as a compatible solute, which can be useful as a protective substance. For example, ectoine is used as an active ingredient in skincare and sun protection products, stabilising proteins and other cellular structures and protecting the skin from dryness and UV radiation.

The CH4BIOVAL project is isolating useful methanotroph strains through conventional genetic selection techniques as well as state-of-the-art bioinformatic techniques. The latter involve the detailed analysis and modification of complex biological features, based on an in-depth understanding of the genetic codes of selected strains.

By closely studying the metabolic characteristics of specific methanotroph strains, CH4BIOVAL scientists are identifying key genetic modifications that can improve their performance. Thus, the project is enabling both the abatement of an important greenhouse gas and the production of useful bio-consumables.

The project received funding from the EU’s Marie Skłodowska-Curie Actions programme.

Project details

  • Project acronym: CH4BIOVAL
  • Participants: Spain (Coordinator)
  • Project N°: 750126
  • Total costs: € 170 121
  • EU contribution: € 170 121
  • Duration: September 2017 to September 2019

Source

Research Headlines – New cameras to make X-rays safer for patients

CT scans have revolutionised the fight against human illness by creating three-dimensional images of the body’s inner workings. Such scans, however, can deliver high doses of radiation. Now EU-funded researchers have built special cameras that limit radiation while delivering images vital for patient health.

Doctors have used computed tomography scans, or CT scans, to greatly improve the diagnosis and treatment of illnesses such as cancer and cardiovascular disease. But a major problem limits their use: they deliver high doses of radiation that can harm patients nearly as much as their ailment.

Enter the EU-funded VOXEL project which set out to develop an innovative way to create three-dimensional imaging. The result is special cameras that can deliver 3D images but without the high doses of radiation.

‘Reports show that in Germany in 2013, although CT scans only represented 7 % of all X-rays performed, they conveyed 60 % of the radiation that patients received,’ says Marta Fajardo, project coordinator and assistant professor at the Instituto Superior Técnico in Lisbon, Portugal. ‘We built several prototype cameras. As an alternative to CT, they enable 3D X-ray images in very few exposures, meaning less radiation for the patient.’

New perspective on 3D imaging

CT scans make images by taking thousands of flat, two-dimensional photos in order to reconstruct a 3D image. The problem is that each photo exposes the patient to a dose of ionising radiation. As photos multiply, the radiation dose rises.

To counter this, VOXEL’s breakthrough idea was to adapt a technique called plenoptic imaging to X-ray radiation. Plenoptic cameras capture information about the direction that light rays, including X-rays, are travelling in space, as opposed to a normal camera that captures only light intensity.

Because researchers can use the information about light direction captured by plenoptic cameras to reconstruct 3D images, there is no need to take thousands of 2D photos. Images of important structures like blood vessels can be made from a single exposure, lowering the average radiation dose significantly.
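The sketch below gives a toy flavor of that directional information, using shift-and-add refocusing, a standard plenoptic operation; it is not VOXEL's reconstruction pipeline, and the data are synthetic.

```python
# Toy shift-and-add sketch of the plenoptic idea (synthetic data): each
# sub-aperture view records the scene from a slightly different direction,
# and shifting the views before summing brings a chosen depth plane into
# focus from a single set of simultaneous exposures.
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 32, 32, 5                    # image size and number of sub-aperture views
scene = rng.random((H, W))             # toy scene plane

# Each view sees the scene shifted in proportion to its angular offset
# (a disparity of 1 pixel per view here, purely illustrative).
views = np.stack([np.roll(scene, shift=k, axis=1) for k in range(N)])

def refocus(views, disparity):
    """Shift each view back by its assumed per-view disparity and average.
    When `disparity` matches the true one, the views align and the plane
    comes into focus."""
    return np.mean([np.roll(v, shift=-k * disparity, axis=1)
                    for k, v in enumerate(views)], axis=0)

sharp = refocus(views, disparity=1)    # correct depth: views align with scene
blurred = refocus(views, disparity=0)  # wrong depth: misaligned, blurred
print("aligned error:", np.abs(sharp - scene).mean())
print("misaligned error:", np.abs(blurred - scene).mean())
```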

A major part of the work involved using the right algorithms to manipulate the captured information. ‘First, we demonstrated that plenoptic imaging is mathematically equivalent to a limited-angle tomography problem,’ Fajardo says. ‘Then we could simply reformat plenoptic imaging as tomography data and apply image reconstruction algorithms to obtain much better images.’

But the biggest challenge remained engineering the cameras. ‘The higher the photon energy, the harder it is to manufacture the optics for a plenoptic camera,’ she says. ‘You need X-rays of different energies for different tasks.’ The solution was to develop one camera prototype that used lower-energy X-rays for tiny structures like cells and another that used higher-energy X-rays for larger objects, such as small animals or human organs.

Less radiation, healthier patients

While Fajardo is encouraged by the project’s results, work remains to be done. ‘The low-energy X-ray camera belongs to a niche market,’ she explains. ‘But the high-energy X-ray prototype has huge medical potential, although it still requires some development.’

Results from the project, which was awarded a Future Emerging Technologies grant, will soon be submitted for publication in the international science journal Nature Photonics.

Project details

  • Project acronym: VOXEL
  • Participants: Portugal (Coordinator), France, Spain, Netherlands, Italy
  • Project N°: 665207
  • Total costs: € 3 996 875
  • EU contribution: € 3 996 875
  • Duration: June 2015 to May 2019

Source

Model paves way for faster, more efficient translations of more languages

MIT researchers have developed a novel “unsupervised” language translation model — meaning it runs without the need for human annotations and guidance — that could lead to faster, more efficient computer-based translations of far more languages.

Translation systems from Google, Facebook, and Amazon require training models to look for patterns in millions of documents — such as legal and political documents, or news articles — that have been translated into various languages by humans. Given new words in one language, they can then find the matching words and phrases in the other language.

But this translational data is time consuming and difficult to gather, and simply may not exist for many of the 7,000 languages spoken worldwide. Recently, researchers have been developing “monolingual” models that make translations between texts in two languages, but without direct translational information between the two.

In a paper being presented this week at the Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a model that runs faster and more efficiently than these monolingual models.

The model leverages a metric in statistics, called Gromov-Wasserstein distance, that essentially measures distances between points in one computational space and matches them to similarly distanced points in another space. They apply that technique to “word embeddings” of two languages, which are words represented as vectors — basically, arrays of numbers — with words of similar meanings clustered closer together. In doing so, the model quickly aligns the words, or vectors, in both embeddings that are most closely correlated by relative distances, meaning they’re likely to be direct translations.

In experiments, the researchers’ model performed as accurately as state-of-the-art monolingual models — and sometimes more accurately — but much more quickly and using only a fraction of the computation power.

“The model sees the words in the two languages as sets of vectors, and maps [those vectors] from one set to the other by essentially preserving relationships,” says the paper’s co-author Tommi Jaakkola, a CSAIL researcher and the Thomas Siebel Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. “The approach could help translate low-resource languages or dialects, so long as they come with enough monolingual content.”

The model represents a step toward one of the major goals of machine translation, which is fully unsupervised word alignment, says first author David Alvarez-Melis, a CSAIL PhD student: “If you don’t have any data that matches two languages … you can map two languages and, using these distance measurements, align them.”

Relationships matter most

Aligning word embeddings for unsupervised machine translation isn’t a new concept. Recent work has trained neural networks to directly match vectors in word embeddings, or matrices, from two languages. But these methods require a lot of tweaking during training to get the alignments exactly right, which is inefficient and time consuming.

Measuring and matching vectors based on relational distances, on the other hand, is a far more efficient method that doesn’t require much fine-tuning. No matter where word vectors fall in a given matrix, the relationship between the words, meaning their distances, will remain the same. For instance, the vector for “father” may fall in completely different areas in two matrices. But vectors for “father” and “mother” will most likely always be close together.

“Those distances are invariant,” Alvarez-Melis says. “By looking at distance, and not the absolute positions of vectors, then you can skip the alignment and go directly to matching the correspondences between vectors.”

That’s where Gromov-Wasserstein comes in handy. The technique has been used in computer science for, say, helping align image pixels in graphic design. But the metric seemed “tailor made” for word alignment, Alvarez-Melis says: “If there are points, or words, that are close together in one space, Gromov-Wasserstein is automatically going to try to find the corresponding cluster of points in the other space.”
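A minimal sketch of this matching step is below, assuming the open-source POT (Python Optimal Transport) package and a handful of invented toy "embeddings"; it is not the researchers' code or their FASTTEXT data.

```python
# A minimal sketch of relational matching with Gromov-Wasserstein, assuming
# the open-source POT package (`pip install pot`). The four 3-D "embeddings"
# per language are invented toy data.
import numpy as np
import ot  # Python Optimal Transport

X = np.array([[0.9, 0.1, 0.0],     # "mother"
              [0.8, 0.2, 0.1],     # "father"
              [0.1, 0.9, 0.2],     # "house"
              [0.0, 0.8, 0.9]])    # "tree"
Y = np.array([[0.85, 0.15, 0.05],  # "madre"
              [0.75, 0.25, 0.15],  # "padre"
              [0.15, 0.85, 0.25],  # "casa"
              [0.05, 0.75, 0.95]]) # "arbol"

# Gromov-Wasserstein never compares X with Y directly; it compares distances
# *within* each space and matches points whose relative distances look alike.
C1 = ot.dist(X, X)            # pairwise distances within language 1
C2 = ot.dist(Y, Y)            # pairwise distances within language 2
p = ot.unif(len(X))           # uniform mass on each word
q = ot.unif(len(Y))

# T[i, j] is the (soft) probability that word i corresponds to word j.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')
print(np.round(T, 2))         # near-diagonal coupling: each word finds its twin
```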

For training and testing, the researchers used a dataset of publicly available word embeddings, called FASTTEXT, with 110 language pairs. In these embeddings, and others, words that appear frequently in similar contexts have closely matching vectors. “Mother” and “father” will usually be close together but both farther away from, say, “house.”

Providing a “soft translation”

The model notes vectors that are closely related yet different from the others, and assigns a probability that similarly distanced vectors in the other embedding will correspond. It’s kind of like a “soft translation,” Alvarez-Melis says, “because instead of just returning a single word translation, it tells you ‘this vector, or word, has a strong correspondence with this word, or words, in the other language.’”

An example would be in the months of the year, which appear closely together in many languages. The model will see a cluster of 12 vectors that are clustered in one embedding and a remarkably similar cluster in the other embedding. “The model doesn’t know these are months,” Alvarez-Melis says. “It just knows there is a cluster of 12 points that aligns with a cluster of 12 points in the other language, but they’re different to the rest of the words, so they probably go together well. By finding these correspondences for each word, it then aligns the whole space simultaneously.”

The researchers hope the work serves as a “feasibility check,” Jaakkola says, for applying the Gromov-Wasserstein method to machine-translation systems so they run faster, work more efficiently, and gain access to many more languages.

Additionally, a possible perk of the model is that it automatically produces a value that can be interpreted as quantifying, on a numerical scale, the similarity between languages. This may be useful for linguistics studies, the researchers say. The model calculates how distant all vectors are from one another in two embeddings, which depends on sentence structure and other factors. If vectors are all really close, they’ll score closer to 0, and the farther apart they are, the higher the score. Similar Romance languages such as French and Italian, for instance, score close to 1, while classic Chinese scores between 6 and 9 with other major languages.

“This gives you a nice, simple number for how similar languages are … and can be used to draw insights about the relationships between languages,” Alvarez-Melis says.


Topics: Research, Language, Machine learning, Artificial intelligence, Data, Algorithms, Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), IDSS, Electrical Engineering & Computer Science (eecs), School of Engineering

Source

Automated system generates robotic parts for novel tasks

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.  

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.  

Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.

In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels. Actuators can be fabricated with more than 100 voxel layers. Columns can contain more than 100 voxels, with different sequences of the materials that radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
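A heavily simplified sketch of that render-compare-swap loop appears below; the tiny grid, invented shades, and crude "ray-tracing" are toy stand-ins for the system described above, not the actual software.

```python
# Toy sketch of the iterative voxel search described above: randomly assign
# materials, render the two views, and keep single-voxel changes that reduce
# the error against the target images. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
H, W, DEPTH = 8, 8, 4                 # tiny grid; the real system uses ~5.5M voxels
SHADE = np.array([0.1, 0.5, 0.9])     # gray contribution of each material (invented)

target_flat = rng.random((H, W))      # stand-ins for the two target images
target_tilt = rng.random((H, W))

def render(grid, tilt):
    """Crude ray-tracing stand-in: average the material shades down each voxel
    column; tilting shears which voxels a 'ray' passes through."""
    shades = SHADE[grid]              # (H, W, DEPTH)
    if tilt:
        shades = np.stack([np.roll(shades[..., k], k, axis=1)
                           for k in range(DEPTH)], axis=-1)
    return shades.mean(axis=-1)

def error(grid):
    return (np.abs(render(grid, False) - target_flat).sum()
            + np.abs(render(grid, True) - target_tilt).sum())

grid = rng.integers(0, 3, size=(H, W, DEPTH))   # random initial assignment
best = error(grid)
for _ in range(20000):                # keep single-voxel swaps that reduce error
    i, j, k = rng.integers((H, W, DEPTH))
    old = grid[i, j, k]
    grid[i, j, k] = rng.integers(0, 3)
    if (new := error(grid)) < best:
        best = new
    else:
        grid[i, j, k] = old           # revert changes that don't help
print("final image error:", best)
```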

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight, lift, and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.


Topics: Research, Computer science and technology, Algorithms, Artificial intelligence, Machine learning, Robots, Robotics, 3-D printing, Materials Science and Engineering, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering

Source

Coral reefs shifting away from equatorial waters

Research News

Number of young corals on tropical reefs has declined

In the study, researchers counted the number of baby corals that settled on a reef.

July 12, 2019

Coral reefs are retreating from equatorial waters and establishing new reefs in more temperate regions, according to NSF-funded research published in the journal Marine Ecology Progress Series. Scientists found that the number of young corals on tropical reefs has declined by 85 percent – and doubled on subtropical reefs – over the last four decades.

The research was conducted in part at NSF’s Moorea Coral Reef Long-Term Ecological Research site in French Polynesia, one of 28 such NSF long-term research sites across the country and around the globe.

“Climate change seems to be redistributing coral reefs, the same way it is shifting many other marine species,” said Nichole Price, a senior research scientist at the Bigelow Laboratory for Ocean Sciences and lead author of the paper. “The clarity in this trend is stunning, but we don’t yet know whether the new reefs can support the incredible diversity of tropical systems.”

As oceans warm, cooler subtropical environments are becoming more favorable for corals than the equatorial waters where they traditionally thrived. That’s allowing drifting coral larvae to settle and grow in new regions.

The scientists, who are affiliated with more than a dozen institutions, believe that only certain types of coral are able to reach these new locations, based on how far their microscopic larvae can drift on currents before they run out of limited fat stores.

“This report addresses the important question of whether warming waters have resulted in increases in coral populations in new locations,” said David Garrison, a program director in NSF’s Division of Ocean Sciences, which funded the research. “Whether it offers hope for the sustainability of coral reefs requires more research and monitoring.”

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

New technology improves atrial fibrillation detection after stroke

A new method of evaluating irregular heartbeats outperformed the approach that’s currently used widely in stroke units to detect instances of atrial fibrillation.

The technology, called electrocardiomatrix, goes further than standard cardiac telemetry by examining large amounts of telemetry data in a way that’s so detailed it’s impractical for individual clinicians to attempt.

Co-inventor Jimo Borjigin, Ph.D., recently published the latest results from her electrocardiomatrix technology in Stroke. Among stroke patients with usable data (260 of 265), electrocardiomatrix was highly accurate in identifying those with Afib.

“We validated the use of our technology in a clinical setting, finding the electrocardiomatrix was an accurate method to determine whether a stroke survivor had an Afib,” says Borjigin, an associate professor of neurology and molecular and integrative physiology at Michigan Medicine.

A crucial metric

After a stroke, neurologists are tasked with identifying which risk factors may have contributed in order to do everything possible to prevent another event.

That makes detecting irregular heartbeat an urgent concern for these patients, explains first author Devin Brown, M.D., professor of neurology and a stroke neurologist at Michigan Medicine.

“Atrial fibrillation is a very important and modifiable risk factor for stroke,” Brown says.

Importantly, the electrocardiomatrix identification method was highly accurate for the 212 patients who did not have a history of Afib, Borjigin says. She says this group is most clinically relevant, because of the importance of determining whether stroke patients have previously undetected Afib.

When a patient has Afib, their irregular heartbeat can lead to blood collecting in their heart, which can form a stroke-causing clot. Many different blood thinners are on the market today, making it easier for clinicians to get their patients on an anticoagulant they’ll take as directed.

The most important part is determining Afib’s presence in the first place.

Much-needed improvement

Brown says challenges persist in detecting intermittent Afib during stroke hospitalization.

“More accurate identification of Afib should translate into more strokes prevented,” she says.

Once hospitalized in the stroke unit, patients are typically placed on continuous heart rhythm monitoring. Stroke neurologists want to detect possible intermittent Afib that initial monitoring like an electrocardiogram, or ECG, would have missed.

Because a physician can’t reasonably review every single heartbeat, current monitoring technology flags heart rates that are too high, Brown says. The neurologist then reviews these flagged events, which researchers say could lead to some missed Afib occurrences, or false positives in patients with different heart rhythm issues.

In contrast, Borjigin’s electrocardiomatrix converts two-dimensional signals from the ECG into a three-dimensional heatmap that allows for rapid inspection of all collected heartbeats. Borjigin says this method permits fast, accurate and intuitive detection of cardiac arrhythmias. It also minimizes false positive as well as false negative detection of arrhythmias.
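The sketch below illustrates the general idea with a synthetic signal and deliberately naive peak-finding; it is a schematic stand-in, not the patented electrocardiomatrix implementation.

```python
# Simplified sketch of a beat-aligned ECG heatmap: align a fixed window
# around each detected R-peak and stack the windows into a matrix, so
# thousands of beats can be scanned at a glance. Synthetic data throughout.
import numpy as np
import matplotlib.pyplot as plt

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)              # one minute of toy signal
ecg = np.sin(2 * np.pi * 1.1 * t) ** 63   # sharp periodic "R-peaks" (synthetic)

# Naive R-peak detection: thresholded local maxima.
peaks = [i for i in range(1, len(ecg) - 1)
         if ecg[i] > 0.5 and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]

# One fixed-length window per beat, centered on its peak.
half = int(0.3 * fs)                      # 300 ms on each side
rows = [ecg[p - half:p + half] for p in peaks
        if half <= p <= len(ecg) - half]
ecm = np.vstack(rows)                     # beats x samples matrix

plt.imshow(ecm, aspect='auto', cmap='viridis')
plt.xlabel('samples around R-peak')
plt.ylabel('beat number')
plt.title('toy electrocardiomatrix-style heatmap')
plt.show()
```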

“We originally noted five false positives and five false negatives in the study,” Borjigin says, “but expert review actually found the electrocardiomatrix was correct instead of the clinical documentation we were comparing it to.”

More applications

The Borjigin lab also recently demonstrated the usefulness of the electrocardiomatrix to differentiate between Afib and atrial flutter. In addition, the lab has shown the ability of electrocardiomatrix to capture reduced heart-rate variability in critical care patients.

Borjigin says she envisions electrocardiomatrix technology will one day be used to assist the detection of all cardiac arrhythmias online or offline and side-by-side with the use of ECG.

“I believe that sooner or later, electrocardiomatrix will be used in clinical practice to benefit patients,” she says.

Disclosure: Borjigin is the co-inventor of electrocardiomatrix, for which she holds a patent in Japan (6348226); and for which the Regents of the University of Michigan hold the patent in the United States (9918651).

Source

Research Headlines – Agile auto production gets a digital makeover

Global markets for cars are becoming increasingly competitive, forcing manufacturers to find cost savings while meeting greater demand for customisation. Advances in technology, known as ‘Industry 4.0’, make it possible to meet these seemingly contradictory demands. The EU-funded AUTOPRO project found a solution.

Industry’s struggle to drive down costs dates back to the rigid assembly lines needed to maximise efficiency, an approach made famous by US auto-makers in the early 1900s. By design, this approach did not handle ‘variety’ very well.

While variety is more feasible today – both practically and economically – it gets more difficult as products become more complex and integrated. Structural rigidity makes it hard to cope with product-model changes, product-mix variations and batch-size reductions.

Heavy, time-consuming investment is typically needed to streamline assembly systems after changes are made because the software governing these processes cannot ‘visualise’ complex scenarios.

The EU-funded AUTOPRO project found an ‘Industry 4.0’ solution to help the automotive industry keep up with increasing demand for customised cars. An integrated, highly visual software application is at the heart of their system, making work flows more flexible or ‘agile’, even while accommodating more variants in the production system, and at the same time boosting productivity by 30-60 %.

Real-time shadows?

Arculus, the German SME behind the project, has built up experience providing integrated ICT solutions that offer what it calls a ‘virtual real-time shadow’ of all the elements in the production process. This provides a much clearer overview of how certain key performance indicators are affected when changes are made in one or more elements.

The modular solution can work for any sector with multiple and complex work flows, but it is in car manufacturing, which is highly dependent on new technological processes to remain competitive, that Arculus expects the most enthusiasm.

By customising the navigation control and adding an enhanced interface and automatic communication protocol, Arculus’ platform is better equipped to help auto-makers change production parameters faster and more efficiently.

Prospects for this innovative solution are strong. The 2020 forecast global market for advanced manufacturing technologies is around EUR 750 billion. EU targets to increase industry’s share of GDP to 20 % by 2020, with the auto sector a stated pillar of the economy, provide valuable impetus as well.

Project details

  • Project acronym: AUTOPRO
  • Participants: Germany (Coordinator)
  • Project N°: 782842
  • Total costs: € 71 429
  • EU contribution: € 50 000
  • Duration: August 2017 to December 2017

Source

Research Headlines – Taming terahertz radiation for novel imaging applications

An underexploited band of the electromagnetic spectrum is set to enable new imaging systems that are capable of peering into complex materials and the human body, thanks to innovative research in an EU-funded project.

Terahertz radiation falls between infrared and microwaves on the electromagnetic spectrum but is less widely used, due to a variety of key technological and practical challenges.

These waves can penetrate materials such as clothing or packaging, but unlike X-rays, THz radiation is non-ionising, making it safe for living tissue. This means THz scanners could safely be used in airports to pick up the unique spectral signatures of several types of explosives, many compounds used in pharmaceutical ingredients, and illegal narcotics.

An EU-funded initiative has now laid the foundations for transformative applications in biology, medicine, material science, quality inspection and security using this radiation. By testing novel solutions to efficiently harness the unique properties of THz waves, the THEIA project has driven important research in the field.

‘The results can be used to develop novel types of scanners or imaging systems,’ says Marco Peccianti, THEIA’s lead researcher at the University of Sussex in the UK. ‘Many complex materials possess unique fingerprints in the THz spectrum, including compounds such as polymers, proteins, amino acids, drugs or explosives. For instance, terahertz radiation will be of paramount importance in next-generation airport security scanners. Scanners based on THz radiation would increase our ability to recognise drugs, explosives and other illicit substances, with notable societal and economic benefits.’

Obstacles to be overcome

Other applications include analysing the composition of a wide range of complex materials, creating imaging systems to diagnose defects in manufacturing and peering inside building walls to detect structural problems. In medicine and medical research, imaging systems using THz spectroscopy, which can detect differences in water content and density of tissue, would provide an alternative means of looking inside the human body, particularly into some types of soft tissue to detect diseases such as cancer.

To bring these applications to fruition, several obstacles to efficiently exploiting the properties of THz radiation need to be overcome.

In the THEIA project, the team devised a novel technique for channelling THz waves using waveguides, structures that control the direction and dimensions of the waves. Instead of generating a THz wave and coupling it to a waveguide using a lens or similar optical components, the researchers developed a way to generate the wave directly inside the waveguide.

Improved speed and efficiency

‘The investigation has been performed by simulating the waveguide structure using high-performance computing solutions, and matching the prediction to experimental observations,’ says Peccianti. ‘Practically, we compared different technological solutions, from embedding wave generation in a high-performance waveguide to fabricating the waveguides with terahertz-emitting materials. The key result is the creation of an active terahertz waveguide system.’

The THEIA solution not only delivers a THz signal where needed, but also serves to remove many of the large and bulky components of existing THz systems. ‘This could potentially enable THz imaging to be used in ways that would previously have been impossible,’ Peccianti says.

Researchers are now focusing on improving the efficiency, speed and resolution of their THz imaging techniques in TIMING, a follow-up EU-funded project. The research will aim to develop a next generation of THz imaging devices as unique diagnostic tools to unambiguously discriminate molecular compounds with improved speed and resolution.

The THEIA project received funding from the EU’s Marie Skłodowska-Curie Actions programme.

Project details

  • Project acronym: THEIA
  • Participants: United Kingdom (Coordinator)
  • Project N°: 630833
  • Total costs: € 100 000
  • EU contribution: € 100 000
  • Duration: March 2014 to February 2018

Source

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.

But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.

In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.

This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.

In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.

The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of a bigger piece to understand how this kind of learning happens in the world.”

Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.

Visual learner

For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.

The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components — objects interacting with each other and with people — and high-level properties you wouldn’t see in a still image or just in language,” Ross says.

The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.

In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: λxy. woman x, pick_up x y, apple y.

Those expressions and the video are inputted to the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.
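As a toy illustration of that truth-checking step, a caption's logical form can be tested by trying every binding of video tracks to its variables; the data structures and detections below are invented, not the actual Sentence Tracker.

```python
# Toy illustration of checking a caption's meaning against a video: the
# logical form of "The woman is picking up an apple" is true if SOME
# assignment of detected tracks to its variables satisfies every predicate.
# All detections below are invented.
from itertools import product

# Pretend per-frame detections: each track has a label and, per frame, the
# track it is currently holding (None = holding nothing).
tracks = {
    "track1": {"label": "woman", "holds": [None, "track2", "track2"]},
    "track2": {"label": "apple", "holds": [None, None, None]},
    "track3": {"label": "table", "holds": [None, None, None]},
}

def pick_up(x, y):
    """True if track x transitions from not holding y to holding y."""
    h = tracks[x]["holds"]
    return any(a != y and b == y for a, b in zip(h, h[1:]))

def satisfied(x, y):
    # lambda x y. woman(x) AND pick_up(x, y) AND apple(y)
    return (tracks[x]["label"] == "woman"
            and tracks[y]["label"] == "apple"
            and pick_up(x, y))

bindings = [(x, y) for x, y in product(tracks, repeat=2) if satisfied(x, y)]
print(bindings)   # [('track1', 'track2')] -> the caption is true of the video
```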

Connecting the dots

The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.

In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”

The training produces a syntactic and semantic grammar for the words it’s learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.

Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see the world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”

“This research is exactly the right direction for natural language processing,” says Stefanie Tellex, a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret grounded language, we need semantic representations, but it is not practicable to make it available at training time. Instead, this work captures representations of compositional structure using context from captioned videos. This is the paper I have been waiting for!”

In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.

This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.


Topics: Research, Language, Machine learning, Artificial intelligence, Data, Computer vision, Human-computer interaction, McGovern Institute, Center for Brains Minds and Machines, Robots, Robotics, National Science Foundation (NSF), Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering, MIT-IBM Watson AI Lab

Source

IDSS hosts inaugural Learning for Dynamics and Control conference

Over the next decade, the biggest generator of data is expected to be devices that sense and control the physical world. From autonomy to robotics to smart cities, this data explosion — paired with advances in machine learning — creates new possibilities for designing and optimizing technological systems that use their own real-time generated data to make decisions.

To address the many scientific questions and application challenges posed by the real-time physical processes of these “dynamical” systems, researchers from MIT and elsewhere organized a new annual conference called Learning for Dynamics and Control. Dubbed L4DC, the inaugural conference was hosted at MIT by the Institute for Data, Systems, and Society (IDSS).

As excitement has built around machine learning and autonomy, there is an increasing need to consider both the data that physical systems produce and the feedback these systems receive, especially from their interactions with humans. That need extends into the domains of data science, control theory, decision theory, and optimization.

“We decided to launch L4DC because we felt the need to bring together the communities of machine learning, robotics, and systems and control theory,” said IDSS Associate Director Ali Jadbabaie, a conference co-organizer and professor in IDSS, the Department of Civil and Environmental Engineering (CEE), and the Laboratory for Information and Decision Systems (LIDS).

“The goal was to bring together these researchers because they all converged on a very similar set of research problems and challenges,” added co-organizer Ben Recht, of the University of California at Berkeley, in opening remarks.

Over the two days of the conference, talks covered core topics including the foundations of learning dynamics models, data-driven optimization for dynamical models, optimization for machine learning, and reinforcement learning for physical, dynamical, and control systems. Talks also featured examples of applications in fields like robotics, autonomy, and transportation systems.
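To give a flavor of the first of those topics, a minimal, purely illustrative example (not drawn from any particular talk) of learning a dynamics model from data is fitting a linear model x_{t+1} ≈ A x_t + B u_t to an observed trajectory by least squares:

```python
# Illustrative only: the simplest "learning for dynamics" problem,
# fitting x_{t+1} ≈ A x_t + B u_t by ordinary least squares.
import numpy as np

def fit_linear_dynamics(X, U):
    """X: (T+1, n) array of states; U: (T, m) array of inputs.

    Returns (A, B) minimizing the squared one-step prediction error.
    """
    Z = np.hstack([X[:-1], U])                 # (T, n+m) regressors
    Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T            # A, B

```

A model identified this way can then feed a downstream controller design, the kind of learning-to-control pipeline the conference is organized around.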

“How could self-driving cars change urban systems?” asked Cathy Wu, an assistant professor in CEE, IDSS, and LIDS, in a talk that investigated how transportation and urban systems may change over the next few decades. Only a small percentage of autonomous vehicles are needed to significantly affect traffic systems, Wu argued, which will in turn affect other urban systems. “Distribution learning provides us with an understanding for integrating autonomy into urban systems,” said Wu.

Claire Tomlin of UC Berkeley presented on integrating learning into control in the context of safety in robotics. Tomlin’s team integrates learning mechanisms that help robots adapt to sudden changes, such as a gust of wind, an unexpected human behavior, or an unknown environment. “We’ve been working on a number of mechanisms for doing this computation in real time,” Tomlin said.

Pablo Parrilo, a professor in the Department of Electrical Engineering and Computer Science and faculty member of both IDSS and LIDS, was also a conference organizer, along with George Pappas of the University of Pennsylvania and Melanie Zeilinger of ETH Zurich.

L4DC was sponsored by the National Science Foundation, the U.S. Air Force Office of Scientific Research, the Office of Naval Research, and the Army Research Office, a part of the Combat Capabilities Development Command Army Research Laboratory (CCDC ARL).

“The cutting-edge combination of classical control with recent advances in artificial intelligence and machine learning will have significant and broad potential impact on Army multi-domain operations, and include a variety of systems that will incorporate autonomy, decision-making and reasoning, networking, and human-machine collaboration,” said Brian Sadler, senior scientist for intelligent systems, U.S. Army CCDC ARL.

Organizers plan to make L4DC a recurring conference, hosted at different institutions. “Everyone we invited to speak accepted,” Jadbabaie said. “The largest room in Stata was packed until the end of the conference. We take this as a testament to the growing interest in this area, and hope to grow and expand the conference further in the coming years.”


Topics: Institute for Data, Systems, and Society, Civil and environmental engineering, Laboratory for Information and Decision Systems (LIDS), Electrical Engineering & Computer Science (eecs), School of Engineering, Machine learning, Special events and guest speakers, Data, Research, Robotics, Transportation, Autonomous vehicles

Source

Professor Emerita Catherine Chvany, Slavic scholar, dies at 91

Professor Emerita Catherine Vakar Chvany, a renowned Slavic linguist and literature scholar who played a pivotal role in advancing the study of Russian language and literature in MIT’s Foreign Languages and Literatures Section (now Global Studies and Languages), died on Oct. 19 in Watertown, Massachusetts. She was 91.

Chvany served on the MIT faculty for 26 years before her retirement in 1993.

Global Studies and Languages head Emma Teng noted that MIT’s thriving Russian studies curriculum today is a legacy of Chvany’s foundational work in the department. Maria Khotimsky, senior lecturer in Russian, added: “Several generations of Slavists are grateful for Professor Chvany’s inspiring mentorship, while her works in Slavic poetics and linguistics are renowned in the U.S. and internationally.”

A prolific and influential scholar

A prolific scholar, Chvany wrote “On the Syntax of Be-Sentences in Russian” (Slavica Publishers, 1975); and co-edited four volumes: “New Studies in Russian Language and Literature” (Slavica, 1987); “Morphosyntax in Slavic” (Slavica, 1980); “Slavic Transformational Syntax” (University of Michigan, 1974); and “Studies in Poetics: Commemorative Volume: Krystyna Pomorska” (Slavica Publishers, 1995).

In 1996, linguists Olga Yokoyama and Emily Klenin published an edited collection of her work, “Selected Essays of Catherine V. Chvany” (Slavica).

In her articles, Chvany took up a range of issues in linguistics, including not only variations on the verb “to be” but also hierarchies of situations in syntax of agents and subjects; definiteness in Bulgarian, English, and Russian; other issues of lexical storage and transitivity; hierarchies in Russian cases; and issues of markedness, including an important overview, “The Evolution of the Concept of Markedness from the Prague Circle to Generative Grammar.”

In literature, she took up language issues in the classic “Tale of Igor’s Campaign,” Teffi’s poems, Nikolai Leskov’s short stories, and a novella by Aleksandr Solzhenitsyn.

From Paris to Cambridge 

“Catherine Chvany was always so present that it is hard to think of her as gone,” said MIT Literature Professor Ruth Perry. “She had strong opinions and wasn’t afraid to speak out about them.”

Chvany was born on April 2, 1927, in Paris, France, to émigré Russian parents. During World War II, she and her younger sister Anna were sent first to the Pyrenees and then to the United States with assistance from a courageous young Unitarian minister’s wife, Martha Sharp.

Fluent in Russian and French, Chvany quickly mastered English. She graduated from the Girls’ Latin School in Boston in 1946 and attended Radcliffe College from 1946 to 1948. She left school to marry Lawrence Chvany and raise three children, Deborah, Barbara, and Michael.

In 1961-63, she returned to school and completed her undergraduate degree in linguistics at Harvard University. She began her career as an instructor of Russian language at Wellesley College in 1966 and received her PhD in Slavic languages and literatures from Harvard in 1970.

She joined the faculty at MIT in 1967 and became an assistant professor in 1971, an associate professor in 1974, and a full professor in 1983.

Warmth, generosity, and friendship

Historian Philip Khoury, who was dean of the School of Humanities, Arts and Social Sciences during the latter years of Chvany’s time at MIT, remembered her warmly as “a wonderful colleague who loved engaging with me on language learning and how the MIT Russian language studies program worked.”

Elizabeth Wood, a professor of Russian history, recalled the warm welcome that Chvany gave her when she came to MIT in 1990: “She always loved to stop and talk at the Tuesday faculty lunches, sharing stories of her life and her love of Slavic languages.”

Chvany’s influence was broad and longstanding, in part as a result of her professional affiliations. Chvany served on the advisory or editorial boards of “Slavic and East European Journal,” “Russian Language Journal,” “Journal of Slavic Linguistics,” “Peirce Seminar Papers,” “Essays in Poetics” (United Kingdom), and “Supostavitelno ezikoznanie” (Bulgaria).

Emily Klenin, an emerita professor of Slavic languages and literature at the University of California at Los Angeles, noted that Chvany had a practice of expressing gratitude to those whom she mentored. She connected that practice to Chvany’s experience of being aided during WWII. “Her warm and open attitude toward life was reflected in her continuing interest and friendship for the young people she mentored, even when, as most eventually did, they went on to lives involving completely different academic careers or even no academic career at all,” Klenin said.

Memorial reception at MIT on November 18

Chvany is survived by her children, Deborah Gyapong and her husband Tony of Ottawa, Canada; Barbara Chvany and her husband Ken Silbert of Orinda, California; and Michael Chvany and his wife Sally of Arlington, Massachusetts; her foster-brother, William Atkinson of Cambridge, Massachusetts; six grandchildren; and nine great-grandchildren.

A memorial reception will be held on Sunday, Nov. 18, from 1:30 to 4:00 p.m. in the Samberg Conference Center, 7th floor. Donations in Chvany’s name may be made to the Unitarian Universalist Association. Visit Friends of the UUA for online donations. Please RSVP to Michael Chvany, Mike@BridgeStreetProductions.com, if you plan to attend the memorial.


Source

Research reveals exotic quantum states in double-layer graphene

Findings establish a potential new platform for future quantum computers

Image: A new type of quasiparticle, a composite fermion consisting of one electron and two different types of magnetic flux, discovered in the graphene double-layer structure.

July 8, 2019

NSF-funded research by scientists at Brown and Columbia Universities has demonstrated the existence of previously unknown states of matter arising in double-layer stacks of graphene, a two-dimensional nanomaterial. These new states, manifestations of the fractional quantum Hall effect, arise from the complex interactions of electrons both within and across the graphene layers.

“The findings show that stacking 2D materials together in close proximity generates entirely new physics,” said Jia Li, a physicist at Brown University. “In terms of materials engineering, this work shows that these layered systems could be viable in creating new types of electronic devices that take advantage of these new quantum Hall states.”

The research is published in the journal Nature Physics.

The Hall effect emerges when a magnetic field is applied to a conducting material in a direction perpendicular to a current flow. The magnetic field causes the current to deflect, creating a voltage in the transverse direction, called the Hall voltage. 
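For reference, in a simple conductor the Hall voltage follows the standard textbook relation (the article itself does not state the formula):

```latex
% Classical Hall effect (textbook relation, not from the article):
%   I = current, B = perpendicular magnetic field,
%   n = carrier density, q = carrier charge, t = conductor thickness.
V_H = \frac{I B}{n q t}
```

In the quantum Hall regime, by contrast, the Hall conductance locks onto quantized plateaus, and in the fractional quantum Hall effect those plateaus occur at fractional multiples of e²/h, which is what makes the states observed in the double-layer system distinctive.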

Importantly, researchers say, several of these new quantum Hall states may be useful in making fault-tolerant quantum computers.

“The full implications of this research are yet to be understood,” said Germano Iannacchione, a program director in NSF’s Division of Materials Research, which funded the project. “However, it’s not hard at all to foresee significant advances based on these discoveries emerging in traditional technologies such as semiconductors and sensors.”

— NSF Public Affairs, (703) 292-8070, media@nsf.gov

Source