Research Headlines – Agile auto production gets a digital makeover

Global markets for cars are becoming increasingly competitive, forcing manufacturers to find cost savings while meeting growing demand for customisation. Advances in technology, known collectively as ‘Industry 4.0’, make it possible to meet these seemingly contradictory demands. The EU-funded AUTOPRO project found a solution.

Industry’s struggle to drive down costs dates back to the rigid assembly lines needed to maximise efficiency, an approach made famous by US auto-makers in the early 1900s. By design, this approach did not handle ‘variety’ very well.

While variety is more feasible today – both practically and economically – it gets more difficult as products become more complex and integrated. Structural rigidity makes it hard to cope with product-model changes, product-mix variations and batch-size reductions.

Heavy, time-consuming investment is typically needed to streamline assembly systems after changes are made because the software governing these processes cannot ‘visualise’ complex scenarios.

The EU-funded AUTOPRO project found an ‘Industry 4.0’ solution to help the automotive industry keep up with increasing demand for customised cars. An integrated, highly visual software application is at the heart of the system, making workflows more flexible, or ‘agile’, even while accommodating more variants in the production system, and at the same time boosting productivity by 30-60 %.

Real-time shadows?

Arculus, the German SME behind the project, has built up experience providing integrated ICT solutions that offer what it calls a ‘virtual real-time shadow’ of all the elements in production. This provides a much clearer overview of how key performance indicators are affected when changes are made to one or more elements.

The modular solution can work for any sector with multiple, complex workflows, but it is in car manufacturing – highly dependent on new technological processes to remain competitive – that Arculus expects the most enthusiasm.

By customising the navigation control and adding an enhanced interface and automatic communication protocol, Arculus’ platform is better equipped to help auto-makers change production parameters faster and more efficiently.

Prospects for this innovative solution are strong. The global market for advanced manufacturing technologies is forecast to reach around EUR 750 billion in 2020. EU targets to increase industry’s share of GDP to 20 % by 2020, with the auto sector a stated pillar of the economy, provide valuable impetus as well.

Project details

  • Project acronym: AUTOPRO
  • Participants: Germany (Coordinator)
  • Project N°: 782842
  • Total costs: € 71 429
  • EU contribution: € 50 000
  • Duration: August 2017 to December 2017

Source

Research Headlines – Taming terahertz radiation for novel imaging applications

An underexploited band of the electromagnetic spectrum is set to enable new imaging systems capable of peering into complex materials and the human body, thanks to innovative research in an EU-funded project.

Terahertz radiation falls between infrared and microwaves on the electromagnetic spectrum but is less widely used, due to a number of technological and practical challenges.

These waves can penetrate materials such as clothing or packaging but, unlike X-rays, THz radiation is non-ionising, making it safe for living tissue. This means THz scanners could safely be used in airports to pick up the unique spectral signatures of several types of explosives, many pharmaceutical compounds and illegal narcotics.
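
The safety claim is easy to make quantitative. A terahertz photon carries only a few millielectronvolts of energy, roughly three orders of magnitude below the ~10 eV typically needed to ionise molecules; the figures below are standard physics values rather than numbers from the article:

```latex
% Photon energy at 1 THz versus typical molecular ionisation energies
E = h\nu \approx (4.14\times 10^{-15}\,\mathrm{eV\,s})\times(10^{12}\,\mathrm{Hz})
  \approx 4\,\mathrm{meV} \quad\ll\quad E_{\mathrm{ionisation}} \sim 10\,\mathrm{eV}
```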

An EU-funded initiative has now laid the foundations for transformative applications in biology, medicine, material science, quality inspection and security using this radiation. By testing novel solutions to efficiently harness the unique properties of THz waves, the THEIA project has driven important research in the field.

‘The results can be used to develop novel types of scanners or imaging systems,’ says Marco Peccianti, THEIA’s lead researcher at the University of Sussex in the UK. ‘Many complex materials possess unique fingerprints in the THz spectrum, including compounds such as polymers, proteins, amino acids, drugs or explosives. For instance, terahertz radiation will be of paramount importance in next-generation airport security scanners. Scanners based on THz radiation would increase our ability to recognise drugs, explosives and other illicit substances, with notable societal and economic benefits.’

Obstacles to be overcome

Other applications include analysing the composition of a wide range of complex materials, creating imaging systems to diagnose defects in manufacturing and peering inside building walls to detect structural problems. In medicine and medical research, imaging systems using THz spectroscopy, which can detect differences in water content and density of tissue, would provide an alternative means of looking inside the human body, particularly into some types of soft tissue to detect diseases such as cancer.

To bring these applications to fruition, several obstacles to efficiently exploiting the properties of THz radiation need to be overcome.

In the THEIA project, the team devised a novel technique for channelling THz waves using waveguides – structures that control the direction and dimensions of the waves. Instead of generating a THz wave and coupling it to a waveguide using a lens or similar optical components, the researchers developed a way to generate the wave directly inside the waveguide.

Improved speed and efficiency

‘The investigation has been performed by simulating the waveguide structure using high-performance computing solutions, and matching the prediction to experimental observations,’ says Peccianti. ‘Practically, we compared different technological solutions, from embedding wave generation in a high-performance waveguide to fabricating the waveguides with terahertz-emitting materials. The key result is the creation of an active terahertz waveguide system.’

The THEIA solution not only delivers a THz signal where needed, but also serves to remove many of the large and bulky components of existing THz systems. ‘This could potentially enable THz imaging to be used in ways that would previously have been impossible,’ Peccianti says.

Researchers are now focusing on improving the efficiency, speed and resolution of their THz imaging techniques in TIMING, a follow-up EU-funded project. The research will aim to develop a next generation of THz imaging devices as unique diagnostic tools to unambiguously discriminate molecular compounds with improved speed and resolution.

The THEIA project received funding from the EU’s Marie Skłodowska-Curie Actions programme.

Project details

  • Project acronym: THEIA
  • Participants: United Kingdom (Coordinator)
  • Project N°: 630833
  • Total costs: € 100 000
  • EU contribution: € 100 000
  • Duration: March 2014 to February 2018

Source

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.

But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.

In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.

This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.

In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.

The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of a bigger piece to understand how this kind of learning happens in the world.”

Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.

Visual learner

For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.

The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components — objects interacting with each other and with people — and high-level properties you wouldn’t see in a still image or just in language,” Ross says.

The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.

In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: λxy. woman x, pick_up x y, apple y.
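
To make the logical form concrete, here is a minimal sketch, with hypothetical names and toy data rather than the authors' code, of how such an expression can be evaluated against object and action detections:

```python
# Illustrative sketch only (hypothetical names, not the MIT system).
# The candidate meaning of "The woman is picking up an apple":
#   λx y. woman(x) ∧ pick_up(x, y) ∧ apple(y)
from itertools import product

def meaning(x, y, facts):
    """True if the binding (x, y) satisfies the logical form."""
    return (("woman", x) in facts
            and ("pick_up", x, y) in facts
            and ("apple", y) in facts)

# Hypothetical detections from one video, over entity ids e1, e2.
facts = {("woman", "e1"), ("apple", "e2"), ("pick_up", "e1", "e2")}
entities = {"e1", "e2"}

# The expression is true of the video if some binding satisfies it.
print(any(meaning(x, y, facts) for x, y in product(entities, repeat=2)))  # True
```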

Those expressions and the video are fed to the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.

Connecting the dots

The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.

In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”
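
A minimal sketch of that winnowing step, again with hypothetical names and a toy stand-in for the Sentence Tracker's vision score:

```python
# Illustrative sketch only: the video acts as a weak label, and the
# candidate meaning best supported by the detections is kept.
def vision_score(expr, detections):
    """Toy stand-in for the Sentence Tracker: the fraction of the
    expression's atomic predicates matched by the video detections."""
    return sum(atom in detections for atom in expr) / len(expr)

def best_meaning(candidates, detections):
    """Pick the candidate logical form most consistent with the video."""
    return max(candidates, key=lambda expr: vision_score(expr, detections))

candidates = [
    (("woman", "x"), ("pick_up", "x", "y"), ("apple", "y")),
    (("woman", "x"), ("put_down", "x", "y"), ("apple", "y")),
]
detections = {("woman", "x"), ("apple", "y"), ("pick_up", "x", "y")}
print(best_meaning(candidates, detections))  # the pick_up reading wins
```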

The training produces a syntactic and semantic grammar for the words it’s learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.

Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see the world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”

“This research is exactly the right direction for natural language processing,” says Stefanie Tellex, a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret grounded language, we need semantic representations, but it is not practicable to make them available at training time. Instead, this work captures representations of compositional structure using context from captioned videos. This is the paper I have been waiting for!”

In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.

This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.


Topics: Research, Language, Machine learning, Artificial intelligence, Data, Computer vision, Human-computer interaction, McGovern Institute, Center for Brains Minds and Machines, Robots, Robotics, National Science Foundation (NSF), Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering, MIT-IBM Watson AI Lab

Source

IDSS hosts inaugural Learning for Dynamics and Control conference

Over the next decade, the biggest generators of data are expected to be devices that sense and control the physical world. From autonomy to robotics to smart cities, this data explosion — paired with advances in machine learning — creates new possibilities for designing and optimizing technological systems that use their own real-time generated data to make decisions.

To address the many scientific questions and application challenges posed by the real-time physical processes of these “dynamical” systems, researchers from MIT and elsewhere organized a new annual conference called Learning for Dynamics and Control. Dubbed L4DC, the inaugural conference was hosted at MIT by the Institute for Data, Systems, and Society (IDSS).

As excitement has built around machine learning and autonomy, there is an increasing need to consider both the data that physical systems produce and the feedback these systems receive, especially from their interactions with humans. That need extends into the domains of data science, control theory, decision theory, and optimization.

“We decided to launch L4DC because we felt the need to bring together the communities of machine learning, robotics, and systems and control theory,” said IDSS Associate Director Ali Jadbabaie, a conference co-organizer and professor in IDSS, the Department of Civil and Environmental Engineering (CEE), and the Laboratory for Information and Decision Systems (LIDS).

“The goal was to bring together these researchers because they all converged on a very similar set of research problems and challenges,” added co-organizer Ben Recht, of the University of California at Berkeley, in opening remarks.

Over the two days of the conference, talks covered core topics: the foundations of learning dynamics models, data-driven optimization for dynamical models, optimization for machine learning, and reinforcement learning for physical, dynamical, and control systems. Talks also featured examples of applications in fields like robotics, autonomy, and transportation systems.

“How could self-driving cars change urban systems?” asked Cathy Wu, an assistant professor in CEE, IDSS, and LIDS, in a talk that investigated how transportation and urban systems may change over the next few decades. Only a small percentage of autonomous vehicles are needed to significantly affect traffic systems, Wu argued, which will in turn affect other urban systems. “Distribution learning provides us with an understanding for integrating autonomy into urban systems,” said Wu.

Claire Tomlin of UC Berkeley presented on integrating learning into control in the context of safety in robotics. Tomlin’s team integrates learning mechanisms that help robots adapt to sudden changes, such as a gust of wind, an unexpected human behavior, or an unknown environment. “We’ve been working on a number of mechanisms for doing this computation in real time,” Tomlin said.

Pablo Parrilo, a professor in the Department of Electrical Engineering and Computer Science and a faculty member of both IDSS and LIDS, was also a conference organizer, along with George Pappas of the University of Pennsylvania and Melanie Zeilinger of ETH Zurich.

L4DC was sponsored by the National Science Foundation, the U.S. Air Force Office of Scientific Research, the Office of Naval Research, and the Army Research Office, a part of the Combat Capabilities Development Command Army Research Laboratory (CCDC ARL).

“The cutting-edge combination of classical control with recent advances in artificial intelligence and machine learning will have significant and broad potential impact on Army multi-domain operations, and include a variety of systems that will incorporate autonomy, decision-making and reasoning, networking, and human-machine collaboration,” said Brian Sadler, senior scientist for intelligent systems, U.S. Army CCDC ARL.

Organizers plan to make L4DC a recurring conference, hosted at different institutions. “Everyone we invited to speak accepted,” Jadbabaie said. “The largest room in Stata was packed until the end of the conference. We take this as a testament to the growing interest in this area, and hope to grow and expand the conference further in the coming years.”


Topics: Institute for Data, Systems, and Society, Civil and environmental engineering, Laboratory for Information and Decision Systems (LIDS), Electrical Engineering & Computer Science (eecs), School of Engineering, Machine learning, Special events and guest speakers, Data, Research, Robotics, Transportation, Autonomous vehicles

Source

Professor Emerita Catherine Chvany, Slavic scholar, dies at 91

Professor Emerita Catherine Vakar Chvany, a renowned Slavic linguist and literature scholar who played a pivotal role in advancing the study of Russian language and literature in MIT’s Foreign Languages and Literatures Section (now Global Studies and Languages), died on Oct. 19 in Watertown, Massachusetts. She was 91.

Chvany served on the MIT faculty for 26 years before her retirement in 1993.

Global Studies and Languages head Emma Teng noted that MIT’s thriving Russian studies curriculum today is a legacy of Chvany’s foundational work in the department. And Maria Khotimsky, senior lecturer in Russian, said, “Several generations of Slavists are grateful for Professor Chvany’s inspiring mentorship, while her works in Slavic poetics and linguistics are renowned in the U.S. and internationally.”

A prolific and influential scholar

A prolific scholar, Chvany wrote “On the Syntax of Be-Sentences in Russian” (Slavica Publishers, 1975); and co-edited four volumes: “New Studies in Russian Language and Literature” (Slavica, 1987); “Morphosyntax in Slavic” (Slavica, 1980); “Slavic Transformational Syntax” (University of Michigan, 1974); and “Studies in Poetics: Commemorative Volume: Krystyna Pomorska” (Slavica Publishers, 1995).

In 1996, linguists Olga Yokoyama and Emily Klenin published an edited collection of her work, “Selected Essays of Catherine V. Chvany” (Slavica).

In her articles, Chvany took up a range of issues in linguistics, including not only variations on the verb “to be” but also hierarchies of situations in syntax of agents and subjects; definiteness in Bulgarian, English, and Russian; other issues of lexical storage and transitivity; hierarchies in Russian cases; and issues of markedness, including an important overview, “The Evolution of the Concept of Markedness from the Prague Circle to Generative Grammar.”

In literature she took up language issues in the classic “Tale of Igor’s Campaign,” Teffi’s poems, Nikolai Leskov’s short stories, and a novella by Aleksandr Solzhenitsyn.

From Paris to Cambridge 

“Catherine Chvany was always so present that it is hard to think of her as gone,” said MIT Literature Professor Ruth Perry. “She had strong opinions and wasn’t afraid to speak out about them.”

Chvany was born on April 2, 1927, in Paris, France, to émigré Russian parents. During World War II, she and her younger sister Anna were sent first to the Pyrenees and then to the United States with assistance from a courageous young Unitarian minister’s wife, Martha Sharp.

Fluent in Russian and French, Chvany quickly mastered English. She graduated from the Girls’ Latin School in Boston in 1946 and attended Radcliffe College from 1946 to 1948. She left school to marry Lawrence Chvany and raise three children, Deborah, Barbara, and Michael.

In 1961-63, she returned to school and completed her undergraduate degree in linguistics at Harvard University. She received her PhD in Slavic languages and literatures from Harvard in 1970 and began her career as an instructor of Russian language at Wellesley College in 1966.

She joined the faculty at MIT in 1967 and became an assistant professor in 1971, an associate professor in 1974, and a full professor in 1983.

Warmth, generosity, and friendship

Historian Philip Khoury, who was dean of the School of Humanities, Arts and Social Sciences during the latter years of Chvany’s time at MIT, remembered her warmly as “a wonderful colleague who loved engaging with me on language learning and how the MIT Russian language studies program worked.”

Elizabeth Wood, a professor of Russian history, recalled the warm welcome that Chvany gave her when she came to MIT in 1990: “She always loved to stop and talk at the Tuesday faculty lunches, sharing stories of her life and her love of Slavic languages.”

Chvany’s influence was broad and longstanding, in part as a result of her professional affiliations. Chvany served on the advisory or editorial boards of “Slavic and East European Journal,” “Russian Language Journal,” “Journal of Slavic Linguistics,” “Peirce Seminar Papers,” “Essays in Poetics” (United Kingdom), and “Supostavitelno ezikoznanie” (Bulgaria).

Emily Klenin, an emerita professor of Slavic languages and literature at the University of California at Los Angeles, noted that Chvany had a practice of expressing gratitude to those whom she mentored. She connected that practice to Chvany’s experience of being aided during WWII. “Her warm and open attitude toward life was reflected in her continuing interest and friendship for the young people she mentored, even when, as most eventually did, they went on to lives involving completely different academic careers or even no academic career at all,” Klenin said.

Memorial reception at MIT on November 18

Chvany is survived by her children, Deborah Gyapong and her husband Tony of Ottawa, Canada; Barbara Chvany and her husband Ken Silbert of Orinda, California; and Michael Chvany and his wife Sally of Arlington, Massachusetts; her foster-brother, William Atkinson of Cambridge, Massachusetts; six grandchildren; and nine great-grandchildren.

A memorial reception will be held on Sunday, Nov. 18, from 1:30 to 4:00 p.m. in the Samberg Conference Center, 7th floor. Donations in Chvany’s name may be made to the Unitarian Universalist Association. Visit Friends of the UUA for online donations. Please RSVP to Michael Chvany, Mike@BridgeStreetProductions.com, if you plan to attend the memorial.


Source

Research reveals exotic quantum states in double-layer graphene

Findings establish a potential new platform for future quantum computers

Image caption: a composite fermion consisting of one electron and two different types of magnetic flux, a new type of quasiparticle discovered in the graphene double-layer structure.

July 8, 2019

NSF-funded research by scientists at Brown and Columbia Universities has demonstrated the existence of previously unknown states of matter in double-layer stacks of graphene, a two-dimensional nanomaterial. The new states, manifestations of the fractional quantum Hall effect, arise from the complex interactions of electrons both within and across the graphene layers.

“The findings show that stacking 2D materials together in close proximity generates entirely new physics,” said Jia Li, a physicist at Brown University. “In terms of materials engineering, this work shows that these layered systems could be viable in creating new types of electronic devices that take advantage of these new quantum Hall states.”

The research is published in the journal Nature Physics.

The Hall effect emerges when a magnetic field is applied to a conducting material in a direction perpendicular to a current flow. The magnetic field causes the current to deflect, creating a voltage in the transverse direction, called the Hall voltage. 
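
For reference, the classical Hall voltage obeys a textbook relation, and in the quantum Hall regime the Hall conductance locks onto discrete plateaus; neither formula appears in the release, but they frame the result:

```latex
V_H = \frac{I B}{n q t}, \qquad \sigma_{xy} = \nu\,\frac{e^2}{h}
```

Here I is the current, B the magnetic field, n the carrier density, q the carrier charge and t the conductor thickness; integer values of the filling factor ν give the ordinary quantum Hall effect, while fractional values of ν signal strongly correlated states such as those reported here.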

Importantly, researchers say, several of these new quantum Hall states may be useful in making fault-tolerant quantum computers.

“The full implications of this research are yet to be understood,” said Germano Iannacchione, a program director in NSF’s Division of Materials Research, which funded the project. “However, it’s not hard at all to foresee significant advances based on these discoveries emerging in traditional technologies such as semiconductors and sensors.”

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

The parallel ecomorph evolution of scorpionflies: The evidence is in the DNA

With small cases of ethanol to preserve tissue samples for total genomic DNA analysis, a trio of researchers covered much ground in the mountains of Japan and Korea to elucidate the evolution of the scorpionfly. The rugged scientists set out to use molecular phylogenetic analysis to show that the “alpine” and “general” types of scorpionfly must be different species. After all, the alpine type exhibits shorter wings than the general type, and alpine-type females also have very dark, distinct markings on their wings.

However, what they found in the DNA surprised them.

Casually called the scorpionfly because the males have abdomens that curve upward, shaped like a scorpion’s stinger, Panorpodes paradoxus does not sting. Tomoya Suzuki, a postdoctoral research fellow in the Faculty of Science at Shinshu University; his father Nobuo Suzuki, an expert on scorpionflies and a professor at the Japan Women’s College of Physical Education; and Koji Tojo, a professor at Shinshu University’s Institute for Mountain Science, the only such institute in Japan, demonstrated the parallel evolution of Japanese scorpionflies through Bayesian simulations and phylogenetic analyses.

Insects are among the most diverse organisms on Earth, and many fall captive to their elegant beauty, as did the scientists dedicated to their study. Insects are very adaptive to their habitat environments, making them excellent subjects for studying ecology, evolution and morphology. Phylogenetics is the study of evolutionary history, often visualized in the form of ancestral trees. The team studied the Japanese scorpionfly by collecting samples of Panorpodes paradoxus throughout Japan and parts of the Korean Peninsula, searching for specimens at altitudes of up to 3,033 meters.

In a previous study, Professor Tojo correlated plate-tectonic events in Japan with the DNA of insects from a relatively small area of Nagano prefecture. DNA testing showed that the different lineages align with how Japan’s land formations occurred, with some insect types having backgrounds more similar to those on the Asian continent.

The Japanese archipelago used to be part of mainland East Asia. About 20 million years ago, the movement of tectonic plates caused the Japanese land mass to tear away from the continent. By around 15 million years ago, the Japanese islands were completely detached and isolated from the mainland. Ancestral lineages of the Japanese Panorpodes therefore diverged from the continental types around this time. Japan has two major phenotypes of scorpionflies: the “alpine” type, which lives at higher altitudes and has shorter wings, and its “general” counterpart. It is hypothesized that the shorter wings are better suited to the colder climate of higher elevations. The alpine and general types can also be observed in the wild during slightly different seasonal periods.

Through Bayesian simulations, which estimate quantities probabilistically, the team estimated the divergence times of the genealogical lineages. Simulations were run for over 100 million generations. The divergence of the continental and Japanese Panorpodes was estimated at 8.44 million years ago. The formation of the mountains of the Japanese archipelago, which began around 5 million years ago, is reflected in the estimated evolution of the alpine type of P. paradoxus. Another estimated divergence coincided with periods of climatic cooling. Cool weather is a tough environment for insects and acts as a genetic selection process. The cool glacial periods encouraged local adaptation of the scorpionflies in the northeastern part of the island of Honshu.
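
For intuition only: the simplest relation behind such dating is the strict molecular clock below, where d is the genetic distance between two lineages and μ the substitution rate, the factor 2 reflecting that both lineages accumulate changes after the split. The numeric values are illustrative, not taken from the study, whose Bayesian analysis is far more elaborate:

```latex
T = \frac{d}{2\mu}, \qquad \text{e.g.}\;\;
T = \frac{0.169\ \text{substitutions/site}}{2\times(1.0\times 10^{-8}\ \text{substitutions/site/year})}
  \approx 8.4\ \text{million years}
```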

With DNA tests of the various scorpionfly specimens, the group was able to show how P. paradoxus “ecomorphed”, evolving forms and structural features adapted to its ecology. This parallel evolution started about 5 million years ago, when the mountain ranges of central Japan formed. No gene flow was detected between samples collected on different mountains, which is evidence of parallel evolution. Interestingly, however, gene flow between the general and alpine types might be occurring, one indication that they are not different species.

In conclusion, the alpine and general types are not the separate species the researchers had suspected; rather, the alpine scorpionfly ecomorphed, which explains why it looks different. Using a next-generation sequencer, the team hopes to pinpoint exactly when and how the two forms diverged. What genetic basis underlies the alpine ecomorph? What genes emerged to facilitate the shortening of the wings? To study the genetic basis of the ecomorph, Dr. Suzuki hopes to breed scorpionflies and compare gene expression between the alpine and general types. Breeding is necessary for next-generation sequencing studies, but what the larvae feed on and other rearing conditions remain a mystery. The trio hope to unlock each of these steps to further identify the unknown aspects of the Japanese scorpionfly, and to continue cutting-edge research at Shinshu University’s Institute for Mountain Science, which is blessed to be surrounded by the Alps in the heart of Japan.

Story Source:

Materials provided by Shinshu University. Note: Content may be edited for style and length.

Source

For a Strong Digital Europe – 10 September 2019, Brussels, Belgium

The 4th edition of the EIT Digital annual conference will gather more than 1,000 digital experts & opinion leaders on September 10 at The Egg in Brussels.
Inspiring speakers from politics, industry, research and academia will share their views on Europe’s challenges and opportunities in the global digital market; ground-breaking digital deep-tech innovations and live demos will be on display at the Innovators’ Village; and participants will find new collaboration and business partners in more than 500 matchmaking sessions.

Source

Research Headlines – Versatile nanoparticles take aim at complex bone diseases

Multifunctional nanoparticles being developed by EU-funded researchers are set to revolutionise treatments for complex bone diseases, enabling novel therapies for hundreds of millions of people worldwide suffering from bone cancer, bacterial bone infections and osteoporosis.

Diseases such as bone cancer and osteoporosis are frequently complex, with two or more disorders occurring simultaneously. To address the associated treatment challenges, the EU-funded VERDI project is developing an innovative multifunctional nanoplatform that would be capable of healing a number of currently hard-to-treat bone diseases using a unique, versatile and scalable system.

Led by Maria Vallet-Regi at the Universidad Complutense de Madrid in Spain, the project marks a significant milestone on the path to effectively deploying nanotechnology-based treatments in healthcare.

‘The idea is to create a toolbox in order to be able to select the appropriate building blocks of therapeutic agents and targeting mechanisms according to the disease being treated. This will enable us to customise nanoparticles specifically for each bone pathology, allowing the creation of a library of nanomedicines suitable for clinical trials and eventually clinical use,’ Vallet-Regi says.

Activated by a doctor

Using the toolbox, doctors would be able to deploy nanoparticles of mesoporous silica, a robust and versatile nanomaterial, as customisable carriers for treatments, such as antibiotics to treat infections or proteolytic enzymes to break up cancer cells.

These nanoparticles would then be injected into the patient, find their way to the afflicted area and be activated – providing targeted, effective therapy with lower toxicity and fewer side effects for people suffering from bone cancers, bacterial infections or bone density loss caused by osteoporosis.

Crucially, the nanoparticles carrying therapeutic agents must reach their targets, which requires the development and incorporation of compounds capable of targeting specific cells and penetrating cell walls or traversing the biofilms created by bacteria. The nanoparticles can then be activated by a doctor to release their load of therapeutic agents directly at the site of the diseased bone using external stimuli such as ultrasound, ultraviolet light or magnetic signals.

To treat osteoporosis, a degenerative bone disease estimated to affect 200 million people worldwide, the VERDI team is planning to use the nanoparticles to deliver molecules capable of silencing certain genes associated with the disease in order to limit bone loss and promote bone formation. The nanoparticles will be designed with unique masking properties to enable them to penetrate the cell membrane and reach the cytoplasm inside. Early tests of a similar nanosystem in animal models in a separate project led by Vallet-Regi have already had highly promising results, demonstrating the accumulation of therapeutic nanoparticles at the site of neuroblastomas, a cancer that affects nerve tissue.

From research to healthcare

‘The challenges we are addressing are immensely varied, since we are tackling three different bone pathologies each with its own peculiarities,’ Vallet-Regi explains. ‘For example, in bone cancer we find heterogeneous tumour cells difficult to treat with only one drug; in bone infection, bacteria develop a biofilm that impedes antibiotics from reaching their target; and in osteoporosis we must deal with accelerated resorption, the breakdown of bone tissue.’

The researchers have filed two patents for their technology so far and are preparing to conduct clinical studies of the nanoplatform over the coming years, aiming for the eventual commercialisation of the system and its deployment in clinical therapy.

‘The development of a single technology for the treatment of three different but frequently associated diseases, particularly among elderly people, will favour the industrial scale-up process, promoting the transition of nanotechnology-based treatments from research to healthcare,’ Vallet-Regi says.

Project details

  • Project acronym: VERDI
  • Participants: Spain (Coordinator)
  • Project N°: 694160
  • Total costs: € 2 500 000
  • EU contribution: € 2 500 000
  • Duration: October 2016 to September 2021

Source

Pathways to a low-carbon China

Fulfilling the ultimate goal of the 2015 Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius, if not 1.5 C — will be impossible without dramatic action from the world’s largest emitter of greenhouse gases, China. Toward that end, China began in 2017 developing an emissions trading scheme (ETS), a national carbon dioxide market designed to enable the country to meet its initial Paris pledge with the greatest efficiency and at the lowest possible cost. China’s pledge, or nationally determined contribution (NDC), is to reduce its CO2 intensity of gross domestic product (emissions produced per unit of economic activity) by 60 to 65 percent in 2030 relative to 2005, and to peak CO2 emissions around 2030.
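
Note that an intensity pledge constrains emissions per unit of GDP, not absolute emissions, which is why it is paired with a commitment to peak emissions. A quick illustration, with an assumed GDP trajectory that is not from the article:

```python
# Illustrative arithmetic (the GDP growth factor is a hypothetical
# placeholder, not a figure from the article or the study).
intensity_2005 = 1.0         # normalised tonnes CO2 per unit of GDP in 2005
reduction = 0.65             # 65 % intensity cut, upper end of China's NDC
gdp_growth_factor = 4.0      # assumed 2005 -> 2030 GDP multiple

intensity_2030 = intensity_2005 * (1 - reduction)
emissions_ratio = intensity_2030 * gdp_growth_factor
print(f"2030 emissions = {emissions_ratio:.2f}x the 2005 level")
# -> 1.40x: absolute emissions can still rise 40 % under the intensity
#    pledge alone, hence the separate commitment to peak around 2030.
```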

When it’s rolled out, China’s carbon market will initially cover the electric power sector (which currently produces more than 3 billion tons of CO2) and likely set CO2 emissions intensity targets (e.g., grams of CO2 per kilowatt hour) to ensure that its short-term NDC is fulfilled. But to help the world achieve the long-term 2 C and 1.5 C Paris goals, China will need to continually decrease these targets over the course of the century.

A new study of China’s long-term power generation mix under the nation’s ETS projects that until 2065, renewable energy sources will likely expand to meet these targets; after that, carbon capture and storage (CCS) could be deployed to meet the more stringent targets that follow. Led by researchers at the MIT Joint Program on the Science and Policy of Global Change, the study appears in the journal Energy Economics.

“This research provides insight into the level of carbon prices and mix of generation technologies needed for China to meet different CO2 intensity targets for the electric power sector,” says Jennifer Morris, lead author of the study and a research scientist at the MIT Joint Program. “We find that coal CCS has the potential to play an important role in the second half of the century, as part of a portfolio that also includes renewables and possibly nuclear power.”

To evaluate the impacts of multiple potential ETS pathways — different starting carbon prices and rates of increase — on the deployment of CCS technology, the researchers enhanced the MIT Economic Projection and Policy Analysis (EPPA) model to include the joint program’s latest assessments of the costs of low-carbon power generation technologies in China. Among the technologies included in the model are natural gas, nuclear, wind, solar, coal with CCS, and natural gas with CCS. Assuming that power generation prices are the same across the country for any given technology, the researchers identify different ETS pathways in which CCS could play a key role in lowering the emissions intensity of China’s power sector, particularly for targets consistent with achieving the long-term 2 C and 1.5 C Paris goals by 2100.

The study projects a two-stage transition — first to renewables, and then to coal CCS. The transition from renewables to CCS is driven by two factors. First, at higher levels of penetration, renewables incur increasing costs related to accommodating the intermittency challenges posed by wind and solar. This paves the way for coal CCS. Second, as experience with building and operating CCS technology is gained, CCS costs decrease, allowing the technology to be rapidly deployed at scale after 2065 and replace renewables as the primary power generation technology.

The study shows that carbon prices of $35-40 per ton of CO2 make CCS technologies coupled with coal-based generation cost-competitive against other modes of generation, and that carbon prices higher than $100 per ton of CO2 allow for a significant expansion of CCS.
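
The mechanics behind such thresholds are straightforward: a carbon price adds each technology’s emissions intensity times the price to its generation cost, reordering the cost ranking. The sketch below uses hypothetical cost and intensity figures chosen only so that the crossover lands in the reported range; the study’s actual numbers come from the EPPA model:

```python
# Hypothetical figures for illustration only (not from the study).
techs = {
    # name: (generation cost in $/MWh, emissions intensity in tCO2/MWh)
    "coal":       (60.0, 0.9),
    "coal + CCS": (90.0, 0.1),
}

def cost_with_carbon(price_per_tonne):
    """Generation cost including the carbon charge, in $/MWh."""
    return {name: cost + intensity * price_per_tonne
            for name, (cost, intensity) in techs.items()}

for price in (0, 40, 100):
    print(price, sorted(cost_with_carbon(price).items(), key=lambda kv: kv[1]))
# With these inputs the crossover sits at $37.50/tCO2: unabated coal wins
# at $0/t, while coal + CCS is cheaper at $40/t and far cheaper at $100/t.
```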

“Our study is at the aggregate level of the country,” says Sergey Paltsev, deputy director of the joint program. “We recognize that the cost of electricity varies greatly from province to province in China, and hope to include interactions between provinces in our future modeling to provide deeper understanding of regional differences. At the same time, our current results provide useful insights to decision-makers in designing more substantial emissions mitigation pathways.”


Topics: Joint Program on the Science and Policy of Global Change, MIT Energy Initiative, Climate change, Alternative energy, Energy, Environment, Economics, Greenhouse gases, Carbon dioxide, Research, Policy, Emissions, China, Technology and society

Source

Times Higher Education ranks MIT No. 1 in business and economics, No. 2 in arts and humanities

MIT has taken the top spot in the Business and Economics subject category in the 2019 Times Higher Education World University Rankings and, for the second year in a row, the No. 2 spot worldwide for Arts and Humanities.

The Times Higher Education World University Rankings is an annual publication of university rankings by Times Higher Education, a leading British education magazine. The rankings use a set of 13 rigorous performance indicators to evaluate schools both overall and within individual fields. Criteria include teaching and learning environment, research volume and influence, and international outlook.

Business and Economics

The No. 1 ranking for Business and Economics is based on an evaluation of both the MIT Department of Economics — housed in the MIT School of Humanities, Arts, and Social Sciences — and of the MIT Sloan School of Management.

“We are always delighted when the high quality of work going on in our school and across MIT is recognized, and warmly congratulate our colleagues in MIT Sloan with whom we share this honor,” said Melissa Nobles, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences (SHASS).

The Business and Economics ranking evaluated 585 universities for their excellence in business, management, accounting, finance, economics, and econometrics subjects. In this category, MIT was followed by Stanford University and Oxford University.

“Being recognized as first in business and management is gratifying and we are thrilled to share the honors with our colleagues in the MIT Department of Economics and MIT SHASS,” said David Schmittlein, dean of MIT Sloan.

MIT has long been a powerhouse in economics. For over a century, the Department of Economics at MIT has played a leading role in economics education, research, and public service and the department’s faculty have won a total of nine Nobel Prizes over the years. MIT Sloan faculty have also won two Nobels, and the school is known as a driving force behind MIT’s entrepreneurial ecosystem: Companies started by MIT alumni have created millions of jobs and generate nearly $2 trillion a year in revenue.

Arts and Humanities

The Arts and Humanities ranking evaluated 506 universities that lead in art, performing arts, design, languages, literature, linguistics, history, philosophy, theology, architecture, and archaeology subjects. MIT was rated just below Stanford and above Harvard University in this category. MIT’s high ranking reflects the strength of both the humanities disciplines and performing arts located in MIT SHASS and the design fields and humanistic work located in MIT’s School of Architecture and Planning (SA+P).

At MIT, outstanding humanities and arts programs in SHASS — including literature; history; music and theater arts; linguistics; philosophy; comparative media studies; writing; languages; science, technology and society; and women’s and gender studies — sit alongside equally strong initiatives within SA+P in the arts; architecture; design; urbanism; and history, theory, and criticism. SA+P is also home to the Media Lab, which focuses on unconventional research in technology, media, science, art, and design.

“The recognition from Times Higher Education confirms the importance of creativity and human values in the advancement of science and technology,” said Hashim Sarkis, dean of SA+P. “It also rewards MIT’s longstanding commitment to ‘The Arts’ — words that are carved in the Lobby 7 dome signifying one of the main areas for the application of technology.”

Receiving awards in multiple categories and in categories that span multiple schools at MIT is a recognition of the success MIT has had in fostering cross-disciplinary thinking, said Dean Nobles.

“It’s a testament to the strength of MIT’s model that these areas of scholarship and pedagogy are deeply seeded in multiple administrative areas,” Nobles said. “At MIT, we know that solving challenging problems requires the combined insight and knowledge from many fields. The world’s complex issues are not only scientific and technological problems; they are as much human and ethical problems.”


Topics: Awards, honors and fellowships, Arts, Architecture, Business and management, Comparative Media Studies/Writing, Economics, Global Studies and Languages, Humanities, History, Literature, Linguistics, Management, Music, Philosophy, Theater, Urban studies and planning, Rankings, Media Lab, School of Architecture and Planning, Sloan School of Management, School of Humanities Arts and Social Sciences

Source

Artificial intelligence controls robotic arm to pack boxes and cut costs

Researchers used software and algorithms to automate the packing of boxes, a critical part of warehouse efficiency

Image caption: a robotic arm tightly packs items into a box for shipment.

July 5, 2019

An NSF-funded team of computer scientists at Rutgers University used artificial intelligence to control a robotic arm, thereby providing a more efficient way to pack boxes that will save businesses time and money.

Deploying robots to perform logistics, retail and warehouse tasks is a growing trend in business, but tightly packing products picked from an unorganized pile remains largely a manual task, even though it is critical to warehouse efficiency. Automating such tasks is important for companies’ competitiveness and allows people to focus on less menial and physically taxing work.

For the study, the team used a robotic arm to move objects from a bin into a small shipping box and tightly arrange them. The researchers developed software and algorithms for the robotic arm and used visual data and a simple suction cup that doubles as a finger for pushing objects. The resulting system can topple objects to get a desirable surface for grabbing them and use sensor data to pull objects toward a targeted area and push objects together. It is designed to overcome errors during packing.
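
The Rutgers system itself combines vision, suction grasping and error recovery, and its algorithms are not reproduced in this summary. As a much simpler illustration of the underlying placement problem, here is a classic greedy “shelf” packing heuristic:

```python
# A classic greedy "shelf" packing heuristic: an illustrative analogue
# of tight box packing, not the Rutgers team's algorithm.
def pack(items, box_w, box_h):
    """Place (width, height) rectangles left to right in shelves."""
    placements, x, y, shelf_h = [], 0, 0, 0
    for w, h in sorted(items, key=lambda it: it[1], reverse=True):
        if x + w > box_w:          # current shelf full: open a new one
            x, y = 0, y + shelf_h
            shelf_h = 0
        if y + h > box_h or w > box_w:
            raise ValueError("item does not fit")
        placements.append((x, y, w, h))   # flush against its neighbours
        x += w
        shelf_h = max(shelf_h, h)
    return placements

print(pack([(2, 3), (4, 2), (3, 3), (2, 2)], box_w=6, box_h=6))
```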

The scientists’ peer-reviewed study was published recently at the IEEE International Conference on Robotics and Automation, where it was a finalist for the Best Paper Award in Automation.

The research was supported by the Robust Intelligence program in NSF’s Division of Information and Intelligent Systems. RI encompasses the broad spectrum of foundational computational research needed to understand and enable intelligent systems in complex, realistic contexts.

—  NSF Public Affairs, (703) 292-8070 media@nsf.gov

Source

AI-designed heat pumps consume less energy

Researchers at EPFL have developed a method that uses artificial intelligence to design next-generation heat-pump compressors. Their method can cut the pumps’ power requirement by around 25%.

In Switzerland, 50-60% of new homes are equipped with heat pumps. These systems draw in thermal energy from the surrounding environment — such as from the ground, air, or a nearby lake or river — and turn it into heat for buildings.

While today’s heat pumps generally work well and are environmentally friendly, they still have substantial room for improvement. For example, by using microturbocompressors instead of conventional compression systems, engineers can reduce heat pumps’ power requirement by 20-25% as well as their impact on the environment. That’s because turbocompressors are more efficient and ten times smaller than piston devices. But incorporating these mini components into heat pumps’ designs is not easy; complications arise from their tiny diameters (<20 mm) and fast rotation speeds (>200,000 rpm).

At EPFL’s Laboratory for Applied Mechanical Design on the Microcity campus, a team of researchers led by Jürg Schiffmann has developed a method that makes it easier and faster to add turbocompressors to heat pumps. Using a machine-learning process called symbolic regression, the researchers came up with simple equations for quickly calculating the optimal dimensions of a turbocompressor for a given heat pump. Their research just won the Best Paper Award at the 2019 Turbo Expo Conference held by the American Society of Mechanical Engineers.

1,500 times faster

The researchers’ method drastically simplifies the first step in designing turbocompressors. This step — which involves roughly calculating the ideal size and rotation speed for the desired heat pump — is extremely important, because a good initial estimate can considerably shorten the overall design time. Until now, engineers have been using design charts to size their turbocompressors — but these charts become increasingly inaccurate the smaller the equipment, and they have not kept pace with the latest technology.

That’s why two EPFL PhD students — Violette Mounier and Cyril Picard — worked on developing an alternative. They fed the results of 500,000 simulations into machine-learning algorithms and generated equations that replicate the charts but with several advantages: they are reliable even at small turbocompressor sizes; they are just as detailed as more complicated simulations; and they are 1,500 times faster. The researchers’ method also lets engineers skip some of the steps in conventional design processes, paving the way to easier implementation and more widespread use of microturbocompressors in heat pumps.
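
The article does not name the team’s software, but the workflow can be sketched with the open-source gplearn library: fit a symbolic regressor to simulation results and read off a compact closed-form equation. Everything below (the data, the hidden relation, the parameters) is an assumption for illustration:

```python
# Sketch of symbolic regression with gplearn; synthetic data stands in
# for the 500,000 turbocompressor simulations (all values assumed).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(1000, 2))  # e.g. normalised load and speed
y = np.sqrt(X[:, 0]) / X[:, 1]             # hidden "design chart" relation

est = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "sqrt"),
    parsimony_coefficient=0.01,   # favour short, chart-like equations
    random_state=0,
)
est.fit(X, y)
print(est._program)  # ideally recovers div(sqrt(X0), X1)
```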

The benefits of microturbocompressors

Conventional heat pumps use pistons to compress a fluid, called a refrigerant, and drive a vapor-compression cycle. The pistons need to be well-oiled to function properly, but the oil can stick to the heat exchanger walls and impairs the heat transfer process. However, microturbocompressors — which have diameters of just a few dozen millimeters — can run without oil; they rotate on gas bearings at speeds of hundreds of thousands of rpm. The rotating movement and gas layers between the components mean there is almost no friction. As a result, these miniature systems can boost heat pumps’ heat transfer coefficients by 20-30%.

This microturbocompressor technology has been in development for several years and is now mature. “We have already been contacted by several companies that are interested in using our method,” says Schiffmann. Thanks to the researchers’ work, companies will have an easier time incorporating the technology into their heat pumps.

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Note: Content may be edited for style and length.

Source

100% Renewable Heating & Cooling for a Sustainable Future – 28 October 2019, Helsinki, Finland

Venue: Original Sokos Hotel Presidentti. The European Technology & Innovation Platform on Renewable Heating and Cooling (RHC ETIP) is organising 100% RHC for a Sustainable Future, an opportunity for RHC experts to learn, network and present their innovative RHC projects.

Why join?
The RHC ETIP gathers industry representatives, researchers and policy makers to network and exchange expertise on innovative RHC projects, and on the challenges and opportunities for RHC to thrive in the long term. 100% RHC for a Sustainable Future includes:
• The 100% RHC Workshop
• A call for projects
• The 100% RHC Conference
• An evening reception hosted by the Mayor of Helsinki at the City Hall

Source

Research Headlines – Research to help battle breast cancer

The ability to control key forces that drive biological functioning would be a boon for a number of medical fields. With a focus on breast cancer, an EU-funded project is bringing together different research communities to work towards understanding and controlling cellular mechanics.

Mechanical forces are created inside the body through the action of specific molecular bonds. Being able to control these would enable a giant leap forward in fields such as oncology, regenerative medicine and biomaterial design.

Tapping the potential of cellular mechanics requires the development and integration of a number of disparate technologies. The EU-funded MECHANO-CONTROL project is addressing this challenge, assembling an interdisciplinary team to design and carry out pertinent research. The scientists involved are specifically targeting new ways to impair or abrogate breast tumour progression.

The project’s ultimate aim is to understand and learn to control the full range of cellular mechanics. To do so, scientists need to find new ways to measure and manipulate complex cellular processes – from the nanometre to the metre scale.

At all stages, the MECHANO-CONTROL team is integrating experimental data with multi-scale computational modelling. With this approach, the aim is to develop specific therapeutic approaches beyond the current paradigm in breast cancer treatment.

Going further, the general principles delineated by MECHANO-CONTROL could also have high applicability in other areas of oncology, as well as regenerative medicine and biomaterials. This has the potential to bring new treatments and relief from suffering for many.

Taking it scale by scale

Working at the nanometric, molecular level, MECHANO-CONTROL researchers are developing cellular microenvironments, enabled by substances that mimic naturally occurring cell components.

On the cell-to-organ scale, the team is combining controlled microenvironments and interfering strategies with the development of techniques to measure and control mechanical forces and adhesion in cells and tissues, and to evaluate their biological response.

At the organism scale, researchers are establishing how cellular mechanics can be controlled.

Project details

  • Project acronym: MECHANO-CONTROL
  • Participants: Spain (Coordinator), Germany, UK, Netherlands
  • Project N°: 731957
  • Total costs: € 7 134 928
  • EU contribution: € 7 134 928
  • Duration: January 2017 to December 2021

Source