How does your productivity stack up?

You know that person who always seems to be ahead of their deadlines, despite being swamped? Do you look at them with envy and wonder how they do it?

“Regardless of location, industry, or occupation, productivity is a challenge faced by every professional,” says Robert Pozen, senior lecturer at the MIT Sloan School of Management.

As part of his ongoing research and aided by MIT undergraduate Kevin Downey, Pozen surveyed 20,000 self-selected individuals in management from six continents to learn why some people are more productive than others.

The survey tool, dubbed the Pozen Productivity Rating, consists of 21 questions divided into seven categories: planning your schedule, developing daily routines, coping with your messages, getting a lot done, improving your communication skills, running effective meetings, and delegating to others. These particular habits and skills are core to Pozen’s MIT Sloan Executive Education program, Maximizing Your Productivity: How to Become an Efficient and Effective Executive, and his bestselling book, “Extreme Productivity: Boost Your Results, Reduce Your Hours.”

After cleaning up the data, Pozen and Downey obtained a complete set of answers from 19,957 respondents. Roughly half were residents of North America; another 21 percent were residents of Europe, and 19 percent were residents of Asia. The remaining 10 percent included residents of Australia, South America, and Africa.

They identified the groups of people with the highest productivity ratings and found that professionals with the highest scores tended to do well on the same clusters of habits:

  • They planned their work based on their top priorities and then acted with a definite objective;
  • they developed effective techniques for managing a high volume of information and tasks; and
  • they understood the needs of their colleagues, enabling short meetings, responsive communications, and clear directions.

The results were also interesting when parsed by the demographics of the survey participants.

Geographically, the average productivity score for respondents from North America was in the middle of the pack, even though Americans tend to work longer hours. In fact, the North American score was significantly lower than the average productivity scores for respondents from Europe, Asia, and Australia.

Age and seniority were highly correlated with personal productivity — older and more senior professionals recorded higher scores than younger and more junior colleagues. Habits of these more senior respondents included developing routines for low-value activities, managing message flow, running effective meetings, and delegating tasks to others.

While the overall productivity scores of male and female professionals were almost the same, there were some noteworthy differences in how women and men managed to be so productive. For example, women tended to score particularly high when it came to running effective meetings — keeping meetings to less than 90 minutes and finishing with an agreement of next steps. By contrast, men did particularly well at coping with high message volume — not looking at their emails too frequently and skipping over the messages of low value.

Coping with your daily flood of messages

While it’s clear that the ability to deal with inbox overload is key to productivity, how that’s accomplished may be less clear to many of us who shudder at our continuous backlog of emails.

“We all have so much small stuff, like email, that overwhelms us, and we wind up dedicating precious time to it,” says Pozen. “Most of us look at email every three to five minutes. Instead, look every hour or two, and when you do look, look only at subject matter and sender, and essentially skip over 60-80 percent of it, because most emails you get aren’t very useful.” Pozen also encourages answering important emails immediately instead of flagging them and then finding them again later (or forgetting altogether), as well as flagging important contacts and making ample use of email filters.

However, Pozen stresses that managing incoming emails, while an important skill, needs to be paired with other, more big-picture habits in order to be effective, such as defining your highest priorities. He warns that without a specific set of goals to pursue — both personal and professional — many ambitious people devote insufficient time to activities that actually support their top goals.

More tips for maximizing your productivity

If you want to become more productive, try developing the “habit clusters” demonstrated in Pozen’s survey results and shared by the most productive professionals. These include:

  • Focusing on your primary objectives: Every night, revise your next day’s schedule to stress your top priorities. Decide your purpose for reading any lengthy material, before you start.
  • Managing your work overload: Skip over 50-80 percent of your emails based on the sender and the subject. Break large projects into small steps — and start with step one.
  • Supporting your colleagues: Limit any meeting to 90 minutes or less and end each meeting with clearly defined next steps. Agree on success metrics with your team.

Pozen’s survey tool is still available online. Those completing it will receive a feedback report offering practical tips for improving productivity. You can also learn from Pozen firsthand in his MIT Executive Education program, Maximizing Your Personal Productivity.


Source

Professor Emeritus Sylvain Bromberger, philosopher of language and science, dies at 94

Professor Emeritus Sylvain Bromberger, a philosopher of language and of science who played a pivotal role in establishing MIT’s Department of Linguistics and Philosophy, died on Sept. 16 in Cambridge, Massachusetts. He was 94.

A faculty member for more than 50 years, Bromberger helped found the department in 1977 and headed the philosophy section for several years. He officially retired in 1993 but remained very active at MIT until his death.

Kindness and intellectual generosity

“Although he officially retired 25 years ago, Sylvain was an active and valued member of the department up to the very end,” said Alex Byrne, head of the Department of Linguistics and Philosophy. “He made enduring contributions to philosophy and linguistics, and his colleagues and students were frequent beneficiaries of his kindness and intellectual generosity. He had an amazing life in so many ways, and MIT is all the better for having been a part of it.”

Paul Egré, director of research at the French National Center for Scientific Research (aka CNRS) and a former visiting scholar at MIT, said, “Those of us who were lucky enough to know Sylvain have lost the dearest of friends, a unique voice, a distinctive smile and laugh, someone who yet seemed to know that life is vain and fragile in unsuspected ways, but also invaluable in others.”

Enduring contribution to fundamental issues about knowledge

Bromberger’s work centered largely on fundamental issues in epistemology, namely the theory of knowledge and the conditions that make knowledge possible or impossible. During the course of his career, he devoted a substantial part of his thinking to an examination of the ways in which we come to apprehend unsolved questions. His research in the philosophy of linguistics, carried out in part with the late Institute Professor Morris Halle of the linguistics section, included investigations into the foundations of phonology and of morphology.

Born in 1924 in Antwerp to a French-speaking Jewish family, Bromberger escaped the German invasion of Belgium with his parents and two brothers on May 10, 1940. After reaching Paris, then Bordeaux, his family obtained one of the last visas issued by the Portuguese consul Aristides de Sousa Mendes in Bayonne. Bromberger later dedicated the volume of his collected papers “On What We Know We Don’t Know: Explanation, Theory, Linguistics, and How Questions Shape Them” (University of Chicago Press, 1992) to Sousa Mendes.

The family fled to New York, and Bromberger was admitted to Columbia University. However, he chose to join the U.S. Army in 1942, and he went on to serve three years in the infantry. He took part in the liberation of Europe as a member of the 405th Regiment, 102nd Infantry Division. He was wounded during the invasion of Germany in 1945.

After leaving the Army, Bromberger studied physics and the philosophy of science at Columbia University, earning his bachelor’s degree in 1948. He received his PhD in philosophy from Harvard University in 1961.
 
Research and teaching at MIT

He served on the philosophy faculties at Princeton University and at the University of Chicago before joining MIT in 1966. Over the years, he trained many generations of MIT students, teaching alongside such notables as Halle, Noam Chomsky, Thomas Kuhn, and Ken Hale.

In the early part of his career, Bromberger focused on critiquing the so-called deductive-nomological model of explanation, which says that to explain a phenomenon is to deductively derive the statement reporting that phenomenon from laws (universal generalizations) and antecedent conditions. For example, we can explain that this water boils by deriving that statement from the law that all water boils at 100 degrees Celsius and the antecedent condition that the temperature of this water was raised to exactly 100 C.

An influential article: Why-questions

One simple though key observation made by Bromberger in his analysis was that we may not only explain that the water boils at 100 C, but also how it boils, and even why it boils when heated up. This feature gradually led Bromberger to think about the semantics and pragmatics of questions and their answers.

Bromberger’s 1966 “Why-questions” paper was probably his most influential article. In it, he highlights the fact that most scientifically valid questions put us at first in a state in which we know all actual answers to the question to be false, but in which we can nevertheless recognize the question to have a correct answer (a state he calls “p-predicament,” with “p” for “puzzle”). According to Bromberger, why-questions are particularly emblematic of this state of p-predicament, because in order to ask a why-question rationally, a number of felicity conditions (or presuppositions) must be satisfied, which are discussed in his work.

The paper had an influence on later accounts of explanation, notably Bas van Fraassen’s discussion of the semantic theory of contrastivism in his book “The Scientific Image” (to explain a phenomenon is to answer a why-question with a contrast class in mind). Still today, why-questions are recognized as questions whose semantics is hard to specify, in part for reasons Bromberger discussed.

In addition to investigating the syntactic, semantic, and pragmatic analysis of interrogatives, Bromberger also immersed himself in generative linguistics, with a particular interest in generative phonology, and the methodology of linguistic theory, teaching a seminar on the latter with Thomas Kuhn.

A lifelong engagement with new ideas

In 1993, the MIT Press published a collection of essays in linguistics to honor Bromberger on the occasion of his retirement. “The View From Building 20,” edited by Ken Hale and Jay Keyser, featured essays by Chomsky, Halle, Alec Marantz, and other distinguished colleagues.

In 2017, Egré and Robert May put together a workshop honoring Bromberger at the Ecole Normale Supérieure in Paris. Talks there centered on themes from Bromberger’s work, including metacognition, questions, linguistic theory, and problems concerning word individuation.

Tributes were read, notably this one from Chomsky, who used to take walks with Bromberger when they taught together:

“Those walks were a high point of the day for many years … almost always leaving me with the same challenging question: Why? Which I’ve come to think of as Sylvain’s question. And leaving me with the understanding that it is a question we should always ask when we have surmounted some barrier in inquiry and think we have an answer, only to realize that we are like mountain climbers who think they see the peak but when they approach it find that it still lies tantalizingly beyond.”

Egré noted that even when Bromberger was in his 90s, he had a “constant appetite for new ideas. He would always ask what your latest project was about, why it was interesting, and how you would deal with a specific problem,” Egré said. “His hope was that philosophy, linguistics, and the brain sciences would eventually join forces to uncover unprecedented dimensions of the human mind, erasing at least some of our ignorance.”

Bromberger’s wife of 64 years, Nancy, died in 2014. He is survived by two sons, Allen and Daniel; and three grandchildren, Michael Barrows, Abigail Bromberger, and Eliza Bromberger.
 

Written by Paul Egré and Kathryn O’Neill, with contributions from Daniel Bromberger, Allen Bromberger, Samuel Jay Keyser, Robert May, Agustin Rayo, Philippe Schlenker, and Benjamin Spector
 

Source

Artificial “muscles” achieve powerful pulling force

As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they expand at different rates when heated. This is the same principle used in many thermostats, for example, which use a bimetallic strip to measure temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
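To make the principle concrete, here is a back-of-the-envelope sketch of the bimorph effect. It is a minimal sketch using the classical equal-thickness, equal-stiffness bimetal approximation (curvature = 3·Δα·ΔT / 2h); all material numbers are illustrative assumptions, not the researchers’ measured data.

    # Rough bimorph estimate; the coefficients below are hypothetical
    # placeholders, not the paper's measured material properties.
    alpha_elastomer = 3.0e-4   # 1/K, high-expansion elastomer (assumed)
    alpha_pe = 1.5e-4          # 1/K, stiffer polyethylene (assumed)
    h = 100e-6                 # total fiber thickness: 100 micrometers
    dT = 1.0                   # the ~1 degree C activation noted below

    # Equal-thickness, equal-stiffness limit of the classical bimetal formula.
    curvature = 3 * (alpha_elastomer - alpha_pe) * dT / (2 * h)  # 1/m
    print(f"bend radius after a 1 K change: {1 / curvature:.2f} m")
    # ~0.44 m here: a small per-degree bend that a pre-stretched, pre-coiled
    # fiber converts into a large contraction along the coil axis.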

Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next actually came as a surprise when the researchers first experienced it. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.

One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.

The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating the fibers internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, such as “a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.

Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintilo. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.


Topics: Research, Materials Science and Engineering, DMSE, Mechanical engineering, Nanoscience and nanotechnology, Research Laboratory of Electronics, McGovern Institute, Brain and cognitive sciences, School of Science, School of Engineering, National Science Foundation (NSF)

Source

Model paves way for faster, more efficient translations of more languages

MIT researchers have developed a novel “unsupervised” language translation model — meaning it runs without the need for human annotations and guidance — that could lead to faster, more efficient computer-based translations of far more languages.

Translation systems from Google, Facebook, and Amazon require training models to look for patterns in millions of documents — such as legal and political documents, or news articles — that have been translated into various languages by humans. Given new words in one language, they can then find the matching words and phrases in the other language.

But this translational data is time consuming and difficult to gather, and simply may not exist for many of the 7,000 languages spoken worldwide. Recently, researchers have been developing “monolingual” models that make translations between texts in two languages, but without direct translational information between the two.

In a paper being presented this week at the Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a model that runs faster and more efficiently than these monolingual models.

The model leverages a metric in statistics, called Gromov-Wasserstein distance, that essentially measures distances between points in one computational space and matches them to similarly distanced points in another space. They apply that technique to “word embeddings” of two languages, which are words represented as vectors — basically, arrays of numbers — with words of similar meanings clustered closer together. In doing so, the model quickly aligns the words, or vectors, in both embeddings that are most closely correlated by relative distances, meaning they’re likely to be direct translations.
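As a rough illustration of that idea, the sketch below aligns two toy embedding spaces with the open-source POT (Python Optimal Transport) package. It is a minimal sketch, not the paper’s implementation: the data is random, and the package choice and vocabulary sizes are assumptions.

    import numpy as np
    import ot  # the open-source Python Optimal Transport (POT) package

    # Toy stand-ins for two languages' word embeddings. The dimensions need
    # not match, because Gromov-Wasserstein compares only within-language
    # distances, never raw coordinates across the two spaces.
    rng = np.random.default_rng(0)
    X_src = rng.normal(size=(50, 300))   # 50 "words" in the source language
    X_tgt = rng.normal(size=(40, 200))   # 40 "words" in the target language

    # Within-language pairwise distance matrices, normalized for comparability.
    C_src = ot.dist(X_src, X_src); C_src /= C_src.max()
    C_tgt = ot.dist(X_tgt, X_tgt); C_tgt /= C_tgt.max()

    # Uniform weight on every word in each vocabulary.
    p, q = ot.unif(len(X_src)), ot.unif(len(X_tgt))

    # The coupling matrix: entry (i, j) is the probability mass linking source
    # word i to target word j, chosen so similarly spaced points line up.
    coupling = ot.gromov.gromov_wasserstein(C_src, C_tgt, p, q, 'square_loss')
    likely_translation = coupling.argmax(axis=1)  # best guess per source word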

In experiments, the researchers’ model performed as accurately as state-of-the-art monolingual models — and sometimes more accurately — but much more quickly and using only a fraction of the computation power.

“The model sees the words in the two languages as sets of vectors, and maps [those vectors] from one set to the other by essentially preserving relationships,” says the paper’s co-author Tommi Jaakkola, a CSAIL researcher and the Thomas Siebel Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. “The approach could help translate low-resource languages or dialects, so long as they come with enough monolingual content.”

The model represents a step toward one of the major goals of machine translation, which is fully unsupervised word alignment, says first author David Alvarez-Melis, a CSAIL PhD student: “If you don’t have any data that matches two languages … you can map two languages and, using these distance measurements, align them.”

Relationships matter most

Aligning word embeddings for unsupervised machine translation isn’t a new concept. Recent work has trained neural networks to directly match vectors between the word embeddings, or matrices, of two languages. But these methods require a lot of tweaking during training to get the alignments exactly right, which is inefficient and time consuming.

Measuring and matching vectors based on relational distances, on the other hand, is a far more efficient method that doesn’t require much fine-tuning. No matter where word vectors fall in a given matrix, the relationship between the words, meaning their distances, will remain the same. For instance, the vector for “father” may fall in completely different areas in two matrices. But vectors for “father” and “mother” will most likely always be close together.

“Those distances are invariant,” Alvarez-Melis says. “By looking at distance, and not the absolute positions of vectors, then you can skip the alignment and go directly to matching the correspondences between vectors.”

That’s where Gromov-Wasserstein comes in handy. The technique has been used in computer science for, say, helping align image pixels in graphic design. But the metric seemed “tailor made” for word alignment, Alvarez-Melis says: “If there are points, or words, that are close together in one space, Gromov-Wasserstein is automatically going to try to find the corresponding cluster of points in the other space.”

For training and testing, the researchers used a dataset of publicly available word embeddings, called FASTTEXT, with 110 language pairs. In these embeddings, and others, words that appear frequently in similar contexts have closely matching vectors. “Mother” and “father” will usually be close together but both farther away from, say, “house.”

Providing a “soft translation”

The model notes vectors that are closely related yet different from the others, and assigns a probability that similarly distanced vectors in the other embedding will correspond. It’s kind of like a “soft translation,” Alvarez-Melis says, “because instead of just returning a single word translation, it tells you ‘this vector, or word, has a strong correspondence with this word, or words, in the other language.’”

An example would be the months of the year, which appear close together in many languages. The model will see a cluster of 12 vectors in one embedding and a remarkably similar cluster in the other embedding. “The model doesn’t know these are months,” Alvarez-Melis says. “It just knows there is a cluster of 12 points that aligns with a cluster of 12 points in the other language, but they’re different from the rest of the words, so they probably go together well. By finding these correspondences for each word, it then aligns the whole space simultaneously.”

The researchers hope the work serves as a “feasibility check,” Jaakkola says, for applying the Gromov-Wasserstein method to machine-translation systems so they can run faster and more efficiently, and gain access to many more languages.

Additionally, a possible perk of the model is that it automatically produces a value that can be interpreted as quantifying, on a numerical scale, the similarity between languages. This may be useful for linguistics studies, the researchers say. The model calculates how distant all vectors are from one another in two embeddings, which depends on sentence structure and other factors. If vectors are all really close, they’ll score closer to 0, and the farther apart they are, the higher the score. Similar Romance languages such as French and Italian, for instance, score close to 1, while Classical Chinese scores between 6 and 9 with other major languages.
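In terms of the sketch above, that similarity number is simply the optimized Gromov-Wasserstein objective itself; continuing the earlier toy example (and again assuming POT’s API):

    # The scalar GW objective is a distance between the two embedding spaces,
    # usable as the language-similarity score described above.
    gw_value = ot.gromov.gromov_wasserstein2(C_src, C_tgt, p, q, 'square_loss')
    print(f"Gromov-Wasserstein distance between the two spaces: {gw_value:.4f}")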

“This gives you a nice, simple number for how similar languages are … and can be used to draw insights about the relationships between languages,” Alvarez-Melis says.


Topics: Research, Language, Machine learning, Artificial intelligence, Data, Algorithms, Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), IDSS, Electrical Engineering & Computer Science (eecs), School of Engineering

Source

Automated system generates robotic parts for novel tasks

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.  

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.  

Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.

In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels. Actuators can be fabricated with more than 100 voxel layers. Columns can contain more than 100 voxels, with different sequences of the materials that radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
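A toy version of that accept-or-revert loop is sketched below. It is only a schematic: the two-angle shading model stands in for full ray-tracing, the grid is tiny rather than 5.5 million voxels, and every name and number is an illustrative assumption.

    import random

    # Hypothetical gray values for three stand-in materials.
    DARKNESS = {"clear": 0.0, "flexible": 0.5, "magnetic": 1.0}

    def shade_flat(grid):
        # Flat view: a simulated beam goes straight down each voxel column.
        return [sum(DARKNESS[m] for m in col) / len(col) for col in grid]

    def shade_tilted(grid):
        # Tilted view: approximate a slanted beam by shifting one column per
        # layer, so voxels move in and out of the beam as the actuator tilts.
        n, depth = len(grid), len(grid[0])
        return [sum(DARKNESS[grid[(i + k) % n][k]] for k in range(depth)) / depth
                for i in range(n)]

    def error(grid, target_flat, target_tilted):
        flat, tilted = shade_flat(grid), shade_tilted(grid)
        return (sum((a - b) ** 2 for a, b in zip(flat, target_flat)) +
                sum((a - b) ** 2 for a, b in zip(tilted, target_tilted)))

    def optimize(grid, target_flat, target_tilted, iters=20000):
        best = error(grid, target_flat, target_tilted)
        for _ in range(iters):
            i, k = random.randrange(len(grid)), random.randrange(len(grid[0]))
            old = grid[i][k]
            grid[i][k] = random.choice(list(DARKNESS))  # trial reassignment
            new = error(grid, target_flat, target_tilted)
            if new < best:
                best = new        # improvement: keep the swap
            else:
                grid[i][k] = old  # no improvement: revert and try again
        return best

    random.seed(0)
    grid = [[random.choice(list(DARKNESS)) for _ in range(8)] for _ in range(16)]
    flat_target = [i / 15 for i in range(16)]        # light-to-dark ramp
    tilted_target = [1 - i / 15 for i in range(16)]  # reversed ramp when tilted
    print(f"final squared error: {optimize(grid, flat_target, tilted_target):.4f}")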

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight and lift, and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.


Topics: Research, Computer science and technology, Algorithms, Artificial intelligence, Machine learning, Robots, Robotics, 3-D printing, Materials Science and Engineering, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering

Source

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.

But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.

In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.

This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.

In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.

The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of a bigger piece to understand how this kind of learning happens in the world.”

Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.

Visual learner

For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.

The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components — objects interacting with each other and with people — and high-level properties you wouldn’t see in a still image or just in language,” Ross says.

The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.

In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: λxy. woman x, pick_up x y, apple y.

Those expressions and the video are inputted to the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.
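The grounding step can be pictured with a small sketch. The logical form below mirrors the example from the text; the track and event format is a hypothetical stand-in for the Sentence Tracker’s internals, not its actual interface.

    from itertools import product

    # Hypothetical vision output for one clip: labeled object tracks, plus
    # detected (action, agent, patient) events over the whole video.
    tracks = {"t1": "woman", "t2": "apple", "t3": "table"}
    events = {("pick_up", "t1", "t2")}

    # "The woman is picking up an apple" => λxy. woman x, pick_up x y, apple y
    def meaning_true(tracks, events):
        # Search all assignments of the variables x, y to object tracks; the
        # meaning is true of the video if some assignment satisfies every term.
        for x, y in product(tracks, repeat=2):
            if (tracks[x] == "woman" and tracks[y] == "apple"
                    and ("pick_up", x, y) in events):
                return True
        return False

    print(meaning_true(tracks, events))  # True: the caption fits the video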

Connecting the dots

The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.

In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”

The training produces a syntactic and semantic grammar for the words it’s learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.

Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see the world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”

“This research is exactly the right direction for natural language processing,” says Stefanie Tellex, a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret grounded language, we need semantic representations, but it is not practicable to make it available at training time. Instead, this work captures representations of compositional structure using context from captioned videos. This is the paper I have been waiting for!”

In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.

This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.


Topics: Research, Language, Machine learning, Artificial intelligence, Data, Computer vision, Human-computer interaction, McGovern Institute, Center for Brains Minds and Machines, Robots, Robotics, National Science Foundation (NSF), Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering, MIT-IBM Watson AI Lab

Source

IDSS hosts inaugural Learning for Dynamics and Control conference

Over the next decade, the biggest generator of data is expected to be devices that sense and control the physical world. From autonomy to robotics to smart cities, this data explosion — paired with advances in machine learning — creates new possibilities for designing and optimizing technological systems that use their own real-time generated data to make decisions.

To address the many scientific questions and application challenges posed by the real-time physical processes of these “dynamical” systems, researchers from MIT and elsewhere organized a new annual conference called Learning for Dynamics and Control. Dubbed L4DC, the inaugural conference was hosted at MIT by the Institute for Data, Systems, and Society (IDSS).

As excitement has built around machine learning and autonomy, there is an increasing need to consider both the data that physical systems produce and the feedback these systems receive, especially from their interactions with humans. That extends into the domains of data science, control theory, decision theory, and optimization.

“We decided to launch L4DC because we felt the need to bring together the communities of machine learning, robotics, and systems and control theory,” said IDSS Associate Director Ali Jadbabaie, a conference co-organizer and professor in IDSS, the Department of Civil and Environmental Engineering (CEE), and the Laboratory for Information and Decision Systems (LIDS).

“The goal was to bring together these researchers because they all converged on a very similar set of research problems and challenges,” added co-organizer Ben Recht, of the University of California at Berkeley, in opening remarks.

Over the two days of the conference, talks covered core topics including the foundations of learning dynamical models, data-driven optimization for dynamical models, optimization for machine learning, reinforcement learning for physical systems, and reinforcement learning for both dynamical and control systems. Talks also featured examples of applications in fields like robotics, autonomy, and transportation systems.

“How could self-driving cars change urban systems?” asked Cathy Wu, an assistant professor in CEE, IDSS, and LIDS, in a talk that investigated how transportation and urban systems may change over the next few decades. Only a small percentage of autonomous vehicles are needed to significantly affect traffic systems, Wu argued, which will in turn affect other urban systems. “Distribution learning provides us with an understanding for integrating autonomy into urban systems,” said Wu.

Claire Tomlin of UC Berkeley presented on integrating learning into control in the context of safety in robotics. Tomlin’s team integrates learning mechanisms that help robots adapt to sudden changes, such as a gust of wind, an unexpected human behavior, or an unknown environment. “We’ve been working on a number of mechanisms for doing this computation in real time,” Tomlin said.

Pablo Parrilo, a professor in the Department of Electrical Engineering and Computer Science and faculty member of both IDSS and LIDS, was also a conference organizer, along with George Pappas of the University of Pennsylvania and Melanie Zeilinger of ETH Zurich.

L4DC was sponsored by the National Science Foundation, the U.S. Air Force Office of Scientific Research, the Office of Naval Research, and the Army Research Office, a part of the Combat Capabilities Development Command Army Research Laboratory (CCDC ARL).

“The cutting-edge combination of classical control with recent advances in artificial intelligence and machine learning will have significant and broad potential impact on Army multi-domain operations, and include a variety of systems that will incorporate autonomy, decision-making and reasoning, networking, and human-machine collaboration,” said Brian Sadler, senior scientist for intelligent systems, U.S. Army CCDC ARL.

Organizers plan to make L4DC a recurring conference, hosted at different institutions. “Everyone we invited to speak accepted,” Jadbabaie said. “The largest room in Stata was packed until the end of the conference. We take this as a testament to the growing interest in this area, and hope to grow and expand the conference further in the coming years.”


Topics: Institute for Data, Systems, and Society, Civil and environmental engineering, Laboratory for Information and Decision Systems (LIDS), Electrical Engineering & Computer Science (eecs), School of Engineering, Machine learning, Special events and guest speakers, Data, Research, Robotics, Transportation, Autonomous vehicles

Source

Professor Emerita Catherine Chvany, Slavic scholar, dies at 91

Professor Emerita Catherine Vakar Chvany, a renowned Slavic linguist and literature scholar who played a pivotal role in advancing the study of Russian language and literature in MIT’s Foreign Languages and Literatures Section (now Global Studies and Languages), died on Oct. 19 in Watertown, Massachusetts. She was 91.

Chvany served on the MIT faculty for 26 years before her retirement in 1993.

Global Studies and Languages head Emma Teng noted that MIT’s thriving Russian studies curriculum today is a legacy of Chvany’s foundational work in the department. And, Maria Khotimsky, senior lecturer in Russian, said, “Several generations of Slavists are grateful for Professor Chvany’s inspiring mentorship, while her works in Slavic poetics and linguistics are renowned in the U.S. and internationally.”

A prolific and influential scholar

A prolific scholar, Chvany wrote “On the Syntax of Be-Sentences in Russian” (Slavica Publishers, 1975); and co-edited four volumes: “New Studies in Russian Language and Literature” (Slavica, 1987); “Morphosyntax in Slavic” (Slavica, 1980); “Slavic Transformational Syntax” (University of Michigan, 1974); and “Studies in Poetics: Commemorative Volume: Krystyna Pomorska” (Slavica Publishers, 1995).

In 1996, linguists Olga Yokoyama and Emily Klenin published an edited collection of her work, “Selected Essays of Catherine V. Chvany” (Slavica).

In her articles, Chvany took up a range of issues in linguistics, including not only variations on the verb “to be” but also hierarchies of situations in syntax of agents and subjects; definiteness in Bulgarian, English, and Russian; other issues of lexical storage and transitivity; hierarchies in Russian cases; and issues of markedness, including an important overview, “The Evolution of the Concept of Markedness from the Prague Circle to Generative Grammar.”

In literature she took up language issues in the classic “Tale of Igor’s Campaign,” Teffi’s poems, Nikolai Leskov’s short stories, and a novella by Aleksandr Solzhenitsyn.

From Paris to Cambridge 

“Catherine Chvany was always so present that it is hard to think of her as gone,” said MIT Literature Professor Ruth Perry. “She had strong opinions and wasn’t afraid to speak out about them.”

Chvany was born on April 2, 1927, in Paris, France, to émigré Russian parents. During World War II, she and her younger sister Anna were sent first to the Pyrenees and then to the United States with assistance from a courageous young Unitarian minister’s wife, Martha Sharp.

Fluent in Russian and French, Chvany quickly mastered English. She graduated from the Girls’ Latin School in Boston in 1946 and attended Radcliffe College from 1946 to 1948. She left school to marry Lawrence Chvany and raise three children, Deborah, Barbara, and Michael.

In 1961-63, she returned to school and completed her undergraduate degree in linguistics at Harvard University. She received her PhD in Slavic languages and literatures from Harvard in 1970 and began her career as an instructor of Russian language at Wellesley College in 1966.

She joined the faculty at MIT in 1967 and became an assistant professor in 1971, an associate professor in 1974, and a full professor in 1983.

Warmth, generosity, and friendship

Historian Philip Khoury, who was dean of the School of Humanities, Arts and Social Sciences during the latter years of Chvany’s time at MIT, remembered her warmly as “a wonderful colleague who loved engaging with me on language learning and how the MIT Russian language studies program worked.”

Elizabeth Wood, a professor of Russian history, recalled the warm welcome that Chvany gave her when she came to MIT in 1990: “She always loved to stop and talk at the Tuesday faculty lunches, sharing stories of her life and her love of Slavic languages.”

Chvany’s influence was broad and longstanding, in part as a result of her professional affiliations. Chvany served on the advisory or editorial boards of “Slavic and East European Journal,” “Russian Language Journal,” “Journal of Slavic Linguistics,” “Peirce Seminar Papers,” “Essays in Poetics” (United Kingdom), and “Supostavitelno ezikoznanie” (Bulgaria).

Emily Klenin, an emerita professor of Slavic languages and literature at the University of California at Los Angeles, noted that Chvany had a practice of expressing gratitude to those whom she mentored. She connected that practice to Chvany’s experience of being aided during WWII. “Her warm and open attitude toward life was reflected in her continuing interest and friendship for the young people she mentored, even when, as most eventually did, they went on to lives involving completely different academic careers or even no academic career at all,” Klenin said.

Memorial reception at MIT on November 18

Chvany is survived by her children, Deborah Gyapong and her husband Tony of Ottawa, Canada; Barbara Chvany and her husband Ken Silbert of Orinda, California; and Michael Chvany and his wife Sally of Arlington, Massachusetts; her foster-brother, William Atkinson of Cambridge, Massachusetts; six grandchildren; and nine great-grandchildren.

A memorial reception will be held on Sunday, Nov. 18, from 1:30 to 4:00 p.m. in the Samberg Conference Center, 7th floor. Donations in Chvany’s name may be made to the Unitarian Universalist Association. Visit Friends of the UUA for online donations. Please RSVP to Michael Chvany, Mike@BridgeStreetProductions.com, if you plan to attend the memorial.


Source

Pathways to a low-carbon China

Fulfilling the ultimate goal of the 2015 Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius, if not 1.5 C — will be impossible without dramatic action from the world’s largest emitter of greenhouse gases, China. Toward that end, China began in 2017 developing an emissions trading scheme (ETS), a national carbon dioxide market designed to enable the country to meet its initial Paris pledge with the greatest efficiency and at the lowest possible cost. China’s pledge, or nationally determined contribution (NDC), is to reduce its CO2 intensity of gross domestic product (emissions produced per unit of economic activity) by 60 to 65 percent in 2030 relative to 2005, and to peak CO2 emissions around 2030.

When it’s rolled out, China’s carbon market will initially cover the electric power sector (which currently produces more than 3 billion tons of CO2) and likely set CO2 emissions intensity targets (e.g., grams of CO2 per kilowatt hour) to ensure that its short-term NDC is fulfilled. But to help the world achieve the long-term 2 C and 1.5 C Paris goals, China will need to continually decrease these targets over the course of the century.

A new study of China’s long-term power generation mix under the nation’s ETS projects that until 2065, renewable energy sources will likely expand to meet these targets; after that, carbon capture and storage (CCS) could be deployed to meet the more stringent targets that follow. Led by researchers at the MIT Joint Program on the Science and Policy of Global Change, the study appears in the journal Energy Economics.

“This research provides insight into the level of carbon prices and mix of generation technologies needed for China to meet different CO2 intensity targets for the electric power sector,” says Jennifer Morris, lead author of the study and a research scientist at the MIT Joint Program. “We find that coal CCS has the potential to play an important role in the second half of the century, as part of a portfolio that also includes renewables and possibly nuclear power.”

To evaluate the impacts of multiple potential ETS pathways — different starting carbon prices and rates of increase — on the deployment of CCS technology, the researchers enhanced the MIT Economic Projection and Policy Analysis (EPPA) model to include the joint program’s latest assessments of the costs of low-carbon power generation technologies in China. Among the technologies included in the model are natural gas, nuclear, wind, solar, coal with CCS, and natural gas with CCS. Assuming that power generation prices are the same across the country for any given technology, the researchers identify different ETS pathways in which CCS could play a key role in lowering the emissions intensity of China’s power sector, particularly for targets consistent with achieving the long-term 2 C and 1.5 C Paris goals by 2100.

The study projects a two-stage transition — first to renewables, and then to coal CCS. The transition from renewables to CCS is driven by two factors. First, at higher levels of penetration, renewables incur increasing costs related to accommodating the intermittency challenges posed by wind and solar. This paves the way for coal CCS. Second, as experience with building and operating CCS technology is gained, CCS costs decrease, allowing the technology to be rapidly deployed at scale after 2065 and replace renewables as the primary power generation technology.

The study shows that carbon prices of $35-40 per ton of CO2 make CCS technologies coupled with coal-based generation cost-competitive against other modes of generation, and that carbon prices higher than $100 per ton of CO2 allow for a significant expansion of CCS.
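A back-of-the-envelope comparison shows why a carbon price creates a crossover point for CCS. The sketch below is not the EPPA model; all cost and emissions-intensity figures are placeholder assumptions, chosen so the coal/CCS crossover lands in the $35-40 range the study reports.

```python
# Back-of-the-envelope cost comparison (not the EPPA model). A carbon price
# adds emissions_intensity * price to each technology's effective cost.
# All figures are placeholder assumptions, tuned so the coal/CCS crossover
# falls in the $35-40 per ton range reported in the study.
TECHS = {
    #              $/MWh  t CO2/MWh
    "coal":        (60.0, 0.90),
    "coal + CCS":  (90.0, 0.10),  # ~90 percent capture assumed
}

def effective_cost(base, intensity, carbon_price):
    return base + intensity * carbon_price

for price in (0, 35, 40, 100):
    costs = {name: effective_cost(b, i, price) for name, (b, i) in TECHS.items()}
    cheapest = min(costs, key=costs.get)
    summary = ", ".join(f"{name} ${cost:.0f}/MWh" for name, cost in costs.items())
    print(f"carbon price ${price:>3}/t: {summary} -> cheapest: {cheapest}")
```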

“Our study is at the aggregate level of the country,” says Sergey Paltsev, deputy director of the joint program. “We recognize that the cost of electricity varies greatly from province to province in China, and hope to include interactions between provinces in our future modeling to provide deeper understanding of regional differences. At the same time, our current results provide useful insights to decision-makers in designing more substantial emissions mitigation pathways.”


Topics: Joint Program on the Science and Policy of Global Change, MIT Energy Initiative, Climate change, Alternative energy, Energy, Environment, Economics, Greenhouse gases, Carbon dioxide, Research, Policy, Emissions, China, Technology and society

Source

Times Higher Education ranks MIT No. 1 in business and economics, No. 2 in arts and humanities

MIT has taken the top spot in the Business and Economics subject category in the 2019 Times Higher Education World University Rankings and, for the second year in a row, the No. 2 spot worldwide for Arts and Humanities.

The Times Higher Education World University Rankings is an annual publication of university rankings by Times Higher Education, a leading British education magazine. The rankings use a set of 13 rigorous performance indicators to evaluate schools both overall and within individual fields. Criteria include teaching and learning environment, research volume and influence, and international outlook.

Business and Economics

The No. 1 ranking for Business and Economics is based on an evaluation of both the MIT Department of Economics — housed in the MIT School of Humanities, Arts, and Social Sciences — and of the MIT Sloan School of Management.

“We are always delighted when the high quality of work going on in our school and across MIT is recognized, and warmly congratulate our colleagues in MIT Sloan with whom we share this honor,” said Melissa Nobles, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences (SHASS).

The Business and Economics ranking evaluated 585 universities for their excellence in business, management, accounting, finance, economics, and econometrics subjects. In this category, MIT was followed by Stanford University and Oxford University.

“Being recognized as first in business and management is gratifying and we are thrilled to share the honors with our colleagues in the MIT Department of Economics and MIT SHASS,” said David Schmittlein, dean of MIT Sloan.

MIT has long been a powerhouse in economics. For over a century, the Department of Economics at MIT has played a leading role in economics education, research, and public service, and the department’s faculty have won a total of nine Nobel Prizes over the years. MIT Sloan faculty have also won two Nobels, and the school is known as a driving force behind MIT’s entrepreneurial ecosystem: Companies started by MIT alumni have created millions of jobs and generate nearly $2 trillion a year in revenue.

Arts and Humanities

The Arts and Humanities ranking evaluated 506 universities that lead in art, performing arts, design, languages, literature, linguistics, history, philosophy, theology, architecture, and archaeology subjects. MIT was rated just below Stanford and above Harvard University in this category. MIT’s high ranking reflects the strength of both the humanities disciplines and performing arts located in MIT SHASS and the design fields and humanistic work located in MIT’s School of Architecture and Planning (SA+P).

At MIT, outstanding humanities and arts programs in SHASS — including literature; history; music and theater arts; linguistics; philosophy; comparative media studies; writing; languages; science, technology and society; and women’s and gender studies — sit alongside equally strong initiatives within SA+P in the arts; architecture; design; urbanism; and history, theory, and criticism. SA+P is also home to the Media Lab, which focuses on unconventional research in technology, media, science, art, and design.

“The recognition from Times Higher Education confirms the importance of creativity and human values in the advancement of science and technology,” said Hashim Sarkis, dean of SA+P. “It also rewards MIT’s longstanding commitment to ‘The Arts’ — words that are carved in the Lobby 7 dome signifying one of the main areas for the application of technology.”

Receiving awards in multiple categories and in categories that span multiple schools at MIT is a recognition of the success MIT has had in fostering cross-disciplinary thinking, said Dean Nobles.

“It’s a testament to the strength of MIT’s model that these areas of scholarship and pedagogy are deeply seeded in multiple administrative areas,” Nobles said. “At MIT, we know that solving challenging problems requires the combined insight and knowledge from many fields. The world’s complex issues are not only scientific and technological problems; they are as much human and ethical problems.”


Topics: Awards, honors and fellowships, Arts, Architecture, Business and management, Comparative Media Studies/Writing, Economics, Global Studies and Languages, Humanities, History, Literature, Linguistics, Management, Music, Philosophy, Theater, Urban studies and planning, Rankings, Media Lab, School of Architecture and Planning, Sloan School of Management, School of Humanities Arts and Social Sciences

Source

Experiments show dramatic increase in solar cell output

In any conventional silicon-based solar cell, there is an absolute limit on overall efficiency, based partly on the fact that each photon of light can only knock loose a single electron, even if that photon carried twice the energy needed to do so. But now, researchers have demonstrated a method for getting high-energy photons striking silicon to kick out two electrons instead of one, opening the door for a new kind of solar cell with greater efficiency than was thought possible.

While conventional silicon cells have an absolute theoretical maximum efficiency of about 29.1 percent conversion of solar energy, the new approach, developed over the last several years by researchers at MIT and elsewhere, could bust through that limit, potentially adding several percentage points to that maximum output. The results are described today in the journal Nature, in a paper by graduate student Markus Einzinger, professor of chemistry Moungi Bawendi, professor of electrical engineering and computer science Marc Baldo, and eight others at MIT and at Princeton University.

The basic concept behind this new technology has been known for decades, and the first demonstration that the principle could work was carried out by some members of this team six years ago. But actually translating the method into a full, operational silicon solar cell took years of hard work, Baldo says.

That initial demonstration “was a good test platform” to show that the idea could work, explains Daniel Congreve PhD ’15, an alumnus now at the Rowland Institute at Harvard, who was the lead author in that prior report and is a co-author of the new paper. Now, with the new results, “we’ve done what we set out to do” in that project, he says.

The original study demonstrated the production of two electrons from one photon, but it did so in an organic photovoltaic cell, which is less efficient than a silicon solar cell. It turned out that transferring the two electrons from a top collecting layer made of tetracene into the silicon cell “was not straightforward,” Baldo says. Troy Van Voorhis, a professor of chemistry at MIT who was part of that original team, points out that the concept was first proposed back in the 1970s, and says wryly that turning that idea into a practical device “only took 40 years.”

The key to splitting the energy of one photon into two electrons lies in a class of materials that possess “excited states” called excitons, Baldo says: In these excitonic materials, “these packets of energy propagate around like the electrons in a circuit,” but with quite different properties than electrons. “You can use them to change energy — you can cut them in half, you can combine them.” In this case, they were going through a process called singlet exciton fission, which is how the light’s energy gets split into two separate, independently moving packets of energy. The material first absorbs a photon, forming an exciton that rapidly undergoes fission into two excited states, each with half the energy of the original state.
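The energy bookkeeping behind fission is worth making explicit. In the sketch below, the singlet and bandgap energies are approximate literature values used for illustration, not figures from the paper.

```python
# Energy bookkeeping for singlet exciton fission. The energies are rough
# literature ballparks for illustration, not numbers from the Nature paper.
E_SINGLET_TETRACENE = 2.4  # eV, exciton formed by a blue/green photon
E_BANDGAP_SILICON = 1.1    # eV, minimum energy to free one electron in Si

# Fission splits one singlet into two excited states of ~half the energy each.
e_split = E_SINGLET_TETRACENE / 2
print(f"each packet carries {e_split:.2f} eV")

# Two electrons result only if each half still clears the silicon bandgap.
if e_split > E_BANDGAP_SILICON:
    print("both packets exceed the Si bandgap -> two electrons per photon")
else:
    print("halves fall below the bandgap -> no doubling")
```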

But the tricky part was then coupling that energy over into the silicon, a material that is not excitonic. This coupling had never been accomplished before.

As an intermediate step, the team tried coupling the energy from the excitonic layer into a material called quantum dots. “They’re still excitonic, but they’re inorganic,” Baldo says. “That worked; it worked like a charm,” he says. By understanding the mechanism taking place in that material, he says, “we had no reason to think that silicon wouldn’t work.”

What that work showed, Van Voorhis says, is that the key to these energy transfers lies in the very surface of the material, not in its bulk. “So it was clear that the surface chemistry on silicon was going to be important. That was what was going to determine what kinds of surface states there were.” That focus on the surface chemistry may have been what allowed this team to succeed where others had not, he suggests.

The key was in a thin intermediate layer. “It turns out this tiny, tiny strip of material at the interface between these two systems [the silicon solar cell and the tetracene layer with its excitonic properties] ended up defining everything. It’s why other researchers couldn’t get this process to work, and why we finally did.” It was Einzinger “who finally cracked that nut,” he says, by using a layer of a material called hafnium oxynitride.

The layer is only a few atoms thick, just 8 angstroms (eight ten-billionths of a meter), but it acted as a “nice bridge” for the excited states, Baldo says. That finally made it possible for single high-energy photons to trigger the release of two electrons inside the silicon cell. That produces a doubling of the amount of energy produced by a given amount of sunlight in the blue and green part of the spectrum. Overall, that could raise the theoretical maximum power produced by the solar cell from 29.1 percent up to a maximum of about 35 percent.

Actual silicon cells are not yet at their maximum, and neither is the new material, so more development needs to be done, but the crucial step of coupling the two materials efficiently has now been proven. “We still need to optimize the silicon cells for this process,” Baldo says. For one thing, with the new system those cells can be thinner than current versions. Work also needs to be done on stabilizing the materials for durability. Overall, commercial applications are probably still a few years off, the team says.

Other approaches to improving the efficiency of solar cells tend to involve adding another kind of cell, such as a perovskite layer, over the silicon. Baldo says “they’re building one cell on top of another. Fundamentally, we’re making one cell — we’re kind of turbocharging the silicon cell. We’re adding more current into the silicon, as opposed to making two cells.”

The researchers have measured one special property of hafnium oxynitride that helps it transfer the excitonic energy. “We know that hafnium oxynitride generates additional charge at the interface, which reduces losses by a process called electric field passivation. If we can establish better control over this phenomenon, efficiencies may climb even higher,” Einzinger says. So far, no other material they’ve tested can match its properties.

The research was supported as part of the MIT Center for Excitonics, funded by the U.S. Department of Energy.


Topics: School of Engineering, Alternative energy, Chemistry, Excitonics, Climate change, Energy, MIT Energy Initiative, Research, Solar, Department of Energy (DoE), National Science Foundation (NSF), Research Laboratory of Electronics, Electrical Engineering & Computer Science (eecs), Materials Science and Engineering

Source

I think, therefore I code

To most of us, a 3-D-printed turtle just looks like a turtle: four legs, patterned skin, and a shell. But if you show it to a particular computer in a certain way, that object’s not a turtle — it’s a gun.

Objects or images that can fool artificial intelligence like this are called adversarial examples. Jessy Lin, a senior double-majoring in computer science and electrical engineering and in philosophy, believes that they’re a serious problem, with the potential to trip up AI systems involved in driverless cars, facial recognition, or other applications. She and several other MIT students have formed a research group called LabSix, which creates examples of these AI adversaries in real-world settings — such as the turtle identified as a rifle — to show that they are legitimate concerns.

Lin is also working on a project called Sajal, which is a system that could allow refugees to give their medical records to doctors via a QR code. This “mobile health passport” for refugees was born out of VHacks, a hackathon organized by the Vatican, where Lin worked with a team of people she’d met only a week before. The theme was to build something for social good — a guiding principle for Lin since her days as a hackathon-frequenting high school student.

“It’s kind of a value I’ve always had,” Lin says. “Trying to be thoughtful about, one, the impact that the technology that we put out into the world has, and, two, how to make the best use of our skills as computer scientists and engineers to do something good.”

Clearer thinking through philosophy

AI is one of Lin’s key interests in computer science, and she’s currently working in the Computational Cognitive Science group of Professor Josh Tenenbaum, which develops computational models of how humans and machines learn. The knowledge she’s gained through her other major, philosophy, relates more closely to this work than it might seem, she says.

“There are a lot of ideas in [AI and language-learning] that tie into ideas from philosophy,” she says. “How the mind works, how we reason about things in the world, what concepts are. There are all these really interesting abstract ideas that I feel like … studying philosophy surprisingly has helped me think about better.”

Lin says she didn’t know a lot about philosophy coming into college. She liked the first class she took, during her first year, so she took another one, and another — before she knew it, she was hooked. It started out as a minor; this past spring, she declared it as a major.

“It helped me structure my thoughts about the world in general, and think more clearly about all kinds of things,” she says.

Through an interdisciplinary class on ethics and AI, Lin realized the importance of incorporating perspectives from people who don’t work in computer science. Rather than writing those perspectives off, she wants to be someone inside the tech field who considers issues from a humanities perspective and listens to what people in other disciplines have to say.

Teaching computers to talk

Computers don’t learn languages the way that humans do — at least, not yet. Through her work in the Tenenbaum lab, Lin is trying to change that.

According to one hypothesis, when humans hear words, we figure out what they are by first saying them to ourselves in our heads. Some computer models aim to recreate this process, including recapitulating the individual sounds in a word. These “generative” models do capture some aspects of human language learning, but they have other drawbacks that make them impractical for use with real-world speech.

On the other hand, AI systems known as neural networks, which are trained on huge sets of data, have shown great success with speech recognition. Through several projects, Lin has been working on combining the strengths of both types of models, to better understand, for example, how children learn language even at a very young age.

Ultimately, Lin says, this line of research could contribute to the development of machines that can speak in a more flexible, human way.

Hackathons and other pastimes

Lin first discovered her passion for computer science at Great Neck North High School on Long Island, New York, where she loved staying up all night to create computer programs during hackathons. (More recently, Lin has played a key role in HackMIT, one of the Institute’s flagship hackathons. Among other activities, she helped organize the event from 2015 to 2017, and in 2016 was the director of corporate relations and sponsorship.) It was also during high school that she began to attend MIT Splash, a program hosted on campus offering a variety of classes for K-12 students.

“I was one of those people that always had this dream to come to MIT,” she says.

Lin says her parents and her two sisters have played a big role in supporting those dreams. However, her knack for artificial intelligence doesn’t seem to be genetic.

“My mom has her own business, and my dad is a lawyer, so … who knows where computer science came out of that?” she says, laughing.

In recent years, Lin has put her computer science skills to use in a variety of ways. While in high school, she interned at both New York University and Columbia University. During Independent Activities Period in 2018, she worked on security for Fidex, a friend’s cryptocurrency exchange startup. The following summer she interned at Google Research NYC on the natural language understanding team, where she worked on developing memory mechanisms that allow a machine to have a longer-term memory. For instance, a system would remember not only the last few phrases it read in a book, but a character from several chapters back. Lin now serves as a campus ambassador for Sequoia Capital, supporting entrepreneurship on campus.

She currently lives in East Campus, where she enjoys the “very vibrant dorm culture.” Students there organize building projects for each first-year orientation — when Lin arrived, they built a roller coaster. She’s helped with the building in the years since, including a geodesic dome that was taller than she is. Outside of class and building projects, she also enjoys photography.

Ultimately, Lin’s goal is to use her computer science skills to benefit the world. About her future after MIT, she says, “I think it could look something like trying to figure out how we can design AI that is increasingly intelligent but interacts with humans better.”


Topics: student, Undergraduate, Profile, Electrical Engineering & Computer Science (eecs), Philosophy, School of Engineering, School of Humanities Arts and Social Sciences, Technology and society, Humanities, Computer science and technology, Machine learning, Artificial intelligence, Algorithms, Language, Brain and cognitive science

Source

Making wireless communication more energy efficient

Omer Tanovic, a PhD candidate in the Department of Electrical Engineering and Computer Science, joined the Laboratory for Information and Decision Systems (LIDS) because he loves studying theory and turning research questions into solvable math problems. But Omer says that his engineering background — before coming to MIT he received undergraduate and master’s degrees in electrical engineering and computer science at the University of Sarajevo in Bosnia-Herzegovina — has taught him never to lose sight of the intended applications of his work, or the practical parameters for implementation.

“I love thinking about things on the abstract math level, but it’s also important to me that the work we are doing will help to solve real-world problems,” Omer says. “Instead of building circuits, I am creating algorithms that will help make better circuits.”

One real-world problem that captured Omer’s attention during his PhD is power efficiency in wireless operations. The success of wireless communications has led to massive infrastructure expansion in the United States and around the world. This has included many new cell towers and base stations. As these networks and the volume of information they handle grow, they consume an increasingly hefty amount of power, some of which goes to powering the system as it’s supposed to, but much of which is lost as heat due to energy inefficiency. This is a problem both for companies such as mobile network operators, which have to pay large utility bills to cover their operational costs, and for society at large, as the sector’s greenhouse gas emissions rise.

These concerns are what motivate Omer in his research. Most of the projects that he has worked on at MIT seek to design signal processing systems, optimized to different measures, that will increase power efficiency while ensuring that the output signal (what you hear when talking to someone on the phone, for instance) is true to the original input (what was said by the person on the other end of the call).

His latest project seeks to address the power efficiency problem by decreasing the peak-to-average power ratio (PAPR) of wireless communication signals. In the broadest sense, PAPR is an indirect indicator of how much power is required to send and receive a clear signal across a network. The lower this ratio is, the more energy-efficient the transmission.

Much of the power consumed in cellular networks is dedicated to power amplifiers, which collect low-power electronic input and convert it to a higher-power output, such as picking up a weak radio signal generated inside a cell phone and amplifying it so that, when emitted by an antenna, it is strong enough to reach a cell tower. This ensures that the signal is robust enough to maintain an adequate signal-to-noise ratio over the communication link. Power amplifiers are at their most efficient when operating near their saturation level, at maximum output power. However, because cellular network technology has evolved to accommodate a huge volume and variety of information across the network — resulting in far less uniform signals than in the past — modern communication standards require signals with big peak-to-average power ratios. This means that a radio frequency transmitter must be designed such that the underlying power amplifier can handle peaks much higher than the average power being transmitted, and therefore, most of the time, the power amplifier is working inefficiently, far from its saturation level.
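PAPR itself has a compact definition: the ratio of a signal’s peak instantaneous power to its average power, usually quoted in decibels. Here is a minimal sketch that computes it, using a synthetic multicarrier signal as a stand-in for a real 4G waveform.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# A synthetic multicarrier (OFDM-like) signal: random QPSK symbols on 1024
# subcarriers, summed by an inverse FFT. A stand-in for illustration, not a
# standards-compliant 4G waveform.
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1024)
signal = np.fft.ifft(symbols)
print(f"PAPR: {papr_db(signal):.1f} dB")  # typically around 10-12 dB
```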

“Every cell tower has to have some kind of PAPR reduction algorithm in place in order to operate. But the algorithms they use are developed with little or no guarantees of improving system performance,” Omer says. “A common conception is that optimal algorithms, which would certainly improve system performance, are either too expensive to implement — in terms of power or computational capacity — or cannot be implemented at all.”

Omer, who is supervised by LIDS Professor Alexandre Megretski, designed an algorithm that can decrease the PAPR of a modern communication signal, which would allow the power amplifier to operate closer to its maximum efficiency, thus reducing the amount of energy lost in the process. To create this system, he first considered it as an optimization problem whose exact solution would not be implementable, as it would require infinite latency, meaning an infinite delay before transmitting the signal. However, Omer showed that the underlying optimal system, even though of infinite latency, has a desirable fading-memory property, and so he could create an approximation with finite latency — an acceptable lag time. From this, he developed a way to best approximate the optimal system. The approximation, which is implementable, allows tradeoffs between precision and latency, so that real-time realizations of the algorithm can improve power efficiency without adding too much transmission delay or too much distortion to the signal. Omer applied this system to standardized test signals for 4G communication and found that, on average, he could reduce the peak-to-average power ratio by around 50 percent while satisfying standard quality measures for digital communication signals.
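The article does not spell out the algorithm itself, so as a point of reference, here is the textbook baseline such methods are measured against: simple amplitude clipping, which trades PAPR directly for distortion. This is emphatically not Omer’s optimization-based system; it only makes the latency/distortion tradeoff concrete.

```python
import numpy as np

def clip_papr(x, clip_db=5.0):
    """Naive amplitude clipping, a textbook PAPR-reduction baseline.

    NOT the optimization-based algorithm described in the article: clipping
    has zero latency but distorts the signal, which is exactly the tradeoff
    an optimal finite-latency system is designed to manage.
    """
    avg_amp = np.sqrt(np.mean(np.abs(x) ** 2))
    ceiling = avg_amp * 10 ** (clip_db / 20)  # cap amplitude clip_db above average
    scale = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
    return x * scale  # phase preserved, peaks flattened

# Demo on the synthetic signal from the previous sketch:
rng = np.random.default_rng(0)
sig = np.fft.ifft(rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1024))
for name, s in (("before", sig), ("after", clip_papr(sig))):
    p = np.abs(s) ** 2
    print(f"PAPR {name}: {10 * np.log10(p.max() / p.mean()):.1f} dB")
```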

Omer’s algorithm, along with improving power efficiency, is also computationally efficient. “This is important in order to ensure that the algorithm is not just theoretically implementable, but also practically implementable,” Omer says, once again stressing that abstract mathematical solutions are only valuable if they cohere to real-world parameters. Microchip real estate in communications is a limited commodity, so the algorithm cannot take up much space, and its mathematical operations have to be executed quickly, as latency is a critical factor in wireless communications. Omer believes that the algorithm could be adapted to solve other engineering problems with similar frameworks, including envelope tracking and model predictive control.

While he has been working on this project, Omer has made a home for himself at MIT. Two of his three sons were born here in Cambridge — in fact, the youngest was born on campus, in the stairwell of Omer and his wife’s graduate housing building. “The neighbors slept right through it,” Omer says with a laugh.

Omer quickly became an active member of the LIDS community when he arrived at MIT. Most notably, he was part of the LIDS student conference and student social committees, where, in addition to helping run the annual LIDS Student Conference, a signature lab event now in its 25th year, he also helped to organize monthly lunches, gatherings, and gaming competitions, including a semester-long challenge dubbed the OLIDSpics (an homage to the Olympic Games). He says that being on the committees was a great way to engage with and contribute to the LIDS community, a group for which he is grateful.

“At MIT, and especially at LIDS, you can learn something new from everyone you speak to. I’ve been in many places, and this is the only place where I’ve experienced a community like that,” Omer says.

As Omer’s time at LIDS draws to an end, he is still debating what to do next. On one hand, his love of solving real-world problems is drawing him toward industry. He spent four summers during his PhD interning at companies including the Mitsubishi Electric Research Lab. He enjoyed the fast pace of industry, being able to see his solutions implemented relatively quickly.

On the other hand, Omer is not sure he could ever leave academia for long; he loves research and is also truly passionate about teaching. Omer, who grew up in Bosnia-Herzegovina, began teaching in his first year of high school, at a math camp for younger children. He has been teaching in one form or another ever since.

At MIT, Omer has taught both undergraduate- and graduate-level courses, including as an instructor-G, an appointment only given to advanced students who have demonstrated teaching expertise. He has won two teaching awards, the MIT School of Engineering Graduate Student Extraordinary Teaching and Mentoring Award in 2018 and the MIT EECS Carlton E. Tucker Teaching Award in 2017.

The magnitude of Omer’s love for teaching is clear when he speaks about working with students: “That moment when you explain something to a student and you see them really understand the concept is priceless. No matter how much energy you have to spend to make that happen, it’s worth it,” Omer says.

In communications, power efficiency is key, but when it comes to research and teaching, there’s no limit to Omer’s energy.


Topics: Laboratory for Information and Decision Systems (LIDS), Electrical Engineering & Computer Science (eecs), School of Engineering, Wireless, Energy, Networks, Algorithms, Profile, Graduate, postdoctoral, Research, Emissions, Industry, Students

Source

Putting neural networks under the microscope

Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope.

In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or “neurons,” in the networks that capture specific linguistic features.

Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably “learns” linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, these networks match these learned features from one language to another, and produce a translation.

But, in training, these networks basically adjust internal settings and values in ways the creators can’t interpret. For machine translation, that means the creators don’t necessarily know which linguistic features the network captures.

In a paper being presented at this week’s Association for the Advancement of Artificial Intelligence conference, the researchers describe a method that identifies which neurons are most active when classifying specific linguistic features. They also designed a toolkit for users to analyze and manipulate how their networks translate text for various purposes, such as making up for any classification biases in the training data.

In their paper, the researchers pinpoint neurons that are used to classify, for instance, gendered words, past and present tenses, numbers at the beginning or middle of sentences, and plural and singular words. They also show how some of these tasks require many neurons, while others require only one or two.

“Our research aims to look inside neural networks for language and see what information they learn,” says co-author Yonatan Belinkov, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This work is about gaining a more fine-grained understanding of neural networks and having better control of how these models behave.”

Co-authors on the paper are: senior research scientist James Glass and undergraduate student Anthony Bau, of CSAIL; and Hassan Sajjad, Nadir Durrani, and Fahim Dalvi, of QCRI, part of Hamad Bin Khalifa University. 

Putting a microscope on neurons

Neural networks are structured in layers, where each layer consists of many processing nodes, each connected to nodes in the layers above and below. Data are first processed in the lowest layer, which passes an output to the layer above, and so on. Each output has a different “weight” that determines how much it figures into the next layer’s computation. During training, these weights are constantly readjusted.

Neural networks used for machine translation train on annotated language data. In training, each layer learns its own “word embedding” for each word. A word embedding is essentially a table of several hundred numbers that, combined, represent one word and that word’s function in a sentence. Each number in the embedding is calculated by a single neuron.

In their past work, the researchers trained a model to analyze the weighted outputs of each layer to determine how the layers classified any given embedding. They found that lower layers classified relatively simple linguistic features — such as the structure of a particular word — and higher layers helped classify more complex features, such as how the words combine to form meaning.

In their new work, the researchers use this approach to determine how learned word embeddings make a linguistic classification. But they also implemented a new technique, called “linguistic correlation analysis,” that trains a model to home in on the individual neurons in each word embedding that were most important in the classification.

The new technique combines all the embeddings captured from different layers — which each contain information about the word’s final classification — into a single embedding. As the network classifies a given word, the model learns weights for every neuron that was activated during each classification process. This provides a weight to each neuron in each word embedding that fired for a specific part of the classification.

“The idea is, if this neuron is important, there should be a high weight that’s learned,” Belinkov says. “The neurons with high weights are the ones more important to predicting the certain linguistic property. You can think of the neurons as a lot of knobs you need to turn to get the correct combination of numbers in the embedding. Some knobs are more important than others, so the technique is a way to assign importance to those knobs.”
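A stripped-down stand-in for this idea, not the authors’ implementation: train a linear classifier to predict a linguistic property from neuron activations, then read neuron importance off the learned weights. Everything below (the data, the property, the informative neurons) is synthetic.

```python
# Toy version of ranking neurons by learned classifier weights (a simplified
# stand-in for the paper's linguistic correlation analysis, not the authors'
# code). All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_words, n_neurons = 2000, 500          # activations concatenated across layers
X = rng.normal(size=(n_words, n_neurons))
# Synthetic labels for a linguistic property (say, past vs. present tense)
# that in this toy setup depends mostly on neurons 7 and 42.
y = (1.5 * X[:, 7] - 2.0 * X[:, 42] > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
importance = np.abs(clf.coef_[0])       # one learned weight ("knob") per neuron
ranking = importance.argsort()[::-1]
print("top neurons:", ranking[:5])      # neurons 42 and 7 should rank highest
```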

Neuron ablation, model manipulation

Because each neuron is weighted, it can be ranked in order of importance. To that end, the researchers designed a toolkit, called NeuroX, that automatically ranks all neurons of a neural network according to their importance and visualizes them in a web interface.

Users upload a network they’ve already trained, as well as new text. The app displays the text and, next to it, a list of specific neurons, each with an identification number. When a user clicks on a neuron, the text will be highlighted depending on which words and phrases the neuron activates for. From there, users can completely knock out — or “ablate” — the neurons, or modify the extent of their activation, to control how the network translates.

The task of ablation was used to determine whether the researchers’ method accurately pinpointed the correct high-ranking neurons. In their paper, the researchers used the method to show that, by ablating high-ranking neurons in a network, its performance in classifying correlated linguistic features dipped significantly. When they ablated lower-ranking neurons instead, performance suffered, but not as dramatically.
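Continuing the synthetic setup from the ranking sketch above, a toy version of that ablation check looks like this (again, illustrative only):

```python
# Toy ablation check: zeroing out the top-ranked neurons should hurt
# classification accuracy far more than zeroing out low-ranked ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 500))
y = (1.5 * X[:, 7] - 2.0 * X[:, 42] > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, y)
ranking = np.abs(clf.coef_[0]).argsort()[::-1]

def ablate(X, neurons):
    Xa = X.copy()
    Xa[:, neurons] = 0.0                # "knock out" the selected neurons
    return Xa

for label, neurons in [("top-10", ranking[:10]), ("bottom-10", ranking[-10:])]:
    acc = clf.score(ablate(X, neurons), y)
    print(f"accuracy after ablating {label} neurons: {acc:.2f}")
```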

“After you get all these rankings, you want to see what happens when you kill these neurons and see how badly it affects performance,” Belinkov says. “That’s an important result proving that the neurons we find are, in fact, important to the classification process.”

One interesting application for the method is helping limit biases in language data. Machine-translation models, such as Google Translate, may train on data with gender bias, which can be problematic for languages with gendered words. Certain professions, for instance, may be more often referred to as male, and others as female. When a network translates new text, it may only produce the learned gender for those words. In many online English-to-Spanish translations, for instance, “doctor” often translates into its masculine version, while “nurse” translates into its feminine version.

“But we find we can trace individual neurons in charge of linguistic properties like gender,” Belinkov says. “If you’re able to trace them, maybe you can intervene somehow and influence the translation to translate these words more to the opposite gender … to remove or mitigate the bias.”

In preliminary experiments, the researchers modified neurons in a network to change translated text from past to present tense with 67 percent accuracy. They also modified neurons to switch the gender of words, with 21 percent accuracy. “It’s still a work in progress,” Belinkov says. A next step, he adds, is improving the methodology to achieve more accurate ablation and manipulation.


Source

Tiny motor can “walk” to carry out tasks

Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can exhibit a wide range of mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

From this simple kit of tiny parts, Langford assembled a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, as well as a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures makes it relatively easy to automate their assembly. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”

“Standardization is an extremely important issue in microrobotics, to reduce the production costs and, as a result, to improve acceptance of this technology to the level of regular industrial robots,” says Sergej Fatikow, head of the Division of Microrobotics and Control Engineering, at the University of Oldenburg, Germany, who was not associated with this research. The new work “addresses assembling of sophisticated microrobotic systems from a small set of standard building blocks, which may revolutionize the field of microrobotics and open up numerous applications at small scales,” he says.


Source