Smartphone apps may connect to vulnerable backend cloud servers
Most users likely unaware of vulnerabilities in smartphone apps
August 15, 2019
Cybersecurity researchers have discovered vulnerabilities in the backend systems that feed content and advertising to smartphone applications, potentially exposing users’ personal information.
In results reported at this week’s USENIX Security Symposium, researchers from the Georgia Institute of Technology and The Ohio State University identified more than 1,600 vulnerabilities in the support ecosystem behind the top 5,000 free apps available in the Google Play Store. The vulnerabilities, affecting multiple app categories, could allow hackers to break into databases that include personal information — and perhaps into users’ mobile devices.
To help developers improve the security of their mobile apps, the researchers have created an automated system called SkyWalker to vet cloud servers and software library systems. SkyWalker can examine the security of the servers supporting mobile applications, which are often operated by cloud hosting services rather than individual app developers.
“A lot of people might be surprised to learn that their phone apps are communicating with not just one, but likely tens or even hundreds of servers in the cloud,” said Brendan Saltaformaggio of Georgia Tech’s School of Electrical and Computer Engineering. “Users don’t know they are communicating with these servers because only the apps interact with them and they do so in the background. Until now, that has been a blind spot where nobody was looking for vulnerabilities.”
The researchers discovered 983 instances of known vulnerabilities. They are still investigating whether attackers could get into individual mobile devices connected to vulnerable servers.
“Security should not be an afterthought, it should be considered as a first-class citizen when designing systems – this is something we have advocated through the CNS Core Program,” says Samee Khan, a program director in NSF’s Directorate for Computer and Information Science and Engineering, which funded the research.
— NSF Public Affairs, (703) 292-8070
Japanese scientists are shedding new light on the importance of light-sensing cells in the retina that process visual information. The researchers isolated the functions of melanopsin cells and demonstrated their crucial role in the perception of the visual environment. This ushers in a new understanding of the biology of the eye and how visual information is processed.
The findings could contribute to more effective therapies for complications that relate to the eye. They can also serve as the basis for developing lighting and display systems.
The research was published in Scientific Reports on May 20th, 2019.
The back of the human eye is lined with the retina, a layer containing various types of light-responsive cells called photoreceptors. The cells that respond to bright light are called cones, and those that respond to low levels of light are called rods.
Until recently, researchers thought that rods and cones were the only two kinds of cells that react when light strikes the retina. Recent discoveries have revealed an entirely new type of cell, the intrinsically photosensitive retinal ganglion cell (ipRGC). Unlike rods and cones, ipRGCs contain melanopsin, a light-sensitive photopigment. While it has been established that ipRGCs help keep the brain’s internal clock in sync with changes in daylight, their role in detecting the amount of light had not been well understood.
“Until now, the role of retinal melanopsin cells and how they contribute to the perception of the brightness of light have been unclear,” said Katsunori Okajima, a professor at the Faculty of Environment and Information Sciences, Yokohama National University and one of the authors of the study.
“We’ve found that melanopsin plays a crucial role in the human ability to see how well-lit the environment is. These findings are redefining the conventional system of light detection that so far has only taken into consideration two variables, namely brightness and the amount of incoming light. Our results suggest that brightness perception should rely on a third variable — the intensity of a stimulus that targets melanopsin.”
In the study, the authors showed how cones and melanopsin combine to allow the perception of brightness. In order to better assess the contribution of melanopsin to the detection of light, melanopsin signals were isolated from those of cones and rods. This separation allowed for more accurate observation of the melanopsin signal alone. Visual stimuli were carefully designed and positioned to specifically stimulate the light-sensitive pigment. The researchers also used tracking software to measure study participants’ pupil diameters under each visual stimulus. This served as a way to determine the relationship between brightness perception and the actual visual stimulus intensity on the retina.
The researchers showed that the perceived brightness of an image is the sum of the melanopsin response and the response generated by the cones; the former is a linear readout, while the latter is not. The results also show that melanopsin is not a minor contributor but a crucial player in brightness perception.
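As a rough illustration only (not the authors’ actual model, and with arbitrary, assumed parameter values), the two-component account described above — a linear melanopsin readout summed with a nonlinear cone response — might be sketched like this:

```python
def perceived_brightness(mel_stimulus, cone_stimulus,
                         k_mel=1.0, k_cone=1.0, gamma=0.5):
    """Toy two-channel brightness model (illustrative sketch).

    The melanopsin term is linear in stimulus intensity, while the
    cone term is compressive (here an assumed power law), echoing the
    finding that the melanopsin readout is linear and the cone
    readout is not. All parameters are hypothetical.
    """
    mel_response = k_mel * mel_stimulus            # linear readout
    cone_response = k_cone * cone_stimulus ** gamma  # nonlinear (compressive) readout
    return mel_response + cone_response            # summed percept

# Example: the same total stimulus split differently between channels
# yields different perceived brightness, since only one channel is linear.
print(perceived_brightness(mel_stimulus=4.0, cone_stimulus=0.0))
print(perceived_brightness(mel_stimulus=0.0, cone_stimulus=4.0))
```

The power-law exponent stands in for any compressive cone nonlinearity; the study itself does not specify this functional form.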
This work was supported by the Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research (Grant Numbers 15H05926 and 18H04111).
Materials provided by Yokohama National University. Note: Content may be edited for style and length.
Europe That Protects: Safeguarding Our Planet, Safeguarding Our Health
3-4 December 2019, Finnish Institute for Health and Welfare (THL), Mannerheimintie 166, Helsinki, Finland
This conference will bring together relevant stakeholders including researchers, policy makers and regulators to identify and discuss the main scientific challenges for harvesting and enhancing the benefits of a sound environment for human health and the scientific challenges for solutions to overcome environmental threats to human health.
The main outcome will be recommendations on research and innovation priorities for the EU and Member States to work together to protect human health and our planet.
The European Commission’s Directorates-General for Research and Innovation (RTD) and for Communications Networks, Content and Technology (CNECT) are hosting the workshop ‘Cross-border Innovation Procurement in Health: EU funding opportunities & best practices’.
Designing, procuring and deploying innovative solutions is an important component of every public sector modernisation strategy, especially in the health and care sector. However, the EU health service providers face many challenges, such as funding, a fragmented legal framework, lack of common standards or interoperability and varying user/patient preferences across the EU.
Through this workshop, we expect to attract a representative audience (procurers, potential suppliers, regions or umbrella organisations and regulators) to discuss how such challenges can be addressed through existing EU Innovation Procurement instruments in the area of health, such as Pre-Commercial Procurement (PCP) and Public Procurement of Innovative solutions (PPI). We will present the new PCP & PPI funding opportunities for 2020 and discuss how to best design and implement successful projects. A networking session is foreseen during the event.
In order to express your interest in the event, please submit your pre-registration.
Amid rollbacks of the Clean Power Plan and other environmental regulations at the federal level, several U.S. states, cities, and towns have resolved to take matters into their own hands and implement policies to promote renewable energy and reduce greenhouse gas emissions. One popular approach, now in effect in 29 states and the District of Columbia, is to set Renewable Portfolio Standards (RPS), which require electricity suppliers to source a designated percentage of electricity from available renewable-power generating technologies.
Boosting levels of renewable electric power not only helps mitigate global climate change, but also reduces local air pollution. Quantifying the extent to which this approach improves air quality could help legislators better assess the pros and cons of implementing policies such as RPS. Toward that end, a research team at MIT has developed a new modeling framework that combines economic and air-pollution models to assess the projected subnational impacts of RPS and carbon pricing on air quality and human health, as well as on the economy and on climate change. In a study focused on the U.S. Rust Belt, their assessment showed that the financial benefits associated with air quality improvements from these policies would more than pay for the cost of implementing them. The results appear in the journal Environmental Research Letters.
“This research helps us better understand how clean-energy policies now under consideration at the subnational level might impact local air quality and economic growth,” says the study’s lead author Emil Dimanchev, a senior research associate at MIT’s Center for Energy and Environmental Policy Research, former research assistant at the MIT Joint Program on the Science and Policy of Global Change, and a 2018 graduate of the MIT Technology and Policy Program.
Burning fossil fuels for energy generation results in air pollution in the form of fine particulate matter (PM2.5). Exposure to PM2.5 can lead to adverse health effects that include lung cancer, stroke, and heart attacks. But avoiding those health effects — and the medical bills, lost income, and reduced productivity that come with them — through the adoption of cleaner energy sources translates into significant cost savings, known as health co-benefits.
Applying their modeling framework, the MIT researchers estimated that existing RPS in the nation’s Rust Belt region generate a health co-benefit of $94 per ton of carbon dioxide (CO2) reduced in 2030, or 8 cents (in 2015 dollars) for each kilowatt-hour (kWh) of renewable energy deployed. Their central estimate is 34 percent larger than total policy costs. The team also determined that carbon pricing delivers a health co-benefit of $211 per ton of CO2 reduced in 2030, 63 percent greater than the health co-benefit of reducing the same amount of CO2 through an RPS approach.
In an extension to their published work focused on the state of Ohio, the researchers evaluated the health effects and economy-wide costs of Ohio’s RPS using economic and atmospheric chemistry modeling. According to their best estimates, an average of 50 premature deaths per year will be avoided as a result of Ohio’s RPS in 2030. This translates to an economic benefit of $470 million per year, or 3 cents per kWh of renewable generation supported by the RPS. With costs of the RPS estimated at $300 million, that translates to an annual net health benefit of $170 million in 2030.
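A quick back-of-envelope check of the Ohio figures quoted above (a sketch for the reader, not part of the study’s modeling; dollar amounts in millions per year, 2030 projections):

```python
# Figures quoted in the article (all $ millions per year, 2030)
health_benefit = 470   # economic value of ~50 avoided premature deaths per year
policy_cost = 300      # estimated annual cost of Ohio's RPS

net_benefit = health_benefit - policy_cost
print(f"Net annual health benefit: ${net_benefit} million")

# Implied value per avoided premature death (a derived figure, not
# one the article states): $470M / 50 deaths
per_death_value = health_benefit / 50
print(f"Implied value per avoided death: ${per_death_value:.1f} million")
```

The implied per-death figure of roughly $9 million is in the ballpark of standard “value of a statistical life” estimates, which lends internal consistency to the numbers, but it is an inference from the quoted totals rather than a stated result.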
When the Ohio state legislature took up Ohio House Bill No. 6, which proposed to repeal the state’s RPS, Dimanchev shared these results on the Senate floor.
“According to our calculations, the magnitude of the air quality benefits resulting from Ohio’s RPS is substantial and exceeds its economic costs,” he argued. “While the state legislature ultimately weakened the RPS, our research concludes that this will worsen the health of Ohio residents.”
The MIT research team’s results for the Rust Belt are consistent with previous studies, which found that the health co-benefits of climate policy (including RPS and other instruments) tend to exceed policy costs.
“This work shows that there are real, immediate benefits to people’s health in states that take the lead on clean energy,” says MIT Associate Professor Noelle Selin, who led the study and holds a joint appointment in the Department of Earth, Atmospheric and Planetary Sciences and Institute for Data, Systems and Society. “Policymakers should take these impacts into account as they consider modifying these standards.”
The study was supported by the U.S. Environmental Protection Agency’s Air, Climate and Energy Centers Program.
When and where did humans develop language? To find out, look deep inside caves, suggests an MIT professor.
More precisely, some specific features of cave art may provide clues about how our symbolic, multifaceted language capabilities evolved, according to a new paper co-authored by MIT linguist Shigeru Miyagawa.
A key to this idea is that cave art is often located in acoustic “hot spots,” where sound echoes strongly, as some scholars have observed. Those drawings are located in deeper, harder-to-access parts of caves, indicating that acoustics was a principal reason for the placement of drawings within caves. The drawings, in turn, may represent the sounds that early humans generated in those spots.
In the new paper, this convergence of sound and drawing is what the authors call a “cross-modality information transfer,” a convergence of auditory information and visual art that, the authors write, “allowed early humans to enhance their ability to convey symbolic thinking.” The combination of sounds and images is one of the things that characterizes human language today, along with its symbolic aspect and its ability to generate infinite new sentences.
“Cave art was part of the package deal in terms of how Homo sapiens came to have this very high-level cognitive processing,” says Miyagawa, a professor of linguistics and the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “You have this very concrete cognitive process that converts an acoustic signal into some mental representation and externalizes it as a visual.”
Cave artists were thus not just early-day Monets, drawing impressions of the outdoors at their leisure. Rather, they may have been engaged in a process of communication.
“I think it’s very clear that these artists were talking to one another,” Miyagawa says. “It’s a communal effort.”
The paper, “Cross-modality information transfer: A hypothesis about the relationship among prehistoric cave paintings, symbolic thinking, and the emergence of language,” is being published in the journal Frontiers in Psychology. The authors are Miyagawa; Cora Lesure, a PhD student in MIT’s Department of Linguistics; and Vitor A. Nobrega, a PhD student in linguistics at the University of Sao Paulo, in Brazil.
Re-enactments and rituals?
The advent of language in human history is unclear. Our species is estimated to be about 200,000 years old. Human language is often considered to be at least 100,000 years old.
“It’s very difficult to try to understand how human language itself appeared in evolution,” Miyagawa says, noting that “we don’t know 99.9999 percent of what was going on back then.” However, he adds, “There’s this idea that language doesn’t fossilize, and it’s true, but maybe in these artifacts [cave drawings], we can see some of the beginnings of Homo sapiens as symbolic beings.”
While the world’s best-known cave art exists in France and Spain, examples of it abound throughout the world. One form of cave art suggestive of symbolic thinking — geometric engravings on pieces of ochre, from the Blombos Cave in southern Africa — has been estimated to be at least 70,000 years old. Such symbolic art indicates a cognitive capacity that humans took with them to the rest of the world.
“Cave art is everywhere,” Miyagawa says. “Every major continent inhabited by Homo sapiens has cave art. … You find it in Europe, in the Middle East, in Asia, everywhere, just like human language.” In recent years, for instance, scholars have catalogued Indonesian cave art they believe to be roughly 40,000 years old, older than the best-known examples of European cave art.
But what exactly was going on in caves where people made noise and rendered things on walls? Some scholars have suggested that acoustic “hot spots” in caves were used to make noises that replicate hoofbeats, for instance; some 90 percent of cave drawings involve hoofed animals. These drawings could represent stories or the accumulation of knowledge, or they could have been part of rituals.
In any of these scenarios, Miyagawa suggests, cave art displays properties of language in that “you have action, objects, and modification.” This parallels some of the universal features of human language — verbs, nouns, and adjectives — and Miyagawa suggests that “acoustically based cave art must have had a hand in forming our cognitive symbolic mind.”
Future research: More decoding needed
To be sure, the ideas proposed by Miyagawa, Lesure, and Nobrega merely outline a working hypothesis, which is intended to spur additional thinking about language’s origins and point toward new research questions.
Regarding the cave art itself, that could mean further scrutiny of the syntax of the visual representations, as it were. “We’ve got to look at the content” more thoroughly, says Miyagawa. In his view, as a linguist who has looked at images of the famous Lascaux cave art from France, “you see a lot of language in it.” But it remains an open question how much a re-interpretation of cave art images would yield in linguistics terms.
The long-term timeline of cave art is also subject to re-evaluation on the basis of any future discoveries. If cave art is implicated in the development of human language, finding and properly dating the oldest known such drawings would help us place the origins of language in human history — which may have happened fairly early on in our development.
“What we need is for someone to go and find in Africa cave art that is 120,000 years old,” Miyagawa quips.
At a minimum, a further consideration of cave art as part of our cognitive development may reduce our tendency to regard art in terms of our own experience, in which it often plays a more strictly decorative role.
“If this is on the right track, it’s quite possible that … cross-modality transfer helped develop a symbolic mind,” Miyagawa says. In that case, he adds, “art is not just something that is marginal to our culture, but central to the formation of our cognitive abilities.”
Scent brings songbirds to the yard
New research shows importance of scent in mating
August 15, 2019
Chickadees are interested in scents. That’s the news from a study out of Lehigh University, the first to document naturally hybridizing songbirds’ preference for the smell of their own species.
Amber Rice, an evolutionary biologist at Lehigh, studies hybridization — when separate species come into contact and mate — to better understand how species originate and how the existing, parent species are maintained. Rice researches the black-capped chickadee and its relative, the Carolina chickadee.
She and scientist Alex Van Huynh set out to test the potential for scent to act as a mate choice cue in the chickadees. They wondered whether smell might contribute to the reproductive isolation of black-capped and Carolina chickadees in a zone in Pennsylvania where birds are hybridizing.
Huynh and Rice found that black-capped and Carolina chickadees produce chemically distinct oils used to maintain their feathers; the oils also contain scent-producing compounds. The researchers discovered that both chickadee species prefer the smell of their own species over the smell of the other species. The results are published in a paper in the journal Ecology and Evolution.
“The sense of smell has been understudied in birds, particularly songbirds, because they frequently have such impressive plumage and song variation,” says Rice. “Other work has documented that some songbird species can smell, and prefer their own species’ odors, but this is the first example in currently hybridizing species that we know of.”
Jodie Jawor, a program director in NSF’s Division of Integrative Organismal Biology, which funded the research, says, “There’s still much to be learned about how animals communicate with one another. This work adds information to a unique and understudied communication dimension in birds.”
— NSF Public Affairs, (703) 292-8070
Canadian researchers have discovered that covert — or ‘silent’ — strokes are common in seniors after elective, non-cardiac surgery and double their risk of cognitive decline one year later.
While an overt stroke causes obvious symptoms, such as weakness in one arm or speech problems that last more than a day, a covert stroke is not obvious except on brain scans, such as MRI. Each year, approximately 0.5 per cent of the 50 million people aged 65 or older worldwide who have major, non-cardiac surgery will suffer an overt stroke, but until now little was known about the incidence or impacts of silent stroke after surgery.
The results of the NeuroVISION study were published today in The Lancet.
“We’ve found that ‘silent’ covert strokes are actually more common than overt strokes in people aged 65 or older who have surgery,” said Dr. PJ Devereaux, co-principal investigator of the NeuroVISION study. Dr. Devereaux is a cardiologist at Hamilton Health Sciences (HHS), professor in the departments of health research methods, evidence, and impact, and medicine at McMaster University, and a senior scientist at the Population Health Research Institute of McMaster University and HHS.
Dr. Devereaux and his team found that one in 14 people over age 65 who had elective, non-cardiac surgery had a silent stroke, suggesting that as many as three million people in this age category globally suffer a covert stroke after surgery each year.
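The global extrapolation quoted above can be sanity-checked from the two figures the article gives — a surgical population of roughly 50 million people aged 65 and over, and a one-in-14 incidence of silent stroke (an illustrative check, not the study’s own calculation):

```python
# Figures quoted in the article
surgeries_per_year = 50_000_000   # people 65+ having major non-cardiac surgery annually
silent_stroke_rate = 1 / 14       # incidence of covert stroke found by NeuroVISION

estimated_cases = surgeries_per_year * silent_stroke_rate
print(f"Estimated covert strokes per year: {estimated_cases / 1e6:.1f} million")
```

The arithmetic gives roughly 3.6 million cases per year, consistent in order of magnitude with the article’s “as many as three million” figure.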
NeuroVISION involved 1,114 patients aged 65 years and older from 12 centres in North and South America, Asia, New Zealand, and Europe. All patients received an MRI within nine days of their surgery to look for imaging evidence of silent stroke. The research team followed patients for one year after their surgery to assess their cognitive capabilities. They found that people who had a silent stroke after surgery were more likely to experience cognitive decline, perioperative delirium, overt stroke or transient ischaemic attack within one year, compared to patients who did not have a silent stroke.
“Over the last century, surgery has greatly improved the health and the quality of life of patients around the world,” said Dr. Marko Mrkobrada, an associate professor of medicine at University of Western Ontario and co-principal investigator for the NeuroVISION study. “Surgeons are now able to operate on older and sicker patients thanks to improvements in surgical and anesthetic techniques. Despite the benefits of surgery, we also need to understand the risks.”
“Vascular brain injuries, both overt and covert, are more frequently being detected, recognized and prevented through research funded by our Institute and CIHR,” says Dr. Brian Rowe, scientific director of the Institute of Circulatory and Respiratory Health, Canadian Institutes of Health Research (CIHR). “The NeuroVISION Study provides important insights into the development of vascular brain injury after surgery, and adds to the mounting evidence of the importance of vascular health on cognitive decline. The results of NeuroVISION are important and represent a meaningful discovery that will facilitate tackling the issue of cognitive decline after surgery.”
The EU-funded WATERSPOUTT project’s partners in Malawi are designing and testing an innovative solar-ceramic water filtration system capable of providing local communities with a reliable source of clean water at the household level.
The World Health Organisation and UNICEF estimate that out of the nearly 660 million people in the world who lack reliable access to safe drinking water, half live in sub-Saharan Africa. In rural areas that remain unconnected to any municipal piped water supply, entire communities must obtain their drinking water from unsafe sources. As a result, these populations are constantly at risk of contracting a range of diseases.
In light of this challenge, the EU-funded WATERSPOUTT project is working to provide safe drinking water to communities that currently rely on unsafe sources. Specifically, researchers are increasing the uptake of Solar Disinfection (SODIS) by designing, piloting and bringing to market three innovative solar-based technologies (solar rainwater reactors, solar jerry cans and solar-ceramic filtrations). SODIS uses freely available solar energy to inactivate pathogens in water stored in transparent containers placed in direct sunlight.
Unlike traditional SODIS systems, which typically treat only two litres of water, the WATERSPOUTT solution will treat the large volumes (20L) needed by rural communities. In parallel to the development of this technology, the project also launched an outreach effort that, with the support of local authorities, aims to ensure that these technologies are adopted by the targeted communities.
Spotlight on Malawi
Malawi currently reports 86 % drinking water coverage for the country. Although access to safe water is increasing, there are ‘blind spots’ where access is limited by poor governance, high rates of non-functioning infrastructure, technical failure and water inequality. The most affected people are in rural areas, where the water crisis is compounded by poverty and a lack of public investment.
“WATERSPOUTT Malawi is working with product designers and local communities to develop a combined solar disinfection and ceramic filtration unit to treat water at the household level,” says Kingsley Lungu of the University of Malawi’s (UNIMA) Centre for Water Sanitation Health and Appropriate Technology Development (WASHTED), the project’s partner in the country. “This system will then be tested for efficacy and user acceptability through stringent controlled and field testing.”
UNIMA is playing a key role in the design, testing and evaluation of a first-of-its-kind prototype household water treatment technology that combines the basic principles of SODIS with a ceramic filter. Building on previous SODIS systems, the prototype aims to increase the acceptability of solar water disinfection by both increasing the volume of drinking water that can be produced and by providing a consistent volume of water at the household level – irrespective of weather conditions. Whenever possible, researchers are also using locally available materials for production, thus increasing opportunities to move towards commercialisation.
“To ensure that the filtration system will be used by local households, UNIMA researchers are conducting shared dialogue workshops to get community feedback on the concept,” adds Lungu. “This input is then integrated into the prototype, which we are actually co-designing with potential end-users and beneficiaries.”
Initial results look promising
“Initial results on the use of the SODIS bucket with a 1 % stabiliser under controlled testing has shown to be effective at treating contaminated well water and reducing pathogens to a safe level for consumption,” says Lungu. “This raises the potential opportunity for the use of a large volume SODIS system as a treatment solution on its own.”
Testing of such a combined solar-ceramic system will commence in October 2018. UNIMA will test the prototype at 777 households in southern Malawi’s Chikwawa district.
- Project acronym: WATERSPOUTT
- Participants: Ireland (Coordinator), Austria, Switzerland, Spain, Estonia, Italy, Malawi, Netherlands, Turkey, Uganda, UK, South Africa
- Project N°: 688928
- Total costs: € 3 571 945
- EU contribution: € 3 084 351
- Duration: June 2016 to May 2020
When Tracy Slatyer faced a crisis of confidence early in her educational career, Stephen Hawking’s “A Brief History of Time” and a certain fictional janitor at MIT helped to bolster her resolve.
Slatyer was 11 when her family moved from Canberra, Australia, to the island nation of Fiji. It was a three-year stay, as part of her father’s work for the South Pacific Forum, an intergovernmental organization.
“Fiji was quite a way behind the U.S. and Australia in terms of gender equality, and for a girl to be interested in math and science carried noticeable social stigma,” Slatyer recalls. “I got bullied quite a lot.”
She eventually sought guidance from the school counselor, who placed the blame for the bullying on the victim herself, saying that Slatyer wasn’t sufficiently “feminine.” Slatyer countered that the bullying seemed to be motivated by the fact that she was interested in and good at math, and she recalls the counselor’s unsympathetic advice: “Well, yes, honey, that’s a problem you can fix.”
“I went home and thought about it, and decided that math and science were important to me,” Slatyer says. “I was going to keep doing my best to learn more, and if I got bullied, so be it.”
She doubled down on her studies and spent a lot of time at the library; she also benefited from supportive parents, who gave her Hawking’s groundbreaking book on the origins of the universe and the nature of space and time.
“It seemed like the language in which these ideas could most naturally be described was that of mathematics,” Slatyer says. “I knew I was pretty good at math. And learning that that talent was potentially something I could apply to understanding how the universe worked, and maybe how it began, was very exciting to me.”
Around this same time, the movie “Good Will Hunting” came out in theaters. The story, of a townie custodian at MIT who is discovered as a gifted mathematician, had a motivating impact on Slatyer.
“What my 13-year-old self took out of this was, MIT was a place where, if you were talented at math, people would see that as a good thing rather than something to be stigmatized, and make you welcome — even if you were a janitor or a little girl from Fiji,” Slatyer says. “It was my first real indication that such places might exist. Since then, MIT has been an important symbol to me, of valuing intellectual inquiry and being willing to accept anyone in the world.”
This year, Slatyer received tenure at MIT and is now the Jerrold R. Zacharias Associate Professor of Physics and a member of the Center for Theoretical Physics and the Laboratory for Nuclear Science. She focuses on searching through telescope data for signals of mysterious phenomena such as dark matter, the invisible stuff that makes up more than 80 percent of the matter in the universe but has only been detected through its gravitational pull. In her teaching, she seeks to draw out and support a new and diverse crop of junior scientists.
“If you want to understand how the universe works, you want the very best and brightest people,” Slatyer says. “It’s essential that theoretical physics becomes more inclusive and welcoming, both from a moral perspective and to get the best science done.”
Slatyer’s family eventually moved back to Canberra, where she dove eagerly into the city’s educational opportunities.
After earning an undergraduate degree from the Australian National University, followed by a brief stint at the University of Melbourne, Slatyer was accepted to Harvard University as a physics graduate student. Her interests were slowly gravitating toward particle physics, but she was unsure about which direction to take. Then, two of her mentors put her in touch with a junior faculty member, Doug Finkbeiner, who was leading a project to mine astrophysical data for signals of new physics.
At the time, much of the physics community was eagerly anticipating the start-up of the Large Hadron Collider and the release of data on particle interactions at high energies, which could potentially reveal physics beyond the Standard Model.
In contrast, telescopes have long made their data on astrophysical phenomena public. What if, instead of looking through these data for objects such as black holes and neutron stars that evolved over millions of years, one could comb through them for signals of more fundamental mysteries, such as hints of new elementary particles and even dark matter?
The prospects were new and exciting, and Slatyer promptly took on the challenge.
“Chasing that feeling”
In 2008, the Fermi Gamma-Ray Space Telescope launched, giving astronomers a new view of the cosmos in the gamma-ray band of the electromagnetic spectrum, where high-energy astrophysical phenomena can be seen. Slatyer and Finkbeiner proposed that Fermi’s data might also reveal signals of dark matter, which could theoretically produce high-energy electrons when dark matter particles collide.
In 2009, Fermi made its data available to the public, and Slatyer and Finkbeiner — together with Harvard postdoc Greg Dobler and collaborators at New York University — put their mining tools to work as soon as the data were released online.
The group eventually constructed a map of the Milky Way galaxy, shining in gamma rays, and revealed a fuzzy, egg-like shape. Upon further analysis, led by Slatyer’s fellow PhD student Meng Su, this fuzzy “haze” coalesced into a figure-eight, or double-bubble structure, extending some 25,000 light-years above and below the disc of the Milky Way. Such a structure had never been observed before. The group named the mysterious structure the “Fermi bubbles,” after the telescope that originally observed it.
“It was really special — we were the first people in the history of the world to be able to look at the sky in this way and understand that this structure was there,” Slatyer says. “That’s a really incredible feeling, and chasing that feeling is something that inspires and motivates me, and I think many scientists.”
Searching for the invisible
Today, Slatyer continues to sift through Fermi data for evidence of dark matter. The Fermi bubbles’ distinctive shape makes it unlikely they are associated with dark matter; they are more likely to reveal a past eruption from the giant black hole at the Milky Way’s center, or outflows fueled by exploding stars. However, other signals are more promising.
Around the center of the Milky Way, where dark matter is thought to concentrate, there is a glow of gamma rays. In 2013, Slatyer, her first PhD student Nicholas Rodd, and collaborators at Harvard University and Fermilab showed this glow had properties similar to what theorists would expect if dark matter particles were colliding and producing visible light. However, in 2015, Slatyer and collaborators at MIT and Princeton University challenged this interpretation with a new analysis, showing that the glow was more consistent with originating from a new population of spinning neutron stars called pulsars.
But the case is not quite closed. Recently, Slatyer and MIT postdoc Rebecca Leane reanalyzed the same data, this time injecting a fake dark matter signal, to see whether the techniques developed in 2015 could detect dark matter if it were there. The techniques failed to recover the injected signal, suggesting that any actual dark matter signals in the Fermi data could have been missed as well.
Slatyer is now improving on data mining techniques to better detect dark matter in the Fermi data, along with other astrophysical open data. But she won’t be discouraged if her search comes up empty.
“There’s no guarantee there is a dark matter signal,” Slatyer says. “But if you never look, you’ll never know. And in searching for dark matter signals in these datasets, you learn other things, like that our galaxy contains giant gamma-ray bubbles, and maybe a new population of pulsars, that no one ever knew about. If you look closely at the data, the universe will often tell you something new.”
MIT has been honored with 12 No. 1 subject rankings in the QS World University Rankings for 2018.
MIT received a No. 1 ranking in the following QS subject areas: Architecture/Built Environment; Linguistics; Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Chemistry; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
Additional high-ranking MIT subjects include: Art and Design (No. 4), Biological Sciences (No. 2), Earth and Marine Sciences (No. 3), Environmental Sciences (No. 3), Accounting and Finance (No. 2), Business and Management Studies (No. 4), and Economics and Econometrics (No. 2).
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings cover 48 disciplines and are based on an institution’s research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for six straight years.