MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Co-creating climate futures with real-time data and spatial storytelling

Mon, 01/08/2024 - 2:25pm

Virtual story worlds and game engines aren’t just for video games anymore. They are now tools for scientists and storytellers to digitally twin existing physical spaces and then turn them into vessels to dream up speculative climate stories and build collective designs of the future. That’s the theory and practice behind the MIT WORLDING initiative.

Twice this year, WORLDING matched world-class climate story teams working in XR (extended reality) with relevant labs and researchers across MIT. One global group returned for a virtual gathering online in partnership with Unity for Humanity, while another met for one weekend in person, hosted at the MIT Media Lab.

“We are witnessing the birth of an emergent field that fuses climate science, urban planning, real-time 3D engines, nonfiction storytelling, and speculative fiction, and it is all fueled by the urgency of the climate crises,” says Katerina Cizek, lead designer of the WORLDING initiative at the Co-Creation Studio of MIT Open Documentary Lab. “Interdisciplinary teams are forming and blossoming around the planet to collectively imagine and tell stories of healthy, livable worlds in virtual 3D spaces and then finding direct ways to translate that back to earth, literally.”

At this year’s virtual version of WORLDING, five multidisciplinary teams were selected from an open call. In a week-long series of research and development gatherings, the teams met with MIT scientists, staff, fellows, students, and graduates, as well as other leading figures in the field. Guests included curators from film festivals such as Sundance and Venice, climate policy specialists, award-winning media creators, software engineers, and renowned Earth and atmosphere scientists. The teams heard from MIT scholars in diverse domains, including geomorphology, urban planning as acts of democracy, and climate research at the MIT Media Lab.

Mapping climate data

“We are measuring the Earth's environment in increasingly data-driven ways. Hundreds of terabytes of data are taken every day about our planet in order to study the Earth as a holistic system, so we can address key questions about global climate change,” explains Rachel Connolly, an MIT Media Lab research scientist focused on the “Future Worlds” research theme, in a talk to the group. “Why is this important for your work and storytelling in general? Having the capacity to understand and leverage this data is critical for those who wish to design for and successfully operate in the dynamic Earth environment.”

Making sense of billions of data points was a key theme during this year’s sessions. In another talk, Taylor Perron, an MIT professor of Earth, atmospheric and planetary sciences, shared how his team uses computational modeling combined with many other scientific processes to better understand how geology, climate, and life intertwine to shape the surfaces of Earth and other planets. His work resonated with one WORLDING team in particular, one aiming to digitally reconstruct the pre-Hispanic Lake Texcoco — where present-day Mexico City is now situated — as a way to contrast and examine the region’s current water crisis.

Democratizing the future

While WORLDING approaches rely on rigorous science and the interrogation of large datasets, they are also founded on democratizing community-led approaches.

MIT Department of Urban Studies and Planning graduate Lafayette Cruise MCP '19 met with the teams to discuss how he moved his own practice as a trained urban planner to include a futurist component involving participatory methods. “I felt we were asking the same limited questions in regards to the future we were wanting to produce. We're very limited, very constrained, as to whose values and comforts are being centered. There are so many possibilities for how the future could be.”

Scaling to reach billions

This work scales from the very local to massive global populations. Climate policymakers are concerned with reaching billions of people in the line of fire. “We have a goal to reach 1 billion people with climate resilience solutions,” says Nidhi Upadhyaya, deputy director at Atlantic Council's Adrienne Arsht-Rockefeller Foundation Resilience Center. To get that reach, Upadhyaya is turning to games. “There are 3.3 billion-plus people playing video games across the world. Half of these players are women. This industry is worth $300 billion. Africa is currently among the fastest-growing gaming markets in the world, and 55 percent of the global players are in the Asia Pacific region.” She reminded the group that this conversation is about policy and how formats of mass communication can be used for policymaking, bringing about change, changing behavior, and creating empathy within audiences.

Socially engaged game development is also connected to education at Unity Technologies, a game engine company. “We brought together our education and social impact work because we really see it as a critical flywheel for our business,” said Jessica Lindl, vice president and global head of social impact/education at Unity Technologies, in the opening talk of WORLDING. “We upskill about 900,000 students, in university and high school programs around the world, and about 800,000 adults who are actively learning and reskilling and upskilling in Unity. Ultimately resulting in our mission of the ‘world is a better place with more creators in it,’ millions of creators who reach billions of consumers — telling the world stories, and fostering a more inclusive, sustainable, and equitable world.”

Access to these technologies is key, especially the hardware. “Accessibility has been missing in XR,” explains Reginé Gilbert, who studies and teaches accessibility and disability in user experience design at New York University. “XR is being used in artificial intelligence, assistive technology, business, retail, communications, education, empathy, entertainment, recreation, events, gaming, health, rehabilitation, meetings, navigation, therapy, training, video programming, virtual assistance, wayfinding, and so many other uses. This is a fun fact for folks: 97.8 percent of the world hasn't tried VR [virtual reality] yet, actually.”

Meanwhile, new hardware is on its way. The WORLDING group got early insights into the highly anticipated Apple Vision Pro headset, which promises to integrate many forms of XR and personal computing in one device. “They're really pushing this kind of pass-through or mixed reality,” said Dan Miller, a Unity engineer on the PolySpatial team collaborating with Apple. He described the experience of the device: “You are viewing the real world. You're pulling up windows, you're interacting with content. It’s a kind of spatial computing device where you have multiple apps open, whether it's your email client next to your messaging client with a 3D game in the middle. You’re interacting with all these things in the same space and at different times.”

“WORLDING combines our passion for social-impact storytelling and incredible innovative storytelling,” said Paisley Smith of the Unity for Humanity Program at Unity Technologies. She added, “This is an opportunity for creators to incubate their game-changing projects and connect with experts across climate, story, and technology.”

Meeting at MIT

In a new in-person iteration of WORLDING this year, organizers collaborated closely with Connolly at the MIT Media Lab to co-design an in-person weekend conference Oct. 25 - Nov. 7 with 45 scholars and professionals who visualize climate data at NASA, the National Oceanic and Atmospheric Administration, planetariums, and museums across the United States.

A participant said of the event, “An incredible workshop that had a profound effect on my understanding of climate data storytelling and how to combine different components together for a more [holistic] solution.”

“With this gathering under our new Future Worlds banner,” says Dava Newman, director of the MIT Media Lab and Apollo Program Professor of Astronautics, “the Media Lab seeks to affect human behavior and help societies everywhere to improve life here on Earth and in worlds beyond, so that all — the sentient, natural, and cosmic — worlds may flourish.” 

“WORLDING’s virtual-only component has been our biggest strength because it has enabled a true, international cohort to gather, build, and create together. But this year, an in-person version showed broader opportunities that spatial interactivity generates — informal Q&As, physical worksheets, and larger-scale ideation, all leading to deeper trust-building,” says WORLDING producer Srushti Kamat SM ’23.

The future and potential of WORLDING lies in the ongoing dialogue between the virtual and physical, both in the work itself and in the format of the workshops.

Technique could efficiently solve partial differential equations for numerous applications

Mon, 01/08/2024 - 1:30pm

In fields such as physics and engineering, partial differential equations (PDEs) are used to model complex physical processes to generate insight into how some of the most complicated physical and natural systems in the world function.

To solve these difficult equations, researchers use high-fidelity numerical solvers, which can be very time-consuming and computationally expensive to run. The current simplified alternative, data-driven surrogate models, compute the goal property of a solution to PDEs rather than the whole solution. These surrogates are trained on data generated by the high-fidelity solver and then predict the output of the PDEs for new inputs. This approach is data-intensive and expensive, because complex physical systems require a large number of simulations to generate enough training data.
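To make the cost concrete, here is a minimal example of what a "high-fidelity" solve looks like: an explicit finite-difference integration of the 1D heat equation. This toy is my own illustration, not from the paper; real applications use much finer grids in two or three dimensions, which is exactly why surrogates are attractive.

```python
import numpy as np

# Explicit finite differences for the 1D heat equation u_t = alpha * u_xx.
alpha, L, T = 0.01, 1.0, 1.0
nx, nt = 101, 5000
dx, dt = L / (nx - 1), T / nt
assert alpha * dt / dx**2 <= 0.5              # explicit-scheme stability limit

x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)                          # initial condition, u = 0 at both ends
for _ in range(nt):
    # update interior points from their neighbors; boundaries stay fixed at 0
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# This initial condition has a known exact solution to compare against.
exact = np.exp(-alpha * np.pi**2 * T) * np.sin(np.pi * x)
print("max error vs. exact solution:", np.abs(u - exact).max())
```

Each refinement of the grid multiplies the work: halving `dx` in 3D with a proportionally smaller time step costs roughly 16 times more compute, so sweeping thousands of design parameters through such a solver quickly becomes infeasible.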

In a new paper, “Physics-enhanced deep surrogates for partial differential equations,” published in December in Nature Machine Intelligence, a new method is proposed for developing data-driven surrogate models for complex physical systems in such fields as mechanics, optics, thermal transport, fluid dynamics, physical chemistry, and climate models.

The paper was authored by MIT’s professor of applied mathematics Steven G. Johnson along with Payel Das and Youssef Mroueh of the MIT-IBM Watson AI Lab and IBM Research; Chris Rackauckas of Julia Lab; and Raphaël Pestourie, a former MIT postdoc who is now at Georgia Tech. The authors call their method "physics-enhanced deep surrogate" (PEDS), which combines a low-fidelity, explainable physics simulator with a neural network generator. The neural network generator is trained end-to-end to match the output of the high-fidelity numerical solver.
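The PEDS idea can be sketched in a few lines of numpy; this is a toy illustration of the training loop, not the authors' actual code, and both the "high-fidelity" data and the low-fidelity solver here are stand-in functions I chose for simplicity. A small neural network generates the input handed to a cheap, differentiable low-fidelity solver, and the whole pipeline is trained end-to-end to match high-fidelity outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# "High-fidelity" data: pretend an expensive solver produced these pairs.
p = rng.uniform(-2, 2, size=(200, 1))          # design parameters
y_hi = np.sin(2 * p) + 0.3 * p**2              # expensive-solver outputs (stand-in)

# Low-fidelity, differentiable "solver" (here trivially the identity).
def f_lo(x):
    return x
def f_lo_grad(x):
    return np.ones_like(x)

# Tiny generator network g(p) -> input passed to the low-fidelity solver.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # forward pass: p -> hidden layer -> x -> f_lo(x)
    h = np.tanh(p @ W1 + b1)
    x = h @ W2 + b2
    err = f_lo(x) - y_hi
    loss = np.mean(err**2)
    # backward pass: chain rule *through* the low-fidelity solver
    dx = (2 * err / len(p)) * f_lo_grad(x)
    dW2 = h.T @ dx; db2 = dx.sum(0)
    dh = dx @ W2.T * (1 - h**2)
    dW1 = p.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training MSE: {loss:.4f}")
```

The key structural point is that the gradient flows through `f_lo_grad`, i.e., through the physics simulator itself; in the authors' Julia implementation this role is played by automatic differentiation of a genuine low-fidelity physical model rather than a hand-coded derivative.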

“My aspiration is to replace the inefficient process of trial and error with systematic, computer-aided simulation and optimization,” says Pestourie. “Recent breakthroughs in AI like the large language model of ChatGPT rely on hundreds of billions of parameters and require vast amounts of resources to train and evaluate. In contrast, PEDS is affordable to all because it is incredibly efficient in computing resources and has a very low barrier in terms of infrastructure needed to use it.”

In the article, they show that PEDS surrogates can be up to three times more accurate than an ensemble of feedforward neural networks with limited data (approximately 1,000 training points), and reduce the training data needed by at least a factor of 100 to achieve a target error of 5 percent. Developed using the MIT-designed Julia programming language, this scientific machine-learning method is thus efficient in both computing and data.

The authors also report that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models and the corresponding brute-force numerical solvers that model complex systems. The technique offers accuracy, speed, data efficiency, and physical insight into the process.

Says Pestourie, “Since the 2000s, as computing capabilities improved, the trend of scientific models has been to increase the number of parameters to fit the data better, sometimes at the cost of a lower predictive accuracy. PEDS does the opposite by choosing its parameters smartly. It leverages the technology of automatic differentiation to train a neural network that makes a model with few parameters accurate.”

“The main challenge that prevents surrogate models from being used more widely in engineering is the curse of dimensionality — the fact that the needed data to train a model increases exponentially with the number of model variables,” says Pestourie. “PEDS reduces this curse by incorporating information from the data and from the field knowledge in the form of a low-fidelity model solver.”

The researchers say that PEDS has the potential to revive a whole body of the pre-2000 literature dedicated to minimal models — intuitive models that PEDS could make more accurate while also being predictive for surrogate model applications.

"The application of the PEDS framework is beyond what we showed in this study,” says Das. “Complex physical systems governed by PDEs are ubiquitous, from climate modeling to seismic modeling and beyond. Our physics-inspired fast and explainable surrogate models will be of great use in those applications, and play a complementary role to other emerging techniques, like foundation models."

The research was supported by the MIT-IBM Watson AI Lab and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies. 

Stripes in a flowing liquid crystal suggest a route to “chiral” fluids

Mon, 01/08/2024 - 5:00am

Hold your hands out in front of you, and no matter how you rotate them, it’s impossible to superimpose one over the other. Our hands are a perfect example of chirality — a geometric configuration by which an object cannot be superimposed onto its mirror image.

Chirality is everywhere in nature, from our hands to the arrangement of our internal organs to the spiral structure of DNA. Chiral molecules and materials have been the key to many drug therapies, optical devices, and functional metamaterials. Scientists have until now assumed that chirality begets chirality — that is, chiral structures emerge from chiral forces and building blocks. But that assumption may need some retuning.

MIT engineers recently discovered that chirality can also emerge in an entirely nonchiral material, and through nonchiral means. In a study appearing today in Nature Communications, the team reports observing chirality in a liquid crystal — a material that flows like a liquid and has an ordered, crystal-like microstructure like a solid. They found that when the fluid flows slowly, its normally nonchiral microstructures spontaneously assemble into large, twisted, chiral structures. The effect is as if a conveyor belt of crayons, all symmetrically aligned, were to suddenly rearrange into large, spiral patterns once the belt reaches a certain speed.

The geometric transformation is unexpected, given that the liquid crystal is naturally nonchiral, or “achiral.” The team’s study thus opens a new path to generating chiral structures. The researchers envision that the structures, once formed, could serve as spiral scaffolds in which to assemble intricate molecular structures. The chiral liquid crystals could also be used as optical sensors, as their structural transformation would change the way they interact with light.

“This is exciting, because this gives us an easy way to structure these kinds of fluids,” says study co-author Irmgard Bischofberger, associate professor of mechanical engineering at MIT. “And from a fundamental level, this is a new way in which chirality can emerge.”

The study’s co-authors include lead author Qing Zhang PhD ’22, Weiqiang Wang and Rui Zhang of Hong Kong University of Science and Technology, and Shuang Zhou of the University of Massachusetts at Amherst.

Striking stripes

A liquid crystal is a phase of matter that embodies properties of both a liquid and a solid. Such in-between materials flow like liquid, and are molecularly structured like solids. Liquid crystals are used as the main element in pixels that make up LCD displays, as the symmetric alignment of their molecules can be uniformly switched with voltage to collectively create high-resolution images.

Bischofberger’s group at MIT studies how fluids and soft materials spontaneously form patterns in nature and in the lab. The team seeks to understand the mechanics underlying fluid transformations, which could be used to create new, reconfigurable materials.

In their new study, the researchers focused on a special type of nematic liquid crystal — a water-based fluid that contains microscopic, rod-like molecular structures. The rods normally align in the same direction throughout the fluid. Zhang was initially curious how the fluid would behave under various flow conditions.

“I tried this experiment for the first time at home, in 2020,” Zhang recalls. “I had samples of the fluid, and a small microscope, and one day I just set it to a low flow. When I came back, I saw this really striking pattern.”

She and her colleagues repeated her initial experiments in the lab. They fabricated a microfluidic channel out of two glass slides, separated by a very thin space, and connected to a main reservoir. The team slowly pumped samples of the liquid crystal through the reservoir and into the space between the plates, then took microscopy images of the fluid as it flowed through.

Like Zhang’s initial experiments, the team observed an unexpected transformation: The normally uniform fluid began to form tiger-like stripes as it slowly moved through the channel.

“It was surprising that it formed any structure, but even more surprising once we actually knew what type of structure it formed,” Bischofberger says. “That’s where chirality comes in.”

Twist and flow

Using various optical and modeling techniques to retrace the fluid’s flow, the team discovered that the stripes were unexpectedly chiral. They observed that, when unmoving, the fluid’s microscopic rods are aligned in near-perfect formation. When the fluid is pumped through the channel quickly, the rods fall into complete disarray. But at a slower, in-between flow, the structures start to wiggle, then progressively twist like tiny propellers, each one turning slightly more than the next.

If the fluid continues its slow flow, the twisting crystals assemble into large spiral structures that appear as stripes under the microscope.

“There’s this magic region, where if you just gently make them flow, they form these large spiral structures,” Zhang says.

The researchers modeled the fluid’s dynamics and found that the large spiral patterns emerged when the fluid arrived at a balance between two forces: viscosity and elasticity. Viscosity describes how easily a material flows, while elasticity is essentially how likely a material is to deform (for instance, how easily the fluid’s rods wiggle and twist).
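One standard way to quantify this balance in nematic hydrodynamics is the Ericksen number, the ratio of viscous to elastic stresses, Er = ηvL/K. The parameter values below are generic textbook-scale figures chosen for illustration, not numbers from this study; the point is simply that at slow flow speeds the two stresses become comparable (Er near 1), which is the regime where structured patterns can appear.

```python
# Ericksen number: Er = (viscous stress) / (elastic stress) = eta * v * L / K.
# Illustrative order-of-magnitude values for a nematic liquid crystal:
eta = 0.1        # effective viscosity, Pa*s
K = 1e-11        # Frank elastic constant, N
L = 10e-6        # channel gap, m

def ericksen(v):
    """Dimensionless ratio of viscous to elastic forces at flow speed v (m/s)."""
    return eta * v * L / K

for v in (1e-7, 1e-5, 1e-3):
    print(f"v = {v:.0e} m/s -> Er = {ericksen(v):g}")
```

With these numbers, very slow flow gives Er << 1 (elasticity dominates, rods stay aligned), fast flow gives Er >> 1 (viscous forces dominate, disarray), and an intermediate speed lands near Er = 1, the balance point the researchers describe.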

“When these two forces are about the same, that’s when we see these spiral structures,” Bischofberger explains. “It’s kind of amazing that individual structures, on the order of nanometers, can assemble into much larger, millimeter-scale structures that are very ordered, just by pushing them a little bit out of equilibrium.”

The team realized that the twisted assemblages have a chiral geometry: If a mirror image was made of one spiral, it would not be possible to superimpose it over the original, no matter how the spirals were rearranged. The fact that the chiral spirals emerged from a nonchiral material, and through nonchiral means, is a first and points to a relatively simple way to engineer structured fluids.

“The results are indeed surprising and intriguing,” says Giuliano Zanchetta, associate professor at the University of Milan, who was not involved with the study. “It would be interesting to explore the boundaries of this phenomenon. I would see the reported chiral patterns as a promising way to periodically modulate optical properties at the microscale.”

“We now have some knobs to tune this structure,” Bischofberger says. “This might give us a new optical sensor that interacts with light in certain ways. It could also be used as scaffolds to grow and transport molecules for drug delivery. We’re excited to explore this whole new phase space.”

This research was supported, in part, by the U.S. National Science Foundation.

Inhalable sensors could enable early lung cancer detection

Fri, 01/05/2024 - 2:00pm

Using a new technology developed at MIT, diagnosing lung cancer could become as easy as inhaling nanoparticle sensors and then taking a urine test that reveals whether a tumor is present.

The new diagnostic is based on nanosensors that can be delivered by an inhaler or a nebulizer. If the sensors encounter cancer-linked proteins in the lungs, they produce a signal that accumulates in the urine, where it can be detected with a simple paper test strip.

This approach could potentially replace or supplement the current gold standard for diagnosing lung cancer, low-dose computed tomography (CT). It could have an especially significant impact in low- and middle-income countries that don’t have widespread availability of CT scanners, the researchers say.

“Around the world, cancer is going to become more and more prevalent in low- and middle-income countries. The epidemiology of lung cancer globally is that it’s driven by pollution and smoking, so we know that those are settings where accessibility to this kind of technology could have a big impact,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science.

Bhatia is the senior author of the paper, which appears today in Science Advances. Qian Zhong, an MIT research scientist, and Edward Tan, a former MIT postdoc, are the lead authors of the study.

Inhalable particles

To help diagnose lung cancer as early as possible, the U.S. Preventive Services Task Force recommends that heavy smokers over the age of 50 undergo annual CT scans. However, not everyone in this target group receives these scans, and the high false-positive rate of the scans can lead to unnecessary, invasive tests.

Bhatia has spent the last decade developing nanosensors for use in diagnosing cancer and other diseases, and in this study, she and her colleagues explored the possibility of using them as a more accessible alternative to CT screening for lung cancer.

These sensors consist of polymer nanoparticles coated with a reporter, such as a DNA barcode, that is cleaved from the particle when the sensor encounters enzymes called proteases, which are often overactive in tumors. Those reporters eventually accumulate in the urine and are excreted from the body.

Previous versions of the sensors, which targeted other cancer sites such as the liver and ovaries, were designed to be given intravenously. For lung cancer diagnosis, the researchers wanted to create a version that could be inhaled, which could make it easier to deploy in lower resource settings.

“When we developed this technology, our goal was to provide a method that can detect cancer with high specificity and sensitivity, and also lower the threshold for accessibility, so that hopefully we can improve the resource disparity and inequity in early detection of lung cancer,” Zhong says.

To achieve that, the researchers created two formulations of their particles: a solution that can be aerosolized and delivered with a nebulizer, and a dry powder that can be delivered using an inhaler.

Once the particles reach the lungs, they are absorbed into the tissue, where they encounter any proteases that may be present. Human cells can express hundreds of different proteases, and some of them are overactive in tumors, where they help cancer cells to escape their original locations by cutting through proteins of the extracellular matrix. These cancerous proteases cleave DNA barcodes from the sensors, allowing the barcodes to circulate in the bloodstream until they are excreted in the urine.

In the earlier versions of this technology, the researchers used mass spectrometry to analyze the urine sample and detect DNA barcodes. However, mass spectrometry requires equipment that might not be available in low-resource areas, so for this version, the researchers created a lateral flow assay, which allows the barcodes to be detected using a paper test strip.

The researchers designed the strip to detect up to four different DNA barcodes, each of which indicates the presence of a different protease. No pre-treatment or processing of the urine sample is required, and the results can be read about 20 minutes after the sample is obtained.

“We were really pushing this assay to be point-of-care available in a low-resource setting, so the idea was to not do any sample processing, not do any amplification, just to be able to put the sample right on the paper and read it out in 20 minutes,” Bhatia says.

Accurate diagnosis

The researchers tested their diagnostic system in mice that were genetically engineered to develop lung tumors similar to those seen in humans. The sensors were administered 7.5 weeks after the tumors started to form, a time point that would likely correlate with stage 1 or 2 cancer in humans.

In their first set of experiments in the mice, the researchers measured the levels of 20 different sensors designed to detect different proteases. Using a machine learning algorithm to analyze those results, the researchers identified a combination of just four sensors that was predicted to give accurate diagnostic results. They then tested that combination in the mouse model and found that it could accurately detect early-stage lung tumors.
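The panel-selection step can be illustrated with a small sketch; this is my own toy reconstruction, not the study's actual algorithm or data. Twenty synthetic "sensor" readouts are simulated, only four of which carry disease signal (an assumption for illustration), and a greedy search picks the four-sensor subset that best separates the two groups under a simple nearest-centroid classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 20 sensor readouts for tumor vs. healthy subjects.
# Only sensors 2, 5, 11, 17 carry signal here (purely an illustrative choice).
n = 200
y = rng.integers(0, 2, n)                     # 0 = healthy, 1 = tumor
X = rng.normal(0, 1, (n, 20))
for j in (2, 5, 11, 17):
    X[:, j] += 1.5 * y                        # informative sensors shift with disease

def accuracy(features):
    # nearest-centroid classifier restricted to the chosen sensor subset
    Xf = X[:, features]
    c0, c1 = Xf[y == 0].mean(0), Xf[y == 1].mean(0)
    pred = np.linalg.norm(Xf - c1, axis=1) < np.linalg.norm(Xf - c0, axis=1)
    return (pred == y).mean()

# Greedy forward selection of a 4-sensor panel.
chosen = []
for _ in range(4):
    best = max((j for j in range(20) if j not in chosen),
               key=lambda j: accuracy(chosen + [j]))
    chosen.append(best)

print("selected sensors:", sorted(chosen), "accuracy:", accuracy(chosen))
```

The study's actual analysis would differ in classifier and validation details, but the shape of the problem is the same: prune a large candidate panel down to the few sensors that jointly carry most of the diagnostic signal.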

For use in humans, it’s possible that more sensors might be needed to make an accurate diagnosis, but that could be achieved by using multiple paper strips, each of which detects four different DNA barcodes, the researchers say.

The researchers now plan to analyze human biopsy samples to see if the sensor panels they are using would also work to detect human cancers. In the longer term, they hope to perform clinical trials in human patients. A company called Sunbird Bio has already run phase 1 trials on a similar sensor developed by Bhatia’s lab, for use in diagnosing liver cancer and a form of hepatitis known as nonalcoholic steatohepatitis (NASH).

In parts of the world where there is limited access to CT scanning, this technology could offer a dramatic improvement in lung cancer screening, especially since the results can be obtained during a single visit.

“The idea would be you come in and then you get an answer about whether you need a follow-up test or not, and we could get patients who have early lesions into the system so that they could get curative surgery or lifesaving medicines,” Bhatia says.

The research was funded by the Johnson & Johnson Lung Cancer Initiative, the Howard Hughes Medical Institute, the Koch Institute Support (core) Grant from the National Cancer Institute, and the National Institute of Environmental Health Sciences.

Improving patient safety using principles of aerospace engineering

Thu, 01/04/2024 - 1:10pm

Approximately 13 billion laboratory tests are administered every year in the United States, but not every result is timely or accurate. Laboratory missteps prevent patients from receiving appropriate, necessary, and sometimes lifesaving care. These medical errors are the third-leading cause of death in the nation. 

To help reverse this trend, a research team from the MIT Department of Aeronautics and Astronautics (AeroAstro) Engineering Systems Lab and Synensys, a safety management contractor, examined the ecosystem of diagnostic laboratory data. Their findings, including six systemic factors contributing to patient hazards in laboratory diagnostics tests, offer a rare holistic view of this complex network — not just doctors and lab technicians, but also device manufacturers, health information technology (HIT) providers, and even government entities such as the White House. By viewing the diagnostic laboratory data ecosystem as an integrated system, an approach based on systems theory, the MIT researchers have identified specific changes that can lead to safer behaviors for health care workers and healthier outcomes for patients. 

A report of the study, which was conducted by AeroAstro Professor Nancy Leveson, who serves as head of the System Safety and Cybersecurity group, along with Research Engineer John Thomas and graduate students Polly Harrington and Rodrigo Rose, was submitted to the U.S. Food and Drug Administration this past fall. Improving the infrastructure of laboratory data has been a priority for the FDA, which contracted the study through Synensys.

Hundreds of hazards, six causes

In a yearlong study that included more than 50 interviews, the Leveson team found the diagnostic laboratory data ecosystem to be vast yet fractured. No one understood how the whole system functioned or the totality of substandard treatment patients received. Well-intentioned workers were being influenced by the system to carry out unsafe actions, the MIT engineers wrote.

Test results sent to the wrong patients, incompatible technologies that strain information sharing between the doctor and lab technician, and specimens transported to the lab without guarantees of temperature control were just some of the hundreds of hazards the MIT engineers identified. The sheer volume of potential risks, known as unsafe control actions (UCAs), should not dissuade health care stakeholders from seeking change, Harrington says. 

“While there are hundreds of UCAs, there are only six systemic factors that are causing these hazards,” she adds. “Using a system-based methodology, the medical community can address many of these issues with one swoop.” 

Four of the systemic factors — decentralization, flawed communication and coordination, insufficient focus on safety-related regulations, and ambiguous or outdated standards — reflect the need for greater oversight and accountability. The two remaining systemic factors — misperceived notions of risk and lack of systems theory integration — call for a fundamental shift in perspective and operations. For instance, the medical community, including doctors themselves, tends to blame physicians when errors occur. Understanding the real risk levels associated with laboratory data and HIT might prompt more action for change, the report’s authors wrote. 

“There’s this expectation that doctors will catch every error,” Harrington says. “It’s unreasonable and unfair to expect that, especially when they have no reason to assume the data they're getting is flawed.”

Think like an engineer

Systems theory may be a new concept to the medical community, but the aviation industry has used it for decades. 

“After World War II, there were so many commercial aviation crashes that the public was scared to fly,” says Leveson, a leading expert in system and software safety. In the early 2000s, she developed the System-Theoretic Process Analysis (STPA), a technique based on systems theory that offers insights into how complex systems can become safer. The researchers used STPA in their report to the FDA. “Industry and government worked together to put controls and error reporting in place. Today, there are nearly zero crashes in the U.S. What’s happening in health care right now is like having a Boeing 787 crash every day.” 

Other engineering principles that work well in aviation, such as control systems, could be applied to health care as well, Thomas says. For instance, closed-loop controls solicit feedback so a system can change and adapt. Having laboratories confirm that physicians received their patients’ test results or investigating all reports of diagnostic errors are examples of closed-loop controls that are not mandated in the current ecosystem, Thomas says. 

“Operating without controls is like asking a robot to navigate a city street blindfolded,” Thomas says. “There’s no opportunity for course correction. Closed-loop controls help inform future decision-making, and, at this point, they’re missing in the U.S. health care system.”
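The acknowledgment loop Thomas describes can be sketched in a few lines. This is a hypothetical illustration, not part of any existing health IT system: `send_result_with_ack` and its `deliver` callback are invented names standing in for whatever transport a lab uses, and the physician's confirmation is the feedback signal.

```python
def send_result_with_ack(result, deliver, max_retries=3):
    """Closed-loop delivery: resend until the recipient acknowledges receipt.

    `deliver` is a hypothetical transport callback that returns True when the
    physician confirms the result arrived; that confirmation is the feedback
    signal. An open-loop system would call `deliver` once and assume success.
    """
    for attempt in range(1, max_retries + 1):
        if deliver(result):
            return {"delivered": True, "attempts": attempt}
    # Feedback exhausted: report failure so someone can escalate,
    # rather than letting the result vanish silently.
    return {"delivered": False, "attempts": max_retries}

# Simulated flaky channel: fails twice, then the physician confirms.
flaky = iter([False, False, True])
outcome = send_result_with_ack({"patient": "A-001", "test": "HbA1c"},
                               deliver=lambda r: next(flaky))
print(outcome)  # {'delivered': True, 'attempts': 3}
```

The design point is the return path: the sender adapts its behavior based on confirmation rather than assuming the message arrived.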

The Leveson team will continue working with Synensys on behalf of the FDA. Their next study will investigate diagnostic screenings outside the laboratory, such as at a physician’s office (point of care) or at home (over the counter). Since the start of the Covid-19 pandemic, nonclinical lab testing has surged in the country. About 600 million Covid-19 tests were sent to U.S. households between January and September 2022, according to Synensys. Yet, few systems are in place to aggregate these data or report findings to public health agencies.  

“There’s a lot of well-meaning people trying to solve this and other lab data challenges,” Rose says. “If we can convince people to think of health care as an engineered system, we can go a long way in solving some of these entrenched problems.”

The Synensys research contract is part of the Systemic Harmonization and Interoperability Enhancement for Laboratory Data (SHIELD) campaign, an agency initiative that seeks assistance and input in using systems theory to address this challenge.

Inclusive research for social change

Thu, 01/04/2024 - 12:50pm

Pair a decades-old program dedicated to creating research opportunities for underrepresented minorities and populations with a growing initiative committed to tackling the very issues at the heart of such disparities, and you’ll get a transformative partnership that only MIT can deliver. 

Since 1986, the MIT Summer Research Program (MSRP) has led an institutional effort to prepare underrepresented students (minorities, women in STEM, or students with low socioeconomic status) for doctoral education by pairing them with MIT labs and research groups. For the past three years, the Initiative on Combatting Systemic Racism (ICSR), a cross-disciplinary research collaboration led by MIT’s Institute for Data, Systems, and Society (IDSS), has joined them in their mission, helping bring the issue full circle by providing MSRP students with the opportunity to use big data and computational tools to create impactful changes toward racial equity.

“ICSR has further enabled our direct engagement with undergrads, both within and outside of MIT,” says Fotini Christia, the Ford International Professor of the Social Sciences, associate director of IDSS, and co-organizer for the initiative. “We've found that this line of research has attracted students interested in examining these topics with the most rigorous methods.”

The initiative fits well under the IDSS banner, as IDSS research seeks solutions to complex societal issues through a multidisciplinary approach that includes statistics, computation, modeling, social science methodologies, human behavior, and an understanding of complex systems. With the support of faculty and researchers from all five schools and the MIT Schwarzman College of Computing, the objective of ICSR is to work on an array of different societal aspects of systemic racism through a set of verticals including policing, housing, health care, and social media.

Where passion meets impact

Grinnell senior Mia Hines has always dreamed of using her love for computer science to support social justice. She has experience working with unhoused people and labor unions, and advocating for Indigenous peoples’ rights. When applying to college, she focused her essay on using technology to help Syrian refugees.

“As a Black woman, it's very important to me that we focus on these areas, especially on how we can use technology to help marginalized communities,” Hines says. “And also, how do we stop technology or improve technology that is already hurting marginalized communities?”   

Through MSRP, Hines was paired with research advisor Ufuoma Ovienmhada, a fourth-year doctoral student in the Department of Aeronautics and Astronautics at MIT. A member of Professor Danielle Wood’s Space Enabled research group at MIT’s Media Lab, Ovienmhada received funding from an ICSR Seed Grant and NASA's Applied Sciences Program to support her ongoing research measuring environmental injustice and socioeconomic disparities in prison landscapes. 

“I had been doing satellite remote sensing for environmental challenges and sustainability, starting out looking at coastal ecosystems, when I learned about an issue called ‘prison ecology,’” Ovienmhada explains. “This refers to the intersection of mass incarceration and environmental justice.”

Ovienmhada’s research uses satellite remote sensing and environmental data to characterize exposures to different environmental hazards such as air pollution, extreme heat, and flooding. “This allows others to use these datasets for real-time advocacy, in addition to creating public awareness,” she says.

Focusing especially on extreme heat, Hines used satellite remote sensing to monitor temperature fluctuations and assess the risks imposed on prisoners, up to and including death, especially in states like Texas, where 75 percent of prisons either don’t have full air conditioning or have none at all.
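Hines’s analysis is not published in code here; the sketch below only illustrates the kind of heat-exposure summary such a pipeline might compute. The temperatures are invented and the 35 C threshold is an arbitrary assumption, not an official heat-risk standard.

```python
# Hypothetical daily mean temperatures (deg C) for one facility's location,
# as might be retrieved from a satellite land-surface-temperature product.
daily_temps_c = [31.2, 33.8, 36.5, 38.1, 37.4, 35.0, 32.9, 39.2, 40.1, 34.6]

HEAT_THRESHOLD_C = 35.0  # illustrative threshold, chosen for this example

def heat_risk_summary(temps, threshold):
    """Count days above threshold and the longest consecutive hot spell."""
    hot_days = [t > threshold for t in temps]
    longest = run = 0
    for hot in hot_days:
        run = run + 1 if hot else 0
        longest = max(longest, run)
    return {"days_over": sum(hot_days), "longest_spell": longest}

print(heat_risk_summary(daily_temps_c, HEAT_THRESHOLD_C))
# {'days_over': 5, 'longest_spell': 3}
```

Consecutive hot days matter because sustained heat, not just single spikes, drives health risk in facilities without air conditioning.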

“Before this project I had done little to no work with geospatial data, and as a budding data scientist, getting to work with and understanding different types of data and resources is really helpful,” Hines says. “I was also funded and afforded the flexibility to take advantage of IDSS’s Data Science and Machine Learning online course. It was really great to be able to do that and learn even more.”

Filling the gap

Much like Hines, Harvey Mudd senior Megan Li was specifically interested in the IDSS-supported MSRP projects. She was drawn to the interdisciplinary approach, and she seeks in her own work to apply computational methods to societal issues and to make computer science more inclusive, considerate, and ethical. 

Working with Aurora Zhang, a grad student in IDSS’s Social and Engineering Systems PhD program, Li used county-level data on income and housing prices to quantify and visualize how affordability based on income alone varies across the United States. She then expanded the analysis to include assets and debt to determine the most common barriers to home ownership.
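The income-only affordability measure can be sketched with a simple price-to-income ratio. The county figures and the 3x rule-of-thumb threshold below are illustrative assumptions, not Zhang and Li’s actual data or methodology.

```python
# Toy county records; all figures are invented for illustration.
counties = [
    {"name": "County A", "median_income": 58_000, "median_price": 210_000},
    {"name": "County B", "median_income": 91_000, "median_price": 755_000},
    {"name": "County C", "median_income": 47_000, "median_price": 132_000},
]

AFFORDABLE_RATIO = 3.0  # common rule of thumb: price at most 3x annual income

def affordability(records, ratio=AFFORDABLE_RATIO):
    """Price-to-income ratio per county, flagging where income alone suffices."""
    out = []
    for c in records:
        r = c["median_price"] / c["median_income"]
        out.append({"name": c["name"], "ratio": round(r, 2),
                    "affordable": r <= ratio})
    return out

for row in affordability(counties):
    print(row)
```

Extending the analysis to assets and debt, as Li did, would replace the income denominator with a fuller measure of purchasing capacity; the ratio structure stays the same.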

“I spent my day-to-day looking at census data and writing Python scripts that could work with it,” reports Li. “I also reached out to the Census Bureau directly to learn a little bit more about how they did their data collection, and discussed questions related to some of their previous studies and working papers that I had reviewed.” 

Outside of actual day-to-day research, Li says she learned a lot in conversations with fellow researchers, particularly changing her “skeptical view” of whether or not mortgage lending algorithms would help or hurt home buyers in the approval process. “I think I have a little bit more faith now, which is a good thing.”

“Harvey Mudd is undergraduate-only, and while professors do run labs here, my specific research areas are not well represented,” Li says. “This opportunity was enormous in that I got the experience I need to see if this research area is actually something that I want to do long term, and I got more mirrors into what I would be doing in grad school from talking to students and getting to know faculty.”

Closing the loop

While participating in MSRP offered crucial research experience to Hines, the ICSR projects enabled her to engage in topics she's passionate about and work that could drive tangible societal change.

“The experience felt much more concrete because we were working on these very sophisticated projects, in a supportive environment where people were very excited to work with us,” she says.

A significant benefit for Li was the chance to steer her research in alignment with her own interests. “I was actually given the opportunity to propose my own research idea, versus supporting a graduate student's work in progress,” she explains. 

For Ovienmhada, the pairing of the two initiatives solidifies the efforts of MSRP and closes a crucial loop in diversity, equity, and inclusion advocacy. 

“I've participated in a lot of different DEI-related efforts and advocacy and one thing that always comes up is the fact that it’s not just about bringing people in, it's also about creating an environment and opportunities that align with people’s values,” Ovienmhada says. “Programs like MSRP and ICSR create opportunities for people who want to do work that’s aligned with certain values by providing the needed mentoring and financial support.”

Researchers 3D print components for a portable mass spectrometer

Thu, 01/04/2024 - 12:00am

Mass spectrometers, devices that identify chemical substances, are widely used in applications like crime scene analysis, toxicology testing, and geological surveying. But these machines are bulky, expensive, and easy to damage, which limits where they can be effectively deployed.

Using additive manufacturing, MIT researchers produced a mass filter, which is the core component of a mass spectrometer, that is far lighter and cheaper than the same type of filter made with traditional techniques and materials.

Their miniaturized filter, known as a quadrupole, can be completely fabricated in a matter of hours for a few dollars. The 3D-printed device is as precise as some commercial-grade mass filters that can cost more than $100,000 and take weeks to manufacture.

Built from durable and heat-resistant glass-ceramic resin, the filter is 3D printed in one step, so no assembly is required. Assembly often introduces defects that can hamper the performance of quadrupoles.

This lightweight, cheap, yet precise quadrupole is one important step in Luis Fernando Velásquez-García’s 20-year quest to produce a 3D-printed, portable mass spectrometer.

“We are not the first ones to try to do this. But we are the first ones who succeeded at doing this. There are other miniaturized quadrupole filters, but they are not comparable with professional-grade mass filters. There are a lot of possibilities for this hardware if the size and cost could be smaller without adversely affecting the performance,” says Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper detailing the miniaturized quadrupole.

For instance, a scientist could bring a portable mass spectrometer to remote areas of the rainforest, using it to rapidly analyze potential pollutants without shipping samples back to a lab. And a lightweight device would be cheaper and easier to send into space, where it could monitor chemicals in Earth’s atmosphere or on those of distant planets.

Velásquez-García is joined on the paper by lead author Colin Eckhoff, an MIT graduate student in electrical engineering and computer science (EECS); Nicholas Lubinsky, a former MIT postdoc; and Luke Metzler and Randall Pedder of Ardara Technologies. The research is published in Advanced Science.

Size matters

At the heart of a mass spectrometer is the mass filter. This component uses electric or magnetic fields to sort charged particles based on their mass-to-charge ratio. In this way, the device can measure the chemical components in a sample to identify an unknown substance.

A quadrupole, a common type of mass filter, is composed of four metallic rods surrounding an axis. Voltages are applied to the rods, which produce an electromagnetic field. Depending on the properties of the electromagnetic field, ions with a specific mass-to-charge ratio will swirl around through the middle of the filter, while other particles escape out the sides. By varying the mix of voltages, one can target ions with different mass-to-charge ratios.
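The filtering physics can be made concrete with the standard Mathieu stability parameters for a quadrupole. The sketch below is illustrative only: the operating values (voltages, field radius, frequency) are invented, not taken from the MIT device.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def mathieu_aq(mz, U, V, r0, f):
    """Mathieu stability parameters (a, q) for an ion of mass-to-charge
    ratio mz (Da per elementary charge) in a quadrupole with DC voltage U,
    RF amplitude V (volts), field radius r0 (m), and RF frequency f (Hz)."""
    omega = 2 * math.pi * f
    m = mz * AMU
    a = 8 * E_CHARGE * U / (m * r0**2 * omega**2)
    q = 4 * E_CHARGE * V / (m * r0**2 * omega**2)
    return a, q

# Invented operating point, not the MIT device's actual parameters:
a, q = mathieu_aq(mz=100, U=0, V=100, r0=3e-3, f=2.0e6)
# In RF-only mode (U = 0), ions are transmitted roughly when q < 0.908,
# the boundary of the first Mathieu stability region.
print(f"a = {a:.3f}, q = {q:.3f}, transmitted: {q < 0.908}")
```

Because both parameters scale as 1/m, sweeping the voltages walks different mass-to-charge ratios through the stability region, which is how the filter selects one species at a time.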

While fairly simple in design, a typical stainless-steel quadrupole might weigh several kilograms. But miniaturizing a quadrupole is no easy task. Making the filter smaller usually introduces errors during the manufacturing process. Plus, smaller filters collect fewer ions, which makes chemical analysis less sensitive.

“You can’t make quadrupoles arbitrarily smaller — there is a tradeoff,” Velásquez-García adds.

His team balanced this tradeoff by leveraging additive manufacturing to make miniaturized quadrupoles with the ideal size and shape to maximize precision and sensitivity.

They fabricate the filter from a glass-ceramic resin, which is a relatively new printable material that can withstand temperatures up to 900 degrees Celsius and performs well in a vacuum.

The device is produced using vat photopolymerization, a process where a piston pushes into a vat of liquid resin until it nearly touches an array of LEDs at the bottom. These illuminate, curing the resin that remains in the minuscule gap between the piston and the LEDs. A tiny layer of cured polymer is then stuck to the piston, which rises up and repeats the cycle, building the device one tiny layer at a time.

“This is a relatively new technology for printing ceramics that allows you to make very precise 3D objects. And one key advantage of additive manufacturing is that you can aggressively iterate the designs,” Velásquez-García says.

Since the 3D printer can form practically any shape, the researchers designed a quadrupole with hyperbolic rods. This shape is ideal for mass filtering but difficult to make with conventional methods. Many commercial filters employ rounded rods instead, which can reduce performance.

They also printed an intricate network of triangular lattices surrounding the rods, which provides durability while ensuring the rods remain positioned correctly if the device is moved or shaken.

To finish the quadrupole, the researchers used a technique called electroless plating to coat the rods with a thin metal film, which makes them electrically conductive. They cover everything but the rods with a masking chemical and then submerge the quadrupole in a chemical bath held at a precise temperature under controlled stirring. This deposits a thin metal film on the rods uniformly without damaging the rest of the device or shorting the rods.

“In the end, we made quadrupoles that were the most compact but also the most precise that could be made, given the constraints of our 3D printer,” Velásquez-García says.

Maximizing performance

To test their 3D-printed quadrupoles, the team swapped them into a commercial system and found that they could attain higher resolutions than other types of miniature filters. Their quadrupoles, which are about 12 centimeters in length, are one-quarter the density of comparable stainless-steel filters.

In addition, further experiments suggest that their 3D-printed quadrupoles could achieve precision that is on par with that of large-scale commercial filters.

“Mass spectrometry is one of the most important of all scientific tools, and Velásquez-García and co-workers describe the design, construction, and performance of a quadrupole mass filter that has several advantages over earlier devices,” says Graham Cooks, the Henry Bohn Hass Distinguished Professor of Chemistry in the Aston Laboratories for Mass Spectrometry at Purdue University, who was not involved with this work. “The advantages derive from these facts: It is much smaller and lighter than most commercial counterparts and it is fabricated monolithically, using additive construction. … It is an open question as to how well the performance will compare with that of quadrupole ion traps, which depend on the same electric fields for mass measurement but which do not have the stringent geometrical requirements of quadrupole mass filters.”

“This paper represents a real advance in the manufacture of quadrupole mass filters (QMF). The authors bring together their knowledge of manufacture using advanced materials, QMF drive electronics, and mass spectrometry to produce a novel system with good performance at low cost,” adds Steve Taylor, professor of electrical engineering and electronics at the University of Liverpool, who was also not involved with this paper. “Since QMFs are at the heart of the ‘analytical engine’ in many other types of mass spectrometry systems, the paper has an important significance across the whole mass spectrometry field, which worldwide represents a multibillion-dollar industry.”

In the future, the researchers plan to boost the quadrupole’s performance by making the filters longer. A longer filter can enable more precise measurements, since more of the ions that are supposed to be filtered out will escape as they travel along its length. The researchers also intend to explore different ceramic materials that could better transfer heat.

“Our vision is to make a mass spectrometer where all the key components can be 3D printed, contributing to a device with much less weight and cost without sacrificing performance. There is still a lot of work to do, but this is a great start,” Velásquez-García adds.

This work was funded by Empiriko Corporation.

MIT community members elected to the National Academy of Inventors for 2023

Wed, 01/03/2024 - 3:30pm

The National Academy of Inventors (NAI) recently announced the election of more than 160 individuals to its 2023 class of fellows. Among them are two members of the MIT Koch Institute for Integrative Cancer Research, Professor Daniel G. Anderson and Principal Research Scientist Ana Jaklenec. Eleven MIT alumni were also recognized.

The highest professional distinction accorded solely to academic inventors, election to the NAI recognizes individuals who have created or facilitated outstanding inventions that have made a tangible impact on quality of life, economic development, and the welfare of society.  

“Daniel and Ana embody some of the Koch Institute’s core values of interdisciplinary innovation and drive to translate their discoveries into real impact for patients,” says Matthew Vander Heiden, director of the Koch Institute. “Their election to the academy is very well-deserved, and we are honored to count them both among the Koch Institute’s and MIT’s research community.”

Daniel Anderson is the Joseph R. Mares (1924) Professor of Chemical Engineering and a core member of the Institute for Medical Engineering and Science. He is a leading researcher in the fields of nanotherapeutics and biomaterials. Anderson’s work has led to advances in a range of areas, including medical devices, cell therapy, drug delivery, gene therapy, and material science, and has resulted in the publication of more than 500 papers, patents, and patent applications. He has founded several companies, including Living Proof, Olivo Labs, Crispr Therapeutics (CRSP), Sigilon Therapeutics, Verseau Therapeutics, oRNA, and VasoRx. He is a member of the National Academy of Medicine and the Harvard-MIT Division of Health Sciences and Technology, and is an affiliate of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT and Harvard.

Ana Jaklenec, a principal research scientist and principal investigator at the Koch Institute, is a leader in the fields of bioengineering and materials science, focused on controlled delivery and stability of therapeutics for global health. She is an inventor of several drug delivery technologies that have the potential to enable equitable access to medical care globally. Her lab is developing new manufacturing techniques for the design of materials at the nano- and micro-scale for self-boosting vaccines, 3D printed on-demand microneedles, heat-stable polymer-based carriers for oral delivery of micronutrients and probiotics, and long-term drug delivery systems for cancer immunotherapy. She has published over 100 manuscripts, patents, and patent applications and has founded three companies: Particles for Humanity, VitaKey, and OmniPulse Biosciences.

The 11 MIT alumni who were elected to the NAI for 2023 include:

  • Michel Barsoum PhD ’85 (Materials Science and Engineering);
  • Eric Burger ’84 (Electrical Engineering and Computer Science);
  • Kevin Kelly SM ’88, PhD ’91 (Mechanical Engineering);
  • Ali Khademhosseini PhD ’05 (Biological Engineering);
  • Joshua Makower ’85 (Mechanical Engineering);
  • Marcela Maus ’97 (Biology);
  • Milos Popovic SM ’02, PhD ’08 (Electrical Engineering and Computer Science);
  • Milica Radisic PhD ’04 (Chemical Engineering);
  • David Reinkensmeyer ’88 (Electrical Engineering);
  • Boris Rubinsky PhD ’81 (Mechanical Engineering); and
  • Paul S. Weiss ’80, SM ’80 (Chemistry).

Since its inception in 2012, the NAI Fellows program has grown to include 1,898 exceptional researchers and innovators, who hold over 63,000 U.S. patents and 13,000 licensed technologies. NAI Fellows are known for the societal and economic impact of their inventions, contributing to major advancements in science and consumer technologies. Their innovations have generated over $3 trillion in revenue and created 1 million jobs.

“This year’s class of NAI Fellows showcases the caliber of researchers found within the innovation ecosystem. Each of these individuals is making significant contributions to both science and society through their work,” says Paul R. Sanberg, president of the NAI. “This new class, in conjunction with our existing fellows, is creating innovations that are driving crucial advancements across a variety of disciplines and stimulating the global and national economy in immeasurable ways as they move these technologies from lab to marketplace.”

AI agents help explain other AI systems

Wed, 01/03/2024 - 3:10pm

Explaining the behavior of trained neural networks remains a compelling puzzle, especially as these models grow in size and sophistication. Like other scientific challenges throughout history, reverse-engineering how artificial intelligence systems work requires a substantial amount of experimentation: making hypotheses, intervening on behavior, and even dissecting large networks to examine individual neurons. To date, most successful experiments have involved large amounts of human oversight. Explaining every computation inside models the size of GPT-4 and larger will almost certainly require more automation — perhaps even using AI models themselves. 

Facilitating this timely endeavor, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel approach that uses AI models to conduct experiments on other systems and explain their behavior. Their method uses agents built from pretrained language models to produce intuitive explanations of computations inside trained networks.

Central to this strategy is the “automated interpretability agent” (AIA), designed to mimic a scientist’s experimental processes. Interpretability agents plan and perform tests on other computational systems, which can range in scale from individual neurons to entire models, in order to produce explanations of these systems in a variety of forms: language descriptions of what a system does and where it fails, and code that reproduces the system’s behavior. Unlike existing interpretability procedures that passively classify or summarize examples, the AIA actively participates in hypothesis formation, experimental testing, and iterative learning, thereby refining its understanding of other systems in real time. 

Complementing the AIA method is the new “function interpretation and description” (FIND) benchmark, a test bed of functions resembling computations inside trained networks, and accompanying descriptions of their behavior. One key challenge in evaluating the quality of descriptions of real-world network components is that descriptions are only as good as their explanatory power: Researchers don’t have access to ground-truth labels of units or descriptions of learned computations. FIND addresses this long-standing issue in the field by providing a reliable standard for evaluating interpretability procedures: explanations of functions (e.g., produced by an AIA) can be evaluated against function descriptions in the benchmark.  

For example, FIND contains synthetic neurons designed to mimic the behavior of real neurons inside language models, some of which are selective for individual concepts such as “ground transportation.” AIAs are given black-box access to synthetic neurons and design inputs (such as “tree,” “happiness,” and “car”) to test a neuron’s response. After noticing that a synthetic neuron produces higher response values for “car” than other inputs, an AIA might design more fine-grained tests to distinguish the neuron’s selectivity for cars from other forms of transportation, such as planes and boats. When the AIA produces a description such as “this neuron is selective for road transportation, and not air or sea travel,” this description is evaluated against the ground-truth description of the synthetic neuron (“selective for ground transportation”) in FIND. The benchmark can then be used to compare the capabilities of AIAs to other methods in the literature. 
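The probing loop described above can be sketched in miniature. Everything here is a toy stand-in: the hard-coded neuron and the fixed response threshold are invented for illustration, whereas the real AIAs use a language model to design inputs and phrase hypotheses, and FIND's synthetic units are built from language-model computations.

```python
def synthetic_neuron(word):
    """Toy FIND-style unit that responds strongly to ground transportation.
    (Invented stand-in; not an actual benchmark function.)"""
    ground = {"car", "truck", "bus", "train", "bicycle"}
    other_transport = {"plane", "boat", "helicopter", "ship"}
    if word in ground:
        return 1.0
    if word in other_transport:
        return 0.2  # weaker response: transport, but not ground transport
    return 0.05     # baseline response to unrelated concepts

def probe(neuron, candidates, threshold=0.5):
    """Minimal agent loop: query the black box with candidate inputs and
    summarize which ones it prefers. A real AIA would iterate, designing
    new inputs to sharpen its hypothesis about the unit's selectivity."""
    responses = {w: neuron(w) for w in candidates}
    selective = sorted(w for w, r in responses.items() if r >= threshold)
    return selective, responses

selective, responses = probe(
    synthetic_neuron,
    ["tree", "happiness", "car", "plane", "bus", "boat", "train"],
)
print("high-response inputs:", selective)  # ['bus', 'car', 'train']
```

The intermediate response to “plane” and “boat” is what would prompt a good agent to run the finer-grained follow-up tests the article describes, distinguishing road vehicles from other transport.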

Sarah Schwettmann PhD '21, co-lead author of a paper on the new work and a research scientist at CSAIL, emphasizes the advantages of this approach. “The AIAs’ capacity for autonomous hypothesis generation and testing may be able to surface behaviors that would otherwise be difficult for scientists to detect. It’s remarkable that language models, when equipped with tools for probing other systems, are capable of this type of experimental design,” says Schwettmann. “Clean, simple benchmarks with ground-truth answers have been a major driver of more general capabilities in language models, and we hope that FIND can play a similar role in interpretability research.”

Automating interpretability 

Large language models are still holding their status as the in-demand celebrities of the tech world. The recent advancements in LLMs have highlighted their ability to perform complex reasoning tasks across diverse domains. The team at CSAIL recognized that given these capabilities, language models may be able to serve as backbones of generalized agents for automated interpretability. “Interpretability has historically been a very multifaceted field,” says Schwettmann. “There is no one-size-fits-all approach; most procedures are very specific to individual questions we might have about a system, and to individual modalities like vision or language. Existing approaches to labeling individual neurons inside vision models have required training specialized models on human data, where these models perform only this single task. Interpretability agents built from language models could provide a general interface for explaining other systems — synthesizing results across experiments, integrating over different modalities, even discovering new experimental techniques at a very fundamental level.” 

As we enter a regime where the models doing the explaining are black boxes themselves, external evaluations of interpretability methods are becoming increasingly vital. The team’s new benchmark addresses this need with a suite of functions with known structure that are modeled after behaviors observed in the wild. The functions inside FIND span a diversity of domains, from mathematical reasoning to symbolic operations on strings to synthetic neurons built from word-level tasks. The dataset of interactive functions is procedurally constructed; real-world complexity is introduced to simple functions by adding noise, composing functions, and simulating biases. This allows for comparison of interpretability methods in a setting that translates to real-world performance.

In addition to the dataset of functions, the researchers introduced an innovative evaluation protocol to assess the effectiveness of AIAs and existing automated interpretability methods. This protocol involves two approaches. For tasks that require replicating the function in code, the evaluation directly compares the AI-generated estimations and the original, ground-truth functions. The evaluation becomes more intricate for tasks involving natural language descriptions of functions. In these cases, accurately gauging the quality of these descriptions requires an automated understanding of their semantic content. To tackle this challenge, the researchers developed a specialized “third-party” language model. This model is specifically trained to evaluate the accuracy and coherence of the natural language descriptions provided by the AI systems, and compares it to the ground-truth function behavior. 
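For the code-replication tasks, the comparison can be sketched as output agreement over probe inputs. The two functions below are invented examples, not drawn from FIND, and the scoring is a simplified stand-in for the paper's protocol (the natural-language tasks instead use a judge language model, as described above).

```python
def ground_truth(x):
    """A toy interactive function standing in for a FIND benchmark entry."""
    return 3 * x + 1 if x % 2 else x // 2

def aia_estimate(x):
    """A hypothetical reconstruction an agent might produce: it recovers the
    odd branch correctly but mishandles negative even inputs."""
    return 3 * x + 1 if x % 2 else max(0, x // 2)

def agreement_score(f, g, inputs):
    """Fraction of probe inputs on which the estimate matches ground truth."""
    matches = sum(f(x) == g(x) for x in inputs)
    return matches / len(list(inputs))

score = agreement_score(ground_truth, aia_estimate, inputs=range(-5, 6))
print(f"agreement: {score:.2f}")  # agreement: 0.82
```

A score below 1.0 localizes exactly where the agent's hypothesis breaks down (here, the negative even inputs), which is the kind of fine-grained failure the benchmark is designed to expose.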

FIND enables evaluation revealing that we are still far from fully automating interpretability; although AIAs outperform existing interpretability approaches, they still fail to accurately describe almost half of the functions in the benchmark. Tamar Rott Shaham, co-lead author of the study and a postdoc in CSAIL, notes that “while this generation of AIAs is effective in describing high-level functionality, they still often overlook finer-grained details, particularly in function subdomains with noise or irregular behavior. This likely stems from insufficient sampling in these areas. One issue is that the AIAs’ effectiveness may be hampered by their initial exploratory data. To counter this, we tried guiding the AIAs’ exploration by initializing their search with specific, relevant inputs, which significantly enhanced interpretation accuracy.” This approach combines new AIA methods with previous techniques using pre-computed examples for initiating the interpretation process.

The researchers are also developing a toolkit to augment the AIAs’ ability to conduct more precise experiments on neural networks, both in black-box and white-box settings. This toolkit aims to equip AIAs with better tools for selecting inputs and refining hypothesis-testing capabilities for more nuanced and accurate neural network analysis. The team is also tackling practical challenges in AI interpretability, focusing on determining the right questions to ask when analyzing models in real-world scenarios. Their goal is to develop automated interpretability procedures that could eventually help people audit systems — e.g., for autonomous driving or face recognition — to diagnose potential failure modes, hidden biases, or surprising behaviors before deployment. 

Watching the watchers

The team envisions one day developing nearly autonomous AIAs that can audit other systems, with human scientists providing oversight and guidance. Advanced AIAs could develop new kinds of experiments and questions, potentially beyond human scientists’ initial considerations. The focus is on expanding AI interpretability to include more complex behaviors, such as entire neural circuits or subnetworks, and predicting inputs that might lead to undesired behaviors. This development represents a significant step forward in AI research, aiming to make AI systems more understandable and reliable.

“A good benchmark is a power tool for tackling difficult challenges,” says Martin Wattenberg, computer science professor at Harvard University who was not involved in the study. “It's wonderful to see this sophisticated benchmark for interpretability, one of the most important challenges in machine learning today. I'm particularly impressed with the automated interpretability agent the authors created. It's a kind of interpretability jiu-jitsu, turning AI back on itself in order to help human understanding.”

Schwettmann, Rott Shaham, and their colleagues presented their work at NeurIPS 2023 in December. Additional MIT coauthors, all affiliates of CSAIL and the Department of Electrical Engineering and Computer Science (EECS), include graduate student Joanna Materzynska, undergraduate student Neil Chowdhury, Shuang Li PhD ’23, Assistant Professor Jacob Andreas, and Professor Antonio Torralba. Northeastern University Assistant Professor David Bau is an additional coauthor.

The work was supported, in part, by the MIT-IBM Watson AI Lab, Open Philanthropy, an Amazon Research Award, Hyundai NGV, the U.S. Army Research Laboratory, the U.S. National Science Foundation, the Zuckerman STEM Leadership Program, and a Viterbi Fellowship.

Complex, unfamiliar sentences make the brain’s language network work harder

Wed, 01/03/2024 - 5:00am

With help from an artificial language network, MIT neuroscientists have discovered what kind of sentences are most likely to fire up the brain’s key language processing centers.

The new study reveals that sentences that are more complex, either because of unusual grammar or unexpected meaning, generate stronger responses in these language processing centers. Sentences that are very straightforward barely engage these regions, and nonsensical sequences of words don’t do much for them either.

For example, the researchers found this brain network was most active when reading unusual sentences such as “Buy sell signals remains a particular,” taken from a publicly available language dataset called C4. However, it went quiet when reading something very straightforward, such as “We were sitting on the couch.”

“The input has to be language-like enough to engage the system,” says Evelina Fedorenko, an associate professor of neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “And then within that space, if things are really easy to process, then you don’t have much of a response. But if things get difficult, or surprising, if there’s an unusual construction or an unusual set of words that you’re maybe not very familiar with, then the network has to work harder.”

Fedorenko is the senior author of the study, which appears today in Nature Human Behaviour. MIT graduate student Greta Tuckute is the lead author of the paper.

Processing language

In this study, the researchers focused on the language-processing regions found in the brain’s left hemisphere, which include Broca’s area as well as other parts of the left frontal and temporal lobes.

“This language network is highly selective to language, but it’s been harder to actually figure out what is going on in these language regions,” Tuckute says. “We wanted to discover what kinds of sentences, what kinds of linguistic input, drive the left hemisphere language network.”

The researchers began by compiling a set of 1,000 sentences taken from a wide variety of sources — fiction, transcriptions of spoken words, web text, and scientific articles, among many others.

Five human participants read each of the sentences while the researchers measured their language network activity using functional magnetic resonance imaging (fMRI). The researchers then fed those same 1,000 sentences into a large language model — a model similar to ChatGPT, which learns to generate and understand language from predicting the next word in huge amounts of text — and measured the activation patterns of the model in response to each sentence.

Once they had all of those data, the researchers trained a mapping model, known as an “encoding model,” which relates the activation patterns seen in the human brain with those observed in the artificial language model. Once trained, the model could predict how the human language network would respond to any new sentence based on how the artificial language network responded to that sentence.

The researchers then used the encoding model to identify 500 new sentences that would generate maximal activity in the human brain (the “drive” sentences), as well as sentences that would elicit minimal activity in the brain’s language network (the “suppress” sentences).
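The two steps described above, fitting an encoding model on the measured sentences and then ranking a candidate pool by predicted brain activity, can be sketched roughly as follows. The arrays here are random stand-ins for the real LLM activations and fMRI responses, and the closed-form ridge fit is one common choice for encoding models of this kind, not necessarily the exact estimator the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the study's data: one activation vector per sentence
# (from the language model) and one scalar fMRI response per sentence
# (averaged over the brain's language network).
n_train, n_feat = 1000, 64
X_train = rng.normal(size=(n_train, n_feat))   # LLM activations
w_true = rng.normal(size=n_feat)               # unknown brain mapping
y_train = X_train @ w_true + rng.normal(scale=0.5, size=n_train)

# Step 1: fit the encoding model with closed-form ridge regression,
# w = (X^T X + alpha * I)^(-1) X^T y.
alpha = 1.0
w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                    X_train.T @ y_train)

# Step 2: score a large pool of candidate sentences and keep the 500
# predicted to maximally drive or suppress the network's response.
X_candidates = rng.normal(size=(5000, n_feat))
predicted = X_candidates @ w
order = np.argsort(predicted)
suppress_idx = order[:500]    # lowest predicted activity
drive_idx = order[-500:]      # highest predicted activity
```

In the study, the drive and suppress sentences chosen this way were then shown to new participants to test whether the predicted effects actually occurred.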

In a group of three new human participants, the researchers found these new sentences did indeed drive and suppress brain activity as predicted.

“This ‘closed-loop’ modulation of brain activity during language processing is novel,” Tuckute says. “Our study shows that the model we’re using (that maps between language-model activations and brain responses) is accurate enough to do this. This is the first demonstration of this approach in brain areas implicated in higher-level cognition, such as the language network.”

Linguistic complexity

To figure out what made certain sentences drive activity more than others, the researchers analyzed the sentences based on 11 different linguistic properties, including grammaticality, plausibility, emotional valence (positive or negative), and how easy it is to visualize the sentence content.

For each of those properties, the researchers asked participants from crowd-sourcing platforms to rate the sentences. They also used a computational technique to quantify each sentence’s “surprisal,” or how uncommon it is compared to other sentences.

This analysis revealed that sentences with higher surprisal generate higher responses in the brain. This is consistent with previous studies showing people have more difficulty processing sentences with higher surprisal, the researchers say.
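As a toy illustration of surprisal, here is a unigram version: the average negative log-probability of a sentence's words under a simple word-frequency model. The study computed surprisal with far more capable language models; the tiny corpus and add-one smoothing below are illustrative assumptions only:

```python
import math
from collections import Counter

# Tiny stand-in corpus; real surprisal estimates come from large
# language models trained on vast text collections.
corpus = ("we were sitting on the couch . "
          "the dog sat on the mat . "
          "we were reading on the couch .").split()

counts = Counter(corpus)
total = sum(counts.values())
vocab = len(counts)

def surprisal(sentence, smoothing=1.0):
    """Average per-word surprisal in bits under a smoothed unigram model."""
    words = sentence.lower().split()
    bits = 0.0
    for word in words:
        # Add-one smoothing gives unseen words a small nonzero probability.
        p = (counts[word] + smoothing) / (total + smoothing * (vocab + 1))
        bits -= math.log2(p)
    return bits / len(words)

# A familiar sentence is less surprising than an unusual one.
common = surprisal("we were sitting on the couch")
odd = surprisal("buy sell signals remains a particular")
```

Under this toy model, the unusual C4 sentence from the article receives a higher surprisal than the everyday one, mirroring the pattern the researchers observed in the brain data.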

Another linguistic property that correlated with the language network’s responses was linguistic complexity, which the researchers assessed through how much a sentence adheres to the rules of English grammar and how plausible it is, meaning how much sense the content makes apart from the grammar.

Sentences at either end of the spectrum — either extremely simple, or so complex that they make no sense at all — evoked very little activation in the language network. The largest responses came from sentences that make some sense but require work to figure them out, such as “Jiffy Lube of — of therapies, yes,” which came from the Corpus of Contemporary American English dataset.

“We found that the sentences that elicit the highest brain response have a weird grammatical thing and/or a weird meaning,” Fedorenko says. “There’s something slightly unusual about these sentences.”

The researchers now plan to see if they can extend these findings in speakers of languages other than English. They also hope to explore what type of stimuli may activate language processing regions in the brain’s right hemisphere.

The research was funded by an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, the MIT-IBM Watson AI Lab, the National Institutes of Health, the McGovern Institute, the Simons Center for the Social Brain, and MIT’s Department of Brain and Cognitive Sciences.

Building technology that empowers city residents

Wed, 01/03/2024 - 12:00am

Kwesi Afrifa came to MIT from his hometown of Accra, Ghana, in 2020 to pursue an interdisciplinary major in urban planning and computer science. Growing up amid the many moving parts of a large, densely populated city, he had often observed aspects of urban life that could be made more efficient. He decided to apply his interest in computing and coding to address these problems by creating software tools for city planners.

Now a senior, Afrifa works at the City Form Lab led by Andres Sevtsuk, collaborating on an open-source, Python-based tool that allows researchers and policymakers to analyze pedestrian behavior. The package, which launches next month, will make it more feasible for researchers and city planners to investigate how changes to a city’s structural characteristics affect walkability and the pedestrian experience.

During his first two years at MIT, Afrifa worked in the Civic Data Design Lab led by Associate Professor Sarah Williams, where he helped build sensing tools and created an online portal for people living in Kibera, Nairobi, to access the internet and participate in survey research.

After graduation, he will go on to work as a software engineer at a startup in New York. After several years, he hopes to start his own company, building urban data tools for integration into mapping and location-based software applications.

“I see it as my duty to make city systems more efficient, deepen the connection between residents and their communities, and make existing in them better for everyone, including groups which have often been marginalized,” he says.

“Cities are special places”

Afrifa believes that in urban settings, technology has a unique power to both accelerate development and empower citizens.

He witnessed such unifying power in high school, when he created the website ghanabills.com, which aggregated bills of parliament in Ghana, providing easy access to this information as well as a place for people to engage in discussion on the bills. He describes the effect of this technology as a “democratizing force.”

Afrifa also explored the connection between cities and community as an executive member of Code for Good, a program that connects MIT students interested in software with nonprofits throughout the Boston area. He mentored students and recruited nonprofits to pair them with.

Language and visibility

Sharing African languages and cultures is also important to Afrifa. In his first two years at MIT, he and other African students across the country started the Mandla app, which he describes as a Duolingo for African languages. It had gamified lessons, voice translations, and other interactive features for learning. “We wanted to solve the problem of language revitalization and bring African languages to the broader diaspora,” he says. At its peak a year ago, the app had 50,000 daily active users.

Although the Mandla app was discontinued due to lack of funding, Afrifa has found other ways to promote African culture at MIT. He is currently collaborating with architecture graduate students TJ Bayowa and Courage Kpodo on “A Tale of Two Coasts,” an upcoming short film and multimedia installation that delves into the intricate connections between perceptions of African art and identity spanning two coasts of the Atlantic Ocean. This ongoing collaboration, which Afrifa says is still taking shape, is something he hopes to expand beyond MIT.

Discovering arts

As a child, Afrifa enjoyed writing poetry. Growing up with parents who loved literature, Afrifa was encouraged to become involved with the theater and art scene of Accra. He didn’t expect to continue this interest at MIT, but then he discovered the Black Theater Guild (BTG).

The theater group had been active at MIT from the 1990s to around 2005. It was revived by Afrifa in his sophomore year when Professor Jay Scheib, head of Music and Theater Arts at MIT, encouraged him to write, direct, and produce more of his work after his final project for 21M.710 (Script Analysis), a dramaturgy class taught by Scheib.

Since then, the BTG has held two productions in the past two years: “Nkrumah’s Last Day,” in spring 2022, and “Shooting the Sheriff,” in spring 2023, both of which were written and directed by Afrifa. “It’s been very rewarding to conceptualize ideas, write stories and have this amazing community of people come together and produce it,” he says.

When asked if he will continue to pursue theater post-grad, Afrifa says: “That’s 100 percent the goal.”

Culturally informed design: Unearthing ingenuity where it always was

Tue, 01/02/2024 - 3:50pm

Pedro Reynolds-Cuéllar, an MIT PhD student in both media arts and sciences and art, culture, and technology (ACT), explores how technology and culture intersect in spaces often overlooked by mainstream society, stretching beyond the usual scope of design research.

A former lecturer and researcher at MIT D-Lab with experience in robotics, Reynolds-Cuéllar is an ACT Future Heritage Lab affiliate, a member of the Space Enabled Group within the MIT Media Lab, and a MAD Fellow who hails from rural Colombia, where resourcefulness isn't a skill but a way of life. “I grew up seeing impressive ingenuity in solving a lot of problems, building contraptions, tools, and infrastructure … all sorts of things. Investigating this ingenuity has been the question driving my entire PhD,” he reflects.

Emphasizing the importance of cultural elements in how people collaborate, his work encourages a more localized, culturally informed perspective on technology design. “I am interested in investigating how technology takes place in geographies and spaces that are outside of mainstream society, mostly rural places,” he says.

At the heart of South America, Colombia is home to more than 80 distinct Indigenous groups, each carrying unique customs, beliefs, and practices. This contributes to Colombia’s cultural mosaic and linguistic diversity, with more than 68 spoken languages. For Reynolds-Cuéllar, this meant plenty of opportunities to engage with communities, not to reshape or “fix” them, but to build on their intrinsic strengths and amplify their voices.

“My colleagues and I developed a digital platform meticulously documenting collaborative processes when designing technology. This platform, called Retos, captures the invaluable social capital that blooms from these interactions,” Reynolds-Cuéllar explains. Born from a need to foster cross-pollination, the platform serves as a bridge between universities, companies, and rural Colombian organizations, enhancing their existing initiatives and facilitating processes such as funding applications. It received an award from MIT Solve and the 2022 MIT Prize for Open Data from MIT Libraries. 

Designing with culture in mind

Reynolds-Cuéllar’s approach isn’t formulaic. “Culture is pivotal in shaping collaboration dynamics,” he emphasizes. “Reading about collaboration can make it seem like something universal, but I don’t think it works that way. This means common research methods are not always effective. You must ‘tune in,’ and build upon existing methods in the local fabric.” This understanding fuels Reynolds-Cuéllar’s work, allowing him to sculpt each project to resonate with a community’s distinct cultural context. At the heart of his doctoral research, he integrates Indigenous knowledge and what he calls “ancestral technology” into design practices — a form of world-making (design) that primarily supports cultural cohesion, rooted in bounded geography and in a history that lives through collective memory. “I’m prompting designers, who may lack direct access to Indigenous scholarship, to recalibrate their design approaches,” Reynolds-Cuéllar articulates.

This appeal to look into multiple perspectives and methodologies broadens the horizons of conventional design thinking. Beyond designing things for a specific function or solution, Reynolds-Cuéllar looks at practices that also help maintain the cultural fabric of a place. He gives the example of weaving looms, which are not only the result of ingenious design, but also allow Indigenous communities to build artifacts with great cultural meaning and economic benefit: “When I work on the loom … I feel differently. I have access to a different state of mind and can easily get into a flow. I am building things where I can tell the story of my life within my culture. I'm making something that is meaningful for people around me, and I'm not doing it alone, we're doing it all together,” adds Reynolds-Cuéllar.

Among his ventures, Reynolds-Cuéllar's work with coffee farmers stands out. His projects in collaboration with these communities are all about empowering coffee farmers to refine their processes and gain agency over their livelihood and economic undertakings.

“The coffee industry in Colombia is intricate, with various layers influencing farmers’ lives, from bioengineered seeds to chemical fertilizers, and centralized roasting operations. It’s political and even philosophical,” Reynolds-Cuéllar states. Coffee farmers could sell the raw beans for a low price to the powerful Federación Nacional de Cafeteros (the National Federation of Coffee Growers of Colombia), but there are other alternatives to foster agency and self-determination. “We collaborate with coffee growing collectives, helping them to achieve consistency in roasting procedures, improve equipment designs, and set up packaging infrastructure,” he says. As a result, farmers can produce higher-value specialty coffee that they can choose to sell directly to consumers. Reynolds-Cuéllar’s work creates ripple effects, bolstering autonomy and local economies.

Too many questions

Throughout his research, Reynolds-Cuéllar describes a turning point in meeting an Indigenous cultural and social leader: “We were collaborating with a group of fishermen on Colombia’s Atlantic coast, within an Indigenous community. Our initial curriculum mirrored conventional design methods. Yet, the leader’s insight shifted my perspective profoundly. It was the first time my methods were being challenged.” The encounter prompted Reynolds-Cuéllar to scrutinize his methodology: “This leader told me: ‘You guys ask a lot of questions.’ I started explaining the benefit of questions, and methods in the usual design jargon. He replied: ‘I still think you ask too many questions. We ask the most important questions, and then we spend a lot of time reflecting on them,’” remembers Reynolds-Cuéllar. This shift underscored the realization that there is no such thing as universal design, and that standardized methodologies don’t universally translate. They sometimes inadvertently strip away cultural nuances that they could instead cultivate.

For Reynolds-Cuéllar, his participation in MAD’s design fellowship has been instrumental. The fellowship not only provided essential funding but also offered a sense of community. “The fellowship facilitated meaningful conversations, especially talks like Dori Tunstall's on ‘Decolonizing Design,’” Reynolds-Cuéllar reflects. The financial support also translated into practical aid, allowing him to advance his projects, including compensating field researchers in Colombia.

Beyond academic pursuits, Reynolds-Cuéllar envisions writing a book titled “The Atlas of Ancestral Technology of Colombia.” More than mere documentation, this large-format atlas would be a compendium of the myriad stories Reynolds-Cuéllar has unearthed, with illustrations crafted in Colombia — visual representations from each culture, descriptions, and local stories about these artifacts. “I want a book that could counter some of the predominant narratives on design,” asserts Reynolds-Cuéllar. Through his work, Reynolds-Cuéllar has already started to craft a blueprint for approaching design with cultural significance and intention, laying the foundation for a more inclusive and purposeful approach to technology and innovation.

Climate action, here and now

Tue, 01/02/2024 - 12:00am

A few years ago, David Hsu started taking a keen interest in some apartment buildings in Brooklyn and the Bronx — but not because he was looking for a place to live. Hsu, an associate professor at MIT, works on urban climate change solutions. The property owners were retrofitting their buildings to make them net-zero emitters of carbon dioxide via better insulation, ventilation, and electric heating and appliances. They also wanted to see the effect on interior air quality.

In the process, the owners started working with Hsu and an MIT team to assess the results using top-grade air quality sensors. They found that beyond its climate benefits, retrofitting lowered indoor pollutants from high levels to almost-undetectable levels. It is a win-win outcome.

“Not only are those buildings cleaner and use less energy and do not emit greenhouse gases, they also have better air quality,” Hsu says. “The hopeful thing is that as we remake our buildings for decarbonization, a lot of technologies are so superior that our lives will be better, too.”

Hsu’s projects frequently yield practical, concrete steps for climate action. In New York City, Hsu found, mandating the measurement of energy use lowered consumption 13 to 14 percent over four years. In a 2017 paper, he and his co-authors studied which climate actions would most reduce carbon emissions in 11 major U.S. cities. Cleveland and Denver can greatly reduce use of fossil fuels, for example, while better energy efficiency in new homes would make a big difference in Houston and Phoenix.

“You have to figure out what works and doesn’t work,” Hsu says. “I try to figure out how we can have cleaner and healthier cities that will be more sustainable, equitable, and more just.”

Significantly, Hsu does not just prescribe climate action elsewhere; he also works for change at MIT. He helped create a zero-emissions roadmap for MIT’s School of Architecture and Planning as well as for the Department of Urban Studies and Planning, where he is an associate professor of urban and environmental planning. He is also part of Fast Forward: MIT’s Climate Action Plan for the Decade, serving in its Climate Education Working Group.

“People can get depressed about how you tackle this large, civilization-wide problem, and then you realize lots of other people care about this. Lots of smart people at MIT and other places are working on it, and there are lots of things we can do, individually and collectively,” Hsu says.

And as Hsu’s work shows, lots of people tackle the climate crisis by working on local issues. For his research and teaching, Hsu was granted tenure at MIT this year.

Urban planning by way of Amherst

Hsu studies cities, but is not from one. Growing up in the college town of Amherst, Massachusetts, Hsu could walk out of his home and “be in the woods in a minute.” He attended Yale University as an undergraduate, majoring in physics, and started venturing into New York City with friends. After graduation, Hsu moved there and got a job.

Or three jobs, really. Over the next 10 years, Hsu worked as an engineer, in real estate finance, and for the New York City government as a vice president at the NYC Economic Development Corporation, where he helped manage the city’s post-September 11 redevelopment of the East River waterfront. Eventually, he decided to pursue graduate studies in urban planning, building on his experience.

“Engineering, finance, and government, you put those three things together and they’re basically urban planning,” Hsu says. “It took me a decade after school to realize urban planning is a thing I could do. I say to students, ‘You’re lucky, you have this major. I never had this in college.’”

As a graduate student, Hsu received an MS from Cornell University in applied and engineering physics, then an MSc from the London School of Economics and Political Science in city design and social science, before getting his PhD in urban design and planning at the University of Washington in Seattle. He served on the faculty at the University of Pennsylvania before moving to MIT in 2015.

Hsu studies an array of topics involving local governments and climate policy. He has published multiple papers on Philadelphia’s attempts to refurbish its stormwater infrastructure, for example. His studies about retrofitted apartment buildings are forthcoming as three papers. A 2022 Hsu paper, “Straight out of Cape Cod,” looked at the origins of Community Choice Aggregation, an approach to purchasing clean energy that started in a few Massachusetts communities and now involves 11 percent of the U.S. population.

“I joke that the ideal reader of my articles is not a mayor and it’s not an academic, it’s a midcareer bureaucrat trying to implement a policy,” Hsu says.

Actually, that’s no mere joke. At MIT, City of Cambridge officials have contacted Hsu to discuss his studies of New York and Philadelphia, something he welcomes. Even if not in local government himself, Hsu says, “I know I can do research that might move some of those projects along. It’s my way of trying to contribute to the world outside of academia.”

“It’s all important”

There is still another way Hsu contributes to climate action: by influencing what MIT does. He helped craft the climate policies of the School of Architecture and Planning and the Department of Urban Studies and Planning, which aim to produce net zero emissions for the department through the use of tools like carbon offsets for travel. As part of the Institute-wide Climate Education Working Group convened under the Fast Forward plan, Hsu is busy thinking about how to integrate climate studies into MIT education.

“Our Fast Forward team does great work together. David McGee, Lisa Ghaffari, Kate Trimble, Antje Danielson, Curt Newton, they’re so engaged,” says Hsu. “Our students are terrifically hard-working and skilled and care about climate change, but don’t know how to affect it necessarily. We want to give them on-ramps and skills.”

He is also chair of the fast-growing 11-6 major that combines urban studies and planning with computer science.

“Climate change is happening so fast, and is so big, that every job could be climate-change related,” Hsu says. “If people leave MIT with a higher base understanding of climate change, then you can be a lawyer or consultant or work in finance or computer science and address the unsolved problems.”

Indeed, Hsu thinks many students, who he believes increasingly recognize the severity of climate change, need to prioritize the battle against it when shaping their careers.

“Our fight against climate change is not going to be over by 2050, but 25 years from now, we’re going to know if we transitioned to a net-zero-emitting society for the sake of humanity,” Hsu says. “The students are more aware than ever that climate change is going to dominate their lives. I want students to look back with satisfaction that they helped society.”

More bluntly, he says: “Are you going to say, ‘Oh, I made some money and enhanced my career, but the planet’s going to be destroyed’? Or ideally will you find a job that’s satisfying and can support your future hopes for yourself and your family, and also save the planet? Because I think there are a lot of [job] options like that out there.”

Hsu adds, “We’re going to need people pulling in different directions. It’s all important. That’s the message to our students. Go find something you think is important and use your skills. We’re going to need that many people to work on climate change.”

A carbon-lite atmosphere could be a sign of water and life on other terrestrial planets, MIT study finds

Thu, 12/28/2023 - 5:00am

Scientists at MIT, the University of Birmingham, and elsewhere say that astronomers’ best chance of finding liquid water, and even life on other planets, is to look for the absence, rather than the presence, of a chemical feature in their atmospheres.

The researchers propose that if a terrestrial planet has substantially less carbon dioxide in its atmosphere compared to other planets in the same system, it could be a sign of liquid water — and possibly life — on that planet’s surface.

What’s more, this new signature is within the sights of NASA’s James Webb Space Telescope (JWST). While scientists have proposed other signs of habitability, those features are challenging if not impossible to measure with current technologies. The team says this new signature, of relatively depleted carbon dioxide, is the only sign of habitability that is detectable now.

“The Holy Grail in exoplanet science is to look for habitable worlds, and the presence of life, but all the features that have been talked about so far have been beyond the reach of the newest observatories,” says Julien de Wit, assistant professor of planetary sciences at MIT. “Now we have a way to find out if there’s liquid water on another planet. And it’s something we can get to in the next few years.”

The team’s findings appear today in Nature Astronomy. De Wit co-led the study with Amaury Triaud of the University of Birmingham in the UK. Their MIT co-authors include Benjamin Rackham, Prajwal Niraula, Ana Glidden, Oliver Jagoutz, Matej Peč, Janusz Petkowski, and Sara Seager, along with Frieder Klein at the Woods Hole Oceanographic Institution (WHOI), Martin Turbet of École Polytechnique in France, and Franck Selsis of the Laboratoire d’astrophysique de Bordeaux.

Beyond a glimmer

Astronomers have so far detected more than 5,200 worlds beyond our solar system. With current telescopes, astronomers can directly measure a planet’s distance to its star and the time it takes to complete an orbit. Those measurements can help scientists infer whether a planet is within a habitable zone. But there’s been no way to directly confirm whether a planet is indeed habitable, meaning that liquid water exists on its surface.

Across our own solar system, scientists can detect the presence of liquid oceans by observing “glints” — flashes of sunlight that reflect off liquid surfaces. These glints, or specular reflections, have been observed, for instance, on Saturn’s largest moon, Titan, which helped to confirm the moon’s large lakes.

Detecting a similar glimmer in far-off planets, however, is out of reach with current technologies. But de Wit and his colleagues realized there’s another habitable feature close to home that could be detectable in distant worlds.

“An idea came to us, by looking at what’s going on with the terrestrial planets in our own system,” Triaud says.

Venus, Earth, and Mars share similarities, in that all three are rocky and occupy a relatively temperate region with respect to the sun. Earth is the only planet among the trio that currently hosts liquid water. And the team noted another obvious distinction: Earth has significantly less carbon dioxide in its atmosphere.

“We assume that these planets were created in a similar fashion, and if we see one planet with much less carbon now, it must have gone somewhere,” Triaud says. “The only process that could remove that much carbon from an atmosphere is a strong water cycle involving oceans of liquid water.”

Indeed, the Earth’s oceans have played a major and sustained role in absorbing carbon dioxide. Over hundreds of millions of years, the oceans have taken up a huge amount of carbon dioxide, nearly equal to the amount that persists in Venus’ atmosphere today. This planetary-scale effect has left Earth’s atmosphere significantly depleted of carbon dioxide compared to its planetary neighbors.

“On Earth, much of the atmospheric carbon dioxide has been sequestered in seawater and solid rock over geological timescales, which has helped to regulate climate and habitability for billions of years,” says study co-author Frieder Klein.

The team reasoned that if a similar depletion of carbon dioxide were detected in a far-off planet, relative to its neighbors, this would be a reliable signal of liquid oceans and life on its surface.

“After reviewing extensively the literature of many fields from biology, to chemistry, and even carbon sequestration in the context of climate change, we believe that indeed if we detect carbon depletion, it has a good chance of being a strong sign of liquid water and/or life,” de Wit says.

A roadmap to life

In their study, the team lays out a strategy for detecting habitable planets by searching for a signature of depleted carbon dioxide. Such a search would work best for “peas-in-a-pod” systems, in which multiple terrestrial planets, all about the same size, orbit relatively close to each other, similar to our own solar system. The first step the team proposes is to confirm that the planets have atmospheres, by simply looking for the presence of carbon dioxide, which is expected to dominate most planetary atmospheres.

“Carbon dioxide is a very strong absorber in the infrared, and can be easily detected in the atmospheres of exoplanets,” de Wit explains. “A signal of carbon dioxide can then reveal the presence of exoplanet atmospheres.”

Once astronomers determine that multiple planets in a system host atmospheres, they can move on to measure their carbon dioxide content, to see whether one planet has significantly less than the others. If so, the planet is likely habitable, meaning that it hosts significant bodies of liquid water on its surface.
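As a rough sketch of that comparison step, the following flags any planet whose measured CO2 fraction falls well below that of its neighbors. The function name, threshold, and abundance values are illustrative assumptions, not from the study:

```python
from statistics import median

def flag_co2_depleted(co2_by_planet, depletion_factor=10.0):
    """Return planets whose atmospheric CO2 fraction is at least
    `depletion_factor` times below the median of their neighbors."""
    flagged = []
    for name, co2 in co2_by_planet.items():
        # Compare each planet against the median of the other planets.
        peers = median(v for k, v in co2_by_planet.items() if k != name)
        if co2 * depletion_factor <= peers:
            flagged.append(name)
    return flagged

# Hypothetical values loosely patterned on the solar system, where
# Venus and Mars have CO2-dominated atmospheres while Earth's CO2 has
# largely been drawn down into oceans and rock.
abundances = {"planet_b": 0.96, "planet_c": 0.0004, "planet_d": 0.95}
depleted = flag_co2_depleted(abundances)   # flags "planet_c"
```

In this sketch, the strongly depleted planet is the candidate for hosting liquid water, which observers would then follow up on.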

But habitable conditions don’t necessarily mean that a planet is inhabited. To see whether life might actually exist, the team proposes that astronomers look for another feature in a planet’s atmosphere: ozone.

On Earth, the researchers note, plants and some microbes contribute to drawing carbon dioxide out of the atmosphere, although not nearly as much as the oceans do. Nevertheless, as part of this process, these lifeforms emit oxygen, which reacts with the sun’s photons to transform into ozone — a molecule that is far easier to detect than oxygen itself.

The researchers say that if a planet’s atmosphere shows signs of both ozone and depleted carbon dioxide, it is likely both habitable and inhabited.

“If we see ozone, chances are pretty high that it’s connected to carbon dioxide being consumed by life,” Triaud says. “And if it’s life, it’s glorious life. It would not be just a few bacteria. It would be a planetary-scale biomass that’s able to process a huge amount of carbon, and interact with it.”

The team estimates that NASA’s James Webb Space Telescope would be able to measure carbon dioxide, and possibly ozone, in nearby multiplanet systems such as TRAPPIST-1 — a seven-planet system orbiting a small, cool star just 40 light years from Earth.

“TRAPPIST-1 is one of only a handful of systems where we could do terrestrial atmospheric studies with JWST,” de Wit says. “Now we have a roadmap for finding habitable planets. If we all work together, paradigm-shifting discoveries could be done within the next few years.”
