MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Bigger datasets aren’t always better
Determining the least expensive path for a new subway line underneath a metropolis like New York City is a colossal planning challenge — involving thousands of potential routes through hundreds of city blocks, each with uncertain construction costs. Conventional wisdom suggests extensive field studies across many locations would be needed to determine the costs associated with digging below certain city blocks.
Because these studies are costly to conduct, a city planner would want to perform as few as possible while still gathering the most useful data for making an optimal decision.
With almost countless possibilities, how would they know where to start?
A new algorithmic method developed by MIT researchers could help. Their mathematical framework provably identifies the smallest dataset that guarantees finding the optimal solution to a problem, often requiring fewer measurements than traditional approaches suggest.
In the case of the subway route, this method considers the structure of the problem (the network of city blocks, construction constraints, and budget limits) and the uncertainty surrounding costs. The algorithm then identifies the minimum set of locations where field studies would guarantee finding the least expensive route. The method also identifies how to use this strategically collected data to find the optimal decision.
This framework applies to a broad class of structured decision-making problems under uncertainty, such as supply chain management or electricity network optimization.
“Data are one of the most important aspects of the AI economy. Models are trained on more and more data, consuming enormous computational resources. But most real-world problems have structure that can be exploited. We’ve shown that with careful selection, you can guarantee optimal solutions with a small dataset, and we provide a method to identify exactly which data you need,” says Asu Ozdaglar, MathWorks Professor and head of the MIT Department of Electrical Engineering and Computer Science (EECS), deputy dean of the MIT Schwarzman College of Computing, and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).
Ozdaglar, co-senior author of a paper on this research, is joined by co-lead authors Omar Bennouna, an EECS graduate student, and his brother Amine Bennouna, a former MIT postdoc who is now an assistant professor at Northwestern University; and co-senior author Saurabh Amin, co-director of the Operations Research Center, a professor in the MIT Department of Civil and Environmental Engineering, and a principal investigator in LIDS. The research will be presented at the Conference on Neural Information Processing Systems.
An optimality guarantee
Much of the recent work in operations research focuses on how to best use data to make decisions, but this assumes these data already exist.
The MIT researchers started by asking a different question — what are the minimum data needed to optimally solve a problem? With this knowledge, one could collect far fewer data to find the best solution, spending less time, money, and energy conducting experiments and training AI models.
The researchers first developed a precise geometric and mathematical characterization of what it means for a dataset to be sufficient. Every possible set of costs (travel times, construction expenses, energy prices) makes some particular decision optimal. These “optimality regions” partition the space of possible costs. A dataset is sufficient if it can determine which region contains the true costs.
This characterization forms the foundation of a practical algorithm the researchers developed to identify datasets that guarantee finding the optimal solution.
Their theoretical exploration revealed that a small, carefully selected dataset is often all one needs.
“When we say a dataset is sufficient, we mean that it contains exactly the information needed to solve the problem. You don’t need to estimate all the parameters accurately; you just need data that can discriminate between competing optimal solutions,” says Amine Bennouna.
Building on these mathematical foundations, the researchers developed an algorithm that finds the smallest sufficient dataset.
Capturing the right data
To use this tool, one inputs the structure of the task, such as the objective and constraints, along with the information they know about the problem.
For instance, in supply chain management, the task might be to reduce operational costs across a network of dozens of potential routes. The company may already know that some shipment routes are especially costly, but lack complete information on others.
The researchers’ iterative algorithm works by repeatedly asking, “Is there any scenario that would change the optimal decision in a way my current data can't detect?” If yes, it adds a measurement that captures that difference. If no, the dataset is provably sufficient.
This algorithm pinpoints the subset of locations that need to be explored to guarantee finding the minimum-cost solution.
Then, after collecting those data, the user can feed them to another algorithm the researchers developed, which finds the optimal solution. In this case, that would be the shipment routes to include in a cost-optimal supply chain.
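The flavor of that loop can be captured in a short sketch. The following Python snippet is a toy illustration only. It assumes a simplified setup (pick the cheapest of a few candidate routes whose costs are known only to within intervals; each "field study" reveals one route's true cost) and an ad hoc greedy probing rule; it is not the researchers' algorithm, but it shows how measurement can stop as soon as no remaining uncertainty can change the optimal decision.

```python
# Toy sketch of the "measure only until the decision is pinned down" idea.
# The interval-cost setup and the greedy widest-interval rule are illustrative
# assumptions, not the researchers' actual algorithm or code.

def guaranteed_best(bounds, measured):
    """Return the route that is optimal in every cost scenario consistent with
    the data so far, or None if some consistent scenarios still disagree."""
    lo = [measured.get(i, b[0]) for i, b in enumerate(bounds)]
    hi = [measured.get(i, b[1]) for i, b in enumerate(bounds)]
    n = len(bounds)
    for i in range(n):
        # Route i is guaranteed cheapest if its worst case beats every rival's best case.
        if all(hi[i] <= lo[j] for j in range(n) if j != i):
            return i
    return None

def select_measurements(bounds, true_costs):
    """Keep "running field studies" (revealing one true cost at a time) until
    no remaining uncertainty can change the optimal decision."""
    measured = {}
    while guaranteed_best(bounds, measured) is None:
        unmeasured = [i for i in range(len(bounds)) if i not in measured]
        # Heuristic choice for this sketch: probe the most uncertain route next.
        i = max(unmeasured, key=lambda k: bounds[k][1] - bounds[k][0])
        measured[i] = true_costs[i]
    return measured, guaranteed_best(bounds, measured)

if __name__ == "__main__":
    bounds = [(4.0, 9.0), (6.0, 7.0), (8.0, 12.0), (5.0, 10.0)]  # (low, high) per route
    true_costs = [5.5, 6.5, 9.0, 8.0]                            # unknown until measured
    data, best = select_measurements(bounds, true_costs)
    print("routes measured:", sorted(data))      # only 2 of 4 in this example
    print("guaranteed-optimal route:", best)
```

In this toy run only two of the four routes end up being measured, echoing the article's point that a small, targeted dataset can already certify the optimal choice.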
“The algorithm guarantees that, for whatever scenario could occur within your uncertainty, you’ll identify the best decision,” Omar Bennouna says.
The researchers’ evaluations revealed that, using this method, it is possible to guarantee an optimal decision with a much smaller dataset than would typically be collected.
“We challenge this misconception that small data means approximate solutions. These are exact sufficiency results with mathematical proofs. We’ve identified when you’re guaranteed to get the optimal solution with very little data — not probably, but with certainty,” Amin says.
In the future, the researchers want to extend their framework to other types of problems and more complex situations. They also want to study how noisy observations could affect dataset optimality.
“I was impressed by the work’s originality, clarity, and elegant geometric characterization. Their framework offers a fresh optimization perspective on data efficiency in decision-making,” says Yao Xie, the Coca-Cola Foundation Chair and Professor at Georgia Tech, who was not involved with this work.
Small, inexpensive hydrophone boosts undersea signals
Researchers at MIT Lincoln Laboratory have developed a first-of-its-kind hydrophone built around a simple, commercially available microphone. The device, leveraging a common microfabrication technology known as microelectromechanical systems (MEMS), is significantly smaller and less expensive than current hydrophones, yet offers equal or greater sensitivity. The hydrophone could have applications for the U.S. Navy, as well as industry and the scientific research community.
"Given the broad interest from the Navy in low-cost hydrophones, we were surprised that this design had not been pursued before," says Daniel Freeman, who leads this work in the Advanced Materials and Microsystems Group. "Hydrophones are critical for undersea sensing in a variety of applications and platforms. Our goal was to demonstrate that we could develop a device at reduced size and cost without sacrificing performance."
Essentially an underwater microphone, a hydrophone is an instrument that converts sound waves into electrical signals, allowing us to "hear" and record sounds in the ocean and other bodies of water. These signals can later be analyzed and interpreted, providing valuable information about the underwater environment.
MEMS devices are incredibly small systems — ranging from a few millimeters down to microns (smaller than a human hair) — with tiny moving parts. They are used in a variety of sensors, including microphones, gyroscopes, and accelerometers. The small size of MEMS sensors has made them crucial in various applications, from smartphones to medical devices. Currently, no commercially available hydrophones utilize MEMS technology, so the team set out to understand whether such a design was possible.
With funding from the Office of the Under Secretary of War for Research and Engineering to develop a novel hydrophone, the team first planned to use microfabrication, an area of expertise at the laboratory, to develop their device. However, that approach proved to be too costly and involved to pursue. This obstacle led the team to pivot and build their hydrophone around a commercially available MEMS microphone. "We had to come up with an inexpensive alternative without giving up performance, and this is what led us to build the design around a microphone, which to our knowledge is a novel approach," Freeman explains.
In collaboration with researchers at Tufts University, as well as industry partners SeaLandAire Technologies and Navmar Applied Sciences Corp., the team made the hydrophone by encapsulating the MEMS microphone in a polymer with low permeability to water while leaving an air cavity around the microphone’s diaphragm (the component of the microphone that vibrates in response to sound waves). One key challenge that they faced was the possibility of losing too much signal to the packaging and the air cavity around the MEMS microphone. After a substantial amount of simulation, design iterations, and testing, the team found that the signal lost from incorporating air into the device was compensated for by the very high sensitivity of the MEMS microphone itself. As a result, the device was able to perform at a sensitivity comparable to high-end hydrophones at depths down to 400 feet and temperatures as low as 40 degrees Fahrenheit. To date, the collaborative effort has involved computational modeling, system electronics design and fabrication, prototype unit manufacturing, and calibrator and pool testing.
In July, eight researchers traveled to Seneca Lake in New York to test a variety of devices. The hydrophones were lowered to increasing depths in the water — 100 feet at first, then incrementally deeper, down to 400 feet. At each depth, acoustic signals of varying frequencies were transmitted for the instrument to record. The transmitted signals were calibrated to a known level so the team could then measure the actual sensitivity of the hydrophones across different frequencies. When sound hits the hydrophone’s diaphragm, it generates an electrical signal that is amplified, digitized, and transmitted to a recording device at the surface for post-test data analysis. The team used both commercial underwater cables and Lincoln Laboratory’s fiber-based sensing arrays.
"This was our first field test in deep water, and therefore it was an important milestone in demonstrating the ability to operate in a realistic environment, rather than the water chambers that we’d been using," Freeman says. "Our hope was that the performance of our device would match what we've seen in our water tank, where we tested at high hydrostatic pressure across a range of frequencies. In other words, we hoped this test would provide results that confirm our predictions based on lab-based testing."
The test results were excellent, showing that the sensitivity and signal-to-noise ratio were within a few decibels of the quietest ocean state, known as sea state zero. Moreover, this performance was achieved in deep water, at 400 feet, and at very low temperatures, around 40 degrees Fahrenheit.
The prototype hydrophone has applications across a wide variety of commercial and military use-cases owing to its small size, efficient power draw, and low cost.
"We're in discussion with the Department of War about transitioning this technology to the U.S. government and industry," says Freeman. "There is still some room for optimizing the design, but we think we've demonstrated that this hydrophone has the key benefits of being robust, high performance, and very low cost."
Q&A: On the ethics of catastrophe
At first glance, student Jack Carson might appear too busy to think beyond his next problem set, much less tackle major works of philosophy. The sophomore, who plans to double major in electrical engineering with computing and mathematics, has been both an officer in Impact@MIT and a Social and Ethical Responsibilities of Computing (SERC) Fellow in the MIT Schwarzman College of Computing — and is an active member of Concourse.
But this fall, Carson was awarded first place in the Elie Wiesel Prize in Ethics Essay Contest for his entry, “We Know Only Men: Reading Emmanuel Levinas On The Rez,” a comparative exploration of Jewish and Cherokee ethical thought. The deeply researched essay links Carson’s hometown in Adair County, Oklahoma, to the village of Le Chambon sur Lignon, France, and attempts to answer the question: “What is to be done after catastrophe?” Carson explains in this interview.
Q: The prompt for your entry in the Elie Wiesel Prize in Ethics Essay Contest was: “What challenges awaken your conscience? Is it the conflicts in American society? An international crisis? Maybe a difficult choice you currently face or a hard decision you had to make?” How did you land on the topic you’d write about?
A: It was really an insight that just came to me as I struggled with reading Levinas, who is notoriously challenging. The Talmud is a tradition very far from my own, but, as I read Levinas’ lectures on the Talmud, I realized that his project is one that I can relate to: preserving a culture that has been completely displaced, if not destroyed. The more I read of Levinas’ work, the more I realized that his philosophy of radical alterity — that you must act when confronted with another person who you can never really comprehend — arose naturally from his efforts to show how to preserve Jewish cultural continuity. In the same, if less articulated, way, the life I’ve witnessed in Eastern Oklahoma has led people to “act first, think later” — to use a Levinasian term. So it struck me that similar situations of displaced cultures had led to a similar ethical approach. Given that Levinas was writing about Jewish life in Eastern Europe and I was immersed in a heavily Native American culture, the congruence of the two ethical approaches seemed surprising. I thought, perhaps rightly, that it showed something essentially human that could be abstracted away from the very different cultural settings.
Q: Your entry for the contest is a meditation on the ethical similarities between ga-du-gi, the Cherokee concept of communal effort toward the betterment of all; the actions of the Huguenot inhabitants of the French village of Le Chambon sur Lignon (who protected thousands of Jewish refugees during Nazi occupation); and the Jewish philosopher Emmanuel Levinas’ interpretation of the Talmud, which essentially posits that action must come first in an ethical framework, not second. Did you find your own personal philosophy changing as a result of engaging with these ideas — or, perhaps more appropriately — have you noticed your everyday actions changing?
A: Yes, definitely my personal philosophy has been affected by thinking through Levinas’ demanding approach. Like a lot of people, I sit around thinking through what ethical approach I prefer. Should I be a utilitarian? A virtue theorist? A Kantian? Something else? Levinas had no time for this. He urged acting, not thinking, when confronted with human need. I wrote about the resistance movement of Le Chambon because those brave citizens also just acted without thinking — in a very Levinasian way. That seems a strange thing to valorize, as we are often taught to think before we act, and this is probably good advice! But sometimes you can think your way right out of helping people in need.
Levinas instructed that you should act in the face of the overwhelming need of what he would call the “Other.” That’s a rather intimidating term, but I read it as meaning just “other people.” The Le Chambon villagers, who protected Jews fleeing the Nazis, and the Cherokees lived this, acting in an almost pre-theoretical way to help people in need, which is really quite beautiful. And for Levinas, I’d note that the problematic word is “because.” And I wrote about how “because” is indeed a thin reed that the murderers will always break.
Put a little differently, “because” suggests that you have to have “reasons” that complete the phrase and make it coherent. This might seem almost a matter of logic. But Levinas says no. Because the genocide starts when the reasons are attacked. For example, you might believe we should help some persecuted group “because” they are really just like you and me. And that’s true, of course. But Levinas knows that the killers always start by dehumanizing their targets, so they convince you that the victims are not really like you at all, but are more like “vermin” or “insects.” So the “because” condition fails, and that’s when the murdering starts. So you should just act and then think, says Levinas, and this immunizes you from that rhetorical poison. It’s a counterintuitive idea, but powerful when you really think about it.
Q: You open with a particularly striking question: What is to be done after catastrophe? Do you feel more sure of your answer, now that you’ve deeply considered these disparate responses to catastrophe — or do you have more questions?
A: I am still not sure what to do after world-historical catastrophes like genocides. I guess I’d say there is nothing to do — other than maintain a kind of radical hope that has no basis in evidence. “Catastrophes” like those I write about — the Holocaust, the Trail of Tears — are more than just acts of physical destruction. They destroy whole ways of being and uproot whole systems of meaning-making. Cultural concepts become void overnight, as their preconditions are destroyed.
There is a great book by Jonathan Lear called “Radical Hope.” It begins with a discussion of a Plains Indian leader named Plenty Coups. After removal to the reservation in the 19th century, he is quoted as saying, “But when the buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this nothing happened.” Lear ponders what that last sentence is all about. What did Plenty Coups mean when he said “after this nothing happened”? Obviously, life’s daily activities still happened: births, deaths, eating, drinking, and such. So what does it mean? It’s perplexing. In the end, Lear concludes that Plenty Coups was making an ontological statement, in which he meant that all of the things that gave life meaning — all of those things that make the word “happen” actually signify something — had been erased. Events occurred, but didn’t “happen” because they fell into a world that to Plenty Coups lacked any sense at all. And Plenty Coups was not wrong about this; for him and his people, the world lost intelligibility. Nonetheless, Plenty Coups continued to lead his people, even amidst great deprivation, even though he never found a new basis for belief. He only had “radical hope” — which gave Lear’s book its name — that some new way of life might arise over time. I guess my answer to “what happens after catastrophe?” is just, well, “nothing happens” in the sense Plenty Coups meant it. And “radical hope” is all you get, if anything.
Q: There’s a memorable scene in your essay in which, during a visit to your community cemetery near Stilwell, your grandfather points out the burial plots that hold both your ancestors, and that will eventually hold him and you. You describe this moment beautifully as a comforting and connective chain linking you to both past and future communities. How does being part of that chain shape your life?
A: I feel this sense of knowing where you will be buried — alongside all of your ancestors — is a great gift. That sounds a little odd, but it gives a rootedness that is very removed from most people’s experience today. And the cemetery is just a stand-in for a whole cultural structure that gives me a sense of role and responsibility. The lack of these, I think, creates a real sense of alienation, and this alienation is the condition of our age. So I feel lucky to have a strong sense of place and a place that will always be home. Lincoln talked about the “mystic chords of memory.” I feel this very mystical attachment to Oklahoma. The idea that this road or this community is one where every member of your family for generations has lived — or even if they moved away, always considered “home” — is very powerful. It always gives an answer to “Who are you?” That’s a hard question, but I can always say, “We are from Adair County,” and this is a sufficient answer. And back home, people would instantly nod their heads at the adequacy of this response. As I said, it’s a little mystical, but maybe that’s a strength, not a weakness.
Q: People might be surprised to learn that the winner of an essay contest focusing on ethics is actually not an English or philosophy major, but is instead in EECS. What areas and current issues in the field do you find interesting from an ethical perspective?
A: I think the pace of technological change — and society’s struggle to keep up — shows you how important philosophy, literature, history, and the liberal arts really are. Whether it’s algorithmic bias affecting real lives, or questions about what values we encode in AI systems, these aren’t just technical problems, but fundamentally about who we are and what we owe each other. It is true that I’m majoring in 6-5 [electrical engineering with computing] and 18 [mathematics], and of course these disciplines are extraordinarily important. But the humanities are something very important to me, as they do answer fundamental questions about who we are, what we owe to others, why people act this way or that, and how we should think through social issues. I despair when I hear brilliant engineers say they read nothing longer than a blog post. If anything, the humanities should be more important overall at MIT.
When I was younger, I just happened across a discussion of CP Snow’s famous essay on the “Two Cultures.” In it, he talks about his scientist friends who had never read Shakespeare, and his literary friends who couldn’t explain thermodynamics. In a modest way, I’ve always thought that I’d like my education to be one that allowed me to participate in the two cultures. The essay on Levinas is my attempt to pursue this type of education.
Four from MIT named 2026 Rhodes Scholars
Vivian Chinoda ’25, Alice Hall, Sofia Lara, and Sophia Wang ’24 have been selected as 2026 Rhodes Scholars and will begin fully funded postgraduate studies at the University of Oxford in the U.K. next fall. Hall, Lara, and Wang are U.S. Rhodes Scholars; Chinoda was awarded the Rhodes Zimbabwe Scholarship.
The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.
“MIT students never cease to amaze us with their creativity, vision, and dedication,” says Professor Taylor Perron, who co-chairs the committee along with Professor Nancy Kanwisher. “This is especially true of this year’s Rhodes scholars. It’s remarkable how they are simultaneously so talented in their respective fields and so adept at communicating their goals to the world. I look forward to seeing how these outstanding young leaders shape the future. It’s an honor to work with such talented students.”
Vivian Chinoda ’25
Vivian Chinoda, from Harare, Zimbabwe, was named a Rhodes Zimbabwe Scholar on Oct. 10. Chinoda graduated this spring with a BS in business analytics. At Oxford, she hopes to pursue the MSc in social data science and a master’s degree in public policy. Chinoda aims to foster economic development and equitable resource access for Zimbabwean communities by promoting social innovation and evidence-based policy.
At MIT, Chinoda researched the impacts of the EU’s General Data Protection Regulation on stakeholders and key indicators, such as innovation, with the Institute for Data, Systems, and Society. She supported the Digital Humanities Lab and MIT Ukraine in building a platform to connect and fundraise for exiled Ukrainian scientists. With the MIT Office of Sustainability, Chinoda co-led the plan for a campus transition to a fully electric vehicle fleet, advancing the Institute’s Climate Action Plan.
Chinoda’s professional experience includes roles as a data science and research intern at Adaviv (a controlled-environment agriculture startup) and a product manager at Red Hat, developing AI tools for open-source developers.
Beyond academics, Chinoda served as first-year outreach chair and vice president of the African Students’ Association, where she co-founded the Impact Fund, raising over $30,000 to help members launch social impact initiatives in their countries. She was a scholar in the Social and Ethical Responsibilities of Computing (SERC) program, studying big-data ethics across sectors like criminal justice and health care, and a PKG social impact internship participant. Chinoda also enjoys fashion design, which she channeled into reviving the MIT Black Theatre Guild, earning her the 2025 Laya and Jerome B. Wiesner Student Art Award.
Alice Hall
Alice Hall is a senior from Philadelphia studying chemical engineering with a minor in Spanish. At Oxford, she will earn a DPhil in engineering, focusing on scaling sustainable heating and cooling technologies. She is passionate about bridging technology, leadership, and community to address the climate crisis.
Hall’s research journey began in the Lienhard Group, developing computational and techno-economic models of electrodialysis for nutrient reclamation from brackish groundwater. She then worked in the Langer Lab, investigating alveolar-capillary barrier function to enhance lung viability for transplantation. During a summer in Madrid, she collaborated with the European Space Agency to optimize surface treatments for satellite materials.
Hall’s current research in the Olivetti Group, as part of the MIT Climate Project, examines the manufacturing scalability of early-stage clean energy solutions. Hall has gained industry experience through internships with Johnson and Johnson and Procter and Gamble.
Hall represents the student body as president of MIT’s Undergraduate Association. She also serves on the Presidential Advisory Cabinet, the executive boards of the Chemical Engineering Undergraduate Student Advisory Board and MIT’s chapter of the American Institute of Chemical Engineers, the Corporation Joint Advisory Committee, the Compton Lectures Advisory Committee, and the MIT Alumni Association Board of Directors as an invited guest.
She is an active member of the Gordon-MIT Engineering Leadership Program, the Black Students’ Union, and the National Society of Black Engineers. As a member of the varsity basketball team, she earned both NEWMAC and D3hoops.com Region 2 Rookie of the Year honors in 2023.
Sofia Lara
Hailing from Los Angeles, Sofia Lara is a senior majoring in biological engineering with a minor in Spanish. As a Rhodes Scholar at Oxford, she will pursue a DPhil in clinical medicine, leveraging UK Biobank data to develop sex-stratified dosing protocols and safety guidelines for the NHS.
Lara aspires to transform biological complexity from medicine’s blind spots into a therapeutic superpower where variability reveals hidden possibilities and precision medicine becomes truly precise.
At the Broad Institute of MIT and Harvard, Lara investigates the cGAS-STING immune pathway in cancer. Her thesis, a comprehensive genome-wide association study illuminating the role of STING variation in disease pathology, aims to expand understanding of STING-linked immune disorders.
Lara co-founded the MIT-Harvard Future of Biology Conference, convening multidisciplinary researchers to interrogate vulnerabilities in cancer biology. As president of MIT Baker House, she steered community initiatives and executed the legendary Piano Drop, mobilizing hundreds of students in an enduring ritual of collective resilience. Lara captains the MIT Archery Team, serves as music director for MIT Catholic Community, and channels empathy through hand-stitched crocheted octopuses for pediatric patients at the Massachusetts General Hospital.
Sophia Wang ’24
Sophia Wang, from Woodbridge, Connecticut, graduated with a BS in aerospace engineering and a concentration in the design of highly autonomous systems. At Oxford, she will pursue an MSc in mathematical and theoretical physics, followed by an MSc in global governance and diplomacy.
As an undergraduate, Wang conducted research with the MIT Space Telecommunications Astronomy Radiation (STAR) Lab and the MIT Media Lab’s Tangible Media Group and Center for Bits and Atoms. She also interned at the NASA Jet Propulsion Laboratory, working on engineering projects for exoplanet detection missions, the Mars Sample Return mission, and terrestrial proofs-of-concept for self-assembly in space.
Since graduating from MIT, Wang has been engaged in a number of projects. In Bhutan, she contributes to national technology policy centered on mindful development. In Japan, she is a founding researcher at the Henkaku Center, where she is creating an international network of academic institutions. As a venture capitalist, she recently worked with commercial space stations on the effort to replace the International Space Station, which will be decommissioned in 2030. Wang’s creative prototyping tools, such as a modular electromechanical construction kit, are used worldwide through the Fab Foundation, a network of 2,500+ community digital fabrication labs.
An avid cook, Wang teamed up with friends to create Mince, a pop-up restaurant that serves fine-dining meals to MIT students. Through MIT Global Teaching Labs, Wang taught STEM courses in Kazakhstan and Germany, and she taught digital fabrication and 3D printing workshops across the U.S. as a teacher and cyclist with MIT Spokes.
Study suggests 40Hz sensory stimulation may benefit some Alzheimer’s patients for years
A new research paper documents the outcomes of five volunteers who continued to receive 40Hz light and sound stimulation for around two years after participating in an MIT early-stage clinical study of the potential Alzheimer’s disease (AD) therapy. The results show that for the three participants with late-onset Alzheimer’s disease, several measures of cognition remained significantly higher than comparable Alzheimer’s patients in national databases. Moreover, in the two late-onset volunteers who donated plasma samples, levels of Alzheimer’s biomarker tau proteins were significantly decreased.
The three volunteers who experienced these benefits were all female. The two other participants, both males with early-onset forms of the disease, did not exhibit significant benefits after two years. The dataset, while small, represents the longest-term test so far of the safe, noninvasive treatment method (called GENUS, for gamma entrainment using sensory stimuli), which is also being evaluated in a nationwide clinical trial run by MIT-spinoff company Cognito Therapeutics.
“This pilot study assessed the long-term effects of daily 40Hz multimodal GENUS in patients with mild AD,” the authors wrote in an open-access paper in Alzheimer's & Dementia: The Journal of the Alzheimer’s Association. “We found that daily 40Hz audiovisual stimulation over 2 years is safe, feasible, and may slow cognitive decline and biomarker progression, especially in late-onset AD patients.”
Diane Chan, a former research scientist in The Picower Institute for Learning and Memory and a neurologist at Massachusetts General Hospital, is the study’s lead and co-corresponding author. Picower Professor Li-Huei Tsai, director of The Picower Institute and the Aging Brain Initiative at MIT, is the study’s senior and co-corresponding author.
An “open label” extension
In 2020, MIT enrolled 15 volunteers with mild Alzheimer’s disease in an early-stage trial to evaluate whether an hour a day of 40Hz light and sound stimulation, delivered via an LED panel and speaker in their homes, could deliver clinically meaningful benefits. Several studies in mice had shown that the sensory stimulation increases the power and synchrony of 40Hz gamma frequency brain waves, preserves neurons and their network connections, reduces Alzheimer’s proteins such as amyloid and tau, and sustains learning and memory. Several independent groups have also made similar findings over the years.
MIT’s trial, though cut short by the Covid-19 pandemic, found significant benefits after three months. The new study examines outcomes among five volunteers who continued to use their stimulation devices on an “open label” basis for two years. These volunteers came back to MIT for a series of tests 30 months after their initial enrollment. Because four participants started the original trial as controls (meaning they initially did not receive 40Hz stimulation), their open label usage was six to nine months shorter than the 30-month period.
The testing at zero, three, and 30 months of enrollment included measurements of their brain wave response to the stimulation, MRI scans of brain volume, measures of sleep quality, and a series of five standard cognitive and behavioral tests. Two participants gave blood samples. For comparison to untreated controls, the researchers combed through three national databases of Alzheimer’s patients, matching thousands of them on criteria such as age, gender, initial cognitive scores, and retests at similar time points across a 30-month span.
Outcomes and outlook
The three female late-onset Alzheimer’s volunteers showed improvement or slower decline on most of the cognitive tests, including significantly positive differences compared to controls on three of them. These volunteers also showed increased brain-wave responsiveness to the stimulation at 30 months and showed improvement in measures of circadian rhythms. In the two late-onset volunteers who gave blood samples, there were significant declines in phosphorylated tau (47 percent for one and 19.4 percent for the other) on a test recently approved by the U.S. Food and Drug Administration as the first plasma biomarker for diagnosing Alzheimer’s.
“One of the most compelling findings from this study was the significant reduction of plasma pTau217, a biomarker strongly correlated with AD pathology, in the two late-onset patients in whom follow-up blood samples were available,” the authors wrote in the journal. “These results suggest that GENUS could have direct biological impacts on Alzheimer’s pathology, warranting further mechanistic exploration in larger randomized trials.”
Although the initial trial results showed preservation of brain volume at three months among those who received 40Hz stimulation, that benefit was not significant at the 30-month time point. And the two male early-onset volunteers did not show significant improvements on cognitive test scores. Notably, the early-onset patients showed significantly reduced brain-wave responsiveness to the stimulation.
Although the sample is small, the authors hypothesize that the difference between the two sets of patients is likely attributable to the difference in disease onset, rather than the difference in gender.
“GENUS may be less effective in early onset Alzheimer’s disease patients, potentially owing to broad pathological differences from late-onset Alzheimer’s disease that could contribute to differential responses,” the authors wrote. “Future research should explore predictors of treatment response, such as genetic and pathological markers.”
Currently, the research team is studying whether GENUS may have a preventative effect when applied before disease onset. The new trial is recruiting participants aged 55-plus with normal memory who have or had a close family member with Alzheimer's disease, including early-onset.
In addition to Chan and Tsai, the paper’s other authors are Gabrielle de Weck, Brennan L. Jackson, Ho-Jun Suk, Noah P. Milman, Erin Kitchener, Vanesa S. Fernandez Avalos, MJ Quay, Kenji Aoki, Erika Ruiz, Andrew Becker, Monica Zheng, Remi Philips, Rosalind Firenze, Ute Geigenmüller, Bruno Hammerschlag, Steven Arnold, Pia Kivisäkk, Michael Brickhouse, Alexandra Touroutoglou, Emery N. Brown, Edward S. Boyden, Bradford C. Dickerson, and Elizabeth B. Klerman.
Funding for the research came from the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, the Eleanor Schwartz Charitable Foundation, the Dolby Family, Che King Leo, Amy Wong and Calvin Chin, Kathleen and Miguel Octavio, the Degroof-VM Foundation, the Halis Family Foundation, Chijen Lee, Eduardo Eurnekian, Larry and Debora Hilibrand, Gary Hua and Li Chen, Ko Han Family, Lester Gimpelson, David B Emmes, Joseph P. DiSabato and Nancy E. Sakamoto, Donald A. and Glenda G. Mattes, the Carol and Gene Ludwig Family Foundation, Alex Hu and Anne Gao, Elizabeth K. and Russell L. Siegelman, the Marc Haas Foundation, Dave and Mary Wargo, James D. Cook, and the Nobert H. Hardner Foundation.
John Marshall and Erin Kara receive postdoctoral mentoring award
Shining a light on the critical role of mentors in a postdoc’s career, the MIT Postdoctoral Association presented the fourth annual Excellence in Postdoctoral Mentoring Awards to professors John Marshall and Erin Kara.
The awards honor faculty and principal investigators who have distinguished themselves across four areas: the professional development opportunities they provide, the work environment they create, the career support they provide, and their commitment to continued professional relationships with their mentees.
They were presented at the annual Postdoctoral Appreciation event hosted by the Office of the Vice President for Research (VPR), on Sept. 17.
An MIT Postdoctoral Association (PDA) committee, chaired this year by Danielle Coogan, oversees the awards process in coordination with VPR and reviews nominations by current and former postdocs. “[We’re looking for] someone who champions a researcher, a trainee, but also challenges them,” says Bettina Schmerl, PDA president in 2024-25. “Overall, it’s about availability, reasonable expectations, and empathy. Someone who sees the postdoctoral scholar as a person of their own, not just someone who is working for them.” Marshall’s and Kara’s steadfast dedication to their postdocs set them apart, she says.
Speaking at the VPR resource fair during National Postdoc Appreciation Week, Vice President for Research Ian Waitz acknowledged “headwinds” in federal research funding and other policy issues, but urged postdocs to press ahead in conducting the very best research. “Every resource in this room is here to help you succeed in your path,” he said.
Waitz also commented on MIT’s efforts to strengthen postdoctoral mentoring over the last several years, and the influence of these awards in bringing lasting attention to the importance of mentoring. “The dossiers we’re getting now to nominate people [for the awards] may have five, 10, 20 letters of support,” he noted. “What we know about great mentoring is that it carries on between academic generations. If you had a great mentor, then you are more likely to be an amazing mentor once you’ve seen it demonstrated.”
Ann Skoczenski, director of MIT Postdoctoral Services, works closely with Waitz and the Postdoctoral Association to address the goals and concerns of MIT’s postdocs to ensure a successful experience at the Institute. “The PDA and the whole postdoctoral community do critical work at MIT, and it’s a joy to recognize them and the outstanding mentors who guide them,” said Skoczenski.
A foundation in good science
The awards recognize excellent mentors in two categories. Marshall, professor of oceanography in the Department of Earth, Atmospheric and Planetary Sciences, received the “Established Mentor Award.”
Nominators described Marshall’s enthusiasm for research as infectious, creating an exciting work environment that sets the tone. “John’s mentorship is unique in that he immerses his mentees in the heart of cutting-edge research. His infectious curiosity and passion for scientific excellence make every interaction with him a thrilling and enriching experience,” one postdoc wrote.
At the heart of Marshall’s postdoc relationships is a straightforward focus on doing good science and working alongside postdocs and students as equals. As one nominator wrote, “his approach is centered on empowering his mentees to assume full responsibility for their work, engage collaboratively with colleagues, and make substantial contributions to the field of science.”
His high expectations are matched by the generous assistance he provides his postdocs when needed. “He balances scientific rigor with empathy, offers his time generously, and treats his mentees as partners in discovery,” a nominator wrote.
Navigating career decisions and gaining the right experience along the way are important aspects of the postdoc experience. “When it was time for me to move to a different step in my career, John offered me the opportunities to expand my skills by teaching, co-supervising PhD students, working independently with other MIT faculty members, and contributing to grant writing,” one postdoc wrote.
Marshall’s research group has focused on ocean circulation and coupled climate dynamics involving interactions between motions on different scales, using theory, laboratory experiments, observations and innovative approaches to global ocean modeling.
“I’ve always told my postdocs, if you do good science, everything will sort itself out. Just do good work,” Marshall says. “And I think it’s important that you allow the glory to trickle down.”
Marshall sees postdoc appointments as a time they can learn to play to their strengths while focusing on important scientific questions. “Having a great postdoc [working] with you and then seeing them going on to great things, it’s such a pleasure to see them succeed,” he says.
“I’ve had a number of awards. This one means an awful lot to me, because the students and the postdocs matter as much as the science.”
Supporting the whole person
Kara, associate professor of physics, received the “Early Career Mentor Award.”
Many nominators praised Kara’s ability to give advice based on her postdocs’ individual goals. “Her mentoring style is carefully tailored to the particular needs of every individual, to accommodate and promote diverse backgrounds while acknowledging different perspectives, goals, and challenges,” wrote one nominator.
Creating a welcoming and supportive community in her research group, Kara empowers her postdocs by fostering their independence. “Erin’s unique approach to mentorship reminds us of the joy of pursuing our scientific curiosities, enables us to be successful researchers, and prepares us for the next steps in our chosen career path,” said one. Another wrote, “Rather than simply giving answers, she encourages independent thinking by asking the right questions, helping me to arrive at my own solutions and grow as a researcher.”
Kara’s ability to offer holistic, nonjudgmental advice was a throughline in her nominations. “Beyond her scientific mentorship, what truly sets Erin apart is her thoughtful and honest guidance around career development and life beyond work,” one wrote. Another nominator highlighted their positive relationship, writing, “I feel comfortable sharing my concerns and challenges with her, knowing that I will be met with understanding, insightful advice, and unwavering support.”
Kara’s research group is focused on understanding the physics behind how black holes grow and affect their environments. Kara has advanced a new technique called X-ray reverberation mapping, which allows astronomers to map the gas falling onto black holes and measure the effects of strongly curved spacetime close to the event horizon.
“I feel like postdocs hold a really special place in our research groups because they come with their own expertise,” says Kara. “I’ve hired them particularly because I want to learn and grow from them as well, and hopefully vice versa.” Kara focuses her mentorship on providing autonomy, giving postdocs their own mentorship opportunities, and treating them like colleagues.
A postdoc appointment “is this really pivotal time in your career, when you’re figuring out what it is you want to do with the rest of your life,” she says. “So if I can help postdocs navigate that by giving them some support, but also giving them independence to be able to take their next steps, that feels incredibly valuable.”
“I just feel like they make my work/life so rich, and it’s not a hard thing to mentor them because they all are such awesome people and they make our research group really fun.”
MIT Haystack scientists study recent geospace storms and resulting light shows
The northern lights, or aurora borealis, one of nature's most spectacular visual shows, can be elusive. Conventional wisdom says that to see them, we need to travel to northern Canada or Alaska. However, in the past two years, New Englanders have been seeing these colorful atmospheric displays on a few occasions — including this week — from the comfort of their backyards, as auroras have been visible in central and southern New England and beyond. These unusual auroral events have been driven by increased space weather activity, a phenomenon studied by a team of MIT Haystack Observatory scientists.
Auroral events are generated when particles in space are energized by complicated processes in the near-Earth environment, following which they interact with gases high up in the atmosphere. Space weather events such as coronal mass ejections, in which large amounts of material are ejected from our sun, along with geomagnetic storms, greatly increase energy input into those space regions near Earth. These inputs then trigger other processes that cause an increase in energetic particles entering our atmosphere.
The result is variable colorful lights when the newly energized particles crash into atoms and molecules high above Earth's surface. Recent significant geomagnetic storm events have triggered these auroral displays at latitudes lower than normal — including sightings across New England and other locations across North America.
New England has been enjoying more of these spectacular light shows, such as this week's displays and those during the intense geomagnetic solar storms in May and October 2024, because of increased space weather activity.
Research has determined that auroral displays occur when selected atoms and molecules high in the upper atmosphere are excited by incoming charged particles, which are boosted in energy by intense solar activity. The most common auroral display colors are pink/red and green, with colors varying according to the altitude at which these reactions occur. Red auroras come from lower-energy particles exciting neutral oxygen and cause emissions at altitudes above 150 miles. Green auroras come from higher-energy particles exciting neutral oxygen and cause emissions at altitudes below 150 miles. Rare purple and blue aurora come from excited molecular nitrogen ions and occur during the most intense events.
Scientists measure the magnitude of the geomagnetic activity driving auroras in several different ways. One of these uses sensitive magnetic field-measuring equipment at stations around the planet to obtain a geomagnetic storm index known as Kp, reported on a scale from 0 (least activity) to 9 (greatest activity) in three-hour intervals. Higher Kp values indicate the possibility — not a guarantee — of greater auroral sightings, as the location of auroral displays moves to lower latitudes. Typically, when the Kp index reaches 6 or higher, aurora viewings are more likely outside the usual northern ranges. The geomagnetic storm events of this week reached a Kp value of 9, indicating very strong activity in the sun–Earth system.
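As a rough illustration of how those thresholds are read, the short Python sketch below maps a reported Kp value to a viewing outlook for mid-latitude observers. It simply paraphrases the description above; it is illustrative only, not an official NOAA product or a Haystack tool.

```python
def aurora_outlook(kp: int) -> str:
    """Map a 3-hour planetary Kp value (0-9) to a rough, non-guaranteed viewing
    outlook for mid-latitude observers such as New Englanders. Thresholds
    paraphrase the article above and are illustrative, not official guidance."""
    if not 0 <= kp <= 9:
        raise ValueError("Kp is reported on a 0-9 scale")
    if kp >= 8:
        return "severe storm: auroras possible well south of the usual auroral zone"
    if kp >= 6:
        return "strong activity: viewing more likely outside the usual northern ranges"
    if kp >= 4:
        return "moderate activity: auroras mostly confined to high latitudes"
    return "quiet: auroras unlikely outside far-northern latitudes"

print(aurora_outlook(9))  # this week's storms reached Kp 9
```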
At MIT Haystack Observatory in Westford, Massachusetts, geospace and atmospheric physics scientists study the atmosphere and its aurora year-round by combining observations from many different instruments. These include ground-based sensors — including large upper-atmosphere radars that bounce signals off particles in the ionosphere — as well as data from space satellites. These tools provide key information, such as density, temperature, and velocity, on conditions and disturbances in the upper atmosphere: basic information that helps researchers at MIT and elsewhere understand the weather in space.
Haystack geospace research is funded primarily by U.S. federal agencies such as the National Science Foundation (NSF) and NASA. This work is crucial for our increasingly spacefaring civilization, which requires a continually expanding understanding of how space weather affects life on Earth, including vital navigation systems such as GPS, worldwide communication infrastructure, and the safety of our power grids. Research in this area is especially important in modern times, as humans increasingly use low Earth orbit for commercial satellite constellations and other systems, and as civilization progresses further into space.
Studies of the variations in our atmosphere and its charged component, known as the ionosphere, have revealed the strong influence of the sun. Beyond the normal white light that we experience each day, the sun also emits many other wavelengths of light, from infrared to extreme ultraviolet. Of particular interest are the extreme ultraviolet portions of solar output, which have enough energy to ionize atoms in the upper atmosphere. Unlike its white light component, the sun's output at these very short wavelengths has many different short- and long-term variations, but the most well known is the approximately 11-year solar cycle, in which the sun goes from minimum to maximum output.
Scientists have determined that the most recent peak in activity, known as solar maximum, occurred within the past 12 months. This is good news for auroral watchers, as the most active period for severe geomagnetic storms that drive auroral displays at New England latitudes occurs during the three-year period following solar maximum.
Despite intensive research to date, we still have a great deal more to learn about space weather and its effects on the near-Earth environment. MIT Haystack Observatory continues to advance knowledge in this area.
Larisa Goncharenko, lead geospace scientist and assistant director at Haystack, states, "In general, understanding space weather well enough to forecast it is considerably more challenging than even normal weather forecasting near the ground, due to the vast distances involved in space weather forces. Another important factor comes from the combined variation of Earth's neutral atmosphere, affected by gravity and pressure, and from the charged particle portion of the atmosphere, created by solar radiation and additionally influenced by the geometry of our planet's magnetic field. The complex interplay between these elements provides rich complexity and a sustained, truly exciting scientific opportunity to improve our understanding of basic physics in this vital part of our home in the solar system, for the benefit of civilization."
For up-to-date space weather forecasts and predictions of possible aurora events, visit SpaceWeather.com or NOAA's Aurora Viewline site.
MIT startup aims to expand America’s lithium production
China dominates the global supply of lithium. The country processes about 65 percent of the battery material and has imposed on-again, off-again export restrictions on lithium-based products critical to the economy.
Fortunately, the U.S. has significant lithium reserves, most notably in the form of massive underground brines across south Arkansas and east Texas. But recovering that lithium through conventional techniques would be an energy-intensive and environmentally damaging proposition — if it were profitable at all.
Now, the startup Lithios, founded by Mo Alkhadra PhD ’22 and Martin Z. Bazant, the Chevron Chair Professor of Chemical Engineering, is commercializing a new process of lithium recovery it calls Advanced Lithium Extraction. The company uses electricity to drive a reaction with electrode materials that capture lithium from salty brine water, leaving behind other impurities.
Lithios says its process is more selective and efficient than other direct lithium-extraction techniques being developed. It also represents a far cleaner and less energy-intensive alternative to mining and the solar evaporative ponds that are used to extract lithium from underground brines in the high deserts of South America.
Since June, Lithios has been running a pilot system that continuously extracts lithium from real brines sourced from around the world. It also recently shipped an early version of its system to a commercial partner scaling up operations in Arkansas.
With the core technology of its modular systems largely validated, Lithios plans to begin operating a larger version next year, capable of producing 10 to 100 tons of lithium carbonate per year. From there, the company plans to build a commercial facility that will be able to produce 25,000 tons of lithium carbonate each year. That would represent at least a fivefold increase over total U.S. lithium production, which is currently limited to less than 5,000 tons per year.
“There’s been a big push recently, and especially in the last year, to secure domestic supplies of lithium and break away from the Chinese chokehold on the critical mineral supply chain,” Alkhadra says. “We have an abundance of lithium deposits at our disposal in the U.S., but we lack the tools to turn those resources into value.”
Adapting a technology
Bazant realized the need for new approaches to mining lithium while working with battery companies through his lab in MIT’s Department of Chemical Engineering. His group has studied battery materials and electrochemical separation for decades.
As part of his PhD in Bazant’s lab, Alkhadra studied electrochemical processes for separation of dissolved metals, with a focus on removing lead from drinking water and treating industrial wastewater. As Alkhadra got closer to graduation, he and Bazant looked at the most promising commercial applications for his work.
It was 2021, and lithium prices were in the midst of a historic spike driven by the metal’s importance in batteries.
Today, lithium comes primarily from mining or through a slow evaporative process that uses miles of surface ponds to refine and recover lithium from wastewater. Both are energy-intensive and damaging to the environment. They are also dominated by Chinese companies and supply chains.
“A lot of hard rock mining is done in Australia, but most of the rock is shipped as a concentrate to China for refining because they’re the ones who have the technology,” Bazant explains.
Other direct lithium-extraction methods use chemicals and filters, but the founders say those methods struggle to be profitable with U.S. lithium reserves, which have low concentrations of lithium and high levels of impurities.
“Those methods work when you have a good grade of lithium brine, but they become increasingly uneconomical as you get lower-quality resources, which is exactly what the industry is going through right now,” Alkhadra says. “The evaporative process has a huge footprint — we’re talking about the size of Manhattan island for a single project. Conveniently, recovering minerals from those low concentrations was the essence of my PhD work at MIT. We simply had to adapt the technology to the new use case.”
While conducting early talks with potential customers, Alkhadra received guidance from MIT’s Venture Mentoring Service, the MIT Sandbox Innovation Fund, and the Massachusetts Clean Energy Center. Lithios officially formed when he completed his PhD in 2022 and received the Activate Fellowship. Lithios grew at The Engine, an MIT startup incubator, before moving to their pilot and manufacturing facility in Medford, Massachusetts, in 2024.
Today, Lithios uses an undisclosed electrode material that attaches to lithium when exposed to precise voltages.
“Think of a big battery with water flowing into the system,” Alkhadra explains. “When the brine comes into contact with our electrodes, it selectively pulls lithium while rejecting all the other contaminants. When the lithium has been loaded onto our capture materials, we can simply change the direction of the electrical current to release the lithium back into a clean water stream. It’s similar to charging and discharging a battery.”
Bazant says the company’s lithium-absorbing materials are an ideal fit for this application.
“One of the main challenges of using battery electrodes to extract lithium is how to complete the system,” Bazant says. “We have a great lithium-extraction material that is very stable in water and has wonderful performance. We also learned how to formulate both electrodes with controlled ion transport and mixing to make the process much more efficient and low cost.”
Growing in the ‘MIT spirit’
A U.S. Geological Survey study last year showed the underground Smackover Formation contains between 5 million and 19 million tons of lithium in southwest Arkansas alone.
“If you just estimate how much lithium is in that region based on today’s prices, it’s about $2 trillion worth of lithium that can’t be accessed,” Bazant says. “If you could extract these resources efficiently, it would make a huge impact.”
Earlier this year, Lithios shipped its pilot system to a commercial partner in Arkansas to further validate its approach in the region. Lithios also plans to deploy several additional pilot and demonstration projects with other major partners in the oil and gas and mining industries in the coming years.
“After this field deployment, Lithios will quickly scale toward a commercial demonstration plant that will be operational by 2027, with the intent to scale to a kiloton-per-year commercial facility before the end of the decade,” Alkhadra says.
Although Lithios is currently focused on lithium, Bazant says the company’s approach could also be adapted to materials such as rare earth elements and transition metals further down the line.
“We’re developing a unique technology that could make the U.S. the center of the world for critical minerals separation, and we couldn’t have done this anywhere else,” Bazant says. “MIT was the perfect environment, mainly because of the people. There are so many fantastic scientists and businesspeople in the MIT ecosystem who are very technically savvy and ready to jump into a project like this. Our first employees were all MIT people, and they really brought the MIT spirit to our company.”
From nanoscale to global scale: Advancing MIT’s special initiatives in manufacturing, health, and climate
“MIT.nano is essential to making progress in high-priority areas where I believe that MIT has a responsibility to lead,” opened MIT president Sally Kornbluth at the 2025 Nano Summit. “If we harness our collective efforts, we can make a serious positive impact.”
It was these collective efforts that drove discussions at the daylong event hosted by MIT.nano and focused on the importance of nanoscience and nanotechnology across MIT's special initiatives — projects deemed critical to MIT’s mission to help solve the world’s greatest challenges. With each new talk, common themes were reemphasized: collaboration across fields, solutions that can scale up from lab to market, and the use of nanoscale science to enact grand-scale change.
“MIT.nano has truly set itself apart, in the Institute's signature way, with an emphasis on cross-disciplinary collaboration and open access,” said Kornbluth. “Today, you're going to hear about the transformative impact of nanoscience and nanotechnology, and how working with the very small can help us do big things for the world together.”
Collaborating on health
Angela Koehler, faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS) and the Charles W. and Jennifer C. Johnson Professor of Biological Engineering, opened the first session with a question: How can we build a community across campus to tackle some of the most transformative problems in human health? In response, three speakers shared their work enabling new frontiers in medicine.
Ana Jaklenec, principal research scientist at the Koch Institute for Integrative Cancer Research, spoke about single-injection vaccines, and how her team looked to the techniques used in fabrication of electrical engineering components to see how multiple pieces could be packaged into a tiny device. “MIT.nano was instrumental in helping us develop this technology,” she said. “We took something that you can do in microelectronics and the semiconductor industry and brought it to the pharmaceutical industry.”
While Jaklenec applied insight from electronics to her work in health care, Giovanni Traverso, the Karl Van Tassel Career Development Professor of Mechanical Engineering, who is also a gastroenterologist at Brigham and Women’s Hospital, found inspiration in nature, studying the cephalopod squid and remora fish to design ingestible drug delivery systems. Representing the industry side of life sciences, Mirai Bio senior vice president Jagesh Shah SM ’95, PhD ’99 presented his company’s precision-targeted lipid nanoparticles for therapeutic delivery. Shah, as well as the other speakers, emphasized the importance of collaboration between industry and academia to make meaningful impact, and the need to strengthen the pipeline for young scientists.
Manufacturing, from the classroom to the workforce
Paving the way for future generations was similarly emphasized in the second session, which highlighted MIT’s Initiative for New Manufacturing (MIT INM). “MIT’s dedication to manufacturing is not only about technology research and education, it’s also about understanding the landscape of manufacturing, domestically and globally,” said INM co-director A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. “It’s about getting people — our graduates who are budding enthusiasts of manufacturing — out of campus and starting and scaling new companies,” he said.
On progressing from lab to market, Dan Oran PhD ’21 shared his career trajectory from technician to PhD student to founding his own company, Irradiant Technologies. “How are companies like Dan’s making the move from the lab to prototype to pilot production to demonstration to commercialization?” asked the next speaker, Elisabeth Reynolds, professor of the practice in urban studies and planning at MIT. “The U.S. capital market has not historically been well organized for that kind of support.” She emphasized the challenge of scaling innovations from prototype to production, and the need for workforce development.
“Attracting and retaining workforce is a major pain point for manufacturing businesses,” agreed John Liu, principal research scientist in mechanical engineering at MIT. To keep new ideas flowing from the classroom to the factory floor, Liu proposes a new worker type in advanced manufacturing — the technologist — someone who can be a bridge to connect the technicians and the engineers.
Bridging ecosystems with nanoscience
Bridging people, disciplines, and markets to affect meaningful change was also emphasized by Benedetto Marelli, mission director for the MIT Climate Project and associate professor of civil and environmental engineering at MIT.
“If we’re going to have a tangible impact on the trajectory of climate change in the next 10 years, we cannot do it alone,” he said. “We need to take care of ecology, health, mobility, the built environment, food, energy, policies, and trade and industry — and think about these as interconnected topics.”
Faculty speakers in this session offered a glimpse of nanoscale solutions for climate resiliency. Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, presented his group’s work on using nanoparticles to turn waste methane and urea into renewable materials. Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor, spoke about scaling carbon dioxide removal systems. Mechanical engineering professor Kripa Varanasi highlighted, among other projects, his lab’s work on improving agricultural spraying so pesticides adhere to crops, reducing agricultural pollution and cost.
In all of these presentations, the MIT faculty highlighted the tie between climate and the economy. “The economic systems that we have today are depleting to our resources, inherently polluting,” emphasized Plata. “The goal here is to use sustainable design to transition the global economy.”
What do people do at MIT.nano?
This is where MIT.nano comes in, offering shared access facilities where researchers can design creative solutions to these global challenges. “What do people do at MIT.nano?” asked associate director for Fab.nano Jorg Scholvin ’00, MNG ’01, PhD ’06 in the session on MIT.nano’s ecosystem. With 1,500 individuals and over 20 percent of MIT faculty labs using MIT.nano, it’s a difficult question to answer quickly. However, in a rapid-fire research showcase, students and postdocs gave a response that spanned topics from 3D transistors and quantum devices to solar solutions and art restoration. Their work reflects the challenges and opportunities shared at the Nano Summit: developing technologies ready to scale, uniting disciplines to tackle complex problems, and gaining hands-on experience that prepares them to contribute to the future of hard tech.
The researchers’ enthusiasm carried the excitement and curiosity that President Kornbluth mentioned in her opening remarks, and that many faculty emphasized throughout the day. “The solutions to the problems we heard about today may come from inventions that don't exist yet,” said Strano. “These are some of the most creative people, here at MIT. I think we inspire each other.”
Robert N. Noyce (1953) Cleanroom at MIT.nano
Collaborative inspiration is not new to the MIT culture. The Nano Summit sessions focused on where we are today, and where we might be going in the future, but also reflected on how we arrived at this moment. Honoring visionaries of nanoscience and nanotechnology, President Emeritus L. Rafael Reif delivered the closing remarks and an exciting announcement — the dedication of the MIT.nano cleanroom complex. Made possible through a gift by Ray Stata SB ’57, SM ’58, this research space, 45,000 square feet of ISO 5, 6, and 7 cleanrooms, will be named the Robert N. Noyce (1953) Cleanroom.
“Ray Stata was — and is — the driving force behind nanoscale research at MIT,” said Reif. “I want to thank Ray, whose generosity has allowed MIT to honor Robert Noyce in such a fitting way.”
Ray Stata co-founded Analog Devices in 1965; Noyce co-founded Fairchild Semiconductor in 1957 and later Intel in 1968. Noyce, widely regarded as the “Mayor of Silicon Valley,” became chair of the Semiconductor Industry Association in 1977, and over the next 40 years semiconductor technology advanced a thousandfold, from micrometers to nanometers.
“Noyce was a pioneer of the semiconductor industry,” said Stata. “It is due to his leadership and remarkable contributions that electronics technology is where it is today. It is an honor to be able to name the MIT.nano cleanroom after Bob Noyce, creating a permanent tribute to his vision and accomplishments in the heart of the MIT campus.”
To conclude his remarks and the 2025 Nano Summit, Reif brought the nano journey back to today, highlighting technology giants such as Lisa Su ’90, SM ’91, PhD ’94, for whom Building 12, the home of MIT.nano, is named. “MIT has educated a large number of remarkable leaders in the semiconductor space,” said Reif. “Now, with the Robert Noyce Cleanroom, this amazing MIT community is ready to continue to shape the future with the next generation of nano discoveries — and the next generation of nano leaders, who will become living legends in their own time.”
Green bananas can’t throw 3.091 Fun Run off course
The night before the Department of Materials Science and Engineering (DMSE)’s 3.091 Fun Run, organizer Bianca Sinausky opened a case of bananas she’d ordered and was met with a surprise: the fruit was bright green.
“I looked around for paper bags, but I only found a few,” says Sinausky, graduate academic administrator for the department, referring to a common hack for speeding up ripening. “It was hopeless.”
That is, until facilities manager Kevin Rogers came up with a plan: swap the green bananas for ripe ones from MIT’s Banana Lounge, a free campus snack and study space stocked with fruit.
“It was genius,” Sinausky says. “The runners would have their snack, and the race could go on.”
DMSE checked in with the Banana Lounge a little late, but senior Colin Clark, the lounge’s logistics lead, approved anyway. “So that’s where that box came from,” he says.
On a bright fall morning, ripe bananas awaited 20 DMSE students and faculty in the Oct. 15 run, which started and finished at the Zesiger Sports and Fitness Center and wound along pedestrian paths across the MIT campus. Department head Polina Anikeeva, an avid runner, says the goal was to build community, enjoy the outdoors, and celebrate 3.091 (Introduction to Solid-State Chemistry), a popular first-year class and General Institute Requirement.
“We realized 3.091 was so close to 5 kilometers — 3.1 miles — it was the perfect opportunity,” Anikeeva says, admitting she made the initial connection. “I think about things like that.”
For many participants, running is a regular hobby—but doing it with colleagues made it even more enjoyable. “I usually run a few times a week, and I thought it would be fun to log some more miles in my training block with the DMSE community,” says graduate student Jessica Dong, who is training for the Cambridge Half Marathon this month.
Fellow graduate student Rishabh Kothari agrees. “I was excited to support a department event that aligns with my general hobbies,” says Kothari, who recently ran the Chicago Marathon and tied for first in his age category in the DMSE run. “I find running to be a great community-building activity.”
While fun runs are usually noncompetitive, organizers still recognized the fastest runners by age group.
Unlike an official road race, organized by a race company — the City of Cambridge currently isn’t allowing new races — the DMSE run was managed internally by an informal cohort of colleagues, Sinausky says, which meant a fair amount of work.
“The hardest part was walking the route and putting the mileage out, and also putting out arrows,” she says. “When a race company does it, they do it properly.”
There were a few minor snags — some runners went the wrong way, and two walkers got lost. “So I think we need to mark the course better,” Sinausky says.
Others found charm in the run’s rough edges.
“My favorite part of the run was when a group of us got confused about the route, so we cut through the lawn in front of Tang Hall,” Dong says. At the finish line, she showed off a red DMSE hat — one of the giveaways laid out alongside ripe bananas and bottles of water.
Looking ahead to what organizers hope will be an annual event, the team is considering purchasing race-timing equipment. Modern road races distribute bibs outfitted with RFID chips that track each runner’s start and finish. Sinausky’s method, a smartphone timer paired with Anikeeva tracking finish times on a clipboard, was less high-tech but effective for the small number of participants.
“We would see the runners coming, and Polina would say, ‘OK, bib 21.’ And then I would yell out the time,” she says. “I think that if more people showed up, it would’ve been harder.”
Sinausky hopes to boost participation in coming years. Early interest was strong, with 63 registering, but fewer than a third showed up on race day. The week’s delay due to rain — and several straight days of rain since — likely didn’t help, she says.
Overall, she says, the run was a success, with participants saying they hope it will become a new DMSE tradition.
“It was great to see everyone finish and enjoy themselves,” Kothari says. “A nice morning to be around friends.”
Transforming complex research into compelling stories
For students, postdocs, and early-career researchers, communicating complex ideas in a clear and compelling manner has become an essential skill. Whether applying for academic positions, pitching research to funders, or collaborating across disciplines, the ability to present work clearly and effectively can be as critical as the work itself.
Recognizing this need, the MIT Office of Graduate Education (OGE) has partnered with the Writing and Communication Center (WCC) to launch the WCC Communication Studio: a self-service recording and editing space designed to help users sharpen their oral presentation and communication skills. Open to all members of the MIT community as of this fall, the studio offers a first-of-its-kind resource at MIT for developing and refining research presentations, mock interview conversations, elevator pitches, and more.
Housed in WCC’s Ames Street office, the studio is equipped with high-quality microphones and user-friendly video recording and editing tools, all designed to be used with the PitchVantage software.
How does it work? Users can access tutorials, example videos, and a reservation system through the WCC’s website. After completing a short orientation on how to use the technology and space responsibly, users are ready to pitch to simulated audiences, who react in real time to various elements of delivery. Users can also watch their recorded presentations and receive personalized feedback on nine elements of presentation delivery, including pitch, pace, volume variability, verbal distractors, eye contact, volume, engagement, and pauses.
Designed with students in mind
“Through years of individual and group consultations with MIT students and scholars, we realized that developing strong presentation skills requires more than feedback — it requires sustained, embodied practice,” explains Elena Kallestinova, director of the WCC. “The Oral Communication Studio was created to fill that gap.”
Those who have used the studio so far say that its interactive format provides real-time, actionable feedback on their verbal delivery. Additionally, the program offers notes on overall stage presence, including subtle actions such as hand gestures and eye contact. For students, this can be the key to ensuring that their delivery is both confident and clearly accessible when it comes time to present.
“I’ve been using the studio to practice for conferences and job interviews,” says Fabio Castro, a PhD student studying civil engineering. His favorite feature? The instant feedback from the virtual figures watching the presentation, which allows him to not only prepare to speak in front of an audience, but to read their nonverbal cues and adjust his delivery accordingly.
The studio also addresses a practical challenge facing many PhD students and postdocs in their role as emerging researchers: the high stakes of presenting. For many, their first major talk may be in front of a hiring committee, research institute, or funding body — audiences that may heavily influence their next career step. The studio gives them a low-pressure environment in which to rehearse so that they enter these spaces confidently.
Aditi Ramakrishnan, an MBA student in the MIT Sloan School of Management, acknowledges the importance of this tool for emerging professionals. As a business student, she explains, “a lot of your job involves pitching.” She credits the WCC with helping to take her pitching game “from good to excellent,” identifying small details such as unnecessary “filler” words and understanding the difference between a strong stage presence and a distracting one.
A new frontier in communication support at MIT
While MIT has long been recognized for its excellence in technical education, the studio represents a broader focus on arming students and researchers alike with the tools that they need to amplify their work to larger audiences.
“The WCC Communication Studio gives students a place to rehearse, get immediate feedback, and iterate until their ideas land clearly and confidently,” explains Denzil Streete, OGE’s senior associate dean and director. “It’s not just about better slides or smoother delivery; it’s about unlocking and scaling access to more modern tools so more graduate students can translate breakthrough research into real-world impact.”
"The studio is a resource for the entire MIT community,” says Kallestinova, emphasizing that this new resource serves as a support for not only graduate students, but also undergrads, researchers, and even faculty. “Whether used as a supplement to classroom instruction or as a follow-up to coaching sessions, the studio offers a dedicated space for rehearsal, reflection, and growth, helping all users build confidence, clarity, and command in their communication."
The studio joins an array of existing resources within the WCC, including a Public Speaking Certificate Program, a peer-review group for creative writers, and a number of revolving workshops throughout the year.
A culture of communication
From grant funding and academic collaboration to public outreach and policy impact, effective speaking skills are more important than ever.
“No matter how brilliant the idea, it has to be clearly communicated by the researcher or scholar in order to have impact,” says Amanda Cornwall, associate director of graduate student professional development at Career Advising and Professional Development (CAPD).
“Explaining complex concepts to a broader audience takes practice and skill. When a researcher can build confidence in their speaking abilities, they have the power to transport their audience and show the way to new possibilities,” she adds. “This is why communication is one of the professional development competencies that we emphasize at MIT; it matters in every context, from small conversations to teaching to speeches that might change the world.”
The studio’s launch comes among a broader institutional focus on communication. CAPD, the Teaching and Learning Lab, the OGE, and academic departments have recognized the value of, and provided increasing levels of support for, professional development training alongside technical expertise.
Workshops already offered by the WCC, CAPD, and other campus partners work to highlight best practices for conference talks, long-form interviews, and more. The WCC Communication Studio provides a practical extension of these efforts. Looking ahead, the studio aims to not only serve as a training space, but also help foster a culture of communication excellence among researchers and educators.
Returning farming to city centers
A new class is giving MIT students the opportunity to examine the historical and practical considerations of urban farming while developing a real-world understanding of its value by working alongside a local farm’s community.
Course 4.182 (Resilient Urbanism: Green Commons in the City) is taught in two sections by instructors in the Program in Science, Technology, and Society and the School of Architecture and Planning, in collaboration with The Common Good Co-op in Dorchester.
The first section was completed in spring 2025 and the second section is scheduled for spring 2026. The course is taught by STS professor Kate Brown, visiting lecturer Justin Brazier MArch ’24, and Kafi Dixon, lead farmer and executive director of The Common Good.
“This project is a way for students to investigate the real political, financial, and socio-ecological phenomena that can help or hinder an urban farm’s success,” says Brown, the Thomas M. Siebel Distinguished Professor in History of Science.
Brown teaches environmental history, the history of food production, and the history of plants and people. She describes a history of urban farming that centered sustainable practices, financial investment and stability, and lasting connections among participants.
Brown says urban farms have sustained cities for decades.
“Cities are great places to grow produce,” Brown asserts. “City dwellers produce lots of compostable materials.”
Brazier’s research ranges from affordable housing to urban agricultural gardens, exploring topics like sustainable architecture, housing, and food security.
“My work designing vacant lots as community gardens offered a link between Kafi’s work with Common Good and my interests in urban design,” Brazier says. “Urban farms offer opportunities to eliminate food deserts in underserved areas while also empowering historically marginalized communities.”
Before they agreed to collaborate on the course, Dixon reached out to Brown asking for help with several challenges related to her urban farm, including zoning, location, and infrastructure.
“As the lead farmer and executive director of Common Good Co-op, I happened upon Kate Brown’s research and work and saw that it aligned with our cooperative model’s intentions,” Dixon says. “I reached out to Kate, and she replied, which humbled and excited me.”
“Design itself is a form of communication,” Dixon adds, describing the collaborative nature of farming sustenance and development. “For many under-resourced communities, innovating requires a research-based approach.”
The project is among the inaugural cohort of initiatives to receive support from the SHASS Education Innovation Fund, which is administered by the MIT Human Insight Collaborative (MITHIC).
Community development, investment, and collaboration
The class’s first section paired students with community members and the City of Boston to change the farm’s zoning status and create a green space for long-term farming and community use. Students spent time at Common Good during the course, including one weekend during which they helped with weeding the garden beds for spring planting.
One objective of the class is to help Common Good avoid potential pitfalls associated with gentrification. “A study in Philadelphia showed that gentrification occurs within 1,000 feet of a community garden,” Brown says.
“Farms and gardens are a key part of community and public health,” Dixon continues.
Students in the second section will design and build infrastructure — including a mobile chicken coop and a pavilion to protect farmers from the elements — for Common Good. The course also aims to secure a green space designation for the farm and ensure it remains an accessible community space. “We want to prevent developers from acquiring the land and displacing the community,” Brown says, avoiding past scenarios in which governments seized inhabitants’ property while offering little or no compensation.
Students in the 2025 course also produced a guide on how to navigate the complex rules surrounding zoning and related development. Students in the next STS section will research the history of food sovereignty and Black feminist movements in Dorchester and Roxbury. Using that research, they will construct an exhibit focused on community activism for incorporation into the co-op’s facade.
Imani Bailey, a second-year master’s student in the Department of Architecture’s MArch program, was among the students in the course’s first section.
“By taking this course, I felt empowered to directly engage with the community in a way no other class I have taken so far has afforded me the ability to,” she says.
Bailey argues for urban farms’ value as both a financial investment and space for communal interaction, offering opportunities for engagement and the implementation of sustainable practices.
“Urban farms are important in the same way a neighbor is,” she adds. “You may not necessarily need them to own your home, but a good one makes your property more valuable, sometimes financially, but most importantly in ways that cannot be assigned a monetary value.”
The intersection of agriculture, community, and technology
Technology, the course’s participants believe, can offer solutions to some of the challenges related to ensuring urban farms’ viability.
“Cities like Amsterdam are redesigning themselves to improve walkability, increase the appearance of small gardens in the city, and increase green space,” Brown says. By creating spaces that center community and a collective approach to farming, it’s possible to reduce both greenhouse gas emissions and impacts related to climate change.
Additionally, engineers, scientists, and others can partner with communities to develop solutions to transportation and public health challenges. By redesigning sewer systems, empowering microbiologists to design microbial inoculants that can break down urban food waste at the neighborhood level, and centering agriculture-related transportation in the places being served, it’s possible to sustain community support and related infrastructure.
“Community is cultivated, nurtured, and grown from prolonged interaction, sharing ideas, and the creation of place through a shared sense of ownership,” Bailey argues. “Urban farms present the conditions for communities to develop.”
Bailey values the course because it leaves the theoretical behind, instead focusing on practical solutions. “We seldom see our design ideas become tangible," she says. “This class offered an opportunity to design and build for a real client in the real world.”
Brazier says the course and its projects prove everyone has something to contribute and can have a voice in what happens with their neighborhoods. “Despite these communities’ distrust of some politicians, we partnered to work on solutions related to zoning,” he says, “and supported community members’ advocacy efforts.”
How drones are altering contemporary warfare
In recent months, Russia has frequently flown drones into NATO territory, where NATO countries typically try to shoot them down. By contrast, when three Russian fighter jets made an incursion into Estonian airspace in September, they were intercepted and no attempt was made to shoot them down — although the incident did make headlines and led to a Russian diplomat being expelled from Estonia.
Those incidents follow a global pattern of recent years. Drone operations, to this point, seem to provoke different responses compared to other kinds of military action, especially the use of piloted warplanes. Drone warfare is expanding but not necessarily provoking major military responses, either by the countries being attacked or by the aggressor countries that have drones shot down.
“There was a conventional wisdom that drones were a slippery slope that would enable leaders to use force in all kinds of situations, with a massively destabilizing effect,” says MIT political scientist Erik Lin-Greenberg. “People thought if drones were used all over the place, this would lead to more escalation. But in many cases where drones are being used, we don’t see that escalation.”
On the other hand, drones have made military action more pervasive. It is at least possible that in the future, drone-oriented combat will be both more common and more self-contained.
“There is a revolutionary effect of these systems, in that countries are essentially increasing the range of situations in which leaders are willing to deploy military force,” Lin-Greenberg says. To this point, though, he adds, “these confrontations are not necessarily escalating.”
Now Lin-Greenberg examines these dynamics in a new book, “The Remote Revolution: Drones and Modern Statecraft,” published by Cornell University Press. Lin-Greenberg is an associate professor in MIT’s Department of Political Science.
Lin-Greenberg brings a distinctive professional background to the subject of drone warfare. Before returning to graduate school, he served as a U.S. Air Force officer; today he commands a U.S. Air Force reserve squadron. His thinking is informed by his experiences as both a scholar and practitioner.
“The Remote Revolution” also has a distinctive methodology that draws on multiple ways of studying the topic. In writing the book, Lin-Greenberg conducted experiments based on war games played by national security professionals; conducted surveys of expert and public thinking about drones; developed in-depth case studies from history; and dug into archives broadly to fully understand the history of drone use, which in fact goes back several decades.
The book’s focus is drone use during the 2000s, as the technology has become more readily available; today about 100 countries have access to military drones. Many have used them during tensions and skirmishes with other countries.
“Where I argue this is actually revolutionary is during periods of crises, which fall below the threshold of war, in that these new technologies take human operators out of harm’s way and enable states to do things they wouldn’t otherwise do,” Lin-Greenberg says.
Indeed, a key point is that drones lower the costs of military action for countries — and not just financial costs, but human and political costs, too. Incidents and problems that might plague leaders if they involved military personnel, forcing major responses, seem to lessen when drones are involved.
“Because these systems don’t have a human on board, they’re inherently cheaper and different in the minds of decision-makers,” Lin-Greenberg says. “That means they’re willing to use these systems during disputes, and if other states are shooting them down, the side sending them is less likely to retaliate, because they’re losing a machine but not a man or woman on board.”
In this sense, the uses of drones “create new rungs on the escalation ladder,” as Lin-Greenberg writes in the book. Drone incidents don’t necessarily lead to wider military action, and may not even lead to the same kinds of international relations issues as incidents involving piloted aircraft.
Consider a counterfactual that Lin-Greenberg raises in the book. One of the most notorious episodes of Cold War tension between the U.S. and U.S.S.R. occurred in 1960, when U.S. pilot Gary Powers was shot down and captured in the Soviet Union, leading to a diplomatic standoff and a canceled summit between U.S. President Dwight Eisenhower and Soviet leader Nikita Khrushchev.
“Had that been a drone, it’s very likely the summit would have continued,” Lin-Greenberg says. “No one would have said anything. The Soviet Union would have been embarrassed to admit their airspace was violated and the U.S. would have just [publicly] ignored what was going on, because there would not have been anyone sitting in a prison. There are a lot of exercises where you can ask how history could have been different.”
None of this is to say that drones present straightforward solutions to international relations problems. They may present the appearance of low-cost military engagement, but as Lin-Greenberg underlines in the book, the effects are more complicated.
“To be clear, the remote revolution does not suggest that drones prevent war,” Lin-Greenberg writes. Indeed, one of the problems they raise, he emphasizes, is the “moral hazard” that arises from leaders viewing drones as less costly, which can lead to even more military confrontations.
Moreover, the trends in drone warfare so far yield predictions for the future that are “probabilistic rather than deterministic,” as Lin-Greenberg writes. Perhaps some political or military leaders will start to use drones to attack new targets that will inevitably generate major responses and quickly escalate into broad wars. Current trends do not guarantee future outcomes.
“There are a lot of unanswered questions in this area,” Lin-Greenberg says. “So much is changing. What does it look like when more drones are more autonomous? I still hope this book lays a foundation for future discussions, even as drones are used in different ways.”
Other scholars have praised “The Remote Revolution.” Joshua Kertzer, a professor of international studies and government at Harvard University, has hailed Lin-Greenberg’s “rich expertise, methodological rigor, and creative insight,” while Michael Horowitz, a political scientist and professor of international relations at the University of Pennsylvania, has called it “an incredible book about the impact of drones on the international security environment.”
For his part, Lin-Greenberg says, “My hope is the book will be read by academics and practitioners and people who choose to focus on parts of it they’re interested in. I tried to write the book in a way that’s approachable.”
Publication of the book was supported by funding from MIT’s Security Studies Program.
MIT senior turns waste from the fishing industry into biodegradable plastic
Sometimes the answers to seemingly intractable environmental problems are found in nature itself.
Take the growing challenge of plastic waste. Jacqueline Prawira, an MIT senior in the Department of Materials Science and Engineering (DMSE), has developed biodegradable, plastic-like materials from fish offal, as featured in a recent segment on the CBS show “The Visioneers with Zay Harding.”
“We basically made plastics to be too good at their job. That also means the environment doesn’t know what to do with this, because they simply won’t degrade,” Prawira told Harding. “And now we’re literally drowning in plastic. By 2050, plastics are expected to outweigh fish in the ocean.”
“The Visioneers” regularly highlights environmental innovators. The episode featuring Prawira premiered during a special screening at Climate Week NYC on Sept. 24.
Her inspiration came from the Asian fish market her family visits. Once the fish they buy are butchered, the scales are typically discarded.
“But I also started noticing they’re actually fairly strong. They’re thin, somewhat flexible, and pretty lightweight, too, for their strength,” Prawira says. “And that got me thinking: Well, what other material has these properties? Plastics.”
She transformed this waste product into a transparent, thin-film material that can be used for disposable products such as grocery bags, packaging, and utensils.
Both her fish-scale material and a composite she developed don’t just mimic plastic — they address one of its biggest flaws. “If you put them in composting environments, [they] will degrade on their own naturally without needing much, if any, external help,” Prawira says.
This isn’t Prawira’s first environmental innovation. Working in DMSE Professor Yet-Ming Chiang’s lab, she helped develop a low-carbon process for making cement — the world’s most widely used construction material, and a major emitter of carbon dioxide. The process, called silicate subtraction, enables compounds to form at lower temperatures, cutting fossil fuel use.
Prawira and her co-inventors in the Chiang lab are also using the method to extract valuable lithium with zero waste. The process is patented and is being commercialized through the startup Rock Zero.
For her achievements, Prawira recently received the Barry Goldwater Scholarship, awarded to undergraduates pursuing careers in science, mathematics, or engineering.
In her “Visioneers” interview, she shared her hope for more sustainable ways of living.
“I’m hoping that we can have daily lives that can be more in sync with the environment,” Prawira said. “So you don’t always have to choose between the convenience of daily life and having to help protect the environment.”
New lightweight polymer film can prevent corrosion
MIT researchers have developed a lightweight polymer film that is nearly impenetrable to gas molecules, raising the possibility that it could be used as a protective coating to prevent solar cells and other infrastructure from corrosion, and to slow the aging of packaged food and medicines.
The polymer, which can be applied as a film mere nanometers thick, completely repels nitrogen and other gases, as far as can be detected by laboratory equipment, the researchers found. That degree of impermeability has never been seen before in any polymer, and rivals the impermeability of molecularly thin crystalline materials such as graphene.
“Our polymer is quite unusual. It’s obviously produced from a solution-phase polymerization reaction, but the product behaves like graphene, which is gas-impermeable because it’s a perfect crystal. However, when you examine this material, one would never confuse it with a perfect crystal,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT.
The polymer film, which the researchers describe today in Nature, is made using a process that can be scaled up to large quantities and applied to surfaces much more easily than graphene.
Strano and Scott Bunch, an associate professor of mechanical engineering at Boston University, are the senior authors of the new study. The paper’s lead authors are Cody Ritt, a former MIT postdoc who is now an assistant professor at the University of Colorado at Boulder; Michelle Quien, an MIT graduate student; and Zitang Wei, an MIT research scientist.
Bubbles that don’t collapse
Strano’s lab first reported the novel material — a two-dimensional polymer called a 2D polyaramid that self-assembles into molecular sheets using hydrogen bonds — in 2022. To create such 2D polymer sheets, which had never been done before, the researchers used a building block called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can expand in two dimensions, forming nanometer-sized disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.
That polymer, which the researchers call 2DPA-1, is stronger than steel but has only one-sixth the density of steel.
In their 2022 study, the researchers focused on testing the material’s strength, but they also did some preliminary studies of its gas permeability. For those studies, they created “bubbles” out of the films and filled them with gas. With most polymers, such as plastics, gas that is trapped inside will seep out through the material, causing the bubble to deflate quickly.
However, the researchers found that bubbles made of 2DPA-1 did not collapse — in fact, bubbles that they made in 2021 are still inflated. “I was quite surprised initially,” Ritt says. “The behavior of the bubbles didn’t follow what you’d expect for a typical, permeable polymer. This required us to rethink how to properly study and understand molecular transport across this new material.”
“We set up a series of careful experiments to first prove that the material is molecularly impermeable to nitrogen,” Strano says. “It could be considered tedious work. We had to make micro-bubbles of the polymer and fill them with a pure gas like nitrogen, and then wait. We had to repeatedly check over an exceedingly long period of time that they weren’t collapsed, in order to report the record impermeability value.”
Traditional polymers allow gases through because they consist of a tangle of spaghetti-like molecules that are loosely joined together. This leaves tiny gaps between the strands. Gas molecules can seep through these gaps, which is why polymers always have at least some degree of gas permeability.
However, the new 2D polymer is essentially impermeable because of the way that the layers of disks stick to each other.
“The fact that they can pack flat means there’s no volume between the two-dimensional disks, and that’s unusual. With other polymers, there’s still space between the one-dimensional chains, so most polymer films allow at least a little bit of gas to get through,” Strano says.
George Schatz, a professor of chemistry and chemical and biological engineering at Northwestern University, described the results as “remarkable.”
“Normally polymers are reasonably permeable to gases, but the polyaramids reported in this paper are orders of magnitude less permeable to most gases under conditions with industrial relevance,” says Schatz, who was not involved in the study.
A protective coating
In addition to nitrogen, the researchers also exposed the polymer to helium, argon, oxygen, methane, and sulfur hexafluoride. They found that 2DPA-1’s permeability to those gases was at least 1/10,000 that of any other existing polymer. That makes it nearly as impermeable as graphene, which is completely impermeable to gases because of its defect-free crystalline structure.
Scientists have been working on developing graphene coatings as a barrier to prevent corrosion in solar cells and other devices. However, scaling up the creation of graphene films is difficult, in large part because they can’t be simply painted onto surfaces.
“We can only make crystal graphene in very small patches,” Strano says. “A little patch of graphene is molecularly impermeable, but it doesn’t scale. People have tried to paint it on, but graphene does not stick to itself but slides when sheared. Graphene sheets moving past each other are considered almost frictionless.”
On the other hand, the 2DPA-1 polymer sticks easily because of the strong hydrogen bonds between the layered disks. In this paper, the researchers showed that a layer just 60 nanometers thick could extend the lifetime of a perovskite crystal by weeks. Perovskites are materials that hold promise as cheap and lightweight solar cells, but they tend to break down much faster than the silicon solar panels that are now widely used.
A 60-nanometer coating extended the perovskite’s lifetime to about three weeks, but a thicker coating would offer longer protection, the researchers say. The films could also be applied to a variety of other structures.
“Using an impermeable coating such as this one, you could protect infrastructure such as bridges, buildings, rail lines — basically anything outside exposed to the elements. Automotive vehicles, aircraft and ocean vessels could also benefit. Anything that needs to be sheltered from corrosion. The shelf life of food and medications can also be extended using such materials,” Strano says.
The other application demonstrated in this paper is a nanoscale resonator — essentially a tiny drum that vibrates at a particular frequency. Larger resonators, with sizes around 1 millimeter or less, are found in cell phones, where they allow the phone to pick up the frequency bands it uses to transmit and receive signals.
“In this paper, we made the first polymer 2D resonator, which you can do with our material because it’s impermeable and quite strong, like graphene,” Strano says. “Right now, the resonators in your phone and other communications devices are large, but there’s an effort to shrink them using nanotechnology. To make them less than a micron in size would be revolutionary. Cell phones and other devices could be smaller and reduce the power expenditures needed for signal processing.”
Resonators can also be used as sensors to detect very tiny molecules, including gas molecules.
The research was funded, in part, by the Center for Enhanced Nanofluidic Transport-Phase 2, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, as well as the National Science Foundation.
This research was carried out, in part, using MIT.nano’s facilities.
Teaching large language models how to absorb new knowledge
In an MIT classroom, a professor lectures while students diligently write down notes they will reread later to study and internalize key information ahead of an exam.
Humans know how to learn new information, but large language models can’t do this in the same way. Once a fully trained LLM has been deployed, its “brain” is static and can’t permanently adapt itself to new knowledge.
This means that if a user tells an LLM something important today, it won’t remember that information the next time this person starts a new conversation with the chatbot.
Now, a new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information. Just like a student, the LLM generates its own study sheets from a user’s input, which it uses to memorize the information by updating its inner workings.
The model generates multiple self-edits to learn from one input, then applies each one to see which improves its performance the most. This trial-and-error process teaches the model the best way to train itself.
The researchers found this approach improved the accuracy of LLMs at question-answering and pattern-recognition tasks, and it enabled a small model to outperform much larger LLMs.
While there are still limitations that must be overcome, the technique could someday help artificial intelligence agents consistently adapt to new tasks and achieve changing goals in evolving environments.
“Just like humans, complex AI systems can’t remain static for their entire lifetimes. These LLMs are not deployed in static environments. They are constantly facing new inputs from users. We want to make a model that is a bit more human-like — one that can keep improving itself,” says Jyothish Pari, an MIT graduate student and co-lead author of a paper on this technique.
He is joined on the paper by co-lead author Adam Zweiger, an MIT undergraduate; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, an associate professor in EECS and member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems.
Teaching the model to learn
LLMs are neural network models that have billions of parameters, called weights, that contain the model’s knowledge and process inputs to make predictions. During training, the model adapts these weights to learn new information contained in its training data.
But once it is deployed, the weights are static and can’t be permanently updated anymore.
However, LLMs are very good at a process called in-context learning, in which a trained model learns a new task by seeing a few examples. These examples guide the model’s responses, but the knowledge disappears before the next conversation.
The MIT researchers wanted to leverage a model’s powerful in-context learning capabilities to teach it how to permanently update its weights when it encounters new knowledge.
The framework they developed, called SEAL for “self-adapting LLMs,” enables an LLM to generate new synthetic data based on an input, and then determine the best way to adapt itself and learn from that synthetic data. Each piece of synthetic data is a self-edit the model can apply.
In the case of language, the LLM creates synthetic data by rewriting the information, and its implications, in an input passage. This is similar to how students make study sheets by rewriting and summarizing original lecture content.
The LLM does this multiple times, then quizzes itself on each self-edit to see which led to the biggest boost in performance on a downstream task like question answering. It uses a trial-and-error method known as reinforcement learning, where it receives a reward for the greatest performance boost.
Then the model memorizes the best study sheet by updating its weights to internalize the information in that self-edit.
“Our hope is that the model will learn to make the best kind of study sheet — one that is the right length and has the proper diversity of information — such that updating the model based on it leads to a better model,” Zweiger explains.
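For readers who want a concrete picture of that loop, the sketch below lays out the basic idea in Python. It is illustrative pseudocode only: the helper functions (generate_self_edit, finetune_on, evaluate) and the toy dictionary "model" are stand-ins invented for this example, not the authors' implementation.

```python
# Illustrative sketch of a SEAL-style outer loop -- not the authors' code.
# The helpers below are hypothetical stand-ins for an LLM, a fine-tuning
# step, and a downstream evaluation such as question answering.
import copy
import random

def generate_self_edit(model, passage):
    # Stand-in: the model restates the passage and its implications
    # in its own words -- its "study sheet."
    return f"Restated notes on: {passage}"

def finetune_on(model, self_edit):
    # Stand-in: apply a small weight update using the self-edit as
    # training data; here we only record it on a toy model.
    model["notes"] = model.get("notes", []) + [self_edit]
    return model

def evaluate(model, eval_task):
    # Stand-in: score the model on a downstream task; a real system
    # would measure accuracy, not return a random number.
    return random.random()

def seal_update(model, passage, eval_task, num_candidates=4):
    """Generate several candidate self-edits and keep the one whose
    weight update helps most on the downstream task."""
    best_model, best_score = model, evaluate(model, eval_task)
    for _ in range(num_candidates):
        self_edit = generate_self_edit(model, passage)
        candidate = finetune_on(copy.deepcopy(model), self_edit)
        score = evaluate(candidate, eval_task)  # also the RL reward signal
        if score > best_score:
            best_model, best_score = candidate, score
    return best_model, best_score

updated_model, best = seal_update({"name": "toy-llm"},
                                  "New facts supplied by a user.", eval_task=None)
```

In SEAL itself, the score on the downstream task doubles as the reinforcement-learning reward that gradually teaches the model to write more useful self-edits.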
Choosing the best method
Their framework also allows the model to choose the way it wants to learn the information. For instance, the model can select the synthetic data it wants to use, the rate at which it learns, and how many iterations it wants to train for.
In this case, not only does the model generate its own training data, but it also configures the optimization that applies that self-edit to its weights.
“As humans, we know how we learn best. We want to grant that same ability to large language models. By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” Pari says.
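Conceptually, a self-edit in this setting carries not just the synthetic study data but also the training configuration the model chose for applying it. The structure below is an assumption made for illustration; the field names and default values are invented, not taken from the paper.

```python
# Hypothetical shape of a self-edit that bundles study data with the
# optimization settings the model selected for itself.
from dataclasses import dataclass

@dataclass
class SelfEdit:
    study_sheet: list[str]       # model-generated restatements of the input
    learning_rate: float = 1e-5  # how aggressively to update the weights
    num_epochs: int = 2          # how many passes over the study sheet

def apply_self_edit(model, edit: SelfEdit):
    """Fine-tune the model on its own study sheet using the settings
    it picked (placeholder loop; no real gradient steps here)."""
    for _ in range(edit.num_epochs):
        for example in edit.study_sheet:
            # A real implementation would take a gradient step on
            # `example` at `edit.learning_rate`.
            pass
    return model
```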
SEAL outperformed several baseline methods across a range of tasks, including learning a new skill from a few examples and incorporating knowledge from a text passage. On question answering, SEAL improved model accuracy by nearly 15 percent, and on some skill-learning tasks it boosted the success rate by more than 50 percent.
But one limitation of this approach is a problem called catastrophic forgetting: As the model repeatedly adapts to new information, its performance on earlier tasks slowly declines.
The researchers plan to mitigate catastrophic forgetting in future work. They also want to apply this technique in a multi-agent setting where several LLMs train each other.
“One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information. Though fully deployed self-adapting models are still far off, we hope systems able to learn this way could eventually overcome this and help advance science,” Zweiger says.
This work is supported, in part, by the U.S. Army Research Office, the U.S. Air Force AI Accelerator, the Stevens Fund for MIT UROP, and the MIT-IBM Watson AI Lab.
Understanding the nuances of human-like intelligence
What can we learn about human intelligence by studying how machines “think”? Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming a more significant part of our everyday lives?
These questions may be deeply philosophical, but for Phillip Isola, finding the answers is as much about computation as it is about cogitation.
Isola, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS), studies the fundamental mechanisms involved in human-like intelligence from a computational perspective.
While understanding intelligence is the overarching goal, his work focuses mainly on computer vision and machine learning. Isola is particularly interested in exploring how intelligence emerges in AI models, how these models learn to represent the world around them, and what their “brains” share with the brains of their human creators.
“I see all the different kinds of intelligence as having a lot of commonalities, and I’d like to understand those commonalities. What is it that all animals, humans, and AIs have in common?” says Isola, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
To Isola, a better scientific understanding of the intelligence that AI agents possess will help the world integrate them safely and effectively into society, maximizing their potential to benefit humanity.
Asking questions
Isola began pondering scientific questions at a young age.
While growing up in San Francisco, he and his father frequently went hiking along the northern California coastline or camping around Point Reyes and in the hills of Marin County.
He was fascinated by geological processes and often wondered what made the natural world work. In school, Isola was driven by an insatiable curiosity, and while he gravitated toward technical subjects like math and science, there was no limit to what he wanted to learn.
Not entirely sure what to study as an undergraduate at Yale University, Isola dabbled until he came upon cognitive sciences.
“My earlier interest had been with nature — how the world works. But then I realized that the brain was even more interesting, and more complex than even the formation of the planets. Now, I wanted to know what makes us tick,” he says.
As a first-year student, he started working in the lab of his cognitive sciences professor and soon-to-be mentor, Brian Scholl, a member of the Yale Department of Psychology. He remained in that lab throughout his time as an undergraduate.
After spending a gap year working with some childhood friends at an indie video game company, Isola was ready to dive back into the complex world of the human brain. He enrolled in the graduate program in brain and cognitive sciences at MIT.
“Grad school was where I felt like I finally found my place. I had a lot of great experiences at Yale and in other phases of my life, but when I got to MIT, I realized this was the work I really loved and these are the people who think similarly to me,” he says.
Isola credits his PhD advisor, Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, as a major influence on his future path. He was inspired by Adelson’s focus on understanding fundamental principles, rather than only chasing new engineering benchmarks, which are formalized tests used to measure the performance of a system.
A computational perspective
At MIT, Isola’s research drifted toward computer science and artificial intelligence.
“I still loved all those questions from cognitive sciences, but I felt I could make more progress on some of those questions if I came at it from a purely computational perspective,” he says.
His thesis was focused on perceptual grouping, which involves the mechanisms people and machines use to organize discrete parts of an image as a single, coherent object.
If machines can learn perceptual groupings on their own, that could enable AI systems to recognize objects without human intervention. This type of self-supervised learning has applications in areas such as autonomous vehicles, medical imaging, robotics, and automatic language translation.
After graduating from MIT, Isola completed a postdoc at the University of California at Berkeley so he could broaden his perspectives by working in a lab solely focused on computer science.
“That experience helped my work become a lot more impactful because I learned to balance understanding fundamental, abstract principles of intelligence with the pursuit of some more concrete benchmarks,” Isola recalls.
At Berkeley, he developed image-to-image translation frameworks, an early form of generative AI model that could turn a sketch into a photographic image, for instance, or turn a black-and-white photo into a color one.
He entered the academic job market and accepted a faculty position at MIT, but Isola deferred for a year to work at a then-small startup called OpenAI.
“It was a nonprofit, and I liked the idealistic mission at that time. They were really good at reinforcement learning, and I thought that seemed like an important topic to learn more about,” he says.
He enjoyed working in a lab with so much scientific freedom, but after a year Isola was ready to return to MIT and start his own research group.
Studying human-like intelligence
Running a research lab instantly appealed to him.
“I really love the early stage of an idea. I feel like I am a sort of startup incubator where I am constantly able to do new things and learn new things,” he says.
Building on his interest in cognitive sciences and desire to understand the human brain, his group studies the fundamental computations involved in the human-like intelligence that emerges in machines.
One primary focus is representation learning, or the ability of humans and machines to represent and perceive the sensory world around them.
In recent work, he and his collaborators observed that many different types of machine-learning models, from LLMs to computer vision models to audio models, seem to represent the world in similar ways.
These models are designed to do vastly different tasks, but there are many similarities in their architectures. And as they get bigger and are trained on more data, their internal structures become more alike.
This led Isola and his team to introduce the Platonic Representation Hypothesis (drawing its name from the Greek philosopher Plato), which says that the representations all these models learn are converging toward a shared, underlying representation of reality.
“Language, images, sound — all of these are different shadows on the wall from which you can infer that there is some kind of underlying physical process — some kind of causal reality — out there. If you train models on all these different types of data, they should converge on that world model in the end,” Isola says.
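One common way to probe this kind of claim is to embed the same set of inputs with two different models and measure how similar the resulting representations are. The sketch below uses linear centered kernel alignment (CKA), a standard similarity measure, purely as a generic illustration; it is not the specific metric used in the Platonic Representation Hypothesis work.

```python
# Generic illustration: compare two models' embeddings of the same inputs
# with linear centered kernel alignment (CKA). Values near 1 mean the two
# representations are nearly the same up to rotation and scaling.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X and Y are (n_inputs, n_features) embeddings of the same inputs;
    the feature dimensions may differ between the two models."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return float(cross / (np.linalg.norm(X.T @ X, "fro") *
                          np.linalg.norm(Y.T @ Y, "fro")))

# Random stand-ins for, say, a vision model and a correlated text model.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(100, 512))
emb_b = emb_a @ rng.normal(size=(512, 384))
print(round(linear_cka(emb_a, emb_b), 3))
```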
A related area his team studies is self-supervised learning. This involves the ways in which AI models learn to group related pixels in an image or words in a sentence without having labeled examples to learn from.
Because data are expensive and labels are limited, using only labeled data to train models could hold back the capabilities of AI systems. With self-supervised learning, the goal is to develop models that can come up with an accurate internal representation of the world on their own.
“If you can come up with a good representation of the world, that should make subsequent problem solving easier,” he explains.
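A minimal example of that idea is contrastive self-supervised learning, in which two augmented views of the same input are pulled together in representation space while different inputs are pushed apart. The InfoNCE-style loss below is a generic textbook formulation, not a specific model from Isola's group.

```python
# Generic contrastive (InfoNCE-style) loss: row i of z1 and row i of z2
# are embeddings of two views of the same input; other rows are negatives.
import numpy as np

def info_nce_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.1) -> float:
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # unit-normalize rows
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature                   # pairwise similarities
    # Treat each row as a classification problem whose correct answer
    # (the positive pair) sits on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(1)
views_a = rng.normal(size=(8, 64))
views_b = views_a + 0.05 * rng.normal(size=(8, 64))      # slightly perturbed views
print(round(info_nce_loss(views_a, views_b), 3))
```

Driving such a loss down forces the model to build an internal representation that captures what the two views have in common, with no labels involved.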
The focus of Isola’s research is more about finding something new and surprising than about building complex systems that can outdo the latest machine-learning benchmarks.
While this approach has yielded much success in uncovering innovative techniques and architectures, it means the work sometimes lacks a concrete end goal, which can lead to challenges.
For instance, keeping a team aligned and the funding flowing can be difficult when the lab is focused on searching for unexpected results, he says.
“In a sense, we are always working in the dark. It is high-risk and high-reward work. Every once in a while, we find some kernel of truth that is new and surprising,” he says.
In addition to pursuing knowledge, Isola is passionate about imparting knowledge to the next generation of scientists and engineers. Among his favorite courses to teach is 6.7960 (Deep Learning), which he and several other MIT faculty members launched four years ago.
The class has seen exponential growth, from 30 students in its initial offering to more than 700 this fall.
And while the popularity of AI means there is no shortage of interested students, the speed at which the field moves can make it difficult to separate the hype from truly significant advances.
“I tell the students they have to take everything we say in the class with a grain of salt. Maybe in a few years we’ll tell them something different. We are really on the edge of knowledge with this course,” he says.
But Isola also emphasizes to students that, for all the hype surrounding the latest AI models, intelligent machines are far simpler than most people suspect.
“Human ingenuity, creativity, and emotions — many people believe these can never be modeled. That might turn out to be true, but I think intelligence is fairly simple once we understand it,” he says.
Even though his current work focuses on deep-learning models, Isola is still fascinated by the complexity of the human brain and continues to collaborate with researchers who study cognitive sciences.
All the while, he has remained captivated by the beauty of the natural world that inspired his first interest in science.
Although he has less time for hobbies these days, Isola enjoys hiking and backpacking in the mountains or on Cape Cod, skiing and kayaking, or finding scenic places to spend time when he travels for scientific conferences.
And while he looks forward to exploring new questions in his lab at MIT, Isola can’t help but contemplate how the role of intelligent machines might change the course of his work.
He believes that artificial general intelligence (AGI), or the point where machines can learn and apply their knowledge as well as humans can, is not that far off.
“I don’t think AIs will just do everything for us and we’ll go and enjoy life at the beach. I think there is going to be this coexistence between smart machines and humans who still have a lot of agency and control. Now, I’m thinking about the interesting questions and applications once that happens. How can I help the world in this post-AGI future? I don’t have any answers yet, but it’s on my mind,” he says.
Leading quantum at an inflection point
Danna Freedman is seeking the early adopters.
She is the faculty director of the nascent MIT Quantum Initiative, or QMIT. In this new role, Freedman is giving shape to an ambitious, Institute-wide effort to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.
The interdisciplinary endeavor, the newest of MIT President Sally Kornbluth’s strategic initiatives, will bring together MIT researchers and domain experts from a range of industries to identify and tackle practical challenges wherever quantum solutions could achieve the greatest impact.
“We’ve already seen how the breadth of progress in quantum has created opportunities to rethink the future of security and encryption, imagine new modes of navigation, and even measure gravitational waves more precisely to observe the cosmos in an entirely new way,” says Freedman, the Frederick George Keyes Professor of Chemistry. “What can we do next? We’re investing in the promise of quantum, and where the legacy will be in 20 years.”
QMIT — the name is a nod to the “qubit,” the basic unit of quantum information — will formally launch on Dec. 8 with an all-day event on campus. Over time, the initiative plans to establish a physical home in the heart of campus for academic, public, and corporate engagement with state-of-the-art integrated quantum systems. Beyond MIT’s campus, QMIT will also work closely with the U.S. government and MIT Lincoln Laboratory, applying the lab’s capabilities in quantum hardware development, systems engineering, and rapid prototyping to national security priorities.
“The MIT Quantum Initiative seizes a timely opportunity in service to the nation’s scientific, economic, and technological competitiveness,” says Ian A. Waitz, MIT’s vice president for research. “With quantum capabilities approaching an inflection point, QMIT will engage students and researchers across all our schools and the college, as well as companies around the world, in thinking about what a step change in sensing and computational power will mean for a wide range of fields. Incredible opportunities exist in health and life sciences, fundamental physics research, cybersecurity, materials science, sensing the world around us, and more.”
Identifying the right questions
Quantum phenomena are as foundational to our world as light or gravity. At an extremely small scale, the interactions of atoms and subatomic particles are controlled by a different set of rules than the physical laws of the macro-sized world. These rules are called quantum mechanics.
“Quantum, in a sense, is what underlies everything,” says Freedman.
By leveraging quantum properties, quantum devices can process information at incredible speed to solve complex problems that aren’t feasible on classical supercomputers, and to enable ultraprecise sensing and measurement. Those improvements in speed and precision will become most powerful when optimized in relation to specific use cases, and as part of a complete quantum system. QMIT will focus on collaboration across domains to co-develop quantum tools, such as computers, sensors, networks, simulations, and algorithms, alongside the intended users of these systems.
As it develops, QMIT will be organized into programmatic pillars led by top researchers in quantum, including Paola Cappellaro, Ford Professor of Engineering and professor of nuclear science and engineering and of physics; Isaac Chuang, Julius A. Stratton Professor in Electrical Engineering and Physics; Pablo Jarillo-Herrero, Cecil and Ida Green Professor of Physics; William Oliver, Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics; Vladan Vuletić, Lester Wolfe Professor of Physics; and Jonilyn Yoder, associate leader of the Quantum-Enabled Computation Group at MIT Lincoln Laboratory.
While supporting the core of quantum research in physics, engineering, mathematics, and computer science, QMIT promises to expand the community at its frontiers, into astronomy, biology, chemistry, materials science, and medicine.
“If you provide a foundation that somebody can integrate with, that accelerates progress a lot,” says Freedman. “Perhaps we want to figure out how a quantum simulator we’ve built can model photosynthesis, if that’s the right question — or maybe the right question is to study 10 failed catalysts to see why they failed.”
“We are going to figure out what real problems exist that we could approach with quantum tools, and work toward them in the next five years,” she adds. “We are going to change the forward momentum of quantum in a way that supports impact.”
The MIT Quantum Initiative will be administratively housed in the Research Laboratory of Electronics (RLE), with support from the Office of the Vice President for Research (VPR) and the Office of Innovation and Strategy.
QMIT is a natural expansion of MIT’s Center for Quantum Engineering (CQE), a research powerhouse that engages more than 80 principal investigators across the MIT campus and Lincoln Laboratory to accelerate the practical application of quantum technologies.
“CQE has cultivated a tremendously strong ecosystem of students and researchers, engaging with U.S. government sponsors and industry collaborators, including through the popular Quantum Annual Research Conference (QuARC) and professional development classes,” says Marc Baldo, the Dugald C. Jackson Professor in Electrical Engineering and director of RLE.
“With the backing of former vice president for research Maria Zuber, former Lincoln Lab director Eric Evans, and Marc Baldo, we launched CQE and its industry membership group in 2019 to help bridge MIT’s research efforts in quantum science and engineering,” says Oliver, CQE’s director, who also spent 20 years at Lincoln Laboratory, most recently as a Laboratory Fellow. “We have an important opportunity now to deepen our commitment to quantum research and education, and especially in engaging students from across the Institute in thinking about how to leverage quantum science and engineering to solve hard problems.”
Two years ago, Peter Fisher, the Thomas A. Frank (1977) Professor of Physics, in his role as associate vice president for research computing and data, assembled a faculty group led by Cappellaro and involving Baldo, Oliver, Freedman, and others, to begin to build an initiative that would span the entire Institute. Now, capitalizing on CQE’s success, Oliver will lead the new MIT Quantum Initiative’s quantum computing pillar, which will broaden the work of CQE into a larger effort that focuses on quantum computing, industry engagement, and connecting with end users.
The “MIT-hard” problem
QMIT will build upon the Institute’s historic leadership in quantum science and engineering. In the spring of 1981, MIT hosted the first Physics of Computation Conference at the Endicott House, bringing together nearly 50 physics and computing researchers to consider the practical promise of quantum — an intellectual moment that is now widely regarded as the kickoff of the second quantum revolution. (The first was the fundamental articulation of quantum mechanics 100 years ago.)
Today, research in quantum science and engineering produces a steady stream of “firsts” in the lab and a growing number of startup companies.
In collaboration with partners in industry and government, MIT researchers develop advances in areas like quantum sensing, which involves the use of atomic-scale systems to measure certain properties, like distance and acceleration, with extreme precision. Quantum sensing could be used in applications like brain imaging devices that capture more detail, or air traffic control systems with greater positional accuracy.
Another key area of research is quantum simulation, which uses the power of quantum computers to accurately emulate complex systems. This could fuel the discovery of new materials for energy-efficient electronics or streamline the identification of promising molecules for drug development.
“Historically, when we think about the most well-articulated challenges that quantum will solve,” Freedman says, “the best ones have come from inside of MIT. We’re open to technological solutions to problems, and nontraditional approaches to science. In many respects, we are the early adopters.”
But she also draws a sharp distinction between blue-sky thinking about what quantum might do, and the deeply technical, deeply collaborative work of actually drawing the roadmap. “That’s the ‘MIT-hard’ problem,” she says.
The QMIT launch event on Dec. 8 will include talks and discussions featuring MIT faculty, including Nobel laureates, as well as industry leaders.
MIT Energy Initiative launches Data Center Power Forum
With global power demand from data centers expected to more than double by 2030, the MIT Energy Initiative (MITEI) in September launched an effort that brings together MIT researchers and industry experts to explore innovative solutions for powering the data-driven future. At its annual research conference, MITEI announced the Data Center Power Forum, a targeted research effort for MITEI member companies interested in addressing the challenges of data center power demand. The Data Center Power Forum builds on lessons from MITEI’s May 2025 symposium on the energy to power the expansion of artificial intelligence (AI) and focus panels related to data centers at the fall 2024 research conference.
In the United States, data centers consumed 4 percent of the country’s electricity in 2023, with demand expected to increase to 9 percent by 2030, according to the Electric Power Research Institute. Much of the growth in demand is from the increasing use of AI, which is placing an unprecedented strain on the electric grid. This surge in demand presents a serious challenge for the technology and energy sectors, government policymakers, and everyday consumers, who may see their electric bills skyrocket as a result.
“MITEI has long supported research on ways to produce more efficient and cleaner energy and to manage the electric grid. In recent years, MITEI has also funded dozens of research projects relevant to data center energy issues. Building on this history and knowledge base, MITEI’s Data Center Power Forum is convening a specialized community of industry members who have a vital stake in the sustainable growth of AI and the acceleration of solutions for powering data centers and expanding the grid,” says William H. Green, the director of MITEI and the Hoyt C. Hottel Professor of Chemical Engineering.
MITEI’s mission is to advance zero- and low-carbon solutions to expand energy access and mitigate climate change. MITEI works with companies from across the energy innovation chain, including in the infrastructure, automotive, electric power, energy, natural resources, and insurance sectors. MITEI member companies have expressed strong interest in the Data Center Power Forum and are committing to support focused research on a wide range of energy issues associated with data center expansion, Green says.
MITEI’s Data Center Power Forum will provide its member companies with reliable insights into energy supply, grid load operations and management, the built environment, and electricity market design and regulatory policy for data centers. The forum complements MIT’s deep expertise in adjacent topics such as low-power processors, efficient algorithms, task-specific AI, photonic devices, quantum computing, and the societal consequences of data center expansion. As part of the forum, MITEI’s Future Energy Systems Center is funding projects relevant to data center energy in its upcoming proposal cycles. MITEI Research Scientist Deep Deka has been named the program manager for the forum.
“Figuring out how to meet the power demands of data centers is a complicated challenge. Our research is coming at this from multiple directions, from looking at ways to expand transmission capacity within the electrical grid in order to bring power to where it is needed, to ensuring that the quality of electrical service for existing users is not diminished when new data centers come online, to shifting computing tasks to times and places where energy is available on the grid,” says Deka.
MITEI currently sponsors substantial research related to data center energy topics across several MIT departments. The existing research portfolio includes more than a dozen projects related to data centers, including low- or zero-carbon solutions for energy supply and infrastructure, electrical grid management, and electricity market policy. MIT researchers funded through MITEI’s industry consortium are also designing more energy-efficient power electronics and processors and investigating behind-the-meter low-/no-carbon power plants and energy storage. MITEI-supported experts are studying how to use AI to optimize electrical distribution and the siting of data centers and conducting techno-economic analyses of data center power schemes. MITEI’s consortium projects are also bringing fresh perspectives to data center cooling challenges and considering policy approaches to balance the interests of stakeholders.
By drawing together industry stakeholders from across the AI and grid value chain, the Data Center Power Forum enables a richer dialog about solutions to power, grid, and carbon management problems in a noncommercial and collaborative setting.
“The opportunity to meet and to hold discussions on key data center challenges with other forum members from different sectors, as well as with MIT faculty members and research scientists, is a unique benefit of this MITEI-led effort,” Green says.
MITEI addressed the issue of data center power needs with its company members during its fall 2024 Annual Research Conference with a panel session titled, “The extreme challenge of powering data centers in a decarbonized way.” MITEI Director of Research Randall Field led a discussion with representatives from large technology companies Google and Microsoft, known as “hyperscalers,” as well as Madrid-based infrastructure developer Ferrovial S.E. and utility company Exelon Corp. Another conference session addressed the related topic, “Energy storage and grid expansion.” This past spring, MITEI focused its annual Spring Symposium on data centers, hosting faculty members and researchers from MIT and other universities, business leaders, and a representative of the Federal Energy Regulatory Commission for a full day of sessions on the topic, “AI and energy: Peril and promise.”
