MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
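The limit alluded to here is most likely the thermionic “Boltzmann” limit on the subthreshold swing: because current in a conventional transistor is carried by electrons surmounting an energy barrier, the gate voltage needed to change the current tenfold cannot fall below roughly 60 millivolts at room temperature:

$$
SS = \frac{dV_G}{d(\log_{10} I_D)} \;\geq\; \frac{k_B T}{q}\,\ln 10 \;\approx\; 60\ \text{mV/decade at } T \approx 300\ \text{K},
$$

where $k_B$ is Boltzmann’s constant, $T$ is temperature, and $q$ is the electron charge.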
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Can AI help predict which heart-failure patients will worsen within a year?
Characterized by weakened or damaged heart musculature, heart failure results in the gradual buildup of fluid in a patient’s lungs, legs, feet, and other parts of the body. The condition is chronic and incurable, often leading to arrhythmias or sudden cardiac arrest. For many centuries, bloodletting and leeches were the treatment of choice, famously practiced by barber-surgeons in Europe during a time when physicians rarely operated on patients.
In the 21st century, the management of heart failure has become decidedly less medieval: Today, patients are treated with a combination of healthy lifestyle changes, prescription medications, and sometimes pacemakers. Yet heart failure remains one of the leading causes of morbidity and mortality, placing a substantial burden on health-care systems across the globe.
“About half of the people diagnosed with heart failure will die within five years of diagnosis,” says Teya Bergamaschi, an MIT PhD student in the lab of Nina T. and Robert H. Rubin Professor Collin Stultz and the co-first author of a new paper introducing a deep learning model for predicting heart failure. “Understanding how a patient will fare after hospitalization is really important in allocating finite resources.”
The paper, published in eClinicalMedicine, a Lancet journal, by a team of researchers at MIT, Mass General Brigham, and Harvard Medical School, shares results from developing and testing PULSE-HF, which stands loosely for “Predict changes in left ventricULar Systolic function from ECGs of patients who have Heart Failure.” The project was conducted in Stultz’s lab, which is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health. Developed and retrospectively tested across three different patient cohorts from Massachusetts General Hospital, Brigham and Women’s Hospital, and MIMIC-IV (a publicly available dataset), the deep learning model accurately predicts changes in the left ventricular ejection fraction (LVEF), which is the percentage of blood pumped out of the left ventricle of the heart.
A healthy human heart pumps out about 50 to 70 percent of blood from the left ventricle with each beat — anything less is considered a sign of a potential problem. “The model takes an [electrocardiogram] and outputs a prediction of whether or not there will be an ejection fraction within the next year that falls below 40 percent,” says Tiffany Yau, an MIT PhD student in Stultz’s lab who is also co-first author of the PULSE-HF paper. “That is the most severe subgroup of heart failure.”
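In terms of the quantities an echocardiogram measures, the ejection fraction compares the volume of blood in the left ventricle just before a beat (end-diastolic volume, EDV) with the volume remaining just after (end-systolic volume, ESV):

$$
\text{LVEF} = \frac{\text{EDV} - \text{ESV}}{\text{EDV}} \times 100\%,
$$

so a reading below 40 percent means less than two-fifths of the ventricle’s blood is ejected with each contraction.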
If PULSE-HF predicts that a patient’s ejection fraction is likely to worsen within a year, the clinician can prioritize the patient for follow-up. Lower-risk patients, in turn, can reduce their number of hospital visits and the amount of time spent getting 10 electrodes adhered to their body for a 12-lead ECG. The model can also be deployed in low-resource clinical settings, including doctors’ offices in rural areas that don’t typically have a cardiac sonographer on staff to run ultrasounds on a daily basis.
“The biggest thing that distinguishes [PULSE-HF] from other heart failure ECG methods is instead of detection, it does forecasting,” says Yau. The paper notes that to date, no other methods exist for predicting future LVEF decline among patients with heart failure.
During the testing and validation process, the researchers used a metric known as "area under the receiver operating characteristic curve" (AUROC) to measure PULSE-HF’s performance. AUROC is typically used to measure a model’s ability to discriminate between classes on a scale from 0 to 1, with 0.5 being random and 1 being perfect. PULSE-HF achieved AUROCs ranging from 0.87 to 0.91 across all three patient cohorts.
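As a minimal, purely illustrative sketch of the metric (not the team’s code), AUROC can be computed from predicted risk scores and observed outcomes with a standard library call; the labels and scores below are invented:

```python
# Toy illustration of AUROC, the metric used to evaluate PULSE-HF.
# The labels and scores below are invented for demonstration only.
from sklearn.metrics import roc_auc_score

# 1 = LVEF fell below 40 percent within a year, 0 = it did not
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
# Hypothetical predicted probabilities of decline for each patient
y_score = [0.10, 0.70, 0.80, 0.25, 0.65, 0.90, 0.40, 0.55]

auroc = roc_auc_score(y_true, y_score)
print(f"AUROC = {auroc:.2f}")  # 0.88 here; 0.5 is chance, 1.0 is perfect
```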
Notably, the researchers also built a version of PULSE-HF for single-lead ECGs, meaning only one electrode needs to be placed on the body. While 12-lead ECGs are generally considered superior for being more comprehensive and accurate, the single-lead version of PULSE-HF performed just as strongly as the 12-lead version.
For all the elegant simplicity of the idea behind PULSE-HF, its execution, like that of most clinical AI research, was laborious. “It’s taken years [to complete this project],” Bergamaschi recalls. “It’s gone through many iterations.”
One of the team’s biggest challenges was collecting, processing, and cleaning the ECG and echocardiogram datasets. While the model aims to forecast a patient’s ejection fraction, the labels for the training data weren’t always readily available. Much like a student learning from a textbook with an answer key, labeling is critical for helping machine-learning models correctly identify patterns in data.
Clean, linear text in the form of TXT files typically works best when training models. But echocardiogram files typically come in the form of PDFs, and when PDFs are converted to TXT files, the text (which gets broken up by line breaks and formatting) becomes difficult for the model to read. The unpredictable nature of real-life scenarios, like a restless patient or a loose lead, also marred the data. “There are a lot of signal artifacts that need to be cleaned,” Bergamaschi says. “It’s kind of a never-ending rabbit hole.”
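The signal-artifact cleaning Bergamaschi describes often begins with simple filtering. Below is a minimal sketch of one common step, a band-pass filter that removes baseline wander and high-frequency noise; the sampling rate and cutoffs are textbook defaults, not details of the PULSE-HF pipeline:

```python
# Illustrative ECG cleaning step: a zero-phase band-pass filter that
# strips baseline wander (slow drift) and high-frequency noise.
# Sampling rate and cutoffs are textbook values, not PULSE-HF's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # assumed sampling rate, in Hz

def clean_ecg(raw, low=0.5, high=40.0, fs=FS):
    """Band-pass filter a single-lead ECG trace without phase distortion."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, raw)

# Example on a synthetic 10-second trace: a slow heartbeat-like wave
# buried in noise.
t = np.arange(0, 10, 1 / FS)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
cleaned = clean_ecg(raw)
```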
While Bergamaschi and Yau acknowledge that more complicated methods could help filter the data for better signals, there is a limit to the usefulness of these approaches. “At what point do you stop?” Yau asks. “You have to think about the use case — is it easiest to have this model that works on data that is slightly messy? Because it probably will be.”
The researchers anticipate that the next step for PULSE-HF will be testing the model in a prospective study on real patients, whose future ejection fraction is unknown.
Despite the challenges inherent to bringing clinical AI tools like PULSE-HF over the finish line, including the possible risk of prolonging a PhD by another year, the students feel that the years of hard work were worthwhile.
“I think things are rewarding partially because they’re challenging,” Bergamaschi says. “A friend said to me, ‘If you think you will find your calling after graduation, if your calling is truly calling, it will be there in the one additional year it takes you to graduate.’ … The way we’re measured as researchers in [the ML and health] space is different from other researchers in [the] ML space. Everyone in this community understands the unique challenges that exist here.”
“There’s too much suffering in the world,” says Yau, who joined Stultz’s lab after a health event made her realize the importance of machine learning in health care. “Anything that tries to ease suffering is something that I would consider a valuable use of my time.”
Discovering the joy of future-forward electrical engineering
“It’s a real validation of all the work behind the scenes,” says Karl Berggren, faculty head of electrical engineering within the MIT Department of Electrical Engineering and Computer Science (EECS). He’s looking at the numbers of new enrollees in Course 6-5, Electrical Engineering With Computing, the flagship electrical engineering degree offered by EECS, which was launched last fall.
The new major has been embraced by the MIT student community. “The fact that Course 6-5 is now the third-most selected major among first-year students shows that the department is clearly meeting a growing need for a curriculum that bridges electrical engineering and computing. This growth is coming from students already interested in pursuing a degree in EECS,” says Anantha Chandrakasan, MIT’s provost. “The major was thoughtfully designed to offer a strong foundation in core electrical engineering concepts — such as circuits, signals, systems, and architecture — while also providing well-structured specialization tracks that prepare students for the future of the field.”
Those tracks include structured paths to explore not only the traditional domains of electrical engineering (such as hardware design and energy systems), but cutting-edge fields such as nanoelectronics, quantum systems engineering, and photonics.
“They are very flexible, and essentially allow me to take whatever I want, with the tracks filling up almost automatically,” says 6-5 major Charles Reischer. “For me, it essentially reduces the amount of specific required classes in the major, which has been helpful for choosing the classes I find interesting.”
Jelena Notaros, who helped develop the Electromagnetics and Photonics track within the new major, has seen the new wave of student interest from the other side. “It’s been incredibly rewarding … I think students are excited to have the opportunity to take a class where they can learn about a cutting-edge field and test real state-of-the-art chip hardware using industry-standard equipment.” Notaros’s class, 6.2320 (Silicon Photonics), includes features not found in a university class anywhere else, such as a sequence in which students can test actual chips at three electronic-photonic probe stations.
Another 6-5 track, Quantum Systems Engineering, features direct student access to quantum hardware, including electron-nuclear systems and state-of-the-art simulation methods and tools. Professor Dirk Englund, who teaches multiple courses within the track, explains, “It’s been so successful in part through strong industry support, including from QuTools Inc. Students work with the same tech we use in the Boston-Area Quantum Network Testbed — the metro quantum network linking MIT, Lincoln Lab, and Harvard, and the NSF CQN.”
Many of Englund’s students have gone on to pursue a career in quantum information science, either in grad school or in industry. “Students recognize quantum engineering is the future. They see they’re building the foundation for metro-scale quantum networks.”
The new curriculum’s emphasis on hands-on learning is deliberate, and ubiquitous throughout 6-5. Within the Circuits track, students who enroll in class 6.208 (Semiconductor Electronic Circuits) will get an opportunity not only to design a circuit, but to actually see their design made, in a process called “tape-out.” Professor Ruonan Han, who helped design the course, explains, “A tape-out is a perfect training that poses [real-life] constraints and forces the students to solve practical engineering problems. Through circuit simulation using mainstream industry CAD tools, the students better understand how deep-scaled transistors differ from the ideal behaviors taught in textbooks. By drawing the layouts of the silicon and metal patterns, the students learn how a modern chip is made, layer by layer. The complex (and often frustrating) rules of the layout also keep reminding the students of all the technical limitations of chip manufacturing, and make them better appreciate all the accomplishments in semiconductor manufacturing. Even the firm and non-negotiable tape-out submission deadline forces the students not only to wisely manage their development timeline, but also to experience heart-beating moments when decisions on critical engineering trade-offs must be made (in order to deliver). For these students, it was such relentless effort that gave them great satisfaction and pride when they finally held their own chips in hand.”
The sense of completing a full problem-solving cycle is echoed in class 6.900 (Engineering for Impact), a capstone course designed by Professor Joel Voldman, a former faculty head of electrical engineering, along with Senior Lecturer Joe Steinmeyer. Over the course of a semester, students team with city governments and nonprofits to solve complex local issues. The course is designed not only to introduce students to realistic project management factors (such as budgets, timelines, and stakeholders), but also to give them a taste of the satisfaction of engineering a solution that meets a real community’s need.
“I’ve taken 6.900, and it’s been eye-opening in the collaboration of hardware, firmware, and software to create a cohesive and working product,” says Andrea Leang, a senior majoring in 6-2 who nonetheless decided to try the new course. “In my 6-2 experience, I spent the first two years taking more CS [computer science] classes, but as I went into junior year, I wanted to explore more EE [electrical engineering].” That desire led Leang to Voltage, the student group for electrical engineers. “Honestly, it was the first big community of EE I’ve joined. Joining Voltage opened my eyes to what MIT had to offer on EE, and a community that was enthusiastic to share their knowledge.”
Matthew Kim, one of the executives of the Voltage group, echoes Leang’s experience. “It has been great working [...] to build a community for EE. We heard faculty say that they wanted to be more engaged with students and communicate more, and it has definitely been felt with the restart and support of Voltage. And I’m hopeful that the community will continue to grow.”
That growth has been rapid. The new major’s enrollment is now roughly equivalent to the combined enrollment in the older 6-1 and 6-2 programs, showing the desirability of a major that incorporates fundamentals of both computing and electrical engineering.
Department head Professor Asu Ozdaglar is thrilled with the energizing effect of the new major. “We are delighted to see the initial success of the 6-5 major, which provides our students an exciting and forward-looking curriculum, developed through extensive work and [a] great deal of thought by electrical engineering faculty. The new curriculum reflects the critical role computing plays in electrical engineering, whether in designing new devices and circuits, analyzing data, or in studying complex systems, which almost invariably combine hardware and software.”
“What excites me most about this major is how it empowers students to bring ideas to life — from the invisible signals that connect our world to the complex systems that drive modern technology,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Warren Professor of Electrical Engineering and Computer Science. “Students are using computation as a creative and analytical tool to expand the boundaries of engineering. They gain a deep understanding of how hardware and software come together to drive technological progress.”
The new degree program’s designers are gratified by the swell of student interest.
“The buzz surrounding the classes and the new 6-5 degree program is fantastic,” says Voldman. “It’s great to see the strong student interest in what we’ve put together.”
3 Questions: Fortifying our planetary defenses
When people think of asteroids, they tend to picture rare, civilization-ending impacts like those depicted in movies such as “Armageddon.” In reality, the asteroids most likely to affect modern society are much smaller. While kilometer-scale impacts occur only every tens of millions of years, decameter-scale (building-sized) objects strike Earth far more frequently: roughly every couple of decades. As astronomers develop new ways to detect and track these smaller asteroids, planetary defense becomes increasingly relevant for protecting the space-based infrastructure that underpins modern life, from GPS navigation to global communications.
The good news for us earthlings is that a team of MIT researchers is on this space-case. Associate Professor Julien de Wit, Research Scientist Artem Burdanov, and their colleagues recently developed a new asteroid-detection method that could be used to track potential asteroid impactors and help protect our planet. They have now applied this new technique to the James Webb Space Telescope (JWST), demonstrating that JWST can be used to detect and characterize decameter-scale asteroids all the way out to the main belt, a crucial step in fortifying our planetary safety and security. De Wit and his colleagues recently co-led, with Andrew Rivkin PhD ’91, new observations of an asteroid called 2024 YR4, which made headlines last year when it was first discovered. They were able to determine that the asteroid will not collide with the Moon, an impact that could have affected Earth’s critical satellite systems.
De Wit, Burdanov, Assistant Professor Richard Teague, and Research Scientist Saverio Cambioni spoke to MIT News about the importance of planetary defense and how MIT astronomers are helping to lead the charge to ensure our planet’s safety.
Q: What is planetary defense and how is the field changing?
Burdanov: Planetary defense is a field of science and engineering that’s focused on preventing asteroids and comets from hitting the Earth. While traditionally the field has been focused on much larger asteroids, thanks to new observational capabilities the field is growing to include monitoring much smaller asteroids that could also have an impact.
De Wit: When people think about asteroids they tend to think of impacts along the lines of these rare, civilization-ending “dinosaur killer” asteroids — objects that are scientifically fascinating but, happily, statistically unlikely on human timescales. But as soon as you move to smaller asteroids, there are so many of them that you’re looking at impacts happening every few decades or less. That becomes much more relevant on human timescales.
Now that our society has become increasingly reliant on space-based infrastructure for communication, navigation technologies like GPS and satellite-based security systems, we can be affected by different populations of smaller asteroids. These smaller asteroids will probably lead to zero direct human casualties but would have very different consequences on our space infrastructure. At the same time, because they are smaller, they require different technologies to monitor and understand them, both for the detection and for the characterization. At MIT, we are working to redefine planetary defense in a way that is far more pertinent, personable, and practical — focusing on these much smaller asteroids that could have real consequences. In other words, planetary defense is no longer just about avoiding extinction-level events. It is about protecting the systems we depend on in the near term.
Q: Why are observations with telescopes like the James Webb Space Telescope (JWST) so important to keeping our planet safe?
Teague: We’re entering a time now where we have these large-scale sky surveys that are going to be producing an incredible amount of data. We’re trying to develop the framework here at MIT where we can sift through that data as quickly and efficiently as possible, and then use the resources that we have available, such as the optical and radio observatories that we run like the MIT Haystack and Wallace Observatories, to follow up on those potential threats as quickly as possible and determine whether they could be problematic.
We’ve been doing trial observations to try and piece together how fast we can do this. The challenging thing is that the smaller objects that we’ve been talking about, the decameter ones, are really hard to detect from the ground. They’re just so small, and so that’s why we really need to use space-based facilities like JWST to help keep our planet safe. JWST is just incomparable, really, for detecting these very small, faint objects. A lot of our work at MIT at the moment is trying to understand how we build that entire pipeline — from detection to risk assessment to mitigation — under one roof to make it as efficient as possible. And I think this is a really MIT-type of problem to solve. There are not many places that have the same range of experts in astronomy and engineering and technology to really tackle this properly. It’s really exciting that MIT hosts all these sorts of experts that we’re bringing together to solve this problem and keep our planet safer.
Cambioni: There is going to be what I like to call an asteroid revolution coming up because in addition to JWST’s observational capabilities, there is a new observatory in Chile called the Vera Rubin Observatory that could increase the detection of known small objects in space by a factor of 10. The most important thing to keep in mind, though, is that this observatory will detect the objects but may lose a lot of them. This is where a part of our work is coming in, to basically follow that object and map it as soon as possible. Additionally, Vera Rubin only looks at the reflected light, and it doesn’t get a precise estimate of an asteroid’s size. This gap between detection and characterization is a fundamental problem of asteroid science, between how many objects we discover and how fast we can characterize them. At MIT, we are using our in-house capabilities to help characterize these objects. That includes the MIT Wallace Observatory and the MIT Haystack Observatory.
Q: What role can MIT play in this new era of planetary defense?
De Wit: The reality is that, given the occurrence rate of these smaller asteroids and the new observational capabilities now coming online — from the Rubin Observatory to space-based facilities like JWST — we expect that within the next decade we will identify a handful of decameter-scale objects whose trajectories place them on course to impact the Earth-Moon system within this century. At that point, society will face a very practical question: whether, and how, to respond. Because these are much smaller objects than the dinosaur-killing asteroids, the types of mitigation strategies that we may envision are different. This is also where I think MIT might have an important role to play in the development, design, and potentially even construction of cost-effective, rapid-response asteroid-mitigation strategies. To help organize that effort, we have begun bringing together researchers across the Institute through the Planetary Defense at MIT project, working closely with colleagues on the engineering side.
Teague: What I’m particularly excited about is the way we’ve managed to engage students at MIT in this research as well. We’ve really focused on the impactful research and the way we’re bridging departments and labs within MIT, and this has been a fantastic way to engage students with practical astronomy and research. Saverio has run an IAP [Independent Activities Period] course, and we’re also running a student observing lab with the Wallace Observatory, where we hire a cohort of students every semester, and they’re taught how to use these observatories remotely. They take the data, do the analysis, and this semester, we've got on the order of 10 undergraduate students that are going to be working throughout the semester to take these observations and help us build this observation pipeline.
It's great that here at MIT we’re not only pushing the forefront of the research, but we’re also training the next generation of astronomers that is going to come in and carry this project through and into the future.
2026 MacVicar Faculty Fellows named
Two outstanding MIT educators have been named MacVicar Faculty Fellows: professor of mechanical engineering Amos Winter and professor of electrical engineering and computer science Nickolai Zeldovich.
For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Fellows are chosen through an annual and highly competitive nomination process. The Registrar’s Office coordinates and administers the award on behalf of the Division of Graduate and Undergraduate Education. Nominations are reviewed by an advisory committee, and the provost selects the fellows.
Amos Winter: Bringing excitement to the classroom
Amos Winter is the Germeshausen Professor in the Department of Mechanical Engineering (MechE). He joined the faculty in 2012 and is best known for teaching class 2.007 (Design and Manufacturing I).
A hallmark of Winter’s pedagogy is the way he connects technical learning and core engineering science with real-world impacts. His approach keeps students actively engaged and encourages critical thinking while developing their competence and confidence as design engineers. Current graduate student Ariel Mobius ’24 writes, “Professor Winter is a transformative educator. He successfully blends rigorous technical instruction with lessons on problem scoping and hands-on learning and backs it all up with personalized mentorship. He is a committed advocate for his students and has fundamentally shaped my path as a mechanical engineer.”
Especially notable is Winter’s energetic style and use of interactive materials and demonstrations to make fundamental topics tangible. “He wheels in a large steamer trunk filled with demos he has built or collected to illustrate the day’s topic,” writes Class of 1948 Career Development Professor and assistant professor of mechanical engineering Kaitlyn Becker. “Some demos are enduring classics and others newly designed each year.” In his “Gearhead Moment of Zen,” Winter shares an astonishing car stunt and uses course material to explain its mechanics. “The theatrics stay in students’ minds,” says Becker, highlighting how Winter’s dramatic examples reinforce learning.
These techniques, combined with a supportive culture, allowed Winter to transform 2.007 from a core class and first subject in engineering design into a celebration of student effort and learning. Throughout the term, students learn how to design and build objects, culminating in a robot competition in which their creations tackle themed challenges on a life-size game board. In the past, fewer than half the students were able to compete; today, boosted by Winter’s mentorship and enthusiasm, nearly 97 percent finish a competition-ready robot.
Ralph E. and Eloise F. Cross Professor of Mechanical Engineering David Hardt writes, “Thanks to Amos, this subject has become transformative for many MechE undergraduates.” Becker concurs: “He is the heart and captain of the 2.007 ‘cheer squad,’ cultivating a caring and motivated teaching team.”
Current graduate student Aidan Salazar ’25 notes, “His teaching philosophy is grounded in empowerment: he encourages students to take risks when designing while giving them the confidence and support needed to do so with thoughtful engineering analysis.”
Winter is also deeply invested in students’ growth outside the classroom. He serves as faculty supervisor for MIT’s Formula SAE (Society of Automotive Engineers) and Solar Car teams and guides related UROP projects. In fall 2025 alone, he advised nearly 50 UROP students from the teams, demonstrating his commitment to experiential learning and ability to mentor students at scale.
Salazar continues: “He has offered extraordinary contributions in helping MIT undergraduates embody the Institute’s ‘mens-et-manus’ [‘mind-and-hand’] motto, and I am grateful to be one of the individuals shaped by his teaching.”
“I have always looked up to my colleagues who are MacVicar Fellows as the best educators at the Institute,” writes Winter. “What makes this acknowledgement even more special to me is earning it from teaching 2.007, which I often cite as one of the best parts of my job. The class is where most mechanical engineering undergraduates gain their first real engineering experience by physically realizing a machine of their own conception. It has been extremely gratifying to watch a generation of students translate their knowledge of engineering and design from the class into their careers … I am honored to have played a role in their intellectual growth and done so meaningfully enough to be recognized as a MacVicar Fellow.”
Nickolai Zeldovich: Inspiring independent thinkers and future teachers
Nickolai Zeldovich is the Joan and Irwin M. (1957) Jacobs Professor of Electrical Engineering and Computer Science (EECS). Student testimonials highlight his unique ability to activate their problem-solving skills, cultivate their intellectual curiosity, and infuse learning with joy.
Katarina Cheng ’25 writes, “From my first day of lecture in the course, I was immediately drawn in by Professor Zeldovich’s joy and enthusiasm for every facet of security and its power,” and Rotem Hemo ’17, ’18 says that Zeldovich “empowers students to find solutions themselves.”
Yael Tauman Kalai, the Ellen Swallow Richards (1873) Professor and professor of EECS, concurs. She notes that his lectures — with back-and-forth discussion and probing questions — encourage independent thinking and ensure that “everyone feels a little smarter at the end. It is not surprising that students love him.”
Zeldovich’s affinity for problem-solving translates to his curricular work as well. When he arrived at MIT in 2008, Course 6 offered classes in theoretical and applied cryptography, but lacked a dedicated systems security subject. Recognizing this as a significant gap, Zeldovich took it upon himself to create class 6.566/6.858 (Computer Systems Security) in 2009. Since then, the subject has become a central part of the curriculum, but sustained interest from undergraduates revealed another need, and in 2021 he partnered with colleagues to create a dedicated introductory course: 6.1600 (Foundations of Computer Security).
Edwin Sibley Webster Professor of EECS Srini Devadas writes: “What our curriculum was sorely in need of was a systems security class, and Nickolai immediately and single-handedly created [it],” and has “taught this class to rave reviews ever since.”
The impact of Zeldovich’s thoughtful, inquiry-driven approach to pedagogy extends beyond the walls of his classroom, inspiring future educators, teaching assistants (TAs), and even his faculty colleagues at MIT.
Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor of computer science, writes that Zeldovich has “proven himself to be a dedicated teacher of teachers … One of the things that makes teaching with Nickolai so much fun is that he shares his passion with the undergraduates and MEng students who join the course staff as TAs.”
“[He] encourages the TAs to contribute their own creative ideas to the course,” continues Corrigan-Gibbs. “It should not be a surprise then that 100% of the TAs that we have had in our class have signed up to teach with Nickolai again.”
“Due, in no small part, to how I saw Nickolai lead his classroom, I was inspired to become an educator myself,” writes MIT alumna Anna Arpaci-Dusseau ’23, SM ’24. “I saw that the role of an instructor is not only to teach, but to innovate by thinking of creative projects, and to connect by listening to students’ concerns. As I go forward in my career, I am grateful to have such a wonderful example of an educator to look up to.”
Kalai adds, “I have learned a great deal from the two times that I have ‘taken’ (part of) the class from Nickolai. His extensive knowledge and experience are evident in every lecture. There is so much variety to Nickolai’s teaching.”
Nickolai Zeldovich is the recipient of numerous awards including the EECS Spira Teaching Award (2013), the Edgerton Faculty Achievement Award (2014), the EECS Faculty Research Innovation Fellowship (2018), and the EECS Jamieson Award for Excellence in Teaching (2024).
On receiving this award, Zeldovich says, “MIT has a culture of strong undergraduate education, so being selected as a MacVicar Fellow was truly an honor. It’s a joy to teach smart students about computer systems, and the tradition of co-teaching classes in the EECS department helped me improve as a teacher. Most of all, I look forward to continuing to teach MIT’s students!”
Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website.
3 Questions: On the future of AI and the mathematical and physical sciences
Curiosity-driven research has long sparked technological transformations. A century ago, curiosity about atoms led to quantum mechanics, and eventually the transistor at the heart of modern computing. Conversely, the steam engine was a practical breakthrough, but it took fundamental research in thermodynamics to fully harness its power.
Today, artificial intelligence and science find themselves at a similar inflection point. The current AI revolution has been fueled by decades of research in the mathematical and physical sciences (MPS), which provided the challenging problems, datasets, and insights that made modern AI possible. The 2024 Nobel Prizes in physics and chemistry, recognizing foundational AI methods rooted in physics and AI applications for protein design, made this connection impossible to miss.
In 2025, MIT hosted a Workshop on the Future of AI+MPS, funded by the National Science Foundation with support from the MIT School of Science and the MIT departments of Physics, Chemistry, and Mathematics. The workshop brought together leading AI and science researchers to chart how the MPS domains can best capitalize on — and contribute to — the future of AI. Now a white paper, with recommendations for funding agencies, institutions, and researchers, has been published in Machine Learning: Science and Technology. In this interview, Jesse Thaler, MIT professor of physics and chair of the workshop, describes key themes and how MIT is positioning itself to lead in AI and science.
Q: What are the report’s key themes regarding last year’s gathering of leaders across the mathematical and physical sciences?
A: Gathering so many researchers at the forefront of AI and science in one room was illuminating. Though the workshop participants came from five distinct scientific communities — astronomy, chemistry, materials science, mathematics, and physics — we found many similarities in how we are each engaging with AI. A real consensus emerged from our animated discussions: Coordinated investment in computing and data infrastructures, cross-disciplinary research techniques, and rigorous training can meaningfully advance both AI and science.
One of the central insights was that this has to be a two-way street. It’s not just about using AI to do better science; science can also make AI better. Scientists excel at distilling insights from complex systems, including neural networks, by uncovering underlying principles and emergent behaviors. We call this the “science of AI,” and it comes in three flavors: science driving AI, where scientific reasoning informs foundational AI approaches; science inspiring AI, where scientific challenges push the development of new algorithms; and science explaining AI, where scientific tools help illuminate how machine intelligence actually works.
In my own field of particle physics, for instance, researchers are developing real-time AI algorithms to handle the data deluge from collider experiments. This work has direct implications for discovering new physics, but the algorithms themselves turn out to be valuable well beyond our field. The workshop made clear that the science of AI should be a community priority — it has the potential to transform how we understand, develop, and control AI systems.
Of course, bridging science and AI requires people who can work across both worlds. Attendees consistently emphasized the need for “centaur scientists” — researchers with genuine interdisciplinary expertise. Supporting these polymaths at every career stage, from integrated undergraduate courses to interdisciplinary PhD programs to joint faculty hires, emerged as essential.
Q: How do MIT’s AI and science efforts align with the workshop recommendations?
A: The workshop framed its recommendations around three pillars: research, talent, and community. As director of the NSF Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) — a collaborative AI and physics effort among MIT and Harvard, Northeastern, and Tufts universities — I’ve seen firsthand how effective this framework can be. Scaling this up to MIT, we can see where progress is being made and where opportunities lie.
On the research front, MIT is already enabling AI-and-science work in both directions. Even a quick scroll through MIT News shows how individual researchers across the School of Science are pursuing AI-driven projects, building a pipeline of knowledge and surfacing new opportunities. At the same time, collaborative efforts like IAIFI and the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute concentrate interdisciplinary energy for greater impact. The MIT Generative AI Impact Consortium is also supporting application-driven AI work at the university scale.
To foster early-career AI-and-science talent, several initiatives are training the next generation of centaur scientists. The MIT Schwarzman College of Computing's Common Ground for Computing Education program helps students become “bilingual” in computing and their home discipline. Interdisciplinary PhD pathways are also gaining traction; IAIFI worked with the MIT Institute for Data, Systems, and Society to create one in physics, statistics, and data science, and about 10 percent of physics PhD students now opt for it — a number that's likely to grow. Dedicated postdoctoral roles like the IAIFI Fellowship and Tayebati Fellowship give early-career researchers the freedom to pursue interdisciplinary work. Funding centaur scientists and giving them space to build connections across domains, universities, and career stages has been transformative.
Finally, community-building ties it all together. From focused workshops to large symposia, organizing interdisciplinary events signals that AI and science isn’t siloed work — it’s an emerging field. MIT has the talent and resources to make a significant impact, and hosting these gatherings at multiple scales helps establish that leadership.
Q: What lessons can MIT draw about further advancing its AI-and-science efforts?
A: The workshop crystallized something important: The institutions that lead in AI and science will be the ones that think systematically, not piecemeal. Resources are finite, so priorities matter. Workshop attendees were clear about what becomes possible when an institution coordinates hires, research, and training around a cohesive strategy.
MIT is well positioned to build on what’s already underway with more structural initiatives — joint faculty lines across computing and scientific domains, expanded interdisciplinary degree pathways, and deliberate “science of AI” funding. We’re already seeing moves in this direction; this year, the MIT Schwarzman College of Computing and the Department of Physics are conducting their first-ever joint faculty search, which is exciting to see.
The virtuous cycle of AI-and-science has the potential to be truly transformative — offering deeper insight into AI, accelerating scientific discovery, and producing robust tools for both. By developing an intentional strategy, MIT will be well positioned to lead in, and benefit from, the coming waves of AI.
New MIT class uses anthropology to improve chatbots
Young adults growing up in the attention economy — preparing for adult life, with social media and chatbots competing for their attention — can easily fall into unhealthy relationships with digital platforms. But what if chatbots weren’t mere distractions from real life? Could they be designed humanely, as moral partners whose digital goal is to be a social guide rather than an addictive escape?
At MIT, a friendship between two professors — one an anthropologist, the other a computer scientist — led to the creation of an undergraduate class that set out to find the answer to those questions. Combining the two seemingly disparate disciplines, the class encourages students to design artificial intelligence chatbots in humane ways that help users improve themselves.
The class, 6.S061/21A.S02 (Humane User Experience Design, a.k.a. Humane UXD), is an upper-level computer science class cross-listed with anthropology. This unique cross-listing allows computer science majors to fulfill a humanities requirement while also pursuing their career objectives. The two professors use methods from linguistic anthropology to teach students how to integrate the interactional and interpersonal needs of humans into programming.
Professor Arvind Satyanarayan, a computer scientist whose research develops tools for interactive data visualization and user interfaces, and Professor Graham Jones, an anthropologist whose research focuses on communication, created Humane UXD last summer with a grant from the MIT Morningside Academy for Design (MAD). The MIT MAD Design Curriculum Program provides funding for faculty to develop new classes or enhance existing classes using innovative pedagogical approaches that transcend departmental boundaries.
The Design Curriculum Program is currently accepting applications for the 2026-27 academic year; the deadline is Friday, March 20.
Jones and Satyanarayan met several years ago when they co-advised a doctoral student’s research on data visualization for visually impaired people. They’ve since become close friends who can pretty much finish one another’s sentences.
“There’s a way in which you don’t really fully externalize what you know or how you think until you’re teaching,” Jones says. “So, it’s been really fun for me to see Arvind unfurl his expertise as a teacher in a way that lets me see how the pieces fit together — and discover underlying commonalities between our disciplines and our ways of thinking.”
Satyanarayan continues that thought: “One of the things I really enjoyed is the reciprocal version of what Graham said, which is that my field — human-computer interaction — inherited a lot of methods from anthropology, such as interviews and user studies and observation studies. And over the decades, those methods have gotten more and more watered down. As a result, a lot of things have been lost.
“For instance, it was very exciting for me to see how an anthropologist teaches students to interview people. It’s completely different than how I would do it. With my way, we lose the rapport and connection you need to build with your interview participant. Instead, we just extract data from them.”
For Jones’ part, teaching with a computer scientist holds another kind of allure: design. He says that human speech and interaction are organized into underlying genres with stable sets of rules that differentiate an interview at a cocktail party from a conversation at a funeral.
“ChatGPT and other large language models are trained on naturally occurring human communication, so they have all those genres inside them in a latent state, waiting to be activated,” he says.
“As a social scientist, I teach methods for analyzing human conversation, and give students very powerful tools to do that. But it ends up usually being an exercise in pure research, whereas this is a design class, where students are building real-world systems.”
The curriculum appears to be on target for preparing students for jobs after graduation. One student sought permission to miss class for a week because he had a trial internship at a chatbot startup; when he returned, he said his work at the startup was just like what he was learning in class. He got the job.
The sampling of group projects below, built with Google’s Gemini, demonstrates some of what’s possible when, as Jones says, “there’s a really deep intertwining of the technology piece with the humanities piece.” The students’ design work shows that entirely new ways of programming can be conceptualized when the humane is made a priority.
The bots demonstrate clearly that an interdisciplinary class can be designed in such a way that everyone benefits: Students learn more and differently; they can fulfill a non-major course requirement by taking a class that is directly beneficial to their careers; and long-term faculty partnerships can be forged or strengthened.
Team Pond
One project promises to be particularly useful for graduating seniors. Pond is designed to help young college graduates adapt to the challenges of independent adult life. Team Pond configured the chatbot not to simply parrot the user, or to sycophantically praise wrong answers. Instead, Pond provides advice to help with “adulting” (behaving as a responsible adult).
“Pond is built to be your companion from college life into post-college life, to help you in your transition from being a small fish in a small pond to being a small fish in a very big pond,” says sophomore Mary Feliz.
“College is very much a high-proximity and high-context environment, in the sense that everybody around you is going through the same thing, and it’s easy to build relationships or find opportunities, because there are structured pathways that you have access to,” explains graduate student Emaan Khan. “Post-grad life is low-context. You’re not always surrounded by your peers or your professors. It’s no-proximity also, in the sense that you don’t have opportunities at your doorstep. Pond is a tool to help empower you to access certain opportunities, or learn how to navigate.”
Pond’s developers are graduate student Claire Camacho, Harvard University graduate student Charlotte Cheah, Feliz, and Khan. They trained Pond to offer expertise in three areas: social life, professional life, and adult skills.
The students demonstrated Pond’s social communication by asking how to start conversations in a hobby group. Next, they exhibited Pond’s adulting communication by typing in a question.
“My landlord has asked me personal questions that make me uncomfortable. What should I do?”
Appearing in Pond’s textbox was this response: “Politely set boundaries. You are not obligated to answer personal questions. You can gently redirect the conversation back to property-related topics. Know your rights: familiarize yourself with tenant rights in your city or state regarding privacy and landlord conduct.”
When asked how to do that, Pond offered specific instructions: “You can try saying phrases like: ‘I prefer to keep our conversations focused on the property,’ and ‘let’s stick to topics related to the apartment, please.’ Always redirect the conversation back to the apartment or a legitimate maintenance issue. Keep your tone polite but firm. Document any conversations if needed.”
Pond also offered a role-playing scenario to help the user learn what polite-but-firm language might be in that situation.
“The ethos of the practice mode is that you are actively building a skill, so that after using Pond for some time, you feel confident that you can swim on your own,” Khan says. The chatbot uses a point system that allows users to graduate from a topic, and a treasure chest to store prizes, elements added to boost the bot’s appeal.
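The article does not include the students’ prompts or code, but the general pattern behind a Pond-like bot, steering a model away from sycophancy with a system instruction, can be sketched with Google’s generative AI Python SDK; everything in the prompt below is hypothetical:

```python
# Hypothetical sketch of how a Pond-like bot could be steered away from
# sycophancy with a system instruction, using the google-generativeai SDK.
# The prompt text is invented; the students' actual code is not public.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "You are Pond, a companion for recent college graduates. "
        "Do not simply mirror the user's views or praise flawed plans. "
        "Give concrete, actionable advice on social life, professional "
        "life, and adult skills, and offer a role-play practice mode "
        "whenever a skill can be rehearsed."
    ),
)

chat = model.start_chat()
reply = chat.send_message(
    "My landlord has asked me personal questions that make me "
    "uncomfortable. What should I do?"
)
print(reply.text)
```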
Team News Nest
Another of the projects, News Nest, provides a sophisticated means of helping young people engage with credible news sources in a way that makes it fun. The name is derived from the program’s 10 appealing and colorful birds, each of which focuses on a particular area of news. If you want the headlines, you ask Polly the Parrot, the main news carrier; if you’re interested in science, Gaia the Goose guides you. The flock also includes Flynn the Falcon, sports reporter; Credo the Crow, for crime and legal news; Edwin the Eagle, a business and economics news guide; Pizzazz the Peacock for pop and entertainment stories; and Pixel the Pigeon, a technology news specialist.
News Nest’s development team is made up of MIT seniors Tiana Jiang and Krystal Montgomery, and junior Natalie Tan. They intentionally built News Nest to prevent “doomscrolling” and to provide media transparency (sources and political leanings are always shown), and they created a clever, healthy buffer against emotional manipulation and engagement traps by employing birds rather than human characters.
Team M^3 (Multi-Agent Murder Mystery)
A third team, M^3, decided to experiment with making AI humane by keeping it fun. MIT senior Rodis Aguilar, junior David De La Torre, and second-year Deeraj Pothapragada developed M^3, a social deduction multi-agent murder mystery that incorporates four chatbots as different personalities: Gemini, OpenAI’s ChatGPT, xAI’s Grok, and Anthropic’s Claude. The user is the fifth player.
Like a regular murder mystery, there are locations, weapons, and lies. The user has to guess who committed the murder. It’s very similar to a board or online game played with real players, only these are enhanced AI opponents you can’t see, who may or may not tell the truth in response to questions. Users can’t get too involved with one chatbot, because they’re playing all four. Also, as in a real-life murder mystery game, the user is sometimes guilty.
New photonic device efficiently beams light into free space
Photonic chips use light to process data instead of electricity, enabling faster communication speeds and greater bandwidth. Most of that light typically stays on the chip, trapped in optical wires, and is difficult to transmit to the outside world in an efficient manner.
If a lot of light could be rapidly and precisely beamed off the chip, free from the confines of the wiring, it could open the door to higher-resolution displays, smaller Lidar systems, more precise 3D printers, or larger-scale quantum computers.
Now, researchers from MIT and elsewhere have developed a new class of photonic devices that enable the precise broadcasting of light from the chip into free space in a scalable way.
Their chip uses an array of microscopic structures that curl upward, resembling tiny, glowing ski jumps. The researchers can carefully control how light is emitted from thousands of these tiny structures at once.
They used this new platform to project detailed, full-color images that are roughly half the size of a grain of table salt. Used in this way, the technology could aid in the development of lightweight augmented reality glasses or compact displays.
They also demonstrated how photonic “ski jumps” could be used to precisely control quantum bits, or qubits, in a quantum computing system.
“On a chip, light travels in wires, but in our normal, free-space world, light travels wherever it wants. Interfacing between these two worlds has long been a challenge. But now, with this new platform, we can create thousands of individually controllable laser beams that can interact with the world outside the chip in a single shot,” says Henry Wen, a visiting research scientist in the Research Laboratory of Electronics (RLE) at MIT, research scientist at MITRE, and co-lead author of a paper on the new platform.
He is joined on the paper by co-lead authors Matt Saha, of MITRE; Andrew S. Greenspon, a visiting scientist in RLE and MITRE; Matthew Zimmermann, of MITRE; Matt Eichenfeld, a professor at the University of Arizona; senior author Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science and principal investigator in the Quantum Photonics and Artificial Intelligence Group and the RLE; as well as others at MIT, MITRE, Sandia National Laboratories, and the University of Arizona. The research appears today in Nature.
A scalable platform
This work grew out of the Quantum Moonshot Program, a collaboration between MIT, the University of Colorado at Boulder, the MITRE Corporation, and Sandia National Laboratories to develop a novel quantum computing platform using the diamond-based qubits being developed in the Englund lab.
These diamond-based qubits are controlled using laser beams, and the researchers needed a way to interact with millions of qubits at once.
“We can’t control a million laser beams, but we may need to control a million qubits. So, we needed something that can shoot laser beams into free space and scan them over a large area, kind of like firing a T-shirt gun into the crowd at a sports stadium,” Wen says.
Existing methods used to broadcast and steer light off a photonic chip typically work with only a few beams at once and can’t scale up enough to interact with millions of qubits.
To create a scalable platform, the researchers developed a new fabrication technique. Their method produces photonic chips with tiny structures that curve upward off the chip’s surface to shine laser beams into free space.
They built these tiny “ski jumps” for light by creating two-layer structures from two different materials. Each material contracts by a different amount as it cools down from the high fabrication temperatures.
The researchers designed the structures with special patterns in each layer so that, when the temperature changes, the difference in strain between the materials causes the entire structure to curve upward as it cools.
This is the same effect as in an old-fashioned thermostat, which utilizes a coil of two metallic materials that curl and uncurl based on the temperature in the room, triggering the HVAC system. “Both of these materials, silicon nitride and aluminum nitride, were separate technologies. Finding a way to put them together was really the fabrication innovation that enables the ski jumps. This wouldn’t have been possible without the pioneering contributions of Matt Eichenfield and Andrew Leenheer at Sandia National Labs,” Wen says.
On the chip, connected waveguides funnel light to the ski jump structures. The researchers use a series of modulators to rapidly and precisely control how that light is turned on and off, enabling them to project light off the chip and move it around in free space.
Painting with light
They can broadcast light in different colors and, by tweaking the frequencies of light, adjust the density of the pattern that is emitted. In this way, they can essentially paint pictures in free space using light.
“This system is so stable we don’t even need to correct for errors. The pattern stays perfectly still on its own. We just calculate what color lasers need to be on at a given time and then turn it on,” he says.
Because the individual points of light, or pixels, are so tiny, the researchers can use this platform to generate extremely high-resolution displays. For instance, with their technique, 30,000 pixels can fit into the same area that holds only two smartphone-display pixels, Wen says.
“Our platform is the ideal optical engine because our pixels are at the physical limit of how small a pixel can be,” he adds.
Beyond high-resolution displays and larger quantum computers with diamond-based qubits, the method could be used to produce Lidars that are small enough to fit on tiny robots.
It could also be utilized in 3D printing processes that fabricate objects using lasers to cure layers of resin. Because their chip generates controllable beams of light so rapidly, it could greatly increase the speed of these printing processes, allowing users to create more complex objects.
In the future, the researchers want to scale their system up and conduct additional experiments on the yield and uniformity of the light, design a larger system to capture light from an array of photonic chips with “ski jumps,” and conduct robustness tests to see how long the devices last.
“We envision this opening the door to a new class of lab-on-chip capabilities and lithographically defined micro-opto-robotic agents,” Wen says.
This research was funded, in part, by the MITRE Quantum Moonshot Program, the U.S. Department of Energy, and the Center for Integrated Nanotechnologies.
A better method for planning complex visual tasks
MIT researchers have developed a generative artificial intelligence-driven approach for planning long-term visual tasks, like robot navigation, that is about twice as effective as some existing techniques.
Their method uses a specialized vision-language model to perceive the scenario in an image and simulate actions needed to reach a goal. Then a second model translates those simulations into a standard programming language for planning problems, and refines the solution.
In the end, the system automatically generates a set of files that can be fed into classical planning software, which computes a plan to achieve the goal. This two-step system generated plans with an average success rate of about 70 percent, outperforming the best baseline methods that could only reach about 30 percent.
Importantly, the system can solve new problems it hasn’t encountered before, making it well-suited for real environments where conditions can change at a moment’s notice.
“Our framework combines the advantages of vision-language models, like their ability to understand images, with the strong planning capabilities of a formal solver,” says Yilun Hao, an aeronautics and astronautics (AeroAstro) graduate student at MIT and lead author of an open-access paper on this technique. “It can take a single image and move it through simulation and then to a reliable, long-horizon plan that could be useful in many real-life applications.”
She is joined on the paper by Yongchao Chen, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS); Chuchu Fan, an associate professor in AeroAstro and a principal investigator in LIDS; and Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab. The paper will be presented at the International Conference on Learning Representations.
Tackling visual tasks
For the past few years, Fan and her colleagues have studied the use of generative AI models to perform complex reasoning and planning, often employing large language models (LLMs) to process text inputs.
Many real-world planning problems, like robotic assembly and autonomous driving, have visual inputs that an LLM can’t handle well on its own. The researchers sought to expand into the visual domain by utilizing vision-language models (VLMs), powerful AI systems that can process images and text.
But VLMs struggle to understand spatial relationships between objects in a scene and often fail to reason correctly over many steps. This makes it difficult to use VLMs for long-range planning.
On the other hand, scientists have developed robust, formal planners that can generate effective long-horizon plans for complex situations. However, these software systems can’t process visual inputs and require expert knowledge to encode a problem into language the solver can understand.
Fan and her team built an automatic planning system that takes the best of both methods. The system, called VLM-guided formal planning (VLMFP), utilizes two specialized VLMs that work together to turn visual planning problems into ready-to-use files for formal planning software.
The researchers first carefully trained a small model they call SimVLM to specialize in describing the scenario in an image using natural language and simulating a sequence of actions in that scenario. Then a much larger model, which they call GenVLM, uses the description from SimVLM to generate a set of initial files in a formal planning language known as the Planning Domain Definition Language (PDDL).
The files are ready to be fed into a classical PDDL solver, which computes a step-by-step plan to solve the task. GenVLM compares the results of the solver with those of the simulator and iteratively refines the PDDL files.
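In schematic terms, the loop works something like the sketch below. This is an illustration based on the description above, not the authors’ code; every function is a stub standing in for a real component, and the names and interfaces are assumptions.

```python
# A schematic, runnable sketch of the VLMFP loop. Each stub stands in for a
# real component: SimVLM (describe, simulate), GenVLM (write_pddl, refine),
# and a classical PDDL solver (solve).

def describe(image):
    return "a robot arm, a box on the floor, and an empty shelf"

def write_pddl(description, goal):
    return "(define (domain demo) ...)", "(define (problem demo-1) ...)"

def solve(domain, problem):
    return ["pick(box)", "place(box, shelf)"]   # step-by-step plan

def simulate(image, plan, goal):
    return True   # does stepping through the plan reach the goal?

def refine(domain, problem, plan):
    return domain, problem   # GenVLM patches the PDDL using the mismatch

def vlmfp(image, goal, max_iters=5):
    domain, problem = write_pddl(describe(image), goal)   # SimVLM -> GenVLM
    for _ in range(max_iters):
        plan = solve(domain, problem)                     # classical planner
        if simulate(image, plan, goal):                   # solver and simulator agree
            return plan
        domain, problem = refine(domain, problem, plan)   # iterative refinement
    return None

print(vlmfp(image=None, goal="box on shelf"))
```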
“The generator and simulator work together to be able to reach the exact same result, which is an action simulation that achieves the goal,” Hao says.
Because GenVLM is a large generative AI model, it has seen many examples of PDDL during training and learned how this formal language can solve a wide range of problems. This existing knowledge enables the model to generate accurate PDDL files.
A flexible approach
VLMFP generates two separate PDDL files. The first is a domain file that defines the environment, valid actions, and domain rules. It also produces a problem file that defines the initial states and the goal of a particular problem at hand.
“One advantage of PDDL is the domain file is the same for all instances in that environment. This makes our framework good at generalizing to unseen instances under the same domain,” Hao explains.
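For a concrete sense of the two files, here is a toy domain/problem pair expressed as Python strings. The pick-and-place domain itself is invented for this sketch and does not come from the paper; it only illustrates the split between a shared domain file and a per-instance problem file.

```python
# A hypothetical PDDL domain/problem pair of the general kind VLMFP generates.
# The domain file is shared by every problem in the environment; each problem
# file supplies the initial state and goal of one instance.

DOMAIN_PDDL = """
(define (domain shelf-world)
  (:requirements :strips)
  (:predicates (on-floor ?x) (on-shelf ?x) (holding ?x) (hand-empty))
  (:action pick
    :parameters (?x)
    :precondition (and (on-floor ?x) (hand-empty))
    :effect (and (holding ?x) (not (on-floor ?x)) (not (hand-empty))))
  (:action place
    :parameters (?x)
    :precondition (holding ?x)
    :effect (and (on-shelf ?x) (hand-empty) (not (holding ?x)))))
"""

PROBLEM_PDDL = """
(define (problem move-box)
  (:domain shelf-world)
  (:objects box)
  (:init (on-floor box) (hand-empty))
  (:goal (on-shelf box)))
"""
```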
To enable the system to generalize effectively, the researchers needed to carefully design just enough training data for SimVLM so the model learned to understand the problem and goal without memorizing patterns in the scenario. When tested, SimVLM successfully described the scenario, simulated actions, and detected if the goal was reached in about 85 percent of experiments.
Overall, the VLMFP framework achieved a success rate of about 60 percent on six 2D planning tasks and greater than 80 percent on two 3D tasks, including multirobot collaboration and robotic assembly. It also generated valid plans for more than 50 percent of scenarios it hadn’t seen before, far outpacing the baseline methods.
“Our framework can generalize when the rules change in different situations. This gives our system the flexibility to solve many types of visual-based planning problems,” Fan adds.
In the future, the researchers want to enable VLMFP to handle more complex scenarios and explore methods to identify and mitigate hallucinations by the VLMs.
“In the long term, generative AI models could act as agents and make use of the right tools to solve much more complicated problems. But what does it mean to have the right tools, and how do we incorporate those tools? There is still a long way to go, but by bringing visual-based planning into the picture, this work is an important piece of the puzzle,” Fan says.
This work was funded, in part, by the MIT-IBM Watson AI Lab.
2026 MIT Sloan Sports Analytics Conference shows why data make a difference
With time dwindling in the Olympic women’s ice hockey gold medal game on Feb. 19, players for Team USA and Team Canada lined up for a key faceoff in Canada’s end. Canada had a 1-0 lead. USA had 2:23 left, and an ace up their sleeve: analytics.
USA Coach John Wroblewski pulled the goaltender to gain a player advantage, and had forward Alex Carpenter take the faceoff. Statistics show that Carpenter is not only very good at winning faceoffs; she also wins a lot of them cleanly. That allows her team to quickly regain possession, without too many teammates nearby. Knowing that, Wroblewski directed the USA players to spread out, largely away from the faceoff circle, in position to circulate the puck as soon as they got it back.
Carpenter won the faceoff, and Team USA quickly started a passing move. Laila Edwards soon launched a shot that longtime star Hilary Knight deflected in for the crucial, game-tying goal with 2:04 left. Team USA then won in overtime. And data-driven decision-making had also won big; indeed, it helped change the Olympics.
“What it does for a coach, the other thing these analytics do, is … it allows you to move forward with this confidence level,” Wroblewski said on Saturday at the 20th annual MIT Sloan Sports Analytics Conference (SSAC), during a hockey analytics panel where he detailed his decision-making for that faceoff, and in the gold medal game generally.
Using the data, he added, lets coaches “limit the emotion” that might cloud their in-game decisions.
“By the time you get to that decision, you’re then allowed the freedom to step away from the decision, to allow the players to go earn their medal,” Wroblewski added.
You don’t usually find coaches divulging their tactical secrets just three weeks after a big game has been played. But then, this is the MIT Sloan conference, a trailblazing forum that has helped analytics ideas spread throughout sports. Coaches, players, and analysts know any data-driven discussion will find an interested audience.
“Analytics was massive for us going into the gold medal game,” Wroblewski said.
20 years on: From classrooms to convention halls
The 20th edition of SSAC was a strong one, with many substantive panel discussions and interviews; the annual research paper, hackathon, and case study contests; mentorship events and informal networking opportunities; and more. Over 2,500 people attended the two-day event, held at Boston’s Menino Conference and Exhibition Center (MCEC). The conference was founded in 2007 by Daryl Morey, now president of basketball operations for the NBA Philadelphia 76ers, and Jessica Gelman, now CEO of the Kraft Analytics Group.
The first three editions of the conference were held on the MIT campus. In 2010, it first moved to the MCEC (one of two regular convention-center sites it uses), and starting in 2011, the conference became a two-day event.
Today people attend for the panels, the career opportunities, and, in some cases, to make news. NBA Commissioner Adam Silver was on hand this year, engaging in an on-stage conversation with former WNBA great Sue Bird, publicly addressing some of the key issues facing his league, and drawing wide media coverage.
First, though, Silver reflected about attending the second edition of the conference on the MIT campus in 2008, when he was deputy commissioner.
“It was literally a classroom of 20 people we were talking to,” Silver recalled. “I think it was the beginning of the moment when people were taking sports as a discipline more seriously. … I give Jessica and Daryl a lot of credit [for that].”
Addressing tanking and gambling
A core part of Silver’s comments focused on two big issues in pro basketball: tanking and gambling. About eight NBA teams appear to be tanking this season, that is, losing games in order to increase their chances of getting a high draft pick.
“We are going to make substantial changes for next year,” Silver said, although he also added: “I am an incrementalist. I think we’ve got to be a little bit careful about how huge a change we make at once. I’m not ruling anything out. But I am paying attention to that.”
To be sure, tanking has long been a part of professional basketball, as Bird noted during the conversation.
“We did it in Seattle, to be honest,” Bird said. “Breanna Stewart was coming out of college. We were in a ‘rebuild.’”
Still, in this NBA season, tanking has become an epidemic, in “a little bit of a perfect storm,” as Silver put it on Friday. And almost every proposed solution seems to have drawbacks. Perhaps the simplest cure for tanking, actually, would be robust analytical studies showing that it is not a very effective team-building strategy. If that is what the numbers reveal, of course.
Meanwhile, multiple arrests of NBA players and coaches at the beginning of the season show further that sports gambling continues to present challenges to professional sports leagues.
“I personally think there should be more regulation now, not less,” Silver said on Friday, suggesting that federal rules would simplify things in the U.S., where 39 states allow sports gambling to some extent. He also said the NBA can continue to work on monitoring data to protect against gambling scandals.
“I think there are some large-platform companies that are looking at a business opportunity to come in and, in a much more sophisticated way, work as a detection service with the league,” Silver said.
Through it all, Silver said, the NBA will continue to be a data-driven operation. Have you watched a game with a long instant-replay review, and gotten a little impatient? Still, have you kept watching that game? So has almost everyone.
“For years people would tell us, ‘Don’t use instant replay, because you’ll turn fans off,’” Silver said. However, he added, “The data suggests, in terms of ratings and what servers tell us, you almost never lose a fan when you’re going to replay. Because they want to see the replay and they want to see what happened.”
The minnows got big
Sports analytics took root in baseball, with its discrete pitcher-hitter actions. Legendary MLB general manager Branch Rickey employed a statistician for the great Brooklyn Dodgers of the 1950s; the famous manager Earl Weaver thought analytically with the Baltimore Orioles in the 1970s. Baseball analyst Bill James made sports analytics a viable pursuit with his annual “Baseball Abstract” bestsellers in the 1980s, and Michael Lewis’ “Moneyball” popularized it.
But data can be applied to all sports — and sometimes is most valuable when only some teams are interested in it. Take soccer. In the English Premier League, three clubs have been heavily oriented around analytics over the last decade: Liverpool FC, Brighton FC, and Brentford FC. That has helped Liverpool win multiple titles, while Brighton and Brentford, smaller clubs, have startled many with their success.
Saturday at SSAC, Brentford’s majority owner Matthew Benham made one of his most visible public appearances, in an onstage interview with podcaster Roger Bennett. Benham first made money wagering on soccer, then invested in Brentford, his childhood club.
“The information we used in the early days was really, really rudimentary,” Benham said. In his account, his success building an analytics-based club has only partly been about the numbers.
“A lot of the success has just been in running things efficiently,” Benham said. He prefers to have management discussions that are an “exchange of views, rather than debate,” since the latter implies an interaction with a clear winner and loser. Instead, compiling independent-minded views from his executives is more important.
Brentford also uses “a combination of old-style scouting and data” for its player acquisition decisions, Benham said. Not every decision works out: Brentford could have signed current Arsenal FC star Eberechi Eze for a mere 4 million pounds in 2019, and passed; Crystal Palace FC acquired Eze, then realized a windfall when Arsenal purchased his services.
Still, pressed by Bennett to specify a little more about his analytical thinking, Benham implied that strikers are valuable not only for their finishing skills, but for consistently getting open for shots on goal. Fans tend to focus too much on a player’s misses, rather than how many chances are created by their off-ball work.
“Getting in position is way, way more informative than finishing,” Benham said.
A similar insight seems to have guided Liverpool’s thinking. As it happens, a Friday panel at SSAC featured Ian Graham, who ran Liverpool’s analytics operations from 2012 to 2023 and weighed in on a number of subjects. Among other things, Graham noted, teams are too cautious when tied late in a match: soccer grants three points for a win, one for a draw, and zero for a loss, so from a tied position the reward for winning is twice as great as the penalty for losing.
“Teams don’t go for it enough,” Graham said. “Teams think a draw is an okay result.”
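A toy expected-points calculation makes the asymmetry concrete. The probabilities below are invented for illustration; this is not Liverpool’s model.

```python
# Three points for a win, one for a draw, zero for a loss: from a tie, the
# upside of winning (+2 points) is twice the downside of losing (-1).

def expected_points(p_win, p_draw, p_loss):
    return 3 * p_win + 1 * p_draw + 0 * p_loss

settle = expected_points(0.10, 0.80, 0.10)   # sit back and protect the draw
push   = expected_points(0.30, 0.40, 0.30)   # go for it: more wins AND more losses

print(f"settle for the draw: {settle:.2f} expected points")   # 1.10
print(f"push for the win:    {push:.2f} expected points")     # 1.30
```

Even when pushing for a winner adds exactly as much loss probability as win probability, the expected points rise, which is Graham’s point.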
The limits of knowledge
Sports, of course, are ultimately played by imperfect, injury-prone, and sometimes exhausted athletes. One consistent lesson from the MIT Sloan conference involves the limits of data and plans.
“We think the data is giving us an answer, when actually it’s giving us some information, and we still have to make a choice,” said Ariana Andonian, vice president of player personnel for the Philadelphia 76ers, during a basketball panel on Saturday.
Asked about the promise of artificial intelligence for sports analytics, Sonia Raman, head coach of the WNBA’s Seattle Storm, noted that its insights might always be limited by circumstances.
“It’s not like you can just get an AI report in the middle of the game that says, ‘Get some shooting in,’” said Raman, who, prior to coaching in the WNBA and NBA, served for 12 years as head coach of the MIT women’s basketball team.
“You can have a great plan, but if it’s poorly executed, it’s way worse than a poor plan that’s well executed,” added Steven Adams, a center for the NBA’s Houston Rockets (who is currently not playing due to injury), during the same panel.
And yet, in some games and matches, the analytics do work, the plans do come to fruition, and the numbers do make a difference. When that happens, as John Wroblewski can now attest, the results are golden.
3 Questions: Building predictive models to characterize tumor progression
Just as Darwin’s finches evolved in response to natural selection in order to endure, the cells that make up a cancerous tumor similarly counter selective pressures in order to survive, evolve, and spread. Tumors are, in fact, complex sets of cells with their own unique structure and ability to change.
Today, artificial intelligence and machine learning tools offer an unparalleled opportunity to illuminate the generalizable rules governing tumor progression on the genetic, epigenetic, metabolic, and microenvironmental levels.
Matthew G. Jones, an assistant professor in the MIT Department of Biology, the Koch Institute for Integrative Cancer Research, and the Institute for Medical Engineering and Science, hopes to use computational approaches to build predictive models — to play a game of chess with cancer, making sense of a tumor’s ability to evolve and resist treatment with the ultimate goal of improving patient outcomes. In this interview, he describes his current work.
Q: What aspect of tumor progression are you working to explore and characterize?
A: A very common story with cancer is that patients will respond to a therapy at first, and then eventually that treatment will stop working. The reason this largely happens is that tumors have an incredible, and very challenging, ability to evolve: the ability to change their genetic makeup, protein signaling composition, and cellular dynamics. The tumor as a system also evolves at a structural level. Oftentimes, the reason why a patient succumbs to a tumor is because either the tumor has evolved to a state we can no longer control, or it evolves in an unpredictable manner.
In many ways, cancers can be thought of as, on the one hand, incredibly dysregulated and disorganized, and on the other hand, as having their own internal logic, which is constantly changing. The central thesis of my lab is that tumors follow stereotypical patterns in space and time, and we’re hoping to use computation and experimental technology to decode the molecular processes underlying these transformations.
We’re focused on one specific way tumors are evolving through a form of DNA amplification called extrachromosomal DNA. Excised from the chromosome, these ecDNAs are circularized and exist as their own separate pool of DNA particles in the nucleus.
First discovered in the 1960s, ecDNA was thought to be a rare event in cancer. However, as researchers began applying next-generation sequencing to large patient cohorts in the 2010s, it became clear not only that these ecDNA amplifications were helping tumors adapt faster to stresses and therapies, but that they were far more prevalent than initially thought.
We now know these ecDNA amplifications appear in about 25 percent of cancers, including the most aggressive ones: brain, lung, and ovarian cancers. We have found that, for a variety of reasons, ecDNA amplifications are able to change the rule book by which tumors evolve, allowing them to accelerate toward more aggressive disease in very surprising ways.
Q: How are you using machine learning and artificial intelligence to study ecDNA amplifications and tumor evolution?
A: There’s a mandate to translate what I’m doing in the lab to improve patients’ lives. I want to start with patient data to discover how various evolutionary pressures are driving disease and the mutations we observe.
One of the tools we use to study tumor evolution is single-cell lineage tracing technologies. Broadly, they allow us to study the lineages of individual cells. When we sample a particular cell, not only do we know what that cell looks like, but we can (ideally) pinpoint exactly when aggressive mutations appeared in the tumor’s history. That evolutionary history gives us a way of studying these dynamic processes that we otherwise wouldn’t be able to observe in real time, and helps us make sense of how we might be able to intercept that evolution.
I hope we’re going to get better at stratifying patients who will respond to certain drugs, to anticipate and overcome drug resistance, and to identify new therapeutic targets.
Q: What excited you about joining the MIT community?
A: One of the things that I was really attracted to was the integration of excellence in both engineering and biological sciences. At the Koch Institute, every floor is structured to promote this interface between engineers and basic scientists, and beyond campus, we can connect with all the biomedical research enterprises in the greater Boston area.
Another thing that drew me to MIT was the fact that it places such a strong emphasis on education, training, and investing in student success. I’m a personal believer that what distinguishes academic research from industry research is that academic research is fundamentally a service job, in that we are training the next generation of scientists.
It was always a mission of mine to bring excellence to both computational and experimental technology disciplines. The types of trainees I’m hoping to recruit are those who are eager to collaborate and solve big problems that require both disciplines. The KI [Koch Institute] is uniquely set up for this type of hybrid lab: my dry lab is right next to my wet lab, and it’s a source of collaboration and connection, and that reflects the KI’s general vision.
How Joseph Paradiso’s sensing innovations bridge the arts, medicine, and ecology
Joseph Paradiso thinks that the most engaging research questions usually span disciplines.
Paradiso was trained as a physicist and completed his PhD in experimental high-energy physics at MIT in 1981. His father was a photographer and filmmaker working at MIT, MIT Lincoln Laboratory, and the MITRE Corporation, so he grew up in a house where artists, scientists, and engineers regularly gathered and interesting music was always playing.
That mix of influences led him to the MIT Media Lab, where he is the Alexander W. Dreyfoos Professor, academic head of the Program in Media Arts and Sciences, and director of the Responsive Environments research group.
At the Media Lab, Paradiso conducts research that engages sensing of different kinds and applies it across diverse and often extreme applications. He works on developing technologies that can efficiently capture and process multiple sensing modalities, and leverages this capability in application domains like the internet of things, medicine, environmental sensing, space exploration, and artistic expression. These efforts use that information to help people better understand the world, express themselves, and connect with one another.
Early in his career, Paradiso helped pioneer the field of wireless wearable sensing. He built many systems with multiple embedded sensors that could send information from the human body in real time. One of his early flagship projects in this area, fielded in 1997 for real-time augmented dance performance, was a pair of shoes with 16 sensors embedded in each shoe, allowing wearers’ movements to directly generate music through algorithmic mapping. Paradiso’s research at the Media Lab has consistently focused on sensing and on using that information in new ways.
“When I would list all the sensors … people would laugh. But now, my watch is measuring most of these things,” Paradiso notes. “The world has moved.”
That progression from early prototypes to everyday technology helped lay the groundwork for devices people now use regularly to track activity, health, and performance.
As sensing systems improved, Paradiso expanded his work from individuals to groups. He developed platforms that allowed dance ensembles to create music together through their collective motion. Achieving this required Paradiso and his team to develop new ways for compact wearable devices to communicate wirelessly at high speed, as well as new approaches to real-time data processing and extending the range of available microelectromechanical systems (MEMS) sensors.
Those same sensing platforms were later adapted for sports medicine in 2006. Working with doctors who support elite athletes, his array of compact, wearable sensors captured large amounts of high-speed motion data from multiple points on the body, aimed at helping clinicians assess injury risk, performance, and recovery on the go, without the complex equipment typically associated with biomechanical monitoring and clinical settings.
More recently, Paradiso’s research has extended beyond humans. Through collaborations with National Geographic Explorers, his team has deployed sensors in remote environments to study animal behavior, including low-power compact wearable devices to detect the environmental conditions around the animal as well as track them (currently on lions and hyenas in Botswana and goats in Chile), and acoustic sensors with onboard AI to detect and monitor populations of endangered honeybees in Patagonia. This work provides new ways to understand how ecosystems function and how the planet is changing.
Paradiso was named an IEEE Fellow in January, recognizing his achievement in wireless wearable sensing and mobile energy harvesting. This is the highest grade of membership in IEEE, the world’s leading professional association dedicated to advancing technology for the benefit of humanity.
Across art, health, and the natural world, Paradiso’s work reflects how foundational research at MIT can seed technologies that ripple outward over time, shaping new applications and opening new fields. As advances in wearable technologies drive the rush toward the ever-more-connected human, a persistent existential question lurks.
“Where do I stop, versus others begin?” Paradiso asks.
For him, the aim is not novelty for its own sake, but amplification: using technology to help people become more perceptive, better connected, and more aware of their place in a larger system.
MIT School of Engineering faculty receive awards in fall 2025
Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in fall 2025:
Hal Abelson, the Class of 1922 Professor in the Department of Electrical Engineering and Computer Science, received the 2025 Lifetime Achievement Award for Excellence from Open Education Global. The award honors his foundational impact on open education, Creative Commons, and open knowledge movements.
Faez Ahmed, the Henry L. Doherty Career Development Professor in Ocean Utilization in the Department of Mechanical Engineering, received an Amazon Research Award for his project “AutoDA‑Sim: A Multi‑Agent Framework for Safe, Aesthetic, and Aerodynamic Vehicle Design.” Amazon Research Awards provide unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines.
Pulkit Agrawal, an associate professor in the Department of Electrical Engineering and Computer Science, received the 2025 IROS Toshio Fukuda Young Professional Award for contributions to robot learning, policy learning, agile locomotion, and dexterous manipulation. The award recognizes outstanding contributions of an individual of the IROS community who has pioneered activities in robotics and intelligent systems.
Ahmad Bahai, a professor of the practice in the Department of Electrical Engineering and Computer Science, was elected to the 2025 class of Fellows of the National Academy of Inventors for contribution to innovation in new semiconductor devices with extensive applications in clinical grade personal sensors for a variety of biomarkers. The honor recognizes inventors whose patented work has made a meaningful global impact.
Yufeng (Kevin) Chen, an associate professor in the Department of Electrical Engineering and Computer Science, received the 2025 IROS Toshio Fukuda Young Professional Award for contributions to insect‑scale multimodal robots and soft‑actuated aerial systems. The award recognizes outstanding contributions of an individual of the IROS community who has pioneered activities in robotics and intelligent systems.
Angela Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering, received the 2025 Sato Memorial International Award from the Pharmaceutical Society of Japan, recognizing advancements in pharmaceutical sciences and U.S.–Japan scientific collaboration.
Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Medicine for pioneering digital health technology that enables noninvasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. Election to the academy is considered one of the highest honors in the fields of health and medicine, and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.
Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Professor in the Department of Civil and Environmental Engineering, was selected as a 2025 Packard Fellow for Science and Engineering. The Packard Foundation established the Packard Fellowships for Science and Engineering to allow the nation’s most promising early-career scientists and engineers flexible funding to take risks and explore new frontiers in their fields of study.
Muriel Médard, the NEC Professor of Software Science and Engineering in the Department of Electrical Engineering and Computer Science, received the 2026 IEEE Richard W. Hamming Medal for contributions to coding for reliable communications and networking. Recognized for breakthroughs in network coding and information theory, Médard’s innovations improve the reliability of data transmission in applications such as streaming video, wireless networks, and satellite communications. The award is given for exceptional contributions to information sciences, systems and technology.
Tess Smidt, an associate professor in the Department of Electrical Engineering and Computer Science, was selected as a 2025 AI2050 Fellow by Schmidt Sciences for her project, “Hierarchical Representations of Complex Physical Systems with Euclidean Neural Networks.” The program supports research that aims to help AI benefit humanity by mid‑century.
MIT undergraduates help US high schoolers tackle calculus
This year in a rural school district in southeastern Montana, one high school student is taking calculus. For many people, calculus is daunting enough, even when teachers are used to offering it and peers are around to help. Studying it solo can be even harder. Yet this lone student has an unusual source of support: weekly tutoring directly from an MIT undergraduate, by Zoom, a long-distance but helpful way to stay on track.
It's part of a new program called the MIT4America Calculus Project, launched from the Institute last summer, in which MIT undergraduates and alumni work with school districts across the U.S., from Montana to Texas to New York, to tutor high school students. The logic is compelling: MIT students are highly proficient at calculus, which is all but a requirement for admission and success at the Institute. The new civic-minded outreach program lets them share their knowledge and skills, getting high schoolers ready for further studies and even jobs, especially in STEM fields.
“Calculus is a gateway for many students into STEM higher education and careers,” says MIT Professor Eric Klopfer, a co-director of the MIT4America Calculus Project. “We can help more students, in more places, fulfill requirements and get into great universities across the country, whether MIT or others, and then into STEM careers. We want to make sure they have the skills to do that.”
At this point, the project is working closely with 14 school districts across the U.S., deploying 30 current MIT undergraduates and seven alumni as tutors. The weekly sessions are carefully coordinated with school administrators and teachers, and the MIT tutors have all received training. The program started with an in-person summer calculus camp in 2025; by next summer, the goal is to be collaborating with about 20 school districts.
“We want it to have a lasting impact,” says Claudia Urrea, an education scholar and co-director of the MIT4America Calculus Project. “It’s not just about students passing an exam, but having tutors who look like what the students want to be in the future, who are mentors, have conversations, and make sure the high school students are learning.”
Klopfer and Urrea bring substantial experience to the project. Klopfer is a professor and director of the Scheller Teacher Education Program and the Education Arcade at MIT; Urrea is executive director for the PreK-12 Initiative at MIT Open Learning.
The MIT4America Calculus Project is supported through a gift from the Siegel Family Endowment and was developed as a project in consultation with David Siegel SM ’86, PhD ’91, a computer scientist and entrepreneur who is chairman of the firm Two Sigma.
“David Siegel came to us with two powerful questions: How can we spread the educational impact of MIT beyond our walls? And how can we open doors to STEM careers for U.S. high school students who don’t have access to calculus?” says MIT President Sally Kornbluth.
She adds: “The MIT4America Calculus Project answers those questions in a perfectly MIT way: Reflecting the Institute’s longstanding commitment to national service, the MIT4America Calculus Project supplies an innovative answer to a hard practical problem, and it taps the uncommon skill of the people of MIT to create opportunity for others. We’re enormously grateful to David for his inspiration and guidance, and to the Siegel Family Endowment for the financial support that brought this idea to life.”
The U.S. has more than 13,000 school districts, and about half of them offer calculus classes. The MIT effort aims to work with districts that already offer calculus but are striving to add educational support for it, often while facing funding constraints or other limitations.
In contrast to the one-student calculus situation in Montana, the project is also working with a 5,000-student district in Texas, south of Dallas, where about 60 high school students take calculus; currently five Institute undergraduates are tutoring 15 students from the district’s schools.
“Other organizations are involved in efforts like this, but I think MIT brings some unique things to it,” Klopfer says. “I think involving our undergraduates in this is an awesome contribution. Our students really do come from all over the place, and are sometimes connecting back to their home states and communities, and that makes a difference on both sides.”
He adds: “I see benefits for our students, too. They develop good ways of communicating, working with other people and building skills. They can gain a lot of great experience.”
In addition to the in-person summer calculus camp, which is expected to continue, and the weekly video tutoring, the MIT4America Calculus Project is developing online tools to help guide high school students as well. Still, Urrea emphasizes, the project is built around “the importance of people. A community of support is very important, to have connections that build over time. The human aspect of the program is irreplaceable.”
The MIT tutors must complete rigorous training sessions that cover pedagogy and other aspects of working with high school students, and they know they are making a substantial commitment of time and effort.
It has been worth it, as teachers say their high school students have been responding very well to the MIT tutors.
“For students to be able to see themselves in their tutors is a really cool thing,” says Shilpa Agrawal ’15, director of computer science and an AP calculus AB teacher at Comp Sci High in the Bronx, New York, where 15 students are participating in the project.
“It’s led to a lot of success for my students,” adds Agrawal, who majored in computer science at MIT. She is part of the national network of MIT-connected teachers who have been helping the program grow organically, having reached out to Jenny Gardony, manager of the MIT4America Calculus Project.
Gardony, who is also the math project manager in MIT’s Scheller Teacher Education program, has been receiving enthusiastic emails from teachers in other participating districts since the project started.
“I have to start by saying thank you,” one teacher wrote to Gardony, adding that one student “was so excited in class today. The session she had with you made her so confident. She’s always nervous, but today she was smiling and helping others, and that was 100 percent because of you.”
Gardony adds: “The fact that a busy teacher takes the time to send that email, I’m touched they would do that.”
Understanding how “marine snow” acts as a carbon sink
In some parts of the deep ocean, it can look like it’s snowing. This “marine snow” is the dust and detritus that organisms slough off as they die and decompose. Marine snow can fall several kilometers to the deepest parts of the ocean, where the particles are buried in the seafloor for millennia.
Now, researchers at MIT and their collaborators have found that as marine snow falls, tiny hitchhikers may limit how deep the particles can sink before dissolving away. The team shows that when bacteria hitch a ride on marine snow particles, the microbes can eat away at calcium carbonate, which is an essential ballast that helps particles sink.
The findings, which appear this week in the Proceedings of the National Academy of Sciences, could explain how calcium carbonate dissolves in shallow layers of the ocean, where scientists had assumed it should remain intact. The results could also change scientists’ understanding of how quickly the ocean can sequester carbon from the atmosphere.
Marine snow is a main vehicle by which the ocean stores carbon. At the ocean’s surface, phytoplankton absorb carbon dioxide from the atmosphere and convert the gas into other forms of carbon, including calcium carbonate — the same stuff that’s found in shells and corals. When they die, bits of phytoplankton drift down through the ocean as marine snow, carrying the carbon with them. If the particles make it to the deep ocean, the carbon they carry can be buried and locked away for hundreds to thousands of years.
But the new study suggests bacteria may be working against the ocean’s ability to sequester carbon. By eroding the particles’ calcium carbonate, bacteria can significantly slow the sinking of marine snow. The longer the particles linger, the more likely they are to be respired, releasing carbon dioxide into the shallow ocean, and possibly back into the atmosphere.
“What we’ve shown is that carbon may not sink as deep or as fast as one may expect,” says study co-author Andrew Babbin, an associate professor in the Department of Earth, Atmospheric and Planetary Sciences and a mission director at the Climate Project at MIT. “As humanity tries to design our way out of the problem of having so much CO2 in the atmosphere, we have to take into account these natural microbial mechanisms and feedbacks.”
The study’s primary author is Benedict Borer, a former MIT postdoc who is now an assistant professor of marine and coastal sciences at the Rutgers School of Environmental and Biological Sciences; co-authors include Adam Subhas and Matthew Hayden at the Woods Hole Oceanographic Institution and Ryan Woosley, a principal research scientist at MIT’s Center for Sustainability Science and Strategy.
Losing weight
Marine snow acts as the ocean’s main “biological pump,” the process by which the ocean pulls carbon from the surface down into the deep ocean. Scientists estimate that marine snow is responsible for drawing down billions of tons of carbon each year. Marine snow’s ability to sink comes mainly from minerals such as calcium carbonate embedded within the particles. The mineral is a dense ballast that weighs down the particle. The more calcium carbonate a particle has, the faster it sinks.
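A back-of-the-envelope calculation, not taken from the study, illustrates why that ballast matters: under Stokes’ law, a small particle’s settling speed scales with the density difference between the particle and seawater, so a denser, carbonate-rich particle sinks faster. The numbers below are order-of-magnitude guesses for a roughly 100-micron particle.

```python
# Stokes settling speed: v = 2 * (rho_particle - rho_seawater) * g * r**2 / (9 * mu)

g = 9.81          # gravity, m/s^2
mu = 1.4e-3       # seawater dynamic viscosity, Pa*s
rho_sw = 1027.0   # seawater density, kg/m^3
r = 100e-6        # particle radius, m

def stokes_speed(rho_particle):
    """Settling speed (m/s) of a small sphere of the given density."""
    return 2 * (rho_particle - rho_sw) * g * r**2 / (9 * mu)

light = stokes_speed(1060.0)   # mostly organic matter, little ballast
heavy = stokes_speed(1300.0)   # richer in calcium carbonate ballast

print(f"lightly ballasted: {light * 86400:.0f} m/day")   # ~44 m/day
print(f"heavily ballasted: {heavy * 86400:.0f} m/day")   # ~370 m/day
```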
Scientists had assumed, based on thermodynamics, that calcium carbonate should not dissolve within the ocean’s upper layers, given the general temperature and pH conditions in the surface ocean. Any calcium carbonate that is bound up in marine snow should then safely sink to depths greater than 1,000 meters without dissolving along the way.
But oceanographers have long observed signs of dissolved calcium carbonate in the upper layers of the ocean, suggesting that something other than the ocean’s macroscale conditions was dissolving the mineral and slowing down the ocean’s biological pump.
And indeed, the MIT team has found that what is dissolving calcium carbonate in shallow waters is a microscale process that occurs within the immediate environment of an individual particle.
“Most oceanographers think about the macroscale, and in this instance what’s happening in microscopic particles is what is actually controlling bulk seawater chemistry,” Borer says. “Consequences abound for the ocean’s carbon dioxide sequestration capacity.”
A sinking sweet spot
In their new study, the researchers set up an experiment to simulate a sinking particle of marine snow and its interactions at the microscale. The team synthesized particles similar to marine snow from varying concentrations of calcium carbonate and bacteria — organisms that are often found feasting on the particles in the ocean.
“The ocean is a fairly dilute medium with respect to organic matter,” Babbin says. “So organisms like bacteria have to search for food. And particles of marine snow are like cheeseburgers for bacteria.”
The team designed a small microfluidic chip to contain the particles, and flowed seawater through the chip at various rates to simulate different sinking speeds in the ocean. Their experiments revealed that whenever particles hosted any bacteria, they also rapidly lost some calcium carbonate, which dissolved into the surrounding seawater. As bacteria feed on the particles’ organic material, the microbes excrete acidic waste products that act to dissolve the particles’ inorganic, ballasting calcium carbonate.
The researchers also found that the amount of calcium carbonate that dissolves depends on how fast the particles sink. They flowed seawater around the particles at slow, intermediate, and fast speeds and found that both slow and fast sinking limit the amount of calcium carbonate that’s dissolved. With slow sinking, particles don’t receive as much oxygen from their surroundings, which essentially suffocates any hitchhiking bacteria. When particles sink quickly, bacteria may be sufficiently oxygenated, but any waste products that they produce can be easily flushed away before they can dissolve the particles’ calcium carbonate.
At intermediate speeds, there is a sweet spot: Bacteria are sufficiently oxygenated and can also build up enough waste, enabling the microbes to efficiently dissolve calcium carbonate.
Overall, the work shows that bacteria can have a significant effect on marine snow’s ability to sink and sequester carbon in the deep ocean. Bacteria can be found everywhere, and particularly in the shallower ocean regions. Even if macroscale conditions in these upper layers should not dissolve calcium carbonate, the study finds bacteria working at the microscale most likely do.
The findings could explain oceanographers’ observations of dissolved calcium carbonate in shallow ocean regions. They also illustrate that bacteria and other microbes may be working against the ocean’s natural ability to sequester carbon, by dissolving marine snow’s ballast and slowing its descent into the deep ocean. As humans consider climate solutions that involve enhancing the ocean’s biological pump, the researchers emphasize that bacteria’s role must be taken into account.
“Insights from this work are vital to predict how ecosystems will respond to marine carbon dioxide removal attempts, and overall how the oceans will change in response to future climate scenarios,” says Borer, who carried out the study’s experiments as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.
This work was supported, in part, by the Simons Foundation, the National Science Foundation, and the Climate Project at MIT.
Neurons receive precisely tailored teaching signals as we learn
When we learn a new skill, the brain has to decide — cell by cell — what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.
The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A long-standing question has been whether the brain also uses that kind of individualized feedback. In an open-access study published in the Feb. 25 issue of the journal Nature, MIT researchers report evidence that it does.
A research team led by Mark Harnett, a McGovern Institute for Brain Research investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.
The changing brain
Our brains are constantly changing as we interact with the world, modifying their circuitry as we learn and adapt. “We know a lot from 50 years of studies that there are many ways to change the strength of connections between neurons,” Harnett says. “What the field really lacks is a way of understanding how those changes are orchestrated to actually produce efficient learning.”
Some actions — and the neural connections that enable them — are reinforced with the release of neuromodulators like dopamine or norepinephrine in the brain. But those signals are broadcast to large groups of neurons, without discriminating between cells’ individual contributions to a failure or a success. “Reinforcement learning via neuromodulators works, but it’s inefficient, because all the neurons and all the synapses basically get only one signal,” Harnett says.
Machine learning uses an alternative, and extremely powerful, way to learn from mistakes. Using a method called backpropagation, artificial neural networks compute an error signal and use it to adjust their individual connections. They do this over and over, learning from experience how to fine-tune their networks for success. “It works really well and it’s computationally very effective,” Harnett says.
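As a rough illustration of the idea (a generic sketch, not the study’s code), here is backpropagation on a tiny two-layer network: the output error is propagated backward so that every individual connection receives its own tailored correction. The sizes, data, and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

x = rng.normal(size=(1, 4))    # one input example
y = np.array([[1.0]])          # its target output

for step in range(200):
    h = np.tanh(x @ W1)                    # forward pass: hidden activity
    y_hat = h @ W2                         # forward pass: network output

    err = y_hat - y                        # error signal at the output
    grad_W2 = h.T @ err                    # per-connection correction, layer 2
    delta_h = (err @ W2.T) * (1 - h**2)    # error propagated back through tanh
    grad_W1 = x.T @ delta_h                # per-connection correction, layer 1

    W2 -= 0.1 * grad_W2                    # adjust each connection individually
    W1 -= 0.1 * grad_W1

print(f"final squared error: {float(err**2):.6f}")
```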
It seemed likely that brains might use similar error signals for learning. But neuroscientists were skeptical that brains would have the precision to send tailored signals to individual neurons, due to the constraints imposed by using living cells and circuits instead of software and equations. A major problem for testing this idea was how to find the signals that provide personalized instructions to neurons, which are called vectorized instructive signals. The challenge, explains Valerio Francioni, first author of the Nature paper and a former postdoc in Harnett’s lab, is that scientists don’t know how individual neurons contribute to specific behaviors.
“If I was recording your brain activity while you were learning to play piano,” Francioni explains, “I would learn that there is a correlation between the changes happening in your brain and you learning piano. But if you asked me to make you a better piano player by manipulating your brain activity, I would not be able to do that, because we don’t know how the activity of individual neurons maps to that ultimate performance.”
Without knowing which neurons need to become more active and which ones should be reined in, it is impossible to look for signals directing those changes.
Understanding neuron function
To get around this problem, Harnett’s team developed a brain-computer interface task to directly link neural activity and reward outcome — akin to linking the keys of the piano directly to the activity of single neurons. To succeed at the task, certain neurons needed to increase their activity, whereas others were required to decrease their activity.
They set up a BCI to directly link activity in those neurons — just eight to 10 of the millions of neurons in a mouse’s brain — to a visual readout, providing sensory feedback to the mice about their performance. Success was accompanied by delivery of a sugary reward.
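A hedged sketch of the kind of readout rule such a task could use appears below. The neuron counts, threshold, and linear decoder are illustrative assumptions, not the study’s actual mapping.

```python
import numpy as np

# A cursor driven by a handful of target neurons, some required to ramp
# activity up and some to ramp it down; crossing the threshold earns a reward.

rng = np.random.default_rng(0)

up_idx = np.arange(0, 4)       # neurons rewarded for increasing activity
down_idx = np.arange(4, 8)     # neurons rewarded for decreasing activity
threshold = 0.5                # cursor value that triggers the reward

activity = rng.random(8)       # stand-in for one frame of measured activity
cursor = activity[up_idx].mean() - activity[down_idx].mean()

if cursor > threshold:
    print("reward delivered")  # in the experiment, a sugary reward
else:
    print("no reward")
```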
“Now if you ask me, ‘How does the mouse get more rewards? Which neuron do you have to activate and which neuron do you have to inhibit?’ I know exactly what the answer to that question is,” says Francioni, whose work was supported by a Y. Eva Tan Fellowship from the Yang Tan Collective at MIT.
The scientists didn’t know the exact function of the particular neurons they linked to the BCI, but the cells were active enough that mice received occasional rewards whenever the signals happened to be right. Within a week, mice learned to switch on the right neurons while leaving the other set of neurons inactive, earning themselves more rewards.
Francioni monitored the target neurons daily during this learning process using a powerful microscope to visualize fluorescent indicators of neural activity. He zeroed in on the neurons’ branching dendrites, where the appropriate feedback signals have long been suspected to arrive. At the same time, he tracked activity in the parent cell bodies of those neurons. The team used these data to examine the relationship between signals received at a neuron’s dendrites and its activity, as well as how these changed when mice were rewarded for activating the right neurons or when they failed at their task.
Vectorized neural signals
They concluded that the two groups of neurons whose activity controlled the BCI in opposite ways also received opposing error signals at their dendrites as the mice learned. Some were told to ramp up their activity during the task, while others were instructed to dial it down. What’s more, when the team manipulated the dendrites to inhibit these instructive signals, mice failed to learn the task. “This is the first biological evidence that vectorized [neuron-specific] signal-based instructive learning is taking place in the cortex,” Harnett says.
The discovery of vectorized signals in the brain — and the team’s ability to find them — should promote more back-and-forth between neuroscientists and machine learning researchers, says postdoc Vincent Tang. “It provides further incentive for the machine learning community to keep developing models and proposing new hypotheses along this direction,” he says. “Then we can come back and test them.”
The researchers say they are just as excited about applying their approach to future experiments as they are about their current discovery.
“Machine learning offers a robust, mathematically tractable way to really study learning. The fact that we can now translate at least some of this directly into the brain is very powerful,” Francioni says.
Harnett says the approach opens new opportunities to investigate possible parallels between the brain and machine learning. “Now we can go after figuring out, how does cortex learn? How do other brain regions learn? How similar or how different is it to this particular algorithm? Can we figure out how to build better, more brain-inspired models from what we learn from the biology?” he says. “This feels like a really big new beginning.”
Improving AI models’ ability to explain their predictions
In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.
Concept bottleneck modeling is one method that enables artificial intelligence systems to explain their decision-making process. This approach forces a deep-learning model to use a set of concepts that humans can understand to make a prediction. In new research, MIT computer scientists developed a method that coaxes the model to achieve better accuracy and clearer, more concise explanations.
The concepts the model uses are usually defined in advance by human experts. For instance, a clinician could suggest the use of concepts like “clustered brown dots” and “variegated pigmentation” to predict that a medical image shows melanoma.
But previously defined concepts can be irrelevant or lack sufficient detail for a specific task, reducing the model’s accuracy. The new method instead extracts concepts the model already learned while being trained for that particular task, and forces the model to use those concepts, producing better explanations than standard concept bottleneck models.
The approach uses a pair of specialized machine-learning models that automatically extract knowledge from a target model and translate it into plain-language concepts. In the end, the technique can convert any pretrained computer vision model into one that uses concepts to explain its reasoning.
“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” says lead author Antonio De Santis, a graduate student at Polytechnic University of Milan who completed this research while a visiting graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.
He is joined on a paper about the work by Schrasing Tong SM ’20, PhD ’26; Marco Brambilla, professor of computer science and engineering at Polytechnic University of Milan; and senior author Lalana Kagal, a principal research scientist in CSAIL. The research will be presented at the International Conference on Learning Representations.
Building a better bottleneck
Concept bottleneck models (CBMs) are a popular approach for improving AI explainability. These techniques add an intermediate step by forcing a computer vision model to predict the concepts present in an image, then use those concepts to make a final prediction.
This intermediate step, or “bottleneck,” helps users understand the model’s reasoning.
For example, a model that identifies bird species could select concepts like “yellow legs” and “blue wings” before predicting a barn swallow.
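Structurally, a concept bottleneck model is simple. The following PyTorch sketch shows the idea; the feature dimension, concept count, and class count are chosen arbitrarily and are not from the paper.

    import torch
    import torch.nn as nn

    class ConceptBottleneck(nn.Module):
        """Image features -> concept scores -> class label (minimal sketch)."""

        def __init__(self, backbone, feat_dim=512, n_concepts=64, n_classes=200):
            super().__init__()
            self.backbone = backbone                           # any pretrained feature extractor
            self.concept_head = nn.Linear(feat_dim, n_concepts)
            self.classifier = nn.Linear(n_concepts, n_classes)

        def forward(self, x):
            features = self.backbone(x)
            # Each score maps to a named, human-readable concept, e.g. "yellow legs".
            concepts = torch.sigmoid(self.concept_head(features))
            # The final prediction uses only the concept scores -- that is the bottleneck.
            return self.classifier(concepts), concepts

Because the classifier sees nothing but the concept scores, a user can read off which concepts drove any given prediction.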
But because these concepts are often generated in advance by humans or large language models (LLMs), they might not fit the specific task. In addition, even when given a set of predefined concepts, the model sometimes draws on undesirable learned information anyway, a problem known as information leakage.
“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis explains.
The MIT researchers had a different idea: Since the model has been trained on a vast amount of data, it may have learned the concepts needed to generate accurate predictions for the particular task at hand. They sought to build a CBM by extracting this existing knowledge and converting it into text a human can understand.
In the first step of their method, a specialized deep-learning model called a sparse autoencoder selectively takes the most relevant features the model learned and reconstructs them into a handful of concepts. Then, a multimodal LLM describes each concept in plain language.
This multimodal LLM also annotates images in the dataset by identifying which concepts are present and absent in each image. The researchers use this annotated dataset to train a concept bottleneck module to recognize the concepts.
They incorporate this module into the target model, forcing it to make predictions using only the set of learned concepts the researchers extracted.
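The article does not detail the extraction architecture, but a generic sparse autoencoder over frozen backbone features, in the spirit of that first step, might look like the sketch below; the dimensions and penalty weight are hypothetical. Each active code would then be passed to the multimodal LLM for a plain-language description.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseAutoencoder(nn.Module):
        """Re-express frozen features as a small set of candidate concepts."""

        def __init__(self, feat_dim=512, n_concepts=64):
            super().__init__()
            self.encoder = nn.Linear(feat_dim, n_concepts)
            self.decoder = nn.Linear(n_concepts, feat_dim)

        def forward(self, features):
            codes = F.relu(self.encoder(features))  # sparse, nonnegative activations
            return self.decoder(codes), codes

    def sae_loss(recon, features, codes, l1_weight=1e-3):
        # Reconstruction ties the codes to what the target model actually learned;
        # the L1 penalty keeps only a handful of candidates active per image.
        return F.mse_loss(recon, features) + l1_weight * codes.abs().mean()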
Controlling the concepts
They overcame many challenges as they developed this method, from ensuring the LLM annotated concepts correctly to determining whether the sparse autoencoder had identified human-understandable concepts.
To prevent the model from using unknown or unwanted concepts, they restrict it to use only five concepts for each prediction. This also forces the model to choose the most relevant concepts and makes the explanations more understandable.
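That restriction amounts to a top-k selection over the concept scores. A minimal sketch, with the tensor shapes hypothetical:

    import torch

    def keep_top_k(concept_scores, k=5):
        """Zero out all but the k strongest concept activations per image."""
        topk = concept_scores.topk(k, dim=-1)
        mask = torch.zeros_like(concept_scores)
        mask.scatter_(-1, topk.indices, 1.0)
        return concept_scores * mask

    scores = torch.rand(2, 64)    # hypothetical concept scores for two images
    pruned = keep_top_k(scores)   # at most five nonzero concepts per image reach the classifier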
When they compared their approach to state-of-the-art CBMs on tasks like predicting bird species and identifying skin lesions in medical images, their method achieved the highest accuracy while providing more precise explanations.
Their approach also generated concepts that were more applicable to the images in the dataset.
“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis says.
In the future, the researchers want to study potential solutions to the information leakage problem, perhaps by adding further concept bottleneck modules so unwanted concepts can’t leak through. They also plan to scale up their method by using a larger multimodal LLM to annotate a bigger training dataset, which could boost performance.
“I’m excited by this work because it pushes interpretable AI in a very promising direction and creates a natural bridge to symbolic AI and knowledge graphs,” says Andreas Hotho, professor and head of the Data Science Chair at the University of Würzburg, who was not involved with this work. “By deriving concept bottlenecks from the model’s own internal mechanisms rather than only from human-defined concepts, it offers a path toward explanations that are more faithful to the model and opens many opportunities for follow-up work with structured knowledge.”
This research was supported by the Progetto Rocca Doctoral Fellowship, the Italian Ministry of University and Research under the National Recovery and Resilience Plan, Thales Alenia Space, and the European Union under the NextGenerationEU project.
Personal tech, social media, and the “decline of humanity”
Social psychologist Jonathan Haidt presented a forceful analysis of the damage smartphones and social media are doing to our cognition, our civic fabric, and our children’s wellbeing, and called for renewed action to ward off those harms, in the latest of MIT’s Compton Lectures on Wednesday.
“Around the world, people are getting diminished,” Haidt said. “Less intelligent, less happy, less competent. And it’s happening very fast … My argument is that if we continue with current trends as AI is coming in, it’s going to accelerate. The decline of humanity is going to accelerate.”
Haidt is the Thomas Cooley Professor of Ethical Leadership at New York University’s Stern School of Business and the author of the recent bestseller “The Anxious Generation,” which suggests that the widespread adoption of social media in the 2010s has been especially damaging to young women, making them prone to anxiety and depression.
But as Haidt has continued to examine the effects of social media on society, he has started focusing on additional issues. Our inability to put our phones away, our compulsion to check social media, and the hours we spend each day watching short-form videos may be causing problems that go far beyond any rise in anxiety and depression.
“It turns out, it’s not the biggest thing,” Haidt said. “There’s something bigger. It is the destruction of the human capacity to pay attention. Because this is affecting most people, including most adults. And if you imagine humanity with 10 to 50 percent of its attentional ability sucked out of it, there’s not much left. We’re not very capable of doing things if we can’t focus or stay on a task for more than 30 seconds.”
Whatever solution may emerge to these problems, Haidt declared, is going to have to come from “human agency. People see a problem, they figure out a way around it. That’s what I’m hoping to promote here [to] this very important audience. So please consider what I’m saying, these trends, and then work to change them.”
Haidt’s lecture, titled, “Life After Babel: Democracy and Human Development in the Fractured, Lonely World That Technology Gave Us,” was delivered before a capacity audience of over 400 people in MIT’s Huntington Hall (Room 10-250).
The lecture spanned a variety of related topics, with Haidt presenting chart after chart showing declines in cognition, educational achievement, and happiness, all of which seemed to begin soon after the widespread adoption of smartphones in the 2010s. The individual adoption of smartphones, he noted, has been compounded by the way schools brought internet-connected computing devices into classrooms around the same time.
“The biggest, the most costly mistake we’ve ever made in the history of American education [was] to put computers and high tech on people’s desks,” Haidt said.
Distractible students with shorter attention spans are reading fewer books, he noted; some cinema students cannot sit through films. The top quartile of students continues to do well, but for most students, proficiency levels have dipped notably since the 2010s.
“Fifty years of progress in education, 50 years of progress, up in smoke, gone,” Haidt said. “We’re back to where we were 50 years ago. That’s pretty big, that’s pretty serious.”
As Haidt mentioned multiple times in his remarks, he is not an opponent of all forms of technology, or even personal communication technology, but rather is seeking to mitigate its harmful effects.
“I love tech, I love modernity, we’re all dependent on it, I love my iPhone,” Haidt said. Just as he finished that sentence, an audience member’s cellphone started ringing loudly — drawing a huge laugh from the audience.
“I did not plant that, that was a truly spontaneous demonstration of what I’m talking about,” Haidt said.
Haidt was introduced by MIT President Sally A. Kornbluth, who called him “a leading voice for reforming society’s relationship with technology.” She praised Haidt’s work, noting that he wants to “encourage us to imagine a more positive role for technology in humanity’s future.”
The Karl Taylor Compton Lecture Series was introduced in 1957. It is named for MIT’s ninth president, who led the Institute from 1930 to 1948 and also served as chair of the MIT Corporation from 1948 to 1954.
Compton, as Kornbluth observed, helped MIT evolve from being more strictly an engineering school into “a great global university” with “a new focus on fundamental scientific research.” During World War II, she added, Compton “helped invent the longstanding partnership between the federal government and America’s research universities.”
Haidt received his undergraduate degree from Yale University and his PhD from the University of Pennsylvania. He taught on the faculty at the University of Virginia for 16 years before joining New York University, and has written several widely discussed books about contemporary civic life. Haidt observed that the problems stemming from device distraction and compulsion appear to have hit so-called Gen Z — those born from roughly the mid-1990s to the early 2010s — especially hard, though he emphasized that people in that cohort are essentially victims of circumstance.
“I am not blaming Gen Z,” Haidt said. “I am saying we raised our kids in a way — we allowed the technology companies to take over childhood. We allowed a few giant companies to own our children’s attention, to show them millions of short videos, to destroy their ability to pay attention, to stop them from reading books, and this is the result.”
For a portion of his remarks, Haidt also examined the consequences of social media for politics, showing data charting the global decline of democracy since the 2010s, as the world has become soaked in misinformation and conflictual online interactions.
“That, I think, is what digital technology has done to us,” Haidt said. “It was supposed to connect us, but instead it has broken things, divided us, and made it very, very hard to ever have common facts, common truths, common stories again.”
Towards the end of his remarks, Haidt also speculated that the effects of using AI will be corrosive as well, intellectually and psychologically.
“AI is not exactly going to make us better at interacting with human beings,” Haidt said.
With all this in mind, what is to be done to limit the intellectual and social damage from tech devices and social media? For one thing, Haidt suggested, we should be less impressed by high-tech innovations and social media.
“We need to disenthrall ourselves from technology,” Haidt said, paraphrasing a line written by President Abraham Lincoln. He added: “I suggest that we have a generally negative view … of social media and of AI.” This kind of “more emotionally negative or ambivalent view” will make it easier for us to reverse the way technology seems to control us.
As a practical matter, Haidt suggested, that means taking steps to limit our exposure to technology. His own public-advocacy group, The Anxious Generation Movement, suggests a set of four reforms: no smartphones for kids before high-school age; no social media before age 16; making schools phone-free from bell to bell; and giving kids more independence, free play, and responsibility in the world.
There is certainly movement toward some of these measures. Some school districts in the U.S. are banning or limiting phone use; Australia has instituted a ban on social media for anyone under 16; and a handful of other countries have announced similar plans.
“There’s a gigantic techlash happening right now,” Haidt suggested. For all the sudden changes technology has introduced within the last 15 years, it is still possible, for now, for people to find a way out of our tech-induced predicament.
“The good news is, there is human agency,” Haidt said.
Seeds of something different
In Berlin in the early 1870s, tourists began visiting a neighborhood called Barackia. It did not have museums, palaces, or any other typical attractions. Barackia was a working-class neighborhood where people grew their own food, lived in small dwellings, and established communal arrangements outside the normal reach of government. For a while, anyway: In 1872, authorities moved in and cleared out Barackia.
Still, the concept of small urban farming caught on, and by 1900, about 50,000 Berlin households were growing food, often in so-called arbor colonies. The practice has never really been abandoned: Today, by law, Germany provides residents the right to garden, still a very popular activity in urban areas.
“In a little space, you can grow a lot of produce,” says MIT Professor Kate Brown, author of a new history of urban gardening. “Once you set things up, it need not take too much of your time. You can have another job and still grow food. You go to Berlin, and many German cities, and you’re surrounded by these allotment gardens.”
But as the residents of Barackia found out, there is a politics that comes with growing your own food on common land. Other interests may want to claim or at least control the land themselves. Or they may want to tap into the labor being applied to gardening. One way or another, when many people start gardening for themselves, core questions about the organization of society seem to sprout up, too.
Brown examines urban gardening and its politics in her book, “Tiny Gardens Everywhere: The Past, Present, and Future of the Self-Provisioning City,” published by W.W. Norton. Brown is the Thomas M. Siebel Distinguished Professor in History of Science within MIT’s Program in Science, Technology, and Society. In a book with global scope, ranging from Estonia to Amsterdam and Washington, Brown contends that urban gardening has many positive spillover effects, from health and environmental benefits to community-building — apart from periods of pushback when others have tried to eliminate it.
“Community after community, people work together to create food provisioning practices,” Brown says. “And after people come together for food and gardening, then they start to solve other problems they have.”
Whose land?
“Tiny Gardens Everywhere” was several years in the making, featuring extensive archival research interspersed with firsthand material. Brown’s story begins in England, which had a very long tradition of people farming on common land, often in ingenious, productive ways. “Every bit of space was used,” Brown says.
Then in the late 18th century, the advent of “enclosures” for wealthy landowners privatized much land and changed social life for many. Poorer residents, even when given allotments, found them not big enough for self-sustaining farming.
“Private property is largely an English invention of the late 18th century,” Brown says. “Before that, and in many parts of the world to this day, people live with a communal sense of the ownership of the land.”
In Brown’s interpretation, the enclosure movement did not just claim more land for Britain’s upper class. In an industrializing society, it forced peasants into the factory labor force, whether in cities or in rural mills.
“Really what they were doing when they were enclosing land was trying to control labor, as much as controlling land,” Brown says. “Because of their reliance on the commons, peasants were self-sufficient. Who wants to go work in a factory when you could be out having fun in the forest? Expelling people was a way to force them to become homeless, the landless proletariat, with nothing to sell but their labor, for 10 or 18 hours a day.”
As Brown chronicles in detail, conflicts between communal agriculture and propertied classes have arisen repeatedly since then, in varying forms, and sometimes in now-surprising places, because urban gardening has been more extensive than we realize.
A core section of “Tiny Gardens Everywhere” focuses on Washington, in the middle of the 20th century. During the Great Migration, which started a few decades earlier, African Americans moved north en masse, resettling in cities. They brought extensive knowledge with them about agricultural practices. In the part of Washington east of the Anacostia River, Black neighborhoods relied heavily on local gardening.
“They set up workers’ cooperatives and food cooperatives,” Brown observes. Despite often living in difficult circumstances, she adds, “I think it’s very interesting that people found really smart ways to adapt. If the neighborhood had no garbage collection, they’ll compost. No sewers, they’ll compost.”
Over time, though, authorities started claiming more land, designating homes to be torn down, and restricting the ability of residents to garden. And as Brown chronicles in the book, local officials have used restrictions on urban gardening as a form of social control, with one outcome being a homogenized social and physical landscape characterized by grass lawns for the affluent.
How much food?
Even if urban gardening has been fairly common in the past, it is natural to ask: How much food can it really provide? As Brown sees it, there is not one simple answer to that question. During World War II, for instance, victory gardens provided about 40 percent of all produce grown in the U.S. More recently, in 1996, 91 percent of the potatoes Russians ate came from urban allotment gardens occupying just 1.5 percent of the country’s arable land.
As Brown also points out in the book, we may not be growing as much produce on giant farms as we think. Only 2 percent of agricultural land in the U.S. is used to grow fruits and vegetables, for instance. The U.S., as a variety of analysts and writers have observed, has corn- and soy-heavy agricultural systems at its largest scales, principally yielding corn-based products. That means, Brown says, “They’re really inefficiently [working] to produce ethanol, corn syrup, chips, and cookies.”
In sum, she adds, “Yes, I do think it’s possible to take an urban space and grow a good part of the fruits and vegetables that people need there.”
It is possible, Brown believes, for things to change on this front. For instance, Florida, Illinois, and Maine, three fairly different states in terms of politics, all have laws providing the right to garden. Oklahoma has a similar bill in the works.
“I think this approach to looking at our right to grow food, to self-provision, to step outside of markets for our most essential needs, is something that represents a unifying set of desires in our hyperpolarized political landscape,” Brown says.
Other scholars have praised “Tiny Gardens Everywhere.” Sunil Amrith, a professor of history at Yale University, has said that Brown uses “enviable skill, craft, and insight” to show “that the past of small-scale urban provisioning contains the seeds of a more resilient future for us all.”
For her part, Brown hopes the book will not only appeal to readers, but spur them to become more active about the issue, as gardeners, local policy advocates, or both.
“One of the drumbeats of this book is that people do — and maybe we all should — win the right to garden,” Brown says.
Studying the genetic basis of disease to explore fundamental biological questions
When Associate Professor Eliezer Calo PhD ’11 was applying for faculty positions, he was drawn to MIT not only because it’s his alma mater, but also because the Department of Biology places high value on exploring fundamental questions in biology.
In his own lab, Calo studies how craniofacial malformations arise. One motivation is to seek new treatments for those conditions, but another is to learn more about fundamental biological processes such as protein synthesis and embryonic development.
“We use genes that are mutated in disease to uncover fundamental biology,” Calo says. “Mutations that happen in disease are an experiment of nature, telling us that those are the important genes, and then we follow them up not only to understand the disease, but to fundamentally understand what the genes are doing.”
Calo’s work has led to new insights into how ribosomes form and how they control protein synthesis, as well as how the nucleolus, the birthplace of ribosomes in eukaryotic cells, has evolved over hundreds of millions of years.
In addition to earning his PhD at MIT, Calo is also an alumnus of MIT’s Summer Research Program (MSRP), which helps to prepare undergraduate students to pursue graduate education. Since starting his lab at MIT, Calo has made a point to serve as a research mentor for the program every summer.
“I feel that it’s important to pay back to the program that helped me realize what I wanted to do,” he says.
A nontraditional path
Growing up in a mountainous region of Puerto Rico, Calo was the first person from his family to finish high school. While attending the University of Puerto Rico at Rio Piedras, the largest university in Puerto Rico, he explored a few different majors before settling on chemistry.
One of Calo’s chemistry professors invited him to work in her lab, where he did a research project studying the pharmacokinetics of cell receptors found on the surface of astrocytes, a type of brain cell.
“It was a good mix of biology and chemistry,” he says. “I think that that was the catalyst to my pursuit of a career in the sciences.”
He learned about MSRP from Mandana Sassanfar, a senior lecturer in biology at MIT and director of outreach for several MIT departments, at an event hosted by the University of Puerto Rico for students interested in careers in science. He was accepted into the program, and during the summer after his junior year, he worked in the lab of Stephen Bell, an MIT professor of biology. That experience, he says, was transformative.
“Without that experience, I would have probably chosen another career,” Calo says. In Puerto Rico, “science was fun, but it was a struggle. We had to make everything from scratch, and then you spend more time making reagents than doing the experiments. When I came to MIT, I was always doing experiments.”
During that time, he realized he liked working in biology labs more than chemistry labs, so when he applied to graduate school, he decided to move into biology. He applied to five schools, including MIT. “Once MIT sent me the acceptance, I just had to say yes. There was no saying no.”
At MIT, Calo thought he might study biochemistry, but he ended up focusing on cancer biology instead, working with Jacqueline Lees, an MIT biology professor, to study the role of the tumor suppressor protein Rb.
After finishing his PhD, Calo felt burnt out and wasn’t sure if he wanted to continue along the academic track. His thesis committee advisors encouraged him to do a postdoc just to try it out, and he ended up going to Stanford University, where he fell in love with California and switched to a new research focus. Working with Joanna Wysocka, a professor of developmental biology at Stanford, he began investigating how development is affected by the regulation of proteins that make up cellular ribosomes — a topic his lab still studies today.
Returning to MIT
When searching for faculty jobs, Calo focused mainly on schools in California, but also sent an application to MIT. As he was deciding between offers from MIT and the University of California at Berkeley, a phone call from Angelika Amon, the late MIT professor of biology, convinced him to take the cross-country leap back to MIT.
“She had me on the phone for more than one hour telling me why I should come to MIT,” he recalls. “And that was so heartwarming that I could not say no.”
Since starting his lab in 2017, Calo has been studying how defects in the production of ribosomes give rise to diseases, in particular craniofacial malformations such as cleft palate.
Ribosomes, the organelles where protein synthesis occurs, consist of two subunits made of about 80 proteins. A longstanding question in biology has been why mutations that affect ribosome formation appear to primarily affect the development of the face, but not the rest of the body.
In a 2018 study, Calo discovered that this is because the mutations that affect ribosomes can have secondary effects that influence craniofacial development. In embryonic cells that form the face, a mutation in a gene called TCOF1 activates p53 at a higher level than in other embryonic cells. High levels of p53 cause some of those cells to undergo programmed cell death, leading to Treacher-Collins Syndrome, a disorder that produces underdeveloped bones in the jaw and cheek.
His lab has shown that p53 overactivation is also responsible for craniofacial disorders caused by mutations in RNA splicing factors.
Calo’s work on ribosome formation also led him to explore another organelle, the nucleolus, whose role is to help build ribosomes. In 2023, he found that TCOF1, the same gene implicated in Treacher-Collins Syndrome, is critical for forming the three compartments that make up the nucleolus.
That finding, he says, could help to explain a major evolutionary shift that occurred around 300 million years ago, when the nucleolus transitioned from two to three compartments. This “tripartite” nucleolus is found in all reptiles, birds, and mammals.
“That was quite surprising,” Calo says. “Studying disease-related genes allowed us to understand a very fundamental biological process of how the nucleolus evolved, which has been a question in the field that nobody could figure out the answer for.”
