MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Bryan Bryson: Engineering solutions to the tough problem of tuberculosis
On his desk, Bryan Bryson ’07, PhD ’13 still has the notes he used for the talk he gave at MIT when he interviewed for a faculty position in biological engineering. On that sheet, he outlined the main question he wanted to address in his lab: How do immune cells kill bacteria?
Since starting his lab in 2018, Bryson has continued to pursue that question, which he sees as critical for finding new ways to target infectious diseases that have plagued humanity for centuries, especially tuberculosis. To make significant progress against TB, researchers need to understand how immune cells respond to the disease, he says.
“Here is a pathogen that has probably killed more people in human history than any other pathogen, so you want to learn how to kill it,” says Bryson, now an associate professor at MIT. “That has really been the core of our scientific mission since I started my lab. How does the immune system see this bacterium and how does the immune system kill the bacterium? If we can unlock that, then we can unlock new therapies and unlock new vaccines.”
The only TB vaccine now available, the BCG vaccine, is a weakened version of a bacterium that causes TB in cows. This vaccine is widely administered in some parts of the world, but it poorly protects adults against pulmonary TB. Although some treatments are available, tuberculosis still kills more than a million people every year.
“To me, making a better TB vaccine comes down to a question of measurement, and so we have really tried to tackle that problem head-on. The mission of my lab is to develop new measurement modalities and concepts that can help us accelerate a better TB vaccine,” says Bryson, who is also a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard.
From engineering to immunology
Engineering has deep roots in Bryson’s family: His great-grandfather was an engineer who worked on the Panama Canal, and his grandmother loved to build things and would likely have become an engineer if she had had the educational opportunity, Bryson says.
The oldest of four sons, Bryson was raised primarily by his mother and grandparents, who encouraged his interest in science. When he was three years old, his family moved from Worcester, Massachusetts, to Miami, Florida, where he began tinkering with engineering himself, building robots out of Styrofoam cups and light bulbs. After moving to Houston, Texas, at the beginning of seventh grade, Bryson joined his school’s math team.
As a high school student, Bryson had his heart set on studying biomedical engineering in college. However, MIT, one of his top choices, didn’t have a biomedical engineering program, and biological engineering wasn’t yet offered as an undergraduate major. After he was accepted to MIT, his family urged him to attend and then figure out what he would study.
Throughout his first year, Bryson deliberated over his decision, with electrical engineering and computer science (EECS) and aeronautics and astronautics both leading contenders. As he recalls, he thought he might study aero/astro with a minor in biomedical engineering and work on spacesuit design.
However, during an internship the summer after his first year, his mentor gave him a valuable piece of advice: “You should study something that will let you have a lot of options, because you don’t know how the world is going to change.”
When he came back to MIT for his sophomore year, Bryson switched his major to mechanical engineering, with a bioengineering track. He also started looking for undergraduate research positions. A poster in the hallway grabbed his attention, and he ended up working with the professor whose work was featured: Linda Griffith, a professor of biological engineering and mechanical engineering.
Bryson’s experience in the lab “changed the trajectory of my life,” he says. There, he worked on building microfluidic devices that could be used to grow liver tissue from hepatocytes. He enjoyed the engineering aspects of the project, but he realized that he also wanted to learn more about the cells and why they behaved the way they did. He ended up staying at MIT to earn a PhD in biological engineering, working with Forest White.
In White’s lab, Bryson studied cell signaling processes and how they are altered in diseases such as cancer and diabetes. While doing his PhD research, he also became interested in studying infectious diseases. After earning his degree, he went to work with a professor of immunology at the Harvard School of Public Health, Sarah Fortune.
Fortune studies tuberculosis, and in her lab, Bryson began investigating how Mycobacterium tuberculosis interacts with host cells. During that time, Fortune instilled in him a desire to seek solutions to tuberculosis that could be transformative — not just identifying a new antibiotic, for example, but finding a way to dramatically reduce the incidence of the disease. This, he thought, could be done by vaccination, and in order to do that, he needed to understand how immune cells respond to the disease.
“That postdoc really taught me how to think bravely about what you could do if you were not limited by the measurements you could make today,” Bryson says. “What are the problems we really need to solve? There are so many things you could think about with TB, but what’s the thing that’s going to change history?”
Pursuing vaccine targets
Since joining the MIT faculty eight years ago, Bryson and his students have developed new ways to answer the question he posed in his faculty interviews: How does the immune system kill bacteria?
One key step in this process is that immune cells must be able to recognize bacterial proteins that are displayed on the surfaces of infected cells. Mycobacterium tuberculosis produces more than 4,000 proteins, but only a small subset of those end up displayed by infected cells. Those proteins would likely make the best candidates for a new TB vaccine, Bryson says.
Bryson’s lab has developed ways to identify those proteins, and so far, their studies have revealed that many of the TB antigens displayed to the immune system belong to a class of proteins known as type 7 secretion system substrates. Mycobacterium tuberculosis expresses about 100 of these proteins, but which of these 100 are displayed by infected cells varies from person to person, depending on their genetic background.
By studying blood samples from people of different genetic backgrounds, Bryson’s lab has identified the TB proteins displayed by infected cells in about 50 percent of the human population. He is now working on the remaining 50 percent and believes that once those studies are finished, he’ll have a very good idea of which proteins could be used to make a TB vaccine that would work for nearly everyone.
Once those proteins are chosen, his team can work on designing the vaccine and then testing it in animals, with hopes of being ready for clinical trials in about six years.
In spite of the challenges ahead, Bryson remains optimistic about the possibility of success, and credits his mother for instilling a positive attitude in him while he was growing up.
“My mom decided to raise all four of her children by herself, and she made it look so flawless,” Bryson says. “She instilled a sense of ‘you can do what you want to do,’ and a sense of optimism. There are so many ways that you can say that something will fail, but why don’t we look to find the reasons to continue?”
One of the things he loves about MIT is that he has found a similar can-do attitude across the Institute.
“The engineer ethos of MIT is that yes, this is possible, and what we’re trying to find is the way to make this possible,” he says. “I think engineering and infectious disease go really hand-in-hand, because engineers love a problem, and tuberculosis is a really hard problem.”
When not tackling hard problems, Bryson likes to lighten things up with ice cream study breaks at Simmons Hall, where he is an associate head of house. Using an ice cream machine he has had since 2009, Bryson makes gallons of ice cream for dorm residents several times a year. Nontraditional flavors such as passion fruit or jalapeno strawberry have proven especially popular.
“Recently I did flavors of fall, so I did a cinnamon ice cream, I did a pear sorbet,” he says. “Toasted marshmallow was a huge hit, but that really destroyed my kitchen.”
Pablo Jarillo-Herrero wins BBVA Foundation Frontiers of Knowledge Award
Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, has won the 2025 BBVA Foundation Frontiers of Knowledge Award in Basic Sciences for “discoveries concerning the ‘magic angle’ that allows the behavior of new materials to be transformed and controlled.”
He shares the 400,000-euro award with Allan MacDonald of the University of Texas at Austin. According to the BBVA Foundation, “the pioneering work of the two physicists has achieved both the theoretical foundation and experimental validation of a new field where superconductivity, magnetism, and other properties can be obtained by rotating new two-dimensional materials like graphene.” Graphene is a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure.
Theoretical foundation, experimental validation
In a theoretical model published in 2011, MacDonald predicted that twisting two graphene layers to a given angle of around 1 degree would cause the interaction of electrons to produce new emergent properties.
In 2018, Jarillo-Herrero delivered the experimental confirmation of this “magic angle” by rotating two graphene sheets in a way that transformed the material’s behavior, giving rise to new properties like superconductivity.
The physicists’ work “has opened up new frontiers in physics by demonstrating that rotating matter to a given angle allows us to control its behavior, obtaining properties that could have a major industrial impact,” explained award committee member María José García Borge, a research professor at the Institute for the Structure of Matter. “Superconductivity, for example, could bring about far more sustainable electricity transmission, with virtually no energy loss.”
Almost science fiction
MacDonald’s initial discovery had little immediate impact. It was not until some years later, when it was confirmed in the laboratory by Jarillo-Herrero, that its true importance was revealed.
“The community would never have been so interested in my subject, if there hadn’t been an experimental program that realized that original vision,” observes MacDonald, who refers to his co-laureate’s achievement as “almost science fiction.”
Jarillo-Herrero had been intrigued by the possible effects of placing two graphene sheets on top of each other with a precise rotational alignment, because “it was uncharted territory, beyond the reach of the physics of the past, so was bound to produce some interesting results.”
But the scientist was still unsure of how to make it work in the lab. For years, he had been stacking together layers of the super-thin material, but without being able to specify the angle between them. Finally, he devised a way to do so, making the angle smaller and smaller until he got to the “magic” angle of 1.1 degrees at which the graphene revealed some extraordinary behavior.
“It was a big surprise, because the technique we used, though conceptually straightforward, was hard to pull off in the lab,” says Jarillo-Herrero, who is also affiliated with the Materials Research Laboratory.
Since 2009, the BBVA Foundation has given Frontiers of Knowledge Awards to more than a dozen MIT faculty members. The Frontiers of Knowledge Awards, spanning eight prize categories, recognize world-class research and cultural creation and aim to celebrate and promote the value of knowledge as a global public good. The BBVA Foundation works to support scientific research and cultural creation, disseminate knowledge and culture, and recognize talent and innovation.
Cancer’s secret safety net
Researchers in Class of 1942 Professor of Chemistry Matthew D. Shoulders’ lab have uncovered a sinister hidden mechanism that can allow cancer cells to survive (and, in some cases, thrive) even when hit with powerful drugs. The secret lies in a cellular “safety net” that gives cancer the freedom to develop aggressive mutations.
This fascinating intersection between molecular biology and evolutionary dynamics, published Jan. 22 on the cover of Molecular Cell, focuses on the most famous anti-cancer gene in the human body, TP53 (tumor protein 53, known as p53), and suggests that cancer cells don’t just mutate by accident — they create a specialized environment that makes dangerous mutations possible.
The guardian under attack
Tasked with the job of stopping damaged cells from dividing, the p53 protein has been known for decades as the “guardian of the genome,” and its gene is the most frequently mutated in cancer. Some of the most perilous of these mutations are known as “dominant-negative” variants. Not only do the mutant proteins stop working, they actually prevent any healthy p53 in the cell from doing its job, essentially disarming the body’s primary defense system.
To function, p53 and most other proteins must fold into specific 3D shapes, much like precise cellular origami. Typically, if a mutation occurs that ruins this shape, the protein becomes a tangled mess, and the cell destroys it.
A specialized network of proteins, collectively known as the proteostasis network, helps proteins fold into their correct shapes; its core workers are cellular chaperones.
“Many chaperone networks are known to be upregulated in cancer cells, for reasons that are not totally clear,” says Stephanie Halim, a graduate student in the Shoulders Group and co-first author of the study, along with Rebecca Sebastian PhD ’22. “We hypothesized that increasing the activities of these helpful protein folding networks can allow cancer cells to tolerate more mutations than a regular cell.”
The research team focused on a master regulator of this network called Heat Shock Factor 1 (HSF1), whose activity ramps up the proteostasis network to create a more supportive protein-folding environment in response to stress. In healthy cells, HSF1 stays dormant until heat or toxins appear. In cancer, HSF1 is often permanently in action mode.
To see how this works in real-time, the team created a specialized cancer cell line that let them chemically “turn up” the activity of HSF1 on demand. They then used a cutting-edge technique to express every possible singly mutated version of a p53 protein — testing thousands of different genetic “typos” at once.
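To make concrete what “every possible singly mutated version” of a protein means computationally, here is a minimal, hypothetical Python sketch that enumerates all single amino-acid substitutions of a short, made-up sequence; it illustrates the scale of such a variant library, not the study’s experimental method or code.

```python
# Minimal sketch: enumerate every single amino-acid substitution of a protein
# segment, the kind of variant library scanned in deep mutational studies.
# The example sequence below is made up, not the actual p53 sequence.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def single_substitutions(sequence):
    """Yield (position, original, replacement, mutant_sequence) tuples."""
    for i, original in enumerate(sequence):
        for replacement in AMINO_ACIDS:
            if replacement != original:
                mutant = sequence[:i] + replacement + sequence[i + 1:]
                yield i + 1, original, replacement, mutant  # 1-based position

if __name__ == "__main__":
    segment = "ACDEFGHIKL"  # illustrative 10-residue segment
    variants = list(single_substitutions(segment))
    # 10 positions x 19 alternative residues = 190 single mutants
    print(f"{len(variants)} single-substitution variants")
    pos, orig, new, seq = variants[0]
    print(f"Example variant {orig}{pos}{new}: {seq}")
```

For a full-length protein of roughly 400 residues, the same enumeration yields thousands of single mutants, which is the scale of “typos” the team tested at once.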
The results were clear: When HSF1 was amplified, the cancer cells became much better at handling “bad” mutations. Normally, these specific mutations are so physically disruptive that they would cause the protein to collapse and fail. However, with HSF1 providing extra folding help, these unstable, cancer-driving proteins were able to stay intact and keep the cancer growing.
“These findings show that chaperone networks can reshape the fundamental mutational tolerance of the most mutated gene in cancer, linking proteostasis network activity directly to cancer development,” said Halim. “This work also puts us one step closer to understanding how tinkering with cellular protein folding pathways can help with cancer treatment.”
Unravelling cancer’s safety net
The study revealed that HSF1 activity specifically protects normally disruptive amino acid substitutions located deep inside the protein’s core — the most sensitive areas. Without this extra folding help, these substitutions would likely cause degradation of these proteins. With it, the cancer cell can keep these broken proteins around to help it grow.
This discovery helps explain why cancer is so resilient, and why previous attempts to treat cancer by blocking chaperone proteins (like HSP90, an abundant cellular chaperone) have been so complex. By understanding how cancer “buffers” its own bad mutations, doctors may one day be able to break that safety net, forcing the cancer’s own mutations to become its downfall.
The research was conducted in collaboration with the labs of professors Yu-Shan Lin of Tufts University; Francisco J. Sánchez-Rivera of the MIT Department of Biology; William C. Hahn, institute member of the Broad Institute of MIT and Harvard and professor of medicine in the Department of Medical Oncology at the Dana-Farber Cancer Institute and Harvard Medical School; and Marc L. Mendillo of Northwestern University.
Richard Hynes, a pioneer in the biology of cellular adhesion, dies at 81
MIT Professor Emeritus Richard O. Hynes PhD ’71, a cancer biologist whose discoveries reshaped modern understandings of how cells interact with each other and their environment, passed away on Jan. 6. He was 81.
Hynes is best known for his discovery of integrins, a family of cell-surface receptors essential to cell–cell and cell–matrix adhesion. He played a critical role in establishing the field of cell adhesion biology, and his continuing research revealed mechanisms central to embryonic development, tissue integrity, and diseases including cancer, fibrosis, thrombosis, and immune disorders.
Hynes was the Daniel K. Ludwig Professor for Cancer Research, Emeritus, an emeritus professor of biology, and a member of the Koch Institute for Integrative Cancer Research at MIT and the Broad Institute of MIT and Harvard. During his more than 50 years on the faculty at MIT, he was deeply respected for his academic leadership at the Institute and internationally, as well as for his intellectual rigor and his contributions as an educator and mentor.
“Richard had an enormous impact in his career. He was a visionary leader of the MIT Cancer Center, what is now the Koch Institute, during a time when the progress in understanding cancer was just starting to be translated into new therapies,” reflects Matthew Vander Heiden, director of the Koch Institute and the Lester Wolfe (1919) Professor of Molecular Biology. “The research from his laboratory launched an entirely new field by defining the molecules that mediate interactions between cells and between cells and their environment. This laid the groundwork for better understanding the immune system and metastasis.”
Pond skipper
Born in Kenya, Hynes grew up during the 1950s in Liverpool, in the United Kingdom. While he sometimes recounted stories of being schoolmates with two of the Beatles, and in the same Boy Scouts troop as Paul McCartney, his academic interests were quite different, and he specialized in the sciences at a young age. Both of his parents were scientists: His father was a freshwater ecologist, and his mother a physics teacher. Hynes and all three of his siblings followed their parents into scientific fields.
"We talked science at home, and if we asked questions, we got questions back, not answers. So that conditioned me into being a scientist, for sure," Hynes said of his youth.
He described his time as an undergraduate and master’s student at Cambridge University during the 1960s as “just fantastic,” noting that it was shortly after two 1962 Nobel Prizes were awarded to Cambridge researchers — one to Francis Crick and James Watson for the structure of DNA, the other to John Kendrew and Max Perutz for the structures of proteins — and Cambridge was “the place to be” to study biology.
Newly married, Hynes and his wife traded Cambridge, U.K. for Cambridge, Massachusetts, so that he could conduct doctoral work at MIT under the direction of Paul Gross. He tried (and by his own assessment, failed) to differentiate maternal messages among the three germ layers of sea urchin embryos. However, he did make early successful attempts to isolate the globular protein tubulin, a building block for essential cellular structures, from sea urchins.
Inspired by a course he had taken with Watson in the United States, Hynes began work during his postdoc at the Institute of Cancer Research in the U.K. on the early steps of oncogenic transformation and the role of cell migration and adhesion; it was here that he made his earliest discovery and characterizations of the fibronectin protein.
Recruited back to MIT by Salvador Luria, founding director of the MIT Center for Cancer Research, whom he had met during a summer at Woods Hole Oceanographic Institute on Cape Cod, Hynes returned to the Institute in 1975 as a founding faculty member of the center and an assistant professor in the Department of Biology.
Big questions about tiny cells
To his own research, Hynes brought the same spirit of inquiry that had characterized his upbringing, asking fundamental questions: How do cells interact with each other? How do they stick together to form tissues?
His research focused on proteins that allow cells to adhere to each other and to the extracellular matrix — a mesh-like network that surrounds cells, providing structural support, as well as biochemical and mechanical cues from the local microenvironment. These proteins include integrins, a type of cell surface receptor, and fibronectins, a family of extracellular adhesive proteins. Integrins are the major adhesion receptors connecting the extracellular matrix to the intracellular cytoskeleton, or main architectural support within the cell.
Hynes began his career as a developmental biologist, studying how cells move to the correct locations during embryonic development, a process that depends on the proper modulation of cell adhesion.
Hynes’ work also revealed that dysregulation of cell-to-matrix contact plays an important role in cancer cells’ ability to detach from a tumor and spread to other parts of the body, key steps in metastasis.
As a postdoc, Hynes had begun studying the differences in the surface landscapes of healthy cells and tumor cells. It was this work that led to the discovery of fibronectin, which is often lost when cells become cancerous.
He and others found that fibronectin is an important part of the extracellular matrix. When fibronectin is lost, cancer cells can more easily free themselves from their original location and metastasize to other sites in the body. By studying how fibronectin normally interacts with cells, Hynes and others discovered a family of cell surface receptors known as integrins, which function as important physical links with the extracellular matrix. In humans, 24 integrin proteins have been identified. These proteins help give tissues their structure, enable blood to clot, and are essential for embryonic development.
“Richard’s discoveries, along with others’, of cell surface integrins led to the development of a number of life-altering treatments. Among these are treatment of autoimmune diseases such as multiple sclerosis,” notes longtime colleague Phillip Sharp, MIT Institute professor emeritus.
As research technologies advanced, including proteomic and extracellular matrix isolation methods developed directly in Hynes’ laboratory, he and his group were able to uncover increasingly detailed information about specific cell adhesion proteins, the biological mechanisms by which they operate, and the roles they play in normal biology and disease.
In cancer, their work helped to uncover how cell adhesion (and the loss thereof) and the extracellular matrix contribute not only to fundamental early steps in the metastatic process, but also to tumor progression, therapeutic response, and patient prognosis. This included studies that mapped matrix protein signatures associated with cancerous and non-cancerous cells and tissues, followed by investigations into how differentially expressed matrix proteins can promote or suppress cancer progression.
Hynes and his colleagues also demonstrated how extracellular matrix composition can influence immunotherapy, such as the importance of a family of cell adhesion proteins called selectins for recruiting natural killer cells to tumors. Further, Hynes revealed links between fibronectin, integrins, and other matrix proteins with tumor angiogenesis, or blood vessel development, and also showed how interaction with platelets can stimulate tumor cells to remodel the extracellular matrix to support invasion and metastasis. In pursuing these insights into the oncogenic mechanisms of matrix proteins, Hynes and members of his laboratory have identified useful diagnostic and prognostic biomarkers, as well as therapeutic targets.
Along the way, Hynes shaped not only the research field, but also the careers of generations of trainees.
“There was much to emulate in Richard’s gentle, patient, and generous approach to mentorship. He centered the goals and interests of his trainees, fostered an inclusive and intellectually rigorous environment, and cared deeply about the well-being of his lab members. Richard was a role model for integrity in both personal and professional interactions and set high expectations for intellectual excellence,” recalls Noor Jailkhani, a former Hynes Lab postdoc.
Jailkhani is CEO and co-founder, with Hynes, of Matrisome Bio, a biotech company developing first-in-class targeted therapies for cancer and fibrosis by leveraging the extracellular matrix. “The impact of his long and distinguished scientific career was magnified through the generations of trainees he mentored, whose influence spans academia and the biotechnology industry worldwide. I believe that his dedication to mentorship stands among his most far-reaching and enduring contributions,” she says.
A guiding light
Widely sought for his guidance, Hynes served in a number of key roles at MIT and in the broader scientific community. As head of MIT’s Department of Biology from 1989 to 1991, and then for a decade as director of the MIT Center for Cancer Research, he helped shape the Institute’s programs in both areas.
“Words can’t capture what a fabulous human being Richard was. I left every interaction with him with new insights and the warm glow that comes from a good conversation,” says Amy Keating, the Jay A. Stein (1968) Professor, professor of biology and biological engineering, and head of the Department of Biology. “Richard was happy to share stories, perspectives, and advice, always with a twinkle in his eye that conveyed his infinite interest in and delight with science, scientists, and life itself. The calm support that he offered me, during my years as department head, meant a lot and helped me do my job with confidence.”
Hynes served as director of the MIT Center for Cancer Research from 1991 until 2001, positioning the center’s distinguished cancer biology program for expansion into its current, interdisciplinary research model as MIT’s Koch Institute for Integrative Cancer Research. “He recruited and strongly supported Tyler Jacks to the faculty, who subsequently became director and headed efforts to establish the Koch Institute,” recalls Sharp.
Jacks, a David H. Koch (1962) Professor of Biology and founding director of the Koch Institute, remembers Hynes as a thoughtful, caring, and highly effective leader in the Center for Cancer Research, or CCR, and in the Department of Biology. “I was fortunate to be able to lean on him when I took over as CCR director. He encouraged me to drop in — unannounced — with questions and concerns, which I did regularly. I learned a great deal from Richard, at every level,” he says.
Hynes’ leadership and recognition extended well beyond MIT to national and international contexts, helping to shape policy and strengthen connections between MIT researchers and the wider field. He served as a scientific governor of the Wellcome Trust, a global health research and advocacy foundation based in the United Kingdom, and co-chaired U.S. National Academy committees establishing guidelines for stem cell and genome editing research.
“Richard was an esteemed scientist, a stimulating colleague, a beloved mentor, a role model, and to me a partner in many endeavors both within and beyond MIT,” notes H. Robert Horvitz, a David H. Koch (1962) Professor of Biology. “He was a wonderful human being, and a good friend. I am sad beyond words at his passing.”
Named a Howard Hughes Medical Institute investigator in 1988, Hynes was recognized over his career with a number of other notable honors for his research and leadership. Most recently, he received the 2022 Albert Lasker Basic Medical Research Award, which he shared with Erkki Ruoslahti of Sanford Burnham Prebys and Timothy Springer of Harvard University, for his discovery of integrins and pioneering work in cell adhesion.
His other awards include the Canada Gairdner International Award, the Distinguished Investigator Award from the International Society for Matrix Biology, the Robert and Claire Pasarow Medical Research Award, the E.B. Wilson Medal from the American Society for Cell Biology, the David Rall Medal from the National Academy of Medicine and the Paget-Ewing Award from the Metastasis Research Society. Hynes was a member of the National Academy of Sciences, the National Academy of Medicine, the Royal Society of London, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences.
Easily recognized by a commanding stature that belied his soft-spoken nature, Hynes was known around MIT’s campus not only for his acuity, integrity, and wise counsel, but also for his community spirit and service. From serving food at community socials to moderating events and meetings or recognizing the success of colleagues and trainees, his willingness to help spanned roles of every size.
“Richard was a phenomenal friend and colleague. He approached complex problems with a thoughtfulness and clarity that few can achieve,” notes Vander Heiden. “He was also so generous in his willingness to provide help and advice, and did so with a genuine kindness that was appreciated by everyone.”
Hynes is survived by his wife Fleur, their sons Hugh and Colin and their partners, and four grandchildren.
Biology-based brain model matches animals in learning, enables new discovery
A new computational model of the brain, based closely on its biology and physiology, not only learned a simple visual category learning task exactly as well as lab animals did, but also enabled the discovery of counterintuitive activity in a group of neurons that researchers who had run the same task with animals had not previously noticed in their data, says a team of scientists at Dartmouth College, MIT, and the State University of New York at Stony Brook.
Notably, the model produced these achievements without ever being trained on any data from animal experiments. Instead, it was built from scratch to faithfully represent how neurons connect into circuits and then communicate electrically and chemically across broader brain regions to produce cognition and behavior. Then, when the research team asked the model to perform the same task that they had previously performed with the animals (looking at patterns of dots and deciding which of two broader categories they fit), it produced highly similar neural activity and behavioral results, acquiring the skill with almost exactly the same erratic progress.
“It’s just producing new simulated plots of brain activity that then only afterward are being compared to the lab animals. The fact that they match up as strikingly as they do is kind of shocking,” says Richard Granger, a professor of psychological and brain sciences at Dartmouth and senior author of a new study in Nature Communications that describes the model.
A goal in making the model, and newer iterations developed since the paper was written, is not only to offer insight into how the brain works, but also how it might work differently in disease and what interventions could correct those aberrations, adds co-author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory at MIT. Miller, Granger, and other members of the research team have founded the company Neuroblox.ai to develop the models’ biotech applications. Co-author Lilianne R. Mujica-Parodi, a biomedical engineering professor at Stony Brook who is lead principal investigator for the Neuroblox Project, is CEO of the company.
“The idea is to make a platform for biomimetic modeling of the brain so you can have a more efficient way of discovering, developing, and improving neurotherapeutics. Drug development and efficacy testing, for example, can happen earlier in the process, on our platform, before the risk and expense of clinical trials,” says Miller, who is also a faculty member of MIT’s Department of Brain and Cognitive Sciences.
Making a biomimetic model
Dartmouth postdoc Anand Pathak created the model, which differs from many others in that it incorporates both small details, such as how individual pairs of neurons connect with each other, and large-scale architecture, including how information processing across regions is affected by neuromodulatory chemicals such as acetylcholine. Pathak and the team iterated their designs to ensure they obeyed various constraints observed in real brains, such as how neurons become synchronized by broader rhythms. Many other models focus only on the small or big scales, but not both, he says.
“We didn’t want to lose the tree, and we didn’t want to lose the forest,” Pathak says.
The metaphorical “trees,” called “primitives” in the study, are small circuits of a few neurons each that connect based on electrical and chemical principles of real cells to perform fundamental computational functions. For example, within the model’s version of the brain’s cortex, one primitive design has excitatory neurons that receive input from the visual system via synapse connections affected by the neurotransmitter glutamate. Those excitatory neurons then densely connect with inhibitory neurons in a competition to signal them to shut down the other excitatory neurons — a “winner-take-all” architecture found in real brains that regulates information processing.
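To make the “winner-take-all” primitive concrete, here is a minimal, hypothetical Python sketch of lateral inhibition among a few rate-based units; it is a toy illustration of the competition described above, not code from the Neuroblox model, and all parameters are invented.

```python
import numpy as np

# Toy winner-take-all dynamics: a few excitatory units receive input drive,
# and pooled inhibition (standing in for the inhibitory neurons) suppresses
# all but the most strongly driven unit. All parameters are illustrative.

def winner_take_all(drive, inhibition_strength=2.0, steps=200, dt=0.05):
    """Iterate simple firing-rate dynamics until one unit dominates."""
    rates = np.zeros_like(drive, dtype=float)
    for _ in range(steps):
        pooled_inhibition = inhibition_strength * rates.sum()
        # Each unit is pushed up by its own drive and down by shared inhibition.
        rates += dt * (drive - pooled_inhibition - rates)
        rates = np.clip(rates, 0.0, None)  # firing rates cannot go negative
    return rates

drive = np.array([0.9, 1.2, 0.7])  # the second unit gets the strongest input
print(winner_take_all(drive))      # the second unit keeps a high rate; the others are suppressed
```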
At a larger scale, the model encompasses four brain regions needed for basic learning and memory tasks: a cortex, a brainstem, a striatum, and a “tonically active neuron” (TAN) structure that can inject a little “noise” into the system via bursts of acetylcholine. For instance, as the model engaged in the task of categorizing the presented patterns of dots, the TAN at first ensured some variability in how the model acted on the visual input so that the model could learn by exploring varied actions and their outcomes. As the model continued to learn, cortex and striatum circuits strengthened connections that suppressed the TAN, enabling the model to act on what it was learning with increasing consistency.
As the model engaged in the learning task, real-world properties emerged, including a dynamic that Miller has commonly observed in his research with animals. As learning progressed, the cortex and striatum became more synchronized in the “beta” frequency band of brain rhythms, and this increased synchrony correlated with times when the model (and the animals) made the correct category judgement about what they were seeing.
Revealing “incongruent” neurons
But the model also presented the researchers with a group of neurons — about 20 percent — whose activity appeared highly predictive of error. When these so-called “incongruent” neurons influenced circuits, the model would make the wrong category judgement. At first, Granger says, the team figured it was a quirk of the model. But then they looked at the real-brain data Miller’s lab accumulated when animals performed the same task.
“Only then did we go back to the data we already had, sure that this couldn’t be in there because somebody would have said something about it, but it was in there, and it just had never been noticed or analyzed,” he says.
Miller says these counterintuitive cells might serve a purpose: it’s all well and good to learn the rules of a task, but what if the rules change? Trying out alternatives from time to time can enable a brain to stumble upon a newly emerging set of conditions. Indeed, a separate Picower Institute lab recently published evidence that humans and other animals do this sometimes.
While the model described in the new paper performed beyond the team’s expectations, Granger says, the team has been expanding it to make it sophisticated enough to handle a greater variety of tasks and circumstances. For instance, they have added more regions and new neuromodulatory chemicals. They’ve also begun to test how interventions such as drugs affect its dynamics.
In addition to Granger, Miller, Pathak and Mujica-Parodi, the paper’s other authors are Scott Brincat, Haris Organtzidis, Helmut Strey, Sageanne Senneff, and Evan Antzoulatos.
The Baszucki Brain Research Fund, United States, the Office of Naval Research, and the Freedom Together Foundation provided support for the research.
Akorfa Dagadu named 2027 Schwarzman Scholar
MIT undergraduate Akorfa Dagadu has been named a Schwarzman Scholar and will join the program’s Class of 2026-27 scholars from 40 countries and 83 universities. This year’s 150 Schwarzman Scholars were selected for their leadership potential from a pool of over 5,800 applicants, the highest number in the Schwarzman Scholarship’s 11-year history.
Schwarzman Scholars pursue a one-year, fully funded master’s degree program in global affairs at Schwarzman College, Tsinghua University, in Beijing, China. The graduate curriculum focuses on the pillars of leadership, global affairs, and China, with additional opportunities for cultural immersion, experiential learning, and professional development. The program aims to build a global network of leaders with a well-rounded understanding of China’s evolving role in the world.
Hailing from Ghana, Dagadu is a senior majoring in chemical-biological engineering. At MIT, she researches how enzyme-polymer systems can be designed to break down plastics at end-of-life, work that has been recognized internationally through publications and awards, including the CellPress Rising Scientist Award.
Dagadu is the founder of Ishara, a venture transforming recycling in Ghana by connecting informal waste pickers to transparent, efficient systems with potential to scale across growth markets. She aspires to establish a materials innovation hub in Africa to address the end-of-life of materials, from plastics to e-waste.
MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in MIT Career Advising and Professional Development, as well as the Presidential Committee on Distinguished Fellowships. Students and alumni interested in learning more should contact Kimberly Benard, associate dean and director of distinguished fellowships and academic excellence.
Featured video: How tiny satellites help us track hurricanes and other weather events
MIT Lincoln Laboratory has transformed weather intelligence by miniaturizing microwave sounders, instruments that measure Earth's atmospheric temperature, moisture, and water vapor. These instruments are 1/100th the size of traditional sounders aboard multibillion-dollar satellites, enabling them to fit on shoebox-sized CubeSats.
When deployed in a constellation, the CubeSats can observe rapidly intensifying storms near-hourly — providing fresh data to forecasting professionals during critical windows of storm development that have largely been undetectable by past remote-sensing technology.
Developed at Lincoln Laboratory, the mini microwave sounders were first demonstrated on NASA's TROPICS mission, which measured temperature and humidity soundings as well as precipitation. TROPICS concluded in 2025 with over 11 billion observations, providing scientists with key insights into tropical cyclone evolution.
Now the technology has been licensed by the commercial firm Tomorrow.io, allowing for the enhancement of global weather coverage for customers in aviation, logistics, agriculture, and emergency management. Tomorrow.io provides clients with hyperlocal forecasts around the globe and is set to launch its own constellation of satellites based on the TROPICS program. Says John Springman, Tomorrow.io's head of space and sensing: “Our overall goal is to fundamentally improve weather forecasts, and that'll improve our downstream products like our weather intelligence.”
Video by Tim Briggs/Lincoln Laboratory | 13 minutes, 58 seconds
Professor of the practice Robert Liebeck, leading expert on aircraft design, dies at 87
Robert Liebeck, a professor of the practice in the MIT Department of Aeronautics and Astronautics and one of the world’s leading experts on aircraft design, aerodynamics, and hydrodynamics, died on Jan. 12 at age 87.
“Bob was a mentor and dear friend to so many faculty, alumni, and researchers at AeroAstro over the course of 25 years,” says Julie Shah, department head and the H.N. Slater Professor of Aeronautics and Astronautics at MIT. “He’ll be deeply missed by all who were fortunate enough to know him.”
Liebeck’s long and distinguished career in aerospace engineering included a number of foundational contributions to aerodynamics and aircraft design, beginning with his graduate research into high-lift airfoils. His novel designs came to be known as “Liebeck airfoils” and are used primarily for high-altitude reconnaissance airplanes; Liebeck airfoils have also been adapted for use in Formula One racing cars, racing sailboats, and even a flying replica of a giant pterosaur.
He was perhaps best known for his groundbreaking work on blended wing body (BWB) aircraft. He oversaw the BWB project at Boeing during his celebrated five-decade tenure at the company, working closely with NASA on the X-48 experimental aircraft. After retiring as senior technical fellow at Boeing in 2020, Liebeck remained active in BWB research. He served as technical advisor at BWB startup JetZero, which is aiming to build a more fuel-efficient aircraft for both military and commercial use and has set a target date of 2027 for its demonstration flight.
Liebeck was appointed a professor of the practice at MIT in 2000, and taught classes on aircraft design and aerodynamics.
“Bob contributed to the department both in aircraft capstones and also in advising students and mentoring faculty, including myself,” says John Hansman, the T. Wilson Professor of Aeronautics and Astronautics. “In addition to aviation, Bob was very significant in car racing and developed the downforce wing and flap system which has become standard on F1, IndyCar, and NASCAR cars.”
He was a major contributor to the Silent Aircraft Project, a collaboration between MIT and Cambridge University led by Dame Ann Dowling. Liebeck also worked closely with Professor Woody Hoburg ’08 and his research group, advising on students’ research into efficient methods for designing aerospace vehicles. Before Hoburg was accepted into the NASA astronaut corps in 2017, the group produced an open-source Python package, GPkit, for geometric programming, which was used to design a five-day endurance unmanned aerial vehicle for the U.S. Air Force.
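GPkit is a real, open-source package; the snippet below is only a generic, minimal geometric-programming example in the style of its documentation, with made-up variables, not the Air Force UAV design code.

```python
# Minimal geometric-programming example in the spirit of GPkit's documentation.
# The variables and constraint here are generic placeholders, not the UAV model.
from gpkit import Variable, Model

x = Variable("x")
y = Variable("y")

# Minimize x + y subject to the posynomial constraint x*y >= 1.
m = Model(x + y, [x * y >= 1])
sol = m.solve(verbosity=0)  # requires a GP solver such as cvxopt or MOSEK

print(sol["cost"])          # optimal cost, 2 at x = y = 1
print(sol(x), sol(y))       # optimal variable values
```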
“Bob was universally respected in aviation and he was a good friend to the department,” remembers Professor Ed Greitzer.
Liebeck was an AIAA honorary fellow and Boeing senior technical fellow, as well as a member of the National Academy of Engineering, Royal Aeronautical Society, and Academy of Model Aeronautics. He was a recipient of the Guggenheim Medal and ASME Spirit of St. Louis Medal, among many other awards, and was inducted into the International Air and Space Hall of Fame.
An avid runner and motorcyclist, Liebeck is remembered by friends and colleagues for his adventurous nature and generosity of spirit. Throughout a career punctuated by honors and achievements, Liebeck found his greatest satisfaction in teaching. In addition to his role at MIT, he was an adjunct faculty member at the University of California at Irvine and served as a faculty member for that university’s Design/Build/Fly and Human-Powered Airplane teams.
“It is the one job where I feel I have done some good — even after a bad lecture,” he told AeroAstro Magazine in 2007. “I have decided that I am finally beginning to understand aeronautical engineering, and I want to share that understanding with our youth.”
Electrifying boilers to decarbonize industry
More than 200 years ago, the steam boiler helped spark the Industrial Revolution. Since then, steam has been the lifeblood of industrial activity around the world. Today the production of steam — created by burning gas, oil, or coal to boil water — accounts for a significant percentage of global energy use in manufacturing, powering the creation of paper, chemicals, pharmaceuticals, food, and more.
Now, the startup AtmosZero, founded by Addison Stark SM ’10, PhD ’14; Todd Bandhauer; and Ashwin Salvi, is taking a new approach to electrify the centuries-old steam boiler. The company has developed a modular heat pump capable of delivering industrial steam at temperatures up to 150 degrees Celsius to serve as a drop-in replacement for combustion boilers.
The company says its first 1-megawatt steam system is far cheaper to operate than commercially available electric solutions thanks to ultra-efficient compressor technology, which uses 50 percent less electricity than electric resistive boilers. The founders are hoping that’s enough to make decarbonized steam boilers drive the next industrial revolution.
“Steam is the most important working fluid ever,” says Stark, who serves as AtmosZero’s CEO. “Today everything is built around the ubiquitous availability of steam. Cost-effectively electrifying that requires innovation that can scale. In other words, it requires a mass-produced product — not one-off projects.”
Tapping into steam
Stark joined the Technology and Policy Program when he came to MIT in 2007. He ultimately completed a dual master’s degree by adding mechanical engineering to his studies.
“I was interested in the energy transition and in accelerating solutions to enable that,” Stark says. “The transition isn’t happening in a vacuum. You need to align economics, policy, and technology to drive that change.”
Stark stayed at MIT to earn his PhD in mechanical engineering, studying thermochemical biofuels.
After MIT, Stark began working on early-stage energy technologies with the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), with a focus on manufacturing efficiency, the energy-water nexus, and electrification.
“Part of that work involved applying my training at MIT to things that hadn’t really been innovated on for 50 years,” Stark says. “I was looking at the heat exchanger. It’s so fundamental. I thought, ‘How might we reimagine it in the context of modern advances in manufacturing technology?’”
The problem is as difficult as it is consequential, touching nearly every corner of the global industrial economy. More than 2.2 gigatons of CO2 emissions are generated each year to turn water into steam — accounting for more than 5 percent of global energy-related emissions.
In 2020, Stark co-authored an article in the journal Joule with Gregory Thiel SM ’12, PhD ’15 titled, “To decarbonize industry, we must decarbonize heat.” The article examined opportunities for industrial heat decarbonization, and it got Stark excited about the potential impact of a standardized, scalable electric heat pump.
Most electric boiler options today bring huge increases in operating costs. Many also make use of factory waste heat, which requires pricey retrofits. Stark never imagined he’d become an entrepreneur, but he soon realized no one was going to act on his findings for him.
“The only path to seeing this invention brought out into the world was to found and run the company,” Stark says. “It’s something I didn’t anticipate or necessarily want, but here I am.”
Stark partnered with former ARPA-E awardee Todd Bandhauer, who had been inventing new refrigerant compressor technology in his lab at Colorado State University, and former ARPA-E colleague Ashwin Salvi. The team officially founded AtmosZero in 2022.
“The compressor is the engine of the heat pump and defines the efficiency, cost, and performance,” Stark says. “The fundamental challenge of delivering heat is that the higher your heat pump is raising the air temperature, the lower your maximum efficiency. It runs into thermodynamic limitations. By designing for optimum efficiency in the operational windows that matter for the refrigerants we’re using, and for the precision manufacturing of our compressors, we’re able to maximize the individual stages of compression to maximize operational efficiency.”
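As a rough, back-of-the-envelope illustration of the thermodynamic ceiling Stark alludes to (not AtmosZero’s actual performance figures), the Carnot limit on a heat pump’s coefficient of performance (COP) falls as the temperature lift grows, and any heat pump with a real-world COP of at least 2 uses half the electricity of a resistive boiler for the same heat output. The numbers in the sketch below are illustrative.

```python
# Rough illustration of the thermodynamic limit on heat-pump efficiency.
# Carnot COP for heating = T_hot / (T_hot - T_cold), with temperatures in kelvin.
# Temperatures below are illustrative, not AtmosZero performance data.

def carnot_cop_heating(t_hot_c, t_cold_c):
    """Upper bound on heat delivered per unit of electrical work, for a given lift."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

# Delivering 150 C steam from 20 C ambient air: large lift, lower ceiling.
print(round(carnot_cop_heating(150, 20), 2))   # ~3.25
# Delivering 60 C water from the same source: small lift, much higher ceiling.
print(round(carnot_cop_heating(60, 20), 2))    # ~8.33

# A resistive boiler has COP = 1, so a heat pump achieving COP >= 2 in practice
# uses at most half the electricity for the same heat output.
```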
The system can work with waste heat from air or water, but it doesn’t need waste heat to work. Many other electric boilers rely on waste heat, but Stark thinks that adds too much complexity to installation and operations.
Instead, in AtmosZero’s novel heat pump cycle, heat from ambient-temperature air warms a liquid heat-transfer material, which evaporates a refrigerant. The refrigerant then flows through the system’s series of compressors and heat exchangers, reaching temperatures high enough to boil water, and its remaining heat is recovered once it returns to lower temperatures. The system can be ramped up and down to fit seamlessly into existing industrial processes.
“We can work just like a combustion boiler,” Stark says. “At the end of the day, customers don’t want to change how their manufacturing facilities operate in order to electrify. You can’t change or increase complexity on-site.”
That approach means the boiler can be deployed in a range of industrial contexts without unique project costs or other changes.
“What we really offer is flexibility and something that can drop in with ease and minimize total capital costs,” Stark says.
From 1 to 1,000
AtmosZero already has a pilot 650-kilowatt system operating at a customer facility near its headquarters in Loveland, Colorado. The company is currently focused on demonstrating the system’s year-round durability and reliability as it works to build out its backlog of orders and prepares to scale.
Stark says once the system is brought to a customer’s facility, it can be installed in an afternoon and deployed in a matter of days, with zero downtime.
AtmosZero is aiming to deliver a handful of units to customers over the next year or two, with plans to deploy hundreds of units a year after that. The company is currently targeting manufacturing plants using under 10 megawatts of thermal energy at peak demand, which represents most U.S. manufacturing facilities.
Stark is proud to be part of a growing group of MIT-affiliated decarbonization startups, some of which are targeting specific verticals, like Boston Metal for steel and Sublime Systems for cement. But he says beyond the most common materials, the industry gets very fragmented, with one of the only common threads being the use of steam.
“If we look across industrial segments, we see the ubiquity of steam,” Stark says. “It’s a tremendously ripe opportunity to have impact at scale. Steam cannot be removed from industry. So much of every industrial process that we’ve designed over the last 160 years has been around the availability of steam. So, we need to focus on ways to deliver low-emissions steam rather than removing it from the equation.”
Why it’s critical to move beyond overly aggregated machine-learning metrics
MIT researchers have identified significant examples of machine-learning models failing when they are applied to data other than what they were trained on, underscoring the need to test a model whenever it is deployed in a new setting.
“We demonstrate that even when you train models on large amounts of data, and choose the best average model, in a new setting this ‘best model’ could be the worst model for 6-75 percent of the new data,” says Marzyeh Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science, and principal investigator at the Laboratory for Information and Decision Systems.
In a paper that was presented at the Neural Information Processing Systems (NeurIPS 2025) conference in December, the researchers point out that models trained to effectively diagnose illness in chest X-rays at one hospital, for example, may appear effective at a different hospital, on average. The researchers’ performance assessment, however, revealed that some of the best-performing models at the first hospital were the worst-performing on up to 75 percent of patients at the second hospital, a failure hidden by high average performance when all of the second hospital’s patients are aggregated.
Their findings demonstrate that although spurious correlations — a simple example of which is when a machine-learning system, not having “seen” many cows pictured at the beach, classifies a photo of a beach-going cow as an orca simply because of its background — are thought to be mitigated by just improving model performance on observed data, they actually still occur and remain a risk to a model’s trustworthiness in new settings. In many instances — including areas examined by the researchers such as chest X-rays, cancer histopathology images, and hate speech detection — such spurious correlations are much harder to detect.
In the case of a medical diagnosis model trained on chest X-rays, for example, the model may have learned to correlate a specific and irrelevant marking on one hospital’s X-rays with a certain pathology. At another hospital where the marking is not used, that pathology could be missed.
Previous research by Ghassemi’s group has shown that models can spuriously correlate such factors as age, gender, and race with medical findings. If, for instance, a model has been trained on more pneumonia chest X-rays from older people and hasn’t “seen” as many X-rays belonging to younger people, it might predict that only older patients have pneumonia.
“We want models to learn how to look at the anatomical features of the patient and then make a decision based on that,” says Olawale Salaudeen, an MIT postdoc and the lead author of the paper, “but really anything that’s in the data that’s correlated with a decision can be used by the model. And those correlations might not actually be robust with changes in the environment, making the model predictions unreliable sources of decision-making.”
Spurious correlations contribute to the risks of biased decision-making. In the NeurIPS conference paper, the researchers showed that, for example, chest X-ray models that improved overall diagnosis performance actually performed worse on patients with pleural conditions or enlarged cardiomediastinum, meaning enlargement of the heart or central chest cavity.
Other authors of the paper included PhD students Haoran Zhang and Kumail Alhamoud, EECS Assistant Professor Sara Beery, and Ghassemi.
While previous work has generally accepted that models ordered best-to-worst by performance will preserve that order when applied in new settings (a phenomenon called accuracy-on-the-line), the researchers were able to demonstrate cases in which the best-performing models in one setting were the worst-performing in another.
Salaudeen devised an algorithm called OODSelect to find examples where accuracy-on-the-line breaks down. Basically, he trained thousands of models using in-distribution data, meaning data from the first setting, and calculated their accuracy. Then he applied the models to data from the second setting. The second-setting examples that the highest-accuracy first-setting models consistently got wrong formed the problem subsets, or sub-populations. Salaudeen also emphasizes the dangers of aggregate statistics for evaluation, which can obscure more granular and consequential information about model performance.
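As a concrete illustration of the procedure described above, the sketch below reconstructs the core idea with hypothetical names and a simple threshold; it is not the researchers’ released OODSelect code. It ranks models by their accuracy on the first setting, then flags second-setting examples that most of those top-ranked models get wrong.

```python
# Illustrative sketch of the subset-finding idea (hypothetical names, not the
# released OODSelect implementation). Assumes `models` were trained on
# in-distribution (first-setting) data and expose a scikit-learn-style predict().
import numpy as np

def find_problem_subset(models, X_id, y_id, X_ood, y_ood, top_k=50, threshold=0.5):
    """Return indices of second-setting (OOD) examples that the best
    in-distribution models tend to get wrong, i.e., where accuracy-on-the-line breaks."""
    # Rank models by accuracy on the first (in-distribution) setting.
    id_acc = np.array([np.mean(m.predict(X_id) == y_id) for m in models])
    best = np.argsort(id_acc)[::-1][:top_k]

    # For each second-setting example, count how often the top models miss it.
    miss_rate = np.zeros(len(y_ood))
    for i in best:
        miss_rate += (models[i].predict(X_ood) != y_ood)
    miss_rate /= top_k

    # Examples missed by most of the "best" models form the problem subset.
    return np.where(miss_rate > threshold)[0]
```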
In the course of their work, the researchers separated out the “most misclassified examples” so as not to conflate spurious correlations within a dataset with situations that are simply difficult to classify.
Alongside the NeurIPS paper, the researchers are releasing their code and some of the identified subsets for future work.
Once a hospital, or any organization employing machine learning, identifies subsets on which a model is performing poorly, that information can be used to improve the model for its particular task and setting. The researchers recommend that future work adopt OODSelect in order to highlight targets for evaluation and design approaches to improving performance more consistently.
“We hope the released code and OODSelect subsets become a steppingstone,” the researchers write, “toward benchmarks and models that confront the adverse effects of spurious correlations.”
To flexibly organize thought, the brain makes use of space
Our thoughts are specified by our knowledge and plans, yet our cognition can also be fast and flexible in handling new information. How does the well-controlled and yet highly nimble nature of cognition emerge from the brain’s anatomy of billions of neurons and circuits?
A study by researchers in The Picower Institute for Learning and Memory at MIT provides new evidence from tests in animals that the answer might be found within a theory called “spatial computing.”
First proposed in 2023 by Picower Professor Earl K. Miller and colleagues Mikael Lundqvist and Pawel Herman, spatial computing theory explains how neurons in the prefrontal cortex can be organized on the fly into a functional group capable of carrying out the information processing required by a cognitive task. Moreover, it allows for neurons to participate in multiple such groups, as years of experiments have shown that many prefrontal neurons can indeed participate in multiple tasks at once.
The basic idea of the theory is that the brain recruits and organizes ad hoc “task forces” of neurons by using “alpha” and “beta” frequency brain waves (about 10-30 Hz) to apply control signals to physical patches of the prefrontal cortex. Rather than having to rewire themselves into new physical circuits every time a new task must be done, the neurons in the patch instead process information by following the patterns of excitation and inhibition imposed by the waves.
Think of the alpha and beta frequency waves as stencils that shape when and where in the prefrontal cortex groups of neurons can take in or express information from the senses, Miller says. In that way, the waves represent the rules of the task and can organize how the neurons electrically “spike” to process the information content needed for the task.
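A toy numerical illustration of this stencil idea, offered purely for intuition and not drawn from the study’s models, is sketched below: a hypothetical map of alpha/beta power gates a patch of simulated neurons, so spikes carry the sensory input only where the control signal’s power is low.

```python
# Toy illustration of the "stencil" idea: alpha/beta power acts as a spatial
# gate, and spiking carries sensory information only where that power is low.
# Hypothetical numbers throughout; this is not the study's model.
import numpy as np

rng = np.random.default_rng(1)
grid = (20, 20)                                   # a small patch of "cortex"

# Hypothetical control map imposed by the task rules: strong alpha/beta power
# over the left half of the patch, weak power over the right half.
alpha_beta_power = np.ones(grid)
alpha_beta_power[:, 10:] = 0.1

# Sensory drive arriving everywhere in the patch.
sensory_drive = rng.random(grid)

# Spiking is allowed only where the control signal is weak, so the spike
# pattern reflects both the rule (where) and the stimulus (what).
spike_prob = sensory_drive * (1.0 - alpha_beta_power)
spikes = rng.random(grid) < spike_prob

print("spikes under strong alpha/beta (left half):", int(spikes[:, :10].sum()))
print("spikes under weak alpha/beta (right half):", int(spikes[:, 10:].sum()))
```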
“Cognition is all about large-scale neural self-organization,” says Miller, senior author of the paper in Current Biology and a faculty member in MIT’s Department of Brain and Cognitive Sciences. “Spatial computing explains how the brain does that.”
Testing five predictions
A theory is just an idea. In the study, lead author Zhen Chen and other current and former members of Miller’s lab put spatial computing to the test by examining whether five predictions it makes about neural activity and brain wave patterns were actually evident in measurements made in the prefrontal cortex of animals as they engaged in two working-memory tasks and one categorization task. Across the tasks, there were distinct pieces of sensory information to process (e.g., “A blue square appeared on the screen followed by a green triangle”) and rules to follow (e.g., “When new shapes appear on the screen, do they match the shapes I saw before and appear in the same order?”).
The first two predictions were that alpha and beta waves should represent task controls and rules, while the spiking activity of neurons should represent the sensory inputs. When the researchers analyzed the brain wave and spiking readings gathered by the four electrode arrays implanted in the cortex, they found that indeed these predictions were true. Neural spikes, but not the alpha/beta waves, carried sensory information. While both spikes and the alpha/beta waves carried task information, it was strongest in the waves, and it peaked at times relevant to when rules were needed to carry out the tasks.
Notably, in the categorization task, the researchers purposely varied the level of abstraction to make categorization more or less cognitively difficult. The researchers saw that the greater the difficulty, the stronger the alpha/beta wave power was, further showing that it carries task rules.
The next two predictions were that alpha/beta would be spatially organized, and that when and where it was strong, the sensory information represented by spiking would be suppressed, but where and when it was weak, spiking would increase. These predictions also held true in the data. Under the electrodes, Chen, Miller, and the team could see distinct spatial patterns of higher or lower wave power, and where power was high, the sensory information in spiking was low, and vice versa.
Finally, if spatial computing is valid, the researchers predicted, then trial by trial, alpha/beta power and timing should accurately correlate with the animals’ performance. Sure enough, there were significant differences in the signals on trials where the animals performed the tasks correctly versus when they made mistakes. In particular, the measurements predicted mistakes due to messing up task rules versus sensory information. For instance, alpha/beta discrepancies pertained to the order in which stimuli appeared (first square then triangle) rather than the identity of the individual stimuli (square or triangle).
Compatible with findings in humans
By conducting this study with animals, the researchers were able to make direct measurements of individual neural spikes as well as brain waves, and in the paper, they note that other studies in humans report some similar findings. For instance, studies using noninvasive EEG and MEG brain wave readings show that humans use alpha oscillations to inhibit activity in task-irrelevant areas under top-down control, and that alpha oscillations appear to govern task-related activity in the prefrontal cortex.
While Miller says he finds the results of the new study, and their intersection with human studies, to be encouraging, he acknowledges that more evidence is still needed. For instance, his lab has shown that brain waves typically do not stay in place like a swinging jump rope, but instead travel across areas of the brain. Spatial computing should account for that, he says.
In addition to Chen and Miller, the paper’s other authors are Scott Brincat, Mikael Lundqvist, Roman Loonis, and Melissa Warden.
The U.S. Office of Naval Research, The Freedom Together Foundation, and The Picower Institute for Learning and Memory funded the study.
A new way to “paint with light” to create radiant, color-changing items
Gemstones like precious opal are beautiful to look at and deceptively complex. As you look at such gems from different angles, you’ll see a variety of tints glisten, causing you to question what color the rock actually is. It’s iridescent thanks to something called structural color — microscopic structures that reflect light to produce radiant hues.
Structural color can be found across different organisms in nature, such as on the tails of peacocks and the wings of certain butterflies. Scientists and artists have been working to replicate this quality, but outside of the lab it remains very hard to recreate, which puts a barrier in the way of on-demand, customizable fabrication. Instead, companies and individual designers alike have resorted to adding existing color-changing objects like feathers and gems to things like personal items, clothes, and artwork.
Now MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have replicated nature’s brilliance with a new optical system called “MorphoChrome.” MorphoChrome allows users to design and program iridescence onto everyday objects (like a glove, for example), augmenting them with the structurally colored multi-color glimmer reminiscent of many gemstones. You select particular colors from a color wheel in the team’s software program and use their handheld device to “paint” with multi-color light onto holographic film. Then, you apply that painted sheet to 3D-printed objects or flexible substrates such as fashion items, sporting goods, and other personal accessories, using their unique epoxy resin transfer process.
“We wanted to tap into the innate intelligence of nature,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Paris Myers SM ’25, who is a lead author on a recent paper presenting MorphoChrome. “In the past, you couldn’t easily synthesize structural color yourself, but using pigments or dyes gave you full creative expression. With our system, you have full creative agency over this new material space, predictably programming iridescent designs in real-time.”
MorphoChrome showed it could add a luminous touch to things like a necklace charm of a butterfly. What started as a static, black accessory became a shiny pendant with green, orange, and blue glimmers, thanks to the system’s programmable color process. MorphoChrome also turned golfing gloves into beginner-friendly training equipment that shine green when you hold a golf club at the correct angle, and even helped one user adorn their fingernails with a gemstone-like look.
These multi-color displays are the result of a handheld fabrication process where MorphoChrome acts as a “brush” to paint with red-green-blue (RGB) laser light, while a holographic photopolymer film (used for things like passports and debit cards) is the canvas. Users first connect the system’s handheld device to a computer via a USB-C port, then open the software program. They can then click “send color” to rapidly transmit different hues from their laptop or home computer to the MorphoChrome hardware tool.
This handheld device transforms the colors on a screen into a controllable, multi-color RGB laser light output that instantly exposes the film, a sort of canvas where users can explore different combinations of hues. About the size of a glue bottle, MorphoChrome’s optical machine houses red, green, and blue lasers, which are activated at various intensities depending on the color chosen. These lights are reflected off mirrors toward an optical prism, where the colors mix and are promptly released as a single combined beam of light.
After designing the film, one can fabricate diverse structurally colored objects by first coating a chosen object with a thin layer of epoxy resin. Next, the holographic film (from Liti Holographics) — composed of a photopolymer layer and a protective plastic backing — is bonded to the object through a 20-second ultraviolet cure, essentially using a handheld UV light to transfer the colored design onto the surface. Finally, users peel off the film’s protective plastic sheet, revealing a color-changing, structurally colored object that looks like a jewel.
Do try this at home
MorphoChrome is surprisingly user-friendly, consisting of a straightforward fabrication blueprint and an easy-to-use device that encourages do-it-yourself designers and other makers to explore iridescent designs at home. Instead of spending time searching for hard-to-find artistic materials or chemically synthesizing structural color in the lab, users can focus on expressing various ideas and experimenting with programming different radiant color mixes.
The array of possible colors stems from intriguing fusions. Magenta, for instance, is created when the system’s blue and red lasers mix. Selecting cyan on the MorphoChrome software’s color wheel will mix the green and blue lights.
Users should note that the time it takes to fully expose the film to each color will vary, based on the researchers’ multi-color findings and the intrinsic properties of holographic photopolymer film. MorphoChrome activates green in 2.5 seconds, whereas red takes about 3 seconds, and blue needs roughly 6 seconds to saturate. The reason for this discrepancy is that each color is a particular wavelength of light, requiring a certain level of light exposure (blue needing more than green or red).
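Put together, the article’s description suggests a simple control recipe: decompose the chosen color into red, green, and blue laser drive levels, then expose each channel for long enough to saturate it. The sketch below is a hypothetical illustration of that mapping, using the per-channel saturation times reported above; it is not the team’s actual control software.

```python
# Hypothetical color-to-exposure planner for an RGB-laser exposure system.
# Saturation times come from the article (green ~2.5 s, red ~3 s, blue ~6 s);
# the linear scaling of exposure with intensity is an assumption.
SATURATION_TIME_S = {"red": 3.0, "green": 2.5, "blue": 6.0}

def plan_exposure(rgb):
    """Given a chosen color as (r, g, b) values in 0-255, return the relative
    drive and exposure time for each laser channel that is switched on."""
    plan = {}
    for channel, value in zip(("red", "green", "blue"), rgb):
        intensity = value / 255.0                  # relative laser drive level
        if intensity > 0:
            plan[channel] = {
                "intensity": round(intensity, 2),
                "exposure_s": round(intensity * SATURATION_TIME_S[channel], 2),
            }
    return plan

# Magenta mixes the red and blue lasers; cyan mixes the green and blue lasers.
print(plan_exposure((255, 0, 255)))
print(plan_exposure((0, 255, 255)))
```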
Look at this hologram
MorphoChrome builds upon previous work on stretchable structural color by co-author Benjamin Miller PhD ’24, Professor Mathias Kolle, and Kolle’s Laboratory for Biologically Inspired Photonic Engineering group at MIT's Department of Mechanical Engineering. The CSAIL researchers, who work in the Human-Computer Interaction Engineering Group, say that MorphoChrome also advances their ongoing work on merging computation with unique materials to create dynamic, programmable color interfaces.
Going forward, their goal is to push the capabilities of holographic structural color as a reprogrammable design and manufacturing space, empowering individuals and industries alike to customize iridescent and diffuse multi-color interfaces. “The polymer sheet we went with here is holographic, which has potential beyond what we’re showing here,” says co-author Yunyi Zhu ’20, MEng ’21, who is an MIT EECS PhD student and CSAIL researcher. “We’re working on adapting our process for creating entire 3D light fields in one film.”
Customizing full light-field holographic messages onto objects would allow users to encode information and 3D images. One could imagine, for example, that a passport could have a sticker that beams out a 3D green check mark. This hologram would signal its authenticity when viewed through a particular device or at a certain angle.
The team is also inspired by how animals use structural color as an adaptive communication channel and camouflage technique. Going forward, they are curious how programmable structural color could be integrated into different types of environments, perhaps as camouflage for soft robotic structures to blend into an environment. For instance, they imagine a robot studying jungle terrain may need to match the appearance of nearby bushes to collect data, with a human reprogramming the machine’s color from afar.
In the meantime, MorphoChrome recreates the majestic structural color found in various ecosystems, connecting a natural phenomenon with our creative processes. MIT researchers will look to improve the system’s color gamut and maximize how luminous mixed colors are. They’re also considering using another material for the device’s casing, since its current 3D-printed housing leaks some light.
“Being able to easily create and manipulate structural color is a great new tool, and opens up new avenues for discovery and expression,” says Liti Holographics CEO Paul Christie SM ’97, who wasn’t involved in the research. “Simplifying the process to be more easily accessible allows for new applications to be developed in a wider range of areas, from art and jewelry to functional fabric.”
Myers, Zhu, and Miller wrote the paper with senior author Stefanie Mueller, who is an MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. Their research was supported by the National Science Foundation, and presented as a demo paper and poster at the 2025 ACM Symposium on Computational Fabrication in November.
Polar weather on Jupiter and Saturn hints at the planets’ interior details
Over the years, passing spacecraft have observed mystifying weather patterns at the poles of Jupiter and Saturn. The two planets host very different types of polar vortices, which are huge atmospheric whirlpools that rotate over a planet’s polar region. On Saturn, a single massive polar vortex appears to cap the north pole in a curiously hexagonal shape, while on Jupiter, a central polar vortex is surrounded by eight smaller vortices, like a pan of swirling cinnamon rolls.
Given that both planets are similar in many ways — they are roughly the same size and made from the same gaseous elements — the stark difference in their polar weather patterns has been a longstanding mystery.
Now, MIT scientists have identified a possible explanation for how the two different systems may have evolved. Their findings could help scientists understand not only the planets’ surface weather patterns, but also what might lie beneath the clouds, deep within their interiors.
In a study appearing this week in the Proceedings of the National Academy of Sciences, the team simulates various ways in which well-organized vortex patterns may form out of random stimulations on a gas giant. A gas giant is a large planet that is made mostly of gaseous elements, such as Jupiter and Saturn. Among a wide range of plausible planetary configurations, the team found that, in some cases, the currents coalesced into a single large vortex, similar to Saturn’s pattern, whereas other simulations produced multiple large circulations, akin to Jupiter’s vortices.
After comparing simulations, the team found that vortex patterns, and whether a planet develops one or multiple polar vortices, comes down to one main property: the “softness” of a vortex’s base, which is related to the interior composition. The scientists liken an individual vortex to a whirling cylinder spinning through a planet’s many atmospheric layers. When the base of this swirling cylinder is made of softer, lighter materials, any vortex that evolves can only grow so large. The final pattern can then allow for multiple smaller vortices, similar to those on Jupiter. In contrast, if a vortex’s base is made of harder, denser stuff, it can grow much larger and subsequently engulf other vortices to form one single, massive vortex, akin to the monster cyclone on Saturn.
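For readers who want a quantitative handle on “softness,” one standard quantity in rotating-fluid dynamics that ties the density contrast at a vortex’s base to the size a vortex can reach is the deformation radius. It is offered here only as an illustrative textbook relation; the study may parameterize softness differently:

\[
L_d = \frac{\sqrt{g' H}}{f}, \qquad g' = g\,\frac{\Delta\rho}{\rho},
\]

where \(g'\) is the reduced gravity set by the density jump at the vortex’s base (smaller for a softer, lighter base), \(H\) is the depth of the swirling layer, and \(f\) is the planet’s Coriolis parameter. A smaller \(L_d\) caps vortices at smaller sizes, consistent with the picture of many small vortices over a soft base and a single planetary-scale vortex over a hard one.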
“Our study shows that, depending on the interior properties and the softness of the bottom of the vortex, this will influence the kind of fluid pattern you observe at the surface,” says study author Wanying Kang, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “I don’t think anyone’s made this connection between the surface fluid pattern and the interior properties of these planets. One possible scenario could be that Saturn has a harder bottom than Jupiter.”
The study’s first author is MIT graduate student Jiaru Shi.
Spinning up
Kang and Shi’s new work was inspired by images of Jupiter and Saturn that have been taken by the Juno and Cassini missions. NASA’s Juno spacecraft has been orbiting around Jupiter since 2016, and has captured stunning images of the planet’s north pole and its multiple swirling vortices. From these images, scientists have estimated that each of Jupiter’s vortices is immense, spanning about 3,000 miles across — almost half as wide as the Earth itself.
The Cassini spacecraft, prior to intentionally burning up in Saturn’s atmosphere in 2017, orbited the ringed planet for 13 years. Its observations of Saturn’s north pole recorded a single, hexagonal-shaped polar vortex, about 18,000 miles wide.
“People have spent a lot of time deciphering the differences between Jupiter and Saturn,” Shi says. “The planets are about the same size and are both made mostly of hydrogen and helium. It’s unclear why their polar vortices are so different.”
Shi and Kang set out to identify a physical mechanism that would explain why one planet might evolve a single vortex, while the other hosts multiple vortices. To do so, they worked with a two-dimensional model of surface fluid dynamics. While a polar vortex is three-dimensional in nature, the team reasoned that they could accurately represent vortex evolution in two dimensions, as the fast rotation of Jupiter and Saturn enforces uniform motion along the rotating axis.
“In a fast-rotating system, fluid motion tends to be uniform along the rotating axis,” Kang explains. “So, we were motivated by this idea that we can reduce a 3D dynamical problem to a 2D problem because the fluid pattern does not change in 3D. This makes the problem hundreds of times faster and cheaper to simulate and study.”
Getting to the bottom
Following this reasoning, the team developed a two-dimensional model of vortex evolution on a gas giant, based on an existing equation that describes how swirling fluid evolves over time.
“This equation has been used in many contexts, including to model midlatitude cyclones on Earth,” Kang says. “We adapted the equation to the polar regions of Jupiter and Saturn.”
The team applied their two-dimensional model to simulate how fluid would evolve over time on a gas giant under different scenarios. In each scenario, the team varied the planet’s size, its rate of rotation, its internal heating, and the softness or hardness of the rotating fluid, among other parameters. They then set a random “noise” condition, in which fluid initially flowed in random patterns across the planet’s surface. Finally, they observed how the fluid evolved over time given the scenario’s specific conditions.
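For intuition about what such a numerical experiment looks like, the minimal sketch below evolves a doubly periodic two-dimensional vorticity field from random noise using a pseudospectral method. The resolution, viscosity, and plain 2D setup are illustrative assumptions; the actual study’s model includes the planetary geometry, forcing, and the “soft bottom” physics that this sketch omits.

```python
# Minimal 2D vorticity solver (doubly periodic, pseudospectral, forward Euler).
# Illustrative only: all parameters are assumptions, not the study's configuration.
# Dynamics: d(zeta)/dt = -u . grad(zeta) + nu * laplacian(zeta), with laplacian(psi) = zeta.
import numpy as np

N, L, nu, dt, steps = 128, 2 * np.pi, 1e-4, 1e-3, 2000
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi        # angular wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                    # avoid dividing by zero for the mean mode

rng = np.random.default_rng(0)
zeta_hat = np.fft.fft2(rng.standard_normal((N, N)))   # random "noise" initial vorticity

def rhs(zh):
    psi_hat = -zh / k2                            # invert laplacian(psi) = zeta
    u = np.real(np.fft.ifft2(-1j * ky * psi_hat)) # u = -d(psi)/dy
    v = np.real(np.fft.ifft2(1j * kx * psi_hat))  # v =  d(psi)/dx
    zx = np.real(np.fft.ifft2(1j * kx * zh))
    zy = np.real(np.fft.ifft2(1j * ky * zh))
    adv = np.fft.fft2(u * zx + v * zy)            # u . grad(zeta); no dealiasing, kept minimal
    return -adv - nu * k2 * zh

for _ in range(steps):                            # simple forward-Euler time stepping
    zeta_hat = zeta_hat + dt * rhs(zeta_hat)

zeta = np.real(np.fft.ifft2(zeta_hat))            # final field: the noise organizes into vortices
```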
Over multiple different simulations, they observed that some scenarios evolved to form a single large polar vortex, like Saturn, whereas others formed multiple smaller vortices, like Jupiter. After analyzing the combinations of parameters and variables in each scenario and how they related to the final outcome, they landed on a single mechanism to explain whether one vortex or multiple vortices evolve: As random fluid motions start to coalesce into individual vortices, the size to which a vortex can grow is limited by how soft the bottom of the vortex is. The softer or lighter the gas rotating at the bottom of a vortex, the smaller the vortex ends up, allowing multiple smaller-scale vortices to coexist at a planet’s pole, similar to those on Jupiter.
Conversely, the harder or denser a vortex bottom is, the larger the system can grow, to a size where eventually it can follow the planet’s curvature as a single, planetary-scale vortex, like the one on Saturn.
If this mechanism is indeed what is at play on both gas giants, it would suggest that Jupiter could be made of softer, lighter material, while Saturn may harbor heavier stuff in its interior.
“What we see from the surface, the fluid pattern on Jupiter and Saturn, may tell us something about the interior, like how soft the bottom is,” Shi says. “And that is important because maybe beneath Saturn’s surface, the interior is more metal-enriched and has more condensable material which allows it to provide stronger stratification than Jupiter.”
"Because Jupiter and Saturn are otherwise so similar, their different polar weather has been a puzzle,” says Yohai Kaspi, a professor of geophysical fluid dynamics at the Weizmann Institute of Science, and a member of the Juno mission’s science team, who was not involved in the new study. “The work by Shi and Kang reveals a surprising link between these differences and the planets’ deep interior ‘softness’, offering a new way to map the key internal properties that shape their atmospheres."
This research was supported, in part, by a Mathworks Fellowship and endowed funding from MIT’s Department of Earth, Atmospheric and Planetary Sciences.
Demystifying college for enlisted veterans and service members
“I went into the military right after high school, mostly because I didn’t really see the value of academics,” says Air Force veteran and MIT sophomore Justin Cole.
His perspective on education shifted, however, after he experienced several natural disasters during his nine years of service. As a satellite systems operator in Colorado, Cole volunteered in the aftermath of the 2013 Black Forest fire, the state’s most destructive fire at the time. And in 2018, while he was leading a team in Okinawa conducting signal-monitoring work on communications satellites, two Category 5 typhoons barreled through the area within 26 days.
“I realized, this climate stuff is really a prerequisite to national security objectives in almost every sense, so I knew that school was going to be the thing that would help prepare me to make a difference,” he says. In 2023, after leaving the Air Force to work for climate-focused nonprofits and take engineering courses, Cole participated in an intense, weeklong STEM boot camp at MIT. “It definitely reaffirmed that I wanted to continue down the path of at least getting a bachelor’s, and it also inspired me to apply to MIT,” he says. He transferred in 2024 and is majoring in climate system science and engineering.
“It’s a lot like the MIT experience”
MIT runs the boot camp every summer as part of the nonprofit Warrior-Scholar Project (WSP), which started at Yale University in 2012. WSP offers a range of programming designed to help enlisted veterans and service members transition from the military to higher education. The academic boot camp program, which aims to simulate a week of undergraduate life, is offered at 19 schools nationwide in three areas: business, college readiness, and STEM.
MIT joined WSP in 2017 as one of the first three campuses to offer the STEM boot camp. “It was definitely rigorous,” Cole recalls, “not getting tons of sleep, grinding psets at night with friends … it’s a lot like the MIT experience.” In addition to problem sets, every day at MIT-WSP is packed with faculty lectures on math and physics, recitations, working on research projects, and tours of MIT campus labs. Scholars also attend daily college success workshops on topics such as note taking, time management, and applying to college. The schedule is meticulously mapped out — including travel times — from 0845 to 2200, Sunday through Friday.
Michael McDonald, an associate professor of physics at the Kavli Institute for Astrophysics and Space Research, and Navy veteran Nelson Olivier MBA ’17 have run the MIT-WSP program since its inception. At the time, WSP wanted to expand its STEM boot camps to other universities, so a Yale astrophysicist colleague recruited McDonald. Meanwhile, Olivier’s former Navy SEAL Team THREE teammate — who happened to be the WSP CEO — convinced Olivier to help launch the program while he was at the MIT Sloan School of Management, along with classmate Bill Kindred MBA ’17.
Now in its 10th year, MIT-WSP has hosted over 120 scholars, 93 percent of whom have gone on to attend schools like Stanford University, Georgetown University, University of Notre Dame, Harvard University, and the University of California at Berkeley. MIT-WSP alumni who have graduated now work at employers such as Meta, PricewaterhouseCoopers, Boeing, and BAE Systems.
Translating helicopter repairs to Newton’s laws
McDonald has a lot of fun teaching WSP scholars every summer. “When I pose a question to my first-year physics class in September, no one wants to meet my eyes or raise their hand for fear of embarrassing themselves,” he says. “But I ask a question to this group of, say, 12 vets, and 12 hands shoot up, they are all answering over each other, and then asking questions to follow up on the question. They are just curious and hungry, and they couldn’t care less about how they come off. … As a professor, it’s like your dream class.”
Every year, McDonald witnesses a predictable transformation among the scholars. They start off eager enough; however, “by Tuesday, they are miserable, they’re pretty beaten down. But by the end of the week, they’re like, ‘I could do another week,’” he says.
Their confidence grows as they recognize that, while they may not have taken college courses, their military experience is invaluable. “It’s just a matter of convincing these guys that what they are already doing is what we are looking for. We have guys that say, ‘I don’t know if I can succeed in an engineering program,’ but then in the field, they are repairing helicopters. And I’m like, ‘Oh no, you can do this stuff!’ They just need to understand the background of why that helicopter that they are building works.”
Olivier agrees. “The enlisted veteran has a leg up because they’ve already done this before. They are just translating it from either fixing a radio or messing around with the components of a bomb to understanding Newton’s laws. That’s a thing of beauty, when you see that.”
Fostering a virtuous cycle
While just seeing themselves succeed at MIT-WSP helps instill confidence among scholars, meeting veterans who have made the leap into academia has a multiplier effect. To that end, the WSP organization provides each academic boot camp with alumni, called fellows, to teach college success workshops, provide support, and share their experiences in higher education.
“When I was at boot camp, we had two WSP fellows who were at Columbia, one at Princeton, and one who just got accepted to Harvard,” Cole recalls. “Just seeing people existing at these institutions made me realize, this is a thing that is doable.” The following summer, he became a fellow as well.
Former Marine Corps communications operator Aaron Kahler, who attended MIT-WSP in 2024, particularly recalls meeting a veteran PhD student while the group toured the neuroscience facility. “It was really cool seeing instances of successful vets doing their thing at MIT,” he says. “There were a lot more than we thought.”
Over the years, McDonald has made an effort to recruit more MIT veterans to staff the program. One of them is Andrea Henshall, a retired major in the Air Force and a PhD student in the Department of Aeronautics and Astronautics. After joining the Ask Me Anything panel a few years ago, she’s become increasingly involved, presenting lectures, offering tours of the motion capture lab where she conducts experiments, and informally mentoring scholars.
“It’s so inspiring to hear so many students at the end of the week say, ‘I never considered a place like MIT until the boot camp, or until somebody told me, hey, you can be here, too.’ Or they see examples of enlisted veterans, like Justin, who’ve transitioned to a place like MIT and shown that it’s possible,” says Henshall.
At the conclusion of MIT-WSP, scholars receive a tangible reminder of what’s possible: a challenge coin designed by Olivier and McDonald. “In the military, the challenge coin usually has the emblem of the unit and symbolizes the ethos of the unit,” Olivier explains. On one side of the MIT-WSP coin are Newton’s laws of motion, superimposed over the WSP logo. MIT's “mens et manus” (“mind and hand”) motto appears on the other side, beneath an image of the Great Dome inscribed with the scholar’s name.
“As you go into Killian Court you see all the names of Pasteur, Newton, et cetera, but Building 10 doesn’t have a name on it,” he says. “So we say, ‘earn your space there on these buildings. Do something significant that will impact the human experience.’ And that’s what we think each one of these guys and gals can do.”
Kahler keeps the coin displayed on his desk at MIT, where he’s now a first-year student, for inspiration. “I don’t think I would be here if it weren’t for the Warrior-Scholar Project,” he says.
How collective memory of the Rwandan genocide was preserved
The 1994 genocide in Rwanda took place over a little more than three months, during which militias representing the Hutu ethnic group conducted a mass murder of members of the Tutsi ethnic group along with some politically moderate members of the Hutu and Twa groups. Soon after, local citizens and aid workers began to document the atrocities that had occurred in the country.
They were establishing evidence of a genocide that many outsiders were slow to acknowledge; other countries and the U.N. did not recognize it until 1998. By preserving scenes of massacre and victims’ remains, this effort allowed foreigners, journalists, and neighbors to witness what had happened. Though the citizens’ work was emotionally and physically challenging, they used these sites of memory to seek justice for victims who had been killed and harmed.
In so doing, these efforts turned memory into officially recognized history. Now, in a new book, MIT scholar Delia Wendel carefully explores this work, shedding new light on the people who created the state’s genocide memorials, and the decisions they made in the process — such as making the remains of the dead available for public viewing. She also examines how the state gained control of the effort and has chosen to represent the past through these memorials.
“I’m seeking to recuperate this forgotten history of the ethics of the work, while also contending with the motivations of state sovereignty that has sustained it,” says Wendel, who is the Class of 1922 Career Development Associate Professor of Urban Studies and International Development in MIT’s Department of Urban Studies and Planning (DUSP).
That book, “Rwanda’s Genocide Heritage: Between Justice and Sovereignty,” is published by Duke University Press and is freely available through the MIT Libraries. In it, Wendel uncovers new details about the first efforts to preserve the memory of the genocide, analyzes the social and political dynamics, and examines their impact on people and public spaces.
“The shift from memory to history is important because it also requires recognition that is official or more public in nature,” Wendel says. “Survivors, their kin, their relatives, they know their histories. What they’re wishing to happen is a form of repair, or justice, or empowerment, that comes with disclosing those histories. That truth-telling aspect is really important.”
Conversations and memory
Wendel’s book was well over a decade in the making — and emerged from a related set of scholarly inquiries about peace-building activities in the wake of genocide. For this project, about memorializing genocide, Wendel visited over 30 villages in Rwanda over a span of many years, gradually making connections and building dialogues with citizens, in addition to conducting more conventional social science research.
“Speaking with rural residents started to unlock a lot of different types of conversations,” Wendel says of those visits. “A good deal of those conversations had to do with memory, and with relationships to place, neighbors, and authority.” She adds: “These are topics that people are very hesitant to speak about, and rightly so. This has been a book that took a long time to research and build some semblance of trust.”
During her research, Wendel also talked at length with some key figures involved in the process, including Louis Kanamugire, a Rwandan who became the first head of the country’s post-war Genocide Memorial Commission. Kanamugire, who lost his parents in the genocide, felt it was necessary to preserve and display the remains of genocide victims, including at four key sites that later became official state memorials.
This process involved, as Wendel puts it, the “gruesome” work of cleaning and preserving bodies, bones, and other material remains, providing both material evidence of genocide and the grounds for beginning the work of societal repair and individual healing.
Wendel also uncovers, in detail for the first time, the work done by Mario Ibarra, a Chilean aid worker for the U.N. who also investigated atrocities, photographed evidence extensively, conducted preservation work, and contributed to the country’s Genocide Memorial Commission. The relationship between global human rights practice and genocide survivors seeking justice, in terms of preserving and documenting evidence, is at the core of the book and, Wendel believes, a previously underappreciated aspect of this topic.
“The story of Rwanda memorialization that has typically been told is one of state control,” Wendel says. “But in the beginning, the government followed independent initiatives by this human rights worker and local residents who really spurred this on.”
In the book, Wendel also examines how Rwanda’s memorialization practices relate to those of other countries, often in the so-called Global South. She terms this phenomenon “trauma heritage,” and it has followed similar trajectories across countries in Africa and South America, for instance.
“Trauma heritage is the act of making visible the violence that had been actively hidden, and intervening in the dynamics of power,” she says. “Making such public spaces for silenced pain is a way of seeking recognition of those harms, and [seeking] forms of justice and repair.”
The tensions of memorialization
To be clear, Rwanda has been able to construct genocide memorials in the first place because, in the mid-1990s, Tutsi troops regained power in the country by defeating their Hutu adversaries. Subsequently, in a state without unlimited free expression, the government has considerable control over the content and forms of memorialization that take place.
Meanwhile, there have always been differing views about, say, displaying victims’ remains, and to what degree such a practice underlines their humanity or emphasizes the dehumanizing treatment they suffered. Then too, atrocities can produce a wide range of psychological responses among the living, including survivors’ guilt and the sheer difficulty many experience in expressing what they have witnessed. The process of memorialization, in such circumstances, will likely be fraught.
“The book is about the tensions and paradoxes between the ethics of this work and its politics, which have a lot to do with state sovereignty and control,” Wendel says. “It’s rooted in the tension between what’s invisible and what’s visible, between this bid to be seen and to recognize the humanity of the victims and yet represent this dehumanizing violence. These are irresolvable dilemmas that were felt by the people doing this work.”
Or, as Wendel writes in the book, Rwandans and others immersed in similar struggles for justice around the world have had to grapple with the “messy politics of repair, searching for seemingly impossible redress for injustice.”
Other experts have praised Wendel’s book, such as Pumla Gobodo-Madikizela, a professor at Stellenbosch University in South Africa, who studies the psychological effects of mass violence. Gobodo-Madikizela has cited Wendel’s “extraordinary narratives” about the book’s principal figures, observing that they “not only preserve the remains but also reclaim the victims’ humanity. … Wendel shows how their labor becomes a defiant insistence on visibility that transforms the act of cleaning into a form of truth-telling, making injustice materially and spatially undeniable.”
For her part, Wendel hopes the book will engage readers interested in multiple related issues, including Rwandan and African history, the practices and politics of public memory, human rights and peace-building, and the design of public memorials and related spaces, including those built in the aftermath of traumatic historical episodes.
“Rwanda’s genocide heritage remains an important endeavor in memory justice, even if its politics need to be contended with at the same time,” Wendel says.
Helping companies with physical operations around the world run more intelligently
Running large companies in construction, logistics, energy, and manufacturing requires careful coordination between millions of people, devices, and systems. For more than a decade, Samsara has helped those companies connect their assets to get work done more intelligently.
Founded by John Bicket SM ’05 and Sanjit Biswas SM ’05, Samsara’s platform gives companies with physical operations a central hub to track and learn from workers, equipment, and other infrastructure. Layered on top of that platform are real-time analytics and notifications designed to prevent accidents, reduce risks, save fuel, and more.
Tens of thousands of customers have used Samsara’s platform to improve their operations since its founding in 2015. Home Depot, for instance, used Samsara’s artificial intelligence-equipped dashcams to reduce their total auto liability claims by 65 percent in one year. Maxim Crane Works saved more than $13 million in maintenance costs using Samsara’s equipment and vehicle diagnostic data in 2024. Mohawk Industries, the world’s largest flooring manufacturer, improved their route efficiency and saved $7.75 million annually.
“It’s all about real-world impact,” says Biswas, Samsara’s CEO. “These organizations have complex operations and are functioning at a massive scale. Workers are driving millions of miles and consuming tons of fuel. If you can understand what’s happening and run analysis in the cloud, you can find big efficiency improvements. In terms of safety, these workers are putting their lives at risk every day to keep this infrastructure running. You can literally save lives if you can reduce risk.”
Finding big problems
Biswas and Bicket started PhD programs at MIT in 2002, both conducting research around networking in the Computer Science and Artificial Intelligence Laboratory (CSAIL). They eventually applied their studies to build a wireless network called MIT RoofNet.
Upon graduating with master’s degrees, Biswas and Bicket decided to commercialize the technologies they worked on, founding the company Meraki in 2006.
“How do you get big Wi-Fi networks out in the world?” Biswas asks. “With MIT RoofNet, we covered Cambridge in Wi-Fi. We wanted to enable other people to build big Wi-Fi networks and make Wi-Fi go mainstream for larger campuses and offices.”
Over the next six years, Meraki’s technology was used to create millions of Wi-Fi networks around the world. In 2012, Meraki was acquired by Cisco. Biswas and Bicket left Cisco in 2015, unsure of what they’d work on next.
“The way we found ourselves to Samsara was through the same curiosity we had as graduate students,” Biswas says. “This time it dealt more with the planet’s infrastructure. We were thinking about how utilities work, and how construction happens at the scale of cities and states. It drew us into operations, which is the infrastructure backbone of the planet.”
As the founders learned about industries like logistics, utilities, and construction, they realized they could use their technical background to improve safety and efficiency.
“All these industries have a lot in common,” Biswas says. “They have a lot of field workers — often thousands of them — they have a lot of assets like trucks and equipment, and they’re trying to orchestrate it all. The throughline was the importance of data.”
When they founded Samsara 10 years ago, many people were still collecting field data with pen and paper.
“Because of our technical background, we knew that if you could collect the data and run sophisticated algorithms like AI over it, you could get a ton of insights and improve the way those operations run,” Biswas says.
Biswas says extracting insights from data is the easy part; making field-ready products and getting them into the hands of frontline workers took longer.
Samsara started by tapping into existing sensors in buildings, cars, and other assets. The company also built its own, including AI-equipped cameras and GPS trackers that can monitor driving behavior. That formed the foundation of Samsara’s Connected Operations Platform. On top of that, Samsara Intelligence processes data in the cloud and provides insights such as the best routes for commercial vehicles, ways to be more proactive with maintenance, and opportunities to reduce fuel consumption.
Samsara’s platform can be used to detect if a commercial vehicle or snowplow driver is on their phone and send an audio message nudging them to stay safe and focused. The platform can also deliver training and coaching.
“That’s the kind of thing that reduces risk, because workers are way less likely to be distracted,” Biswas says. “If you do that for millions of workers, you reduce risk at scale.”
The platform also allows managers to query their data in a ChatGPT-style interface, asking questions such as: Who are my safest drivers? Which vehicles need maintenance? And what are my least fuel-efficient trucks?
“Our platform helps recognize frontline workers who are safe and efficient in their job,” Biswas says. “These people are largely unsung heroes. They keep our planet running, but they don’t hear ‘thank you’ very often. Samsara helps companies recognize the safest workers on the field and give them recognition and rewards. So, it’s about modernizing equipment but also improving the experience of millions of people that help run this vital infrastructure.”
Continuing to grow
Today Samsara processes 20 trillion data points a year and monitors 90 million miles of driving. The company employs about 4,000 people across North America and Europe.
“It still feels early for us,” Biswas says. “We’ve been around for 10 years and gotten some scale, but we needed to build this platform to be able to build more products and have more impact. If you step back, operations is 40 percent of the world’s GDP, so we see a lot of opportunities to do more with this data. For instance, weather is part of Samsara Intelligence, and weather is 20 to 25 percent of the risk, and so we’re training AI models to reduce risk from the weather. And on the sustainability side, the more data we have, the more we can help optimize for things like fuel consumption or transitioning to electric vehicles. Maintenance is another fascinating data problem.”
The founders have also maintained a connection with MIT — and not just because the City of Boston’s Department of Public Works and the MBTA are customers. Last year, the Biswas Family Foundation announced funding for a four-year postdoctoral fellowship program at MIT for early-stage researchers working to improve health care.
Biswas says Samsara’s journey has been incredibly rewarding and notes the company is well-positioned to leverage advances in AI to further its impact going forward.
“It’s been a lot of fun and also a lot of hard work,” Biswas says. “What’s exciting is that each decade of the company feels different. It’s almost like a new chapter — or a whole new book. Right now, there’s so many incredible things happening with data and AI. It feels as exciting as it did in the early days of the company. It feels very much like a startup.”
How an online MIT course in supply chain management sparked a new career
As a college student, Kevin Power never considered working in supply chain management; in fact, he didn’t know it was an option. He earned an undergraduate degree in manufacturing engineering while working full time at an oil refinery, which demanded a rigorous routine of shift work, long days, and evening classes.
After graduation, he found himself searching for new learning opportunities, and stumbled upon the online courses of the MITx MicroMasters Program in Supply Chain Management, an online program of the MIT Center for Transportation and Logistics. Starting with Supply Chain Analytics (SC0x), Power was drawn in immediately by how directly applicable the lessons were to real work.
“So many courses that you do are more theoretical,” he reflects. “Everything I learned, I could apply it directly to my work and see the value in doing it. So as soon as I finished Supply Chain Analytics, I decided, OK, I’ll finish the whole program.” What he didn’t yet know was that he belonged to the very audience the MicroMasters was designed for: lifelong learners, often working professionals who want deep, flexible training while continuing their careers.
After completing the five-course MicroMasters track and earning his credential, Power uncovered another opportunity: the MIT SCM Blended Master’s Program, which pairs the online credential with a one-semester, on-campus program, resulting in a master of applied science degree in supply chain management.
For Power, the blend of online and in-person learning proved pivotal. He describes his MicroMasters experience as fertile ground for deep, self-paced study. “I’m a very introverted kind of learner, so I prefer to just learn out of a textbook and online,” he says. But, once in the MIT SCM program, he tapped into the soft skills he needs to stand out in the industry. “When I came to campus, it was more about networking and being able to communicate with executives, on top of our academic work,” he says. The immersive environment of combining scholarly rigor with real-world experience among peers across the supply chain industry is at the heart of what the blended program aims to facilitate.
During his time on campus, Power’s research included simulation modeling in port shipping and generative-AI–driven projects focused on supply chain resilience. “I had never done simulation modeling before, and right now it’s huge in the industry,” he says. “If I were trying to apply for a simulation modeling job, I’m sure it would help me greatly having done this.”
His project, completed with fellow MIT SCM student Yassine Lahlou-Kamal, was one of the winners at the 2025 Annual MIT Global SCALE Network Supply Chain Student Research Expo, in which students showcased their industry-sponsored thesis and capstone projects. This experience pays off in his current work with Elenna Dugundji in her Deep Knowledge Lab for Supply Chain and Logistics.
Beyond academics and research, Power threw himself into the fast-paced world of hackathons, despite having never participated in one before. “I’m very competitive,” Power confesses, “and I feel like I learn something new every time.” His first effort, an internal MIT competition called Hack-Nation’s Global AI Hackathon, earned him a win with an AI sports-betting agent project that fuses model-driven analysis with web scraping. Soon after, he tackled the OpenAI Red Teaming Challenge on Kaggle. Despite joining the competition halfway through the 15-day window, he raced through the final week and was selected as one of the winners. “It gave me a lot of confidence … that the things I’m working on right now are cutting-edge, even in the eyes of OpenAI.”
In terms of his return on investment in the degree, Power says, “I’m getting so much value out of being here. Even from just doing the Kaggle competition, I won more than the cost of my full MIT degree.” Long-term, Power has been impressed that “as far as I know, everybody that was looking for a job in the supply chain program has one.” The data back him up, as every student from the MIT SCM residential program Class of 2025 secured a job within six months of graduation.
Now a master’s student in the MIT Technology and Policy Program, Power is looking ahead. “I want to do a startup,” he says. “A lot of the ideas came from research I’ve done here.”
Reflecting on the transformation he’s experienced in just 10 months of the program, he calls it “crazy.” “The SCM program really is amazing … I’d recommend it to anyone.”
Fostering MIT’s Japan connection
Born and raised in Japan as part of a military family, Christine Pilcavage knows first-hand about the value of an immersive approach to exploration.
“Any experience in a different context improves an individual,” says Pilcavage, who has also lived in Cambodia, the Philippines, and Kenya.
It’s that ethos that Pilcavage brings to her role as managing director of MISTI Japan, which connects MIT students and faculty to Institute collaborators in Japan. In her role, Pilcavage sends students to Japan for internship and research opportunities. She also shares Japanese culture on campus with activities like Ikebana classes during Independent Activities Period and a Japanese Film Festival.
MIT’s connection to Japan dates back to before 1874, when its first Japanese student graduated. In 1911, the MIT Association of Japan was founded as the first trans-Pacific MIT alumni club; it later evolved into the MIT Club of Japan.
MISTI Japan predates the creation of the MIT International Science and Technology Initiatives (MISTI) itself. The MIT-Japan Program was established in 1981 to prepare MIT students to be better scientists and engineers who understand and can work effectively with Japan. The program sought to foster deeper U.S.-Japan collaboration in science and technology amid Japan’s growing economic and technological power. MIT-Japan began sending students to Japan in 1983.
Students in the MIT-Japan Program complete a three-to-12-month internship at their host institution, and the immersive experiences are invaluable. “Japan is so different from the Western world,” Pilcavage notes. “For example, in Japanese, verbs end sentences, so it’s important to develop patience and listen carefully when communicating.”
Pilcavage believes there is tremendous value in creating and supporting a program like MISTI at MIT. Traveling to areas outside the Institute and the United States can expose students to diverse cultures, aid the exploration of challenges, help them discover solutions, improve language learning, and foster communication.
“We want our students to think and create,” she says. “They need to see beyond the MIT bubble and think carefully about how to solve difficult problems and help others.”
Japan, Pilcavage continues, is monocultural in ways the United States isn’t. While English is spoken in larger cities, English speakers are harder to find in rural areas. “MIT students teach STEM topics to rural Japanese kids in Japanese,” Pilcavage says, citing a program that has been teaching STEAM workshops in the tsunami-affected region of northern Japan since 2017. “Learning to code switch means they improve their language skills while also learning important cultural nuances, like body language.”
Pilcavage emphasizes the importance of “learning differently” for MIT students and the Japanese people with whom they interact. “I wanted our students to engage with the local population,” she says, encouraging them to develop what she calls “cultural resilience.”
Journey to MIT
Pilcavage — whose educational background includes master’s degrees in international affairs and public health, and undergraduate study in economics and psychology — has also worked with the United States Agency for International Development (USAID), the Japanese government, the Japan International Cooperation Agency (JICA), and the World Health Organization on global health and educational issues in Africa and Asia.
Pilcavage first came to Cambridge, Massachusetts, looking for hands-on experience in public health and community outcomes in a role with Management Sciences for Health, co-founded by MIT Sloan School of Management alumnus Ron O’Connor SM ’71. There, she investigated reproductive and women’s health and supported a Japanese nonprofit affiliated with the organization.
She has since developed strong ties to Cambridge and MIT. “I was married in the MIT Chapel to an MIT alum, and our reception was held in Walker Memorial,” she says. “I was a migratory bird who landed on a tree, and my husband is the tree that has deep local roots here.”
In keeping with her ethos of overcoming roadblocks to success, Pilcavage encourages students to challenge themselves. “I’ve tried to model that behavior throughout my career,” she says.
Following her arrival at MIT in 2013, Pilcavage worked with the Comprehensive Initiative on Technology Evaluation (CITE), an MIT Department of Urban Studies and Planning project established in 2012 to develop new methods for product evaluation in global development. Pilcavage administered the $10 million research program, formerly funded by USAID, which sought to learn which low-cost interventions worked best by evaluating products designed for people living in lower-income communities.
“It’s important to learn how to manage real-world challenges and deal with them effectively,” she argues. “Creating a collaborative environment in which people can discover solutions is how things get done.”
A career of service
Pilcavage has been recognized for her outstanding contributions to encouraging positive relations between America and Japan. She received the Foreign Minister's Commendation from the Japanese Ministry of Foreign Affairs and the John E. Thayer III Award from the Japan Society of Boston.
“I’m honored to join a community of people who have dedicated their lives to strengthening ties between the U.S. and Japan,” Pilcavage says when asked about the awards. “It’s exciting and humbling to be recognized for doing something I love.”
“Chris is a determined, empathetic leader who inspires our students and is committed to advancing both MIT’s mission and U.S.-Japan relations,” says Richard Samuels, the Ford International Professor of Political Science at MIT, and founder and faculty director of MISTI Japan. “I can think of no one more deserving of these awards.”
Pilcavage is excited about new MISTI Japan initiatives that are in development or already underway. “We’re launching our first global classroom with [MIT historian] Hiromu Nagahara and [lecturer in Japanese] Takako Aikawa,” she notes. “Students will visit cities like Kyoto and Hiroshima, and explore Japanese history and culture up close.”
Additionally, Pilcavage is developing social impact workshops and continually asking how to improve MISTI Japan’s work and impact. She is always looking for new projects and new ways to engage and encourage students. “How can I make the program better?” she asks when considering the program’s value to MIT and its students.
“I tell people I have the best job in the world,” she says. “I get to share my culture with the MIT community and work with the best colleagues who are nurturing and supportive. I believe I’ve found my home here.”
Efficient cooling method could enable chip-based trapped-ion quantum computers
Quantum computers could rapidly solve complex problems that would take the most powerful classical supercomputers decades to unravel. But they’ll need to be large and stable enough to efficiently perform operations. To meet this challenge, researchers at MIT and elsewhere are developing trapped-ion quantum computers based on ultra-compact photonic chips. These chip-based systems offer a scalable alternative to existing trapped-ion quantum computers, which rely on bulky optical equipment.
The ions in these quantum computers must be cooled to extremely low temperatures to minimize vibrations and prevent errors. So far, trapped-ion systems based on photonic chips have been limited to slow, inefficient cooling methods.
Now, a team of researchers at MIT and MIT Lincoln Laboratory has implemented a much faster and more energy-efficient method for cooling trapped ions using photonic chips. Their approach achieved cooling to about 10 times below the limit of standard laser cooling.
Key to this technique is a photonic chip that incorporates precisely designed antennas to manipulate beams of tightly focused, intersecting light.
The researchers’ initial demonstration takes a key step toward scalable chip-based architectures that could someday enable quantum computing systems with greater efficiency and stability.
“We were able to design polarization-diverse integrated-photonics devices, utilize them to develop a variety of novel integrated-photonics-based systems, and apply them to show very efficient ion cooling. However, this is just the beginning of what we can do using these devices. By introducing polarization diversity to integrated-photonics-based trapped-ion systems, this work opens the door to a variety of advanced operations for trapped ions that weren’t previously attainable, even beyond efficient ion cooling — all research directions we are excited to explore in the future,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this architecture.
She is joined on the paper by lead authors Sabrina Corsetti, an EECS graduate student; Ethan Clements, a former postdoc who is now a staff scientist at MIT Lincoln Laboratory; Felix Knollmann, a graduate student in the Department of Physics; John Chiaverini, senior member of the technical staff at Lincoln Laboratory and a principal investigator in MIT’s Center for Quantum Engineering; as well as others at Lincoln Laboratory and MIT. The research appears today in two joint publications in Light: Science and Applications and Physical Review Letters.
Seeking scalability
While there are many types of quantum systems, this research is focused on trapped-ion quantum computing. In this application, a charged particle called an ion is formed by peeling an electron from an atom, and then trapped using radio-frequency signals and manipulated using optical signals.
Researchers use lasers to encode information in the trapped ion by changing its state. In this way, the ion can be used as a quantum bit, or qubit. Qubits are the building blocks of a quantum computer.
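As a point of reference (standard quantum-information notation, not something specific to this work), a qubit’s state is a superposition of two basis states, which for a trapped ion typically correspond to two of the ion’s internal energy levels:

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.

Laser pulses adjust the amplitudes \alpha and \beta, which is how information is written into and manipulated in the ion.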
To prevent collisions between ions and gas molecules in the air, the ions are held in vacuum, often created with a device known as a cryostat. Traditionally, bulky lasers sit outside the cryostat and shoot different light beams through the cryostat’s windows toward the chip. These systems require a room full of optical components to address just a few dozen ions, making it difficult to scale to the large numbers of ions needed for advanced quantum computing. Slight vibrations outside the cryostat can also disrupt the light beams, ultimately reducing the accuracy of the quantum computer.
To get around these challenges, MIT researchers have been developing integrated-photonics-based systems. In this case, the light is emitted from the same chip that traps the ion. This improves scalability by eliminating the need for external optical components.
“Now, we can envision having thousands of sites on a single chip that all interface up to many ions, all working together in a scalable way,” Knollmann says.
But integrated-photonics-based demonstrations to date have achieved limited cooling efficiencies.
Keeping their cool
To enable fast and accurate quantum operations, researchers use optical fields to reduce the kinetic energy of the trapped ion. This causes the ion to cool to nearly absolute zero, an effective temperature even colder than cryostats can achieve.
But common methods have a higher cooling floor, so the ion still has a lot of vibrational energy after the cooling process completes. This would make it hard to use the qubits for high-quality computations.
The MIT researchers utilized a more complex approach, known as polarization-gradient cooling, which involves the precise interaction of two beams of light.
Each light beam has a different polarization, meaning the light field in each beam oscillates along a different direction (up and down, side to side, and so on). Where the beams intersect, they form a rotating vortex of light that damps the ion’s motion even more efficiently.
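For readers who want the textbook picture: in the simplest polarization-gradient configuration (a sketch of the standard case, not the exact beam geometry emitted from this chip), two counter-propagating beams with opposite circular polarizations combine into a field

E(z, t) = E_0\left[\hat{e}_+\, e^{ikz} + \hat{e}_-\, e^{-ikz}\right] e^{-i\omega t} + \text{c.c.},

whose total intensity is uniform but whose linear polarization axis rotates with position, tracing out a helix with a pitch of one optical wavelength. As the ion moves through this corkscrew pattern, the light-atom interaction drains its kinetic energy below the ordinary laser-cooling floor.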
Although this approach had been shown previously using bulk optics, it hadn’t been shown before using integrated photonics.
To enable this more complex interaction, the researchers designed a chip with two nanoscale antennas, which emit beams of light out of the chip to manipulate the ion above it.
Waveguides route light to the antennas and are designed to stabilize the optical routing, which improves the stability of the vortex pattern the beams generate.
“When we emit light from integrated antennas, it behaves differently than with bulk optics. The beams, and generated light patterns, become extremely stable. Having these stable patterns allows us to explore ion behaviors with significantly more control,” Clements says.
The researchers also designed the antennas to maximize the amount of light that reaches the ion. Each antenna has tiny curved notches that scatter light upward, spaced just right to direct light toward the ion.
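A rough way to see how the notch spacing sets the emission direction (a generic grating-coupler relation offered for illustration, not a formula taken from the paper): light guided with effective index n_{\text{eff}} in a grating of period \Lambda is scattered out at an angle \theta satisfying

n_c \sin\theta = n_{\text{eff}} - \frac{\lambda}{\Lambda},

where \lambda is the free-space wavelength and n_c is the index of the medium above the chip. Choosing \Lambda so the diffracted beam points at the ion is, in essence, what “spaced just right” means, and the curvature of the notches can additionally focus the emitted beam.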
“We built upon many years of development at Lincoln Laboratory to design these gratings to emit diverse polarizations of light,” Corsetti says.
They experimented with several architectures, characterizing each to better understand how it emitted light.
With their final design in place, the researchers demonstrated ion cooling nearly 10 times below the limit of standard laser cooling, known as the Doppler limit. Their chip reached this temperature in about 100 microseconds, several times faster than other techniques.
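For context, the Doppler limit is a standard result of laser-cooling theory: for a cooling transition with natural linewidth \Gamma, the lowest temperature ordinary laser cooling can reach is

T_D = \frac{\hbar\Gamma}{2 k_B}.

For an illustrative linewidth of \Gamma/2\pi \approx 20 MHz, typical of the strong transitions used for ion cooling but not a number drawn from this paper, T_D works out to roughly half a millikelvin, so cooling about 10 times below that limit corresponds to a few tens of microkelvin in this illustrative case.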
“The demonstration of enhanced performance using optics integrated in the ion-trap chip lays the foundation for further integration that can allow new approaches for quantum-state manipulation, and that could improve the prospects for practical quantum-information processing,” adds Chiaverini. “Key to achieving this advance was the cross-Institute collaboration between the MIT campus and Lincoln groups, a model that we can build on as we take these next steps.”
In the future, the team plans to conduct characterization experiments on different chip architectures and demonstrate polarization-gradient cooling with multiple ions. In addition, they hope to explore other applications that could benefit from the stable light beams they can generate with this architecture.
Other authors who contributed to this research are Ashton Hattori (MIT), Zhaoyi Li (MIT), Milica Notaros (MIT), Reuel Swint (Lincoln Laboratory), Tal Sneh (MIT), Patrick Callahan (Lincoln Laboratory), May Kim (Lincoln Laboratory), Aaron Leu (MIT), Gavin West (MIT), Dave Kharas (Lincoln Laboratory), Thomas Mahony (Lincoln Laboratory), Colin Bruzewicz (Lincoln Laboratory), Cheryl Sorace-Agaskar (Lincoln Laboratory), Robert McConnell (Lincoln Laboratory), and Isaac Chuang (MIT).
This work is funded, in part, by the U.S. Department of Energy, the U.S. National Science Foundation, the MIT Center for Quantum Engineering, the U.S. Department of Defense, an MIT Rolf G. Locher Endowed Fellowship, and an MIT Frederick and Barbara Cronin Fellowship.
