MIT Latest News

Study in India shows kids use different math skills at work vs. school
In India, many kids who work in retail markets have good math skills: They can quickly perform a range of calculations to complete transactions. But as a new study shows, these same kids often perform much worse on similar problems when they are presented the way they are taught in the classroom. This happens even though many of these students still attend school, or attended school through 7th or 8th grade.
Conversely, the study also finds, Indian students who are still enrolled in school and don’t have jobs do better on school-type math problems, but they often fare poorly at the kinds of problems that occur in marketplaces.
Overall, both the “market kids” and the “school kids” struggle with the approach the other group is proficient in, raising questions about how to help both groups learn math more comprehensively.
“For the school kids, they do worse when you go from an abstract problem to a concrete problem,” says MIT economist Esther Duflo, co-author of a new paper detailing the study’s results. “For the market kids, it’s the opposite.”
Indeed, the kids with jobs who are also in school “underperform despite being extraordinarily good at mental math,” says Abhijit Banerjee, an MIT economist and another co-author of the paper. “That for me was always the revelation, that the one doesn’t translate into the other.”
The paper, “Children’s arithmetic skills do not transfer between applied and academic math,” is published today in Nature. The authors are Banerjee, the Ford Professor of Economics at MIT; Swati Bhattacharjee of the newspaper Ananda Bazar Patrika, in Kolkata, India; Raghabendra Chattopadhyay of the Indian Institute of Management in Kolkata; Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT; Alejandro J. Ganimian, a professor of applied psychology and economics at New York University; Kailash Rajaha, a doctoral candidate in economics at MIT; and Elizabeth S. Spelke, a professor of psychology at Harvard University.
Duflo and Banerjee shared the Nobel Prize in Economics in 2019 and are co-founders of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), a global leader in development economics.
Three experiments
The study consists largely of three data-collection exercises with some embedded experiments. The first one shows that 201 kids working in markets in Kolkata do have good math skills. For instance, a researcher, posing as an ordinary shopper, would ask for the cost of 800 grams of potatoes sold at 20 rupees per kilogram, then ask for the cost of 1.4 kilograms of onions sold at 15 rupees per kilo. They would request the combined answer — 37 rupees — then hand the market worker a 200 rupee note and collect 163 rupees back. All told, the kids working in markets correctly solved this kind of problem from 95 to 98 percent of the time by the second try.
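The transaction in that example is straightforward per-kilogram arithmetic. A minimal sketch (the helper name is hypothetical, not anything from the study):

```python
# Sketch of the market transaction described above; prices in rupees.
def item_cost(weight_kg: float, price_per_kg: float) -> float:
    """Cost of a weighed item at a per-kilogram price."""
    return weight_kg * price_per_kg

potatoes = item_cost(0.8, 20)   # 800 g at 20 rupees/kg -> 16 rupees
onions = item_cost(1.4, 15)     # 1.4 kg at 15 rupees/kg -> 21 rupees
total = potatoes + onions       # 37 rupees combined
change = 200 - total            # 163 rupees back from a 200-rupee note

print(total, change)
```

The market kids perform exactly this chain of steps mentally, in seconds, at the stall.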
However, when the working children were pulled aside (with their parents’ permission) and given a standardized Indian national math test, just 32 percent could correctly divide a three-digit number by a one-digit number, and just 54 percent could correctly subtract one two-digit number from another, two times. Clearly, the kids’ skills were not yielding classroom results.
The researchers then conducted a second study with 400 kids working in markets in Delhi, which replicated the results: Working kids had a strong ability to handle market transactions, but only about 15 percent of the ones also in school were at average proficiency in math.
In the second study, the researchers also asked the reverse question: How do students doing well in school fare at market math problems? Here, with 200 students from 17 Delhi schools who do not work in markets, they found that 96 percent of the students could solve typical problems with a pencil, paper, unlimited time, and one opportunity to self-correct. But when the students had to solve the problems in a make-believe “market” setting, that figure dropped to just 60 percent. The students had unlimited time and access to paper and pencil, so that figure may actually overestimate how they would fare in a market.
Finally, in a third study, conducted in Delhi with over 200 kids, the researchers compared the performances of both “market” and “school” kids again on numerous math problems in varying conditions. While 85 percent of the working kids got the right answer to a market transaction problem, only 10 percent of nonworking kids correctly answered a question of similar difficulty, when faced with limited time and with no aids like pencil and paper. However, given the same division and subtraction problems, but with pencil and paper, 59 percent of nonmarket kids got them right, compared to 45 percent of market kids.
To further evaluate market kids and school kids on a level playing field, the researchers then presented each group with a word problem about a boy going to the market and buying two vegetables. Roughly one-third of the market kids were able to solve this without any aid, while fewer than 1 percent of the school kids did.
Why might the performance of the nonworking students decline when given a problem in market conditions?
“They learned an algorithm but didn’t understand it,” Banerjee says.
Meanwhile, the market kids seemed to use certain tactics to handle retail transactions. For one thing, they appear to use rounding well. Take a problem like 43 times 11. To handle that intuitively, you might multiply 43 times 10, and then add 43, for the final answer of 473. This appears to be what they are doing.
“The market kids are able to exploit base 10, so they do better on base 10 problems,” Duflo says. “The school kids have no idea. It makes no difference to them. The market kids may have additional tricks of this sort that we did not see.” On the other hand, the school kids had a better grasp of formal written methods of division, subtraction, and more.
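The base-10 shortcut described above can be written out explicitly; this is a sketch of the inferred strategy, not a method documented in the study:

```python
# The rounding shortcut the market kids appear to use for multiplying by 11:
# shift to the nearest "round" base-10 multiple, then adjust.
def times_eleven(n: int) -> int:
    # n * 11 = n * 10 + n
    return n * 10 + n

print(times_eleven(43))  # matches 43 * 11 = 473
```

The same decomposition works for other near-round multipliers (for example, n × 9 = n × 10 − n), which is plausibly why the working kids excel specifically on base-10-friendly problems.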
Going farther in school
The findings raise a significant point about students’ skills and academic progress. While it is a good thing that the kids with market jobs are proficient at generating rapid answers, it would likely be better for their long-term futures if they also did well in school and wound up with a high school degree or better. Finding a way to cross the divide between informal and formal ways of tackling math problems, then, could notably help some Indian children.
The fact that such a divide exists, meanwhile, suggests some new approaches could be tried in the classroom.
Banerjee, for one, suspects that part of the issue is a classroom process that makes it seem as if there is only one true route to finding an arithmetic answer. Instead, he believes, following the work of co-author Spelke, that helping students reason their way to an approximation of the right answer can help them truly get a handle on what is needed to solve these types of problems.
Even so, Duflo adds, “We don’t want to blame the teachers. It’s not their fault. They are given a strict curriculum to follow, and strict methods to follow.”
That still leaves open the question of what to change, in concrete classroom terms. That topic, as it happens, is something the research group is weighing, as they consider new experiments that might address it directly. The current finding, however, makes clear that progress would be useful.
“These findings highlight the importance of educational curricula that bridge the gap between intuitive and formal mathematics,” the authors state in the paper.
Support for the research was provided, in part, by the Abdul Latif Jameel Poverty Action Lab’s Post-Primary Education Initiative, the Foundation Blaise Pascal, and the AXA Research Fund.
Physicists measure a key aspect of superconductivity in “magic-angle” graphene
Superconducting materials are similar to the carpool lane in a congested interstate. Like commuters who ride together, electrons that pair up can bypass the regular traffic, moving through the material with zero friction.
But just as with carpools, how easily electron pairs can flow depends on a number of conditions, including the density of pairs that are moving through the material. This “superfluid stiffness,” or the ease with which a current of electron pairs can flow, is a key measure of a material’s superconductivity.
Physicists at MIT and Harvard University have now directly measured superfluid stiffness for the first time in “magic-angle” graphene — a material made from two or more atomically thin sheets of graphene twisted with respect to each other at just the right angle to enable a host of exceptional properties, including unconventional superconductivity.
This superconductivity makes magic-angle graphene a promising building block for future quantum-computing devices, but exactly how the material superconducts is not well-understood. Knowing the material’s superfluid stiffness will help scientists identify the mechanism of superconductivity in magic-angle graphene.
The team’s measurements suggest that magic-angle graphene’s superconductivity is primarily governed by quantum geometry, which refers to the conceptual “shape” of quantum states that can exist in a given material.
The results, which are reported today in the journal Nature, represent the first time scientists have directly measured superfluid stiffness in a two-dimensional material. To do so, the team developed a new experimental method which can now be used to make similar measurements of other two-dimensional superconducting materials.
“There’s a whole family of 2D superconductors that is waiting to be probed, and we are really just scratching the surface,” says study co-lead author Joel Wang, a research scientist in MIT’s Research Laboratory of Electronics (RLE).
The study’s co-authors from MIT’s main campus and MIT Lincoln Laboratory include co-lead author and former RLE postdoc Miuko Tanaka as well as Thao Dinh, Daniel Rodan-Legrain, Sameia Zaman, Max Hays, Bharath Kannan, Aziza Almanakly, David Kim, Bethany Niedzielski, Kyle Serniak, Mollie Schwartz, Jeffrey Grover, Terry Orlando, Simon Gustavsson, Pablo Jarillo-Herrero, and William D. Oliver, along with Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Magic resonance
Since its first isolation and characterization in 2004, graphene has proven to be a wonder substance of sorts. The material is effectively a single, atom-thin sheet of graphite consisting of a precise, chicken-wire lattice of carbon atoms. This simple configuration can exhibit a host of superlative qualities in terms of graphene’s strength, durability, and ability to conduct electricity and heat.
In 2018, Jarillo-Herrero and colleagues discovered that when two graphene sheets are stacked on top of each other, at a precise “magic” angle, the twisted structure — now known as magic-angle twisted bilayer graphene, or MATBG — exhibits entirely new properties, including superconductivity, in which electrons pair up, rather than repelling each other as they do in everyday materials. These so-called Cooper pairs can form a superfluid, with the potential to superconduct, meaning they could move through a material as an effortless, friction-free current.
“But even though Cooper pairs have no resistance, you have to apply some push, in the form of an electric field, to get the current to move,” Wang explains. “Superfluid stiffness refers to how easy it is to get these particles to move, in order to drive superconductivity.”
Today, scientists can measure superfluid stiffness in superconducting materials through methods that generally involve placing a material in a microwave resonator — a device which has a characteristic resonance frequency at which an electrical signal will oscillate, at microwave frequencies, much like a vibrating violin string. If a superconducting material is placed within a microwave resonator, it can change the device’s resonance frequency, and in particular, its “kinetic inductance,” by an amount that scientists can directly relate to the material’s superfluid stiffness.
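In the simplest lumped-element picture (an assumption for illustration, not the paper’s full analysis), a resonator’s frequency is f = 1/(2π√(LC)), so a small added kinetic inductance from the sample pulls the frequency down by a fractional amount of roughly L_k/(2L). A sketch with hypothetical component values:

```python
import math

# Lumped-element sketch (illustrative assumption, not the paper's analysis):
# f = 1 / (2*pi*sqrt(L*C)); adding kinetic inductance L_k lowers f.
def resonance_freq(L: float, C: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L0 = 2e-9      # hypothetical geometric inductance of the resonator, henries
C0 = 1e-12     # hypothetical capacitance, farads
f0 = resonance_freq(L0, C0)

L_k = 0.1e-9   # hypothetical kinetic inductance contributed by the sample
f1 = resonance_freq(L0 + L_k, C0)

# For small L_k, the fractional shift is approximately -L_k / (2 * L0),
# so measuring the shift lets you back out L_k.
measured_shift = (f1 - f0) / f0
approx_shift = -L_k / (2.0 * L0)
print(f"{measured_shift:.4f} vs {approx_shift:.4f}")
```

Inverting a measured shift into a kinetic inductance in this way is what lets scientists relate the resonator data to superfluid stiffness, which scales inversely with L_k.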
However, to date, such approaches have only been compatible with large, thick material samples. The MIT team realized that to measure superfluid stiffness in atomically thin materials like MATBG would require a new approach.
“Compared to MATBG, the typical superconductor that is probed using resonators is 10 to 100 times thicker and larger in area,” Wang says. “We weren’t sure if such a tiny material would generate any measurable inductance at all.”
A captured signal
The challenge to measuring superfluid stiffness in MATBG has to do with attaching the supremely delicate material to the surface of the microwave resonator as seamlessly as possible.
“To make this work, you want to make an ideally lossless — i.e., superconducting — contact between the two materials,” Wang explains. “Otherwise, the microwave signal you send in will be degraded or even just bounce back instead of going into your target material.”
Will Oliver’s group at MIT has been developing techniques to precisely connect extremely delicate, two-dimensional materials, with the goal of building new types of quantum bits for future quantum-computing devices. For their new study, Tanaka, Wang, and their colleagues applied these techniques to seamlessly connect a tiny sample of MATBG to the end of an aluminum microwave resonator. To do so, the group first used conventional methods to assemble MATBG, then sandwiched the structure between two insulating layers of hexagonal boron nitride, to help maintain MATBG’s atomic structure and properties.
“Aluminum is a material we use regularly in our superconducting quantum computing research, for example, aluminum resonators to read out aluminum quantum bits (qubits),” Oliver explains. “So, we thought, why not make most of the resonator from aluminum, which is relatively straightforward for us, and then add a little MATBG to the end of it? It turned out to be a good idea.”
“To contact the MATBG, we etch it very sharply, like cutting through layers of a cake with a very sharp knife,” Wang says. “We expose a side of the freshly-cut MATBG, onto which we then deposit aluminum — the same material as the resonator — to make a good contact and form an aluminum lead.”
The researchers then connected the aluminum leads of the MATBG structure to the larger aluminum microwave resonator. They sent a microwave signal through the resonator and measured the resulting shift in its resonance frequency, from which they could infer the kinetic inductance of the MATBG.
When they converted the measured inductance to a value of superfluid stiffness, however, the researchers found that it was much larger than what conventional theories of superconductivity would have predicted. They had a hunch that the surplus had to do with MATBG’s quantum geometry — the way the quantum states of electrons correlate to one another.
“We saw a tenfold increase in superfluid stiffness compared to conventional expectations, with a temperature dependence consistent with what the theory of quantum geometry predicts,” Tanaka says. “This was a ‘smoking gun’ that pointed to the role of quantum geometry in governing superfluid stiffness in this two-dimensional material.”
“This work represents a great example of how one can use sophisticated quantum technology currently used in quantum circuits to investigate condensed matter systems consisting of strongly interacting particles,” adds Jarillo-Herrero.
This research was funded, in part, by the U.S. Army Research Office, the National Science Foundation, the U.S. Air Force Office of Scientific Research, and the U.S. Under Secretary of Defense for Research and Engineering.
A complementary study on magic-angle twisted trilayer graphene (MATTG), conducted by a collaboration between Philip Kim’s group at Harvard University and Jarillo-Herrero’s group at MIT, appears in the same issue of Nature.
Timeless virtues, new technologies
As the story goes, the Scottish inventor James Watt envisioned how steam engines should work on one day in 1765, when he was walking across Glasgow Green, a park in his hometown. Watt realized that putting a separate condenser in an engine would allow its main cylinder to remain hot, making the engine more efficient and compact than the huge steam engines then in existence.
And yet Watt, who had been pondering the problem for a while, needed a partnership with entrepreneur Matthew Boulton to get a practical product to market, starting in 1775 and becoming successful in later years.
“People still use this story of Watt’s ‘Eureka!’ moment, which Watt himself promoted later in his life,” says MIT Professor David Mindell, an engineer and historian of science and engineering. “But it took 20 years of hard labor, during which Watt struggled to support a family and had multiple failures, to get it out in the world. Multiple other inventions were required to achieve what we today call product-market fit.”
The full story of the steam engine, Mindell argues, is a classic case of what is today called “process innovation,” not just “product innovation.” Inventions are rarely fully-formed products, ready to change the world. Mostly, they need a constellation of improvements, and sustained persuasion, to become adopted into industrial systems.
What was true for Watt still holds, as Mindell’s body of work shows. Most technology-driven growth today comes from overlapping advances, when inventors and companies tweak and improve things over time. Now, Mindell explores those ideas in a forthcoming book, “The New Lunar Society: An Enlightenment Guide to the Next Industrial Revolution,” being published on Feb. 24 by the MIT Press. Mindell is professor of aeronautics and astronautics and the Dibner Professor of the History of Engineering and Manufacturing at MIT, where he has also co-founded the Work of the Future initiative.
“We’ve overemphasized product innovation, although we’re very good at it,” Mindell says. “But it’s become apparent that process innovation is just as important: how you improve the making, fixing, rebuilding, or upgrading of systems. These are deeply entangled. Manufacturing is part of process innovation.”
Today, with so many things being positioned as world-changing products, it may be especially important to notice that being adaptive and persistent is practically the essence of improvement.
“Young innovators don’t always realize that when their invention doesn’t work at first, they’re at the start of a process where they have to refine and engage, and find the right partners to grow,” Mindell says.
Manufacturing at home
The title of Mindell’s book refers to British Enlightenment thinkers and inventors — Watt was one of them — who used to meet in a group they called the Lunar Society, centered in Birmingham. This included pottery innovator Josiah Wedgwood; physician Erasmus Darwin; chemist Joseph Priestley; and Boulton, a metal manufacturer whose work and capital helped make Watt’s improved steam engine a reliable product. The book moves between chapters on the old Lunar Society and those on contemporary industrial systems, drawing parallels between then and now.
“The stories about the Lunar Society are models for the way people can go about their careers, engineering or otherwise, in a way they may not see in popular press about technology today,” Mindell says. “Everyone told Wedgwood he couldn’t compete with Chinese porcelain, yet he learned from the Lunar Society and built an English pottery industry that led the world.”
Applying the Lunar Society’s virtues to contemporary industry leads Mindell to a core set of ideas about technology. Research shows that design and manufacturing should be adjacent if possible, not outsourced globally, to accelerate learning and collaboration. The book also argues that technology should address human needs and that venture capital should focus more on industrial systems than it does. (Mindell has co-founded a firm, called Unless, that invests in companies by using venture financing structures better-suited to industrial transformation.)
In seeing a new industrialism taking shape, Mindell suggests that its future includes new ways of working, collaborating, and valuing knowledge throughout organizations, as well as more AI-based open-source tools for small and mid-size manufacturers. He also contends that a new industrialism should include greater emphasis on maintenance and repair work, which are valuable sources of knowledge about industrial devices and systems.
“We’ve undervalued how to keep things running, while simultaneously hollowing out the middle of the workforce,” he says. “And yet, operations and maintenance are sites of product innovation. Ask the person who fixes your car or dishwasher. They’ll tell you the strengths and weaknesses of every model.”
All told, “The sum total of this work, over time, amounts to a new industrialism if it elevates its cultural status into a movement that values the material basis of our lives and seeks to improve it, literally from the ground up,” Mindell writes in the book.
“The book doesn’t predict the future,” he says. “But rather it suggests how to talk about the future of industry with optimism and realism, as opposed to saying, this is the utopian future where machines do everything, and people just sit back in chairs with wires coming out of their heads.”
Work of the Future
“The New Lunar Society” is a concise book with expansive ideas. Mindell also devotes chapters to the convergence of the Industrial-era Enlightenment, the founding of the U.S., and the crucial role of industry in forming the republic.
“The only founding father who signed all of the critical documents in the founding of the country, Benjamin Franklin, was also the person who crystallized the modern science of electricity and deployed its first practical invention, the lightning rod,” Mindell says. “But there were multiple figures, including Thomas Jefferson and Paul Revere, who integrated the industrial Enlightenment with democracy. Industry has been core to American democracy from the beginning.”
Indeed, as Mindell emphasizes in the book, “industry,” beyond evoking smokestacks, has a human meaning: If you are hard-working, you are displaying industry. That meshes with the idea of persistently redeveloping an invention over time.
Despite the high regard Mindell holds for the Industrial Enlightenment, he recognizes that the era’s industrialization brought harsh working conditions, as well as environmental degradation. As one of the co-founders of MIT’s Work of the Future initiative, he argues that 21st-century industrialism needs to rethink some of its fundamentals.
“The ideals of [British] industrialization missed on the environment, and missed on labor,” Mindell says. “So at this point, how do we rethink industrial systems to do better?” Mindell argues that industry must power an economy that grows while decarbonizing.
After all, Mindell adds, “About 70 percent of greenhouse gas emissions are from industrial sectors, and all of the potential solutions involve making lots of new stuff. Even if it’s just connectors and wire. We’re not going to decarbonize or address global supply chain crises by deindustrializing, we’re going to get there by reindustrializing.”
“The New Lunar Society” has received praise from technologists and other scholars. Joel Mokyr, an economic historian at Northwestern University who coined the term “Industrial Enlightenment,” has stated that Mindell “realizes that innovation requires a combination of knowing and making, mind and hand. … He has written a deeply original and insightful book.” Jeff Wilke SM ’93, a former CEO of Amazon’s consumer business, has said the book “argues compellingly that a thriving industrial base, adept at both product and process innovation, underpins a strong democracy.”
Mindell hopes the audience for the book will range from younger technologists to a general audience of anyone interested in the industrial future.
“I think about young people in industrial settings and want to help them see they’re part of a great tradition and are doing important things to change the world,” Mindell says. “There is a huge audience of people who are interested in technology but find overhyped language does not match their aspirations or personal experience. I’m trying to crystallize this new industrialism as a way of imagining and talking about the future.”
Driving innovation, from Silicon Valley to Detroit
Across a career’s worth of pioneering product designs, Doug Field’s work has shaped the experience of anyone who’s ever used a MacBook Air, ridden a Segway, or driven a Tesla Model 3.
But his newest project is his most ambitious yet: reinventing the Ford automobile, one of the past century’s most iconic pieces of technology.
As Ford’s chief electric vehicle (EV), digital, and design officer, Field is tasked with leading the development of the company’s electric vehicles, while making new software platforms central to all Ford models.
To bring Ford Motor Co. into that digital and electric future, Field effectively has to lead a fast-moving startup inside the legacy carmaker. “It is incredibly hard, figuring out how to do ‘startups’ within large organizations,” he concedes.
If anyone can pull it off, it’s likely to be Field. Ever since his time in MIT’s Leaders for Global Operations (then known as “Leaders in Manufacturing”) program studying organizational behavior and strategy, Field has been fixated on creating the conditions that foster innovation.
“The natural state of an organization is to make it harder and harder to do those things: to innovate, to have small teams, to go against the grain,” he says. To overcome those forces, Field has become a master practitioner of the art of curating diverse, talented teams and helping them flourish inside of big, complex companies.
“It’s one thing to make a creative environment where you can come up with big ideas,” he says. “It’s another to create an execution-focused environment to crank things out. I became intrigued with, and have been for the rest of my career, this question of how can you have both work together?”
Three decades after his first stint as a development engineer at Ford Motor Co., Field now has a chance to marry the manufacturing muscle of Ford with the bold approach that helped him rethink Apple’s laptops and craft Tesla’s Model 3 sedan. His task is nothing less than rethinking how cars are made and operated, from the bottom up.
“If it’s only creative or execution, you’re not going to change the world,” he says. “If you want to have a huge impact, you need people to change the course you’re on, and you need people to build it.”
A passion for design
From a young age, Field had a fascination with automobiles. “I was definitely into cars and transportation more generally,” he says. “I thought of cars as the place where technology and art and human design came together — cars were where all my interests intersected.”
With a mother who was an artist and musician and an engineer father, Field credits his parents’ influence for his lifelong interest in both the aesthetic and technical elements of product design. “I think that’s why I’m drawn to autos — there’s very much an aesthetic aspect to the product,” he says.
After earning a degree in mechanical engineering from Purdue University, Field took a job at Ford in 1987. The big Detroit automakers of that era excelled at mass-producing cars, but weren’t necessarily set up to encourage or reward innovative thinking. Field chafed at the “overstructured and bureaucratic” operational culture he encountered.
The experience was frustrating at times, but also valuable and clarifying. He realized that he “wanted to work with fast-moving, technology-based businesses.”
“My interest in advancing technical problem-solving didn’t have a place in the auto industry” at the time, he says. “I knew I wanted to work with passionate people and create something that didn’t exist, in an environment where talent and innovation were prized, where irreverence was an asset and not a liability. When I read about Silicon Valley, I loved the way they talked about things.”
During that time, Field took two years off to enroll in MIT’s LGO program, where he deepened his technical skills and encountered ideas about manufacturing processes and team-driven innovation that would serve him well in the years ahead.
“Some of the core skill sets that I developed there were really, really important,” he says, “in the context of production lines and production processes.” He studied systems engineering and the use of Monte Carlo simulations to model complex manufacturing environments. During his internship with aerospace manufacturer Pratt & Whitney, he worked on automated design in computer-aided design (CAD) systems, long before those techniques became standard practice.
Another powerful tool he picked up was the science of probability and statistics, under the tutelage of MIT Professor Alvin Drake in his legendary course 6.041/6.431 (Probabilistic Systems Analysis). Field would go on to apply those insights not only to production processes, but also to characterizing variability in people’s aptitudes, working styles, and talents, in the service of building better, more innovative teams. And studying organizational strategy catalyzed his career-long interest in “ways to look at innovation as an outcome, rather than a random spark of genius.”
“So many things I was lucky to be exposed to at MIT,” Field says, were “all building blocks, pieces of the puzzle, that helped me navigate through difficult situations later on.”
Learning while leading
After leaving Ford in 1993, Field worked at Johnson & Johnson Medical for three years in process development. There, he met Segway inventor Dean Kamen, who was working on a project called the iBOT, a gyroscopic powered wheelchair that could climb stairs.
When Kamen spun off Segway to develop a new personal mobility device using the same technology, Field became his first hire. He spent nearly a decade as the firm’s chief technology officer.
At Segway, Field’s interests in vehicles, technology, innovation, process, and human-centered design all came together.
“When I think about working now on electric cars, it was a real gift,” he says. The problems they tackled prefigured the ones he would grapple with later at Tesla and Ford. “Segway was very much a precursor to a modern EV. Completely software controlled, with higher-voltage batteries, redundant systems, traction control, brushless DC motors — it was basically a miniature Tesla in the year 2000.”
At Segway, Field assembled an “amazing” team of engineers and designers who were as passionate as he was about pushing the envelope. “Segway was the first place I was able to hand-pick every single person I worked with, define the culture, and define the mission.”
As he grew into this leadership role, he became equally engrossed with cracking another puzzle: “How do you prize people who don’t fit in?”
“Such a fundamental part of the fabric of Silicon Valley is the love of embracing talent over a traditional organization’s ways of measuring people,” he says. “If you want to innovate, you need to learn how to manage neurodivergence and a very different set of personalities than the people you find in large corporations.”
Field still keeps the base housing of a Segway in his office, as a reminder of what those kinds of teams — along with obsessive attention to detail — can achieve.
Before joining Apple in 2008, he showed that component, with its clean lines and every minuscule part in its place in one unified package, to his prospective new colleagues. “They were like, ‘OK, you’re one of us,’” he recalls.
He soon became vice president of hardware development for all Mac computers, leading the teams behind the MacBook Air and MacBook Pro and eventually overseeing more than 2,000 employees. “Making things really simple and really elegant, thinking about the product as an integrated whole, that really took me into Apple.”
The challenge of giving the MacBook Air its signature sleek and light profile is an example.
“The MacBook Air was the first high-volume consumer electronic product built out of a CNC-machined enclosure,” says Field. He worked with industrial design and technology teams to devise a way to make the laptop from one solid piece of aluminum and jettison two-thirds of the parts found in the iMac. “We had material cut away so that every single screw and piece of electronics sat down into it in an integrated way. That’s how we got the product so small and slim.”
“When I interviewed with Jony Ive” — Apple’s legendary chief design officer — “he said your ability to zoom out and zoom in was the number one most important ability as a leader at Apple.” That meant zooming out to think about “the entire ethos of this product, and the way it will affect the world” and zooming all the way back in to obsess over, say, the physical shape of the laptop itself and what it feels like in a user’s hands.
“That thread of attention to detail, passion for product, design plus technology rolled directly into what I was doing at Tesla,” he says. When Field joined Tesla in 2013, he was drawn to the way the brash startup upended the approach to making cars. “Tesla was integrating digital technology into cars in a way nobody else was. They said, ‘We’re not a car company in Silicon Valley, we’re a Silicon Valley company and we happen to make cars.’”
Field assembled and led the team that produced the Model 3 sedan, Tesla’s most affordable vehicle, designed to have mass-market appeal.
That experience only reinforced the importance, and power, of zooming in and out as a designer — in a way that encompasses the bigger human resources picture.
“You have to have a broad sense of what you’re trying to accomplish and help people in the organization understand what it means to them,” he says. “You have to go across and understand operations enough to glue all of those [things] together — while still being great at and focused on something very, very deeply. That’s T-shaped leadership.”
He credits his time at LGO with providing the foundation for the “T-shaped leadership” he practices.
“An education like the one I got at MIT allowed me to keep moving that ‘T’, to focus really deep, learn a ton, teach as much as I can, and after something gets more mature, pull out and bed down into other areas where the organization needs to grow or where there’s a crisis.”
The power of marrying scale to a “startup mentality”
In 2018, Field returned to Apple as a vice president for special projects. “I left Tesla after Model 3 and Y started to ramp, as there were people better than me to run high-volume manufacturing,” he says. “I went back to Apple hoping what Tesla had learned would motivate Apple to get into a different market.”
That market was his early love: cars. Field quietly led a project to develop an electric vehicle at Apple for three years.
Then Ford CEO Jim Farley came calling. He persuaded Field to return to Ford in late 2021, partly by demonstrating how much things had changed since his first stint at the carmaker.
“Two things came through loud and clear,” Field says. “One was humility. ‘Our success is not assured.’” That attitude was strikingly different from Field’s early experience in Detroit, encountering managers who were resistant to change. “The other thing was urgency. Jim and Bill Ford said the exact same thing to me: ‘We have four or five years to completely remake this company.’”
“I said, ‘OK, if the top of the company really believes that, then the auto industry may be ready for what I hope to offer.’”
So far, Field is energized and encouraged by the appetite for reinvention he’s encountered this time around at Ford.
“If you can combine what Ford does really well with what a Tesla or Rivian can do well, this is something to be reckoned with,” says Field. “Skunk works have become one of the fundamental tools of my career,” he adds, using an industry term for a project pursued by a small, autonomous group of people within a larger organization.
Ford has been developing a new, lower-cost, software-enabled EV platform — running all of the car’s sensors and components from a central digital operating system — with a “skunk works” team for the past two years. The company plans to build new sedans, SUVs, and small pickups based on this new platform.
With other legacy carmakers like Volvo racing into the electric future and fierce competition from EV leaders Tesla and Rivian, Field and his colleagues have their work cut out for them.
If he succeeds, leveraging his decades of learning and leading from LGO to Silicon Valley, then his latest chapter could transform the way we all drive — and secure a spot for Ford at the front of the electric vehicle pack in the process.
“I’ve been lucky to feel over and over that what I’m doing right now — they are going to write a book about it,” says Field. “This is a big deal, for Ford and the U.S. auto industry, and for American industry, actually.”
How telecommunications cables can image the ground beneath us
When people think about fiber optic cables, it’s usually about how they’re used for telecommunications and accessing the internet. But fiber optic cables — strands of glass or plastic that allow for the transmission of light — can be used for another purpose: imaging the ground beneath our feet.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) PhD student Hilary Chang recently used the MIT fiber optic cable network to successfully image the ground underneath campus using a method known as distributed acoustic sensing (DAS). By using existing infrastructure, DAS can be an efficient and effective way to understand ground composition, a critical component for assessing the seismic hazard of areas, or how at risk they are from earthquake damage.
“We were able to extract very nice, coherent waves from the surroundings, and then use that to get some information about the subsurface,” says Chang, the lead author of a recent paper describing her work that was co-authored with EAPS Principal Research Scientist Nori Nakata.
Dark fibers
The MIT campus fiber optic system, installed from 2000 to 2003, services internal data transport between labs and buildings as well as external transport, such as the campus internet (MITNet). There are three major cable hubs on campus from which lines branch out into buildings and underground, much like a spiderweb.
The network allocates a certain number of strands per building, some of which are “dark fibers,” or cables that are not actively transporting information. The hubs are connected to each other by redundant backbone cables so that, in the event of a failure, network transmission can switch to the dark fibers without loss of network services.
DAS can use existing telecommunication cables and ambient wavefields to extract information about the materials they pass through, making it a valuable tool for places like cities or the ocean floor, where conventional sensors can’t be deployed. Chang, who studies earthquake waveforms and the information we can extract from them, decided to try it out on the MIT campus.
In order to get access to the fiber optic network for the experiment, Chang reached out to John Morgante, a manager of infrastructure project engineering with MIT Information Systems and Technology (IS&T). Morgante has been at MIT since 1998 and was involved with the original project installing the fiber optic network, and was thus able to provide personal insight into selecting a route.
“It was interesting to listen to what they were trying to accomplish with the testing,” says Morgante. While IS&T has worked with students before on various projects involving the school’s network, he said that “in the physical plant area, this is the first that I can remember that we’ve actually collaborated on an experiment together.”
They decided on a path starting from a hub in Building 24, because it was the longest path that ran entirely underground; above-ground wires that cut through buildings wouldn’t work because they weren’t grounded, and thus were useless for the experiment. The path ran from east to west, beginning in Building 24, traveling under a section of Massachusetts Ave., along parts of Amherst and Vassar streets, and ending at Building W92.
“[Morgante] was really helpful,” says Chang, describing it as “a very good experience working with the campus IT team.”
Locating the cables
After renting an interrogator, a device that sends laser pulses to sense ambient vibrations along the fiber optic cables, Chang and a group of volunteers were given special access to connect it to the hub in Building 24. They let it run for five days.
To validate the route and make sure that the interrogator was working, Chang conducted a tap test, in which she hit the ground with a hammer several times to record the precise GPS coordinates of the cable. Conveniently, the underground route is marked by maintenance hole covers that serve as good locations to do the test. And, because she needed the environment to be as quiet as possible to collect clean data, she had to do it around 2 a.m.
“I was hitting it next to a dorm and someone yelled ‘shut up,’ probably because the hammer blows woke them up,” Chang recalls. “I was sorry.” Thankfully, she only had to tap at a few spots and could interpolate the locations for the rest.
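The interpolation step can be sketched in a few lines. This is a hypothetical illustration, not the study’s code: the channel distances and GPS fixes below are made up, and real workflows must also account for slack loops in the cable.

```python
import numpy as np

# Distances along the fiber (meters) where tap tests pinned down the
# cable, and the (invented) GPS fixes recorded at those spots.
tap_channel_m = np.array([0.0, 210.0, 480.0, 900.0])
tap_lat = np.array([42.3601, 42.3605, 42.3611, 42.3620])
tap_lon = np.array([-71.0942, -71.0951, -71.0963, -71.0980])

# Assume one DAS channel every 10 m along the cable; linearly
# interpolate a coordinate for each channel between the tapped spots.
channels_m = np.arange(0.0, 901.0, 10.0)
lat = np.interp(channels_m, tap_channel_m, tap_lat)
lon = np.interp(channels_m, tap_channel_m, tap_lon)
```

Linear interpolation is reasonable here because buried telecom conduit runs in nearly straight segments between maintenance holes, so a handful of tap-test fixes constrains the whole route.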
During the day, Chang and her fellow students — Denzel Segbefia, Congcong Yuan, and Jared Bryan — performed an additional test with geophones, another instrument that detects seismic waves, out on Briggs Field, where the cable passed underneath, to compare the signals. It was an enjoyable experience for Chang; when the data were collected in 2022, the campus was coming out of pandemic measures, with remote classes sometimes still in place. “It was very nice to have everyone on the field and do something with their hands,” she says.
The noise around us
Once Chang collected the data, she was able to see plenty of environmental activity in the waveforms, including passing cars and bikes, and even the nightly passes of the train that runs along the northern edge of campus.
After identifying the noise sources, Chang and Nakata extracted coherent surface waves from the ambient noise and used the wave speeds associated with different frequencies to understand the properties of the ground the cables passed through. Stiffer materials carry waves at faster velocities, while softer materials slow them down.
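As a toy sketch of the underlying idea — not the authors’ actual processing — the speed of a coherent wave can be recovered by cross-correlating the records at two points along the cable and converting the peak time lag into a velocity. All numbers below are invented for illustration:

```python
import numpy as np

fs = 1000.0                  # sampling rate, Hz
dt = 1.0 / fs
distance = 50.0              # separation of the two sensing points, m
true_velocity = 500.0        # assumed surface-wave speed, m/s
lag_samples = int(round(distance / true_velocity / dt))

# A Ricker-like wavelet stands in for a coherent surface wave.
t = np.arange(0, 1.0, dt)
arg = (np.pi * 25.0 * (t - 0.3)) ** 2
wavelet = (1 - 2 * arg) * np.exp(-arg)

rec_a = wavelet
rec_b = np.roll(wavelet, lag_samples)   # same wave, arriving later

# Cross-correlate; the peak gives the lag by which rec_b trails rec_a.
xcorr = np.correlate(rec_b, rec_a, mode="full")
best_lag = np.argmax(xcorr) - (len(rec_a) - 1)
estimated_velocity = distance / (best_lag * dt)
print(estimated_velocity)    # ≈ 500 m/s
```

In the real analysis, repeating this across many frequency bands yields the frequency-dependent (dispersive) wave speeds that reveal how stiffness varies with depth.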
“We found out that the MIT campus is built on soft materials overlaying a relatively hard bedrock,” Chang says, which confirms previously known, albeit lower-resolution, information about the geology of the area that had been collected using seismometers.
Information like this is critical for regions that are susceptible to destructive earthquakes and other seismic hazards, including the Commonwealth of Massachusetts, which has experienced earthquakes as recently as this past week. Areas of Boston and Cambridge built on artificial fill during rapid urbanization are especially at risk, because that kind of subsurface structure is more likely to amplify seismic frequencies and damage buildings. This non-intrusive method for site characterization can help ensure that buildings meet code for the correct seismic hazard level.
“Destructive seismic events do happen, and we need to be prepared,” she says.
Mishael Quraishi named 2025 Churchill Scholar
MIT senior Mishael Quraishi has been selected as a 2025-26 Churchill Scholar and will undertake an MPhil in archaeological research at Cambridge University in the U.K. this fall.
Quraishi, who is majoring in materials science and archaeology with a minor in ancient and medieval studies, envisions a future career as a materials scientist, using archaeological methods to understand how ancient techniques can be applied to modern problems.
At the Masic Lab at MIT, Quraishi was responsible for studying Egyptian blue, the world’s oldest synthetic pigment, to uncover ancient methods for mass production. Through this research, she secured an internship at the Metropolitan Museum of Art’s Department of Scientific Research, where she characterized pigments on the Amathus sarcophagus. Last fall, she presented her findings to kick off the International Roundtable on Polychromy at the Getty Museum. Quraishi has continued research in the Masic Lab, and her work on the “Blue Room” of Pompeii was featured on NBC Nightly News.
Outside of research, Quraishi has been active in MIT’s makerspace and art communities. She has created engravings and acrylic pourings in the MIT MakerWorkshop, metal sculptures in the MIT Forge, and colored glass rods in the MIT Metropolis makerspace. Quraishi also plays the piano and harp and has sung with the Harvard Summer Chorus and the Handel and Haydn Society. She currently serves as the president of the Society for Undergraduates in Materials Science (SUMS) and captain of the lightweight women’s rowing team that won MIT’s first Division I national championship title in 2022.
“We are delighted that Mishael will have the opportunity to expand her important and interesting research at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships. “Her combination of scientific inquiry, humanistic approach, and creative spirit make her an ideal representative of MIT.”
The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, which was established in 1963, honors former British Prime Minister Winston Churchill’s vision of U.S.-U.K. scientific exchange. Since 2017, two additional Kanders Churchill Scholarships have been awarded each year for studies in science policy.
MIT students interested in learning more about the Churchill Scholarship should contact Benard in MIT Career Advising and Professional Development.
Aligning AI with human values
Senior Audrey Lorvo is researching AI safety, which seeks to ensure increasingly intelligent AI models are reliable and can benefit humanity. The growing field focuses on technical challenges like robustness and AI alignment with human values, as well as societal concerns like transparency and accountability. Practitioners are also concerned with the potential existential risks associated with increasingly powerful AI tools.
“Ensuring AI isn’t misused or acts contrary to our intentions is increasingly important as we approach artificial general intelligence (AGI),” says Lorvo, a computer science, economics, and data science major. AGI describes the potential of artificial intelligence to match or surpass human cognitive capabilities.
An MIT Schwarzman College of Computing Social and Ethical Responsibilities of Computing (SERC) scholar, Lorvo looks closely at how AI might automate AI research and development processes and practices. A member of the Big Data research group, she’s investigating the social and economic implications associated with AI’s potential to accelerate research on itself and how to effectively communicate these ideas and potential impacts to general audiences including legislators, strategic advisors, and others.
Lorvo emphasizes the need to critically assess AI’s rapid advancements and their implications, ensuring organizations have proper frameworks and strategies in place to address risks. “We need to both ensure humans reap AI’s benefits and that we don’t lose control of the technology,” she says. “We need to do all we can to develop it safely.”
Her participation in efforts like the AI Safety Technical Fellowship reflects her investment in understanding the technical aspects of AI safety. The fellowship provides opportunities to review existing research on aligning AI development with considerations of potential human impact. “The fellowship helped me understand AI safety’s technical questions and challenges so I can potentially propose better AI governance strategies,” she says. According to Lorvo, companies on AI’s frontier continue to push boundaries, which means we’ll need to implement effective policies that prioritize human safety without impeding research.
Value from human engagement
When arriving at MIT, Lorvo knew she wanted to pursue a course of study that would allow her to work at the intersection of science and the humanities. The variety of offerings at the Institute made her choices difficult, however.
“There are so many ways to help advance the quality of life for individuals and communities,” she says, “and MIT offers so many different paths for investigation.”
Beginning with economics — a discipline she enjoys because of its focus on quantifying impact — Lorvo investigated math, political science, and urban planning before choosing Course 6-14.
“Professor Joshua Angrist’s econometrics classes helped me see the value in focusing on economics, while the data science and computer science elements appealed to me because of the growing reach and potential impact of AI,” she says. “We can use these tools to tackle some of the world’s most pressing problems and hopefully overcome serious challenges.”
Lorvo has also pursued concentrations in urban studies and planning and international development.
As she’s narrowed her focus, Lorvo finds she shares an outlook on humanity with other members of the MIT community like the MIT AI Alignment group, from whom she learned quite a bit about AI safety. “Students care about their marginal impact,” she says.
Marginal impact, the additional effect of a specific investment of time, money, or effort, is a way to measure how much a contribution adds to what is already being done, rather than focusing on the total impact. This can potentially influence where people choose to devote their resources, an idea that appeals to Lorvo.
“In a world of limited resources, a data-driven approach to solving some of our biggest challenges can benefit from a tailored approach that directs people to where they’re likely to do the most good,” she says. “If you want to maximize your social impact, reflecting on your career choice’s marginal impact can be very valuable.”
Lorvo also values MIT’s focus on educating the whole student and has taken advantage of opportunities to investigate disciplines like philosophy through MIT Concourse, a program that facilitates dialogue between science and the humanities. Concourse hopes participants gain guidance, clarity, and purpose for scientific, technical, and human pursuits.
Student experiences at the Institute
Lorvo invests her time outside the classroom in creating memorable experiences and fostering relationships with her classmates. “I’m fortunate that there’s space to balance my coursework, research, and club commitments with other activities, like weightlifting and off-campus initiatives,” she says. “There are always so many clubs and events available across the Institute.”
These opportunities to expand her worldview have challenged her beliefs and exposed her to new interest areas that have altered her life and career choices for the better. Lorvo, who is fluent in French, English, Spanish, and Portuguese, also applauds MIT for the international experiences it provides for students.
“I’ve interned in Santiago de Chile and Paris with MISTI and helped test a water vapor condensing chamber that we designed in a fall 2023 D-Lab class in collaboration with the Madagascar Polytechnic School and Tatirano NGO [nongovernmental organization],” she says, “and have enjoyed the opportunities to learn about addressing economic inequality through my International Development and D-Lab classes.”
As president of MIT’s Undergraduate Economics Association, Lorvo connects with other students interested in economics while continuing to expand her understanding of the field. She enjoys the relationships she’s building while also participating in the association’s events throughout the year. “Even as a senior, I’ve found new campus communities to explore and appreciate,” she says. “I encourage other students to continue exploring groups and classes that spark their interests throughout their time at MIT.”
After graduation, Lorvo wants to continue investigating AI safety and researching governance strategies that can help ensure AI’s safe and effective deployment.
“Good governance is essential to AI’s successful development and ensuring humanity can benefit from its transformative potential,” she says. “We must continue to monitor AI’s growth and capabilities as the technology continues to evolve.”
Understanding technology’s potential impacts on humanity, doing good, continually improving, and creating spaces where big ideas can see the light of day continue to drive Lorvo. Merging the humanities with the sciences animates much of what she does. “I always hoped to contribute to improving people’s lives, and AI represents humanity’s greatest challenge and opportunity yet,” she says. “I believe the AI safety field can benefit from people with interdisciplinary experiences like the kind I’ve been fortunate to gain, and I encourage anyone passionate about shaping the future to explore it.”
Eleven MIT faculty receive Presidential Early Career Awards
Eleven MIT faculty, including nine from the School of Engineering and two from the School of Science, were awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). More than 15 additional MIT alumni were also honored.
Established in 1996 by President Bill Clinton, the PECASE is awarded to scientists and engineers “who show exceptional potential for leadership early in their research careers.” The latest recipients were announced by the White House on Jan. 14 under President Joe Biden. Fourteen government agencies recommended researchers for the award.
The MIT faculty and alumni honorees are among 400 scientists and engineers recognized for innovation and scientific contributions. Those from the School of Engineering and School of Science who were honored are:
- Tamara Broderick, associate professor in the Department of Electrical Engineering and Computer Science (EECS), was nominated by the Office of Naval Research for her project advancing “Lightweight representations for decentralized learning in data-rich environments.”
- Michael James Carbin SM ’09, PhD ’15, associate professor in the Department of EECS, was nominated by the National Science Foundation (NSF) for his CAREER award, a project that developed techniques to execute programs reliably on approximate and unreliable computation substrates.
- Christina Delimitrou, the KDD Career Development Professor in Communications and Technology and associate professor in the Department of EECS, was nominated by the NSF for her group’s work on redesigning the cloud system stack given new cloud programming frameworks like microservices and serverless compute, as well as designing hardware acceleration techniques that make cloud data centers more predictable and resource-efficient.
- Netta Engelhardt, the Biedenharn Career Development Associate Professor of Physics, was nominated by the Department of Energy for her research on the black hole information paradox and its implications for the fundamental quantum structure of space and time.
- Robert Gilliard Jr., the Novartis Associate Professor of Chemistry, was selected based on the results generated from his 2020 National Science Foundation CAREER award, entitled “CAREER: Boracycles with Unusual Bonding as Creative Strategies for Main-Group Functional Materials.”
- Heather Janine Kulik PD ’09, PhD ’09, the Lammot du Pont Professor of Chemical Engineering, was nominated by the NSF for her 2019 proposal entitled “CAREER: Revealing spin-state-dependent reactivity in open-shell single atom catalysts with systematically-improvable computational tools.”
- Nuno Loureiro, professor in the Department of Nuclear Science and Engineering, was nominated by the NSF for his work on the generation and amplification of magnetic fields in the universe.
- Robert Macfarlane, associate professor in the Department of Materials Science and Engineering, was nominated by the Department of Defense (DoD)’s Air Force Office of Scientific Research. His research focuses on making new materials using molecular and nanoscale building blocks.
- Ritu Raman, the Eugene Bell Career Development Professor of Tissue Engineering in the Department of Mechanical Engineering, was nominated by the DoD for her ARO-funded research that explored leveraging biological actuators in next-generation robots that can sense and adapt to their environments.
- Ellen Roche, the Latham Family Career Development Professor and associate department head in the Department of Mechanical Engineering, was nominated by the NSF for her CAREER award, a project that aims to create a cutting-edge benchtop model combining soft robotics and organic tissue to accurately simulate the motions of the heart and diaphragm.
- Justin Wilkerson, a visiting associate professor in the Department of Aeronautics and Astronautics, was nominated by the Air Force Office of Scientific Research (AFOSR) for his research primarily related to the design and optimization of novel multifunctional composite materials that can survive extreme environments.
Additional MIT alumni who were honored include: Elaheh Ahmadi ’20, MNG ’21; Ambika Bajpayee MNG ’07, PhD ’15; Katherine Bouman SM ’13, PhD ’17; Walter Cheng-Wan Lee ’95, MNG ’95, PhD ’05; Ismaila Dabo PhD ’08; Ying Diao SM ’10, PhD ’12; Eno Ebong ’99; Soheil Feizi-Khankandi SM ’10, PhD ’16; Mark Finlayson SM ’01, PhD ’12; Chelsea B. Finn ’14; Grace Xiang Gu SM ’14, PhD ’18; David Michael Isaacson PhD ’06, AF ’16; Lewei Lin ’05; Michelle Sander PhD ’12; Kevin Solomon SM ’08, PhD ’12; and Zhiting Tian PhD ’14.
Introducing the MIT Generative AI Impact Consortium
From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.
Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.
“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”
Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”
Developing the blueprint for generative AI’s next leap
The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives:
- How can AI-human collaboration create outcomes that neither could achieve alone?
- What is the dynamic between AI systems and human behavior, and how do we maximize the benefits while steering clear of risks?
- How can interdisciplinary research guide the development of better, safer AI technologies that improve human life?
Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there’s no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.
“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.
“What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time,” says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium.
A “perfect match” of academia and industry
At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.
The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.
“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications for these different domains.”
Industry partners: Collaborating on AI’s evolution
At the core of the consortium’s mission is collaboration — bringing MIT researchers and industry partners together to unlock generative AI’s potential while ensuring its benefits are felt across society.
Among the founding members is OpenAI, the creator of the generative AI chatbot ChatGPT.
“This type of collaboration between academics, practitioners, and labs is key to ensuring that generative AI evolves in ways that meaningfully benefit society,” says Anna Makanju, vice president of global impact at OpenAI, adding that OpenAI “is eager to work alongside MIT’s Generative AI Consortium to bridge the gap between cutting-edge AI research and the real-world expertise of diverse industries.”
The Coca-Cola Co. recognizes an opportunity to leverage AI innovation on a global scale. “We see a tremendous opportunity to innovate at the speed of AI and, leveraging The Coca-Cola Company's global footprint, make these cutting-edge solutions accessible to everyone,” says Pratik Thakar, global vice president and head of generative AI. “Both MIT and The Coca-Cola Company are deeply committed to innovation, while also placing equal emphasis on the legally and ethically responsible development and use of technology.”
For TWG Global, the consortium offers the ideal environment to share knowledge and drive advancements. “The strength of the consortium is its unique combination of industry leaders and academia, which fosters the exchange of valuable lessons, technological advancements, and access to pioneering research,” says Drew Cukor, head of data and artificial intelligence transformation. Cukor adds that TWG Global “is keen to share its insights and actively engage with leading executives and academics to gain a broader perspective of how others are configuring and adopting AI, which is why we believe in the work of the consortium.”
The Tata Group views the collaboration as a platform to address some of AI’s most pressing challenges. “The consortium enables Tata to collaborate, share knowledge, and collectively shape the future of generative AI, particularly in addressing urgent challenges such as ethical considerations, data privacy, and algorithmic biases,” says Aparna Ganesh, vice president of Tata Sons Ltd.
Similarly, SK Telecom sees its involvement as a launchpad for growth and innovation. “Joining the consortium presents a significant opportunity for SK Telecom to enhance its AI competitiveness in core business areas, including AI agents, AI semiconductors, data centers (AIDC), and physical AI,” says Suk-geun (SG) Chung, SK Telecom executive vice president and chief AI global officer. “By collaborating with MIT and leveraging the SK AI R&D Center as a technology control tower, we aim to forecast next-generation generative AI technology trends, propose innovative business models, and drive commercialization through academic-industrial collaboration.”
Alan Lee, chief technology officer of Analog Devices (ADI), highlights how the consortium bridges key knowledge gaps for both his company and the industry at large. “ADI can’t hire a world-leading expert in every single corner case, but the consortium will enable us to access top MIT researchers and get them involved in addressing problems we care about, as we also work together with others in the industry towards common goals,” he says.
The consortium will host interactive workshops and discussions to identify and prioritize challenges. “It’s going to be a two-way conversation, with the faculty coming together with industry partners, but also industry partners talking with each other,” says Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research and statistics, who serves alongside Huttenlocher as co-chair of the GenAI Dean’s oversight group.
Preparing for the AI-enabled workforce of the future
With AI poised to disrupt industries and create new opportunities, one of the consortium’s core goals is to guide that change in a way that benefits both businesses and society.
“When the first commercial digital computers were introduced [the UNIVAC was delivered to the U.S. Census Bureau in 1951], people were worried about losing their jobs,” says Kraska. “And yes, jobs like large-scale, manual data entry clerks and human ‘computers,’ people tasked with doing manual calculations, largely disappeared over time. But the people impacted by those first computers were trained to do other jobs.”
The consortium aims to play a key role in preparing the workforce of tomorrow by educating global business leaders and employees on generative AI’s evolving uses and applications. With the pace of innovation accelerating, leaders face a flood of information and uncertainty.
“When it comes to educating leaders about generative AI, it’s about helping them navigate the complexity of the space right now, because there’s so much hype and hundreds of papers published daily,” says Kraska. “The hard part is understanding which developments could actually have a chance of changing the field and which are just tiny improvements. There's a kind of FOMO [fear of missing out] for leaders that we can help reduce.”
Defining success: Shared goals for generative AI impact
Success within the initiative is defined by shared progress, open innovation, and mutual growth. “Consortium participants recognize, I think, that when I share my ideas with you, and you share your ideas with me, we’re both fundamentally better off,” explains Farias. “Progress on generative AI is not zero-sum, so it makes sense for this to be an open-source initiative.”
While participants may approach success from different angles, they share a common goal of advancing generative AI for broad societal benefit. “There will be many success metrics,” says Perakis. “We’ll educate students, who will be networking with companies. Companies will come together and learn from each other. Business leaders will come to MIT and have discussions that will help all of us, not just the leaders themselves.”
For Analog Devices’ Alan Lee, success is measured in tangible improvements that drive efficiency and product innovation: “For us at ADI, it’s a better, faster quality of experience for our customers, and that could mean better products. It could mean faster design cycles, faster verification cycles, and faster tuning of equipment that we already have or that we’re going to develop for the future. But beyond that, we want to help the world be a better, more efficient place.”
Ganesh highlights success through the lens of real-world application. “Success will also be defined by accelerating AI adoption within Tata companies, generating actionable knowledge that can be applied in real-world scenarios, and delivering significant advantages to our customers and stakeholders,” she says.
Generative AI is no longer confined to isolated research labs — it’s driving innovation across industries and disciplines. At MIT, the technology has become a campus-wide priority, connecting researchers, students, and industry leaders to solve complex challenges and uncover new opportunities. “It's truly an MIT initiative,” says Farias, “one that’s much larger than any individual or department on campus.”
David Darmofal SM ’91, PhD ’93 named vice chancellor for undergraduate and graduate education
David L. Darmofal SM ’91, PhD ’93 will serve as MIT’s next vice chancellor for undergraduate and graduate education, effective Feb. 17. Chancellor Melissa Nobles announced Darmofal’s appointment today in a letter to the MIT community.
Darmofal succeeds Ian A. Waitz, who stepped down in May to become MIT’s vice president for research, and Daniel E. Hastings, who has been serving in an interim capacity.
A creative innovator in research-based teaching and learning, Darmofal is the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. Since 2017, he and his wife Claudia have served as heads of house at The Warehouse, an MIT graduate residence.
“Dave knows the ins and outs of education and student life at MIT in a way that few do,” Nobles says. “He’s a head of house, an alum, and the parent of a graduate. Dave will bring decades of first-hand experience to the role.”
“An MIT education is incredibly special, combining passionate students, staff, and faculty striving to use knowledge and discovery to drive positive change for the world,” says Darmofal. “I am grateful for this opportunity to play a part in supporting MIT’s academic mission.”
Darmofal’s leadership experience includes service from 2008 to 2011 as associate and interim department head in the Department of Aeronautics and Astronautics, overseeing undergraduate and graduate programs. He was the AeroAstro director of digital education from 2020 to 2022, including leading the department’s response to remote learning during the Covid-19 pandemic. He currently serves as director of the MIT Aerospace Computational Science and Engineering Laboratory and is a member of the Center for Computational Science and Engineering (CCSE) in the MIT Stephen A. Schwarzman College of Computing.
As an MIT faculty member and administrator, Darmofal has been involved in designing more flexible degree programs, developing open digital-learning opportunities, creating first-year advising seminars, and enhancing professional and personal development opportunities for students. He also contributed his expertise in engineering pedagogy to the development of the Schwarzman College of Computing’s Common Ground efforts, to address the need for computing education across many disciplines.
“MIT students, staff, and faculty share a common bond as problem solvers. Talk to any of us about an MIT education, and you will get an earful on not only what we need to do better, but also how we can actually do it. The Office of the Vice Chancellor can help bring our community of problem solvers together to enable improvements in our academics,” says Darmofal.
Overseeing the academic arm of the Chancellor’s Office, the vice chancellor’s portfolio is extensive. Darmofal will lead professionals across more than a dozen units, covering areas such as recruitment and admissions, financial aid, student systems, advising, professional and career development, pedagogy, experiential learning, and support for MIT’s more than 100 graduate programs. He will also work collaboratively with many of MIT’s student organizations and groups, including with the leaders of the Undergraduate Association and the Graduate Student Council, and administer the relationship with the graduate student union.
“Dave will be a critical part of my office’s efforts to strengthen and expand critical connections across all areas of student life and learning,” Nobles says. She credits the search advisory group, co-chaired by professors Laurie Boyer and Will Tisdale, with setting the right tenor for such an important role and leading a thorough, inclusive process.
Darmofal’s research is focused on computational methods for partial differential equations, especially fluid dynamics. He earned his SM and PhD degrees in aeronautics and astronautics in 1991 and 1993, respectively, from MIT, and his BS in aerospace engineering in 1989 from the University of Michigan. Prior to joining MIT in 1998, he was an assistant professor in the Department of Aerospace Engineering at Texas A&M University from 1995 to 1998. Currently, he is the chair of AeroAstro’s Undergraduate Committee and the graduate officer for the CCSE PhD program.
“I want to echo something that Dan Hastings said recently,” Darmofal says. “We have a lot to be proud of when it comes to an MIT education. It’s more accessible than it has ever been. It’s innovative, with unmatched learning opportunities here and around the world. It’s home to academic research labs that attract the most talented scholars, creators, experimenters, and engineers. And ultimately, it prepares graduates who do good.”
User-friendly system can help developers build more efficient simulations and AI models
The neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
To improve the efficiency of AI models, MIT researchers created an automated system that enables developers of deep learning algorithms to simultaneously take advantage of two types of data redundancy. This reduces the amount of computation, bandwidth, and memory storage needed for machine learning operations.
Existing techniques for optimizing algorithms can be cumbersome and typically only allow developers to capitalize on either sparsity or symmetry — two different types of redundancy that exist in deep learning data structures.
By enabling a developer to build an algorithm from scratch that takes advantage of both redundancies at once, the MIT researchers’ approach boosted the speed of computations by nearly 30 times in some experiments.
Because the system utilizes a user-friendly programming language, it could optimize machine-learning algorithms for a wide range of applications. The system could also help scientists who are not experts in deep learning but want to improve the efficiency of AI algorithms they use to process data. In addition, the system could have applications in scientific computing.
“For a long time, capturing these data redundancies has required a lot of implementation effort. Instead, a scientist can tell our system what they would like to compute in a more abstract way, without telling the system exactly how to compute it,” says Willow Ahrens, an MIT postdoc and co-author of a paper on the system, which will be presented at the International Symposium on Code Generation and Optimization.
She is joined on the paper by lead author Radha Patel ’23, SM ’24 and senior author Saman Amarasinghe, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Cutting out computation
In machine learning, data are often represented and manipulated as multidimensional arrays known as tensors. A tensor is like a matrix, which is a rectangular array of values arranged on two axes, rows and columns. But unlike a two-dimensional matrix, a tensor can have many dimensions, or axes, making tensors more difficult to manipulate.
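The distinction between a matrix and a higher-order tensor can be made concrete with a short sketch (this uses NumPy for illustration; the variable names are my own, not from the researchers’ system):

```python
import numpy as np

# A matrix is a 2nd-order tensor: values arranged along two axes (rows, columns).
m = np.arange(6).reshape(2, 3)

# Adding axes gives higher-order tensors; here, a 3rd-order tensor with three axes.
t = np.arange(24).reshape(2, 3, 4)

print(m.ndim, t.ndim)  # 2 3
```

Each extra axis multiplies the number of entries, which is why operations on high-dimensional tensors become expensive so quickly.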
Deep-learning models perform operations on tensors using repeated matrix multiplication and addition — this process is how neural networks learn complex patterns in data. The sheer volume of calculations that must be performed on these multidimensional data structures requires an enormous amount of computation and energy.
But because of the way data in tensors are arranged, engineers can often boost the speed of a neural network by cutting out redundant computations.
For instance, if a tensor represents user review data from an e-commerce site, since not every user reviewed every product, most values in that tensor are likely zero. This type of data redundancy is called sparsity. A model can save time and computation by only storing and operating on non-zero values.
In addition, sometimes a tensor is symmetric, which means the top half and bottom half of the data structure are equal. In this case, the model only needs to operate on one half, reducing the amount of computation. This type of data redundancy is called symmetry.
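To make the two redundancies concrete, here is a minimal sketch of a matrix-vector product that exploits both at once: only the nonzero entries of the upper triangle of a symmetric matrix are stored, and each stored entry does double duty for its mirror below the diagonal. This is an illustrative toy, not the compiler’s actual implementation; the function and variable names are assumptions for the example.

```python
import numpy as np

def symmetric_sparse_matvec(entries, x, n):
    """Multiply a symmetric n-by-n matrix by x, storing only the
    nonzero entries (i, j, v) with i <= j (the upper triangle)."""
    y = np.zeros(n)
    for i, j, v in entries:
        y[i] += v * x[j]
        if i != j:
            # Symmetry: the same value also lives at (j, i),
            # so one stored entry covers two positions.
            y[j] += v * x[i]
    return y

# Toy 3x3 symmetric matrix, zeros omitted (sparsity):
# [[2, 0, 1],
#  [0, 3, 0],
#  [1, 0, 0]]
entries = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0)]
x = np.array([1.0, 2.0, 3.0])
print(symmetric_sparse_matvec(entries, x, 3))  # [5. 6. 1.]
```

Of the nine matrix positions, only three values are ever stored or touched, which is the kind of saving that compounds as tensors gain dimensions.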
“But when you try to capture both of these optimizations, the situation becomes quite complex,” Ahrens says.
To simplify the process, she and her collaborators built a new compiler, which is a computer program that translates complex code into a simpler language that can be processed by a machine. Their compiler, called SySTeC, can optimize computations by automatically taking advantage of both sparsity and symmetry in tensors.
They began the process of building SySTeC by identifying three key optimizations they can perform using symmetry.
First, if the algorithm’s output tensor is symmetric, then it only needs to compute one half of it. Second, if the input tensor is symmetric, then the algorithm only needs to read one half of it. Finally, if intermediate results of tensor operations are symmetric, the algorithm can skip redundant computations.
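The first of these optimizations can be sketched in a few lines: the product C = A·Aᵀ is always symmetric, so only its upper triangle needs to be computed, with the lower triangle filled in by mirroring. This is an illustrative hand-written example of the idea, not code generated by the researchers’ system.

```python
import numpy as np

def gram_upper(A):
    """Compute C = A @ A.T, exploiting the fact that the output is
    symmetric: only entries with j >= i are computed, then mirrored."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):      # skip j < i: those entries are mirrors
            C[i, j] = A[i] @ A[j]
            C[j, i] = C[i, j]      # copy instead of recomputing
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(gram_upper(A), A @ A.T))  # True
```

Roughly half the dot products are skipped, and the saving grows with the symmetry of higher-order intermediates.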
Simultaneous optimizations
To use SySTeC, a developer inputs their program and the system automatically optimizes their code for all three types of symmetry. Then the second phase of SySTeC performs additional transformations to only store non-zero data values, optimizing the program for sparsity.
In the end, SySTeC generates ready-to-use code.
“In this way, we get the benefits of both optimizations. And the interesting thing about symmetry is, as your tensor has more dimensions, you can get even more savings on computation,” Ahrens says.
The researchers demonstrated speedups of nearly a factor of 30 with code generated automatically by SySTeC.
Because the system is automated, it could be especially useful in situations where a scientist wants to process data using an algorithm they are writing from scratch.
In the future, the researchers want to integrate SySTeC into existing sparse tensor compiler systems to create a seamless interface for users. In addition, they would like to use it to optimize code for more complicated programs.
This work is funded, in part, by Intel, the National Science Foundation, the Defense Advanced Research Projects Agency, and the Department of Energy.
With generative AI, MIT chemists quickly calculate 3D genomic structures
Every cell in your body contains the same genetic sequence, yet each cell expresses only a subset of those genes. These cell-specific gene expression patterns, which ensure that a brain cell is different from a skin cell, are partly determined by the three-dimensional structure of the genetic material, which controls the accessibility of each gene.
MIT chemists have now come up with a new way to determine those 3D genome structures, using generative artificial intelligence. Their technique can predict thousands of structures in just minutes, making it much speedier than existing experimental methods for analyzing the structures.
Using this technique, researchers could more easily study how the 3D organization of the genome affects individual cells’ gene expression patterns and functions.
“Our goal was to try to predict the three-dimensional genome structure from the underlying DNA sequence,” says Bin Zhang, an associate professor of chemistry and the senior author of the study. “Now that we can do that, which puts this technique on par with the cutting-edge experimental techniques, it can really open up a lot of interesting opportunities.”
MIT graduate students Greg Schuette and Zhuohan Lao are the lead authors of the paper, which appears today in Science Advances.
From sequence to structure
Inside the cell nucleus, DNA and proteins form a complex called chromatin, which has several levels of organization, allowing cells to cram 2 meters of DNA into a nucleus that is only one-hundredth of a millimeter in diameter. Long strands of DNA wind around proteins called histones, giving rise to a structure somewhat like beads on a string.
Chemical tags known as epigenetic modifications can be attached to DNA at specific locations, and these tags, which vary by cell type, affect the folding of the chromatin and the accessibility of nearby genes. These differences in chromatin conformation help determine which genes are expressed in different cell types, or at different times within a given cell.
Over the past 20 years, scientists have developed experimental techniques for determining chromatin structures. One widely used technique, known as Hi-C, works by linking together neighboring DNA strands in the cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.
This method can be used on large populations of cells to calculate an average structure for a section of chromatin, or on single cells to determine structures within that specific cell. However, Hi-C and similar techniques are labor-intensive, and it can take about a week to generate data from one cell.
To overcome those limitations, Zhang and his students developed a model that takes advantage of recent advances in generative AI to create a fast, accurate way to predict chromatin structures in single cells. The AI model that they designed can quickly analyze DNA sequences and predict the chromatin structures that those sequences might produce in a cell.
“Deep learning is really good at pattern recognition,” Zhang says. “It allows us to analyze very long DNA segments, thousands of base pairs, and figure out what is the important information encoded in those DNA base pairs.”
ChromoGen, the model that the researchers created, has two components. The first component, a deep learning model taught to “read” the genome, analyzes the information encoded in the underlying DNA sequence and chromatin accessibility data, the latter of which is widely available and cell type-specific.
The second component is a generative AI model that predicts physically accurate chromatin conformations, having been trained on more than 11 million chromatin conformations. These data were generated from experiments using Dip-C (a variant of Hi-C) on 16 cells from a line of human B lymphocytes.
When integrated, the first component informs the generative model how the cell type-specific environment influences the formation of different chromatin structures, and this scheme effectively captures sequence-structure relationships. For each sequence, the researchers use their model to generate many possible structures. That’s because DNA is a very disordered molecule, so a single DNA sequence can give rise to many different possible conformations.
“A major complicating factor of predicting the structure of the genome is that there isn’t a single solution that we’re aiming for. There’s a distribution of structures, no matter what portion of the genome you’re looking at. Predicting that very complicated, high-dimensional statistical distribution is something that is incredibly challenging to do,” Schuette says.
Rapid analysis
Once trained, the model can generate predictions on a much faster timescale than Hi-C or other experimental techniques.
“Whereas you might spend six months running experiments to get a few dozen structures in a given cell type, you can generate a thousand structures in a particular region with our model in 20 minutes on just one GPU,” Schuette says.
After training their model, the researchers used it to generate structure predictions for more than 2,000 DNA sequences, then compared them to the experimentally determined structures for those sequences. They found that the structures generated by the model were the same or very similar to those seen in the experimental data.
“We typically look at hundreds or thousands of conformations for each sequence, and that gives you a reasonable representation of the diversity of the structures that a particular region can have,” Zhang says. “If you repeat your experiment multiple times, in different cells, you will very likely end up with a very different conformation. That’s what our model is trying to predict.”
The researchers also found that the model could make accurate predictions for data from cell types other than the one it was trained on. This suggests that the model could be useful for analyzing how chromatin structures differ between cell types, and how those differences affect their function. The model could also be used to explore different chromatin states that can exist within a single cell, and how those changes affect gene expression.
Another possible application would be to explore how mutations in a particular DNA sequence change the chromatin conformation, which could shed light on how such mutations may cause disease.
“There are a lot of interesting questions that I think we can address with this type of model,” Zhang says.
The researchers have made all of their data and the model available to others who wish to use it.
The research was funded by the National Institutes of Health.
MIT engineers help multirobot systems stay in the safety zone
Drone shows are an increasingly popular form of large-scale light display. These shows incorporate hundreds to thousands of airborne bots, each programmed to fly in paths that together form intricate shapes and patterns across the sky. When they go as planned, drone shows can be spectacular. But when one or more drones malfunction, as has happened recently in Florida, New York, and elsewhere, they can be a serious hazard to spectators on the ground.
Drone show accidents highlight the challenges of maintaining safety in what engineers call “multiagent systems” — systems of multiple coordinated, collaborative, and computer-programmed agents, such as robots, drones, and self-driving cars.
Now, a team of MIT engineers has developed a training method for multiagent systems that can guarantee their safe operation in crowded environments. The researchers found that once the method is used to train a small number of agents, the safety margins and controls learned by those agents can automatically scale to any larger number of agents, in a way that ensures the safety of the system as a whole.
In real-world demonstrations, the team trained a small number of palm-sized drones to safely carry out different objectives, from simultaneously switching positions midflight to landing on designated moving vehicles on the ground. In simulations, the researchers showed that the same programs, trained on a few drones, could be copied and scaled up to thousands of drones, enabling a large system of agents to safely accomplish the same tasks.
“This could be a standard for any application that requires a team of agents, such as warehouse robots, search-and-rescue drones, and self-driving cars,” says Chuchu Fan, associate professor of aeronautics and astronautics at MIT. “This provides a shield, or safety filter, saying each agent can continue with their mission, and we’ll tell you how to be safe.”
Fan and her colleagues report on their new method in a study appearing this month in the journal IEEE Transactions on Robotics. The study’s co-authors are MIT graduate students Songyuan Zhang and Oswin So as well as former MIT postdoc Kunal Garg, who is now an assistant professor at Arizona State University.
Mall margins
When engineers design for safety in any multiagent system, they typically have to consider the potential paths of every single agent with respect to every other agent in the system. This pair-wise path-planning is a time-consuming and computationally expensive process. And even then, safety is not guaranteed.
“In a drone show, each drone is given a specific trajectory — a set of waypoints and a set of times — and then they essentially close their eyes and follow the plan,” says Zhang, the study’s lead author. “Since they only know where they have to be and at what time, if there are unexpected things that happen, they don’t know how to adapt.”
The MIT team looked instead to develop a method to train a small number of agents to maneuver safely, in a way that could efficiently scale to any number of agents in the system. And, rather than plan specific paths for individual agents, the method would enable agents to continually map their safety margins, or boundaries beyond which they might be unsafe. An agent could then take any number of paths to accomplish its task, as long as it stays within its safety margins.
In some sense, the team says the method is similar to how humans intuitively navigate their surroundings.
“Say you’re in a really crowded shopping mall,” So explains. “You don’t care about anyone beyond the people who are in your immediate neighborhood, like the 5 meters surrounding you, in terms of getting around safely and not bumping into anyone. Our work takes a similar local approach.”
Safety barrier
In their new study, the team presents their method, GCBF+, which stands for “Graph Control Barrier Function.” A barrier function is a mathematical tool used in robotics to calculate a sort of safety barrier, or a boundary beyond which an agent has a high probability of being unsafe. For any given agent, this safety zone can change moment to moment, as the agent moves among other agents that are themselves moving within the system.
When designers calculate barrier functions for any one agent in a multiagent system, they typically have to take into account the potential paths and interactions with every other agent in the system. Instead, the MIT team’s method calculates the safety zones of just a handful of agents, in a way that is accurate enough to represent the dynamics of many more agents in the system.
“Then we can sort of copy-paste this barrier function for every single agent, and then suddenly we have a graph of safety zones that works for any number of agents in the system,” So says.
To calculate an agent’s barrier function, the team’s method first takes into account an agent’s “sensing radius,” or how much of the surroundings an agent can observe, depending on its sensor capabilities. Just as in the shopping mall analogy, the researchers assume that the agent only cares about the agents that are within its sensing radius, in terms of keeping safe and avoiding collisions with those agents.
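Restricting attention to the sensing radius is easy to picture in code: each agent filters the full set of positions down to the neighbors it can actually observe. This is a minimal sketch of that filtering step under assumed names (`sensed_neighbors`, `r_sense`), not the team’s implementation.

```python
import numpy as np

def sensed_neighbors(positions, i, r_sense):
    """Return indices of the agents within agent i's sensing radius.
    Only these neighbors enter agent i's safety computation."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    return [j for j in range(len(positions)) if j != i and dists[j] <= r_sense]

# Three agents; agent 0 senses agent 1 (1 m away) but not agent 2 (10 m away).
pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
print(sensed_neighbors(pos, 0, 5.0))  # [1]
```

Because the computation depends only on this local neighborhood, the same controller scales to any number of agents in the system.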
Then, using computer models that capture an agent’s particular mechanical capabilities and limits, the team simulates a “controller,” or a set of instructions for how the agent and a handful of similar agents should move around. They then run simulations of multiple agents moving along certain trajectories, and record whether and how they collide or otherwise interact.
“Once we have these trajectories, we can compute some laws that we want to minimize, like say, how many safety violations we have in the current controller,” Zhang says. “Then we update the controller to be safer.”
In this way, a controller can be programmed into actual agents, which would enable them to continually map their safety zone based on any other agents they can sense in their immediate surroundings, and then move within that safety zone to accomplish their task.
“Our controller is reactive,” Fan says. “We don’t preplan a path beforehand. Our controller is constantly taking in information about where an agent is going, what is its velocity, how fast other drones are going. It’s using all this information to come up with a plan on the fly and it’s replanning every time. So, if the situation changes, it’s always able to adapt to stay safe.”
The team demonstrated GCBF+ on a system of eight Crazyflies — lightweight, palm-sized quadrotor drones that they tasked with flying and switching positions in midair. If the drones were to do so by taking the straightest path, they would surely collide. But after training with the team’s method, the drones were able to make real-time adjustments to maneuver around each other, keeping within their respective safety zones, to successfully switch positions on the fly.
In similar fashion, the team tasked the drones with flying around, then landing on specific Turtlebots — wheeled robots with shell-like tops. The Turtlebots drove continuously around in a large circle, and the Crazyflies were able to avoid colliding with each other as they made their landings.
“Using our framework, we only need to give the drones their destinations instead of the whole collision-free trajectory, and the drones can figure out how to arrive at their destinations without collision themselves,” says Fan, who envisions the method could be applied to any multiagent system to guarantee its safety, including collision avoidance systems in drone shows, warehouse robots, autonomous driving vehicles, and drone delivery systems.
This work was partly supported by the U.S. National Science Foundation, MIT Lincoln Laboratory under the Safety in Aerobatic Flight Regimes (SAFR) program, and the Defence Science and Technology Agency of Singapore.
From bench to bedside, and beyond
In medical school, Matthew Dolan ’81 briefly considered specializing in orthopedic surgery because of the materials science nature of the work — but he soon realized that he didn’t have the innate skills required for that type of work.
“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.”
Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the United States and abroad through the U.S. Air Force, Dolan has emerged as a leader in immunology and virology, and has served as director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and Covid-19, and has even been a guest speaker on NPR’s “Science Friday.”
“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.”
Pieces of the puzzle
Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge.
He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of biology as part of his undergraduate studies: organic chemistry was foundational for understanding toxicology while studying chemical weapons, while pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are solved at the interface between public health and ecology.
“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”
Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis of a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive.
“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.”
Choosing to serve
Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”
One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die.
“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.”
Lasting impacts
Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives.
Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.
“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.”
Dolan understands that his most lasting impact is likely his teaching: Time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of health-care specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the U.S. departments of State and Defense, and taught those programs around the world.
“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.”
MIT spinout Gradiant reduces companies’ water use and waste by billions of gallons each day
When it comes to water use, most of us think of the water we drink. But industrial uses for things like manufacturing account for billions of gallons of water each day. For instance, making a single iPhone, by one estimate, requires more than 3,000 gallons.
Gradiant is working to reduce the world’s industrial water footprint. Founded by a team from MIT, Gradiant offers water recycling, treatment, and purification solutions to some of the largest companies on Earth, including Coca-Cola, Tesla, and the Taiwan Semiconductor Manufacturing Company. By serving as an end-to-end water company, Gradiant says it helps companies reuse 2 billion gallons of water each day and saves another 2 billion gallons of fresh water from being withdrawn.
The company’s mission is to preserve water for generations to come in the face of rising global demand.
“We work on both ends of the water spectrum,” Gradiant co-founder and CEO Anurag Bajpayee SM ’08, PhD ’12 says. “We work with ultracontaminated water, and we can also provide ultrapure water for use in areas like chip fabrication. Our specialty is in the extreme water challenges that can’t be solved with traditional technologies.”
For each customer, Gradiant builds tailored water treatment solutions that combine chemical treatments with membrane filtration and biological process technologies, leveraging a portfolio of patents to drastically cut water usage and waste.
“Before Gradiant, 40 million liters of water would be used in the chip-making process. It would all be contaminated and treated, and maybe 30 percent would be reused,” explains Gradiant co-founder and COO Prakash Govindan PhD ’12. “We have the technology to recycle, in some cases, 99 percent of the water. Now, instead of consuming 40 million liters, chipmakers only need to consume 400,000 liters, which is a huge shift in the water footprint of that industry. And this is not just with semiconductors. We’ve done this in food and beverage, we’ve done this in renewable energy, we’ve done this in pharmaceutical drug production, and several other areas.”
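The arithmetic behind those figures is straightforward; this is a back-of-the-envelope check of the quoted numbers, not Gradiant's actual accounting.

```python
# Back-of-the-envelope check of the recycling figures quoted above.
total_water_liters = 40_000_000   # water used in the chip-making process
recycling_rate = 0.99             # "in some cases, 99 percent"

# Fresh water still needed once 99 percent is recycled back into the process.
fresh_water_needed = round(total_water_liters * (1 - recycling_rate))
print(fresh_water_needed)  # 400000, matching the 400,000-liter figure
```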
Learning the value of water
Govindan grew up in a part of India that experienced a years-long drought beginning when he was 10. Without tap water, one of Govindan’s chores was to haul water up the stairs of his apartment complex each time a truck delivered it.
“However much water my brother and I could carry was how much we had for the week,” Govindan recalls. “I learned the value of water the hard way.”
Govindan attended the Indian Institute of Technology as an undergraduate, and when he came to MIT for his PhD, he sought out the groups working on water challenges. He began working on a water treatment method called carrier gas extraction for his PhD under Gradiant co-founder and MIT Professor John Lienhard.
Bajpayee also worked on water treatment methods at MIT, and after brief stints as postdocs at MIT, he and Govindan licensed their work and founded Gradiant.
Carrier gas extraction became Gradiant’s first proprietary technology when the company launched in 2013. The founders began by treating wastewater created by oil and gas wells, landing their first partner in a Texas company. But Gradiant gradually expanded to solving water challenges in power generation, mining, textiles, and refineries. Then the founders noticed opportunities in industries like electronics, semiconductors, food and beverage, and pharmaceuticals. Today, oil and gas wastewater treatment makes up a small percentage of Gradiant’s work.
As the company expanded, it added technologies to its portfolio, patenting new water treatment methods around reverse osmosis, selective contaminant extraction, and free radical oxidation. Gradiant has also created a digital system that uses AI to measure, predict, and control water treatment facilities.
“The advantage Gradiant has over every other water company is that R&D is in our DNA,” Govindan says, noting Gradiant has a world-class research lab at its headquarters in Boston. “At MIT, we learned how to do cutting-edge technology development, and we never let go of that.”
The founders compare their suite of technologies to LEGO bricks they can mix and match depending on a customer’s water needs. Gradiant has built more than 2,500 of these end-to-end systems for customers around the world.
“Our customers aren’t water companies; they are industrial clients like semiconductor manufacturers, drug companies, and food and beverage companies,” Bajpayee says. “They aren’t about to start operating a water treatment plant. They look at us as their water partner who can take care of the whole water problem.”
Continuing innovation
The founders say Gradiant has been roughly doubling its revenue each year over the last five years, and it’s continuing to add technologies to its platform. For instance, Gradiant recently developed a critical minerals recovery solution to extract materials like lithium and nickel from customers’ wastewater, which could expand access to critical materials essential to the production of batteries and other products.
“If we can extract lithium from brine water in an environmentally and economically feasible way, the U.S. can meet all of its lithium needs from within the U.S.,” Bajpayee says. “What’s preventing large-scale extraction of lithium from brine is technology, and we believe what we have now deployed will open the floodgates for direct lithium extraction and completely revolutionize the industry.”
The company has also validated a method for eliminating PFAS — so-called toxic “forever chemicals” — in a pilot project with a leading U.S. semiconductor manufacturer. In the near future, it hopes to bring that solution to municipal water treatment plants to protect cities.
At the heart of Gradiant’s innovation is the founders’ belief that industrial activity doesn’t have to deplete one of the world’s most vital resources.
“Ever since the industrial revolution, we’ve been taking from nature,” Bajpayee says. “By treating and recycling water, by reducing water consumption and making industry highly water efficient, we have this unique opportunity to turn the clock back and give nature water back. If that’s your driver, you can’t choose not to innovate.”
Rare and mysterious cosmic explosion: Gamma-ray burst or jetted tidal disruption event?
Highly energetic explosions in the sky are commonly attributed to gamma-ray bursts. We now understand that these bursts originate from either the merger of two neutron stars or the collapse of a massive star. In these scenarios, a newborn black hole is formed, emitting a jet that travels at nearly the speed of light. When these jets are directed toward Earth, we can observe them from vast distances — sometimes billions of light-years away — due to a relativistic effect known as Doppler boosting. Over the past decade, thousands of such gamma-ray bursts have been detected.
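The strength of Doppler boosting is captured by the standard relativistic Doppler factor, delta = 1 / (Gamma * (1 - beta * cos(theta))), where beta is the jet speed as a fraction of the speed of light and theta is the viewing angle. The numbers below are illustrative and are not taken from the paper.

```python
import math

def doppler_factor(beta, theta_deg):
    """Doppler factor for a source moving at beta*c at angle theta to our line of sight."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

# A jet at 99.5 percent of light speed aimed directly at Earth (theta = 0)
# has its emission strongly boosted ...
print(round(doppler_factor(0.995, 0.0), 1))   # 20.0
# ... while the same jet viewed side-on (theta = 90 degrees) appears dimmed.
print(round(doppler_factor(0.995, 90.0), 2))  # 0.1
```

This asymmetry is why on-axis jets can be seen from billions of light-years away while off-axis ones go unnoticed.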
Since its launch in 2024, the Einstein Probe — an X-ray space telescope developed by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics — has been scanning the skies looking for energetic explosions, and in April the telescope observed an unusual event designated EP240408A. An international team of astronomers, including Dheeraj Pasham of MIT, Igor Andreoni of the University of North Carolina at Chapel Hill, and Brendan O’Connor of Carnegie Mellon University, has now investigated this explosion using a slew of ground-based and space-based telescopes, including NuSTAR, Swift, Gemini, Keck, DECam, VLA, ATCA, and NICER, which was developed in collaboration with MIT.
An open-access report of their findings, published Jan. 27 in The Astrophysical Journal Letters, indicates that the characteristics of this explosion do not match those of typical gamma-ray bursts. Instead, it may represent a rare new class of powerful cosmic explosion — a jetted tidal disruption event, which occurs when a supermassive black hole tears apart a star.
“NICER’s ability to steer to pretty much any part of the sky and monitor for weeks has been instrumental in our understanding of these unusual cosmic explosions,” says Pasham, a research scientist at the MIT Kavli Institute for Astrophysics and Space Research.
While a jetted tidal disruption event is plausible, the researchers say the lack of radio emissions from this jet is puzzling. O’Connor surmises, “EP240408a ticks some of the boxes for several different kinds of phenomena, but it doesn’t tick all the boxes for anything. In particular, the short duration and high luminosity are hard to explain in other scenarios. The alternative is that we are seeing something entirely new!”
According to Pasham, the Einstein Probe is just beginning to scratch the surface of what seems possible. “I’m excited to chase the next weird explosion from the Einstein Probe,” he says, echoing astronomers worldwide who look forward to the prospect of discovering more unusual explosions from the farthest reaches of the cosmos.
Evelina Fedorenko receives Troland Award from National Academy of Sciences
The National Academy of Sciences (NAS) recently announced that MIT Associate Professor Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions toward understanding the language network in the human brain.
The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.
Fedorenko, an associate professor of brain and cognitive sciences and a McGovern Institute for Brain Research investigator, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.
Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network, and that every brain region that responds to syntactic processing is at least as sensitive to word meanings.
She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human language brain areas. Fedorenko also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.
Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington.
3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities
If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: An elusive target avoids his formidable adversary. This game of “cat-and-mouse” — whether literal or otherwise — involves pursuing something that ever-so-narrowly escapes you at each try.
In a similar way, evading persistent hackers is a continuous challenge for cybersecurity teams. Keeping them chasing what’s just out of reach, MIT researchers are working on an AI approach called “artificial adversarial intelligence” that mimics attackers of a device or network to test network defenses before real attacks happen. Other AI-based defensive measures help engineers further fortify their systems to avoid ransomware, data theft, or other hacks.
Here, Una-May O'Reilly, an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) principal investigator who leads the Anyscale Learning For All Group (ALFA), discusses how artificial adversarial intelligence protects us from cyber threats.
Q: In what ways can artificial adversarial intelligence play the role of a cyber attacker, and how does artificial adversarial intelligence portray a cyber defender?
A: Cyber attackers exist along a competence spectrum. At the lowest end, there are so-called script-kiddies, or threat actors who spray well-known exploits and malware in the hopes of finding some network or device that hasn't practiced good cyber hygiene. In the middle are cyber mercenaries who are better-resourced and organized to prey upon enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the most difficult-to-detect "advanced persistent threats" (or APTs).
Think of the specialized, nefarious intelligence that these attackers marshal — that's adversarial intelligence. The attackers make very technical tools that let them hack into code, they choose the right tool for their target, and their attacks have multiple steps. At each step, they learn something, integrate it into their situational awareness, and then make a decision on what to do next. For the sophisticated APTs, they may strategically pick their target, and devise a slow and low-visibility plan that is so subtle that its implementation escapes our defensive shields. They can even plant deceptive evidence pointing to another hacker!
My research goal is to replicate this specific kind of offensive or attacking intelligence, intelligence that is adversarially-oriented (intelligence that human threat actors rely upon). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterizes cyber arms races.
I should also note that cyber defenses are pretty complicated. They've evolved their complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging them into incident response systems. They have to be constantly alert to defend a very big attack surface that is hard to track and very dynamic. On this other side of attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
Another thing stands out about adversarial intelligence: Both Tom and Jerry are able to learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better, then the other, to save his skin, gets better too. This tit-for-tat improvement goes onwards and upwards! We work to replicate cyber versions of these arms races.
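That tit-for-tat dynamic can be caricatured in a few lines of code. This is a deliberately simple toy, not one of the group's actual models; the variables and update rules are made up for illustration.

```python
# Toy "arms race": an attacker's stealth and a defender's sensitivity each
# ratchet upward in response to the other side's last move. Illustrative only.
attacker_stealth = 0.1
defender_sensitivity = 0.1

outcomes = []
for _ in range(50):
    # An attack succeeds when stealth beats the defender's current sensitivity.
    attack_succeeds = attacker_stealth > defender_sensitivity
    outcomes.append(attack_succeeds)
    if attack_succeeds:
        # The defender adapts: close the gap, then add a small improvement.
        defender_sensitivity += 0.5 * (attacker_stealth - defender_sensitivity) + 0.02
    else:
        # The attacker adapts the same way.
        attacker_stealth += 0.5 * (defender_sensitivity - attacker_stealth) + 0.02

# Both sides end up far more capable than they started: the arms race.
print(round(attacker_stealth, 2), round(defender_sensitivity, 2))
```

Neither side ever wins outright; each round of adaptation raises the bar for both, which is exactly the "onwards and upwards" pattern described above.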
Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cell phone are AI-enabled!
With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, to make them capable of processing all sorts of cyber knowledge, plan attack steps, and to make informed decisions within a campaign.
Adversarially intelligent agents (like our AI cyber attackers) can be used as practice when testing network defenses. A lot of effort goes into checking a network's robustness to attack, and AI is able to help with that. Additionally, when we add machine learning to our agents, and to our defenses, they play out an arms race we can inspect, analyze, and use to anticipate what countermeasures may be used when we take measures to defend ourselves.
Q: What new risks are cyber defenses adapting to, and how do they do so?
A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release, there are vulnerabilities an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
New configurations pose the risk of errors or new ways to be attacked. We didn't imagine ransomware when we were dealing with denial-of-service attacks. Now we're juggling cyber espionage and ransomware with IP [intellectual property] theft. All our critical infrastructure, including telecom networks and financial, health care, municipal, energy, and water systems, is a target.
Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that to AI-based products and services that automate some of those efforts. And, of course, to keep designing smarter and smarter adversarial agents to keep us on our toes, or help us practice defending our cyber assets.
MIT students' works redefine human-AI collaboration
Imagine a boombox that tracks your every move and suggests music to match your personal dance style. That’s the idea behind “Be the Beat,” one of several projects from MIT course 4.043/4.044 (Interaction Intelligence), taught by Marcelo Coelho in the Department of Architecture, that were presented at the 38th annual NeurIPS (Neural Information Processing Systems) conference in December 2024. With over 16,000 attendees converging in Vancouver, NeurIPS is a competitive and prestigious conference dedicated to research and science in the field of artificial intelligence and machine learning, and a premier venue for showcasing cutting-edge developments.
The course investigates the emerging field of large language objects, and how artificial intelligence can be extended into the physical world. While “Be the Beat” transforms the creative possibilities of dance, other student submissions span disciplines such as music, storytelling, critical thinking, and memory, creating generative experiences and new forms of human-computer interaction. Taken together, these projects illustrate a broader vision for artificial intelligence: one that goes beyond automation to catalyze creativity, reshape education, and reimagine social interactions.
Be the Beat
“Be the Beat,” by Ethan Chang, an MIT mechanical engineering and design student, and Zhixing Chen, an MIT mechanical engineering and music student, is an AI-powered boombox that suggests music from a dancer's movement. Dance has traditionally been guided by music throughout history and across cultures, yet the concept of dancing to create music is rarely explored.
“Be the Beat” creates a space for human-AI collaboration on freestyle dance, empowering dancers to rethink the traditional dynamic between dance and music. It uses PoseNet to describe movements for a large language model, enabling it to analyze dance style and query APIs to find music with similar style, energy, and tempo. Dancers interacting with the boombox reported having more control over artistic expression and described the boombox as a novel approach to discovering dance genres and choreographing creatively.
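A hedged sketch of that kind of pipeline is below: pose estimates are summarized into text for a language model, whose answer drives a music query. PoseNet, the LLM call, and the music API are all stubbed out here; the function names, fields, and thresholds are hypothetical, not the students' actual code.

```python
def summarize_poses(keyframes):
    """Turn a sequence of {joint: (x, y)} pose frames into a text description."""
    # Crude proxy for movement energy: average wrist displacement between frames.
    total = 0.0
    for a, b in zip(keyframes, keyframes[1:]):
        total += abs(a["wrist"][0] - b["wrist"][0]) + abs(a["wrist"][1] - b["wrist"][1])
    energy = total / max(len(keyframes) - 1, 1)
    style = "sharp, high-energy" if energy > 0.3 else "smooth, low-energy"
    return f"The dancer's movement is {style}."

def suggest_music(description):
    """Stand-in for the LLM + music-API step: map a description to a track query."""
    if "high-energy" in description:
        return {"genre": "house", "tempo_bpm": 126}
    return {"genre": "ambient", "tempo_bpm": 80}

# Three pose frames with large wrist motion read as a high-energy dance.
frames = [{"wrist": (0.1, 0.2)}, {"wrist": (0.8, 0.9)}, {"wrist": (0.1, 0.1)}]
query = suggest_music(summarize_poses(frames))
print(query)  # {'genre': 'house', 'tempo_bpm': 126}
```

The real system presumably extracts far richer features and lets the model reason about style, but the shape of the loop, movement in, music query out, is the same.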
A Mystery for You
“A Mystery for You,” by Mrinalini Singha SM ’24, a recent graduate in the Art, Culture, and Technology program, and Haoheng Tang, a recent graduate of the Harvard University Graduate School of Design, is an educational game designed to cultivate critical thinking and fact-checking skills in young learners. The game leverages a large language model (LLM) and a tangible interface to create an immersive investigative experience. Players act as citizen fact-checkers, responding to AI-generated “news alerts” printed by the game interface. By inserting cartridge combinations to prompt follow-up “news updates,” they navigate ambiguous scenarios, analyze evidence, and weigh conflicting information to make informed decisions.
This human-computer interaction experience challenges our news-consumption habits by eliminating touchscreen interfaces, replacing perpetual scrolling and skim-reading with a haptically rich analog device. By combining the affordances of slow media with new generative media, the game promotes thoughtful, embodied interactions while equipping players to better understand and challenge today’s polarized media landscape, where misinformation and manipulative narratives thrive.
Memorscope
“Memorscope,” by MIT Media Lab research collaborator Keunwook Kim, is a device that creates collective memories by merging the deeply human experience of face-to-face interaction with advanced AI technologies. Inspired by how we use microscopes and telescopes to examine and uncover hidden and invisible details, Memorscope allows two users to “look into” each other’s faces, using this intimate interaction as a gateway to the creation and exploration of their shared memories.
The device leverages AI models such as OpenAI and Midjourney, introducing different aesthetic and emotional interpretations, which results in a dynamic and collective memory space. This space transcends the limitations of traditional shared albums, offering a fluid, interactive environment where memories are not just static snapshots but living, evolving narratives, shaped by the ongoing relationship between users.
Narratron
“Narratron,” by Harvard Graduate School of Design students Xiying (Aria) Bao and Yubo Zhao, is an interactive projector that co-creates and co-performs children's stories through shadow puppetry using large language models. Users can press the shutter to “capture” protagonists they want to be in the story, and it takes hand shadows (such as animal shapes) as input for the main characters. The system then develops the story plot as new shadow characters are introduced. The story appears through a projector as a backdrop for shadow puppetry while being narrated through a speaker as users turn a crank to “play” in real time. By combining visual, auditory, and bodily interactions in one system, the project aims to spark creativity in shadow play storytelling and enable multi-modal human-AI collaboration.
Perfect Syntax
“Perfect Syntax,” by Karyn Nakamura ’24, is a video art piece examining the syntactic logic behind motion and video. Using AI to manipulate video fragments, the project explores how the fluidity of motion and time can be simulated and reconstructed by machines. Drawing inspiration from both philosophical inquiry and artistic practice, Nakamura's work interrogates the relationship between perception, technology, and the movement that shapes our experience of the world. By reimagining video through computational processes, Nakamura investigates the complexities of how machines understand and represent the passage of time and motion.
Smart carbon dioxide removal yields economic and environmental benefits
Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.
Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products.
To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering (EW) (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.
The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.
First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.
Diversifying across multiple CDR options achieves the highest CDR deployment, around 31.5 gigatons of CO2 per year in 2100, while also proving the most cost-effective net-zero strategy. The study identifies BECCS and biochar as most cost-competitive in removing CO2 from the atmosphere, followed by EW, with DACCS as uncompetitive due to high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.
“Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.
The second finding: There is no optimal CDR portfolio that will work well at global and national levels. The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places like Brazil, Latin America, and Africa, by not only sequestering carbon in more acreage of protected forest but also helping to preserve planetary well-being and human health.
“In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”
Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.