Feed aggregator

Processing our technological angst through humor

MIT Latest News - Wed, 07/09/2025 - 12:00am

The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then the Macintosh, using its speech synthesis software, made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”

There’s a reason Jobs was doing that. For the first few decades after computing entered cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” a famous cultural touchstone in which the onboard computer, HAL, turns against the expedition’s astronauts. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.

“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.

In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.

“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”

Reversals of fortune

Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.

Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.

“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”

That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.

“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”

Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on a category of commentary he calls “the Great Tech-Industrial Joke.” It focuses on the gap between the noble-sounding aspirations the technology industry declares and the sometimes dismal outcomes its products create.

Social media, for instance, promised new worlds of connectivity and social exploration, and it has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.

“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”

A complicated, messy picture

“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.

“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.

“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”

For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.

“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”

Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”

Amplified warming accelerates deoxygenation in the Arctic Ocean

Nature Climate Change - Wed, 07/09/2025 - 12:00am

Nature Climate Change, Published online: 09 July 2025; doi:10.1038/s41558-025-02376-0

Rapid warming of the global ocean and amplified Arctic warming will alter the ocean biogeochemistry. Here the authors show that Atlantic water inflow, and the subsequent subduction and circulation, is reducing dissolved oxygen in the Arctic due to reduced solubility with increased temperatures.

EFF to US Court of Appeals: Protect Taxpayer Privacy

EFF: Updates - Tue, 07/08/2025 - 3:10pm

EFF has filed an amicus brief in Trabajadores v. Bessent, a case concerning the Internal Revenue Service (IRS) sharing protected personal tax information with the Department of Homeland Security for the purposes of immigration enforcement. Our expertise in privacy and data sharing makes us the ideal organization to step in and inform the judge: government actions like this have real-world consequences. The IRS’s sharing of data, and especially its bulk sharing, is improper and makes taxpayers vulnerable to inevitable mistakes. As a practical matter, sharing data that the IRS had previously claimed was protected undermines the trust that important civil institutions require in order to be effective.

You can read the entire brief here.

The brief makes two main arguments. The first is that the Tax Reform Act, the statute under which the IRS claims the authority to share the data, should, if it is considered ambiguous, be interpreted in light of its legislative intent and historical background, both of which disfavor disclosure. The brief reads,

Given the historical context, and decades of subsequent agency promises to protect taxpayer confidentiality and taxpayer reliance on those promises, the Administration’s abrupt decision to re-interpret §6103 to allow sharing with ICE whenever a potential “criminal proceeding” can be posited, is a textbook example of an arbitrary and capricious action even if the statute can be read to be ambiguous.

The other argument we make to the court is that data scientists agree: when you try to corroborate information between two databases in which information is only partially identifiable, mistakes happen. We argue:

Those errors result from such mundane issues as outdated information, data entry errors, and taxpayers or tax preparer submission of incorrect names or addresses. If public reports are correct, and officials intend to share information regarding 700,000 or even 7 million taxpayers, the errors will multiply, leading to the mistaken targeting, detention, deportation, and potentially even physical harm to regular taxpayers.
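To make that failure mode concrete, here is a minimal sketch of the problem the brief describes; the records and the matching rule are invented for illustration, not taken from the case:

```python
# Toy illustration with invented records (not data from the brief): naively
# joining two databases on partially identifying fields flags the wrong person.
irs_records = [
    {"name": "J. Garcia", "zip": "78701", "taxpayer_id": "A-1"},
    {"name": "J. Garcia", "zip": "78701", "taxpayer_id": "A-2"},  # a different person
]
enforcement_list = [
    {"name": "J. Garcia", "zip": "78701"},  # meant to identify one individual
]

matches = [
    rec
    for rec in irs_records
    for target in enforcement_list
    if rec["name"] == target["name"] and rec["zip"] == target["zip"]
]
print(len(matches))  # 2 -- both taxpayers are flagged; at least one mistakenly
```

With only a name and a ZIP code to join on, the query cannot distinguish two different taxpayers, and errors of exactly this kind multiply at the scale of hundreds of thousands of records.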

Information silos in the government exist for a reason. Here, the silo was designed to protect individual privacy and to prevent the executive abuse that can come with unfettered access to properly collected information. The concern motivating Congress to pass the Tax Reform Act was the same one behind the Privacy Act of 1974 and the 1978 Right to Financial Privacy Act. These laws were part of a wave of reforms that Congress considered necessary to address the misuse of tax data to spy on and harass political opponents, dissidents, civil rights activists, and anti-war protestors in the 1960s and early 1970s. Congress saw the need to ensure that data collected for one purpose is used only for that purpose, with very narrow exceptions, because anything less is prone to abuse. Yet the IRS is currently sharing information to allow ICE to enforce immigration law.

Taxation in the United States operates through a very simple agreement: the government requires taxes from people working inside the United States in order to function. To get people to pay their taxes, including undocumented immigrants living and working in the United States, the IRS has previously promised that the data it collects will not be used against a person for punitive reasons. That promise encourages people to pay taxes and alleviates the concerns that might otherwise lead them to avoid interacting with the government. But the IRS’s reversal has greatly harmed that trust and has the potential for far-reaching negative ramifications, including decreasing future tax revenue.

Consolidating government information so that the agencies responsible for healthcare, taxes, or financial support are linked to agencies that police, surveil, and fine people is a recipe for disaster. For that reason, EFF is proud to submit this amicus brief in Trabajadores v. Bessent in support of taxpayer privacy. 

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management

Flood predictions could worsen when Trump’s cuts take hold

ClimateWire News - Tue, 07/08/2025 - 6:14am
Forecasts and warnings largely worked during the Texas catastrophe. Those systems are expected to degrade as President Donald Trump’s agenda takes hold.

Chicago teachers won climate action in their contract. But funding issues loom.

ClimateWire News - Tue, 07/08/2025 - 6:13am
The school system must contend with a $700 million budget deficit and the likelihood of less money from the federal government.

Trump orders crackdown on ‘green’ subsidies

ClimateWire News - Tue, 07/08/2025 - 6:11am
An executive order tells the Treasury Department to enforce language in the GOP megabill that phases out Biden-era tax credits for wind and solar projects.

Climate activists say Alito boosted their lawsuit against EPA

ClimateWire News - Tue, 07/08/2025 - 6:11am
Two Supreme Court decisions support the argument by youth challengers that they can sue the Trump administration over climate change, a new brief said.

Dems demand investigations into deadly Texas floods

ClimateWire News - Tue, 07/08/2025 - 6:09am
Top lawmakers in the House and Senate are asking whether staffing cuts played a role.

California AG beats his own lawyers in suit related to climate case

ClimateWire News - Tue, 07/08/2025 - 6:08am
A judge sided with Rob Bonta over his decision to pay high-priced private lawyers to oversee the state's case against oil giants.

California Democrats retreat on climate

ClimateWire News - Tue, 07/08/2025 - 6:07am
A changing political climate has the party recalibrating on climate policies.

Floods like the one that hit Texas are US's top storm-related killer

ClimateWire News - Tue, 07/08/2025 - 6:06am
Waters rise so quickly that people are caught off guard, according to the National Weather Service. Many people run into trouble while traveling.

Days of monsoon rains, flooding kill at least 72 in Pakistan

ClimateWire News - Tue, 07/08/2025 - 6:06am
Emergency services have been on maximum alert since last month after 17 tourists from the same family were swept away by the Swat River in the northwest.

Greece imposes work breaks as heat wave grips country

ClimateWire News - Tue, 07/08/2025 - 6:05am
The labor ministry ordered the work stoppage, in effect from midday to 5 p.m. local time, for outdoor manual labor and food delivery services.

Challenges of institutional adaptation

Nature Climate Change - Tue, 07/08/2025 - 12:00am

Nature Climate Change, Published online: 08 July 2025; doi:10.1038/s41558-025-02388-w

Adaptation efforts require responsive and adaptive institutions. Some progress has been made, but more systematic institutional adaptation is needed given the growing climate hazards.

Study could lead to LLMs that are better at complex reasoning

MIT Latest News - Tue, 07/08/2025 - 12:00am

For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.

While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.

To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.

They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.

Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.

“Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.

Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.

Tackling hard domains

LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.

But in-context learning doesn’t always work for problems that require logic and reasoning.

The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.
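As a rough illustration of that mechanic, here is a minimal sketch in PyTorch; the tiny linear model and synthetic data are invented stand-ins for an LLM and its task examples, not the researchers’ code:

```python
# Minimal test-time training (TTT) sketch. A toy model stands in for an LLM:
# it is briefly fine-tuned on a handful of task examples at inference time,
# answers the query, then reverts to its original weights.
import copy

import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # stand-in for a pretrained LLM
original_state = copy.deepcopy(model.state_dict())

# A few (input, target) pairs for the new task -- the same examples that
# in-context learning would simply place in the prompt.
x = torch.randn(8, 4)
y = x.sum(dim=1, keepdim=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(20):  # brief test-time training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

prediction = model(torch.randn(1, 4))  # answer the actual query

model.load_state_dict(original_state)  # the update is temporary: revert
```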

The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.

“We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.

In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.

To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.
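A minimal sketch of that kind of augmentation, assuming grid-shaped puzzle inputs for illustration (the representation and the helper function are hypothetical, not taken from the paper):

```python
# Toy augmentation sketch: expand a small example set by applying the same
# transformation (here a horizontal flip) to each problem and its solution.
def hflip(grid):
    """Horizontally flip a grid given as a list of rows."""
    return [list(reversed(row)) for row in grid]

examples = [
    ([[1, 0], [0, 1]], [[0, 1], [1, 0]]),  # one (problem, solution) pair
]

augmented = list(examples)
for problem, solution in examples:
    augmented.append((hflip(problem), hflip(solution)))

print(len(augmented))  # 2: the original pair plus its flipped variant
```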

In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.
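The idea behind low-rank adaptation is to freeze the original weights and train only a small low-rank correction on top of them. A minimal sketch in PyTorch, with invented layer sizes and hyperparameters:

```python
# Low-rank adaptation (LoRA) sketch -- illustrative only. The frozen weight
# matrix W gets a trainable correction B @ A, so only r*(in+out) parameters
# are updated instead of in*out.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the original weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Original output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(1024, 1024), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 16384 adapter weights vs. ~1.05M in the frozen base layer
```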

“This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.

Developing new skills

Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.

A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.

“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.

The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.

Tasks that involved structured patterns or those which used completely unfamiliar types of data showed the largest performance improvements.

“For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.

In the future, the researchers want to use these insights toward the development of models that continually learn.

The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.

This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.

Avoid urban development policy that fuels climate risk

Nature Climate Change - Tue, 07/08/2025 - 12:00am

Nature Climate Change, Published online: 08 July 2025; doi:10.1038/s41558-025-02365-3

Urban development policies, designed to improve city resilience, could unintentionally increase the exposure to climate risk. This Comment discusses the impact of misaligned incentives, miscalculated benefits and costs, and overlooked behavioural responses on policy outcomes, as well as future directions.

MIT chemists boost the efficiency of a key enzyme in photosynthesis

MIT Latest News - Mon, 07/07/2025 - 2:00pm

During photosynthesis, an enzyme called rubisco catalyzes a key reaction — the incorporation of carbon dioxide into organic compounds to create sugars. However, rubisco, which is believed to be the most abundant enzyme on Earth, is very inefficient compared to the other enzymes involved in photosynthesis.

MIT chemists have now shown that they can greatly enhance a version of rubisco found in bacteria from a low-oxygen environment. Using a process known as directed evolution, they identified mutations that could boost rubisco’s catalytic efficiency by up to 25 percent.

The researchers now plan to apply their technique to forms of rubisco that could be used in plants to help boost their rates of photosynthesis, which could potentially improve crop yields.

“This is, I think, a compelling demonstration of successful improvement of a rubisco’s enzymatic properties, holding out a lot of hope for engineering other forms of rubisco,” says Matthew Shoulders, the Class of 1942 Professor of Chemistry at MIT.

Shoulders and Robert Wilson, a research scientist in the Department of Chemistry, are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. MIT graduate student Julie McDonald is the paper’s lead author.

Evolution of efficiency

When plants or photosynthetic bacteria absorb energy from the sun, they first convert it into energy-storing molecules such as ATP. In the next phase of photosynthesis, cells use that energy to transform a molecule known as ribulose bisphosphate into glucose, which requires several additional reactions. Rubisco catalyzes the first of those reactions, known as carboxylation. During that reaction, carbon from CO2 is added to ribulose bisphosphate.

Compared to the other enzymes involved in photosynthesis, rubisco is very slow, catalyzing only one to 10 reactions per second. Rubisco can also interact with oxygen, leading to a competing reaction that incorporates oxygen instead of carbon — a process that wastes some of the energy absorbed from sunlight.

“For protein engineers, that’s a really attractive set of problems because those traits seem like things that you could hopefully make better by making changes to the enzyme’s amino acid sequence,” McDonald says.

Previous research has led to improvement in rubisco’s stability and solubility, which resulted in small gains in enzyme efficiency. Most of those studies used directed evolution — a technique in which a naturally occurring protein is randomly mutated and then screened for the emergence of new, desirable features.

This process is usually done using error-prone PCR, a technique that first generates mutations in vitro (outside of the cell), typically introducing only one or two mutations in the target gene. In past studies on rubisco, this library of mutations was then introduced into bacteria that grow at a rate relative to rubisco activity. Limitations in error-prone PCR and in the efficiency of introducing new genes restrict the total number of mutations that can be generated and screened using this approach. Manual mutagenesis and selection steps also add more time to the process over multiple rounds of evolution.
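As a cartoon of that mutate-and-screen loop, here is a toy simulation; the bit-string “variants” and fitness function are invented stand-ins for the wet-lab process, not real enzyme chemistry:

```python
# Toy directed-evolution simulation (illustrative only, not MutaT7):
# mutate a population of variants, then keep the best performers.
import random

random.seed(0)

def fitness(variant):
    # Hypothetical stand-in for measured catalytic efficiency.
    return sum(variant)

population = [[0] * 10 for _ in range(50)]  # 50 identical starting variants

for generation in range(6):  # six rounds, as in the study
    # Error-prone replication: each position has a small chance to mutate.
    mutants = []
    for variant in population:
        child = [bit ^ (random.random() < 0.05) for bit in variant]
        mutants.append(child)
    # Selection: growth tied to enzyme activity keeps the fittest half.
    mutants.sort(key=fitness, reverse=True)
    population = mutants[: len(mutants) // 2] * 2

print(max(fitness(v) for v in population))  # fitness climbs round over round
```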

The MIT team instead used a newer mutagenesis technique that the Shoulders Lab previously developed, called MutaT7. This technique allows the researchers to perform both mutagenesis and screening in living cells, which dramatically speeds up the process. Their technique also enables them to mutate the target gene at a higher rate.

“Our continuous directed evolution technique allows you to look at a lot more mutations in the enzyme than has been done in the past,” McDonald says.

Better rubisco

For this study, the researchers began with a version of rubisco, isolated from a family of semi-anaerobic bacteria known as Gallionellaceae, that is one of the fastest forms of rubisco found in nature. During the directed evolution experiments, which were conducted in E. coli, the researchers kept the microbes in an environment with atmospheric levels of oxygen, creating evolutionary pressure to adapt to oxygen.

After six rounds of directed evolution, the researchers identified three different mutations that improved the rubisco’s resistance to oxygen. Each of these mutations is located near the enzyme’s active site (where it performs carboxylation or oxygenation). The researchers believe that these mutations improve the enzyme’s ability to interact preferentially with carbon dioxide over oxygen, which leads to an overall increase in carboxylation efficiency.

“The underlying question here is: Can you alter and improve the kinetic properties of rubisco to operate better in environments where you want it to operate better?” Shoulders says. “What changed through the directed evolution process was that rubisco began to like to react with oxygen less. That allows this rubisco to function well in an oxygen-rich environment, where normally it would constantly get distracted and react with oxygen, which you don’t want it to do.”

In ongoing work, the researchers are applying this approach to other forms of rubisco, including rubisco from plants. Plants are believed to lose about 30 percent of the energy from the sunlight they absorb through a process called photorespiration, which occurs when rubisco acts on oxygen instead of carbon dioxide.

“This really opens the door to a lot of exciting new research, and it’s a step beyond the types of engineering that have dominated rubisco engineering in the past,” Wilson says. “There are definite benefits to agricultural productivity that could be leveraged through a better rubisco.”

The research was funded, in part, by the National Science Foundation, the National Institutes of Health, an Abdul Latif Jameel Water and Food Systems Lab Grand Challenge grant, and a Martin Family Society Fellowship for Sustainability.

Professor Emeritus Barry Vercoe, a pioneering force in computer music, dies at 87

MIT Latest News - Mon, 07/07/2025 - 12:30pm

MIT Professor Emeritus Barry Lloyd Vercoe, a pioneering force in computer music, a founding faculty member of the MIT Media Lab, and a leader in the development of MIT’s Music and Theater Arts Section, passed away on June 15. He was 87.

Vercoe’s life was a rich symphony of artistry, science, and innovation that led to profound enhancements of musical experience for expert musicians as well as for the general public — and especially young people.

Born in Wellington, New Zealand, on July 24, 1937, Vercoe earned bachelor’s degrees in music (in 1959) and mathematics (in 1962) from the University of Auckland, followed by a doctor of musical arts in music composition from the University of Michigan in 1968.

After completing postdoctoral research in digital audio processing at Princeton University and a visiting lectureship at Yale University, Vercoe joined MIT’s Department of Humanities (Music) in 1971, beginning a tenure in the department that lasted through 1984. During this period, he played a key role in advancing what would become MIT’s Music and Theater Arts (MTA) Section, helping to shape its forward-thinking curriculum and interdisciplinary philosophy. Vercoe championed the integration of musical creativity with scientific inquiry, laying the groundwork for MTA’s enduring emphasis on music technology and experimental composition.

In 1973, Vercoe founded MIT’s Experimental Music Studio (EMS) — the Institute’s first dedicated computer music facility, and one of the first in the world. Operated under the auspices of the music program, EMS became a crucible for innovation in algorithmic composition, digital synthesis, and computer-assisted performance. His leadership not only positioned MIT as a hub for music technology, but also influenced how the Institute approached the intersection of the arts with engineering. This legacy is honored today by a commemorative plaque in the Kendall Square MBTA station.

Violist, faculty founder of the MIT Chamber Music Society, and Institute Professor Marcus Thompson says: “Barry was first and foremost a fine musician, and composer for traditional instruments and ensembles. As a young professor, he taught our MIT undergraduates to write and sing Renaissance counterpoint as he envisioned how the act of traditional music-making offered a guide to potential artistic interaction between humans and computers. In 1976, he enlisted me to premiere what became his iconic, and my most-performed, work, ‘Synapse for Viola and Computer.’”

During a Guggenheim Fellowship in 1982–83, Vercoe developed the Synthetic Performer, a groundbreaking real-time interactive accompaniment system, while working closely with flautist Larry Beauregard at the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris.

In 1984, Vercoe became a founding faculty member of the MIT Media Lab, where he launched the Music, Mind, and Machine group. His research spanned machine listening, music cognition, and real-time digital audio synthesis. His Csound language, created in 1985, is still widely used for music programming, and his contributions helped define the MPEG-4 Structured Audio standard.

He also served as associate academic head of the Media Lab’s graduate program in Media Arts and Sciences (MAS). Vercoe mentored many future leaders in digital music and sound computation, including two of his MAS graduate students — Anna Huang SM ’08 and Paris Smaragdis PhD ’01 — who have recently joined MIT’s music faculty, as well as Miller Puckette, an emeritus faculty member at the University of California at San Diego, and Richard Boulanger, a professor of electronic production and design at the Berklee College of Music.

“Barry Vercoe will be remembered by designers, developers, researchers, and composers for his greatest ‘composition,’ Csound, his free and open-source software synthesis language,” states Boulanger. “I know that, through Csound, Barry’s musical spirit will live on, not only in my teaching, my research, and my music, but in the apps, plugins, and musical compositions of generations to come.”

Tod Machover, faculty director of the MIT Media Lab and Muriel R. Cooper Professor of Music and Media, reflects, “Barry Vercoe was a giant in the field of computer music whose innovations in software synthesis, interactive performance, and educational tools for young people influenced and inspired many, including myself. He was a superb mentor, always making sure that artistic sensibility drove music tech innovation, and that sophisticated expression was at the core of Media Lab — and MIT — culture.”

Vercoe’s work earned numerous accolades. In addition to the Guggenheim Fellowship, he was honored with the 1992 Computerworld Smithsonian Award for innovation and the 2004 SEAMUS Lifetime Achievement Award.

Beyond MIT, Vercoe consulted with Analog Devices and collaborated with international institutions like IRCAM under the direction of Pierre Boulez. His commitment to democratizing music technology was evident in his contributions to the One Laptop per Child initiative, which brought accessible digital sound tools to young people in underserved communities worldwide.

He is survived by his former wives, Kathryn Veda Vaughn and Elizabeth Vercoe; their children, Andrea Vercoe and Scott Vercoe; and generations of students and collaborators who continue to build on his groundbreaking work. A memorial service for family will be held in New Zealand later this summer, and a special event in his honor will take place at MIT in the fall. The Media Lab will share details about the MIT gathering as they become available.

Vercoe, named professor emeritus at the MIT Media Lab upon his retirement in 2010, leaves a legacy that embodies the lab’s — and MIT’s — vision of creative, ethical, interdisciplinary research at the convergence of art, science, and technology. His music, his machines, and his generously inventive spirit will forever shape the way we listen, learn, and communicate.

New postdoctoral fellowship program to accelerate innovation in health care

MIT Latest News - Mon, 07/07/2025 - 10:00am

The MIT Health and Life Sciences Collaborative (MIT HEALS) is launching the Biswas Postdoctoral Fellowship Program to advance the work of outstanding early-career researchers in health and life sciences. Supported by a gift from the Biswas Family Foundation, the program aims to help apply cutting-edge research to improve health care and the lives of millions.

The program will support exceptional postdocs dedicated to innovation in human health care through a full range of pathways, such as leveraging AI in health-related research, developing low-cost diagnostics, and exploring the convergence of the life sciences with areas such as economics, business, policy, and the humanities. With initial funding of $12 million, five four-year fellowships will be awarded in each of the next four years, starting in early 2026.

“An essential goal of MIT HEALS is to find new ways and opportunities to deliver health care solutions at scale, and the Biswas Family Foundation shares our commitment to scalable innovation and broad impact. MIT is also in the talent business, and the foundation’s gift allows us to bring exceptional scholars to campus to explore some of the most pressing issues in human health and build meaningful connections across academia and industry. We look forward to welcoming the first cohort of Biswas Fellows to MIT,” says MIT president Sally Kornbluth.

“We are deeply honored to launch this world-class postdoctoral fellows program,” adds Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer and head of MIT HEALS. “We fully expect to attract top candidates from around the globe to lead innovative cross-cutting projects in AI and health, cancer therapies, diagnostics, and beyond. These fellows will be selected through a rigorous process overseen by a distinguished committee, and will have the opportunity to collaborate with our faculty on the most promising and impactful ideas.”

Angela Koehler, faculty lead of MIT HEALS, professor in MIT’s Department of Biological Engineering, and associate director of the Koch Institute for Integrative Cancer Research, emphasized that the objectives of MIT HEALS align well with a stated goal of the Biswas Family Foundation: to leverage “scientific and technological advancements to revolutionize health care and make a lasting impact on global public health.”

“Health care is a team sport,” Koehler says. “MIT HEALS seeks to create connections involving investigators with diverse expertise across the Institute to tackle the most transformative problems impacting human health. Members of the MIT community are well poised to participate in teams and make an impact.”

MIT HEALS also seeks to maximize its effectiveness by expanding collaboration with medical schools and hospitals, starting with defining important problems that can be approached through research, and continuing all the way to clinical studies, Koehler says.

The Biswas Family Foundation has already demonstrated a similar strategy.

“The Biswas family has a history of enabling connections and partnerships between institutions that each bring a piece to the puzzle,” Koehler says. “This could be a dataset, an algorithm, an agent, a technology platform, or patients.”

Hope Biswas, co-founder of the Biswas Family Foundation with her husband, MIT alumnus Sanjit Biswas SM ’05, also highlighted the synergies between the foundation and MIT.

“The Biswas Family Foundation is proud to support the MIT HEALS initiative, which reimagines how scientific discovery can translate into real-world health impact. Its focus on promoting interdisciplinary collaboration to find new solutions to challenges in health care aligns closely with our mission to advance science and technology to improve health outcomes at scale,” Biswas says.

“As part of this commitment,” Biswas adds, “we are especially proud to support outstanding postdoctoral scholars focused on high-impact cross-disciplinary work in fields such as computational biology, nanoscale therapeutics, women’s health, and fundamental, curiosity-driven life sciences research. We are excited to contribute to an effort that brings together cutting-edge science and a deep commitment to translating knowledge into action.”

AI and machine-learning systems present a new universe of opportunities to investigate disease, biological mechanisms, therapeutics, and health care delivery using huge datasets.

“AI and computational systems biology can improve the accuracy of diagnostic approaches, enable the development of precision medicines, improve choices related to individualized treatment strategy, and improve operational efficiency within health care systems,” says Koehler. “Sanjit and Hope’s support of broad initiatives in AI and computational systems biology will help MIT researchers explore a variety of paths to impact human health on a large scale.”

Frontiers in health-related research are increasingly found where diverse fields converge, and Koehler provides the example of how advances in high-throughput experimentation to develop large datasets “may couple well with the development of new computation or AI tools.” She adds that the four-year funding term provided by the postdoctoral fellowship is “long enough to enable fellows to think big and take on projects at interfaces, emerging as bilingual researchers at the end of the program.”

Chandrakasan sees potential in the program for the Biswas Fellows to make revolutionary progress in health research.

“I’m incredibly grateful to the Biswas Family Foundation for their generous support in enabling transformative research at MIT,” Chandrakasan says.

Exploring data and its influence on political behavior

MIT Latest News - Mon, 07/07/2025 - 10:00am

Data and politics are becoming increasingly intertwined. Today’s political campaigns and voter mobilization efforts are entirely data-driven. Voters, pollsters, and elected officials rely on data to make choices that have local, regional, and national impacts.

A Department of Political Science course offers students tools to help make sense of these choices and their outcomes.

In class 17.831 (Data and Politics), students are introduced to principles and practices necessary to understand electoral and other types of political behavior. Taught by associate professor of political science Daniel Hidalgo, students use real-world datasets to explore topics like election polling and prediction, voter turnout, voter targeting, and shifts in public opinion over time.

The course teaches students to describe why and how the use of data and statistical methods has changed electoral politics, to understand the basic principles of social science statistics, and to analyze data using modern statistical computing tools. The course capstone is an original project that involves the collection, analysis, and interpretation of original survey data of the kind used in modern campaigns.
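As a small taste of the statistics such a project involves, here is a minimal polling calculation; the sample numbers are hypothetical:

```python
# Point estimate and 95% confidence interval for a candidate's support,
# assuming a simple random sample and the normal approximation.
import math

n = 800            # respondents (assumed)
supporters = 424   # say they back the candidate (assumed)

p_hat = supporters / n
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"support = {p_hat:.1%} +/- {margin:.1%}")  # ~53.0% +/- 3.5%
```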

“I wanted to create an applied, practice-based course that would appeal to undergraduates and provide a foundation for parsing, understanding, and reporting on large datasets in politics,” says Hidalgo, who redesigned the course for the spring 2025 semester.

Hidalgo, who also works in the Political Methodology Lab at MIT, investigates the political economy of elections, campaigns, and representation in developing democracies, especially in Latin America, as well as quantitative methods in the social sciences.

Politics and modernity

The influence of, and access to, artificial intelligence and large language models makes a course like Data and Politics even more important, Hidalgo says. “You have to understand the people at the other end of the data,” he argues.

The course also centers the human element in politics, exploring conflict and bias, their structures, and their impacts, while also working to improve information literacy and coherent storytelling.

“Data analysis and collection will never be perfect,” Hidalgo says. “But analyzing and understanding who holds which ideas, and why, and using the information to tell a coherent story is valuable in politics and elsewhere.”

The “always on” nature of news and related content, coupled with the variety of communications channels available to voters, has increased the complexity of the data collection process in polling and campaigns. “In the past, people would answer the phone when you called their homes,” Hidalgo notes, describing analog methods previously used to collect voter data. Now, political scientists, data analysts, and others must contend with the availability of streaming content, mobile devices, and other channels comprising a vast, fractured media ecosystem.

The course opens a window into what happens behind the scenes of local and national political campaigns, which appealed to second-year political science major Jackson Hamilton. “I took this class hoping to expand my ability to use coding for political science applications, and in order to better understand how political models and predictions work,” he says.

“We tailor-made our own sets of questions and experimental designs that we thought would be interesting,” Hamilton adds. “I found that political issues that get a lot of media coverage are not necessarily the same issues which divide lawmakers, at least locally.”

Transparency and accountability in politics and other areas

Teaching students to use tools like polling and data analysis effectively can improve their ability to identify and combat disinformation and misinformation. “As a political scientist, I’m substantively engaged,” Hidalgo says, “and I’d like to help others be engaged, too.”

“There’s lots of data available, and this course provides a foundation and the resources necessary to understand and visualize it,” Hidalgo continues. “The ability to design, implement, and understand surveys has value inside and outside the classroom.”

In politics, Hidalgo believes equipping students to navigate these spaces effectively can potentially improve and increase civic engagement. Data, he says, can help defend ideas. “There’s so much information, it’s important to develop the skills and abilities necessary to understand and visualize it,” he says. “This has value for everyone.”

Second-year physics major Sean Wilson, who also took the class this spring, notes the value of data visualization and analysis both as a potential physicist and a voter. “Data analysis in both politics and in physics is essential work given that voting tendencies, public opinion, and government leadership change so often in the United States,” he says, “and that modeling can be used to support physical hypotheses and improve our understanding of how things work.”

For Wilson, the course can help anyone interested in understanding large groups’ behaviors. “Political scientists are constantly working to better understand how and why certain events occur in U.S. politics, and data analysis is an effective tool for doing so,” he says. “Members of a representative democracy can make better decisions with this kind of information.”

Hamilton, meanwhile, learned more about the behind-the-scenes machinery at work in electoral politics. “I had the opportunity to create a couple of budget trade-off questions, to get a sense of what people actually thought the government should spend money on when they had to make choices,” he says.

“Computer science and data science aren’t just useful for STEM applications; data science approaches can also be extremely useful in many social sciences,” Hamilton argues.

“[Hidalgo helped me realize] that I needed to understand and use data science approaches to gain a deeper understanding of my areas of interest,” Hamilton says. “He focuses on how different approaches in coding can be applied to different types of problems in political science.” 
