Feed aggregator

EFFecting Change: Pride in Digital Freedom

EFF: Updates - Wed, 06/11/2025 - 8:06pm

Join us for our next EFFecting Change livestream this Thursday! We're talking about emerging laws and platform policies that affect the digital privacy and free expression rights of the LGBT+ community, and how this echoes the experience of marginalized people across the world.

EFFecting Change Livestream Series:
Pride in Digital Freedom
Thursday, June 12th
4:00 PM - 5:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

Join our panel featuring EFF Senior Staff Technologist Daly Barnett, EFF Legislative Activist Rindala Alajaji, Chosen Family Law Center Senior Legal Director Andy Izenson, and Woodhull Freedom Foundation Chief Operations Officer Mandy Salley while they discuss what is happening and what should change to protect digital freedom.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Congress Can Act Now to Protect Reproductive Health Data

EFF: Updates - Wed, 06/11/2025 - 6:58pm

State, federal, and international regulators are increasingly concerned about the harms they believe the internet and new technology are causing to users of all categories. Lawmakers are currently considering many proposals that are intended to provide protections to the most vulnerable among us. Too often, however, those proposals do not carefully consider the likely unintended consequences or even whether the law will actually reduce the harms it’s supposed to target. That’s why EFF supports Rep. Sara Jacobs’ newly reintroduced “My Body, My Data” Act, which will protect the privacy and safety of people seeking reproductive health care, while maintaining important constitutional protections and avoiding any erosion of end-to-end encryption.

Privacy fears should never stand in the way of healthcare. That's why this common-sense bill will require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone requests.

These restrictions apply to companies that collect personal information related to a person’s reproductive or sexual health. That includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. 

We are proud to join Center for Democracy and Technology, Electronic Privacy Information Center, National Partnership for Women & Families, Planned Parenthood Federation of America, Reproductive Freedom for All, Physicians for Reproductive Health, National Women’s Law Center, National Abortion Federation, Catholics for Choice, National Council for Jewish Women, Power to Decide, United for Reproductive & Gender Equity, Indivisible, Guttmacher, National Network of Abortion Funds, and All* Above All in support of this bill.

In addition to the restrictions on company data processing, this bill also provides people with necessary rights to access and delete their reproductive health information. Companies must also publish a privacy policy, so that everyone can understand what information companies process and why. It also ensures that companies are held to public promises they make about data protection and gives the Federal Trade Commission the authority to hold them to account if they break those promises. 

The bill also lets people take on companies that violate their privacy with a strong private right of action. Empowering people to bring their own lawsuits not only places more control in the individual's hands, but also ensures that companies will not take these regulations lightly. 

Finally, while Rep. Jacobs' bill establishes an important national privacy foundation for everyone, it also leaves room for states to pass stronger or complementary laws to protect the data privacy of those seeking reproductive health care. 

We thank Rep. Jacobs and Sens. Mazie Hirono and Ron Wyden for taking up this important bill and using it as an opportunity not only to protect those seeking reproductive health care, but also to highlight why data privacy is an important element of reproductive justice.

Decarbonizing steel is as tough as steel

MIT Latest News - Wed, 06/11/2025 - 4:30pm

The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly within the iron and steel sector, industry’s largest emitter of CO2.

Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for heat, converting iron ore to iron, and making steel strong. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.

Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), along with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).

Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could result in deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.

Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if it is generated by renewable energy sources), using more scrap steel, and implementing energy efficiency measures.

“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).

This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.

The shadow architects of power

MIT Latest News - Wed, 06/11/2025 - 4:25pm

In Washington, where conversations about Russia often center on a single name, political science doctoral candidate Suzanne Freeman is busy redrawing the map of power in autocratic states. Her research upends prevailing narratives about Vladimir Putin’s Russia, asking us to look beyond the individual to understand the system that produced him.

“The standard view is that Putin originated Russia’s system of governance and the way it engages with the world,” Freeman explains. “My contention is that Putin is a product of a system rather than its author, and that his actions are very consistent with the foreign policy beliefs of the organization in which he was educated.”

That organization — the KGB and its successor agencies — stands at the center of Freeman’s dissertation, which examines how authoritarian intelligence agencies intervene in their own states’ foreign policy decision-making processes, particularly decisions about using military force.

Dismantling the “yes men” myth

Past scholarship has relied on an oversimplified characterization of intelligence agencies in authoritarian states. “The established belief that I’m challenging is essentially that autocrats surround themselves with ‘yes’ men,” Freeman says. She notes that this narrative stems in great part from a famous Soviet failure, when intelligence officers were too afraid to contradict Stalin’s belief that Nazi Germany wouldn’t invade in 1941.

Freeman’s research reveals a far more complex reality. Through extensive archival work, including newly declassified documents from Lithuania, Moldova, and Poland, she shows that intelligence agencies in authoritarian regimes actually have distinct foreign policy preferences and actively work to advance them.

“These intelligence agencies are motivated by their organizational interests, seeking to survive and hold power inside and beyond their own borders,” Freeman says.

When an international situation threatens those interests, authoritarian intelligence agencies may intervene in the policy process using strategies Freeman has categorized in an innovative typology: indirect manipulation (altering collected intelligence), direct manipulation (misrepresenting analyzed intelligence), preemption in the field (unauthorized actions that alter a foreign crisis), and coercion (threats against political leadership).

“By intervene, I mean behaving in some way that’s inappropriate in accordance with what their mandate is,” Freeman explains. That mandate includes providing policy advice. “But sometimes intelligence agencies want to make their policy advice look more attractive by manipulating information,” she notes. “They may change the facts out on the ground, or in very rare circumstances, coerce policymakers.”

From Soviet archives to modern Russia

Rather than studying contemporary Russia alone, Freeman uses historical case studies of the Soviet Union’s KGB. Her research into this agency’s policy intervention covers eight foreign policy crises between 1950 and 1981, including uprisings in Eastern Europe, the Sino-Soviet border dispute, and the Soviet-Afghan War.

What she discovered contradicts prior assumptions that the agency was primarily a passive information provider. “The KGB had always been important for Soviet foreign policy and gave policy advice about what they thought should be done,” she says. Intelligence agencies were especially likely to pursue policy intervention when facing a “dual threat”: domestic unrest sparked by foreign crises combined with the loss of intelligence networks abroad.

This organizational motivation, rather than simply following a leader’s preferences, drove policy recommendations in predictable ways.

Freeman sees striking parallels to Russia’s recent actions in Ukraine. “This dual organizational threat closely mirrors the threat that the KGB faced in Hungary in 1956, Czechoslovakia in 1968, and Poland from 1980 to 1981,” she explains. After 2014, Ukrainian intelligence reform weakened Russian intelligence networks in the country — a serious organizational threat to Russia’s security apparatus.

“Between 2014 and 2022, this network weakened,” Freeman notes. “We know that Russian intelligence had ties with a polling firm in Ukraine, where they had data saying that 84 percent of the population would view them as occupiers, that almost half of the Ukrainian population was willing to fight for Ukraine.” In spite of these polls, officers recommended going into Ukraine anyway.

This pattern resembles the KGB’s advocacy for invading Afghanistan using the manipulation of intelligence — a parallel that helps explain Russia’s foreign policy decisions beyond just Putin’s personal preferences.

Scholarly detective work

Freeman’s research innovations have allowed her to access previously unexplored material. “From a methodological perspective, it’s new archival material, but it’s also archival material from regions of a country, not the center,” she explains.

In Moldova, she examined previously classified KGB documents: a huge volume of newly available, unstructured material that provided insight into how anti-Soviet sentiment during foreign crises affected the KGB.

Freeman’s willingness to search beyond central archives distinguishes her approach, especially valuable as direct research in Russia becomes increasingly difficult. “People who want to study Russia or the Soviet Union who are unable to get to Russia can still learn very meaningful things, even about the central state, from these other countries and regions.”

From Boston to Moscow to MIT

Freeman grew up in Boston in an academic, science-oriented family; both her parents were immunologists. Going against the grain, she was drawn to history, particularly Russian and Soviet history, beginning in high school.

“I was always curious about the Soviet Union and why it fell apart, but I never got a clear answer from my teachers,” says Freeman. “This really made me want to learn more and solve that puzzle myself." 

At Columbia University, she majored in Slavic studies and completed a master’s degree at the School of International and Public Affairs. Her undergraduate thesis examined Russian military reform, a topic that gained new relevance after Russia’s 2014 invasion of Ukraine.

Before beginning her doctoral studies at MIT, Freeman worked at the Russia Maritime Studies Institute at the U.S. Naval War College, researching Russian military strategy and doctrine. There, surrounded by scholars with political science and history PhDs, she found her calling.

“I decided I wanted to be in an academic environment where I could do research that I thought would prove valuable,” she recalls.

Bridging academia and public education

Beyond her core research, Freeman has established herself as an innovator in war-gaming methodology. With fellow PhD student Benjamin Harris, she co-founded the MIT Wargaming Working Group, which has developed a partnership with the Naval Postgraduate School to bring mid-career military officers and academics together for annual simulations.

Their work on war-gaming as a pedagogical tool resulted in a peer-reviewed publication in PS: Political Science & Politics titled “Crossing a Virtual Divide: Wargaming as a Remote Teaching Tool.” This research demonstrates that war games are effective tools for active learning even in remote settings and can help bridge the civil-military divide.

When not conducting research, Freeman works as a tour guide at the International Spy Museum in Washington. “I think public education is important — plus they have a lot of really cool KGB objects,” she says. “I felt like working at the Spy Museum would help me keep thinking about my research in a more fun way and hopefully help me explain some of these things to people who aren’t academics.”

Looking beyond individual leaders

Freeman’s work offers vital insight for policymakers who too often focus exclusively on autocratic leaders, rather than the institutional systems surrounding them. “I hope to give people a new lens through which to view the way that policy is made,” she says. “The intelligence agency and the type of advice that it provides to political leadership can be very meaningful.”

As tensions with Russia continue, Freeman believes her research provides a crucial framework for understanding state behavior beyond individual personalities. “If you're going to be negotiating and competing with these authoritarian states, thinking about the leadership beyond the autocrat seems very important.”

Currently completing her dissertation as a predoctoral fellow at George Washington University’s Institute for Security and Conflict Studies, Freeman aims to contribute critical scholarship on Russia’s role in international security and inspire others to approach complex geopolitical questions with systematic research skills.

“In Russia and other authoritarian states, the intelligence system may endure well beyond a single leader’s reign,” Freeman notes. “This means we must focus not on the figures who dominate the headlines, but on the institutions that shape them.” 

Bringing meaning into technology deployment

MIT Latest News - Wed, 06/11/2025 - 4:15pm

In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research that incorporates social, ethical, and technical considerations and expertise, each supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects that received up to $100,000 in funding.

“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”

“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research, in the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.

The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.

Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of which are available to watch on YouTube, included:

Making the kidney transplant system fairer

Policies regulating the organ transplant system in the United States are made by a national committee and often take more than six months to create, then years to implement, a timeline that many on the waiting list simply can’t survive.

Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.

Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the impact the new algorithm has:

“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
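To make the flavor of such an optimization concrete, here is a minimal sketch of scoring and matching donor kidneys to candidates as an assignment problem. The criteria, weights, and data are invented for illustration; this is not the UNOS policy or the algorithm described above.

```python
# Toy kidney-candidate matching: score each pairing with made-up weights and
# solve the resulting assignment problem. Purely illustrative; not the real model.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_kidneys, n_candidates = 3, 5

# Assumed candidate and logistics data.
mortality_risk = rng.uniform(0.05, 0.60, n_candidates)          # higher = more urgent
years_waiting = rng.uniform(0.0, 8.0, n_candidates)
distance_km = rng.uniform(10, 2000, (n_kidneys, n_candidates))  # donor hospital to candidate

# Composite benefit of giving kidney i to candidate j (weights are assumptions).
benefit = 3.0 * mortality_risk + 0.5 * years_waiting - 0.001 * distance_km

# Maximize total benefit across the match by minimizing its negative.
kidney_idx, candidate_idx = linear_sum_assignment(-benefit)
for i, j in zip(kidney_idx, candidate_idx):
    print(f"kidney {i} -> candidate {j} (score {benefit[i, j]:.2f})")
```

A real allocation model layers on clinical compatibility, fairness constraints, and policy rules, which is exactly where cutting evaluation time from hours to seconds matters.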

The ethics of AI-generated social media content

As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.

In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions affected users’ perception of deception, their intent to engage with the post, and ultimately whether they believed the post was true or false.

“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”

Using AI to increase civil discourse online

“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.

Online deliberative platforms have recently been rising in popularity across the United States in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say — but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and secondly, online discourse has become increasingly “uncivil.”

The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.

Tsai told the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”

A public think tank that considers all aspects of AI

When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank, but a framework — one that articulated how artificial intelligence and machine learning work could integrate community methods and utilize participatory design.

In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.

“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” said D’Ignazio.

EPA revoking Biden’s climate limits for power plants

ClimateWire News - Wed, 06/11/2025 - 2:08pm
The 2024 regulation required existing coal-burning power plants to begin capturing their carbon dioxide pollution in the 2030s.

Photonic processor could streamline 6G wireless signal processing

MIT Latest News - Wed, 06/11/2025 - 2:00pm

As more connected devices demand an increasing amount of bandwidth for tasks like teleworking and cloud computing, it will become extremely challenging to manage the finite amount of wireless spectrum available for all users to share.

Engineers are employing artificial intelligence to dynamically manage the available wireless spectrum, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and can’t operate in real-time.

Now, MIT researchers have developed a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.

The photonic chip is about 100 times faster than the best digital alternative, while converging to about 95 percent accuracy in signal classification. The new hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. At the same time, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.

The device could be especially useful in future 6G wireless applications, such as cognitive radios that optimize data rates by adapting wireless modulation formats to the changing wireless environment.

By enabling an edge device to perform deep-learning computations in real-time, this new hardware accelerator could provide dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles make split-second reactions to environmental changes or enable smart pacemakers to continuously monitor the health of a patient’s heart.

“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.

He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research appears today in Science Advances.

Light-speed processing  

State-of-the-art digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computationally intensive nature of deep neural networks makes it infeasible for many time-sensitive applications.

Optical systems can accelerate deep neural networks by encoding and processing data using light, which is also less energy intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when used for signal processing, while ensuring the optical device is scalable.

By developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN), the researchers tackled that problem head-on.

The MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain — before the wireless signals are digitized.

The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.

Thanks to this innovative design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require one device for each individual computational unit, or “neuron.”

“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.   

The researchers accomplish this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.

Results in nanoseconds

MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries.
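To give a concrete, purely illustrative sense of what modulation classification involves, the toy sketch below generates synthetic BPSK and QPSK bursts and classifies them with a simple statistical feature. It is not the MAFT-ONN or the paper’s digital baseline; the deep network is swapped for a second-moment feature and a nearest-centroid rule so the example stays self-contained, and all signal parameters are assumptions.

```python
# Toy modulation classifier on synthetic IQ bursts (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

def synth_iq(modulation, n_symbols=256, sps=8, noise=0.1):
    """Generate a toy baseband IQ burst for 'bpsk' or 'qpsk'."""
    if modulation == "bpsk":
        symbols = rng.choice([1 + 0j, -1 + 0j], n_symbols)
    else:  # qpsk
        symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_symbols) / np.sqrt(2)
    iq = np.repeat(symbols, sps)  # rectangular pulse shaping
    return iq + noise * (rng.standard_normal(iq.size) + 1j * rng.standard_normal(iq.size))

def feature(iq):
    # Normalized second moment |E[x^2]| / E[|x|^2]: near 1 for BPSK, near 0 for QPSK.
    return np.array([abs(np.mean(iq**2)) / np.mean(np.abs(iq) ** 2)])

# "Train" a nearest-centroid classifier from a handful of example bursts.
centroids = {m: np.mean([feature(synth_iq(m)) for _ in range(50)], axis=0)
             for m in ("bpsk", "qpsk")}

def classify(iq):
    f = feature(iq)
    return min(centroids, key=lambda m: np.linalg.norm(f - centroids[m]))

tests = [(m, classify(synth_iq(m))) for m in ("bpsk", "qpsk") for _ in range(20)]
accuracy = sum(truth == pred for truth, pred in tests) / len(tests)
print(f"toy modulation-classification accuracy: {accuracy:.2f}")
```

The article’s point is that the photonic chip performs this kind of inference on the analog signal itself, in the frequency domain and in nanoseconds, before the signal is ever digitized.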

One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.

“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.

When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot and quickly converged to more than 99 percent accuracy using multiple measurements. MAFT-ONN required only about 120 nanoseconds to perform the entire process.

“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.

While state-of-the-art digital radio frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.

Moving forward, the researchers want to employ what are known as multiplexing schemes so they could perform more computations and scale up the MAFT-ONN. They also want to extend their work into more complex deep learning architectures that could run transformer models or LLMs.

This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.

Have a damaged painting? Restore it in just hours with an AI-generated “mask”

MIT Latest News - Wed, 06/11/2025 - 11:00am

Art restoration takes steady hands and a discerning eye. For centuries, conservators have restored paintings by identifying areas needing repair, then mixing an exact shade to fill in one area at a time. Often, a painting can have thousands of tiny regions requiring individual attention. Restoring a single painting can take anywhere from a few weeks to over a decade.

In recent years, digital restoration tools have opened a route to creating virtual representations of original, restored works. These tools apply techniques of computer vision, image recognition, and color matching to generate a “digitally restored” version of a painting relatively quickly.

Still, there has been no way to translate digital restorations directly onto an original work, until now. In a paper appearing today in the journal Nature, Alex Kachkine, a mechanical engineering graduate student at MIT, presents a new method he’s developed to physically apply a digital restoration directly onto an original painting.

The restoration is printed on a very thin polymer film, in the form of a mask that can be aligned and adhered to an original painting. It can also be easily removed. Kachkine says that a digital file of the mask can be stored and referred to by future conservators, to see exactly what changes were made to restore the original painting.

“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine says. “And that’s never really been possible in conservation before.”

As a demonstration, he applied the method to a highly damaged 15th century oil painting. The method automatically identified 5,612 separate regions in need of repair, and filled in these regions using 57,314 different colors. The entire process, from start to finish, took 3.5 hours, which he estimates is about 66 times faster than traditional restoration methods.

Kachkine acknowledges that, as with any restoration project, there are ethical issues to consider, in terms of whether a restored version is an appropriate representation of an artist’s original style and intent. Any application of his new method, he says, should be done in consultation with conservators with knowledge of a painting’s history and origins.

“There is a lot of damaged art in storage that might never be seen,” Kachkine says. “Hopefully with this new method, there’s a chance we’ll see more art, which I would be delighted by.”

Digital connections

The new restoration process started as a side project. In 2021, as Kachkine made his way to MIT to start his PhD program in mechanical engineering, he drove up the East Coast and made a point to visit as many art galleries as he could along the way.

“I’ve been into art for a very long time now, since I was a kid,” says Kachkine, who restores paintings as a hobby, using traditional hand-painting techniques. As he toured galleries, he came to realize that the art on the walls is only a fraction of the works that galleries hold. Much of the art that galleries acquire is stored away because the works are aged or damaged, and take time to properly restore.

“Restoring a painting is fun, and it’s great to sit down and infill things and have a nice evening,” Kachkine says. “But that’s a very slow process.”

As he has learned, digital tools can significantly speed up the restoration process. Researchers have developed artificial intelligence algorithms that quickly comb through huge amounts of data. The algorithms learn connections within this visual data, which they apply to generate a digitally restored version of a particular painting, in a way that closely resembles the style of an artist or time period. However, such digital restorations are usually displayed virtually or printed as stand-alone works and cannot be directly applied to retouch original art.

“All this made me think: If we could just restore a painting digitally, and effect the results physically, that would resolve a lot of pain points and drawbacks of a conventional manual process,” Kachkine says.

“Align and restore”

For the new study, Kachkine developed a method to physically apply a digital restoration onto an original painting, using a 15th-century painting that he acquired when he first came to MIT. His new method involves first using traditional techniques to clean a painting and remove any past restoration efforts.

“This painting is almost 600 years old and has gone through conservation many times,” he says. “In this case there was a fair amount of overpainting, all of which has to be cleaned off to see what’s actually there to begin with.”

He scanned the cleaned painting, including the many regions where paint had faded or cracked. He then used existing artificial intelligence algorithms to analyze the scan and create a virtual version of what the painting likely looked like in its original state.

Then, Kachkine developed software that creates a map of regions on the original painting that require infilling, along with the exact colors needed to match the digitally restored version. This map is then translated into a physical, two-layer mask that is printed onto thin polymer-based films. The first layer is printed in color, while the second layer is printed in the exact same pattern, but in white.
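As a rough sketch of what that mapping step could look like in software, the snippet below compares an aligned damaged scan against a digital restoration, labels the regions that differ, and records a fill color for each. The threshold, the use of scipy.ndimage, and the synthetic images are illustrative assumptions, not details from the paper.

```python
# Illustrative damage-map construction from an aligned scan and digital restoration.
import numpy as np
from scipy import ndimage

def infill_map(damaged, restored, threshold=30):
    """damaged, restored: uint8 RGB arrays of shape (H, W, 3), already registered."""
    diff = np.abs(damaged.astype(int) - restored.astype(int)).max(axis=2)
    needs_fill = diff > threshold                   # pixels the printed mask must cover
    labels, n_regions = ndimage.label(needs_fill)   # group pixels into damage regions
    regions = []
    for i in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == i)
        color = restored[ys, xs].mean(axis=0)       # average target color for the region
        regions.append({"pixels": len(ys),
                        "fill_rgb": tuple(int(round(c)) for c in color)})
    return needs_fill, regions

# Synthetic example: a flat gray "restoration" and a scan with two small losses.
restored = np.full((64, 64, 3), 128, dtype=np.uint8)
damaged = restored.copy()
damaged[10:14, 10:14] = 0        # a dark paint loss
damaged[40:42, 30:45] = 255      # a bright abrasion
mask, regions = infill_map(damaged, restored)
print(f"{len(regions)} regions need infilling:", regions)
```

In the actual process described here, each region’s target color is then separated into the color and white layers of the printed mask so the overlaid films can reproduce the full spectrum.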

“In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains. “If those two layers are misaligned, that’s very easy to see. So I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”

Kachkine used high-fidelity commercial inkjets to print the mask’s two layers, which he carefully aligned and overlaid by hand onto the original painting and adhered with a thin spray of conventional varnish. The printed films are made from materials that can be easily dissolved with conservation-grade solutions, in case conservators need to reveal the original, damaged work. The digital file of the mask can also be saved as a detailed record of what was restored.

For the painting that Kachkine used, the method was able to fill in thousands of losses in just a few hours. “A few years ago, I was restoring this baroque Italian painting with probably the same order of magnitude of losses, and it took me nine months of part-time work,” he recalls. “The more losses there are, the better this method is.”

He estimates that the new method can be orders of magnitude faster than traditional, hand-painted approaches. If the method is adopted widely, he emphasizes that conservators should be involved at every step in the process, to ensure that the final work is in keeping with an artist’s style and intent.

“It will take a lot of deliberation about the ethical challenges involved at every stage in this process to see how this can be applied in a way that’s most consistent with conservation principles,” he says. “We’re setting up a framework for developing further methods. As others work on this, we’ll end up with methods that are more precise.”

This work was supported, in part, by the John O. and Katherine A. Lutz Memorial Fund. The research was carried out, in part, through the use of equipment and facilities at MIT.nano, with additional support from the MIT Microsystems Technology Laboratories, the MIT Department of Mechanical Engineering, and the MIT Libraries.

What to look for in Zeldin’s power plant rule repeal

ClimateWire News - Wed, 06/11/2025 - 6:21am
EPA will unveil a proposal Wednesday to roll back pollution limits on power generation.

Energy secretary is forced to defend Trump’s Empire Wind rescue

ClimateWire News - Wed, 06/11/2025 - 6:20am
Chris Wright's comments offer the most detailed explanation yet for the administration's secretive decision to lift its stop-work order on the New York project.

Trump says FEMA overhaul will come after hurricane season

ClimateWire News - Wed, 06/11/2025 - 6:19am
The president's remarks Tuesday signal that states will continue to get federal disaster aid this year but may see changes in 2026.

Climate change fueled May’s record-breaking Arctic heat

ClimateWire News - Wed, 06/11/2025 - 6:18am
Melting on the Greenland ice sheet rose to 17 times its normal rate as temperatures soared.

New Zealand greens sue government over climate plan

ClimateWire News - Wed, 06/11/2025 - 6:17am
The lawsuit is said to be the first in the world to challenge a government for relying on tree planting to address climate change.

Northern India heat wave disrupts lives, raises health worries

ClimateWire News - Wed, 06/11/2025 - 6:16am
The searing heat is not just a seasonal discomfort but underscores a growing challenge for the country's overwhelmed health infrastructure.

Greta Thunberg hits back at Donald Trump over anger management jibe

ClimateWire News - Wed, 06/11/2025 - 6:16am
The Swedish activist battles the U.S. president again. This time it's over her Gaza relief mission.

Fitch warns of rising mortgage-bond risk due to extreme weather

ClimateWire News - Wed, 06/11/2025 - 6:15am
The physical fallout of climate change has implications for corners of fixed-income markets traditionally viewed as among the safest in the world.

Greece orders first evacuation of wildfire season

ClimateWire News - Wed, 06/11/2025 - 6:14am
The Mediterranean country has taken a more robust response to wildfires in recent years.

Window-sized device taps the air for safe drinking water

MIT Latest News - Wed, 06/11/2025 - 5:00am

Today, 2.2 billion people in the world lack access to safe drinking water. In the United States, more than 46 million people experience water insecurity, living with either no running water or water that is unsafe to drink. The increasing need for drinking water is stretching traditional resources such as rivers, lakes, and reservoirs.

To improve access to safe and affordable drinking water, MIT engineers are tapping into an unconventional source: the air. The Earth’s atmosphere contains millions of billions of gallons of water in the form of vapor. If this vapor can be efficiently captured and condensed, it could supply clean drinking water in places where traditional water resources are inaccessible.

With that goal in mind, the MIT team has developed and tested a new atmospheric water harvester and shown that it efficiently captures water vapor and produces safe drinking water across a range of relative humidities, including dry desert air.

The new device is a black, window-sized vertical panel, made from a water-absorbent hydrogel material, enclosed in a glass chamber coated with a cooling layer. The hydrogel resembles black bubble wrap, with small dome-shaped structures that swell when the hydrogel soaks up water vapor. When the captured vapor evaporates, the domes shrink back down in an origami-like transformation. The evaporated vapor then condenses on the glass, where it can flow down and out through a tube, as clean and drinkable water.

The system runs entirely on its own, without a power source, unlike other designs that require batteries, solar panels, or electricity from the grid. The team ran the device for over a week in Death Valley, California — the driest region in North America. Even in very low-humidity conditions, the device squeezed drinking water from the air at rates of up to 160 milliliters (about two-thirds of a cup) per day.

The team estimates that multiple vertical panels, set up in a small array, could passively supply a household with drinking water, even in arid desert environments. What’s more, the system’s water production should increase with humidity, supplying drinking water in temperate and tropical climates.

“We have built a meter-scale device that we hope to deploy in resource-limited regions, where even a solar cell is not very accessible,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering and Civil and Environmental Engineering at MIT. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact.”

Zhao and his colleagues present the details of the new water harvesting design in a paper appearing today in the journal Nature Water. The study’s lead author is former MIT postdoc “Will” Chang Liu, who is currently an assistant professor at the National University of Singapore (NUS). MIT co-authors include Xiao-Yun Yan, Shucong Li, and Bolei Deng, along with collaborators from multiple other institutions.

Carrying capacity

Hydrogels are soft, porous materials that are made mainly from water and a microscopic network of interconnecting polymer fibers. Zhao’s group at MIT has primarily explored the use of hydrogels in biomedical applications, including adhesive coatings for medical implants, soft and flexible electrodes, and noninvasive imaging stickers.

“Through our work with soft materials, one property we know very well is the way hydrogel is very good at absorbing water from air,” Zhao says.

Researchers are exploring a number of ways to harvest water vapor for drinking water. Among the most efficient so far are devices made from metal-organic frameworks, or MOFs — ultra-porous materials that have also been shown to capture water from dry desert air. But the MOFs do not swell or stretch when absorbing water, and are limited in vapor-carrying capacity.

Water from air

The group’s new hydrogel-based water harvester addresses another key problem in similar designs. Other groups have designed water harvesters out of micro- or nano-porous hydrogels. But the water produced from these designs can be salty, requiring additional filtering. Salt is a naturally absorbent material, and researchers embed salts — typically, lithium chloride — in hydrogel to increase the material’s water absorption. The drawback, however, is that this salt can leak out with the water when it is eventually collected.

The team’s new design significantly limits salt leakage. Within the hydrogel itself, they included an extra ingredient: glycerol, a liquid compound that naturally stabilizes salt, keeping it within the gel rather than letting it crystallize and leak out with the water. The hydrogel itself has a microstructure that lacks nanoscale pores, which further prevents salt from escaping the material. The salt levels in the water they collected were below the standard threshold for safe drinking water, and significantly below the levels produced by many other hydrogel-based designs.

In addition to tuning the hydrogel’s composition, the researchers made improvements to its form. Rather than keeping the gel as a flat sheet, they molded it into a pattern of small domes resembling bubble wrap, which increase the gel’s surface area and the amount of water vapor it can absorb.

The researchers fabricated a half-square-meter sheet of hydrogel and encased the material in a window-like glass chamber. They coated the exterior of the chamber with a special polymer film, which helps to cool the glass and stimulates any water vapor in the hydrogel to evaporate and condense onto the glass. They installed a simple tubing system to collect the water as it flows down the glass.

In November 2023, the team traveled to Death Valley, California, and set up the device as a vertical panel. Over seven days, they took measurements as the hydrogel absorbed water vapor during the night (the time of day when water vapor in the desert is highest). In the daytime, with help from the sun, the harvested water evaporated out from the hydrogel and condensed onto the glass.

Over this period, the device worked across a range of humidities, from 21 to 88 percent, and produced between 57 and 161.5 milliliters of drinking water per day. Even in the driest conditions, the device harvested more water than other passive and some actively powered designs.
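As back-of-the-envelope arithmetic on those figures, the short calculation below estimates how many such panels an array would need; the household drinking-water demand is an assumed number for illustration, not from the study.

```python
# Rough sizing using the reported Death Valley yields (57-161.5 mL/day per
# half-square-meter panel). Demand figures below are assumptions.
LITERS_PER_PERSON_PER_DAY = 2.0   # assumed drinking-water need per person
HOUSEHOLD_SIZE = 4                # assumed household size

demand_ml = LITERS_PER_PERSON_PER_DAY * HOUSEHOLD_SIZE * 1000
for yield_ml in (57, 161.5):      # driest vs. best observed per-panel yield
    panels = demand_ml / yield_ml
    print(f"~{panels:.0f} panels at {yield_ml} mL/day each for a {HOUSEHOLD_SIZE}-person household")
```

Those counts reflect the driest conditions on Earth and a half-square-meter prototype; larger panels or more humid air would shrink the array considerably, which is the scaling the team points to.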

“This is just a proof-of-concept design, and there are a lot of things we can optimize,” Liu says. “For instance, we could have a multipanel design. And we’re working on a next generation of the material to further improve its intrinsic properties.”

“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” says Zhao, who has plans to further test the panels in many resource-limited regions. “Then you could have many panels together, collecting water all the time, at household scale.”

This work was supported, in part, by the MIT J-WAFS Water and Food Seed Grant, the MIT-Chinese University of Hong Kong collaborative research program, and the UM6P-MIT collaborative research program.

How the brain solves complicated problems

MIT Latest News - Wed, 06/11/2025 - 5:00am

The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behavior. Nicholas Watters PhD ’25 is also a co-author.

Rational strategies

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.

“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.

To investigate this question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict the path of a ball as it moves through four possible trajectories in a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.

“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.

The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
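A toy simulation of that strategy, with maze timings, timing noise, and a revision probability invented purely to make the idea concrete (this is not the authors’ model), might look like the following:

```python
# Toy two-junction maze: commit to a first turn, track the nearest-matching arm,
# and only sometimes revise toward the other side when the remembered tone timing
# fits it better. All numbers are assumptions for illustration.
import random

ARM_MS = {("L", "L"): 300, ("L", "R"): 450, ("R", "L"): 350, ("R", "R"): 500}
TIMING_NOISE_MS = 40      # assumed uncertainty in the remembered tone timing
P_REVISE = 0.5            # assumed chance of reconsidering (the counterfactual step)

def trial():
    true_path = random.choice(list(ARM_MS))
    heard = random.gauss(ARM_MS[true_path], TIMING_NOISE_MS)   # noisy memory of the tone

    # Hierarchical step: commit to a first turn (a coin flip stands in for the
    # participant's initial inference), then pick the arm on that side whose
    # timing best matches the remembered tone.
    first_turn = random.choice(["L", "R"])
    same_side = {p: t for p, t in ARM_MS.items() if p[0] == first_turn}
    guess = min(same_side, key=lambda p: abs(same_side[p] - heard))

    # Counterfactual step: if the other side's best arm fits the memory better,
    # go back and revise the first choice -- but only some of the time.
    other_side = {p: t for p, t in ARM_MS.items() if p[0] != first_turn}
    alt = min(other_side, key=lambda p: abs(other_side[p] - heard))
    if abs(other_side[alt] - heard) < abs(same_side[guess] - heard) and random.random() < P_REVISE:
        guess = alt

    return guess == true_path

accuracy = sum(trial() for _ in range(10_000)) / 10_000
print(f"toy accuracy with hierarchical + occasional counterfactual reasoning: {accuracy:.2f}")
```

In this toy, shrinking TIMING_NOISE_MS makes the revision step pay off more, loosely mirroring the finding that people fall back on counterfactuals only when they trust their memory of the tones.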

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”

Human limitations

To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.

When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. If the researchers reduced the model’s memory recall ability, it began to switch to counterfactual reasoning only if it thought its recall would be good enough to get the right answer — just as humans do.

“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.

Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON 33

EFF: Updates - Tue, 06/10/2025 - 9:17pm

Hacker Summer Camp is almost here... and with it comes the Third Annual EFF Benefit Poker Tournament at DEF CON 33 hosted by security expert Tarah Wheeler.

Please join us at the same place and time as last year: Friday, August 8th, at high noon at the Horseshoe Poker Room. The fees haven’t changed; it’s still $250 to register plus $100 the day of the tournament with unlimited rebuys. (AND your registration donation covers your EFF membership for the year.) 

Tarah Wheeler—EFF board member and resident poker expert—has been working hard on the tournament since last year! We will have Lintile as emcee this year and there are going to be bug bounties! When you take someone out of the tournament, they will give you a pin. Prizes—and major bragging rights—go to the player with the most bounty pins. Be sure to register today and see Lintile in action!

Did we mention there will be Celebrity Bounties? Knock out Wendy Nather, Chris “WeldPond” Wysopal, or Jake “MalwareJake” Williams and get neat EFF swag and the respect of your peers! Plus, as always, knock out Tarah's dad Mike, and she donates $250 to the EFF in your name!

Register Now

Find Full Event Details and Registration

Have a friend that might be interested but not sure how to play? Have you played some poker before but could use a refresher? Join poker pro Mike Wheeler (Tarah’s dad) and celebrities for a free poker clinic from 11:00 am-11:45 am just before the tournament. Mike will show you the rules, strategy, table behavior, and general Vegas slang at the poker table. Even if you know poker pretty well, come a bit early and help out.

Register today and reserve your deck. Be sure to invite your friends to join you!

 
