Feed aggregator

Trump says FEMA overhaul will come after hurricane season

ClimateWire News - Wed, 06/11/2025 - 6:19am
The president's remarks Tuesday signal that states will continue to get federal disaster aid this year but may see changes in 2026.

Climate change fueled May’s record-breaking Arctic heat

ClimateWire News - Wed, 06/11/2025 - 6:18am
Melting on the Greenland ice sheet rose to 17 times its normal rate as temperatures soared.

New Zealand greens sue government over climate plan

ClimateWire News - Wed, 06/11/2025 - 6:17am
The lawsuit is said to be the first in the world to challenge a government for relying on tree planting to address climate change.

Northern India heat wave disrupts lives, raises health worries

ClimateWire News - Wed, 06/11/2025 - 6:16am
The searing heat is not just a seasonal discomfort but underscores a growing challenge for the country's overwhelmed health infrastructure.

Greta Thunberg hits back at Donald Trump over anger management jibe

ClimateWire News - Wed, 06/11/2025 - 6:16am
The Swedish activist battles the U.S. president again. This time it's over her Gaza relief mission.

Fitch warns of rising mortgage-bond risk due to extreme weather

ClimateWire News - Wed, 06/11/2025 - 6:15am
The physical fallout of climate change has implications for corners of fixed-income markets traditionally viewed as among the safest in the world.

Greece orders first evacuation of wildfire season

ClimateWire News - Wed, 06/11/2025 - 6:14am
The Mediterranean country has taken a more robust response to wildfires in recent years.

Window-sized device taps the air for safe drinking water

MIT Latest News - Wed, 06/11/2025 - 5:00am

Today, 2.2 billion people in the world lack access to safe drinking water. In the United States, more than 46 million people experience water insecurity, living with either no running water or water that is unsafe to drink. The increasing need for drinking water is stretching traditional resources such as rivers, lakes, and reservoirs.

To improve access to safe and affordable drinking water, MIT engineers are tapping into an unconventional source: the air. The Earth’s atmosphere contains millions of billions of gallons of water in the form of vapor. If this vapor can be efficiently captured and condensed, it could supply clean drinking water in places where traditional water resources are inaccessible.

With that goal in mind, the MIT team has developed and tested a new atmospheric water harvester and shown that it efficiently captures water vapor and produces safe drinking water across a range of relative humidities, including dry desert air.

The new device is a black, window-sized vertical panel, made from a water-absorbent hydrogel material, enclosed in a glass chamber coated with a cooling layer. The hydrogel resembles black bubble wrap, with small dome-shaped structures that swell when the hydrogel soaks up water vapor. When the captured vapor evaporates, the domes shrink back down in an origami-like transformation. The evaporated vapor then condenses on the glass, where it can flow down and out through a tube as clean, drinkable water.

The system runs entirely on its own, without a power source, unlike other designs that require batteries, solar panels, or electricity from the grid. The team ran the device for over a week in Death Valley, California — the driest region in North America. Even in very low-humidity conditions, the device squeezed drinking water from the air at rates of up to 160 milliliters (about two-thirds of a cup) per day.

The team estimates that multiple vertical panels, set up in a small array, could passively supply a household with drinking water, even in arid desert environments. What’s more, the system’s water production should increase with humidity, supplying drinking water in temperate and tropical climates.

“We have built a meter-scale device that we hope to deploy in resource-limited regions, where even a solar cell is not very accessible,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering and Civil and Environmental Engineering at MIT. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact.”

Zhao and his colleagues present the details of the new water harvesting design in a paper appearing today in the journal Nature Water. The study’s lead author is former MIT postdoc “Will” Chang Liu, who is currently an assistant professor at the National University of Singapore (NUS). MIT co-authors include Xiao-Yun Yan, Shucong Li, and Bolei Deng, along with collaborators from multiple other institutions.

Carrying capacity

Hydrogels are soft, porous materials that are made mainly from water and a microscopic network of interconnecting polymer fibers. Zhao’s group at MIT has primarily explored the use of hydrogels in biomedical applications, including adhesive coatings for medical implants, soft and flexible electrodes, and noninvasive imaging stickers.

“Through our work with soft materials, one property we know very well is the way hydrogel is very good at absorbing water from air,” Zhao says.

Researchers are exploring a number of ways to harvest water vapor for drinking water. Among the most efficient so far are devices made from metal-organic frameworks, or MOFs — ultra-porous materials that have also been shown to capture water from dry desert air. But the MOFs do not swell or stretch when absorbing water, and are limited in vapor-carrying capacity.

Water from air

The group’s new hydrogel-based water harvester addresses another key problem in similar designs. Other groups have designed water harvesters out of micro- or nano-porous hydrogels. But the water produced from these designs can be salty, requiring additional filtering. Salt is a naturally absorbent material, and researchers embed salts — typically, lithium chloride — in hydrogel to increase the material’s water absorption. The drawback, however, is that this salt can leak out with the water when it is eventually collected.

The team’s new design significantly limits salt leakage. Within the hydrogel itself, they included an extra ingredient: glycerol, a liquid compound that naturally stabilizes salt, keeping it within the gel rather than letting it crystallize and leak out with the water. The hydrogel itself has a microstructure that lacks nanoscale pores, which further prevents salt from escaping the material. The salt levels in the water they collected were below the standard threshold for safe drinking water, and significantly below the levels produced by many other hydrogel-based designs.

In addition to tuning the hydrogel’s composition, the researchers made improvements to its form. Rather than keeping the gel as a flat sheet, they molded it into a pattern of small domes resembling bubble wrap, which increases the gel’s surface area and, with it, the amount of water vapor the material can absorb.

The researchers fabricated a half-square-meter of hydrogel and encased the material in a window-like glass chamber. They coated the exterior of the chamber with a special polymer film, which helps to cool the glass and stimulates any water vapor in the hydrogel to evaporate and condense onto the glass. They installed a simple tubing system to collect the water as it flows down the glass.

In November 2023, the team traveled to Death Valley, California, and set up the device as a vertical panel. Over seven days, they took measurements as the hydrogel absorbed water vapor during the night (the time of day when water vapor in the desert is highest). In the daytime, with help from the sun, the harvested water evaporated out from the hydrogel and condensed onto the glass.

Over this period, the device worked across a range of humidities, from 21 to 88 percent, and produced between 57 and 161.5 milliliters of drinking water per day. Even in the driest conditions, the device harvested more water than other passive and some actively powered designs.

“This is just a proof-of-concept design, and there are a lot of things we can optimize,” Liu says. “For instance, we could have a multipanel design. And we’re working on a next generation of the material to further improve its intrinsic properties.”

“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” says Zhao, who has plans to further test the panels in many resource-limited regions. “Then you could have many panels together, collecting water all the time, at household scale.”

This work was supported, in part, by the MIT J-WAFS Water and Food Seed Grant, the MIT-Chinese University of Hong Kong collaborative research program, and the UM6P-MIT collaborative research program.

How the brain solves complicated problems

MIT Latest News - Wed, 06/11/2025 - 5:00am

The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behavior. Nicholas Watters PhD ’25 is also a co-author.

Rational strategies

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.

“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.

To explore that question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict which of four possible trajectories a ball takes as it moves through a maze. Once the ball enters the maze, people cannot see which path it travels; instead, they hear an auditory cue each time the ball reaches one of two junctions. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.

“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.

The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
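
The strategy described above is, at its core, a simple algorithm: commit to one branch based on the first tone, check the second tone against that branch, and go back only when the mismatch is large and memory seems trustworthy. The toy Python simulation below sketches that logic under invented arm-traversal times and noise levels; it is an illustration of the idea only, not the study’s actual model, parameters, or maze geometry.

    import random

    # Invented traversal times (seconds) for four hypothetical maze paths:
    # (time to the first-junction tone, time from the first to the second tone).
    PATH_TIMES = {
        ("L", "L"): (0.4, 0.5),
        ("L", "R"): (0.4, 0.8),
        ("R", "L"): (0.7, 0.6),
        ("R", "R"): (0.7, 1.0),
    }

    def noisy(t, sd):
        """Perceive a time interval with Gaussian noise."""
        return random.gauss(t, sd)

    def best_in_branch(branch, gap):
        """Among the paths starting with `branch`, pick the one whose
        junction-to-junction time best matches the perceived gap."""
        candidates = {p: PATH_TIMES[p][1] for p in PATH_TIMES if p[0] == branch}
        best = min(candidates, key=lambda p: abs(gap - candidates[p]))
        return best, abs(gap - candidates[best])

    def run_trial(timing_sd=0.08, memory_sd=0.12, revise_threshold=0.15,
                  use_counterfactual=True):
        true_path = random.choice(list(PATH_TIMES))
        t1, gap = PATH_TIMES[true_path]
        heard_t1 = noisy(t1, timing_sd)
        heard_gap = noisy(gap, timing_sd)

        # Hierarchical step 1: commit to a first turn from the noisy first tone
        # (0.4 s and 0.7 s are the invented left/right first-arm times above).
        first = "L" if abs(heard_t1 - 0.4) < abs(heard_t1 - 0.7) else "R"

        # Hierarchical step 2: within that branch, pick the better-fitting path.
        guess, err = best_in_branch(first, heard_gap)

        # Counterfactual step: if the second tone fits the chosen branch poorly,
        # consult a noisier *memory* of the gap and consider the other branch.
        if use_counterfactual and err > revise_threshold:
            remembered_gap = noisy(heard_gap, memory_sd)
            other = "R" if first == "L" else "L"
            alt_guess, alt_err = best_in_branch(other, remembered_gap)
            if alt_err < err:
                guess = alt_guess

        return guess == true_path

    def accuracy(n=20000, **kwargs):
        return sum(run_trial(**kwargs) for _ in range(n)) / n

    print("hierarchical + counterfactual:", accuracy())
    print("hierarchical only:            ", accuracy(use_counterfactual=False))

Varying the assumed memory noise in a sketch like this changes how often revising the first choice pays off, which is the same trade-off the study attributes to human participants.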

Human limitations

To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.

When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. If the researchers reduced the model’s memory recall ability, it switched to counterfactual reasoning only when it judged that its recall would be good enough to get the right answer — just as humans do.

“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.

Oppose STOP CSAM: Protecting Kids Shouldn’t Mean Breaking the Tools That Keep Us Safe

EFF: Updates - Tue, 06/10/2025 - 7:08pm

A Senate bill re-introduced this week threatens security and free speech on the internet. EFF urges Congress to reject the STOP CSAM Act of 2025 (S. 1829), which would undermine services offering end-to-end encryption and force internet companies to take down lawful user content.   

TAKE ACTION

Tell Congress Not to Outlaw Encrypted Apps

As in the version introduced last Congress, S. 1829 purports to limit the online spread of child sexual abuse material (CSAM), also known as child pornography. CSAM is already highly illegal. Existing law already requires online service providers who have actual knowledge of “apparent” CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC). NCMEC then forwards actionable reports to law enforcement agencies for investigation. 

S. 1829 goes much further than current law and threatens to punish any service that works to keep its users secure, including those that do their best to eliminate and report CSAM. The bill applies to “interactive computer services,” which broadly includes private messaging and email apps, social media platforms, cloud storage providers, and many other internet intermediaries and online service providers. 

The Bill Threatens End-to-End Encryption

The bill makes it a crime to intentionally “host or store child pornography” or knowingly “promote or facilitate” the sexual exploitation of children. The bill also opens the door for civil lawsuits against providers for the intentional, knowing or even reckless “promotion or facilitation” of conduct relating to child exploitation, the “hosting or storing of child pornography,” or for “making child pornography available to any person.”  

The terms “promote” and “facilitate” are broad, and civil liability may be imposed under a low recklessness state-of-mind standard. This means a court could find an app or website liable for hosting CSAM even if the provider did not know it was hosting CSAM — for example, because the service is end-to-end encrypted and the provider cannot view the content its users upload.

Creating new criminal and civil claims against providers based on broad terms and low standards will undermine digital security for all internet users. Because the law already prohibits the distribution of CSAM, the bill’s broad terms could be interpreted as reaching more passive conduct, like merely providing an encrypted app.  

Due to the nature of their services, encrypted communications providers who receive a notice of CSAM may be deemed to have “knowledge” under the criminal law even if they cannot verify and act on that notice. And there is little doubt that plaintiffs’ lawyers will (wrongly) argue that merely providing an encrypted service that can be used to store any image—not necessarily CSAM—recklessly facilitates the sharing of illegal content.  

Affirmative Defense Is Expensive and Insufficient 

While the bill includes an affirmative defense that a provider can raise if it is “technologically impossible” to remove the CSAM without “compromising encryption,” it is not sufficient to protect our security. Online services that offer encryption shouldn’t have to face the impossible task of proving a negative in order to avoid lawsuits over content they can’t see or control. 

First, by making this protection an affirmative defense, providers must still defend against litigation, with significant costs to their business. Not every platform will have the resources to fight these threats in court, especially newcomers that compete with entrenched giants like Meta and Google. Encrypted platforms should not have to rely on prosecutorial discretion or favorable court rulings after protracted litigation. Instead, specific exemptions for encrypted providers should be addressed in the text of the bill.  

Second, although technologies like client-side scanning break encryption, members of Congress have misleadingly claimed otherwise. Plaintiffs are likely to argue that providers who do not use these techniques are acting recklessly, leading many apps and websites to scan all of the content on their platforms and remove any content that a state court could find, even wrongfully, is CSAM.

The Bill Threatens Free Speech by Creating a New Exception to Section 230 

The bill allows a new type of lawsuit to be filed against internet platforms, accusing them of “facilitating” child sexual exploitation based on the speech of others. It does this by creating an exception to Section 230, the foundational law of the internet and online speech. Section 230 provides partial immunity to internet intermediaries when sued over content posted by their users. Without that protection, platforms are much more likely to aggressively monitor and censor users.

Section 230 creates the legal breathing room for internet intermediaries to create online spaces for people to freely communicate around the world, with low barriers to entry. However, creating a new exception that exposes providers to more lawsuits will cause them to limit that legal exposure. Online services will censor more and more user content and accounts, with minimal regard as to whether that content is in fact legal. Some platforms may even be forced to shut down or may not even get off the ground in the first place, for fear of being swept up in a flood of litigation and claims around alleged CSAM. On balance, this harms all internet users who rely on intermediaries to connect with their communities and the world at large. 

Once-a-week pill for schizophrenia shows promise in clinical trials

MIT Latest News - Tue, 06/10/2025 - 6:30pm

For many patients with schizophrenia, other psychiatric illnesses, or diseases such as hypertension and asthma, it can be difficult to take their medicine every day. To help overcome that challenge, MIT researchers have developed a pill that can be taken just once a week and gradually releases medication from within the stomach.

In a phase 3 clinical trial conducted by MIT spinout Lyndra Therapeutics, the researchers used the once-a-week pill to deliver a widely used medication for managing the symptoms of schizophrenia. They found that this treatment regimen maintained consistent levels of the drug in patients’ bodies and controlled their symptoms just as well as daily doses of the drug. The results are published today in Lancet Psychiatry.

“We’ve converted something that has to be taken once a day to once a week, orally, using a technology that can be adapted for a variety of medications,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and an author of the study. “The ability to provide a sustained level of drug for a prolonged period, in an easy-to-administer system, makes it easier to ensure patients are receiving their medication.”

Traverso’s lab began developing the ingestible capsule studied in this trial more than 10 years ago, as part of an ongoing effort to make medications easier for patients to take. The capsule is about the size of a multivitamin, and once swallowed, it expands into a star shape that helps it remain in the stomach until all of the drug is released.

Richard Scranton, chief medical officer of Lyndra Therapeutics, is the senior author of the paper, and Leslie Citrome, a clinical professor of psychiatry and behavioral sciences at New York Medical College School of Medicine, is the lead author. Nayana Nagaraj, medical director at Lyndra Therapeutics, and Todd Dumas, senior director of pharmacometrics at Certara, are also authors.

Sustained delivery

Over the past decade, Traverso’s lab has been working on a variety of capsules that can be swallowed and remain in the digestive tract for days or weeks, slowly releasing their drug payload. In 2016, his team reported the star-shaped device, which was then further developed by Lyndra for clinical trials in patients with schizophrenia.

The device contains six arms that can be folded in, allowing it to fit inside a capsule. The capsule dissolves when the device reaches the stomach, allowing the arms to spring out. Once the arms are extended, the device becomes too large to pass through the pylorus (the exit of the stomach), so it remains freely floating in the stomach as drugs are slowly released from the arms. After about a week, the arms break off on their own, and each segment exits the stomach and passes through the digestive tract.

For the clinical trials, the capsule was loaded with risperidone, a commonly prescribed medication used to treat schizophrenia. Most patients take the drug orally once a day. There are also injectable versions that can be given every two weeks, every month, or every two months, but they require administration by a health care provider and are not always acceptable to patients.

The MIT and Lyndra team chose to focus on schizophrenia in hopes that a drug regimen that could be administered less frequently, through oral delivery, could make treatment easier for patients and their caregivers.

“One of the areas of unmet need that was recognized early on is neuropsychiatric conditions, where the illness can limit or impair one’s ability to remember to take their medication,” Traverso says. “With that in mind, one of the conditions that has been a big focus has been schizophrenia.”

The phase 3 trial was coordinated by researchers at Lyndra and enrolled 83 patients at five different sites around the United States. Forty-five of those patients completed the full five weeks of the study, in which they took one risperidone-loaded capsule per week.

Throughout the study, the researchers measured the amount of drug in each patient’s bloodstream. Each week, they found a sharp increase on the day the pill was given, followed by a slow decline over the next week. The levels were all within the optimal range, and there was less variation over time than is seen when patients take a pill each day.

Effective treatment

Using an evaluation known as the Positive and Negative Syndrome Scale (PANSS), the researchers also found that the patients’ symptoms remained stable throughout the study.

“One of the biggest obstacles in the care of people with chronic illnesses in general is that medications are not taken consistently. This leads to worsening symptoms, and in the case of schizophrenia, potential relapse and hospitalization,” Citrome says. “Having the option to take medication by mouth once a week represents an important option that can assist with adherence for the many patients who would prefer oral medications versus injectable formulations.”

Side effects from the treatment were minimal, the researchers found. Some patients experienced mild acid reflux and constipation early in the study, but these did not last long. The results, showing effectiveness of the capsule and few side effects, represent a major milestone in this approach to drug delivery, Traverso says.

“This really demonstrates what we had hypothesized a decade ago: that a single capsule providing a drug depot within the GI tract could be possible,” he says. “Here what you see is that the capsule can achieve the drug levels that were predicted, and also control symptoms in a sizeable cohort of patients with schizophrenia.”

The investigators now hope to complete larger phase 3 studies before applying for FDA approval of this delivery approach for risperidone. They are also preparing for phase 1 trials using this capsule to deliver other drugs, including contraceptives.

“We are delighted that this technology which started at MIT has reached the point of phase 3 clinical trials,” says Robert Langer, the David H. Koch Institute Professor at MIT, who was an author of the original study on the star capsule and is a co-founder of Lyndra Therapeutics.

The research was funded by Lyndra Therapeutics.

Despite Changes, A.B. 412 Still Harms Small Developers

EFF: Updates - Tue, 06/10/2025 - 6:07pm

California lawmakers are continuing to promote a bill that will reinforce the power of giant AI companies by burying small AI companies and non-commercial developers in red tape, copyright demands and potentially, lawsuits. After several amendments, the bill hasn’t improved much, and in some ways has actually gotten worse. If A.B. 412 is passed, it will make California’s economy less innovative, and less competitive. 

The Bill Threatens Small Tech Companies

A.B. 412 masquerades as a transparency bill, but it’s actually a government-mandated “reading list” that will allow rights holders to file a new type of lawsuit in state court, even as the federal courts continue to assess whether and how federal copyright law applies to the development of generative AI technologies. 

The bill would require developers—even two-person startups—to keep lists of training materials that are “registered, pre-registered or indexed” with the U.S. Copyright Office, and help rights holders create digital ‘fingerprints’ of those works—a technical task with no established standards and no realistic path for small teams to follow. Even if it were limited to registered copyrighted material, that’s a monumental task, as we explained in March when we examined the earlier text of A.B. 412.

The bill’s amendments have made compliance even harder, since it now requires technologists to go beyond copyrighted material and somehow identify “pre-registered” copyrights. The amended bill also has new requirements that demand technologists document and keep track of when they look at works that aren’t copyrighted but are subject to exclusive rights, such as pre-1972 sound recordings—rights that, not coincidentally, are primarily controlled by large entertainment companies. 

The penalties for noncompliance are steep—up to $1,000 per day per violation—putting small developers at enormous financial risk even for accidental lapses.

The goal of this list is clear: for big content companies to more easily file lawsuits against software developers, big and small. And for most AI developers, the burden will be crushing. Under A.B. 412, a two-person startup building an open-source chatbot, or an indie developer fine-tuning a language model for disability access, would face the same compliance burdens as Google or Meta. 

Reading and Analyzing The Open Web Is Not a Crime 

It’s critical to remember that AI training is very likely protected by fair use under U.S. copyright law—a point that’s still being worked out in the courts. The idea that we should preempt that process with sweeping state regulation is not just premature; it’s dangerous.

It’s also worth noting that copyright is governed by federal law. Federal courts are already working to define the boundaries of fair use and copyright in the AI context—the California legislature should let them do their job. A.B. 412 tries to create a state-level regulatory scheme in an area that belongs in federal hands—a risky legal overreach that could further complicate an already unsettled policy space.

A.B. 412 is a solution in search of a problem. The courthouse doors are far from closed to content owners who want to dispute the use of their copyrighted works. There are multiple high-profile litigations over the copyright status of AI training works that are working their way through trial courts and appeal courts right now. 

Scope Creep

Rather than narrowing its focus to make compliance more realistic, the latest amendments to A.B. 412 actually expand the scope of covered works. The bill now demands documentation of obscure categories of content like pre-1972 sound recordings. These recordings have rights that are often murky, and largely controlled by major media companies.

The bill also adds “preregistered” and indexed works to its coverage. Preregistration, designed to help entertainment companies punish unauthorized copying even before commercial release, expands the universe of content that developers must track—without offering any meaningful help to small creators. 

A Moat Serving Big Tech

Ironically, the companies that will benefit most from A.B. 412 are the very same large tech firms that lawmakers often claim they want to regulate. Big companies can hire teams of lawyers and compliance officers to handle these requirements. Small developers? They’re more likely to shut down, sell out, or never enter the field in the first place.

This bill doesn’t create a fairer marketplace. It builds a regulatory moat around the incumbents, locking out new competitors and ensuring that only a handful of companies have the resources to develop advanced AI systems. Truly innovative technology often comes from unknown or small companies, but A.B. 412 threatens to turn California—and anyone who does business there—into a fortress where only the biggest players survive.

A Lopsided Bill 

A.B. 412 is becoming an increasingly extreme and one-sided piece of legislation. It’s a maximalist wishlist for legacy rights-holders, delivered at the expense of small developers and the public. The result will be less competition, less innovation, and fewer choices for consumers—not more protection for creators.

This new version does close a few loopholes, and expands the period for AI developers to respond to copyright demands from 7 days to 30 days. But it seriously fails to close others: for instance, the exemption for noncommercial development applies only to work done “exclusively for noncommercial academic or governmental” institutions. That still leaves a huge window to sue hobbyists and independent researchers who don’t have university or government jobs. 

While the bill nominally exempts developers who use only public or developer-owned data, that’s a carve-out with no practical value. Like a search engine, nearly every meaningful AI system relies on mixed sources — and developers can’t realistically track the copyright status of them all.

At its core, A.B. 412 is a flawed bill that would harm the whole U.S. tech ecosystem. Lawmakers should be advancing policies that protect privacy, promote competition, and ensure that innovation benefits the public—not just a handful of entrenched interests.

If you’re a California resident, now is the time to speak out. Tell your legislators that A.B. 412 will hurt small companies, help big tech, and lock California’s economy in the past.

EPA to propose rolling back climate rule for power plants Wednesday

ClimateWire News - Tue, 06/10/2025 - 5:48pm
It marks an escalation in President Donald Trump's effort to purge climate initiatives from the federal government.

Recovering from the past and transitioning to a better energy future

MIT Latest News - Tue, 06/10/2025 - 3:15pm

As the frequency and severity of extreme weather events grow, it may become increasingly necessary to employ a bolder approach to climate change, warned Emily A. Carter, the Gerhard R. Andlinger Professor in Energy and the Environment at Princeton University. Carter made her case for why the energy transition is no longer enough in the face of climate change while speaking at the MIT Energy Initiative (MITEI) Presents: Advancing the Energy Transition seminar on the MIT campus.

“If all we do is take care of what we did in the past — but we don’t change what we do in the future — then we’re still going to be left with very serious problems,” she said. Our approach to climate change mitigation must comprise transformation, intervention, and adaptation strategies, said Carter.

Transitioning to a decarbonized electricity system is one piece of the puzzle. Growing amounts of solar and wind energy — along with nuclear, hydropower, and geothermal — are slowly transforming the electricity landscape, but Carter noted that there are new technologies farther down the pipeline.

“Advanced geothermal may come on in the next couple of decades. Fusion will only really start to play a role later in the century, but could provide firm electricity such that we can start to decommission nuclear,” said Carter, who is also a senior strategic advisor and associate laboratory director at the Department of Energy’s Princeton Plasma Physics Laboratory. 

Taking this a step further, Carter outlined how this carbon-free electricity should then be used to electrify everything we can. She highlighted the industrial sector as a critical area for transformation: “The energy transition is about transitioning off of fossil fuels. If you look at the manufacturing industries, they are driven by fossil fuels right now. They are driven by fossil fuel-driven thermal processes.” Carter noted that thermal energy is much less efficient than electricity and highlighted electricity-driven strategies that could replace heat in manufacturing, such as electrolysis, plasmas, light-emitting diodes (LEDs) for photocatalysis, and joule heating. 

The transportation sector is also a key area for electrification, Carter said. While electric vehicles have become increasingly common in recent years, heavy-duty transportation is not as easily electrified. The solution? “Carbon-neutral fuels for heavy-duty aviation and shipping,” she said, emphasizing that these fuels will need to become part of the circular economy. “We know that when we burn those fuels, they’re going to produce CO2 [carbon dioxide] again. They need to come from a source of CO2 that is not fossil-based.” 

The next step is intervention in the form of carbon dioxide removal, which then necessitates methods of storage and utilization, according to Carter. “There’s a lot of talk about building large numbers of pipelines to capture the CO2 — from fossil fuel-driven power plants, cement plants, steel plants, all sorts of industrial places that emit CO2 — and then piping it and storing it in underground aquifers,” she explained. Offshore pipelines are much more expensive than those on land, but can mitigate public concerns over their safety. Europe is focusing its efforts exclusively offshore for this very reason, and the same could be true for the United States, Carter said.

Once carbon dioxide is captured, commercial utilization may provide economic leverage to accelerate sequestration, even if only a few gigatons are used per year, Carter noted. Through mineralization, CO2 can be converted into carbonates, which could be used in building materials such as concrete and road-paving materials.  

There is another form of intervention that Carter currently views as a last resort: solar geoengineering, sometimes known as solar radiation management or SRM. In 1991, Mount Pinatubo in the Philippines erupted and released sulfur dioxide into the stratosphere, which caused a temporary cooling of the Earth by approximately 0.5 degree Celsius for over a year. SRM seeks to recreate that cooling effect by injecting particles into the atmosphere that reflect sunlight. According to Carter, there are three main strategies: stratospheric aerosol injection, cirrus cloud thinning (thinning clouds to let more infrared radiation emitted by the earth escape to space), and marine cloud brightening (brightening clouds with sea salt so they reflect more light).  

“My view is, I hope we don't ever have to do it, but I sure think we should understand what would happen in case somebody else just decides to do it. It’s a global security issue,” said Carter. “In principle, it’s not so difficult technologically, so we’d like to really understand and to be able to predict what would happen if that happened.” 

With any technology, stakeholder and community engagement is essential for deployment, Carter said. She emphasized the importance of both respectfully listening to concerns and thoroughly addressing them, stating, “Hopefully, there’s enough information given to assuage their fears. We have to gain the trust of people before any deployment can be considered.” 

A crucial component of this trust starts with the responsibility of the scientific community to be transparent and critique each other’s work, Carter said. “Skepticism is good. You should have to prove your proof of principle.” 

MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The series will continue in fall 2025. For more information on this and additional events, visit the MITEI website.

Inroads to personalized AI trip planning

MIT Latest News - Tue, 06/10/2025 - 3:00pm

Travel agents help to provide end-to-end logistics — like transportation, accommodations, and meals — for businesspeople, vacationers, and everyone in between. For those looking to make their own arrangements, large language models (LLMs) seem like they would be a strong tool to employ for this task because of their ability to iteratively interact using natural language, provide some commonsense reasoning, collect information, and call other tools in to help with the task at hand. However, recent work has found that state-of-the-art LLMs struggle with complex logistical and mathematical reasoning, as well as with problems that have multiple constraints, like trip planning, where they’ve been found to provide viable solutions 4 percent of the time or less, even with additional tools and application programming interfaces (APIs).

Subsequently, a research team from MIT and the MIT-IBM Watson AI Lab reframed the issue to see if they could increase the success rate of LLM solutions for complex problems. “We believe a lot of these planning problems are naturally a combinatorial optimization problem,” where you need to satisfy several constraints in a certifiable way, says Chuchu Fan, associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS). She is also a researcher in the MIT-IBM Watson AI Lab. Her team applies machine learning, control theory, and formal methods to develop safe and verifiable control systems for robotics, autonomous systems, controllers, and human-machine interactions.

Noting the transferable nature of their work for travel planning, the group sought to create a user-friendly framework that can act as an AI travel broker to help develop realistic, logical, and complete travel plans. To achieve this, the researchers combined common LLMs with algorithms and a complete satisfiability solver. Solvers are mathematical tools that rigorously check if criteria can be met and how, but they require complex computer programming for use. This makes them natural companions to LLMs for problems like these, where users want help planning in a timely manner, without the need for programming knowledge or research into travel options. Further, if a user’s constraint cannot be met, the new technique can identify and articulate where the issue lies and propose alternative measures to the user, who can then choose to accept, reject, or modify them until a valid plan is formulated, if one exists.

“Different complexities of travel planning are something everyone will have to deal with at some point. There are different needs, requirements, constraints, and real-world information that you can collect,” says Fan. “Our idea is not to ask LLMs to propose a travel plan. Instead, an LLM here is acting as a translator to translate this natural language description of the problem into a problem that a solver can handle [and then provide that to the user],” says Fan.

Co-authoring a paper on the work with Fan are Yang Zhang of MIT-IBM Watson AI Lab, AeroAstro graduate student Yilun Hao, and graduate student Yongchao Chen of MIT LIDS and Harvard University. This work was recently presented at the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.

Breaking down the solver

Math tends to be domain-specific. For example, in natural language processing, LLMs perform regressions to predict the next token, a.k.a. “word,” in a series to analyze or create a document. This works well for generalizing diverse human inputs. LLMs alone, however, wouldn’t work for formal verification applications, like in aerospace or cybersecurity, where circuit connections and constraint tasks need to be complete and proven, otherwise loopholes and vulnerabilities can sneak by and cause critical safety issues. Here, solvers excel, but they need fixed formatting inputs and struggle with unsatisfiable queries.  A hybrid technique, however, provides an opportunity to develop solutions for complex problems, like trip planning, in a way that’s intuitive for everyday people.

“The solver is really the key here, because when we develop these algorithms, we know exactly how the problem is being solved as an optimization problem,” says Fan. Specifically, the research group used a solver called satisfiability modulo theories (SMT), which determines whether a formula can be satisfied. “With this particular solver, it’s not just doing optimization. It’s doing reasoning over a lot of different algorithms there to understand whether the planning problem is possible or not to solve. That’s a pretty significant thing in travel planning. It’s not a very traditional mathematical optimization problem because people come up with all these limitations, constraints, restrictions,” notes Fan.

Translation in action

The “travel agent” works in four steps that can be repeated, as needed. The researchers used GPT-4, Claude-3, or Mistral-Large as the method’s LLM. First, the LLM parses a user’s requested travel plan prompt into planning steps, noting preferences for budget, hotels, transportation, destinations, attractions, restaurants, and trip duration in days, as well as any other user prescriptions. Those steps are then converted into executable Python code (with a natural language annotation for each of the constraints), which calls APIs like CitySearch, FlightSearch, etc. to collect data, and the SMT solver to begin executing the steps laid out in the constraint satisfaction problem. If a sound and complete solution can be found, the solver outputs the result to the LLM, which then provides a coherent itinerary to the user.
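
As a concrete illustration of what the solver step might look like, here is a minimal sketch using the open-source Z3 SMT solver’s Python bindings as a stand-in; the flight fares, hotel rates, budget, and the small `select` helper are invented for this example and are not the team’s actual generated code. It only shows how a few natural-language constraints (“a three-to-five-night trip under $1,000”) become a formal constraint-satisfaction problem.

    # Minimal sketch using Z3 ("pip install z3-solver"); all data values are invented.
    from z3 import Int, Solver, If, sat

    flight_prices = [250, 400, 600]   # hypothetical fares a FlightSearch-style API might return
    hotel_rates = [90, 150, 220]      # hypothetical nightly rates

    flight = Int("flight")            # index of the chosen flight
    hotel = Int("hotel")              # index of the chosen hotel
    nights = Int("nights")

    def select(index_var, values):
        """Build a nested If-expression that evaluates to values[index_var]."""
        expr = values[-1]
        for i in range(len(values) - 2, -1, -1):
            expr = If(index_var == i, values[i], expr)
        return expr

    s = Solver()
    s.add(flight >= 0, flight < len(flight_prices))
    s.add(hotel >= 0, hotel < len(hotel_rates))
    s.add(nights >= 3, nights <= 5)                        # "a 3-to-5-night trip"
    cost = select(flight, flight_prices) + nights * select(hotel, hotel_rates)
    s.add(cost <= 1000)                                    # "keep the whole trip under $1,000"

    if s.check() == sat:
        m = s.model()
        print("flight", m[flight], "hotel", m[hotel], "nights", m[nights])
    else:
        print("no itinerary satisfies every constraint")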

If one or more constraints cannot be met, the framework begins looking for an alternative. The solver outputs code identifying the conflicting constraints (with its corresponding annotation) that the LLM then provides to the user with a potential remedy. The user can then decide how to proceed, until a solution (or the maximum number of iterations) is reached.
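
One common way an SMT solver can surface such conflicts is by labeling each assertion and asking for an “unsat core,” the subset of labeled constraints that cannot hold together; the hypothetical continuation below (again using Z3, with invented numbers) shows the kind of machine-readable conflict report an LLM could then rephrase for the user.

    # Labeling constraints lets Z3 name the ones that clash; values are invented.
    from z3 import Int, Solver, unsat

    nights = Int("nights")
    cost = Int("cost")

    s = Solver()
    s.set(unsat_core=True)
    s.assert_and_track(nights >= 5, "at_least_five_nights")
    s.assert_and_track(cost == nights * 300, "hotel_costs_300_per_night")
    s.assert_and_track(cost <= 1000, "total_budget_1000")

    if s.check() == unsat:
        # Prints the clashing labels, e.g. at_least_five_nights,
        # hotel_costs_300_per_night, total_budget_1000 — which an LLM can turn
        # into a plain-language explanation and a suggestion to relax one of them.
        print("conflicting constraints:", s.unsat_core())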

Generalizable and robust planning

The researchers tested their method using the aforementioned LLMs against other baselines: GPT-4 by itself, OpenAI o1-preview by itself, GPT-4 with a tool to collect information, and a search algorithm that optimizes for total cost. Using the TravelPlanner dataset, which includes data for viable plans, the team looked at multiple performance metrics: how frequently a method could deliver a solution, if the solution satisfied commonsense criteria like not visiting two cities in one day, the method’s ability to meet one or more constraints, and a final pass rate indicating that it could meet all constraints. The new technique generally achieved over a 90 percent pass rate, compared to 10 percent or lower for the baselines. The team also explored the addition of a JSON representation within the query step, which further made it easier for the method to provide solutions with 84.4-98.9 percent pass rates.

The MIT-IBM team posed additional challenges for their method. They looked at how important each component of their solution was — such as removing human feedback or the solver — and how that affected plan adjustments to unsatisfiable queries within 10 or 20 iterations using a new dataset they created called UnsatChristmas, which includes unseen constraints, and a modified version of TravelPlanner. On average, the MIT-IBM group’s framework achieved 78.6 and 85 percent success, which rises to 81.6 and 91.7 percent with additional plan modification rounds. The researchers analyzed how well it handled new, unseen constraints and paraphrased query-step and step-code prompts. In both cases, it performed very well, especially with an 86.7 percent pass rate for the paraphrasing trial.

Lastly, the MIT-IBM researchers applied their framework to other domains, with tasks like block picking, task allocation, the traveling salesman problem, and warehouse operations. Here, the method must select numbered, colored blocks and maximize its score; optimize robot task assignments for different scenarios; plan trips that minimize distance traveled; and complete and optimize robot tasks in a warehouse setting.

“I think this is a very strong and innovative framework that can save a lot of time for humans, and also, it’s a very novel combination of the LLM and the solver,” says Hao.

This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.

Melding data, systems, and society

MIT Latest News - Tue, 06/10/2025 - 2:25pm

Research that crosses the traditional boundaries of academic disciplines, and boundaries between academia, industry, and government, is increasingly widespread, and has sometimes led to the spawning of significant new disciplines. But Munther Dahleh, a professor of electrical engineering and computer science at MIT, says that such multidisciplinary and interdisciplinary work often suffers from a number of shortcomings and handicaps compared to more traditionally focused disciplinary work.

But increasingly, he says, the profound challenges that face us in the modern world — including climate change, biodiversity loss, how to control and regulate artificial intelligence systems, and the identification and control of pandemics — require such meshing of expertise from very different areas, including engineering, policy, economics, and data analysis. That realization is what guided him, a decade ago, in the creation of MIT’s pioneering Institute for Data, Systems, and Society (IDSS), aiming to foster a more deeply integrated and lasting set of collaborations than the usual temporary and ad hoc associations that occur for such work.

Dahleh has now written a book detailing the process of analyzing the landscape of existing disciplinary divisions at MIT and conceiving of a way to create a structure aimed at breaking down some of those barriers in a lasting and meaningful way, in order to bring about this new institute. The book, “Data, Systems, and Society: Harnessing AI for Societal Good,” was published this March by Cambridge University Press.

The book, Dahleh says, is his attempt “to describe our thinking that led us to the vision of the institute. What was the driving vision behind it?” It is aimed at a number of different audiences, he says, but in particular, “I’m targeting students who are coming to do research that they want to address societal challenges of different types, but utilizing AI and data science. How should they be thinking about these problems?”

A key concept that has guided the structure of the institute is something he refers to as “the triangle.” This refers to the interaction of three components: physical systems, people interacting with those physical systems, and then regulation and policy regarding those systems. Each of these affects, and is affected by, the others in various ways, he explains. “You get a complex interaction among these three components, and then there is data on all these pieces. Data is sort of like a circle that sits in the middle of this triangle and connects all these pieces,” he says.

When tackling any big, complex problem, he suggests, it is useful to think in terms of this triangle. “If you’re tackling a societal problem, it’s very important to understand the impact of your solution on society, on the people, and the role of people in the success of your system,” he says. Often, he says, “solutions and technology have actually marginalized certain groups of people and have ignored them. So the big message is always to think about the interaction between these components as you think about how to solve problems.”

As a specific example, he cites the Covid-19 pandemic. That was a perfect example of a big societal problem, he says, and illustrates the three sides of the triangle: there’s the biology, which was little understood at first and was subject to intensive research efforts; there was the contagion effect, having to do with social behavior and interactions among people; and there was the decision-making by political leaders and institutions, in terms of shutting down schools and companies or requiring masks, and so on. “The complex problem we faced was the interaction of all these components happening in real-time, when the data wasn’t all available,” he says.

Making a decision, for example shutting schools or businesses, based on controlling the spread of the disease, had immediate effects on economics and social well-being and health and education, “so we had to weigh all these things back into the formula,” he says. “The triangle came alive for us during the pandemic.” As a result, IDSS “became a convening place, partly because of all the different aspects of the problem that we were interested in.”

Examples of such interactions abound, he says. Social media and e-commerce platforms are another case of “systems built for people, and they have a regulation aspect, and they fit into the same story if you’re trying to understand misinformation or the monitoring of misinformation.”

The book presents many examples of ethical issues in AI, stressing that they must be handled with great care. He cites self-driving cars as an example, where programming decisions in dangerous situations can appear ethical but lead to negative economic and humanitarian outcomes. For instance, while most Americans support the idea that a car should sacrifice its driver rather than kill an innocent person, they wouldn’t buy such a car. This reluctance lowers adoption rates and ultimately increases casualties.

In the book, he explains the difference, as he sees it, between the concept of “transdisciplinary” versus typical cross-disciplinary or interdisciplinary research. “They all have different roles, and they have been successful in different ways,” he says. The key is that most such efforts tend to be transitory, and that can limit their societal impact. The fact is that even if people from different departments work together on projects, they lack a structure of shared journals, conferences, common spaces and infrastructure, and a sense of community. Creating an academic entity in the form of IDSS that explicitly crosses these boundaries in a fixed and lasting way was an attempt to address that lack. “It was primarily about creating a culture for people to think about all these components at the same time.”

He hastens to add that of course such interactions were already happening at MIT, “but we didn’t have one place where all the students are all interacting with all of these principles at the same time.” In the IDSS doctoral program, for instance, there are 12 required core courses — half of them from statistics and optimization theory and computation, and half from the social sciences and humanities.

Dahleh stepped down from the leadership of IDSS two years ago to return to teaching and to continue his research. But as he reflected on the work of that institute and his role in bringing it into being, he realized that unlike his own academic research, in which every step along the way is carefully documented in published papers, “I haven’t left a trail” to document the creation of the institute and the thinking behind it. “Nobody knows what we thought about, how we thought about it, how we built it.” Now, with this book, they do.

The book, he says, is “kind of leading people into how all of this came together, in hindsight. I want to have people read this and sort of understand it from a historical perspective, how something like this happened, and I did my best to make it as understandable and simple as I could.”

How we really judge AI

MIT Latest News - Tue, 06/10/2025 - 11:30am

Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?

A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.

“We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context,” says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”

The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.

New framework adds insight

People’s reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI over advice from humans.

To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “Capability–Personalization Framework” — the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.

Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct “decision contexts” — for instance, whether or not participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability–Personalization Framework indeed helps account for people’s preferences.

“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization. People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal.”

He adds: “The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too.”
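The framework’s core logic is a simple conjunction, which a brief sketch can make concrete. The following Python snippet is purely illustrative and is not from the paper; the function name and inputs are hypothetical stand-ins for the two perceived dimensions.

# Illustrative sketch of the Capability-Personalization Framework's conjunctive rule.
# Names and structure are hypothetical, not drawn from the paper itself.
def predicted_attitude(ai_more_capable_than_humans: bool, personalization_needed: bool) -> str:
    """Predict the attitude toward AI in a given decision context."""
    # Appreciation is predicted only when both conditions hold:
    # AI is seen as more capable than humans AND the task is seen as nonpersonal.
    if ai_more_capable_than_humans and not personalization_needed:
        return "AI appreciation"
    return "AI aversion"

# Example contexts mentioned in the article:
print(predicted_attitude(True, False))   # fraud detection -> AI appreciation
print(predicted_attitude(True, True))    # medical diagnosis -> AI aversion
print(predicted_attitude(False, False))  # AI seen as less capable -> AI aversion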

For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI’s abilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.

“People have a fundamental desire to see themselves as unique and distinct from other people,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel AI can’t grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people.”

Context also matters: From tangibility to unemployment

The study also uncovered other factors that influence individuals’ preferences for AI. For instance, AI appreciation is more pronounced for tangible robots than for intangible algorithms.

Economic context also matters. In countries with lower unemployment, AI appreciation is more pronounced.

“It makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you’re less likely to embrace it.”  

Lu is continuing to examine people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the Capability–Personalization Framework offers a valuable lens for understanding how people evaluate AI across different contexts.

“We’re not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” Lu concludes.

In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.

The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China. 

“Each of us holds a piece of the solution”

MIT Latest News - Tue, 06/10/2025 - 11:00am

MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.

“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”

Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.

“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”

The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the audience before circulating through the room to share their perspectives and discuss community questions and ideas.

Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices whose current work relates to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the Earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.

Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.

“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”

White House looks to freeze more agency funds — and expand executive power

ClimateWire News - Tue, 06/10/2025 - 6:20am
The latest move targets more than $30 billion in spending at EPA, the National Science Foundation and other agencies.

New Jersey offshore wind project bows out

ClimateWire News - Tue, 06/10/2025 - 6:14am
Atlantic Shores was the only state proposal with federal permits.
