Feed aggregator

Enhance responsible governance to match the scale and pace of marine–climate interventions

Nature Climate Change - Thu, 04/03/2025 - 12:00am

Nature Climate Change, Published online: 03 April 2025; doi:10.1038/s41558-025-02292-3

Oceans are on the front line of an array of new marine–climate actions that are both poorly understood and under-regulated. Development and deployment of these interventions are outpacing governance readiness to address risks and ensure responsible transformation and effective action.

Novel marine-climate interventions hampered by low consensus and governance preparedness

Nature Climate Change - Thu, 04/03/2025 - 12:00am

Nature Climate Change, Published online: 03 April 2025; doi:10.1038/s41558-025-02291-4

Oceans are on the front line of new planned climate actions, but understanding of novel marine-climate intervention development and deployment remains low. Here a survey among intervention practitioners allows identification of science and governance gaps for marine-climate interventions.

Vana is letting users own a piece of the AI models trained on their data

MIT Latest News - Thu, 04/03/2025 - 12:00am

In February 2024, Reddit struck a $60 million deal with Google to let the search giant use data on the platform to train its artificial intelligence models. Notably absent from the discussions were Reddit users, whose data were being sold.

The deal reflected the reality of the modern internet: Big tech companies own virtually all our online data and get to decide what to do with that data. Unsurprisingly, many platforms monetize their data, and the fastest-growing way to accomplish that today is to sell it to AI companies, who are themselves massive tech companies using the data to train ever more powerful models.

The decentralized platform Vana, which started as a class project at MIT, is on a mission to give power back to the users. The company has created a fully user-owned network that allows individuals to upload their data and govern how they are used. AI developers can pitch users on ideas for new models, and if the users agree to contribute their data for training, they get proportional ownership in the models.

The idea is to give everyone a stake in the AI systems that will increasingly shape our society while also unlocking new pools of data to advance the technology.

“This data is needed to create better AI systems,” says Vana co-founder Anna Kazlauskas ’19. “We’ve created a decentralized system to get better data — which sits inside big tech companies today — while still letting users retain ultimate ownership.”

From economics to the blockchain

A lot of high school students have pictures of pop stars or athletes on their bedroom walls. Kazlauskas had a picture of former U.S. Treasury Secretary Janet Yellen.

Kazlauskas came to MIT sure she’d become an economist, but she ended up being one of five students to join the MIT Bitcoin club in 2015, and that experience led her into the world of blockchains and cryptocurrency.

From her dorm room in MacGregor House, she began mining the cryptocurrency Ethereum. She even occasionally scoured campus dumpsters in search of discarded computer chips.

“It got me interested in everything around computer science and networking,” Kazlauskas says. “That involved, from a blockchain perspective, distributed systems and how they can shift economic power to individuals, as well as artificial intelligence and econometrics.”

Kazlauskas met Art Abal, who was then attending Harvard University, in the former Media Lab class Emergent Ventures, and the pair decided to work on new ways to obtain data to train AI systems.

“Our question was: How could you have a large number of people contributing to these AI systems using more of a distributed network?” Kazlauskas recalls.

Kazlauskas and Abal were trying to address the status quo, where most models are trained by scraping public data on the internet. Big tech companies often also buy large datasets from other companies.

The founders’ approach evolved over the years and was informed by Kazlauskas’ experience working at the financial blockchain company Celo after graduation. But Kazlauskas credits her time at MIT with helping her think about these problems, and the instructor for Emergent Ventures, Ramesh Raskar, still helps Vana think about AI research questions today.

“It was great to have an open-ended opportunity to just build, hack, and explore,” Kazlauskas says. “I think that ethos at MIT is really important. It’s just about building things, seeing what works, and continuing to iterate.”

Today Vana takes advantage of a little-known law that allows users of most big tech platforms to export their data directly. Users can upload that information into encrypted digital wallets in Vana and contribute it to train models as they see fit.

AI engineers can suggest ideas for new open-source models, and people can pool their data to help train the model. In the blockchain world, the data pools are called data DAOs (DAO stands for decentralized autonomous organization). Data can also be used to create personalized AI models and agents.

In Vana, data are used in a way that preserves user privacy because the system doesn’t expose identifiable information. Once the model is created, users maintain ownership so that every time it’s used, they’re rewarded proportionally based on how much their data helped train it.
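The proportional-reward idea described above can be sketched in a few lines. This is an illustrative toy, not Vana's actual protocol: the contribution scores, function names, and payout mechanics are all invented for the example.

```python
# Hypothetical sketch of proportional attribution: each contributor's
# reward share is their data's fraction of the pool's total contribution
# score. Names and scoring are illustrative, not Vana's real mechanism.

def reward_shares(contribution_scores: dict[str, float]) -> dict[str, float]:
    """Split ownership in proportion to each user's contribution score."""
    total = sum(contribution_scores.values())
    if total == 0:
        raise ValueError("no contributions to attribute")
    return {user: score / total for user, score in contribution_scores.items()}

def distribute(revenue: float, scores: dict[str, float]) -> dict[str, float]:
    """Pay out model revenue according to each user's ownership share."""
    return {user: revenue * share for user, share in reward_shares(scores).items()}

# Example: three users contributed data weighted 50, 30, and 20.
payouts = distribute(100.0, {"alice": 50, "bob": 30, "carol": 20})
# payouts == {"alice": 50.0, "bob": 30.0, "carol": 20.0}
```

The key property is that shares always sum to the whole pool, so rewards scale automatically as more contributors join.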

“From a developer’s perspective, now you can build these hyper-personalized health applications that take into account exactly what you ate, how you slept, how you exercise,” Kazlauskas says. “Those applications aren’t possible today because of those walled gardens of the big tech companies.”

Crowdsourced, user-owned AI

Last year, a machine-learning engineer proposed using Vana user data to train an AI model that could generate Reddit posts. More than 140,000 Vana users contributed their Reddit data, which contained posts, comments, messages, and more. Users decided on the terms under which the model could be used, and they maintained ownership of the model after it was created.

Vana has enabled similar initiatives with user-contributed data from the social media platform X; sleep data from sources like Oura rings; and more. There are also collaborations that combine data pools to create broader AI applications.

“Let’s say users have Spotify data, Reddit data, and fashion data,” Kazlauskas explains. “Usually, Spotify isn’t going to collaborate with those types of companies, and there’s actually regulation against that. But users can do it if they grant access, so these cross-platform datasets can be used to create really powerful models.”

Vana has over 1 million users and over 20 live data DAOs. More than 300 additional data pools have been proposed by users on Vana’s system, and Kazlauskas says many will go into production this year.

“I think there’s a lot of promise in generalized AI models, personalized medicine, and new consumer applications, because it’s tough to combine all that data or get access to it in the first place,” Kazlauskas says.

The data pools are allowing groups of users to accomplish something even the most powerful tech companies struggle with today.

“Today, big tech companies have built these data moats, so the best datasets aren’t available to anyone,” Kazlauskas says. “It’s a collective action problem, where my data on its own isn’t that valuable, but a data pool with tens of thousands or millions of people is really valuable. Vana allows those pools to be built. It’s a win-win: Users get to benefit from the rise of AI because they own the models. Then you don’t end up in a scenario where you have a single company controlling an all-powerful AI model. You get better technology, but everyone benefits.”

MIT welcomes 2025 Heising-Simons Foundation 51 Pegasi b Fellow Jess Speedie

MIT Latest News - Wed, 04/02/2025 - 4:50pm

The MIT School of Science welcomes Jess Speedie, one of eight recipients of the 2025 51 Pegasi b Fellowship. The announcement was made March 27 by the Heising-Simons Foundation.

The 51 Pegasi b Fellowship, named after the first exoplanet discovered orbiting a sun-like star, was established in 2017 to provide postdocs with the opportunity to conduct theoretical, observational, and experimental research in planetary astronomy.

Speedie, who expects to complete her PhD in astronomy at the University of Victoria, Canada, this summer, will be hosted by the Department of Earth, Atmospheric and Planetary Sciences (EAPS). She will be mentored by Kerr-McGee Career Development Professor Richard Teague as she uses a combination of observational data and simulations to study the birth of planets and the processes of planetary formation.

“The planetary environment is where all the good stuff collects … it has the greatest potential for the most interesting things in the universe to happen, such as the origin of life,” she says. “Planets, for me, are where the stories happen.”

Speedie’s work has focused on understanding “cosmic nurseries” and on detecting and characterizing the youngest planets in the galaxy. Much of this work has made use of the Atacama Large Millimeter/submillimeter Array (ALMA), located in northern Chile. Made up of a collection of 66 parabolic dishes, ALMA studies the universe at radio wavelengths, and Speedie has developed a novel approach to finding signals of gravitational instability (a mechanism of planet formation) in the data on protoplanetary disks.

“One of the big, big questions right now in the community focused on planet formation is, where are the planets? It is that simple. We think they’re developing in these disks, but we’ve detected so few of them,” she says.

While working as a fellow, Speedie is aiming to develop an algorithm that carefully aligns and stacks a decade of ALMA observational data to correct for a blurring effect that happens when combining images captured at different times. Doing so should produce the sharpest, most sensitive images of early planetary systems to date.
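The core align-and-stack idea can be illustrated with a toy shift-and-add sketch. This is a minimal illustration under invented assumptions (a single point source drifting by a known per-frame offset), not Speedie's algorithm, which must handle interferometric ALMA data and blurring that varies between observing epochs.

```python
import numpy as np

def align_and_stack(frames, offsets):
    """Shift each frame by its measured (dy, dx) offset, then average.

    Stacking N aligned frames improves point-source signal-to-noise
    roughly as sqrt(N), the basic payoff of combining many epochs.
    """
    aligned = [np.roll(f, shift=(-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, offsets)]
    return np.mean(aligned, axis=0)

# Toy data: a point source drifts one pixel per frame across noisy images.
rng = np.random.default_rng(0)
frames, offsets = [], []
for t in range(10):
    img = rng.normal(0.0, 1.0, size=(32, 32))  # background noise
    img[16, 10 + t] += 20.0                    # drifting point source
    frames.append(img)
    offsets.append((0, t))                     # measured drift to undo

stacked = align_and_stack(frames, offsets)
# After alignment the source piles up at pixel (16, 10),
# while the uncorrelated noise averages down.
```

Without the alignment step, averaging these frames would smear the source across ten pixels, which is exactly the blurring effect the stacking algorithm is meant to correct.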

She is also interested in studying infant planets, especially ones that may be forming in disks around protoplanets, rather than stars. Modeling how these ingredient materials in orbit behave could give astronomers a way to measure the mass of young planets.

“What’s exciting is the potential for discovery. I have this sense that the universe as a whole is infinitely more creative than human minds — the kinds of things that happen out there, you can’t make that up. It’s better than science fiction,” she says.

The other 51 Pegasi b Fellows and their host institutions this year are Nick Choksi (Caltech), Yan Liang (Yale University), Sagnick Mukherjee (Arizona State University), Matthew Nixon (Arizona State University), Julia Santos (Harvard University), Nour Skaf (University of Hawaii), and Jerry Xuan (University of California at Los Angeles).

The fellowship provides up to $450,000 of support over three years for independent research, a generous salary and discretionary fund, mentorship at host institutions, an annual summit to develop professional networks and foster collaboration, and an option to apply for another grant to support a future position in the United States.

A flexible robot can help emergency responders search through rubble

MIT Latest News - Wed, 04/02/2025 - 1:50pm

When major disasters hit and structures collapse, people can become trapped under rubble. Extricating victims from these hazardous environments can be dangerous and physically exhausting. To help rescue teams navigate these structures, MIT Lincoln Laboratory, in collaboration with researchers at the University of Notre Dame, developed the Soft Pathfinding Robotic Observation Unit (SPROUT). SPROUT is a vine robot — a soft robot that can grow and maneuver around obstacles and through small spaces. First responders can deploy SPROUT under collapsed structures to explore, map, and find optimum ingress routes through debris. 

"The urban search-and-rescue environment can be brutal and unforgiving, where even the most hardened technology struggles to operate. The fundamental way a vine robot works mitigates a lot of the challenges that other platforms face," says Chad Council, a member of the SPROUT team, which is led by Nathaniel Hanson. The program is conducted out of the laboratory's Human Resilience Technology Group.

First responders regularly integrate technology, such as cameras and sensors, into their workflows to understand complex operating environments. However, many of these technologies have limitations. For example, cameras specially built for search-and-rescue operations can only probe along a straight path inside a collapsed structure. If a team wants to search farther into a pile, they need to cut an access hole to reach the next area of the space. Robots are well suited to exploring the tops of rubble piles, but they are ill-suited to searching in tight, unstable structures and costly to repair if damaged. The challenge SPROUT addresses is how to get under collapsed structures using a low-cost, easy-to-operate robot that can carry cameras and sensors and traverse winding paths.

SPROUT is composed of an inflatable tube made of airtight fabric that unfurls from a fixed base. The tube inflates with air, and a motor controls its deployment. As the tube extends into rubble, it can flex around corners and squeeze through narrow passages. A camera and other sensors mounted to the tip of the tube image and map the environment the robot is navigating. An operator steers SPROUT with joysticks, watching a screen that displays the robot's camera feed. Currently, SPROUT can deploy up to 10 feet, and the team is working on expanding it to 25 feet.

When building SPROUT, the team overcame a number of challenges related to the robot's flexibility. Because the robot is made of a deformable material that bends at many points, determining and controlling the robot's shape as it unfurls through the environment is difficult — think of trying to control an expanding wiggly sprinkler toy. Pinpointing how to apply air pressure within the robot so that steering is as simple as pointing the joystick forward to make the robot move forward was essential for system adoption by emergency responders. In addition, the team had to design the tube to minimize friction while the robot grows and engineer the controls for steering.

While a teleoperated system is a good starting point for assessing the hazards of void spaces, the team is also finding new ways to apply robot technologies to the domain, such as using data captured by the robot to build maps of the subsurface voids. "Collapse events are rare but devastating events. In robotics, we would typically want ground truth measurements to validate our approaches, but those simply don't exist for collapsed structures," Hanson says. To solve this problem, Hanson and his team made a simulator that allows them to create realistic depictions of collapsed structures and develop algorithms that map void spaces.

SPROUT was developed in collaboration with Margaret Coad, a professor at the University of Notre Dame and an MIT graduate. When looking for collaborators, Hanson — a graduate of Notre Dame — was already aware of Coad's work on vine robots for industrial inspection. Coad's expertise, together with the laboratory's experience in engineering, strong partnership with urban search-and-rescue teams, and ability to develop fundamental technologies and prepare them for transition to industry, "made this a really natural pairing to join forces and work on research for a traditionally underserved community," Hanson says. "As one of the primary inventors of vine robots, Professor Coad brings invaluable expertise on the fabrication and modeling of these robots."

Lincoln Laboratory tested SPROUT with first responders at the Massachusetts Task Force 1 training site in Beverly, Massachusetts. The tests allowed the researchers to improve the durability and portability of the robot and learn how to grow and steer the robot more efficiently. The team is planning a larger field study this spring.

"Urban search-and-rescue teams and first responders serve critical roles in their communities but typically have little-to-no research and development budgets," Hanson says. "This program has enabled us to push the technology readiness level of vine robots to a point where responders can engage with a hands-on demonstration of the system."

Sensing in constrained spaces is not a problem unique to disaster response communities, Hanson adds. The team envisions the technology being used in the maintenance of military systems or critical infrastructure with difficult-to-access locations.

The initial program focused on mapping void spaces, but future work aims to localize hazards and assess the viability and safety of operations through rubble. "The mechanical performance of the robots has an immediate effect, but the real goal is to rethink the way sensors are used to enhance situational awareness for rescue teams," says Hanson. "Ultimately, we want SPROUT to provide a complete operating picture to teams before anyone enters a rubble pile." 

Cem Tasan to lead the Materials Research Laboratory

MIT Latest News - Wed, 04/02/2025 - 1:30pm

C. Cem Tasan has been appointed director of MIT’s Materials Research Laboratory (MRL), effective March 15. The POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering (DMSE), Tasan succeeds Lionel “Kim” Kimerling, who has held the post of interim director since Carl Thompson stepped down in August 2023.

“MRL is a strategic asset for MIT, and Cem has a clear vision to build upon the lab’s engagement with materials researchers across the breadth of the Institute as well as with external collaborators and sponsors,” wrote Vice President for Research Ian Waitz, in a letter announcing the appointment.

The MRL is a leading interdisciplinary center dedicated to materials science and engineering. As a hub for innovation, the MRL unites researchers across disciplines, fosters industry and government partnerships, and drives advancements that shape the future of technology. Through groundbreaking research, the MRL supports MIT’s mission to advance science and technology for the benefit of society, enabling discoveries that have a lasting impact across industries and everyday life.

“MRL has a position at the core of materials research activities across departments at MIT,” Tasan says. “It can only grow from where it is, right in the heart of the Institute’s innovative hub.”

As director, Tasan will lead MRL’s research mission, with a view to strengthening internal collaboration and building upon the interdisciplinary laboratory’s long history of industry engagement. He will also take on responsibility for the management of Building 13, the Vannevar Bush Building, which houses key research facilities and labs.

“MRL is in very good hands with Cem Tasan’s leadership,” says Kimerling, the outgoing interim director. “His vision for a united MIT materials community whose success is stimulated by the convergence of basic science and engineering solutions provides the nutrition for MIT’s creative relevance to society. His collegial nature, motivating energy, and patient approach will make it happen.”

Tasan is a metallurgist with expertise in the fracture of metals and the design of damage-resistant alloys. Among other advances, his lab has demonstrated a multiscale means of designing high-strength, high-ductility titanium alloys, and explained the stress-intensification mechanism by which human hair damages hard steel razors, pointing the way to stronger and longer-lasting blades.

“We need better materials that operate in more and more extreme conditions, for almost all of our critical industries and applications,” says Tasan. “Materials research in MRL identifies interdisciplinary pathways to address this important challenge.” 

He studied in Turkey and the Netherlands, earning his PhD at Eindhoven University of Technology before spending several years leading a research group at the Max Planck Institute for Sustainable Materials in Germany. He joined the MIT faculty in 2016 and earned tenure in 2022.

“Cem has led one of the major collaborative research teams at MRL, and he expects to continue developing a strong community among the MIT materials research faculty,” wrote Waitz in his letter on March 14.

The MRL was established in 2017 through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering. This unification aimed to strengthen MIT’s leadership in materials research by fostering interdisciplinary collaboration and advancing breakthroughs in areas such as energy conversion, quantum materials, and materials sustainability.

From 2008 to 2017, Thompson, the Stavros Salapatas Professor of Materials Science and Engineering, served as director of the MPC. During his tenure, he played a crucial role in expanding materials research and building partnerships with industry, government agencies, and academic institutions. With the formation of the MRL in 2017, Thompson was appointed its inaugural director, guiding the new laboratory to prominence as a hub for cutting-edge materials science. He stepped down from this role in August 2023.

At that time, Kimerling stepped in to serve as interim director of MRL. He brought special knowledge of the lab’s history, having served as director of the MPC from 1993 to 2008, transforming it into a key industry-academic interface. Under his leadership, the MPC became a crucial gateway for industry partners to collaborate with MIT faculty across materials-related disciplines, bridging fundamental research with industrial applications. His vision helped drive technological innovation and economic development by aligning academic expertise with industry needs. As interim director of MRL these past 18 months, Kimerling has ensured continuity in leadership.

“I’m delighted that Cem will be the next MRL director,” says Thompson. “He’s a great fit. He has been affiliated with MPC, and then MRL, since the beginning of his faculty career at MIT. He’s also played a key role in leading a renaissance in physical metallurgy at MIT and has many close ties to industry.”

Site-Blocking Legislation Is Back. It’s Still a Terrible Idea.

EFF: Updates - Wed, 04/02/2025 - 11:53am

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was immediate and massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved. 

Thirteen years later, as institutional memory fades and appetite for opposition wanes, members of Congress in both parties are ready to try this again. 

Act Now To Defend the Open Web  

The Foreign Anti-Digital Piracy Act (FADPA), along with at least one other bill still in draft form, would revive this reckless strategy. These new proposals would let rights holders get federal court orders forcing ISPs and DNS providers to block entire websites based on accusations of infringing copyright. Lawmakers claim they’re targeting “pirate” sites—but what they’re really doing is building an internet kill switch.

These bills are an unequivocal and serious threat to a free and open internet. EFF and our supporters are going to fight back against them. 

Site-Blocking Doesn’t Work—And Never Will 

Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Russia, and the US.

Site-blocking is both dangerously blunt and trivially easy to evade. Determined evaders can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online. 

These workarounds aren’t just popular—they’re essential tools in countries that suppress dissent. It’s shocking that Congress is on the verge of forcing Americans to rely on the same workarounds that internet users in authoritarian regimes must rely on just to reach mislabeled content. It will force Americans to rely on riskier, less trustworthy online services. 

Site-Blocking Silences Speech Without a Defense

The First Amendment should not take a back seat because giant media companies want the ability to shut down websites faster. But these bills wrongly treat broad takedowns as a routine legal process. Most cases would be decided in ex parte proceedings, with no one there to defend the site being blocked. This is more than a shortcut–it skips due process entirely. 

Users affected by a block often have no idea what happened. A blocked site may just look broken, like a glitch or an outage. Law-abiding publishers and users lose access, and diagnosing the problem is difficult. Site-blocking techniques are the bluntest of instruments, and they almost always punish innocent bystanders. 

The copyright industries pushing these bills know that site-blocking is not a narrowly tailored fix for a piracy epidemic. The entertainment industry is booming right now, blowing past its pre-COVID projections. Site-blocking legislation is an attempt to build a new American censorship system by letting private actors get dangerous infrastructure-level control over internet access. 

EFF and the Public Will Push Back

FADPA is already on the table. More bills are coming. The question is whether lawmakers remember what happened the last time they tried to mess with the foundations of the open web. 

If they don’t, they’re going to find out the hard way. Again. 

Tell Congress: No To Internet Blacklists  

Site-blocking laws are dangerous, unnecessary, and ineffective. Lawmakers need to hear—loud and clear—that Americans don’t support government-mandated internet censorship. Not for copyright enforcement. Not for anything.

Rational Astrologies and Security

Schneier on Security - Wed, 04/02/2025 - 7:04am

John Kelsey and I wrote a short paper for the Rossfest Festschrift: “Rational Astrologies and Security“:

There is another non-security way that designers can spend their security budget: on making their own lives easier. Many of these fall into the category of what has been called rational astrology. First identified by Steve Randy Waldman [Wal12], the term refers to something people treat as though it works, generally for social or institutional reasons, even when there’s little evidence that it works—and sometimes despite substantial evidence that it does not...

Trump’s tariffs expected to undermine transition to clean energy

ClimateWire News - Wed, 04/02/2025 - 6:21am
New levies on imported goods could exacerbate a shortage of parts used by the energy industry.

Climate grant recipients fight EPA for access to $20B

ClimateWire News - Wed, 04/02/2025 - 6:11am
Green banking groups head to court Wednesday in an attempt to force Citibank to unfreeze their accounts.

Citing Trump tariffs, Canadian province eliminates a carbon tax

ClimateWire News - Wed, 04/02/2025 - 6:10am
Saskatchewan, the second province to revoke a carbon tax, wants to "ensure that our industries ... are more competitive," its premier says.

Three NASA satellites are dying. Their end could disrupt climate data.

ClimateWire News - Wed, 04/02/2025 - 6:08am
Scientists are increasingly concerned about the future of Earth science under President Donald Trump.

California’s snowpack data likely signals another fire-prone summer

ClimateWire News - Wed, 04/02/2025 - 6:07am
This is the third year in a row that the state finds itself in a potentially combustible situation — where wet winters led to more vegetation growing across its landscape.

Climate firm that partnered with Meta, Microsoft goes bankrupt

ClimateWire News - Wed, 04/02/2025 - 6:05am
The bankruptcy was filed after federal prosecutors charged co-founder Joseph Sanberg with conspiring to defraud two investor funds of at least $145 million.

Japan’s $1.7T pension fund offers new backing to ESG

ClimateWire News - Wed, 04/02/2025 - 6:05am
The Government Pension Investment Fund is rejecting the shift by other asset managers to downgrade or remove green commitments.

Jesuit prefers prison over fine to draw attention to climate change

ClimateWire News - Wed, 04/02/2025 - 6:04am
The Rev. Jörg Alt on Tuesday started serving his nearly monthlong prison sentence for participating in a street-blocking protest in Nuremberg.

Researchers teach LLMs to solve complex planning challenges

MIT Latest News - Wed, 04/02/2025 - 12:00am

Imagine a coffee company trying to optimize its supply chain. The company sources beans from three suppliers, roasts them at two facilities into either dark or light coffee, and then ships the roasted coffee to three retail locations. The suppliers have different fixed capacities, and roasting costs and shipping costs vary from place to place.

The company seeks to minimize costs while meeting a 23 percent increase in demand.

Wouldn’t it be easier for the company to just ask ChatGPT to come up with an optimal plan? In fact, for all their incredible capabilities, large language models (LLMs) often perform poorly when tasked with directly solving such complicated planning problems on their own.

Rather than trying to change the model to make an LLM a better planner, MIT researchers took a different approach. They introduced a framework that guides an LLM to break down the problem like a human would, and then automatically solve it using a powerful software tool.

A user only needs to describe the problem in natural language — no task-specific examples are needed to train or prompt the LLM. The model encodes a user’s text prompt into a format that can be unraveled by an optimization solver designed to efficiently crack extremely tough planning challenges.

During the formulation process, the LLM checks its work at multiple intermediate steps to make sure the plan is described correctly to the solver. If it spots an error, rather than giving up, the LLM tries to fix the broken part of the formulation.

When the researchers tested their framework on nine complex challenges, such as minimizing the distance warehouse robots must travel to complete tasks, it achieved an 85 percent success rate, whereas the best baseline only achieved a 39 percent success rate.

The versatile framework could be applied to a range of multistep planning tasks, such as scheduling airline crews or managing machine time in a factory.

“Our research introduces a framework that essentially acts as a smart assistant for planning problems. It can figure out the best plan that meets all the needs you have, even if the rules are complicated or unusual,” says Yilun Hao, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper on this research.

She is joined on the paper by Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab; and senior author Chuchu Fan, an associate professor of aeronautics and astronautics and LIDS principal investigator. The research will be presented at the International Conference on Learning Representations.

Optimization 101

The Fan group develops algorithms that automatically solve what are known as combinatorial optimization problems. These vast problems have many interrelated decision variables, each with multiple options that rapidly add up to billions of potential choices.

Humans solve such problems by narrowing them down to a few options and then determining which one leads to the best overall plan. The researchers’ algorithmic solvers apply the same principles to optimization problems that are far too complex for a human to crack.

But the solvers they develop tend to have steep learning curves and are typically only used by experts.

“We thought that LLMs could allow nonexperts to use these solving algorithms. In our lab, we take a domain expert’s problem and formalize it into a problem our solver can solve. Could we teach an LLM to do the same thing?” Fan says.

Using the framework the researchers developed, called LLM-Based Formalized Programming (LLMFP), a person provides a natural language description of the problem, background information on the task, and a query that describes their goal.

Then LLMFP prompts an LLM to reason about the problem and determine the decision variables and key constraints that will shape the optimal solution.

LLMFP asks the LLM to detail the requirements of each variable before encoding the information into a mathematical formulation of an optimization problem. It writes code that encodes the problem and calls the attached optimization solver, which arrives at an ideal solution.
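As an illustration of what that final encoding step produces (not the paper's actual output format), a request like "maximize profit from two products given limited machine hours" reduces to a few variables, one constraint, and an objective that a stock linear-programming solver accepts:

```python
from scipy.optimize import linprog

# "Make products A ($3 profit each) and B ($5 profit each). Each A takes
#  1 machine-hour, each B takes 2; only 8 machine-hours are available."
# Decision variables: x = [units of A, units of B].
# linprog minimizes, so we negate the profit to maximize it.
c = [-3, -5]        # objective: maximize 3*A + 5*B
A_ub = [[1, 2]]     # constraint: 1*A + 2*B <= 8 machine-hours
b_ub = [8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal plan: 8 units of A, none of B, profit 24
```

LLMFP's contribution is generating formulations like this automatically from the user's description, for problems far larger than this sketch.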

“It is similar to how we teach undergrads about optimization problems at MIT. We don’t teach them just one domain. We teach them the methodology,” Fan adds.

As long as the inputs to the solver are correct, it will give the right answer. Any mistakes in the solution come from errors in the formulation process.

To ensure it has found a working plan, LLMFP analyzes the solution and modifies any incorrect steps in the problem formulation. Once the plan passes this self-assessment, the solution is described to the user in natural language.
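The check-and-repair loop can be sketched abstractly. Here the function arguments stand in for LLM calls the article does not spell out; every name is hypothetical:

```python
def solve_with_self_check(formulate, solve, critique, repair, problem, max_rounds=3):
    """Sketch of an LLMFP-style loop: formulate, solve, self-assess, repair."""
    formulation = formulate(problem)
    for _ in range(max_rounds):
        solution = solve(formulation)
        issues = critique(formulation, solution)   # self-assessment step
        if not issues:
            return solution                        # plan passed the check
        formulation = repair(formulation, issues)  # fix the broken part, retry
    raise RuntimeError("no valid plan after repairs")


# Tiny demo: the first-draft formulation wrongly allows a negative quantity;
# the critique flags it and the repaired formulation yields a valid plan.
plan = solve_with_self_check(
    formulate=lambda p: {"min_qty": -5},                          # flawed draft
    solve=lambda f: max(f["min_qty"], -10),
    critique=lambda f, s: ["negative quantity"] if s < 0 else [],
    repair=lambda f, issues: {"min_qty": 0},
    problem="ship beans",
)
print(plan)  # 0
```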

Perfecting the plan

This self-assessment module also allows the LLM to add any implicit constraints it missed the first time around, Hao says.

For instance, if the framework is optimizing a supply chain to minimize costs for a coffeeshop, a human knows the coffeeshop can’t ship a negative amount of roasted beans, but an LLM might not realize that.

The self-assessment step would flag that error and prompt the model to fix it.
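A minimal sketch of that failure mode, using a generic linear-programming solver rather than anything from the paper: if the implicit "can't ship a negative amount" constraint is never written down, a cost-minimizing solver happily drives the shipment toward negative infinity, and the bad solver status is exactly the kind of signal a self-assessment step can catch.

```python
from scipy.optimize import linprog

# First-draft formulation: minimize 2*x (shipping cost per kg of beans)
# subject only to "ship at most 10 kg". The implicit non-negativity
# constraint was never encoded.
draft = linprog(c=[2], A_ub=[[1]], b_ub=[10], bounds=[(None, None)])
print(draft.status)  # 3: the problem is unbounded

# Self-assessment flags the unbounded result and repairs the formulation
# by adding the missing non-negativity bound.
fixed = linprog(c=[2], A_ub=[[1]], b_ub=[10], bounds=[(0, None)])
print(fixed.x)  # [0.]: ship nothing, the cheapest valid plan
```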

“Plus, an LLM can adapt to the preferences of the user. If the model realizes a particular user does not like to change the time or budget of their travel plans, it can suggest changing things that fit the user’s needs,” Fan says.

In a series of tests, their framework achieved an average success rate between 83 and 87 percent across nine diverse planning problems using several LLMs. While some baseline models were better at certain problems, LLMFP achieved an overall success rate about twice as high as the baseline techniques.

Unlike these other approaches, LLMFP does not require domain-specific examples for training. It can find the optimal solution to a planning problem right out of the box.

In addition, the user can adapt LLMFP for different optimization solvers by adjusting the prompts fed to the LLM.

“With LLMs, we have an opportunity to create an interface that allows people to use tools from other domains to solve problems in ways they might not have been thinking about before,” Fan says.

In the future, the researchers want to enable LLMFP to take images as input to supplement the descriptions of a planning problem. This would help the framework solve tasks that are particularly hard to fully describe with natural language.

This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.

Looking under the hood at the brain’s language system

MIT Latest News - Wed, 04/02/2025 - 12:00am

As a young girl growing up in the former Soviet Union, Evelina Fedorenko PhD ’07 studied several languages, including English, as her mother hoped that it would give her the chance to eventually move abroad for better opportunities.

Her language studies not only helped her establish a new life in the United States as an adult, but also led to a lifelong interest in linguistics and how the brain processes language. Now an associate professor of brain and cognitive sciences at MIT, Fedorenko studies the brain’s language-processing regions: how they arise, whether they are shared with other mental functions, and how each region contributes to language comprehension and production.

Fedorenko’s early work helped to identify the precise locations of the brain’s language-processing regions, and she has been building on that work to generate insight into how different neuronal populations in those regions implement linguistic computations.

“It took a while to develop the approach and figure out how to quickly and reliably find these regions in individual brains, given this standard problem of the brain being a little different across people,” she says. “Then we just kept going, asking questions like: Does language overlap with other functions that are similar to it? How is the system organized internally? Do different parts of this network do different things? There are dozens and dozens of questions you can ask, and many directions that we have pushed on.”

Among some of the more recent directions, she is exploring how the brain’s language-processing regions develop early in life, through studies of very young children, people with unusual brain architecture, and computational models known as large language models.

From Russia to MIT

Fedorenko grew up in the Russian city of Volgograd, which was then part of the Soviet Union. When the Soviet Union broke up in 1991, her mother, a mechanical engineer, lost her job, and the family struggled to make ends meet.

“It was a really intense and painful time,” Fedorenko recalls. “But one thing that was always very stable for me is that I always had a lot of love, from my parents, my grandparents, and my aunt and uncle. That was really important and gave me the confidence that if I worked hard and had a goal, that I could achieve whatever I dreamed about.”

Fedorenko did work hard in school, studying English, French, German, Polish, and Spanish, and she also participated in math competitions. As a 15-year-old, she spent a year attending high school in Alabama, as part of a program that placed students from the former Soviet Union with American families. She had been thinking about applying to universities in Europe but changed her plans when she realized the American higher education system offered more academic flexibility.

After being admitted to Harvard University with a full scholarship, she returned to the United States in 1998 and earned her bachelor’s degree in psychology and linguistics, while also working multiple jobs to send money home to help her family.

While at Harvard, she also took classes at MIT and ended up deciding to apply to the Institute for graduate school. For her PhD research at MIT, she worked with Ted Gibson, a professor of brain and cognitive sciences, and later, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience. She began by using functional magnetic resonance imaging (fMRI) to study brain regions that appeared to respond preferentially to music, but she soon switched to studying brain responses to language.

She found that working with Kanwisher, who studies the functional organization of the human brain but hadn’t worked much on language before, helped Fedorenko to build a research program free of potential biases baked into some of the early work on language processing in the brain.

“We really kind of started from scratch,” Fedorenko says, “combining the knowledge of language processing I have gained by working with Gibson and the rigorous neuroscience approaches that Kanwisher had developed when studying the visual system.”

After finishing her PhD in 2007, Fedorenko stayed at MIT for a few years as a postdoc funded by the National Institutes of Health, continuing her research with Kanwisher. During that time, she and Kanwisher developed techniques to identify language-processing regions in different people, and discovered new evidence that certain parts of the brain respond selectively to language. Fedorenko then spent five years as a research faculty member at Massachusetts General Hospital, before receiving an offer to join the faculty at MIT in 2019.

How the brain processes language

Since starting her lab at MIT’s McGovern Institute for Brain Research, Fedorenko and her trainees have made several discoveries that have helped to refine neuroscientists’ understanding of the brain’s language-processing regions, which are spread across the left frontal and temporal lobes of the brain.

In a series of studies, her lab showed that these regions are highly selective for language and are not engaged by activities such as listening to music, reading computer code, or interpreting facial expressions, all of which have been argued to share similarities with language processing.

“We’ve separated the language-processing machinery from various other systems, including the system for general fluid thinking, and the systems for social perception and reasoning, which support the processing of communicative signals, like facial expressions and gestures, and reasoning about others’ beliefs and desires,” Fedorenko says. “So that was a significant finding, that this system really is its own thing.”

More recently, Fedorenko has turned her attention to figuring out, in more detail, the functions of different parts of the language processing network. In one recent study, she identified distinct neuronal populations within these regions that appear to have different temporal windows for processing linguistic content, ranging from just one word up to six words.

She is also studying how language-processing circuits arise in the brain, with ongoing studies in which she and a postdoc in her lab are using fMRI to scan the brains of young children, observing how their language regions behave even before the children have fully learned to speak and understand language.

Large language models (similar to ChatGPT) can help with these types of developmental questions, as the researchers can better control the language inputs to the model and have continuous access to its abilities and representations at different stages of learning.

“You can train models in different ways, on different kinds of language, in different kinds of regimens. For example, training on simpler language first and then more complex language, or on language combined with some visual inputs. Then you can look at the performance of these language models on different tasks, and also examine changes in their internal representations across the training trajectory, to test which model best captures the trajectory of human language learning,” Fedorenko says.

To gain another window into how the brain develops language ability, Fedorenko launched the Interesting Brains Project several years ago. Through this project, she is studying people who experienced some type of brain damage early in life, such as a prenatal stroke, or brain deformation as a result of a congenital cyst. In some of these individuals, their conditions destroyed or significantly deformed the brain’s typical language-processing areas, but all of these individuals are cognitively indistinguishable from individuals with typical brains: They still learned to speak and understand language normally, and in some cases, they didn’t even realize that their brains were in some way atypical until they were adults.

“That study is all about plasticity and redundancy in the brain, trying to figure out what brains can cope with, and how,” Fedorenko says. “Are there many solutions to build a human mind, even when the neural infrastructure is so different-looking?”

Vote for “How to Fix the Internet” in the Webby Awards People's Voice Competition!

EFF: Updates - Tue, 04/01/2025 - 2:51pm

EFF’s “How to Fix the Internet” podcast is a nominee in the Webby Awards 29th Annual People's Voice competition – and we need your support to bring the trophy home!

Vote now!

We keep hearing all these dystopian stories about technology’s impact on our lives and our futures — from tracking-based surveillance capitalism to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future. 

That’s where our podcast comes in. Through curious conversations with some of the leading minds in law and technology, “How to Fix the Internet” explores creative solutions to some of today’s biggest tech challenges.    

Over our five seasons, we’ve had well-known, mainstream names like Marc Maron to discuss patent trolls, Adam Savage to discuss the rights to tinker and repair, Dave Eggers to discuss when to set technology aside, and U.S. Sen. Ron Wyden, D-OR, to discuss how Congress can foster an internet that benefits everyone. But we’ve also had lesser-known names who do vital, thought-provoking work – Taiwan’s then-Minister of Digital Affairs Audrey Tang discussed seeing democracy as a kind of open-source social technology, Alice Marwick discussed the spread of conspiracy theories and disinformation, Catherine Bracy discussed getting tech companies to support (not exploit) the communities they call home, and Chancey Fleet discussed the need to include people with disabilities in every step of tech development and deployment.

We’ve just recorded our first interview for Season 6, and episodes should start dropping next month! Meanwhile, you can catch up on our past seasons to become deeply informed on vital technology issues and join the movement working to build a better technological future.  

 And if you’ve liked what you’ve heard, please throw us a vote in the Webbys competition!  

Vote now!

Our deepest thanks to all our brilliant guests, and to the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible. 

Click below to listen to the show now, or choose your podcast player:


Or get our YouTube playlist! Or, listen to the episodes on the Internet Archive!

Trump names urologist as RFK’s deputy

ClimateWire News - Tue, 04/01/2025 - 1:41pm
Brian Christine would be a key adviser to Health Secretary Robert F. Kennedy Jr., who has embraced conspiracy theories about vaccines and transgender treatment.
