Feed aggregator

A new community for computational science and engineering

MIT Latest News - Tue, 09/16/2025 - 11:00am

For the past decade, MIT has offered doctoral-level study in computational science and engineering (CSE) exclusively through an interdisciplinary program designed for students applying computation within a specific science or engineering field.

As interest grew among students focused primarily on advancing CSE methodology itself, it became clear that a dedicated academic home for this group — students and faculty deeply invested in the foundations of computational science and engineering — was needed.

Now, with a stand-alone CSE PhD program, they have not only a space for fostering discovery in the cross-cutting methodological dimensions of computational science and engineering, but also a tight-knit community.

“This program recognizes the existence of computational science and engineering as a discipline in and of itself, so you don’t have to be doing this work through the lens of mechanical or chemical engineering, but instead in its own right,” says Nicolas Hadjiconstantinou, co-director of the Center for Computational Science and Engineering (CCSE).

Offered by CCSE and launched in 2023, the stand-alone program blends both coursework and a thesis, much like other MIT PhD programs, yet its methodological focus sets it apart from other Institute offerings.

“What’s unique about this program is that it’s not hosted by one specific department. The stand-alone program is, at its core, about computational science and cross-cutting methodology. We connect this research with people in a lot of different application areas. We have oceanographers, people doing materials science, students with a focus on aeronautics and astronautics, and more,” says outgoing co-director Youssef Marzouk, now the associate dean of the MIT Schwarzman College of Computing.

Expanding horizons

Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering, and Marzouk, the Breene M. Kerr Professor of Aeronautics and Astronautics, have led the center’s efforts since 2018, and developed the program and curriculum together. The duo was intentional about crafting a program that fosters students’ individual research while also exposing them to all the field has to offer.

To expand students’ horizons and continue to build a collaborative community, the PhD in CSE program features two popular seminar series: weekly community seminars that focus primarily on internal speakers (current graduate students, postdocs, research scientists, and faculty), and monthly distinguished seminars in CSE, which are Institute-wide and bring external speakers from various institutions and industry roles.

“Something surprising about the program has been the seminars. I thought it would be the same people I see in my classes and labs, but it’s much broader than that,” says Emily Williams, a fourth-year PhD student and a Department of Energy Computational Science graduate fellow. “One of the most interesting seminars was around simulating fluid flow for biomedical applications. My background is in fluids, so I understand that part, but seeing it applied in a totally different domain than what I work in was eye-opening,” says Williams.

That seminar, “Astrophysical Fluid Dynamics at Exascale,” presented by James Stone, a professor in the School of Natural Sciences at the Institute for Advanced Study and at Princeton University, represented one of many opportunities for CSE students to engage with practitioners in small groups, gaining academic insight as well as a wider perspective on future career paths.

Designing for impact

The interdisciplinary PhD program served as a departure point from which Hadjiconstantinou and Marzouk created a new offering that was uniquely its own.

For Marzouk, that meant designing the stand-alone program so it can continually grow and pivot to stay relevant as technology accelerates: “In my view, the vitality of this program is that science and engineering applications nowadays rest on computation in a really foundational way, whether it’s engineering design or scientific discovery. So it’s essential to perform research on the building blocks of this kind of computation. This research also has to be shaped by the way that we apply it so that scientists or engineers will actually use it,” Marzouk says.

The curriculum is structured around six core focus areas, or “ways of thinking,” that are fundamental to CSE:

  • Discretization and numerical methods for partial differential equations;
  • Optimization methods;
  • Inference, statistical computing, and data-driven modeling;
  • High performance computing, software engineering, and algorithms;
  • Mathematical foundations (e.g., functional analysis, probability); and
  • Modeling (i.e., a subject that treats computational modeling in any science or engineering discipline).

Students select and build their own thesis committee that consists of faculty from across MIT, not just those associated with CCSE. The combination of a curriculum that’s “modern and applicable to what employers are looking for in industry and academics,” according to Williams, and the ability to build one’s own group of engaged advisors allows for a level of specialization that’s hard to find elsewhere.

“Academically, I feel like this program is designed in such a flexible and interdisciplinary way. You have a lot of control in terms of which direction you want to go in,” says Rosen Yu, a PhD student. Yu’s research is focused on engineering design optimization, an interest she discovered during her first year of research at MIT with Professor Faez Ahmed. The CSE PhD was about to launch, and it became clear that her research interests skewed more toward computation than the existing mechanical engineering degree; it was a natural fit.

“At other schools, you often see just a pure computer science program or an engineering department with hardly any intersection. But this CSE program, I like to say it’s like a glue between these two communities,” says Yu.

That “glue” is strengthening, with more students matriculating each year, as well as Institute faculty and staff becoming affiliated with CSE. While students’ thesis topics range from Williams’ stochastic methods for model reduction of multiscale chaotic systems to scalable and robust GPU-based optimization for energy systems, the goal of the program remains the same: develop students and research that will make a difference.

“That’s why MIT is an ‘Institute of Technology’ and not a ‘university.’ There’s always this question, no matter what you’re studying: what is it good for? Our students will go on to work in systems biology, simulators of climate models, electrification, hypersonic vehicles, and more, but the whole point is that their research is helping with something,” says Hadjiconstantinou.

How to build AI scaling laws for efficient LLM training and budget maximization

MIT Latest News - Tue, 09/16/2025 - 11:00am

When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can amount to millions of dollars, developers need to be judicious with cost-impacting decisions about, for instance, the model architecture, optimizers, and training datasets before committing to a model. To anticipate the quality and accuracy of a large model’s predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.

New work from MIT and MIT-IBM Watson AI Lab researchers addresses this by amassing and releasing a collection of hundreds of models and metrics concerning training and performance to approximate more than a thousand scaling laws. From this, the team developed a meta-analysis and guide for how to select small models and estimate scaling laws for different LLM model families, so that the budget is optimally applied toward generating reliable performance predictions.

“The notion that you might want to try to build mathematical models of the training process is a couple of years old, but I think what was new here is that most of the work that people had been doing before is saying, ‘can we say something post-hoc about what happened when we trained all of these models, so that when we’re trying to figure out how to train a new large-scale model, we can make the best decisions about how to use our compute budget?’” says Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science and principal investigator with the MIT-IBM Watson AI Lab.

The research was recently presented at the International Conference on Machine Learning by Andreas, along with MIT-IBM Watson AI Lab researchers Leshem Choshen and Yang Zhang of IBM Research.

Extrapolating performance

No matter how you slice it, developing LLMs is an expensive endeavor: from decision-making regarding the numbers of parameters and tokens, data selection and size, and training techniques to determining output accuracy and tuning to the target applications and tasks. Scaling laws offer a way to forecast model behavior by relating a large model’s loss to the performance of smaller, less-costly models from the same family, avoiding the need to fully train every candidate. Mainly, the smaller models differ in their number of parameters and the number of tokens they are trained on. According to Choshen, elucidating scaling laws not only enables better pre-training decisions, but also democratizes the field by enabling researchers without vast resources to understand and build effective scaling laws.

The functional form of scaling laws is relatively simple, incorporating components from the small models that capture the number of parameters and their scaling effect, the number of training tokens and their scaling effect, and the baseline performance for the model family of interest. Together, they help researchers estimate a target large model’s performance loss; the smaller the loss, the better the target model’s outputs are likely to be.
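To make that functional form concrete, here is a minimal sketch of what fitting such a law might look like, assuming a Chinchilla-style parameterization L(N, D) = E + A·N^(−α) + B·D^(−β), where N is the parameter count, D the number of training tokens, and E the baseline loss for the model family. The coefficient names, the small-model measurements, and the 13-billion-parameter target below are illustrative assumptions, not the paper’s released data or code.

```python
# A minimal sketch, assuming a Chinchilla-style scaling law; all numbers are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(ND, E, A, alpha, B, beta):
    """Predicted loss given parameter count N and training-token count D."""
    N, D = ND
    return E + A * N ** (-alpha) + B * D ** (-beta)

# Hypothetical measurements from small models in one family:
# (parameter count, training tokens, final loss)
N = np.array([70e6, 160e6, 410e6, 1.0e9, 2.8e9, 6.9e9])
D = np.array([1.4e9, 3.2e9, 8.2e9, 20e9, 56e9, 138e9])
loss = np.array([3.69, 3.25, 2.86, 2.58, 2.34, 2.19])

# Fit the five coefficients to the small-model observations.
popt, _ = curve_fit(scaling_law, (N, D), loss,
                    p0=[1.7, 400.0, 0.34, 400.0, 0.28], maxfev=20000)

# Extrapolate to a hypothetical 13B-parameter target trained on 260B tokens.
predicted_loss = scaling_law((13e9, 260e9), *popt)
print(f"Predicted target loss: {predicted_loss:.3f}")
```

The smaller the predicted loss, the better the target model’s outputs are likely to be, which is why a cheap but reliable extrapolation like this can steer expensive training decisions.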

These laws allow research teams to weigh trade-offs efficiently and to test how best to allocate limited resources. They’re particularly useful for evaluating scaling of a certain variable, like the number of tokens, and for A/B testing of different pre-training setups.

In general, scaling laws aren’t new; however, in the field of AI, they emerged as models grew and costs skyrocketed. “It’s like scaling laws just appeared at some point in the field,” says Choshen. “They started getting attention, but no one really tested how good they are and what you need to do to make a good scaling law.” Further, scaling laws were themselves also a black box, in a sense. “Whenever people have created scaling laws in the past, it has always just been one model, or one model family, and one dataset, and one developer,” says Andreas. “There hadn’t really been a lot of systematic meta-analysis, as everybody is individually training their own scaling laws. So, [we wanted to know,] are there high-level trends that you see across those things?”

Building better

To investigate this, Choshen, Andreas, and Zhang created a large dataset. They collected LLMs from 40 model families, including Pythia, OPT, OLMO, LLaMA, Bloom, T5-Pile, ModuleFormer mixture-of-experts, GPT, and other families. These included 485 unique, pre-trained models, and where available, data about their training checkpoints, computational cost (FLOPs), training epochs, and the seed, along with 1.9 million performance metrics of loss and downstream tasks. The models differed in their architectures, weights, and so on. Using these models, the researchers fit over 1,000 scaling laws and compared their accuracy across architectures, model sizes, and training regimes, as well as testing how the number of models, the inclusion of intermediate training checkpoints, and partial training impacted the predictive power of scaling laws for target models. They used measurements of absolute relative error (ARE): the absolute difference between the scaling law’s prediction and the observed loss of a large, trained model, relative to that observed loss. With these comparisons in hand, the team distilled practical recommendations for AI practitioners about what makes effective scaling laws.

Their shared guidelines walk developers through the steps and options to consider, and what to expect. First, it’s critical to decide on a compute budget and a target model accuracy. The team found that 4 percent ARE is about the best achievable accuracy one could expect due to random seed noise, but up to 20 percent ARE is still useful for decision-making. The researchers identified several factors that improve predictions, such as including intermediate training checkpoints rather than relying only on final losses, which makes scaling laws more reliable. However, training data from very early checkpoints, before about 10 billion tokens, are noisy, reduce accuracy, and should be discarded. They recommend prioritizing training more models across a spread of sizes, rather than just larger ones, to improve the robustness of the scaling law’s predictions; selecting five models provides a solid starting point.
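As a rough illustration of how this guidance might translate into practice, the sketch below computes the absolute relative error for one prediction and filters out checkpoints recorded before the 10-billion-token mark. The 4 percent and 20 percent thresholds come from the article; the checkpoint records and field layout are purely hypothetical.

```python
# A small sketch applying the reported guidance; data structures are illustrative assumptions.
def absolute_relative_error(predicted_loss, observed_loss):
    """ARE: |prediction - observation| relative to the observed loss."""
    return abs(predicted_loss - observed_loss) / observed_loss

# Hypothetical checkpoint records for one small model: (tokens_seen, loss).
checkpoints = [(2e9, 4.8), (8e9, 4.1), (12e9, 3.7), (40e9, 3.3), (90e9, 3.1)]

# Guideline: drop the noisy records from before ~10 billion tokens.
usable = [(tokens, loss) for tokens, loss in checkpoints if tokens >= 10e9]

# Guideline: ~4% ARE is near the seed-noise floor; up to ~20% can still guide decisions.
are = absolute_relative_error(predicted_loss=2.95, observed_loss=2.84)
if are <= 0.04:
    verdict = "about as accurate as seed noise allows"
elif are <= 0.20:
    verdict = "rough, but still useful for decision-making"
else:
    verdict = "probably not reliable enough to act on"
print(f"ARE = {are:.1%}: {verdict}")
```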

Generally, including larger models improves prediction, but costs can be saved by partially training the target model to about 30 percent of its dataset and using that for extrapolation. If the budget is considerably constrained, developers should consider training one smaller model within the target model family and borrowing scaling law parameters from a model family with similar architecture; however, this may not work for encoder–decoder models. Lastly, the MIT-IBM research group found that when scaling laws were compared across model families, there was strong correlation between two sets of hyperparameters, meaning that three of the five hyperparameters explained nearly all of the variation and could likely capture the model behavior. Together, these guidelines provide a systematic approach to making scaling law estimation more efficient, reliable, and accessible for AI researchers working under varying budget constraints.

Several surprises arose during this work: small models partially trained are still very predictive, and further, the intermediate training stages from a fully trained model can be used (as if they are individual models) for prediction of another target model. “Basically, you don’t pay anything in the training, because you already trained the full model, so the half-trained model, for instance, is just a byproduct of what you did,” says Choshen. Another feature Andreas pointed out was that, when aggregated, the variability across model families and different experiments jumped out and was noisier than expected. Unexpectedly, the researchers found that it’s possible to utilize the scaling laws on large models to predict performance down to smaller models. Other research in the field has hypothesized that smaller models were a “different beast” compared to large ones; however, Choshen disagrees. “If they’re totally different, they should have shown totally different behavior, and they don’t.”

While this work focused on model training time, the researchers plan to extend their analysis to model inference. Andreas says it’s not, “how does my model get better as I add more training data or more parameters, but instead as I let it think for longer, draw more samples. I think there are definitely lessons to be learned here about how to also build predictive models of how much thinking you need to do at run time.” He says the theory of inference time scaling laws might become even more critical because, “it’s not like I'm going to train one model and then be done. [Rather,] it’s every time a user comes to me, they’re going to have a new query, and I need to figure out how hard [my model needs] to think to come up with the best answer. So, being able to build those kinds of predictive models, like we’re doing in this paper, is even more important.”

This research was supported, in part, by the MIT-IBM Watson AI Lab and a Sloan Research Fellowship. 

Microsoft Still Uses RC4

Schneier on Security - Tue, 09/16/2025 - 7:06am

Senator Ron Wyden has asked the Federal Trade Commission to investigate Microsoft over its continued use of the RC4 encryption algorithm. The letter talks about a hacker technique called Kerberoasting, that exploits the Kerberos authentication system.

EPA proposal puts US gas exporters in a bind

ClimateWire News - Tue, 09/16/2025 - 6:29am
The bid to halt greenhouse gas reporting could cause problems for petroleum companies — especially those hoping to sell gas to the E.U.

Trump’s energy EOs go on trial

ClimateWire News - Tue, 09/16/2025 - 6:28am
The case opens Tuesday with testimony about the effects that President Donald Trump's executive orders may be having on climate change. It's the first time a federal trial will feature climate-related testimony.

Interior: Revolution Wind failed to address national security concerns

ClimateWire News - Tue, 09/16/2025 - 6:27am
New England officials have criticized the Trump administration for blocking the offshore wind project that's 80 percent complete.

Fears rise as unregulated property insurers expand

ClimateWire News - Tue, 09/16/2025 - 6:25am
The companies aren't part of state programs that pay claims for insolvent insurers. "That's not a risk we should take," an expert says.

Newsom replaces California’s top air quality official with his climate adviser

ClimateWire News - Tue, 09/16/2025 - 6:24am
Lauren Sanchez will replace California Air Resources Board Chair Liane Randolph, who had more than a year left in her term.

Climate change is burning a €43B hole in Europe’s pocket

ClimateWire News - Tue, 09/16/2025 - 6:24am
Global warming is making droughts, fires and floods more likely.

Australia pledges $6B by 2030 to tackle climate hazards

ClimateWire News - Tue, 09/16/2025 - 6:23am
“No Australian community will be immune” from deadly heat waves, floods, cyclones, droughts and bushfires, the climate minister said.

Catastrophe bonds worth $17.5B land in EU crosshairs

ClimateWire News - Tue, 09/16/2025 - 6:23am
The development, which coincides with the U.S. hurricane season, is dividing market participants.

Kenyan banks need skills to bridge $5B green funding gap

ClimateWire News - Tue, 09/16/2025 - 6:22am
The financing gap to protect Kenya's biodiversity stands at $5.13 billion annually, according to a new industry study.

MIT geologists discover where energy goes during an earthquake

MIT Latest News - Tue, 09/16/2025 - 12:00am

The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.

Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.

They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.

The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.

“The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.

“We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening, in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”

Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.

Under the surface

Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.

We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.

“Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are on the century-to-millennia timescales, making any sort of actionable forecast challenging.”

To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.

“We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.

Microshakes

For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)

The researchers placed samples of the powdered granite — each about 10 square millimeters and 1 millimeter thin — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.

Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.

They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.

From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces. 

“In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”

The researchers suspect that similar processes play out in actual, kilometer-scale quakes.

“Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”

This research was supported, in part, by the National Science Foundation.

How to get your business into the flow

MIT Latest News - Tue, 09/16/2025 - 12:00am

In the late 1990s, a Harley-Davidson executive named Donald Kieffer became general manager of a company engine plant near Milwaukee. The iconic motorcycle maker had forged a celebrated comeback, and Kieffer, who learned manufacturing on the shop floor, had been part of it. Now Kieffer wanted to make his facility better. So he arranged for a noted Toyota executive, Hajime Oba, to pay a visit.

The meeting didn’t go as Kieffer expected. Oba walked around the plant for 45 minutes, diagrammed the setup on a whiteboard, and suggested one modest change. As a high-ranking manager, Kieffer figured he had to make far-reaching upgrades. Instead, Oba asked him, “What is the problem you are trying to solve?”

Oba’s point was subtle. Harley-Davidson had a good plant that could get better, but not by imposing grand, top-down plans. The key was to fix workflow issues the employees could identify. Even a small fix can have large effects, and, anyway, a modestly useful change is better than a big, formulaic makeover that derails things. So Kieffer took Oba’s prompt and started making specific, useful changes. 

“Organizations are dynamic places, and when we try to impose a strict, static structure on them, we drive all that dynamism underground,” says MIT professor of management Nelson Repenning. “And the waste and chaos it creates is 100 times more expensive than people anticipate.”

Now Kieffer and Repenning have written a book about flexible, sensible organizational improvement, “There’s Got to Be a Better Way,” published by PublicAffairs. They call their approach “dynamic work design,” which aims to help firms refine their workflow — and to stop people from making it worse through overconfident, cookie-cutter prescriptions.

“So much of management theory presumes we can predict the future accurately, including our impact on it,” Repenning says. “And everybody knows that’s not true. Yet we go along with the fiction. The premise underlying dynamic work design is, if we accept that we can’t predict the future perfectly, we might design the world differently.”

Kieffer adds: “Our principles address how work is designed. Not how leaders have to act, but how you design human work, and drive changes.”

One collaboration, five principles

This book is the product of a long collaboration: In 1996, Kieffer first met Repenning, who was then a new MIT faculty member, and they soon recognized they thought similarly about managing work. By 2008, Kieffer also became a lecturer at the MIT Sloan School of Management, where Repenning is now a distinguished professor of system dynamics and organization studies.

The duo began teaching executive education classes together at MIT Sloan, often working with firms tackling tough problems. In the 2010s, they worked extensively with BP executives after the Deepwater Horizon accident, finding ways to combine safety priorities with other operations.

Repenning is an expert on system dynamics, an MIT-developed field emphasizing how parts of a system interact. In a firm, making isolated changes may throw the system as a whole further off kilter. Instead, managers need to grasp the larger dynamics — and recognize that a firm’s problems are not usually its people, since most employees perform similarly when burdened by a faulty system.

Whereas many widely touted management systems prescribe set actions in advance — like culling the bottom 10 percent of your employees annually — Repenning and Kieffer believe a firm should study itself empirically and develop improvements from there.

“Managers lose touch with how work actually gets done,” Kieffer says. “We bring managers in touch with real-time work, to see the problems people have, to help them solve it and learn new ways to work.”

Over time, Repenning and Kieffer have codified their ideas about work design into five principles:

  • Solve the right problem: Use empiricism to develop a blame-free statement of issues to address;
  • Structure for discovery: Allow workers to see how their work fits into the bigger picture, and to help improve things;
  • Connect the human chain: Make sure the right information moves from one person to the next;
  • Regulate for flow: New tasks should only enter a system when there is capacity for them to be handled; and
  • Visualize the work: Create a visual method — think of a whiteboard with sticky notes — for mapping work operations.

No mugs, no t-shirts — just open your eyes

Applying dynamic work design to any given firm may sound simple, but Repenning and Kieffer note that many forces make it hard to implement. For instance, firm leaders may be tempted to opt for technology-based solutions when there are simpler, cheaper fixes available.

Indeed, “resorting to technology before fixing the underlying design risks wasting money and embedding the original problem even deeper in the organization,” they write in the book.

Moreover, dynamic work design is not itself a solution, but a way of trying to find a specific solution.

“One thing that keeps Don and I up at night is a CEO reading our book and thinking, ‘We’re going to be a dynamic work design company,’ and printing t-shirts and coffee mugs and holding two-day conferences where everyone signs the dynamic work design poster, and evaluating everyone every week on how dynamic they are,” Repenning says. “Then you’re being awfully static.”

After all, firms change, and their needs change. Repenning and Kieffer want managers to keep studying their firm’s workflow, so they can keep current with those needs. In fairness, a certain number of managers already do this.

“Most people have experienced fleeting moments of good work design,” Repenning says. Building on that, he says, managers and employees can keep driving a process of improvement that is realistic and logical.

“Start small,” he adds. “Pick one problem you can work on in a couple of weeks, and solve that. Most cases, with open eyes, there’s low-hanging fruit. You find the places you can win, and change incrementally, rather than all at once. For senior executives, this is hard. They are used to doing big things. I tell our executive ed students, it’s going to feel uncomfortable at the beginning, but this is a much more sustainable path to progress.”

Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis

EFF: Updates - Mon, 09/15/2025 - 3:07pm

This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts. 

For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including Plan C, Women on Web, Reproaction, and Women First Digital—we launched the #StopCensoringAbortion campaign to collect and amplify these stories.  

Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms. 

We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that almost none of the submissions we received violated any of the platforms’ stated policies. Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example: 

Screenshot submitted by Lauren Kahre to EFF

In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store pills safely.

Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: Meta has publicly insisted that posts like these should not be censored. In a February 2024 letter to Amnesty International, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.” 

Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.” 

Screenshot submitted by Lauren Kahre to EFF

In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”  

Yet in Lauren’s case and others, the posts very clearly did no such thing. And as Meta itself has explained: “Providing guidance on how to legally access pharmaceuticals is permitted as it is not considered an offer to buy, sell or trade these drugs.” 

In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.” 

Over and over again, the policies say one thing, but the actual enforcement says another. 

We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and the gap between those policies and how they’re being applied. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push them to do better.

In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with Barbara Ortutay at The Associated Press, whose report on some of these takedowns was published today.  

We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.  

With reproductive rights under attack both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information. 

This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion    

Climate Action Learning Lab helps state and local leaders identify and implement effective climate mitigation strategies

MIT Latest News - Mon, 09/15/2025 - 10:00am

This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways to benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.

“State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”

From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.

“We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.

This phase of the Learning Lab was possible thanks to grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered a presentation on their jurisdiction’s priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.

“The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”

The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space. 

“The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”

The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project. The project’s goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence to identify and advance equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications from both research partnerships formed through the Learning Lab as well as other eligible applicants.

Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.

How MIT’s Steel Research Group led to a groundbreaking national materials initiative

MIT Latest News - Mon, 09/15/2025 - 10:00am

Traditionally, developing new materials for cutting-edge applications — such as SpaceX’s Raptor engine — has taken a decade or more. But thanks to a breakthrough technology pioneered by an MIT research group now celebrating its 40th year, a key material for the Raptor was delivered in just a few years. The same innovation has accelerated the development of high-performance materials for the Apple Watch, U.S. Air Force jets, and Formula One race cars.

The MIT Steel Research Group (SRG) also led to a national initiative that “has already sparked a paradigm shift in how new materials are discovered, developed, and deployed,” according to a White House story describing the Materials Genome Initiative’s first five years.

Gregory B. Olson founded the SRG in 1985 with the goal of using computers to accelerate the hunt for new materials by plumbing databases of those materials’ fundamental properties. It was the beginning of a new field: computational materials design.

At the time, “nobody knew whether we could really do this,” remembers Olson, a professor of the practice in the Department of Materials Science and Engineering. “I have some documented evidence of agencies resisting the entire concept because, in their opinion, a material could never be designed.”

Eventually, however, Olson and colleagues showed that the approach worked. One of the most important results: In 2011 President Barack Obama made a speech “essentially announcing that this technology is real and it’s what everybody should be doing,” says Olson, who is also affiliated with the Materials Research Laboratory. In the speech, Obama launched the Materials Genome Initiative (MGI).

The MGI is developing “a fundamental database of the parameters that direct the assembly of the structures of materials,” much like the Human Genome Project “is a database that directs the assembly of the structures of life,” says Olson.

The goal is to use the MGI database to discover, manufacture, and deploy advanced materials twice as fast, and at a fraction of the cost, compared to traditional methods, according to the MGI website.

At MIT, the SRG continues to focus on steel, “because it’s the material [the world has] studied the longest, so we have the deepest fundamental understanding of its properties,” says Olson, project principal investigator.

The Cybersteels Project, funded by the Office of Naval Research, brings together eight MIT faculty who are working to expand our knowledge of steel, eventually adding their data to the MGI. Major areas of study include the boundaries between the microscopic grains that make up a steel and the economic modeling of new steels.

Concludes Olson, “it has been tremendously satisfying to see how this technology has really blossomed in the hands of leading corporations and led to a national initiative to take it even further.”

Machine-learning tool gives doctors a more detailed 3D picture of fetal health

MIT Latest News - Mon, 09/15/2025 - 10:00am

For pregnant women, ultrasounds are an informative (and sometimes necessary) procedure. They typically produce two-dimensional black-and-white scans of fetuses that can reveal key insights, including biological sex, approximate size, and abnormalities like heart issues or cleft lip. If your doctor wants a closer look, they may use magnetic resonance imaging (MRI), which uses magnetic fields to capture images that can be combined to create a 3D view of the fetus.

MRIs aren’t a catch-all, though; the 3D scans are difficult for doctors to interpret well enough to diagnose problems because our visual system is not accustomed to processing 3D volumetric scans (in other words, a wrap-around look that also shows us the inner structures of a subject). Enter machine learning, which could help model a fetus’s development more clearly and accurately from data — although no such algorithm has been able to model their somewhat random movements and various body shapes.

That is, until a new approach called “Fetal SMPL” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Boston Children’s Hospital (BCH), and Harvard Medical School presented clinicians with a more detailed picture of fetal health. It was adapted from “SMPL” (Skinned Multi-Person Linear model), a 3D model developed in computer graphics to capture adult body shapes and poses, as a way to represent fetal body shapes and poses accurately. Fetal SMPL was then trained on 20,000 MRI volumes to predict the location and size of a fetus and create sculpture-like 3D representations. Inside each model is a skeleton with 23 articulated joints called a “kinematic tree,” which the system uses to pose and move like the fetuses it saw during training.

The extensive, real-world scans that Fetal SMPL learned from helped it develop pinpoint accuracy. Imagine stepping into a stranger’s footprint while blindfolded, and not only does it fit perfectly, but you correctly guess what shoe they wore — similarly, the tool closely matched the position and size of fetuses in MRI frames it hadn’t seen before. Fetal SMPL was only misaligned by an average of about 3.1 millimeters, a gap smaller than a single grain of rice.

The approach could enable doctors to precisely measure things like the size of a baby’s head or abdomen and compare these metrics with healthy fetuses at the same age. Fetal SMPL has demonstrated its clinical potential in early tests, where it achieved accurate alignment results on a small group of real-world scans.

“It can be challenging to estimate the shape and pose of a fetus because they’re crammed into the tight confines of the uterus,” says lead author, MIT PhD student, and CSAIL researcher Yingcheng Liu SM ’21. “Our approach overcomes this challenge using a system of interconnected bones under the surface of the 3D model, which represent the fetal body and its motions realistically. Then, it relies on a coordinate descent algorithm to make a prediction, essentially alternating between guessing pose and shape from tricky data until it finds a reliable estimate.”
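For readers curious about the alternating optimization Liu describes, here is a minimal sketch of the idea under stated assumptions: a hypothetical fetal_model_surface function stands in for the Fetal SMPL forward model (pose and shape coefficients in, surface vertices out), and SciPy’s general-purpose optimizer is used in place of the authors’ actual solver. The joint count comes from the article; the shape-coefficient count, vertex count, and linear stand-in model are illustrative only.

```python
# A minimal sketch of coordinate descent over pose and shape; the forward model is a stand-in.
import numpy as np
from scipy.optimize import minimize

N_JOINTS = 23          # articulated joints in the kinematic tree (from the article)
N_SHAPE_COEFFS = 10    # assumed number of body-shape coefficients
N_VERTICES = 300       # assumed number of surface vertices

# Hypothetical linear basis standing in for the real, learned model.
_rng = np.random.default_rng(0)
_BASIS = _rng.standard_normal((N_JOINTS * 3 + N_SHAPE_COEFFS, N_VERTICES, 3)) * 0.01

def fetal_model_surface(pose, shape):
    """Hypothetical forward model: (pose, shape) -> (N_VERTICES, 3) surface vertices."""
    params = np.concatenate([pose, shape])
    return np.tensordot(params, _BASIS, axes=1)

def alignment_error(pose, shape, observed_points):
    """Mean squared distance between model vertices and observed MRI points."""
    return np.mean((fetal_model_surface(pose, shape) - observed_points) ** 2)

def fit(observed_points, n_rounds=4):
    """Alternate between pose and shape updates, coordinate-descent style."""
    pose = np.zeros(N_JOINTS * 3)      # one axis-angle rotation per joint
    shape = np.zeros(N_SHAPE_COEFFS)
    for _ in range(n_rounds):
        # Hold shape fixed, refine the pose.
        pose = minimize(lambda p: alignment_error(p, shape, observed_points),
                        pose, method="L-BFGS-B").x
        # Hold pose fixed, refine the shape.
        shape = minimize(lambda s: alignment_error(pose, s, observed_points),
                         shape, method="L-BFGS-B").x
    return pose, shape

# Usage with synthetic "observed" points generated from known parameters.
true_pose = _rng.standard_normal(N_JOINTS * 3) * 0.1
true_shape = _rng.standard_normal(N_SHAPE_COEFFS)
pose_est, shape_est = fit(fetal_model_surface(true_pose, true_shape))
```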

In utero

Fetal SMPL was tested on shape and pose accuracy against the closest baseline the researchers could find: a system that models infant growth called “SMIL.” Since babies out of the womb are larger than fetuses, the team shrank those models by 75 percent to level the playing field.

The system outperformed this baseline on a dataset of fetal MRIs between the gestational ages of 24 and 37 weeks taken at Boston Children’s Hospital. Fetal SMPL was able to recreate real scans more precisely, as its models closely lined up with real MRIs.

The method was efficient at lining up their models to images, only needing three iterations to arrive at a reasonable alignment. In an experiment that counted how many incorrect guesses Fetal SMPL had made before arriving at a final estimate, its accuracy plateaued from the fourth step onward.

The researchers have just begun testing their system in the real world, where it produced similarly accurate models in initial clinical tests. While these results are promising, the team notes that they’ll need to apply their results to larger populations, different gestational ages, and a variety of disease cases to better understand the system’s capabilities.

Only skin deep

Liu also notes that their system only helps analyze what doctors can see on the surface of a fetus, since only bone-like structures lie beneath the skin of the models. To better monitor babies’ internal health, such as liver, lung, and muscle development, the team intends to make their tool volumetric, modeling the fetus’s inner anatomy from scans. Such upgrades would make the models more human-like, but the current version of Fetal SMPL already presents a precise (and unique) upgrade to 3D fetal health analysis.

“This study introduces a method specifically designed for fetal MRI that effectively captures fetal movements, enhancing the assessment of fetal development and health,” says Kiho Im, Harvard Medical School associate professor of pediatrics and staff scientist in the Division of Newborn Medicine at BCH’s Fetal-Neonatal Neuroimaging and Developmental Science Center. Im, who was not involved with the paper, adds that this approach “will not only improve the diagnostic utility of fetal MRI, but also provide insights into the early functional development of the fetal brain in relation to body movements.”

“This work reaches a pioneering milestone by extending parametric surface human body models for the earliest shapes of human life: fetuses,” says Sergi Pujades, an associate professor at University Grenoble Alpes, who wasn’t involved in the research. “It allows us to detangle the shape and motion of a human, which has already proven to be key in understanding how adult body shape relates to metabolic conditions and how infant motion relates to neurodevelopmental disorders. In addition, the fact that the fetal model stems from, and is compatible with, the adult (SMPL) and infant (SMIL) body models, will allow us to study human shape and pose evolution over long periods of time. This is an unprecedented opportunity to further quantify how human shape growth and motion are affected by different conditions.”

Liu wrote the paper with three CSAIL members: Peiqi Wang SM ’22, PhD ’25; MIT PhD student Sebastian Diaz; and senior author Polina Golland, the Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science, a principal investigator in MIT CSAIL, and the leader of the Medical Vision Group. BCH assistant professor of pediatrics Esra Abaci Turk, Inria researcher Benjamin Billot, and Harvard Medical School professor of pediatrics and professor of radiology Patricia Ellen Grant are also authors on the paper. This work was supported, in part, by the National Institutes of Health and the MIT CSAIL-Wistron Program.

The researchers will present their work at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in September.

Lawsuit About WhatsApp Security

Schneier on Security - Mon, 09/15/2025 - 7:05am

Attaullah Baig, WhatsApp’s former head of security, has filed a whistleblower lawsuit alleging that Facebook deliberately failed to fix a bunch of security flaws, in violation of its 2019 settlement agreement with the Federal Trade Commission.

The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers...

How hurricanes and falling vaccine rates could collide in Florida

ClimateWire News - Mon, 09/15/2025 - 6:15am
Public health experts say diseases are more likely to tear through crowded storm shelters after the state canceled vaccine mandates.
