Feed aggregator
Environmentalists urge Petrobras to speed up shift to renewables
Health losses attributed to anthropogenic climate change
Nature Climate Change, Published online: 17 September 2025; doi:10.1038/s41558-025-02399-7
The authors assess the growing field of climate change health impact attribution. They show a literature bias toward direct heat effects and extreme weather in high-income countries, highlighting the lack of global representation in current efforts.
California, Tell Governor Newsom: Regulate AI Police Reports and Sign S.B. 524
The California legislature has passed a necessary piece of legislation, S.B. 524, which starts to regulate police reports written by generative AI. Now, it’s up to us to make sure Governor Newsom will sign the bill.
We must make our voices heard. These technologies obscure certain records and drafts from public disclosure. Vendors have invested heavily in their ability to sell genAI to police.
AI-generated police reports are spreading rapidly. The most popular product on the market is Draft One from Axon, which is already one of the country’s biggest purveyors of police tech, including body-worn cameras. By bundling its products together, Axon has capitalized on its customer base to spread its opaque and potentially harmful genAI product.
Many things can go wrong when genAI is used to write narrative police reports. First, because the product relies on body-worn camera audio, there’s a big chance of the AI draft missing context such as sarcasm, culturally specific vocabulary and slang, or languages other than English. While police are expected to edit the AI’s version of events to make up for these flaws, many officers will defer to the AI. Police are also supposed to make an independent decision before arresting a person who was identified by face recognition–and police mess that up all the time. The prosecutor of King County, Washington, has forbidden local officers from using Draft One out of fear that it is unreliable.
Then, of course, there’s the matter of dishonesty. Many public defenders and criminal justice practitioners have voiced concerns about what this technology would do to cross-examination. If caught with a different story on the stand than the one in their police report, an officer can easily say, “the AI wrote that and I didn’t edit well enough.” The genAI creates a layer of plausible deniability. Carelessness is a very different offense from lying on the stand.
To make matters worse, an investigation by EFF found that Axon’s Draft One product defies transparency by design. The technology is deliberately built to obscure what portion of a finished report was written by AI and which portions were written by an officer–making it difficult to determine if an officer is lying about which portions of a report were written by AI.
But now, California has an important chance to join other states, like Utah, that are passing laws to rein in these technologies and to define what minimum safeguards and transparency must accompany their use.
S.B. 524 does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.
These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, Axon’s Draft One would be out of compliance with this bill, which would require Axon to redesign its tool to make it more transparent—a small win for communities everywhere.
So now we’re asking you: help us make a difference. Use EFF’s Action Center to tell Governor Newsom to sign S.B. 524 into law!
Decoding the sounds of battery formation and degradation
Before batteries lose power, fail suddenly, or burst into flames, they tend to produce faint sounds over time that provide a signature of the degradation processes going on within their structure. But until now, nobody had figured out how to interpret exactly what those sounds meant, and how to distinguish between ordinary background noise and significant signs of possible trouble.
Now, a team of researchers in MIT’s Department of Chemical Engineering has carried out a detailed analysis of the sounds emanating from lithium-ion batteries and has been able to correlate particular sound patterns with specific degradation processes taking place inside the cells. The new findings could provide the basis for relatively simple, totally passive and nondestructive devices that could continuously monitor the health of battery systems, for example in electric vehicles or grid-scale storage facilities, to provide ways of predicting useful operating lifetimes and forecasting failures before they occur.
The findings were reported Sept. 5 in the journal Joule, in a paper by MIT graduate students Yash Samantaray and Alexander Cohen, former MIT research scientist Daniel Cogswell PhD ’10, and Chevron Professor of Chemical Engineering and professor of mathematics Martin Z. Bazant.
“In this study, through some careful scientific work, our team has managed to decode the acoustic emissions,” Bazant says. “We were able to classify them as coming from gas bubbles that are generated by side reactions, or by fractures from the expansion and contraction of the active material, and to find signatures of those signals even in noisy data.”
Samantaray explains: “I think the core of this work is to look at a way to investigate internal battery mechanisms while they’re still charging and discharging, and to do this nondestructively.” He adds, “Out there in the world now, there are a few methods that exist, but most are very expensive and not really conducive to batteries in their normal format.”
To carry out their analysis, the team coupled electrochemical testing with recording of the acoustic emissions, under real-world charging and discharging conditions, using detailed signal processing to correlate the electrical and acoustic data. By doing so, he says, “we were able to come up with a very cost-effective and efficient method of actually understanding gas generation and fracture of materials.”
Gas generation and fracturing are two primary mechanisms of degradation and failure in batteries, so being able to detect and distinguish those processes, just by monitoring the sounds produced by the batteries, could be a significant tool for those managing battery systems.
Previous approaches have simply monitored the sounds and recorded times when the overall sound level exceeded some threshold. But in this work, by simultaneously monitoring the voltage and current as well as the sound characteristics, Bazant says, “We know that [sound] emissions happen at a certain potential [voltage], and that helps us identify what the process might be that is causing that emission.”
After these tests, they would then take the batteries apart and study them under an electron microscope to detect fracturing of the materials.
In addition, they applied a wavelet transform — essentially, a way of encoding the frequency and duration of each captured signal, providing distinct signatures that can then be more easily extracted from background noise. “No one had done that before,” Bazant says, “so that was another breakthrough.”
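To make the idea concrete, here is a minimal Python sketch of that kind of processing, assuming a digitized acoustic trace and a synchronized voltage log. The sampling rate, thresholds, and helper functions are hypothetical illustrations, not the team’s published pipeline.

```python
# Illustrative sketch only -- not the MIT team's actual code. It shows the general
# recipe described above: flag acoustic-emission bursts above the noise floor,
# compute a continuous wavelet transform of each burst to get a time-frequency
# signature, and tag each event with the cell voltage at its onset.
import numpy as np
import pywt

FS = 1_000_000  # assumed acoustic sampling rate in Hz (hypothetical)

def detect_events(acoustic, threshold_sigma=5.0, gap=256):
    """Return [start, stop] sample indices where the envelope exceeds the noise floor."""
    envelope = np.abs(acoustic)
    noise = 1.4826 * np.median(envelope)          # robust noise estimate
    hits = np.where(envelope > threshold_sigma * noise)[0]
    events = []
    for idx in hits:
        if not events or idx - events[-1][1] > gap:
            events.append([idx, idx])             # start a new event
        else:
            events[-1][1] = idx                   # extend the current event
    return events

def event_signature(acoustic, start, stop, scales=np.arange(1, 65)):
    """Continuous wavelet transform of one burst: a time-frequency fingerprint."""
    burst = acoustic[start:stop + 1]
    coeffs, freqs = pywt.cwt(burst, scales, "morl", sampling_period=1.0 / FS)
    return np.abs(coeffs), freqs

def voltage_at_events(events, acoustic_time, volt_time, voltage):
    """Interpolate the cell voltage at each event onset, so each emission can be
    associated with the electrochemical state that produced it."""
    return [np.interp(acoustic_time[start], volt_time, voltage) for start, _ in events]
```

Pairing each burst’s wavelet signature with the voltage at which it occurred is what lets gas-generation events be distinguished from fracture events, per the description above.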
Acoustic emissions are widely used in engineering, he points out, for example to monitor structures such as bridges for signs of incipient failure. “It’s a great way to monitor a system,” he says, “because those emissions are happening whether you’re listening to them or not,” so by listening, you can learn something about internal processes that would otherwise be invisible.
With batteries, he says, “we often have a hard time interpreting the voltage and current information as precisely as we’d like, to know what’s happening inside a cell. And so this offers another window into the cell’s state of health, including its remaining useful life, and safety, too.” In a related paper with Oak Ridge National Laboratory researchers, the team has shown that acoustic emissions can provide an early warning of thermal runaway, a situation that can lead to fires if not caught. The new study suggests that these sounds can be used to detect gas generation prior to combustion, “like seeing the first tiny bubbles in a pot of heated water, long before it boils,” says Bazant.
The next step will be to take this new knowledge of how certain sounds relate to specific conditions, and develop a practical, inexpensive monitoring system based on this understanding. For example, the team has a grant from Tata Motors to develop a battery monitoring system for its electric vehicles. “Now, we know what to look for, and how to correlate that with lifetime and health and safety,” Bazant says.
One possible application of this new understanding, Samantaray says, is “as a lab tool for groups that are trying to develop new materials or test new environments, so they can actually determine gas generation or active material fracturing without having to open up the battery.”
Bazant adds that the system could also be useful for quality control in battery manufacturing. “The most expensive and rate-limiting process in battery production is often the formation cycling,” he says. This is the process where batteries are cycled through charging and discharging to break them in, and part of that process involves chemical reactions that release some gas. The new system would allow detection of these gas formation signatures, he says, “and by sensing them, it may be easier to isolate well-formed cells from poorly formed cells very early, even before the useful life of the battery, when it’s being made.”
The work was supported by the Toyota Research Institute, the Center for Battery Sustainability, the National Science Foundation, and the Department of Defense, and made use of the facilities of MIT.nano.
A new community for computational science and engineering
For the past decade, MIT has offered doctoral-level study in computational science and engineering (CSE) exclusively through an interdisciplinary program designed for students applying computation within a specific science or engineering field.
As interest grew among students focused primarily on advancing CSE methodology itself, it became clear that a dedicated academic home for this group — students and faculty deeply invested in the foundations of computational science and engineering — was needed.
Now, with a stand-alone CSE PhD program, they have not only a space for fostering discovery in the cross-cutting methodological dimensions of computational science and engineering, but also a tight-knit community.
“This program recognizes the existence of computational science and engineering as a discipline in and of itself, so you don’t have to be doing this work through the lens of mechanical or chemical engineering, but instead in its own right,” says Nicolas Hadjiconstantinou, co-director of the Center for Computational Science and Engineering (CCSE).
Offered by CCSE and launched in 2023, the stand-alone program blends both coursework and a thesis, much like other MIT PhD programs, yet its methodological focus sets it apart from other Institute offerings.
“What’s unique about this program is that it’s not hosted by one specific department. The stand-alone program is, at its core, about computational science and cross-cutting methodology. We connect this research with people in a lot of different application areas. We have oceanographers, people doing materials science, students with a focus on aeronautics and astronautics, and more,” says outgoing co-director Youssef Marzouk, now the associate dean of the MIT Schwarzman College of Computing.
Expanding horizons
Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering, and Marzouk, the Breene M. Kerr Professor of Aeronautics and Astronautics, have led the center’s efforts since 2018, and developed the program and curriculum together. The duo was intentional about crafting a program that fosters students’ individual research while also exposing them to all the field has to offer.
To expand students’ horizons and continue to build a collaborative community, the PhD in CSE program features two popular seminar series: weekly community seminars that focus primarily on internal speakers (current graduate students, postdocs, research scientists, and faculty), and monthly distinguished seminars in CSE, which are Institute-wide and bring external speakers from various institutions and industry roles.
“Something surprising about the program has been the seminars. I thought it would be the same people I see in my classes and labs, but it’s much broader than that,” says Emily Williams, a fourth-year PhD student and a Department of Energy Computational Science graduate fellow. “One of the most interesting seminars was around simulating fluid flow for biomedical applications. My background is in fluids, so I understand that part, but seeing it applied in a totally different domain than what I work in was eye-opening,” says Williams.
That seminar, “Astrophysical Fluid Dynamics at Exascale,” presented by James Stone, a professor in the School of Natural Sciences at the Institute for Advanced Study and at Princeton University, represented one of many opportunities for CSE students to engage with practitioners in small groups, gaining academic insight as well as a wider perspective on future career paths.
Designing for impact
The interdisciplinary PhD program served as a departure point from which Hadjiconstantinou and Marzouk created a new offering that was uniquely its own.
For Marzouk, that meant designing the stand-alone program so that it can constantly grow and pivot to stay relevant as technology accelerates: “In my view, the vitality of this program is that science and engineering applications nowadays rest on computation in a really foundational way, whether it’s engineering design or scientific discovery. So it’s essential to perform research on the building blocks of this kind of computation. This research also has to be shaped by the way that we apply it so that scientists or engineers will actually use it,” Marzouk says.
The curriculum is structured around six core focus areas, or “ways of thinking,” that are fundamental to CSE:
- Discretization and numerical methods for partial differential equations;
- Optimization methods;
- Inference, statistical computing, and data-driven modeling;
- High performance computing, software engineering, and algorithms;
- Mathematical foundations (e.g., functional analysis, probability); and
- Modeling (i.e., a subject that treats computational modeling in any science or engineering discipline).
Students select and build their own thesis committee that consists of faculty from across MIT, not just those associated with CCSE. The combination of a curriculum that’s “modern and applicable to what employers are looking for in industry and academics,” according to Williams, and the ability to build your own group of engaged advisors allows for a level of specialization that’s hard to find elsewhere.
“Academically, I feel like this program is designed in such a flexible and interdisciplinary way. You have a lot of control in terms of which direction you want to go in,” says Rosen Yu, a PhD student. Yu’s research is focused on engineering design optimization, an interest she discovered during her first year of research at MIT with Professor Faez Ahmed. The CSE PhD was about to launch, and it became clear that her research interests skewed more toward computation than toward the existing mechanical engineering degree; the new program was a natural fit.
“At other schools, you often see just a pure computer science program or an engineering department with hardly any intersection. But this CSE program, I like to say it’s like a glue between these two communities,” says Yu.
That “glue” is strengthening, with more students matriculating each year, as well as Institute faculty and staff becoming affiliated with CSE. While students’ thesis topics range from Williams’ stochastic methods for model reduction of multiscale chaotic systems to scalable and robust GPU-based optimization for energy systems, the goal of the program remains the same: develop students and research that will make a difference.
“That's why MIT is an ‘Institute of Technology’ and not a ‘university.’ There’s always this question, no matter what you’re studying: what is it good for? Our students will go on to work in systems biology, simulators of climate models, electrification, hypersonic vehicles, and more, but the whole point is that their research is helping with something,” says Hadjiconstantinou.
How to build AI scaling laws for efficient LLM training and budget maximization
When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can amount to millions of dollars, developers need to be judicious with cost-impacting decisions about, for instance, the model architecture, optimizers, and training datasets before committing to a model. To anticipate the quality and accuracy of a large model’s predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.
New work from MIT and MIT-IBM Watson AI Lab researchers addresses this by amassing and releasing a collection of hundreds of models and metrics concerning training and performance to approximate more than a thousand scaling laws. From this, the team developed a meta-analysis and guide for how to select small models and estimate scaling laws for different LLM model families, so that the budget is optimally applied toward generating reliable performance predictions.
“The notion that you might want to try to build mathematical models of the training process is a couple of years old, but I think what was new here is that most of the work that people had been doing before is saying, ‘can we say something post-hoc about what happened when we trained all of these models, so that when we’re trying to figure out how to train a new large-scale model, we can make the best decisions about how to use our compute budget?’” says Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science and principal investigator with the MIT-IBM Watson AI Lab.
The research was recently presented at the International Conference on Machine Learning by Andreas, along with MIT-IBM Watson AI Lab researchers Leshem Choshen and Yang Zhang of IBM Research.
Extrapolating performance
No matter how you slice it, developing LLMs is an expensive endeavor: from decision-making regarding the numbers of parameters and tokens, data selection and size, and training techniques to determining output accuracy and tuning to the target applications and tasks. Scaling laws offer a way to forecast model behavior by relating a large model’s loss to the performance of smaller, less-costly models from the same family, avoiding the need to fully train every candidate. Mainly, the differences between the smaller models are the number of parameters and the training token count. According to Choshen, elucidating scaling laws not only enables better pre-training decisions, but also democratizes the field by enabling researchers without vast resources to understand and build effective scaling laws.
The functional form of scaling laws is relatively simple, incorporating components from the small models that capture the number of parameters and their scaling effect, the number of training tokens and their scaling effect, and the baseline performance for the model family of interest. Together, they help researchers estimate a target large model’s performance loss; the smaller the loss, the better the target model’s outputs are likely to be.
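The article doesn’t spell out the exact parameterization, but a common five-parameter choice consistent with this description (and with the “five hyperparameters” mentioned later) is the Chinchilla-style law L(N, D) = E + A/N^alpha + B/D^beta, where N is the parameter count, D is the number of training tokens, and E is the family’s baseline loss. The sketch below, using placeholder numbers, shows how such a law can be fit to a handful of small models and then extrapolated to a larger target.

```python
# A minimal sketch, not the paper's exact method: fit a Chinchilla-style scaling
# law L(N, D) = E + A * N**(-alpha) + B * D**(-beta) to observations from small
# models and extrapolate to a larger target. All data values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(ND, logA, logB, alpha, beta, E):
    # A and B are fit in log space to keep them positive and well-conditioned.
    N, D = ND
    return E + np.exp(logA) * N ** (-alpha) + np.exp(logB) * D ** (-beta)

# Hypothetical small models from one family: sizes, tokens seen, and final loss.
N_params = np.array([70e6, 160e6, 410e6, 1.0e9, 2.8e9, 6.9e9])
D_tokens = np.array([1.5e9, 3e9, 8e9, 20e9, 60e9, 150e9])
losses   = np.array([4.10, 3.72, 3.35, 3.02, 2.80, 2.62])

popt, _ = curve_fit(
    scaling_law, (N_params, D_tokens), losses,
    p0=[np.log(400.0), np.log(400.0), 0.3, 0.3, 1.7], maxfev=20000,
)

# Predict the loss of a hypothetical larger target model before training it.
pred = scaling_law((np.array([13e9]), np.array([300e9])), *popt)
print(f"predicted target loss: {pred[0]:.3f}")
```

Fitting the coefficients in log space and extrapolating only modestly beyond the fitted range are standard precautions; the guidelines discussed below give more concrete advice about which models and checkpoints to include.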
These laws allow research teams to weigh trade-offs efficiently and to test how best to allocate limited resources. They’re particularly useful for evaluating scaling of a certain variable, like the number of tokens, and for A/B testing of different pre-training setups.
In general, scaling laws aren’t new; however, in the field of AI, they emerged as models grew and costs skyrocketed. “It’s like scaling laws just appeared at some point in the field,” says Choshen. “They started getting attention, but no one really tested how good they are and what you need to do to make a good scaling law.” Further, scaling laws were themselves also a black box, in a sense. “Whenever people have created scaling laws in the past, it has always just been one model, or one model family, and one dataset, and one developer,” says Andreas. “There hadn’t really been a lot of systematic meta-analysis, as everybody is individually training their own scaling laws. So, [we wanted to know,] are there high-level trends that you see across those things?”
Building better
To investigate this, Choshen, Andreas, and Zhang created a large dataset. They collected LLMs from 40 model families, including Pythia, OPT, OLMO, LLaMA, Bloom, T5-Pile, ModuleFormer mixture-of-experts, GPT, and other families. These included 485 unique, pre-trained models, and where available, data about their training checkpoints, computational cost (FLOPs), training epochs, and the seed, along with 1.9 million performance metrics of loss and downstream tasks. The models differed in their architectures, weights, and so on. Using these models, the researchers fit over 1,000 scaling laws and compared their accuracy across architectures, model sizes, and training regimes, as well as testing how the number of models, inclusion of intermediate training checkpoints, and partial training impacted the predictive power of scaling laws for target models. They used measurements of absolute relative error (ARE): the absolute difference between the scaling law’s prediction and the observed loss of the large, trained model, taken relative to that observed loss. With this, the team compared the scaling laws and, after analysis, distilled practical recommendations for AI practitioners about what makes effective scaling laws.
Their shared guidelines walk developers through the steps, options, and expectations to consider. First, it’s critical to decide on a compute budget and a target model accuracy. The team found that about 4 percent ARE is the best achievable accuracy one could expect due to random seed noise, but up to 20 percent ARE is still useful for decision-making. The researchers identified several factors that improve predictions, like including intermediate training checkpoints rather than relying only on final losses; this made scaling laws more reliable. However, training data from before 10 billion tokens are noisy, reduce accuracy, and should be discarded. They recommend prioritizing training more models across a spread of sizes, not just larger models, to improve the robustness of the scaling law’s predictions; selecting five models provides a solid starting point.
Generally, including larger models improves prediction, but costs can be saved by partially training the target model to about 30 percent of its dataset and using that for extrapolation. If the budget is considerably constrained, developers should consider training one smaller model within the target model family and borrow scaling law parameters from a model family with similar architecture; however, this may not work for encoder–decoder models. Lastly, the MIT-IBM research group found that when scaling laws were compared across model families, there was strong correlation between two sets of hyperparameters, meaning that three of the five hyperparameters explained nearly all of the variation and could likely capture the model behavior. Together, these guidelines provide a systematic approach to making scaling law estimation more efficient, reliable, and accessible for AI researchers working under varying budget constraints.
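As a rough illustration of how those recommendations might be encoded, here is a small sketch that filters out the noisy early checkpoints, computes the absolute relative error of a prediction, and interprets it against the 4 percent and 20 percent thresholds above. The data structures and function names are invented for illustration, not taken from the paper.

```python
# Illustrative sketch of the budgeting heuristics above. The 10B-token cutoff,
# ~5-model starting point, and 4%/20% ARE thresholds come from the article;
# everything else (types, names) is hypothetical.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    model_name: str
    n_params: float     # parameter count
    tokens_seen: float  # training tokens at this checkpoint
    loss: float         # measured loss at this checkpoint

MIN_TOKENS = 10e9       # drop noisy checkpoints from before ~10B tokens
STARTING_MODELS = 5     # a solid starting point is ~5 models across sizes

def usable_checkpoints(checkpoints):
    """Keep intermediate checkpoints, but discard the very early, noisy ones."""
    return [c for c in checkpoints if c.tokens_seen >= MIN_TOKENS]

def absolute_relative_error(predicted_loss, observed_loss):
    """ARE between a scaling-law prediction and the target model's observed loss."""
    return abs(predicted_loss - observed_loss) / observed_loss

def interpret_are(are):
    if are <= 0.04:
        return "near the ~4% seed-noise floor: about as accurate as possible"
    if are <= 0.20:
        return "within 20%: still useful for comparing pre-training setups"
    return "above 20%: add models/checkpoints or partially train the target (~30%)"
```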
Several surprises arose during this work: small models partially trained are still very predictive, and further, the intermediate training stages from a fully trained model can be used (as if they are individual models) for prediction of another target model. “Basically, you don’t pay anything in the training, because you already trained the full model, so the half-trained model, for instance, is just a byproduct of what you did,” says Choshen. Another feature Andreas pointed out was that, when aggregated, the variability across model families and different experiments jumped out and was noisier than expected. Unexpectedly, the researchers found that it’s possible to utilize the scaling laws on large models to predict performance down to smaller models. Other research in the field has hypothesized that smaller models were a “different beast” compared to large ones; however, Choshen disagrees. “If they’re totally different, they should have shown totally different behavior, and they don’t.”
While this work focused on model training time, the researchers plan to extend their analysis to model inference. Andreas says it’s not, “how does my model get better as I add more training data or more parameters, but instead as I let it think for longer, draw more samples. I think there are definitely lessons to be learned here about how to also build predictive models of how much thinking you need to do at run time.” He says the theory of inference time scaling laws might become even more critical because, “it’s not like I'm going to train one model and then be done. [Rather,] it’s every time a user comes to me, they’re going to have a new query, and I need to figure out how hard [my model needs] to think to come up with the best answer. So, being able to build those kinds of predictive models, like we’re doing in this paper, is even more important.”
This research was supported, in part, by the MIT-IBM Watson AI Lab and a Sloan Research Fellowship.
Microsoft Still Uses RC4
Senator Ron Wyden has asked the Federal Trade Commission to investigate Microsoft over its continued use of the RC4 encryption algorithm. The letter discusses a hacking technique called Kerberoasting, which exploits the Kerberos authentication system.
EPA proposal puts US gas exporters in a bind
Trump’s energy EOs go on trial
Interior: Revolution Wind failed to address national security concerns
Fears rise as unregulated property insurers expand
Newsom replaces California’s top air quality official with his climate adviser
Climate change is burning a €43B hole in Europe’s pocket
Australia pledges $6B by 2030 to tackle climate hazards
Catastrophe bonds worth $17.5B land in EU crosshairs
Kenyan banks need skills to bridge $5B green funding gap
MIT geologists discover where energy goes during an earthquake
The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.
Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.
They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.
The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.
“The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”
The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.
“We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening, in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”
Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.
Under the surface
Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.
We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.
“Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are on the century-to-millennia timescales, making any sort of actionable forecast challenging.”
To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.
“We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.
Microshakes
For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)
The researchers placed samples of the powdered granite — each about 10 square millimeters in area and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any subsequent change in the particles’ orientation and field strength should be a sign of how much heat that region experienced as a result of any seismic event.
Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.
They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.
From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces.
“In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”
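For readers who want to check the quoted figure, the arithmetic is simple; the sketch below assumes the roughly 100-micron slip occurs over about 10 microseconds, a duration inferred from the “matter of microseconds” heating described in the same quote rather than stated explicitly in the article.

```python
# Back-of-the-envelope check of the quoted slip velocity. The ~10-microsecond
# duration is an assumption consistent with the "matter of microseconds" heating,
# not a figure reported in the article.
slip = 100e-6        # ~100 microns of fault slip, in meters
duration = 10e-6     # assumed event duration, in seconds
print(f"implied slip velocity: {slip / duration:.0f} m/s")  # -> 10 m/s
```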
The researchers suspect that similar processes play out in actual, kilometer-scale quakes.
“Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”
This research was supported, in part, by the National Science Foundation.
How to get your business into the flow
In the late 1990s, a Harley-Davidson executive named Donald Kieffer became general manager of a company engine plant near Milwaukee. The iconic motorcycle maker had forged a celebrated comeback, and Kieffer, who learned manufacturing on the shop floor, had been part of it. Now Kieffer wanted to make his facility better. So he arranged for a noted Toyota executive, Hajime Oba, to pay a visit.
The meeting didn’t go as Kieffer expected. Oba walked around the plant for 45 minutes, diagrammed the setup on a whiteboard, and suggested one modest change. As a high-ranking manager, Kieffer figured he had to make far-reaching upgrades. Instead, Oba asked him, “What is the problem you are trying to solve?”
Oba’s point was subtle. Harley-Davidson had a good plant that could get better, but not by imposing grand, top-down plans. The key was to fix workflow issues the employees could identify. Even a small fix can have large effects, and, anyway, a modestly useful change is better than a big, formulaic makeover that derails things. So Kieffer took Oba’s prompt and started making specific, useful changes.
“Organizations are dynamic places, and when we try to impose a strict, static structure on them, we drive all that dynamism underground,” says MIT professor of management Nelson Repenning. “And the waste and chaos it creates is 100 times more expensive than people anticipate.”
Now Kieffer and Repenning have written a book about flexible, sensible organizational improvement, “There’s Got to Be a Better Way,” published by PublicAffairs. They call their approach “dynamic work design,” which aims to help firms refine their workflow — and to stop people from making it worse through overconfident, cookie-cutter prescriptions.
“So much of management theory presumes we can predict the future accurately, including our impact on it,” Repenning says. “And everybody knows that’s not true. Yet we go along with the fiction. The premise underlying dynamic work design is, if we accept that we can’t predict the future perfectly, we might design the world differently.”
Kieffer adds: “Our principles address how work is designed. Not how leaders have to act, but how you design human work, and drive changes.”
One collaboration, five principles
This book is the product of a long collaboration: In 1996, Kieffer first met Repenning, who was then a new MIT faculty member, and they soon recognized they thought similarly about managing work. By 2008, Kieffer also became a lecturer at the MIT Sloan School of Management, where Repenning is now a distinguished professor of system dynamics and organization studies.
The duo began teaching executive education classes together at MIT Sloan, often working with firms tackling tough problems. In the 2010s, they worked extensively with BP executives after the Deepwater Horizon accident, finding ways to combine safety priorities with other operations.
Repenning is an expert on system dynamics, an MIT-developed field emphasizing how parts of a system interact. In a firm, making isolated changes may throw the system as a whole further off kilter. Instead, managers need to grasp the larger dynamics — and recognize that a firm’s problems are not usually its people, since most employees perform similarly when burdened by a faulty system.
Whereas many touted management systems prescribe set practices in advance — like culling the bottom 10 percent of your employees annually — Repenning and Kieffer believe a firm should study itself empirically and develop improvements from there.
“Managers lose touch with how work actually gets done,” Kieffer says. “We bring managers in touch with real-time work, to see the problems people have, to help them solve it and learn new ways to work.”
Over time, Repenning and Kieffer have codified their ideas about work design into five principles:
- Solve the right problem: Use empiricism to develop a blame-free statement of issues to address;
- Structure for discovery: Allow workers to see how their work fits into the bigger picture, and to help improve things;
- Connect the human chain: Make sure the right information moves from one person to the next;
- Regulate for flow: New tasks should only enter a system when there is capacity for them to be handled; and
- Visualize the work: Create a visual method — think of a whiteboard with sticky notes — for mapping work operations.
No mugs, no t-shirts — just open your eyes
Applying dynamic work design to any given firm may sound simple, but Repenning and Kieffer note that many forces make it hard to implement. For instance, firm leaders may be tempted to opt for technology-based solutions when there are simpler, cheaper fixes available.
Indeed, “resorting to technology before fixing the underlying design risks wasting money and embedding the original problem even deeper in the organization,” they write in the book.
Moreover, dynamic work design is not itself a solution, but a way of trying to find a specific solution.
“One thing that keeps Don and I up at night is a CEO reading our book and thinking, ‘We’re going to be a dynamic work design company,’ and printing t-shirts and coffee mugs and holding two-day conferences where everyone signs the dynamic work design poster, and evaluating everyone every week on how dynamic they are,” Repenning says. “Then you’re being awfully static.”
After all, firms change, and their needs change. Repenning and Kieffer want managers to keep studying their firm’s workflow, so they can keep current with those needs. In fairness, a certain number of managers already do this.
“Most people have experienced fleeting moments of good work design,” Repenning says. Building on that, he says, managers and employees can keep driving a process of improvement that is realistic and logical.
“Start small,” he adds. “Pick one problem you can work on in a couple of weeks, and solve that. Most cases, with open eyes, there’s low-hanging fruit. You find the places you can win, and change incrementally, rather than all at once. For senior executives, this is hard. They are used to doing big things. I tell our executive ed students, it’s going to feel uncomfortable at the beginning, but this is a much more sustainable path to progress.”
Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis
This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts.
For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including Plan C, Women on Web, Reproaction, and Women First Digital—we launched the #StopCensoringAbortion campaign to collect and amplify these stories.
Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms.
We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that almost none of the submissions we received violated any of the platforms’ stated policies. Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example:
Screenshot submitted by Lauren Kahre to EFF
In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store the pills safely.
Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: Meta has publicly insisted that posts like these should not be censored. In a February 2024 letter to Amnesty International, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.”
Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.”
Screenshot submitted by Lauren Kahre to EFF
In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”
Yet in Lauren’s case and others, the posts very clearly did no such thing. And as Meta itself has explained: “Providing guidance on how to legally access pharmaceuticals is permitted as it is not considered an offer to buy, sell or trade these drugs.”
In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.”
Over and over again, the policies say one thing, but the actual enforcement says another.
We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and the gap between those policies and how they’re being applied. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push them to do better.
In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with Barbara Ortutay at The Associated Press, whose report on some of these takedowns was published today.
We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.
With reproductive rights under attack both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information.
This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
Climate Action Learning Lab helps state and local leaders identify and implement effective climate mitigation strategies
This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways to benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.
“State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”
From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.
“We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.
This phase of the Learning Lab was possible thanks to grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered a presentation on their jurisdiction’s priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.
“The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”
The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space.
“The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”
The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project. The project’s goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence to identify and advance equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications from both research partnerships formed through the Learning Lab as well as other eligible applicants.
Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.