Feed aggregator
#StopCensoringAbortion: What We Learned and Where We Go From Here
This is the tenth and final installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
When we launched Stop Censoring Abortion, our goals were to understand how social media platforms were silencing abortion-related content, gather data and lift up stories of censorship, and hold social media companies accountable for the harm they have caused to the reproductive rights movement.
Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and individuals around the world, we confirmed what many already suspected: this speech is being removed, restricted, and silenced by platforms at an alarming rate. Together, our findings paint a clear picture of censorship in action: platforms’ moderation systems are not only broken, but are actively harming those seeking and sharing vital reproductive health information.
Here are the key lessons from this campaign: what we uncovered, how platforms can do better, and why pushing back against this censorship matters more now than ever.
Lessons Learned
Across our submissions, we saw systemic over-enforcement, vague and convoluted policies, arbitrary takedowns, sudden account bans, and ignored appeals. And in almost every case we reviewed, the posts and accounts in question did not violate any of the platform’s stated rules.
The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But most of the content submitted simply provided factual, educational information that clearly did not violate those rules. As we saw in the M+A Hotline’s case, this kind of misclassification deprives patients, advocates, and researchers of reliable information, and chills those trying to provide accurate and life-saving reproductive health resources.
In one submission, we even saw posts sharing educational abortion resources get flagged under the “Dangerous Organizations and Individuals” policy, a rule intended to prevent terrorism and criminal activity. We’ve seen this policy cause problems in the past, but in the reproductive health space, treating legal and accurate information as violent or unlawful only adds needless stigma and confusion.
Meta’s convoluted advertising policies add another layer of harm. There are specific, additional rules users must navigate to post paid content about abortion. While many of these rules still contain exceptions for purely educational content, Meta is vague about how and when those exceptions apply. And ads that seem like they should have been allowed were frequently flagged under rules about “prescription drugs” or “social issues.” This patchwork of unclear policies forces users to second-guess what content they can post or promote for fear of losing access to their networks.
In another troubling trend, many of our submitters reported experiencing shadowbanning and de-ranking, where posts weren’t removed but were instead quietly suppressed by the algorithm. This kind of suppression leaves advocates without any notice, explanation, or recourse—and severely limits their ability to reach people who need the information most.
Many users also faced sudden account bans without warning or clear justification. Though Meta’s policies dictate that an account should only be disabled or removed after “repeated” violations, organizations like Women Help Women received no warning before seeing their critical connections cut off overnight.
Finally, we learned that Meta’s enforcement outcomes were deeply inconsistent. Users often had their appeals denied and accounts suspended until someone with insider access to Meta could intervene. For example, the Red River Women’s Clinic, RISE at Emory, and Aid Access each had their accounts restored only after press attention or personal contacts stepped in. This reliance on backchannels underscores the inequity in Meta’s moderation processes: without connections, users are left unfairly silenced.
It’s Not Just Meta
Most of our submissions detailed suppression that took place on one of Meta’s platforms (Facebook, Instagram, WhatsApp, and Threads), so we decided to focus our analysis on Meta’s moderation policies and practices. But we should note that this problem is by no means confined to Meta.
On LinkedIn, for example, Stephanie Tillman told us about how she had her entire account permanently taken down, with nothing more than a vague notice that she had violated LinkedIn’s User Agreement. When Stephanie reached out to ask what violation she committed, LinkedIn responded that “due to our Privacy Policy we are unable to release our findings,” leaving her with no clarity or recourse. Stephanie suspects that the ban was related to her work with Repro TLC, an advocacy and clinical health care organization, and/or her posts relating to her personal business, Feminist Midwife LLC. But LinkedIn’s opaque enforcement meant she had no way to confirm these suspicions, and no path to restoring her account.
Screenshot submitted by Stephanie Tillman to EFF (with personal information redacted by EFF)
And over on TikTok, Brenna Miller, a creator who works in health care and frequently posts about abortion, posted a video of her “unboxing” an abortion pill care package from Carafem. Though Brenna’s video was factual and straightforward, TikTok removed it, saying that she had violated TikTok’s Community Guidelines.
Screenshot submitted by Brenna Miller to EFF
Brenna appealed the removal successfully at first, but a few weeks later the video was permanently deleted—this time, without any explanation or chance to appeal again.
Brenna’s far from the only one experiencing censorship on TikTok. Even Jessica Valenti, award-winning writer, activist, and author of the Abortion Every Day newsletter, recently had a video taken down from TikTok for violating its community guidelines, with no further explanation. The video she posted was about the Trump administration calling IUDs and the Pill ‘abortifacients.’ Jessica wrote:
Which rule did I break? Well, they didn’t say: but I wasn’t trying to sell anything, the video didn’t feature nudity, and I didn’t publish any violence. By process of elimination, that means the video was likely taken down as "misinformation." Which is…ironic.
These are not isolated incidents. In the Center for Intimacy Justice’s survey of reproductive rights advocates, health organizations, sex educators, and businesses, 63% reported having content removed on Meta platforms, 55% reported the same on TikTok, and 66% reported having ads rejected from Google platforms (including YouTube). Clearly, censorship of abortion-related content is a systemic problem across platforms.
How Platforms Can Do Better on Abortion-Related Speech
Based on our findings, we're calling on platforms to take these concrete steps to improve moderation of abortion-related speech:
- Publish clear policies. Users should not have to guess whether their speech is allowed or not.
- Enforce rules consistently. If a post does not violate a written standard, it should not be removed.
- Provide real transparency. Enforcement decisions must come with clear, detailed explanations and meaningful opportunities to appeal.
- Guarantee functional appeals. Users must be able to challenge wrongful takedowns without relying on insider contacts.
- Expand human review. Reproductive rights is a nuanced issue and can be too complex to be left entirely to error-prone automated moderation systems.
Don’t get it twisted: Users should not have to worry about their posts being deleted or their accounts getting banned when they share factual information that doesn’t violate platform policies. The onus is on platforms to get it together and uphold their commitments to users. But while platforms continue to fail, we’ve provided some practical tips to reduce the risk of takedowns, including:
- Consider limiting commonly flagged words and images. Posts with pill images or certain keyword combinations (like “abortion,” “pill,” and “mail”) were often flagged.
- Be as clear as possible. Vague phrases like “we can help you get what you need” might look like drug sales to an algorithm.
- Be careful with links. Direct links to pill providers were often flagged. Spell out the links instead.
- Expect stricter rules for ads. Boosted posts face harsher scrutiny than regular posts.
- Appeal wrongful enforcement decisions. Requesting an appeal might get you a human moderator or, even better, review from Meta’s independent Oversight Board.
- Document everything and back up your content. Screenshot all communications and enforcement decisions so you can share them with the press or advocacy groups, and export your data regularly in case your account vanishes overnight.
Abortion information saves lives, and social media is the primary—and sometimes only—way for advocates and providers to get accurate information out to the masses. But now we have evidence that this censorship is widespread, unjustified, and harming communities who need access to this information most.
Platforms must be held accountable for these harms, and advocates must continue to speak out. The more we push back—through campaigns, reporting, policy advocacy, and user action—the harder it will be for platforms to look away.
So keep speaking out, and keep demanding accountability. Platforms need to know we're paying attention—and we won't stop fighting until everyone can share information about abortion freely, safely, and without fear of being silenced.
This is the tenth and final post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion.
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Secretary of Energy Chris Wright ’85 visits MIT
U.S. Secretary of Energy Chris Wright ’85 visited MIT on Monday, meeting Institute leaders, discussing energy innovation at a campus forum, viewing poster presentations from researchers supported through the MIT-GE Vernova Energy and Climate Alliance, and watching energy research demos in the lab where he used to work as a student.
“I’ve always been in energy because I think it’s just far and away the world’s most important industry,” Wright said at the forum, which included a panel discussion with business leaders and a fireside chat with MIT Professor Ernest Moniz, who was the U.S. secretary of energy from 2013 to 2017. Wright added: “Not only is it by far the world’s most important industry, because it enables all the others, but it’s also a booming time right now. … It is an awesomely exciting time to be in energy.”
Wright was greeted on campus by MIT President Sally Kornbluth, who also gave introductory remarks at the forum, held in MIT’s Samberg Center. While the Institute has added many research facilities and buildings since Wright was a student, Kornbluth observed, the core MIT ethos remains the same.
“MIT is still MIT,” Kornbluth said. “It’s a community that rewards merit, boldness, and scientific rigor. And it’s a magnet for people with a drive to solve hard problems that matter in the real world, an enthusiasm for working with industry, and an ethic of national service.”
When it comes to energy research, Kornbluth added, “MIT is developing transformational approaches to make American energy more secure, reliable, affordable, and clean — which in turn will strengthen both U.S. competitiveness and national security.”
At the event, Wright, the 17th U.S. secretary of energy, engaged in a fireside chat with Moniz, the 13th U.S. secretary of energy, the Cecil and Ida Green Professor of Physics and Engineering Systems Post-Tenure, a special advisor to the MIT president, and the founding director of the MIT Energy Initiative (MITEI). Wright began his remarks by reflecting on Kornbluth’s description of the Institute.
“Merit, boldness, and scientific rigor,” Wright said. “That is MIT … to me. That hit me hard when I got here, and frankly, it’s a good part of the reason my life has gone the way it’s gone.”
On energy topics, Wright emphasized the need for continued innovation in energy across a range of technologies, including fusion, geothermal, and more, while advocating for the benefits of vigorous market-based progress. Before becoming secretary of energy, Wright most recently served as founder and CEO of Liberty Energy. He also was the founder of Pinnacle Technologies, among other enterprises. Wright was confirmed as secretary by the U.S. Senate in February.
Asked to name promising areas of technological development, Wright focused on three particular areas of interest. Citing artificial intelligence, he noted that the interest in it was “overwhelming,” with many possible applications. Regarding fusion energy, Wright said, “We are going to see meaningful breakthroughs.” And quantum computing, he added, was going to be a “game-changer” as well.
Wright also emphasized the value of federal support for fundamental research, including projects in the national laboratories the Department of Energy oversees.
“The 17 national labs we have in this country are absolute jewels. They are gems of this country,” Wright said. He later noted, “There are things, like this foundational research, that are just an essential part of our country and an essential part of our future.”
Moniz asked Wright a range of questions in the fireside chat, while adding his own perspective at times about the many issues connected to energy abundance globally.
“Climate, energy, security, equity, affordability, have to be recognized as one conversation, and not separate conversations,” Moniz said. “That’s what’s at stake in my view.”
Wright’s appearance was part of the Energy Freedom Tour developed by the American Conservation Coalition (ACC), in coordination with the Hamm Institute for American Energy at Oklahoma State University. Later stops are planned for Stanford University and Texas A&M University.
Ann Bluntzer Pullin, executive director of the Hamm Institute, gave remarks at the forum as well, noting the importance of making students aware of the energy industry and helping to “get them excited about the impact this career can make.” She also praised MIT’s advances in the field, adding, “This is where so many ideas were born and executed that have allowed America to really thrive in this energy abundance in our country that we have [had] for so long.”
The forum also featured remarks from Roger Martella, chief corporate officer, chief sustainability officer, and head of government affairs at GE Vernova. In March, MIT and GE Vernova announced a new five-year joint program, the MIT-GE Vernova Energy and Climate Alliance, featuring research projects, education programs, and career opportunities for MIT students.
“That’s what we’re about, electrification as the lifeblood of prosperity,” Martella said, describing GE Vernova’s work. “When we’re here at MIT we feel like we’re living history every moment when we’re walking down the halls, because no institution has [contributed] to innovation and technology more, doing it every single day to advance prosperity for all people around the world.”
A panel discussion at the forum featured Wright speaking along with three MIT alumni who are active in the energy business: Carlos Araque ’01, SM ’02, CEO of Quaise Energy, a leading-edge firm in geothermal energy solutions; Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems, a leading fusion energy firm and an MIT spinout; and Milo Werner SM ’07, MBA ’07, a general partner at DCVC and expert in energy and climate investments. The panel was moderated by Chris Barnard, president of the ACC.
Mumgaard noted that Commonwealth Fusion Systems launched in 2018 with “an explicit mission, working with MIT still today, of putting fusion onto an industrial trajectory,” although there is “plenty left to do, still, at that intersection of science, technology, innovation, and business.”
Araque said he believes geothermal is “metric-by-metric” more powerful and profitable than many other forms of energy. “This is not a stop-gap,” he added. Quaise is currently developing its first power-plant-scale facility in the U.S.
Werner noted that the process of useful innovation only begins in the lab; making an advance commercially viable is the critical next step. The biggest impact “is not in the breakthrough,” she said. “It’s not in the discovery that you make in the lab. It’s actually once you’ve built a billion of them. That’s when you actually change the world.”
After the forum, Wright took a tour of multiple research centers on the MIT campus, including the MIT.nano facility, guided by Vladimir Bulović, faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology.
At MIT.nano, Bulović showed Wright the Titan Krios G3i, a nearly room-size electron microscope that enables researchers to take a high-resolution look at the structure of tiny particles, with a variety of research applications. The tour also viewed one of MIT.nano’s cleanrooms, a shared fabrication facility used by both MIT researchers and users outside of MIT, including many in industry.
On a different note, in an MIT.nano hallway, Bulović showed Wright the One.MIT mosaics, which contain the names of all MIT students and employees past and present — well over 300,000 in all. First etched on a 6-inch wafer, the mosaics are a visual demonstration of the power of nanotechnology — and a searchable display, so Bulović located Wright’s name, which is printed near the chin of one of the figures on the MIT seal.
The tour ended in the basement of Building 10, in what is now the refurbished Grainger Energy Machine Facility, where Wright used to conduct research. After earning his undergraduate degree in mechanical engineering, Wright began graduate studies at MIT before leaving, as he recounted at the forum, to pursue business opportunities.
At the lab, Wright met with David Perreault, the Ford Foundation Professor of Engineering; and Steven Leeb, the Emanuel Landsman Professor, a specialist in power systems. A half-dozen MIT graduate students gave Wright demos of their research projects, all involving energy-generation innovations. Wright readily engaged with all the graduate students about the technologies and the parameters of the devices, and asked the students about their own careers.
Wright was accompanied on the lab tour by MIT Provost Anantha Chandrakasan, himself an expert in developing energy-efficient systems. Chandrakasan delivered closing remarks at the forum in the Samberg Center, noting MIT’s “strong partnership with the Department of Energy” and its “long and proud history of engaging industry.”
As such, Chandrakasan said, MIT has a “role as a resource in service of the nation, so please don’t hesitate to call on us.”
MIT-affiliated physicists win McMillan Award for discovery of exotic electronic state
Last year, MIT physicists reported in the journal Nature that electrons can become fractions of themselves in graphene, an atomically thin form of carbon. This exotic electronic state, called the fractional quantum anomalous Hall effect (FQAHE), could enable more robust forms of quantum computing.
Now two young MIT-affiliated physicists involved in the discovery of FQAHE have been named the 2025 recipients of the McMillan Award from the University of Illinois for their work. Jiaqi Cai and Zhengguang Lu won the award “for the discovery of fractional anomalous quantum hall physics in 2D moiré materials.”
Cai is currently a Pappalardo Fellow at MIT working with Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, and collaborating with several other labs at MIT, including that of Long Ju, the Lawrence and Sarah W. Biedenharn Career Development Associate Professor in the MIT Department of Physics. He discovered FQAHE while working in the laboratory of Professor Xiaodong Xu at the University of Washington.
Lu discovered FQAHE while working as a postdoc in Ju's lab and has since become an assistant professor at Florida State University.
The two independent discoveries were made in the same year.
“The McMillan award is the highest honor that a young condensed matter physicist can receive,” says Ju. “My colleagues and I in the Condensed Matter Experiment and the Condensed Matter Theory Group are very proud of Zhengguang and Jiaqi.”
Ju and Jarillo-Herrero are both also affiliated with the Materials Research Laboratory.
In addition to a monetary prize and a plaque, Lu and Cai will give a colloquium on their work at the University of Illinois this fall.
Martin Trust Center for MIT Entrepreneurship welcomes Ana Bakshi as new executive director
The Martin Trust Center for MIT Entrepreneurship announced that Ana Bakshi has been named its new executive director. Bakshi stepped into the role at the start of the fall semester and will collaborate closely with the managing director, Ethernet Inventors Professor of the Practice Bill Aulet, to further elevate the center.
“Ana is uniquely qualified for this role. She brings a deep and highly decorated background in entrepreneurship education at the highest levels, along with exceptional leadership and execution skills,” says Aulet. “Since I first met her 12 years ago, I have been extraordinarily impressed with her commitment to create the highest-quality centers and institutes for entrepreneurs, first at King’s College London and then at Oxford University. This ideal skill set is compounded by her experience in leading high-growth companies, most recently as the chief operating officer in an award-winning AI startup. I’m honored and thrilled to welcome her to MIT — her knowledge and energy will greatly elevate our community, and the field as a whole.”
A rapidly changing environment creates imperative for raising the bar for entrepreneurship education
The need to raise the bar for innovation-driven entrepreneurship education is both timely and urgent. The rate of change is getting faster and faster every day, especially with artificial intelligence, and is generating new problems that need to be solved, as well as exacerbating existing problems in climate, health care, manufacturing, future of work, education, and economic stratification, to name but a few. The world needs more entrepreneurs and better entrepreneurs.
Bakshi joins the Trust Center at an exciting time in its history. MIT is at the forefront of helping to develop people and systems that can turn challenges into opportunities using an entrepreneurial mindset, skill set, and way of operating. Bakshi’s deep experience and success will be key to unlocking this opportunity. “I am truly honored to join the Trust Center at such a pivotal moment,” Bakshi says. “In an era defined by both extraordinary challenges and extraordinary possibilities, the future will be built by those bold enough to try, and MIT will be at the forefront of this.”
Translating academic research into real-world impact
Bakshi has a decade of experience building two world-class entrepreneurship centers from the ground up. She served as the founding director first at King’s College London and then at Oxford. In these roles, she was responsible for all aspects of the centers, including fundraising.
While at Oxford, she developed a data-driven approach to evaluating the outcomes of the centers’ programs, documented in a 61-page study, “Universities: Drivers of Prosperity and Economic Recovery.”
As the director of the Oxford Foundry (Oxford’s cross-university entrepreneurship center), Bakshi focused on investing in ambitious founders and talent. The center was backed by global entrepreneurial leaders such as the founders of LinkedIn and Twitter, with corporate partnerships including Santander and EY, and investment funds including Oxford Science Enterprises (OSE). As of 2021, the startups supported by the Foundry and King’s College have raised over $500 million and have created nearly 3,000 jobs, spanning diverse industries including health tech, climate tech, cybersecurity, fintech, and deep tech spinouts focusing on world-class science.
In addition, she built the highly successful and economically sustainable Entrepreneurship School, Oxford’s first digital online learning platform.
Bakshi comes to MIT after spending almost two years in the private sector as chief operating officer (COO) of Quench.ai, a rapidly growing artificial intelligence startup with offices in London and New York City. She was the first C-suite employee at Quench.ai, serving as COO and now as senior advisor, helping companies unlock value from their knowledge through AI.
Right place, right time, right person moving at the speed of MIT AI
Entrepreneurship has been at the core of MIT’s identity and mission since the Institute’s inception; it was turbocharged in the 1940s with the creation and operation of the RadLab, and it continues to this day.
"MIT has been a leader in entrepreneurship for decades. It’s now the third leg of the school, alongside teaching and research,” says Mark Gorenberg ’76, chair of the MIT Corporation. “I’m excited to have such a transformative leader as Ana join the Trust Center team, and I look forward to the impact she will have on the students and the wider academic community at MIT as we enter an exciting new phase in company building, driven by the accelerated use of AI and emerging technologies."
“In a time where we are rethinking management education, entrepreneurship as an interdisciplinary field to create impact is even more important to our future. To have such an experienced and accomplished leader in academia and the startup world, especially in AI, reinforces our commitment to be a global leader in this field,” says Richard M. Locke, John C Head III Dean at the MIT Sloan School of Management.
“MIT is a unique hub of research, innovation, and entrepreneurship, and that special mix creates massive positive impact that ripples around the world,” says Frederic Kerrest, MIT Sloan MBA ’09, co-founder of Okta, and member of the MIT Corporation. “In a rapidly changing, AI-driven world, Ana has the skills and experience to further accelerate MIT’s global leadership in entrepreneurship education to ensure that our students launch and scale the next generation of groundbreaking, innovation-driven startups.”
Prior to her time at Oxford and King’s College, Bakshi served as an elected councilor representing 6,000-plus constituents, held roles in international nongovernmental organizations, and led product execution strategy at MAHI, an award-winning family-led craft sauce startup, available in thousands of major retailers across the U.K. Bakshi sits on the advisory council for conservation charity Save the Elephants, leveraging AI-driven and scientific approaches to reduce human-wildlife conflict and protect elephant populations. Her work and impact have been featured across FT, Forbes, BBC, The Times, and The Hill. Bakshi was twice honored as a Top 50 Woman in Tech (U.K.), most recently in 2025.
“As AI changes how we learn, how we build, and how we scale, my focus will be on helping MIT expand its support for phenomenal talent — students and faculty — with the skills, ecosystem, and backing to turn knowledge into impact,” Bakshi says.
35 years of impact to date
The Trust Center was founded in 1990 by the late Professor Edward Roberts and serves all MIT students across all schools and all disciplines. It supports 60-plus courses and extensive extracurricular programming, including the delta v academic accelerator. Much of the work of the center is generated through the Disciplined Entrepreneurship methodology, which offers a proven approach to create new ventures. Over a thousand schools and other organizations across the world use Disciplined Entrepreneurship books and resources to teach entrepreneurship.
Now, with AI-powered tools like Orbit and JetPack, the Trust Center is changing the way that entrepreneurship is taught and practiced. Its mission is to produce the next generation of innovation-driven entrepreneurs while advancing the field more broadly to make it both rigorous and practical. This approach of leveraging proven, evidence-based methodology, emerging technology, and the ingenuity of MIT students while responding to industry shifts is similar to how MIT established the field of chemical engineering in the 1890s. The desired result in both cases was to create a comprehensive, integrated, scalable, rigorous, and practical curriculum to create a new workforce to address the nation’s and world’s greatest challenges.
Lincoln Lab unveils the most powerful AI supercomputer at any US university
The new TX-Generative AI Next (TX-GAIN) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) is the most powerful AI supercomputer at any U.S. university. With its recent ranking from TOP500, which biannually publishes a list of the top supercomputers in various categories, TX-GAIN joins the ranks of other powerful systems at the LLSC, all supporting research and development at Lincoln Laboratory and across the MIT campus.
"TX-GAIN will enable our researchers to achieve scientific and engineering breakthroughs. The system will play a large role in supporting generative AI, physical simulation, and data analysis across all research areas," says Lincoln Laboratory Fellow Jeremy Kepner, who heads the LLSC.
The LLSC is a key resource for accelerating innovation at Lincoln Laboratory. Thousands of researchers tap into the LLSC to analyze data, train models, and run simulations for federally funded research projects. The supercomputers have been used, for example, to simulate billions of aircraft encounters to develop collision-avoidance systems for the Federal Aviation Administration, and to train models in the complex tasks of autonomous navigation for the Department of Defense. Over the years, LLSC capabilities have been essential to numerous award-winning technologies, including those that have improved airline safety, prevented the spread of new diseases, and aided in hurricane responses.
As its name suggests, TX-GAIN is especially equipped for developing and applying generative AI. Whereas traditional AI focuses on categorization tasks, like identifying whether a photo depicts a dog or cat, generative AI produces entirely new outputs. Kepner describes it as a mathematical combination of interpolation (filling in the gaps between known data points) and extrapolation (extending data beyond known points). Today, generative AI is widely known for its use of large language models to create human-like responses to user prompts.
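As a rough numerical illustration of that interpolation/extrapolation analogy (a toy curve-fitting example, not how generative models or TX-GAIN work), consider fitting a function to a few known points and then evaluating it both inside and outside their range:

```python
# Toy illustration of interpolation vs. extrapolation (not a generative model):
# fit a simple curve to a few known data points, then evaluate it inside and
# outside the range of those points.
import numpy as np

known_x = np.array([0.0, 1.0, 2.0, 3.0])      # known data points
known_y = np.sin(known_x)                      # observed values at those points

coeffs = np.polyfit(known_x, known_y, deg=3)   # fit a cubic polynomial

interpolated = np.polyval(coeffs, 1.5)         # inside the known range: filling a gap
extrapolated = np.polyval(coeffs, 4.5)         # beyond the known range: extending the data

print(f"f(1.5) ~ {interpolated:.3f} (interpolation, relatively reliable)")
print(f"f(4.5) ~ {extrapolated:.3f} (extrapolation, much less reliable)")
```

The extrapolated value drifts far from the true function, which is the sense in which generating beyond known data is the harder part of the combination Kepner describes.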
At Lincoln Laboratory, teams are applying generative AI to various domains beyond large language models. They are using the technology, for instance, to evaluate radar signatures, supplement weather data where coverage is missing, root out anomalies in network traffic, and explore chemical interactions to design new medicines and materials.
To enable such intense computations, TX-GAIN is powered by more than 600 NVIDIA graphics processing unit accelerators specially designed for AI operations, in addition to traditional high-performance computing hardware. With a peak performance of two AI exaflops (two quintillion floating-point operations per second), TX-GAIN is the top AI system at a university, and in the Northeast. Since TX-GAIN came online this summer, researchers have taken notice.
"TX-GAIN is allowing us to model not only significantly more protein interactions than ever before, but also much larger proteins with more atoms. This new computational capability is a game-changer for protein characterization efforts in biological defense," says Rafael Jaimes, a researcher in Lincoln Laboratory's Counter–Weapons of Mass Destruction Systems Group.
The LLSC's focus on interactive supercomputing makes it especially useful to researchers. For years, the LLSC has pioneered software that lets users access its powerful systems without needing to be experts in configuring algorithms for parallel processing.
"The LLSC has always tried to make supercomputing feel like working on your laptop," Kepner says. "The amount of data and the sophistication of analysis methods needed to be competitive today are well beyond what can be done on a laptop. But with our user-friendly approach, people can run their model and get answers quickly from their workspace."
Beyond supporting programs solely at Lincoln Laboratory, TX-GAIN is enhancing research collaborations with MIT's campus. Such collaborations include the Haystack Observatory, Center for Quantum Engineering, Beaver Works, and the Department of the Air Force–MIT AI Accelerator. The latter initiative is rapidly prototyping, scaling, and applying AI technologies for the U.S. Air Force and Space Force, optimizing flight scheduling for global operations as one fielded example.
The LLSC systems are housed in an energy-efficient data center and facility in Holyoke, Massachusetts. Research staff in the LLSC are also tackling the immense energy needs of AI and leading research into various power-reduction methods. One software tool they developed can reduce the energy of training an AI model by as much as 80 percent.
"The LLSC provides the capabilities needed to do leading-edge research, while in a cost-effective and energy-efficient manner," Kepner says.
All of the supercomputers at the LLSC use the "TX" nomenclature in homage to Lincoln Laboratory's Transistorized Experimental Computer Zero (TX-0) of 1956. TX-0 was one of the world's first transistor-based machines, and its 1958 successor, TX-2, is storied for its role in pioneering human-computer interaction and AI. With TX-GAIN, the LLSC continues this legacy.
A simple formula could guide the design of faster-charging, longer-lasting batteries
At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.
This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.
In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.
Insights gleaned from this model could guide the design of more powerful, faster-charging lithium-ion batteries, the researchers say.
“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.
The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.
“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building up previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.
Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.
Modeling lithium flow
For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
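For reference, the Butler-Volmer relation in its standard textbook form (a general, widely used expression, not reproduced from the new paper) gives the current density at an electrode as a function of the overpotential:

```latex
% Standard textbook Butler-Volmer relation (general form; not taken from the Science paper).
% i: current density, i_0: exchange current density, \eta: overpotential,
% \alpha_a, \alpha_c: anodic and cathodic charge-transfer coefficients,
% F: Faraday constant, R: gas constant, T: absolute temperature.
i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

In this picture, the reaction rate is set by the overpotential and a single exchange current density, with no explicit role for the electron-transfer step that the new model emphasizes.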
However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.
In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.
For these materials, the measured rates are much lower than previously reported, and they do not correspond to what would be predicted by the traditional Butler-Volmer model.
The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode, an electron from the electrolyte solution must be transferred to the electrode at the same time.
“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”
This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.
Faster charging
In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.
“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.
Shao-Horn’s lab and their collaborators have been using automated experiments to make and test thousands of different electrolytes, generating data used to develop machine-learning models that predict electrolytes with enhanced functions.
The findings could also help researchers to design batteries that would charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that can cause battery degradation when electrons are picked off the electrode and dissolve into the electrolyte.
“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”
The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.
Tips to Protect Your Posts About Reproductive Health From Being Removed
This is the ninth installment in a blog series documenting EFF’s findings from the Stop Censoring Abortion campaign. You can read additional posts here.
Meta has been getting content moderation wrong for years, like most platforms that host user-generated content. Sometimes it’s a result of deliberate design choices—privacy rollbacks, opaque policies, features that prioritize growth over safety—made even when the company knows that those choices could negatively impact users. Other times, it’s simply the inevitable outcome of trying to govern billions of posts with a mix of algorithms and overstretched human reviewers. Importantly, users shouldn’t have to worry about their posts being deleted or their accounts getting banned when they share factual health information that doesn’t violate the platforms' policies. But knowing more about what the algorithmic moderation is likely to flag can help you to avoid its mistakes.
We analyzed the roughly one hundred survey submissions we received from social media users in response to our Stop Censoring Abortion campaign. Their stories revealed some clear patterns: certain words, images, and phrases seemed to trigger takedowns, even when posts didn’t come close to violating Meta’s rules.
For example, your post linking to information on how people are accessing abortion pills online clearly is not an offer to buy or sell pills, but an algorithm, or a human content reviewer who doesn’t know for sure, might wrongly flag it for violating Meta’s policies on promoting or selling “restricted goods.”
That doesn’t mean you’re powerless. For years, people have used “algospeak”—creative spelling, euphemisms, or indirection—to sidestep platform filters. Abortion rights advocates are now forced into similar strategies, even when their speech is perfectly legal. It’s not fair, but it might help you keep your content online. Here are some things we learned from our survey:
Practical Tips to Reduce the Risk of Takedowns
While traditional social media platforms can help people reach larger audiences, using them also generally means you have to hand over control of what you and others are able to see to the people who run the company. This is the deal that large platforms offer—and while most of us want platforms to moderate some content (even if that moderation is imperfect), current systems of moderation often reflect existing societal power imbalances and impact marginalized voices the most.
There are ways companies and governments could better balance the power between users and platforms. In the meantime, there are steps you can take right now to break the hold these platforms have:
- Images and keywords matter. Posts with pill images, or accounts with “pill” in their names, were flagged often—even when the posts weren’t offering to sell medication. Before posting, consider whether you need to include an image of a pill or the word “pill,” or whether there’s another way to communicate your message.
- Clarity beats vagueness. Saying “we can help you find what you need” or “contact me for more info” might sound innocuous, but to an algorithm, it can look like an offer to sell drugs. Spell out what kind of support you do and don’t provide—for example: “We can talk through options and point you toward trusted resources. We don’t provide medical services or medication.”
- Be careful with links. Direct links to organizations or services that provide abortion pills were often flagged, even if the organizations operate legally. Instead of linking, try spelling out the name of the site or account.
- Certain word combos are red flags. Posts that included words like “mifepristone,” “abortion,” and “mail” together were frequently removed. You may still want to use them—they’re accurate and important—but know they make your post more likely to be flagged.
- Ads are even stricter. Meta requires pharmaceutical advertisers to prove they’re licensed in the countries they target. If you boost posts, assume the more stringent advertising standards will be applied.
Big platforms give you reach, but they also set the rules—and those rules usually favor corporate interests over human rights. You don’t have to accept that as the only way forward:
- Keep a backup. Export your data regularly so you’re not left empty-handed if your account disappears overnight.
- Build your own space. Hosting a website isn’t free, but it puts you in control.
- Explore other platforms. Newsletters, Discord, and other community tools offer more control than Facebook or Instagram. Decentralized platforms like Mastodon and Bluesky aren’t perfect, but they show what’s possible when moderation isn’t dictated from the top down. (Learn more about the differences between Mastodon, Bluesky, and Threads, and how these kinds of platforms help us build a better internet.)
- Push for interoperability. Imagine being able to take your audience with you when you leave a platform. That’s the future we should be fighting for. (For more on interoperability and Meta, check out this video where Cory Doctorow explains what an interoperable Facebook would look like.)
If you’re working in abortion access—whether as a provider, activist, or volunteer—your privacy and security matter. The same is true for patients. Check out EFF’s Surveillance Self-Defense for tailored guides. Look at resources from groups like Digital Defense Fund and learn how location tracking tools can endanger abortion access. If you run an organization, consider some of the ways you can minimize what information you collect about patients, clients, or customers, in our guide to Online Privacy for Nonprofits.
Platforms like Meta insist they want to balance free expression and safety, but their blunt systems consistently end up reinforcing existing inequalities—silencing the very people who most need to be heard. Until they do better, it’s on us to protect ourselves, share our stories, and keep building the kind of internet that respects our rights.
This is the ninth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Daniel Miessler on the AI Attack/Defense Balance
His conclusion:
Context wins
Basically whoever can see the most about the target, and can hold that picture in their mind the best, will be best at finding the vulnerabilities the fastest and taking advantage of them. Or, as the defender, applying patches or mitigations the fastest.
And if you’re on the inside you know what the applications do. You know what’s important and what isn’t. And you can use all that internal knowledge to fix things—hopefully before the baddies take advantage.
Summary and prediction
- Attackers will have the advantage for 3-5 years. For less-advanced defender teams, this will take much longer. ...
Flock’s Gunshot Detection Microphones Will Start Listening for Human Voices
Flock Safety, the police technology company most notable for their extensive network of automated license plate readers spread throughout the United States, is rolling out a new and troubling product that may create headaches for the cities that adopt it: detection of “human distress” via audio. As part of their suite of technologies, Flock has been pushing Raven, their version of acoustic gunshot detection. These devices capture sounds in public places and use machine learning to try to identify gunshots and then alert police—but EFF has long warned that they are also high powered microphones parked above densely-populated city streets. Cities now have one more reason to follow the lead of many other municipalities and cancel their Flock contracts, before this new feature causes civil liberties harms to residents and headaches for cities.
In marketing materials, Flock has been touting new features to their Raven product—including the ability of the device to alert police based on sounds, including “distress.” The online ad for the product, which allows cities to apply for early access to the technology, shows the image of police getting an alert for “screaming.”
It’s unclear how this technology works. For acoustic gunshot detection, generally the microphones are looking for sounds that would signify gunshots (though in practice they often mistake car backfires or fireworks for gunshots). Flock needs to come forward now with an explanation of exactly how their new technology functions. It is unclear how these devices will interact with state “eavesdropping” laws that limit listening to or recording the private conversations that often take place in public.
Flock is no stranger to causing legal challenges for the cities and states that adopt their products. In Illinois, Flock was accused of violating state law by allowing Immigration and Customs Enforcement (ICE), a federal agency, access to license plate reader data taken within the state. That’s not all. In 2023, a North Carolina judge halted the installation of Flock cameras statewide for operating in the state without a license. When the city of Evanston, Illinois recently canceled its contract with Flock, it ordered the company to take down their license plate readers–only for Flock to mysteriously reinstall them a few days later. The city has now sent Flock a cease-and-desist order and, in the meantime, has put black tape over the cameras. For some, the technology isn’t worth its mounting downsides. As one Illinois village trustee wrote while explaining his vote to cancel the city’s contract with Flock, “According to our own Civilian Police Oversight Commission, over 99% of Flock alerts do not result in any police action.”
Gunshot detection technology is dangerous enough as it is—police showing up to alerts they think are gunfire only to find children playing with fireworks is a recipe for innocent people to get hurt. This isn’t hypothetical: in Chicago a child really was shot at by police who thought they were responding to a shooting thanks to a ShotSpotter alert. Introducing a new feature that allows these pre-installed Raven microphones all over cities to begin listening for human voices in distress is likely to open up a whole new can of unforeseen legal, civil liberties, and even bodily safety consequences.
How the shutdown is roiling climate programs at 6 agencies
Delaware eyes limits on data centers as megaproject looms
Michael Mann resigns from administrative post at UPenn
Trump admin advances bid to block Michigan climate lawsuit
Judge restores DOT grant yanked from California university
Labour’s net-zero policies turn off working-class voters, warns UK union boss
Hundreds of feet of Calif. bluff fall toward ocean in landslide-hit town
Swiss glaciers shrank 3% this year, scientists say
Analysis shows European banks capitalizing on green transition
Accounting for uncertainty to help engineers design complex systems
Designing a complex electronic device like a delivery drone involves juggling many choices, such as selecting motors and batteries that minimize cost while maximizing the payload the drone can carry or the distance it can travel.
Unraveling that conundrum is no easy task, but what happens if the designers don’t know the exact specifications of each battery and motor? On top of that, the real-world performance of these components will likely be affected by unpredictable factors, like changing weather along the drone’s route.
MIT researchers developed a new framework that helps engineers design complex systems in a way that explicitly accounts for such uncertainty. The framework allows them to model the performance tradeoffs of a device with many interconnected parts, each of which could behave in unpredictable ways.
Their technique captures the likelihood of many outcomes and tradeoffs, giving designers more information than many existing approaches, which can usually model only best-case and worst-case scenarios.
Ultimately, this framework could help engineers develop complex systems like autonomous vehicles, commercial aircraft, or even regional transportation networks that are more robust and reliable in the face of real-world unpredictability.
“In practice, the components in a device never behave exactly like you think they will. If someone has a sensor whose performance is uncertain, and an algorithm that is uncertain, and the design of a robot that is also uncertain, now they have a way to mix all these uncertainties together so they can come up with a better design,” says Gioele Zardini, the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering at MIT, a principal investigator in the Laboratory for Information and Decision Systems (LIDS), an affiliate faculty with the Institute for Data, Systems, and Society (IDSS), and senior author of a paper on this framework.
Zardini is joined on the paper by lead author Yujun Huang, an MIT graduate student; and Marius Furter, a graduate student at the University of Zurich. The research will be presented at the IEEE Conference on Decision and Control.
Considering uncertainty
The Zardini Group studies co-design, a method for designing systems made of many interconnected components, from robots to regional transportation networks.
The co-design language breaks a complex problem into a series of boxes, each representing one component, that can be combined in different ways to maximize outcomes or minimize costs. This allows engineers to solve complex problems in a feasible amount of time.
In prior work, the researchers modeled each co-design component without considering uncertainty. For instance, the performance of each sensor the designers could choose for a drone was fixed.
But engineers often don’t know the exact performance specifications of each sensor, and even if they do, it is unlikely the sensor will perfectly follow its spec sheet. At the same time, they don’t know how each sensor will behave once integrated into a complex device, or how performance will be affected by unpredictable factors like weather.
“With our method, even if you are unsure what the specifications of your sensor will be, you can still design the robot to maximize the outcome you care about,” says Furter.
To accomplish this, the researchers incorporated this notion of uncertainty into an existing framework based on category theory.
Using some mathematical tricks, they simplified the problem into a more general structure. This allows them to use the tools of category theory to solve co-design problems in a way that considers a range of uncertain outcomes.
By reformulating the problem, the researchers can capture how multiple design choices affect one another even when their individual performance is uncertain.
This approach is also simpler than many existing tools that typically require extensive domain expertise. With their plug-and-play system, one can rearrange the components in the system without violating any mathematical constraints.
And because no specific domain expertise is required, the framework could be used by a multidisciplinary team where each member designs one component of a larger system.
“Designing an entire UAV isn’t feasible for just one person, but designing a component of a UAV is. By providing the framework for how these components work together in a way that considers uncertainty, we’ve made it easier for people to evaluate the performance of the entire UAV system,” Huang says.
More detailed information
The researchers used this new approach to choose perception systems and batteries for a drone that would maximize its payload while minimizing its lifetime cost and weight.
While each perception system may offer a different detection accuracy under varying weather conditions, the designer doesn’t know exactly how its performance will fluctuate. This new system allows the designer to take these uncertainties into consideration when thinking about the drone’s overall performance.
And unlike other approaches, their framework reveals distinct advantages of each battery technology.
For instance, their results show that at lower payloads, nickel-metal hydride batteries provide the lowest expected lifetime cost. This insight would be impossible to fully capture without accounting for uncertainty, Zardini says.
While another method might only be able to show the best-case and worst-case performance scenarios of lithium polymer batteries, their framework gives the user more detailed information.
For example, it shows that if the drone’s payload is 1,750 grams, there is a 12.8 percent chance the battery design would be infeasible.
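The framework in the paper is built on category theory rather than on sampling, but a minimal Monte Carlo sketch illustrates the general idea of propagating component-level uncertainty into a system-level probability of infeasibility. All component specs and numbers below are hypothetical, chosen only for illustration:

```python
# Toy Monte Carlo sketch: propagate uncertain component specs into a
# system-level probability that a drone design is infeasible.
# All component names and numbers are hypothetical; the MIT framework itself
# is based on category-theoretic co-design, not on sampling.
import random

N_SAMPLES = 100_000
PAYLOAD_G = 1750        # payload mass to carry, in grams
AIRFRAME_G = 900        # hypothetical fixed airframe mass, in grams

def sample_battery():
    """Draw one uncertain battery spec: (mass in grams, usable capacity in Wh)."""
    mass_g = random.gauss(mu=600, sigma=40)
    capacity_wh = random.gauss(mu=200, sigma=20)
    return mass_g, capacity_wh

def sample_energy_needed_wh(total_mass_g):
    """Mission energy requirement in Wh, uncertain due to weather and routing."""
    wh_per_kg = random.gauss(mu=55, sigma=8)   # uncertain consumption rate
    return wh_per_kg * total_mass_g / 1000.0

infeasible = 0
for _ in range(N_SAMPLES):
    batt_mass_g, batt_capacity_wh = sample_battery()
    total_mass_g = AIRFRAME_G + PAYLOAD_G + batt_mass_g
    if batt_capacity_wh < sample_energy_needed_wh(total_mass_g):
        infeasible += 1

print(f"Estimated probability the design is infeasible: {infeasible / N_SAMPLES:.1%}")
```

The point is only that, once uncertainty is represented explicitly, a design question becomes a probability ("how likely is this configuration to fail?") rather than a single best-case or worst-case number.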
“Our system provides the tradeoffs, and then the user can reason about the design,” he adds.
In the future, the researchers want to improve the computational efficiency of their problem-solving algorithms. They also want to extend this approach to situations where a system is designed by multiple parties that are collaborative and competitive, like a transportation network in which rail companies operate using the same infrastructure.
“As the complexity of systems grow, and involves more disparate components, we need a formal framework in which to design these systems. This paper presents a way to compose large systems from modular components, understand design trade-offs, and importantly do so with a notion of uncertainty. This creates an opportunity to formalize the design of large-scale systems with learning-enabled components,” says Aaron Ames, the Bren Professor of Mechanical and Civil Engineering, Control and Dynamical Systems, and Aerospace at Caltech, who was not involved with this research.
Privacy Harm Is Harm
Every day, corporations track our movements through license plate scanners, building detailed profiles of where we go, when we go there, and who we visit. When they do this to us in violation of data privacy laws, we’ve suffered a real harm—period. We shouldn’t need to prove we’ve suffered additional damage, such as physical injury or monetary loss, to have our day in court.
That's why EFF is proud to join an amicus brief in Mata v. Digital Recognition Network, a lawsuit by drivers against a corporation that allegedly violated a California statute that regulates Automatic License Plate Readers (ALPRs). The state trial court erroneously dismissed the case, by misinterpreting this data privacy law to require proof of extra harm beyond privacy harm. The brief was written by the ACLU of Northern California, Stanford’s Juelsgaard Clinic, and UC Law SF’s Center for Constitutional Democracy.
The amicus brief explains:
This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety.
California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms. A foundation for these protections is article I, section 1, which grants Californians an inalienable right to privacy.
People in the state have long used this constitutional right to challenge the privacy-invading collection of information by private and governmental parties, not only harms that are financial, mental, or physical. Indeed, widely understood notions of privacy harm, as well as references to harm in the California Code, also demonstrate that term’s expansive meaning.
What’s At Stake
The defendant, Digital Recognition Network, also known as DRN Data, is a subsidiary of Motorola Solutions that provides access to a massive searchable database of ALPR data collected by private contractors. Its customers include law enforcement agencies and private companies, such as insurers, lenders, and repossession firms. DRN is the sister company to the infamous surveillance vendor Vigilant Solutions (now Motorola Solutions), and together they have provided data to ICE through a contract with Thomson Reuters.
The consequences of weak privacy protections are already playing out across the country. This year alone, authorities in multiple states have used license plate readers to hunt for people seeking reproductive healthcare. Police officers have used these systems to stalk romantic partners and monitor political activists. ICE has tapped into these networks to track down immigrants and their families for deportation.
Strong Privacy Laws
This case could determine whether privacy laws have real teeth or are just words on paper. If corporations can collect your personal information with impunity—knowing that unless you can prove bodily injury or economic loss, you can’t fight back—then privacy laws lose value.
We need strong data privacy laws. We need a private right of action so when a company violates our data privacy rights, we can sue them. We need a broad definition of “harm,” so we can sue over our lost privacy rights, without having to prove collateral injury. EFF wages this battle when writing privacy laws, when interpreting those laws, and when asserting “standing” in federal and state courts.
The fight for privacy isn’t just about legal technicalities. It’s about preserving your right to move through the world without being constantly tracked, catalogued, and profiled by corporations looking to profit from your personal information.
You can read the amicus brief here.
