Feed aggregator
Encroaching desert threatens to swallow Mauritania’s homes, history
Study: Climate change is shrinking glaciers faster than ever
The Judicial Conference Should Continue to Liberally Allow Amicus Briefs, a Critical Advocacy Tool
EFF does a lot of things, including impact litigation, legislative lobbying, and technology development, all to fight for your civil liberties in the digital age. With litigation, we directly represent clients and also file “amicus” briefs in court cases.
An amicus brief, also called a “friend-of-the-court” brief, is one in which we don’t represent one of the parties on either side of the “v”—instead, we provide the court with a helpful outside perspective on the case, either on our own behalf or on behalf of other groups, that can help the court make its decision.
Amicus briefs are a core part of EFF’s legal work. Over the years, courts at all levels have extensively engaged with and cited our amicus briefs, showing that they value our thoughtful legal analysis, technical expertise, and public interest mission.
Unfortunately, the Judicial Conference—the body that oversees the federal court system—has proposed changes to the rule governing amicus briefs (Federal Rule of Appellate Procedure 29) that would make it harder to file such briefs in the circuit courts.
EFF filed comments with the Judicial Conference sharing our thoughts on the proposed rule changes (a total of 407 comments were filed). Two proposed changes are particularly concerning.
First, amicus briefs would be “disfavored” if they address issues “already mentioned” by the parties. This language is extremely broad and may significantly reduce the amount and types of amicus briefs that are filed in the circuit courts. As we said in our comments:
We often file amicus briefs that expand upon issues only briefly addressed by the parties, either because of lack of space given other issues that party counsel must also address on appeal, or a lack of deep expertise by party counsel on a specific issue that EFF specializes in. We see this often in criminal appeals when we file in support of the defendant. We also file briefs that address issues mentioned by the parties but additionally explain how the relevant technology works or how the outcome of the case will impact certain other constituencies.
We then shared examples of EFF amicus briefs that may have been disfavored if the “already mentioned” standard had been in effect, even though our briefs provided help to the courts. Just two examples are:
- In United States v. Cano, we filed an amicus brief that addressed the core issue of the case—whether the border search exception to the Fourth Amendment’s warrant requirement applies to cell phones. We provided a detailed explanation of the privacy interests in digital devices, and a thorough Fourth Amendment analysis regarding why a warrant should be required to search digital devices at the border. The Ninth Circuit extensively engaged with our brief to vacate the defendant’s conviction.
- In NetChoice, LLC v. Attorney General of Florida, a First Amendment case about social media content moderation (later considered by the Supreme Court), we filed an amicus brief that elaborated on points only briefly made by the parties about the prevalence of specialized social media services reflecting a wide variety of subject matter focuses and political viewpoints. Several of the examples we provided were used by the 11th Circuit in its opinion.
Second, the proposed rules would require an amicus organization (or person) to file a motion with the court and get formal approval before filing an amicus brief. This would replace the current rule, which also allows an amicus brief to be filed if both parties in the case consent (which is commonly what happens).
As we stated in our comments: “Eliminating the consent provision will dramatically increase motion practice for circuit courts, putting administrative burdens on the courts as well as amicus brief filers.” We also argued that this proposed change “is not in the interests of justice.” We wrote:
Having to write and file a separate motion may disincentivize certain parties from filing amicus briefs, especially people or organizations with limited resources … The circuits should … facilitate the participation by diverse organizations at all stages of the appellate process—where appeals often do not just deal with discrete disputes between parties, but instead deal with matters of constitutional and statutory interpretation that will impact the rights of Americans for years to come.
Amicus briefs are a crucial part of EFF’s work in defending your digital rights, and our briefs provide valuable arguments and expertise that help the courts make informed decisions. That’s why we are calling on the Judicial Conference to reject these changes and preserve our ability to file amicus briefs in the federal appellate courts that make a difference.
Your support is essential in ensuring that we can continue to fight for your digital rights—in and out of court.
Friday Squid Blogging: New Squid Fossil
A 450-million-year-old squid fossil was dug up in upstate New York.
Cornered by the UK’s Demand for an Encryption Backdoor, Apple Turns Off Its Strongest Security Setting
Today, in response to the U.K.’s demands for a backdoor, Apple has stopped offering users in the U.K. Advanced Data Protection, an optional feature in iCloud that turns on end-to-end encryption for files, backups, and more.
Had Apple complied with the U.K.’s original demands, it would have been required to create a backdoor not just for users in the U.K., but for people around the world, regardless of where they were or what citizenship they had. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud.
This blanket, worldwide demand put Apple in an untenable position. Apple has long claimed it wouldn’t create a backdoor, and in filings to the U.K. government in 2023, the company specifically raised the possibility of disabling features like Advanced Data Protection as an alternative. Apple's decision to disable the feature for U.K. users could well be the only reasonable response at this point, but it leaves those people at the mercy of bad actors and deprives them of a key privacy-preserving technology. The U.K. has chosen to make its own citizens less safe and less free.
Although the U.K. Investigatory Powers Act purportedly authorizes orders to compromise security like the one issued to Apple, policymakers in the United States are not entirely powerless. As Senator Ron Wyden and Representative Andy Biggs noted in a letter to the Director of National Intelligence (DNI) last week, the US and U.K. are close allies who have numerous cybersecurity- and intelligence-sharing agreements, but “the U.S. government must not permit what is effectively a foreign cyberattack waged through political means.” They pose a number of key questions, including whether the CLOUD Act—an “encryption-neutral” law that enables special status for the U.K. to request data directly from US companies—actually allows the sort of demands at issue here. We urge Congress and others in the US to pressure the U.K. to back down and to provide support for US companies to resist backdoor demands, regardless of what government issues them.
Meanwhile, Apple is not the only company operating in the U.K. that offers end-to-end encrypted backup features. For example, you can optionally enable end-to-end encryption for chat backups in WhatsApp or backups from Samsung Galaxy phones. Many cloud backup services offer similar protections, as do countless chat apps, like Signal, that secure conversations. We do not know if other companies have been approached with similar requests, but we hope they stand their ground as well.
If you’re in the U.K. and have not enabled ADP, you can no longer do so. If you have already enabled it, Apple will provide guidance soon about what to do. This change will not affect the end-to-end encryption used in Apple Messages, nor does it alter other data that’s end-to-end encrypted by default, like passwords and health data. But iCloud backups have long been a loophole for law enforcement to gain access to data otherwise not available to them on iPhones with device encryption enabled, including the contents of messages they’ve stored in the backup. Advanced Data Protection is an optional feature to close that loophole. Without it, U.K. users’ files and device backups will be accessible to Apple, and thus shareable with law enforcement.
We appreciate Apple’s stance against the U.K. government’s request. Weakening encryption violates fundamental rights. We all have the right to private spaces, and any backdoor would annihilate that right. The U.K. must back down from these overreaching demands and allow Apple—and others—to provide the option for end-to-end encrypted cloud storage.
Study: Even after learning the right idea, humans and animals still seem to test other approaches
Maybe it’s a life hack or a liability, or a little of both. A surprising result in a new MIT study may suggest that people and animals alike share an inherent propensity to keep updating their approach to a task even when they have already learned how they should approach it, and even if the deviations sometimes lead to unnecessary error.
The behavior of “exploring” when one could just be “exploiting” could make sense for at least two reasons, says Mriganka Sur, senior author of the study published Feb. 18 in Current Biology. Just because a task’s rules seem set one moment doesn’t mean they’ll stay that way in this uncertain world, so altering behavior from the optimal condition every so often could help reveal needed adjustments. Moreover, trying new things when you already know what you like is a way of finding out whether there might be something even better out there than the good thing you’ve got going on right now.
“If the goal is to maximize reward, you should never deviate once you have found the perfect solution, yet you keep exploring,” says Sur, the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “Why? It’s like food. We all like certain foods, but we still keep trying different foods because you never know, there might be something you could discover.”
Predicting timing
Former research technician Tudor Dragoi, now a graduate student at Boston University, led the study in which he and fellow members of the Sur Lab explored how humans and marmosets, a small primate, make predictions about event timing.
Three humans and two marmosets were given a simple task. They’d see an image on a screen for some amount of time — the amount of time varied from one trial to the next within a limited range — and they simply had to hit a button (marmosets poked a tablet while humans clicked a mouse) when the image disappeared. Success was defined as reacting as quickly as possible to the image’s disappearance without hitting the button too soon. Marmosets received a juice reward on successful trials.
Though marmosets needed more training time than humans, the subjects all settled into the same reasonable pattern of behavior regarding the task. The longer the image stayed on the screen, the faster their reaction time to its disappearance. This behavior follows the “hazard model” of prediction in which, if the image can only last for so long, the longer it’s still there, the more likely it must be to disappear very soon. The subjects learned this and overall, with more experience, their reaction times became faster.
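The hazard-model logic is easy to make concrete. The sketch below is our own illustration, not the study's fitted model: if the image duration is drawn uniformly from a handful of equally likely values, the hazard rate (the chance the image disappears at this instant, given that it hasn't yet) rises steadily, which is why a longer wait justifies a faster anticipated reaction.

```python
# Hypothetical sketch of the "hazard model" of prediction: for durations
# drawn uniformly from a bounded set, the hazard rate rises over time,
# so the longer the image has stayed on screen, the more likely it is to
# disappear very soon. (Illustration only; not the study's fitted model.)

def hazard_rates(pmf):
    """Discrete hazard: P(T = t) / P(T >= t) for each possible duration t."""
    remaining = sum(pmf.values())
    rates = {}
    for t in sorted(pmf):
        rates[t] = pmf[t] / remaining
        remaining -= pmf[t]
    return rates

# Five equally likely image durations (arbitrary time units)
pmf = {t: 0.2 for t in range(1, 6)}
rates = hazard_rates(pmf)
# rates climbs monotonically from 0.2 toward 1.0 at the longest duration
```

At the final possible duration the hazard reaches 1.0: if the image is still there, it must vanish now, so the fastest reaction is warranted.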
But as the experiment continued, Sur and Dragoi’s team noticed something surprising was also going on. Mathematical modeling of the reaction time data revealed that both the humans and marmosets were letting the results of the immediate previous trial influence what they did on the next trial, even though they had already learned what to do. If the image was only on the screen briefly in one trial, on the next round subjects would decrease reaction time a bit (presumably expecting a shorter image duration again) whereas if the image lingered, they’d increase reaction time (presumably because they figured they’d have a longer wait).
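This trial-to-trial adjustment resembles a simple running update of the expected duration. The following is a minimal sketch of that idea under our own assumptions, not the paper's fitted model: the prediction is nudged toward whatever duration was just observed.

```python
# Minimal illustration of a trial-history effect (our sketch, not the
# paper's fitted model): the predicted image duration is nudged toward
# the duration just observed, so a short trial shortens the next
# prediction and a long trial lengthens it.

def update_expectation(expected, observed, learning_rate=0.3):
    """Exponentially weighted update of the predicted duration."""
    return expected + learning_rate * (observed - expected)

expected = 3.0  # learned average duration (arbitrary units)
after_short = update_expectation(expected, 1.0)  # brief trial: expect shorter next time
after_long = update_expectation(expected, 5.0)   # lingering trial: expect longer
assert after_short < expected < after_long
```

Even after learning, an update rule like this keeps the prediction hostage to the most recent trial, producing exactly the kind of residual deviation the researchers measured.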
Those results add to ones from a similar study Sur’s lab published in 2023, in which they found that even after mice learned the rules of a different cognitive task, they’d arbitrarily deviate from the winning strategy every so often. In that study, like this one, learning the successful strategy didn’t prevent subjects from continuing to test alternatives, even if it meant sacrificing reward.
“The persistence of behavioral changes even after task learning may reflect exploration as a strategy for seeking and setting on an optimal internal model of the environment,” the scientists wrote in the new study.
Relevance for autism
The similarity of the human and marmoset behaviors is an important finding as well, Sur says. That’s because differences in making predictions about one’s environment are posited to be a salient characteristic of autism spectrum disorders. Because marmosets are small, inherently social, and more cognitively complex than mice, work has begun in some labs to establish marmoset autism models, but a key component was establishing that they model autism-related behaviors well. By demonstrating that marmosets model neurotypical human behavior regarding predictions, the study therefore adds weight to the emerging idea that marmosets can indeed provide informative models for autism studies.
In addition to Dragoi and Sur, other authors of the paper are Hiroki Sugihara, Nhat Le, Elie Adam, Jitendra Sharma, Guoping Feng, and Robert Desimone.
The Simons Foundation Autism Research Initiative supported the research through the Simons Center for the Social Brain at MIT.
EFF at RightsCon 2025
EFF is delighted to be attending RightsCon again—this year hosted in Taipei, Taiwan, from 24 to 27 February.
RightsCon provides an opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions.
Many EFFers are heading to Taipei and will be actively participating in this year's event. Several members will lead sessions, speak on panels, and be available for networking.
Our delegation includes:
- Alexis Hancock, Director of Engineering, Certbot
- Babette Ngene, Public Interest Technology Director
- Christoph Schmon, International Policy Director
- Cindy Cohn, Executive Director
- Daly Barnett, Senior Staff Technologist
- David Greene, Senior Staff Attorney and Civil Liberties Director
- Jillian York, Director of International Freedom of Expression
- Karen Gullo, Senior Writer for Free Speech and Privacy
- Paige Collings, Senior Speech and Privacy Activist
- Svea Windwehr, Assistant Director of EU Policy
- Veridiana Alimonti, Associate Director For Latin American Policy
We hope you’ll have the opportunity to connect with us during the conference, especially at the following sessions:
Day 0 (Monday 24 February)
Mutual Support: Amplifying the Voices of Digital Rights Defenders in Taiwan and East Asia
09:00 - 12:30, Room 101C
Alexis Hancock, Director of Engineering, Certbot
Host institutions: Open Culture Foundation, Odditysay Labs, Citizen Congress Watch and FLAME
This event aims to present Taiwan and East Asia’s digital rights landscape, highlighting current challenges faced by digital rights defenders and fostering resonance with participants' experiences. Join to engage in insightful discussions, learn from Taiwan’s tech community and civil society, and contribute to the global dialogue on these pressing issues. The form to register is here.
Platform accountability in crisis? Global perspective on platform accountability frameworks
09:00 - 13:00, Room 202A
Christoph Schmon, International Policy Director; Babette Ngene, Public Interest Technology Director
Host institutions: Electronic Frontier Foundation (EFF), Access Now
This high level panel will reflect on alarming developments in platforms' content policies and their enforcement, and discuss whether existing frameworks offer meaningful tools to counter the current platform accountability crisis. The starting point for the discussion will be Access Now's recently launched report Platform accountability: a rule-of-law checklist for policymakers. The panel will be followed by a workshop, dedicated to the “Draft Viennese Principles for Embedding Global Considerations into Human-Rights-Centred DSA enforcement”. Facilitated by the DSA Human Rights Alliance, the workshop will provide a safe space for civil society organisations to strategize and discuss necessary elements of a human rights based approach to platform governance.
Day 1 (Tuesday 25 February)
Criminalization of Tor in Ola Bini’s case? Lessons for digital experts in the Global South
09:00 - 10:00 (online)
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Access Now, Centro de Autonomía Digital (CAD), Observation Mission of the Ola Bini Case, Tor Project
This session will analyze how the use of Tor is criminalized in Ola Bini´s case and its implications for digital experts in other contexts of criminalization in the Global South, especially when they defend human rights online. Participants will work through various exercises to: 1- Analyze, from a technical perspective, the judicial criminalization of Tor in Ola Bini´s case, and 2- Collectively analyze how its criminalization can affect (judicially) the work of digital experts from the Global South and discuss possible support alternatives.
The counter-surveillance supply chain
11:30 - 12:30, Room 201F
Babette Ngene, Public Interest Technology Director
Host institution: Meta
The fight against surveillance and other malicious cyber adversaries is a whole-of-society problem, requiring international norms and policies, in-depth research, platform-level defenses, investigation, and detection. This dialogue focuses on the critical first link in this counter-surveillance supply chain: the on-the-ground organizations around the world that are the first contact for local activists and organizations dealing with targeted malware. It will include an open discussion on how to improve the global response to surveillance and surveillance-for-hire actors through a lens of local contextual knowledge and information sharing.
Day 3 (Wednesday 26 February)
The right not to be subject to automated decisions: challenges and regulations in the judicial sector
16:30 - 17:30, Room 101C
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Hiperderecho, Red en Defensa de los Derechos Digitales, Instituto Panamericano de Derecho y Tecnología
This panel will analyze specific cases from Mexico, Peru, and Colombia to understand the ethical and legal implications of using artificial intelligence to draft and reason judicial rulings. The dialogue seeks to address the right not to be subject to automated decisions and the ethical and legal implications of automating judicial rulings. Some tools can reproduce or amplify discriminatory stereotypes, in addition to possible violations of privacy and personal data protection rights, among other concerns.
Prying Open the Age-Gate: Crafting a Human Rights Statement Against Age Verification Mandates
16:30 - 17:30, Room 401
David Greene, Senior Staff Attorney and Civil Liberties Director
Host institutions: Electronic Frontier Foundation (EFF), Open Net, Software Freedom Law Centre, EDRi
The session will engage participants in considering the issues and seeding the drafting of a global human rights statement on online age verification mandates. After a background presentation on various global legal models to challenge such mandates (with the facilitators representing Asia, Africa, Europe, US), participants will be encouraged to submit written inputs (that will be read during the session) and contribute to a discussion. This will be the start of an ongoing effort that will extend beyond RightsCon with the goal of producing a human rights statement that will be shared and endorsed broadly.
Day 4 (Thursday 27 February)
Let's talk about the elephant in the room: transnational policing and human rights
10:15 - 11:15, Room 201B
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto
This dialogue focuses on growing trends surrounding transnational policing, which pose new and evolving challenges to international human rights. The session will distill emergent themes, with focal points including expanding informal and formal transnational cooperation and data-sharing frameworks at regional and international levels, the evolving role of borders in the development of investigative methods, and the proliferation of new surveillance technologies including mercenary spyware and AI-driven systems.
Queer over fear: cross-regional strategies and community resistance for LGBTQ+ activists fighting against digital authoritarianism
11:30 - 12:30, Room 101D
Paige Collings, Senior Speech and Privacy Activist
Host institutions: Access Now, Electronic Frontier Foundation (EFF), De|Center, Fight for the Future
The rise of the international anti-gender movement has seen authorities pass anti-LGBTQ+ legislation that has made the stakes of survival even higher for sexual and gender minorities. This workshop will bring together LGBTQ+ activists from Africa, the Middle East, Eastern Europe, Central Asia and the United States to exchange ideas for advocacy and liberation from the policies, practices and directives deployed by states to restrict LGBTQ+ rights, as well as how these actions impact LGBTQ+ people—online and offline—particularly in regards to online organizing, protest and movement building.
Utah Bill Aims to Make Officers Disclose AI-Written Police Reports
A bill headed to the Senate floor in Utah would require officers to disclose if a police report was written by generative AI. The bill, S.B. 180, requires a department to have a policy governing the use of AI. This policy would mandate that police reports created in whole or in part by generative AI have a disclaimer that the report contains content generated by AI and requires officers to legally certify that the report was checked for accuracy.
S.B. 180 is unfortunately a necessary step in the right direction when it comes to regulating the rapid spread of police using generative AI to write their narrative reports for them. EFF will continue to monitor this bill in hopes that it will be part of a larger conversation about more robust regulations. Specifically, Axon, the makers of tasers and the salespeople behind a shocking amount of police and surveillance tech, has recently rolled out a new product, Draft One, which uses body-worn camera audio to generate police reports. This product is spreading quickly in part because it is integrated with other Axon products which are already omnipresent in U.S. society.
But it’s going to take more than a disclaimer to curb the potential harms of AI-generated police reports.
As we’ve previously cautioned, the public should be skeptical of AI’s ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As online content moderation has shown, software may have a passable ability to capture words, but it often struggles with content and meaning. In a tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change the content of a police report.
Moreover, so-called artificial intelligence taking over consequential tasks and decision-making has the power to obscure human agency. Police officers who deliberately exaggerate or lie to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply did not capture what was happening in the chaotic video.
As this technology spreads without much transparency, oversight, or guardrails, we are likely to see more cities, counties, and states push back against its use. Out of fear that AI-generated reports would complicate and compromise cases in the criminal justice system, prosecutors in King County, Washington (which includes Seattle) have instructed officers not to use the technology for now.
The use of AI to write police reports is troubling in ways we are accustomed to, but also in new ways. Not only do we not yet know how widespread use of this technology will affect the criminal justice system, but because of how the product is designed, there is a chance we won’t even know if AI has been used even if we are staring directly at the police report in question. For that reason, it’s no surprise that lawmakers in Utah have introduced this bill to require some semblance of transparency. We will likely see similar regulations and restrictions in other states and local jurisdictions, and possibly even stronger ones.
Implementing Cryptography in AI Systems
Interesting research: “How to Securely Implement Cryptography in Deep Neural Networks.”
Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how can we equip them with a desired cryptographic functionality (e.g, to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose “bits” are arbitrary real numbers...
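The discrete-versus-continuous mismatch the abstract describes can be made concrete with a toy example (ours, not the paper's): ReLU neurons can emulate Boolean gates exactly on 0/1 inputs, but nothing stops a user of the network from supplying in-between real values that a digital circuit could never receive.

```python
# Toy illustration (not from the paper) of Boolean logic built from ReLUs.

def relu(x):
    return max(0.0, x)

def and_gate(a, b):
    # Fires only when both inputs are 1 (for inputs in {0, 1})
    return relu(a + b - 1.0)

def xor_gate(a, b):
    # Two-layer ReLU circuit: a + b minus twice the AND
    return relu(a + b - 2.0 * and_gate(a, b))

# Correct on Boolean inputs:
assert and_gate(1, 1) == 1.0 and and_gate(1, 0) == 0.0
assert xor_gate(0, 1) == 1.0 and xor_gate(1, 1) == 0.0

# But an adversary can query with arbitrary reals: xor_gate(0.5, 0.5)
# returns 1.0, even though rounding the inputs to bits would give XOR = 0.
# Security arguments made for the bit-level circuit say nothing about such inputs.
```

This is the crux of the paper's question: a cryptographic primitive ported gate-by-gate into a DNN behaves correctly on well-formed inputs, yet an attacker interacting with the analog model can probe states that have no digital counterpart.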
USAID document: Climate programs are being shut down
OSHA purged ‘diversity’ documents. But they weren’t about DEI.
EPA declines to publicly release endangerment finding recommendation
House Republicans launch drive to undo Biden rules
Trump taps fossil fuel insider to run DOE’s renewable office
EPA places director of Greenhouse Gas Reduction Fund on leave
Europe likely to miss most green targets for 2030
EU to force restaurants, fashion brands to slash their waste
Brazil’s net-zero transition will cost $6T by 2050, BNEF says
New UK-based climate group emerges as international banks retreat
High-speed videos show what happens when a droplet splashes into a pool
Rain can freefall at speeds of up to 25 miles per hour. If the droplets land in a puddle or pond, they can form a crown-like splash that, with enough force, can dislodge any surface particles and launch them into the air.
Now MIT scientists have taken high-speed videos of droplets splashing into a deep pool, to track how the fluid evolves, above and below the water line, frame by millisecond frame. Their work could help to predict how splashing droplets, such as from rainstorms and irrigation systems, may impact watery surfaces and aerosolize surface particles, such as pollen on puddles or pesticides in agricultural runoff.
The team carried out experiments in which they dispensed water droplets of various sizes and from various heights into a pool of water. Using high-speed imaging, they measured how the liquid pool deformed as the impacting droplet hit the pool’s surface.
Across all their experiments, they observed a common splash evolution: As a droplet hit the pool, it pushed down below the surface to form a “crater,” or cavity. At nearly the same time, a wall of liquid rose above the surface, forming a crown. Interestingly, the team observed that small, secondary droplets were ejected from the crown before the crown reached its maximum height. This entire evolution happens in a fraction of a second.
Scientists have caught snapshots of droplet splashes in the past, such as the famous “Milk Drop Coronet” — a photo of a drop of milk in mid-splash, taken by the late MIT professor Harold “Doc” Edgerton, who invented a photographic technique to capture quickly moving objects.
The new work represents the first time scientists have used such high-speed images to model the entire splash dynamics of a droplet in a deep pool, combining what happens both above and below the surface. The team has used the imaging to gather new data central to building a mathematical model that predicts how a droplet’s shape will morph and merge as it hits a pool’s surface. They plan to use the model as a baseline to explore to what extent a splashing droplet might drag up and launch particles from the water pool.
“Impacts of drops on liquid layers are ubiquitous,” says study author Lydia Bourouiba, a professor in the MIT departments of Civil and Environmental Engineering and Mechanical Engineering, and a core member of the Institute for Medical Engineering and Science (IMES). “Such impacts can produce myriads of secondary droplets that could act as carriers for pathogens, particles, or microbes that are on the surface of impacted pools or contaminated water bodies. This work is key in enabling prediction of droplet size distributions, and potentially also what such drops can carry with them.”
Bourouiba and her mentees have published their results in the Journal of Fluid Mechanics. MIT co-authors include former graduate student Raj Dandekar PhD ’22, postdoc (Eric) Naijian Shen, and student mentee Boris Naar.
Above and below
At MIT, Bourouiba heads up the Fluid Dynamics of Disease Transmission Laboratory, part of the Fluids and Health Network, where she and her team explore the fundamental physics of fluids and droplets in a range of environmental, energy, and health contexts, including disease transmission. For their new study, the team looked to better understand how droplets impact a deep pool — a seemingly simple phenomenon that nevertheless has been tricky to precisely capture and characterize.
Bourouiba notes that there have been recent breakthroughs in modeling the evolution of a splashing droplet below a pool’s surface. As a droplet hits a pool of water, it breaks through the surface and drags air down through the pool to create a short-lived crater. Until now, scientists have focused on the evolution of this underwater cavity, mainly for applications in energy harvesting. What happens above the water, and how a droplet’s crown-like shape evolves with the cavity below, remained less understood.
“The descriptions and understanding of what happens below the surface, and above, have remained very much divorced,” says Bourouiba, who believes such an understanding can help to predict how droplets launch and spread chemicals, particles, and microbes into the air.
Splash in 3D
To study the coupled dynamics between a droplet’s cavity and crown, the team set up an experiment to dispense water droplets into a deep pool. For the purposes of their study, the researchers considered a deep pool to be a body of water that is deep enough that a splashing droplet would remain far away from the pool’s bottom. In these terms, they found that a pool with a depth of at least 20 centimeters was sufficient for their experiments.
They varied each droplet’s size, with an average diameter of about 5 millimeters. They also dispensed droplets from various heights, causing the droplets to hit the pool’s surface at different speeds, which on average was about 5 meters per second. The overall dynamics, Bourouiba says, should be similar to what occurs on the surface of a puddle or pond during an average rainstorm.
“This is capturing the speed at which raindrops fall,” she says. “These wouldn’t be very small, misty drops. This would be rainstorm drops for which one needs an umbrella.”
Using high-speed imaging techniques inspired by Edgerton’s pioneering photography, the team captured videos of pool-splashing droplets, at rates of up to 12,500 frames per second. They then applied in-house imaging processing methods to extract key measurements from the image sequences, such as the changing width and depth of the underwater cavity, and the evolving diameter and height of the rising crown. The researchers also captured especially tricky measurements, of the crown’s wall thickness profile and inner flow — the cylinder that rises out of the pool, just before it forms a rim and points that are characteristic of a crown.
“This cylinder-like wall of rising liquid, and how it evolves in time and space, is at the heart of everything,” Bourouiba says. “It’s what connects the fluid from the pool to what will go into the rim and then be ejected into the air through smaller, secondary droplets.”
The researchers worked the image data into a set of “evolution equations,” or a mathematical model that relates the various properties of an impacting droplet, such as the width of its cavity and the thickness and speed profiles of its crown wall, and how these properties change over time, given a droplet’s starting size and impact speed.
“We now have a closed-form mathematical expression that people can use to see how all these quantities of a splashing droplet change over space and time,” says co-author Shen, who plans, with Bourouiba, to apply the new model to the behavior of secondary droplets and to understand how a splash ends up dispersing particles such as pathogens and pesticides. “This opens up the possibility to study all these problems of splash in 3D, with self-contained closed-form equations, which was not possible before.”
This research was supported, in part, by the Department of Agriculture-National Institute of Food and Agriculture Specialty Crop Research Initiative; the Richard and Susan Smith Family Foundation; the National Science Foundation; the Centers for Disease Control and Prevention-National Institute for Occupational Safety and Health; Inditex; and the National Institute of Allergy and Infectious Diseases of the National Institutes of Health.