Feed aggregator

Data Brokers are Selling Your Flight Information to CBP and ICE

EFF: Updates - Wed, 07/09/2025 - 7:06pm

For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.

This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.

One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So, not only is the government doing an end run around the Fourth Amendment to get information where they would otherwise need a warrant—they’ve also been trying to hide how they know these things about us. 

ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers. 

More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada. 

In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives. 

Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution. 

Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers. 

At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.

The new revelations about ARC’s data sales to CBP and ICE are a fresh reminder of the need for “privacy first” legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the Fourth Amendment Is Not For Sale Act to stop police from bypassing judicial review of their data seizures by purchasing data from brokers. And let’s enforce data broker registration laws.

Electronic Frontier Foundation to Present Annual EFF Awards to Just Futures Law, Erie Meyer, and Software Freedom Law Center, India

EFF: Updates - Wed, 07/09/2025 - 5:00pm
2025 Awards Will Be Presented in a Live Ceremony Wednesday, Sept. 10 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Just Futures Law, Erie Meyer, and Software Freedom Law Center, India will receive the 2025 EFF Awards for their vital work in ensuring that technology supports privacy, freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law.  

 The EFF Awards ceremony will start at 6 p.m. PT on Wednesday, Sept. 10, 2025 at the San Francisco Design Center Galleria, 101 Henry Adams St. in San Francisco. Guests can register at http://www.eff.org/effawards. The ceremony will be recorded and shared online on Sept. 12. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Whether fighting the technological abuses that abet criminalization, detention, and deportation of immigrants and people of color, or working and speaking out fearlessly to protect Americans’ data privacy, or standing up for digital rights in the world’s most populous country, all of our 2025 Awards winners contribute to creating a brighter tech future for humankind,”  EFF Executive Director Cindy Cohn said. “We hope that this recognition will bring even more support for each of these vital efforts.” 

Just Futures Law: Leading Immigration and Surveillance Litigation 

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework and seeks to transform how litigation and legal support serves communities and builds movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies, seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and worked with Grassroots Leadership to fight for the release of individuals detained under Operation Lone Star.

Erie Meyer: Protecting Americans' Privacy 

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis.

Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center, India: Defending Digital Freedoms 

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology.

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

To register for this event:  http://www.eff.org/effawards 

For past honorees: https://www.eff.org/awards/past-winners 

Changing the conversation in health care

MIT Latest News - Wed, 07/09/2025 - 4:50pm

Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges. 

The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.

The incubator is co-led by Leo Celi, a physician and the research director and senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program. 

“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”

A chance collaboration

Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.

“We’re trying to incorporate data science into health-care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”

Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”

Technology, they argue, impacts casual communication, and its impact depends on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness. 

Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers responsible AI development and implementation. Designing systems that leverage AI effectively, particularly when considering challenges related to communicating across linguistic and cultural divides that can occur in health care, demands a nuanced approach. 

“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.

Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.

“Science has to have a heart” 

LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”

The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.

“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”

“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?” 

Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous. 

Changing the conversation

AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks offering valuable cultural and linguistic contexts in which patient and practitioner can rely on data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says. 

“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.

“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”

“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.

Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.

The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.

Greater integration between the social and hard sciences can potentially increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives. 

“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”

Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Since meaning and intent can shift across those contexts, it’s important to remember these when designing AI tools. 

“AI is our chance to rewrite the rules”

While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care. 

But the team isn’t daunted.

Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”

Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.

“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”

Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.

“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”

“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”

AI shapes autonomous underwater “gliders”

MIT Latest News - Wed, 07/09/2025 - 4:35pm

Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.

Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting model can be fabricated via a 3D printer and travels using significantly less energy than hand-made counterparts.

The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique, four-winged object resembling a flat fish with four fins.

Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”

But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.

The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.

These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.
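A rough sketch of that predict-and-select loop might look like the following. The predictor here is an invented analytic stand-in for the team’s trained neural network, and the single shape parameter (a wing-span value) and candidate angles are hypothetical:

```python
import itertools
import math

# Hypothetical stand-in for the learned surrogate: maps one shape
# parameter (wing span) and an angle of attack (in degrees) to a
# predicted lift-to-drag ratio. The real pipeline trains a neural
# network on simulated shapes; this toy only illustrates the loop.
def predicted_lift_to_drag(wing_span: float, angle_deg: float) -> float:
    lift = wing_span * math.cos(math.radians(angle_deg))
    drag = 0.5 + 0.1 * wing_span**2 + 0.02 * abs(angle_deg)
    return lift / drag

# Coarse search over candidate shapes and angles of attack, keeping the
# combination the surrogate predicts to be most efficient.
candidates = itertools.product([0.5, 1.0, 1.5, 2.0, 2.5], [-30, -20, -10, 9, 30])
best_span, best_angle = max(candidates, key=lambda c: predicted_lift_to_drag(*c))
```

In the actual system, optimization through the trained network would replace this brute-force grid, nudging the shape itself toward higher predicted efficiency.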

Giving gliding robots a lift

The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.
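As a toy illustration of that selection criterion (the force values below are made up for illustration, not measurements from the project):

```python
# Lift-to-drag ratio: how strongly the glider is held up relative to how
# strongly it is held back. All force values here are hypothetical.
def lift_to_drag(lift_n: float, drag_n: float) -> float:
    """Ratio of lift force to drag force, both in newtons."""
    return lift_n / drag_n

# Invented force readings for two candidate shapes at one angle of attack.
designs = {
    "torpedo": (12.0, 4.0),    # (lift, drag): ratio 3.0
    "four_wing": (18.0, 3.0),  # (lift, drag): ratio 6.0
}

# The design with the higher ratio travels farther for the same effort.
best_design = max(designs, key=lambda name: lift_to_drag(*designs[name]))
```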

Lift-to-drag ratios are key for flying planes: at takeoff, you want to maximize lift so the plane can glide well against wind currents, while at landing you need sufficient drag to bring it to a full stop.

Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.

“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”

Going for a quick glide

While their AI pipeline seemed realistic, the researchers needed to ensure its predictions about glider performance were accurate by experimenting in more lifelike environments.

They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. With the glider placed at different angles, its predicted lift-to-drag ratios were only about 5 percent higher on average than those recorded in the wind experiments — a small difference between simulation and reality.

A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.

To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. For this test, they printed the two designs that performed best at specific angles-of-attack: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.

Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to be fabricated. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.

Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.

As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.

Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.

Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.

Collaborating with the force of nature

MIT Latest News - Wed, 07/09/2025 - 4:30pm

Common sense tells us to run from molten lava flowing from active volcanoes. But MIT professors J. Jih, Cristina Parreño Alonso, and Skylar Tibbits — faculty in the Department of Architecture at the School of Architecture and Planning — have their bags packed to head to southwest Iceland in anticipation of an imminent volcanic eruption. The Nordic island nation is currently experiencing a period of intense seismic activity; seven volcanic eruptions have taken place in its southern peninsula in under a year.

Earlier this year, the faculty built and placed a series of lightweight, easily deployable steel structures close to the volcano, where a few of the recent eruptions have taken place; several more structures are on trucks, waiting to be delivered to sites where fissures open and lava oozes out. Cameras are in place to record what happens when the lava reaches these structures, helping the team understand the flows.

This new research explores what types of shapes and materials can be used to interact with lava and successfully divert it from heading toward habitats or critical infrastructure in its path. Their work is supported by a Professor Amar G. Bose Research Grant.

“We’re trying to imagine new ways of conceptualizing infrastructure when it relates to lava and volcanic eruptions,” says Jih, an associate professor of the practice. “Lovely for us as designers, physical prototyping is the only way you can test some of these ideas out.” 

Currently, the Icelandic Department of Civil Protection and Emergency Management and an engineering group, EFLA, are diverting the lava with massive berms (approximately 44 to 54 yards long and 9 yards high) made from earth and stone.

Berms protecting the town of Grindavik, a power plant, and the popular Blue Lagoon geothermal spa have met with mixed results. In November 2024, a volcano erupted for the seventh time in less than a year, forcing the evacuation of town residents and the Blue Lagoon’s guests and employees. The latter’s parking lot was consumed by lava.

Sigurdur Thorsteinsson, chief brand, design, and innovation officer of the Blue Lagoon, as well as a designer and a partner in Design Group Italia, was on site for this eruption and several others.

“Some magma went into the city of Grindavik and three or four houses were destroyed,” says Thorsteinsson. “One of our employees watched her house go under magma on television, which was an emotional moment.”

While staff at the Blue Lagoon have become very efficient at evacuating guests, says Thorsteinsson, each eruption forces the tourist destination to close and townspeople to evacuate, disrupting lives and livelihoods.

“You cannot really stop the magma,” says Thorsteinsson, who is working with the MIT faculty on this research project. “It’s too powerful.”

Tibbits, associate professor of design research and founder and co-director of the Self-Assembly Lab, agrees. His research explores how to guide or work with the forces of nature.

Last year, Tibbits and Jih were in Iceland on another research project when erupting volcanoes interrupted their work. The two started thinking about how the lava could be redirected.

“The question is: Can we find more strategic interventions in the field that could work with the lava, rather than fight it?” says Tibbits.

To investigate what kinds of materials would withstand this type of interaction, they invited Parreño Alonso, a senior lecturer in the Department of Architecture, to join them.

“Cristina, being the department authority on magma, was an obvious and important partner for us,” says Jih with a smile.

Parreño Alonso has been working with volcanic rock for years and taught a series of design studios exploring volcanic rock as an architectural material. She also has proposed designing structures to engage directly with lava flows and recently has been examining volcanic rock in a molten state and melting basalt in MIT’s foundry with Michael Tarkanian, a senior lecturer in MIT’s Department of Materials Science and Engineering, and Metals Lab director. For this project, she is exploring the potential of molten rock as a substitute for concrete, a widely used material because of its pliability.

“It’s exciting how this idea of working with volcanoes was taking shape in parallel, from different angles, within the same department,” says Parreño Alonso. “I love how these parallel interests have led to such a beautiful collaboration.”

She also sees other opportunities by collaborating with these forces of nature.

“We are interested in the potential of generating something out of the interaction with the lava,” she says. “Could it be a landscape that becomes a park? There are many possibilities.”

The steel structures were first tested at MIT’s Metals Lab with Tarkanian and then built onsite in Iceland. The team wanted to make the structures lightweight so they could be quickly set up in the field, but strong enough so they wouldn’t be easily destroyed. Various designs were created; this iteration of the design has V-shaped structures that can guide the lava to flow around them, or they can be reconfigured as ramps or tunnels.

“There is a road that has been hit by many of the recent eruptions and must keep being rebuilt,” says Tibbits. “We created two ramps that could in the future serve as tunnels, allowing the lava to flow over the road and create a type of lava cave where the cars could drive under the cooled lava.”

Tibbits says they see the structures in the field now as an initial intervention. After documenting and studying how they interact with the lava, the architects will develop new iterations of what they believe will eventually become critical infrastructure for locations around the world with active volcanoes.

“If we can show and prove what kinds of shapes and structures and what kinds of materials can divert magma flows, I think it’s incredibly valuable research,” says Thorsteinsson.

Thorsteinsson lives in Italy half of the year and says the volcanoes there — Mount Etna in Sicily and Mount Vesuvius in the Gulf of Naples — pose a greater danger than those in Iceland because of the densely populated neighborhoods nearby. Volcanoes in Hawaii and Japan are in similarly populated areas.

“Whatever information you can learn about diverting magma flows to other directions and what kinds of structures are needed — it would be priceless,” he says.

Yet Another Strava Privacy Leak

Schneier on Security - Wed, 07/09/2025 - 7:05am

This time it’s the Swedish prime minister’s bodyguards. (Last year, it was the US Secret Service and Emmanuel Macron’s bodyguards. In 2018, it was secret US military bases.)

This is ridiculous. Why do people continue to make their data public?

Trump has long politicized disasters. Now he’s on the other side.

ClimateWire News - Wed, 07/09/2025 - 6:23am
Democrats have seized on the administration's cuts to the National Weather Service in the wake of the Texas floods.

Texas starved fund meant to reduce flood risk

ClimateWire News - Wed, 07/09/2025 - 6:23am
The state has identified more than $50 billion in flood control needs. But lawmakers have devoted just $1.4 billion to address them.

Trump’s FEMA council meets Wednesday as agency helps Texas

ClimateWire News - Wed, 07/09/2025 - 6:21am
The president created the FEMA Review Council to recommend overhauling the agency. The meeting collides with FEMA's response to deadly flooding.

Researchers who question mainstream climate science join DOE

ClimateWire News - Wed, 07/09/2025 - 6:21am
The Department of Energy brings aboard three scientists known for challenging mainstream climate research.

US carbon removal seen backsliding under Trump — report

ClimateWire News - Wed, 07/09/2025 - 6:20am
The president's agenda threatens to shift technological innovation to other countries.

3 missing as flash flooding hits New Mexico mountain village

ClimateWire News - Wed, 07/09/2025 - 6:19am
Emergency crews carried out at least 85 swift water rescues in the Ruidoso area, including of people who were trapped in their homes and cars, said an emergency official.

Climate change tripled recent heat deaths in Europe, scientists say

ClimateWire News - Wed, 07/09/2025 - 6:19am
Global warming caused an additional 1,500 deaths in 12 cities during last week’s heat wave, an analysis found.

Monsoon flooding sweeps away 20 people, destroys Nepal-China link

ClimateWire News - Wed, 07/09/2025 - 6:18am
The flooding on the Bhotekoshi River destroyed the Friendship Bridge at Rasuwagadi, which is 75 miles north of the capital, Kathmandu.

Storms, fires hit Balkan countries following extreme heat

ClimateWire News - Wed, 07/09/2025 - 6:18am
Rain was welcome in Serbia, where firefighters battled more than 600 wildfires Monday.

Far-right climate delayers to lead Parliament talks on EU’s 2040 target

ClimateWire News - Wed, 07/09/2025 - 6:17am
The Patriots for Europe group will be in charge of delicate negotiations on the next emissions-cutting milestone.

Global finance watchdog confronts climate discord after officials clash

ClimateWire News - Wed, 07/09/2025 - 6:16am
The rift has prompted the Financial Stability Board to amend a report that details the progress it’s made in delivering on climate-related objectives.

Implantable device could save diabetes patients from dangerously low blood sugar

MIT Latest News - Wed, 07/09/2025 - 5:00am

For people with Type 1 diabetes, developing hypoglycemia, or low blood sugar, is an ever-present threat. When glucose levels become extremely low, it creates a life-threatening situation for which the standard of care is injecting a hormone called glucagon.

As an emergency backup, for cases where patients may not realize that their blood sugar is dropping to dangerous levels, MIT engineers have designed an implantable reservoir that can remain under the skin and be triggered to release glucagon when blood sugar levels get too low.

This approach could also help in cases where hypoglycemia occurs during sleep, or for diabetic children who are unable to administer injections on their own.

“This is a small, emergency-event device that can be placed under the skin, where it is ready to act if the patient’s blood sugar drops too low,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. “Our goal was to build a device that is always ready to protect patients from low blood sugar. We think this can also help relieve the fear of hypoglycemia that many patients, and their parents, suffer from.”

The researchers showed that this device could also be used to deliver emergency doses of epinephrine, a drug that is used to treat heart attacks and can also prevent severe allergic reactions, including anaphylactic shock.

Siddharth Krishnan, a former MIT research scientist who is now an assistant professor of electrical engineering at Stanford University, is the lead author of the study, which appears today in Nature Biomedical Engineering.

Emergency response

Most patients with Type 1 diabetes use daily insulin injections to help their body absorb sugar and prevent their blood sugar levels from getting too high. However, if their blood sugar levels get too low, they develop hypoglycemia, which can lead to confusion and seizures, and may be fatal if it goes untreated.

To combat hypoglycemia, some patients carry preloaded syringes of glucagon, a hormone that stimulates the liver to release glucose into the bloodstream. However, it isn’t always easy for people, especially children, to know when they are becoming hypoglycemic.

“Some patients can sense when they’re getting low blood sugar, and go eat something or give themselves glucagon,” Anderson says. “But some are unaware that they’re hypoglycemic, and they can just slip into confusion and coma. This is also a problem when patients sleep, as they are reliant on glucose sensor alarms to wake them when sugar drops dangerously low.”

To make it easier to counteract hypoglycemia, the MIT team set out to design an emergency device that could be triggered either by the person using it, or automatically by a sensor.

The device, which is about the size of a quarter, contains a small drug reservoir made of a 3D-printed polymer. The reservoir is sealed with a special material known as a shape-memory alloy, which can be programmed to change its shape when heated. In this case, the researchers used a nickel-titanium alloy that is programmed to curl from a flat slab into a U-shape when heated to 40 degrees Celsius.

Like many other protein or peptide drugs, glucagon tends to break down quickly, so the liquid form can’t be stored long-term in the body. Instead, the MIT team created a powdered version of the drug, which remains stable for much longer and stays in the reservoir until released.

Each device can carry either one or four doses of glucagon, and it also includes an antenna tuned to respond to a specific frequency in the radiofrequency range. That allows it to be remotely triggered to turn on a small electrical current, which is used to heat the shape-memory alloy. When the temperature reaches the 40-degree threshold, the slab bends into a U shape, releasing the contents of the reservoir.

Because the device can receive wireless signals, it could also be designed so that drug release is triggered by a glucose monitor when the wearer’s blood sugar drops below a certain level.

“One of the key features of this type of digital drug delivery system is that you can have it talk to sensors,” Krishnan says. “In this case, the continuous glucose-monitoring technology that a lot of patients use is something that would be easy for these types of devices to interface with.”
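The sensor-to-implant handoff described above amounts to simple threshold logic: a continuous glucose monitor streams readings, and when they stay below a hypoglycemia cutoff, a wireless signal tells the implant to heat its shape-memory seal and release a dose. The sketch below illustrates that control flow only; the class name, thresholds, and dose accounting are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch of CGM-driven trigger logic for a wireless
# glucagon implant. All names and numbers here are assumptions for
# illustration, not taken from the MIT device.

HYPO_THRESHOLD_MG_DL = 54  # severe-hypoglycemia cutoff (illustrative)
CONFIRM_READINGS = 2       # require consecutive low readings to reject sensor noise

class GlucagonTrigger:
    def __init__(self, doses_available: int = 4):
        self.doses_available = doses_available
        self.low_count = 0

    def on_cgm_reading(self, glucose_mg_dl: float) -> bool:
        """Return True when an RF release signal should be sent.

        The caller would then energize the implant's antenna, heating
        the nickel-titanium seal past its 40 C transition so it curls
        open and releases one powdered dose.
        """
        if glucose_mg_dl < HYPO_THRESHOLD_MG_DL:
            self.low_count += 1
        else:
            self.low_count = 0  # reading recovered; reset the streak
        if self.low_count >= CONFIRM_READINGS and self.doses_available > 0:
            self.doses_available -= 1
            self.low_count = 0
            return True
        return False
```

Requiring two consecutive low readings before firing is one plausible way to trade a few minutes of latency for robustness against a single spurious sensor value; a real system would need clinically validated thresholds.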

Reversing hypoglycemia

After implanting the device in diabetic mice, the researchers used it to trigger glucagon release as the animals’ blood sugar levels were dropping. Within less than 10 minutes of activating the drug release, blood sugar levels began to level off, allowing them to remain within the normal range and avert hypoglycemia.

The researchers also tested the device with a powdered version of epinephrine. They found that within 10 minutes of drug release, epinephrine levels in the bloodstream became elevated and heart rate increased.

In this study, the researchers kept the devices implanted for up to four weeks, but they now plan to see if they can extend that time up to at least a year.

“The idea is you would have enough doses that can provide this therapeutic rescue event over a significant period of time. We don’t know exactly what that is — maybe a year, maybe a few years, and we’re currently working on establishing what the optimal lifetime is. But then after that, it would need to be replaced,” Krishnan says.

Typically, when a medical device is implanted in the body, scar tissue develops around the device, which can interfere with its function. However, in this study, the researchers showed that even after fibrotic tissue formed around the implant, they were able to successfully trigger the drug release.

The researchers are now planning for additional animal studies and hope to begin testing the device in clinical trials within the next three years.

“It’s really exciting to see our team accomplish this, which I hope will someday help diabetic patients and could more broadly provide a new paradigm for delivering any emergency medicine,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.

Other authors of the paper include Laura O’Keeffe, Arnab Rudra, Derin Gumustop, Nima Khatib, Claudia Liu, Jiawei Yang, Athena Wang, Matthew Bochenek, Yen-Chun Lu, Suman Bose, and Kaelan Reed.

The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, a JDRF postdoctoral fellowship, and the National Institute of Biomedical Imaging and Bioengineering.

Processing our technological angst through humor

MIT Latest News - Wed, 07/09/2025 - 12:00am

The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then, using its text-to-speech software, the Macintosh made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”

There’s a reason Jobs was doing that. For the first few decades that computing became part of cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” in which the onboard computer, HAL, turns against the expedition’s astronauts. It’s a famous cultural touchstone. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.

“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.

In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.

“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”

Reversals of fortune

Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.

Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.

“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”

That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.

“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”

Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on an entire category of commentary he calls “the Great Tech-Industrial Joke.” This focuses on the gap between noble-sounding declared aspirations of technology and the sometimes-dismal outcomes it creates.

Social media, for instance, promised new worlds of connectivity and social exploration, and has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.

“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”

A complicated, messy picture

“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent and modern construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.

“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.

“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”

For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.

“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”

Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”

Amplified warming accelerates deoxygenation in the Arctic Ocean

Nature Climate Change - Wed, 07/09/2025 - 12:00am

Nature Climate Change, Published online: 09 July 2025; doi:10.1038/s41558-025-02376-0

Rapid warming of the global ocean and amplified Arctic warming will alter the ocean biogeochemistry. Here the authors show that Atlantic water inflow, and the subsequent subduction and circulation, is reducing dissolved oxygen in the Arctic due to reduced solubility with increased temperatures.
