Feed aggregator
Contrasting biological production trends over land and ocean
Nature Climate Change, Published online: 01 August 2025; doi:10.1038/s41558-025-02375-1
The authors jointly assess the changes in land and ocean net primary production from 2003 to 2021. They show contrasting trends, with overall planetary increases (0.11 ± 0.13 PgC yr−1) driven by terrestrial enhancement and offset by oceanic decline.
Ushering in a new era of suture-free tissue reconstruction for better healing
When surgeons repair tissues, they’re currently limited to mechanical solutions like sutures and staples, which can cause their own damage, or meshes and glues that may not adequately bond with tissues and can be rejected by the body.
Now, Tissium is offering surgeons a new solution based on a biopolymer technology first developed at MIT. The company’s flexible, biocompatible polymers conform to surrounding tissues, attaching to them in order to repair torn tissue after being activated using blue light.
“Our goal is to make this technology the new standard in fixation,” says Tissium co-founder Maria Pereira, who began working with polymers as a PhD student through the MIT Portugal Program. “Surgeons have been using sutures, staples, or tacks for decades or centuries, and they’re quite penetrating. We’re trying to help surgeons repair tissues in a less traumatic way.”
In June, Tissium reached a major milestone when it received marketing authorization from the Food and Drug Administration for its non-traumatic, sutureless solution to repair peripheral nerves. The FDA’s De Novo marketing authorization acknowledges the novelty of the company’s platform and enables commercialization of the MIT spinout’s first product. It came after studies showing the platform helped patients regain full flexion and extension of their injured fingers or toes without pain.
Tissium’s polymers can work with a range of tissue types, from nerves to cardiovascular tissue and the abdominal wall, and the company is eager to apply its programmable platform to other areas.
“We really think this approval is just the beginning,” Tissium CEO Christophe Bancel says. “It was a critical step, and it wasn’t easy, but we knew if we could get the first one, it would begin a new phase for the company. Now it’s our responsibility to show this works with other applications and can benefit more patients.”
From lab to patients
Years before he co-founded Tissium, Jeff Karp was a postdoc in the lab of MIT Institute Professor Robert Langer, where he worked to develop elastic materials that were biodegradable and photocurable for a range of clinical applications. Karp went on to become an affiliate faculty member in the Harvard-MIT Program in Health Sciences and Technology; he is also a faculty member at Harvard Medical School and Brigham and Women’s Hospital. In 2008, Pereira joined Karp’s lab as a visiting PhD student through funding from the MIT Portugal Program, tuning the polymers’ thickness and water repellency to optimize how well the material attaches to wet tissue.
“Maria took this polymer platform and turned it into a fixation platform that could be used in many areas in medicine,” Karp recalls. “[The cardiac surgeon] Pedro del Nido at Boston Children’s Hospital had alerted us to this major problem of a birth defect that causes holes in the heart of newborns. There were no suitable solutions, so that was one of the applications we began working on that Maria led.”
Pereira and her collaborators went on to demonstrate they could use the biopolymers to seal holes in the hearts of rats and pigs without bleeding or complications. Bancel, a pharmaceutical industry veteran, was introduced to the technology when he met with Karp, Pereira, and Langer during a visit to Cambridge in 2012, and he spent the next few months speaking with surgeons.
“I spoke with about 15 surgeons from a range of fields about their challenges,” Bancel says. “I realized if the technology could work in these settings, it would address a big set of challenges. All of the surgeons were excited about how the material could impact their practice.”
Bancel worked with MIT’s Technology Licensing Office to take the biopolymer technology out of the lab, including patents from Karp’s original work in Langer’s lab. Pereira moved to Paris upon completing her PhD, and Tissium was officially founded in 2013 by Pereira, Bancel, Karp, Langer, and others.
“The MIT and Harvard ecosystems are at the core of our success,” Pereira says. “From the get-go, we tried to solve problems that would be meaningful for patients. We weren’t just doing research for the sake of doing research. We started in the cardiovascular space, but we quickly realized we wanted to create new standards for tissue repair and tissue fixation.”
After licensing the technology, Tissium had a lot of work to do to make it scalable commercially. The founders partnered with companies that specialize in synthesizing polymers and created a method to 3D print a casing for polymer-wrapped nerves.
“We quickly realized the product is a combination of the polymer and the accessories,” Bancel says. “It was about how surgeons used the product. We had to design the right accessories for the right procedures.”
The new system is sorely needed. A recent meta-analysis of nerve repairs using sutures found that only 54 percent of patients achieved highly meaningful recovery following surgery. By eliminating sutures, Tissium’s flexible polymer technology offers an atraumatic way to reconnect nerves. In a recent trial of 12 patients, all who completed follow-up regained full flexion and extension of their injured digits and reported no pain 12 months after surgery.
“The current standard of care is suboptimal,” Pereira says. “There are variabilities in the outcome, sutures can create trauma, tension, misalignment, and all that can impact patient outcomes, from sensation to motor function and overall quality of life.”
Trauma-free tissue repair
Today Tissium has six products in development, including one ongoing clinical trial in the hernia space and another set to begin soon for a cardiovascular application.
“Early on, we had the intuition that if this were to work in one application, it would be surprising if it didn’t work in many other applications,” Bancel says.
The company also believes its 3D-printed production process will make it easier to expand.
“Not only can this be used for tissue fixation broadly across medicine, but we can leverage the 3D printing method to make all kinds of implantable medical devices from the same polymeric platform,” Karp explains. “Our polymers are programmable, so we can program the degradation, the mechanical properties, and this could open up the door to other exciting breakthroughs in medical devices with new capabilities.”
Now Tissium’s team is encouraging people in the medical field to reach out if they think their platform could improve on the standard of care — and they’re mindful that the first approval is a milestone worth celebrating unto itself.
“It’s the best possible outcome for your research to generate not just a paper, but a treatment with potential to improve the standard of care along with patients’ lives,” Karp says. “It’s the dream, and it’s an incredible feeling to be able to celebrate this with all the collaborators that have been involved along the way.”
Langer adds, “I agree with Jeff. It’s wonderful to see the research we started at MIT reach the point of FDA approval and change peoples’ lives.”
TechEd Collab: Building Community in Arizona Around Tech Awareness
Earlier this year, EFF welcomed Technology Education Collaborative (TEC) into the Electronic Frontier Alliance (EFA). TEC empowers everyday people to become informed users of today's extraordinary technology, and helps people better understand the tech that surrounds them on a daily basis. TEC does this by hosting in-person, hands-on events, including right to repair workshops, privacy meetups, tech field trips, and demos. We got the chance to catch up with Connor Johnson, Chief Technology Officer of TEC, and speak with him about the work TEC is doing in the Greater Phoenix area:
Connor, tell us how Technology Education Collaborative got started, and about its mission.
TEC was started with the idea of creating a space where industry professionals, students, and the community at large could learn about technology together. We teamed up with Gateway Community College to build the Advanced Cyber Systems Lab. A lot of tech groups in Phoenix meet at varying locations, because they can’t afford or find a dedicated space. TEC hosts community technology-focused groups at the Advanced Cyber Systems Lab, so they can have the proper equipment to work on and collaborate on their projects.
Speaking of projects, let's talk about some of the main priorities of TEC: right to repair, privacy, and cybersecurity. Having the only right to repair hub in the greater Phoenix metro valley, what concerns do you see on the horizon?
One of our big concerns is that many companies have slowly shifted away from repairability toward convenience. We are thankful for the donations from iFixit that allow people to use tools they might not otherwise know they need or be able to afford. Community members and IT professionals have come to use our anti-static benches to fix everything from TVs to 3D printers. We are also starting to host a ‘Hardware Happy Hour’ so anyone can bring their hardware projects in and socialize with like-minded people.
How’s your privacy and cybersecurity work resonating with the community?
We have had a host of different speakers discuss the current state of privacy and how it can affect different individuals. It was also wonderful to have your Surveillance Litigation Director, Andrew Crocker, speak at our July edition of Privacy PIE. So many of the attendees were thrilled to be able to ask him questions and get clarification on current issues. Christina, CEO of TEC, has done a great job leading our Privacy PIE events and discussing the legal situation surrounding many privacy rights people take for granted. One of my favorite presentations was when we discussed privacy concerns with modern cars, where she touched on aspects like how the cameras are tied to car companies' systems and data collection.
TEC’s current goal is to focus on building a community that is not limited to cybersecurity alone. One problem we’ve noticed is that many groups focus on security but don’t branch out into other fields of tech. Security affects all aspects of technology, which is why TEC has been extending its efforts to other fields like hardware and programming. A deeper understanding of the fundamentals can help us build better systems from the ground up, rather than applying cybersecurity as an afterthought.
In the field of cybersecurity, we have been working on a project to build a small business network. The idea behind this initiative is to let small businesses set up their own network that provides a good layer of security. Many shops either can’t afford a security-hardened network or don’t have the technical know-how to set one up. We hope this open-source project will allow people to set up the network themselves, and give students a way to gain valuable work experience.
It’s awesome to hear of all the great things TEC is doing in Phoenix! How can people plug in and get engaged and involved?
TEC can always benefit from more volunteers or donations. Our goal is to build community, and we are happy to have anyone join us. All are welcome at the Advanced Cyber Systems Lab at Gateway Community College – Washington Campus, Monday through Thursday, 4 pm to 8 pm. Our website is www.techedcollab.org, and we’re on Facebook at www.facebook.com/techedcollab. People can also join our Discord server for some great discussions and updates on our upcoming events!
How government accountability and responsiveness affect tax payment
A fundamental problem for governments is getting citizens to comply with their laws and policies. They can’t monitor everyone and catch all the rule-breakers. “It’s a logistical impossibility,” says Lily L. Tsai, MIT’s Ford Professor of Political Science and the director and founder of the MIT Governance Lab.
Instead, governments need citizens to choose to follow the rules of their own accord. “As a government, you have to rely on them to voluntarily comply with the laws, policies, and regulations that are put into place,” Tsai says.
One particularly important thing governments need citizens to do is pay their taxes. In a paper in the October issue of the journal World Development, Tsai and her co-authors, including Minh Trinh ’22, a graduate of the Department of Political Science, look at different factors that might affect compliance with property tax laws in China. They found that participants in an in-person tax-paying experiment were more likely to pay their taxes if the government monitored and punished corruption among its officials.
“When people think that government authorities are motivated by the public good, have moral character, and have integrity, then the requests that those authorities make of citizens are more likely to seem legitimate, and so they’re more likely to pay their taxes,” Tsai says.
In China, only two cities, Chongqing and Shanghai, collect property taxes. Officials have been concerned that citizens might resist property taxes because homeownership is the main source of urban household wealth in China. Private homeownership accounts for 64 percent of household wealth in China, compared to only 29 percent in the United States.
Tsai and her co-authors wanted to test how governments might make people more willing to pay their property taxes. Researchers have theorized that citizens are more likely to comply with tax laws when they feel like they’re getting something in return from the government. The government can be responsive to citizens’ demands for public services, for example. Or the government can punish officials who are corrupt or perform poorly.
In the first part of the study, a survey of Chinese citizens, respondents expressed preferences for different hypothetical property tax policies. The results suggested that participants wanted the government to be responsive to their needs and to hold officials accountable. People preferred a policy that allowed for citizen input on the use of tax revenue over one that did not, and a policy that allowed for the sanctioning of corrupt officials garnered more support than a policy that did not.
Survey participants also preferred a lighter penalty for not paying their taxes over a harsher one, and they supported a tax exemption for first apartments. What interested the researchers was that policies allowing for government responsiveness and accountability received roughly the same support as these policies with economic benefits. “This is evidence to show that we should really pay attention to non-economic factors, because they can have similar magnitudes of impact on tax-paying behavior,” Tsai says.
For the second stage of the study, researchers recruited people for a lab experiment in Shanghai (one of the two cities that collects property taxes). Participants played a game on an iPad in which they chose repeatedly whether or not to pay property taxes. At the end of the game, they received an amount of real money that varied depending on how they and other participants played the game.
Participants were then randomly split into different groups. In one group, participants were given an opportunity to voice their preference for how their property tax revenue was used. Some were told the government incorporated their feedback, while others were told their preferences were not considered — in other words, participants learned whether or not the government was responsive to their needs. In another group, participants learned that a corrupt official had stolen money from property tax revenue. Some were told that the official had been caught and punished, while others were told the official got away with stealing.
The researchers measured whether game players’ willingness to pay property taxes changed after receiving this new information. They found that while the willingness of players who learned the government was responsive to their needs did not change significantly, players who learned the government punished corrupt officials paid their property taxes more frequently.
“It was kind of amazing to see that people care a lot about whether or not higher-level authorities are making sure that tax dollars are not being wasted through corruption,” Tsai says. She argues in her 2021 book, “When People Want Punishment: Retributive Justice and the Puzzle of Authoritarian Popularity,” that when authorities are willing to punish their own officials, it may signal to people that leaders have moral integrity and share the values of ordinary people, making them appear more legitimate.
While the researchers expected to see government responsiveness affect tax payment as well, Tsai says it’s not totally surprising that for people living in places without direct channels for citizen input, the opportunity to participate in the decision-making process in a lab setting might not resonate as strongly.
The findings don’t mean that government responsiveness isn’t important. But they suggest that even when there aren’t opportunities for citizens to make their voices heard, there are other ways for governments to appear legitimate and get people to comply with rules voluntarily.
As the strength of democratic institutions declines globally, scholars wonder whether perceptions of governments’ legitimacy will decline at the same time. “These findings suggest that maybe that’s not necessarily the case,” Tsai says.
School of Humanities, Arts, and Social Sciences welcomes 14 new faculty for 2025
Dean Agustín Rayo and the MIT School of Humanities, Arts, and Social Sciences (SHASS) recently welcomed 14 new professors to the MIT community. They arrive with diverse backgrounds and vast knowledge in their areas of research.
Naoki Egami joins MIT as an associate professor in the Department of Political Science. He is also a faculty affiliate of the Institute for Data, Systems, and Society. Egami specializes in political methodology and develops statistical methods for questions in political science and the social sciences. His current research programs focus on three areas: external validity and generalizability; machine learning and artificial intelligence for the social sciences; and causal inference with network and spatial data. His work has appeared in various academic journals in political science, statistics, and computer science, such as American Political Science Review, American Journal of Political Science, Journal of the American Statistical Association, Journal of the Royal Statistical Society (Series B), NeurIPS, and Science Advances. Before joining MIT, Egami was an assistant professor at Columbia University. He received a PhD from Princeton University (2020) and a BA from the University of Tokyo (2015).
Valentin Figueroa joins the Department of Political Science as an assistant professor. His research examines historical state building, ideological change, and scientific innovation, with a regional focus on Western Europe and Latin America. His current book project investigates the disestablishment of patrimonial administrations and the rise of bureaucratic states in early modern Europe. Before joining MIT, he was an assistant professor at the Pontificia Universidad Católica de Chile. Originally from Argentina, Figueroa holds a BA and an MA in political science from Universidad de San Andrés and Universidad Torcuato Di Tella, respectively, and a PhD in political science from Stanford University.
Bailey Flanigan is an assistant professor in the Department of Political Science, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. Her research combines tools from across these disciplines — including social choice theory, game theory, algorithms, statistics, and survey methods — to advance political methodology and strengthen public participation in democracy. She is specifically interested in sampling algorithms, opinion measurement/preference elicitation, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Before joining MIT, Flanigan was a postdoc at Harvard University’s Data Science Initiative. She earned her PhD in computer science from Carnegie Mellon University and her BS in bioengineering from the University of Wisconsin at Madison.
Rachel Fraser is an associate professor in the Department of Linguistics and Philosophy. Before coming to MIT, Fraser taught at Oxford University, where she also completed her graduate work in philosophy. She has interests in epistemology, language, feminism, aesthetics, and political philosophy. At present, her main project is a book manuscript on the epistemology of narrative.
Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His research focuses on how we ought to form beliefs and make decisions. He works in epistemology, decision theory, and ethics, including the ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics including collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford. He received his BA from Princeton University in 2006 and his PhD from MIT in 2012.
Rebekah Larsen is an assistant professor in the Comparative Media Studies/Writing program. A media sociologist with a PhD from Cambridge University, she uncovers and analyzes understudied media ecosystems, with special attention to sociotechnical change and power relations within these systems. Recent scholarly sites of inquiry include conservative talk radio stations in rural Utah (and ethnographic work in conservative spaces); the new global network of fact checkers funded by social media platform content moderation contracts; and search engine manipulation of journalists and activists around a controversial 2010s privacy regulation. Prior to MIT, Larsen held a Marie Curie grant at the University of Copenhagen and was a visiting fellow at the Information Society Project (Yale Law School). She maintains current affiliations as a faculty associate at the Berkman Klein Center (Harvard Law School) and a research associate at the Center for Governance and Human Rights (Cambridge University).
Pascal Le Boeuf joins the Music and Theater Arts Section as an assistant professor. Described as “sleek, new,” “hyper-fluent,” and “a composer that rocks” by The New York Times, he is a Grammy Award-winning composer, jazz pianist, and producer whose works range from improvised music to hybridizing notation-based chamber music with production-based technology. Recent projects include collaborations with Akropolis Reed Quintet, Christian Euman, Jamie Lidell, Alarm Will Sound, Ji Hye Jung, Tasha Warren, Dave Eggar, Barbora Kolarova and Arx Duo, JACK Quartet, Friction Quartet, Hub New Music, Todd Reynolds, Sara Caswell, Jessica Meyer, Nick Photinos, Ian Chang, Dayna Stephens, Linda May Han Oh, Justin Brown, and Le Boeuf Brothers. He received a 2025 Grammy Award for Best Instrumental Composition, a 2024 Barlow Commission, a 2023 Guggenheim Fellowship, and a 2020 Copland House Residency Award. Le Boeuf is a Harold W. Dodds Honorific Fellow and PhD candidate in music composition at Princeton University.
Becca Lewis is an assistant professor in the Comparative Media Studies/Writing program. An interdisciplinary scholar who examines the rise of right-wing politics in Silicon Valley and online, she holds a PhD in communication theory and research from Stanford University and an MS in social science from the University of Oxford. Her work has been published in academic journals including New Media and Society, Social Media and Society, and American Behavioral Scientist, and in news outlets such as The Guardian and Business Insider. She previously worked as a researcher at the Data and Society Research Institute, where she published the organization’s flagship reports on media manipulation, disinformation, and right-wing digital media. In 2022, she served as an expert witness in the defamation lawsuit brought against Alex Jones by the parents of a Sandy Hook shooting victim.
Ben Lindquist is an assistant professor in the History Section, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His work observes the historical ways that computing has circulated with ideas of religion, emotion, and divergent thinking. “The Feeling Machine,” his first book, under contract with the University of Chicago Press, follows the history of synthetic speech to ask how emotion became a subject of computer science. Before coming to MIT, he was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.
Bar Luzon joins the Department of Linguistics and Philosophy as an assistant professor. Luzon completed her BA in philosophy in 2017 at the Hebrew University of Jerusalem, and her PhD in philosophy in 2024 at New York University. Before coming to MIT, she was a Mellon Postdoctoral Fellow in the Philosophy Department at Rutgers University. She works in the philosophy of mind and language, metaphysics, and epistemology. Her research focuses on the nature of representation and the structure of reality. In the course of pursuing these issues, she writes about mental content, metaphysical determination, the vehicles of mental representation, and the connection between truth and different epistemic notions.
Mark Rau is an assistant professor in the Music and Theater Arts Section, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. He is involved in developing graduate programming focused on music technology. He is interested in musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis, among other areas. A lifelong musician, he focuses his research on musical instruments and creative audio effects. Before joining MIT, he was a postdoc at McGill University and a lecturer at Stanford University. He completed his PhD at Stanford’s Center for Computer Research in Music and Acoustics. He also holds an MA in music, science, and technology from Stanford, as well as a BS in physics and a BMus in jazz from McGill University.
Viola Schmitt is an associate professor in the Department of Linguistics and Philosophy. She is a linguist with a special interest in semantics. Much of her work focuses on trying to understand general constraints on human language meaning; that is, the principles regulating which meanings can be expressed by human languages and how languages can package meaning. Variants of this question were also central to grants she received from the Austrian and German research foundations. She earned her PhD in linguistics from the University of Vienna and worked as a postdoc and/or lecturer at the Universities of Vienna, Graz, Göttingen, and at the University of California at Los Angeles. Her most recent position was as a junior professor at the Humboldt University Berlin.
Angela Saini joins the Comparative Media Studies/Writing program as an assistant professor. A science journalist and author, she presents television and radio documentaries for the BBC and her writing has appeared in National Geographic, Wired, Science, and Foreign Policy. She has published four books, which have together been translated into 18 languages. Her bestselling 2019 book, “Superior: The Return of Race Science,” was a finalist for the LA Times Book Prize, and her latest, “The Patriarchs: The Origins of Inequality,” was a finalist for the Orwell Prize for Political Writing. She has an MEng from the University of Oxford, and was made an honorary fellow of her alma mater, Keble College, in 2023.
Paris Smaragdis SM ’97, PhD ’01 joins the Music and Theater Arts Section as a professor, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. He holds a BMus (cum laude ’95) from Berklee College of Music. His research lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. He has been a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor in the Computer Science Department at the University of Illinois Urbana-Champaign, where he spearheaded the design of the CS+Music program and served as an associate director of the School of Computer and Data Science.
How the brain distinguishes oozing fluids from solid objects
Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.
In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.
This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.
“When you’re looking at some fluid or gooey stuff, you engage with it in different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.
MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.
Stuff vs. things
Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.
Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”
“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.
These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.
To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.
The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.
“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”
Roland Fleming, a professor of experimental psychology at Justus Liebig University Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”
“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.
Physical interactions
The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.
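As a rough illustration of that analogy, here is a minimal Python sketch of how a game engine might organize the two kinds of data: a rigid “thing” as a fixed mesh of vertices, and fluid “stuff” as a bag of particles free to rearrange. This is purely illustrative (the class names and fields are invented for this example), not a description of any real engine or of the brain’s actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class RigidBody:
    """A 'thing': its vertices keep fixed positions relative to one another."""
    vertices: list                        # mesh vertices defining the fixed shape
    position: tuple = (0.0, 0.0, 0.0)     # the whole body moves as one unit
    velocity: tuple = (0.0, 0.0, 0.0)

@dataclass
class Fluid:
    """'Stuff': no fixed shape, just particles that can rearrange freely."""
    particles: list = field(default_factory=list)   # each particle has its own state

# A bouncing ball versus a cascade of water, in this toy representation:
ball = RigidBody(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
water = Fluid(particles=[{"pos": (0.01 * i, 0.0, 0.0), "vel": (0.0, 0.0, 0.0)}
                         for i in range(1000)])
```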
“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.
The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.
They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.
The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.
Cheating on Quantum Computing Benchmarks
Peter Gutmann and Stephan Neuhaus have a new paper—I think it’s new, even though it has a March 2025 date—that makes the argument that we shouldn’t trust any of the quantum factorization benchmarks, because everyone has been cooking the books:
Similarly, quantum factorisation is performed using sleight-of-hand numbers that have been selected to make them very easy to factorise using a physics experiment and, by extension, a VIC-20, an abacus, and a dog. A standard technique is to ensure that the factors differ by only a few bits that can then be found using a simple search-based approach that has nothing to do with factorisation…. Note that such a value would never be encountered in the real world since the RSA key generation process typically requires that |p-q| > 100 or more bits [9]. As one analysis puts it, “Instead of waiting for the hardware to improve by yet further orders of magnitude, researchers began inventing better and better tricks for factoring numbers by exploiting their hidden structure” [10]...
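The excerpt does not name a specific method, but one classic search-based shortcut of the kind it describes is Fermat’s factorization, which finds the factors of n almost immediately when they are close together (small |p-q|), the very property the authors say real RSA key generation is designed to rule out. A toy Python sketch, written for this post rather than taken from the paper:

```python
from math import isqrt

def fermat_factor(n):
    """Toy Fermat factorization: fast only when the two factors of n are close
    together, i.e. |p - q| is small -- the kind of 'sleight-of-hand' number the
    paper argues factorization demos rely on."""
    a = isqrt(n)
    if a * a < n:                    # start the search at ceil(sqrt(n))
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:              # n = a^2 - b^2 = (a - b)(a + b)
            return a - b, a + b
        a += 1

# Two primes that differ by only 30 (a handful of bits), so the search succeeds
# on the very first candidate; real RSA key generation avoids such pairs.
p, q = 1000003, 1000033
print(fermat_factor(p * q))          # -> (1000003, 1000033)
```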
EPA used an unexpected argument to kill the endangerment finding
DOE reframes climate consensus as a debate
Trump admin eliminates offshore wind project areas
AI could strengthen the credibility of carbon markets, report says
Promise to triple global clean energy is off track
Spain’s top court rules PM Sánchez not responsible for Valencia floods
Legal risks seen accelerating climate adaptation plans
Google sees Asia as a ‘challenging’ region to decarbonize
Satellite launched by India, NASA will track shifts in land and ice
Shifting hotspot of tropical cyclone clusters in a warming climate
Nature Climate Change, Published online: 31 July 2025; doi:10.1038/s41558-025-02397-9
Tropical cyclones can occur concurrently in the same basins in clusters, potentially resulting in greater damage. Here the authors show that global warming causes a shift in hotspots of such clusters towards the North Atlantic.
Mapping cells in time and space: New tool reveals a detailed history of tumor growth
All life is connected in a vast family tree. Every organism exists in relationship to its ancestors, descendants, and cousins, and the path between any two individuals can be traced. The same is true of cells within organisms — each of the trillions of cells in the human body is produced through successive divisions from a fertilized egg, and can all be related to one another through a cellular family tree. In simpler organisms, such as the worm C. elegans, this cellular family tree has been fully mapped, but the cellular family tree of a human is many times larger and more complex.
In the past, MIT professor and Whitehead Institute for Biomedical Research member Jonathan Weissman and other researchers developed lineage tracing methods to track and reconstruct the family trees of cell divisions in model organisms in order to understand more about the relationships between cells and how they assemble into tissues, organs, and — in some cases — tumors. These methods could help to answer many questions about how organisms develop and diseases like cancer are initiated and progress.
Now, Weissman and colleagues have developed an advanced lineage tracing tool that not only captures an accurate family tree of cell divisions, but also combines that with spatial information: identifying where each cell ends up within a tissue. The researchers used their tool, PEtracer, to observe the growth of metastatic tumors in mice. Combining lineage tracing and spatial data provided the researchers with a detailed view of how elements intrinsic to the cancer cells and from their environments influenced tumor growth, as Weissman and postdocs in his lab Luke Koblan, Kathryn Yost, and Pu Zheng, and graduate student William Colgan share in a paper published in the journal Science on July 24.
“Developing this tool required combining diverse skill sets through the sort of ambitious interdisciplinary collaboration that’s only possible at a place like Whitehead Institute,” says Weissman, who is also a Howard Hughes Medical Institute investigator. “Luke came in with an expertise in genetic engineering, Pu in imaging, Katie in cancer biology, and William in computation, but the real key to their success was their ability to work together to build PEtracer.”
“Understanding how cells move in time and space is an important way to look at biology, and here we were able to see both of those things in high resolution. The idea is that by understanding both a cell’s past and where it ends up, you can see how different factors throughout its life influenced its behaviors. In this study, we use these approaches to look at tumor growth, though in principle we can now begin to apply these tools to study other biology of interest, like embryonic development,” Koblan says.
Designing a tool to track cells in space and time
PEtracer tracks cells’ lineages by repeatedly adding short, predetermined codes to the DNA of cells over time. Each piece of code, called a lineage tracing mark, is made up of five bases, the building blocks of DNA. These marks are inserted using a gene editing technology called prime editing, which directly rewrites stretches of DNA with minimal undesired byproducts. Over time, each cell acquires more lineage tracing marks, while also maintaining the marks of its ancestors. The researchers can then compare cells’ combinations of marks to figure out relationships and reconstruct the family tree.
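To get an intuition for how accumulated marks can be turned back into a family tree, here is a toy Python sketch; it illustrates the general idea, not the actual PEtracer reconstruction pipeline. It assumes, for simplicity, that each cell carries the ordered list of five-base marks inherited from its ancestors, so cells can be grouped by shared prefixes; the cell names and mark sequences are made up.

```python
from collections import defaultdict

def build_tree(cells, depth=0):
    """cells: dict mapping cell name -> ordered list of lineage marks (oldest first).
    Cells that share a prefix of marks share the corresponding ancestors."""
    groups = defaultdict(dict)
    leaves = []
    for name, marks in cells.items():
        if len(marks) == depth:          # no marks beyond this depth: place the cell here
            leaves.append(name)
        else:
            groups[marks[depth]][name] = marks
    return {"cells": leaves,
            "children": {mark: build_tree(sub, depth + 1) for mark, sub in groups.items()}}

cells = {
    "cell_A": ["GATTC", "CCGTA"],
    "cell_B": ["GATTC", "CCGTA", "TTGAC"],   # shares the CCGTA branch with cell_A
    "cell_C": ["GATTC", "AGGTT"],            # split off one division earlier
}
print(build_tree(cells))
```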
“We used computational modeling to design the tool from first principles, to make sure that it was highly accurate, and compatible with imaging technology. We ran many simulations to land on the optimal parameters for a new lineage tracing tool, and then engineered our system to fit those parameters,” Colgan says.
When the tissue — in this case, a tumor growing in the lung of a mouse — had sufficiently grown, the researchers collected these tissues and used advanced imaging approaches to look at each cell’s lineage relationship to other cells via the lineage tracing marks, along with its spatial position within the imaged tissue and its identity (as determined by the levels of different RNAs expressed in each cell). PEtracer is compatible with both imaging approaches and sequencing methods that capture genetic information from single cells.
“Making it possible to collect and analyze all of this data from the imaging was a large challenge,” Zheng says. “What’s particularly exciting to me is not just that we were able to collect terabytes of data, but that we designed the project to collect data that we knew we could use to answer important questions and drive biological discovery.”
Reconstructing the history of a tumor
Combining the lineage tracing, gene expression, and spatial data let the researchers understand how the tumor grew. They could tell how closely related neighboring cells are and compare their traits. Using this approach, the researchers found that the tumors they were analyzing were made up of four distinct modules, or neighborhoods, of cells.
The tumor cells closest to the lung, the most nutrient-dense region, were the most fit, meaning their lineage history indicated the highest rate of cell division over time. Fitness in cancer cells tends to correlate to how aggressively tumors will grow.
The cells at the “leading edge” of the tumor, the far side from the lung, were more diverse and not as fit. Below the leading edge was a low-oxygen neighborhood of cells that might once have been leading edge cells, now trapped in a less-desirable spot. Between these cells and the lung-adjacent cells was the tumor core, a region with both living and dead cells, as well as cellular debris.
The researchers found that cancer cells across the family tree were equally likely to end up in most of the regions, with the exception of the lung-adjacent region, where a few branches of the family tree dominated. This suggests that the cancer cells’ differing traits were heavily influenced by their environments, or the conditions in their local neighborhoods, rather than their family history. Further evidence of this point was that expression of certain fitness-related genes, such as Fgf1/Fgfbp1, correlated to a cell’s location, rather than its ancestry. However, lung-adjacent cells also had inherited traits that gave them an edge, including expression of the fitness-related gene Cldn4 — showing that family history influenced outcomes as well.
These findings demonstrate how cancer growth is influenced both by factors intrinsic to certain lineages of cancer cells and by environmental factors that shape the behavior of cancer cells exposed to them.
“By looking at so many dimensions of the tumor in concert, we could gain insights that would not have been possible with a more limited view,” Yost says. “Being able to characterize different populations of cells within a tumor will enable researchers to develop therapies that target the most aggressive populations more effectively.”
“Now that we’ve done the hard work of designing the tool, we’re excited to apply it to look at all sorts of questions in health and disease, in embryonic development, and across other model species, with an eye toward understanding important problems in human health,” Koblan says. “The data we collect will also be useful for training AI models of cellular behavior. We’re excited to share this technology with other researchers and see what we all can discover.”
Creeping crystals: Scientists observe “salt creep” at the single-crystal scale
Salt creeping, a phenomenon that occurs in both natural and industrial processes, describes the collection and migration of salt crystals from evaporating solutions onto surfaces. Once they start collecting, the crystals climb, spreading away from the solution. This creeping behavior, according to researchers, can cause damage or be harnessed for good, depending on the context. New research published June 30 in the journal Langmuir is the first to show salt creeping at a single-crystal scale and beneath a liquid’s meniscus.
“The work not only explains how salt creeping begins, but why it begins and when it does,” says Joseph Phelim Mooney, a postdoc in the MIT Device Research Laboratory and one of the authors of the new study. “We hope this level of insight helps others, whether they’re tackling water scarcity, preserving ancient murals, or designing longer-lasting infrastructure.”
The work is the first to directly visualize how salt crystals grow and interact with surfaces underneath a liquid meniscus, something that’s been theorized for decades but never actually imaged or confirmed at this level, and it offers fundamental insights that could impact a wide range of fields — from mineral extraction and desalination to anti-fouling coatings, membrane design for separation science, and even art conservation, where salt damage is a major threat to heritage materials.
In civil engineering applications, for example, the research can help explain why and when salt crystals start growing across surfaces like concrete, stone, or building materials. “These crystals can exert pressure and cause cracking or flaking, reducing the long-term durability of structures,” says Mooney. “By pinpointing the moment when salt begins to creep, engineers can better design protective coatings or drainage systems to prevent this form of degradation.”
For a field like art conservation, where salt can be devastating to murals, frescoes, and ancient artifacts, often forming beneath the surface before visible damage appears, the work can help identify the exact conditions that cause salt to start moving and spreading, allowing conservators to act earlier and more precisely to protect heritage objects.
The work began during Mooney’s Marie Curie Fellowship at MIT. “I was focused on improving desalination systems and quickly ran into [salt buildup as] a major roadblock,” he says. “[Salt] was everywhere, coating surfaces, clogging flow paths, and undermining the efficiency of our designs. I realized we didn’t fully understand how or why salt starts creeping across surfaces in the first place.”
That experience led Mooney to team up with colleagues to dig into the fundamentals of salt crystallization at the air–liquid–solid interface. “We wanted to zoom in, to really see the moment salt begins to move, so we turned to in situ X-ray microscopy,” he says. “What we found gave us a whole new way to think about surface fouling, material degradation, and controlled crystallization.”
The new research may, in fact, allow better control of the crystallization processes required to remove salt from water in zero-liquid discharge systems. It can also be used to explain how and when scaling happens on equipment surfaces, and it may support emerging climate technologies that depend on smart control of evaporation and crystallization.
The work also supports mineral and salt extraction applications, where salt creeping can be both a bottleneck and an opportunity. In these applications, Mooney says, “by understanding the precise physics of salt formation at surfaces, operators can optimize crystal growth, improving recovery rates and reducing material losses.”
Mooney’s co-authors on the paper include fellow MIT Device Lab researchers Omer Refet Caylan, Bachir El Fil (now an associate professor at Georgia Tech), and Lenan Zhang (now an associate professor at Cornell University); Jeff Punch and Vanessa Egan of the University of Limerick; and Jintong Gao of Cornell.
The research was conducted using in situ X-ray microscopy. Mooney says the team’s big realization moment occurred when they were able to observe a single salt crystal pinning itself to the surface, which kicked off a cascading chain reaction of growth.
“People had speculated about this, but we captured it on X-ray for the first time. It felt like watching the microscopic moment where everything tips, the ignition points of a self-propagating process,” says Mooney. “Even more surprising was what followed: The salt crystal didn’t just grow passively to fill the available space. It pierced through the liquid-air interface and reshaped the meniscus itself, setting up the perfect conditions for the next crystal. That subtle, recursive mechanism had never been visually documented before — and seeing it play out in real time completely changed how we thought about salt crystallization.”
The paper, “In Situ X-ray Microscopy Unraveling the Onset of Salt Creeping at a Single-Crystal Level,” is available now in the journal Langmuir. The research was conducted at MIT.nano.
👮 Amazon Ring Is Back in the Mass Surveillance Game | EFFector 37.9
EFF is gearing up to beat the heat in Las Vegas for the summer security conferences! Before we make our journey to the Strip, we figured let's get y'all up-to-speed with a new edition of EFFector.
This time we're covering an illegal mass surveillance scheme by the Sacramento Municipal Utility District, calling out dating apps for using intimate data—like sexual preferences or identity—to train AI, and explaining why we're backing the Wikimedia Foundation in its challenge to the UK’s Online Safety Act.
Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work they're doing. This time, EFF Senior Policy Analyst Matthew Guariglia explains how Amazon Ring is cashing in on the rising tide of techno-authoritarianism. Listen now on YouTube or the Internet Archive.
EFFECTOR 37.9 - Amazon Ring Is Back in the Mass Surveillance Game
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.