Feed aggregator
How cement “breathes in” and stores millions of tons of CO₂ a year
The world’s most common construction material has a secret. Cement, the “glue” that holds concrete together, gradually “breathes in” and stores millions of tons of carbon dioxide (CO2) from the air over the lifetimes of buildings and infrastructure.
A new study from the MIT Concrete Sustainability Hub quantifies this process, known as carbon uptake, at a national scale for the first time. Using a novel approach, the research team found that the cement in U.S. buildings and infrastructure sequesters over 6.5 million metric tons of CO2 annually. This corresponds to roughly 13 percent of the process emissions — the CO2 released by the underlying chemical reaction — in U.S. cement manufacturing. Mexico’s building stock sequesters about 5 million tons a year.
But how did the team come up with those numbers?
Scientists have known how carbon uptake works for decades. CO2 enters concrete or mortar — the mixture that glues together blocks, brick, and stones — through tiny pores, reacts with the calcium-rich products in cement, and becomes locked into a stable mineral called calcium carbonate, or limestone.
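In simplified form, the dominant carbonation reaction — CO2 reacting with calcium hydroxide in hydrated cement to form calcium carbonate — can be written as:

```latex
\mathrm{CO_2} + \mathrm{Ca(OH)_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
```

In practice other calcium-bearing hydration products also carbonate, but this reaction captures the basic chemistry of uptake.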
The chemistry is well-known, but calculating the magnitude of this at scale is not. A concrete highway in Dallas sequesters CO2 differently than Mexico City apartments made from concrete masonry units (CMUs), also called concrete blocks or, colloquially, cinder blocks. And a foundation slab buried under the snow in Fairbanks, Alaska, “breathes in” CO2 at a different pace entirely.
As Hessam AzariJafari, lead author and research scientist in the MIT Department of Civil and Environmental Engineering, explains, “Carbon uptake is very sensitive to context. Four major factors drive it: the type of cement used, the product we make with it — concrete, CMUs, or mortar — the geometry of the structure, and the climate and conditions it’s exposed to. Even within the same structure, uptake can vary five-fold between different elements.”
As no two structures sequester CO2 in the same way, estimating uptake nationwide would normally require simulating an array of cement-based elements: slabs, walls, beams, columns, pavements, and more. On top of that, each of those has its own age, geometry, mixture, and exposure condition to account for.
Seeing that this approach would be like trying to count every grain of sand on a beach, the team took a different route. They developed hundreds of archetypes, typical designs that could stand in for different buildings and pieces of infrastructure. It’s a bit like measuring the beach instead by mapping out its shape, depth, and shoreline to estimate how much sand usually sits in a given spot.
With these archetypes in hand, the team modeled how each one sequesters CO2 in different environments and how common each is across every state in the United States and Mexico. In this way, they could estimate not just how much CO2 structures sequester, but why those numbers differ.
Two factors stood out. The first was the “construction trend,” or how the amount of new construction had changed over the previous five years. Because it reflects how quickly cement products are being added to the building stock, it shapes how much cement each state consumes and, therefore, how much of that cement is actively carbonating. The second was the ratio of mortar to concrete, since porous mortars sequester CO2 an order of magnitude faster than denser concrete.
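The archetype approach described above amounts to a weighted aggregation: each archetype gets an uptake rate for its climate and exposure, and regional totals come from multiplying rates by how common each archetype is. The sketch below illustrates that bookkeeping only; every archetype name, rate, and stock count is a hypothetical placeholder, not a value from the study.

```python
# Illustrative sketch of archetype-based CO2 uptake aggregation.
# All rates and stock counts below are made-up placeholders.

# Annual uptake per unit of each (archetype, climate) pair, in tons of CO2.
UPTAKE_RATES = {
    ("slab", "humid"): 0.8,
    ("slab", "arid"): 0.5,
    ("cmu_wall", "humid"): 2.0,   # porous mortar/CMU carbonates faster
    ("cmu_wall", "arid"): 1.5,
}

def regional_uptake(stock):
    """Sum uptake over a regional stock: {(archetype, climate): unit count}."""
    return sum(UPTAKE_RATES[key] * count for key, count in stock.items())

# Hypothetical regional inventory of archetypes.
texas_stock = {("slab", "arid"): 1000, ("cmu_wall", "arid"): 200}
print(regional_uptake(texas_stock))  # 0.5*1000 + 1.5*200 = 800.0
```

Scaling this pattern to hundreds of archetypes, each with its own age, mixture, and exposure profile, is what lets the team estimate uptake without simulating every individual structure.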
In states where mortar use was higher, the fraction of CO2 uptake relative to process emissions was noticeably greater. “We observed something unique about Mexico: Despite using half the cement that the U.S. does, the country has three-quarters of the uptake,” notes AzariJafari. “This is because Mexico makes more use of mortars, lower-strength concrete, and bagged cement mixed on-site. These practices are why uptake offsets about a quarter of their cement manufacturing emissions.”
While care must be taken for structural elements that use steel reinforcement, as uptake can accelerate corrosion, it’s possible to enhance the uptake of many elements without negative impacts.
Randolph Kirchain, director of the MIT Concrete Sustainability Hub, principal research scientist in the MIT Materials Research Laboratory, and the senior author of this study, explains: “For instance, increasing the amount of surface area exposed to air accelerates uptake and can be achieved by foregoing painting or tiling, or choosing designs like waffle slabs with a higher surface area-to-volume ratio. Additionally, avoiding unnecessarily stronger, less-porous concrete mixtures than required would speed up uptake while using less cement.”
“There is a real opportunity to refine how carbon uptake from cement is represented in national inventories,” AzariJafari comments. “The buildings around us and the concrete beneath our feet are constantly ‘breathing in’ millions of tons of CO2. Nevertheless, some of the simplified values in widely used reporting frameworks can lead to higher estimates than what we observe empirically. Integrating updated science into international inventories and guidelines such as the Intergovernmental Panel on Climate Change (IPCC) would help ensure that reported numbers reflect the material and temporal realities of the sector.”
By offering the first rigorous, bottom-up estimation of carbon uptake at a national scale, the team’s work provides a more representative picture of cement’s environmental impact. As we work to decarbonize the built environment, understanding what our structures are already doing in the background may be just as important as the innovations we pursue moving forward. The approach developed by MIT researchers could be extended to other countries by combining global building-stock databases with national cement-production statistics. It could also inform the design of structures that safely maximize uptake.
The findings were published Dec. 15 in the Proceedings of the National Academy of Sciences. Joining AzariJafari and Kirchain on the paper are MIT researchers Elizabeth Moore of the Department of Materials Science and Engineering and the MIT Climate Project and former postdocs Ipek Bensu Manav SM ’21, PhD ’24 and Motahareh Rahimi, along with Bruno Huet and Christophe Levy from the Holcim Innovation Center in France.
🪪 Age Verification Is Coming for the Internet | EFFector 37.18
The final EFFector of 2025 is here! Just in time to keep you up to date on the latest happenings in the fight for privacy and free speech online.
In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.
Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.18 - 🪪 AGE VERIFICATION IS COMING FOR THE INTERNET
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Chinese Surveillance and AI
New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:
China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders...
Defense bill directs GAO to probe tick conspiracy promoted by RFK Jr.
Homeowners drop flood insurance as FEMA rates rise
3 reasons Trump’s tanker seizure hasn’t spiked oil prices
Wright says Congress has momentum for permitting overhaul
Emissions compliance costs soar as Washington seeks major cuts
‘Climate Superfund Act’ emerges for last-minute lame-duck action in New Jersey Senate
Trump admin sides with fossil fuel industry in Supreme Court case
Political battles swirl over the fate of Europe’s car industry
VW’s $3.5B gamble: Can it win back share in the competitive Chinese market?
Finnish refiner Neste says it won’t meet 2035 oil exit goal
Torrential rains, flooding kill 37 in Moroccan city of Safi
A new immunotherapy approach could work for many types of cancer
Researchers at MIT and Stanford University have developed a new way to stimulate the immune system to attack tumor cells, using a strategy that could make cancer immunotherapy work for many more patients.
The key to their approach is reversing a “brake” that cancer cells engage to prevent immune cells from launching an attack. This brake is controlled by sugar molecules known as glycans that are found on the surface of cancer cells.
By blocking those glycans with molecules called lectins, the researchers showed they could dramatically boost the immune system’s response to cancer cells. To achieve this, they created multifunctional molecules known as AbLecs, which combine a lectin with a tumor-targeting antibody.
“We created a new kind of protein therapeutic that can block glycan-based immune checkpoints and boost anti-cancer immune responses,” says Jessica Stark, the Underwood-Prescott Career Development Professor in the MIT departments of Biological Engineering and Chemical Engineering. “Because glycans are known to restrain the immune response to cancer in multiple tumor types, we suspect our molecules could offer new and potentially more effective treatment options for many cancer patients.”
Stark, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, is the lead author of the paper. Carolyn Bertozzi, a professor of chemistry at Stanford and director of the Sarafan ChEM-H Institute, is the senior author of the study, which appears today in Nature Biotechnology.
Releasing the brakes
Training the immune system to recognize and destroy tumor cells is a promising approach to treating many types of cancer. One class of immunotherapy drugs known as checkpoint inhibitors stimulate immune cells by blocking an interaction between the proteins PD-1 and PD-L1. This removes a brake that tumor cells use to prevent immune cells like T cells from killing cancer cells.
Drugs targeting the PD-1/PD-L1 checkpoint have been approved to treat several kinds of cancer. For some patients, checkpoint inhibitors can lead to long-lasting remission, but for many others, they don’t work at all.
In hopes of generating immune responses in a greater number of patients, researchers are now working on ways to target other immunosuppressive interactions between cancer cells and immune cells. One such interaction occurs between glycans on tumor cells and receptors found on immune cells.
Glycans are found on nearly all living cells, but tumor cells often express glycans that are not found on healthy cells, including glycans that contain a monosaccharide called sialic acid. When sialic acids bind to lectin receptors located on immune cells, they switch on an immunosuppressive pathway in those cells. These lectins that bind to sialic acid are known as Siglecs.
“When Siglecs on immune cells bind to sialic acids on cancer cells, it puts the brakes on the immune response. It prevents that immune cell from becoming activated to attack and destroy the cancer cell, just like what happens when PD-1 binds to PD-L1,” Stark says.
Currently, there aren’t any approved therapies that target this Siglec-sialic acid interaction, despite a number of drug development approaches that have been tried. For example, researchers have tried to develop lectins that could bind to sialic acids and prevent them from interacting with immune cells, but so far, this approach hasn’t worked well because lectins don’t bind strongly enough to accumulate on the cancer cell surface in large numbers.
To overcome that, Stark and her colleagues developed a way to deliver larger quantities of lectins by attaching them to antibodies that target cancer cells. Once there, the lectins can bind to sialic acid, preventing sialic acid from interacting with Siglec receptors on immune cells. This lifts the brakes off the immune response, allowing immune cells such as macrophages and natural killer (NK) cells to launch an attack on the tumor.
“This lectin binding domain typically has relatively low affinity, so you can’t use it by itself as a therapeutic. But, when the lectin domain is linked to a high-affinity antibody, you can get it to the cancer cell surface where it can bind and block sialic acids,” Stark says.
A modular system
In this study, the researchers designed an AbLec based on the antibody trastuzumab, which binds to HER2 and is approved as a cancer therapy to treat breast, stomach, and colorectal cancers. To form the AbLec, they replaced one arm of the antibody with a lectin, either Siglec-7 or Siglec-9.
Tests using cells grown in the lab showed that this AbLec rewired immune cells to attack and destroy cancer cells.
The researchers then tested their AbLecs in a mouse model that was engineered to express human Siglec receptors and antibody receptors. These mice were then injected with cancer cells that formed metastases in the lungs. When treated with the AbLec, these mice showed fewer lung metastases than mice treated with trastuzumab alone.
The researchers also showed that they could swap in other tumor-specific antibodies, such as rituximab, which targets CD20, or cetuximab, which targets EGFR. They could also swap in lectins that target other glycans involved in immunosuppression, or antibodies that target checkpoint proteins such as PD-1.
“AbLecs are really plug-and-play. They’re modular,” Stark says. “You can imagine swapping out different decoy receptor domains to target different members of the lectin receptor family, and you can also swap out the antibody arm. This is important because different cancer types express different antigens, which you can address by changing the antibody target.”
Stark, Bertozzi, and others have started a company called Valora Therapeutics, which is now working on developing lead AbLec candidates. They hope to begin clinical trials in the next two to three years.
The research was funded, in part, by a Burroughs Wellcome Fund Career Award at the Scientific Interface, a Society for Immunotherapy of Cancer Steven A. Rosenberg Scholar Award, a V Foundation V Scholar Grant, the National Cancer Institute, the National Institute of General Medical Sciences, a Merck Discovery Biologics SEEDS grant, an American Cancer Society Postdoctoral Fellowship, and a Sarafan ChEM-H Postdocs at the Interface seed grant.
“Robot, make me a chair”
Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail they don’t lend themselves to brainstorming or rapid prototyping.
In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words.
Their system uses a generative AI model to build a 3D representation of an object’s geometry based on the user’s prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object’s function and geometry.
The system can automatically build the object from a set of prefabricated parts using robotic assembly. It can also iterate on the design based on feedback from the user.
The researchers used this end-to-end system to fabricate furniture, including chairs and shelves, from two types of premade components. The components can be disassembled and reassembled at will, reducing the amount of waste generated through the fabrication process.
They evaluated these designs through a user study and found that more than 90 percent of participants preferred the objects made by their AI-driven system over those produced by alternative approaches.
While this work is an initial demonstration, the framework could be especially useful for rapid prototyping complex objects like aerospace components and architectural objects. In the longer term, it could be used in homes to fabricate furniture or other objects locally, without the need to have bulky products shipped from a central facility.
“Sooner or later, we want to be able to communicate and talk to a robot and AI system the same way we talk to each other to make things together. Our system is a first step toward enabling that future,” says lead author Alex Kyaw, a graduate student in the MIT departments of Electrical Engineering and Computer Science (EECS) and Architecture.
Kyaw is joined on the paper by Richa Gupta, an MIT architecture graduate student; Faez Ahmed, associate professor of mechanical engineering; Lawrence Sass, professor and chair of the Computation Group in the Department of Architecture; senior author Randall Davis, an EECS professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as others at Google DeepMind and Autodesk Research. The paper was recently presented at the Conference on Neural Information Processing Systems.
Generating a multicomponent design
While generative AI models are good at generating 3D representations, known as meshes, from text prompts, most do not produce uniform representations of an object’s geometry that have the component-level details needed for robotic assembly.
Separating these meshes into components is challenging for a model because assigning components depends on the geometry and functionality of the object and its parts.
The researchers tackled these challenges using a vision-language model (VLM), a powerful generative AI model that has been pre-trained to understand images and text. They task the VLM with figuring out how two types of prefabricated parts, structural components and panel components, should fit together to form an object.
“There are many ways we can put panels on a physical object, but the robot needs to see the geometry and reason over that geometry to make a decision about it. By serving as both the eyes and brain of the robot, the VLM enables the robot to do this,” Kyaw says.
A user prompts the system with text, perhaps by typing “make me a chair,” and gives it an AI-generated image of a chair to start.
Then, the VLM reasons about the chair and determines where panel components go on top of structural components, based on the functionality of many example objects it has seen before. For instance, the model can determine that the seat and backrest should have panels to have surfaces for someone sitting and leaning on the chair.
It outputs this information as text, such as “seat” or “backrest.” Each surface of the chair is then labeled with numbers, and the information is fed back to the VLM.
Then the VLM chooses the labels that correspond to the geometric parts of the chair that should receive panels on the 3D mesh to complete the design.
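The label-then-select loop described above can be sketched in a few lines. In this sketch, `vlm_choose_panels` is a hard-coded stand-in for the actual vision-language model query, and the surface names are hypothetical; the real system reasons over rendered images of the mesh rather than strings.

```python
# Hypothetical sketch of the surface-labeling step in the assembly pipeline.
# vlm_choose_panels is a stand-in for the real VLM call, not an actual API.

def label_surfaces(mesh_surfaces):
    """Assign a numeric label to each surface of the generated mesh."""
    return {i: name for i, name in enumerate(mesh_surfaces)}

def vlm_choose_panels(labeled, functional_parts):
    """Stand-in for the VLM: select labels whose surface serves a function
    (e.g., a seat needs a panel so someone can sit on it)."""
    return [i for i, name in labeled.items() if name in functional_parts]

surfaces = ["seat", "backrest", "left_leg", "right_leg"]
labeled = label_surfaces(surfaces)

# The real system queries the VLM; here we hard-code its conclusion that
# the seat and backrest need panels for sitting and leaning.
panel_ids = vlm_choose_panels(labeled, {"seat", "backrest"})
print(panel_ids)  # [0, 1]
```

A user correction such as “only use panels on the backrest, not the seat” would simply shrink the set of functional parts passed into the selection step before the design is finalized.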
Human-AI co-design
The user remains in the loop throughout this process and can refine the design by giving the model a new prompt, such as “only use panels on the backrest, not the seat.”
“The design space is very big, so we narrow it down through user feedback. We believe this is the best way to do it because people have different preferences, and building an idealized model for everyone would be impossible,” Kyaw says.
“The human‑in‑the‑loop process allows the users to steer the AI‑generated designs and have a sense of ownership in the final result,” adds Gupta.
Once the 3D mesh is finalized, a robotic assembly system builds the object using prefabricated parts. These reusable parts can be disassembled and reassembled into different configurations.
The researchers compared the results of their method with an algorithm that places panels on all horizontal surfaces that are facing up, and an algorithm that places panels randomly. In a user study, more than 90 percent of individuals preferred the designs made by their system.
They also asked the VLM to explain why it chose to put panels in those areas.
“We learned that the vision language model is able to understand some degree of the functional aspects of a chair, like leaning and sitting, to understand why it is placing panels on the seat and backrest. It isn’t just randomly spitting out these assignments,” Kyaw says.
In the future, the researchers want to enhance their system to handle more complex and nuanced user prompts, such as a table made out of glass and metal. In addition, they want to incorporate additional prefabricated components, such as gears, hinges, or other moving parts, so objects could have more functionality.
“Our hope is to drastically lower the barrier of access to design tools. We have shown that we can use generative AI and robotics to turn ideas into physical objects in a fast, accessible, and sustainable manner,” says Davis.
The political psychology of climate denial
Nature Climate Change, Published online: 16 December 2025; doi:10.1038/s41558-025-02523-7
Climate denial in political discourse is fuelled by psychological factors such as psychological distance, cognitive dissonance, confirmation bias, loss aversion, existential anxiety and social identity. Effective communication strategies addressing deniers’ motivations are crucial as denial undermines urgent climate action.
States Take On Tough Tech Policy Battles: 2025 in Review
State legislatures—from Olympia, WA, to Honolulu, HI, to Tallahassee, FL, and everywhere in between—kept EFF’s state legislative team busy throughout 2025.
We saw some great wins and steps forward this year. Washington became the eighth state to enshrine the right to repair. Several states stepped up to protect the privacy of location data, with bills recognizing your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. Other state legislators moved to protect health privacy. And California passed a law making it easier for people to exercise their privacy rights under the state’s consumer data privacy law.
Several states also took up debates around how to legislate and regulate artificial intelligence and its many applications. We’ll continue to work with allies in states including California and Colorado on proposals that address the real harms from some uses of AI, without infringing on the rights of creators and individual users.
We’ve also fought some troubling bills in states across the country this year. In April, Florida introduced a bill that would have created a backdoor for law enforcement to have easy access to messages if minors use encrypted platforms. Thankfully, the Florida legislature did not pass the bill this year. But it should set off serious alarm bells for anyone who cares about digital rights. And it was just one of a growing set of bills from states that, even when well-intentioned, threaten to take a wrecking ball to privacy, expression, and security in the name of protecting young people online.
Take, for example, the burgeoning number of age verification, age gating, age assurance, and age estimation bills. Instead of making the internet safer for children, these laws can incentivize or intersect with existing systems that collect vast amounts of data, forcing all users—regardless of age—to verify their identity just to access basic content or products. South Dakota and Wyoming, for example, are requiring any website that hosts any sexual content to implement age verification measures. But, given the way those laws are written, that definition could sweep in essentially any site that allows user-generated or published content, including everyday resources such as social media networks, online retailers, and streaming platforms.
Lawmakers, not satisfied with putting age gates on the internet, are also increasingly going after VPNs (virtual private networks) to prevent anyone from circumventing these new digital walls. VPNs are not foolproof tools—and they shouldn’t be necessary to access legally protected speech—but they should be available to people who want to use them. We will continue to stand against these types of bills, not just for the sake of free expression, but to protect the free flow of information essential to a free society.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review
State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than what lawmakers claim.
Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online.
These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through personalized feeds and other organized, digestible formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression.
On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices.
Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.
EFF has lobbied against these bills at both the state and federal level, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online – including minors – in the future.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
3 Questions: Using computation to study the world’s best single-celled chemists
Today, out of an estimated 1 trillion species on Earth, 99.999 percent are considered microbial — bacteria, archaea, viruses, and single-celled eukaryotes. For much of our planet’s history, microbes ruled the Earth, able to live and thrive in the most extreme of environments. Researchers have only just begun in the last few decades to contend with the diversity of microbes — it’s estimated that less than 1 percent of known genes have laboratory-validated functions. Computational approaches offer researchers the opportunity to strategically parse this truly astounding amount of information.
An environmental microbiologist and computer scientist by training, new MIT faculty member Yunha Hwang is interested in the novel biology revealed by the most diverse and prolific life form on Earth. In a shared faculty position as the Samuel A. Goldblith Career Development Professor in the Department of Biology, as well as an assistant professor at the Department of Electrical Engineering and Computer Science and the MIT Schwarzman College of Computing, Hwang is exploring the intersection of computation and biology.
Q: What drew you to research microbes in extreme environments, and what are the challenges in studying them?
A: Extreme environments are great places to look for interesting biology. I wanted to be an astronaut growing up, and the closest thing to astrobiology is examining extreme environments on Earth. And the only thing that lives in those extreme environments are microbes. During a sampling expedition that I took part in off the coast of Mexico, we discovered a colorful microbial mat about 2 kilometers underwater that flourished because the bacteria breathed sulfur instead of oxygen — but none of the microbes I was hoping to study would grow in the lab.
The biggest challenge in studying microbes is that a majority of them cannot be cultivated, which means that the only way to study their biology is through a method called metagenomics. My latest work is genomic language modeling. We’re hoping to develop a computational system so we can probe the organism as much as possible “in silico,” just using sequence data. A genomic language model is technically a large language model, except the language is DNA as opposed to human language. It’s trained in a similar way, just in biological language as opposed to English or French. If our objective is to learn the language of biology, we should leverage the diversity of microbial genomes. Even though we have a lot of data, and even as more samples become available, we’ve just scratched the surface of microbial diversity.
Q: Given how diverse microbes are and how little we understand about them, how can studying microbes in silico, using genomic language modeling, advance our understanding of the microbial genome?
A: A genome is many millions of letters. A human cannot possibly look at that and make sense of it. We can program a machine, though, to segment data into pieces that are useful. That’s sort of how bioinformatics works with a single genome. But if you’re looking at a gram of soil, which can contain thousands of unique genomes, that’s just too much data to work with — a human and a computer together are necessary in order to grapple with that data.
During my PhD and master’s degree, we were only just discovering new genomes and new lineages that were so different from anything that had been characterized or grown in the lab. These were things that we just called “microbial dark matter.” When there are a lot of uncharacterized things, that’s where machine learning can be really useful, because we’re just looking for patterns — but that’s not the end goal. What we hope to do is to map these patterns to evolutionary relationships between each genome, each microbe, and each instance of life.
Previously, we’ve been thinking about proteins as a standalone entity — that gets us to a decent degree of information because proteins are related by homology, and therefore things that are evolutionarily related might have a similar function.
What is known about microbiology is that proteins are encoded in genomes, and the context in which a protein is embedded — what regions come before and after it — is evolutionarily conserved, especially if there is a functional coupling. This makes total sense: when three proteins form a unit and need to be expressed together, you might want them located right next to each other.
What I want to do is incorporate more of that genomic context in the way that we search for and annotate proteins and understand protein function, so that we can go beyond sequence or structural similarity to add contextual information to how we understand proteins and hypothesize about their functions.
Q: How can your research be applied to harnessing the functional potential of microbes?
A: Microbes are possibly the world’s best chemists. Leveraging microbial metabolism and biochemistry will lead to more sustainable and more efficient methods for producing new materials, new therapeutics, and new types of polymers.
But it’s not just about efficiency — microbes are doing chemistry we don’t even know how to think about. Understanding how microbes work, and being able to understand their genomic makeup and their functional capacity, will also be really important as we think about how our world and climate are changing. A majority of carbon sequestration and nutrient cycling is undertaken by microbes; if we don’t understand how a given microbe is able to fix nitrogen or carbon, then we will face difficulties in modeling the nutrient fluxes of the Earth.
On the more therapeutic side, infectious diseases are a real and growing threat. Understanding how microbes behave in diverse environments relative to the rest of our microbiome is really important as we think about the future and combating microbial pathogens.
