Feed aggregator
Bezos puts money into breeding more climate-friendly cows
Decarbonization can improve energy security
Nature Climate Change, Published online: 09 April 2025; doi:10.1038/s41558-025-02317-x
Moving towards net-zero carbon emissions reduces reliance on fossil fuels but requires geographically concentrated materials for clean energy technologies. Now research finds countries can reduce emerging materials risks by expanding trading partnerships.
Trade risks to energy security in net-zero emissions energy scenarios
Nature Climate Change, Published online: 09 April 2025; doi:10.1038/s41558-025-02305-1
Trade risks associated with fossil fuels and critical materials matter for energy security, and will evolve with the low-carbon transition. Here the researchers find that overall trade risks decrease for most countries in net-zero scenarios, although risks to electricity or transportation sectors may increase.
Could LLMs help design our next medicines and materials?
The process of discovering molecules that have the properties needed to create new medicines and materials is cumbersome and expensive, consuming vast computational resources and months of human labor to narrow down the enormous space of potential candidates.
Large language models (LLMs) like ChatGPT could streamline this process, but enabling an LLM to understand and reason about the atoms and bonds that form a molecule, the same way it does with words that form sentences, has presented a scientific stumbling block.
Researchers from MIT and the MIT-IBM Watson AI Lab created a promising approach that augments an LLM with other machine-learning models known as graph-based models, which are specifically designed for generating and predicting molecular structures.
Their method employs a base LLM to interpret natural language queries specifying desired molecular properties. It automatically switches between the base LLM and graph-based AI modules to design the molecule, explain the rationale, and generate a step-by-step plan to synthesize it. It interleaves text, graph, and synthesis step generation, combining words, graphs, and reactions into a common vocabulary for the LLM to consume.
When compared to existing LLM-based approaches, this multimodal technique generated molecules that better matched user specifications and were more likely to have a valid synthesis plan, improving the success ratio from 5 percent to 35 percent.
It also outperformed LLMs that are more than 10 times its size and that design molecules and synthesis routes only with text-based representations, suggesting multimodality is key to the new system’s success.
“This could hopefully be an end-to-end solution where, from start to finish, we would automate the entire process of designing and making a molecule. If an LLM could just give you the answer in a few seconds, it would be a huge time-saver for pharmaceutical companies,” says Michael Sun, an MIT graduate student and co-author of a paper on this technique.
Sun’s co-authors include lead author Gang Liu, a graduate student at the University of Notre Dame; Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the Computer Science and Artificial Intelligence Laboratory (CSAIL); Meng Jiang, associate professor at the University of Notre Dame; and senior author Jie Chen, a senior research scientist and manager in the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Learning Representations.
Best of both worlds
Large language models aren’t built to understand the nuances of chemistry, which is one reason they struggle with inverse molecular design, a process of identifying molecular structures that have certain functions or properties.
LLMs convert text into representations called tokens, which they use to sequentially predict the next word in a sentence. But molecules are “graph structures,” composed of atoms and bonds with no particular ordering, making them difficult to encode as sequential text.
On the other hand, powerful graph-based AI models represent atoms and molecular bonds as interconnected nodes and edges in a graph. While these models are popular for inverse molecular design, they require complex inputs, can’t understand natural language, and yield results that can be difficult to interpret.
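To make the contrast concrete, here is a minimal, self-contained sketch, written for this article rather than taken from the paper, of the same small molecule seen both ways: as the ordered token sequence an LLM consumes and as the unordered graph of atoms and bonds that graph-based models operate on. The character-level tokenization and all names are illustrative assumptions.

```python
# Illustrative sketch (not code from the paper): ethanol represented two ways.

# 1) Sequential view: a SMILES string split into character-level tokens.
#    An LLM must treat this as an ordered sequence.
smiles = "CCO"                                # ethanol
tokens = list(smiles)                         # ['C', 'C', 'O']

# 2) Graph view: atoms as nodes, bonds as edges, with no inherent ordering.
atoms = {0: "C", 1: "C", 2: "O"}              # node index -> element
bonds = {(0, 1): "single", (1, 2): "single"}  # (node, node) -> bond type

def neighbors(node, edge_dict):
    """Return the atoms bonded to `node`, ignoring edge direction."""
    return [b if a == node else a
            for (a, b) in edge_dict if node in (a, b)]

print(tokens)               # the sequential representation an LLM sees
print(neighbors(1, bonds))  # [0, 2] -- the central carbon bonds to both neighbors
```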
The MIT researchers combined an LLM with graph-based AI models into a unified framework that gets the best of both worlds.
Llamole, which stands for large language model for molecular discovery, uses a base LLM as a gatekeeper to understand a user’s query — a plain-language request for a molecule with certain properties.
For instance, perhaps a user seeks a molecule that can penetrate the blood-brain barrier and inhibit HIV, given that it has a molecular weight of 209 and certain bond characteristics.
As the LLM predicts text in response to the query, it switches between graph modules.
One module uses a graph diffusion model to generate the molecular structure conditioned on input requirements. A second module uses a graph neural network to encode the generated molecular structure back into tokens for the LLM to consume. The final graph module is a graph reaction predictor, which takes an intermediate molecular structure as input and predicts a reaction step, searching for the exact set of steps needed to make the molecule from basic building blocks.
The researchers created a new type of trigger token that tells the LLM when to activate each module. When the LLM predicts a “design” trigger token, it switches to the module that sketches a molecular structure, and when it predicts a “retro” trigger token, it switches to the retrosynthetic planning module that predicts the next reaction step.
“The beauty of this is that everything the LLM generates before activating a particular module gets fed into that module itself. The module is learning to operate in a way that is consistent with what came before,” Sun says.
In the same manner, the output of each module is encoded and fed back into the generation process of the LLM, so it understands what each module did and will continue predicting tokens based on those data.
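A rough, runnable toy of this control flow may help. Everything here is a placeholder invented for illustration: the trigger-token names, the stub modules, and the scripted token stream standing in for the base LLM are assumptions, not the Llamole implementation.

```python
# Toy sketch of interleaved text/graph generation with trigger tokens.
# All names and behaviors are hypothetical placeholders, not Llamole's API.

def design_module(context):
    """Stand-in for the graph diffusion model: returns a molecular graph."""
    return {"atoms": ["C", "C", "O"], "bonds": [(0, 1), (1, 2)]}

def encode_graph(graph):
    """Stand-in for the graph neural network that maps a graph back to tokens."""
    return [f"<mol:{len(graph['atoms'])}-atom structure>"]

def retro_module(context, graph):
    """Stand-in for the reaction predictor: proposes one retrosynthesis step."""
    return ["<step: assemble from purchasable building blocks>"]

def generate(llm_tokens, query_tokens):
    """Interleave ordinary text with graph-module calls on trigger tokens.
    Each module sees the full context generated so far, and its output is
    appended back to the context, as the article describes."""
    context = list(query_tokens)
    molecule = None
    for token in llm_tokens:                         # the base LLM would predict these
        if token == "<design>":
            molecule = design_module(context)        # sketch a structure
            context += encode_graph(molecule)        # feed it back as tokens
        elif token == "<retro>":
            context += retro_module(context, molecule)  # plan one reaction step
        else:
            context.append(token)                    # plain text generation
    return context

# Example run with a scripted "LLM" output:
script = ["A", "blood-brain-barrier-permeable", "inhibitor:", "<design>",
          "Synthesis", "plan:", "<retro>"]
print(generate(script, ["<user query>"]))
```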
Better, simpler molecular structures
In the end, Llamole outputs an image of the molecular structure, a textual description of the molecule, and a step-by-step synthesis plan that provides the details of how to make it, down to individual chemical reactions.
In experiments involving designing molecules that matched user specifications, Llamole outperformed 10 standard LLMs, four fine-tuned LLMs, and a state-of-the-art domain-specific method. At the same time, it boosted the retrosynthetic planning success rate from 5 percent to 35 percent by generating higher-quality molecules, meaning they had simpler structures and lower-cost building blocks.
“On their own, LLMs struggle to figure out how to synthesize molecules because it requires a lot of multistep planning. Our method can generate better molecular structures that are also easier to synthesize,” Liu says.
To train and evaluate Llamole, the researchers built two datasets from scratch since existing datasets of molecular structures didn’t contain enough details. They augmented hundreds of thousands of patented molecules with AI-generated natural language descriptions and customized description templates.
The dataset they built to fine-tune the LLM includes templates related to 10 molecular properties, so one limitation of Llamole is that it is trained to design molecules considering only those 10 numerical properties.
In future work, the researchers want to generalize Llamole so it can incorporate any molecular property. In addition, they plan to improve the graph modules to boost Llamole’s retrosynthesis success rate.
And in the long run, they hope to use this approach to go beyond molecules, creating multimodal LLMs that can handle other types of graph-based data, such as interconnected sensors in a power grid or transactions in a financial market.
“Llamole demonstrates the feasibility of using large language models as an interface to complex data beyond textual description, and we anticipate them to be a foundation that interacts with other AI algorithms to solve any graph problems,” says Chen.
This research is funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and the Office of Naval Research.
Exploring the impacts of technology on everyday citizens
Give Dwai Banerjee credit: He doesn’t pick easy topics to study.
Banerjee is an MIT scholar who in a short time has produced a wide-ranging body of work about the impact of technology on society — and who, as a trained anthropologist, has a keen eye for people’s lived experience.
In one book, “Enduring Cancer,” from 2020, Banerjee studies the lives of mostly poor cancer patients in Delhi, digging into their psychological horizons and interactions with the world of medical care. Another book, “Hematologies,” also from 2020, co-authored with anthropologist Jacob Copeman, examines common ideas about blood in Indian society.
And in still another book, forthcoming later this year, Banerjee explores the history of computing in India — including the attempt by some to generate growth through domestic advances, even as global computer firms were putting the industry on rather different footing.
“I enjoy having the freedom to explore new topics,” says Banerjee, an associate professor in MIT’s Program in Science, Technology, and Society (STS). “For some people, building on their previous work is best, but I need new ideas to keep me going. For me, that feels more natural. You get invested in a subject for a time and try to get everything out of it.”
What largely links these disparate topics together is that Banerjee, in his work, is a people person: He aims to illuminate the lives and thoughts of everyday citizens as they interact with the technologies and systems of contemporary society.
After all, a cancer diagnosis can be life-changing not just in physical terms, but psychologically. For some, having cancer creates “a sense of being unmoored from prior certainties about oneself and one’s place in the world,” as Banerjee writes in “Enduring Cancer.”
The technology that enables diagnoses does not meet all our human needs, so the book traces the complicated inner lives of patients, and a medical system shifting to meet psychological and palliative-care challenges. Technology and society interact beyond blockbuster products, as the book deftly implies.
For his research and teaching, Banerjee was awarded tenure at MIT last year.
Falling for the humanities
Banerjee grew up in Delhi, and as a university student he expected to work in computing, before changing course.
“I was going to go to graduate school for computer engineering,” Banerjee says. “Then I just fell in love with the humanities, and studied the humanities and social sciences.” He received an MPhil and an MA in sociology from the Delhi School of Economics, then enrolled as a PhD student at New York University.
At NYU, Banerjee undertook doctoral studies in cultural anthropology, while performing some of the fieldwork that formed the basis of “Enduring Cancer.” At the same time, he found the people he was studying were surrounded by history — shaping the technologies and policies they encountered, and shaping their own thought. Ultimately even Banerjee’s anthropological work has a strong historical dimension.
After earning his PhD, Banerjee became a Mellon Fellow in the Humanities at Dartmouth College, then joined the MIT faculty in STS. It is a logical home for someone who thinks broadly and uses multiple research methods, from the field to the archives.
“I sometimes wonder if I am an anthropologist or if I am an historian,” Banerjee allows. “But it is an interdisciplinary program, so I try to make the most of that.”
Indeed, the STS program draws on many fields and methods, with its scholars and students linked by a desire to rigorously examine the factors shaping the development and application of technology — and, if necessary, to initiate difficult discussions about technology’s effects.
“That’s the history of the field and the department at MIT, that it’s a kind of moral backbone,” Banerjee says.
Finding inspiration
As for where Banerjee’s book ideas come from, he is not simply looking for large issues to write about, but things that spark his intellectual and moral sensibilities — like disadvantaged cancer patients in Delhi.
“‘Enduring Cancer,’ in my mind, is a sort of a traditional medical anthropology text, which came out of finding inspiration from these people, and running with it as far as I could,” Banerjee says.
Alternately, “‘Hematologies’ came out of a collaboration, a conversation with Jacob Copeman, with us talking about things and getting excited about it,” Banerjee adds. “The intellectual friendship became a driving force.” Copeman is now an anthropologist on the faculty at the University of Santiago de Compostela, in Spain.
As for Banerjee’s forthcoming book about computing in India, the spark was partly his own remembered enjoyment of seeing the internet reach the country, facilitated though it was by spotty dial-up modems and other now-quaint-seeming tools.
“It’s coming from an old obsession,” Banerjee says. “When the internet had just arrived, at that time when something was just blowing up, it was exciting. This project is [partly about] recovering my early enjoyment of what was then a really exciting time.”
The subject of the book itself, however, predates the commercial internet. Rather, Banerjee chronicles the history of computing during India’s first few decades after achieving independence from Britain in 1947. Even into the 1970s, India’s government was interested in creating a strong national IT sector, designing and manufacturing its own machines. Eventually those efforts faded, and the multinational computing giants took hold of India’s markets.
The book details how and why this happened, in the process recasting what we think we know about India and technology. Today, Banerjee notes, India is an exporter of skilled technology talent and an importer of tech tools, but that wasn’t predestined. It is more that the idea of an autonomous tech sector in the country ran into the prevailing forces of globalization.
“The book traces this moment of this high confidence in the country’s ability to do these things, producing manufacturing and jobs and economic growth, and then it traces the decline of that vision,” Banerjee says.
“One of the aims is for it to be a book anyone can read,” Banerjee adds. In that sense, the principle guiding his interests is now guiding his scholarly output: People first.
The spark of innovation and the commercialization journey
To Vanessa Chan PhD ’00, effective engineers don’t just solve technical problems. To make an impact with a new product or technology, they need to bring it to market, deploy it, and make it mainstream. Yet this is precisely what they aren’t trained to do.
In fact, 97 percent of patents fail to make it over the “commercialization wall.”
“Only 3 percent of patents succeed, and one of the biggest challenges is we are not training our PhDs, our undergrads, our faculty, to commercialize technologies,” said Chan, vice dean of innovation and entrepreneurship at the University of Pennsylvania’s School of Engineering and Applied Science. She delivered the Department of Materials Science and Engineering (DMSE)’s spring 2025 Wulff Lecture at MIT on March 10. “Instead, we’re focused on the really hard technical issues that we have to overcome, versus everything that needs to be addressed for something to make it to market.”
Chan spoke from deep experience, having led McKinsey & Co.’s innovation practice, helping Fortune 100 companies commercialize technologies. She also invented the tangle-free headphones Loopit at re.design, the firm she founded, and served as the U.S. Department of Energy (DoE)’s chief commercialization officer and director of the Office of Technology Transitions during the Biden administration.
From invention to impact
A DMSE alumna, Chan addressed a near-capacity crowd about the importance of materials innovation. She highlighted how new materials — or existing materials used in new ways — could solve key challenges, from energy sustainability to health care delivery. For example, carbon fiber composites have replaced aluminum in the airline industry, leading to reduced fuel consumption, lower emissions, and enhanced safety. Modern lithium-ion and solid-state batteries use optimized electrode materials for higher efficiency and faster charging. And biodegradable polymer stents, which dissolve over time, have replaced traditional metallic stents that remain in arteries and can cause complications.
The Wulff Lecture is a twice-yearly talk aimed at educating students, especially first-years, about materials science and engineering and its impact on society.
Inventing a groundbreaking technology is just the beginning, Chan said. She gave the example of Thomas Edison, often thought of as the father of the electric light bulb. But Edison didn’t invent the carbonized filament — that was Joseph Swan.
“Thomas Edison was the father of the deployed light bulb,” Chan said. “He took Swan’s patents and figured out, how do we actually pull a vacuum on this? How do we manufacture this at scale?”
For an invention to make an impact, it needs to successfully traverse the commercialization journey from research to development, demonstration, and deployment in the market. “An invention without deployment is a tragedy, because you’ve invented something where you may have a lot of paper publications, but it is not making a difference at all in the real world.”
Materials commercialization is difficult, Chan explained, because new materials are at the very beginning of a value chain — the full range of activities in producing a product or service. To make it to market, the materials invention must be adopted by others along the chain, and in some cases, companies must navigate how each part of the chain gets paid. A new material for hip replacements, for example, designed to reduce the risk of infection and rehospitalization, might be a materials breakthrough, but getting it to market is complicated by the way insurance works.
“They will not pay more to avoid hospitalization,” Chan said. “If your material is more expensive than what is currently being used today, the providers will not reimburse for that.”
Beyond technology
But engineers can increase their odds in commercialization if they know the right language. “Adoption readiness levels” (ARLs), developed in Chan’s Office of Technology Transitions, help assess the nontechnical risks technologies face on their journey to commercialization. These risks cover value proposition — whether a technology can perform at a price customers will pay — market acceptance, and other potential barriers, such as infrastructure and regulations.
The 2021 Bipartisan Infrastructure Law and the 2022 Inflation Reduction Act allocated $370 billion toward clean energy deployment — 10 times the Department of Energy’s annual budget — to advance clean energy technologies such as carbon management, clean hydrogen, and geothermal heating and cooling. But Chan emphasized that the real prize was unlocking an estimated $23 trillion from private-sector investors.
“Those are the ones who are going to bring the technologies to market. So, how do we do that? How do we convince them to actually commercialize these technologies which aren’t quite there?” Chan asked.
Chan’s team spearheaded “Pathways to Commercial Liftoff,” a roadmap to bridge the gap between innovation and commercial adoption, helping identify scaling requirements, key players, and the acceptable risk levels for early adoption.
She shared an example from the DoE initiative, which received $8 billion from Congress to create a market for clean hydrogen technologies. She tied the money to specific pathways, explaining, “the private sector will start listening because they want the money.”
Her team also gathered data on where the industry was headed, identifying sectors that would likely adopt hydrogen, the infrastructure needed to support it, and what policies or funding could accelerate commercialization.
“There’s also community perception, because when we talk to people about hydrogen, what's the first thing people think about? The Hindenburg,” Chan said, referencing the 1937 dirigible explosion. “So these are the kinds of things that we had to grapple with if we’re actually going to create a hydrogen economy.”
“What do you love?”
Chan concluded her talk by offering students professional advice. She encouraged them to do what they love. On a slide, she shared a Venn diagram of her passions for technology, business, and making things — she recently started a pottery studio called Rebel Potters — illustrating the motivations behind her career journey.
“So I need you to ask yourself, What is your Venn diagram? What is it that you love?” Chan asked. “And you may say, ‘I don’t know. I’m 18 right now, and I just need to figure out what classes I want to take.’ Well, guess what? Get outside your comfort zone. Go do something new. Go try new things.”
Attendee Delia Harms, a DMSE junior, found the exercise particularly useful. “I think I’m definitely lacking a little bit of direction in where I want to go after undergrad and what I want my career path to look like,” Harms said. “So I’ll definitely try that exercise later — thinking about what my circles are, and how they come together.”
Jeannie She, a junior majoring in artificial intelligence and bioengineering, found inspiration in Chan’s public sector experience.
“I have always seen government as bureaucracy, red tape, slow — but I’m also really interested in policy and policy change,” She said. “So learning from her and the things that she’s accomplished during her time as an appointee has been really inspiring, and makes me see that there are careers in policy where things can actually get done.”
Our Privacy Act Lawsuit Against DOGE and OPM: Why a Judge Let It Move Forward
Last week, a federal judge rejected the government’s motion to dismiss our Privacy Act lawsuit against the U.S. Office of Personnel Management (OPM) and Elon Musk’s “Department of Government Efficiency” (DOGE). OPM is disclosing to DOGE agents the highly sensitive personal information of tens of millions of federal employees, retirees, and job applicants. This disclosure violates the federal Privacy Act, a watershed law that tightly limits how the federal government can use our personal information.
We represent two unions of federal employees: the American Federation of Government Employees (AFGE) and the Association of Administrative Law Judges (AALJ). Our co-counsel are Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm LLC.
We’ve already explained why the new ruling is a big deal, but let’s take a deeper dive into the Court’s reasoning.
Plaintiffs have standing
A plaintiff must show they have “standing” to bring their claim. Article III of the U.S. Constitution empowers courts to decide “cases” and “controversies.” Courts have long held this requires the plaintiff to show an “injury in fact” that is, among other things, “concrete.” In recent years, two Supreme Court decisions – Spokeo v. Robins (2016) and TransUnion v. Ramirez (2021) – addressed when an “intangible” injury, such as invasion of data privacy, is sufficiently concrete. They ruled that such injury must have “a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts.”
In our case, the Court held that our clients passed this test: “The complaint alleges concrete harms analogous to intrusion upon seclusion.” That is one of the common law privacy torts, long recognized in U.S. law. According to the Restatement of Torts, it occurs when a person “intrudes” on the “seclusion of another” in a manner “highly offensive to a reasonable person.”
The Court reasoned that the records at issue here “contain information about the deeply private affairs of the plaintiffs,” including “social security numbers, health history, financial disclosures, and information about family members.” The Court also emphasized plaintiffs’ allegation that these records were “disclosed to DOGE agents in a rushed and insecure manner,” including “administrative access, enabling them to alter OPM records and obscure their own access to those records.”
The Court rejected defendants’ argument that our clients supposedly pled “only that DOGE agents were granted access to OPM’s data system,” and not also that “the DOGE agents in fact used that access to examine OPM records.” As a factual matter, plaintiffs in fact pled that “DOGE agents actually exploited their access to review, possess, and use OPM records.”
As a legal matter, such use is not required: “Exposure of the plaintiff’s personally identifiable information to unauthorized third parties, without further use or disclosure, is analogous to harm cognizable under the common law right to privacy.” So ruling, the Court observed: “at least four federal courts have found that the plaintiffs before them had made a sufficient showing of concrete injury, as analogous to common law privacy torts, when agencies granted DOGE agents access to repositories of plaintiffs’ personal information.”
To have standing, a plaintiff must also show that their “injury in fact” is “actual or imminent.” The Court held that our clients passed this test, too. It ruled that plaintiffs adequately alleged an actual injury: “ongoing unauthorized access by the DOGE agents to the plaintiffs’ data.” It also ruled that plaintiffs adequately alleged a separate, imminent injury: OPM’s disclosure to DOGE “has made the OPM data more vulnerable to hacking, identity theft, and other activities that are substantially harmful to the plaintiffs.” The Court emphasized the allegations of “sweeping and uncontrolled access to DOGE agents who were not properly vetted or trained,” as well as the notorious 2015 OPM data breach.
Finally, the Court held that our clients sufficiently alleged the remaining two elements of standing: that defendants caused plaintiffs’ injuries, and that an injunction would redress them.
Plaintiffs may proceed on their Privacy Act claims
The Court held: “The plaintiffs have plausibly alleged violations of two provisions of the Privacy Act: 5 U.S.C. § 552a(b), which prohibits certain disclosures of records, and 5 U.S.C. § 552a(e)(10), which imposes a duty to establish appropriate safeguards and ensure security and confidentiality of records.” The Court cited two other judges who had recently “found a likelihood that plaintiffs will succeed” in their wrongful disclosure claims.
Reprising their failed standing arguments, the government argued that to plead a violation of the Privacy Act’s no-disclosure rule, our clients must allege “not just transmission to another person but also review of the records by that individual.” Again, the Court rejected this argument for two independent reasons. Factually, “the complaint amply pleads that DOGE agents viewed, possessed, and used the OPM records.” Legally, “the defendants misconstrue the term ‘disclose.’” The Court looked to the OPM’s own regulations, which define the term to include “providing personal review of a record,” and an earlier appellate court opinion, interpreting the term to include “virtually all instances [of] an agency’s unauthorized transmission of a protected record.”
Next, the government asserted an exception from the Privacy Act’s no-disclosure rule, for disclosure “to those officers and employees of the agency which maintains the record who have a need for the record in the performance of their duties.” The Court observed that our clients disputed this exception on two independent grounds: “both because [the disclosures] were made to DOGE agents who were not officers or employees of OPM and because, even if the DOGE agents were employees of OPM, they did not have a need for those records in the performance of any lawful duty.” On both grounds, the plaintiffs’ allegations sufficed.
Plaintiffs may seek to enjoin Privacy Act violations
The Court ruled that our clients may seek injunctive and declaratory relief against the alleged Privacy Act violations, by means of the Administrative Procedure Act (APA), though not the Privacy Act itself. This is a win: What ultimately matters is the availability of relief, not the particular path to that relief.
As discussed above, plaintiffs have two claims that the government violated the Privacy Act: unlawful disclosures and unlawful cybersecurity failures. Plaintiffs also have an APA claim of agency action “not in accordance with law,” which refers back to these two Privacy Act violations.
To be subject to APA judicial review, the challenged agency action must be “final.” The Court found finality: “The complaint plausibly alleges that actions by OPM were not representative of its ordinary day-to-day operations but were, in sharp contrast to its normal procedures, illegal, rushed, and dangerous.”
Another requirement for APA judicial review is the absence of an “other adequate remedy.” The Court interpreted the Privacy Act to not allow the injunction our clients seek, but then ruled: “As a result, the plaintiffs have no adequate recourse under the Privacy Act and may pursue their request for injunctive relief under the APA.” The Court further wrote:
The defendants’ Kafkaesque argument to the contrary would deprive the plaintiffs of any recourse under the law. They contend that the plaintiffs have no right to any injunctive relief – neither under the Privacy Act nor under the APA. … This argument promptly falls apart under examination.
Plaintiffs may proceed on two more claims
The Court allowed our clients to move forward on their two other claims.
They may proceed on their claim that the government violated the APA by acting in an “arbitrary and capricious” manner. The Court reasoned: “The complaint alleges that OPM rushed the onboarding process, omitted crucial security practices, and thereby placed the security of OPM records at grave risk.”
Finally, our clients may proceed on their claim that DOGE acted “ultra vires,” meaning outside of its legal power, when it accessed OPM records. The Court reasoned: “The complaint adequately pleads that DOGE Defendants plainly and openly crossed a congressionally drawn line in the sand.”
Next steps
Congress passed the Privacy Act following the Watergate and COINTELPRO scandals to restore trust in government and prevent a future President from creating another “enemies list.” Congress found that the federal government’s increasing use of databases full of personal records “greatly magnified the harm to individual privacy,” and so it tightly regulated how agencies may use these databases.
The ongoing DOGE data grab may be the worst violation of the Privacy Act since its enactment in 1974. So it is great news that a judge has denied the government’s motion to dismiss our lawsuit. Now we will move forward to prove our case.
Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Enabling energy innovation at scale
Enabling and sustaining a clean energy transition depends not only on groundbreaking technology to redefine the world’s energy systems, but also on that innovation happening at scale. As a part of an ongoing speaker series, the MIT Energy Initiative (MITEI) hosted Emily Knight, the president and CEO of The Engine, a nonprofit incubator and accelerator dedicated to nurturing technology solutions to the world’s most urgent challenges. She explained how her organization is bridging the gap between research breakthroughs and scalable commercial impact.
“Our mission from the very beginning was to support and accelerate what we call ‘tough tech’ companies — [companies] who had this vision to solve some of the world’s biggest problems,” Knight said.
The Engine, a spinout of MIT, coined the term “tough tech” to represent not only the durability of the technology, but also the complexity and scale of the problems it will solve. “We are an incubator and accelerator focused on building a platform and creating what I believe is an open community for people who want to build tough tech, who want to fund tough tech, who want to work in a tough tech company, and ultimately be a part of this community,” said Knight.
According to Knight, The Engine creates “an innovation orchard” where early-stage research teams have access to the infrastructure and resources needed to take their ideas from lab to market while maximizing impact. “We use this pathway — from idea to investment, then investment to impact — in a lot of the work that we do,” explained Knight.
She said that tough tech exists at the intersection of several risk factors: technology, market and customer, regulatory, and scaling. Knight highlighted MIT spinout Commonwealth Fusion Systems (CFS) — one of many MIT spinouts within The Engine’s ecosystem that focus on energy — as an example of how The Engine encourages teams to work through these risks.
In the early days, the CFS team was told to assume their novel fusion technology would work. “If you’re only ever worried that your technology won’t work, you won’t pick your head up and have the right people on your team who are building the public affairs relationships so that, when you need it, you can get your first fusion reactor sited and done,” explained Knight. “You don’t know where to go for the next round of funding, and you don’t know who in government is going to be your advocates when you need them to be.”
“I think [CFS’s] eighth employee was a public affairs person,” Knight said. With the significant regulatory, scaling, and customer risks associated with fusion energy, building their team wisely was essential. Bringing on a public affairs person helped CFS build awareness and excitement around fusion energy in the local community and build the community programs necessary for grant funding.
The Engine’s growing ecosystem of entrepreneurs, researchers, institutions, and government agencies is a key component of the support offered to early-stage researchers. The ecosystem creates a space for sharing knowledge and resources, which Knight believes is critical for navigating the unique challenges associated with tough tech.
This support can be especially important for new entrepreneurs: “This leader that is going from PhD student to CEO — that is a really, really big journey that happens the minute you get funding,” said Knight. “Knowing that you’re in a community of people who are on that same journey is really important.”
The Engine also extends this support to the broader community through educational programs that walk participants through the process of translating their research from lab to market. Knight highlighted two climate and energy startups that joined The Engine through one such program geared toward graduate students and postdocs: Lithios, which is producing sustainable, low-cost lithium, and Lydian, which is developing sustainable aviation fuels.
The Engine also offers access to capital from investors with an intimate understanding of tough tech ventures. Knight said that government agency partners can offer additional support through public funding opportunities and highlighted that grants from the U.S. Department of Energy were key in the early funding of another MIT spinout within their ecosystem, Sublime Systems.
In response to the current political shift away from climate investments, as well as uncertainty surrounding government funding, Knight believes that the connections within their ecosystem are more important than ever as startups explore alternative funding. “We’re out there thinking about funding mechanisms that could be more reliable. That’s our role as an incubator.”
Being able to convene the right people to address a problem is something that Knight attributes to her education at Cornell University’s School of Hotel Administration. “My ethos across all of this is about service,” stated Knight. “We’re constantly evolving our resources and how we help our teams based on the gaps they’re facing.”
MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The next seminar in this series will be April 30 with Manish Bapna, president and CEO of the Natural Resources Defense Council. Visit MITEI’s Events page for more information on this and additional events.
EFF, Civil Society Groups, Academics Call on UK Home Secretary to Address Flawed Data Bill
Last week, EFF joined 30 civil society groups and academics in warning UK Home Secretary Yvette Cooper and Department for Science, Innovation & Technology Secretary Peter Kyle about the law enforcement risks contained within the draft Data Use and Access Bill (DUA Bill).
Clause 80 of the DUA Bill weakens the safeguards for solely automated decisions in the law-enforcement context and dilutes crucial data protection safeguards.
Under sections 49 and 50 of the Data Protection Act 2018, solely automated decisions are prohibited from being made in the law enforcement context unless the decision is required or authorised by law. Clause 80 reverses this in all scenarios unless the data processing involves special category data.
In short, this would enable law enforcement to use automated decisions about people regarding their socioeconomic status, regional or postcode data, inferred emotions, or even regional accents. This increases the already broad possibilities for bias, discrimination, and lack of transparency at the hands of law enforcement.
In its own Impact Assessment for the DUA Bill, the government acknowledged that “those with protected characteristics such as race, gender, and age are more likely to face discrimination from ADM due to historical biases in datasets.” Yet politicians in the UK have decided to push forward with this discriminatory and dangerous agenda regardless.
Further, given the already minimal transparency around automated decision making, individuals affected in the law enforcement context would have no or highly limited routes to redress.
The DUA Bill puts marginalised groups at risk of opaque, unfair and harmful automated decisions. Yvette Cooper and Peter Kyle must address the lack of safeguards governing law enforcement use of automated decision-making tools before time runs out.
The full letter can be found here.
Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping
When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options.
They could burn low-sulfur fossil fuels, like marine gas oil, or install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil. Biofuels with lower sulfur content offer another alternative, though their limited availability makes them a less feasible option.
While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.
Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass the environmental performance of burning low-sulfur fuels when a wide variety of environmental factors is considered.
The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship.
They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.
“In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.
“Claims about environmental hazards and policies to mitigate them should be backed by science. You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.
Stathatou is joined on the paper by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering at MIT, as well as others at the National Technical University of Athens in Greece and the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science and Technology.
Slashing sulfur emissions
Heavy fuel oil, traditionally burned by bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions.
Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.
In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil.
Today, more than 5,800 vessels utilize scrubbers, the majority of which are wet, open-loop scrubbers.
“Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.
A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines. Inside, seawater drawn from the ocean is sprayed through a series of nozzles downward to wash the hot exhaust gases as they exit the engines.
The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.
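The conversion described here follows standard wet-scrubbing chemistry; a simplified reaction sequence, stated from general chemistry rather than taken from the study itself, is:

```latex
% Simplified seawater-scrubbing chemistry (textbook reactions, not from the study):
% SO2 dissolves in the spray water and dissociates, and the resulting bisulfite
% is oxidized to sulfate, a compound already abundant in seawater.
\mathrm{SO_2 + H_2O \;\rightleftharpoons\; H^+ + HSO_3^-}
\qquad\qquad
\mathrm{HSO_3^- + \tfrac{1}{2}\,O_2 \;\rightarrow\; H^+ + SO_4^{2-}}
```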
But the acidic washwater can contain other combustion byproducts like heavy metals, so scientists wondered if scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels.
Several studies explored toxicity of washwater and fuel system pollution, but none painted a full picture.
The researchers set out to fill that scientific gap.
A “well-to-wake” analysis
The team conducted a lifecycle assessment using a global environmental database on production and transport of fossil fuels, such as heavy fuel oil, marine gas oil, and very-low sulfur fuel oil. Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter.
“If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” Stathatou says.
The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all materials, production processes, and transportation steps involved in marine scrubber fabrication and installation.
“If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.
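As a rough illustration of what a well-to-wake comparison involves, the sketch below sums each impact category over lifecycle stages for two hypothetical options. The structure is generic and the numbers are arbitrary placeholders, not the study’s data or results.

```python
# Generic well-to-wake aggregation: total each impact category across lifecycle
# stages (fuel production, transport, equipment manufacture, onboard operation).
# All figures below are arbitrary placeholders, NOT data from the study.

from collections import defaultdict

def well_to_wake(stages):
    """Sum per-stage impacts into a total for each impact category."""
    totals = defaultdict(float)
    for impacts in stages.values():
        for category, value in impacts.items():
            totals[category] += value
    return dict(totals)

options = {
    "HFO + scrubber": {
        "fuel production":      {"GHG": 1.0, "particulates": 1.0},
        "scrubber manufacture": {"GHG": 0.1, "particulates": 0.1},
        "onboard operation":    {"GHG": 1.0, "particulates": 1.0},
    },
    "low-sulfur fuel": {
        "fuel production":      {"GHG": 1.5, "particulates": 1.5},  # extra refining steps
        "onboard operation":    {"GHG": 1.0, "particulates": 0.9},
    },
}

for name, stages in options.items():
    print(name, well_to_wake(stages))
```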
For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings.
Collecting these onboard data was the most challenging part of the study.
“All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.
Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels according to that measure. The researchers saw similar trends for emissions of other pollutants like carbon monoxide and nitrous oxide.
In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals.
The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. Environmental Protection Agency and European Union.
Most chemical concentrations were at least an order of magnitude below these requirements.
In addition, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean.
These findings suggest that using scrubbers with heavy fuel oil can be considered equal to, or more environmentally friendly than, low-sulfur fuels across many of the impact categories the researchers studied.
“This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and co-author of the study.
“This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”
Ultimately, this study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says.
“There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.
This study was supported, in part, by Oldendorff Carriers.
Arguing Against CALEA
At a Congressional hearing earlier this week, Matt Blaze made the point that CALEA, the 1994 law that forces telecoms to make phone calls wiretappable, is outdated in today’s threat environment and should be rethought:
In other words, while the legally-mandated CALEA capability requirements have changed little over the last three decades, the infrastructure that must implement and protect it has changed radically. This has greatly expanded the “attack surface” that must be defended to prevent unauthorized wiretaps, especially at scale. The job of the illegal eavesdropper has gotten significantly easier, with many more options and opportunities for them to exploit. Compromising our telecommunications infrastructure is now little different from performing any other kind of computer intrusion or data breach, a well-known and endemic cybersecurity problem. To put it bluntly, something like Salt Typhoon was inevitable, and will likely happen again unless significant changes are made...