Feed aggregator
Relaxing tailpipe rules would hurt climate and consumers, critics say
EU science advisers slam Brussels’ weakened 2040 climate plans
EU climate chief lobbied Germany to back weakened 2040 goal
States roll out red carpets for data centers. But some lawmakers push back.
River dammed by huge Swiss landslide flows once again
Flood-induced selective migration patterns examined
Nature Climate Change, Published online: 03 June 2025; doi:10.1038/s41558-025-02346-6
Selective migration patterns emerge in flood-prone regions in the USA. The sociodemographic profiles of individuals who were more inclined to move in or out of flood-prone areas were strikingly different. Media sentiment aggravates population replacement in these regions, leading to short-term structure changes in the housing market and long-term socioeconomic decline.
New 3D printing method enables complex designs and creates less waste
Hearing aids, mouth guards, dental implants, and other highly tailored structures are often products of 3D printing. These structures are typically made via vat photopolymerization — a form of 3D printing that uses patterns of light to shape and solidify a resin, one layer at a time.
The process also involves printing structural supports from the same material to hold the product in place as it’s printed. Once a product is fully formed, the supports are removed manually and typically thrown out as unusable waste.
MIT engineers have found a way to bypass this finishing step, one that could significantly speed up the 3D-printing process. They developed a resin that turns into two different kinds of solids, depending on the type of light that shines on it: Ultraviolet light cures the resin into a highly resilient solid, while visible light turns the same resin into a solid that is easily dissolvable in certain solvents.
The team exposed the new resin simultaneously to patterns of UV light to form a sturdy structure, as well as patterns of visible light to form the structure’s supports. Instead of having to carefully break away the supports, they simply dipped the printed material into a solution that dissolved the supports away, revealing the sturdy, UV-printed part.
The supports can dissolve in a variety of food-safe solutions, including baby oil. Interestingly, the supports could even dissolve in the main liquid ingredient of the original resin, like a cube of ice in water. This means that the material used to print structural supports could be continuously recycled: Once a printed structure’s supporting material dissolves, that mixture can be blended directly back into fresh resin and used to print the next set of parts — along with their dissolvable supports.
The researchers applied the new method to print complex structures, including functional gear trains and intricate lattices.
“You can now print — in a single print — multipart, functional assemblies with moving or interlocking parts, and you can basically wash away the supports,” says graduate student Nicholas Diaco. “Instead of throwing out this material, you can recycle it on site and generate a lot less waste. That’s the ultimate hope.”
He and his colleagues report the details of the new method in a paper appearing today in Advanced Materials Technologies. The MIT study’s co-authors include Carl Thrasher, Max Hughes, Kevin Zhou, Michael Durso, Saechow Yap, Professor Robert Macfarlane, and Professor A. John Hart, head of MIT’s Department of Mechanical Engineering.
Waste removal
Conventional vat photopolymerization (VP) begins with a 3D computer model of a structure to be printed — for instance, of two interlocking gears. Along with the gears themselves, the model includes small support structures around, under, and between the gears to keep every feature in place as the part is printed. This computer model is then sliced into many digital layers that are sent to a VP printer for printing.
A standard VP printer includes a small vat of liquid resin that sits over a light source. Each slice of the model is translated into a matching pattern of light that is projected onto the liquid resin, which solidifies into the same pattern. Layer by layer, a solid, light-printed version of the model’s gears and supports forms on the build platform. When printing is finished, the platform lifts the completed part above the resin bath. Once excess resin is washed away, a person can go in by hand to remove the intermediary supports, usually by clipping and filing, and the support material is ultimately thrown away.
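The slicing step described above can be sketched in a few lines of Python. This is purely an illustrative toy under simplifying assumptions: real slicers operate on mesh geometry (such as STL files) rather than voxel grids, but the idea of cutting a 3D model into a stack of 2D light patterns is the same.

```python
import numpy as np

# Toy illustration of slicing: a voxelized 3-D model (here, a sphere)
# is cut into per-layer 2-D masks, each of which would become one
# projected light pattern in a VP printer.
n = 32
coords = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
model = (x**2 + y**2 + z**2) <= 1.0  # boolean occupancy grid

# One 2-D mask per build layer, bottom to top.
layers = [model[:, :, k] for k in range(n)]

# Middle layers cover the sphere's widest cross-section;
# the bottom and top layers are nearly empty.
print(layers[n // 2].sum(), layers[0].sum())
```

In a real pipeline, each mask would also be dilated or modified to include the support structures around and beneath overhanging features.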
“For the most part, these supports end up generating a lot of waste,” Diaco says.
Print and dip
Diaco and the team looked for a way to simplify and speed up the removal of printed supports and, ideally, recycle them in the process. They came up with a general concept for a resin that, depending on the type of light that it is exposed to, can take on one of two phases: a resilient phase that would form the desired 3D structure and a secondary phase that would function as a supporting material but also be easily dissolved away.
After working out some chemistry, the team found they could make such a two-phase resin by mixing two commercially available monomers, the chemical building blocks that are found in many types of plastic. When ultraviolet light shines on the mixture, the monomers link together into a tightly interconnected network, forming a tough solid that resists dissolution. When the same mixture is exposed to visible light, the same monomers still cure, but at the molecular scale the resulting monomer strands remain separate from one another. This solid can quickly dissolve when placed in certain solutions.
In benchtop tests with small vials of the new resin, the researchers found the material did transform into both the insoluble and soluble forms in response to ultraviolet and visible light, respectively. But when they moved to a 3D printer with LEDs dimmer than the benchtop setup, the UV-cured material fell apart in solution. The weaker light only partially linked the monomer strands, leaving them too loosely tangled to hold the structure together.
Diaco and his colleagues found that adding a small amount of a third “bridging” monomer could link the two original monomers together under UV light, knitting them into a much sturdier framework. This fix enabled the researchers to simultaneously print resilient 3D structures and dissolvable supports using timed pulses of UV and visible light in one run.
The team applied the new method to print a variety of intricate structures, including interlocking gears, intricate lattices, a ball within a square frame, and, for fun, a small dinosaur encased in an egg-shaped support that dissolved away when dipped in solution.
“With all these structures, you need a lattice of supports inside and out while printing,” Diaco says. “Removing those supports normally requires careful, manual removal. This shows we can print multipart assemblies with a lot of moving parts, and detailed, personalized products like hearing aids and dental implants, in a way that’s fast and sustainable.”
“We’ll continue studying the limits of this process, and we want to develop additional resins with this wavelength-selective behavior and mechanical properties necessary for durable products,” says professor of mechanical engineering John Hart. “Along with automated part handling and closed-loop reuse of the dissolved resin, this is an exciting path to resource-efficient and cost-effective polymer 3D printing at scale.”
This research was supported, in part, by the Center for Perceptual and Interactive Intelligence (InnoHK) in Hong Kong, the U.S. National Science Foundation, the U.S. Office of Naval Research, and the U.S. Army Research Office.
Teaching AI models what they don’t know
Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to do things like develop drugs, synthesize information, and drive autonomous cars.
Now, the MIT spinout Themis AI is helping quantify model uncertainty and correct outputs before they cause bigger problems. The company’s Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying AI models to enable them to detect patterns in their data processing that indicate ambiguity, incompleteness, or bias.
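As a rough illustration of the general idea of uncertainty quantification (this is not Themis AI’s actual method, whose internals the article does not describe), a model’s uncertainty on a given input can be estimated from the disagreement of an ensemble trained on bootstrap resamples of the data:

```python
import numpy as np

# Illustrative sketch: use the spread of a bootstrap ensemble's
# predictions as an uncertainty score. Inputs far from the training
# data produce high disagreement, flagging unreliable outputs.
rng = np.random.default_rng(0)

# Synthetic 1-D regression data covering x in [0, 5].
x = rng.uniform(0, 5, 160)
y = np.sin(x) + rng.normal(0, 0.1, x.size)

# Each ensemble member fits a polynomial to a different resample.
members = []
for _ in range(20):
    idx = rng.integers(0, x.size, x.size)
    members.append(np.polyfit(x[idx], y[idx], 4))

def predict_with_uncertainty(x_new):
    """Return (mean prediction, disagreement) across the ensemble."""
    preds = np.array([np.polyval(c, x_new) for c in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Disagreement is small where training data exists (x = 1.0)
# and large far outside the training range (x = 7.0).
_, std_in = predict_with_uncertainty(np.array([1.0]))
_, std_out = predict_with_uncertainty(np.array([7.0]))
print(std_in[0], std_out[0])
```

The principle — treating model disagreement or instability as a proxy for unreliability — generalizes to classifiers and neural networks, though production systems use more sophisticated estimators.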
“The idea is to take a model, wrap it in Capsa, identify the uncertainties and failure modes of the model, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’re excited about offering a solution that can improve models and offer guarantees that the model is working correctly.”
Rus founded Themis AI in 2021 with Alexander Amini ’17, SM ’18, PhD ’22 and Elaheh Ahmadi ’20, MEng ’21, two former research affiliates in her lab. Since then, they’ve helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.
“We want to enable AI in the highest-stakes applications of every industry,” Amini says. “We’ve all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Our software can make these systems more transparent.”
Helping models know what they don’t know
Rus’ lab has been researching model uncertainty for years. In 2018, she received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.
“That is a safety-critical context where understanding model reliability is very important,” Rus says.
In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model’s training data, showing it eliminated bias. The algorithm worked by identifying the unrepresentative parts of the underlying training data and generating new, similar data samples to rebalance it.
In 2021, the eventual co-founders showed a similar approach could be used to help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.
“Guiding drug discovery could potentially save a lot of money,” Rus says. “That was the use case that made us realize how powerful this tool could be.”
Today Themis is working with companies in a wide variety of industries, and many of those companies are building large language models. By using Capsa, the models are able to quantify their own uncertainty for each output.
“Many companies are interested in using LLMs that are based on their data, but they’re concerned about reliability,” observes Stewart Jamieson SM ’20, PhD ’24, Themis AI’s head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and flagging unreliable outputs.”
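One simple, generic way an LLM can self-report confidence — shown here as an illustrative sketch, not Capsa’s API — is to average the log-probabilities of the tokens it generated and flag answers that fall below a threshold:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_confidence(step_logits, chosen_tokens):
    """Mean log-probability of the chosen token at each generation step."""
    probs = softmax(np.asarray(step_logits, dtype=float))
    chosen = probs[np.arange(len(chosen_tokens)), chosen_tokens]
    return float(np.log(chosen).mean())

def flag_if_unreliable(step_logits, chosen_tokens, threshold=-1.0):
    # Returns (confidence score, whether the answer needs review).
    score = sequence_confidence(step_logits, chosen_tokens)
    return score, score < threshold

# Peaked logits -> the model was sure of each token (high confidence);
# nearly flat logits -> the model was guessing (low confidence).
confident = [[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
uncertain = [[0.1, 0.0, 0.05], [0.0, 0.1, 0.05]]
print(flag_if_unreliable(confident, [0, 1]))
print(flag_if_unreliable(uncertain, [0, 1]))
```

Token-level log-probabilities are a coarse proxy — a model can be confidently wrong — which is why dedicated uncertainty tooling goes beyond this baseline.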
Themis AI is also in discussions with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.
“Normally these smaller models that work on phones or embedded systems aren’t very accurate compared to what you could run on a server, but we can get the best of both worlds: low latency, efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they’re unsure of their output, they can forward those tasks to a central server.”
Pharmaceutical companies can also use Capsa to improve AI models being used to identify drug candidates and predict their performance in clinical trials.
“The predictions and outputs of these models are very complex and hard to interpret — experts spend a lot of time and effort trying to make sense of them,” Amini remarks. “Capsa can give insights right out of the gate to understand if the predictions are backed by evidence in the training set or are just speculation without a lot of grounding. That can accelerate the identification of the strongest predictions, and we think that has a huge potential for societal good.”
Research for impact
Themis AI’s team believes the company is well-positioned to improve the cutting edge of constantly evolving AI technology. For instance, the company is exploring Capsa’s ability to improve accuracy in an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to get to an answer.
“We’ve seen signs Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” Amini says. “We think that has huge implications in terms of improving the LLM experience, reducing latencies, and reducing computation requirements. It’s an extremely high-impact opportunity for us.”
For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.
“My students and I have become increasingly passionate about going the extra step to make our work relevant for the world," Rus says. “AI has tremendous potential to transform industries, but AI also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”
At MIT, Lindsay Caplan reflects on artistic crossroads where humans and machines meet
The intersection of art, science, and technology presents a unique, sometimes challenging, viewpoint for both scientists and artists. It is in this nexus that art historian Lindsay Caplan positions herself: “My work as an art historian focuses on the ways that artists across the 20th century engage with new technologies like computers, video, and television, not merely as new materials for making art as they already understand it, but as conceptual platforms for reorienting and reimagining the foundational assumptions of their practice.”
With this introduction, Caplan, an assistant professor at Brown University, opened the inaugural Resonances Lecture — a new series by STUDIO.nano to explore the generative edge where art, science, and technology meet. Delivered on April 28 to an interdisciplinary crowd at MIT.nano, Caplan’s lecture, titled “Analogical Engines — Collaborations across Art and Technology in the 1960s,” traced how artists across Europe and the Americas in the 1960s engaged with and responded to the emerging technological advances of computer science, cybernetics, and early AI. “By the time we reached the 1960s,” she said, “analogies between humans and machines, drawn from computer science and fields like information theory and cybernetics, abound among art historians and artists alike.”
Caplan’s talk centered on two artistic networks, the New Tendencies exhibitions (1961-79) and the Signals gallery in London (1964-66), with a particular emphasis on American artist Liliane Lijn. She deftly analyzed the artist’s material experimentation with contemporary advances in emergent technologies, including quantum physics and mathematical formalism, particularly Heisenberg’s uncertainty principle. She argued that both art historical formalism and mathematical formalism share struggles with representation, indeterminacy, and the tension between constructed and essential truths.
Following her talk, Caplan was joined by MIT faculty Mark Jarzombek, professor of the history and theory of architecture, and Gediminas Urbonas, associate professor of art, culture, and technology (ACT), for a panel discussion moderated by Ardalan SadeghiKivi SM ’22, lecturer of comparative media studies. The conversation expanded on Caplan’s themes with discussions of artists’ attraction to newly developed materials and technology, and the critical dimension of reimagining and repurposing technologies that were originally designed with an entirely different purpose.
Urbonas echoed the urgency of these conversations. “It is exceptionally exciting to witness artists working in dialectical tension with scientists — a tradition that traces back to the founding of the Center for Advanced Visual Studies at MIT and continues at ACT today,” reflected Urbonas. “The dual ontology of science and art enables us to grasp the world as a web of becoming, where new materials, social imaginaries, and aesthetic values are co-constituted through interdisciplinary inquiry. Such collaborations are urgent today, offering tools to reimagine agency, subjectivity, and the role of culture in shaping the future.”
The event concluded with a reception in MIT.nano’s East Lobby, where attendees could view MIT ACT student projects currently on exhibition in MIT.nano’s gallery spaces. The reception was, itself, an intersection of art and technology. “The first lecture of the Resonances Lecture Series lived up to the title,” reflects Jarzombek. “A brilliant talk by Lindsay Caplan proved that the historical and aesthetical dimensions in the sciences have just as much relevance to a critical posture as the technical.”
The Resonances lecture and panel series seeks to gather artists, designers, scientists, engineers, and historians who examine how scientific endeavors shape artistic production, and vice versa. Their insights expose the historical context on how art and science are made and distributed in society and offer hints at the possible futures of such productions.
“When we were considering who to invite to launch this lecture series, Lindsay Caplan immediately came to mind,” says Tobias Putrih, ACT lecturer and academic advisor for STUDIO.nano. “She is one of the most exciting thinkers and historians writing about the intersection between art, technology, and science today. We hope her insights and ideas will encourage further collaborative projects.”
The Resonances series is one of several new activities organized by STUDIO.nano, a program within MIT.nano, to connect the arts with cutting-edge research environments. “MIT.nano generates extraordinary scientific work,” says Samantha Farrell, manager of STUDIO.nano, “but it’s just as vital to create space for cultural reflection. STUDIO.nano invites artists to engage directly with new technologies — and with the questions they raise.”
In addition to the Resonances lectures, STUDIO.nano organizes exhibitions in the public spaces at MIT.nano, and an Encounters series, launched last fall, to bring artists to MIT.nano. To learn about current installations and ongoing collaborations, visit the STUDIO.nano web page.
The Defense Attorney’s Arsenal In Challenging Electronic Monitoring
In criminal prosecutions, electronic monitoring (EM) is pitched as a “humane alternative” to incarceration – but it is not. The latest generation of “e-carceration” tools is burdensome, harsh, and often just as punitive as imprisonment. Fortunately, criminal defense attorneys have options when shielding their clients from this overused and harmful tech.
Framed as a tool that enhances public safety while reducing jail populations, EM is increasingly used as a condition of pretrial release, probation, parole, or even civil detention. However, this technology imposes serious infringements on liberty, privacy, and due process for not only those placed on it but also for people they come into contact with. It can transform homes into digital jails, inadvertently surveil others, impose financial burdens, and punish every misstep—no matter how minor or understandable.
Even though EM may appear less severe than incarceration, research and litigation reveal that these devices often function as a form of detention in all but name. Monitored individuals must often remain at home for long periods, request permission to leave for basic needs, and comply with curfews or “exclusion zones.” Violations, even technical ones—such as a battery running low or a dropped GPS signal—can result in arrest and incarceration. Being able to take care of oneself and reintegrate into the world becomes a minefield of compliance and red tape. The psychological burden, social stigma, and physical discomfort associated with EM are significant, particularly for vulnerable populations.
For many, EM still evokes bulky wrist or ankle “shackles” that can monitor a subject’s location, and sometimes even their blood alcohol levels. These devices have matured along with digital technology, however, and EM is increasingly imposed through more sophisticated devices like smartwatches or mobile phone applications. Newer iterations of EM have also followed a trajectory of collecting much more data, including biometrics and more precise location information.
This issue is more pressing than ever, as the 2020 COVID pandemic led to an explosion in EM adoption. As incarceration and detention facilities became superspreader zones, judges kept some offenders out of these facilities by expanding the use of EM; so much so that some jurisdictions ran out of classic EM devices like ankle bracelets.
Today the number of people placed on EM in the criminal system continues to skyrocket. Fighting the spread of EM requires many tactics, but on the front lines are the criminal defense attorneys challenging EM impositions. This post will focus on the main issues for defense attorneys to consider while arguing against the imposition of this technology.
PRETRIAL ELECTRONIC MONITORING
We’ve seen challenges to EM programs in a variety of ways, including attacking the constitutionality of the program as a whole and arguing against pretrial and/or post-conviction imposition. However, it is likely that the most successful challenges will come from individualized challenges to pretrial EM.
First, courts have not been receptive to arguments that entire EM programs are unconstitutional. For example, in Simon v. San Francisco et al., 135 F.4th 784 (9th Cir. 2025), the Ninth Circuit held that although San Francisco’s EM program constituted a Fourth Amendment search, a warrant was not required. The court explained its decision by stating that the program was a condition of pretrial release, included the sharing of location data, and was consented to by the individual (with counsel present) by signing a form that essentially operated as a contract. This decision exemplifies the court’s failure to grasp the coercive nature of this type of “consent” that is pervasive in the criminal legal system.
Second, pretrial defendants have more robust rights than they do after conviction. While a person’s expectation of privacy may be slightly diminished following arrest but before trial, the Fourth Amendment is not entirely out of the picture. Their “privacy and liberty interests” are, for instance, “far greater” than a person who has been convicted and is on probation or parole. United States v. Scott, 450 F.3d 863, 873 (9th Cir. 2006). Although individuals continue to retain Fourth Amendment rights after conviction, the reasonableness analysis will be heavily weighted towards the state as the defendant is no longer presumed innocent. However, even people on probation have a “substantial” privacy interest. United States v. Lara, 815 F.3d 605, 610 (9th Cir. 2016).
THE FOURTH AMENDMENT
The first foundational constitutional rights threatened by the sheer invasiveness of EM are those protected by the Fourth Amendment. This concern is only heightened as the technology improves and collects increasingly detailed information. Unlike traditional probation or parole supervision, EM often tracks individuals with no geographic limitations or oversight, and can automatically record more than just approximate location information.
Courts have increasingly recognized that this new technology poses greater and more novel threats to our privacy than earlier generations. In Grady v. North Carolina, 575 U.S. 306 (2015), the Supreme Court, relying on United States v. Jones, 565 U.S. 400 (2012), held that attaching a GPS tracking device to a person—even a convicted sex offender—constitutes a Fourth Amendment search and is thus subject to the inquiry of reasonableness. A few years later, the monumental decision in Carpenter v. United States, 138 S. Ct. 2206 (2018), firmly established that Fourth Amendment analysis is affected by the advancement of technology, holding that long-term cell-site location tracking by law enforcement constituted a search requiring a warrant.
As criminal defense attorneys are well aware, the Fourth Amendment’s ostensibly powerful protections are often less effective in practice. Nevertheless, this line of cases still forms a strong foundation for arguing that EM should be subjected to exacting Fourth Amendment scrutiny.
DUE PROCESS
Three key procedural due process challenges that defense attorneys can raise under the Fifth and Fourteenth Amendments are: inadequate hearing, lack of individualized assessment, and failure to consider ability to pay.
First, many courts impose EM without adequate consideration of individual circumstances or less restrictive alternatives. Defense attorneys should demand evidentiary hearings where the government must prove that monitoring is necessary and narrowly tailored. If the defendant is not given notice, a hearing, or the opportunity to object, that could arguably constitute a violation of due process. For example, in the previously mentioned case, Simon v. San Francisco, the Ninth Circuit found that individuals who were not informed of the details of the city’s pretrial EM program in the presence of counsel had their rights violated.
Second, imposition of EM should be based on an individualized assessment rather than a blanket rule. For pretrial defendants, EM is frequently used as a condition of bail. Although under both federal and state bail frameworks, courts are generally required to impose the least restrictive conditions necessary to ensure the defendant’s court appearance and protect the community, many jurisdictions have included EM as a default condition rather than individually assessing whether EM is appropriate. The Bail Reform Act of 1984, for instance, mandates that release conditions be tailored to the individual’s circumstances. Yet in practice, many jurisdictions impose EM categorically, without specific findings or consideration of alternatives. Defense counsel should challenge this practice by insisting that judges articulate on the record why EM is necessary, supported by evidence related to flight risk or danger. Where clients have stable housing, employment, and no history of noncompliance, EM may be more restrictive than justified.
Lastly, financial burdens associated with EM may also implicate due process where a failure to pay can result in violations and incarceration. In Bearden v. Georgia, 461 U.S. 660 (1983), the Supreme Court held that courts cannot revoke probation for failure to pay fines or restitution without first determining whether the failure was willful. Relying on Bearden, defense attorneys can argue that EM fees imposed on indigent clients amount to unconstitutional punishment for poverty. A growing number of lower courts have agreed, particularly where clients were not given the opportunity to contest their ability to pay. Defense attorneys should request fee waivers, present evidence of indigence, and challenge any EM orders that functionally condition liberty on wealth.
STATE LAW PROTECTIONS
State constitutions and statutes often provide stronger protections than federal constitutional minimums. In addition to state corollaries to the Fourth and Fifth Amendment, some states have also enacted statutes to govern pretrial release and conditions. A number of states have established a presumption in favor of release on recognizance or personal recognizance bonds. In those jurisdictions, the state has to overcome this presumption before the court can impose restrictive conditions like EM. Some states require courts to impose the least restrictive conditions necessary to achieve legitimate purposes, making EM appropriate only when less restrictive alternatives are inadequate.
Most pretrial statutes list specific factors courts must consider, such as community ties, employment history, family responsibilities, nature of the offense, criminal history, and risk of flight or danger to community. Courts that fail to adequately consider these factors or impose generic monitoring conditions may violate statutory requirements.
For example, Illinois's SAFE-T Act includes specific protections against overly restrictive EM conditions, but implementation has been inconsistent. Defense attorneys in Illinois and states with similar laws should challenge monitoring conditions that violate specific statutory requirements.
TECHNOLOGICAL ISSUES
Attorneys should also consider the reliability of EM technology. Devices frequently produce false violations and alerts, particularly in urban areas or buildings where GPS signals are weak. Misleading data can lead to violation hearings and even incarceration. Attorneys should demand access to raw location data, vendor records, and maintenance logs. Expert testimony can help demonstrate technological flaws, human error, or system limitations that cast doubt on the validity of alleged violations.
In some jurisdictions, EM programs are operated by private companies under contracts with probation departments, courts, or sheriffs. These companies profit from fees paid by clients and have minimal oversight. Attorneys should request copies of contracts, training manuals, and policies governing EM use. Discovery may reveal financial incentives, lack of accountability, or systemic issues such as racial or geographic disparities in monitoring. These findings can support broader litigation or class actions, particularly where indigent individuals are jailed for failing to pay private vendors.
Recent research provides compelling evidence that EM fails to achieve its stated purposes while creating significant harms. Studies have not found significant relationships between EM of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do they show that law enforcement is employing EM on individuals they would otherwise put in jail.
To the contrary, studies indicate that law enforcement is using EM to surveil and constrain the liberty of those who wouldn't otherwise be detained, as the rise in the number of people placed on EM has not coincided with a decrease in detention. This research demonstrates that EM represents an expansion of government control rather than a true alternative to detention.
Additionally, as described above, EM devices may be rife with technical issues, including communication system failures that prevent proper monitoring and device malfunctions that cause electric shocks. Cutting off ankle bracelets is a common occurrence among users, especially when the technology is malfunctioning or hurting them. Defense attorneys should document all technical issues and argue that unreliable technology cannot form the basis for liberty restrictions or additional criminal charges.
CREATING A RECORD FOR APPEAL
Attorneys should always make sure they are creating a record on which the EM imposition can be appealed, should the initial hearing be unsuccessful. This requires lawyers to include the factual basis for the challenge and preserve the appropriate legal arguments. The modern generation of EM has yet to undergo the extensive judicial review that ankle shackles have been subjected to, making it integral to build an extensive record of the ways in which it is more invasive and harmful, so that it can be properly argued to an appellate court that the nature of the newest EM requires more than perfunctory application of decades-old precedent. As we saw with Carpenter, the rapid advancement of technology may push the courts to reconsider older paradigms for constitutional analysis and find them wanting. Thus, a comprehensive record is critical to show EM as it is—an extension of incarceration—rather than a benevolent alternative to detention.
Defeating electronic monitoring will require a multidimensional approach that includes litigating constitutional claims, contesting factual assumptions, exposing technological failures, and advocating for systemic reforms. As the carceral state evolves, attorneys must remain vigilant and proactive in defending the rights of their clients.
The EU’s “Encryption Roadmap” Makes Everyone Less Safe
EFF has joined more than 80 civil society organizations, companies, and cybersecurity experts in signing a letter urging the European Commission to change course on its recently announced “Technology Roadmap on Encryption.” The roadmap, part of the EU’s ProtectEU strategy, discusses new ways for law enforcement to access encrypted data. That framing is dangerously flawed.
Let’s be clear: there is no technical “lawful access” to end-to-end encrypted messages that preserves security and privacy. Any attempt to circumvent encryption—like client-side scanning—creates new vulnerabilities, threatening the very people governments claim to protect.
This letter is significant not just for its content, but for who signed it. The breadth of the coalition makes one thing clear: civil society and the global technical community overwhelmingly reject the idea that weakening encryption can coexist with respect for fundamental rights.
Strong encryption is a pillar of cybersecurity, protecting everyone: activists, journalists, everyday web users, and critical infrastructure. Undermining it doesn’t just hurt privacy. It makes everyone’s data more vulnerable and weakens the EU’s ability to defend against cybersecurity threats.
EU officials should scrap any roadmap focused on circumvention and instead invest in stronger, more widespread use of end-to-end encryption. Security and human rights aren’t in conflict. They depend on each other.
You can read the full letter here.
AI stirs up the recipe for concrete in MIT study
For weeks, the whiteboard in the lab was crowded with scribbles, diagrams, and chemical formulas. A research team across the Olivetti Group and the MIT Concrete Sustainability Hub (CSHub) was working intensely on a key problem: How can we reduce the amount of cement in concrete to save on costs and emissions?
The question was certainly not new; materials like fly ash, a byproduct of coal combustion, and slag, a byproduct of steelmaking, have long been used to replace some of the cement in concrete mixes. However, the demand for these products is outpacing supply as industry looks to reduce its climate impacts by expanding their use, making the search for alternatives urgent. The challenge the team discovered wasn’t a lack of candidates; the problem was that there were too many to sort through.
On May 17, the team, led by postdoc Soroush Mahjoubi, published an open-access paper in Nature’s Communications Materials outlining their solution. “We realized that AI was the key to moving forward,” notes Mahjoubi. “There is so much data out there on potential materials — hundreds of thousands of pages of scientific literature. Sorting through them would have taken many lifetimes of work, by which time more materials would have been discovered!”
With large language models, like the chatbots many of us use daily, the team built a machine-learning framework that evaluates and sorts candidate materials based on their physical and chemical properties.
“First, there is hydraulic reactivity. The reason that concrete is strong is that cement — the ‘glue’ that holds it together — hardens when exposed to water. So, if we replace this glue, we need to make sure the substitute reacts similarly,” explains Mahjoubi. “Second, there is pozzolanicity. This is when a material reacts with calcium hydroxide, a byproduct created when cement meets water, to make the concrete harder and stronger over time. We need to balance the hydraulic and pozzolanic materials in the mix so the concrete performs at its best.”
Analyzing scientific literature and over 1 million rock samples, the team used the framework to sort candidate materials into 19 types, ranging from biomass to mining byproducts to demolished construction materials. Mahjoubi and his team found that suitable materials were available globally — and, more impressively, many could be incorporated into concrete mixes just by grinding them. This means it’s possible to achieve emissions and cost savings without much additional processing.
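The article's distinction between hydraulic and pozzolanic behavior can be made concrete with a toy heuristic. The sketch below is illustrative only — it is not the paper's actual framework, and the threshold values are assumptions — but it captures the conventional cement-chemistry intuition: high-CaO glassy materials (like slag) tend to be hydraulic, while low-CaO, silica- and alumina-rich materials (like fly ash or crushed ceramics) tend to be pozzolanic.

```python
# Illustrative heuristic (not the study's actual model): classify a candidate
# cement substitute from its oxide composition. Threshold values are assumed
# for demonstration, not taken from the paper.

def classify_candidate(oxides: dict) -> str:
    """oxides: mass fractions, e.g. {"CaO": 0.40, "SiO2": 0.35, "Al2O3": 0.10}."""
    cao = oxides.get("CaO", 0.0)
    reactive = oxides.get("SiO2", 0.0) + oxides.get("Al2O3", 0.0)
    if reactive == 0:
        return "inert"
    basicity = cao / reactive  # lime-to-(silica+alumina) ratio
    if basicity > 0.8:
        return "hydraulic"     # cement-like: hardens on contact with water
    if reactive > 0.5:
        return "pozzolanic"    # reacts with calcium hydroxide over time
    return "inert"

slag = {"CaO": 0.42, "SiO2": 0.35, "Al2O3": 0.12}
ceramic = {"CaO": 0.02, "SiO2": 0.65, "Al2O3": 0.25}
print(classify_candidate(slag))     # hydraulic
print(classify_candidate(ceramic))  # pozzolanic
```

In the study itself, the language-model framework extracts and weighs many more properties than composition alone; the point here is only how a balance of hydraulic and pozzolanic candidates might be screened programmatically.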
“Some of the most interesting materials that could replace a portion of cement are ceramics,” notes Mahjoubi. “Old tiles, bricks, pottery — all these materials may have high reactivity. That’s something we’ve observed in ancient Roman concrete, where ceramics were added to help waterproof structures. I’ve had many interesting conversations on this with Professor Admir Masic, who leads a lot of the ancient concrete studies here at MIT.”
The potential of everyday materials like ceramics and industrial materials like mine tailings is an example of how materials like concrete can help enable a circular economy. By identifying and repurposing materials that would otherwise end up in landfills, researchers and industry can help to give these materials a second life as part of our buildings and infrastructure.
Looking ahead, the research team is planning to upgrade the framework to be capable of assessing even more materials, while experimentally validating some of the best candidates. “AI tools have gotten this research far in a short time, and we are excited to see how the latest developments in large language models enable the next steps,” says Professor Elsa Olivetti, senior author on the work and member of the MIT Department of Materials Science and Engineering. She serves as an MIT Climate Project mission director, a CSHub principal investigator, and the leader of the Olivetti Group.
“Concrete is the backbone of the built environment,” says Randolph Kirchain, co-author and CSHub director. “By applying data science and AI tools to material design, we hope to support industry efforts to build more sustainably, without compromising on strength, safety, or durability.”
In addition to Mahjoubi, Olivetti, and Kirchain, co-authors on the work include MIT postdoc Vineeth Venugopal; Ipek Bensu Manav SM ’21, PhD ’24; and CSHub Deputy Director Hessam AzariJafari.
245 Days Without Justice: Laila Soueif’s Hunger Strike and the Fight to Free Alaa Abd el-Fattah
Laila Soueif has now been on hunger strike for 245 days. On Thursday night, she was taken to the hospital once again. Soueif’s hunger strike is a powerful act of protest against the failures of two governments. The Egyptian government continues to deny basic justice by keeping her son, Alaa Abd el-Fattah, behind bars—his only “crime” was sharing a Facebook post about the torture of a fellow detainee. Meanwhile, the British government, despite Alaa’s citizenship, has failed to secure even a single consular visit. Its muted response reflects an unacceptable unwillingness to stand up for the rights of its own citizens.
This is the second time this year that Soueif’s health has collapsed due to her hunger strike. Now, her condition is dire. Her blood sugar is dangerously low, and every day, her family fears it could be her last. Doctors say it’s a miracle she’s still alive.
Her protest is a call for accountability—a demand that both governments uphold the rule of law and protect human rights, not only in rhetoric, but through action.
Late last week, after an 18-month investigation, the United Nations Working Group on Arbitrary Detention (UNWGAD) issued its Opinion on Abd el-Fattah’s case, stating that he is being held unlawfully by the Egyptian government. Egypt’s refusal to provide the United Kingdom with consular access to its citizen further violates the country’s obligations under international law.
As stated in a letter to British Prime Minister Keir Starmer by 21 organizations, including EFF, the UK must now use every tool it has at its disposal to ensure that Alaa Abd el-Fattah is released immediately.
MIT students and postdoc explore the inner workings of Capitol Hill
This spring, 25 MIT students and a postdoc traveled to Washington, where they met with congressional offices to advocate for federal science funding and specific, science-based policies based on insights from their research on pressing issues — including artificial intelligence, health, climate and ocean science, energy, and industrial decarbonization. Organized annually by the Science Policy Initiative (SPI), this year’s trip came at a particularly critical moment, as science agencies are facing unprecedented funding cuts.
Over the course of two days, the group met with 66 congressional offices across 35 states and select committees, advocating for stable funding for science agencies such as the Department of Energy, the National Oceanic and Atmospheric Administration, the National Science Foundation, NASA, and the Department of Defense.
Congressional Visit Days (CVD), organized by SPI, offer students and researchers a hands-on introduction to federal policymaking. In addition to meetings on Capitol Hill, participants connected with MIT alumni in government and explored potential career paths in science policy.
This year’s trip was co-organized by Mallory Kastner, a PhD student in biological oceanography at MIT and Woods Hole Oceanographic Institution (WHOI), and Julian Ufert, a PhD student in chemical engineering at MIT. Ahead of the trip, participants attended training sessions hosted by SPI, the MIT Washington Office, and the MIT Policy Lab. These sessions covered effective ways to translate scientific findings into policy, strategies for a successful advocacy meeting, and hands-on demos of a congressional meeting.
Participants then contacted their representatives’ offices in advance and tailored their talking points to each office’s committees and priorities. This structure gave participants direct experience initiating policy conversations with those actively working on issues they cared about.
Audrey Parker, a PhD student in civil and environmental engineering studying methane abatement, emphasizes the value of connecting scientific research with priorities in Congress: “Through CVD, I had the opportunity to contribute to conversations on science-backed solutions and advocate for the role of research in shaping policies that address national priorities — including energy, sustainability, and climate change.”
To many of the participants, stepping into the shoes of a policy advisor was a welcome diversion from their academic duties and scientific routine. For Alex Fan, an undergraduate majoring in electrical engineering and computer science, the trip was enlightening: “It showed me that student voices really do matter in shaping science policy. Meeting with lawmakers, especially my own representative, Congresswoman Bonamici, made the experience personal and inspiring. It has made me seriously consider a future at the intersection of research and policy.”
“I was truly impressed by the curiosity and dedication of our participants, as well as the preparation they brought to each meeting,” says Ufert. “It was inspiring to watch them grow into confident advocates, leveraging their experience as students and their expertise as researchers to advise on policy needs.”
Kastner adds: “It was eye-opening to see the disconnect between scientists and policymakers. A lot of knowledge we generate as scientists rarely makes it onto the desk of congressional staff, and even more rarely onto the congressperson’s. CVD was an incredibly empowering experience for me as a scientist — not only am I more motivated to broaden my scientific outreach to legislators, but I now also have the skills to do so.”
Funding is the bedrock that allows scientists to carry out research and make discoveries. In the United States, federal funding for science has enabled major technological breakthroughs and advancements in manufacturing and other industrial sectors, and led to important environmental protection standards. While participants found the degree of support for science funding variable among offices from across the political spectrum, they were reassured by the fact that many offices on both sides of the aisle still recognized the significance of science.
Teaching AI models the broad strokes to sketch more like humans do
When you’re trying to communicate or understand ideas, words don’t always do the trick. Sometimes the more efficient approach is to do a simple sketch of that concept — for example, diagramming a circuit might help make sense of how the system works.
But what if artificial intelligence could help us explore these visualizations? While these systems are typically proficient at creating realistic paintings and cartoonish drawings, many models fail to capture the essence of sketching: its stroke-by-stroke, iterative process, which helps humans brainstorm and edit how they want to represent their ideas.
A new drawing system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University can sketch more like we do. Their method, called “SketchAgent,” uses a multimodal language model — AI systems that train on text and images, like Anthropic’s Claude 3.5 Sonnet — to turn natural language prompts into sketches in a few seconds. For example, it can doodle a house either on its own or through collaboration, drawing with a human or incorporating text-based input to sketch each part separately.
The researchers showed that SketchAgent can create abstract drawings of diverse concepts, like a robot, butterfly, DNA helix, flowchart, and even the Sydney Opera House. One day, the tool could be expanded into an interactive art game that helps teachers and researchers diagram complex concepts or give users a quick drawing lesson.
CSAIL postdoc Yael Vinker, who is the lead author of a paper introducing SketchAgent, notes that the system introduces a more natural way for humans to communicate with AI.
“Not everyone is aware of how much they draw in their daily life. We may draw our thoughts or workshop ideas with sketches,” she says. “Our tool aims to emulate that process, making multimodal language models more useful in helping us visually express ideas.”
SketchAgent teaches these models to draw stroke-by-stroke without any additional training — instead, the researchers developed a “sketching language” in which a sketch is translated into a numbered sequence of strokes on a grid. The system was given an example of how concepts like a house would be drawn, with each stroke labeled according to what it represented — such as the seventh stroke being a rectangle labeled as a “front door” — to help the model generalize to new concepts.
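To make the idea of a text-based "sketching language" tangible, here is a toy version: each stroke is a numbered list of grid cells with a semantic label, so a language model can emit an entire sketch as plain text, one stroke at a time. The schema and coordinates below are invented for illustration, not SketchAgent's actual format.

```python
# Toy stroke-sequence representation (format is illustrative, not the paper's
# exact schema): strokes are numbered, labeled, and defined on a coarse grid.

house = [
    {"id": 1, "label": "base",       "points": [(2, 8), (2, 4), (10, 4), (10, 8), (2, 8)]},
    {"id": 2, "label": "roof",       "points": [(2, 4), (6, 1), (10, 4)]},
    {"id": 3, "label": "front door", "points": [(5, 8), (5, 6), (7, 6), (7, 8)]},
]

def to_svg_paths(strokes, cell=20):
    """Scale grid strokes into SVG polyline elements for rendering."""
    paths = []
    for s in strokes:
        pts = " ".join(f"{x * cell},{y * cell}" for x, y in s["points"])
        paths.append(f'<polyline points="{pts}" fill="none" stroke="black"/>')
    return paths

for p in to_svg_paths(house):
    print(p)
```

Because every stroke carries a label, ablation experiments like the sailboat test described later — deleting one collaborator's strokes and checking whether the sketch stays recognizable — reduce to filtering this list.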
Vinker wrote the paper alongside three CSAIL affiliates — postdoc Tamar Rott Shaham, undergraduate researcher Alex Zhao, and MIT Professor Antonio Torralba — as well as Stanford University Research Fellow Kristine Zheng and Assistant Professor Judith Ellen Fan. They’ll present their work at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR) this month.
Assessing AI’s sketching abilities
While text-to-image models such as DALL-E 3 can create intriguing drawings, they lack a crucial component of sketching: the spontaneous, creative process where each stroke can impact the overall design. On the other hand, SketchAgent’s drawings are modeled as a sequence of strokes, appearing more natural and fluid, like human sketches.
Prior works have mimicked this process, too, but they trained their models on human-drawn datasets, which are often limited in scale and diversity. SketchAgent uses pre-trained language models instead, which are knowledgeable about many concepts, but don’t know how to sketch. When the researchers taught language models this process, SketchAgent began to sketch diverse concepts it hadn’t explicitly trained on.
Still, Vinker and her colleagues wanted to see if SketchAgent was actively working with humans on the sketching process, or if it was working independently of its drawing partner. The team tested their system in collaboration mode, where a human and a language model work toward drawing a particular concept in tandem. Removing SketchAgent’s contributions revealed that their tool’s strokes were essential to the final drawing. In a drawing of a sailboat, for instance, removing the artificial strokes representing a mast made the overall sketch unrecognizable.
In another experiment, CSAIL and Stanford researchers plugged different multimodal language models into SketchAgent to see which could create the most recognizable sketches. Their default backbone model, Claude 3.5 Sonnet, generated the most human-like vector graphics (essentially text-based files that can be converted into high-resolution images). It outperformed models like GPT-4o and Claude 3 Opus.
“The fact that Claude 3.5 Sonnet outperformed other models like GPT-4o and Claude 3 Opus suggests that this model processes and generates visual-related information differently,” says co-author Tamar Rott Shaham.
She adds that SketchAgent could become a helpful interface for collaborating with AI models beyond standard, text-based communication. “As models advance in understanding and generating other modalities, like sketches, they open up new ways for users to express ideas and receive responses that feel more intuitive and human-like,” says Shaham. “This could significantly enrich interactions, making AI more accessible and versatile.”
While SketchAgent’s drawing prowess is promising, it can’t make professional sketches yet. It renders simple representations of concepts using stick figures and doodles, but struggles to doodle things like logos, sentences, complex creatures like unicorns and cows, and specific human figures.
At times, their model also misunderstood users’ intentions in collaborative drawings, like when SketchAgent drew a bunny with two heads. According to Vinker, this may be because the model breaks down each task into smaller steps (also called “Chain of Thought” reasoning). When working with humans, the model creates a drawing plan, potentially misinterpreting which part of that outline a human is contributing to. The researchers could possibly refine these drawing skills by training on synthetic data from diffusion models.
Additionally, SketchAgent often requires a few rounds of prompting to generate human-like doodles. In the future, the team aims to make it easier to interact and sketch with multimodal language models, including refining their interface.
Still, the tool suggests AI could draw diverse concepts the way humans do, with step-by-step human-AI collaboration that results in more aligned final designs.
This work was supported, in part, by the U.S. National Science Foundation, a Hoffman-Yee Grant from the Stanford Institute for Human-Centered AI, the Hyundai Motor Co., the U.S. Army Research Laboratory, the Zuckerman STEM Leadership Program, and a Viterbi Fellowship.
Eight with MIT ties win 2025 Hertz Foundation Fellowships
The Hertz Foundation announced that it has awarded fellowships to eight MIT affiliates. The prestigious award provides each recipient with five years of doctoral-level research funding (up to a total of $250,000), which gives them an unusual measure of independence in their graduate work to pursue groundbreaking research.
The MIT-affiliated awardees are Matthew Caren ’25; April Qiu Cheng ’24; Arav Karighattam, who begins his PhD at the Institute this fall; Benjamin Lou ’25; Isabelle A. Quaye ’22, MNG ’24; Albert Qin ’24; Ananthan Sadagopan ’24; and Gianfranco (Franco) Yee ’24.
“Hertz Fellows embody the promise of future scientific breakthroughs, major engineering achievements and thought leadership that is vital to our future,” said Stephen Fantone, chair of the Hertz Foundation board of directors and president and CEO of Optikos Corp., in the announcement. “The newest recipients will direct research teams, serve in leadership positions in our government and take the helm of major corporations and startups that impact our communities and the world.”
In addition to funding, fellows receive access to Hertz Foundation programs throughout their lives, including events, mentoring, and networking. They join the ranks of over 1,300 former Hertz Fellows since the fellowship was established in 1963 who are leaders and scholars in a range of technology, science, and engineering fields. Former fellows have contributed to breakthroughs in such areas as advanced medical therapies, computational systems used by billions of people daily, global defense networks, and the recent launch of the James Webb Space Telescope.
This year’s MIT recipients are among a total of 19 Hertz Foundation Fellows selected from across the United States.
Matthew Caren ’25 studied electrical engineering and computer science, mathematics, and music at MIT. His research focuses on computational models of how people use their voices to communicate sound at the Computer Science and Artificial Intelligence Lab (CSAIL) and interpretable real-time machine listening systems at the MIT Music Technology Lab. He spent several summers developing large language model systems and bioinformatics algorithms at Apple and a year researching expressive digital instruments at Stanford University’s Center for Computer Research in Music and Acoustics. He chaired the MIT Schwarzman College of Computing Undergraduate Advisory Group, where he led undergraduate committees on interdisciplinary computing AI and was a founding member of the MIT Voxel Lab for music and arts technology. In addition, Caren has invented novel instruments used by Grammy-winning musicians on international stages. He plans to pursue a doctorate at Stanford.
April Qiu Cheng ’24 majored in physics at MIT, graduating in just three years. Their research focused on black hole phenomenology, gravitational-wave inference, and the use of fast radio bursts as a statistical probe of large-scale structure. They received numerous awards, including an MIT Outstanding Undergraduate Research Award, the MIT Barrett Prize, the Astronaut Scholarship, and the Princeton President’s Fellowship. Cheng contributed to the physics department community by serving as vice president of advocacy for Undergraduate Women in Physics and as the undergraduate representative on the Physics Values Committee. In addition, they have participated in various science outreach programs for middle and high school students. Since graduating, they have been a Fulbright Fellow at the Max Planck Institute for Gravitational Physics, where they have been studying gravitational-wave cosmology. Cheng will begin a doctorate in astrophysics at Princeton in the fall.
Arav Karighattam was home schooled, and by age 14 had completed most of the undergraduate and graduate courses in physics and mathematics at the University of California at Davis. He graduated from Harvard University in 2024 with a bachelor’s degree in mathematics and will attend MIT to pursue a PhD, also in mathematics. Karighattam is fascinated by algebraic number theory and arithmetic geometry and seeks to understand the mysteries underlying the structure of solutions to Diophantine equations. He also wants to apply his mathematical skills to mitigating climate change and biodiversity loss. At a recent conference at MIT titled “Mordell’s Conjecture 100 Years Later,” Karighattam distinguished himself as the youngest speaker to present a paper among graduate students, postdocs, and faculty members.
Benjamin Lou ’25 graduated from MIT in May with a BS in physics and is interested in finding connections between fundamental truths of the universe. One of his research projects applies symplectic techniques to understand the nature of precision measurements using quantum states of light. Another is about geometrically unifying several theorems in quantum mechanics using the Prüfer transformation. For his work, Lou was honored with the Barry Goldwater Scholarship. Lou will pursue his doctorate at MIT, where he plans to work on unifying quantum mechanics and gravity, with an eye toward uncovering experimentally testable predictions. Living with the debilitating disease spinal muscular atrophy, which causes severe, full-body weakness and makes scratchwork unfeasible, Lou has developed a unique learning style emphasizing mental visualization. He also co-founded and helped lead the MIT Assistive Technology Club, dedicated to empowering those with disabilities using creative technologies. He is working on a robotic self-feeding device for those who cannot eat independently.
Isabelle A. Quaye ’22, MNG ’24 studied electrical engineering and computer science as an undergraduate at MIT, with a minor in economics. She was awarded competitive fellowships and scholarships from Hyundai, Intel, D. E. Shaw, and Palantir, and received the Albert G. Hill Prize, given to juniors and seniors who have maintained high academic standards and have made continued contributions to improving the quality of life for underrepresented students at MIT. While obtaining her master’s degree at MIT, she focused on theoretical computer science and systems. She is currently a software engineer at Apple, where she continues to develop frameworks that harness intelligence from data to improve systems and processes. Quaye also believes in contributing to the advancement of science and technology through teaching and has volunteered in summer programs to teach programming and informatics to high school students in the United States and Ghana.
Albert Qin ’24 majored in physics and mathematics at MIT. He also pursued an interest in biology, researching single-molecule approaches to study transcription factor diffusion in living cells and studying the cell circuits that control animal development. His dual interests have motivated him to find common ground between physics and biological fields. Inspired by his MIT undergraduate advisors, he hopes to become a teacher and mentor for aspiring young scientists. Qin is currently pursuing a PhD at Princeton University, addressing questions about the behavior of neural networks — both artificial and biological — using a variety of approaches and ideas from physics and neuroscience.
Ananthan Sadagopan ’24 is currently pursuing a doctorate in biological and biomedical science at Harvard University, focusing on chemical biology and the development of new therapeutic strategies for intractable diseases. He earned his BS at MIT in chemistry and biology in three years and led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing machine learning tools for cancer dependency prediction, using small molecules for targeted protein relocalization and creating a generalizable strategy to drug the most mutated gene in cancer (TP53). He published as the first author in top journals, such as Cell, during his undergraduate career. He also holds patents related to his work on cancer dependency prediction and drugging TP53. While at the Institute, he served as president of the Chemistry Undergraduate Association, winning both the First-Year and Senior Chemistry Achievement Awards, and was head of the events committee for the MIT Science Olympiad.
Gianfranco (Franco) Yee ’24 majored in biological engineering at MIT, conducting research in the Manalis Lab on chemical gradients in the gut microenvironment and helping to develop a novel gut-on-a-chip platform for culturing organoids under these gradients. His senior thesis extended this work to the microbiome, investigating host-microbe interactions linked to intestinal inflammation and metabolic disorders. Yee also earned a concentration in education at MIT, and is committed to increasing access to STEM resources in underserved communities. He co-founded Momentum AI, an educational outreach program that teaches computer science to high school students across Greater Boston. The inaugural program served nearly 100 students and included remote outreach efforts in Ukraine and China. Yee has also worked with MIT Amphibious Achievement and the MIT Office of Engineering Outreach Programs. He currently attends Gerstner Sloan Kettering Graduate School, where he plans to leverage the gut microbiome and immune system to develop innovative therapeutic treatments.
Former Hertz Fellows include two Nobel laureates; recipients of 11 Breakthrough Prizes and three MacArthur Foundation “genius awards;” and winners of the Turing Award, the Fields Medal, the National Medal of Technology, the National Medal of Science, and the Wall Street Journal Technology Innovation Award. In addition, 54 are members of the National Academies of Sciences, Engineering and Medicine, and 40 are fellows of the American Association for the Advancement of Science. Hertz Fellows hold over 3,000 patents, have founded more than 375 companies, and have created hundreds of thousands of science and technology jobs.
3 Questions: How to help students recognize potential bias in their AI datasets
Every year, thousands of students take courses that teach them how to deploy artificial intelligence models that can help doctors diagnose disease and determine appropriate treatments. However, many of these courses omit a key element: training students to detect flaws in the training data used to develop the models.
Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School, has documented these shortcomings in a new paper and hopes to persuade course developers to teach students to more thoroughly evaluate their data before incorporating it into their models. Many previous studies have found that models trained mostly on clinical data from white males don’t work well when applied to people from other groups. Here, Celi describes the impact of such bias and how educators might address it in their teachings about AI models.
Q: How does bias get into these datasets, and how can these shortcomings be addressed?
A: Any problems in the data will be baked into any modeling of the data. In the past we have described instruments and devices that don’t work well across individuals. As one example, we found that pulse oximeters overestimate oxygen levels for people of color, because there weren’t enough people of color enrolled in the clinical trials of the devices. We remind our students that medical devices and equipment are optimized on healthy young males. They were never optimized for an 80-year-old woman with heart failure, and yet we use them for those purposes. And the FDA does not require that a device work well across the diverse population we will be using it on. All they need is proof that it works on healthy subjects.
Additionally, the electronic health record system is in no shape to be used as the building blocks of AI. Those records were not designed to be a learning system, and for that reason, you have to be really careful about using electronic health records. The electronic health record system needs to be replaced, but that’s not going to happen anytime soon, so we need to be smarter. We need to be more creative about using the data that we have now, no matter how bad they are, in building algorithms.
One promising avenue that we are exploring is the development of a transformer model of numeric electronic health record data, including but not limited to laboratory test results. Modeling the underlying relationship between the laboratory tests, the vital signs and the treatments can mitigate the effect of missing data as a result of social determinants of health and provider implicit biases.
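The imputation idea behind that transformer approach can be shown with something far simpler. The sketch below is not the model Celi describes — it is a plain least-squares fit on synthetic numbers — but it illustrates the same principle: when measurements are correlated, a missing lab value can be predicted from the ones that were recorded rather than dropped.

```python
# Illustrative only: impute a missing lab value from a correlated one using
# ordinary least squares on synthetic data. A transformer over full EHR
# sequences learns much richer versions of this relationship.
import random

random.seed(0)
# Synthetic cohort: creatinine roughly tracks blood urea nitrogen (BUN).
bun = [random.uniform(5, 40) for _ in range(200)]
creatinine = [0.05 * b + random.gauss(0.6, 0.1) for b in bun]

# Fit creatinine = a * BUN + b on patients where both labs were recorded.
n = len(bun)
mx = sum(bun) / n
my = sum(creatinine) / n
a = sum((x - mx) * (y - my) for x, y in zip(bun, creatinine)) / \
    sum((x - mx) ** 2 for x in bun)
b = my - a * mx

# Impute creatinine for a patient whose lab was never ordered (BUN = 30).
print(round(a * 30 + b, 2))
```

The caveat from the interview applies directly: if a lab is missing *because* of who the patient is — a social determinant or an implicit bias in ordering — the model must account for that mechanism, not just fill the gap.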
Q: Why is it important for courses in AI to cover the sources of potential bias? What did you find when you analyzed such courses’ content?
A: Our course at MIT started in 2016, and at some point we realized that we were encouraging people to race to build models that are overfitted to some statistical measure of model performance, when in fact the data that we’re using is rife with problems that people are not aware of. At that time, we were wondering: How common is this problem?
Our suspicion was that if you looked at the courses where the syllabus is available online, or at the online courses, none of them even bothers to tell the students that they should be paranoid about the data. And sure enough, when we looked at the different online courses, it’s all about building the model. How do you build the model? How do you visualize the data? We found that of 11 courses we reviewed, only five included sections on bias in datasets, and only two contained any significant discussion of bias.
That said, we cannot discount the value of these courses. I’ve heard lots of stories where people self-study based on these online courses, but at the same time, given how influential they are, how impactful they are, we need to really double down on requiring them to teach the right skillsets, as more and more people are drawn to this AI multiverse. It’s important for people to really equip themselves with the agency to be able to work with AI. We’re hoping that this paper will shine a spotlight on this huge gap in the way we teach AI now to our students.
Q: What kind of content should course developers be incorporating?
A: One, giving them a checklist of questions in the beginning. Where did this data come from? Who were the observers? Who were the doctors and nurses who collected the data? And then learn a little bit about the landscape of those institutions. If it’s an ICU database, they need to ask who makes it to the ICU, and who doesn’t make it to the ICU, because that already introduces a sampling selection bias. If all the minority patients don’t even get admitted to the ICU because they cannot reach the ICU in time, then the models are not going to work for them. Truly, to me, 50 percent of the course content should really be understanding the data, if not more, because the modeling itself is easy once you understand the data.
Since 2014, the MIT Critical Data consortium has been organizing datathons (data “hackathons”) around the world. At these gatherings, doctors, nurses, other health care workers, and data scientists get together to comb through databases and try to examine health and disease in the local context. Textbooks and journal papers present diseases based on observations and trials involving a narrow demographic, typically from countries with the resources for research.
Our main objective now, what we want to teach them, is critical thinking skills. And the main ingredient for critical thinking is bringing together people with different backgrounds.
You cannot teach critical thinking in a room full of CEOs or in a room full of doctors. The environment is just not there. When we have datathons, we don’t even have to teach them how do you do critical thinking. As soon as you bring the right mix of people — and it’s not just coming from different backgrounds but from different generations — you don’t even have to tell them how to think critically. It just happens. The environment is right for that kind of thinking. So, we now tell our participants and our students, please, please do not start building any model unless you truly understand how the data came about, which patients made it into the database, what devices were used to measure, and are those devices consistently accurate across individuals?
When we have events around the world, we encourage them to look for data sets that are local, so that they are relevant. There’s resistance because they know that they will discover how bad their data sets are. We say that that’s fine. This is how you fix that. If you don’t know how bad they are, you’re going to continue collecting them in a very bad manner, and they’ll be useless. You have to acknowledge that you’re not going to get it right the first time, and that’s perfectly fine. MIMIC (the Medical Information Mart for Intensive Care database built at Beth Israel Deaconess Medical Center) took a decade before we had a decent schema, and we only have a decent schema because people were telling us how bad MIMIC was.
We may not have the answers to all of these questions, but we can evoke something in people that helps them realize that there are so many problems in the data. I’m always thrilled to look at the blog posts from people who attended a datathon, who say that their world has changed. Now they’re more excited about the field because they realize the immense potential, but also the immense risk of harm if they don’t do this correctly.