MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Closing the design-to-manufacturing gap for optical devices

Wed, 12/13/2023 - 12:00am

Photolithography involves manipulating light to precisely etch features onto a surface, and is commonly used to fabricate computer chips and optical devices like lenses. But tiny deviations during the manufacturing process often cause these devices to fall short of their designers’ intentions.

To help close this design-to-manufacturing gap, researchers from MIT and the Chinese University of Hong Kong used machine learning to build a digital simulator that mimics a specific photolithography manufacturing process. Their technique utilizes real data gathered from the photolithography system, so it can more accurately model how the system would fabricate a design.

The researchers integrate this simulator into a design framework, along with another digital simulator that emulates the performance of the fabricated device in downstream tasks, such as producing images with computational cameras. These connected simulators enable a user to produce an optical device that better matches its design and reaches the best task performance.

This technique could help scientists and engineers create more accurate and efficient optical devices for applications like mobile cameras, augmented reality, medical imaging, entertainment, and telecommunications. And because the pipeline of learning the digital simulator utilizes real-world data, it can be applied to a wide range of photolithography systems.

“This idea sounds simple, but the reasons people haven’t tried this before are that real data can be expensive and there are no precedents for how to effectively coordinate the software and hardware to build a high-fidelity dataset,” says Cheng Zheng, a mechanical engineering graduate student who is co-lead author of an open-access paper describing the work. “We have taken risks and done extensive exploration, for example, developing and trying characterization tools and data-exploration strategies, to determine a working scheme. The result is surprisingly good, showing that real data work much more efficiently and precisely than data generated by simulators composed of analytical equations. Even though it can be expensive and one can feel clueless at the beginning, it is worth doing.”

Zheng wrote the paper with co-lead author Guangyuan Zhao, a graduate student at the Chinese University of Hong Kong; and her advisor, Peter T. So, a professor of mechanical engineering and biological engineering at MIT. The research will be presented at the SIGGRAPH Asia Conference.

Printing with light

Photolithography involves projecting a pattern of light onto a surface, which causes a chemical reaction that etches features into the substrate. However, the fabricated device ends up with a slightly different pattern because of minuscule deviations in the light’s diffraction and tiny variations in the chemical reaction.

Because photolithography is complex and hard to model, many existing design approaches rely on equations derived from physics. These general equations give some sense of the fabrication process but can’t capture all deviations specific to a photolithography system. This can cause devices to underperform in the real world.

For their technique, which they call neural lithography, the MIT researchers build their photolithography simulator using physics-based equations as a base, and then incorporate a neural network trained on real, experimental data from a user’s photolithography system. This neural network, a type of machine-learning model loosely based on the human brain, learns to compensate for many of the system’s specific deviations.

The researchers gather data for their method by generating many designs that cover a wide range of feature sizes and shapes, which they fabricate using the photolithography system. They measure the final structures and compare them with design specifications, pairing those data and using them to train a neural network for their digital simulator.

“The performance of learned simulators depends on the data fed in, and data artificially generated from equations can’t cover real-world deviations, which is why it is important to have real-world data,” Zheng says.
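In spirit, that training pipeline resembles the minimal sketch below, written here as hypothetical PyTorch code: the architecture, the data shapes, and the blur standing in for the physics model are illustrative assumptions, not the authors’ actual models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedLithoSimulator(nn.Module):
    """Physics-based base model plus a neural residual that learns
    system-specific deviations (illustrative sketch only)."""
    def __init__(self):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def physics_base(self, design):
        # Stand-in for the analytical optics/resist equations: a simple
        # blur loosely mimicking diffraction during exposure.
        return F.avg_pool2d(design, kernel_size=3, stride=1, padding=1)

    def forward(self, design):
        base = self.physics_base(design)
        return base + self.residual(base)

sim = LearnedLithoSimulator()
opt = torch.optim.Adam(sim.parameters(), lr=1e-3)

# Placeholder data: intended designs paired with measurements of the
# structures the real system actually fabricated from them.
designs = torch.rand(32, 1, 64, 64)
measured = F.avg_pool2d(designs, 3, 1, 1) + 0.01 * torch.randn(32, 1, 64, 64)

for epoch in range(100):
    loss = F.mse_loss(sim(designs), measured)  # match simulator to reality
    opt.zero_grad()
    loss.backward()
    opt.step()
```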

Dual simulators

The digital lithography simulator consists of two separate components: an optics model that captures how light is projected on the surface of the device, and a resist model that shows how the photochemical reaction occurs to produce features on the surface.

In a downstream task, they connect this learned photolithography simulator to a physics-based simulator that predicts how the fabricated device will perform on this task, such as how a diffractive lens will diffract the light that strikes it.

The user specifies the outcomes they want a device to achieve. Then these two simulators work together within a larger framework that shows the user how to make a design that will reach those performance goals.

“With our simulator, the fabricated object can get the best possible performance on a downstream task, like computational cameras, a promising technology for making future cameras miniaturized and more powerful. We show that, even if you use post-calibration to try to get a better result, it will still not be as good as having our photolithography model in the loop,” Zhao adds.
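Conceptually, the design loop then backpropagates through both simulators at once. Below is a rough continuation of the earlier sketch, with a trivial placeholder for the downstream task simulator (again an illustration, not the paper’s implementation):

```python
import torch
import torch.nn.functional as F

# In practice this would be the trained simulator from the previous
# sketch; instantiated fresh here only to keep the example short.
litho_sim = LearnedLithoSimulator()

def task_sim(fabricated):
    # Placeholder for a differentiable model of the downstream task,
    # e.g., how a diffractive lens forms an image on a sensor.
    return F.max_pool2d(fabricated, 2)

target = torch.zeros(1, 1, 32, 32)   # desired task output (illustrative)
design = torch.rand(1, 1, 64, 64, requires_grad=True)
opt = torch.optim.Adam([design], lr=1e-2)

for step in range(200):
    fabricated = litho_sim(design)   # what the system would actually print
    output = task_sim(fabricated)    # predicted downstream performance
    loss = F.mse_loss(output, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```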

They tested this technique by fabricating a holographic element that generates a butterfly image when light shines on it. When compared to devices designed using other techniques, their holographic element produced a near-perfect butterfly that more closely matched the design. They also produced a multilevel diffraction lens, which had better image quality than other devices.

In the future, the researchers want to enhance their algorithms to model more complicated devices, and also test the system using consumer cameras. In addition, they want to expand their approach so it can be used with different types of photolithography systems, such as systems that use deep or extreme ultraviolet light.

This research is supported, in part, by the U.S. National Institutes of Health, Fujikura Limited, and the Hong Kong Innovation and Technology Fund.

Ronald Garcia Ruiz named a Popular Science “Brilliant 10”

Tue, 12/12/2023 - 2:35pm

Popular Science magazine has named Ronald Fernando Garcia Ruiz, assistant professor in MIT’s Department of Physics and a researcher in the Laboratory for Nuclear Science, as one of its Brilliant 10 for 2023.

Garcia Ruiz is featured in the Dec. 5 issue.

The Garcia Ruiz Lab (the Laboratory for Exotic Molecules and Atoms) focuses its research on the development of laser spectroscopy techniques to investigate the properties of subatomic particles using atoms and molecules made up of short-lived radioactive nuclei. Garcia Ruiz’s experimental work provides unique information about the fundamental forces of nature, the properties of nuclear matter at the limits of existence, and the search for new physics beyond the Standard Model of particle physics.

Garcia Ruiz obtained his bachelor’s degree in physics at Universidad Nacional de Colombia, a master’s degree in physics at Universidad Nacional Autónoma de México, and his PhD degree at KU Leuven in Belgium. Garcia Ruiz was based at CERN during most of his PhD, working on laser spectroscopy techniques for the study of short-lived atomic nuclei. After his PhD, he became a research associate at The University of Manchester. In 2018, he was awarded a CERN Research Fellowship to lead the local team of the Collinear Resonance Ionization Spectroscopy experiment. At CERN, he has led several experimental programs motivated by modern developments in nuclear science. 

Garcia Ruiz joined the MIT faculty in 2020. Over the last few years, his team and collaborators developed the Resonance Ionization Spectroscopy Experiment (RISE) at the new U.S. Department of Energy (DOE) Facility for Rare Isotope Beams. This facility, the only one of its kind worldwide, will enable the study of rare atoms and molecules containing nuclei with extreme proton-to-neutron ratios, opening new opportunities in the study of nuclear matter and searches for new physics.

Garcia Ruiz and his collaborators have pioneered the study of radioactive molecules for the investigation of nuclear and particle physics phenomena. In particular, radioactive molecules with octupole-deformed nuclei are predicted to offer unprecedented sensitivity to study the violation of the fundamental symmetries that are suggested to play a critical role in the origin and evolution of our visible universe.

Among his previous honors and awards, Garcia Ruiz received a 2023 Sloan Research Fellowship, the 2022 Stuart Jay Freedman Award in Experimental Nuclear Physics from the American Physical Society, the 2022 IUPAP Young Scientist Prize in Nuclear Physics, Colombia’s 2021 Alejandro Angel Escobar National Academic Award in Science, and a 2020 DOE Early Career Award.

The 10 awardees were chosen from hundreds of nominations, which were reviewed by peers in the field. Popular Science’s annual Brilliant 10 list was first published in 2002. Eight other MIT researchers have received this distinction since the list’s inception.

MIT campus goals in food, water, waste support decarbonization efforts

Tue, 12/12/2023 - 2:25pm

With the launch of Fast Forward: MIT’s Climate Action Plan for the Decade, the Institute committed to decarbonize campus operations by 2050 — an effort that touches on every corner of MIT, from building energy use to procurement and waste. At the operational level, the plan called for establishing a set of quantitative climate impact goals in the areas of food, water, and waste to inform the campus decarbonization roadmap. After an 18-month process that engaged staff, faculty, and researchers, the goals — as well as high-level strategies to reach them — were finalized in spring 2023.

The goal development process was managed by a team of co-leads representing the areas of campus food, water, and waste: Director of Campus Dining Mark Hayes and Senior Sustainability Project Manager Susy Jones (food), Director of Utilities Janine Helwig (water), and Assistant Director of Campus Services Marty O’Brien and Assistant Director of Sustainability Brian Goldberg (waste). The group worked together to set goals that leverage ongoing campus sustainability efforts. “It was important for us to collaborate in order to identify the strategies and goals,” explains Goldberg. “It allowed us to set goals that not only align, but build off of one another, enabling us to work more strategically.”

In setting the goals, each team relied on data, community insight, and best practices. The co-leads are sharing their process to help others at the Institute understand the roles they can play in supporting these objectives.  

Sustainable food systems

The primary food impact goal aims for a 25 percent overall reduction in the greenhouse gas footprint of food purchases starting with academic year 2021-22 as a baseline, acknowledging that beef purchases make up a significant share of those emissions. Additionally, the co-leads established a goal to recover all edible food waste in dining hall and retail operations where feasible, as that reduces MIT’s waste impact and acknowledges that redistributing surplus food to feed people is critically important.

The work to develop the food goal was uniquely challenging, as MIT works with nine different vendors — including main vendor Bon Appetit — to provide food on campus, with many vendors having their own sustainability targets. The goal-setting process began by understanding vendor strategies and leveraging their climate commitments. “A lot of this work is not about reinventing the wheel, but about gathering data,” says Hayes. “We are trying to connect the dots of what is currently happening on campus and to better understand food consumption and waste, ensuring that we are reaching these targets.”

In identifying ways to reach and exceed these targets, Jones conducted listening sessions around campus, balancing input with industry trends, best-available science, and institutional insight from Hayes. “Before we set these goals and possible strategies, we wanted to get a grounding from the community and understand what would work on our campus,” says Jones, who recently began a joint role that bridges the Office of Sustainability and MIT Dining in part to support the goal work.

By establishing the 25 percent reduction in the greenhouse gas footprint of food purchases across MIT residential dining menus, Jones and Hayes saw goal-setting as an opportunity to add more sustainable, local, and culturally diverse foods to the menu. “If beef is the most carbon-intensive food on the menu, this enables us to explore and expand so many recipes and menus from around the globe that incorporate alternatives,” Jones says.

Strategies to reach the climate food goals focus on local suppliers, more plant-forward meals, food recovery, and food security. In 2019, MIT was a co-recipient of the New England Food Vision Prize provided by the Kendall Foundation to increase the amount of local food served on campus in partnership with CommonWealth Kitchen in Dorchester. While implementation of that program was put on pause due to the pandemic, work resumed this year. Currently, the prize is funding a collaborative effort to introduce falafel-like, locally manufactured fritters made from Maine-grown yellow field peas to dining halls at MIT and other university campuses, exemplifying the efforts to meet the climate impact goal, serve as a model for others, and provide demonstrable ways of strengthening the regional food system.

“This sort of innovation is where we’re a leader,” says Hayes. “In addition to the Kendall Prize, we are looking to focus on food justice, growing our BIPOC [Black, Indigenous, and people of color] vendors, and exploring ideas such as local hydroponic and container vegetable growing companies, and how to scale these types of products into institutional settings.”

Reduce and reuse for campus water

The 2030 water impact goal aims to achieve a 10 percent reduction in water use compared to the 2019 baseline and to update the water reduction goal to align with the new metering program and proposed campus decarbonization plans as they evolve.

When people think of campus water use, they may think of sprinklers, lab sinks, or personal use like drinking water and showers. And while those uses make up around 60 percent of campus water use, the Central Utilities Plant (CUP) accounts for the remaining 40 percent. “The CUP generates electricity and delivers heating and cooling to the campus through steam and chilled water — all using what amounts to a large percentage of water use on campus,” says Helwig. As such, the water goal focuses as much on reuse as reduction, with one approach being to expand water capture from campus cooling towers for reuse in CUP operations. “People often think of water use and energy separately, but they often go hand-in-hand,” Helwig explains.

Data also play a central part in the water impact goal — that’s why a new metering program is called for in the implementation strategy. “We have access to a lot of data at MIT, but in reviewing the water data to inform the goal, we learned that it wasn’t quite where we needed it,” explains Helwig. “By ensuring we have the right meter and submeters set up, we can better set boundaries to understand where there is the potential to reduce water use.” Irrigation on campus is one such target, with new campuswide landscaping standards that minimize water use planned for release soon.

Reducing campus waste

The waste impact goal aims to reduce campus trash by 30 percent compared to 2019 baseline totals. Additionally, the goal outlines efforts to improve the accuracy of indicators tracking campus waste; reduce the percentages of food scraps and recyclables ending up in the trash at select locations; reduce the share of trash and recycling made up of single-use items; and increase the percentage of residence halls and other campus spaces where food is consumed at scale that implement MIT’s food scrap collection program.

In setting the waste goals, Goldberg and O’Brien studied available campus waste data from past waste audits, pilot programs, and MIT’s waste haulers. They factored in state and city policies that regulate things like the type and amount of waste large institutions can transport. “Looking at all the data it became clear that a 30 percent trash reduction goal will make a tremendous impact on campus and help us drive toward the goal of completely designing out waste from campus,” Goldberg says. The strategies to reach the goals include reducing the amount of materials that come into campus, increasing recycling rates, and expanding food waste collection on campus.

While reducing the waste created from material sources is outlined in the goals, food waste is a special focus on campus because it comprises approximately 40 percent of campus trash, it can be easily collected separately from trash and recycled locally, and decomposing food waste is one of the largest sources of greenhouse gas emissions found in landfills. “There are a lot of greenhouse gas emissions that result from the production, distribution, transportation, packaging, processing, and disposal of food,” explains Goldberg. “When food travels to campus, is removed from campus as waste, and then breaks down in a landfill, there are emissions every step of the way.”

To reduce food waste, Goldberg and O’Brien outlined strategies that include working with campus suppliers to identify ordering volumes and practices that limit waste. Once materials are on campus, another strategy kicks in, with a new third stream of waste collection joining recycling and trash: food waste. By collecting food waste separately — in bins that are currently rolling out across campus — the waste can be reprocessed into fertilizer, compost, and/or energy without the byproduct of greenhouse gases. The waste impact goal also relies on behavioral changes, with educational materials supporting the effort to reduce waste and keep contamination out of the reprocessing streams.

Tracking progress

As work toward the goals advances, community members can monitor progress in the Sustainability DataPool Material Matters and Campus Water Use dashboards, or explore the Impact Goals in depth.

“From food to water to waste, everyone on campus interacts with these systems and can grapple with their impact either from a material they need to dispose of, to water they’re using in a lab, or leftover food from an event,” says Goldberg. “By setting these goals we as an institution can lead the way and help our campus community understand how they can play a role, plug in, and make an impact.”

How to be an astronaut

Tue, 12/12/2023 - 2:15pm

The first question a student asked Warren “Woody” Hoburg ’08 during his visit to MIT's Department of Aeronautics and Astronautics (AeroAstro) this November was: “It seems like there’s no real way to know if being an astronaut is something you could really do. Are there any activities we can try out and see if astronaut-related things are something we might want to do?”

Hoburg’s response: There is no one path to space.

“If you look at all the classes of astronauts, there are all sorts of life paths that lead people to the astronaut corps. Do the things that are fun and exciting — work on things you’re excited to do just because it’s fulfilling in and of itself, not because of where it might lead,” he told a room full of Course 16 students.

Hoburg was the only faculty member among his peers in NASA’s Astronaut Class 22, for example. His own CV includes outdoor sports, computer science and robotics, EMT and search and rescue service, design optimization research, and flying airplanes.

In a two-day visit to the department that included a keynote lecture as well as fireside chats and Q&As with undergraduates and grad students, Hoburg shared his personal journey to becoming an astronaut, lessons and observations from his time aboard the International Space Station, and his excitement for what’s next in space exploration.

From MIT to ISS

For Hoburg, the path that led him first to MIT and eventually to the International Space Station wasn’t straightforward, or focused on a specific goal. After his aerospace studies at MIT, he was torn between going to grad school or getting a job in industry. He decided to pursue computer science in grad school, and from there wasn’t sure if he should stay in academia, join a startup, or join the U.S. Air Force. It was late in grad school, when his research started going well, that he decided to stick with it, and that decision brought him back to MIT in 2014 as an assistant professor leading a research group in AeroAstro.

He had more or less forgotten his childhood dream of becoming an astronaut. “Not in a bad way,” he clarifies, “just there were other things consuming my time and interest.” But then, a friend suggested they submit applications for the NASA Astronaut Candidate Program. “I remembered that when I was a kid I did think that would just be the coolest job, so I applied. I never thought I’d actually get accepted.”

Performing in an operational environment

Hoburg credits his time at MIT with nurturing a love of adventure and pursuing new ideas and passions. “Everyone here was awesome academically, that was a given. But it seemed like everyone also had a wild, unique interest, and I loved that about this community.” As an undergraduate, Hoburg remembers rushing through his P-sets so he could go off rock-climbing and skiing for the weekend.

The MIT Alpine Ski team was his first experience on a tight-knit, mission-focused team, which has become a core part of his personal and professional ethos. Before starting grad school at the University of California at Berkeley, he took a year off to be an EMT, and he spent his summers in California on the Yosemite Search and Rescue team.

“That was my first experience doing what I would call real operational stuff, getting called out on a mission to help someone, working with a high-performing team in an austere environment,” he said. “A lot of the civilians who get selected at NASA have something operational in their background, in addition to their technical expertise. I think search and rescue ultimately helped me with my astronaut application, but I don’t know of anyone who had gone that route before me. It did help me grow into a strong operator — but at the time I just wanted to be out in the mountains responding to emergencies.”

This theme of operational capacity emerged throughout Hoburg’s talks and Q&As. He noted that astronaut candidates tend to be natural team players, and the two-plus years of training prepare them to approach every situation with trust and confidence. A comfort level with versatility is critical for an astronaut: they have to fly and dock the spacecraft, operate and perform maintenance on the ISS itself, perform spacewalks, and of course get home again. All of this is in service of their primary mission aboard the ISS:

“We’re just operators up there,” says Hoburg. “We work on literally hundreds of different experiments, while the PIs are on the ground. The science work is definitely the purpose of why we’re there. That place is busy — we are working 12-hour days a lot of the time.”

Moon, Mars, and beyond

Many of the students’ questions and Hoburg’s responses were practical, perhaps unsurprisingly in a department full of aerospace engineers. His ISS wish list — free-flying robots to help with holding and carrying; robotic cameras to better document their experiments and other pursuits onboard; improved automation and manual control interfaces in launch, flight, and docking; better solutions to the challenges of stowage and organization — may be the very projects that this generation of engineers tackles.

Hoburg also shared some broader insights from his career as an astronaut so far, including his personal reflection on the famously profound experience of looking at the Earth from space:

“Earth actually looks really big from the ISS,” he said, adding that he would love to see it from the far-away perspective of the Apollo 8 lunar mission. “The overpowering feeling for me was looking at the atmosphere. When you do a spacewalk, it’s pretty in your face that you’re in a vacuum. There is just pure death all around you. And when you look at the Earth, you see how it’s protected by this very, very thin layer.”

Hoburg is enthusiastic for NASA’s upcoming return to the moon, and for the growing commercialization of low Earth orbit that’s allowing NASA to focus on “a transition period beyond low Earth orbit.” He’s keen to help with the lunar missions that will help prepare the next generations of spacefarers to get to Mars.

Above all, he’s excited about the possibilities ahead. “Looking back at the 20th century, I think the moon landing was truly one of our crowning achievements. So part of it is purely inspirational, that spirit of adventure and exploration. Putting humans farther out into space is an audacious goal, worth our time and effort.”

A computer scientist pushes the boundaries of geometry

Tue, 12/12/2023 - 2:05pm

More than 2,000 years ago, the Greek mathematician Euclid, known to many as the father of geometry, changed the way we think about shapes.

Building off those ancient foundations and millennia of mathematical progress since, Justin Solomon is using modern geometric techniques to solve thorny problems that often seem to have nothing to do with shapes.

For instance, perhaps a statistician wants to compare two datasets to see how using one for training and the other for testing might impact the performance of a machine-learning model.

The contents of these datasets might share some geometric structure depending on how the data are arranged in high-dimensional space, explains Solomon, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Comparing them using geometric tools can bring insight, for example, into whether the same model will work on both datasets.

“The language we use to talk about data often involves distances, similarities, curvature, and shape — exactly the kinds of things that we’ve been talking about in geometry forever. So, geometers have a lot to contribute to abstract problems in data science,” he says.
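As a toy illustration, two datasets can be compared with a geometric distance between their empirical distributions. The sketch below uses made-up one-dimensional data and SciPy’s 1D Wasserstein distance to flag when a test set sits far from the training set:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=1000)   # hypothetical training data
test_a = rng.normal(loc=0.1, scale=1.0, size=1000)  # similar distribution
test_b = rng.normal(loc=2.0, scale=0.5, size=1000)  # shifted distribution

# A small distance suggests a model trained on `train` may transfer well;
# a large one warns that the two datasets are geometrically dissimilar.
print(wasserstein_distance(train, test_a))  # small
print(wasserstein_distance(train, test_b))  # large
```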

The sheer breadth of problems one can solve using geometric techniques is the reason Solomon gave his Geometric Data Processing Group a “purposefully ambiguous” name.

About half of his team works on problems that involve processing two- and three-dimensional geometric data, like aligning 3D organ scans in medical imaging or enabling autonomous vehicles to identify pedestrians in spatial data gathered by LiDAR sensors.

The rest conduct high-dimensional statistical research using geometric tools, such as to construct better generative AI models. For example, these models learn to create new images by sampling from certain parts of a dataset filled with example images. Mapping that space of images is, at its core, a geometric problem.

“The algorithms we developed targeting applications in computer animation are almost directly relevant to generative AI and probability tasks that are popular today,” Solomon adds.

Getting into graphics

An early interest in computer graphics started Solomon on his journey to become an MIT professor.

As a math-minded high school student growing up in northern Virginia, he had the opportunity to intern at a research lab outside Washington, where he helped to develop algorithms for 3D face recognition.

That experience inspired him to double-major in math and computer science at Stanford University, and he arrived on campus keen to dive into more research projects. He remembers charging into the campus career fair as a first-year and talking his way into a summer internship at Pixar Animation Studios.

“They finally relented and granted me an interview,” he recalls.

He worked at Pixar every summer throughout college and into graduate school. There, he focused on physical simulation of cloth and fluids to improve the realism of animated films, as well as rendering techniques to change the “look” of animated content.

“Graphics is so much fun. It is driven by visual content, but beyond that, it presents unique mathematical challenges that set it apart from other parts of computer science,” Solomon says.

After deciding to launch an academic career, Solomon stayed at Stanford to earn a computer science PhD. As a graduate student, he eventually focused on a problem known as optimal transport, where one seeks to move a distribution of some item to another distribution as efficiently as possible.

For instance, perhaps someone wants to find the cheapest way to ship bags of flour from a collection of manufacturers to a collection of bakeries spread across a city. The farther one ships the flour, the more expensive it is; optimal transport seeks the minimum cost for shipment.
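That flour problem is small enough to solve directly as a linear program. In the hypothetical sketch below, two manufacturers ship to two bakeries, and the solver returns the cheapest feasible plan (all numbers invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# cost[i][j] is the price of shipping one bag from manufacturer i to bakery j.
supply = [20, 30]
demand = [25, 25]
cost = np.array([[2.0, 4.0],
                 [3.0, 1.0]])

# Decision variables: x[i, j] = bags shipped from i to j, flattened row-wise.
c = cost.flatten()
A_eq, b_eq = [], []
for i in range(2):            # each manufacturer ships out exactly its supply
    row = np.zeros(4); row[2 * i:2 * i + 2] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(2):            # each bakery receives exactly its demand
    row = np.zeros(4); row[j::2] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x.reshape(2, 2))    # optimal shipping plan
print(res.fun)                # minimum total cost
```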

“My focus was originally narrowed to only computer graphics applications of optimal transport, but the research took off in other directions and applications, which was a surprise to me. But, in a way, this coincidence led to the structure of my research group at MIT,” he says.

Solomon says he was attracted to MIT because of the opportunity to work with brilliant students, postdocs, and colleagues on complex, yet practical problems that could have an impact on many disciplines.

Paying it forward

As a faculty member, he is passionate about using his position at MIT to make the field of geometric research accessible to people who aren’t usually exposed to it — especially underserved students who often don’t have the opportunity to conduct research in high school or college.

To that end, Solomon launched the Summer Geometry Initiative, a six-week paid research program for undergraduates, mostly drawn from underrepresented backgrounds. The program, which provides a hands-on introduction to geometry research, completed its third summer in 2023.

“There aren’t many institutions that have someone who works in my field, which can lead to imbalances. It means the typical PhD applicant comes from a restricted set of schools. I’m trying to change that, and to make sure folks who are absolutely brilliant but didn’t have the advantage of being born in the right place still have the opportunity to work in our area,” he says.

The program has gotten real results. Since its launch, Solomon has seen the composition of the incoming classes of PhD students change, not just at MIT, but at other institutions, as well.

Beyond computer graphics, there is a growing list of problems in machine learning and statistics that can be tackled using geometric techniques, which underscores the need for a more diverse field of researchers who bring new ideas and perspectives, he says.

For his part, Solomon is looking forward to applying tools from geometry to improve unsupervised machine learning models. In unsupervised machine learning, models must learn to recognize patterns without having labeled training data.

The vast majority of 3D data are not labeled, and paying humans to hand-label objects in 3D scenes is often prohibitively expensive. But sophisticated models incorporating geometric insight and inference from data can help computers figure out complex, unlabeled 3D scenes, so models can learn from them more effectively. 

When Solomon isn’t pondering this and other knotty research quandaries, he can often be found playing classical music on the piano or cello. He’s a fan of composer Dmitri Shostakovich.

An avid musician, he’s made a habit of joining a symphony in whatever city he moves to, and currently plays cello with the New Philharmonia Orchestra in Newton, Massachusetts.

In a way, it’s a harmonious combination of his interests.

“Music is analytical in nature, and I have the advantage of being in a research field — computer graphics — that is very closely connected to artistic practice. So the two are mutually beneficial,” he says.

MIT researchers observe a hallmark quantum behavior in bouncing droplets

Tue, 12/12/2023 - 12:00am

In our everyday classical world, what you see is what you get. A ball is just a ball, and when lobbed through the air, its trajectory is straightforward and clear. But if that ball were shrunk to the size of an atom or smaller, its behavior would shift into a quantum, fuzzy reality. The ball would exist as not just a physical particle but also a wave of possible particle states. And this wave-particle duality can give rise to some weird and sneaky phenomena.

One of the stranger prospects comes from a thought experiment known as the “quantum bomb tester.” The experiment proposes that a quantum particle, such as a photon, could act as a sort of “interaction-free” bomb detector. Through its properties as both a particle and a wave, the photon could, in theory, sense the presence of a bomb without physically interacting with it.

The concept checks out mathematically and is in line with what the equations governing quantum mechanics allow. But when it comes to spelling out exactly how a particle would accomplish such a bomb-sniffing feat, physicists are stumped. The conundrum lies in a quantum particle’s inherently shifty, in-between, undefinable state. In other words, scientists just have to trust that it works.

But mathematicians at MIT are hoping to dispel some of the mystery and ultimately establish a more concrete picture of quantum mechanics. They have now shown that they can recreate an analog of the quantum bomb tester and generate the behavior that the experiment predicts. They’ve done so not in an exotic, microscopic, quantum setting, but in a seemingly mundane, classical, tabletop setup.

In a paper appearing today in Physical Review A, the team reports recreating the quantum bomb tester in an experiment with bouncing droplets. The team found that the interaction of the droplet with its own waves is similar to a photon’s quantum wave-particle behavior: When dropped into a configuration similar to what is proposed in the quantum bomb test, the droplet behaves in exactly the same statistical manner that is predicted for the photon. If there were actually a bomb in the setup 50 percent of the time, the droplet, like the photon, would detect it, without physically interacting with it, 25 percent of the time.

The fact that the statistics in both experiments match up suggests that something in the droplet’s classical dynamics may be at the heart of a photon’s otherwise mysterious quantum behavior. The researchers see the study as another bridge between two realities: the observable, classical world and the fuzzier quantum realm.

“Here we have a classical system that gives the same statistics as arises in the quantum bomb test, which is considered one of the wonders of the quantum world,” says study author John Bush, professor of applied mathematics at MIT. “In fact, we find that the phenomenon is not so wonderful after all. And this is another example of quantum behavior that can be understood from a local realist perspective.”

Bush’s co-author is former MIT postdoc Valeri Frumkin.

Making waves

To some physicists, quantum mechanics leaves too much to the imagination and doesn’t say enough about the actual dynamics from which such weird phenomena supposedly arise. In 1927, in an attempt to crystallize quantum mechanics, physicist Louis de Broglie presented the pilot wave theory — a still-controversial idea positing that a particle’s quantum behavior is determined not by an intangible, statistical wave of possible states but by a physical “pilot” wave of its own making, which guides the particle through space.

The concept was mostly discounted until 2005, when physicist Yves Couder discovered that de Broglie’s quantum waves could be replicated and studied in a classical, fluid-based experiment. The setup involves a bath of fluid that is made to subtly vibrate up and down, though not quite enough to generate waves on its own. A millimeter-sized droplet of the same fluid is then dispensed over the bath, and as it bounces off the surface, the droplet resonates with the bath’s vibrations, creating what physicists know as a standing wave field that “pilots,” or pushes, the droplet along. The effect is of a droplet that appears to walk along a rippled surface in patterns that turn out to be in line with de Broglie’s pilot wave theory.

For the last 13 years, Bush has worked to refine and extend Couder’s hydrodynamic pilot wave experiments and has successfully used the setup to observe droplets exhibiting emergent, quantum-like behavior, including quantum tunneling, single-particle diffraction, and surreal trajectories.

“It turns out that this hydrodynamic pilot-wave experiment exhibits many features of quantum systems which were previously thought to be impossible to understand from a classical perspective,” Bush says.

Bombs away

In their new study, he and Frumkin took on the quantum bomb tester. The thought experiment begins with a conceptual interferometer — essentially, two corridors of the same length that branch out from the same starting point, then turn and converge, forming a rhombus-like configuration as the corridors continue on, each ending in a respective detector.

According to quantum mechanics, if a photon is fired from the interferometer’s starting point, through a beamsplitter, the particle should travel down one of the two corridors with equal probability. Meanwhile, the photon’s mysterious “wave function,” or the sum of all its possible states, travels down both corridors simultaneously. The wave function interferes in such a way as to ensure that the particle only appears at one detector (let’s call this D1) and never the other (D2). Hence, the photon should be detected at D1 100 percent of the time, regardless of which corridor it traveled through.

If there is a bomb in one of the two corridors, and a photon heads down this corridor, it predictably triggers the bomb and the setup is blown to bits, and no photon is detected at either detector. But if the photon travels down the corridor without the bomb, something weird happens: Its wave function, in traveling down both corridors, is cut short in one by the bomb. As it’s not quite a particle, the wave does not set off the bomb. But the wave interference is altered in such a way that the particle will be detected with equal probability at D1 and D2. Any signal at D2 therefore would mean that a photon has detected the presence of the bomb, without physically interacting with it. If the bomb is present 50 percent of the time, then this weird quantum bomb detection should occur 25 percent of the time.
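Those statistics are simple enough to sanity-check with a toy simulation. The sketch below reproduces the predicted outcome rates for runs in which a bomb is present; it is classical coin-flipping only and does not model the quantum mechanics itself:

```python
import random

# Toy model of bomb-present runs: the photon takes the bomb corridor
# half the time (explosion); otherwise the interference is broken and
# the photon lands on D1 or D2 with equal probability.
N = 100_000
boom = d1 = d2 = 0
for _ in range(N):
    if random.random() < 0.5:    # photon enters the bomb corridor
        boom += 1
    elif random.random() < 0.5:  # safe corridor; detectors now 50/50
        d1 += 1
    else:
        d2 += 1                  # D2 fires: bomb sensed without contact

print(f"explosion {boom/N:.2f}  D1 {d1/N:.2f}  D2 {d2/N:.2f}")
# Expect roughly 0.50, 0.25, 0.25 — the 25 percent interaction-free detection.
```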

In their new study, Bush and Frumkin set up an analogous experiment to see if this quantum behavior could emerge in classical droplets. Into a bath of silicone oil, they submerged a structure similar to the rhombus-like corridors in the thought experiment. They then carefully dispensed tiny oil droplets into the bath and tracked their paths. They added a structure to one side of the rhombus to mimic a bomb-like object and observed how the droplet and its wave patterns changed in response.

In the end, they found that 25 percent of the time a droplet bounced through the corridor without the “bomb,” while its pilot waves interacted with the bomb structure in a way that pushed the droplet away from the bomb. From this perspective, the droplet was able to “sense” the bomb-like object without physically coming into contact with it. While the droplet exhibited quantum-like behavior, the team could plainly see that this behavior emerged from the droplet’s waves, which physically helped to keep the droplet away from the bomb. These dynamics, the team says, may also help to explain the mysterious behavior in quantum particles.

“Not only are the statistics the same, but we also know the dynamics, which was a mystery,” Frumkin says. “And the inference is that an analogous dynamics may underlie the quantum behavior.”

"This system is the only example we know which is not quantum but shares some strong wave-particles properties," says theoretical physicist Matthieu Labousse, of ESPCI Paris, who was not involved in the study. "It is very surprising that many examples thought to be peculiar to the quantum world can be reproduced by such a classical system. It enables to understand the barrier between what it is specific to a quantum system and what is not.  The latest results of the group at MIT pushes the barrier very far."

This research is supported, in part, by the National Science Foundation.

3 Questions: Darrell Irvine on making HIV vaccines more powerful

Tue, 12/12/2023 - 12:00am

An MIT research team led by Professor Darrell Irvine has developed a novel kind of vaccine adjuvant: a nanoparticle that can help to stimulate the immune system to generate a stronger response to a vaccine. These nanoparticles contain saponin, a compound derived from the bark of the Chilean soapbark tree, along with a molecule called MPLA, each of which helps to activate the immune system.

The adjuvant has been incorporated into an experimental HIV vaccine that has shown promising results in animal studies, and this month, the first human volunteers will receive the vaccine as part of a phase 1 clinical trial run by the Consortium for HIV/AIDS Vaccine Development at the Scripps Research Institute. MIT News spoke with Irvine about why this project required an interdisciplinary approach, and what may lie ahead.

Q: What are the special features of the new nanoparticle adjuvant that help it create a more powerful immune response to vaccination? 

A: Most vaccines, such as the Covid-19 vaccines, are thought to protect us through B cells making protective antibodies. Development of an HIV vaccine has been made challenging by the fact that the B cells that are capable of evolving to produce protective antibodies — called broadly neutralizing antibodies — are very rare in the average person. Vaccine adjuvants are important in this scenario to ensure that when we immunize with an HIV antigen, these rare B cells become activated and get a chance to participate in the immune response.

We discovered that this new adjuvant, which we call SMNP (short for saponin/MPLA nanoparticles), is particularly good at helping more B cells enter germinal centers, the specialized location in lymph nodes where high-affinity antibodies are produced. In animal models, SMNP also has shown unique mechanisms of action: Administering antigens with SMNP leads to better antigen delivery to lymph nodes (through increases in lymph flow) and better capture of the antigen by B cells in lymph nodes.

Q: How did your lab, which generally focuses on bioengineering and materials science, end up working on HIV vaccines? What obstacles did you have to overcome in the development of this adjuvant?

A: About 15 years ago, Bruce Walker approached me about getting involved in the HIV vaccine effort, and recruited me to join the Ragon Institute of MGH, MIT, and Harvard as a member of the steering committee. Through the Ragon Institute, I met colleagues in the Scripps Consortium for HIV/AIDS Vaccine Development (CHAVD), and we realized there was a tremendous opportunity to directly contribute to the HIV vaccine challenge, working in partnership with experts in immunogen design, structural biology, and HIV pathogenesis.

As we carried out study after study of SMNP in preclinical animal models, we realized the adjuvant had really amazing effects for promoting anti-HIV antibody responses, and the CHAVD decided this was worth moving forward to testing in humans. A major challenge was transferring the technology out of the lab to synthesize large amounts of the adjuvant under GMP (good manufacturing practice) conditions for a clinical trial. The initial contract manufacturing organization (CMO) hired by the consortium to produce SMNP simply couldn’t get a process to work for scalable manufacturing.

Luckily for us, a chemical engineering graduate student, Ivan Pires, whom I co-advise with Paula Hammond, head of MIT’s Department of Chemical Engineering, had developed expertise in one particular processing technique known as tangential flow filtration during his undergraduate training. Leveraging classic chemical engineering skills in thermodynamics and process design, Ivan stepped in and solved the process issues the CMO was facing, allowing the manufacturing to move forward. This to me is what makes MIT great — the ability of our students and postdocs to step up and solve big problems and make big contributions when the need arises.

Q: What other diseases could this approach be useful for? Are there any plans to test it with other types of vaccines?

A: In principle, SMNP may be helpful for any infectious disease vaccine where strong antibody responses are needed. We are currently sharing the adjuvant with about 30 different labs around the world, who are testing it in vaccines against many other pathogens including Epstein-Barr virus, malaria, and influenza. We are hopeful that if SMNP is safe and effective in humans, this will be an adjuvant that can be broadly used in infectious disease trials.

Boosting faith in the authenticity of open source software

Mon, 12/11/2023 - 4:35pm

Open source software — software that is freely distributed, along with its source code, so that copies, additions, or modifications can be readily made — is “everywhere,” to quote the 2023 Open Source Security and Risk Analysis Report. Ninety-six percent of the computer programs used by major industries include open source software, and 76 percent of the code in those programs is open source. But the percentage of software packages “containing security vulnerabilities remains troublingly high,” the report warned.

One concern is that “the software you’ve gotten from what you believe to be a reliable developer has somehow been compromised,” says Kelsey Merrill ’22, MEng ’23, a software engineer who received a master’s degree earlier this year from MIT’s Department of Electrical Engineering and Computer Science. “Suppose that somewhere in the supply chain, the software has been changed by an attacker who has malicious intent.”

The risk of a security breach of this sort is by no means abstract. In 2020, to take a notorious example, the Texas company SolarWinds made a software update to its widely used program called Orion. Hackers broke into the system, inserting pernicious code into the software before SolarWinds shipped the latest version of Orion to more than 18,000 customers, including Microsoft, Intel, and roughly 100 other companies, as well as a dozen U.S. government agencies — including the departments of State, Defense, Treasury, Commerce, and Homeland Security. In this case, the product that was corrupted came from a large commercial company, but lapses may be even more likely to occur in the open source realm, Merrill says, “where people of varying backgrounds — many of whom are hobbyists without any security training — can publish software that gets used around the world.”

Now, she and three collaborators — her former advisor Karen Sollins, a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory; Santiago Torres-Arias, an assistant professor of computer science at Purdue University; and Zachary Newman SM ’20, a research scientist at Chainguard Labs — have developed a new system called Speranza, which is aimed at reassuring software consumers that the product they are getting has not been tampered with and is coming directly from a source they trust.

“What we have done,” explains Sollins, “is to develop, prove correct, and demonstrate the viability of an approach that allows the [software] maintainers to remain anonymous.” Preserving anonymity is obviously important, given that almost everyone — software developers included — values their confidentiality. This new approach, Sollins adds, “simultaneously allows [software] users to have confidence that the maintainers are, in fact, legitimate maintainers and, furthermore, that the code being downloaded is, in fact, the correct code of that maintainer.”

So how can users confirm the genuineness of a software package in order to guarantee, as Merrill puts it, “that the maintainers are who they say they are?” The classical way of doing this, which was invented more than 40 years ago, is by means of a digital signature, which is analogous to a handwritten signature — albeit with far greater built-in security through the use of various cryptographic techniques.

To carry out a digital signature, two “keys” are generated at the same time — each of which is a number, composed of zeros and ones, that is 256 digits long. One key is designated “private,” the other “public,” but they constitute a pair that is mathematically linked. A software developer can use their private key, along with the contents of the document or computer program, to generate a digital signature that is attached exclusively to that document or program. A software user can then use the public key — as well as the developer’s signature, plus the contents of the package they downloaded — to verify the package’s authenticity.

Validation comes in the form of a yes or a no, a one or a zero. “Getting a one means that the authenticity has been assured,” Merrill explains. “The document is the same as when it was signed and is hence unchanged. A zero means something is amiss, and you may not want to rely on that document.”
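As a concrete illustration of that sign-and-verify round trip, here is a minimal sketch using the Ed25519 signature scheme (which uses 256-bit keys) from the Python cryptography library; the package contents and printed messages are made up for the example:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Developer side: generate a mathematically linked key pair and sign
# the package bytes with the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
package = b"contents of the software release"  # illustrative payload
signature = private_key.sign(package)

# User side: verify the downloaded package against the public key.
try:
    public_key.verify(signature, package)
    print("1: authenticity assured; contents unchanged since signing")
except InvalidSignature:
    print("0: something is amiss")

# Any tampering with the contents flips the result to a zero.
try:
    public_key.verify(signature, package + b" plus injected code")
except InvalidSignature:
    print("0: tampered package rejected")
```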

Although this decades-old approach is tried and true in a sense, it is far from perfect. One problem, Merrill notes, “is that people are bad at managing cryptographic keys, which consist of very long numbers, in a way that is secure and prevents them from getting lost.” People lose their passwords all the time, Merrill says. “And if a software developer were to lose the private key and then contact a user saying, ‘Hey, I have a new key,’ how would you know who that really is?”

To address those concerns, Speranza is building off of “Sigstore” — a system introduced last year to enhance the security of the software supply chain. Sigstore was developed by Newman (who instigated the Speranza project) and Torres-Arias, along with John Speed Meyers of Chainguard Labs. Sigstore automates and streamlines the digital signing process. Users no longer have to manage long cryptographic keys but are instead issued ephemeral keys (an approach called “keyless signing”) that expire quickly — perhaps within a matter of minutes — and therefore don’t have to be stored.

A drawback with Sigstore stems from the fact that it dispensed with long-lasting public keys, so that software maintainers instead have to identify themselves — through a protocol called OpenID Connect (OIDC) — in a way that can be linked to their email addresses. That feature, alone, may inhibit the widespread adoption of Sigstore, and it served as the motivating factor behind — and the raison d’être for — Speranza. “We take Sigstore’s basic infrastructure and change it to provide privacy guarantees,” Merrill explains.

With Speranza, privacy is achieved through an original idea that she and her collaborators call “identity co-commitments.” Here, in simple terms, is how the idea works: A software developer’s identity, in the form of an email address, is converted into a so-called “commitment” that consists of a big pseudorandom number. (A pseudorandom number does not meet the technical definition of “random” but, practically speaking, is about as good as random.)

Meanwhile, another big random number — the accompanying commitment, or co-commitment — is generated that is associated with a software package that this developer either created or was granted permission to modify. To demonstrate to a prospective user of a particular software package who created this version of the package and signed it, the authorized developer would publish a proof that establishes an unequivocal link between the commitment that represents their identity and the commitment attached to the software product. The proof that is carried out is of a special type, called a zero-knowledge proof, which is a way of showing, for instance, that two things have a common bond, without divulging details as to what those things — such as the developer’s email address — actually are.
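A toy stand-in for the commitment half of this idea is a salted hash: the published value looks like a big pseudorandom number and reveals nothing about the email behind it. Speranza’s actual construction differs, and its zero-knowledge proofs let the link be demonstrated without opening the commitment the way this toy version does; the email address below is hypothetical.

```python
import hashlib
import secrets

def commit(identity: bytes) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(32)          # fresh randomness hides the input
    return hashlib.sha256(salt + identity).digest(), salt

def opens_to(digest: bytes, salt: bytes, identity: bytes) -> bool:
    return hashlib.sha256(salt + identity).digest() == digest

# Publish the commitment; keep the salt private. Only someone holding
# the salt can later show what the commitment binds to.
commitment, salt = commit(b"maintainer@example.org")
assert opens_to(commitment, salt, b"maintainer@example.org")
assert not opens_to(commitment, salt, b"impostor@example.org")
```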

“Speranza ensures that software comes from the correct source without requiring developers to reveal personal information like their email addresses,” comments Marina Moore, a PhD candidate at the New York University Center for Cyber Security. “It allows verifiers to see that the same developer signed a package several times without revealing who the developer is or even other packages that they work on. This provides a usability improvement over long-term signing keys, and a privacy benefit over other OIDC-based solutions like Sigstore.”

Marcela Melara, a research scientist in the Security and Privacy Research group at Intel Labs, says, “This approach has the advantage of allowing software consumers to automatically verify that the package they obtain from a Speranza-enabled repository originated from an expected maintainer, and gain trust that the software they are using is authentic.”

A paper about Speranza was presented at the Computer and Communications Security Conference in Copenhagen, Denmark.

MIT Generative AI Week fosters dialogue across disciplines

Mon, 12/11/2023 - 4:25pm

In late November, faculty, staff, and students from across MIT participated in MIT Generative AI Week. The programming included a flagship full-day symposium as well as four subject-specific symposia, all aimed at fostering a dialogue about the opportunities and potential applications of generative artificial intelligence technologies across a diverse range of disciplines.

“These events are one expression of our conviction that MIT has a special responsibility to help society come to grips with the tectonic forces of generative AI — to understand its potential, contain its risks, and harness its power for good,” said MIT President Sally Kornbluth, in an email announcing the week of programming earlier this fall.

Activities during MIT Generative AI Week, many of which are available to watch on YouTube, included:

MIT Generative AI: Shaping the Future Symposium

The week kicked off with a flagship symposium, MIT Generative AI: Shaping the Future. The full-day symposium featured welcoming remarks from Kornbluth as well as two keynote speakers. The morning keynote speaker, Professor Emeritus Rodney Brooks, iRobot co-founder, former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Robust.AI founder and CTO, spoke about how robotics and generative AI intersect. The afternoon keynote speaker, renowned media artist and director Refik Anadol, discussed the interplay between generative AI and art, including approaches toward data sculpting and digital architecture in our physical world.

The symposium included panel and roundtable discussions on topics such as generative AI foundations; science fiction; generative AI applications; and generative AI, ethics, and society. The event concluded with a performance by saxophonist and composer Paul Winter. It was chaired by Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science (EECS) and director of CSAIL, and co-chaired by Cynthia Breazeal, MIT dean for digital learning and professor of media arts and sciences, and Sertac Karaman, professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems.

“Another Body” Screening

The first day of MIT Generative AI Week concluded with a special screening of the documentary “Another Body.” The SXSW Special Jury Award-winning documentary follows a college student’s search for answers and justice after she discovers deepfake pornography of herself circulating online.

After the viewing, there was a panel discussion including the film’s editor, Rabab Haj Yahya; David Goldston, director of the MIT Washington Office; Catherine D’Ignazio, associate professor of urban science and planning and director of the Data + Feminism Lab; and MIT junior Ananda Santos Figueiredo.

Generative AI + Education Symposium

Drawing from the extended MIT community of faculty, research staff, students, and colleagues, the Generative AI + Education Symposium offered thought-provoking keynotes, panel conversations, and live demonstrations of how generative AI is transforming learning experiences and teaching practices across K-12 education, post-secondary education, and workforce upskilling. The symposium included a fireside chat entitled “Will Generative AI Transform Learning and Education?” as well as sessions on the learner experience, teaching practice, and big ideas from MIT.

This half-day symposium concluded with an innovation showcase where attendees were invited to engage directly with demos of the latest in MIT research and ingenuity. The event was co-chaired by Breazeal and Christopher Capozzola, senior associate dean for open learning and professor of history.

Generative AI + Health Symposium

The Generative AI + Health Symposium highlighted AI research focused on the health of people and the health of the planet. Talks illustrated progress in molecular design and sensing applications to advance human health, as well as work to improve climate-change projections, increase efficiency in mobility, and design new materials. A panel discussion of six researchers from across MIT explored anticipated impacts of AI in these areas.

This half-day symposium was co-chaired by Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in the Department of Earth, Atmospheric and Planetary Sciences and director of the Program in Atmospheres, Oceans, and Climate; Polina Golland, the Sunlin and Priscilla Chou Professor in the Department of EECS and a principal investigator at CSAIL; Amy Keating, the Jay A. Stein Professor of Biology, professor of biological engineering, and head of the Department of Biology; and Elsa Olivetti, the Jerry McAfee (1940) Professor in Engineering in the Department of Materials Science and Engineering, associate dean of engineering, and director of the MIT Climate and Sustainability Consortium.

Generative AI + Creativity Symposium

At the Generative AI + Creativity Symposium, faculty experts, researchers, and students from across MIT explored questions that peer into the future and imagine a world where generative AI-enhanced systems and techniques improve the human condition. Topics included how combined human and AI systems might make more creative and better decisions than either one alone; how lifelong creativity, fostered by a new generation of tools, methods, and experiences, can help society; how to envision, explore, and implement a more joyful, artful, meaningful, and equitable future; how to make AI legible and trustworthy; and how to engage an unprecedented combination of diverse stakeholders to inspire and support creative thinking, expression, and computation that empowers all people.

The half-day symposium was co-chaired by Dava Newman, the Apollo Program Professor of Astronautics and director of the MIT Media Lab, and John Ochsendorf, the Class of 1942 Professor, professor of architecture and of civil and environmental engineering, and founding director of the MIT Morningside Academy for Design.

Generative AI + Impact on Commerce Symposium

The Generative AI + Impact on Commerce Symposium explored the impact of AI on the practice of management. The event featured a curated set of researchers at MIT; policymakers actively working on legislation to ensure that AI is deployed in a manner that is fair and healthy for the consumer; venture capitalists investing in cutting-edge AI technology; and private equity investors who are looking to use AI tools as a competitive advantage.

This half-day symposium was co-chaired by Vivek Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and Simon Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship at the MIT Sloan School of Management.

Moungi Bawendi honored during Nobel Week in Stockholm

Mon, 12/11/2023 - 4:03pm

The 2023 Nobel Prize winners received their awards in a grand ceremony yesterday in Stockholm, Sweden. Among those honored was MIT Professor Moungi Bawendi, who shared the 2023 Nobel Prize in Chemistry with Louis Brus and Aleksey Yekimov for their work on quantum dots.

As part of the annual Nobel Week festivities, Bawendi gave a lecture about his research, participated in a Nobel Banquet, and took part in a conversation with Danish European Space Agency astronaut Andreas Mogensen, a current crew member on the International Space Station. To mark the occasion, Mogensen showed off a floating Nobel Prize medal won previously by physicist Niels Bohr.

In his banquet speech, Bawendi stated: “Wondering about how the atomic world evolves into the macroscopic one inevitably leads us through a wonderful new world, the nano-world, which we now call the realm of nanoscience and nanotechnology. Quantum dots, for which we are being honored here today, were at the birth of this new realm. They shine brightly on its future and the yet unimagined possibilities it offers. So tonight, let us raise a toast to the human drive for exploration, and to the future of nanoscience.”

Below are several photos from Bawendi’s week in Stockholm.

Two from MIT named 2024 Marshall Scholars

Mon, 12/11/2023 - 10:00am

Anushree Chaudhuri and Rupert Li have won Marshall Scholarships, a prestigious British government-funded fellowship that offers exceptional American students the opportunity to pursue several years of graduate study in any field at any university in the United Kingdom. Up to 50 scholarships are awarded each year by the Marshall Aid Commemoration Commission.

The students were advised and supported by the distinguished fellowships team, led by Associate Dean Kim Benard in Career Advising and Professional Development. They also received mentorship from the Presidential Committee on Distinguished Fellowships, co-chaired by professors Will Broadhead and Nancy Kanwisher.

“The MIT students who applied for this year's Marshall Scholarship embody that combination of intellectual prowess, hard work, and civic-mindedness that characterizes the Institute at its best,” says Broadhead. “These students are truly amazing! The thoughtfulness and optimism they demonstrated throughout the months-long exercise in critical reflection and personal growth that the application process demands impressed and inspired us all. On behalf of the Distinguished Fellowships Committee, Nancy and I are thrilled to extend our warmest congratulations to Anushree and Rupert and our very best wishes as they take their richly deserved places in the Marshall Scholar community.”

Anushree Chaudhuri

Anushree Chaudhuri, from San Diego, California, will graduate next spring with bachelor’s degrees in urban studies and planning and in economics, along with a master’s in city planning. As a Marshall Scholar, she plans to pursue an MPhil/PhD in environmental policy and development at the London School of Economics and Political Science. In the future, Chaudhuri hopes to work across the public and private sectors to drive structural changes that connect global environmental challenges to local community contexts.

Since 2021, Chaudhuri has worked with Professor Larry Susskind in the Science Impact Collaborative to study local responses to large-scale renewable energy projects. This past summer, she traveled around California to document the experiences of rural and Indigenous communities most directly affected by energy transitions.

Chaudhuri has also worked with the U.S. Department of Energy, the World Wildlife Fund, and an environmental, social, and governance investing startup, as well as with several groups at MIT including the Office of Sustainability, Environmental Solutions Initiative, and the Climate and Sustainability Consortium. She represented MIT as an undergraduate delegate to the United Nations COP27.

On campus, Chaudhuri co-leads the Student Sustainability Coalition, an umbrella organization for student sustainability groups. She has previously served as chair of Undergraduate Association Sustainability; a co-lead of the student campaign to revise MIT’s Fast Forward Climate Action Plan; judicial chair of Burton-Conner House; and as a representative on several campus committees, including the Corporation Joint Advisory Committee. She also loves to sing and write.

In 2023, Chaudhuri was named a Udall Scholar and an MIT Burchard Scholar. By taking an interdisciplinary approach that combines law, planning, economics, participatory research, and data science, she is committed to a public service career addressing social and climate injustices.

Rupert Li

Hailing from Portland, Oregon, Rupert Li is a concurrent senior and master’s student at MIT. He will graduate in May 2024 with a BS in mathematics; a BS in computer science, economics, and data science; and a minor in business analytics. He will also be awarded an MEng in computer science, economics, and data science.

As a graduate student in the U.K., Li will pursue the MASt degree in pure mathematics at Cambridge University, followed by the MSc in mathematics and foundations of computer science at Oxford University. Li aspires to become a professor of mathematics.

Li has written 10 math research articles, primarily in combinatorics, but also including discrete geometry, probability, and harmonic analysis. Since his first-year fall, he has worked with Adjunct Professor Henry Cohn in the MIT Department of Mathematics and has authored two papers based on this work.

Li works on sphere packing, a famously challenging mathematical problem with applications in coding theory and error-correcting codes, which are used ubiquitously in the digital age to protect against data corruption. He currently also works with Professor Nike Sun in the MIT math department on probability theory and with Professor Jim Propp of the University of Massachusetts at Lowell on enumerative combinatorics and statistical mechanics.

Li has worked as a course designer and teaching assistant for Professor Jim Orlin of the MIT Sloan School of Management and Professor Muhamet Yildiz in the Department of Economics, and is currently head teaching assistant for class 6.7900 (Machine Learning). Li received the Barry Goldwater Scholarship and a Morgan Prize Honorable Mention for his undergraduate research. In his free time, he enjoys watching movies and playing strategy games with friends.

MIT group releases white papers on governance of AI

Mon, 12/11/2023 - 12:00am

Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI.

The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications.

“As a country we’re already regulating a lot of relatively high-risk things and providing governance there,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. “We’re not saying that’s sufficient, but let’s start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach.”

“The framework we put together gives a concrete way of thinking about these things,” says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT’s Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort.

The project includes multiple additional policy papers and comes amid heightened interest in AI over the last year, as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.

“We felt it was important for MIT to get involved in this because we have expertise,” says David Goldston, director of the MIT Washington Office. “MIT is one of the leaders in AI research, one of the places where AI first got started. Since we are among those creating technology that is raising these important issues, we feel an obligation to help address them.”

Purpose, intent, and guardrails

The main policy brief outlines how current policy could be extended to cover AI, using existing regulatory agencies and legal liability frameworks where possible. The U.S. has strict licensing laws in the field of medicine, for example. It is already illegal to impersonate a doctor; if AI were to be used to prescribe medicine or make a diagnosis under the guise of being a doctor, it should be clear that this would violate the law just as strictly as human malfeasance would. As the policy brief notes, this is not just a theoretical approach; autonomous vehicles, which deploy AI systems, are subject to regulation in the same manner as other vehicles.

An important step in making these regulatory and liability regimes, the policy brief emphasizes, is having AI providers define the purpose and intent of AI applications in advance. Examining new technologies on this basis would then make clear which existing sets of regulations, and regulators, are germane to any given AI tool.

However, it is also the case that AI systems may exist at multiple levels, in what technologists call a “stack” of systems that together deliver a particular service. For example, a general-purpose language model may underlie a specific new tool. In general, the brief notes, the provider of a specific service might be primarily liable for problems with it. However, “when a component system of a stack does not perform as promised, it may be reasonable for the provider of that component to share responsibility,” as the first brief states. The builders of general-purpose tools should thus also be accountable should their technologies be implicated in specific problems.

“That makes governance more challenging to think about, but the foundation models should not be completely left out of consideration,” Ozdaglar says. “In a lot of cases, the models are from providers, and you develop an application on top, but they are part of the stack. What is the responsibility there? If systems are not on top of the stack, it doesn’t mean they should not be considered.”

Having AI providers clearly define the purpose and intent of AI tools, and requiring guardrails to prevent misuse, could also help determine the extent to which either companies or end users are accountable for specific problems. The policy brief states that a good regulatory regime should be able to identify what it calls a “fork in the toaster” situation — when an end user could reasonably be held responsible for knowing the problems that misuse of a tool could produce.

Responsive and flexible

While the policy framework involves existing agencies, it includes the addition of some new oversight capacity as well. For one thing, the policy brief calls for advances in auditing of new AI tools, which could move forward along a variety of paths, whether government-initiated, user-driven, or deriving from legal liability proceedings. There would need to be public standards for auditing, the paper notes, whether established by a nonprofit entity along the lines of the Public Company Accounting Oversight Board (PCAOB), or through a federal entity similar to the National Institute of Standards and Technology (NIST).

And the paper does call for the consideration of creating a new, government-approved “self-regulatory organization” (SRO) agency along the functional lines of FINRA, the government-created Financial Industry Regulatory Authority. Such an agency, focused on AI, could accumulate domain-specific knowledge that would allow it to be responsive and flexible when engaging with a rapidly changing AI industry.

“These things are very complex, the interactions of humans and machines, so you need responsiveness,” says Huttenlocher, who is also the Henry Ellis Warren Professor in Computer Science and Artificial Intelligence and Decision-Making in EECS. “We think that if government considers new agencies, it should really look at this SRO structure. They are not handing over the keys to the store, as it’s still something that’s government-chartered and overseen.”

As the policy papers make clear, there are several additional particular legal matters that will need addressing in the realm of AI. Copyright and other intellectual property issues related to AI generally are already the subject of litigation.

And then there are what Ozdaglar calls “human plus” legal issues, where AI has capacities that go beyond what humans are capable of doing. These include things like mass-surveillance tools, and the committee recognizes they may require special legal consideration.

“AI enables things humans cannot do, such as surveillance or fake news at scale, which may need special consideration beyond what is applicable for humans,” Ozdaglar says. “But our starting point still enables you to think about the risks, and then how that risk gets amplified because of the tools.”

The set of policy papers addresses a number of regulatory issues in detail. For instance, one paper, “Labeling AI-Generated Content: Promises, Perils, and Future Directions,” by Chloe Wittenberg, Ziv Epstein, Adam J. Berinsky, and David G. Rand, builds on prior research experiments about media and audience engagement to assess specific approaches for denoting AI-produced material. Another paper, “Large Language Models,” by Yoon Kim, Jacob Andreas, and Dylan Hadfield-Menell, examines general-purpose language-based AI innovations.

“Part of doing this properly”

As the policy briefs make clear, another element of effective government engagement on the subject involves encouraging more research about how to make AI beneficial to society in general.

For instance, the policy paper, “Can We Have a Pro-Worker AI? Choosing a path of machines in service of minds,” by Daron Acemoglu, David Autor, and Simon Johnson, explores the possibility that AI might augment and aid workers, rather than being deployed to replace them — a scenario that would provide better long-term economic growth distributed throughout society.

This range of analyses, from a variety of disciplinary perspectives, is something the ad hoc committee wanted to bring to bear on the issue of AI regulation from the start — broadening the lens that can be brought to policymaking, rather than narrowing it to a few technical questions.

“We do think academic institutions have an important role to play both in terms of expertise about technology, and the interplay of technology and society,” says Huttenlocher. “It reflects what’s going to be important to governing this well, policymakers who think about social systems and technology together. That’s what the nation’s going to need.”

Indeed, Goldston notes, the committee is attempting to bridge a gap between those excited and those concerned about AI, by working to advocate that adequate regulation accompanies advances in the technology.

As Goldston puts it, the committee releasing these papers “is not a group that is antitechnology or trying to stifle AI. But it is, nonetheless, a group that is saying AI needs governance and oversight. That’s part of doing this properly. These are people who know this technology, and they’re saying that AI needs oversight.”

Huttenlocher adds, “Working in service of the nation and the world is something MIT has taken seriously for many, many decades. This is a very important moment for that.”

In addition to Huttenlocher, Ozdaglar, and Goldston, the ad hoc committee members are: Daron Acemoglu, Institute Professor and the Elizabeth and James Killian Professor of Economics in the School of Humanities, Arts, and Social Sciences; Jacob Andreas, associate professor in EECS; David Autor, the Ford Professor of Economics; Adam Berinsky, the Mitsui Professor of Political Science; Cynthia Breazeal, dean for Digital Learning and professor of media arts and sciences; Dylan Hadfield-Menell, the Tennenbaum Career Development Assistant Professor of Artificial Intelligence and Decision-Making; Simon Johnson, the Kurtz Professor of Entrepreneurship in the MIT Sloan School of Management; Yoon Kim, the NBX Career Development Assistant Professor in EECS; Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science at the University of Chicago Booth School of Business; Manish Raghavan, assistant professor of information technology at MIT Sloan; David Rand, the Erwin H. Schell Professor at MIT Sloan and a professor of brain and cognitive sciences; Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Luis Videgaray, a senior lecturer at MIT Sloan.

Scientists 3D print self-heating microfluidic devices

Mon, 12/11/2023 - 12:00am

MIT researchers have used 3D printing to produce self-heating microfluidic devices, demonstrating a technique that could someday be used to rapidly create cheap yet accurate tools to detect a host of diseases.

Microfluidics, miniaturized machines that manipulate fluids and facilitate chemical reactions, can be used to detect disease in tiny samples of blood or fluids. At-home test kits for Covid-19, for example, incorporate a simple type of microfluidic.

But many microfluidic applications require chemical reactions that must be performed at specific temperatures. These more complex microfluidic devices, which are typically manufactured in a clean room, are outfitted with heating elements made from gold or platinum using a complicated and expensive fabrication process that is difficult to scale up.

Instead, the MIT team used multimaterial 3D printing to create self-heating microfluidic devices with built-in heating elements, through a single, inexpensive manufacturing process. They generated devices that can heat fluid to a specific temperature as it flows through microscopic channels inside the tiny machine.

Their technique is customizable, so an engineer could create a microfluidic that heats fluid to a certain temperature or given heating profile within a specific area of the device. The low-cost fabrication process requires about $2 of materials to generate a ready-to-use microfluidic.

The process could be especially useful in creating self-heating microfluidics for remote regions of developing countries where clinicians may not have access to the expensive lab equipment required for many diagnostic procedures.

“Clean rooms in particular, where you would usually make these devices, are incredibly expensive to build and to run. But we can make very capable self-heating microfluidic devices using additive manufacturing, and they can be made a lot faster and cheaper than with these traditional methods. This is really a way to democratize this technology,” says Luis Fernando Velásquez-García, a principal scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the fabrication technique.

He is joined on the paper by lead author Jorge Cañada Pérez-Sala, an electrical engineering and computer science graduate student. The research will be presented at the PowerMEMS Conference this month.

An insulator becomes conductive

This new fabrication process utilizes a technique called multimaterial extrusion 3D printing, in which several materials can be squirted through the printer’s many nozzles to build a device layer by layer. The process is monolithic, which means the entire device can be produced in one step on the 3D printer, without the need for any post-assembly.

To create self-heating microfluidics, the researchers used two materials — a biodegradable polymer known as polylactic acid (PLA) that is commonly used in 3D printing, and a modified version of PLA.

The modified PLA has copper nanoparticles mixed into the polymer, which converts this insulating material into an electrical conductor, Velásquez-García explains. When electrical current is fed into a resistor composed of this copper-doped PLA, energy is dissipated as heat.
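
How much heat such a printed resistor produces follows the textbook Joule heating relation (standard physics, not a formula specific to this paper):

$$P = IV = I^2 R = \frac{V^2}{R}$$

Here P is the power dissipated as heat, I is the current driven through the copper-doped PLA trace, V is the voltage across it, and R is its resistance; in a resistive element, essentially all of this electrical power becomes heat.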

“It is amazing when you think about it because the PLA material is a dielectric, but when you put in these nanoparticle impurities, it completely changes the physical properties. This is something we don’t fully understand yet, but it happens and it is repeatable,” he says.

Using a multimaterial 3D printer, the researchers fabricate a heating resistor from the copper-doped PLA and then print the microfluidic device, with microscopic channels through which fluid can flow, directly on top in one printing step. Because the components are made from the same base material, they have similar printing temperatures and are compatible.

Heat dissipated from the resistor will warm fluid flowing through the channels in the microfluidic.

In addition to the resistor and microfluidic, they use the printer to add a thin, continuous layer of PLA that is sandwiched between them. It is especially challenging to manufacture this layer because it must be thin enough so heat can transfer from the resistor to the microfluidic, but not so thin that fluid could leak into the resistor.

The resulting machine is about the size of a U.S. quarter and can be produced in a matter of minutes. Channels about 500 micrometers wide and 400 micrometers tall are threaded through the microfluidic to carry fluid and facilitate chemical reactions.

Importantly, the PLA material is translucent, so fluid in the device remains visible. Many processes rely on visualization or the use of light to infer what is happening during chemical reactions, Velásquez-García explains.

Customizable chemical reactors

The researchers used this one-step manufacturing process to generate a prototype that could heat fluid by 4 degrees Celsius as it flowed between the input and the output. This customizable technique could enable them to make devices which would heat fluids in certain patterns or along specific gradients.
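
As a rough plausibility check on that 4-degree figure, a steady-state heat balance relates heater power to the temperature rise of a flowing liquid. The sketch below assumes a water-like fluid and an illustrative flow rate; apart from the 4-degree rise, none of the numbers are values reported by the researchers.

```python
# Back-of-envelope estimate: heater power needed to raise the temperature
# of a continuously flowing liquid. Illustrative assumptions, not values
# from the MIT paper (only the 4 C rise is taken from the article).

RHO = 1000.0             # fluid density, kg/m^3 (water-like assumption)
CP = 4184.0              # specific heat capacity, J/(kg*K) (water-like)
FLOW_UL_PER_MIN = 100.0  # assumed volumetric flow rate, microliters/minute
DELTA_T = 4.0            # temperature rise between input and output, K

q = FLOW_UL_PER_MIN * 1e-9 / 60.0   # flow rate converted to m^3/s
power_w = RHO * q * CP * DELTA_T    # steady-state balance: P = rho * Q * cp * dT
print(f"Estimated heating power: {power_w * 1e3:.1f} mW")  # ~27.9 mW
```

At these assumed numbers, only tens of milliwatts are needed, the kind of power a small printed resistor driven from a low-voltage supply can plausibly deliver.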

“You can use these two materials to create chemical reactors that do exactly what you want. We can set up a particular heating profile while still having all the capabilities of the microfluidic,” he says.

However, one limitation comes from the fact that PLA can only be heated to about 50 degrees Celsius before it starts to degrade. Many chemical reactions, such as those used for polymerase chain reaction (PCR) tests, require temperatures of 90 degrees or higher. And to precisely control the temperature of the device, researchers would need to integrate a third material that enables temperature sensing.

In addition to tackling these limitations in future work, Velásquez-García wants to print magnets directly into the microfluidic device. These magnets could enable chemical reactions that require particles to be sorted or aligned.

At the same time, he and his colleagues are exploring the use of other materials that could reach higher temperatures. They are also studying PLA to better understand why it becomes conductive when certain impurities are added to the polymer.

“If we can understand the mechanism that is related to the electrical conductivity of PLA, that would greatly enhance the capability of these devices, but it is going to be a lot harder to solve than some other engineering problems,” he adds.

“In Japanese culture, it’s often said that beauty lies in simplicity. This sentiment is echoed by the work of Cañada and Velásquez-García. Their proposed monolithically 3D-printed microfluidic systems embody simplicity and beauty, offering a wide array of potential derivations and applications that we foresee in the future,” says Norihisa Miki, a professor of mechanical engineering at Keio University in Tokyo, who was not involved with this work.

“Being able to directly print microfluidic chips with fluidic channels and electrical features at the same time opens up very exciting applications when processing biological samples, such as to amplify biomarkers or to actuate and mix liquids. Also, due to the fact that PLA degrades over time, one can even think of implantable applications where the chips dissolve and resorb over time,” adds Niclas Roxhed, an associate professor at Sweden’s KTH Royal Institute of Technology, who was not involved with this study.

This research was funded, in part, by the Empiriko Corporation and a fellowship from La Caixa Foundation.

Breakerspace illuminates the mysteries of materials

Fri, 12/08/2023 - 4:25pm

Days before the opening of the Breakerspace, a new laboratory and lounge at MIT, actor and rapper Jaden Smith tried out the facility’s capabilities, putting his bracelet under a digital optical microscope. On the screen in front of him was a 3D rendering of woven threads, each strand made up of smaller strands, with specks of matter dotting the surface.

“His eyes just lit up,” says Professor Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems in the Department of Materials Science and Engineering (DMSE). Smith used a mouse-operated control to home in on the strands, magnifying them 8,000 times. Grossman recalls, “In four minutes, Jaden said, ‘That could be my next album cover.’”

Grossman and Smith have stayed in touch since 2017, when the then-19-year-old star toured campus and sat in on class 3.091 (Introduction to Solid-State Chemistry), which Grossman was teaching at the time. When Smith called in October to say he would be in Cambridge, Grossman invited him to test-drive the Breakerspace, which he describes as a hands-on materials exploration space for all undergraduates, regardless of major.

“Curiosity of what the world is made of transcends all disciplines,” Grossman says. “Jaden’s not a material scientist, but he got inspired. And there’s a lot of potential for the space to do that for our students, across disciplines.”

The Breakerspace, equipped with microscopes and other instruments for exploring the composition, structure, and behavior of materials, is the “crown jewel” of DMSE’s strategic vision, Grossman says. The aim is to highlight materials science and engineering — an interdisciplinary field that incorporates chemistry, physics, and engineering principles to understand the materials that make up the world — and articulate its impact.

“Our discipline is about unraveling the mysteries of materials at the atomic level and then using that knowledge to tackle some of the world’s most pressing challenges — in energy, manufacturing, computing, health care, and more,” says Grossman, who until August was DMSE’s department head. “The Breakerspace is all about sharing that excitement and exploration.”

Breaking in the Breakerspace

The facility, under construction over the summer months, opened to the public on Nov. 8, dubbed Breakerday. In the three-hour opening event, nearly 300 people passed through the doors of 8-102, a glass-enclosed space on the Infinite Corridor, one of MIT’s busiest thoroughfares, as DMSE faculty, lecturers, and students gave demos of the instruments’ capabilities.

A giant image on the wall projected from the digital optical microscope skimmed over the shaft and barbs of a feather. A scanning electron microscope (SEM), which scans a beam of electrons over an object, zoomed in on the Boston landscape etched into someone’s “Brass Rat,” the MIT class ring. And a tensile test machine, used to test the strength of materials, very slowly pulled apart a metal bar. A half-circle of students watched the machine in silence for a long stretch before a loud pop startled the group. DMSE technical instructor Shaymus Hudson, leading the demo, succinctly explained how testing materials helps engineers better understand and optimize them.

“If you can spot where the metal failed and understand why, you can make it not fail,” Hudson said.
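
The analysis Hudson describes typically starts from the standard engineering definitions of stress and strain, computed from the tensile machine’s force and extension readings:

$$\sigma = \frac{F}{A_0}, \qquad \varepsilon = \frac{\Delta L}{L_0}$$

where F is the applied force, A₀ the specimen’s initial cross-sectional area, ΔL its elongation, and L₀ its initial gauge length. The shape of the resulting stress-strain curve reveals a material’s stiffness, yield strength, and point of failure.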

In the lounge half of the facility, people queued up for espressos and Americanos brewed on a café-quality Italian coffee machine.

“The lounge offers a comfortable setting for creative thinking and socializing,” says Breakerspace manager Justin Lavallee. “I think it will bring in a bigger pool of people than the lab alone. I’m curious to see how the community will build and how the lounge might drive that.”

The promise of great coffee on Breakerday clinched the deal for Kaitlyn Li, a first-year student with an interest in chemical engineering, a field adjacent to materials science and engineering. She learned about the Breakerspace from an email.

“When I read it came with coffee, I was like, ‘That sounds great!’” Li says.

Hands-on learning

The idea for the Breakerspace was born from Grossman’s passion for hands-on learning. In 3.091, for example, an MIT general Institute requirement without a lab component, Grossman would hand out goodie bags filled with materials and tools to reinforce class lessons. One contained rods and beads for constructing the crystal structures of various elements. And during one class, Grossman invited students to lob baseballs at glass panes to understand the effects of mechanical stress on properties.

“We learn and we see things differently when we can play with them with our hands, as opposed to read about them in a book or hear about them in a lecture,” Grossman says.

At MIT, of course, hands-on learning runs deep — the motto, after all, is “mens et manus,” Latin for mind and hand — finding purchase in makerspaces affiliated with engineering or other disciplines for tooling around with machine parts and gears or even bioengineering projects. Materials science and engineering, too, involves making things, namely, new materials for specific applications — but Grossman wanted to focus instead on characterization: analyzing and understanding physical properties of materials, learning why they’re hard or soft or malleable.

“Characterization plays a very fundamental role in our understanding of how to improve processing, improve synthesis, change the structure of materials, et cetera, to ultimately yield a particular performance that’s needed for an application,” says Associate Professor James LeBeau, a microscopy expert in DMSE who helped curate the Breakerspace instruments. “And this space is at that core of characterization.”

The name of the facility itself is a play on “makerspace,” with a twist — students can examine, test, and even break materials to see what they’re made of, how they fail, and why.

Equipment required to characterize materials ranges from digital optical microscopes to SEMs that resolve features at the scale of nanometers, or billionths of a meter. Also needed are machines that can analyze materials, such as an X-ray diffractometer (XRD) and a scanning Raman microscope, which convey information about material structures and chemical compositions.

Also available are content creation tools — cameras with tripods, microphones, and lighting — so students can record their experiments and analyses and share them on social media.

The Breakerspace instruments were chosen because they can be operated effectively with very little training — “literally minutes,” says Grossman — which will be provided during regularly scheduled orientations.

“So maybe you’ve got something in your pocket; you want to know what it’s made of — or you want to know what its surface looks like at the 20-nanometer length scale. Well, you can do that here,” Grossman says.

Some students got to exercise their curiosity well before construction on the facility began. Last year, Maria Aguiar, now a junior in DMSE, put her cat figurine into the SEM and discovered its bluish-green tint, characteristic of oxidized copper, was in fact paint.

The rest of MIT was encouraged to do the same on Breakerday. One visitor pulled out an Excedrin tablet. Grossman put it under the XRD, which illuminates an object with X-rays to determine its atomic structure. The machine identified acetaminophen, caffeine, and other compounds.
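
Identification by XRD works because a crystalline compound reflects X-rays strongly only at angles satisfying Bragg’s condition, the standard textbook relation:

$$n\lambda = 2d\sin\theta$$

where λ is the X-ray wavelength, d the spacing between planes of atoms in the crystal, θ the diffraction angle, and n a positive integer. The set of angles at which reflections appear forms a fingerprint that the instrument matches against known structures, which is how the tablet’s ingredients could be read out.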

Another student had a cup of apple cider, one of the refreshments available in the lounge that day. “Can you analyze this?” he asked Grossman. The student dipped his finger in the liquid and put a drop of it into an FTIR — short for Fourier-transform infrared spectroscopy — a machine that measures how a sample material interacts with infrared light.

“Lo and behold, the machine found that it's mostly made of water and then a bunch of sugar,” Grossman says. “But that’s the point. It’s not just about getting people to know how to use these tools, but to be curious about what things are made of and get a glimpse at the unseen world around us.”

An undergraduate experience

Though Breakerday welcomed everyone at MIT, regular access is exclusive to MIT undergraduates, ensuring a dedicated place for them to learn, explore, and build a community. Grossman and Lavallee say graduate students and postdocs who need characterization tools for research have multiple avenues of access at MIT — their advisors’ laboratories, for example, or shared facilities such as MIT.nano.

“It’s very difficult for undergrads as just an augmentation of their curricular experience to go and utilize those resources, because there are fees associated with them — there are wait lists, scheduling and policy challenges, and often lengthy training requirements,” Lavallee says. “The Breakerspace is not a wholesale creation of new capabilities that we’re excluding other communities from. It’s just putting those capabilities together in a curated way to deliver to undergraduates.”

Access to materials exploration equipment is a draw for Kirmina Monir, a DMSE senior who is trained to operate one of the Breakerspace’s SEMs and teach others to use it. Typically, even DMSE students would get the opportunity to use such an instrument only as a prescribed part of the curriculum.

“But to be able to just walk into a lab and just use an SEM right there, or use an XRD right there,” Monir says. “This opens the door to a very low barrier to materials science.”

Visitors on Breakerday agreed. Kaitlyn Li, the first-year student interested in chemical engineering and coffee, was thrilled to see demos of machines she knew about from high school chemistry but never used.

“They’d give us an IR spectrum and ask, ‘What molecule is this,’ and then we would have to analyze it,” Li says, referring to a chart used for infrared spectroscopy, which measures the interaction of infrared light with matter. In the Breakerspace, which has its own spectrometer, “it’s nice to see how it’s done on the machine and how that procedure works.”

Anna Beck and Samantha Phillips, first-year students taking class 3.001 (Science and Engineering of Materials), plan to start using the laboratory “pretty much immediately” to work on a bioplastics project.

“Our plan is to make a plastic out of banana peels and then test it on the Instron machine,” says Beck, referring to the tensile test machine.

Phillips will likely also use the space to explore curiosity about some object or other. “This is the equivalent of playing with Legos as a kid,” she says.

Learning beyond boundaries

The Breakerspace is mainly for extracurricular undergraduate exploration, but it’s also for teaching. Even before the facility opened, it was being used for class 3.042 (Materials Project Laboratory), and it will be the backdrop for two new classes in the spring. One, 3.000 (Coffee Matters: How to Brew the Perfect Cup), will be taught by Grossman and Lavallee and takes advantage of the coffee machine in the lounge and an on-site roaster.

“We think coffee is going to be an exciting material with lots of good testing and roasting and grinding and, of course, materials characterization,” Grossman says.

Another class, 3.S06 (Introduction to Materials Characterization), taught by technical instructor Hudson, will give students experience using microscopy and mechanical testing equipment in experimental research.

Caroline Ross, interim department head for DMSE, sees broader integration into education experiences, including in undergraduate research projects. “We've already got plans to incorporate the instruments in our labs, and I think there will be more and more opportunities for using them in UROPs or thesis projects or anywhere else where you can imagine finding a need for analyzing materials.”

Overall, DMSE faculty and staff hope the Breakerspace introduces first-years and other undergraduates to the mystery, beauty, and promise of a discipline they most likely didn’t learn about in high school.

“You learn about physics, chemistry, biology, and maybe this thing called engineering,” Grossman says. “I really could have benefited as a freshman from not only hearing the words ‘material science and engineering’ but actually having a space where I could check it out and see what materials make up the world.”

One first-year student who knew little about the discipline before setting foot in the Breakerspace is Alex Wu. Once he did, though, he was hooked: He got training on an SEM and did demos for visitors on Breakerday, showing them magnifications of sugar and salt and asking them to guess which is which.

“This is nothing I’ve ever had access to before. So just the fact that this is something that I'm able to use as an undergraduate in my first year is just so amazing,” Wu says.

Wu is interested in computer science but is now thinking about studying materials science and engineering, too. He thought about whether other first-years might develop an interest in materials after trying out the Breakerspace; then his smile brightened.

“I mean, that’s kind of what happened to me.”

To learn more about the Breakerspace, request access to the lounge, or book a training session on machines, visit dmse.mit.edu/breakerspace.

Miranda McClellan ’18, MEng ’19 awarded 2025 Schwarzman Scholarship

Fri, 12/08/2023 - 2:40pm

MIT alumna Miranda McClellan ’18, MEng ’19 has been named a 2025 Schwarzman Scholar. In August 2024, she will join the program’s 150 scholars, who hail from 43 countries and 114 universities around the world. The Class of 2025 Scholars were selected from a pool of over 4,000 applicants. They will attend a one-year, fully funded master’s degree program in global affairs at Schwarzman College, Tsinghua University in Beijing, China.

McClellan and her fellow Schwarzman Scholars will engage in a graduate curriculum focused on the pillars of leadership, global affairs, and China with additional opportunities for cultural immersion, experiential learning, and professional development. The fellowship program aspires to create a global network of leaders equipped with a well-rounded understanding of China’s changing role in the world.

Hailing from Texas, McClellan earned a BS in computer science and a minor in African Studies from MIT in June 2018 and received an MEng in computer science in June 2019. While at MIT, she served on the board of the Black Students’ Union and presented recommendations for making the campus more inclusive. After graduating from MIT, McClellan won a Fulbright grant to conduct research in Spain, where she studied the application of machine learning to 5G networks.

McClellan was a fellow at the Internet Society, the Center for AI & Data Policy, and the National Science Policy Network. Since 2020, she has been working as a data scientist at Microsoft, building machine learning models to detect malware. In 2022, she co-founded Black Arts DFW to promote equitable access to fine arts for Black patrons in the Dallas area. She also serves as a cybersecurity curriculum developer and mentor to improve representation of minority women in tech roles. As a Schwarzman Scholar, McClellan hopes to compare the impact of Chinese and U.S. policies on issues of cybersecurity, privacy, and AI fairness.

MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in Career Advising and Professional Development and the Presidential Committee on Distinguished Fellowships. Students and alumni interested in learning more should contact Kimberly Benard, associate dean and director of distinguished fellowships and academic excellence.

MIT engineers design a robotic replica of the heart’s right chamber

Fri, 12/08/2023 - 5:00am

MIT engineers have developed a robotic replica of the heart’s right ventricle, which mimics the beating and blood-pumping action of live hearts.

The robo-ventricle combines real heart tissue with synthetic, balloon-like artificial muscles that enable scientists to control the ventricle’s contractions while observing how its natural valves and other intricate structures function.

The artificial ventricle can be tuned to mimic healthy and diseased states. The team manipulated the model to simulate conditions of right ventricular dysfunction, including pulmonary hypertension and myocardial infarction. They also used the model to test cardiac devices. For instance, the team implanted a mechanical valve to repair a natural malfunctioning valve, then observed how the ventricle’s pumping changed in response.

They say the new robotic right ventricle, or RRV, can be used as a realistic platform to study right ventricle disorders and test devices and therapies aimed at treating those disorders.

“The right ventricle is particularly susceptible to dysfunction in intensive care unit settings, especially in patients on mechanical ventilation,” says Manisha Singh, a postdoc at MIT’s Institute for Medical Engineering and Science (IMES). “The RRV simulator can be used in the future to study the effects of mechanical ventilation on the right ventricle and to develop strategies to prevent right heart failure in these vulnerable patients.”

Singh and her colleagues report details of the new design in an open-access paper appearing today in Nature Cardiovascular Research. Her co-authors include Associate Professor Ellen Roche, who is a core member of IMES and the associate head for research in the Department of Mechanical Engineering at MIT; along with Jean Bonnemain, Caglar Ozturk, Clara Park, Diego Quevedo-Moreno, Meagan Rowlett, and Yiling Fan of MIT; Brian Ayers of Massachusetts General Hospital; Christopher Nguyen of Cleveland Clinic; and Mossab Saeed of Boston Children’s Hospital.

A ballet of beats

The right ventricle is one of the heart’s four chambers, along with the left ventricle and the left and right atria. Of the four chambers, the left ventricle is the heavy lifter, as its thick, cone-shaped musculature is built for pumping blood through the entire body. The right ventricle, Roche says, is a “ballerina” in comparison, as it handles a lighter though no-less-crucial load.

“The right ventricle pumps deoxygenated blood to the lungs, so it doesn’t have to pump as hard,” Roche notes. “It’s a thinner muscle, with more complex architecture and motion.”

This anatomical complexity has made it difficult for clinicians to accurately observe and assess right ventricle function in patients with heart disease.

“Conventional tools often fail to capture the intricate mechanics and dynamics of the right ventricle, leading to potential misdiagnoses and inadequate treatment strategies,” Singh says.

To improve understanding of the lesser-known chamber and speed the development of cardiac devices to treat its dysfunction, the team designed a realistic, functional model of the right ventricle that both captures its anatomical intricacies and reproduces its pumping function.  

The model includes real heart tissue, which the team chose to incorporate because it retains natural structures that are too complex to reproduce synthetically.

“There are thin, tiny chordae and valve leaflets with different material properties that are all moving in concert with the ventricle’s muscle. Trying to cast or print these very delicate structures is quite challenging,” Roche explains.

A heart’s shelf-life

In the new study, the team reports explanting a pig’s right ventricle, which they treated to carefully preserve its internal structures. They then fit a silicone wrapping around it, which acted as a soft, synthetic myocardium, or muscular lining. Within this lining, the team embedded several long, balloon-like tubes, which encircled the real heart tissue, in positions that the team determined through computational modeling to be optimal for reproducing the ventricle’s contractions. The researchers connected each tube to a control system, which they then set to inflate and deflate each tube at rates that mimicked the heart’s real rhythm and motion.
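
The article does not detail the control hardware, but the timing behavior it describes, with each balloon-like tube inflated for part of a beat and deflated for the rest, can be sketched in a few lines. The following Python sketch is purely illustrative: the inflate and deflate functions are hypothetical placeholders for the real pneumatic interface, and the rate and timing constants are assumptions, not values from the study.

```python
import time

# Hypothetical pneumatic driver interface; the actual hardware API used by
# the MIT team is not described in the article.
def inflate(tube_id: int) -> None:
    print(f"inflating tube {tube_id}")

def deflate(tube_id: int) -> None:
    print(f"deflating tube {tube_id}")

HEART_RATE_BPM = 80        # assumed target rhythm
SYSTOLE_FRACTION = 0.35    # assumed fraction of each beat spent contracting
N_TUBES = 4                # illustrative number of balloon actuators

period = 60.0 / HEART_RATE_BPM       # seconds per beat
systole = period * SYSTOLE_FRACTION  # contraction phase duration
diastole = period - systole          # relaxation phase duration

def beat_once() -> None:
    """Run one contraction/relaxation cycle across all actuators."""
    for tube in range(N_TUBES):
        inflate(tube)        # contraction: balloons squeeze the ventricle wall
    time.sleep(systole)
    for tube in range(N_TUBES):
        deflate(tube)        # relaxation: the chamber refills
    time.sleep(diastole)

if __name__ == "__main__":
    for _ in range(3):       # a few demonstration beats
        beat_once()
```

Varying the per-tube timing or drive strength in a loop like this is also how one could imagine dialing in the irregular rhythms and weakened contractions the researchers describe simulating.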

To test its pumping ability, the team infused the model with a liquid similar in viscosity to blood. This particular liquid was also transparent, allowing the engineers to observe with an internal camera how internal valves and structures responded as the ventricle pumped liquid through.

They found that the artificial ventricle’s pumping power and the function of its internal structures were similar to what they previously observed in live, healthy animals, demonstrating that the model can realistically simulate the right ventricle’s action and anatomy. The researchers could also tune the frequency and power of the pumping tubes to mimic various cardiac conditions, such as irregular heartbeats, muscle weakening, and hypertension.

“We’re reanimating the heart, in some sense, and in a way that we can study and potentially treat its dysfunction,” Roche says.

To show that the artificial ventricle can be used to test cardiac devices, the team surgically implanted ring-like medical devices of various sizes to repair the chamber’s tricuspid valve — a leafy, one-way valve that lets blood into the right ventricle. When this valve is leaky or physically compromised, it can cause right heart failure or atrial fibrillation, leading to symptoms such as reduced exercise capacity, swelling of the legs and abdomen, and liver enlargement.

The researchers surgically manipulated the robo-ventricle’s valve to simulate this condition, then either replaced it by implanting a mechanical valve or repaired it using ring-like devices of different sizes. They observed which device improved the ventricle’s fluid flow as it continued to pump.

“With its ability to accurately replicate tricuspid valve dysfunction, the RRV serves as an ideal training ground for surgeons and interventional cardiologists,” Singh says. “They can practice new surgical techniques for repairing or replacing the tricuspid valve on our model before performing them on actual patients.”

Currently, the RRV can simulate realistic function over a few months. The team is working to extend that performance and enable the model to run continuously for longer stretches. They are also working with designers of implantable devices to test their prototypes on the artificial ventricle and possibly speed their path to patients. And looking far in the future, Roche plans to pair the RRV with a similar artificial, functional model of the left ventricle, which the group is currently fine-tuning.

“We envision pairing this with the left ventricle to make a fully tunable artificial heart that could potentially function in people,” Roche says. “We’re quite a while off, but that’s the overarching vision.”

This research was supported, in part, by the National Science Foundation.

Researchers safely integrate fragile 2D materials into devices

Fri, 12/08/2023 - 5:00am

Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-generation electronic devices.

But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching.

To overcome this challenge, researchers from MIT and elsewhere have developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials and the resulting interfaces pristine and free from defects.

Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties.

They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.

Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces.

However, such van der Waals integration of materials into fully functional devices is not always easy, says Farnaz Niroui, assistant professor of electrical engineering and computer science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work.

“Van der Waals integration has a fundamental limit,” she explains. “Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities.”

Niroui wrote the paper with lead author Peter Satterthwaite, an electrical engineering and computer science graduate student; Jing Kong, professor of EECS and a member of RLE; and others at MIT, Boston University, National Tsing Hua University in Taiwan, the National Science and Technology Council of Taiwan, and National Cheng Kung University in Taiwan. The research is published today in Nature Electronics.  

Advantageous attraction

Making complex systems such as a computer chip with conventional fabrication techniques can get messy. Typically, a rigid material like silicon is chiseled down to the nanoscale, then interfaced with other components like metal electrodes and insulating layers to form an active device. Such processing can cause damage to the materials.

Recently, researchers have focused on building devices and systems from the bottom up, using 2D materials and a process that requires sequential physical stacking. In this approach, rather than using chemical glues or high temperatures to bond a fragile 2D material to a conventional surface like silicon, researchers leverage van der Waals forces to physically integrate a layer of 2D material onto a device.

Van der Waals forces are natural forces of attraction that exist between all matter. For example, a gecko’s feet can stick to the wall temporarily due to van der Waals forces. Though all materials exhibit a van der Waals interaction, depending on the material, the forces are not always strong enough to hold them together. For instance, a popular semiconducting 2D material known as molybdenum disulfide will stick to gold, a metal, but won’t directly transfer to insulators like silicon dioxide by just coming into physical contact with that surface.
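
The strength of this attraction can be estimated with the standard continuum (Hamaker) model, a textbook approximation offered here for context rather than an analysis from the paper. For two flat surfaces separated by a distance D, the van der Waals adhesion energy per unit area is

$$W(D) = \frac{A_{\mathrm{H}}}{12\pi D^{2}}$$

where A_H is the Hamaker constant, a material-pair-dependent quantity typically on the order of 10⁻²⁰ to 10⁻¹⁹ joules. Pairs with a large effective Hamaker constant, like molybdenum disulfide on gold, adhere well enough to transfer; weaker pairs, like molybdenum disulfide on silicon dioxide, do not.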

However, heterostructures made by integrating semiconductor and insulating layers are key building blocks of an electronic device. Previously, this integration has been enabled by bonding the 2D material to an intermediate layer like gold, then using this intermediate layer to transfer the 2D material onto the insulator, before removing the intermediate layer using chemicals or high temperatures.

Instead of using this sacrificial layer, the MIT researchers embed the low-adhesion insulator in a high-adhesion matrix. This adhesive matrix is what makes the 2D material stick to the embedded low-adhesion surface, providing the forces needed to create a van der Waals interface between the 2D material and the insulator.

Making the matrix

To make electronic devices, they form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device.

This smoothness is important, since gaps between the surface and 2D material can hamper van der Waals interactions. Then, the researchers prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack.

“Once the hybrid surface is brought into contact with the 2D layer, without needing any high-temperatures, solvents, or sacrificial layers, it can pick up the 2D layer and integrate it with the surface. This way, we are allowing a van der Waals integration that would be traditionally forbidden, but now is possible and allows formation of fully functioning devices in a single step,” Satterthwaite explains.

This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.

And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors improve on those from previous studies and provide a platform for studying and achieving the performance needed for practical electronics.

Their approach can be done at scale to make larger arrays of devices. The adhesive matrix technique can also be used with a range of materials, and even with other forces to enhance the versatility of this platform. For instance, the researchers integrated graphene onto a device, forming the desired van der Waals interfaces using a matrix made with a polymer. In this case, adhesion relies on chemical interactions rather than van der Waals forces alone.

In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials to study their intrinsic properties without the influence of processing damage, and develop new device platforms that leverage these superior functionalities.  

This research is funded, in part, by the U.S. National Science Foundation, the U.S. Department of Energy, the BUnano Cross-Disciplinary Fellowship at Boston University, and the U.S. Army Research Office. The fabrication and characterization procedures were carried out, largely, in the MIT.nano shared facilities.
