MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff, and the greater MIT community.

Life on Mars, together

Wed, 03/13/2024 - 5:00pm

Earlier this year, Madelyn Hoying, a PhD student in the Harvard-MIT Program in Health Sciences and Technology, and Wing Lam (Nicole) Chan, an MIT senior in aeronautics and astronautics, were part of Crew 290 at the Mars Desert Research Station (MDRS), the largest and longest-running Mars analog facility in the world. Their six-person crew completed a two-week simulation under the name Project MADMEN (Martian Analysis and Detection of Microbial Environments) — an analog of potential Martian search-for-life missions. 

The mission evolved from Hoying’s NASA Revolutionary Aerospace Systems Concepts – Academic Linkage (NASA RASC-AL) challenge submission, Project ALIEN, during her time as an undergraduate student at Duquesne University. After the challenge concluded, she and her colleagues refined the mission concept and created a test plan that could be conducted in a Mars-analog environment. 

Hoying served as the crew’s commander and health and safety officer, and Chan as the crew’s journalist, documenting daily activities and how the crew experienced life on Mars. Crew 290 featured three members from the original project: Hoying, Rebecca McCallin from Duquesne University, and Benjamin Kazimer from MIT Lincoln Laboratory. Chan, Anja Sheppard from the University of Michigan, and Anna Tretiakova from Boston University joined the team for this next phase. Hoying and Chan had worked together once before, in 2022, on another RASC-AL competition. 

“I was initially a bit skeptical of spending two weeks in the middle of nowhere and simply being tasked with writing about what happens every day,” says Chan. “What happens on extravehicular activities (EVAs)? How and where do we live every day? What will we be eating? These doubts all went away with the adrenaline and curiosity of seeing the Martian-esque landscape and especially after putting on the EVA helmet for the first time. It truly felt like I was living on Mars and I very quickly immersed myself in the mission.” 

A unique leadership opportunity

Hoying has participated in other analog missions through MIT’s RASC-AL challenge submissions, specifically 2023’s Pale Red Dot. “I have led an analog mission in the past with [MIT AeroAstro colleague] George Lordos. We led a total crew of 11 in a dual-site mission architecture, where George led one habitat and I led the other. Pale Red Dot and Project MADMEN emphasized different features of a Martian mission, so certain aspects of this, like the extravehicular activity procedures and reporting requirements for mission support, were different.”

As commander, Hoying managed logistics, including balancing the scientific objectives of the multiple projects the crew set out to complete. “The two field experiments were soil collection for Project MADMEN and field operation of REMI, the ground-penetrating radar robot. Sometimes this led to competing requirements for EVAs, as REMI’s mass would reduce the distance that our rovers could cover before running out of battery and therefore limit the terrain types that could be reached for soil collection.” 

Hoying’s main focus was balancing the crew’s requirements for data with safety, including such considerations as who had recently been on EVA, who needed a break from carrying the heavy EVA suits, how far the team could safely travel, and how the weather impacted different areas. “The decisions for what the science goals of an EVA were, who would go on each EVA, and where they would be to collect from came down to me. Ultimately, we were able to balance all of these and satisfy the collection requirements of both field projects, even with last-minute changes due to things like weather.”

The crew makes the mission

Project MADMEN involved conducting onsite field tests of geological samples and robotic experiments for landing site selection. But the success of the mission hinged on more than just in-lab results. Hosting the mission at MDRS allowed the MADMEN crew to gain valuable insights on how individuals and teams might actually experience life on Mars, psychologically and socially. 

“We had a great crew, and as a result we had a great mission,” says Hoying. She managed the psychosocial aspect of the mission using daily questionnaires, studying the effects of contingency and emergency scenarios on metrics like quality of life.

The crew’s main living space is a two-story, 8-meter-diameter cylinder called the “Hab.” The lower deck comprises the EVA prep room, an airlock, bathroom facilities, and a tunnel to the other structures. The upper deck houses the living quarters, including a kitchen and bunks. The close quarters only served to solidify the crew’s enthusiasm for the mission and support of each other.

“We shared almost every meal together and used the time to bond and talk about our interests. We often ended the day with social activities, whether it be talking about our backgrounds or future plans, playing games, or stargazing,” says Chan. “The most challenging part for me personally was stepping out of my comfort zone. Prior to this mission, I have not lived communally or camped before. It took me a bit to get used to living in close quarters with other people and balancing chores and tasks. I soon got used to the routine and enjoyed trying things for the first time, which made my experience a lot more rewarding, too.”

By day (or “Sol”) 3, the crew had assigned nicknames to each other in a call-sign ceremony. “It’s a tradition in other field experiences I’ve been a part of, and I wanted to carry that through for this crew. Assigning these was a night full of storytelling, laughing, and new memories, and we all agreed that the reasoning behind each nickname assignment would remain between the crew,” says Hoying (“Melon”); Chan’s call sign was “PODO.” 

Crew 290’s Martian journals close with a reflection from Chan on their out-of-this-world experience: “As we get to work tonight, we reminisce about our time here on Mars, from the first time setting foot in the station to the first time suiting up for EVAs. We’re all so grateful to be here and have learned a lot about what it takes to be a Martian during the past two weeks.”

The mission was primarily sponsored by Duquesne University and the Pennsylvania Space Grant Consortium, with some travel support provided by the Massachusetts Space Grant Consortium.

Letting the Earth answer back: Designing better planetary conversations

Wed, 03/13/2024 - 4:30pm

For Chen Chu MArch ’21, the invitation to join the 2023-24 cohort of design fellows at MIT’s Morningside Academy for Design (MAD) has been an unparalleled opportunity to investigate the potential of design as an alternative method of problem-solving.

After earning a master’s degree in architecture at MIT and gaining professional experience as a researcher at an environmental nongovernmental organization, Chu decided to pursue a PhD in the Department of Urban Studies and Planning. “I discovered that I needed to engage in a deeper way with the most difficult ethical challenges of our time, especially those arising from the fact of climate change,” he explains. “For me, MIT has always represented this wonderful place where people are inherently intellectually curious — it’s a very rewarding community to be part of.”

Chu’s PhD research, guided by his doctoral advisor Delia Wendel, assistant professor of urban studies and international development, focuses on how traditional practices of floodplain agriculture can inform local and global strategies for sustainable food production and distribution in response to climate change. 

Typically located alongside a river or stream, floodplains arise from seasonal flooding patterns that distribute nutrient-rich silt and create connectivity between species. This results in exceptionally high levels of biodiversity and microbial richness, generating the ideal conditions for agriculture. It’s no accident that the first human civilizations were founded on floodplains, including Mesopotamia (named for its location poised between two rivers, the Euphrates and Tigris), the Indus River Civilization, and the cultures of Ancient Egypt based around the Nile. Riverine transportation networks and predictable flooding rhythms provide a framework for trade and cultivation; nonetheless, floodplain communities must learn to live with risk, subject to the sudden disruptions of high waters, drought, and ecological disequilibrium. 

For Chu, the “unstable and ungovernable” status of floodplains makes them fertile ground for thinking. “I’m drawn to these so-called ‘wet landscapes’ — edge conditions that act as transitional spaces between land and water, between humans and nature, between city and river,” he reflects. “The development of extensively irrigated agricultural sites is typically a collective effort, which raises intriguing questions about how communities establish social organizations that simultaneously negotiate top-down state control and adapt to the uncertainty of nature.”

Chu is in the process of honing the focus of his dissertation and refining his data collection methods, which will include archival research and fieldwork, as well as interviews with floodplain inhabitants to gain an understanding of sociopolitical nuances. Meanwhile, his role as a design fellow gives him the space to address the big questions that fire his imagination. How can we live well on shared land? How can we take responsibility for the lives of future generations? What types of political structures are required to get everyone on board? 

These are just a few of the questions that Chu recently put to his cohort in a presentation. During the weekly seminars for the fellowship, he has the chance to converse with peers and mentors from multiple disciplines — from researchers rethinking the pedagogy of design to entrepreneurs applying design thinking to new business models to architects and engineers developing new habitats to heal our relationship with the natural world. 

“I’ll admit — I’m wary of the human instinct to problem-solve,” says Chu. “When it comes to the material conditions and lived experience of people and planet, there’s a limit to our economic and political reasoning, and to conventional architectural practice. That said, I do believe that the mindset of a designer can open up new ways of thinking. At its core, design is an interdisciplinary practice based on the understanding that a problem can’t be solved from a narrow, singular perspective.” 

The stimulating structure of a MAD Fellowship — free from immediate obligations to publish or produce, fellows learn from one another and engage with visiting speakers via regular seminars and events — has prompted Chu to consider what truly makes for generative conversation in the contexts of academia and the private and public sectors. In his opinion, discussions around climate change often fail to take account of one important voice, an absence he describes as “that silent being, the Earth.”

“You can’t ask the Earth, ‘What does justice mean to you?’ Nature will not respond,” he reflects. To bridge the gap, Chu believes it’s important to combine the study of specific political and social conditions with broader existential questions raised by the environmental humanities. His own research draws upon the perspectives of thinkers including Dipesh Chakrabarty, Donna Haraway, Peter Singer, Anna Tsing, and Michael Watts, among others. He cites James C. Scott’s lecture “In Praise of Floods” as one of his most important influences.

In addition to his instinctive appreciation for theory, Chu’s outlook is grounded by an attention to innovation at the local level. He is currently establishing the parameters of his research, examining case studies of agricultural systems and flood mitigation strategies that have been sustained for centuries. 

“One example is the polder system that is practiced in the Netherlands, China, Bangladesh, and many parts of the world: small, low-lying tracts of land submerged in water and surrounded by dykes and canals,” he explains. “You’ll find a different but comparable strategy in the colder regions of Japan. Crops are protected from the winter winds by constructing a spatial unit with the house at the center; trees behind the house serve as windbreaks and paddy fields for rice are located in front of the house, providing an integrated system of food and livelihood security.”

Chu observes that there is a tendency for international policymakers to overlook local solutions in favor of grander visions and ambitious climate pledges — but he is equally keen not to romanticize vernacular practices. “Realistically, it's always a two-way interaction. Unless you already have a workable local system in place, it’s difficult to implement a solution without top-down support. On the other hand, the large-scale technocratic dreams are empty if ignorant of local traditions and histories.” 

By navigating between the global and the local, the theoretical and the practical, the visionary and the cautionary, Chu holds out hope for gradually finding long-term solutions that adapt to specific conditions over time. It’s a model of ambition and criticality that Chu sees played out in dialogue at MAD and within his department; at root, he’s aware that the outcome of these conversations depends on the ethical context that shapes them.

“I've been fortunate to have many mentors who have taught me the power of humility; a respect for the finitude, fragility, and uncertainty of life,” he recalls. “It’s a mindset that’s barely apparent in today’s push for economic growth.” The flip side of hubristic growth is an assumption that technological ingenuity will be enough to solve the climate crisis, but Chu’s optimism arises from a different source: “When I feel overwhelmed by the weight of the problems we’re facing, I just need to look around me,” he says. “Here on campus — at MAD, in my home department, and increasingly among the new generations of students — there’s a powerful ethos of political sensitivity, ethical compassion, and an attention to clear and critical judgment. That always gives me hope for the planet.”

How free online courses from MIT can “transform the future of the world”

Tue, 03/12/2024 - 5:15pm

From full introductory courses in engineering, psychology, and computer science to lectures about financial concepts, linguistics, and music, the MIT OpenCourseWare YouTube channel has it all — offering millions of learners around the world a pathway to develop new skills and broaden their knowledge base with free offerings from MIT educators.

“I believe OpenCourseWare and Open Learning resources will transform the future of the world for the better — in financial markets I know it already has,” says Michael Pilgreen, a sculptor, painter, and poet from Memphis, Tennessee, who discovered OpenCourseWare when he found himself unemployed in 2020 and used it to jumpstart a new career on Wall Street. 

After watching several lectures about finance, computer science, programming, mathematics, and algorithms on the OpenCourseWare YouTube channel and website, Pilgreen enrolled in the MITx MicroMasters program in finance. He is now a business operations specialist for the Jameel World Education Lab at MIT Open Learning, where he helps the lab bring MIT ideas and know-how to educational innovators worldwide. 

“MIT OpenCourseWare opens the doors to conversations that were previously closed to learners by geography, time, and class,” Pilgreen says. “As an open learner, I was able to leverage the best instructors in the world from my living room, and turn my time being unemployed into a productive period acquiring the skills I needed to work on Wall Street.”

OpenCourseWare is the brainchild of MIT faculty members. The platform was launched in 2001 when the age of digital sharing was just getting started, establishing MIT as the first higher education institution to make educational resources freely available to learners regardless of geographical location or institutional affiliation. Four years later, in 2005, OpenCourseWare created a YouTube channel to further its commitment to accessibility and lifelong learning.

Today, OpenCourseWare — part of MIT Open Learning — remains a global model for open sharing in higher education, with an open license that allows the remix and reuse of its educational resources. OpenCourseWare offers materials on its website from more than 2,500 courses that span the MIT undergraduate and graduate curriculum. Educational resources include syllabi, lecture notes, problem sets, assignments, audiovisual content, and insights. 

“We almost take for granted the idea that an enormous amount of outstanding educational content is available to anyone in the world with an internet connection,” says MIT President Sally Kornbluth. “Yet, the fact that this is now the norm has a great deal to do with a groundbreaking project launched at MIT in 2001. OpenCourseWare changed the landscape of education, and it continues to inspire students, teachers, and lifelong learners around the globe to follow their curiosity wherever it leads.”

Curt Newton, OpenCourseWare’s publication director, says the platform inspires millions of curious and motivated learners every year. With over 5 million subscribers and 430 million views, OpenCourseWare stands out as the largest .edu YouTube channel. The channel opens a window into MIT classrooms, giving learners the opportunity to pursue their interests, develop new skills, and even switch careers.

“Videos on our YouTube channel have proven to be an especially effective meeting place,” Newton says. “From introductions to computer programming and the human brain to what it's like to pilot an advanced jet aircraft, these videos are both a complete learning experience in themselves and an entry into even more expansive worlds of learning found on the OpenCourseWare website.”

Emmanuel Kasigazi, an entrepreneur from Uganda, turned to YouTube during the Covid-19 lockdowns and found hundreds of complete lectures on the OpenCourseWare YouTube channel. He explored psychology, cloud computing, data science, and artificial intelligence. 

“The channel opened my eyes to something I didn’t know was reachable,” Kasigazi says. “The psychology classes I took are 24 episodes; each episode is around 40 minutes. That’s a season of 'Grey’s Anatomy.' It’s amazing that I could spend the same amount of time on two different things, but one of them would change my life, my mindset, and the other would just give me a small dopamine boost.”

During his learning journey, Kasigazi also gained a community of open learners. He has teamed up with Pilgreen to shine light on the educational adventures of fellow OpenCourseWare learners. The duo is working on a podcast that will launch this fall. 

“From the channel itself you get great value, but then you pull back the curtain and get to meet the people on the OpenCourseWare team, and it’s amazing,” Kasigazi says. “It’s incredible the people I get to talk to — all because I decided to watch something on YouTube. The most impactful thing I've gotten from this channel is the people I’ve met along the way and the things I’m learning.”

While learners get to expand their knowledge base through these free, publicly accessible videos, MIT faculty members preserve their knowledge for generations to come. 

The late professor Patrick Winston's foundational AI lectures have long been popular on OpenCourseWare. His “How to Speak” lecture, published on the OpenCourseWare YouTube channel in 2018, has become the most popular video on the channel with 18 million views. Winston's annual talk, which had long been a revered event for the MIT community, has now helped millions of people improve their speaking abilities — from conversing with someone one-on-one to presenting research to nailing job interviews.

Gilbert Strang, a world-renowned mathematician, was one of the first professors to publish his lectures on OpenCourseWare. Today, his linear algebra courses have received more than 15 million visits on OpenCourseWare’s website and over 34 million views on YouTube. 

Andrea Henshall, a retired major in the U.S. Air Force, credits her academic success to Strang’s lectures on OpenCourseWare — and other MIT open educational resources. Henshall discovered Strang’s videos after struggling during her first semester of her master’s program in aeronautics and astronautics at MIT. By the end of her master’s program, Henshall was getting A's in all her courses. She is now pursuing a PhD at MIT.

Although Strang has recently retired from MIT after 63 years of teaching, his lessons will continue to be available online to learners in every country on Earth.

“Great teaching is timeless, from the insightful teaching of decades past to our newest video series — an introduction to using data to address cultural, social, economic, and policy questions, created by Sara Ellison and Nobel laureate Esther Duflo,” Newton says. “We’re honored to be preserving and sharing this knowledge for generations to come.” 

MIT OpenCourseWare publishes new content regularly on its YouTube channel and website. Brett Paci, OpenCourseWare’s media publication manager, produces the podcast episodes and many of the video lectures published on the YouTube channel. He considers the channel a “gift to the world.”

“It’s very much in the spirit and mission of MIT to contribute to the global collective knowledge and facilitate learning,” Paci says. “It’s a mission we can be proud of.”

Master bladesmith Bob Kramer’s lessons from the school of life

Tue, 03/12/2024 - 5:10pm

The story of Bob Kramer’s career is a wild one, peppered with twists and turns, false starts, and happy accidents. Before gaining renown as one of the finest bladesmiths at work today (a bladesmith is an expert at creating knives and other bladed objects), Kramer had enrolled in and dropped out of college, worked as a chef, performed in improvisational theater, and traveled the United States by train as a circus clown.
 
“The main takeaway for me was that this is an incredible adventure,” Kramer said in a special lecture at MIT on Jan. 26. He was talking about his stint under the big top, but Kramer might as well have meant his lifelong quest for excellence, of making things of exceptional quality and passing on his expertise to others.
 
One of just 120 master bladesmiths in the world, Kramer earned the American Bladesmith Society title after years of hand-forging knives from hot steel and then passing a rigorous test — slicing through an inch-thick rope, chopping a two-by-four, and shaving off his own arm hair.
 
Kramer was at MIT for all of January, invited by the Department of Materials Science and Engineering (DMSE) to teach bladesmithing classes during the Institute’s Independent Activities Period. Students lucky enough to get a spot — more than 100 people signed up for 18 spots — learned to shape, heat treat, and grind blades in DMSE’s forge and foundry.

Pursuit, and perfection

Although he called his talk “In Pursuit of the Perfect Blade,” Kramer admitted that perfection is unachievable. “You might think that ‘perfect’ is the operative word in this sentence, but for me it’s the pursuit,” Kramer said. “I got my master smith rating in 1997, and in many ways that’s like getting your black belt in a martial art. You are just beginning. You are just starting to understand what needs to be done.”
 
He began by displaying pictures of some of his Kramer Knives — blades with intricate patterns that “go all the way through the steel,” one with a gold inlay of a boy riding a fish (a “plug weld,” or metal insert), and another with steel made from the metals found in a meteorite.
 
Kramer traced his life journey back to his childhood in Michigan as the youngest of six; his older brothers and sisters “were looking outwards. They want to move on, they want to begin their lives. And I’m just trying to figure out like how to survive, how to get some chicken off the plate or get a little bit of attention.”
 
So he was “a little bit of a goofball.” In school, Kramer took to wood shop — measuring and cutting materials and making things — rather than reading and writing book reports. Later, in a high school divided into alternative-lifestyle hippies and letter-sweater-wearing jocks, he learned how to juggle, do card tricks, and ride a unicycle.
 
After a short time as a college student at Wayne State University, where he found out he had dyslexia, he was inspired by Robin Lee Graham’s memoir “Dove,” about the author’s voyage in a sloop as a teenager: “This was one of the easiest books for me to read because it was about adventure.”
 
At 19 Kramer left Detroit to travel across the country. “I was now fully responsible for myself,” he said. “And I began to try to figure out, ‘How do I fit in the world?’”
 
His travels took him to Houston, Texas, where he found a job waiting on the wealthy patrons of the Houston Country Club. Later, on a lark, he went to auditions for Ringling Bros. and Barnum & Bailey Circus clowns, got a contract, and went off with the circus for a year, performing all over the country.
 
“I saw another way to make it through the world. So my mind is opening up to all these other possibilities,” Kramer said.
 
He returned to the service industry, this time getting a job in a hotel kitchen in Seattle. Though the chefs he worked with were professionals with excellent credentials, none knew how to sharpen knives. So he decided he would learn. “I learned how to juggle. I’m going to learn how to sharpen a knife,” he said.
 
After some study, he acquired the right skills and the right tools and started a knife-sharpening business, driving a truck around Seattle, Washington, to fish markets, hotels, and restaurants, making blades razor sharp.

“Make a lot of mistakes”

After about five years, he got bored. “I’ve made enough money, but my mind is not stimulated anymore,” he said. Then one day in Blade, a magazine about custom knives, he saw an ad for a two-week bladesmithing class in Arkansas — an experience that forever changed his life.
 
After attending class, smashing coal into high-carbon coke to make steel and hand-forging a 10-inch blade with a 5-inch handle, he was enraptured.
 
“And when I got home from that, I thought, ‘I’m doing this.’ Somehow this is going to be incorporated in my life,” Kramer said.
 
Soon, he stopped driving his knife-sharpening truck and opened a knife shop in downtown Seattle, hand-making knives in an on-site forge. A review in Saveur magazine brought in swift business. After a move to the country, business slowed. Then Kramer got another review, this time in Cook’s Illustrated, on a $400 chef’s knife the publication bought from him.
 
“And they said, the best knife they had ever tested. The phone starts ringing again, and it happens all over again. Great problem to have,” Kramer said.
 
Kramer described how he makes steel for knives: It starts with stacking layer upon layer of steel, then heating the stack to 2,350 degrees Fahrenheit (1,288 Celsius) in the forge and hammering the layers together until they bond. It’s a process he has honed over years of trial and error.
 
“Make a lot of mistakes,” he advised the audience. “That’s how you get to know the stuff.”
 
Professor Yet-Ming Chiang, the Kyocera Professor of Ceramics at MIT and one of Kramer’s DMSE hosts, says what sets Kramer apart is his endless curiosity and passion for self-learning.
 
“Bob is not only a craftsman and an artist; he’s an innovator, in the best sense of that word,” Chiang says. “He doesn’t have any fancy university degrees, but he has illustrated throughout his life how to learn on your own.”

Remembering Ken Johnson Jr., MIT DAPER director of communications, promotions, and marketing

Tue, 03/12/2024 - 11:50am

On Feb. 12, the Division of Student Life and MIT lost a valued community member. Ken Johnson Jr., director of communications, promotions, and marketing in the Department of Athletics, Physical Education, and Recreation (DAPER), passed away following complications from a stroke. He was 47 years old.

Johnson’s sports information career spanned 25 years. Before coming to MIT, he worked at Brown University and served as the sports information director at Manhattanville College, the University of Bridgeport, St. Anselm College, and Assumption University. For the last eight years, Johnson had been at MIT, where he loved working with student-athletes and was recognized many times for his contributions to the sports communications profession.

“Ken truly embraced his role in DAPER. He loved working with our student-athletes and coaches. He continuously displayed his commitment to making every team feel special,” says G. Anthony Grant, DAPER department head and director of athletics.

A passion for sports and collegiate athletics

As a Red Sox fan, an avid golfer, a marathon runner, and a lover of all kinds of sports, Johnson was passionate about working with all of MIT’s 33 sports teams — and it showed. He was recently honored by the College Sports Communicators for his 25-year career in the field. Johnson was also the second vice president of the Eastern Athletic Communications Association and the recipient of the 2019 U.S. Track and Field and Cross-Country Coaches Association Excellence in Communications Award for NCAA Division III Track and Field.

Andrew Barlow, associate professor and baseball coach, also admired Johnson’s enthusiasm for his work, adding, “Ken was a true professional and an instant friend for those who had the opportunity to know him. His passion for the sports communication profession and his devotion to all the student-athletes he supported were remarkable. He was a true fan of all our MIT athletic teams and was an integral part of our MIT baseball family.

“All our players will have fond memories of Ken’s reactions when they would try to make him laugh with silly post-game interview antics. All of us coaches will surely miss our post-game ‘debrief’ sessions where Ken would point out all of ‘our potential decision-making mistakes’ that we might have made,” Barlow says.

“He took great pride when Karenna Groff won the NCAA Woman of the Year Award, and he even attended the ceremony in San Antonio, Texas, where she was recognized,” says Grant. “Ken was also ecstatic when our Men’s Cross-Country team won the program’s first Division III NCAA National Championship. He even bought a full-sized replica of the trophy to put in his office.”

A true New Englander

Johnson grew up on Cape Cod and graduated from Dennis Yarmouth Regional High School. He subsequently earned a bachelor of science in sports management from the University of Massachusetts at Amherst. He is survived by his parents, Kenneth and Katherine “Kate” Johnson, his sister Megan Warfield, her husband, Bill, and his beloved nephew Cameron.

Gifts in Johnson’s memory can be made to the Friends of DAPER Fund.

A sprayable gel could make minimally invasive surgeries simpler and safer

Tue, 03/12/2024 - 11:30am

More than 20 million Americans undergo colonoscopy screenings every year, and in many of those cases, doctors end up removing polyps that are 2 cm or larger and require additional care. This procedure has greatly reduced the overall incidence of colon cancer, but not without complications, as patients may experience gastrointestinal bleeding both during and after the procedure.

In hopes of preventing those complications from occurring, researchers at MIT have developed a new gel, GastroShield, that can be sprayed onto the surgical sites through an endoscope. This gel forms a tough but flexible protective layer that serves as a shield for the damaged area. The material prevents delayed bleeding and reinforces the mechanical integrity of the tissue.

“Our tissue-responsive adhesive technology is engineered to interact with the tissue via complementary covalent and ionic interactions as well as physical interactions to provide prolonged lesion protection over days to prevent complications following polyp removal, and other wounds at risk of bleeding across the gastrointestinal tract,” says Natalie Artzi, a principal research scientist in MIT’s Institute for Medical Engineering and Science, an associate professor of medicine at Harvard Medical School, and the senior author of the paper.

In an animal study, the researchers showed that the GastroShield application integrates seamlessly with current endoscopic procedures and provides wound protection for three to seven days, during which it helps tissue heal following surgery. Artzi and other members of the research team have started a company called BioDevek that now plans to further develop the material for use in humans.

Gonzalo Muñoz Taboada, CEO of BioDevek, and Daniel Dahis, lead scientist at BioDevek, are the lead authors of the study, which appears in the journal Advanced Materials. Elazer Edelman, the Edward J. Poitras Professor in Medical Engineering and Science at MIT and the director of IMES, and Pere Dosta, a former postdoc in Artzi’s lab, are also authors of the paper.

Adhesive gels

Routine colon cancer screenings often reveal small precancerous polyps, which can be removed before they become cancerous. This is usually done using an endoscope. If any bleeding occurs during the polyp removal, doctors can cauterize the wound to seal it, but this method creates a scar that may delay healing and result in additional complications.

Additionally, in some patients, bleeding doesn’t occur until a few days after the procedure. This can be dangerous and may require patients to return to the hospital for additional treatment. Other patients may develop small tears that allow intestinal contents to leak into the abdomen, which can lead to severe infection and require emergency care.

When tissue reinforcement is required, doctors often insert metal clips to hold tissue together, but these can’t be used with larger polyps and aren’t always effective. Efforts to develop a gel that could seal the surgical wounds have not been successful, mainly because the materials could not adhere to the surgical site for more than 24 hours.

The MIT team tested dozens of combinations of materials that they thought could have the right properties for this use. They wanted to find formulations that would display a low enough viscosity to be easily delivered and sprayed through a nozzle at the end of a catheter that fits inside commercial endoscopes. Simultaneously, upon tissue contact, this formulation should instantly form a tough gel that adheres strongly to the tissue. They also wanted the gel to be flexible enough that it could withstand the forces generated by the peristaltic movements of the digestive tract and the food flowing by.

The researchers came up with a winning combination that includes a polymer called pluronic, which is a type of block copolymer that can self-assemble into spheres called micelles. The ends of these polymers contain multiple amine groups, which end up on the surface of the micelles. The second component of the gel is oxidized dextran, a polysaccharide that can form strong but reversible bonds with the amine groups of the pluronic micelles.

When sprayed, these materials instantly react with each other and with the lining of the gastrointestinal tract, forming a solid gel in less than five seconds. The micelles that make up the gel are “self-healing” and can absorb forces that they encounter from peristaltic movements and food moving along the digestive tract, by temporarily breaking apart and then re-assembling.

“To obtain a material that adheres to the design criteria and can be delivered through existing colonoscopes, we screened through libraries of materials to understand how different parameters affect gelation, adhesion, retention, and compatibility,” Artzi says.

A protective layer

The gel can also withstand the low pH and enzymatic activity in the digestive tract, and protect tissue from that harsh environment while it heals, underscoring the gel’s potential for use in other gastrointestinal wounds at high risk of bleeding, such as stomach ulcers, which affect more than 4 million Americans every year.

In tests in animals, the researchers found that every animal treated with the new gel showed rapid sealing, and there were no perforations, leakages, or bleeding in the week following the treatment. The material lasted for about five days, after which it was sloughed off along with the top layer of tissue as the surgical wounds healed.

The researchers also performed several biocompatibility studies and found that the gel did not cause any adverse effects.

“A key feature of this new technology is our aim to make it translational. GastroShield was designed to be stored in liquid form in a ready-to-use kit. Additionally, it doesn’t require any activation, light, or trigger solution to form the gel, aiming to make endoscopic use easy and fast,” says Muñoz, who is currently leading the translational effort for GastroShield.

BioDevek is now working on further developing the material for possible use in patients. In addition to its potential use in colonoscopies, this gel could also be useful for treating stomach ulcers and inflammatory conditions such as Crohn’s disease, or for delivering cancer drugs, Artzi says.

The research was funded, in part, by the National Science Foundation.

Boosting student engagement and workforce development in microelectronics

Tue, 03/12/2024 - 9:45am

The Northeast Microelectronics Internship Program (NMIP), an initiative of MIT’s Microsystems Technology Laboratories (MTL) to connect first- and second-year college students to careers in semiconductor and microelectronics industries, recently received a $75,000 grant to expand its reach and impact. The funding is part of $9.2 million in grants awarded by the Northeast Microelectronics Coalition (NEMC) Hub to boost technology advancement, workforce development, education, and student engagement across the Northeast Region.

NMIP was founded by Tomás Palacios, the Clarence J. LeBel Professor of Electrical Engineering at MIT and director of MTL. The grant, he says, will help address a significant barrier limiting the number of students who pursue careers in critical technological fields.

“Undergraduate students are key for the future of our nation’s microelectronics workforce. They directly fill important roles that require technical fluency or move on to advanced degrees,” says Palacios. “But these students have repeatedly shared with us that the lack of internships in their first few semesters in college is the main reason why many move to industries with a more established tradition of hiring undergraduate students in their early years. This program connects students and industry partners to fix this issue.”

The NMIP funding was announced on Jan. 30 during an event featuring Massachusetts Governor Maura Healey, Lt. Governor Kim Driscoll, and Economic Development Secretary Yvonne Hao, as well as leaders from the U.S. Department of Defense and the director of Microelectronics Commons at NSTXL, the National Security Technology Accelerator. The grant to support NMIP is part of $1.5 million in new workforce development grants aimed at spurring the microelectronics and semiconductor industry across the Northeast Region. The new awards are the first investments made by the NEMC Hub, a division of the Massachusetts Technology Collaborative, that is overseeing investments made by the federal CHIPS and Science Act following the formal establishment of the NEMC Hub in September 2023.

“We are very excited for the recognition the program is receiving. It is growing quickly and the support will help us further dive into our mission to connect talented students to the broader microelectronics ecosystem while integrating our values of curiosity, openness, excellence, respect, and community,” says Preetha Kingsview, who manages the program. “This grant will help us connect to the broader community convened by NEMC Hub in close collaboration with MassTech. We are very excited for what this support will help NMIP achieve.”

The funds provided by the NEMC Microelectronics Commons Hub will help expand the program more broadly across the Northeast, to support students and grow the pool of skilled workers for the microelectronics sector regionally. After receiving 300 applications in the first two years, the program received 296 applications in 2024 from students interested in summer internships, and is working with more than 25 industry partners across the Northeast. These NMIP students not only participate in industry-focused summer internships, but are also exposed to the broader microelectronics ecosystem through bi-weekly field trips to microelectronics companies in the region.

“The expansion of the program across the Northeast, and potentially nationwide, will extend the impact of this program to reach more students and benefit more microelectronics companies across the region,” says Christine Nolan, acting NEMC Hub program director. “Through hands-on training opportunities we are able to showcase the amazing jobs that exist in this sector and to strengthen the pipeline of talented workers to support the mission of the NEMC Hub and the national CHIPS investments.”

Sheila Wescott says her company, MACOM, a Lowell-based developer of semiconductor devices and components, is keenly interested in sourcing intern candidates from NMIP. “We already have a success story from this program,” she says. “One of our interns completed two summer programs with us and is continuing part time in the fall — and we anticipate him joining MACOM full time after graduation.”

“NMIP is an excellent platform to engage students with a diverse background and promote microelectronics technology,” says Bin Lu, CTO and co-founder of Finwave Semiconductor. “Finwave has benefited from engaging with the young engineers who are passionate about working with electronics and cutting-edge semiconductor technology. We are committed to continuing to work with NMIP.”

Scientists develop a rapid gene-editing screen to find effects of cancer mutations

Tue, 03/12/2024 - 6:00am

Tumors can carry mutations in hundreds of different genes, and each of those genes may be mutated in different ways — some mutations simply replace one DNA nucleotide with another, while others insert or delete larger sections of DNA.

Until now, there has been no way to quickly and easily screen each of those mutations in their natural setting to see what role they may play in the development, progression, and treatment response of a tumor. Using a variant of CRISPR genome-editing known as prime editing, MIT researchers have now come up with a way to screen those mutations much more easily.

The researchers demonstrated their technique by screening cells with more than 1,000 different mutations of the tumor suppressor gene p53, all of which have been seen in cancer patients. This method, which is easier and faster than any existing approach and edits the genome rather than introducing an artificial version of the mutant gene, revealed that some p53 mutations are more harmful than previously thought.

This technique could also be applied to many other cancer genes, the researchers say, and could eventually be used for precision medicine, to determine how an individual patient’s tumor will respond to a particular treatment.

“In one experiment, you can generate thousands of genotypes that are seen in cancer patients, and immediately test whether one or more of those genotypes are sensitive or resistant to any type of therapy that you’re interested in using,” says Francisco Sanchez-Rivera, an MIT assistant professor of biology, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the study.

MIT graduate student Samuel Gould is the lead author of the paper, which appears today in Nature Biotechnology.

Editing cells

The new technique builds on research that Sanchez-Rivera began 10 years ago as an MIT graduate student. At that time, working with Tyler Jacks, the David H. Koch Professor of Biology, and then-postdoc Thales Papagiannakopoulos, Sanchez-Rivera developed a way to use CRISPR genome-editing to introduce into mice genetic mutations linked to lung cancer.

In that study, the researchers showed that they could delete genes that are often lost in lung tumor cells, and the resulting tumors were similar to naturally arising tumors with those mutations. However, this technique did not allow for the creation of point mutations (substitutions of one nucleotide for another) or insertions.

“While some cancer patients have deletions in certain genes, the vast majority of mutations that cancer patients have in their tumors also include point mutations or small insertions,” Sanchez-Rivera says.

Since then, David Liu, a professor in the Harvard University Department of Chemistry and Chemical Biology and a core institute member of the Broad Institute, has developed new CRISPR-based genome editing technologies that can generate additional types of mutations more easily. With base editing, developed in 2016, researchers can engineer point mutations, but not all possible point mutations. In 2019, Liu, who is also an author of the Nature Biotechnology study, developed a technique called prime editing, which enables any kind of point mutation to be introduced, as well as insertions and deletions.

“Prime editing in theory solves one of the major challenges with earlier forms of CRISPR-based editing, which is that it allows you to engineer virtually any type of mutation,” Sanchez-Rivera says.

When they began working on this project, Sanchez-Rivera and Gould calculated that if performed successfully, prime editing could be used to generate more than 99 percent of all small mutations seen in cancer patients.

However, to achieve that, they needed to find a way to optimize the editing efficiency of the CRISPR-based system. The prime editing guide RNAs (pegRNAs) used to direct CRISPR enzymes to cut the genome in certain spots have varying levels of efficiency, which leads to “noise” in the data from pegRNAs that simply aren’t generating the correct target mutation. The MIT team devised a way to reduce that noise by using synthetic target sites to help them calculate how efficiently each guide RNA that they tested was working.

“We can design multiple prime-editing guide RNAs with different design properties, and then we get an empirical measurement of how efficient each of those pegRNAs is. It tells us what percentage of the time each pegRNA is actually introducing the correct edit,” Gould says.
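
To make that efficiency measurement concrete, here is a minimal sketch of the kind of correction it enables, assuming the cells carrying a given pegRNA behave as a simple mixture of edited and unedited cells. This is not code from the study; the function name, the mixture model, and the 5 percent efficiency cutoff are illustrative assumptions only.

```python
# Minimal sketch, not code from the study: using an empirically measured
# pegRNA editing efficiency to "denoise" a screen readout by modeling the
# observed cell population as a mixture of edited and unedited cells.

def edited_fold_change(observed_fc: float, efficiency: float,
                       unedited_fc: float = 1.0):
    """Estimate the fold change of the correctly edited cells.

    observed_fc -- fold change of all cells carrying this pegRNA
    efficiency  -- fraction of those cells with the intended edit,
                   measured empirically (e.g., via synthetic target sites)
    unedited_fc -- fold change of unedited, wild-type-like cells
    """
    if efficiency < 0.05:  # too inefficient to yield a trustworthy estimate
        return None
    # observed_fc = efficiency * edited_fc + (1 - efficiency) * unedited_fc
    return (observed_fc - (1.0 - efficiency) * unedited_fc) / efficiency

# A pegRNA population that doubled, but that edits only 40 percent of cells,
# implies a roughly 3.5-fold advantage for the edited cells themselves.
print(edited_fold_change(observed_fc=2.0, efficiency=0.4))
```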

Analyzing mutations

The researchers demonstrated their technique using p53, a gene that is mutated in more than half of all cancer patients. From a dataset that includes sequencing information from more than 40,000 patients, the researchers identified more than 1,000 different mutations that can occur in p53.

“We wanted to focus on p53 because it’s the most commonly mutated gene in human cancers, but only the most frequent variants in p53 have really been deeply studied. There are many variants in p53 that remain understudied,” Gould says.

Using their new method, the researchers introduced p53 mutations in human lung adenocarcinoma cells, then measured the survival rates of these cells, allowing them to determine each mutation’s effect on cell fitness.

Among their findings, they showed that some p53 mutations promoted cell growth more than had been previously thought. These mutations, which prevent the p53 protein from forming a tetramer — an assembly of four p53 proteins — had been studied before, using a technique that involves inserting artificial copies of a mutated p53 gene into a cell.

Those studies found that these mutations did not confer any survival advantage to cancer cells. However, when the MIT team introduced those same mutations using the new prime editing technique, they found that the mutations prevented the tetramer from forming, allowing the cells to survive. Based on the studies done using overexpression of artificial p53 DNA, those mutations would have been classified as benign, while the new work shows that under more natural circumstances, they are not.

“This is a case where you could only observe these variant-induced phenotypes if you're engineering the variants in their natural context and not with these more artificial systems,” Gould says. “This is just one example, but it speaks to a broader principle that we’re going to be able to access novel biology using these new genome-editing technologies.”

Because it is difficult to reactivate tumor suppressor genes, there are few drugs that target p53, but the researchers now plan to investigate mutations found in other cancer-linked genes, in hopes of discovering potential cancer therapies that could target those mutations. They also hope that the technique could one day enable personalized approaches to treating tumors.

“With the advent of sequencing technologies in the clinic, we'll be able to use this genetic information to tailor therapies for patients suffering from tumors that have a defined genetic makeup,” Sanchez-Rivera says. “This approach based on prime editing has the potential to change everything.”

The research was funded, in part, by the National Institute of General Medical Sciences, an MIT School of Science Fellowship in Cancer Research, a Howard Hughes Medical Institute Hanna Gray Fellowship, the V Foundation for Cancer Research, a National Cancer Institute Cancer Center Support Grant, the Ludwig Center at MIT, a Koch Institute Frontier Award, the MIT Research Support Committee, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Reducing pesticide use while increasing effectiveness

Tue, 03/12/2024 - 12:00am

Farming can be a low-margin, high-risk business, subject to weather and climate patterns, insect population cycles, and other unpredictable factors. Farmers need to be savvy managers of the many resources they deal with, and chemical fertilizers and pesticides are among their major recurring expenses.

Despite the importance of these chemicals, a lack of technology that monitors and optimizes sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply these chemicals. As a result, these chemicals tend to be over-sprayed, leading to their runoff into waterways and buildup in the soil.

That could change, thanks to a new approach of feedback-optimized spraying, invented by AgZen, an MIT spinout founded in 2020 by Professor Kripa Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22.

Over the past decade, AgZen’s founders have developed products and technologies to control the interactions of droplets and sprays with plant surfaces. The Boston-based venture-backed company launched a new commercial product in 2024 and is currently piloting another related product. Field tests of both have shown the products can help farmers spray more efficiently and effectively, using fewer chemicals overall.

“Worldwide, farms spend approximately $60 billion a year on pesticides. Our objective is to reduce the number of pesticides sprayed and lighten the financial burden on farms without sacrificing effective pest management,” Varanasi says.

Getting droplets to stick

While the world pesticide market is growing rapidly, a lot of the pesticides sprayed don’t reach their target. A significant portion bounces off the plant surfaces, lands on the ground, and becomes part of the runoff that flows to streams and rivers, often causing serious pollution. Some of these pesticides can be carried away by wind over very long distances.

“Drift, runoff, and poor application efficiency are well-known, longstanding problems in agriculture, but we can fix this by controlling and monitoring how sprayed droplets interact with leaves,” Varanasi says.

With support from the MIT Tata Center and the Abdul Latif Jameel Water and Food Systems Lab, Varanasi and his team analyzed how droplets strike plant surfaces, and explored ways to increase application efficiency. This research led them to develop a novel system of nozzles that cloak droplets with compounds that enhance the retention of droplets on the leaves, a product they call EnhanceCoverage.

Field studies across regions — from Massachusetts to California to Italy and France — showed that this droplet-optimization system could allow farmers to cut the amount of chemicals needed by more than half because more of the sprayed substances would stick to the leaves.

Measuring coverage

However, in trying to bring this technology to market, the researchers faced a sticky problem: Nobody knew how well pesticide sprays were adhering to the plants in the first place, so how could AgZen say that the coverage was better with its new EnhanceCoverage system?

“I had grown up spraying with a backpack on a small farm in India, so I knew this was an issue,” Jayaprakash says. “When we spoke to growers, they told me how complicated spraying is when you’re on a large machine. Whenever you spray, there are so many things that can influence how effective your spray is. How fast do you drive the sprayer? What flow rate are you using for the chemicals? What chemical are you using? What’s the age of the plants, what’s the nozzle you’re using, what is the weather at the time? All these things influence agrochemical efficiency.”

Agricultural spraying essentially comes down to dissolving a chemical in water and then spraying droplets onto the plants. “But the interaction between a droplet and the leaf is complex,” Varanasi says. “We were coming in with ways to optimize that, but what the growers told us is, hey, we’ve never even really looked at that in the first place.”

Although farmers have been spraying agricultural chemicals on a large scale for about 80 years, they’ve “been forced to rely on general rules of thumb and pick all these interlinked parameters, based on what’s worked for them in the past. You pick a set of these parameters, you go spray, and you’re basically praying for outcomes in terms of how effective your pest control is,” Varanasi says.

Before AgZen could sell farmers on the new system to improve droplet coverage, the company had to invent a way to measure precisely how much spray was adhering to plants in real time.

Comparing before and after

The system they came up with, which they tested extensively on farms across the country last year, involves a unit that can be bolted onto the spraying arm of virtually any sprayer. It carries two sensor stacks, one just ahead of the sprayer nozzles and one behind. Then, built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray. It also computes how much those droplets will spread out or evaporate, leading to a precise estimate of the final coverage.

“There’s a lot of physics that governs how droplets spread and evaporate, and this has been incorporated into software that a farmer can use,” Varanasi says. “We bring a lot of our expertise into understanding droplets on leaves. All these factors, like how temperature and humidity influence coverage, have always been nebulous in the spraying world. But now you have something that can be exact in determining how well your sprays are doing.”
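
As a rough illustration of the kind of calculation involved, the sketch below shows one way a final-coverage estimate could be assembled from before-and-after sensor readings. It is not AgZen's software; every function name, parameter, and factor value is a hypothetical placeholder chosen only to make the idea concrete.

```python
# Minimal sketch, not AgZen's software: estimating the final covered fraction
# of a leaf from before/after optical measurements, with simple correction
# factors for droplet spreading and evaporation. All names and numbers are
# hypothetical placeholders.

def estimate_final_coverage(coverage_before: float, coverage_after: float,
                            spread_factor: float, evaporation_loss: float) -> float:
    """Predict the leaf fraction that remains covered once droplets settle.

    coverage_before, coverage_after -- covered fraction (0..1) measured by the
        sensor stacks just ahead of and just behind the spray nozzles
    spread_factor    -- how much freshly deposited droplets spread out (>= 1)
    evaporation_loss -- fraction of deposited liquid lost to evaporation (0..1)
    """
    deposited = max(coverage_after - coverage_before, 0.0)  # added by this pass
    settled = deposited * spread_factor * (1.0 - evaporation_loss)
    return min(coverage_before + settled, 1.0)

# Example: a pass raises measured coverage from 5 percent to 30 percent;
# droplets spread about 1.4x, but a quarter of the liquid evaporates.
print(estimate_final_coverage(0.05, 0.30, spread_factor=1.4, evaporation_loss=0.25))
```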

“We’re not only measuring coverage, but then we recommend how to act,” says Jayaprakash, who is AgZen’s CEO. “With the information we collect in real-time and by using AI, RealCoverage tells operators how to optimize everything on their sprayer, from which nozzle to use, to how fast to drive, to how many gallons of spray is best for a particular chemical mix on a particular acre of a crop.”

The tool was developed to prove how much AgZen’s EnhanceCoverage nozzle system (which will be launched in 2025) improves coverage. But it turns out that monitoring and optimizing droplet coverage on leaves in real-time with this system can itself yield major improvements.

“We worked with large commercial farms last year in specialty and row crops,” Jayaprakash says. “When we saved our pilot customers up to 50 percent of their chemical cost at a large scale, they were very surprised.” He says the tool has reduced chemical costs and volume in fallow field burndowns, weed control in soybeans, defoliation in cotton, and fungicide and insecticide sprays in vegetables and fruits. Along with data from commercial farms, field trials conducted by three leading agricultural universities have also validated these results.

“Across the board, we were able to save between 30 and 50 percent on chemical costs and increase crop yields by enabling better pest control,” Jayaprakash says. “By focusing on the droplet-leaf interface, our product can help any foliage spray throughout the year, whereas most technological advancements in this space recently have been focused on reducing herbicide use alone.” The company now intends to lease the system across thousands of acres this year.

And these efficiency gains can lead to significant returns at scale, he emphasizes: In the U.S., farmers currently spend $16 billion a year on chemicals to protect about $200 billion of crop yields.

The company launched its first product, the coverage optimization system called RealCoverage, this year, reaching a wide variety of farms with different crops and in different climates. “We’re going from proof-of-concept with pilots in large farms to a truly massive scale on a commercial basis with our lease-to-own program,” Jayaprakash says.

“We’ve also been tapped by the USDA to help them evaluate practices to minimize pesticides in watersheds,” Varanasi says, noting that RealCoverage can also be useful for regulators, chemical companies, and agricultural equipment manufacturers.

Once AgZen has proven the effectiveness of using coverage as a decision metric, and after the RealCoverage optimization system is widely in practice, the company will next roll out its second product, EnhanceCoverage, designed to maximize droplet adhesion. Because that system will require replacing all the nozzles on a sprayer, the researchers are doing pilots this year but will wait for a full rollout in 2025, after farmers have gained experience and confidence with their initial product.

“There is so much wastage,” Varanasi says. “Yet farmers must spray to protect crops, and there is a lot of environmental impact from this. So, after all this work over the years, learning about how droplets stick to surfaces and so on, now the culmination of it in all these products for me is amazing, to see all this come alive, to see that we’ll finally be able to solve the problem we set out to solve and help farmers.”

Exploring the cellular neighborhood

Mon, 03/11/2024 - 4:50pm

Cells rely on complex molecular machines composed of protein assemblies to perform essential functions such as energy production, gene expression, and protein synthesis. To better understand how these machines work, scientists capture snapshots of them by isolating proteins from cells and using various methods to determine their structures. However, isolating proteins from cells also removes them from the context of their native environment, including protein interaction partners and cellular location.

Recently, cryogenic electron tomography (cryo-ET) has emerged as a way to observe proteins in their native environment by imaging frozen cells at different angles to obtain three-dimensional structural information. This approach is exciting because it allows researchers to directly observe how and where proteins associate with each other, revealing the cellular neighborhood of those interactions within the cell.

With the technology available to image proteins in their native environment, MIT graduate student Barrett Powell wondered if he could take it one step further: What if molecular machines could be observed in action? In a paper published March 8 in Nature Methods, Powell describes the method he developed, called tomoDRGN, for modeling structural differences of proteins in cryo-ET data that arise from protein motions or proteins binding to different interaction partners. These variations are known as structural heterogeneity. 

Although Powell had joined the lab of MIT associate professor of biology Joey Davis as an experimental scientist, he recognized the potential impact of computational approaches in understanding structural heterogeneity within a cell. Previously, the Davis Lab developed a related methodology named cryoDRGN to understand structural heterogeneity in purified samples. As Powell and Davis saw cryo-ET rising in prominence in the field, Powell took on the challenge of re-imagining this framework to work in cells.

When solving structures with purified samples, each particle is imaged only once. By contrast, cryo-ET data is collected by imaging each particle more than 40 times from different angles. That meant tomoDRGN needed to be able to merge the information from more than 40 images, which was where the project hit a roadblock: the amount of data led to an information overload.

To address this, Powell successfully rebuilt the cryoDRGN model to prioritize only the highest-quality data. When imaging the same particle multiple times, radiation damage occurs. The images acquired earlier, therefore, tend to be of higher quality because the particles are less damaged.

“By excluding some of the lower-quality data, the results were actually better than using all of the data — and the computational performance was substantially faster,” Powell says.
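
The underlying idea, keeping only the earliest and least radiation-damaged exposures for each particle, can be sketched in a few lines. The data layout and function below are illustrative assumptions, not tomoDRGN's published implementation.

```python
# Minimal sketch of retaining only the earliest, least radiation-damaged tilt
# images for each particle before training. The array shapes and function are
# illustrative assumptions, not the actual tomoDRGN pipeline.
import numpy as np

def select_best_tilts(tilt_images, acquisition_order, keep=8):
    """Keep the `keep` tilt images acquired earliest (lowest accumulated dose).

    tilt_images       : array of shape (n_tilts, H, W) for one particle
    acquisition_order : array of shape (n_tilts,) giving when each tilt was taken
    """
    best = np.argsort(acquisition_order)[:keep]   # earliest exposures first
    return tilt_images[best]

# Example: a particle imaged 41 times; retain only the 8 earliest exposures.
rng = np.random.default_rng(1)
images = rng.normal(size=(41, 64, 64))
order = rng.permutation(41)                       # tilts are not taken in angle order
subset = select_best_tilts(images, order)
print(subset.shape)                               # (8, 64, 64)
```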

Just as Powell was beginning work on testing his model, he had a stroke of luck: The authors of a groundbreaking new study, the first to visualize ribosomes inside cells at near-atomic resolution, shared their raw data on the Electron Microscopy Public Image Archive (EMPIAR). This dataset was an exemplary test case for Powell, through which he demonstrated that tomoDRGN could uncover structural heterogeneity within cryo-ET data.

According to Powell, one exciting result is what tomoDRGN found surrounding a subset of ribosomes in the EMPIAR dataset. Some of the ribosomal particles were associated with a bacterial cell membrane and engaged in a process called cotranslational translocation. This occurs when a protein is being simultaneously synthesized and transported across a membrane. Researchers can use this result to make new hypotheses about how the ribosome functions with other protein machinery integral to transporting proteins outside of the cell, now guided by a structure of the complex in its native environment. 

After seeing that tomoDRGN could resolve structural heterogeneity from a structurally diverse dataset, Powell was curious: How small of a population could tomoDRGN identify? For that test, he chose a protein named apoferritin, which is a commonly used benchmark for cryo-ET and is often treated as structurally homogeneous. Ferritin is a protein used for iron storage and is referred to as apoferritin when it lacks iron.

Surprisingly, in addition to the expected particles, tomoDRGN revealed a previously unreported minor population of iron-bound ferritin particles making up just 2 percent of the dataset. This result further demonstrated tomoDRGN’s ability to identify structural states that occur so infrequently that they would be averaged out of a 3D reconstruction.

Powell and other members of the Davis Lab are excited to see how tomoDRGN can be applied to further ribosomal studies and to other systems. Davis works on understanding how cells assemble, regulate, and degrade molecular machines, so the next steps include exploring ribosome biogenesis within cells in greater detail using this new tool.

“What are the possible states that we may be losing during purification?” Davis asks. “Perhaps more excitingly, we can look at how they localize within the cell and what partners and protein complexes they may be interacting with.”

A new sensor detects harmful “forever chemicals” in drinking water

Mon, 03/11/2024 - 3:00pm

MIT chemists have designed a sensor that detects tiny quantities of perfluoroalkyl and polyfluoroalkyl substances (PFAS) — chemicals found in food packaging, nonstick cookware, and many other consumer products.

These compounds, also known as “forever chemicals” because they do not break down naturally, have been linked to a variety of harmful health effects, including cancer, reproductive problems, and disruption of the immune and endocrine systems.

Using the new sensor technology, the researchers showed that they could detect PFAS levels as low as 200 parts per trillion in a water sample. The device they designed could offer a way for consumers to test their drinking water, and it could also be useful in industries that rely heavily on PFAS chemicals, including the manufacture of semiconductors and firefighting equipment.

“There’s a real need for these sensing technologies. We’re stuck with these chemicals for a long time, so we need to be able to detect them and get rid of them,” says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT and the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Other authors of the paper are former MIT postdoc and lead author Sohyun Park and MIT graduate student Collette Gordon.

Detecting PFAS

Coatings containing PFAS chemicals are used in thousands of consumer products. In addition to nonstick coatings for cookware, they are also commonly used in water-repellent clothing, stain-resistant fabrics, grease-resistant pizza boxes, cosmetics, and firefighting foams.

These fluorinated chemicals, which have been in widespread use since the 1950s, can be released into water, air, and soil from factories, sewage treatment plants, and landfills. They have been found in drinking water sources in all 50 states.

In 2023, the Environmental Protection Agency created an “advisory health limit” for two of the most hazardous PFAS chemicals, known as perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS). These advisories call for a limit of 0.004 parts per trillion for PFOA and 0.02 parts per trillion for PFOS in drinking water.

Currently, the only way that a consumer could determine if their drinking water contains PFAS is to send a water sample to a laboratory that performs mass spectrometry testing. However, this process takes several weeks and costs hundreds of dollars.

To create a cheaper and faster way to test for PFAS, the MIT team designed a sensor based on lateral flow technology — the same approach used for rapid Covid-19 tests and pregnancy tests. Instead of a test strip coated with antibodies, the new sensor is embedded with a special polymer known as polyaniline, which can switch between semiconducting and conducting states when protons are added to the material.

The researchers deposited these polymers onto a strip of nitrocellulose paper and coated them with a surfactant that can pull fluorocarbons such as PFAS out of a drop of water placed on the strip. When this happens, protons from the PFAS are drawn into the polyaniline and turn it into a conductor, reducing the electrical resistance of the material. This change in resistance, which can be measured precisely using electrodes and sent to an external device such as a smartphone, gives a quantitative measurement of how much PFAS is present.
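
In principle, turning that resistance change into a number is a matter of reading off a calibration curve, as in the minimal sketch below. The calibration points and values are invented for illustration; a real strip would be calibrated against known standards for each target compound.

```python
# Illustrative conversion of a measured resistance drop into an estimated PFAS
# concentration via a calibration curve. The calibration points below are made
# up for the example; a real device would be calibrated against known standards
# for each target compound (e.g., PFBA or PFOA).
import numpy as np

# Hypothetical calibration: fractional resistance drop vs. concentration (ppt).
cal_concentration_ppt = np.array([0, 200, 400, 800, 1600])
cal_resistance_drop   = np.array([0.00, 0.05, 0.09, 0.16, 0.27])

def estimate_pfas_ppt(r_baseline, r_sample):
    """Estimate PFAS concentration from strip resistance before/after the sample."""
    drop = (r_baseline - r_sample) / r_baseline        # fractional decrease
    # Interpolate along the calibration curve (clamped to its range).
    return float(np.interp(drop, cal_resistance_drop, cal_concentration_ppt))

print(estimate_pfas_ppt(r_baseline=10_000, r_sample=9_300))  # ~300 ppt (illustrative)
```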

This approach works only with PFAS that are acidic, which includes two of the most harmful PFAS — PFOA and perfluorobutanoic acid (PFBA).

A user-friendly system

The current version of the sensor can detect concentrations as low as 200 parts per trillion for PFBA, and 400 parts per trillion for PFOA. This is not quite low enough to meet the current EPA guidelines, but the sensor uses only a fraction of a milliliter of water. The researchers are now working on a larger-scale device that would be able to filter about a liter of water through a membrane made of polyaniline, and they believe this approach should increase the sensitivity by more than a hundredfold, with the goal of meeting the very low EPA advisory levels.
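
The expected gain comes from preconcentration: passing a large volume of water through the polyaniline membrane should accumulate far more PFAS on the sensing material than a single small drop does. The toy arithmetic below uses invented volumes and a deliberately simplified proportional model to show how the effective detection limit would scale.

```python
# Back-of-the-envelope illustration of how preconcentration could lower the
# effective detection limit. The volumes and the strictly proportional model
# are assumptions for illustration, not measured device parameters.
strip_sample_volume_ml = 0.5       # assumed volume analyzed by the test strip
filtered_volume_ml = 1000.0        # roughly one liter passed through the membrane
strip_detection_limit_ppt = 200.0  # reported limit for PFBA on the current strip

preconcentration_factor = filtered_volume_ml / strip_sample_volume_ml
effective_limit_ppt = strip_detection_limit_ppt / preconcentration_factor
print(f"preconcentration factor: {preconcentration_factor:.0f}x")
print(f"idealized effective detection limit: {effective_limit_ppt:.2f} ppt")
```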

“We do envision a user-friendly, household system,” Swager says. “You can imagine putting in a liter of water, letting it go through the membrane, and you have a device that measures the change in resistance of the membrane.”

Such a device could offer a less expensive, rapid alternative to current PFAS detection methods. If PFAS are detected in drinking water, there are commercially available filters that can be used on household drinking water to reduce those levels. The new testing approach could also be useful for factories that manufacture products with PFAS chemicals, so they could test whether the water used in their manufacturing process is safe to release into the environment.

The research was funded by an MIT School of Science Fellowship to Gordon, a Bose Research Grant, and a Fulbright Fellowship to Park.

For people who speak many languages, there’s something special about their native tongue

Sun, 03/10/2024 - 8:01pm

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that the language network in polyglots’ brains was less active when they listened to their native language than the language networks of people who speak only one language were when they listened to theirs.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but was not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you've had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when they listened to languages related to one they could understand than when they listened to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they're listening to, and then see how that relates to the activation.”

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, also becomes activated when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Researchers enhance peripheral vision in AI models

Fri, 03/08/2024 - 12:00am

Peripheral vision enables humans to see shapes that aren’t directly in our line of sight, albeit with less detail. This ability expands our field of vision and can be helpful in many situations, such as detecting a vehicle approaching our car from the side.

Unlike humans, AI does not have peripheral vision. Equipping computer vision models with this ability could help them detect approaching hazards more effectively or predict whether a human driver would notice an oncoming object.

Taking a step in this direction, MIT researchers developed an image dataset that allows them to simulate peripheral vision in machine learning models. They found that training models with this dataset improved the models’ ability to detect objects in the visual periphery, although the models still performed worse than humans.

Their results also revealed that, unlike with humans, neither the size of objects nor the amount of visual clutter in a scene had a strong impact on the AI’s performance.

“There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a postdoc and co-author of a paper detailing this study.

Answering that question may help researchers build machine learning models that can see the world more like humans do. In addition to improving driver safety, such models could be used to develop displays that are easier for people to view.

Plus, a deeper understanding of peripheral vision in AI models could help researchers better predict human behavior, adds lead author Anne Harrington MEng ’23.

“Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” she explains.

Their co-authors include Mark Hamilton, an electrical engineering and computer science graduate student; Ayush Tewari, a postdoc; Simon Stent, research manager at the Toyota Research Institute; and senior authors William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of CSAIL. The research will be presented at the International Conference on Learning Representations.

“Any time you have a human interacting with a machine — a car, a robot, a user interface — it is hugely important to understand what the person can see. Peripheral vision plays a critical role in that understanding,” Rosenholtz says.

Simulating peripheral vision

Extend your arm in front of you and put your thumb up — the small area around your thumbnail is seen by your fovea, the small depression in the middle of your retina that provides the sharpest vision. Everything else you can see is in your visual periphery. Your visual cortex represents a scene with less detail and reliability as it moves farther from that sharp point of focus.

Many existing approaches to model peripheral vision in AI represent this deteriorating detail by blurring the edges of images, but the information loss that occurs in the optic nerve and visual cortex is far more complex.

For a more accurate approach, the MIT researchers started with a technique used to model peripheral vision in humans. Known as the texture tiling model, this method transforms images to represent a human’s visual information loss.  

They modified this model so it could transform images similarly, but in a more flexible way that doesn’t require knowing in advance where the person or AI will point their eyes.

“That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.

The researchers used this modified technique to generate a huge dataset of transformed images that appear more textural in certain areas, to represent the loss of detail that occurs when a human looks further into the periphery.

Then they used the dataset to train several computer vision models and compared their performance with that of humans on an object detection task.

“We had to be very clever in how we set up the experiment so we could also test it in the machine learning models. We didn’t want to have to retrain the models on a toy task that they weren’t meant to be doing,” she says.

Peculiar performance

Humans and models were shown pairs of transformed images that were identical, except that one image had a target object located in the periphery. Then, each participant was asked to pick the image with the target object.
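
The structure of that comparison, a two-alternative forced-choice test, is straightforward to lay out in code. In the sketch below, target_score is a stand-in for whatever confidence a given vision model assigns to the target being present; it and the toy data are assumptions, not the study's evaluation code.

```python
# Rough sketch of a two-alternative forced-choice (2AFC) evaluation: the model
# sees a pair of images, only one of which contains a peripheral target, and
# must pick that image. `target_score` is a placeholder for a real detector.
import random

def target_score(image):
    # Placeholder: a real evaluation would run the trained detector here and
    # return its confidence that the target object is present in `image`.
    return image["detector_confidence"]

def run_2afc(trials):
    """trials: list of (image_with_target, image_without_target) pairs."""
    correct = 0
    for with_target, without_target in trials:
        pair = [(with_target, True), (without_target, False)]
        random.shuffle(pair)                       # randomize presentation order
        choice = max(pair, key=lambda p: target_score(p[0]))
        correct += choice[1]                       # counts 1 when the pick is correct
    return correct / len(trials)

# Toy example with fabricated confidences, just to show the bookkeeping.
trials = [({"detector_confidence": 0.8}, {"detector_confidence": 0.3}),
          ({"detector_confidence": 0.4}, {"detector_confidence": 0.6})]
print(f"2AFC accuracy: {run_2afc(trials):.2f}")    # 0.50 on this toy pair
```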

“One thing that really surprised us was how good people were at detecting objects in their periphery. We went through at least 10 different sets of images that were just too easy. We kept needing to use smaller and smaller objects,” Harrington adds.

The researchers found that training models from scratch with their dataset led to the greatest performance boosts, improving their ability to detect and recognize objects. Fine-tuning a model with their dataset, a process that involves tweaking a pretrained model so it can perform a new task, resulted in smaller performance gains.

But in every case, the machines weren’t as good as humans, and they were especially bad at detecting objects in the far periphery. Their performance also didn’t follow the same patterns as humans.

“That might suggest that the models aren’t using context in the same way as humans are to do these detection tasks. The strategy of the models might be different,” Harrington says.

The researchers plan to continue exploring these differences, with a goal of finding a model that can predict human performance in the visual periphery. This could enable AI systems that alert drivers to hazards they might not see, for instance. They also hope to inspire other researchers to conduct additional computer vision studies with their publicly available dataset.

“This work is important because it contributes to our understanding that human vision in the periphery should not be considered just impoverished vision due to limits in the number of photoreceptors we have, but rather, a representation that is optimized for us to perform tasks of real-world consequence,” says Justin Gardner, an associate professor in the Department of Psychology at Stanford University who was not involved with this work. “Moreover, the work shows that neural network models, despite their advancement in recent years, are unable to match human performance in this regard, which should lead to more AI research to learn from the neuroscience of human vision. This future research will be aided significantly by the database of images provided by the authors to mimic peripheral human vision.”

This work is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.

How sensory gamma rhythm stimulation clears amyloid in Alzheimer’s mice

Thu, 03/07/2024 - 5:40pm

Studies at MIT and elsewhere are producing mounting evidence that light flickering and sound clicking at the gamma brain rhythm frequency of 40 hertz (Hz) can reduce Alzheimer’s disease (AD) progression and treat symptoms in human volunteers as well as lab mice. In a new open-access study in Nature using a mouse model of the disease, MIT researchers reveal a key mechanism that may contribute to these beneficial effects: clearance of amyloid proteins, a hallmark of AD pathology, via the brain’s glymphatic system, a recently discovered “plumbing” network parallel to the brain’s blood vessels.

“Ever since we published our first results in 2016, people have asked me how does it work? Why 40Hz? Why not some other frequency?” says study senior author Li-Huei Tsai, Picower Professor of Neuroscience and director of The Picower Institute for Learning and Memory of MIT and MIT’s Aging Brain Initiative. “These are indeed very important questions we have worked very hard in the lab to address.”

The new paper describes a series of experiments, led by Mitch Murdock PhD '23 when he was a brain and cognitive sciences doctoral student at MIT, showing that when sensory gamma stimulation increases 40Hz power and synchrony in the brains of mice, that prompts a particular type of neuron to release peptides. The study results further suggest that those short protein signals then drive specific processes that promote increased amyloid clearance via the glymphatic system.

“We do not yet have a linear map of the exact sequence of events that occurs,” says Murdock, who was jointly supervised by Tsai and co-author and collaborator Ed Boyden, Y. Eva Tan Professor of Neurotechnology at MIT, a member of the McGovern Institute for Brain Research and an affiliate member of the Picower Institute. “But the findings in our experiments support this clearance pathway through the major glymphatic routes.”

From gamma to glymphatics

Because prior research has shown that the glymphatic system is a key conduit for brain waste clearance and may be regulated by brain rhythms, Tsai and Murdock’s team hypothesized that it might help explain the lab’s prior observations that gamma sensory stimulation reduces amyloid levels in Alzheimer’s model mice.

Working with “5XFAD” mice, which genetically model Alzheimer’s, Murdock and co-authors first replicated the lab’s prior results that 40Hz sensory stimulation increases 40Hz neuronal activity in the brain and reduces amyloid levels. Then they set out to measure whether there was any correlated change in the fluids that flow through the glymphatic system to carry away wastes. Indeed, they measured increases in cerebrospinal fluid in the brain tissue of mice treated with sensory gamma stimulation compared to untreated controls. They also measured an increase in the rate of interstitial fluid leaving the brain. Moreover, in the gamma-treated mice the team measured an increased diameter of the lymphatic vessels that drain away the fluids, as well as increased accumulation of amyloid in cervical lymph nodes, the drainage site for that flow.

To investigate how this increased fluid flow might be happening, the team focused on the aquaporin 4 (AQP4) water channel of astrocyte cells, which enables the cells to facilitate glymphatic fluid exchange. When they blocked AQP4 function with a chemical, that prevented sensory gamma stimulation from reducing amyloid levels and prevented it from improving mouse learning and memory. And when, as an added test, they used a genetic technique for disrupting AQP4, that also interfered with gamma-driven amyloid clearance.

In addition to the fluid exchange promoted by AQP4 activity in astrocytes, another mechanism by which gamma waves promote glymphatic flow is by increasing the pulsation of neighboring blood vessels. Several measurements showed stronger arterial pulsatility in mice subjected to sensory gamma stimulation compared to untreated controls.

One of the best new techniques for tracking how a condition, such as sensory gamma stimulation, affects different cell types is to sequence their RNA to track changes in how they express their genes. Using this method, Tsai and Murdock’s team saw that gamma sensory stimulation indeed promoted changes consistent with increased astrocyte AQP4 activity.

Prompted by peptides

The RNA sequencing data also revealed that upon gamma sensory stimulation a subset of neurons, called “interneurons,” experienced a notable uptick in the production of several peptides. This was not surprising in the sense that peptide release is known to be dependent on brain rhythm frequencies, but it was still notable because one peptide in particular, VIP, is associated with Alzheimer’s-fighting benefits and helps to regulate vascular cells, blood flow, and glymphatic clearance.

Seizing on this intriguing result, the team ran tests that revealed increased VIP in the brains of gamma-treated mice. The researchers also used a sensor of peptide release and observed that sensory gamma stimulation resulted in an increase in peptide release from VIP-expressing interneurons.

But did this gamma-stimulated peptide release mediate the glymphatic clearance of amyloid? To find out, the team ran another experiment: They chemically shut down the VIP neurons. When they did so, and then exposed mice to sensory gamma stimulation, they found that there was no longer an increase in arterial pulsatility and there was no more gamma-stimulated amyloid clearance.

“We think that many neuropeptides are involved,” Murdock says. Tsai added that a major new direction for the lab’s research will be determining what other peptides or other molecular factors may be driven by sensory gamma stimulation.

Tsai and Murdock add that while this paper focuses on what is likely an important mechanism — glymphatic clearance of amyloid — by which sensory gamma stimulation helps the brain, it’s probably not the only underlying mechanism that matters. The clearance effects shown in this study occurred rather rapidly, but in lab experiments and clinical studies weeks or months of chronic sensory gamma stimulation have been needed to have sustained effects on cognition.

With each new study, however, scientists learn more about how sensory stimulation of brain rhythms may help treat neurological disorders.

In addition to Tsai, Murdock, and Boyden, the paper’s other authors are Cheng-Yi Yang, Na Sun, Ping-Chieh Pao, Cristina Blanco-Duque, Martin C. Kahn, Nicolas S. Lavoie, Matheus B. Victor, Md Rezaul Islam, Fabiola Galiana, Noelle Leary, Sidney Wang, Adele Bubnys, Emily Ma, Leyla A. Akay, TaeHyun Kim, Madison Sneve, Yong Qian, Cuixin Lai, Michelle M. McCarthy, Nancy Kopell, Manolis Kellis, and Kiryl D. Piatkevich.

Support for the study came from Robert A. and Renee E. Belfer, the Halis Family Foundation, Eduardo Eurnekian, the Dolby family, Barbara J. Weedon, Henry E. Singleton, the Hubolow family, the Ko Hahn family, Carol and Gene Ludwig Family Foundation, Lester A. Gimpelson, Lawrence and Debra Hilibrand, Glenda and Donald Mattes, Kathleen and Miguel Octavio, David B. Emmes, the Marc Haas Foundation, Thomas Stocky and Avni Shah, the JPB Foundation, the Picower Institute, and the National Institutes of Health.

Three MIT alumni graduate from NASA astronaut training

Thu, 03/07/2024 - 2:40pm

“It's been a wild ride,” says Christopher Williams PhD ’12, moments after he received his astronaut pin, signifying graduation into the NASA astronaut corps.

Williams, along with Marcos Berríos ’06 and Christina “Chris” Birch PhD ’15, was among the 12-member class of astronaut candidates to graduate from basic training at NASA’s Johnson Space Center in Houston, Texas, on Tuesday, March 5.

NASA Astronaut Group 23 is the newest generation of Artemis astronauts; the class includes 10 members hailing from the United States, as well as two from the United Arab Emirates who trained alongside them.

During their more than two years of basic training, the group became proficient in such areas as spacewalking, robotics, space station systems, T-38 jets, and Russian language. The graduates also said that they asked endless questions about the functions of their spacesuit, which they wore while submerged in huge pools to practice spacewalks. They jumped into a frigid lake during a 10-day hike in Wyoming and shared the hauling of a 30-pound lava rock back to camp for more geology study, as well as the last bag of peanut M&Ms after running out of ready-to-eat meals during survival training in the Alabama back country.

“We feel ready to put our efforts and our energy into supporting NASA's science on the space station or in support of our return to the moon and this program,” says Birch. “All of the Flies feel a great sense of responsibility and excitement for what comes next.”

The team earned the nickname “The Flies” from the previous astronaut class, the “Turtles,” and even designed their team patch into a housefly shape. (Although the team prefers calling itself the Swarm, “which has a little bit more pizzazz,” says Birch.) “Traditionally, these names are usually things that do not take well to flight,” Birch adds. “We were really surprised that they gave us a flying creature. I think they have a lot of faith in us and hope that we fly soon.”

The Turtles were the first class to graduate under NASA’s Artemis program, in 2020. They included three aeronautics and astronautics alumni: Raja Chari SM ’01, Jasmin Moghbeli ’05, and Warren “Woody” Hoburg ’08. Former Whitehead Institute for Biomedical Research research fellow Kate Rubins, who was selected as a NASA astronaut in 2009 and had served as a flight engineer aboard the International Space Station, also joined the team.

After the newest graduates received their silver NASA astronaut pins, they joined the other 36 current astronauts eligible “to sit on the pointy end of a rocket” for such initiatives as assignments to the International Space Station, future commercial destinations, deep-space missions to destinations including the moon on NASA’s Orion spacecraft and Space Launch System rocket, and eventually, missions to Mars. The Artemis initiative also includes plans for the first woman and first person of color to walk on the moon.

For now, the Flies will be supporting all of these initiatives while Earthbound.

“Hopefully within the next two or three years, my name will be called to go to space,” says Berríos. For now, he will stay in Houston, where he’ll be working in the human landing system program, including with private companies such as SpaceX and Blue Origin. He’ll also continue his training in advanced robotics and Russian, and he is training in various international partner countries, working with space station modules.

Marcos Berríos

When he was selected to join the NASA astronaut program, Berríos had been serving as the commander of Detachment 1, 413th Flight Test Squadron and deputy director of the Combat Search and Rescue (CSAR) Combined Task Force. As a test pilot, he has accumulated more than 110 combat missions and 1,400 hours of flight time in more than 21 different aircraft.

Berríos calls Guaynabo, Puerto Rico, his hometown, and says he appreciated other Latino American astronauts, including Franklin R. Chang Diaz PhD ’77, serving as his role models and mentors. He hopes to do the same for others.

“Today, hopefully, marks another opportunity to open doors for others like me in the future, to recognize that the talent in the Latin American community is strong,” he said on the day of his graduation. His advice to those dreaming of being an astronaut is “to not give up, to stay curious, stay humble, be disciplined, and throughout all adversity, throughout all obstacles, that would all be worth it in the end.”

“I've always wanted to be an astronaut,” he says. He read a lot of astronaut autobiographies, and frequently Googled class 2.007 (Design and Manufacturing I), which led him to study mechanical engineering at MIT. He earned his master’s degree in mechanical engineering as well as a doctorate in aeronautics and astronautics from Stanford University, and then enrolled at the U.S. Naval Test Pilot School in Patuxent River, Maryland.

As a developmental test pilot at the CSAR Combined Test Force at Nellis Air Force Base in Nevada, he learned avionics, defensive systems, synthetic vision technologies, and electric vertical-takeoff-and-landing vehicles.

Berríos says that MIT, particularly while working with Professor Alexander Slocum, instilled within him the discipline required for his successes. “I don't want to admit how spending, like, 24 hours on problem set after problem set just provided that attitude and mentality of like, ‘Yeah, this is tough, this is hard,’ but you know we've got the skills, we've got the resources, we've got our colleagues, and we're going to figure it out … and we're going to find a pretty novel way to solve it.”

He says he found spacewalk training to be especially tough “physically, because you're in a pressurized spacesuit — it's stiff, it requires strength and stamina — but also mentally, because you have to be focused for six hours at a time and maintain high awareness of your surroundings as well as for your partner.”

The new astronaut says he identifies first as an engineer and researcher. “We're kind of a jack-of-all-trades,” he says. “One of the amazing things about being an astronaut, and certainly one of the things that was very captivating for me about this job, was all of the different subject matters that we get to touch on. I mean, it's incredible.”

Christina Birch  

An Arizona native, Birch graduated from the University of Arizona with bachelor’s degrees in mathematics, biochemistry, and molecular biophysics. As a doctoral candidate in biological engineering at MIT, she conducted original research at the intersection of synthetic biology, microfluidics, and infectious disease, and worked in the Jacquin Niles lab in the Department of Biological Engineering. “I really am grateful for [her advisor, Niles] taking me on, especially when he was starting up his lab.”

After graduation, she taught bioengineering at the University of California at Riverside, and scientific writing and communication at Caltech. But she didn’t forget the skills she gained while on the MIT cycling team; in 2018, she left academia to become a decorated track cyclist on the U.S. National Team. She was training for the 2020 Summer Olympics, while also working as a scientific consultant for startups in various technology sectors from robotics to vaccine development, when she was selected by NASA.

“I really need to give a shout out to the MIT cycling team,” she says. “They helped give me my start. It was just a fantastic place to get a taste of that cycling community, which I'm still a part of. I do still ride; I'm focused on longer-distance races, and I like to do gravel races.”

She’s also excited that the International Space Station has a bike trainer called CEVIS, and Teal CEVIS, to reduce muscle and bone loss experienced in microgravity.  

Her next role is to support the Orion program.

“Last week, I was out in San Diego supporting the underway recovery training, which is the landing and recovery team’s practice to recover crew from the Orion capsule after a simulated splashdown in the Pacific. It was just such an incredible learning opportunity for me getting up to speed on this new vehicle. We're doing the Orion 2 mission, which is really an incredible test flight.”

“The more I learn about the program, the more I see how many different elements that we are building from scratch,” she says. “What really sets NASA apart is our dedication to safety, and I know that we will fly astronauts to the moon when we're ready, and now that comes under a little bit of my purview and my responsibilities.”

How does she incorporate her backgrounds in cycling and her biological engineering research into the space program? “The common link between my pursuit of the pointy edge of the bike race, and also original research at MIT, has always been the stepping into the unknown, comfort-pushing boundaries. Whether it's getting into the T38 jet for the first time — I don't have any prior aviation experience — and standing up in front of an audience to give a scientific lecture or to make an attack on the bike, you know I've done that emotional practice.

“I think being comfortable in discomfort and the unknown, stepping through that process with a rigorous sort of like engineering-questioning, is because MIT set me up so well with a strong foundation of understanding engineering principles, and applying those to big questions. Places where we don't have full understanding of a system or how something works, and then there is spaceflight, how we are very much developing these technologies and testing them as we go. Ultimately, human lives are going to depend on asking really good questions.”

She says her biggest challenge so far has been diversifying her skill set.

“I had to make a pretty big transition when I arrived [to NASA training] because I had previously been in a mentality of trying to be the best in the world at something, be it the best in the world on the bike, or you know, being the expert in RNA aptamer malaria-targeting technologies, which is the research I was doing at MIT, and then having to switch to being both knowledgeable and skillful in a huge number of different areas that are required of an astronaut. I don't have an aviation background so that was something very new, very exciting, and very fun, it turns out. But also having to develop spacewalk skills, learning to speak Russian, learning to fly a robotic arm, and learning all about the International Space Station systems, so going from a specialist, really, to a generalist was a pretty big transition.

“One of the hardest things about astronaut training is finding balance, because we are switching between all of these different technical topics, sometimes in the span of a day. You might be in the jet in the morning and then you have to turn around and go to an emergency simulation for a space station in the afternoon. Reid Wiseman, the commander of the Artemis 2 mission, says, ‘Be where your feet are.’ And that was some of the best advice that he gave us coming into the office as candidates.”

Christopher Williams

Williams knew going into the training program that he would learn things in which he had no prior background.

“When you're flying in one of the T38 jets you're having to do, you know, back-of-the-envelope math estimating things while operating in a dynamic environment,” he recalls. “Other things, like doing an underwater run in the spacesuit, to finding alternatives when conjugating Russian verbs … learning how to approach problems and to solve them came from my time at MIT. Going through the physics grad program there made me much stronger at taking new topics and just sort of digesting them, figuring out how to break them down and solve them.”

He did end up working with many MIT alumni. “Lots of MIT people have rotated through, so I've had lots of good conversations with Kate Rubins and a bunch of folks that passed through AeroAstro [the Department of Aeronautics and Astronautics].”

Williams grew up in Potomac, Maryland, dreaming of being an astronaut. A private pilot and Eagle Scout, Williams spent much of his high school and Stanford University years at the U.S. Naval Research Laboratory in Washington, studying supernovae using the Very Large Array radio telescope, and researching supernovae at NASA's Goddard Space Flight Center.   

At MIT, he pursued his doctorate in physics with a focus on astrophysics. When he wasn’t working as a campus emergency medical technician and volunteer firefighter, Williams and his advisor, Jackie Hewitt, built the Murchison Widefield Array, a low-frequency radio telescope array in Western Australia designed to study the epoch of reionization of the early universe. 

After graduation, he joined the faculty at Harvard Medical School, and was a medical physicist in the Radiation Oncology Department at the Brigham and Women’s Hospital and Dana-Farber Cancer Institute. As the lead physicist for the institute’s MRI-guided adaptive radiation therapy program, Williams focused on developing image guidance techniques for cancer treatments.  

He will be supporting the ongoing missions until it’s his turn to head to space. In the meantime, he looks forward to using his background in medicine to research how the human body is affected by space radiation and being in orbit.

“It’s strange, because as a scientist you know you're kind of in a different role. There are physics experiments on the space station, and tons of biology and chemistry experiments. It's actually really fun because I get to stretch different parts of my brain that I haven't had to before.”

“We're really representing all of NASA, all of America all over the world,” he says. “That's a huge responsibility on us. I really want to make everybody proud.”

Encouraging the next generation of astronauts

After the graduation ceremonies ended, NASA announced that it is accepting applications for new astronaut candidates through April 2. 

Berríos advises MIT students that no matter what their background is, they should apply if they want to be an astronaut. “Try and express in words how your education, how your career, and how your hobbies relate to human space exploration. Chris [Birch] and I have very different backgrounds and combinations of skill sets … I guarantee the next class is going to have an individual from MIT that has a background that we haven't even thought of yet.”

Birch says that just interviewing for the Artemis program “absolutely changed my life. I knew that even if I didn't become an astronaut, I had met, you know, a real incredible group of people that inspired me to push further, to do more, to find another way to serve, and so I would really just encourage people to apply. A lot of people [who were accepted] applied more than once.”

Adds Williams, “If you meet the requirements, just do it. If that's your dream, tell people about it — because people will be excited for you and want to help you to achieve.”

How the brain coordinates speaking and breathing

Thu, 03/07/2024 - 2:00pm

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

The researchers found that these synaptic tracing-labeled RAm neurons were strongly activated during USVs. This observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, which they termed RAmVOC. They used chemogenetics and optogenetics to explore what would happen if they silenced or stimulated their activity. When the researchers blocked the RAmVOC neurons, the mice were no longer able to produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.

Method rapidly verifies that a robot will avoid collisions

Thu, 03/07/2024 - 12:00am

Before a robot can grab dishes off a shelf to set the table, it must ensure its gripper and arm won’t crash into anything and potentially shatter the fine china. As part of its motion planning process, a robot typically runs “safety check” algorithms that verify its trajectory is collision-free.

However, sometimes these algorithms generate false positives, claiming a trajectory is safe when the robot would actually collide with something. Other methods that can avoid false positives are typically too slow for robots in the real world.

Now, MIT researchers have developed a safety check technique which can prove with 100 percent accuracy that a robot’s trajectory will remain collision-free (assuming the model of the robot and environment is itself accurate). Their method, which is so precise it can discriminate between trajectories that differ by only millimeters, provides proof in only a few seconds.

But a user doesn’t need to take the researchers’ word for it — the mathematical proof generated by this technique can be checked quickly with relatively simple math.

The researchers accomplished this using a special algorithmic technique, called sum-of-squares programming, and adapted it to effectively solve the safety check problem. Using sum-of-squares programming enables their method to generalize to a wide range of complex motions.

This technique could be especially useful for robots that must move rapidly to avoid collisions in spaces crowded with objects, such as food preparation robots in a commercial kitchen. It is also well-suited for situations where robot collisions could cause injuries, like home health robots that care for frail patients.

“With this work, we have shown that you can solve some challenging problems with conceptually simple tools. Sum-of-squares programming is a powerful algorithmic idea, and while it doesn’t solve every problem, if you are careful in how you apply it, you can solve some pretty nontrivial problems,” says Alexandre Amice, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Amice is joined on the paper by fellow EECS graduate student Peter Werner and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The work will be presented at the International Conference on Robotics and Automation.

Certifying safety

Many existing methods that check whether a robot’s planned motion is collision-free do so by simulating the trajectory and checking every few seconds to see whether the robot hits anything. But these static safety checks can’t tell if the robot will collide with something in the intermediate seconds.

This might not be a problem for a robot wandering around an open space with few obstacles, but for robots performing intricate tasks in small spaces, a few seconds of motion can make an enormous difference.
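
A toy example makes the failure mode concrete: when a trajectory is checked only at fixed time steps, a small obstacle crossed between two samples never shows up. The geometry and step sizes below are invented for illustration.

```python
# Tiny illustration of why discrete-time safety checks can miss a collision:
# the trajectory is sampled at fixed intervals, and a thin obstacle crossed
# between two samples is never seen. All numbers and geometry are invented.
import numpy as np

def robot_position(t):
    return np.array([t, 0.0])             # robot slides along the x-axis

def hits_obstacle(p, center=np.array([0.55, 0.0]), radius=0.03):
    return np.linalg.norm(p - center) <= radius

# Check at a coarse set of sample times: no collision is ever reported...
coarse = [hits_obstacle(robot_position(t)) for t in np.arange(0.0, 1.01, 0.1)]
print("collision seen with 0.1-step sampling:", any(coarse))     # False

# ...but a finer sweep shows the robot does pass through the obstacle.
fine = [hits_obstacle(robot_position(t)) for t in np.arange(0.0, 1.001, 0.001)]
print("collision seen with 0.001-step sampling:", any(fine))     # True
```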

Conceptually, one way to prove that a robot is not headed for a collision would be to hold up a piece of paper that separates the robot from any obstacles in the environment. Mathematically, this piece of paper is called a hyperplane. Many safety check algorithms work by generating this hyperplane at a single point in time. However, each time the robot moves, a new hyperplane needs to be recomputed to perform the safety check.

Instead, this new technique generates a hyperplane function that moves with the robot, so it can prove that an entire trajectory is collision-free rather than working one hyperplane at a time.

The researchers used sum-of-squares programming, an algorithmic toolbox that can effectively turn a static problem into a function. This function is an equation that describes where the hyperplane needs to be at each point in the planned trajectory so it remains collision-free.

Sum-of-squares can generalize the optimization program to find a family of collision-free hyperplanes. Often, sum-of-squares is considered a heavy optimization that is only suitable for offline use, but the researchers have shown that for this problem it is extremely efficient and accurate.
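A rough sense of what such a time-varying hyperplane looks like is given by the Python sketch below, in which the plane’s coefficients are simple polynomials in time. The trajectory, obstacle, and coefficients are invented for illustration, and the final check is done by dense sampling rather than by the sum-of-squares certificate the researchers actually compute.

```python
import numpy as np

# Hypothetical 2D example: a robot point sweeps from (0, 0) to (0.4, 0.4)
# while an obstacle point stays fixed at (1.0, 0.0), for t in [0, 1].
def p_robot(t):
    return np.stack([0.4 * t, 0.4 * t], axis=-1)

p_obstacle = np.array([1.0, 0.0])

# Time-varying hyperplane a(t).x + b(t) = 0 with polynomial coefficients in t.
def a(t):
    return np.stack([np.ones_like(t), -0.5 * t], axis=-1)

def b(t):
    return -0.7 + 0.1 * t

# Separation check at densely sampled times: robot side negative, obstacle side
# positive. Sampling only illustrates the idea; the certificate in the paper
# proves separation for every t in the interval, not just the sampled instants.
ts = np.linspace(0.0, 1.0, 1001)
robot_side = np.sum(a(ts) * p_robot(ts), axis=-1) + b(ts)
obstacle_side = a(ts) @ p_obstacle + b(ts)
assert np.all(robot_side < 0) and np.all(obstacle_side > 0)
print("the moving hyperplane separates robot and obstacle at every sampled time")
```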

“The key here was figuring out how to apply sum-of-squares to our particular problem. The biggest challenge was coming up with the initial formulation. If I don’t want my robot to run into anything, what does that mean mathematically, and can the computer give me an answer?” Amice says.

In the end, as the name suggests, sum-of-squares produces a function that is the sum of several squared values. Such a function can never be negative, since the square of any real number is never negative.

Trust but verify

By double-checking that the hyperplane function contains squared values, a human can easily verify that the function is positive, which means the trajectory is collision-free, Amice explains.
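As a hedged illustration of that verification step, the Python sketch below takes a made-up certificate polynomial together with its claimed decomposition into squares and confirms, by simple expansion, that the two match. Because a sum of squares can never be negative, the check requires no optimization at all.

```python
import sympy as sp

t = sp.symbols("t")

# Hypothetical certificate polynomial, claimed to be nonnegative for every t.
certificate = 5*t**4 - 4*t**3 - 2*t + 2

# The prover also hands over its decomposition into squared polynomials.
squares = [t**2 - 1, 2*t**2 - t, t - 1]

# Verification only requires expanding the squares and comparing terms.
reconstructed = sp.expand(sum(s**2 for s in squares))
assert sp.simplify(reconstructed - certificate) == 0
print("certificate verified: it is a sum of squares, so it is never negative")
```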

While the method certifies with perfect accuracy, this assumes the user has an accurate model of the robot and environment; the mathematical certifier is only as good as the model.

“One really nice thing about this approach is that the proofs are really easy to interpret, so you don’t have to trust me that I coded it right because you can check it yourself,” he adds.

They tested their technique in simulation by certifying that complex motion plans for robots with one and two arms were collision-free. At its slowest, their method took just a few hundred milliseconds to generate a proof, making it much faster than some alternate techniques.

“This new result suggests a novel approach to certifying that a complex trajectory of a robot manipulator is collision free, elegantly harnessing tools from mathematical optimization, turned into surprisingly fast (and publicly available) software. While not yet providing a complete solution to fast trajectory planning in cluttered environments, this result opens the door to several intriguing directions of further research,” says Dan Halperin, a professor of computer science at Tel Aviv University, who was not involved with this research.

While their approach is fast enough to be used as a final safety check in some real-world situations, it is still too slow to be implemented directly in a robot motion planning loop, where decisions need to be made in microseconds, Amice says.

The researchers plan to accelerate their process by ignoring situations that don’t require safety checks, like when the robot is far away from any objects it might collide with. They also want to experiment with specialized optimization solvers that could run faster.

“Robots often get into trouble by scraping obstacles due to poor approximations that are made when generating their routes. Amice, Werner, and Tedrake have come to the rescue with a powerful new algorithm to quickly ensure that robots never overstep their bounds, by carefully leveraging advanced methods from computational algebraic geometry,” adds Steven LaValle, a professor in the Faculty of Information Technology and Electrical Engineering at the University of Oulu in Finland, who was not involved with this work.

This work was supported, in part, by Amazon and the U.S. Air Force Research Laboratory.

Deciphering the cellular mechanisms behind ALS

Wed, 03/06/2024 - 4:00pm

At a time in which scientific research is increasingly cross-disciplinary, Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering, stands out as both a very early adopter of drawing from different scientific fields and a great advocate of the practice today.

When Fraenkel’s students find themselves at an impasse in their work, he suggests they approach their problem from a different angle or look for inspiration in a completely unrelated field.

“I think the thing that I always come back to is try going around it from the side,” Fraenkel says. “Everyone in the field is working in exactly the same way. Maybe you’ll come up with a solution by doing something different.”

Fraenkel’s work untangling the often-complicated mechanisms of disease to develop targeted therapies employs methods from the world of computer science, including algorithms that bring focus to processes most likely to be relevant. Using such methods, he has decoded fundamental aspects of Huntington’s disease and glioblastoma, and he and his collaborators are working to understand the mechanisms behind amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease.

Very early on, Fraenkel was exposed to a merging of scientific disciplines. One of his teachers in high school, who was a student at Columbia University, started a program in which chemistry, physics, and biology were taught together. The teacher encouraged Fraenkel to visit a lab at Columbia run by Cyrus Levinthal, a physicist who taught one of the first biophysics classes at MIT. Fraenkel not only worked at the lab for a summer, he left high school (later earning an equivalency diploma) and started working at the lab full time and taking classes at Columbia.

“Here was a lab that was studying really important questions in biology, but the head of it had trained in physics,” Fraenkel says. “The idea that you could get really important insights by cross-fertilization, that’s something that I’ve always really appreciated. And now, we can see how this approach can impact how people are being treated for diseases or reveal really important fundamentals of science.”

Breaking barriers

At MIT, Fraenkel works in the Department of Biological Engineering and co-directs the Computational Systems Biology graduate program. For the study of ALS, he and his collaborators at Massachusetts General Hospital (MGH), including neurologist and neuroscientist Merit Cudkowicz, were recently awarded $1.25 million each from the nonprofit EverythingALS organization. The strategy behind the gift, Fraenkel says, is to encourage MIT and MGH to increase their collaboration, eventually enlisting other organizations as well, to form a hub for ALS research “to break down barriers in the field and really focus on the core problems.”

Fraenkel has been working with EverythingALS and their data scientists in collaboration with doctors James Berry of MGH and Lyle Ostrow of Temple University. He also works extensively with the nonprofit Answer ALS, a consortium of scientists studying the disease.

Fraenkel first got interested in ALS and other neurodegenerative diseases because traditional molecular biology research had not yielded effective therapies or, in the case of ALS, much insight into the disease’s causes.

“I was interested in places where the traditional approaches of molecular biology” — in which researchers hypothesize that a certain protein or gene or pathway is key to understanding a disease — “were not having a lot of luck or impact,” Fraenkel says. “Those are the places where if you come at it from another direction, the field could really advance.”

Fraenkel says that while traditional molecular biology has produced many valuable discoveries, it’s not very systematic. “If you start with the wrong hypothesis, you’re not going to get very far,” he says.

Systems biology, on the other hand, measures many cellular changes — including gene transcription, protein-DNA interactions, the levels of thousands of chemical compounds, and protein modifications — and can apply artificial intelligence and machine learning to those measurements to collectively identify the most important interactions.

“The goal of systems biology is to systematically measure as many cellular changes as possible, integrate this data, and let the data guide you to the most promising hypotheses,” Fraenkel says.
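The flavor of that data-guided approach can be sketched in a few lines of Python. The example below uses entirely synthetic measurements and a standard machine-learning model from scikit-learn to rank features by how well they distinguish two groups; it is meant only as an illustration of letting the data point toward hypotheses, not as a description of Fraenkel’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for omics data: 200 samples (patients and controls)
# by 500 measured features (for example, transcript or protein levels).
n_samples, n_features = 200, 500
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)

# Plant a weak signal in a handful of features for the "patient" group,
# mimicking disease-associated molecular changes.
informative = [3, 42, 117]
X[np.ix_(y == 1, informative)] += 1.0

# Let the data suggest hypotheses: rank features by importance to the classifier.
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top-ranked features:", top)  # the planted features should dominate this list
```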

The Answer ALS project, with which Fraenkel works, involves approximately a thousand people with ALS who provided clinical information about their disease as well as blood cells. Their blood cells were reprogrammed to be pluripotent stem cells, meaning that the cells could be used to grow neurons that are studied and compared to neurons from a control group.

Emotional connection

While Fraenkel was intellectually inspired to apply systems biology to the challenging problem of understanding ALS — there is no known cause or cure for 80 to 90 percent of people with ALS — he also felt a strong emotional connection to the community of people with ALS and their advocates.

He tells a story of going to meet the director of an ALS organization in Israel who was trying to encourage scientists to work on the disease. Fraenkel knew the man had ALS. What he didn’t know before arriving at the meeting was that the director was immobilized, lying in a hospital bed in his living room and able to communicate only with eye-blinking software.

“I sat down so we could both see the screen he was using to type characters out,” Fraenkel says, “and we had this fascinating conversation.”

“Here was a young guy in the prime of life, suffering in a way that’s unimaginable. At the same time, he was doing something amazing, running this organization to try to make a change. And he wasn’t the only one,” he says. “You meet one, and then another and then another — people who are sometimes on their last breaths and are still pushing to make a difference and cure the disease.”

The gift from EverythingALS — which was founded by Indu Navar after losing her husband, Peter Cohen, to ALS and later merged with CureALS, founded by Bill Nuti, who is living with ALS — aims to research the root causes of the disease, in the hope of finding therapies to stop its progression, and natural healing processes that could possibly restore function of damaged nerves.

To achieve those goals, Fraenkel says it is crucial to measure molecular changes in the cells of people with ALS and also to quantify the symptoms of ALS, which presents very differently from person to person. Fraenkel refers to how understanding the differences in various types of cancer has led to much better treatments, pointing out that ALS is nowhere near as well categorized or understood.

“The subtyping is really going to be what the field needs,” he says. “The prognosis for more than 80 percent of people with ALS is not appreciably different than it would have been 20, or maybe even 100, years ago.”

In the same way that Fraenkel was fascinated as a high school student by doing biology in a physicist’s lab, he says he loves that at MIT, different disciplines work together easily.

“You reach out to MIT colleagues in other departments, and they’re not surprised to hear from someone who’s not in their field,” Fraenkel says. “We’re a goal-oriented institution that focuses on solving hard problems.”

A noninvasive treatment for “chemo brain”

Wed, 03/06/2024 - 2:00pm

Patients undergoing chemotherapy often experience cognitive effects such as memory impairment and difficulty concentrating — a condition commonly known as “chemo brain.”

MIT researchers have now shown that a noninvasive treatment that stimulates gamma frequency brain waves may hold promise for treating chemo brain. In a study of mice, they found that daily exposure to light and sound with a frequency of 40 hertz protected brain cells from chemotherapy-induced damage. The treatment also helped to prevent memory loss and impairment of other cognitive functions.

This treatment, which was originally developed as a way to treat Alzheimer’s disease, appears to have widespread effects that could help with a variety of neurological disorders, the researchers say.

“The treatment can reduce DNA damage, reduce inflammation, and increase the number of oligodendrocytes, which are the cells that produce myelin surrounding the axons,” says Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences. “We also found that this treatment improved learning and memory, and enhanced executive function in the animals.”

Tsai is the senior author of the new study, which appears today in Science Translational Medicine. The paper’s lead author is TaeHyun Kim, an MIT postdoc.

Protective brain waves

Several years ago, Tsai and her colleagues began exploring the use of light flickering at 40 hertz (cycles per second) as a way to improve the cognitive symptoms of Alzheimer’s disease. Previous work had suggested that Alzheimer’s patients have impaired gamma oscillations — brain waves that range from 25 to 80 hertz and are believed to contribute to brain functions such as attention, perception, and memory.

Tsai’s studies in mice have found that exposure to light flickering at 40 hertz or sounds with a pitch of 40 hertz can stimulate gamma waves in the brain, which has many protective effects, including preventing the formation of amyloid beta plaques. Using light and sound together provides even more significant protection. The treatment also appears promising in humans: Phase 1 clinical trials in people with early-stage Alzheimer’s disease have found the treatment is safe and does offer some neurological and behavioral benefits.
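For readers curious what the stimulus itself looks like, the short Python sketch below constructs one second of a 40-hertz tone and a matching 40-hertz light flicker schedule. The sample rate and duty cycle are arbitrary illustrative choices, not parameters taken from the studies.

```python
import numpy as np

fs = 44100          # audio sample rate in Hz, an illustrative choice
duration = 1.0      # seconds of stimulus
f_gamma = 40.0      # gamma-band stimulation frequency in Hz

t = np.arange(int(fs * duration)) / fs

# 40 Hz tone: a sine wave whose pitch sits at the gamma frequency.
tone = np.sin(2 * np.pi * f_gamma * t)

# 40 Hz light flicker: the light toggles on and off 40 times per second
# (a 50 percent duty cycle here, purely for illustration).
flicker = (np.floor(2 * f_gamma * t) % 2 == 0).astype(float)

print(f"{tone.size} audio samples, light on for {flicker.mean():.0%} of the second")
```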

In the new study, the researchers set out to see whether this treatment could also counteract the cognitive effects of chemotherapy treatment. Research has shown that these drugs can induce inflammation in the brain, as well as other detrimental effects such as loss of white matter — the networks of nerve fibers that help different parts of the brain communicate with each other. Chemotherapy drugs also promote loss of myelin, the protective fatty coating that allows neurons to propagate electrical signals. Many of these effects are also seen in the brains of people with Alzheimer’s.

“Chemo brain caught our attention because it is extremely common, and there is quite a lot of research on what the brain is like following chemotherapy treatment,” Tsai says. “From our previous work, we know that this gamma sensory stimulation has anti-inflammatory effects, so we decided to use the chemo brain model to test whether sensory gamma stimulation can be beneficial.”

As an experimental model, the researchers used mice that were given cisplatin, a chemotherapy drug often used to treat testicular, ovarian, and other cancers. The mice were given cisplatin for five days, then taken off of it for five days, then on again for five days. One group received chemotherapy only, while another group was also given 40-hertz light and sound therapy every day.

After three weeks, mice that received cisplatin but not gamma therapy showed many of the expected effects of chemotherapy: brain volume shrinkage, DNA damage, demyelination, and inflammation. These mice also had reduced populations of oligodendrocytes, the brain cells responsible for producing myelin.

However, mice that received gamma therapy along with cisplatin treatment showed significant reductions in all of those symptoms. The gamma therapy also had beneficial effects on behavior: Mice that received the therapy performed much better on tests designed to measure memory and executive function.

“A fundamental mechanism”

Using single-cell RNA sequencing, the researchers analyzed the gene expression changes that occurred in mice that received the gamma treatment. They found that in those mice, inflammation-linked genes and genes that trigger cell death were suppressed, especially in oligodendrocytes, the cells responsible for producing myelin.

In mice that received gamma treatment along with cisplatin, some of the beneficial effects could still be seen up to four months later. However, the gamma treatment was much less effective if it was started three months after the chemotherapy ended.

The researchers also showed that the gamma treatment improved the signs of chemo brain in mice that received a different chemotherapy drug, methotrexate, which is used to treat breast, lung, and other types of cancer.

“I think this is a very fundamental mechanism to improve myelination and to promote the integrity of oligodendrocytes. It seems that it’s not specific to the agent that induces demyelination, be it chemotherapy or another source of demyelination,” Tsai says.

Because of its widespread effects, Tsai’s lab is also testing gamma treatment in mouse models of other neurological diseases, including Parkinson’s disease and multiple sclerosis. Cognito Therapeutics, a company founded by Tsai and MIT Professor Edward Boyden, has finished a phase 2 trial of gamma therapy in Alzheimer’s patients, and plans to begin a phase 3 trial this year.

“My lab’s major focus now, in terms of clinical application, is Alzheimer’s; but hopefully we can test this approach for a few other indications, too,” Tsai says.

The research was funded by the JPB Foundation, the Ko Hahn Seed Fund, and the National Institutes of Health.

MIT scientists use a new type of nanoparticle to make vaccines more powerful

Wed, 03/06/2024 - 2:00pm

Many vaccines, including vaccines for hepatitis B and whooping cough, consist of fragments of viral or bacterial proteins. These vaccines often include other molecules called adjuvants, which help to boost the immune system’s response to the protein.

Most of these adjuvants consist of aluminum salts or other molecules that provoke a nonspecific immune response. A team of MIT researchers has now shown that a type of nanoparticle called a metal-organic framework (MOF) can also provoke a strong immune response, by activating the innate immune system — the body’s first line of defense against any pathogen — through cell proteins called toll-like receptors.

In a study of mice, the researchers showed that this MOF could successfully encapsulate and deliver part of the SARS-CoV-2 spike protein, while also acting as an adjuvant once the MOF is broken down inside cells.

While more work would be needed to adapt these particles for use as vaccines, the study demonstrates that this type of structure can be useful for generating a strong immune response, the researchers say.

“Understanding how the drug delivery vehicle can enhance an adjuvant immune response is something that could be very helpful in designing new vaccines,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research and one of the senior authors of the new study.

Robert Langer, an MIT Institute Professor and member of the Koch Institute, and Dan Barouch, director of the Center for Virology and Vaccine Research at Beth Israel Deaconess Medical Center and a professor at Harvard Medical School, are also senior authors of the paper, which appears today in Science Advances. The paper’s lead author is former MIT postdoc and Ibn Khaldun Fellow Shahad Alsaiari.

Immune activation

In this study, the researchers focused on a MOF called ZIF-8, which consists of a lattice of tetrahedral units made up of a zinc ion attached to four molecules of imidazole, an organic compound. Previous work has shown that ZIF-8 can significantly boost immune responses, but it wasn’t known exactly how this particle activates the immune system.

To try to figure that out, the MIT team created an experimental vaccine consisting of the receptor-binding domain (RBD) of the SARS-CoV-2 spike protein embedded within ZIF-8 particles. These particles are between 100 and 200 nanometers in diameter, a size that allows them to get into the body’s lymph nodes directly or through immune cells such as macrophages.

Once the particles enter the cells, the MOFs are broken down, releasing the viral proteins. The researchers found that the imidazole components then activate toll-like receptors (TLRs), which help to stimulate the innate immune response.

“This process is analogous to establishing a covert operative team at the molecular level to transport essential elements of the Covid-19 virus to the body’s immune system, where they can activate specific immune responses to boost vaccine efficacy,” Alsaiari says.

RNA sequencing of cells from the lymph nodes showed that mice vaccinated with ZIF-8 particles carrying the viral protein strongly activated a TLR pathway known as TLR-7, which led to greater production of cytokines and other molecules involved in inflammation.

Mice vaccinated with these particles generated a much stronger response to the viral protein than mice that received the protein on its own.

“Not only are we delivering the protein in a more controlled way through a nanoparticle, but the compositional structure of this particle is also acting as an adjuvant,” Jaklenec says. “We were able to achieve very specific responses to the Covid protein, and with a dose-sparing effect compared to using the protein by itself to vaccinate.”

Vaccine access

While this study and others have demonstrated ZIF-8’s immunogenic ability, more work needs to be done to evaluate the particles’ safety and potential to be scaled up for large-scale manufacturing. If ZIF-8 is not developed as a vaccine carrier, the findings from the study should help to guide researchers in developing similar nanoparticles that could be used to deliver subunit vaccines, Jaklenec says.

“Most subunit vaccines usually have two separate components: an antigen and an adjuvant,” Jaklenec says. “Designing new vaccines that utilize nanoparticles with specific chemical moieties, which not only aid in antigen delivery but can also activate particular immune pathways, has the potential to enhance vaccine potency.”

One advantage to developing a subunit vaccine for Covid-19 is that such vaccines are usually easier and cheaper to manufacture than mRNA vaccines, which could make it easier to distribute them around the world, the researchers say.

“Subunit vaccines have been around for a long time, and they tend to be cheaper to produce, so that opens up more access to vaccines, especially in times of pandemic,” Jaklenec says.

The research was funded by Ibn Khaldun Fellowships for Saudi Arabian Women and in part by the Koch Institute Support (core) Grant from the U.S. National Cancer Institute.
