MIT Latest News
Throughout the animal kingdom, males and females frequently exhibit sexual dimorphism: differences in characteristic traits that often make it easy to tell them apart. In mammals, one of the most common sex-biased traits is size, with males typically being larger than females. This is true in humans: Men are, on average, taller than women. However, biological differences between males and females aren’t limited to physical traits like height. They’re also common in disease. For example, women are much more likely to develop autoimmune diseases, while men are more likely to develop cardiovascular diseases.
In spite of the widespread nature of these sex biases, and their significant implications for medical research and treatment, little is known about the underlying biology that causes sex differences in characteristic traits or disease. In order to address this gap in understanding, Whitehead Institute Director David Page has transformed the focus of his lab in recent years from studying the X and Y sex chromosomes to working to understand the broader biology of sex differences throughout the body. In a paper published in Science, Page, a professor of biology at MIT and a Howard Hughes Medical Institute investigator; Sahin Naqvi, first author and former MIT graduate student (now a postdoc at Stanford University); and colleagues present the results of a wide-ranging investigation into sex biases in gene expression, revealing differences in the levels at which particular genes are expressed in males versus females.
The researchers’ findings span 12 tissue types in five species of mammals, including humans, and led to the discovery that a combination of sex-biased genes accounts for approximately 12 percent of the average height difference between men and women. This finding demonstrates a functional role for sex-biased gene expression in contributing to sex differences. The researchers also found that the majority of sex biases in gene expression are not shared between mammalian species, suggesting that — in some cases — sex-biased gene expression that can contribute to disease may differ between humans and the animals used as models in medical research.
Having the same gene expressed at different levels in each sex is one way to perpetuate sex differences in traits in spite of the genetic similarity of males and females within a species — since with the exception of the 46th chromosome (the Y in males or the second X in females), the sexes share the same pool of genes. For example, if a tall parent passes on a gene associated with an increase in height to both a son and a daughter, but the gene has male-biased expression, then that gene will be more highly expressed in the son, and so may contribute more height to the son than the daughter.
The researchers searched for sex-biased genes in tissues across the body in humans, macaques, mice, rats, and dogs, and they found hundreds of examples in every tissue. They used height for their first demonstration of the contribution of sex-biased gene expression to sex differences in traits because height is an easy-to-measure and heavily studied trait in quantitative genetics.
“Discovering contributions of sex-biased gene expression to height is exciting because identifying the determinants of height is a classic, century-old problem, and yet by looking at sex differences in this new way we were able to provide new insights,” Page says. “My hope is that we and other researchers can repeat this model to similarly gain new insights into diseases that show sex bias.”
Because height is so well studied, the researchers had access to public data on the identity of hundreds of genes that affect height. Naqvi decided to see how many of those height genes appeared in the researchers’ new dataset of sex-biased genes, and whether the genes’ sex biases corresponded to the expected effects on height. He found that sex-biased gene expression contributed approximately 1.6 centimeters to the average height difference between men and women, or 12 percent of the overall observed difference.
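The arithmetic behind the 12 percent figure can be checked directly. A minimal sketch, assuming an overall average height gap of about 13.3 centimeters (the value implied by the 1.6 cm and 12 percent figures reported above, not a number stated in the article):

```python
# Back-of-the-envelope check of the height figures reported above.
sex_biased_contribution_cm = 1.6   # contribution attributed to sex-biased expression
average_height_gap_cm = 13.3       # assumed overall male-female average difference

fraction = sex_biased_contribution_cm / average_height_gap_cm
print(f"{fraction:.0%}")  # → 12%
```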
The scope of the researchers’ findings goes beyond height, however. Their database contains thousands of sex-biased genes. Slightly less than a quarter of the sex-biased genes that they catalogued appear to have evolved that sex bias in an early mammalian ancestor, and to have maintained that sex bias today in at least four of the five species studied. The majority of the genes appear to have evolved their sex biases more recently, and are specific to either one species or a certain lineage, such as rodents or primates.
Whether or not a sex-biased gene is shared across species is a particularly important consideration for medical and pharmaceutical research using animal models. For example, previous research identified certain genetic variants that increase the risk of Type 2 diabetes specifically in women; however, the same variants increase the risk of Type 2 diabetes indiscriminately in male and female mice. Therefore, mice would not be a good model to study the genetic basis of this sex difference in humans. Even when the animal appears to have the same sex difference in disease as humans, the specific sex-biased genes involved might be different. Based on their finding that most sex bias is not shared between species, Page and colleagues urge researchers to use caution when picking an animal model to study sex differences at the level of gene expression.
“We’re not saying to avoid animal models in sex-differences research, only not to take for granted that the sex-biased gene expression behind a trait or disease observed in an animal will be the same as that in humans. Now that researchers have species and tissue-specific data available to them, we hope they will use it to inform their interpretation of results from animal models,” Naqvi says.
The researchers have also begun to explore what exactly causes sex-biased expression of genes not found on the sex chromosomes. Naqvi discovered a mechanism by which sex-biased expression may be enabled: through sex-biased transcription factors, proteins that help to regulate gene expression. Transcription factors bind to specific DNA sequences called motifs, and he found that certain sex-biased genes had the motif for a sex-biased transcription factor in their promoter regions, the sections of DNA that turn on gene expression. This means that, for example, a male-biased transcription factor was selectively binding to the promoter region for, and so increasing the expression of, male-biased genes — and likewise for female-biased transcription factors and female-biased genes. The question of what regulates the transcription factors remains for further study — but all sex differences are ultimately controlled by either the sex chromosomes or sex hormones.
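The motif-matching idea described above can be illustrated with a toy example. This is a hedged sketch, not the paper's actual analysis pipeline: the motif sequence, gene names, and promoter sequences below are all invented for illustration.

```python
# Illustrative sketch (not the study's method): flag genes whose promoter
# contains a hypothetical binding motif for a sex-biased transcription factor.
import re

MOTIF = re.compile("TGACGTCA")  # hypothetical consensus motif, not a real one

promoters = {
    "geneA": "AATTGACGTCAGGC",  # contains the motif
    "geneB": "CCGGTTAACCGGTT",  # does not
}

# Genes carrying the motif are candidates for regulation by that factor.
candidates = [gene for gene, seq in promoters.items() if MOTIF.search(seq)]
print(candidates)  # → ['geneA']
```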
The researchers see the collective findings of this paper as a foundation for future sex-differences research.
“We’re beginning to build the infrastructure for a systematic understanding of sex biases throughout the body,” Page says. “We hope these datasets are used for further research, and we hope this work gives people a greater appreciation of the need for, and value of, research into the molecular differences in male and female biology.”
This work was supported by Biogen, Whitehead Institute, National Institutes of Health, Howard Hughes Medical Institute, and generous gifts from Brit and Alexander d’Arbeloff and Arthur W. and Carol Tobin Brill.
What makes a great faculty mentor? Appreciative graduate students from across the Institute have thoughts — lots of them.
In letters of nomination to the Committed to Caring (C2C) program over the past five years, students have lauded faculty who validate them, who encourage work-life balance, and who foster an inclusive work environment, among other caring actions. Professors Eytan Modiano, Erin Kelly, and Ju Li especially excel at advocating for students, sharing behind-the-scenes information, and demonstrating empathy.
The pool of C2C honorees is still expanding, along with a growing catalog of supportive actions known as Mentoring Guideposts. A new selection round has just begun, and the C2C program invites all graduate students to nominate professors for their outstanding mentorship by July 26.
Eytan Modiano: listening and advocating
Eytan Modiano is professor of aeronautics and astronautics and the associate director of the Laboratory for Information and Decision Systems (LIDS). His work addresses communication networks and protocols with application to satellite, wireless, and optical networks. The primary goal of his research is the design of network architectures that are cost-effective, scalable, and robust. His research group crosses disciplinary boundaries by combining techniques from network optimization; queueing theory; graph theory; network protocols and algorithms; machine learning; and physical layer communications.
When students reach out to Modiano for advice, he makes time in his schedule to meet with them, usually the same day or the next. In doing so, students say that Modiano offers invaluable support and shows students that he prioritizes them.
Modiano provides his students with channels to express their difficulties (a Mentoring Guidepost identified by the C2C program). For example, he allots unstructured time during individual and group meetings for student feedback. “These weekly meetings are mainly focused on research,” Modiano says, “but I always make sure to leave time at the end to talk about anything else that is on a student's mind, such as concerns about their career plans, coursework, or anything else.”
He also reaches out to student groups about how the department and lab could better serve them. As associate director of LIDS, Modiano has responded to such feedback in a number of ways, including working alongside the LIDS Social Committee to organize graduate student events. He has advocated for funding of MIT Graduate Women in Aerospace Engineering, and was a key proponent of the Exploring Aerospace Day, an event the group hosted for interested high school students.
Modiano does not think in binary terms about success and failure: “No single event, or even a series of events, is likely to define a career.” Rather, a career should be seen as a path “with ups and downs and whose trajectory we try to shape.”
Modiano advises, “If you persist, you are likely to find a path that you are happy with, and meet your goals.”
Erin Kelly: sustainably moving forward
In her students’ estimation, Erin Kelly, the Sloan Distinguished Professor of Work and Organization Studies, rises to the level of exceptional mentorship by channeling her expertise in work and organization studies to the benefit of her advisees.
Kelly investigates the implications of workplace policies and management strategies for workers, firms, and families; previous research has examined scheduling and work-family supports, family leaves, harassment policies, and diversity initiatives. As part of the Work, Family, and Health Network, she has evaluated innovative approaches to work redesign with group-randomized trials in professional/technical and health care workforces. Her book with Phyllis Moen, "Overload: How Good Jobs Went Bad and What to Do About It," will be published by Princeton University Press in early 2020.
In Kelly’s words, she tries to “promote working in ways that feel sane and sustainable.” She does not count how many hours her students spend on projects or pay attention to where they work or how quickly they respond to emails. Kelly says that she knows her students are committed to this effort long-term, and that everyone works differently.
One student nominator noted that Kelly was extremely supportive of her decision to have a child during graduate school, offering her advice about how to balance work and home as well as how to transition back into school after maternity leave. The nominator notes, “Erin does not view the baby as an impediment to my professional career.”
In addition to providing advice on course selection and dissertation planning, Kelly offers her students “informal” advising (a Mentoring Guidepost) that goes beyond the usual academic parameters. Kelly “explained to me the importance of networking in finding an academic job,” another student says. “I’ve appreciated this informal mentoring, particularly because I am a woman trying to enter a male-dominated field; understanding how to succeed professionally is important, but is not always obvious.”
Ju Li: a proven mentor and friend
Ju Li is the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering at MIT. Li’s research focuses on mechanical properties of materials, and energy storage and conversion. His lab also studies the effects of radiation and aggressive environments on microstructure and materials properties.
Li shows empathy for students’ experiences (a Mentoring Guidepost identified by the C2C program). One student remarked that when they were not confident in their own abilities, Li was “extremely patient” and showed faith in their work. Li “lifted me up with his encouraging words and shared his own experiences and even struggles.”
He concerns himself both with training academic researchers and with preparing students for life after MIT, whether their paths lead them to academic, industry, governmental, or entrepreneurial endeavors. Li’s attention to his students and their aims does not go unnoticed. One C2C nominator says that former group members often come back to visit and to seek advice from Li whenever possible, “and nobody regrets being a member of our group.”
It is clear from their letters of nomination that Li’s students deeply admire his character and hold him up as a lifelong role model. In addition to his caring actions, they cite his humility and his treatment of students as “equals and true friends.”
Just as Li’s students admire him, Li was inspired by his own graduate mentor, Sidney Yip, professor emeritus of nuclear science and engineering, and materials science and engineering at MIT. Li says that Yip taught everyone who encountered him to become better researchers and better people. In graduate school, Li says, “I benefited so much by watching how Sid managed his group, and how he interacted with the world … I felt lucky every day.”
More on Committed to Caring (C2C)
The Committed to Caring (C2C) program, an initiative of the Office of Graduate Education, honors faculty members from across the Institute for their outstanding support of graduate students. By sharing the stories of great mentors, like professors Modiano, Kelly, and Li, the C2C Program hopes to encourage exceptional mentorship at MIT.
Selection criteria for the award include the scope and reach of advisor impact on the experience of graduate students, excellence in scholarship, and demonstrated commitment to diversity and inclusion.
Nominations for the next round of honorees must be submitted by July 26. Selections will be announced in late September.
Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today.
MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper Labs) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today.
MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated.
Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.
A new age in electronics
The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways.
The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning. The 1-cubic-foot computer itself was the first significant use of silicon integrated circuit chips, and it greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product.
In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, with a given wire passing outside the donut to represent a zero, or through the hole for a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible.
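The read-only nature of rope memory can be modeled in a few lines. This is a toy sketch of the encoding idea only, a deliberate simplification that ignores the AGC's actual sense/inhibit-wire electronics and word format:

```python
# Toy model of core-rope read-only memory: each word is fixed at manufacture
# by whether its sense wire threads through a core (1) or bypasses it (0).
def weave_word(bits):
    """'Manufacture' one word: the bit pattern is frozen into the wiring."""
    return tuple(bits)  # immutable, like the physical rope

def read_word(rope, address):
    """Reading is the only operation; changing a bit means weaving a new rope."""
    word = rope[address]
    return int("".join(str(b) for b in word), 2)

rope = [weave_word([1, 0, 1, 1]), weave_word([0, 1, 1, 0])]
print(read_word(rope, 0))  # → 11 (binary 1011)
```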
As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System), which is taught every other year.
As sophisticated as the computer was for its time, computer users today would barely recognize it as such. Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.
MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said.
Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls.
The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.
MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples.
Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well.
Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different than those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says.
Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says.
Sorting out the facts
In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. “There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says.
For example, many accounts have described the sequence of alarms that came from the guidance computer during the last four minutes of the lunar descent, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot.
But Eyles says the actual cause was a much more complex sequence of events, including a crucial mismatch between two circuits that would only occur in rare circumstances and thus would have been hard to detect in testing, and a probably last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. He thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.
Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says.
In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.”
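The verb-noun idea can be sketched in a few lines. This is a hedged illustration of the interface concept only; the verb and noun codes below are invented for the example and are not actual AGC assignments:

```python
# Sketch of a DSKY-style verb/noun interface: a "verb" names an action,
# a "noun" names the data it acts on. Codes here are hypothetical.
nouns = {
    36: ("mission clock", "102:45:40"),
    44: ("orbital parameters", "60.1 x 169.4 nmi"),
}

def execute(verb, noun):
    label, value = nouns[noun]
    if verb == 16:  # hypothetical "display" verb
        return f"{label}: {value}"
    raise ValueError(f"verb {verb:02d} not implemented in this sketch")

print(execute(16, 36))  # → mission clock: 102:45:40
```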
Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”
Behind the scenes
At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper Laboratory, the Instrumentation Lab’s successor, which spun off from MIT in 1973. A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end.
Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid 2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads.
Many others who played major roles in that software and hardware development have also had their roles little recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent.
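The priority-driven executive described above can be sketched with an ordinary priority queue. A minimal illustration of the scheduling idea, not Laning's actual implementation; the job names and priority values are invented:

```python
# Minimal sketch of priority-driven scheduling: when work piles up, the
# executive always runs the highest-priority job first, so a critical task
# like thruster control is never starved by lower-priority work.
import heapq

class Executive:
    def __init__(self):
        self._jobs = []  # min-heap; lower number = higher priority

    def schedule(self, priority, name):
        heapq.heappush(self._jobs, (priority, name))

    def run_next(self):
        _, name = heapq.heappop(self._jobs)
        return name

exec_ = Executive()
exec_.schedule(30, "update display")
exec_.schedule(10, "control thrusters")
exec_.schedule(20, "navigation update")
print(exec_.run_next())  # → control thrusters
```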
While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.
Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”
As cancer cells progress, they accumulate hundreds and even thousands of genetic and epigenetic changes, resulting in protein expression profiles that are radically different from those of healthy cells. But despite their heavily mutated proteome, cancer cells can evade recognition and attack by the immune system.
Immunotherapies, particularly checkpoint inhibitors that reinvigorate exhausted T cells, have revolutionized the treatment of certain forms of cancer. These breakthrough therapies have resulted in unprecedented response rates for some patients. Unfortunately, most cancers fail to respond to immunotherapies, and new strategies are therefore needed to realize their full potential.
A team of cancer biologists including members of the laboratories of David H. Koch Professor of Biology Tyler Jacks, director of the Koch Institute for Integrative Cancer Research at MIT, and fellow Koch Institute member Forest White, the Ned C. and Janet Bemis Rice Professor and member of the MIT Center for Precision Cancer Medicine, took a complementary approach to boosting the immune system.
Although cancer cells are rife with mutant proteins, few of those proteins appear on a cell’s surface, where they can be recognized by immune cells. The researchers repurposed a well-studied class of anti-cancer drugs, heat shock protein 90 (HSP90) inhibitors, that make cancer cells easier to recognize by revealing their mutant proteomes.
Many HSP90 inhibitors have been studied extensively for the past several decades as potential cancer treatments. HSP90 protects the folded structure of a number of proteins when cells undergo stress, and in cancer cells plays an important role in stabilizing protein structure undermined by pervasive mutations. However, despite promising preclinical evidence, HSP90 inhibitors have produced discouraging outcomes in clinical trials, and none have achieved FDA approval.
In a study appearing in Clinical Cancer Research, the researchers identified a potential reason behind those disappointing results. HSP90 inhibitors have only been clinically tested at bolus doses — intermittent, large doses — that often result in unwanted side effects in patients.
RNA profiling of human clinical samples and cultured cancer cell lines revealed that this bolus-dosing schedule results in the profound suppression of immune activity as well as the activation of heat shock factor 1 protein (HSF1). Not only does HSF1 activate the cell’s heat shock response, which counteracts the effect of the HSP90 inhibitor, but it is known to be a powerful enabler of cancer cell malignancy.
In striking contrast, the researchers used cancer mouse models with intact immune systems to show that sustained, low-level dosing of HSP90 inhibitors avoids triggering both the heat shock response and the immunosuppression associated with high doses.
Using a method devised by the White lab that combines mass spectrometry-based proteomics and computational modeling, the researchers discovered that the new dosing regimen increased the number and diversity of peptides (protein fragments) on the cell surface. These peptides, which the team found to be released by HSP90 during sustained low-level inhibition, were then free to be taken up by the cell’s antigen-presenting machinery and used to flag patrolling immune cells.
“These results connect a fundamental aspect of cell biology — protein folding — to anti-tumor immune responses,” says lead author Alex Jaeger, a postdoctoral fellow in the Jacks lab and a former member of the laboratory of the late MIT biologist and Professor Susan Lindquist, whose work inspired the study’s HSP90 dosing schedule. “Hopefully, our findings can reinvigorate interest in HSP90 inhibition as a complementary approach for immunotherapy.”
Using the new dosing regimen, the researchers were able to clear tumors in mouse models at drug concentrations that are 25-50 times lower than those used in clinical trials, significantly reducing the risk for toxic side effects in patients. Importantly, because several forms of HSP90 inhibitors have already undergone extensive clinical testing, the new dosing regimen can be tested in patients quickly.
This work was supported in part by the Damon Runyon Cancer Research Foundation, the Takeda Pharmaceuticals Immune Oncology Research Fund, and an MIT Training Grant in Environmental Science; foundational work on HSF1 was supported by the Koch Institute Frontier Research Program.
J-PAL North America, a research center at MIT, will partner with two leading education technology nonprofits to test promising models to improve learning, as part of the center’s second Education, Technology, and Opportunity Innovation Competition.
Now in its second year, J-PAL North America’s Education, Technology, and Opportunity Innovation Competition supports education leaders in using randomized evaluations to generate evidence on how technology can improve student learning, particularly for students from disadvantaged backgrounds. Last year, J-PAL North America partnered with the Family Engagement Lab to develop an evaluation of a multilingual digital messaging platform, and with Western Governors University’s Center for Applied Learning Science to evaluate scalable models to improve student learning in math.
This year, J-PAL North America will continue its work to support rigorous evaluations of educational technologies aimed at reducing disparities by partnering with Boys and Girls Clubs of Greater Houston, a youth-development organization that provides education and social services to at-risk students, and MIND Research Institute, a nonprofit committed to improving math education.
“Even just within the first and second year of the J-PAL ed-tech competition, there continues to be an explosion in promising new initiatives,” says Philip Oreopoulos, professor of economics at the University of Toronto and co-chair of the J-PAL Education, Technology, and Opportunity Initiative. “We’re excited to try to help steer this development towards the most promising and effective programs for improving academic success and student well-being.”
Boys and Girls Clubs of Greater Houston will partner with J-PAL North America to develop an evaluation of the BookNook reading app, a research-based intervention technology that aims to improve literacy skills of K-8 students.
“One of our commitments to our youth is to prepare them to be better citizens in life, and we do this through our programming, which supplements the education they receive in school,” says Michael Ewing, director of programs at Boys & Girls Clubs of Greater Houston. “BookNook is one of our programs that we know can increase reading literacy and help students achieve at a higher level. We are excited about this opportunity to conduct a rigorous evaluation of BookNook’s technology because we can substantially increase our own accountability as an organization, ensuring that we are able to track the literacy gains of our students when the program is implemented with fidelity.”
Children who do not master reading by a young age are often placed at a significant disadvantage to their peers throughout the rest of their development. However, many effective interventions for students struggling with reading involve one-on-one or small-group instruction that places a heavy demand on school resources and teacher time. This makes it particularly challenging for schools that are already resource-strapped and face a shortage of teachers to meet the needs of students who are struggling with reading.
The BookNook app offers a channel to bring research-proven literacy intervention strategies to greater numbers of students through accessible technology. The program is heavily scaffolded so that both teachers and non-teachers can use it effectively, allowing after-school staff like those at Boys & Girls Clubs of Greater Houston to provide adaptive instruction to students struggling with reading.
“Our main priority at BookNook is student success,” says Nate Strong, head of partnerships for the BookNook team. “We are really excited to partner with J-PAL and with Boys & Girls Clubs of Greater Houston to track the success of students in Houston and learn how we can do better for them over the long haul.”
MIND Research Institute seeks to partner with J-PAL North America to develop a scalable model that will deepen students’ conceptual understanding of mathematics. MIND’s Spatial Temporal (ST) Math program is a pre-K-8 visual instructional program that leverages the brain's spatial-temporal reasoning ability, using challenging visual puzzles, non-routine problem solving, and animated informative feedback to help students understand and solve mathematical problems.
“We’re thrilled and honored to begin this partnership with J-PAL to build our capacity to conduct randomized evaluations,” says Andrew Coulson, chief data science officer for MIND. “It's vital we continue to rigorously evaluate the ability of ST Math's spatial-temporal approach to provide a level playing field for every student, and to show substantial effects on any assessment. With the combination of talent and experience that J-PAL brings, I expect that we will also be exploring innovative research questions, metrics and outcomes, methods and techniques to improve the applicability, validity and real-world usability of the findings.”
J-PAL North America is excited to work with these two organizations and continue to support rigorous evaluations that will help us better understand the role technology should play in learning. Boys & Girls Clubs of Greater Houston and MIND Research Institute will help J-PAL contribute to the growing evidence base on education technology that can help guide decision-makers in understanding which uses of education technology are truly helping students learn amidst a rapidly changing technological landscape.
J-PAL North America is a regional office of the Abdul Latif Jameel Poverty Action Lab. J-PAL was established in 2003 as a research center at MIT’s Department of Economics. Since then, it has built a global network of affiliated professors based at over 58 universities and regional offices in Africa, Europe, Latin America and the Caribbean, North America, South Asia, and Southeast Asia. J-PAL North America was established with support from the Alfred P. Sloan Foundation and Arnold Ventures and works to improve the effectiveness of social programs in North America through three core activities: research, policy outreach, and capacity building. J-PAL North America’s education technology work is supported by the Overdeck Family Foundation and Arnold Ventures.
If you knew that hundreds of millions of running shoes are disposed of in landfills each year, would you prefer a high-performance athletic shoe that is biodegradable? Would being able to monitor your fitness in real time and help you avoid injury while you are running appeal to you? If so, look no further than the collaboration between MIT and the Fashion Institute of Technology (FIT).
For the second consecutive year, students from each institution teamed up for two weeks in late June to create product concepts exploring the use of advanced fibers and technology. The workshops were held collaboratively with Advanced Functional Fabrics of America (AFFOA), a Cambridge, Massachusetts-based national nonprofit whose goal is to enable a manufacturing-based transformation of traditional fibers, yarns, and textiles into highly sophisticated, integrated, and networked devices and systems.
“Humans have made use of natural fibers for millennia. They are essential as tools, clothing and shelter,” says Gregory C. Rutledge, lead principal investigator for MIT in AFFOA and the Lammot du Pont Professor in Chemical Engineering. “Today, new fiber-based solutions can have a significant and timely impact on the challenges facing our world.”
The students had the opportunity this year to respond to a project challenge posed by footwear and apparel manufacturer New Balance, a member of the AFFOA network. Students spent their first week in Cambridge learning new technologies at MIT, and the second at FIT, a college of the State University of New York in New York City, working on projects and prototypes. On the last day of the workshop, the teams presented their final projects at the headquarters of Lafayette 148 at the Brooklyn Navy Yard, with New Balance Creative Manager of Computational Design Onur Yuce Gun in attendance.
Team Natural Futurism presented a concept for a biodegradable lifestyle shoe using natural material alternatives, including bacterial cellulose and mycelium, and advanced fiber concepts to avoid the use of chemical dyes. The result was a shoe that is both sustainable and aesthetically appealing. Team members included: Giulia de Garay (FIT, Textile Development and Marketing), Rebecca Grekin ’19 (Chemical Engineering), rising senior Kedi Hu (Chemical Engineering/Architecture), Nga Yi "Amy" Lam (FIT, Textile Development and Marketing), Daniella Koller (FIT, Fashion Design), and Stephanie Stickle (FIT, Textile Surface Design).
Team CoMIT to Safety Before ProFIT explored the various ways that runners get hurt, sometimes from acute injuries but more often from overuse. Their solution was to incorporate intuitive textiles, as well as tech elements such as a silent alarm and LED display, into athletic clothing and shoes for entry-level, competitive, and expert runners. The goal is to help runners at all levels to eliminate distraction, know their physical limits, and be able to call for help. Team members included Rachel Cheang (FIT, Fashion Design/Knitwear), Jonathan Mateer (FIT, Accessories Design), Caroline Liu ’19 (Materials Science and Engineering), and Xin Wen ’19 (Electrical Engineering and Computer Science).
"It is critical for design students to work in a team environment engaging in the latest technologies. This interaction will support the invention of products that will define our future," comments Joanne Arbuckle, deputy to the president for industry partnerships and collaborative programs at FIT.
The specific content of this workshop was co-designed by MIT postdocs Katia Zolotovsky of the Department of Biological Engineering and Mehmet Kanik of the Research Laboratory of Electronics, with assistant professor of fashion design Andy Liu from FIT, to teach the fundamentals of fiber fabrication, 3-D printing with light, sensing, and biosensing. Participating MIT faculty included Yoel Fink, who is CEO of AFFOA and professor of materials science and electrical engineering; Polina Anikeeva, who is associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences; and Nicholas Xuanlai Fang, professor of mechanical engineering. Participating FIT faculty were Preeti Arya, assistant professor, Textile Development and Marketing; Patrice George, associate professor, Textile Development and Marketing; Suzanne Goetz, associate professor, Textile Surface Design; Tom Scott, Fashion Design; David Ulan, adjunct assistant professor, Accessories Design; and Gregg Woodcock, adjunct instructor, Accessories Design.
To facilitate the intersection of design and engineering for products made of advanced functional fibers, yarns, and textiles, a brand-new workforce must be created and inspired by future opportunities. “The purpose of the program is to bring together undergraduate students from different backgrounds, and provide them with a cross-disciplinary, project-oriented experience that gets them thinking about what can be done with these new materials,” Rutledge adds.
The goal of MIT, FIT, AFFOA, and industrial partner New Balance is to accelerate innovation in high-tech, U.S.-based manufacturing involving fibers and textiles, and potentially to create a whole new industry based on breakthroughs in fiber technology and manufacturing. AFFOA, a Manufacturing Innovation Institute founded in 2016, is a public-private partnership between industry, academia, and both state and federal governments.
“Collaboration and teamwork are DNA-level attributes of the New Balance workplace,” says Chris Wawrousek, senior creative design lead in the NB Innovation Studio. “We were very excited to participate in the program from a multitude of perspectives. The program allowed us to see some of the emerging research in the field of technical textiles. In some cases, these technologies are still very nascent, but give us a window into future developments.”
“The diverse pairing and short time period also remind us of the energy captured in an academic crash course, and just how much teams can do in a condensed period of time,” Wawrousek adds. “Finally, it’s a great chance to connect with this future generation of designers and engineers, hopefully giving them an exciting window into the work of our brand.”
By building upon their different points of view from design and science, the teams demonstrated what is possible when creative individuals from each area act and think as one. “When designers and engineers come together and open their minds to creating new technologies that ultimately will impact the world, we can imagine exciting new multi-material fibers that open up a new spectrum of applications in various markets, from clothing to medical and beyond,” says Yuly Fuentes, MIT Materials Research Laboratory project manager for fiber technologies.
You know that person who always seems to be ahead of their deadlines, despite being swamped? Do you look at them with envy and wonder how they do it?
"Regardless of location, industry, or occupation, productivity is a challenge faced by every professional," says Robert Pozen, senior lecturer at the MIT Sloan School of Management.
As part of his ongoing research and aided by MIT undergraduate Kevin Downey, Pozen surveyed 20,000 self-selected individuals in management from six continents to learn why some people are more productive than others.
The survey tool, dubbed the Pozen Productivity Rating, consists of 21 questions divided into seven categories: planning your schedule, developing daily routines, coping with your messages, getting a lot done, improving your communication skills, running effective meetings, and delegating to others. These particular habits and skills are core to Pozen’s MIT Sloan Executive Education program, Maximizing Your Productivity: How to Become an Efficient and Effective Executive, and his bestselling book, "Extreme Productivity: Boost Your Results, Reduce Your Hours."
After cleaning up the data, Pozen and Downey obtained a complete set of answers from 19,957 respondents. Roughly half were residents of North America; another 21 percent were residents of Europe, and 19 percent were residents of Asia. The remaining 10 percent included residents of Australia, South America, and Africa.
They identified the groups of people with the highest productivity ratings and found that professionals with the highest scores tended to do well on the same clusters of habits:
- They planned their work based on their top priorities and then acted with a definite objective;
- they developed effective techniques for managing a high volume of information and tasks; and
- they understood the needs of their colleagues, enabling short meetings, responsive communications, and clear directions.
The results were also interesting when parsed by the demographics of the survey participants.
Geographically, the average productivity score for respondents from North America was in the middle of the pack, even though Americans tend to work longer hours. In fact, the North American score was significantly lower than the average productivity scores for respondents from Europe, Asia, and Australia.
Age and seniority were highly correlated with personal productivity — older and more senior professionals recorded higher scores than younger and more junior colleagues. Habits of these more senior respondents included developing routines for low-value activities, managing message flow, running effective meetings, and delegating tasks to others.
While the overall productivity scores of male and female professionals were almost the same, there were some noteworthy differences in how women and men managed to be so productive. For example, women tended to score particularly high when it came to running effective meetings — keeping meetings to less than 90 minutes and finishing with an agreement of next steps. By contrast, men did particularly well at coping with high message volume — not looking at their emails too frequently and skipping over the messages of low value.
Coping with your daily flood of messages
While it’s clear that the ability to deal with inbox overload is key to productivity, how that’s accomplished may be less clear to many of us who shudder at our continuous backlog of emails.
“We all have so much small stuff, like email, that overwhelms us, and we wind up dedicating precious time to it,” says Pozen. “Most of us look at email every three to five minutes. Instead, look every hour or two, and when you do look, look only at subject matter and sender, and essentially skip over 60-80 percent of it, because most emails you get aren’t very useful.” Pozen also encourages answering important emails immediately instead of flagging them and then finding them again later (or forgetting altogether), as well as flagging important contacts and making ample use of email filters.
However, Pozen stresses that managing incoming emails, while an important skill, needs to be paired with other, more big-picture habits in order to be effective, such as defining your highest priorities. He warns that without a specific set of goals to pursue — both personal and professional — many ambitious people devote insufficient time to activities that actually support their top goals.
More tips for maximizing your productivity
If you want to become more productive, try developing the “habit clusters” demonstrated in Pozen’s survey results and shared by the most productive professionals. These include:
- Focusing on your primary objectives: Every night, revise your next day’s schedule to stress your top priorities. Decide your purpose for reading any lengthy material, before you start.
- Managing your work overload: Skip over 50-80 percent of your emails based on the sender and the subject. Break large projects into small steps — and start with step one.
- Supporting your colleagues: Limit any meeting to 90 minutes or less and end each meeting with clearly defined next steps. Agree on success metrics with your team.
Pozen's survey tool is still available online. Those completing it will receive a feedback report offering practical tips for improving productivity. You can also learn from Pozen firsthand in his MIT Executive Education program, Maximizing Your Personal Productivity.
Apartment seekers in big cities often use the presence of restaurants to determine if a neighborhood would be a good place to live. It turns out there is a lot to this rule of thumb: MIT urban studies scholars have now found that in China, restaurant data can be used to predict key socioeconomic attributes of neighborhoods.
Indeed, using online restaurant data, the researchers say, they can effectively predict a neighborhood’s daytime population, nighttime population, the number of businesses located in it, and the amount of overall spending in the neighborhood.
“The restaurant industry is one of the most decentralized and deregulated local consumption industries,” says Siqi Zheng, an urban studies professor at MIT and co-author of a new paper outlining the findings. “It is highly correlated with local socioeconomic attributes, like population, wealth, and consumption.”
Using restaurant data as a proxy for other economic indicators can have a practical purpose for urban planners and policymakers, the researchers say. In China, as in many places, a census is only taken once a decade, and it may be difficult to analyze the dynamics of a city’s ever-changing areas on a faster-paced basis. Thus new methods of quantifying residential levels and economic activity could help guide city officials.
“Even without census data, we can predict a variety of a neighborhood’s attributes, which is very valuable,” adds Zheng, who is the Samuel Tak Lee Associate Professor of Real Estate Development and Entrepreneurship, and faculty director of the MIT China Future City Lab.
“Today there is a big data divide,” says Carlo Ratti, director of MIT’s Senseable City Lab, and a co-author of the paper. “Data is crucial to better understanding cities, but in many places we don’t have much [official] data. At the same time, we have more and more data generated by apps and websites. If we use this method we [can] understand socioeconomic data in cities where they don’t collect data.”
The paper, “Predicting neighborhoods’ socioeconomic attributes using restaurant data,” appears this week in the Proceedings of the National Academy of Sciences. The authors are Zheng, who is the corresponding author; Ratti; and Lei Dong, a postdoc co-hosted by the MIT China Future City Lab and the Senseable City Lab.
The study takes a close neighborhood-level look at nine cities in China: Baoding, Beijing, Chengdu, Hengyang, Kunming, Shenyang, Shenzhen, Yueyang, and Zhengzhou. To conduct the study, the researchers extracted restaurant data from the website Dianping, which they describe as the Chinese equivalent of Yelp, the English-language business-review site.
By matching the Dianping data to reliable, existing data for those cities — including anonymized and aggregated mobile phone location data from 56.3 million people, bank card records, company registration records, and some census data — the researchers found they could predict 95 percent of the variation in daytime population among neighborhoods. They also predicted 95 percent of the variation in nighttime population, 93 percent of the variation in the number of businesses, and 90 percent of the variation in levels of consumer consumption.
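"Predicting 95 percent of the variation" refers to the coefficient of determination (R²) of a fitted model. The sketch below is purely illustrative, using synthetic data and hypothetical feature names in place of the study's actual Dianping features and mobile-phone-derived targets; it shows only how "percent of variation explained" is computed from a regression fit.

```python
import numpy as np

# Hypothetical stand-ins for the study's inputs: per-neighborhood restaurant
# features (e.g., counts, average price, review volume) and a target such as
# daytime population. All names and numbers here are invented for illustration.
rng = np.random.default_rng(0)
n_neighborhoods = 200
restaurant_features = rng.normal(size=(n_neighborhoods, 3))
true_weights = np.array([1.5, -0.7, 2.0])
daytime_population = restaurant_features @ true_weights + rng.normal(scale=0.5, size=n_neighborhoods)

# Ordinary least-squares fit, then R^2 = share of variance the model explains
coef, *_ = np.linalg.lstsq(restaurant_features, daytime_population, rcond=None)
predicted = restaurant_features @ coef
ss_res = np.sum((daytime_population - predicted) ** 2)
ss_tot = np.sum((daytime_population - daytime_population.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")
```

The study's actual model uses machine-learning techniques rather than a plain least-squares fit, but the reported "percent of variation" figures are interpretable in the same way: an R² near 0.95 means the predictions track almost all neighborhood-to-neighborhood differences.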
“We have used new publicly available data and developed new data augmentation methods to address these urban issues,” says Dong, who adds that the study's model is a “new contribution to [the use of] both data science for social good, and big data for urban economics communities.”
The researchers note that this is a more accurate proxy for estimating neighborhood-level demographic and economic activity than other methods previously used. For instance, other researchers have used satellite imaging to calculate the amount of nighttime light in cities, and in turn used the quantity of light to estimate neighborhood-level activity. While that method fares well for population estimates, the restaurant-data method is better overall, and much better at estimating business activity and consumer spending.
Zheng says she feels “confident” that the researchers’ model could be applied to other Chinese cities because it already shows good predictive power across cities. But the researchers also believe the method they employed — which uses machine learning techniques to zero in on significant correlations — could potentially be applied to cities around the globe.
“These results indicate the restaurant data can capture common indicators of socioeconomic outcomes, and these commonalities can be transferred … with reasonable accuracy in cities where survey outcomes are unobserved,” the researchers state in the paper.
As the scholars acknowledge, their study observed correlations between restaurant data and neighborhood characteristics, rather than specifying the exact causal mechanisms at work. Ratti notes that the causal link between restaurants and neighborhood characteristics can run both ways: Sometimes restaurants can fill demand in an already-thriving area, while at other times their presence is a harbinger of future development.
“There is always [both] a push and a pull” between restaurants and neighborhood development, Ratti says. “But we show the socioeconomic data is very well-reflected in the restaurant landscape, in the cities we look at. The interesting finding is that this seems to be so good as a proxy.”
Zheng says she hopes additional scholars will pick up on the method, which in principle could be applied to many urban studies topics.
“The restaurant data itself, as well as the variety of neighborhood attributes it predicts, can help other researchers study all kinds of urban issues, which is very valuable,” Zheng says.
The research grew out of an ongoing collaboration between MIT’s China Future City Lab and the MIT Senseable City Lab Consortium, which both use a broad range of data sources to shed new light on urban dynamics.
The study was also supported, in part, by the National Science Foundation of China.
The Office of the Vice President for Research announced the appointment of a new leadership team for the Nuclear Reactor Laboratory (NRL). The team will consist of Gordon Kohse, managing director for operations; Jacopo Buongiorno, science and technology director and director for strategic R&D partnerships; and Lance Snead, senior advisor for strategic partnerships and business development and leader of the NRL Irradiation Materials Sciences Group. The team will succeed David Moncton, who plans to return to his research after taking a department head sabbatical. Moncton has served as director of the NRL since 2004.
The new leadership team will collectively oversee an updated organizational model for the NRL that will allow the laboratory to more closely align its operations with the scientific research agenda of the Department of Nuclear Science and Engineering and other MIT researchers. “I look forward to working with this thoughtful and experienced team as they implement their vision for a vibrant operation supporting the critical work of our research community,” says Maria Zuber, vice president for research.
Kohse, a principal research scientist with the NRL and previously the deputy director of research and services, has worked with the NRL for over 40 years, ensuring the smooth operation of experiments at the laboratory. As managing director for operations, Kohse will oversee reactor operations, the newly created program management group, quality assurance, and the irradiation engineering group, and will work closely with Lance Snead on overseeing the Irradiation Materials Sciences Group. Kohse says, “I look forward to a new chapter in my work at the NRL. This is an exciting opportunity to build on the skills and dedication of the laboratory staff and to renew and strengthen cooperation with MIT faculty. My goal is to continue safe, reliable operation of the reactor, and to expand its capabilities in the service of expanding missions in nuclear research and education.”
In his new NRL leadership role, Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering, will oversee the NRL’s Centers for Irradiation Materials Science. These centers will focus on a variety of research questions ranging from new nuclear fuels, to in-core sensors, to nuclear materials degradation. All experimental research utilizing the MIT reactor will be coordinated through the Centers for Irradiation Materials Science. Ongoing and installed programs will be managed through the program management group.
Buongiorno is also the director of the Center for Advanced Energy Systems (CANES), which is one of eight Low-Carbon-Energy Centers (LCEC) of the MIT Energy Initiative (MITEI); he is also the director of the recently completed MIT study on the Future of Nuclear Energy in a Carbon-Constrained World.
Buongiorno and Snead, an MIT research scientist and former corporate fellow with Oak Ridge National Laboratory, will spearhead efforts to expand external collaborations with federal and industry sponsors and work with MIT’s faculty to identify ways the NRL can provide the needed experimental support for their research and education objectives. “Our vision is to grow the MIT reactor value to MIT’s own research community as well as position it at the center of the worldwide efforts to develop new nuclear technologies that contribute to energy security and decarbonization of the global economy,” says Buongiorno.
This new leadership team will build on NRL’s accomplishments under the direction of David Moncton. Moncton was instrumental in the 20-year relicensing of the reactor, and he led the NRL in developing a highly productive and innovative research program for in-core studies of structural materials, new fuel cladding composites, new generations of nuclear instrumentation based on ultrasonic sensors and fiber optics, and the properties of liquid salt in a radiation environment for use as a coolant in a new generation of high-temperature reactors. The NRL has become a key partner of the Nuclear Science User Facilities (NSUF) sponsored by Idaho National Laboratory, and it has established a world-class reputation for its in-core irradiation program.
Anne White, professor and head of the Department of Nuclear Science and Engineering, notes, “The unique capabilities of NRL together with the Centers for Irradiation Materials Science will create a new and exciting nexus for nuclear-related research and education at MIT, opening up opportunities not only for faculty in the nuclear science and engineering department (Course 22), but across the entire Institute.”
The new leadership team will begin their tenure effective Aug. 1, 2019.
For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.
The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.
“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
The researchers trained animals to perform a timing task in which they had to reproduce different time intervals. Performing this task is challenging because our sense of time is imperfect and can go too fast or too slow. However, when intervals are consistently within a fixed range, the best strategy is to bias responses toward the middle of the range. This is exactly what animals did. Moreover, recording from neurons in the frontal cortex revealed a simple mechanism for Bayesian integration: Prior experience warped the representation of time in the brain so that patterns of neural activity associated with different intervals were biased toward those that were within the expected range.
MIT postdoc Hansem Sohn, former postdoc Devika Narain, and graduate student Nicolas Meirhaeghe are the lead authors of the study, which appears in the July 15 issue of Neuron.
Ready, set, go
Statisticians have known for centuries that Bayesian integration is the optimal strategy for handling uncertain information. When we are uncertain about something, we automatically rely on our prior experiences to optimize behavior.
“If you can’t quite tell what something is, but from your prior experience you have some expectation of what it ought to be, then you will use that information to guide your judgment,” Jazayeri says. “We do this all the time.”
In this new study, Jazayeri and his team wanted to understand how the brain encodes prior beliefs, and put those beliefs to use in the control of behavior. To that end, the researchers trained animals to reproduce a time interval, using a task called “ready-set-go.” In this task, animals measure the time between two flashes of light (“ready” and “set”) and then generate a “go” signal by making a delayed response after the same amount of time has elapsed.
They trained the animals to perform this task in two contexts. In the “Short” scenario, intervals varied between 480 and 800 milliseconds, and in the “Long” context, intervals were between 800 and 1,200 milliseconds. At the beginning of the task, the animals were given the information about the context (via a visual cue), and therefore knew to expect intervals from either the shorter or longer range.
Jazayeri had previously shown that humans performing this task tend to bias their responses toward the middle of the range. Here, they found that animals do the same. For example, if animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds.
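This bias is exactly what a Bayesian observer would produce. A minimal sketch, assuming for illustration a uniform prior over each context's range and Gaussian measurement noise (the noise level `sigma` here is an arbitrary choice, not a value from the study):

```python
import math

def bayes_estimate(measurement, lo, hi, sigma, n=2000):
    """Bayes least-squares estimate of a true interval, given a noisy
    measurement, a uniform prior on [lo, hi], and Gaussian noise (ms)."""
    num = den = 0.0
    for i in range(n):
        t = lo + (hi - lo) * i / (n - 1)   # candidate true interval
        lik = math.exp(-0.5 * ((measurement - t) / sigma) ** 2)
        num += t * lik                     # posterior-weighted mean
        den += lik
    return num / den

# The same 800 ms measurement is interpreted differently under the two priors:
short = bayes_estimate(800, 480, 800, sigma=80)    # "Short" context
long_ = bayes_estimate(800, 800, 1200, sigma=80)   # "Long" context
print(round(short), round(long_))  # short estimate < 800 < long estimate
```

Because the posterior averages the prior range against the measurement, the same 800-millisecond stimulus is pulled below 800 ms under the Short prior and above it under the Long prior, mirroring the animals' behavior.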
“Trials that were identical in almost every possible way, except the animal’s belief, led to different behaviors,” Jazayeri says. “That was compelling experimental evidence that the animal is relying on its own belief.”
Once they had established that the animals relied on their prior beliefs, the researchers set out to find how the brain encodes prior beliefs to guide behavior. They recorded activity from about 1,400 neurons in a region of the frontal cortex, which they have previously shown is involved in timing.
During the “ready-set” epoch, the activity profile of each neuron evolved in its own way, and about 60 percent of the neurons had different activity patterns depending on the context (Short versus Long). To make sense of these signals, the researchers analyzed the evolution of neural activity across the entire population over time, and found that prior beliefs bias behavioral responses by warping the neural representation of time toward the middle of the expected range.
“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at Cambridge University, who was not involved in the study.
Researchers believe that prior experiences change the strength of connections between neurons. The strength of these connections, also known as synapses, determines how neurons act upon one another and constrains the patterns of activity that a network of interconnected neurons can generate. The finding that prior experiences warp the patterns of neural activity provides a window onto how experience alters synaptic connections. “The brain seems to embed prior experiences into synaptic connections so that patterns of brain activity are appropriately biased,” Jazayeri says.
As an independent test of these ideas, the researchers developed a computer model consisting of a network of neurons that could perform the same ready-set-go task. Using techniques borrowed from machine learning, they were able to modify the synaptic connections and create a model that behaved like the animals.
These models are extremely valuable as they provide a substrate for the detailed analysis of the underlying mechanisms, a procedure known as “reverse-engineering.” Remarkably, reverse-engineering the model revealed that it solved the task the same way the monkeys’ brains did. The model also had a warped representation of time according to prior experience.
The researchers used the computer model to further dissect the underlying mechanisms using perturbation experiments that are currently impossible to do in the brain. Using this approach, they were able to show that unwarping the neural representations removes the bias in the behavior. This important finding validated the critical role of warping in Bayesian integration of prior knowledge.
The researchers now plan to study how the brain builds up and slowly fine-tunes the synaptic connections that encode prior beliefs as an animal is learning to perform the timing task.
The research was funded by the Center for Sensorimotor Neural Engineering, the Netherlands Scientific Organization, the Marie Sklodowska Curie Reintegration Grant, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, and the McGovern Institute.
The following is adapted from a joint release from the MIT Press and the Harvard Data Science Initiative.
The MIT Press and the Harvard Data Science Initiative (HDSI) have announced the launch of the Harvard Data Science Review (HDSR). The open-access journal, published by MIT Press and hosted online via the multimedia platform PubPub, an initiative of the MIT Knowledge Futures group, will feature leading global thinkers in the burgeoning field of data science, making research, educational resources, and commentary accessible to academics, professionals, and the interested public. With demand for data scientists booming, HDSR will provide a centralized, authoritative, and peer-reviewed publishing community to service the growing profession.
The first issue features articles on topics ranging from authorship attribution of John Lennon-Paul McCartney songs to machine learning models for predicting drug approvals to artificial intelligence (AI). Future content will have a similar range of general interest, academic, and professional content intended to foster dialogue among researchers, educators, and practitioners about data science research, practice, literacy, and workforce development. HDSR will prioritize quality over quantity, with a primary emphasis on substance and readability, attracting readers via inspiring, informative, and intriguing papers, essays, stories, interviews, debates, guest columns, and data science news. By doing so, HDSR intends to help define and shape the profession as a scientifically rigorous and globally impactful multidisciplinary field.
Combining features of a premier research journal, a leading educational publication, and a popular magazine, HDSR will leverage digital technologies and advances to facilitate author-reader interactions globally and learning across various media.
The Harvard Data Science Review will serve as a hub for high-quality work in the growing field of data science, noted by the Harvard Business Review as the "sexiest job of the 21st century." It will feature articles that provide expert overviews of complex ideas and topics from leading thinkers with direct applications for teaching, research, business, government, and more. It will highlight content in the form of commentaries, overviews, and debates intended for a wide readership; fundamental philosophical, theoretical, and methodological research; innovations and advances in learning, teaching, and communicating data science; and short communications and letters to the editor.
The dynamic digital edition is freely available on the PubPub platform to readers around the globe.
Amy Brand, director of the MIT Press, states, “For too long the important work of data scientists has been opaque, appearing mainly in academic journals with limited reach. We are thrilled to partner with the Harvard Data Science Initiative to publish work that will have a deep impact on popular understanding of the growing field of data science. The Review will be an unparalleled resource for advancing data literacy in society.”
Francesca Dominici, the Clarence James Gamble Professor of Biostatistics, Population and Data Science, and David Parkes, the George F. Colony Professor of Computer Science, both at Harvard University, announce, “As codirectors of the Harvard Data Science Initiative, we’re thrilled for the launch of this new journal. With its rigorous and cross-disciplinary thinking, the Harvard Data Science Review will advance the new science of data. By sharing stories of positive transformational impact as well as raising questions, this collective endeavor will reveal the contours that will shape future research and practice.”
Xiao-li Meng, the Whipple V.N. Jones Professor of Statistics at Harvard and founding editor-in-chief of HDSR, explains, “The revolutionary ability to collect, process, and apply new analytics to extract powerful insights from data has a tremendous influence on our lives. However, hype and misinformation have emerged as unfortunate side effects of data science’s meteoric rise. The Harvard Data Science Review is designed to cut through the hype to engage readers with substantive and informed articles from the leading data science experts and practitioners, ranging from philosophers of ethics and historians of science to AI researchers and data science educators. In short, it is ‘everything data science and data science for everyone.’”
Elizabeth Langdon-Gray, inaugural executive director of HDSI, comments, “The Harvard Data Science Initiative was founded to foster collaboration in both research and teaching and to catalyze research that will benefit our society and economy. The Review plays a vital part in our effort to empower research progress and education globally and to solve some of the world’s most important challenges.”
The inaugural issue of HDSR will publish contributions from internationally renowned scholars and educators, as well as leading researchers in industry and government, such as Christine Borgman (University of California at Los Angeles), Rodney Brooks (MIT), Emmanuel Candes (Stanford University), David Donoho (Stanford University), Luciano Floridi (Oxford/The Alan Turing Institute), Alan M. Garber (Harvard), Barbara J. Grosz (Harvard), Alfred Hero (University of Michigan), Sabina Leonelli (University of Exeter), Michael I. Jordan (University of California at Berkeley), Andrew Lo (MIT), Maja Matarić (University of Southern California), Brendan McCord (U.S. Department of Defense), Nathan Sanders (WarnerMedia), Rebecca Willett (University of Chicago), and Jeannette Wing (Columbia University).
One of the earliest interactive course videos offered by MIT BLOSSOMS (Blended Learning Open Source Science or Math Studies) looks at the physics of donkey carts — used frequently in the streets of Pakistan. The lesson, created by Naveed Malik ’81, looks at Newton’s Third Law of Motion, teaching how gravity can affect how two objects interact through the very visual, real-world example of a donkey pulling a cart.
At the recent LINC 2019 conference, Professor Richard Larson, principal investigator of BLOSSOMS and founding director of LINC, provided this example from 2010 of teaching STEM concepts in an engaging and locally relevant way. Both BLOSSOMS and LINC have grown substantially over the last decade, continuing to explore and expand on the ways that technology-enabled education can improve education access — particularly for developing countries and underserved populations.
Vijay Kumar, executive director of the Abdul Latif Jameel World Education Lab (J-WEL) and associate dean for open learning at MIT, welcomed the very international LINC 2019 audience, comprising approximately 130 attendees representing 31 countries.
Kumar noted that the themes of the conference mirror the central mission of J-WEL, especially applying the innovation and research of MIT to catalyze change — with a particular focus on the developing world and emerging economies — “to address hard problems of education access and inequality.”
This year, LINC focused on “the new learning society,” trying to understand how best to address educational opportunities for diverse learners from around the world with different aspirations, motivations, and needs. Included in this group are many people who are displaced or face other financial or social obstacles to accessing a quality education. In addition to new types of learners, new tools and technologies have emerged. With the explosion of online education, digital learning has become central to the discourse on educational change.
“We are looking at questions of how technology might allow us to think more deeply about learning outcomes,” says Kumar. “How do you initiate change, how do you share resources, how do you create process to scale change, and how do you generate and maintain learning communities?”
“Leapfrogging” for bigger advances in education
Keynote speaker Rebecca Winthrop, director of the Center for Universal Education and senior fellow for global economy and development at the Brookings Institution, talked about innovations that aim to scale education to ensure that all young people across the globe develop the skills needed for a fast-changing world. Winthrop is the author of "Leapfrogging Inequality: Remaking Education to Help Young People Thrive," published by the Brookings Institution in 2018.
Many young people throughout the world — for a variety of reasons — do not have access to quality education. The Brookings Institution has identified a “100-year gap” between levels of education in wealthy and developing countries — meaning that without substantial changes in current education systems, it will take 100 years for children in developing countries to reach the education levels of children in developed countries.
Compounding this challenge is the reality that this 100-year gap refers solely to current education — to the best practices of education today. With new technologies shifting the landscape of what work might look like in the future, education needs to evolve, as well.
“We need to shift to skills that will prepare students for the future,” said Winthrop. “Students need a broad set of competencies, as well as social and emotional skills.”
Winthrop noted that research indicates that, without any significant change in practices and policies, 884 million young people worldwide will not have basic secondary-level skills by 2030.
She discussed the potential of a “leapfrogging” approach to reforming education. As the name implies, a “leapfrogging” approach to education focuses mostly on “rapid, non-linear progress.” This approach seeks to provide access, quality, and relevance all at once, rather than in stages or steps. There is an emphasis on more student-centered teaching and learning and individualized programs that are results-oriented.
Winthrop provided a variety of examples of specific efforts that in some way reflect this approach, including a satellite education program in Brazil that divides the teaching profession into lecturing and mentoring teachers to reach more students in rural communities; a tablet-based, distance-learning program based in Sudan, Jordan, Lebanon, and Uganda; a literacy and numeracy game that started in Colombia; and tablets preloaded with localized educational content provided to small groups of students in India.
Winthrop emphasized the importance of designing for scale at the beginning, considering the cost per student and what is most important about the program.
“You need to know what is the essence of why the thing is successful, and you need to make sure that core element is preserved when moving to another context or scale,” she said.
Advancing education at MIT and beyond
A panel discussion on “Learning Everywhere” provided some examples of innovative approaches to expanding education access, including the Refugee Action Hub (ReACT) Certificate Program, which was launched during an MIT Solve competition at the Institute. The program seeks to provide pathways to education for refugees, who very rarely have access to higher education, and it includes in-person lectures, online classes, and a paid internship. Key elements of this program, and of many others discussed, are human interaction and community-building.
Another example of an innovative education program with some “leapfrogging” characteristics is the Program in Data Science, created by CoLAB, a hub of disruptive innovation organizations in Uruguay. CoLAB also supports up to 500 students over the next four years to participate in a blended learning program in data science offered through the Uruguay Technological University (UTEC). Developed through membership in J-WEL Higher Education, the Program in Data Science includes online courses from MITx and Uruguayan universities, online activities facilitated by J-WEL staff, and on-site workshops run by J-WEL and MIT International Science and Technology Initiatives (MISTI).
Although a wide variety of creative and impactful efforts were highlighted at LINC 2019, many larger education systems have not yet undergone significant changes.
“Education is super innovative — it’s just largely at the margins and not at the center of systems,” says Winthrop. “It’s a problem of how we harness that for larger systemic change.”
The LINC 2019 participants and J-WEL, as a whole, aim to address this challenge.
“It’s tremendously exciting to see all the people who have come together to share their ideas and experiences,” says Kumar. “New technologies and approaches are enabling new, shared opportunities of increasing education equality. J-WEL supports and strengthens these efforts to enable substantive educational change.”
Fernando “Corby” Corbató, an MIT professor emeritus whose work in the 1960s on time-sharing systems broke important ground in democratizing the use of computers, died on Friday, July 12, at his home in Newburyport, Massachusetts. He was 93.
Decades before the existence of concepts like cybersecurity and the cloud, Corbató led the development of one of the world’s first operating systems. His “Compatible Time-Sharing System” (CTSS) allowed multiple people to use a computer at the same time, greatly increasing the speed at which programmers could work. It’s also widely credited as the first computer system to use passwords.
After CTSS Corbató led a time-sharing effort called Multics, which directly inspired operating systems like Linux and laid the foundation for many aspects of modern computing. Multics doubled as a fertile training ground for an emerging generation of programmers that included C programming language creator Dennis Ritchie, Unix developer Ken Thompson, and spreadsheet inventors Dan Bricklin and Bob Frankston.
Before time-sharing, using a computer was tedious and required detailed knowledge. Users would create programs on cards and submit them in batches to an operator, who would enter them to be run one at a time over a series of hours. Minor errors would require repeating this sequence, often more than once.
But with CTSS, which was first demonstrated in 1961, answers came back in mere seconds, forever changing the model of program development. Decades before the PC revolution, Corbató and his colleagues also opened up communication between users with early versions of email, instant messaging, and word processing.
“Corby was one of the most important researchers for making computing available to many people for many purposes,” says long-time colleague Tom Van Vleck. “He saw that these concepts don’t just make things more efficient; they fundamentally change the way people use information.”
Besides making computing more efficient, CTSS also inadvertently helped establish the very concept of digital privacy itself. With different users wanting to keep their own files private, CTSS introduced the idea of having people create individual accounts with personal passwords. Corbató’s vision of making high-performance computers available to more people also foreshadowed trends in cloud computing, in which tech giants like Amazon and Microsoft rent out shared servers to companies around the world.
“Other people had proposed the idea of time-sharing before,” says Jerry Saltzer, who worked on CTSS with Corbató after starting out as his teaching assistant. “But what he brought to the table was the vision and the persistence to get it done.”
CTSS was also the spark that convinced MIT to launch “Project MAC,” the precursor to the Laboratory for Computer Science (LCS). LCS later merged with the Artificial Intelligence Lab to become MIT’s largest research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), which is now home to more than 600 researchers.
“It’s no overstatement to say that Corby’s work on time-sharing fundamentally transformed computers as we know them today,” says CSAIL Director Daniela Rus. “From PCs to smartphones, the digital revolution can directly trace its roots back to the work that he led at MIT nearly 60 years ago.”
In 1990, Corbató was honored for his work with the Association for Computing Machinery’s Turing Award, often described as “the Nobel Prize for computing.”
From sonar to CTSS
Corbató was born on July 1, 1926, in Oakland, California. At 17 he enlisted as a technician in the U.S. Navy, where he first got the engineering bug working on a range of radar and sonar systems. After World War II he earned his bachelor's degree at Caltech before heading to MIT to complete a PhD in physics.
As a PhD student, Corbató met Professor Philip Morse, who recruited him to work with his team on Project Whirlwind, the first computer capable of real-time computation. After graduating, Corbató joined MIT's Computation Center as a research assistant, soon moving up to become deputy director of the entire center.
It was there that he started thinking about ways to make computing more efficient. For all its innovation, Whirlwind was still a rather clunky machine. Researchers often had trouble getting much work done on it, since they had to take turns using it for half-hour chunks of time. (Corbató said that it had a habit of crashing every 20 minutes or so.)
Since computer input and output devices were much slower than the computer itself, in the late 1950s a scheme called multiprogramming was developed to allow a second program to run whenever the first program was waiting for some device to finish. Time-sharing built on this idea, allowing other programs to run while the first program was waiting for a human user to type a request, thus allowing the user to interact directly with the first program.
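The interleaving that multiprogramming and time-sharing exploit can be illustrated with a modern analogy (using Python's asyncio as a stand-in for the scheduler; CTSS itself, of course, predates this API by half a century):

```python
import asyncio

# While one task waits on slow I/O, the scheduler runs another.
async def slow_io(name, delay, log):
    log.append(f"{name} waiting")
    await asyncio.sleep(delay)  # stand-in for a slow device or a human user
    log.append(f"{name} done")

async def main():
    log = []
    # Both "programs" make progress during each other's waits.
    await asyncio.gather(slow_io("A", 0.02, log), slow_io("B", 0.01, log))
    return log

print(asyncio.run(main()))  # B finishes while A is still waiting
```

The processor is never idle during a wait, which is precisely the efficiency gain that multiprogramming introduced and that time-sharing extended to interactive users.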
Saltzer says that Corbató pioneered a programming approach that would be described today as agile design.
“It’s a buzzword now, but back then it was just this iterative approach to coding that Corby encouraged and that seemed to work especially well,” he says.
In 1962, Corbató published a paper about CTSS that quickly became the talk of the slowly growing computer science community. The following year MIT invited several hundred programmers to campus to try out the system, spurring a flurry of further research on time-sharing.
Foreshadowing future technological innovation, Corbató was amazed — and amused — by how quickly people got habituated to CTSS’ efficiency.
“Once a user gets accustomed to [immediate] computer response, delays of even a fraction of a minute are exasperatingly long,” he presciently wrote in his 1962 paper. “First indications are that programmers would readily use such a system if it were generally available.”
Multics, meanwhile, expanded on CTSS’ more ad hoc design with a hierarchical file system, better interfaces to email and instant messaging, and more precise privacy controls. Peter Neumann, who worked at Bell Labs when they were collaborating with MIT on Multics, says that its design prevented the possibility of many vulnerabilities that impact modern systems, like “buffer overflow” (which happens when a program tries to write data outside the computer’s short-term memory).
“Multics was so far ahead of the rest of the industry,” says Neumann. “It was intensely software-engineered, years before software engineering was even viewed as a discipline.”
In spearheading these time-sharing efforts, Corbató served as a soft-spoken but driven commander in chief — a logical thinker who led by example and had a distinctly systems-oriented view of the world.
“One thing I liked about working for Corby was that I knew he could do my job if he wanted to,” says Van Vleck. “His understanding of all the gory details of our work inspired intense devotion to Multics, all while still being a true gentleman to everyone on the team.”
Another of Corbató’s legacies is “Corbató’s Law,” which states that the number of lines of code someone can write in a day is the same regardless of the language used. This maxim is often cited by programmers when arguing in favor of using higher-level languages.
Corbató was an active member of the MIT community, serving as associate department head for computer science and engineering from 1974 to 1978 and 1983 to 1993. He was a member of the National Academy of Engineering, and a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science.
Corbató is survived by his wife, Emily Corbató, of Brooklyn, New York; his stepsons, David and Jason Gish; his brother, Charles; his daughters, Carolyn and Nancy, from his marriage to his late wife, Isabel; and five grandchildren.
CSAIL will host an event to honor and celebrate Corbató in the coming months.
Philip G. Freelon MArch ’77, professor of the practice in the MIT Department of Architecture, lead architect for the Smithsonian’s National Museum of African American History and Culture, and a dedicated force for inclusivity within the field of architecture, died on July 9 in Durham, North Carolina, of the neurodegenerative disease amyotrophic lateral sclerosis (ALS), with which he had been diagnosed in 2016. He was 66.
For nine years beginning in 2007, Freelon taught 4.222 (Professional Practice), a required subject in the master’s in architecture program that uses current examples to illustrate the legal, ethical, and management concepts underlying the practice of architecture.
“Phil was a remarkable architect, a motivating teacher, a spirited public intellectual and above all, an exceptional human being whose modesty and respect of others and their ideas put the best face on the architect and on the profession,” says Hashim Sarkis, dean of MIT’s School of Architecture and Planning (SA+P).
A native of Philadelphia, Freelon attended Hampton University in Virginia before transferring to North Carolina State University, from which he graduated in 1975 with a bachelor of environmental design degree in architecture. He earned his master’s degree in architecture from MIT and at age 25 was the youngest person to pass the Architecture Registration Exam in North Carolina.
The Freelon Group, which he founded in 1990, became one of the largest African American-owned architectural firms in the country.
“Phil Freelon was a creative and productive alumnus of the MIT School of Architecture and Planning,” says Adèle Naudé Santos, SA+P dean when Freelon joined the faculty. “His buildings are beautifully crafted and spatially inventive, and we were proud to have him on our faculty. We are greatly saddened by his passing.”
Freelon headed multifaceted design teams for museum projects and cultural institutions such as the Museum of the African Diaspora in San Francisco, the Reginald F. Lewis Museum of Maryland African American History and Culture in Baltimore, the National Center for Civil and Human Rights in Atlanta, the Harvey B. Gantt Center for African-American Arts and Culture in Charlotte, Emancipation Park in Houston, and the Anacostia and Tenleytown branches of the District of Columbia Public Library System.
The practice joined with three other design firms as Freelon Adjaye Bond/SmithGroup to create the Smithsonian National Museum of African American History and Culture. As lead architect and architect of record for the project, on which David Adjaye was lead designer, Freelon directed the programming and planning effort that set the stage for the museum’s design.
In 2014, The Freelon Group joined global design firm Perkins and Will. Recent and current projects led by Freelon include North Carolina Freedom Park in Raleigh, the Durham County Human Services Complex, the Durham Transportation Center, and the Motown Museum Expansion in Detroit. He was appointed to the board of directors and the executive committee of Perkins and Will while serving dual roles as managing director and design director of the firm’s North Carolina practice.
In addition to his role at MIT, he was an adjunct faculty member at North Carolina State University’s College of Design and lectured at Harvard University (where he was a Loeb Fellow), the University of Maryland, Syracuse University, Auburn University, the University of Utah, the University of California at Berkeley, Kent State University, and the New Jersey Institute of Technology, among others. A Peer Professional for the GSA’s Design Excellence Program, he also served on numerous design award juries including the National AIA Institute Honor Awards jury and the National Endowment for the Arts Design Stewardship Panel.
“Phil was one of the hardest working people I ever knew,” said Lawrence Sass, associate professor in the Department of Architecture at MIT and director of the computation group. “I could not believe that someone so humble could have done so much. He was a dedicated professor in addition to being a trusted design professional, and a leader who lived in the spirit of a design giant. He taught from real-world experience. He was emotionally and professionally accessible. I will forever miss and remember his larger-than-life presence walking down the Infinite Corridor.”
Freelon was a Fellow of the American Institute of Architects, and the recipient of the AIA North Carolina’s Gold Medal, its highest individual honor. A LEED Accredited Professional, he was the 2009 recipient of the AIA Thomas Jefferson Award for Public Architecture, and in 2011 was appointed by President Obama to the U.S. Commission of Fine Arts. The Freelon Group received 26 AIA design awards (regional, state, and local) and received AIA North Carolina’s Outstanding Firm Award (2001). Freelon’s furniture design received first prize in the PPG Furniture Design Competition, and he did design contract work with Herman Miller.
His work has appeared in national professional publications including Architecture, Progressive Architecture, Architectural Record, and Contract magazine (Designer of the Year, 2008), and his and the firm’s work has been featured in Metropolis and Metropolitan Home magazines and The New York Times.
Freelon is survived by his wife of 40 years, Nnenna Freelon; his children Deen, Maya, and Pierce; three siblings; and six grandchildren. A celebration of his life will be held on Sept. 28 at the Durham County Human Services Complex in Durham. In lieu of flowers, donations can be made to NorthStar Church of the Arts, a nonprofit art and culture center founded by Nnenna and Phil Freelon.
An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.
In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.
The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.
“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”
The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.
Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.
Navigating the “combinatorial explosion”
Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.
Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.
In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.
The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.
Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.
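The error-driven search described above can be sketched as a toy hill-climbing loop. This is not the researchers’ actual software: the materials, gray values, single-row “image,” and grid size here are all hypothetical, and the real system evaluates millions of voxels against two viewing angles.

```python
import random

# Toy version of the error-driven voxel search: a one-row grayscale
# "image" whose pixel brightness is the average gray value of the
# materials stacked in each voxel column. Gray values are hypothetical.
MATERIAL_GRAY = {"clear": 1.0, "hinge": 0.6, "magnetic": 0.1}
MATERIALS = list(MATERIAL_GRAY)

def render(columns):
    """Brightness of each pixel: mean gray of its voxel column."""
    return [sum(MATERIAL_GRAY[m] for m in col) / len(col) for col in columns]

def error(columns, target):
    """Sum of squared differences between rendered and target pixels."""
    return sum((p - t) ** 2 for p, t in zip(render(columns), target))

def optimize(target, depth=8, iters=20000, seed=0):
    """Randomly reassign voxel materials, keeping only improvements."""
    rng = random.Random(seed)
    cols = [[rng.choice(MATERIALS) for _ in range(depth)] for _ in target]
    best = error(cols, target)
    for _ in range(iters):
        i, j = rng.randrange(len(cols)), rng.randrange(depth)
        old = cols[i][j]
        cols[i][j] = rng.choice(MATERIALS)  # propose swapping one voxel
        e = error(cols, target)
        if e < best:
            best = e          # improvement: keep the new material
        else:
            cols[i][j] = old  # otherwise revert the change
    return cols, best

cols, err = optimize(target=[0.9, 0.2, 0.5, 0.7])
```

The kept-or-reverted step mirrors the error signal in the article: a proposed material change survives only if it brings the rendered image closer to the target.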
Voxel by voxel
To compute the actuator’s appearance at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot down through each column of voxels in the actuator. Because actuators can be fabricated with more than 100 voxel layers, a column can contain more than 100 voxels, and different sequences of the materials in a column radiate a different shade of gray when viewed flat or at an angle.
When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.
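A minimal version of that flat-versus-tilted rendering idea can be written down directly. The per-material transmittance values, grid, and shear model below are hypothetical simplifications; the actual system traces light through millions of voxels at calibrated angles.

```python
# Sketch of the two-view rendering idea: one voxel grid yields two
# different gray values per pixel because a tilted ray drifts across
# neighboring columns as it descends. Transmittances are hypothetical.
TRANSMITTANCE = {"clear": 0.99, "brown": 0.80}

def ray_gray(grid, x, shear=0.0):
    """Multiply per-voxel transmittances along a (possibly sheared) ray.

    shear is how many columns the ray drifts sideways per layer when
    the actuator is tilted; columns wrap around for simplicity.
    """
    gray = 1.0
    for layer, row in enumerate(grid):
        col = int(x + shear * layer) % len(row)
        gray *= TRANSMITTANCE[row[col]]
    return gray

# Column 0 is brown through all 20 layers: dark viewed flat, but
# lighter when tilted, since the ray escapes into clear columns.
grid = [["brown"] + ["clear"] * 7 for _ in range(20)]
flat = ray_gray(grid, x=0)
tilted = ray_gray(grid, x=0, shear=0.5)
```

The same column thus reads dark in one view and light in the other, which is exactly the degree of freedom the optimizer exploits to encode two images in one object.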
“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.
The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight and lift, and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.
Artificial intelligence is expected to have tremendous societal impact across the globe in the near future. Now Luis Videgaray PhD ’98, former foreign minister and finance minister of Mexico, is coming to MIT to spearhead an effort that aims to help shape global AI policies, focusing on how such rising technologies will affect people living in all corners of the world.
Starting this month, Videgaray, an expert in geopolitics and AI policy, will serve as director of the MIT Artificial Intelligence Policy for the World Project (MIT AIPW), a collaboration between the MIT Sloan School of Management and the new MIT Stephen A. Schwarzman College of Computing. Videgaray will also serve as a senior lecturer at the MIT Sloan and as a distinguished fellow at the MIT Internet Policy Research Initiative.
The MIT AIPW will bring together researchers from across the Institute to explore and analyze best AI policies for countries around the world based on various geopolitical considerations. The end result of the year-long effort, Videgaray says, will be a report with actionable policy recommendations for national and local governments, businesses, international organizations, and universities — including MIT.
“The core idea is to analyze, raise awareness, and come up with useful policy recommendations for how the geopolitical context affects both the development and use of AI,” says Videgaray, who earned his PhD at MIT in economics. “It’s called AI Policy for the World, because it’s not only about understanding the geopolitics, but also includes thinking about people in poor nations, where AI is not really being developed but will be adopted and have significant impact in all aspects of life.”
“When we launched the MIT Stephen A. Schwarzman College of Computing, we expressed the desire for the college to examine the societal implications of advanced computational capabilities,” says MIT Provost Martin Schmidt. “One element of that is developing frameworks which help governments and policymakers contemplate these issues. I am delighted to see us jump-start this effort with the leadership of our distinguished alumnus, Dr. Videgaray.”
Democracy, diversity, and de-escalation
As Mexico’s finance minister from 2012 to 2016, Videgaray led Mexico’s energy liberalization process, a telecommunications reform to foster competition in the sector, a tax reform that reduced the country’s dependence on oil revenues, and the drafting of the country’s laws on financial technology. In 2012, he was campaign manager for President Peña Nieto and head of the presidential transition team.
As foreign minister from 2017 to 2018, Videgaray led Mexico’s relationship with the Trump White House, including the renegotiation of the North American Free Trade Agreement (NAFTA). He is one of the founders of the Lima Group, created to promote regional diplomatic efforts toward restoring democracy in Venezuela. He also directed Mexico’s leading role in the UN toward an inclusive debate on artificial intelligence and other new technologies. During that time, Videgaray says, AI went from being a “science-fiction” concept in the first year to a major global political issue the following year.
In the past few years, academic institutions, governments, and other organizations have launched initiatives that address those issues, and more than 20 countries have strategies in place that guide AI development. But they miss a very important point, Videgaray says: AI’s interaction with geopolitics.
MIT AIPW will have three guiding principles to help shape policy around geopolitics: democratic values, diversity and inclusion, and de-escalation.
One of the most challenging and important issues MIT AIPW faces is whether AI “can be a threat to democracy,” Videgaray says. In that way, the project will explore policies that help advance AI technologies, while upholding the values of liberal democracy.
“We see some countries starting to adopt AI technologies not for the improvement of quality of life, but for social control,” he says. “This technology can be extremely powerful, but we are already seeing how it can also be used to … influence people and have an effect on democracy. In countries where institutions are not as strong, there can be an erosion of democracy.”
A policy challenge in that regard is how to deal with private data restrictions in different countries. If some countries don’t put any meaningful restrictions on data usage, it could potentially give them a competitive edge. “If people start thinking about geopolitical competition as more important than privacy, biases, or algorithmic transparency, and the concern is to win at all costs, then the societal impact of AI around the world could be quite worrisome,” Videgaray says.
In the same vein, MIT AIPW will focus on de-escalation of potential conflict, by promoting an analytical, practical, and realistic collaborative approach to developing and using AI technologies. While media has dubbed the rise of AI worldwide as a type of “arms race,” Videgaray says that type of thinking is potentially hazardous to society. “That reflects a sentiment that we’re moving again into an adversarial world, and technology will be a huge part of it,” he says. “That will have negative effects on how technology is developed and used.”
For inclusion and diversity, the project will make AI’s ethical impact “a truly global discussion,” Videgaray says. That means promoting awareness and participation from countries around the world, including those that may be less developed and more vulnerable. Another challenge is deciding not only what policies should be implemented, but also where those policies might be best implemented. That could mean at the state level or national level in the United States, in different European countries, or with the UN.
“We want to approach this in a truly inclusive way, which is not just about countries leading development of technology,” Videgaray says. “Every country will benefit and be negatively affected by AI, but many countries are not part of the discussion.”
While MIT AIPW won’t be drafting international agreements, Videgaray says another aim of the project is to explore different options and elements of potential international agreements. He also hopes to reach out to decision makers in governments and businesses around the world to gather feedback on the project’s research.
Part of Videgaray’s role includes building connections across MIT departments, labs, and centers to pull in researchers to focus on the issue. “For this to be successful, we need to integrate the thinking of people from different backgrounds and expertise,” he says.
At MIT Sloan, Videgaray will teach classes alongside Simon Johnson, the Ronald A. Kurtz Professor of Entrepreneurship and a professor of global economics and management. His lectures will focus primarily on the issues explored by the MIT AIPW project.
Next spring, MIT AIPW plans to host a conference at MIT to convene researchers from the Institute and around the world to discuss the project’s initial findings and other topics in AI.
As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.
While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.
The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.
The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
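The bending described above is captured by the classic bimetallic-strip analysis due to Timoshenko. The sketch below uses hypothetical material values (the actual fiber polymers, thicknesses, and moduli differ), but the formula itself is the standard one for two bonded layers.

```python
def bimetal_curvature(alpha1, alpha2, dT, t1, t2, E1, E2):
    """Curvature (1/m) of two bonded layers heated by dT, from
    Timoshenko's bimetallic-strip analysis. Layer 1 is the slower-
    expanding material; positive curvature bends toward layer 1,
    the side expanding more slowly."""
    h = t1 + t2          # total thickness
    m = t1 / t2          # thickness ratio
    n = E1 / E2          # stiffness (Young's modulus) ratio
    num = 6 * (alpha2 - alpha1) * dT * (1 + m) ** 2
    den = h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    return num / den

# Hypothetical values for a stiff polymer bonded to an elastomer
# (not the actual fiber materials): even a 1 C rise bends the pair.
kappa = bimetal_curvature(alpha1=5e-5, alpha2=3e-4, dT=1.0,
                          t1=50e-6, t2=50e-6, E1=1.6e9, E2=4e6)
```

With equal layer thicknesses and moduli, the expression reduces to 3(α₂ − α₁)ΔT / (2h), which makes the intuition explicit: a larger expansion mismatch, or a thinner pair of layers, curls more sharply per degree of heating.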
Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next came as a surprise. “There was a lot of serendipity in this,” Anikeeva recalls.
As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.
One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.
The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.
The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.
The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.
For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating it internally without having to rely on any outside heat source to activate the contraction of the “muscle.”
Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their light weight and fast response times could provide a significant advantage.
Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, “such as a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.
To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.
Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.
“The strength of this work is coming from its simplicity,” he says.
The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintlio. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.
A promising new way to treat some types of cancer is to program the patient’s own T cells to destroy the cancerous cells. This approach, termed CAR-T cell therapy, is now used to combat some types of leukemia, but so far it has not worked well against solid tumors such as lung or breast tumors.
MIT researchers have now devised a way to super-charge this therapy so that it could be used as a weapon against nearly any type of cancer. The research team developed a vaccine that dramatically boosts the antitumor T cell population and allows the cells to vigorously invade solid tumors.
In a study of mice, the researchers found that they could completely eliminate solid tumors in 60 percent of the animals that were given T-cell therapy along with the booster vaccination. Engineered T cells on their own had almost no effect.
“By adding the vaccine, a CAR-T cell treatment which had no impact on survival can be amplified to give a complete response in more than half of the animals,” says Darrell Irvine, who is the Underwood-Prescott Professor with appointments in Biological Engineering and Materials Science and Engineering, an associate director of MIT’s Koch Institute for Integrative Cancer Research, a member of the Ragon Institute of MGH, MIT, and Harvard, and the senior author of the study.
Leyuan Ma, an MIT postdoc, is the lead author of the study, which appears in the July 11 online edition of Science.
So far, the FDA has approved two types of CAR-T cell therapy, both used to treat leukemia. In those cases, T cells removed from the patient’s blood are programmed to target a protein, or antigen, found on the surface of B cells. (The “CAR” in CAR-T cell therapy is for “chimeric antigen receptor.”)
Scientists believe one reason this approach hasn’t worked well for solid tumors is that tumors usually generate an immunosuppressive environment that disarms the T cells before they can reach their target. The MIT team decided to try to overcome this by giving a vaccine that would go to the lymph nodes, which host huge populations of immune cells, and stimulate the CAR-T cells there.
“Our hypothesis was that if you boosted those T cells through their CAR receptor in the lymph node, they would receive the right set of priming cues to make them more functional so they’d be resistant to shutdown and would still function when they got into the tumor,” Irvine says.
To create such a vaccine, the MIT team used a trick they had discovered several years ago. They found that they could deliver vaccines more effectively to the lymph nodes by linking them to a fatty molecule called a lipid tail. This lipid tail binds to albumin, a protein found in the bloodstream, allowing the vaccine to hitch a ride directly to the lymph nodes.
In addition to the lipid tail, the vaccine contains an antigen that stimulates the CAR-T cells once they reach the lymph nodes. This antigen could be either the same tumor antigen targeted by the T cells, or an arbitrary molecule chosen by the researchers. For the latter case, the CAR-T cells have to be re-engineered so that they can be activated by both the tumor antigen and the arbitrary antigen.
In tests in mice, the researchers showed that either of these vaccines dramatically enhanced the T-cell response. When mice were given about 50,000 CAR-T cells but no vaccine, the CAR-T cells were nearly undetectable in the animals’ bloodstream. In contrast, when the booster vaccine was given the day after the T-cell infusion, and again a week later, CAR-T cells expanded until they made up 65 percent of the animals’ total T cell population, two weeks after treatment.
This huge boost in the CAR-T cell population translated to complete elimination of glioblastoma, breast, and melanoma tumors in many of the mice. CAR-T cells given without the vaccine had no effect on tumors, while CAR-T cells given with the vaccine eliminated tumors in 60 percent of the mice.
This technique also holds promise for preventing tumor recurrence, Irvine says. About 75 days after the initial treatment, the researchers injected tumor cells identical to those that formed the original tumor, and these cells were cleared by the immune system. About 50 days after that, the researchers injected slightly different tumor cells, which did not express the antigen that the original CAR-T cells targeted; the mice could also eliminate those tumor cells.
This suggests that once the CAR-T cells begin destroying tumors, the immune system is able to detect additional tumor antigens and generate populations of “memory” T cells that also target those proteins.
“If we take the animals that appear to be cured and we rechallenge them with tumor cells, they will reject all of them,” Irvine says. “That is another exciting aspect of this strategy. You need to have T cells attacking many different antigens to succeed, because if you have a CAR-T cell that sees only one antigen, then the tumor only has to mutate that one antigen to escape immune attack. If the therapy induces new T-cell priming, this kind of escape mechanism becomes much more difficult.”
While most of the study was done in mice, the researchers showed that human cells coated with CAR antigens also stimulated human CAR-T cells, suggesting that the same approach could work in human patients. The technology has been licensed to a company called Elicio Therapeutics, which is seeking to test it with CAR-T cell therapies that are already in development.
“There’s really no barrier to doing this in patients pretty soon, because if we take a CAR-T cell and make an arbitrary peptide ligand for it, then we don’t have to change the CAR-T cells,” Irvine says. “I’m hopeful that one way or another this can get tested in patients in the next one to two years.”
The research was funded by the National Institutes of Health, the Marble Center for Cancer Nanomedicine, Johnson and Johnson, and the National Institute of General Medical Sciences.
CRISPR-based tools have revolutionized our ability to target disease-linked genetic mutations. CRISPR technology comprises a growing family of tools that can manipulate genes and their expression, including by targeting DNA with the enzymes Cas9 and Cas12 and targeting RNA with the enzyme Cas13. This collection offers different strategies for tackling mutations. Targeting disease-linked mutations in RNA, which is relatively short-lived, would avoid making permanent changes to the genome. In addition, some cell types, such as neurons, are difficult to edit using CRISPR/Cas9-mediated editing, and new strategies are needed to treat devastating diseases that affect the brain.
McGovern Institute Investigator and Broad Institute of MIT and Harvard core member Feng Zhang and his team have now developed one such strategy, called RESCUE (RNA Editing for Specific C to U Exchange), described in the journal Science.
Zhang and his team, including first co-authors Omar Abudayyeh and Jonathan Gootenberg (both now McGovern fellows), made use of a deactivated Cas13 to guide RESCUE to targeted cytosine bases on RNA transcripts, and used a novel, evolved, programmable enzyme to convert unwanted cytosine into uridine — thereby directing a change in the RNA instructions. RESCUE builds on REPAIR, a technology developed by Zhang’s team that changes adenine bases into inosine in RNA.
RESCUE significantly expands the landscape that CRISPR tools can target to include modifiable positions in proteins, such as phosphorylation sites. Such sites act as on/off switches for protein activity and are notably found in signaling molecules and cancer-linked pathways.
“To treat the diversity of genetic changes that cause disease, we need an array of precise technologies to choose from. By developing this new enzyme and combining it with the programmability and precision of CRISPR, we were able to fill a critical gap in the toolbox,” says Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT. Zhang has appointments in MIT’s departments of Brain and Cognitive Sciences and Biological Engineering.
Expanding the reach of RNA editing to new targets
The previously developed REPAIR platform used the RNA-targeting CRISPR/Cas13 to direct the active domain of an RNA editor, ADAR2, to specific RNA transcripts where it could convert the nucleotide base adenine to inosine, or letters A to I. Zhang and colleagues took the REPAIR fusion and evolved it in the lab until it could change cytosine to uridine, or C to U.
RESCUE can be guided to any RNA of choice, then perform a C-to-U edit through the evolved ADAR2 component of the platform. The team took the new platform into human cells, showing that they could target natural RNAs in the cell, as well as 24 clinically relevant mutations in synthetic RNAs. They then further optimized RESCUE to reduce off-target editing, while minimally disrupting on-target editing.
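At the sequence level, the net outcome of a RESCUE edit is a single targeted C-to-U change in a transcript. The sketch below is purely conceptual: the transcript and position are hypothetical, and real editing depends on guide design and the evolved ADAR2 enzyme, not string replacement.

```python
# Conceptual sketch only: the sequence-level outcome of a RESCUE edit
# is one targeted C -> U change in an RNA transcript. The transcript
# and position used here are hypothetical.
def c_to_u_edit(transcript, target_pos):
    """Return the transcript with the cytosine at target_pos changed to uridine."""
    if transcript[target_pos] != "C":
        raise ValueError("no cytosine at the targeted position")
    return transcript[:target_pos] + "U" + transcript[target_pos + 1:]

edited = c_to_u_edit("AUGCCAGUA", 3)  # -> "AUGUCAGUA"
```

The guard clause reflects the biology loosely: RESCUE acts on cytosines, so a guide pointed at a non-C position has no valid target.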
New targets in sight
Expanded targeting by RESCUE means that sites regulating activity and function of many proteins through post-translational modifications, such as phosphorylation, glycosylation, and methylation, can now be more readily targeted for editing.
A major advantage of RNA editing is its reversibility, in contrast to changes made at the DNA level, which are permanent. Thus, RESCUE could be deployed transiently in situations where a modification may be desirable temporarily, but not permanently. To demonstrate this, the team showed that in human cells, RESCUE can target specific sites in the RNA encoding β-catenin that are known to be phosphorylated on the protein product, leading to a temporary increase in β-catenin activation and cell growth. If such a change were made permanent, it could predispose cells to uncontrolled cell growth and cancer, but by using RESCUE, transient cell growth could potentially stimulate wound healing in response to acute injuries.
The researchers also targeted a pathogenic gene variant, APOE4. The APOE4 allele has consistently emerged as a genetic risk factor for the development of late-onset Alzheimer’s disease. Isoform APOE4 differs from APOE2, which is not a risk factor, by just two differences (both C in APOE4 versus U in APOE2). Zhang and colleagues introduced the risk-associated APOE4 RNA into cells and showed that RESCUE can convert its signature Cs to an APOE2 sequence, essentially converting a risk to a non-risk variant.
To facilitate additional work that will push RESCUE toward the clinic, as well as enable researchers to use RESCUE as a tool to better understand disease-causing mutations, the Zhang lab plans to share the RESCUE system broadly, as they have with previously developed CRISPR tools. The technology will be freely available for academic research through the nonprofit plasmid repository Addgene. Additional information can be found on the Zhang lab’s webpage.
Support for the study was provided by The Phillips Family; J. and P. Poitras; the Poitras Center for Psychiatric Disorders Research; the Hock E. Tan and K. Lisa Yang Center for Autism Research; Robert Metcalfe; David Cheng; and a National Institutes of Health grant to Omar Abudayyeh. Feng Zhang is a New York Stem Cell Foundation–Robertson Investigator. Feng Zhang is supported by the National Institutes of Health; the Howard Hughes Medical Institute; the New York Stem Cell Foundation; and the G. Harold and Leila Mathers Foundation.
Researchers at MIT have come up with a new pulsed laser deposition technique to make thinner lithium electrolytes using less heat, promising faster charging and potentially higher-voltage solid-state lithium ion batteries.
Key to the new technique for processing the solid-state battery electrolyte is alternating layers of the active electrolyte, a lithium garnet (chemical formula Li6.25Al0.25La3Zr2O12, or LLZO), with layers of lithium nitride (chemical formula Li3N). First, these layers are built up like a wafer cookie using a pulsed laser deposition process at about 300 degrees Celsius (572 degrees Fahrenheit). Then they are heated to 660 C (1,220 F) and slowly cooled, a process known as annealing.
During the annealing process, nearly all of the nitrogen atoms burn off into the atmosphere and the lithium atoms from the original nitride layers fuse into the lithium garnet, forming a single lithium-rich, ceramic thin film. The extra lithium content in the garnet film allows the material to retain the cubic structure needed for positively charged lithium ions (cations) to move quickly through the electrolyte. The findings were reported in a Nature Energy paper published online recently by MIT Associate Professor Jennifer L. M. Rupp and her students Reto Pfenninger, Michal M. Struzik, Inigo Garbayo, and collaborator Evelyn Stilp.
“The really cool new thing is that we found a way to bring the lithium into the film at deposition by using lithium nitride as an internal lithiation source,” Rupp, the work's senior author, says. Rupp holds joint MIT appointments in the departments of Materials Science and Engineering and Electrical Engineering and Computer Science.
“The second trick to the story is that we use lithium nitride, which is close in bandgap to the laser that we use in the deposition, whereby we have a very fast transfer of the material, which is another key factor to not lose lithium to evaporation during a pulsed laser deposition,” Rupp explains.
Consumer lithium batteries commonly use electrolytes made by combining a liquid and a polymer, which can pose a fire risk when the liquid is exposed to air. Solid-state batteries are desirable because they replace that liquid polymer electrolyte with a safer solid material. “So we can kick that out, bring something safer in the battery, and decrease the electrolyte component in size by a factor of 100 by going from the polymer to the ceramic system,” Rupp explains.
Other methods produce lithium-rich ceramic materials as larger pellets or tapes, densified by a high-temperature process called sintering. Although these can yield a dense microstructure that retains a high lithium concentration, they require more heat and result in bulkier material. The new technique pioneered by Rupp and her students produces a thin film that is about 330 nanometers thick (less than 1.5 hundred-thousandths of an inch). “Having a thin film structure instead of a thick ceramic is attractive for battery electrolyte in general because it allows you to have more volume in the electrodes, where you want to have the active storage capacity. So the holy grail is be thin and be fast,” she says.
Compared to the classic ceramic coffee mug, which under high magnification shows metal oxide particles with a grain size of tens to hundreds of microns, the lithium (garnet) oxide thin films processed using Rupp’s methods show nanometer scale grain structures that are one-thousandth to one-ten-thousandth the size. That means Rupp can engineer thinner electrolytes for batteries. “There is no need in a solid-state battery to have a large electrolyte,” she says.
Faster ionic conduction
Instead, what is needed is an electrolyte with faster conductivity. Lithium ion conductivity is measured in siemens per centimeter. The new multilayer deposition technique produces a lithium garnet (LLZO) material that shows the fastest ionic conductivity yet for a lithium-based electrolyte compound, about 2.9 x 10⁻⁵ siemens per centimeter (0.000029 S/cm). This ionic conductivity is competitive with solid-state lithium battery thin film electrolytes based on LIPON (lithium phosphorus oxynitride) and adds a new film electrolyte material to the landscape.
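To put that figure in context, the resistance a lithium ion sees crossing the electrolyte scales as thickness divided by conductivity, so a thinner film means a lower area-specific resistance. A back-of-the-envelope sketch, using the 330-nanometer film thickness and conductivity reported here (the half-millimeter pellet thickness is an illustrative assumption, not a figure from the paper):

```python
# Area-specific resistance (ASR) of a planar electrolyte layer:
#   ASR = L / sigma, with L in cm and sigma in S/cm, giving ohm*cm^2.

SIGMA_LLZO = 2.9e-5  # S/cm, ionic conductivity reported for the thin-film garnet

def area_specific_resistance(thickness_cm, sigma_s_per_cm):
    """Return ASR in ohm*cm^2 for a planar electrolyte layer."""
    return thickness_cm / sigma_s_per_cm

film_nm = 330     # thin-film thickness from the article
pellet_mm = 0.5   # illustrative sintered-pellet thickness (assumption)

asr_film = area_specific_resistance(film_nm * 1e-7, SIGMA_LLZO)     # nm -> cm
asr_pellet = area_specific_resistance(pellet_mm * 1e-1, SIGMA_LLZO)  # mm -> cm

print(f"film ASR:   {asr_film:.2f} ohm*cm^2")    # ~1.14 ohm*cm^2
print(f"pellet ASR: {asr_pellet:.0f} ohm*cm^2")  # ~1724 ohm*cm^2
print(f"ratio:      {asr_pellet / asr_film:.0f}x")
```

Under these assumptions the film's resistance per unit area is roughly three orders of magnitude lower than a bulk pellet of the same material, which is the practical payoff of "be thin and be fast."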
“Having the lithium electrolyte as a solid-state very fast conductor allows you to dream out loud of anything else you can do with fast lithium motion,” Rupp says.
The work points the way toward higher-voltage batteries based on lithium garnet electrolytes, both because the lower processing temperature opens the door to cathode materials that would be unstable at higher processing temperatures, and because the thinner electrolyte allows physically larger cathode volume, where the battery's storage capacity resides, in the same battery size.
Co-authors Michal Struzik and Reto Pfenninger carried out processing and Raman spectroscopy measurements on the lithium garnet material. These measurements were key to showing the material’s fast conduction at room temperature, as well as understanding the evolution of its different structural phases.
“One of the main challenges in understanding the development of the crystal structure in LLZO was to develop appropriate methodology. We have proposed a series of experiments to observe development of the crystal structure in the [LLZO] thin film from disordered or 'amorphous' phase to fully crystalline, highly conductive phase utilizing Raman spectroscopy upon thermal annealing under controlled atmospheric conditions,” says co-author Struzik, who was a postdoc working at ETH Zurich and MIT with Rupp’s group, and is now a professor at Warsaw University of Technology in Poland. “That allowed us to observe and understand how the crystal phases are developed and, as a consequence, the ionic conductivity improved,” he explains.
Their work shows that during the annealing process, lithium garnet evolves from the amorphous phase in the initial multilayer processed at 300 C through progressively higher temperatures to a low conducting tetragonal phase in a temperature range from about 585 C to 630 C, and to the desired highly conducting cubic phase after annealing at 660 C. Notably, this temperature of 660 C to achieve the highly conducting phase in the multilayer approach is nearly 400 C lower than the 1,050 C needed to achieve it with prior sintering methods using pellets or tapes.
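The phase progression reported above can be summarized as a simple temperature-to-phase lookup. This is a sketch of the quoted ranges only; real phase formation also depends on annealing time and atmosphere, and behavior between the quoted ranges is an interpolation, not a measured result:

```python
def llzo_phase(anneal_temp_c):
    """Map an annealing temperature (deg C) to the LLZO phase reported for
    the multilayer films: amorphous -> tetragonal -> cubic.
    Boundaries follow the ranges quoted in the article."""
    if anneal_temp_c < 585:
        return "amorphous (disordered, as deposited at ~300 C)"
    elif anneal_temp_c < 660:
        return "tetragonal (low ionic conductivity)"
    else:
        return "cubic (highly conducting target phase)"

for t in (300, 600, 660, 1050):
    print(f"{t} C -> {llzo_phase(t)}")
```

The last case shows why the result matters: the cubic phase that prior sintering methods reached only at 1,050 C appears here at 660 C.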
“One of the greatest challenges facing the realization of solid-state batteries lies in the ability to fabricate such devices. It is tough to bring the manufacturing costs down to meet commercial targets that are competitive with today's liquid-electrolyte-based lithium-ion batteries, and one of the main reasons is the need to use high temperatures to process the ceramic solid electrolytes,” says Professor Peter Bruce, the Wolfson Chair of the Department of Materials at Oxford University, who was not involved in this research.
“This important paper reports a novel and imaginative approach to addressing this problem by reducing the processing temperature of garnet-based solid-state batteries by more than half — that is, by hundreds of degrees,” Bruce adds. “Normally, high temperatures are required to achieve sufficient solid-state diffusion to intermix the constituent atoms of ceramic electrolyte. By interleaving lithium layers in an elegant nanostructure the authors have overcome this barrier.”
After demonstrating the novel processing and high conductivity of the lithium garnet electrode, the next step will be to test the material in an actual battery to explore how the material reacts with a battery cathode and how stable it is. “There is still a lot to come,” Rupp predicts.
Understanding aluminum dopant sites
A small fraction of aluminum is added to the lithium garnet formulation because aluminum is known to stabilize the highly conductive cubic phase in this high-temperature ceramic. The researchers complemented their Raman spectroscopy analysis with another technique, known as negative-ion time-of-flight secondary ion mass spectrometry (TOF-SIMS), which shows that the aluminum retains its position at what were originally the interfaces between the lithium nitride and lithium garnet layers before the heating step expelled the nitrogen and fused the material.
“When you look at large-scale processing of pellets by sintering, then everywhere where you have a grain boundary, you will find close to it a higher concentration of aluminum. So we see a replica of that in our new processing, but on a smaller scale at the original interfaces,” Rupp says. “These little things are what adds up, also, not only to my excitement in engineering but my excitement as a scientist to understand phase formations, where that goes and what that does,” Rupp says.
“Negative TOF-SIMS was indeed challenging to measure since it is more common in the field to perform this experiment with focus on positively charged ions,” explains Pfenninger, who worked at ETH Zurich and MIT with Rupp’s group. “However, for the case of the negatively charged nitrogen atoms we could only track it in this peculiar setup. The phase transformations in thin films of LLZO have so far not been investigated in temperature-dependent Raman spectroscopy — another insight towards the understanding thereof.”
The paper’s other authors are Inigo Garbayo, who is now at CIC EnergiGUNE in Minano, Spain, and Evelyn Stilp, who was then with Empa, Swiss Federal Laboratories for Materials Science and Technology, in Dubendorf, Switzerland.
Rupp began this research while serving as a professor of electrochemical materials at ETH Zurich (the Swiss Federal Institute of Technology) before she joined the MIT faculty in February 2017. MIT and ETH have jointly filed for two patents on the multi-layer lithium garnet/lithium nitride processing. This new processing method, which allows precise control of lithium concentration in the material, can also be applied to other lithium oxide films such as lithium titanate and lithium cobaltate that are used in battery electrodes. “That is something we invented. That’s new in ceramic processing,” Rupp says.
“It is a smart idea to use Li3N as a lithium source during preparation of the garnet layers, as lithium loss is a critical issue during thin film preparation otherwise,” comments University Professor Jürgen Janek at Justus Liebig University Giessen in Germany. Janek, who was not involved in this research, adds that “the quality of the data and the analysis is convincing.”
“This work is an exciting first step in preparing one of the best oxide-based solid electrolytes in an intermediate temperature range,” Janek says. “It will be interesting to see whether the intermediate temperature of about 600 degrees C is sufficient to avoid side reactions with the electrode materials.”
Oxford Professor Bruce notes the novelty of the approach, adding “I'm not aware of similar nanostructured approaches to reduce diffusion lengths in solid-state synthesis.”
“Although the paper describes specific application of the approach to the formation of lithium-rich and therefore highly conducting garnet solid electrolytes, the methodology has more general applicability, and therefore significant potential beyond the specific examples provided in the paper,” Bruce says. Commercialization may be needed to demonstrate this approach at larger scale, he suggests.
While the immediate impact of this work is likely to be on batteries, Rupp predicts another decade of exciting advances based on applications of her processing techniques to devices for neuromorphic computing, artificial intelligence, and fast gas sensors. “The moment the lithium is in a small solid-state film, you can use the fast motion to trigger other electrochemistry,” she says.
Several companies have already expressed interest in using the new electrolyte approach. “It’s good for me to work with strong players in the field so they can push out the technology faster than anything I can do,” Rupp says.
This work was funded by the MIT Lincoln Laboratory, the Thomas Lord Foundation, Competence Center Energy and Mobility, and Swiss Electrics.