MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Quantum dots can spit out clone-like photons

8 hours 7 min ago

In the global quest to develop practical computing and communications devices based on the principles of quantum physics, one potentially useful component has proved elusive: a source of individual particles of light with perfectly constant, predictable, and steady characteristics. Now, researchers at MIT and in Switzerland say they have made major steps toward such a single photon source.

The study, which involves using a family of materials known as perovskites to make light-emitting particles called quantum dots, appears today in the journal Science. The paper is by Hendrik Utzat, an MIT graduate student in chemistry; Moungi Bawendi, professor of chemistry; and nine others at MIT and at ETH Zurich in Switzerland.

The ability to produce individual photons with precisely known and persistent properties, including a wavelength, or color, that does not fluctuate at all, could be useful for many kinds of proposed quantum devices. Because each photon would be indistinguishable from the others in terms of its quantum-mechanical properties, it could be possible, for example, to delay one of them and then get the pair to interact with each other, in a phenomenon called interference.

“This quantum interference between different indistinguishable single photons is the basis of many optical quantum information technologies using single photons as information carriers,” Utzat explains. “But it only works if the photons are coherent, meaning they preserve their quantum states for a sufficiently long time.”

Many researchers have tried to produce sources that could emit such coherent single photons, but all have had limitations. Random fluctuations in the materials surrounding these emitters tend to change the properties of the photons in unpredictable ways, destroying their coherence. Finding emitter materials that maintain coherence and are also bright and stable is “fundamentally challenging,” Utzat says. That’s because not only the surroundings but even the materials themselves “essentially provide a fluctuating bath that randomly interacts with the electronically excited quantum state and washes out the coherence,” he says.

“Without having a source of coherent single photons, you can’t use any of these quantum effects that are the foundation of optical quantum information manipulation,” says Bawendi, who is the Lester Wolfe Professor of Chemistry. Another important quantum effect that can be harnessed by having coherent photons, he says, is entanglement, in which two photons essentially behave as if they were one, sharing all their properties.

Previous chemically made colloidal quantum dot materials had impractically short coherence times, but this team found that making the quantum dots from perovskites, a family of materials defined by their crystal structure, produced coherence levels that were more than a thousand times better than previous versions. The coherence properties of these colloidal perovskite quantum dots are now approaching the levels of established emitters, such as atom-like defects in diamond or quantum dots grown by physicists using gas-phase beam epitaxy.

One of the big advantages of perovskites, they found, was that they emit photons very quickly after being stimulated by a laser beam. This high speed could be a crucial characteristic for potential quantum computing applications. They also have very little interaction with their surroundings, greatly improving their coherence properties and stability.

Such coherent photons could also be used for quantum-encrypted communications applications, Bawendi says. A particular kind of entanglement, called polarization entanglement, can be the basis for secure quantum communications that defy attempts at interception.

Now that the team has found these promising properties, the next step is to work on optimizing and improving their performance in order to make them scalable and practical. For one thing, they need to achieve 100 percent indistinguishability in the photons produced. So far, they have reached 20 percent, “which is already very remarkable,” Utzat says, a level comparable to the coherences reached by other established systems, such as atom-like fluorescent defects in diamond, which have been worked on much longer.

“Perovskite quantum dots still have a long way to go until they become applicable in real applications,” he says, “but this is a new materials system available for quantum photonics that can now be optimized and potentially integrated with devices.”

It’s a new phenomenon and will require much work to develop to a practical level, the researchers say. “Our study is very fundamental,” Bawendi notes. “However, it’s a big step toward developing a new material platform that is promising.”

The work was supported by the U.S. Department of Energy, the National Science Foundation, and the Swiss Federal Commission for Technology and Innovation.

Achieving greater efficiency for fast data center operations

9 hours 32 min ago

Today’s data centers eat up and waste a good amount of energy responding to user requests as fast as possible, with only a few microseconds of delay. A new system by MIT researchers improves the efficiency of high-speed operations by better assigning time-sensitive data processing across central processing unit (CPU) cores and ensuring that hardware runs productively.

Data centers operate as distributed networks, with numerous web and mobile applications implemented on a single server. When users send requests to an app, bits of stored data are pulled from hundreds or thousands of services across as many servers. Before sending a response, the app must wait for the slowest service to process the data. This lag time is known as tail latency.

Current methods to reduce tail latencies leave tons of CPU cores in a server open to quickly handle incoming requests. But this means that cores sit idly for much of the time, while servers continue using energy just to stay powered on. Data centers can contain hundreds of thousands of servers, so even small improvements in each server’s efficiency can save millions of dollars.

Alternatively, some systems reallocate cores across apps based on workload. But this occurs over milliseconds — around one-thousandth the desired speed for today’s fast-paced requests. Waiting too long can also degrade an app’s performance, because any information that’s not processed before an allotted time doesn’t get sent to the user.

In a paper being presented at the USENIX Networked Systems Design and Implementation conference next week, the researchers describe a faster core-allocating system, called Shenango, that reduces tail latencies while achieving high efficiency. First, a novel algorithm detects which apps are struggling to process data. Then, a software component allocates idle cores to handle each app’s workload.

“In data centers, there’s a tradeoff between efficiency and latency, and you really need to reallocate cores at much finer granularity than every millisecond,” says first author Amy Ousterhout, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). Shenango lets servers “manage operations that occur at really short time scales and do so efficiently.”

Energy and cost savings will vary by data center, depending on size and workloads. But the overall aim is to improve data center CPU utilization, so that every core is put to good use. The best CPU utilization rates today sit at about 60 percent, but the researchers say their system could potentially boost that figure to 100 percent.

“Data center utilization today is quite low,” says co-author Adam Belay, an assistant professor of electrical engineering and computer science and a CSAIL researcher. “This is a very serious problem [that can’t] be solved in a single place in the data center. But this system is one critical piece in driving utilization up higher.”

Joining Ousterhout and Belay on the paper are Hari Balakrishnan, the Fujitsu Chair Professor in the Department of Electrical Engineering and Computer Science, and CSAIL PhD students Jonathan Behrens and Joshua Fried.

Efficient congestion detection

In a real-world data center, Shenango — its algorithm and software — would run on each server, and all the servers would be able to communicate with one another.

The system’s first innovation is a novel congestion-detection algorithm. Every five microseconds, the algorithm checks each app’s queue of data packets awaiting processing. If a packet is still waiting from the last observation, the algorithm notes there’s at least a 5-microsecond delay. It also checks whether any computation processes, called threads, are waiting to be executed. If so, the system considers that a “congested” app.

It seems simple enough. But the queue’s structure is important to achieving microsecond-scale congestion detection. Traditional thinking meant having the software check the timestamp of each queued-up data packet, which would take too much time.

The researchers implement the queues in efficient structures known as “ring buffers.” These structures can be visualized as different slots around a ring. The first inputted data packet goes into a starting slot. As new data arrive, they’re dropped into subsequent slots around the ring. Usually, these structures are used for first-in-first-out data processing, pulling data from the starting slot and working toward the ending slot.

The researchers’ system, however, only stores data packets briefly in the structures, until an app can process them. In the meantime, the stored packets can be used for congestion checks. The algorithm need only compare two points in the queue — the location of the first packet and where the last packet was five microseconds ago — to determine if packets are encountering a delay.

“You can look at these two points, and track their progress every five microseconds, to see how much data has been processed,” Fried says. Because the structures are simple, “you only have to do this once per core. If you’re looking at 24 cores, you do 24 checks in five microseconds, which scales nicely.”
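Shenango itself is systems software written for speed, but the check Fried describes can be sketched in a few lines. The following is an illustrative sketch only, written in Python with invented names (RingQueue, congested_apps); it is not the researchers’ code, and it omits the companion check on thread queues.

    # Illustrative sketch (not Shenango's actual code): detect congestion by
    # comparing two ring-buffer indices instead of per-packet timestamps.

    class RingQueue:
        def __init__(self, size=1024):
            self.slots = [None] * size
            self.size = size
            self.head = 0   # index of the oldest unprocessed packet
            self.tail = 0   # index where the next arriving packet goes

        def push(self, packet):
            self.slots[self.tail % self.size] = packet
            self.tail += 1

        def pop(self):
            packet = self.slots[self.head % self.size]
            self.head += 1
            return packet

    def congested_apps(queues, tails_at_last_check):
        """Run once per core every five microseconds: an app is congested if a
        packet that had already arrived at the last check is still unprocessed,
        i.e. the head index has not yet passed the old tail index."""
        flagged = []
        for app, q in queues.items():
            if q.head < tails_at_last_check.get(app, 0):
                flagged.append(app)
            tails_at_last_check[app] = q.tail   # remember where the queue ended
        return flagged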

Smart allocation

The second innovation is called the IOKernel, the central software hub that steers data packets to appropriate apps. The IOKernel also uses the congestion-detection algorithm to allocate cores to congested apps orders of magnitude more quickly than traditional approaches.

For instance, the IOKernel may see an incoming data packet for a certain app that requires microsecond processing speeds. If the app is congested due to a lack of cores, the IOKernel immediately devotes an idle core to the app. If it also sees another app running cores with less time-sensitive data, it will grab some of those cores and reallocate them to the congested app. The apps themselves also help out: If an app isn’t processing data, it alerts the IOKernel that its cores can be reallocated. Processed data goes back to the IOKernel to send the response.

“The IOKernel is concentrating on which apps need cores that don’t have them,” Behrens says. “It’s trying to figure out who’s overloaded and needs more cores, and gives them cores as quickly as possible, so they don’t fall behind and have huge latencies.”

The tight communication between the IOKernel, algorithm, apps, and server hardware is “unique in data centers” and allows Shenango to function seamlessly, Belay says: “The system has global visibility into what’s happening in each server. It sees the hardware providing the packets, what’s running where in each core, and how busy each of the apps are. And it does that at the microsecond scale.”
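One compressed way to picture the allocation pass the team describes is below: congested apps are granted idle cores first, with cores volunteered by less time-sensitive apps as a fallback. This is a rough sketch under assumed names, not the IOKernel’s real interface.

    # Rough sketch of the reallocation pass described above (assumed names,
    # not the actual IOKernel API).

    def reallocate_cores(congested, idle_cores, yieldable_cores):
        """Give each congested app one more core: idle cores first, then cores
        volunteered by apps running less time-sensitive work."""
        grants = {}
        for app in congested:
            if idle_cores:
                grants[app] = idle_cores.pop()
            elif yieldable_cores:
                grants[app] = yieldable_cores.pop()
            # Otherwise the app simply waits for the next pass, microseconds later.
        return grants

    # Example: two congested apps, one idle core, one core another app can spare.
    print(reallocate_cores(["search", "ads"], ["core7"], ["core3"]))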

Next, the researchers are refining Shenango for real-world data center implementation. To do so, they’re ensuring the software can handle a very high data throughput and has appropriate security features.

“Providing low-latency network services is critical to many internet applications. Unfortunately, reducing the latency is very challenging especially when multiple applications compete for shared compute resources,” says KyoungSoo Park, an associate professor of electrical engineering at the Korea Advanced Institute of Science and Technology. “Shenango breaks the conventional wisdom that it is impossible to sustain low latency at a very high request load with a variable response time, and it opens a new system design space that realizes microsecond-scale tail latency with practical network applications.”

Exploring the nature of intelligence

9 hours 38 min ago

Algorithms modeled loosely on the brain have helped artificial intelligence take a giant leap forward in recent years. Those algorithms, in turn, have advanced our understanding of human intelligence while fueling discoveries in a range of other fields. 

MIT founded the Quest for Intelligence to apply new breakthroughs in human intelligence to AI, and use advances in AI to push human intelligence research even further. This fall, nearly 50 undergraduates joined MIT’s human-machine intelligence quest under the Undergraduate Research Opportunities Program (UROP). Students worked on a mix of projects focused on the brain, computing, and connecting computing to disciplines across MIT.

Picking the right word with a click

Nicholas Bonaker, a sophomore, is working on a software program called Nomon to help people with nearly complete paralysis communicate by pressing a button. Nomon was created more than a decade ago by Tamara Broderick, as a master’s thesis, and soon found a following on the web. Now a computer science professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Broderick handed Nomon off to Bonaker last summer for an update at a user’s request.

The program allows the user to select from more than 80 words or characters on a screen; the user presses a button when a clock corresponding to the desired word or character reaches noon. The hands of each clock move slightly out of phase, helping Nomon to figure out which word or character to choose. The program automatically adapts to a user’s clicking style, giving those with less precise motor control more time to pick their word. 
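One way to picture the selection step: every option’s clock has its own phase offset, and a click is credited to whichever clock’s hand was closest to noon at that instant. The snippet below is a simplified illustration under assumed values (a fixed two-second rotation, a single click); the real Nomon accumulates evidence over multiple clicks and models each user’s click-timing noise.

    import math

    # Loose illustrative sketch, not Nomon's algorithm: score each clock by how
    # close its hand was to noon (phase zero) when the user clicked.

    PERIOD = 2.0  # seconds per full rotation (assumed value)

    def phase_at(click_time, offset):
        """Angle of a clock's hand at click_time, in radians, given its phase offset."""
        return 2 * math.pi * ((click_time / PERIOD + offset) % 1.0)

    def best_option(click_time, offsets):
        """Pick the option whose clock hand was nearest to noon at the click."""
        def distance_to_noon(offset):
            angle = phase_at(click_time, offset)
            return min(angle, 2 * math.pi - angle)
        return min(offsets, key=lambda opt: distance_to_noon(offsets[opt]))

    # Example: three options with staggered phase offsets.
    offsets = {"yes": 0.00, "no": 0.33, "help": 0.66}
    print(best_option(click_time=1.02, offsets=offsets))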

“Nick has made Nomon much easier for a user to install and run, including directly from Windows,” Broderick says. “He has dramatically improved the user interface, and refactored the code to make it easier to incorporate future improvements.”

Bonaker’s next step is to test Nomon on able-bodied and motor-impaired users to see how it compares to traditional row-column scanner software. “It’s been fun knowing this could have a big impact on someone’s life,” he says.

Predicting how materials respond to 3-D printing

3-D printers are now mainstream, but industrial molds are still better at turning out items like high-quality car parts, or replacement hips and knees. Senior Alexander Denmark chose a project in the lab of Elsa Olivetti, a professor in the Department of Materials Science and Engineering, to understand how 3-D printing methods can be made more consistent.

Working with graduate students in Olivetti’s lab, Denmark used machine-learning algorithms to explore how the printer’s laser speed, and the layering of different types of materials, influence the properties of the finished product. He helped build a framework for comparing 3-D printing parameters to the final product’s mechanical properties. 

“We hope to use it as a guide in printing experiments,” he says. “Say I want our final product to be really strong, or relatively lightweight, this approach could help tell us at what power to set the laser or how thick each layer of material should be.” 
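The mapping Denmark describes can be pictured as a regression from process parameters to a measured property. The example below uses scikit-learn with made-up numbers purely as an illustration of that idea; it is not the Olivetti lab’s framework, and the parameter values and property are invented.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Illustrative only: map printing parameters to a measured property
    # (here, tensile strength) using synthetic example data.
    # Columns: laser power (W), scan speed (mm/s), layer thickness (microns).
    X = np.array([
        [200,  800, 30],
        [250,  800, 30],
        [250, 1000, 50],
        [300, 1200, 50],
        [300, 1000, 30],
    ])
    y = np.array([950, 1010, 930, 900, 1050])  # tensile strength (MPa), made up

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Ask the model what strength it expects for a candidate setting.
    candidate = np.array([[275, 900, 30]])
    print(model.predict(candidate))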

Denmark says the project helped bring his coding skills to the next level. He also appreciated the mentoring he received from his graduate student colleagues. “They gave me a lot of advice on improving my approach,” he says.

A faster way to find new drugs

Developing new drugs is expensive because of the vast number of chemical combinations possible. Second-year student Alexandra Dima chose to work on a project in the lab of Rafael Gomez-Bombarelli, a professor of materials science and engineering. Gomez-Bombarelli is using machine-learning tools to narrow the search for promising drug candidates by predicting which molecules are most likely to bind with a target protein in the body.

So far, Dima has helped to build a database of hundreds of thousands of small molecules and proteins, detailing their chemical structures and binding properties. She has also worked on the deep learning framework aimed at predicting which molecule-protein pairs have the strongest binding affinity, and thus, represent the most promising drug candidates. Specifically, she helped to optimize the parameters of a message-passing neural network in the framework. 
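A message-passing neural network treats a molecule as a graph: each atom exchanges learned “messages” with its bonded neighbors, the atom states are updated, and a readout over the whole graph feeds the affinity prediction. Below is a bare-bones, untrained NumPy sketch of that structure with random weights, intended only to show the shape of the computation, not the group’s actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy molecule: 4 atoms, 3-dimensional atom features, bonds as an adjacency matrix.
    h = rng.normal(size=(4, 3))            # initial atom (node) features
    adjacency = np.array([[0, 1, 0, 0],
                          [1, 0, 1, 1],
                          [0, 1, 0, 0],
                          [0, 1, 0, 0]], dtype=float)

    W_msg = rng.normal(size=(3, 3))        # message weights
    W_upd = rng.normal(size=(6, 3))        # update weights (concatenated input)
    w_out = rng.normal(size=3)             # readout weights

    def message_passing_step(h, adjacency):
        messages = adjacency @ (h @ W_msg)              # sum messages from neighbors
        combined = np.concatenate([h, messages], axis=1)
        return np.maximum(combined @ W_upd, 0.0)        # ReLU update of atom states

    for _ in range(3):                                   # a few rounds of passing
        h = message_passing_step(h, adjacency)

    affinity_score = h.sum(axis=0) @ w_out               # graph-level readout
    print(affinity_score)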

Among the challenges she overcame, she says, was learning to extract massive amounts of data from the web and standardize it. She also enjoyed the deep dive into bioinformatics, and as a computer science and biology major, being able to work on a real-world application. “I feel so lucky that I got to start using my coding skills to build tools that have a real life-sciences application,” she says.

Improving face-recognition models

Neeraj Prasad, a sophomore, is using machine learning tools to test ideas about how the brain organizes visual information. His project in the lab of Pawan Sinha, a neuroscience professor in the Department of Brain and Cognitive Sciences (BCS), started with a puzzle: Why are children who are treated for cataracts unable to later recognize faces? The retina matures faster in newborns with cataracts, leading researchers to hypothesize that the newborns, by missing out on seeing faces through blurry eyes, failed to learn to identify faces by their overall configuration.

With researchers in Sinha’s lab, Prasad tested the idea on computer models based on convolutional neural networks, a form of deep learning that mimics the human visual system. When the researchers trained the neural nets on pictures of blurred, filtered, or discolored faces, the networks were able to generalize what they had learned to new faces, suggesting that the blurry vision we have as babies helps us learn to recognize faces. The results offer insight into how the visual system develops, and suggest a new method for improving face-recognition software.
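The training manipulation itself is simple to express: blur every image before the network sees it. A minimal sketch of that preprocessing step appears below, using SciPy’s Gaussian filter; the blur strength and function names are illustrative assumptions, not the settings used in Sinha’s lab.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_batch(images, sigma=4.0):
        """Simulate low-acuity 'newborn' vision by Gaussian-blurring each image
        before it is used to train a face-recognition network (illustrative only)."""
        return np.stack([gaussian_filter(img, sigma=sigma) for img in images])

    # Example: a batch of 8 grayscale 64x64 'faces' (random placeholders here).
    batch = np.random.rand(8, 64, 64)
    blurred = blur_batch(batch)
    print(blurred.shape)  # (8, 64, 64) -- same shape, lower spatial detail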

Prasad says he learned new computational techniques and how to use the symbolic math library, TensorFlow. Patience was also required. “It took a lot of time to train the neural nets — the networks are so large that we often had to wait several days, even on a supercomputer, for results,” he says. 

Tracking language comprehension in real time

Language underlies much of what we think of as intelligence: It lets us represent concepts and ideas, think and reason about the world, and communicate and coordinate with others. To understand how the brain pulls it all off, psychologists have developed methods for tracking how quickly people grasp what they read and hear, in so-called sentence-processing experiments. Longer reading times can indicate that a word, in a given context, is harder to comprehend, thus helping researchers fill out a general model of how language comprehension works.
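A common way to quantify that difficulty is surprisal: the less predictable a word is in its context under a language model, the longer readers tend to spend on it. The toy example below estimates per-word surprisal with a tiny bigram model; it is only an illustration of the general linking idea, not the models used in this research.

    import math
    from collections import Counter

    # Toy bigram language model: surprisal of a word = -log2 P(word | previous word).
    corpus = "the horse raced past the barn fell . the horse walked past the barn .".split()

    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def surprisal(prev, word):
        # Add-one smoothing so unseen continuations get a finite, high surprisal.
        vocab = len(unigrams)
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        return -math.log2(p)

    sentence = "the horse raced past the barn fell".split()
    for prev, word in zip(sentence, sentence[1:]):
        print(f"{word:>6}: {surprisal(prev, word):.2f} bits")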

Veronica Boyce, a senior majoring in brain and cognitive sciences, has been working in the lab of BCS computational psycholinguistics professor Roger Levy to adapt a sentence-processing experimental method for the web, where more participants can be recruited. The method is powerful but requires labor-intensive hand-crafting of experimental materials. This fall, she showed that deep-learning language models could automatically generate experimental materials and, remarkably, produce higher-quality experiments than manually crafted materials.

Boyce presents her results next month at the CUNY Conference on Sentence Processing, and will try to improve on her method by building in grammatical structures as part of a related project under the MIT-IBM Watson AI Lab. Current deep-learning language models have no explicit representation of grammar; the patterns they learn in text and speech are based on statistical calculations rather than a set of symbolic rules governing nouns, verbs and other parts of speech. 

“Our work is showing that these hybrid symbolic-deep learning models often do better than traditional models in capturing grammar in language,” says Levy. “This is exciting for Veronica’s project, and future sentence-processing work. It has the potential to advance research in both human and machine intelligence.”

A conversational calorie counter

A computer science major and a triple jumper on the MIT Track and Field team, third-year student Elizabeth Weeks had the chance this fall to combine her interests in technology and healthy eating by working on a voice-controlled nutrition app in the lab of James Glass, a senior research scientist at CSAIL.

Coco Nutritionist lets users log their meals by talking into their phone rather than manually typing in the information. A collaboration between computer scientists at MIT and nutritionists at Tufts University, the app is meant to make it easier for people to track what they eat, and thus avoid empty calories and mindless eating. 

Weeks helped develop the user interface and, on the back end, built a new feature for adding recipes and homemade meals, making meal data in the cloud accessible through a call to the server. “Lots of users had requested that we add this feature, and Elizabeth really pulled it off,” says Mandy Korpusik, a graduate student in CSAIL who led the project. Coco Nutritionist made its debut in the App Store last month and has already racked up nearly 900 downloads.

The Quest for Intelligence UROP projects were funded by former Alphabet executive chairman Eric Schmidt and his wife, Wendy; the MIT-IBM Watson AI Lab; and the MIT-SenseTime Alliance on Artificial Intelligence.

Four from MIT named 2019 Sloan Research Fellows

9 hours 43 min ago

Four members of the MIT faculty representing the departments of Economics, Mathematics, and Physics were recently named recipients of the 2019 Sloan Research Fellowships from the Alfred P. Sloan Foundation. The recipients, all early-career scholars in their fields, will each receive a two-year, $70,000 fellowship to further their research.

This year’s MIT recipients are among 126 scientists who represent 57 institutions of higher education in the United States and Canada. This year’s cohort brings MIT’s total to nearly 300 fellows — more than any single institution in the history of the fellowships since their inception in 1955.  

Sloan Fellows are nominated by their fellow researchers and selected by an independent panel of senior scholars on “the basis of a candidate’s research accomplishments, creativity, and potential to become a leader in his or her field.”

2019 Sloan Fellow Nikhil Agarwal, the Castle Krob Career Development Assistant Professor of Economics in the School of Humanities, Arts, and Social Sciences, studies the empirics of matching markets. 

“In these marketplaces, agents cannot simply choose their most preferred option from a menu with posted prices, because goods may be rationed or agents on the other side of the market must agree to a match,” Agarwal says of markets that include medical residency programs, kidney donation, and public school choice. “My research interests lie in how the market structure, market rules, and government policies affect economic outcomes in these settings. To this end, my research involves both developing new empirical techniques and answering applied questions,” he says.

Nancy Rose, department head and Charles P. Kindleberger Professor of Applied Economics, nominated Agarwal. “Nikhil [Agarwal] has made fundamental contributions to the empirical analysis of matching markets, advancing both economic science and public policy objectives,” says Rose. 

Andrew Lawrie, an assistant professor in the Department of Mathematics, is an analyst studying geometric partial differential equations. He investigates the behavior of waves as they interact with each other and with their surrounding medium. 

Lawrie's research focuses on solitons — coherent solitary waves that describe nonlinear dynamics as varied as rogue waves in the ocean, black holes, and short-pulse lasers. Together with Jacek Jendrej, a researcher at Le Centre National de la Recherche Scientifique and Université Paris 13, Lawrie recently gave the first mathematically rigorous example of a completely inelastic two-soliton collision. 

“Dr. Lawrie's mathematical versatility and knowledge recently has been put on great display,” says one of Lawrie’s nominators of his paper in the research journal Inventiones Mathematicae. “This is one of those papers that completely describe mathematically an important phenomenon.”

“He has amassed an astonishingly broad and deep body of work for somebody who is only on his second year of a tenure track,” says his nominator, who requested anonymity. 

Lawrie’s colleague Yufei Zhao was also named a 2019 Sloan Fellow recipient. Zhao, the Class of 1956 Career Development Assistant Professor in the Department of Mathematics, is a researcher in discrete mathematics who has made significant contributions in combinatorics with applications to computer science. 

In major research accomplishments, Zhao contributed to a better understanding of the celebrated Green-Tao theorem, which states that prime numbers contain arbitrarily long arithmetic progressions. Zhao’s proof, co-authored with Jacob Fox, Zhao’s advisor and a former professor in the mathematics department, and David Conlon at the University of Oxford, simplifies a central part of the proof, allowing a more direct route to the Green-Tao theorem. Their work improves the understanding of pseudorandom structures — non-random objects with random-like properties — and has other applications in mathematics and computer science.

“The resulting proof is clean and fits in 25 pages, well under half the length of the original proof,” says Larry Guth, Zhao’s nominator and a professor of mathematics at MIT. “His expository work on the Green-Tao theorem is a real service to the community.”

The final 2019 Sloan Research Fellow recipient is Daniel Harlow, an assistant professor in the Department of Physics. Harlow researches cosmological events, viewed through the lens of quantum gravity and quantum field theory.

“My research is focused on understanding the most extreme events in our universe: black holes and the Big Bang. Each year brings more observational evidence for these events, but without a theory of quantum gravity, we are not able to explain them in a satisfying way,” says Harlow, whose work has helped clarify many aspects of symmetries in quantum field theory and quantum gravity.

Harlow, who is a researcher in the Laboratory for Nuclear Science, has been working with Hirosi Ooguri, Fred Kavli Professor and director of the Walter Burke Institute for Theoretical Physics at Caltech, to give improved explanations of several well-known phenomena in the standard model of particle physics.

“We are very proud of Dan’s work with Ooguri on foundational aspects of symmetries in quantum field theory,” says Peter Fisher, department head and professor of physics. 

“Sloan Research Fellows are the best young scientists working today,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “Sloan Fellows stand out for their creativity, for their hard work, for the importance of the issues they tackle, and the energy and innovation with which they tackle them. To be a Sloan Fellow is to be in the vanguard of 21st century science.”

Dan Huttenlocher named inaugural dean of MIT Schwarzman College of Computing

15 hours 29 min ago

Dan Huttenlocher SM ’84, PhD ’88, a seasoned builder and leader of new academic entities at Cornell University, has been named as the first dean of the MIT Stephen A. Schwarzman College of Computing. He will assume his new post this summer.

A member of Cornell’s computer science faculty since 1988, Huttenlocher has served since 2012 as the founding dean of Cornell Tech, a graduate school in New York City that focuses on digital technology and its economic and societal impacts. Previously, he helped create and then led Cornell’s Faculty of Computing and Information Science.

Huttenlocher returns to MIT with widely published scholarship in computer science, as well as a strongly interdisciplinary approach to computing. He also brings extensive background in industry: Huttenlocher served for 12 years as a scientist at Xerox’s Palo Alto Research Center (PARC) before leaving to co-found a financial technology company in 2000. He currently chairs the board of the John D. and Catherine T. MacArthur Foundation, and sits on the boards of directors of Amazon and Corning.

Huttenlocher’s appointment was announced this morning by Provost Martin Schmidt in a letter to the MIT community.

“The College of Computing represents a pivotal new chapter in MIT’s perpetual effort to reinvent itself in order to live up to its mission,” Schmidt wrote. “As we take on this Institute-wide challenge, I believe we are very fortunate to have, in Dan, such an inspiring and collaborative leader — someone equipped to ensure a strong beginning for the College and to lay a foundation for its lasting success. I very much look forward to working with Dan as he takes on this new challenge.”

The MIT Schwarzman College of Computing was announced last October as a $1 billion commitment to addressing the opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence. The initiative aims to reorient MIT to bring the power of computing and artificial intelligence to all fields of study, and to educate students in every discipline to responsibly use and develop these technologies. The college is slated to open in September 2019.

“MIT Schwarzman College is an ambitious experiment in educating the leaders society needs to navigate the algorithmic future,” says MIT President L. Rafael Reif. “For its founding dean, we looked for someone who combined educational creativity, instinctive collegiality, intellectual depth and breadth, institutional savvy, and industry experience. In Dan Huttenlocher, we found all these qualities — along with the signature MIT combination of boldness, enthusiasm, and humility. I'm eager to work with Dan, and I look forward to seeing how he leads our community through the college’s ongoing evolution.”

Huttenlocher holds two degrees from the Institute: In 1984, he earned a master’s degree in electrical engineering and computer science, and in 1988, he earned his PhD in computer science from the Artificial Intelligence Laboratory — a precursor to today’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Huttenlocher sees it as an honor to return to the Institute in a leadership position, especially at a time when computing is evolving rapidly, producing new opportunities and challenges.

“MIT has a bold vision for rethinking computing-related research and education to respond to today’s and tomorrow’s challenges and opportunities,” Huttenlocher says. “The Institute plays a unique role in American and global higher education, and it’s exciting to be coming back to the place where I did my formative research work. I’ve learned a lot from MIT, and the world has changed a lot. If I can help contribute to the ways MIT wants to change in this new world, that is an amazing honor.”

Huttenlocher earned a bachelor’s degree from the University of Michigan in 1980, double-majoring in computer and communication sciences and experimental psychology. In recent years, he has served on two MIT visiting committees, one for Undergraduate and Graduate Programs and the other for the MIT Media Lab.

Since 2018, Huttenlocher has chaired the board of the John D. and Catherine T. MacArthur Foundation, a Chicago-based global foundation with assets of more than $6 billion.

“In the eight years that I have known Dan, as board member and now chair, he has proven himself to be intellectually curious, always respectful, and deeply supportive, especially of work that is both innovative and rigorous,” says Julia Stasch, president of the John D. and Catherine T. MacArthur Foundation. “He challenges us to be pragmatic and to be bold, and to move with the urgency that these times require. He is engaged in all that we do, from criminal justice and journalism to climate solutions, but we have especially benefitted from his wise counsel as we consider, across every area of work, ethical and social considerations in the inevitable and increasing use of technology.”

Multidisciplinary and enterprising

Huttenlocher has taught computer science, business and management, and information science courses at Cornell. In 1998, he chaired the task force that created Cornell’s interdisciplinary Faculty of Computing and Information Science (CIS) in Ithaca. In 2009, he was named dean of CIS; as dean, he helped bring statistics and computing closer together at Cornell, and led the development of Gates Hall as the new home for CIS.

In 2012, Huttenlocher was named founding dean and vice provost of Cornell Tech, a new graduate school focused on technology, business, law, and design, located on Roosevelt Island in the East River between Manhattan and Queens.

Cornell Tech arose from a competition, launched in 2010 by then-New York Mayor Michael Bloomberg, that aimed to build a new applied sciences campus on one of several sites in the city. In 2011, a joint proposal by Cornell and Technion-Israel Institute of Technology was selected from among seven formal proposals. Construction began on the 12-acre campus in 2014, and the first three buildings opened in 2017.

As dean and vice provost of Cornell Tech, Huttenlocher has honed his interdisciplinary approach to computing, fostering education at the intersection of computer science and areas including financial technology, media studies, health care, and urban planning. He built a multidisciplinary faculty — in computer science and engineering, as well as law, management, and design — to shape the school’s curriculum, and has established close ties with industry, nonprofits, government agencies, and investors. Under his leadership, the Cornell Tech curriculum has married computer science with practical application to the workforce and tech startups, creating several new degrees along the way.

Huttenlocher’s own research is broad, spanning algorithms, social media, and computer vision. He’s earned the Longuet-Higgins Award for Fundamental Advances in Computer Vision (2010), and various fellowships and awards from the National Science Foundation, the Association for Computing Machinery, IEEE, and Phi Beta Kappa, the oldest academic honor society in the United States. Throughout his teaching career, Huttenlocher has earned numerous teaching awards, including both the Stephen H. Weiss Fellowship and a Faculty of the Year Award at Cornell, and the New York State Professor of the Year award in 1993.

The path ahead

Huttenlocher believes it is important to help train students to be enterprising, which he sees as “thinking and executing bigger and better,” whether in an entrepreneurial, corporate, government, nonprofit, or academic setting. “It’s about making sure students are prepared to take advantage of the digital age — where you can build things that have a big impact in relatively short order with relatively small groups,” he says.

Huttenlocher himself co-founded a financial technology company, Intelligent Markets, in 2000, remaining involved for six years; the company was later sold to SunGard Financial Systems. After that experience, and helping to launch Cornell Tech, Huttenlocher says he doesn’t subscribe to inflexible visions: Many times, he says, ideas change as an entity evolves.

As dean of the MIT Schwarzman College of Computing, Huttenlocher will aim to educate students on the new societal challenges posed by artificial intelligence and automated decision-making, focusing on ethical questions pertaining to data, algorithms, and computing in general.

“These aren’t issues that can be viewed solely through a technology lens, nor solely through a humanistic lens, nor solely through a social science lens,” he says. “Addressing those questions means bringing those disparate parts of academia together. But it also can’t be answered just by academia. It means engaging people outside academia in understanding what they’re afraid of and what excites them about those technologies.”

With the MIT Schwarzman College of Computing poised to begin hiring 50 new faculty members whose research links computer science with other academic fields, Huttenlocher says, “That’s one of the most exciting parts of this new role: looking at people who can think about research from a computing perspective, and from other perspectives, to understand how computing influences work in other disciplines and how work in other disciplines influences computing.”

“These fields are evolving,” he adds. “MIT needs to not only lead in those areas, but also in their evolution — much as engineering itself evolved a lot 150 years ago, in the early years of the Institute.”

As dean, Huttenlocher will build upon the efforts of five working groups, announced this month by Provost Schmidt. These working groups will begin to give shape to the new college’s organizational structure, faculty appointments, curriculum and degrees, focus on social implications and responsibilities, and infrastructure. The working groups — guided by a steering committee that includes the provost, Dean of Engineering Anantha Chandrakasan, and MIT Faculty Chair Susan Silbey — will aim to produce a report describing their thoughts on these important issues by May.

Leventhal City Prize seeks to spark transformative urban design and planning approaches

Wed, 02/20/2019 - 2:20pm

Vibrant, innovative cities most often result from powerful collaborations among diverse constituencies.

To support this ideal, the MIT Norman B. Leventhal Center for Advanced Urbanism (LCAU) has announced the creation of a new interdisciplinary prize aimed at catalyzing innovative urban design and planning approaches worldwide, with a goal of improving the quality of life and environment for residents. 

The prize has been established in honor of the late Norman B. Leventhal, the visionary developer and philanthropist whose contributions transformed Boston’s urban landscape. His civic leadership drove Boston’s urban revival, through projects such as Rowes Wharf, Center Plaza, South Station, and One Post Office Square.

A prize of $100,000 will be awarded on a three-year cycle to an interdisciplinary team of MIT faculty to work together with a government agency, nonprofit organization, or civic leadership group anywhere in the world. The winning team must demonstrate the potential to improve the quality of life in cities through an innovative urban design and/or a planning project. The winners must also be able to incorporate the collaborative project in future teaching and research at MIT.

The prize has a number of goals: to develop real-world urban design solutions that advance social and environmental change; to foster new pathways for unconventional projects to get realized; to create innovative solutions using the most advanced knowledge available; and to promote collaboration among MIT faculty, students, and civic entities.

“What makes this prize really unique is that it is offered to a city and an MIT team to work together,” says Hashim Sarkis, dean of the School of Architecture and Planning. “True to the mission of the LCAU and the legacy of Norman Leventhal, it fosters collaboration, imagination, and implementation at the same time.”

For its first cycle, the Norman B. Leventhal City Prize will solicit novel responses related to LCAU’s triennial theme, equitable resilience. Equitable resilience foregrounds concerns for equity when planning, designing, and retrofitting cities with consideration of climate change and other environmental shocks or stresses. Making equity a central goal for resilience efforts, LCAU seeks proposals from all geographies that aim to develop physical design solutions that do not reinforce existing inequalities or create new ones.

“Many cities worldwide are connecting resilience adaptation goals with general development needs and strategic planning efforts,” says Alan M. Berger, the Norman B. Leventhal Professor of Advanced Urbanism. “Although these efforts provide a good starting point, it is imperative to make explicit the differential vulnerability of various socioeconomic groups in the face of increasing severity and frequency of climate change-related risks and natural disasters.”

Since its establishment in 2013 within the School of Architecture and Planning, the LCAU has sought to define the field of advanced urbanism, integrating research on urban design with processes of urbanization and urban culture, to meet the contemporary challenges facing the world’s cities.

Drawing on MIT’s deep history in urban design and planning, architecture, and transportation, the LCAU coordinates multidisciplinary, multifaceted approaches to advance the understanding of cities and propose new forms and systems for urban communities. Support for this program was provided by the Muriel and Norman B. Leventhal Family Foundation and the Sherry and Alan Leventhal Family Foundation.

For more details on the prize, see leventhalcityprize.mit.edu.

Study of quark speeds finds a solution for a 35-year physics mystery

Wed, 02/20/2019 - 1:00pm

MIT physicists now have an answer to a question in nuclear physics that has puzzled scientists for three decades: Why do quarks move more slowly inside larger atoms?

Quarks, along with gluons, are the fundamental building blocks of the universe. These subatomic particles — the smallest particles we know of — are far smaller, and operate at much higher energy levels, than the protons and neutrons in which they are found. Physicists have therefore assumed that a quark should be blithely indifferent to the characteristics of the protons and neutrons, and the overall atom, in which it resides.

But in 1983, physicists at CERN, as part of the European Muon Collaboration (EMC), observed for the first time what would become known as the EMC effect: In the nucleus of an iron atom containing many protons and neutrons, quarks move significantly more slowly than quarks in deuterium, which contains a single proton and neutron. Since then, physicists have found more evidence that the larger an atom’s nucleus, the slower the quarks that move within.

“People have been wracking their brains for 35 years, trying to explain why this effect happens,” says Or Hen, assistant professor of physics at MIT.

Now Hen, Barak Schmookler, and Axel Schmidt, a graduate student and a postdoc, respectively, in MIT’s Laboratory for Nuclear Science, have led an international team of physicists in identifying an explanation for the EMC effect. They have found that a quark’s speed depends on the number of protons and neutrons forming short-ranged correlated pairs in an atom’s nucleus. The more such pairs there are in a nucleus, the more slowly the quarks move within the atom’s protons and neutrons.

Schmidt says an atom’s protons and neutrons can pair up constantly, but only momentarily, before splitting apart and going their separate ways. During this brief, high-energy interaction, he believes that quarks in their respective particles may have a “larger space to play.”

“In quantum mechanics, anytime you increase the volume over which an object is confined, it slows down,” Schmidt says. “If you tighten up the space, it speeds up. That’s a known fact.”
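Schmidt’s point is the textbook particle-in-a-box estimate. As a rough illustration (not the team’s calculation), a particle confined to a region of size L has a typical momentum set by the uncertainty principle, so enlarging the region lowers the momentum:

    p_{\text{typical}} \sim \frac{\hbar}{L}
    \quad\Longrightarrow\quad
    L \to 2L \;\;\text{gives}\;\; p_{\text{typical}} \to \tfrac{1}{2}\, p_{\text{typical}}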

As atoms with larger nuclei intrinsically have more protons and neutrons, they also are more likely to have a higher number of proton-neutron pairs, also known as “short-range correlated” or SRC pairs. Therefore, the team concludes that the larger the atom, the more pairs it is likely to contain, resulting in slower-moving quarks in that particular atom.

Schmookler, Schmidt, and Hen, as members of the CLAS Collaboration at the Thomas Jefferson National Accelerator Facility, have published their results today in the journal Nature.

From a suggestion to a full picture

In 2011, Hen and collaborators, who have focused much of their research on SRC pairs, wondered whether this ephemeral coupling had anything to do with the EMC effect and the speed of quarks in atomic nuclei.

They gathered data from various particle accelerator experiments, some of which measured the behavior of quarks in certain atomic nuclei, while others detected SRC pairs in other nuclei. When they plotted the data on a graph, a clear trend appeared: The larger an atom’s nucleus, the more SRC pairs there were, and the slower the quarks that were measured. The largest nucleus in the data — gold — contained quarks that moved 20 percent more slowly than those in the smallest measured nucleus, helium.

“This was the first time this connection was concretely suggested,” Hen says. “But we had to do a more detailed study to build a whole physical picture.”

So he and his colleagues analyzed data from an experiment that compared atoms of different sizes and allowed measuring both the quarks’ speed and the number of SRC pairs in each atom’s nucleus. The experiment was carried out at the CEBAF Large Acceptance Spectrometer, or CLAS detector, an enormous, four-story spherical particle detector at the Thomas Jefferson National Accelerator Facility in Newport News, Virginia.

Within the detector, Hen describes the team’s target setup as a “kind of a Frankenstein-ish thing,” with mechanical arms, each holding a thin foil made from a different material, such as carbon, aluminum, iron, and lead, each made from atoms containing 12, 27, 56, and 208 protons and neutrons, respectively. An adjacent vessel held liquid deuterium, with atoms containing the lowest number of protons and neutrons of the group.

When they wanted to study a particular foil, they sent a command to the relevant arm to lower the foil of interest, just behind the deuterium cell and directly in the path of the detector’s electron beam. This beam shot electrons at the deuterium cell and solid foil, at the rate of several billion electrons per second. While a vast majority of electrons miss the targets, some do hit either the protons or neutrons inside the nucleus, or the much tinier quarks themselves. When they hit, the electrons scatter widely, and the angles and energies at which they scatter vary depending on what they hit — information that the detector captures.

Electron tuning

The experiment ran for several months and in the end amassed billions of interactions between electrons and quarks. The researchers calculated the speed of the quark in each interaction, based on the electron’s energy after it scattered, then compared the average quark speed between the various atoms.

By looking at much smaller scattering angles, corresponding to momentum transfers of a different wavelength, the team was able to “zoom out” so that electrons would scatter off the larger protons and neutrons, rather than quarks. SRC pairs are typically extremely energetic and would therefore scatter electrons at higher energies than unpaired protons and neutrons, a distinction the researchers used to detect SRC pairs in each material they studied.

“We see that these high-momentum pairs are the reason for these slow-moving quarks,” Hen says.

In particular, they found that the quarks in foils with larger atomic nuclei (and more proton-neutron pairs) moved at most 20 percent more slowly than those in deuterium, the material with the fewest pairs.

“These pairs of protons and neutrons have this crazy high-energy interaction, very quickly, and then dissipate,” Schmidt says. “In that time, the interaction is much stronger than normal and the nucleons have significant spatial overlap. So we think quarks in this state slow down a lot.”

Their data show for the first time that how much a quark’s speed is slowed depends on the number of SRC pairs in an atomic nucleus. Quarks in lead, for instance, were far slower than those in iron, which themselves were slower than those in aluminum, and so on.

The team is now designing an experiment in which they hope to detect the speed of quarks, specifically in SRC pairs.

“We want to isolate and measure correlated pairs, and we expect that will yield this same universal function, in that the way quarks change their velocity inside pairs is the same in carbon and lead, and should be universal across nuclei,” Schmidt says.

Ultimately, the team’s new explanation can help to illuminate subtle yet important differences in the behavior of quarks, the most basic building blocks of the visible world. Scientists have an incomplete understanding of how these tiny particles come to build the protons and neutrons that then come together to form the individual atoms that make up all the material we see in the universe.

“Understanding how quarks interact is really the essence of understanding the visible matter in the universe,” Hen says. “This EMC effect, even though 10 to 20 percent, is something so fundamental that we want to understand it.”

This research was funded, in part, by the U.S. Department of Energy and the National Science Foundation.

Hiroshi Ishii wins Association for Computing Machinery SIGCHI Lifetime Research Award

Wed, 02/20/2019 - 12:00pm

Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences at MIT, has been awarded the 2019 Association for Computing Machinery (ACM) SIGCHI Lifetime Research Award. He will accept the award and deliver a keynote presentation at the 2019 CHI Conference on Human Factors in Computing Systems in Glasgow, Scotland, this May.

The Lifetime Research Award is given to individuals whose research in human-computer interaction (HCI) is considered both fundamental and influential to the field. As head of the Tangible Media Group at the MIT Media Lab since 1995, Ishii has pushed the boundaries of digital technology by giving physical form to digital information. He is recognized as a founder of Tangible User Interfaces, a research genre based on the CHI ’97 “Tangible Bits” paper presented with Brygg Ullmer. The paper led to the spinoff ACM International Conference on Tangible, Embedded, and Embodied Interaction, starting in 2007.

“It is an incredible honor for me as an HCI researcher, and I’m extremely excited for this recognition of the Tangible Media Group’s quarter-century battle against the pixel empire of graphical user interfaces,” says Ishii.

Ishii’s work focuses on the hypothetical generation of materials that can change form and properties dynamically and computationally, becoming as reconfigurable as the pixels on a graphical user interface screen. His team’s projects, which Ishii describes under the banner of “radical atoms and tangible bits,” have contributed to forming the new stream of “shape-changing user interface” research in the HCI community.

Ishii and his team have presented their work at a variety of academic, design, and artistic venues (including ACM SIGCHI, ACM SIGGRAPH, Industrial Design Society of America, AIGA, Ars Electronica, ICC, Centre Pompidou, Victoria and Albert Museum, Cooper Hewitt Design Museum, and Milan Design Week), emphasizing that the design of engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review, encapsulated by his motto, “Be artistic and analytic. Be poetic and pragmatic.”

“Hiroshi is an inspiration to all of us at the Media Lab,” says lab director Joi Ito. “It’s been exciting to see how his vision has influenced and motivated so many people around the world, as well as here at MIT. This honor is truly deserved.”

Ishii’s keynote presentation at the CHI conference in Glasgow will outline his ongoing vision for “Radical Atoms and Tangible Bits”: seeking to realize seamless interfaces between humans, digital information, and the physical environment.

“Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback,” he says. 

Putting data privacy in the hands of users

Wed, 02/20/2019 - 9:48am

A new platform developed by MIT and Harvard University researchers ensures that web services adhere to users’ preferences on how their data are stored and shared in the cloud.

In today’s world of cloud computing, users of mobile apps and web services store personal data on remote data center servers. These data may include photos, social media profiles, email addresses, and even fitness data from wearable devices. Services often aggregate multiple users’ data across servers to gain insights on, say, consumer shopping patterns to help recommend new items to specific users, or may share data with advertisers. Traditionally, however, users haven’t had the power to restrict how their data are processed and shared.

In a paper being presented at this week’s USENIX Networked Systems Design and Implementation conference, the researchers describe a platform, called Riverbed, that forces data center servers to only use data in ways that users explicitly approve.  

In Riverbed, a user’s web browser or smartphone app does not communicate with the cloud directly. Instead, a Riverbed proxy runs on the user’s device to mediate communication. When the app tries to upload user data to a remote service, the proxy tags the data with a set of permissible uses, called a “policy.”

Users can select any number of predefined restrictions — such as, “do not store my data on persistent storage” or “my data may only be shared with the external service x.com.” The proxy tags all the data with the selected policy.

In the data center, Riverbed assigns the uploaded data to an isolated cluster of software components, with each cluster processing only data tagged with the same policies. For example, one cluster may contain data that can’t be shared with other services, while another may hold data that can’t be written to disk. Riverbed monitors the server-side code to ensure it adheres to a user’s policies. If it doesn’t, Riverbed terminates the service.
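The division of labor described above (a device-side proxy that attaches a policy to every upload, and server-side clusters that only mix data carrying identical policies) can be sketched abstractly as follows. The names here, such as Policy, proxy_upload, and route_to_universe, are assumptions for illustration, not Riverbed’s actual interfaces.

    from dataclasses import dataclass

    # Illustrative sketch only: a proxy tags each upload with a policy, and the
    # server side groups uploads whose policies match exactly.

    @dataclass(frozen=True)
    class Policy:
        allow_persistent_storage: bool = True
        share_only_with: tuple = ()        # e.g. ("x.com",); empty means no sharing

    def proxy_upload(data, policy):
        """Runs on the user's device: attach the chosen policy before anything
        leaves for the data center."""
        return {"data": data, "policy": policy}

    def route_to_universe(tagged, universes):
        """Runs in the data center: data tagged with identical policies are
        handled by the same isolated copy ('universe') of the application."""
        universes.setdefault(tagged["policy"], []).append(tagged["data"])
        return universes

    universes = {}
    strict = Policy(allow_persistent_storage=False)
    route_to_universe(proxy_upload("step counts", strict), universes)
    route_to_universe(proxy_upload("photo.jpg", Policy()), universes)
    print(len(universes))  # 2 -- one universe per distinct policy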

Riverbed aims to enforce user data preferences, while maintaining advantages of cloud computing, such as performing large-scale computations on outsourced servers. “Users give a lot of data to web apps for services, but lose control of how the data is used or where it’s going,” says first author Frank Wang SM ’16, PhD ’18, a recent graduate of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory. “We give users control to tell web apps, ‘This is exactly how you can use my data.’”

On that thread, an additional perk for app developers, Wang adds, is establishing more trust with users. “That’s a big thing now,” Wang says. “A selling point for your app would be saying, ‘My app’s goal is to protect user data.’”

Joining Wang on the paper are PhD student Ronny Ko and associate professor of computer science James Mickens, both of Harvard.

Creating “universes”

In 2016, the European Union passed the General Data Protection Regulation (GDPR), which states that users must consent to their data being accessed, that they have the right to request their data be deleted, and that companies must implement appropriate security measures. For web developers, however, these laws provide little technical guidance for writing sophisticated apps that need to leverage user data.

In the past, computer scientists have designed “information flow control” (IFC) systems that allow programmers to label program variables with data policies. But with so many variables and many possible interactions between variables, these systems are difficult to program. Thus, no large-scale web services use IFC techniques.

Primarily, Riverbed leverages the fact that the server-side code of an app can run atop a special “monitor” program — a program that tracks, regulates, and verifies how other programs manipulate data. The monitor creates a separate copy of the app’s code for each unique policy assigned to data. Each copy is called a “universe.” The monitor ensures that users who share the same policy have their data uploaded to, and manipulated by, the same universe. This method enables the monitor to terminate a universe’s code if that code attempts to violate the universe’s data policy.

This process relies on a custom interpreter, a program that translates source code into instructions a computer can execute and that can insert low-level checks into a program as it runs. The researchers modified a traditional interpreter to extract the policies attached to incoming user data and to label the corresponding variables with those restrictions. A label may, for instance, whitelist specific web services for data sharing or forbid persistent storage — meaning the data can’t be stored once the user stops using the web service.
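
The label mechanism can be pictured as a taint-tracking scheme in which values derived from restricted data inherit the restriction; the toy below is only illustrative, and the class and label names are invented.

```python
# Toy sketch of information-flow-style label propagation, in the spirit of the
# modified interpreter described above. The real interpreter works at the
# language-runtime level; these names are illustrative only.
class Labeled:
    def __init__(self, value, labels):
        self.value = value
        self.labels = set(labels)          # e.g. {"no_persistent_storage"}

    def __add__(self, other):
        # Any value derived from labeled data inherits its restrictions.
        other_labels = other.labels if isinstance(other, Labeled) else set()
        other_value = other.value if isinstance(other, Labeled) else other
        return Labeled(self.value + other_value, self.labels | other_labels)

def write_to_disk(item: Labeled) -> None:
    if "no_persistent_storage" in item.labels:
        raise PermissionError("policy forbids persistent storage")  # the monitor would halt this universe
    print("stored", item.value)

steps = Labeled(8214, {"no_persistent_storage"})
total = steps + 1000                        # derived value keeps the policy label
try:
    write_to_disk(total)
except PermissionError as err:
    print("blocked:", err)
```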

“Say I want my data to be aggregated with other users. That data is put into its own universe with other user data with the same policy,” Wang says. “If a user doesn’t want to share any data with anyone, then that user has their own whole universe. This way, you don’t have any cross-pollination of data.”

For developers, this makes it much easier to comply with GDPR and other privacy laws, Wang says, because users have given explicit consent for data access. “All users in each universe have the same policies, so you can do all your operations and not worry about what data is put into an algorithm, because everyone has the same policy on data in that universe,” Wang says.

Efficient copying

In the worst-case scenario, Wang says, each user of each service would have a separate universe. Generally, this could impose significant computational overhead and slow down the service. But the researchers leveraged a relatively new technique, called “container-based virtualization,” which allows the Riverbed monitor to create multiple universes of the same program far more efficiently. As a result, universe management is fast, even if a service has hundreds or thousands of universes.

In their paper, the researchers evaluated Riverbed on several apps, demonstrating that the platform keeps data secure with little overhead. Results show that more than 1,000 universes can squeeze onto a single server, with added computation that slows the service by about 10 percent. That’s fast and efficient enough for real-world use, Wang says.

The researchers envision the policies being written by advocacy groups, such as the Electronic Frontier Foundation (EFF), an international nonprofit digital rights group.

New policies can be “dropped in” to a Riverbed-run service at any time, meaning developers don’t need to rewrite code.

Q&A: Why cities aren’t working for the working class

Wed, 02/20/2019 - 9:41am

MIT economist David Autor made news in January, when he delivered the prestigious Richard T. Ely Lecture at the annual meeting of the American Economic Association and presented an attention-grabbing finding about the U.S. economy. Cities no longer provide an abundance of middle-skill jobs for workers without college degrees, he announced, based on his own careful analysis of decades of federal jobs data, which he scrutinized by occupation, location, and more. MIT News talked to Autor, the Ford Professor of Economics at MIT, about how this sea change is responsible for much of the “hollowing out” of the middle-class work force, and overall inequality, in the U.S. This interview has been edited for length.

Q: Your new research says that changes in the jobs available in cities have played a big role in the growth of inequality and polarization in the U.S. But what exactly is your new finding?

A: There’s a lot of economic literature that says, “Cities are where all the action is.” Wages are higher in cities; people flock to cities. I’ve been writing about the polarization of occupations for a long time, and the hollowing out of the middle class and the geography of that, but I always imagined that polarization reflected a time when the middle-skill jobs had been robustly present in lots of places, both urban and rural, and then hollowed out.

What I didn’t realize was the degree to which [overall U.S.] job polarization is an unwinding of a distinctive feature of high-density metro areas, which was highly present in the postwar period and is now entirely gone. There were these occupations where noncollege workers did skilled work in metro areas: production work and clerical, administrative work. These were middle-skill jobs and they were much more prevalent in cities, urban areas, and metro areas than they were in suburbs and rural areas. But that began to decline in the 1970s and is now extinct. There’s nothing remaining of that.

There’s just less and less mixing of high-skill and low-skill workers, because their jobs are no longer complementary. They’re no longer producing stuff jointly together.

Q: Why were these jobs in cities in the first place?

A:  Two reasons. One is, historically, manufacturing was an urban phenomenon. It had to be, because you needed access to transportation, good infrastructure, and so on. And then for clerical and administrative work, it’s because those occupations are inputs into the world of professionals, so they had to be where the high-skill professionals were.

Q: To what extent are we talking about strictly a loss of these two types of jobs, pillars of the middle class as they once were?

A: Those are the two biggest examples, but there’s been a big decline of all routine activity. Anything that follows a set of procedures. That doesn’t describe construction work, cleaning, creative work. But you see the decline of routine work even within jobs. There are still office clerical workers, but they’re more educated than they used to be; they do harder jobs. They do a different job — coordinating events, proofing papers, dealing with those godforsaken travel receipts. There are still production workers, but their work is more abstract — they manage complex machines rather than performing repetitive motions. Those are the two leading examples and they cover a lot of the terrain, but they don’t cover all of it.

Q: What are some other implications of this research?

A: There’s this big puzzle in the United States about declining geographic mobility. People have read that as a very negative sign — the labor market is so sclerotic, nobody can afford to move — but my conjecture is that it reflects highly educated people moving to major metro areas to attend college and then staying there. Meanwhile, non-college workers have less and less of an incentive to move to expensive cities where the wage premium for urban non-professionals has fallen steeply.

I think it [job polarization] speaks to our politics. Cities are different places than they used to be. They’re much more educated now, they’re much higher-wage, and they’re younger. They were distinctive before in the set of occupations they might have, and now they’re distinctive in extreme levels of education, high levels of wages, being relatively youthful, and being extremely diverse. You can see how this creates a growing urban, non-urban divide in voting patterns.

Q: Where does your research go from here, and where do policy discussions go?

A: I think there’s a big agenda that flows from this: what it looks like across other industrialized countries, how it relates to the age structure, what the new [jobs] look like. And it raises two other sets of questions. What should you do if you’re not highly educated, what’s the right opportunity, and what’s the right place? And, two, it’s crazy how concentrated, expensive, and highly educated some cities are. So, is this just a black hole phenomenon? Will cities get denser and denser, or will the countervailing forces prevail? There’s a lot of opportunity for smaller and midsize cities, like Pittsburgh, Nashville, Greensboro-Spartanburg, the places saying, “Hey, we have enough skilled people and restaurants and infrastructure, why don’t you come here?”

Q: Immigration is a prominent political issue right now, and your research here examines it. What did you find?

A: Immigrants are important to this story because the education distribution of immigrants is quite bimodal: They are disproportionately likely to have a postcollege degree but even more likely to not have a college degree at all. And they are concentrated in urban areas. I initially conjectured that part of this [overall wage and polarization trend] was just an artifact of immigration. But you find it equally among immigrants and nonimmigrants, so that’s not it. Immigrants are extremely important to both the skill supply in cities and the services supply in cities, but the hollowing out is not a reflection of immigration in any direct sense.

Q: So as you’ve quantified here, there was an exceptional period in U.S. postwar history that we took as the “normal” state of things.

A: Yes. We’re talking about something that was distinctive about the urban labor market for non-college workers in the postwar era, and now it’s gone.

Festival of Learning highlights innovation

Tue, 02/19/2019 - 3:10pm

The third annual Festival of Learning, organized by MIT Open Learning and the Office of the Vice Chancellor, highlighted educational innovation, including how digital technologies and shared best practices are enabling educators to drive better learning outcomes for MIT students and global learners via online courses. “As a community, we are energized by all the transformation and innovation happening within the education space right now,” said Krishna Rajagopal, dean for digital learning, open learning, as he kicked off the festival.

The educator’s role: to engage and inspire learners

Keynote speaker Po-Shen Loh, Carnegie Mellon University associate professor, founder of online education platform Expii, and coach of the U.S. International Math Olympiad Team, surprised a morning audience of about 400 people in Room 10-250 when he held up a small red die and asked why opposite sides of the die always add up to seven. Loh then began a lively, Socratic interaction with the audience that blended math and physics with engaging humor. What Loh’s inquiry consciously didn’t include was digital technology.

“If we’re all here in this room together,” explained Loh, “we should be taking advantage of this unique opportunity to interact dynamically with each other.” Loh rejected the idea that an educator is someone who simply transmits content to learners. “The teacher’s role is not just to convey information, but to be a cheerleader and a coach inspiring learners to pursue knowledge on their own initiative.”

Loh then held up his smartphone. “Today, every person has an enormous amount of power to do good if they leverage technology.” He described how he founded Expii as a student-directed online learning platform in math and science that would allow users to tailor educational content to however they preferred to learn. Loh noted that teenagers love YouTube because it allows them to decide for themselves how they’ll pursue their own interests, citing the viral Baby Shark phenomenon as an example. Expii followed a similar “personalized engagement” model: “Expii is built in such a way that anyone can contribute and anyone can learn in the ways they want to learn,” said Loh. The takeaway for educators was clear: Making space for personalization can drive engagement.

Loh concluded his hourlong talk by explaining that the accelerating pace of technological change, and the way that change impacts learning and work, have made the capacity to keep learning both urgent and essential: “You need to learn constantly today, no matter who you are and where you are in life,” he said.

Virtual reality in education

Next, D. Fox Harrell, professor of digital media and artificial intelligence and director of the MIT Center for Advanced Virtuality, kicked off the panel “Virtual Experience, Real Liberation: Technologies for Education and the Arts.” He moderated the panel and presented research on how extended-reality technologies such as virtual reality (VR) can be used to enable people to understand systematic social phenomena, such as dehumanizing the other in war, racial and ethnic socialization, and sexism in the workplace. Harrell argued that technologies of virtuality can play a role in serving the social good by reducing bias and helping people critically reflect upon society.

Harrell highlighted research projects on “how to use computer science to impact social issues” such as police brutality and global conflict resolution. VR, for example, is being used to allow people to engage with those on opposite sides of global conflicts virtually, providing them with insights into aspects of their shared humanity and fostering empathy.

Panelist Tabitha Peck, professor of mathematics and computer science at Davidson College, shared her research on using VR to combat implicit bias and stereotype threat, a situation in which individuals are at risk of conforming to a negative stereotype about a group to which they belong. By enabling users to inhabit another person’s body virtually, noted Peck, “a person is offered different perspectives that can impact behavior.” In one example, a domestic abuser was subjected to verbal abuse in a virtual world. “He broke down and cried after,” Peck said, and the experience became an important part of his treatment and recovery efforts.

Eran Egozy, professor of the practice in music technology and co-founder of Harmonix, next described how he has spent his career tackling a single question: “Can we create a musical instrument which shortens the learning curve for music-making, enabling learners to get to a point of enjoyment faster?” The extremely popular culmination of Egozy’s efforts at Harmonix was “Guitar Hero,” and he detailed the development of the blockbuster game. Egozy ended his talk on a high note, asking everyone in Room 10-250 to pull out their smartphones, connect to the internet, and use their phones to perform as an orchestra in an audience-participation experience called “Tutti.” With Egozy waving a baton in the front, and each section of the auditorium assigned a different, smartphone-enabled instrument, the audience played a three-minute musical composition called “Engineered Engineers.”

Finally, in the panel’s Q&A session that ended the morning festivities, Harrell prompted Peck and Egozy to explore how each of their systems plays a part in broader ecologies of users, designers, collaborators, caregivers, artists, and more. Technologies of virtuality, he asserted, are not panaceas on their own, but can act within networks of people and systems to serve the greater good.

Afternoon expo and workshops

The festival continued in the afternoon with a learning expo that included 26 exhibitors. One exhibitor was Residential Education, which uses digital tools to drive improved educational outcomes for on-campus courses. Meredith Davies, senior education technologist, explained, “We’re here at the festival to educate MIT faculty on the various ways they can use innovation to improve learning. We advise MIT faculty on how they can leverage research-based teaching practices and tailor digital tools to the needs of their learners.”

The festival concluded with three afternoon workshops focused on applying innovative tools and practices. “Applying Learning Sciences to Instruction” was led by MIT Senior Learning Scientist Aaron Kessler. The workshop explored the origins of learning science as a bridge between cognitive psychology and other fields such as sociology, political science, computer science, and economics. This interdisciplinary approach allows researchers to gain applied insights into learning and teaching. Kessler encouraged workshop participants to discuss how learning science principles like the testing effect, interleaved practice, and spaced practice might be applied in the ways they teach and learn.

“It’s been great being here today at the Festival of Learning and seeing so many engaged people with so many different ideas about how to improve education,” Po-Shen Loh said. “What’s really struck me is the high level of enthusiasm everyone has shown for doing things better.”

Serving up brunch for the MIT graduate community

Tue, 02/19/2019 - 10:50am

On Feb. 3, residents of the graduate residence hall Sidney-Pacific hosted a Sunday brunch for all members of the MIT and graduate community. MIT President L. Rafael Reif and Senior Associate Dean for Graduate Education Blanche Staton were invited as special guests to the event.

A team of more than 30 volunteers got up in the early hours of the morning to prepare, cook, and serve brunch. Food trays were full of breakfast classics — eggs, sausages, bacon, pancakes, and fruit — in addition to options for guests with dietary restrictions. As an incentive to enter the brunch early, guests were encouraged to go eco-friendly by bringing their own reusable plates and utensils.

Before brunch officially started, Reif and Staton were led by two Sidney-Pacific Executive Council (SPEC) members on a tour of the graduate residence hall. Then, Reif and Staton were presented with an official red Sidney-Pacific Brunch apron, signifying their welcome to the brunch team. Together with the brunch volunteers, Reif and Staton helped serve food to over 200 attendees.

“Everyone had a great time, and we are very glad that we could do something to benefit the MIT community,” said Sami Yamani, SPEC president and a PhD student in mechanical engineering. “We were all honored to have President Reif and Dean Staton helping us serve brunch to the MIT graduate community and then hang out with students while enjoying their meal.”

The brunches started in 2002 during Sidney-Pacific’s inaugural year as a graduate residence hall. As a way to initiate graduate community within a new space, Roger and Dottie Mark, heads of house for Sidney-Pacific from 2002 to 2013, thought it would be a great idea to have a brunch chair as an officer position.

“We thought having a Sunday brunch would enable the folks to spend some good conversation time with others,” said Dottie Mark.

Added Roger: “Over time, the brunches got better and better as brunch chairs began to compete and get more creative. Everyone seemed to enjoy them.”

Sidney-Pacific brunches are held on a monthly basis during the academic year, with two additional brunches during the summer months. They are open to the entire MIT community and are funded by the Office of Graduate Education (OGE) and Division of Student Life (DSL).

Geeticka Chauhan, a resident of Sidney-Pacific and a PhD student in electrical engineering and computer science, said the brunches are an important part of a graduate student’s residential life experience.

“They bring the whole graduate student community together,” Chauhan said. “We have students visiting from all other grad dorms, and sometimes off-campus grad students as well, so being able to bring them all together in a room is a very humbling experience.”

Robots track moving objects with unprecedented precision

Mon, 02/18/2019 - 11:59pm

A novel system developed at MIT uses RFID tags to help robots home in on moving objects with unprecedented speed and accuracy. The system could enable greater collaboration and precision by robots working on packaging and assembly, and by swarms of drones carrying out search-and-rescue missions.

In a paper being presented next week at the USENIX Symposium on Networked Systems Design and Implementation, the researchers show that robots using the system can locate tagged objects within 7.5 milliseconds, on average, and with an error of less than a centimeter.

In the system, called TurboTrack, an RFID (radio-frequency identification) tag can be applied to any object. A reader sends a wireless signal that reflects off the RFID tag and other nearby objects, and rebounds to the reader. An algorithm sifts through all the reflected signals to find the RFID tag’s response. Final computations then leverage the RFID tag’s movement — even though this usually decreases precision — to improve its localization accuracy.

The researchers say the system could replace computer vision for some robotic tasks. As with its human counterpart, computer vision is limited by what it can see, and it can fail to notice objects in cluttered environments. Radio frequency signals have no such restrictions: They can identify targets without visualization, within clutter and through walls.

To validate the system, the researchers attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it onto the bottle, held by another robotic arm. In another demonstration, the researchers tracked RFID-equipped nanodrones during docking, maneuvering, and flying. In both tasks, the system was as accurate and fast as traditional computer-vision systems, while working in scenarios where computer vision fails, the researchers report.

“If you use RF signals for tasks typically done using computer vision, not only do you enable robots to do human things, but you can also enable them to do superhuman things,” says Fadel Adib, an assistant professor and principal investigator in the MIT Media Lab, and founding director of the Signal Kinetics Research Group. “And you can do it in a scalable way, because these RFID tags are only 3 cents each.”

In manufacturing, the system could enable robot arms to be more precise and versatile in, say, picking up, assembling, and packaging items along an assembly line. Another promising application is using handheld “nanodrones” for search and rescue missions. Nanodrones currently rely on computer vision and image-stitching methods for localization. These drones often get confused in chaotic areas, lose each other behind walls, and can’t uniquely identify each other. This all limits their ability to, say, spread out over an area and collaborate to search for a missing person. Using the researchers’ system, nanodrones in swarms could better locate each other, for greater control and collaboration.

“You could enable a swarm of nanodrones to form in certain ways, fly into cluttered environments, and even environments hidden from sight, with great precision,” says first author Zhihong Luo, a graduate student in the Signal Kinetics Research Group.

The other Media Lab co-authors on the paper are visiting student Qiping Zhang, postdoc Yunfei Ma, and Research Assistant Manish Singh.

Super resolution

Adib’s group has been working for years on using radio signals for tracking and identification purposes, such as detecting contamination in bottled foods, communicating with devices inside the body, and managing warehouse inventory.

Similar systems have attempted to use RFID tags for localization tasks. But these come with trade-offs in either accuracy or speed. To be accurate, it may take them several seconds to find a moving object; to increase speed, they lose accuracy.

The challenge was achieving both speed and accuracy simultaneously. To do so, the researchers drew inspiration from an imaging technique called “super-resolution imaging.” These systems stitch together images from multiple angles to achieve a finer-resolution image.

“The idea was to apply these super-resolution systems to radio signals,” Adib says. “As something moves, you get more perspectives in tracking it, so you can exploit the movement for accuracy.”

The system combines a standard RFID reader with a “helper” component that’s used to localize radio frequency signals. The helper shoots out a wideband signal comprising multiple frequencies, building on a modulation scheme used in wireless communication, called orthogonal frequency-division multiplexing.

The system captures all the signals rebounding off objects in the environment, including the RFID tag. One of those signals carries a response that is specific to that tag, because an RFID tag reflects and absorbs the incoming signal in a characteristic pattern, corresponding to bits of 0s and 1s, that the system can recognize.

Because these signals travel at the speed of light, the system can compute a “time of flight” — measuring distance by calculating the time it takes a signal to travel between a transmitter and receiver — to gauge the location of the tag, as well as the other objects in the environment. But this provides only a ballpark localization figure, not subcentimeter precision.
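
As a sanity check on the time-of-flight idea, the round-trip arithmetic looks like this (the timing value is invented for illustration). A timing error of just one nanosecond already corresponds to roughly 15 centimeters of one-way distance, which is why time of flight alone gives only a coarse estimate.

```python
# Back-of-the-envelope time-of-flight ranging, as described above.
# The timing value below is made up for illustration.
C = 299_792_458.0  # speed of light, m/s

def round_trip_distance(t_round_trip_s: float) -> float:
    """Distance to a reflector, given the round-trip travel time of a radio signal."""
    return C * t_round_trip_s / 2.0

# A reflection arriving ~20 nanoseconds after transmission implies a reflector ~3 m away.
print(f"{round_trip_distance(20e-9):.2f} m")
```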

Leveraging movement

To zoom in on the tag’s location, the researchers developed what they call a “space-time super-resolution” algorithm.

The algorithm combines the location estimations for all rebounding signals, including the RFID signal, which it determined using time of flight. Using some probability calculations, it narrows down that group to a handful of potential locations for the RFID tag.

As the tag moves, its signal angle slightly alters — a change that also corresponds to a certain location. The algorithm then can use that angle change to track the tag’s distance as it moves. By constantly comparing that changing distance measurement to all other distance measurements from other signals, it can find the tag in a three-dimensional space. This all happens in a fraction of a second.
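
A grossly simplified way to see why combining many measurements narrows the estimate is the one-dimensional sketch below, which scores candidate positions against repeated noisy distance readings; it is not the actual space-time super-resolution algorithm, and all values are made up.

```python
# Grossly simplified illustration of fusing noisy distance estimates over time to
# narrow down a tag's position on a 1-D line. The actual TurboTrack algorithm is a
# 3-D space-time super-resolution method, which this does not reproduce.
import numpy as np

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 5.0, 501)   # candidate tag positions, meters
log_score = np.zeros_like(candidates)

true_position = 2.37
sigma = 0.10                              # per-measurement ranging noise, meters

for _ in range(50):                       # repeated measurements as the tag is observed
    measured = true_position + rng.normal(0.0, sigma)
    log_score += -((candidates - measured) ** 2) / (2 * sigma ** 2)

best = candidates[np.argmax(log_score)]
print(f"estimate: {best:.3f} m (true {true_position} m)")  # error shrinks as measurements accumulate
```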

“The high-level idea is that, by combining these measurements over time and over space, you get a better reconstruction of the tag’s position,” Adib says.

“The work reports sub-centimeter accuracy, which is very impressive for RFID,” says Lili Qiu, a professor of computer science at the University of Texas at Austin whose research focuses on wireless networking and communications. “The paper proposes an interesting idea that lets a ‘helper’ transmit a wideband signal compatible with RFID protocol to achieve high tracking accuracy [and] develops a … framework for RF localization that fuses measurements across time and across multiple antennas. The system has potential to support [the researchers’] target applications, such as robotic assembly and nanodrones. … It would be very interesting to see the field test results in the future.”

The work was sponsored, in part, by the National Science Foundation.

Lobster’s underbelly is as tough as industrial rubber

Mon, 02/18/2019 - 11:59pm

Flip a lobster on its back, and you’ll see that the underside of its tail is split in segments connected by a translucent membrane that appears rather vulnerable when compared with the armor-like carapace that shields the rest of the crustacean.

But engineers at MIT and elsewhere have found that this soft membrane is surprisingly tough, with a microscopic, layered, plywood-like structure that makes it remarkably tolerant to scrapes and cuts. This deceptively tough film protects the lobster’s belly as the animal scuttles across the rocky seafloor.

The membrane is also stretchy, to a degree, which enables the lobster to whip its tail back and forth, and makes it difficult for a predator to chew through the tail or pull it apart.

This flexibility may come from the fact that the membrane is a natural hydrogel, composed of 90 percent water. Chitin, a fibrous material found in many shells and exoskeletons, makes up most of the rest.

The team’s results show that the lobster membrane is the toughest of all natural hydrogels, including collagen, animal skins, and natural rubber. The membrane is about as strong as industrial rubber composites, such as those used to make car tires, garden hoses, and conveyor belts.

The lobster’s tough yet stretchy membrane could serve as a design guide for more flexible body armor, particularly for highly mobile regions of the body, such as elbows and knees.

“We think this work could motivate flexible armor design,” says Ming Guo, the d’Arbeloff Career Development Assistant Professor in the Department of Mechanical Engineering at MIT. “If you could make armor out of these types of materials, you could freely move your joints, and it would make you feel more comfortable.”

The full paper detailing the team’s results appeared online Feb. 14 in the journal Acta Materialia. (The journal posted an uncorrected proof on Jan. 31.) Guo’s co-authors are Jinrong Wu and Hao Zhang of Sichuan University, Liangliang Qu and Fei Deng of Harvard University, and Zhao Qin, who is a research scientist in the MIT Department of Civil and Environmental Engineering and another senior author of the paper.

Flexible protection

Guo started looking into the properties of the lobster membrane following a dinner with a visitor to his lab.

“He had never eaten lobster before, and wanted to try it,” Guo recalls. “While the meat was very good, he realized the belly’s transparent membrane was really hard to chew. And we wondered why this was the case.”

While much research has been devoted to the lobster’s distinctive, armor-like shell, Guo found there was not much known about the crustacean’s softer tissues.

“When lobsters swim, they stretch and move their joints and flip their tails really fast to escape from predators,” Guo says. “They can’t be entirely covered in a hard shell — they need these softer connections. But nobody has looked at the membrane before, which is very surprising to us.”

So he and his colleagues set about characterizing the properties of the unusual material. They cut each membrane into thin slices, each of which they ran through various experimental tests. They placed some slices in a small oven to dry, then afterward measured their weight. From these measurements, they estimated that 90 percent of the lobster’s membrane consists of water, making it a hydrogel material.

They kept other samples in saline water to mimic a natural ocean environment. With some of these samples, they performed mechanical tests, placing each membrane in a machine that stretches the sample, while precisely measuring the force applied. They observed that the membrane was initially floppy and easily stretched, until it reached about twice its initial length, at which point the material started to stiffen and became progressively tougher and more resistant to stretching.
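
The strain-stiffening behavior can be pictured with a toy stress-stretch curve like the one below; the functional form and numbers are invented purely to illustrate the qualitative shape, not to reproduce the paper’s measurements.

```python
# Toy illustration of strain-stiffening: the slope of stress vs. stretch stays low
# until a threshold stretch, then rises sharply. The functional form and numbers
# are invented for illustration and are not the paper's measured data.
import numpy as np

def toy_stress(stretch_ratio: np.ndarray) -> np.ndarray:
    soft = 0.05 * (stretch_ratio - 1.0)                      # floppy initial response
    stiff = 0.8 * np.maximum(stretch_ratio - 2.0, 0.0) ** 2  # stiffening past ~2x initial length
    return soft + stiff

stretch = np.linspace(1.0, 2.5, 7)
for s, sigma in zip(stretch, toy_stress(stretch)):
    print(f"stretch {s:.2f}x -> stress {sigma:.3f} (arbitrary units)")
```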

“This is quite unique for biomaterials,” Guo notes. “For many other tough hydrogels, the more you stretch, the softer they are. This strain-stiffening behavior could allow lobsters to flexibly move, but when something bad happens, they can stiffen and protect themselves.”

Lobster’s natural plywood

As a lobster makes its way across the seafloor, it can scrape against abrasive rocks and sand. The researchers wondered how resilient the lobster’s membrane would be to such small scrapes and cuts. They used a small scalpel to scratch the membrane samples, then stretched them in the same way as the intact membranes.

“We made scratches to mimic what might happen when they’re moving through sand, for example,” Guo explains. “We even cut through half the thickness of the membrane and found it could still be stretched equally far. If you did this with rubber composites, they would break.”

The researchers then zoomed in on the membrane’s microstructure using electron microscopy. What they observed was a structure very similar to plywood. Each membrane, measuring about a quarter of a millimeter thick, is composed of tens of thousands of layers. A single layer contains untold numbers of chitin fibers, resembling filaments of straw, all oriented at the same angle, precisely 36 degrees offset from the layer of fibers above. Similarly, plywood is typically made of three or more thin layers of wood, the grain of each layer oriented at right angles to the layers above and below.
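
The layer-by-layer rotation can be sketched in a couple of lines; the 36-degree offset comes from the description above, while everything else is illustrative.

```python
# Sketch of the layer-by-layer fiber rotation described above: each layer's fibers
# are offset 36 degrees from the layer beneath, so orientations cycle through all
# directions every few layers. Purely illustrative.
def layer_angles(n_layers: int, offset_deg: float = 36.0):
    return [(i * offset_deg) % 180.0 for i in range(n_layers)]  # fiber direction is unsigned

print(layer_angles(6))  # [0.0, 36.0, 72.0, 108.0, 144.0, 0.0] -- directions repeat every 5 layers
```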

“When you rotate the angle of fibers, layer by layer, you have good strength in all directions,” Guo says. “People have been using this structure in dry materials for defect tolerance. But this is the first time it’s been seen in a natural hydrogel.”

Led by Qin, the team also carried out simulations  to see how a lobster membrane would react to a simple cut if its chitin fibers were aligned like plywood, versus in completely random orientations. To do this, they first simulated a single chitin fiber and assigned it certain mechanical properties, such as strength and stiffness. They then reproduced millions of these fibers and assembled them into a membrane structure composed of either completely random fibers or layers of precisely oriented fibers, similar to the actual lobster membrane.

“It is amazing to have a platform that allows us to directly test and show how identical chitin fibers yield very different mechanical properties once they are built into various architectures,” Qin says.

Finally, the researchers created a small notch through both the random and layered membranes, and programmed forces to stretch each membrane. The simulation visualized the stress throughout each membrane.

“In the random membrane, the stress was all equal, and when you stretched it, it quickly fractured,” Guo says. “And we found the layered structure stretched more without breaking.”

“One mystery is how the chitin fibers can be guided to assemble into such a unique layered architecture to form the lobster membrane,” Qin says. “We are working toward understanding this mechanism, and believe that such knowledge can be useful to develop innovative ways of managing the microstructure for material synthesis.”

In addition to flexible body armor, Guo says materials designed to mimic lobster membranes could be useful in soft robotics, as well as tissue engineering. If anything, the results shed new light on the survival of one of nature’s most resilient creatures.

“We think this membrane structure could be a very important reason for why lobsters have been living for more than 100 million years on Earth,” Guo says. “Somehow, this fracture tolerance has really helped them in their evolution.”

This research was supported, in part, by the National Natural Science Foundation of China and State Key Laboratory of Polymer Materials Engineering.

Climate change makes summer weather stormier yet more stagnant

Mon, 02/18/2019 - 3:00pm

Climate change is shifting the energy in the atmosphere that fuels summertime weather, which may lead to stronger thunderstorms and more stagnant conditions for midlatitude regions of the Northern Hemisphere, including North America, Europe, and Asia, a new MIT study finds.

Scientists report that rising global temperatures, particularly in the Arctic, are redistributing the energy in the atmosphere: More energy is available to fuel thunderstorms and other local, convective processes, while less energy is going toward summertime extratropical cyclones — larger, milder weather systems that circulate across thousands of kilometers. These systems are normally associated with winds and fronts that generate rain.

“Extratropical cyclones ventilate air and air pollution, so with weaker extratropical cyclones in the summer, you’re looking at the potential for more poor air-quality days in urban areas,” says study author Charles Gertler, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Moving beyond air quality in cities, you have the potential for more destructive thunderstorms and more stagnant days with perhaps longer-lasting heat waves.”

Gertler and his co-author, Associate Professor Paul O’Gorman of EAPS, are publishing their results this week in the Proceedings of the National Academy of Sciences.

A shrinking gradient

In contrast to more violent tropical cyclones such as hurricanes, extratropical cyclones are large weather systems that occur poleward of the Earth’s tropical zone. These storm systems generate rapid changes in temperature and humidity along fronts that sweep across large swaths of the United States. In the winter, extratropical cyclones can whip up into Nor’easters; in the summer, they can bring everything from general cloudiness and light showers to heavy gusts and thunderstorms.

Extratropical cyclones feed off the atmosphere’s horizontal temperature gradient — the difference in average temperatures between northern and southern latitudes. This temperature gradient, together with the moisture in the atmosphere, produces a certain amount of energy that can fuel weather events. The greater the gradient between, say, the Arctic and the equator, the stronger an extratropical cyclone is likely to be.

In recent decades, the Arctic has warmed faster than the rest of the Earth, in effect shrinking the atmosphere’s horizontal temperature gradient. Gertler and O’Gorman wondered whether and how this warming trend has affected the energy available in the atmosphere for extratropical cyclones and other summertime weather phenomena.

They began by looking at a global reanalysis of recorded climate observations, known as the ERA-Interim Reanalysis, a project that has been collecting available satellite and weather balloon measurements of temperature and humidity around the world since the 1970s. From these measurements, the project produces a fine-grained global grid of estimated temperature and humidity, at various altitudes in the atmosphere.

From this grid of estimates, the team focused on the Northern Hemisphere, on regions between 20 and 80 degrees latitude. They took the average summertime temperature and humidity in these regions over June, July, and August for each year from 1979 to 2017. They then fed each yearly summertime average of temperature and humidity into an algorithm, developed at MIT, that estimates the amount of energy that would be available in the atmosphere given those temperature and humidity conditions.
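
Schematically, the processing step described here looks like the sketch below: average the gridded June-August fields for each year, then pass the result to an energy estimator. The estimator shown is a placeholder stand-in, not the MIT algorithm, and all data are synthetic.

```python
# Schematic of the processing described above: average gridded temperature and
# humidity over June-August for each year, then hand the result to an
# energy-estimation routine. `estimate_available_energy` is a placeholder for the
# MIT algorithm, which is not reproduced here.
import numpy as np

def summer_mean(monthly_field: np.ndarray) -> np.ndarray:
    """monthly_field: (12, nlat, nlon) array for one year; returns the June-August mean."""
    jja = monthly_field[5:8]          # indices 5, 6, 7 -> June, July, August
    return jja.mean(axis=0)

def estimate_available_energy(temperature: np.ndarray, humidity: np.ndarray) -> float:
    # Placeholder: the real method partitions energy between convective and
    # large-scale (extratropical-cyclone) circulations.
    return float(np.mean(temperature) + 2500.0 * np.mean(humidity))

rng = np.random.default_rng(1)
t_monthly = 280 + 15 * rng.random((12, 30, 60))    # synthetic gridded temperature, K
q_monthly = 0.01 * rng.random((12, 30, 60))        # synthetic specific humidity, kg/kg

print(estimate_available_energy(summer_mean(t_monthly), summer_mean(q_monthly)))
```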

“We can see how this energy goes up and down over the years, and we can also separate how much energy is available for convection, which would manifest itself as thunderstorms for example, versus larger-scale circulations like extratropical cyclones,” O’Gorman says.

Seeing changes now

Since 1979, they found the energy available for large-scale extratropical cyclones has decreased by 6 percent, whereas the energy that could fuel smaller, more local thunderstorms has gone up by 13 percent.

Their results mirror some recent evidence in the Northern Hemisphere, suggesting that summer winds associated with extratropical cyclones have decreased with global warming. Observations from Europe and Asia have also shown a strengthening of convective rainfall, such as from thunderstorms.

“Researchers are finding these trends in winds and rainfall that are probably related to climate change,” Gertler says. “But this is the first time anyone has robustly connected the average change in the atmosphere to these subdaily timescale events. So we’re presenting a unified framework that connects climate change to this changing weather that we’re seeing.”

The researchers’ results estimate the average impact of global warming on summertime energy of the atmosphere over the Northern Hemisphere. Going forward, they hope to be able to resolve this further, to see how climate change may affect weather in more specific regions of the world.

“We’d like to work out what’s happening to the available energy in the atmosphere, and put the trends on a map to see if it’s, say, going up in North America, versus Asia and oceanic regions,” O’Gorman says. “That’s something that needs to be studied more.”

This research was supported by the National Science Foundation.

Training technicians in developing technologies

Fri, 02/15/2019 - 3:00pm

Headquartered at MIT, AIM Photonics Academy is embarking on an ambitious plan to develop a technician-training program in emerging technologies, attempting to answer the question of whether an institute known for educating world-leading scientists and engineers can play a role in helping train an outstanding technician workforce.

AIM Academy is part of the American Institute for Manufacturing Integrated Photonics (AIM Photonics), focused on integrated photonics. The Office of Naval Research recently awarded a $1.8 million Manufacturing Engineering Education Program grant for AIM Academy to create a technician-certification program in collaboration with Advanced Robotics for Manufacturing (ARM). AIM Photonics and ARM are two of 14 public-private manufacturing innovation institutes created as part of a federal program to revitalize American manufacturing, collectively known as Manufacturing USA.

Until now, AIM Academy has focused on training master’s and PhD engineers, which is what companies said they needed, through summer and winter boot camps and online courses. Integrated photonics — putting light-based technology on computer chips — has diverse applications including LIDAR for driverless cars, sensors, data centers, and the internet of things. As the technology moves from the lab to production, companies will not only need highly trained PhDs to compete, they will also need a workforce of skilled technicians to fill their manufacturing lines.

Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering at MIT, leads the AIM Academy program for AIM Photonics. 

“Integrated photonics has enormous potential,” said Kimerling. “AIM Academy is developing programs now that will train workers for the jobs that are coming.” Since the integrated photonics industry is emerging, Kimerling said that the technician-training program would prepare students for the manufacturing positions that are open now, as well as jobs in photonics that will emerge in the years to come.

Both AIM Photonics and ARM have partnered with schools eager to roll out photonics-based certification programs. Pittsburgh-based ARM’s education and workforce development program will work with Westmoreland County Community College in Pennsylvania. AIM Academy, as AIM Photonics’ education and workforce development program, will work with Stonehill College and Bridgewater State University in Massachusetts to develop a program specific to photonics technicians. Currently, both Stonehill and Bridgewater offer four-year degrees, but lack tracks for associate degrees or certification in the field.

The territory is new for both schools. Officials say they are responsible for preparing the future workforce, and are ready to attract a new kind of student and offer their current students access to a certification program that they believe will lead directly to jobs.  

“This effort is part of a larger strategic priority to increase Bridgewater State’s ongoing expansion of educational opportunities and research in the areas of optics and photonics,” said Kristen Porter-Utley, dean of Bridgewater State University’s Bartlett College of Science and Mathematics.

Said Stonehill physics Professor Guiru “Ruby”  Gu: “We envision an innovative work-learn certificate program that brings together industry, higher education and government, and creates a hub for integrated photonics in southeastern Massachusetts.” 

Both Stonehill and Bridgewater officials say that the success of the certification programs begins with more hands-on lab work opportunities for students. The Commonwealth of Massachusetts has committed $28 million in capital equipment grants to AIM Photonics through the Massachusetts Manufacturing Innovation Initiative (M2I2) projects, and has already funded LEAPs (Labs for Education and Application Prototypes) at MIT and Worcester Polytechnic Institute, which will share the facilities with Quinsigamond Community College. Those LEAPs will be open to students who go through the technician-training program.

The 15-month certification program will end in student apprenticeships at local companies.  

“At MIT, we are interested in deploying new technologies. We also have contacts with the companies that will use these technologies,” said Kimerling. “Because of this, we can help train the future workforce.”

Predicting sequence from structure

Fri, 02/15/2019 - 11:00am

One way to probe intricate biological systems is to block their components from interacting and see what happens. This method allows researchers to better understand cellular processes and functions, augmenting everyday laboratory experiments, diagnostic assays, and therapeutic interventions. As a result, reagents that impede interactions between proteins are in high demand. But before scientists can rapidly generate their own custom molecules capable of doing so, they must first parse the complicated relationship between sequence and structure.

Small molecules can enter cells easily, but the interface where two proteins bind to one another is often too large for these molecules to target, or lacks the tiny cavities they require. Antibodies and nanobodies bind to longer stretches of protein, which makes them better suited to hinder protein-protein interactions, but their large size and complex structure render them difficult to deliver and unstable in the cytoplasm. By contrast, short stretches of amino acids, known as peptides, are large enough to bind long stretches of protein while still being small enough to enter cells.

The Keating lab at the MIT Department of Biology is hard at work developing ways to quickly design peptides that can disrupt protein-protein interactions involving Bcl-2 proteins, which promote cancer growth. Their most recent approach utilizes a computer program called dTERMen, developed by Keating lab alumnus, Gevorg Grigoryan PhD ’07, currently an associate professor of computer science and adjunct associate professor of biological sciences and chemistry at Dartmouth College. Researchers simply feed the program their desired structures, and it spits out amino acid sequences for peptides capable of disrupting specific protein-protein interactions.

“It’s such a simple approach to use,” says Keating, an MIT professor of biology and senior author on the study. “In theory, you could put in any structure and solve for a sequence. In our study, the program came up with new sequence combinations that aren’t like anything found in nature — it deduced a completely unique way to solve the problem. It’s exciting to be uncovering new territories of the sequence universe.”

Former postdoc Vincent Frappier and Justin Jenson PhD ’18 are co-first authors on the study, which appears in the latest issue of Structure.

Same problem, different approach

Jenson, for his part, has tackled the challenge of designing peptides that bind to Bcl-2 proteins using three distinct approaches. The dTERMen-based method, he says, is by far the most efficient and general one he’s tried yet.

Standard approaches for discovering peptide inhibitors often involve modeling entire molecules down to the physics and chemistry behind individual atoms and their forces. Other methods require time-consuming screens for the best binding candidates. In both cases, the process is arduous and the success rate is low.

dTERMen, by contrast, necessitates neither physics nor experimental screening, and leverages common units of known protein structures, like alpha helices and beta strands — called tertiary structural motifs or “TERMs” — which are compiled in collections like the Protein Data Bank. dTERMen extracts these structural elements from the data bank and uses them to calculate which amino acid sequences can adopt a structure capable of binding to and interrupting specific protein-protein interactions. It takes a single day to build the model, and mere seconds to evaluate a thousand sequences or design a new peptide.
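
The general workflow — mine statistics from known structural motifs, then score candidate amino acids at each position — can be caricatured as below; this toy is not dTERMen’s energy function, and the motif frequencies are invented.

```python
# Cartoon of the general idea behind structure-based sequence design: score
# candidate amino acids at each position using statistics mined from known
# structural motifs, then pick the best-scoring sequence. This is not dTERMen's
# actual energy function, just an illustration of the workflow.
import math

# Hypothetical per-position amino-acid frequencies gathered from matching motifs.
motif_counts = [
    {"L": 40, "I": 30, "V": 20, "A": 10},   # buried, helix-facing position
    {"E": 50, "D": 25, "K": 15, "Q": 10},   # solvent-exposed position
    {"G": 60, "A": 25, "S": 15},            # tight turn position
]

def position_score(counts: dict, aa: str) -> float:
    total = sum(counts.values())
    return math.log(counts.get(aa, 0.5) / total)   # pseudocount avoids log(0)

def design_sequence(profile) -> str:
    return "".join(max(counts, key=lambda aa: position_score(counts, aa))
                   for counts in profile)

print(design_sequence(motif_counts))   # -> "LEG" for this toy profile
```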

“dTERMen allows us to find sequences that are likely to have the binding properties we're looking for, in a robust, efficient, and general manner with a high rate of success,” Jenson says. “Past approaches have taken years. But using dTERMen, we went from structures to validated designs in a matter of weeks.”

Of the 17 peptides they built using the designed sequences, 15 bound with native-like affinity, disrupting Bcl-2 protein-protein interactions that are notoriously difficult to target. In some cases, their designs were surprisingly selective and bound to a single Bcl-2 family member over the others. The designed sequences deviated from known sequences found in nature, which greatly increases the number of possible peptides.

“This method permits a certain level of flexibility,” Frappier says. “dTERMen is more robust to structural change, which allows us to explore new types of structures and diversify our portfolio of potential binding candidates.”

Probing the sequence universe

Given the therapeutic benefits of inhibiting Bcl-2 function and slowing tumor growth, the Keating lab has already begun extending their design calculations to other members of the Bcl-2 family. They intend to eventually develop new proteins that adopt structures that have never been seen before.

“We have now seen enough examples of various local protein structures that computational models of sequence-structure relationships can be inferred directly from structural data, rather than having to be rediscovered each time from atomistic interaction principles,” says Grigoryan, dTERMen’s creator. “It’s immensely exciting that such structure-based inference works and is accurate enough to enable robust protein design. It provides a fundamentally different tool to help tackle the key problems of structural biology — from protein design to structure prediction.”

Frappier hopes one day to be able to screen the entire human proteome computationally, using methods like dTERMen to generate candidate binding peptides. Jenson suggests that using dTERMen in combination with more traditional approaches to sequence redesign could amplify an already powerful tool, empowering researchers to produce these targeted peptides. Ideally, he says, one day developing peptides that bind and inhibit your favorite protein could be as easy as running a computer program, or as routine as designing a DNA primer.

According to Keating, although that time is still in the future, “our study is the first step towards demonstrating this capacity on a problem of modest scope.”

This research was funded by the National Institute of General Medical Sciences, the National Science Foundation, the Koch Institute for Integrative Cancer Research, the Natural Sciences and Engineering Research Council of Canada, and the Fonds de Recherche du Québec.

MLK Luncheon: America’s bank of justice is overdrawn but not bankrupt

Fri, 02/15/2019 - 10:40am

In one of the less-remembered passages of Martin Luther King Jr.’s celebrated “I have a dream” speech in 1963, he spoke eloquently about the large debt owed by this country to its black citizens after centuries of oppression — which he described as a bad check that was being returned from the bank of justice, marked “insufficient funds.”

That passage formed the theme for this year’s 45th annual MIT Martin Luther King Jr. celebration luncheon, which featured a keynote address by Rahsaan Hall, director of the Racial Justice Program for the Massachusetts branch of the American Civil Liberties Union. “We refuse to believe the bank of justice is bankrupt,” the event’s program proclaimed.

MIT President L. Rafael Reif, referring to King’s words, said that “he spoke at a moment when the nation was rocked by painful inequality and violent suppression. Yet somehow, even in the face of so much turmoil, he was hopeful.”

Reif continued, “He made it clear that, to remain true to its ideals, America’s ‘bank of justice’ owes everyone the same essential guarantee of freedom and equality. Today, we obviously have not conquered discrimination, inequality, and violence. But I believe we can see some signs that the story is changing. And we can certainly see opportunities for each of us to help accelerate that change.”

As one clear example of that progress, he said, “Let’s take a moment to appreciate the fact that the U.S. Congress is now the most diverse in our nation’s history!” And, he said, despite the disturbing stories about political leaders in Virginia who were found to have worn blackface, “even in this discouraging story, I believe there is an important thread of hope.” In King’s time, he said, such activities would have been considered routine, but that’s no longer true. “Today — fortunately, finally! — it is a public outrage.”

In introducing Hall, Reif cited some of his achievements working with the ACLU: “Through a strategic combination of advocating on Beacon Hill, pursuing targeted lawsuits, and engaging people in their neighborhoods, Rahsaan works to advance racial justice in communities across the state,” he said.

Hall also reflected on King’s famous speech, pointing out that while his uplifting words of hope are well-remembered, and the speech “touches us in a very special way,” sometimes people gloss over the tough critique of American society that he also expressed. King referred to the lives of African-Americans as “a lonely island of poverty in the midst of a vast ocean of material prosperity,” and he went on to say that “it’s obvious today that America has defaulted on its promissory note … instead of honoring its sacred obligations, it has given its Negro citizens a bad check.”

He added that King said “he refused to believe that the bank of justice is bankrupt. He refused to believe that there are insufficient funds in the great vaults of opportunity of this nation. So we have come to cash this check, that will give us on demand the riches of freedom and security of justice.”

Hall pointed out that while King spoke of the lofty vision embodied in the U.S. Constitution, its drafters never really imagined that it would apply to all of humanity, including black people, native Americans, and women. “Even though King’s vision was one of hope and of high ideals, the reality is that this [Constitution] is a document that is rooted in a history of white supremacy. Not white supremacy as people think of skinheads and neo-Nazis and alt-right, but white supremacy as a system or structure of beliefs that center and prioritize and lift up and normalize white lives, white values, white beliefs, at the expense of the lives, values, property, behavior, and cultures of people of color.”

He described in detail some of the laws and policies after emancipation that codified a deep level of discrimination and disempowerment, including laws that criminalized not having a job or a permanent residence, and that he said amounted to a new form of state-sanctioned slavery. Discrimination continued to be formalized well into the 20th century, through “separate but equal” policies that enforced segregated housing and education. “Jim Crow did not operate alone. He had an Uncle, and his name was Sam,” Hall said.

Even though there has been much progress, Hall said, recent research has shown a mixed picture, with both advances and setbacks since the Kerner Commission report in the 1960s that found systematic discrimination throughout American society. “I say to you that the bank of justice is not actually bankrupt,” he concluded, “but rather America’s account is overdrawn. There is too much justice for a small segment of society, at the expense of too many others.”

The annual luncheon, as always, included musical selections as well as tributes to this year’s recipients of the Martin Luther King Jr. Leadership Award and to visiting professors and scholars, as well as talks by selected graduate and undergraduate students.

Dasjon Jordan, a graduate student in the Department of Urban Studies, said that the promissory note King spoke of “was not just about racial harmony and handholding. But Dr. King’s address was explicitly about racial and economic justice. It was about people of color having their rights as Americans activated and being able to access fair employment opportunities, housing, education, and to simply provide quality lives for their families.”

Jordan asked, “What are we doing as a body to not only make sure that classrooms aren’t just diverse and inclusive by the number of skin tones we count, but by the content of our curriculum and our actions to prioritize equity and racial justice? We must remember that diversity and inclusion are not substitutes for justice and equity. Justice and equity should not be a suggestion here, but our collective mission.

“The world is watching not only what we produce, but the values we champion and the processes we take to get there,” he said. “These values and processes become the checks we deposit to America’s bank as we work. … Our engagement should bring problems of racial, economic, and social injustice to the heart of our institution and our daily actions. We must all ask ourselves the hard questions and hold ourselves accountable to solving them with fierce urgency.”

Nikayah Etienne, a senior in mechanical engineering, described growing up in a predominantly black, immigrant community and school, and finding that she first really experienced being a racial minority when she began her studies at MIT. She realized that while this made her highly visible, it also made her often overlooked. But she soon found groups of black students and faculty in which she felt included and respected.

“I’m leaving here with significant lessons and experiences,” she said. “I leave here knowing that I have grown as an activist. I leave here knowing that I want to continue to touch the lives of young boys and girls who have come from similar backgrounds to me, reminding them that systematic racism and stereotyping do not define their potential.”

She added, “I challenge everyone sitting here, and all the members of the MIT community, to start making it a vision and a priority of yours, to aid students of color in cashing their own checks. I challenge you to take the necessary action to move MIT toward a more equitable community. Let our voices be heard.”

From summer research program to PhD dissertation

Fri, 02/15/2019 - 12:00am

One of the most important aspects of MIT’s educational mission is preparing students to be effective members of their scientific and technological communities. For Raspberry Simpson, that process began when she was a 17-year-old participant in the MIT Summer Research Program (MSRP); it is reaching fruition today as she pursues her doctorate in nuclear science and develops novel diagnostics for inertial confinement fusion and high-energy-density physics experiments at some of the country’s most advanced research facilities.

In 2010, Simpson (then a student in Bard College’s Early College program) worked with MIT physics professors Lindley Winslow and Janet Conrad at the Laboratory for Nuclear Science. In addition to the academic work she did in the MSRP, she recalls, “they put it into my mind subconsciously that MIT was a place for me, that I could do science and be accepted in this space. I can’t emphasize enough how important that is.”

Shortly afterward, Simpson transferred to Columbia University to complete her bachelor’s degree in applied physics. During that time she took a year off from her studies to assist Winslow with the development of a neutrino detector and to work on astrophysics experiments at Los Alamos National Laboratory, where she received important mentoring.

“I really enjoyed the national laboratory environment; it’s really special to have that many scientists in one place working towards a similar goal,” says Simpson.

In large part because of her experience in MSRP, which seeks to motivate members of under-represented groups to pursue graduate education, Simpson applied to the MIT Department of Nuclear Science and Engineering (NSE) for her PhD studies. “I felt I had a science family here,” she says. “Also, Mareena Robinson, who did the MSRP at the same time I did, was in the PhD program. Having representation from women, especially black women, in the department was a huge factor in me wanting to come back.”

Today, a primary focus for Simpson is developing diagnostics that allow assessment of the performance of inertial confinement fusion (ICF). There has been a recent surge of optimism about fusion becoming a practical, plentiful, carbon-free energy source, with increased private funding and several private companies (including MIT spinout Commonwealth Fusion Systems) announcing roadmaps for demonstration fusion power plants by the mid-2020s.

To achieve that, ICF compresses pellets of the hydrogen isotopes deuterium and tritium to such extreme temperatures and densities that their nuclei fuse. Each reaction creates a heavier helium nucleus and releases a large amount of energy, most of it carried away by a fast neutron. Work to date has been promising, but researchers have struggled to extract the full measure of energy from the process.
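As a rough illustration of the energetics involved (and not a description of Simpson’s own diagnostic work), the short Python sketch below computes the energy released by a single deuterium-tritium fusion reaction and how it is divided between the neutron and the helium nucleus, using standard published atomic masses.

    # Back-of-the-envelope energetics of a single D + T -> He-4 + n reaction.
    # Masses are standard atomic masses in unified atomic mass units (u).
    U_TO_MEV = 931.494                 # energy equivalent of 1 u, in MeV

    m_D, m_T = 2.014102, 3.016049      # deuterium, tritium
    m_He4, m_n = 4.002602, 1.008665    # helium-4, neutron

    # The mass lost in the reaction reappears as kinetic energy (the Q-value).
    q_value = (m_D + m_T - m_He4 - m_n) * U_TO_MEV   # about 17.6 MeV

    # With the reactants nearly at rest, momentum conservation gives the
    # light neutron the larger share of the energy.
    e_neutron = q_value * m_He4 / (m_He4 + m_n)      # about 14.1 MeV
    e_alpha = q_value - e_neutron                    # about 3.5 MeV

    print(f"Q = {q_value:.1f} MeV; neutron {e_neutron:.1f} MeV; alpha {e_alpha:.1f} MeV")

The roughly four-to-one split is why most of the energy from deuterium-tritium fusion leaves the fuel as neutron kinetic energy rather than staying behind to heat the pellet.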

“The problem we’ve noticed is that there are lots of asymmetries in the implosion; if you think about trying to compress a basketball to the size of a pea, it would be difficult to keep it perfectly spherical,” explains Simpson. “That leads to inefficiencies.”

Simpson is working to develop new ways of measuring and characterizing these asymmetries during the implosion, using a pair of orthogonally positioned charged-particle instruments to measure the spectra of deuterons (deuterium nuclei) scattered during the process. The approach allows inference of variations in density and symmetry.

“Fusion is very complex, and you need as many diagnostics and as much information as you can get to understand the dynamics of these experiments,” notes Simpson, whose role at MIT’s Plasma Science and Fusion Center also connects her to the center’s research into magnetic-confinement fusion, the other leading potential path to energy production.

The project is supported by grants from the U.S. Department of Energy (DoE) and the University of Rochester’s Laboratory for Laser Energetics (LLE); Simpson has worked on several projects at the LLE’s Omega laser facility, a key research resource for fusion and other high-temperature, high-density phenomena.

In addition, Simpson was chosen this year for the inaugural class of the DoE National Nuclear Security Administration’s Laboratory Residency Graduate Fellowships, which support long-term security-related study and research at national labs. She will build a charged-particle spectrometer for a group under Tammy Ma at the National Ignition Facility at Lawrence Livermore National Laboratory, which is using a high-intensity petawatt-class laser to generate highly accelerated ions for use in radiography of a variety of targets.

Simpson recently passed her NSE qualifying examinations and is now turning her attention to her dissertation, which will cover the two projects described above as well as an additional effort that uses knock-on deuterons to image ICF asymmetries.

“Our group in the High Energy Density Physics Division has lots of fingers in lots of pies, like fusion, high energy density science, and astrophysics, so my dissertation will include multiple projects,” says Simpson. The group recently received a prestigious Center of Excellence award from the National Nuclear Security Administration.

Looking ahead, Simpson says she would enjoy working at a national laboratory, because of both the research culture and labs’ role in cultivating new generations of scientists. “The national labs have a deep understanding of the value of students, and they wouldn’t exist without continued stewardship of student talent, and I’d like to position myself in that environment. I’m not mentoring yet, but eventually I would like to give back in that way.”

She’s also a big fan of the 32-year-old MSRP, and of Institute efforts to make the science and engineering communities more inclusive. 

Dance brings a community together

Thu, 02/14/2019 - 12:55pm

For the first time during MIT’s Independent Activities Period (IAP), the MIT Bhangra Dance Team held a series of dance workshops for the MIT community.

More than two dozen students packed into McCormick Hall’s dance studio to learn step-by-step choreography prepared by two Bhangra dance team members, MIT juniors Rishi Sundaresan and Tarun Kamath.

“We decided to have these workshops during IAP because we figured people at MIT would have more free time,” says Divya Goel, a senior and co-captain of MIT Bhangra. “I think we had one of the biggest turnouts ever because of this, which is awesome.”

Bhangra, which originates from the state of Punjab in northern India, is a high-energy, upbeat folk dance traditionally performed at harvest festivals and celebrations. With its growth in popularity in recent years, bhangra has become a competitive dance form around the world.

MIT Bhangra started in 1991 with a mission to spread and share bhangra traditions and culture. Kamath says he joined the dance group because he wanted a community where he could have fun and de-stress, but it turned into something bigger.

“Being part of a dance team starts out as loving the dance form, but what it becomes is a community and a family that you can appreciate for many years,” he says.

In addition to their performances on campus and at dance competitions, each summer the group hosts Summer Bhangra, a twice-weekly dance workshop for people of all ages and skill levels in the Greater Boston area.

“Knowing that we’re able to teach people so quickly and seeing everyone happy from learning this dance style is really rewarding,” says Goel.

Kamath says that at the end of the day, it’s about more than learning the dance moves.

“If you can walk out of the dance workshop and had a fun two hours, then that’s the best thing that can be said.”
