MIT Latest News
Along with asteroids, the moon, and the International Space Station, there are hundreds of small, 10-centimeter cubes orbiting planet Earth. Alexa Aguilar, a first-year graduate student in the Department of Aeronautics and Astronautics, is helping these small satellites, called CubeSats, communicate.
“We’d like to expand this to what we call ‘swarm technology.’ Imagine you have three, four, up to, you know, x-amount of little cubes that can talk to each other with lasers,” she explains. “You could have these massive constellations of them! For example, you could have a cluster [of CubeSats] here, a cluster there, and each of the clusters has its own camera. They could talk to each other with lasers, and they could send imaging data, and you could computationally mesh all of the [individual] pictures that they’re taking to form a massive picture in space.”
A picture like this could offer a cost-effective way to monitor Earth. In cases of natural disasters, which require rapid response and constant updates, such observational capabilities could be life-saving.
Aguilar is a builder of connections in many other aspects of her life as well. A space enthusiast, supporter of women in STEM, and mentor to other AeroAstro students, she exudes a natural warmth and self-assurance that brings people together.
Discovering herself in Cambridge
As an electrical engineering student at the University of Idaho, Aguilar didn’t always foresee herself at MIT. “My path to MIT really blossomed out of a summer internship at NASA’s Jet Propulsion Laboratory (JPL) when I was in undergrad,” she says. “I met a lot of awesome people, and everyone was doing something more incredible than the last person I’d heard.”
Some of the friends she made there encouraged her to look into the Space Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab), where Aguilar now does her research.
Despite the many friendships she has forged at MIT, Aguilar sometimes struggles with the distance from her close-knit family in Idaho. Sundays are her unofficial Skype day with her mom, with whom she is particularly close. (They share a hard-headedness, Aguilar says with a smile.) Luckily, New England offers at least one of the comforts of home: Aguilar is an avid skier. This winter she attended the yearly MIT Graduate Student Council ski trip with her AeroAstro and JPL colleagues — along with some new MIT friends.
Aguilar enjoys the modern energy of Cambridge, calling it “a really nice mesh of cutting-edge technology and young, excited people who want to do cool things with it.” She also takes full advantage of the cultural opportunities in the area. She and her boyfriend share an avid interest in Japanese art and culture, and enjoy visiting the Museum of Fine Arts, which houses one of the largest collections of Japanese art in the world outside of Japan. Most recently, they visited a vibrant exhibit by artist Takashi Murakami.
Aguilar also circulates among the area restaurants, which she has thoroughly researched. “I’m trying really hard to be a foodie,” she laughs earnestly. Her favorite area restaurant, Coreanos, offers Korean-Mexican fusion — perhaps not coincidentally, this reflects Aguilar’s own heritage.
Aguilar’s mother is Korean and her father is Mexican and Native American, but Aguilar’s upbringing wasn’t strongly influenced by her parents’ ethnic backgrounds. Recently, though, Aguilar has felt pulled to explore her Mexican heritage more fully.
“It’s actually been a really interesting journey in discovering what my cultural background means to me,” she says. Aguilar recalls her grandfather, a Mexican immigrant, telling her she didn’t need to learn Spanish as a child. However, she also recalls seeing him transform into a new person at his favorite Mexican restaurant, where he would banter with the cooks and servers in his native language.
Their grandfather recently passed away, but Aguilar and her sister are actively using Spanish to feel closer to him and to understand his heritage: “[I think about] the little ways he did pass his culture on that he didn’t realize. … It’s like discovering a part of myself.”
Women supporting women
Another activity close to Aguilar’s heart is her involvement with Graduate Women in Aerospace Engineering (GWAE), a student group geared toward recruiting and supporting women in the AeroAstro department at MIT. “We have an incredible support group. The GWAE [members] are all pretty tight-knit, which is exactly the kind of community we are trying to foster,” she says.
The group has four arms: community building, a women-in-STEM speaker series, mentorship, and outreach and recruitment. As the current co-president of GWAE, Aguilar is involved in all of these efforts. “It’s important to me that women have this kind of support and encouragement because I wouldn’t be here without the support of women,” she says.
Aguilar welcomes the opportunity to build that same support in AeroAstro today.
She has many thoughts on why more women don’t pursue graduate work in aerospace engineering: “I think what happens … is a lot of women undergraduates go into industry — which is awesome, we’re really happy about that because it means a lot of them get job offers and they’re excited to go out and work. But most of the incoming graduate women come from undergraduate aerospace programs, and I think a lot of women [from other fields] may be intimidated to apply to the program. So then we don’t have the input to compensate for the number of women who have gone off to industry.”
As such, GWAE works hard to make the department seem approachable — a task to which Aguilar seems particularly well-suited.
She is grateful to have a female advisor in Kerri Cahoy, the Rockwell International Career Development Professor, whom she deeply admires. “She’s a rock star. She’s amazing — I don’t know how she does it. She’s involved in multiple flight projects, which are projects that are bound for orbit … she has a career, she has a family, she’s super successful. … [I want to say,] teach me your secrets!”
Aguilar is also a mentor to undergraduate women in the department and delights in helping her mentees secure the best internships and other opportunities that they can.
While Aguilar is still deciding whether to pursue a doctoral degree in the AeroAstro program or to conclude her work with a master’s degree, she is confident that she will remain involved in space research and engineering. Space, to Aguilar, is less a frontier than a dynamo for scientific progress: It generates research that, when applied, will eventually also transform more down-to-Earth technologies.
“Space research is going to be where so much exciting new science and technology comes from,” Aguilar says. “You hear a lot about Mars 2020 … but the technology it takes to get us there is actually really incredible. I’m excited to see emergent applications like how our internet is going to change because we’re trying to get people to Mars. We need the same technology to send a message to the moon that we need to relay to Mars, and it’s that technology that’s going to have global impact.”
Bruno Verdini is executive director of the MIT-Harvard Mexico Negotiation Program, a lecturer in urban planning and negotiation at MIT, and co-founder of MIT’s concentration in negotiation and leadership. He teaches The Art and Science of Negotiation, one of MIT’s highest ranked and most popular electives (with over 500 students from 20 different departments pre-registering per year), and leads training and consulting work for governments, firms, and international organizations around the world. The research underpinning his new book with MIT Press, "Winning Together: The Natural Resource Negotiation Playbook," was the winner of Harvard Law School’s award for best research paper of the year in negotiation, mediation, decision-making, and dispute resolution. He talked with the MIT Energy Initiative following a recent seminar in which he discussed his research and shared expertise on negotiating for mutual gains.
Q. What drew you to study negotiation, and has your interest always been in conflict resolution, or did that evolve over time?
A. I fell in love with the field because it requires a full engagement, with mind, hands, and heart. Negotiations are present in every single professional activity and in our daily personal lives. They entail feeling comfortable with the unknown but curious about how to render it familiar, through individual preparation and collaborative decision-making, showcasing the ability to persuade and the desire to be persuaded, as well. As such, it is an eminently human endeavor, highly analytical and at the same time spiritual. Whether we find ourselves with family, friends, colleagues, partners, or foes, negotiations offer an opportunity to communicate and pursue our principles and aspirations, and as such, a chance to learn from each other (and inevitably, about ourselves!). That’s a transformative opportunity. Whether we have the foresight, willpower, and humility to root out our blind spots, move away from vicious cycles, and build new and better bridges, is up to us. I embrace that responsibility at the heart of the field, as it involves constant self-reflection and the belief that we can learn from our past to change our present and build a better future. In sum, I experience negotiation as an exhilarating expedition that brings new challenges every day, and where our moral compass plays a crucial role.
Q. For your book, you interviewed more than 70 high-ranking officials who were involved in U.S.-Mexico negotiations around energy resources in the Gulf of Mexico as well as water and environmental resources within the Colorado River Basin. How did your conversations with them inform your thinking on the kinds of challenges people need to be aware of and overcome to maximize the potential for successful negotiations?
A. Look around, at work or on your way home, and you’ll see people with self-serving biases and faulty beliefs that cause them to miss opportunities and arrive at needless standoffs. Look inward, and you’ll probably see a couple of hurdles keeping you from being your best version, too. Decades of empirical research support the notion that we tend to see stakeholders and situations in biased ways, with harmful effects at the negotiating table (and beyond). We all struggle with change in different ways at different moments, so, without proactively documenting and practicing against these traps, once we return to complex, ambiguous, stressful, highly competitive, and rapidly changing situations in our professional or personal life, the cognitive and motivational biases that besiege us tend to re-emerge. Against this backdrop, on a transboundary scale, I wanted to examine and piece together, through the eyes of the stakeholders on all sides and across all levels, whether and how these blind spots and faulty beliefs had been dislodged, as part of the efforts to solve high-stakes resource management conflicts that had lasted for over seven decades. In my experience, whenever you focus on how people work side-by-side against the problem (rather than against each other), good insights tend to emerge.
Q. Which negotiating strategies do you consider crucial no matter what area you’re working in, be it natural resources, politics, business, or another area?
A. There are so many, depending on the scenario, the stakes, and both the processes and outcomes we want to foster. In “Winning Together,” I focus on 12 strategies in approximate chronological order, from well before a negotiation is initiated to follow-up measures after an agreement has been implemented. One element to reiterate is that there are great differences between acquiring power and wielding it effectively. A zero-sum mindset, which is quite frequent in the world, can secure the former but is seldom useful for the latter. If we want to address the complex challenges that besiege our communities, instead of blaming each other or kicking problems down the road, we have to foster leadership practices that better unearth all valuable sources of information and empower willing stakeholders to shape meaningful action. Communities need to provide each other the opportunity to build together and test new courses of action during a pilot period. Such trials can garner support, easing fears of the unknown by securing an end date from the outset. Once the pilot is underway, stakeholders can experience its impacts firsthand. Should the pilot result in more benefits than costs, the stakeholders will become advocates for this approach. In sum, a commitment to put ourselves in the other side’s shoes, when intertwined with reciprocity, tends to lead to more creative solutions, a shared sense of fairness, and resilience in the implementation of partnerships. Communities thrive when we do that.
When people think of high-stakes negotiations — whether between countries or interest groups or individuals — they generally picture raised voices and table pounding as each side competes to get an outcome that’s in its best interest. But research and experience suggest that a different approach may ensure a more efficient process and a better result.
“There’s evidence that one of the best ways to satisfy one’s own interests is to find an effective way to meet the core interests of the other side,” says Bruno Verdini, lecturer in urban planning and negotiation and executive director of the MIT-Harvard Mexico Negotiation Program. “Embracing a mutual-gains approach to negotiation implies switching away from the traditional, widespread, zero-sum, win-lose mindset in order to structure the negotiation process instead as an opportunity for stakeholders to learn about and respond to each other’s core needs. The result tends to be a more robust agreement that both sides experience and view as beneficial.”
Experts have come up with independent theories about how to enhance adaptive leadership, collaborative decision-making, political communication, and dispute resolution in communities. Verdini wanted to integrate those theories for the first time and apply them to the realm of natural resource management, exploring what had happened in real situations where stakes were high.
To that end, he decided to examine two long-running disputes between the United States and Mexico regarding shared natural resources — hydrocarbon reservoirs in the Gulf of Mexico and environmental and water resources from the Colorado River. In both cases, after seven decades of stalemate, the countries came up with joint agreements in 2012. Since then, both deals have been implemented, enhanced, and renewed for the foreseeable future, despite changes in the binational political landscape over the past few years.
The energy dispute
Defining the rules for deep-water energy production along the U.S.-Mexico maritime boundary had been a source of contention for some 70 years. Within 200 nautical miles from either coastline, ownership is clear. But in the middle of the hydrocarbon-rich Gulf of Mexico, the question remained: How do the U.S. and Mexico engage with each other to address the potential for reservoirs straddling the boundary?
For decades, the notion of U.S. and Mexican energy companies working together didn’t seem feasible. The U.S. adhered to the “rule of capture,” which asserts that if a company drills into a reservoir on the U.S. side, regardless of whether the reservoir crosses the border, it owns all of the extracted oil: first-come, first-served. Mexico argued that this unilateral behavior was not fair, but since it had strict constitutional rulings forbidding joint drilling between international energy companies and Petróleos Mexicanos (PEMEX), its national oil company, cooperation seemed unlikely. In 2000, stuck at an impasse, the two countries agreed to place a decade-long moratorium on drilling in the contested area.
In 2010, the U.S. and Mexico agreed to extend that moratorium for another four years, but this time, they had a new plan in the works. By 2012, after less than 18 months at the negotiating table, the two sides had signed a landmark agreement that would overhaul all prior practices and incentivize U.S. and Mexican companies to jointly develop shared hydrocarbon reservoirs — the first significant offshore energy partnership in the history of relations between the two countries. That agreement would later be complemented and further enhanced by a series of landmark domestic energy reforms in Mexico.
To understand that breakthrough, Verdini talked with the stakeholders from both countries who directly conducted the negotiations. “The idea behind the research was simply to learn from the people who were involved in those high-stakes negotiations,” he says. “The focus was not on the features of the agreement but on what they did day-to-day as they negotiated it. What kind of conversations did they set up? What kind of strategies did they follow to resolve these disputes?” Ultimately, his goal was to find out what actors from different organizations and backgrounds on both sides thought worked in practice.
To that end, he interviewed more than 70 individuals, including all key political leaders, such as presidents, secretaries, and ambassadors; CEOs as well as technical and scientific experts in industry; and general managers of environmental organizations. In those in-depth interviews, the U.S. and Mexican negotiators unpacked the decisions and practices they thought transformed the negotiation process and product — allowing Verdini to piece together what both sides considered, unbeknownst to each other, as the key steps and strategies. He has synthesized this research in a new book, “Winning Together: The Natural Resource Negotiation Playbook.”
Flexible leadership, unusual first steps
As is the case with any window of opportunity, one factor shaping the start of negotiations was simply that conditions and attitudes had changed. Mexico’s production from onshore and shallow-water fields had declined sharply, and the country needed a new source of revenue. Promising new fields were mostly located in deep water, and Mexico needed access to a level of investment, risk-sharing, and technological expertise that would be possible only through international partnerships. While the U.S. still supported the rule of capture, high-ranking U.S. officials noted the potential benefits of cooperation, including opening up Mexico’s energy sector to foreign investment and improving U.S. energy security, in tandem with positive impacts on Mexico’s economic and social stability. Thus, both sides wanted to end the moratorium on development — but they didn’t know how to go about it.
In spring 2010, when the presidents of the two countries publicly announced that they wanted an agreement and specified that they wanted it to be a mutual-gains solution, there was still no blueprint on how to bridge a whole host of divides. However, that unprecedented broad mandate, which had been years in the making, empowered high-level negotiators to take some unusual steps.
For example, several months before negotiations officially started, as per traditional diplomatic protocols, the Mexican authorities sent a draft agreement to the United States. The preliminary review by U.S. agencies found that the draft contained hundreds of terms that the U.S. was unlikely to accept. Typically, the next step would be for the U.S. to send back a counterdraft presenting objections and suggestions, paragraph by paragraph — and the incessant draft-counterdraft cycle would begin.
However, the lead U.S. negotiator — a lawyer in the U.S. State Department who had first-hand experience working in the oil industry — recognized that such a negotiation pattern was doomed to failure because each side couldn’t fully know, at the outset, where the other was coming from, given fundamental differences in industry behavior, market incentives, and legal structures. He therefore suggested that before starting any face-to-face negotiations, the two groups of negotiators should launch a series of collaborative workshops to develop a deeper understanding of how each country operated in the Gulf of Mexico.
At first, the Mexican negotiators thought this step was merely a delay tactic by the U.S. Yet soon enough, as a result of time spent working side by side, all involved felt that those workshops — held monthly in different locations in the U.S. and Mexico — proved critical. The Mexican and U.S. participants learned about each other’s political, legal, and economic goals and constraints in a positive environment. Perhaps more important, they got to know one another personally and to build rapport. “So they had the opportunity to start sharing information in ways that they had never done before,” says Verdini. “And they were able to genuinely put themselves in the other side’s shoes and better appreciate their concerns and interests.”
An unprecedented move by Mexico broke down another barrier to progress, namely, differing assumptions about what was at stake in the dispute. Actors on the U.S. side came into the negotiations claiming that the existence of cross-boundary reservoirs was doubtful — geological formations in the region tend to be tall and narrow — so there was little need for an agreement on how to manage them. Actors on the Mexican side argued that their existence was likely and feared that, without a joint agreement, conflict could erupt if companies drilled on one side, draining hydrocarbons in the shared reservoirs.
Finally, to break the impasse, leaders at the Mexican Ministry of Energy and PEMEX invited U.S. government officials to PEMEX’s state-of-the-art, three-dimensional visualization center in Tabasco, where they were allowed to observe proprietary geological data that suggested why Mexico felt there were transboundary hydrocarbons. That action was a game-changer. It demonstrated how serious Mexico was about moving forward with negotiations and showcased the developing trust between the two sides.
To reciprocate, the United States hosted Mexican officials for presentations in New Orleans, providing details about the formal arrangements under which U.S. companies forgo the rule of capture and sign “unitization” agreements to work with each other in the Gulf of Mexico. Those agreements allocate operating risks and revenues between coordinating parties and ultimately increase total output from a given reservoir. That information clarified that both sides were making strides to collaboratively address their common interests.
Protecting the process
As the negotiations unfolded, the U.S. and Mexican leaders were increasingly committed to protecting the collaborative spirit of the negotiations — a spirit that was tested with some frequency. For example, on several occasions a politically well-connected figure from one side or the other would visit the negotiations and display a confrontational attitude. “When that happened, the operational leader on the side of the errant interloper would ask the other — through private conversation — not to pay mind to this behavior but rather let the person rant and then move on,” explains Verdini. Those unusual and reciprocal assurances kept the negotiation process from being derailed by ineffective tactics.
Early engagement of environmental leaders also enhanced the process. Typically, nongovernmental organizations (NGOs) would try to block any proposal to open up new acreage for offshore drilling. However, in proactive and frequent meetings with NGO advocates, the negotiators mapped out the realities of the scenario: Given its dwindling revenues, Mexico would surely drill more in the Gulf of Mexico. If both countries proceeded on their own, multiple wells on both sides of the maritime boundary would be drilled, increasing the likelihood of spills and mishaps. If instead a collaborative agreement were in place, U.S. and Mexican partner companies would have access to the full geological picture. They could then share the most advanced technology and expertise on how to proceed, permitting drilling to occur on fewer, carefully selected, joint sites. That perspective — along with an innovative provision establishing a process for joint safety inspections — led the environmental NGOs to conclude that, while they could not advocate for passage of the agreement, they would not step in to oppose it.
Communicating with the public
Another notable feature of the negotiations was the careful crafting of all public communications. Reports of high-stakes negotiations generally focus on conflict and mistrust — a unilateral message that engages readers and listeners on both sides but can hinder the process of reaching a mutual resolution. In this case, the two countries agreed to communicate through joint declarations — and to keep those to a minimum. Public releases focused on the real benefits to be seized together, a narrative that gave both sides “victory speeches” they could deliver to their stakeholders and constituents back home.
In addition, PEMEX devised a media campaign that — without mentioning the ongoing negotiations — stressed that production from the usual shallow offshore fields was diminishing and that getting the most out of promising new deepwater fields would require collaboration with international energy companies. The resulting tax revenues would fund pressing education, public health, and security initiatives.
Incentives rather than requirements
The incentives included in the agreement are a remarkable feature derived from the negotiation process itself, according to Verdini. One creative element is the mechanism the binational deal sets up for resolving conflicts. When partner companies are jointly developing a deepwater reservoir, new information can bring the initial allocation of ownership into question, and the partners may not agree on an appropriate redetermination. Mexico wanted such disputes to be settled by an international court, a proposal that for political reasons was unacceptable to the United States.
Working together, the negotiators came up with a dispute-resolution process unlike any other in transboundary deepwater agreements across the world. The process involves three steps: first, a dialogue between industry CEOs; next, mediation aided by neutral third parties; and finally, arbitration by impartial adjudicators who issue a nonbinding ruling. If there’s still no agreement, then either of the two governments can step in to stop production. But such a move would mean financial losses of hundreds of millions of dollars, given the investment costs of drilling wells — a strong incentive for the parties not to trigger disputes for mere political reasons and a creative way to reach a resilient agreement without binding resolutions.
Also striking is how the binational deal encourages companies to unitize. Mexico preferred mandated unitization, but the U.S. wouldn’t accept such a mandate in light of the precedent it could set against the rule of capture in other parts of the world. That deadlock was broken by an arrangement that may seem surprising but actually demonstrates the deep mutual understanding of the negotiators.
If unitization seems impossible, a company can produce from a transboundary reservoir on its own. However, the agreement states that the company’s rights will be based on the estimated percentage of oil resources on its side of the maritime boundary, according to available seismic data — a risky undertaking because seismic estimates before drilling are often incorrect. In addition, it’s well known that producing from only one side of the maritime boundary will likely reduce overall recovery rates and thus profits. As a result, the side that begins production first inevitably damages not only the interests of the nonproducing party but also its own.
Since proceeding alone increases the likelihood of subpar outcomes from geological, business, and political perspectives, the agreement provides incentives for unitization without requiring it — an arrangement that’s politically feasible and implementable for both sides.
Based on his research findings from the Gulf of Mexico and the Colorado River negotiations, as well as a larger review of transboundary water practices around the world, Verdini prepared a practical, step-by-step guide to high-stakes energy, water, and environmental negotiations between developed and developing countries. The guide is described in detail in his book.
But identifying those steps was only part of the journey. Verdini now uses his innovative pedagogy in courses and workshops in which all participants — from government leaders and industry executives to students across disciplinary bounds — practice the strategies to further advance their interests without compromising on their principles.
In the end, Verdini wants to empower everyone to be an effective negotiator — as he says, “so you can sit down with someone who might be different from you, who might have different challenges and different priorities, and you can trust that, with these negotiation skills and strategies, you can work together to make the world a better place.”
This research was supported by the MIT Department of Urban Studies and Planning and the MIT Department of Political Science. Verdini received MIT’s first-ever interdisciplinary PhD in negotiation, communication, diplomacy, and leadership. The research is the recipient of Harvard Law School’s award for best research of the year in negotiation, mediation, and conflict resolution, the first time the honor has been awarded to faculty based at MIT. In partnership with several agencies in Mexico, Verdini is now heading the development of a binational negotiation center devoted to training stakeholders and organizations in different fields in the theory and practice of the mutual-gains approach to negotiation.
This article appears in the Spring 2018 issue of Energy Futures, the magazine of the MIT Energy Initiative.
Scientists at MIT and elsewhere have analyzed data from K2, the follow-up mission to NASA’s Kepler Space Telescope, and have discovered a trove of possible exoplanets amid some 50,000 stars.
In a paper that appears online today in The Astronomical Journal, the scientists report the discovery of nearly 80 new planetary candidates, including a particular standout: a likely planet that orbits the star HD 73344, which would be the brightest planet host ever discovered by the K2 mission.
The planet appears to orbit HD 73344 every 15 days, and based on the amount of light that it blocks each time it passes in front of its star, scientists estimate that the planet is about 2.5 times the size of the Earth and 10 times as massive. It is also likely incredibly hot, with a temperature somewhere in the range of 1,200 to 1,300 kelvins, or around 2,000 degrees Fahrenheit — about the temperature of lava from an erupting volcano.
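The size estimate follows from simple transit geometry: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. A minimal sketch of that relation — the transit depth and stellar radius below are illustrative placeholders, not the measured parameters of HD 73344:

```python
import math

# Transit photometry: the fractional drop in stellar flux (the "depth")
# is approximately (R_planet / R_star)^2, so R_planet = sqrt(depth) * R_star.
def planet_radius_earths(transit_depth, star_radius_suns):
    SUN_RADIUS_EARTHS = 109.2               # solar radius in Earth radii (approx.)
    radius_ratio = math.sqrt(transit_depth)  # R_planet / R_star
    return radius_ratio * star_radius_suns * SUN_RADIUS_EARTHS

# Illustrative numbers only: a 0.05% dip around a Sun-sized star
# implies a planet of roughly 2.4 Earth radii.
print(round(planet_radius_earths(0.0005, 1.0), 1))  # prints 2.4
```

This is why brighter host stars are so valuable: the same fractional dip is measured with far less photometric noise, tightening the radius estimate.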
The planet lies at a relatively close distance of 35 parsecs, or about 114 light years from Earth. Given its proximity and the fact that it orbits a very bright star, scientists believe the planet is an ideal candidate for follow-up studies to determine its atmospheric composition and other characteristics.
“We think it would probably be more like a smaller, hotter version of Uranus or Neptune,” says Ian Crossfield, an assistant professor of physics at MIT who co-led the study with graduate student Liang Yu.
The new analysis is also noteworthy for the speed with which it was performed. The researchers were able to use existing tools developed at MIT to rapidly search through graphs of light intensity called “lightcurves” from each of the 50,000 stars that K2 monitored in its two recent observing campaigns. They quickly identified the planetary candidates and released the information to the astronomy community just weeks after the K2 mission made the spacecraft’s raw data available. A typical analysis of this kind takes between several months and a year.
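The kind of search described above can be illustrated with a toy phase-folding box search: fold the lightcurve at each trial period and look for a phase bin that dips consistently below the baseline. This is a deliberately simplified sketch with invented numbers — the actual MIT tools involve detrending, a Box Least Squares search, and candidate vetting, none of which is shown here:

```python
import numpy as np

def simple_transit_search(time, flux, trial_periods, n_bins=200):
    """Toy transit search: phase-fold the lightcurve at each trial period
    and score how far the faintest phase bin dips below unity.
    (Illustrative only -- real pipelines use detrending plus a
    Box Least Squares search, not this simplification.)"""
    flux = flux / np.median(flux)              # normalize baseline to ~1
    best_period, best_depth = None, 0.0
    for p in trial_periods:
        phase = (time % p) / p                 # fold onto [0, 1)
        bins = (phase * n_bins).astype(int)
        sums = np.bincount(bins, weights=flux, minlength=n_bins)
        counts = np.bincount(bins, minlength=n_bins)
        binned = sums[counts > 0] / counts[counts > 0]
        depth = 1.0 - binned.min()             # deepest bin's dip
        if depth > best_depth:
            best_period, best_depth = p, depth
    return best_period, best_depth

# Synthetic 80-day campaign: inject a 15-day, 0.1%-deep box-shaped transit
rng = np.random.default_rng(0)
t = np.arange(0, 80, 0.02)                     # ~30-minute cadence
f = 1.0 + rng.normal(0.0, 1e-4, t.size)        # photometric noise
f[(t % 15.0) < 0.2] -= 1e-3                    # transit dips
period, depth = simple_transit_search(t, f, np.linspace(10, 20, 201))
# recovers a period of ~15 days and a depth of ~0.1%
```

At the wrong trial period the dips smear across many phase bins and average away, which is why the folded depth peaks sharply at the true period.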
Crossfield says such a fast planet search enables astronomers to follow up with ground-based telescopes much sooner than they otherwise would, giving them a chance to catch a glimpse of planetary candidates before the Earth passes by that particular patch of sky on its way around the sun.
Such speed will also be a necessity when scientists start receiving data from NASA’s Transiting Exoplanet Survey Satellite, TESS, which is designed to monitor nearby stars in 30-day swaths and will ultimately cover nearly the entire sky.
“When the TESS data come down, there’ll be a few months before all of the stars that TESS looked at for that month ‘set’ for the year,” Crossfield says. “If we get candidates out quickly to the community, everyone can start immediately observing systems discovered by TESS, and doing a lot of great planetary science. So this [analysis] was really a dress rehearsal for TESS.”
The team analyzed data from K2’s 16th and 17th observing campaigns, known as C16 and C17. During each campaign, K2 observes one patch of the sky for 80 days. The telescope is on an orbit that trails the Earth as it travels around the sun. For most other campaigns, K2 has been in a “rear-facing” orientation, in which the telescope observes those stars that are essentially in its rear-view mirror.
Since the telescope travels behind the Earth, those stars that it observes are typically not observable by scientists until Earth circles back around the sun to that particular patch of sky, nearly a year later. Thus, for rear-facing campaigns, Crossfield says there has been little motivation to analyze K2 data quickly.
The C16 and C17 campaigns, on the other hand, were forward-facing; K2 observed those stars that were in front of the telescope and within Earth’s field of view, at least for the next several months. Crossfield, Yu, and their colleagues took this as an opportunity to speed up the usual analysis of K2 data, to give astronomers a chance to quickly observe planetary candidates before the Earth passed them by.
During C16, K2 observed 20,647 stars over 80 days, between Dec. 7, 2017, and Feb. 25, 2018. On Feb. 28, the mission released the data, in the form of pixel-level images, to the astronomy community. Yu and Crossfield immediately began to sift through the data, using algorithms developed at MIT to winnow down the field from 20,000-some stars to 1,000 stars of interest.
The team then worked around the clock, looking through these 1,000 stars by eye for signs of transits, or periodic dips in starlight that could signal a passing planet. In the end, they discovered 30 “highest-quality” planet candidates, whose periodic signatures are especially likely to be caused by transiting planets.
“Our experience with four years of K2 data leads us to believe that most of these are indeed real planets, ready to be confirmed or statistically validated,” the researchers write in their paper.
They also identified a similar number of planet candidates in the recent C17 analysis. In addition to these planetary candidates, the group also picked out hundreds of periodic signals that could be signatures of astrophysical phenomena, such as pulsating or rotating stars, and at least one supernova in another galaxy.
Stars in spades
While the nature of a star doesn’t typically change over the course of a year, Crossfield says the sooner researchers can follow up on a possible planetary transit, the better chance there is of confirming that a planet actually exists.
“You want to observe [candidates] again relatively soon so you don’t lose the transit altogether,” Crossfield says. “You might be able to say, ‘I know there’s a planet around that star, but I’m no longer at all certain when the transits will happen.’ That’s another motivation for following these things up more quickly.”
Since the team released its results, astronomers have validated four of the candidates as definite exoplanets. They have been observing other candidates that the study identified, including the possible planet orbiting HD 73344. Crossfield says the brightness of this star, combined with the speed with which its planetary candidate was identified, can help astronomers quickly zero in on even more specific features of this system.
“We found one of the most exciting planets that K2 has found in its entire mission, and we did it more rapidly than any effort has done before,” Crossfield says. “This is showing the path forward for how the TESS mission is going to do the same thing in spades, all over the entire sky, for the next several years.”
This research was supported, in part, by NASA and the National Science Foundation.
Small and medium-sized enterprises (SMEs) make up 98.2 percent of businesses in Canada, and together they emit as much climate change-causing greenhouse gas (GHG) per year as Canada’s entire transportation sector, including every car, truck, train, plane, and ship. Reducing emissions can benefit SMEs by helping them grow while also building healthier communities.
The Center for Social Innovation in Toronto has launched a contest on MIT’s Climate CoLab platform to solicit a broad range of possible solutions to help SMEs in Ontario reduce their direct and indirect GHG emissions while helping them thrive. The winning proposals may be eligible to receive funding and support to pilot their solutions in Ontario over eight months.
“Recent research from the University of Waterloo shows us that the vast majority of SMEs believe that sustainability is important,” says Barnabe Geis, director of programs at the Center for Social Innovation. “We want to support the implementation of solutions — whether technologies, programs or services — that help SMEs meet their sustainability goals as a powerful way to both strengthen our economy and improve the health and well-being of our communities.”
Many SMEs face significant barriers to lowering their emissions, ranging from a lack of the technical expertise needed to assess options for reducing emissions to an inability to afford the upfront costs of a low-carbon technology. Once the right technologies or practices are implemented, however, the savings and other benefits to SMEs can be substantial. The contest will offer support to demonstrate the value and scalability of solutions in order to make the path towards sustainability more accessible to SMEs across the province.
The contest is now sourcing proposals on the MIT Climate CoLab platform, where members of the public can provide feedback to proposal authors and cast votes for the People’s Choice Winner. A panel of judges will select three to five winning proposals to potentially be piloted in Ontario based on their desirability, feasibility, scalability, and impact.
The contest is open to proposal submissions until Aug. 3. Proposals submitted prior to July 11 will be reviewed by the judges and given feedback before the contest deadline.
Contest winners may be eligible to access a share of a $320,000 grant package and $113,000 worth of workspace, advisory services, and other in-kind support in Toronto through the Center for Social Innovation, to successfully pilot their projects in Ontario over eight months, starting in November.
“The mission of the Climate CoLab is to test how crowds and experts can work together to solve large, complex problems, like climate change,” says MIT Sloan School of Management Professor Thomas Malone, director of the MIT Center for Collective Intelligence and founder of the Climate CoLab. “Our hope is that, by constructively engaging a broad range of scientists, policymakers, business people, practitioners, investors, and concerned citizens, Climate CoLab can surface better proposals for what to do about climate change than any that would have otherwise been developed.”
Patients with pancreatic cancer usually experience significant weight loss, which can begin very early in the disease. A new study from MIT and Dana-Farber Cancer Institute offers insight into how this happens, and suggests that the weight loss may not necessarily affect patients’ survival.
In a study of mice, the researchers found that weight loss occurs due to a reduction in key pancreatic enzymes that normally help digest food. When the researchers treated these mice with replacement enzymes, they were surprised to find that while the mice did regain weight, they did not survive any longer than untreated mice.
Pancreatic cancer patients are sometimes given replacement enzymes to help them gain weight, but the new findings suggest that more study is needed to determine whether that actually benefits patients, says Matt Vander Heiden, an associate professor of biology at MIT and a member of the Koch Institute for Integrative Cancer Research.
“We have to be very careful not to draw medical advice from a mouse study and apply it to humans,” Vander Heiden says. “The study does raise the question of whether enzyme replacement is good or bad for patients, which needs to be studied in a clinical trial.”
Vander Heiden and Brian Wolpin, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, are the senior authors of the study, which appears in the June 20 issue of Nature. The paper’s lead authors are Laura Danai, a former MIT postdoc, and Ana Babic, an instructor in medicine at Dana-Farber.
In a 2014 study, Vander Heiden and his colleagues found that muscle starts breaking down very early in pancreatic cancer patients, usually long before any other signs of the disease appear.
Still unknown was how this tissue wasting process occurs. One hypothesis was that pancreatic tumors overproduce some kind of signaling factor, such as a hormone, that circulates in the bloodstream and promotes breakdown of muscle and fat.
However, in their new study, the MIT and Dana-Farber researchers found that this was not the case. Instead, they discovered that even very tiny, early-stage pancreatic tumors can impair the production of key digestive enzymes. Mice with these early-stage tumors lost weight even though they ate the same amount of food as normal mice. These mice were unable to digest all of their food, so they went into a starvation mode where the body begins to break down other tissues, especially fat.
The researchers found that when they implanted pancreatic tumor cells elsewhere in the body, this weight loss did not occur. That suggests the tumor cells are not secreting a weight-loss factor that circulates in the bloodstream; instead, they only stimulate tissue wasting when they are in the pancreas.
The researchers then explored whether reversing this weight loss would improve survival. Treating the mice with pancreatic enzymes did reverse the weight loss. However, these mice actually survived for a shorter period of time than mice that had pancreatic tumors but did not receive the enzymes. That finding, while surprising, is consistent with studies in mice that have shown that calorie restriction can have a protective effect against cancer and other diseases.
“It turns out that this mechanism of tissue wasting is actually protective, at least for the mice, in the same way that limiting calories can be protective for mice,” Vander Heiden says.
The intriguing findings from the mouse study prompted the research team to see if they could find any connection between weight loss and survival in human patients. In an analysis of medical records and blood samples from 782 patients, they found no link between degree of tissue wasting at the time of diagnosis and length of survival. That finding is important because it could reassure patients that weight loss does not necessarily mean they will do worse, Vander Heiden says.
“Sometimes you can’t do anything about this weight loss, and this finding may mean that just because the patient is eating less and is losing weight, that doesn’t necessarily mean that they’re shortening their life,” he says.
The researchers say that more study is needed to determine if the same mechanism they discovered in mice is also occurring in human cancer patients. Because the mechanism they found is very specific to pancreatic tumors, it may differ from the underlying causes behind tissue wasting seen in other types of cancer and diseases such as HIV.
“From a mechanistic standpoint, this study reveals a very different way to think about what could be causing at least some weight loss in pancreatic cancer, suggesting that not all weight loss is the same across different cancers,” Vander Heiden says. “And it raises questions that we really need to study more, because some mechanisms may be protective and some mechanisms may be bad for you.”
Clary Clish, director of the Metabolomics Platform at the Broad Institute, and members of his research group also contributed to this work. The research was funded, in part, by the Lustgarten Foundation, a National Institutes of Health Ruth Kirschstein Fellowship, Stand Up 2 Cancer, the Ludwig Center for Molecular Oncology at MIT, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund, the MIT Center for Precision Cancer Medicine, and the National Institutes of Health.
Across the Sahel, a semiarid region of western and north-central Africa extending from Senegal to Sudan, many small-scale farmers, market vendors, and families lack an affordable and effective solution for storing and preserving vegetables. As a result, harvested vegetables are at risk of spoiling before they can be sold or eaten.
That means loss of income for farmers and vendors, reduced availability of nutritious foods for local communities, and an increase in the time spent traveling to purchase fresh produce. The problem is particularly acute in off-grid areas, and for anyone facing financial or technical barriers to refrigeration.
Yet, as described in “Evaporative Cooling Technologies for Improved Vegetable Storage in Mali,” a recently released report from MIT’s Comprehensive Initiative on Technology Evaluation (CITE) and MIT D-Lab, there are low-cost, low-tech solutions for communities in need of produce refrigeration. These rely on an age-old method that exploits the air-cooling properties of water evaporation. Made from simple materials such as bricks or clay pots, burlap sacks, or straw, these devices have the potential to address many of the challenges facing rural households and farmers in need of improved post-harvest vegetable storage.
The study was undertaken by a team of researchers led by Eric Verploegen of the D-Lab and Ousmane Sanogo and Takemore Chagomoka from the World Vegetable Center, which is engaged in ongoing work with horticulture cooperatives and farmers in Mali. To gain insight into evaporative cooling device use and preferences, the team conducted interviews in Mali with users of the cooling and storage systems and with stakeholders along the vegetable supply chain. They also deployed sensors to monitor product performance parameters.
A great idea in need of a spotlight
Despite the potential for evaporative cooling technologies to fill a critical technological need, scant consumer information is available about the range of solutions available.
“Evaporative cooling devices for improved vegetable storage have been around for centuries, and we want to provide the kind of information about these technologies that will help consumers decide which products are right for them given their local climate and specific needs,” says Verploegen, the evaluation lead.
The simple chambers cool vegetables through the evaporation of water, in the same way that evaporating perspiration cools the human body: as water evaporates, it carries heat away. In hot, dry climates like Mali’s, technologies that take advantage of this cooling process show promise for effectively preserving vegetables.
The team studied two different categories of vegetable cooling technologies: large-scale vegetable cooling chambers constructed from brick, straw, or sacks and suitable for farming cooperatives, and devices made from clay pots for individuals and small-scale farmers. Over time, they monitored changes in temperature and humidity inside the devices to understand when they were most effective.
“As predicted,” says Verploegen, “the real-world performance of these technologies was stronger in the dry season. We knew this was true in a lab-testing environment, but we now have data that documents that a drop in temperature of greater than 8 degrees Celsius can be achieved in a real-world usage scenario.”
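The physical limit of evaporative cooling is the wet-bulb temperature, which depends strongly on humidity — a rough illustration of why dry-season performance is so much better. The sketch below uses Stull’s empirical wet-bulb approximation with invented (not field-measured) seasonal conditions; it is not part of the CITE study:

```python
import math

def wet_bulb_c(temp_c: float, rh_percent: float) -> float:
    """Stull's (2011) empirical wet-bulb approximation, valid for roughly
    5-99% relative humidity and air temperatures of -20 to 50 degrees C."""
    T, RH = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Hypothetical dry-season vs. rainy-season conditions
for label, temp, rh in [("dry season", 40.0, 20.0), ("rainy season", 32.0, 80.0)]:
    tw = wet_bulb_c(temp, rh)
    print(f"{label}: air {temp:.0f} C, RH {rh:.0f}% -> ideal cooling limit ~{temp - tw:.1f} C")
```

In dry air the theoretical drop is well above the 8 degrees Celsius observed in the field (real devices reach only a fraction of the ideal limit), while in humid rainy-season air the limit itself shrinks to a few degrees.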
The decrease of temperature, along with the increased humidity and protection from pests provided by the devices, resulted in significant increases in shelf life for commonly stored vegetables including tomatoes, cucumbers, eggplant, cabbage, and hot peppers.
“The large-scale vegetable cooling devices made of brick performed significantly better than those made out of straw or sacks, both from a technical performance perspective and also from an ease-of-use perspective,” notes Verploegen. “For the small-scale devices, we found fairly similar performance across differing designs, indicating that the design constraints are not very rigid; if the basic principles of evaporative cooling are applied, a reasonably effective device can be made using locally available materials. This is an exciting result. It means that to scale use of this process for keeping vegetables fresh, we are able to look at ways to disseminate information and designs rather than developing and distributing physical products.”
The research results indicate that evaporative cooling devices would provide great benefit to small-scale farmers, vendors selling vegetables in a market, and individual consumers who, due to financial or energy constraints, don’t have other options. However, evaporative cooling devices are not appropriate for all settings: they are best suited to communities where there is access to water and vegetable storage is needed during hot and dry weather. And, users must be committed to tending the devices. Sensor data used in the study revealed that users were more inclined to water the cooling devices in the dry season and reduce their usage of the devices as the rainy season started.
Resources for development researchers and practitioners
In addition to the evaluation report, Verploegen has developed two practitioner resources, the “Evaporative Cooling Decision Making Tool” (which is interactive) and the “Evaporative Cooling Best Practices Guide,” to support the determination of evaporative cooler suitability and facilitate the devices’ proper construction and use. The intended audience for these resources includes government agencies, nongovernmental organizations, civil society organizations, and businesses that could produce, distribute, and/or promote these technologies.
Both resources are available online.
As part of an ongoing project, MIT D-Lab and the World Vegetable Center are using the results of this research to test various approaches to increase dissemination of these technologies in the communities that can most benefit from them.
“This study provided us with the evidence that convinced us to use only the efficient types of vegetable cooling technologies — the larger brick chambers,” says World Vegetable Center plant health scientist Wubetu Bihon Legesse. “And, the decision support tool helped us evaluate the suitability of evaporative cooling systems before installing them.”
Launched at MIT in 2012, CITE is a pioneering program dedicated to developing methods for product evaluation in global development. Currently based at MIT D-Lab, CITE’s research is funded by the USAID U.S. Global Development Lab. CITE is led by Professor Dan Frey of the Department of Mechanical Engineering and MIT D-Lab, and additionally supported by MIT faculty and staff from the Priscilla King Gray Public Service Center, the Sociotechnical Systems Research Center, the Center for Transportation and Logistics, the School of Engineering, and the Sloan School of Management.
Daniel E. Hastings, the Cecil and Ida Green Education Professor at MIT, has been named head of the Department of Aeronautics and Astronautics, effective Jan. 1, 2019.
“Dan has a remarkable depth of knowledge about MIT, and has served the Institute in a wide range of capacities,” says Anantha Chandrakasan, dean of the School of Engineering. “He has been a staunch advocate for students, for research, and for MIT’s international activities. We are fortunate to have him join the School of Engineering’s leadership team, and I look forward to working with him.”
Hastings, whose contributions to spacecraft and space system-environment interactions, space system architecture, and leadership in aerospace research and education earned him election to the National Academy of Engineering in 2017, has held a range of roles involving research, education, and administration at MIT.
Hastings has taught courses in space environment interactions, rocket propulsion, advanced space power and propulsion systems, space policy, and space systems engineering since he first joined the faculty in 1985. He became director of the MIT Technology and Policy Program in 2000 and was named director of the Engineering Systems Division in 2004. He served as dean for undergraduate education from 2006 to 2013, and from 2014 to 2018 he was director of the Singapore-MIT Alliance for Research and Technology (SMART).
Hastings has also had an active career of service outside MIT. His many external appointments include serving as chief scientist from 1997 to 1999 for the U.S. Air Force, where he led influential studies of Air Force investments in space and of preparations for a 21st-century science and technology workforce. He was also the chair of the Air Force Scientific Advisory Board from 2002 to 2005; from 2002 to 2008, he was a member of the National Science Board.
A fellow of the American Institute of Aeronautics and Astronautics (AIAA), Hastings was also awarded the Losey Atmospheric Sciences Award from the AIAA in 2002. He is a fellow (academician) of the International Astronautical Federation and the International Council on Systems Engineering. The U.S. Air Force granted him its Exceptional Service Award in 2008, and in both 1997 and 1999 gave him the Air Force Distinguished Civilian Award. He received the National Reconnaissance Office Distinguished Civilian Award in 2003. He was also the recipient of MIT’s Gordon Billard Award for “special service of outstanding merit performed for the Institute” in 2013.
Hastings received his bachelor’s degree from Oxford University in 1976, and MS and PhD degrees in aeronautics and astronautics from MIT in 1978 and 1980, respectively.
Edward M. Greitzer, the H.N. Slater Professor of Aeronautics and Astronautics, will serve as interim department head from July 1 to Dec. 31, 2018.
Hastings will replace Jaime Peraire, the H. N. Slater Professor in Aeronautics and Astronautics, who has been department head since July 1, 2011. “I am grateful to Jaime for his excellent work over the last seven years,” Chandrakasan noted. “During his tenure as department head, he led the creation of a new strategic plan and made significant steps in its implementation. He addressed the department's facilities challenges, strengthened student capstone- and research-project experience, and led the 2014 AeroAstro centennial celebrations, which highlighted the tremendous contributions MIT has made to aerospace and national service.”
Researchers at MIT, who last year designed a tiny computer chip tailored to help honeybee-sized drones navigate, have now shrunk their chip design even further, in both size and power consumption.
The team, co-led by Vivienne Sze, associate professor in MIT's Department of Electrical Engineering and Computer Science (EECS), and Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics, built a fully customized chip from the ground up, with a focus on reducing power consumption and size while also increasing processing speed.
The new computer chip, named “Navion,” which they are presenting this week at the Symposia on VLSI Technology and Circuits, is just 20 square millimeters — about the size of a LEGO minifigure’s footprint — and consumes just 24 milliwatts of power, or about one-thousandth the energy required to power a lightbulb.
Using this tiny amount of power, the chip is able to process camera images in real time at up to 171 frames per second, as well as inertial measurements, both of which it uses to determine where it is in space. The researchers say the chip can be integrated into “nanodrones” as small as a fingernail, to help the vehicles navigate, particularly in remote or inaccessible places where global positioning satellite data is unavailable.
The chip design can also be run on any small robot or device that needs to navigate over long stretches of time on a limited power supply.
“I can imagine applying this chip to low-energy robotics, like flapping-wing vehicles the size of your fingernail, or lighter-than-air vehicles like weather balloons, that have to go for months on one battery,” says Karaman, who is a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society at MIT. “Or imagine medical devices like a little pill you swallow, that can navigate in an intelligent way on very little battery so it doesn’t overheat in your body. The chips we are building can help with all of these.”
Sze and Karaman’s co-authors are EECS graduate student Amr Suleiman, who is the lead author; EECS graduate student Zhengdong Zhang; and Luca Carlone, who was a research scientist during the project and is now an assistant professor in MIT’s Department of Aeronautics and Astronautics.
A flexible chip
In the past few years, multiple research groups have engineered miniature drones small enough to fit in the palm of your hand. Scientists envision that such tiny vehicles can fly around and snap pictures of your surroundings, like mosquito-sized photographers or surveyors, before landing back in your palm, where they can then be easily stored away.
But a palm-sized drone can only carry so much battery power, most of which is used to power its motors, leaving very little energy for other essential operations, such as navigation, and, in particular, state estimation, or a robot’s ability to determine where it is in space.
“In traditional robotics, we take existing off-the-shelf computers and implement [state estimation] algorithms on them, because we don’t usually have to worry about power consumption,” Karaman says. “But in every project that requires us to miniaturize low-power applications, we have to now think about the challenges of programming in a very different way.”
In their previous work, Sze and Karaman began to address such issues by combining algorithms and hardware in a single chip. Their initial design was implemented on a field-programmable gate array, or FPGA, a commercial hardware platform that can be configured to a given application. The chip was able to perform state estimation using 2 watts of power, compared to larger, standard drones that typically require 10 to 30 watts to perform the same tasks. Still, the chip’s power consumption was greater than the total amount of power that miniature drones can typically carry, which researchers estimate to be about 100 milliwatts.
To shrink the chip further, in both size and power consumption, the team decided to build a chip from the ground up rather than reconfigure an existing design. “This gave us a lot more flexibility in the design of the chip,” Sze says.
Running in the world
To reduce the chip’s power consumption, the group came up with a design to minimize the amount of data — in the form of camera images and inertial measurements — that is stored on the chip at any given time. The design also optimizes the way this data flows across the chip.
“Any of the images we would’ve temporarily stored on the chip, we actually compressed so it required less memory,” says Sze, who is a member of the Research Laboratory of Electronics at MIT. The team also cut down on extraneous operations, such as computations involving zeros, which simply result in zero. The researchers found a way to skip those computational steps involving any zeros in the data. “This allowed us to avoid having to process and store all those zeros, so we can cut out a lot of unnecessary storage and compute cycles, which reduces the chip size and power, and increases the processing speed of the chip,” Sze says.
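To illustrate why zero-skipping saves work, consider a multiply-accumulate loop over data that is mostly zeros. The sparsity level and the software model here are hypothetical — on Navion the savings come from dedicated hardware, not software checks — but the principle is the same:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data stream: many entries are exactly zero, as is common for
# compressed image/feature data in an embedded vision pipeline.
data = rng.integers(0, 5, size=1000) * (rng.random(1000) < 0.3)
weights = rng.random(1000)

def dense_dot(data, weights):
    """Baseline: multiply-accumulate over every element, zeros included."""
    total, ops = 0.0, 0
    for d, w in zip(data, weights):
        total += d * w
        ops += 1
    return total, ops

def zero_skipping_dot(data, weights):
    """Skip any multiply-accumulate whose data operand is zero; the result
    is unchanged, but the work (and hence energy) drops with sparsity."""
    total, ops = 0.0, 0
    for d, w in zip(data, weights):
        if d != 0:
            total += d * w
            ops += 1
    return total, ops

dense, n_dense = dense_dot(data, weights)
sparse, n_sparse = zero_skipping_dot(data, weights)
print(f"operations: {n_dense} dense vs {n_sparse} with zero-skipping")
```

Because the skipped terms contribute exactly zero, both versions produce the same sum while the sparse version performs far fewer operations.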
Through their design, the team was able to reduce the chip’s memory from its previous 2 megabytes, to about 0.8 megabytes. The team tested the chip on previously collected datasets generated by drones flying through multiple environments, such as office and warehouse-type spaces.
“While we customized the chip for low power and high speed processing, we also made it sufficiently flexible so that it can adapt to these different environments for additional energy savings,” Sze says. “The key is finding the balance between flexibility and efficiency.” The chip can also be reconfigured to support different cameras and inertial measurement unit (IMU) sensors.
From these tests, the researchers found they were able to bring down the chip’s power consumption from 2 watts to 24 milliwatts, and that this was enough to power the chip to process images at 171 frames per second — a rate that was even faster than what the datasets projected.
The team plans to demonstrate its design by implementing its chip on a miniature race car. While a screen displays an onboard camera’s live video, the researchers also hope to show the chip determining where it is in space, in real-time, as well as the amount of power that it uses to perform this task. Eventually, the team plans to test the chip on an actual drone, and ultimately on a miniature drone.
This research was supported, in part, by the Air Force Office of Scientific Research, and by the National Science Foundation.
Getting robots to do things isn’t easy: Usually, scientists have to either explicitly program them or get them to understand how humans communicate via language.
But what if we could control robots more intuitively, using just hand gestures and brainwaves?
A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.
Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.
By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.
“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we've been able to do before using only EEG feedback,” says CSAIL Director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoc Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University Professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.
In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.
Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.
Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.
“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”
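The closed loop described above — watch for an ErrP, and if one appears let a gesture pick the correction — can be sketched in a few lines. This is an illustrative toy, not the CSAIL team's code: the `detect_errp` and `read_gesture` functions and their thresholds are invented stand-ins for trained EEG and EMG classifiers.

```python
# Toy sketch of the supervision loop: brain signals flag a mistake,
# muscle signals select the fix. Thresholds and signal shapes here are
# illustrative assumptions, not the paper's trained classifiers.

def detect_errp(eeg_window, threshold=0.5):
    """Stand-in ErrP detector: flags an error when mean amplitude is high."""
    return sum(eeg_window) / len(eeg_window) > threshold

def read_gesture(emg_window):
    """Stand-in gesture decoder: maps mean muscle activation to a target index."""
    level = sum(emg_window) / len(emg_window)
    if level < 0.33:
        return 0   # e.g., leftmost target
    elif level < 0.66:
        return 1   # middle target
    return 2       # rightmost target

def supervise_step(robot_choice, eeg_window, emg_window):
    """If the user's brain signals an error, a hand gesture picks the correction."""
    if detect_errp(eeg_window):
        corrected = read_gesture(emg_window)
        return corrected, True    # robot switches to the user's choice
    return robot_choice, False    # no ErrP detected: robot carries on

# Example: the robot picked target 0; the user notices a mistake (strong EEG
# response) and gestures firmly toward the third target.
choice, corrected = supervise_step(0, eeg_window=[0.8, 0.9, 0.7],
                                   emg_window=[0.9, 0.8, 0.85])
```

The key design point, reflected in the quote above, is that the ErrP branch requires no user training: the error signal occurs naturally, and only the gesture carries deliberate intent.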
For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.
To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.
Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
“By looking at both muscle and brain signals, we can start to pick up on a person's natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”
The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.
“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”
For his fundamental contributions and innovations in infrastructure mechanics and sustainability, MIT Professor Oral Buyukozturk of the Department of Civil and Environmental Engineering (CEE) was named the recipient of the 2018 George W. Housner Medal for Structural Control and Monitoring from the American Society of Civil Engineers (ASCE).
The medal was awarded for his “pioneering and transformative developments in video-based structural sensing and identification, interferometry-based data analytics, high-efficiency generic wireless networks, and their integration with groundbreaking engineering mechanics research and practice for enhancing civil infrastructural resilience and sustainability,” according to ASCE Executive Director Thomas W. Smith III.
An MIT faculty member since 1976 and director of the Laboratory for Infrastructure Science and Sustainability, Buyukozturk has made — and continues to make — significant contributions to engineering mechanics and to the structural control and monitoring of buildings and bridges.
“This medal means a lot to me not only because it is a major recognition of our contributions but also because of my admiration for the work and vision of George W. Housner in earthquake engineering and structural control and monitoring as challenging interdisciplinary fields,” Buyukozturk says.
He cites a number of large-scale research projects and developments as being integral to the award, including the first MIT-Kuwait Signature Project on Sustainability of Built Environment and the MITEI Shell Global Distributed Sensing project, each involving dozens of students, postdocs, and researchers.
“I share this medal with my amazing students, postdocs, and collaborators from all corners of MIT, as well as my international collaborators including Dr. Dirk Smit, vice president of exploration technology and chief scientist of Shell,” Buyukozturk says.
Collaboration is central to Buyukozturk’s research. In addition to fostering numerous international projects, Buyukozturk has made it a priority to involve undergraduate students in his high-level research projects. For the past three years, his lab has hosted many undergraduate students, including rising senior Stephanie Chin, as part of the Undergraduate Research Opportunities Program. The results of their work have been fruitful: In the past two years, three papers have been published in top journals with Chin as a coauthor.
Markus Buehler, the McAfee Professor of Engineering and department head of CEE, says the award is “a testament to the dedication of Professor Buyukozturk to the advancement and sustainability of infrastructure over many decades.”
Buyukozturk formally received the medal at the ASCE Engineering Mechanics Institute (EMI) Conference, held on MIT’s campus from May 29 to June 1. ASCE EMI President George Deodatis presented the medal to Buyukozturk at the EMI conference banquet, which attracted more than 1,000 delegates.
MIT and SUSTech announce Centers for Mechanical Engineering Research and Education at MIT and SUSTech
MIT and the Southern University of Science and Technology (SUSTech) in Shenzhen, China, have announced the launch of the Centers for Mechanical Engineering Research and Education at MIT and SUSTech. The two centers, which will be located at MIT and SUSTech, aim to foster research collaborations and inspire new approaches to engineering education.
At a ceremony on June 15, Anantha P. Chandrakasan, dean of engineering at MIT and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and Zhenghe Xu, dean of engineering at SUSTech, signed an agreement establishing the two centers. They were joined by faculty from both MIT’s Department of Mechanical Engineering and SUSTech as well as representatives from the local Shenzhen government.
“This research and educational collaboration will give MIT’s faculty and students the opportunity to benefit from a wider range of research and engage in a discussion on how to best train mechanical engineers,” says Gang Chen, the Carl Richard Soderberg Professor of Power Engineering and head of the Department of Mechanical Engineering, who will serve as faculty director for the MIT center. Professor Zhenghe Xu will serve as faculty director of the SUSTech center.
“Launching these new centers will help support research on some of the world’s most pressing problems,” Chen says.
“The Centers for Mechanical Engineering Research and Education at MIT and SUSTech aim to inspire intellectual dialogue, innovative research and development, and new approaches to teaching and learning between experts in China and at MIT,” says MIT Associate Provost Richard Lester.
Each year, one or two faculty members from SUSTech will visit MIT for a semester. In addition to conducting research at the MIT center, the SUSTech faculty will be invited to observe MIT’s approach to mechanical engineering education firsthand.
Students from SUSTech will also have the opportunity to conduct research and take courses at MIT. Roughly a dozen graduate and undergraduate students from SUSTech will spend time at the MIT center each year.
Meanwhile, faculty and students from MIT will be invited to travel to Shenzhen and observe developments in the area’s innovation ecosystem, through a number of programs supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.
“Our collaboration with SUSTech on launching these two new centers can help us make a positive impact on research and education both in the U.S. and in China,” Chen says.
On June 13, the MIT Audit Division announced its 18th annual Infinite Mile Award recipients. The awards were presented at a luncheon honoring the two recipients, Suwen Duan and Erin Coates, for their 2018 contributions.
Suwen Duan, senior data analyst, was honored in the category of Collaboration and Community Building. Duan has an excellent capacity to initiate discussions with anyone who may have valuable input — all with the goal of ensuring the best information is available for any data analytics request. Her role is such that she works with everyone, from the senior auditors within various teams to the Institute auditor. Whether it is an ad-hoc request, often under tight deadline, or a long-term project, Duan has worked successfully with her colleagues in the Audit Division to design successful final products. She has also reached out to her colleagues around the Institute in order to engage them, learn, and assist if and when needed. A highlight of her hard work took place on May 21, when she hosted the first MIT Data Analyst Meeting with the goal of promoting and establishing an Institute group that can continue to collaborate in the quest for innovative ways to explore the vast amount and variety of data at MIT.
Erin Coates, senior internal auditor, was recognized in the categories of Communication, Collaboration and Community Building, and Excellence and Accountability. Coates excels on many levels in regard to her communication skills. She is always courteous when discussing topics with auditees, well prepared, and to the point, aiming to take as little of clients’ valuable time as necessary while still obtaining the information needed. Coates is the first to suggest ideas to recognize and celebrate a colleague’s milestone, promoting an inclusive and mutually respectful community. Individuals outside the Audit Division appreciate Coates’ demeanor and her quality of work. All work products and tasks are executed to the best of her ability and in a timely manner. She understands accountability, and thoroughly ensures that audit methodology is followed during audits. Coates has exhibited the desire to learn as much about the Institute as possible by participating and volunteering in events outside the division.
The Infinite Mile Awards have been a longstanding tradition within MIT. The Infinite Mile Award is intended to recognize individuals or teams from within the Audit Division or other collaborators who have made extraordinary contributions to help the division carry out its mission of being an independent, objective, innovative, and flexible business partner that adds value to the Institute. Nominations are submitted by colleagues who have recognized an individual that they feel has made a significant impact to the division's work and/or been a strong support or inspiration to them.
For many years, drug development has relied on simplified and scalable cell culture models to find and test new drugs for a wide variety of diseases. However, cells grown in a dish are often a faint representation of healthy and diseased cell types in vivo. This limitation has serious consequences: Many potential medicines that originally appear promising in cell cultures often fail to work when tested in patients, and targets may be completely missed if they do not appear in a dish.
A highly collaborative team of researchers from the Harvard-MIT Program in Health Sciences and Technology (HST) and Institute for Medical Engineering and Science (IMES) at MIT recently set out to tackle this issue as it relates to a type of cell found in the intestine that is implicated in inflammatory bowel disease (IBD). In new work, the team was able to generate an intestinal cell that is a substantially better mimic of the real cell and can therefore be used in studies of diseases such as IBD. They reported their findings in a recent issue of BMC Biology.
The team was led by Ben Mead, a doctoral student in the HST Medical Engineering and Medical Physics Program; Jeffrey Karp, a professor at Brigham and Women’s Hospital, working closely with Jose Ordovas-Montanes, a postdoc in the lab of Pfizer-Laubach Career Development Assistant Professor Alex K. Shalek in the MIT Department of Chemistry; and the labs of MIT professor of biological engineering Jim Collins, Institute Professor Robert Langer, and scientists from the Broad Institute of Harvard and MIT and Koch Institute for Integrative Cancer Research.
Understanding genetic risk at the level of single cells
This study was catalyzed by the new technology of high-throughput single-cell RNA-sequencing, which enables transcriptome-wide profiling of tissues at the level of individual cells. Through the lens of single-cell RNA-sequencing, scientists are now able to “map” our single cells and potentially the changes which give rise to disease. The team of researchers turned this method toward determining how well an existing cell culture model mimics a particular type of cell within the body, comparing two single-cell “maps”: one of a mouse’s small intestine, and another of an adult stem cell-derived model of the small intestine, known as an organoid.
They used these maps to isolate a single cell type and ask how well the organoid-derived cell matched its natural counterpart. “Based on the differences between model and actual cell, we utilized a computationally driven bioengineering approach to improve the fidelity of the model,” said Karp. “We believe this approach may be key to unlocking the next generation of therapeutic development from cellular models, including those made from patient-derived stem cells.”
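The comparison at the heart of this approach — score how closely a model cell's expression profile matches its in vivo counterpart, then flag the most divergent genes as candidates to "correct" — can be illustrated with a toy calculation. The gene names and expression values below are invented for illustration; real analyses operate on full single-cell expression matrices, not averaged profiles.

```python
# A minimal sketch of comparing a cell type's in vivo profile against its
# organoid-derived counterpart. Values are made-up mean expression levels
# (arbitrary units); this is not the study's actual pipeline.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two expression vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical per-gene mean expression for one cell type.
genes    = ["GeneA", "GeneB", "GeneC", "GeneD"]
in_vivo  = [9.0, 8.5, 7.0, 6.0]
organoid = [8.8, 8.0, 1.0, 5.5]   # GeneC strongly under-expressed in the dish

fidelity = cosine_similarity(in_vivo, organoid)

# Genes ranked by absolute difference point to pathways worth "correcting".
diffs = {g: abs(v - o) for g, v, o in zip(genes, in_vivo, organoid)}
divergent = sorted(genes, key=lambda g: diffs[g], reverse=True)
```

Ranking genes by divergence is what lets the team work backward from a fidelity score to the specific developmental pathways missing in the dish.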
Individual genes can alter one’s risk of developing diseases such as Crohn’s disease, a type of IBD. One active area of research is understanding where these genes act in a tissue in order to further our understanding of disease mechanisms and propose novel therapeutic interventions. To address this, techniques are needed to reliably map “risk” genes not only within an affected tissue, but to individual cells, to properly surmise if a drug screen can correct a faulty gene or potentially improve a patient’s condition.
Single-cell RNA-sequencing at scale, a revolutionary technique pioneered for low-input clinical biopsies at MIT by the groups of Alex K. Shalek and Chris Love, now allows researchers to deconstruct a tissue into its elemental components — cells — and identify the key patterns of gene expression which specify each cell type. The ability to profile tens of thousands of cells economically has made it possible to identify critical cell types in tissues whose genetic makeup had previously been difficult to discern.
Using single-cell “maps” to re-orient the development of a key cell type
Mapping tissues, such as the small intestine, is highly important in understanding where specific “risk” genes are acting. However, translating those findings to the clinic will inevitably require representative models of the cell types that express risk genes and display a disease phenotype. One key IBD-relevant cell type already implicated through genetic studies is known as the Paneth cell, responsible for a key anti-microbial role in the small intestine and for defending the stem cell niche.
When adult intestinal stem cells are grown in a dish, they self-organize into remarkable structures known as intestinal organoids: 3-D cellular structures that contain many of the cell types found in a real intestine. Nevertheless, how these intestinal organoids correspond to the bona fide cell types found in the intestine has proven challenging for researchers to tackle. To directly address this question, Shalek suggested a “quick” experiment to Mead, which then gave rise to the fruitful collaboration between the labs.
Mead and Ordovas-Montanes developed a single-cell map of the true characteristics of small intestinal cell types as found within the mouse and, when comparing them to what a map of the intestinal-derived organoid looks like, identified several differences, particularly within the key IBD-relevant cell type known as the Paneth cell. Since the field’s map of an organoid didn’t quite correspond to the real tissue, it may have led them astray in the hunt for drug targets.
Fortunately, through their single-cell data, the team was able to learn how the maps were mis-aligned, and “correct” the developmental pathways which were missing in the dish. As a result, they were able to generate a Paneth cell that is a substantially better mimic of the real cell and can now function to kill bacteria and support the neighboring stem cells which give rise to them.
Translational opportunities afforded by improved representations of tissues
“With this improved cell in-hand, we are now developing a screening platform that will allow us to target relevant Paneth cell biology,” says Mead, who plans to continue the work he started as a postdoc in Shalek’s group.
Their approach for generating physiologically faithful intestinal cell types is a major technological advance that will provide other researchers a powerful tool to further their understanding of the specialized cell states of the epithelial barrier. “As we begin to understand which cell types specifically express genes that alter risk for IBD, it will be critical to ensure the disease models provide an accurate representation of that cell type,” says Ordovas-Montanes.
“We want to make better cell models to not only understand basic disease biology, but also to fast-track development of therapeutics,” says Mead. “This research will have impact beyond the intestinal organoid community as organoids are increasingly employed for liver, kidney, lung, and even brain research, and our approach can be generalized for relating and aligning the cell types found in vivo with the models generated from these tissues.”
Even “modest” action to limit climate change could help prevent the most extreme water-shortage scenarios facing Asia by the year 2050, according to a new study led by MIT researchers.
The study takes an inventive approach to modeling the effects of both climate change and economic growth on the world’s most heavily populated continent. Roughly 60 percent of the global population lives in Asia, often with limited access to water: The continent has less than half the freshwater available per inhabitant, compared with the global average.
To examine the risk of water shortages on the continent, the researchers conducted detailed simulations of many plausible economic and climate pathways for Asia in the future, evaluating the relative effects of both pathways on water supply and demand. By studying cases in which economic change (or growth) continues but the climate remains unchanged — and vice versa — the scholars could better identify the extent to which these factors generate water shortages.
The MIT-based team found that with no constraints on economic growth and climate change, an additional 200 million people across Asia would be vulnerable to severe water shortages by 2050. However, fighting climate change along the lines of the 2015 Paris Agreement would reduce by around 60 million the number of people facing severe water problems.
But even with worldwide efforts to limit climate change, there is a 50 percent chance that around 100 million people in southern and eastern Asia will experience a 50 percent increase in “water stress” — their inability to access safe water — and a 10 percent chance that water shortages will double for those people.
“We do find that a mitigation strategy can reduce the heightened risk of water stress in Asia,” says Adam Schlosser, deputy director for science research at MIT’s Joint Program on the Science and Policy of Global Change, and co-author of a newly published paper detailing the findings. “But it doesn’t solve it all.”
The paper, “The Impact of Climate Change Policy on the Risk of Water Stress in Southern and Eastern Asia,” is being published today in the journal Environmental Research Letters. The authors are Xiang Gao, a Joint Program research scientist; Schlosser; Charles Fant, a former Joint Program postdoc and a researcher at Industrial Economics, Inc.; and Kenneth Strzepek, a Joint Program research scientist and a professor emeritus at the University of Colorado.
The research team also uses models that track municipal and industrial activities and their specific water-demand consequences across many smaller subregions in Asia. Irrigation tends to be a major driver of water consumption, leading to diminished access to water for other uses.
Overall, the researchers conclude, through the mid-21st century, “socioeconomic growth contributes to an increase in water stress” across the whole region, but climate change can have “both positive and negative effects on water stress.” The study turns up a notable amount of regional variation in the effects of climate change within Asia. Climate change by itself is likely to have a more adverse impact on water access in China than in India, for instance, where a warming climate could produce more rain.
Apart from the most likely scenarios, another significant finding is that the potential for extreme water stress is associated with unabated climate change. As the authors state in the paper, “A modest greenhouse gas mitigation pathway eliminates the likelihood of … extreme outcomes” in water access. But without any such climate measures, “both countries have a chance of experiencing extreme water shortages by midcentury,” Gao says.
The study is part of a series of papers the research team is producing to assess water risks across southern and eastern Asia, based on modeling that captures the natural and managed aspects of the water systems across the region. A 2016 paper by the group established that there was a significant risk of water shortages for about 1 billion people in Asia by 2050. The current paper focuses on the impact of climate change policy, and a future paper will analyze the implications of adaptation strategies.
“There are no easy options,” Schlosser says of the various ways of limiting climate change. “All of them carry associated costs, and our continued research is looking at the extent to which widespread adaptive and water-efficient measures can reduce risks and perhaps be cost-effective and more resilient.”
The study was supported, in part, by the U.S. Department of Energy, as well as the government, industry and foundation sponsors of the MIT Joint Program on the Science and Policy of Global Change.
When Navy SEALs carry out dives in Arctic waters, or when rescue teams are diving under ice-covered rivers or ponds, the survival time even in the best wetsuits is very limited — as little as tens of minutes, and the experience can be extremely painful at best. Finding ways of extending that survival time without hampering mobility has been a priority for the U.S. Navy and research divers, as a pair of MIT engineering professors learned during a recent program that took them to a variety of naval facilities.
That visit led to a two-year collaboration that has now yielded a dramatic result: a simple treatment that can improve the survival time for a conventional wetsuit by a factor of three, the scientists say.
The findings, which could be applied essentially immediately, are reported this week in the journal RSC Advances, in a paper by Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering; Jacopo Buongiorno, the TEPCO Professor and associate head of the Department of Nuclear Science and Engineering; and five others at MIT and George Mason University.
The process they discovered works by simply placing the standard neoprene wetsuit inside a pressurized autoclave no bigger than a beer keg, filled with a heavy inert gas, for about a day. The treatment then lasts for about 20 hours, far longer than anyone would spend on a dive, explains Buongiorno, who is an avid wetsuit user himself. (He competed in a triathlon just last week.) The process could also be done in advance, with the wetsuit placed in a sealed bag to be opened just before use, he says.
Though Buongiorno and Strano are both on the MIT faculty, they had never met until they were both part of the Defense Science Study Group for the Department of Defense. “We got to visit a lot of bases, and met with all kinds of military people up to four-star generals,” says Buongiorno, whose specialty in nuclear engineering has to do with heat transfer, especially through water. They learned about the military’s particular needs and were asked to design a technological project to address one of those needs. After meeting with a group of Navy SEALs, the elite special-operations diving corps, they decided the need for longer-lasting protection in icy waters was one that they could take on.
They looked at the different strategies that various animals use to survive in these frigid waters, and found three types: air pockets trapped in fur or feathers, as with otters and penguins; internally generated heat, as with some animals and fish (including great white sharks, which, surprisingly, are warm-blooded); or a layer of insulating material that greatly slows heat loss from the body, as with seals’ and whales’ blubber.
In the end, after simulations and lab tests, they ended up with a combination of two of these — a blubber-like insulating material that also makes use of trapped pockets of gas, although in this case the gas is not air but a heavy inert gas, namely xenon or krypton.
The material that has become standard for wetsuits is neoprene, an inexpensive material that is a mix of synthetic rubber materials processed into a kind of foam, producing a closed-cell structure similar to styrofoam. Trapped within that structure, occupying more than two-thirds of the volume and accounting for half of the heat that gets transferred through it, are pockets of air.
Strano and Buongiorno found that if the trapped air is replaced with xenon or krypton, the material’s insulating properties increase dramatically. The result, they say, is a material with the lowest heat transfer of any wetsuit ever made. “We set a world record for the world’s lowest thermal conductivity garment,” Strano says — conductivity almost as low as air itself. “It’s like wearing a coat of air.”
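A back-of-the-envelope estimate shows why swapping the trapped gas matters so much. Treating the closed-cell foam as gas and solid layers in series (a crude lower-bound model, not the paper's analysis), and using rough room-temperature literature values for the conductivities, the xenon-filled foam comes out several times less conductive than the air-filled version:

```python
# Crude series (layered) model of heat flow through a closed-cell foam.
# Conductivities are approximate room-temperature literature values in
# W/(m*K); the series mixing rule is a simplification for illustration.

K_SOLID_RUBBER = 0.19   # solid neoprene rubber (approx.)
K_AIR = 0.026           # air
K_XENON = 0.0055        # xenon, roughly 5x less conductive than air
GAS_FRACTION = 0.67     # article: gas pockets occupy more than two-thirds of volume

def foam_conductivity(k_gas, phi=GAS_FRACTION, k_solid=K_SOLID_RUBBER):
    """Series mixing rule: thermal resistances of gas and solid layers add."""
    return 1.0 / (phi / k_gas + (1 - phi) / k_solid)

k_air_foam = foam_conductivity(K_AIR)    # standard air-filled wetsuit foam
k_xe_foam = foam_conductivity(K_XENON)   # xenon-infused foam
improvement = k_air_foam / k_xe_foam     # factor by which conduction drops
```

Because the gas pockets dominate both the volume and the thermal resistance, replacing air with a gas roughly five times less conductive cuts the foam's overall conductivity by a large factor — consistent with the article's "coat of air" description.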
They found this could improve survivability in water colder than 10 degrees Celsius, raising it from less than one hour to two or three hours.
The result could be a boon not just to those in the most extreme environments, but to anyone who uses wetsuits in cold waters, including swimmers, athletes, and surfers, as well as professional divers of all kinds.
“As part of this project, I interviewed dozens of wetsuit users, including a professional underwater photographer, divers working at the New England Aquarium, a Navy SEAL friend of mine, and random surfers I approached on a San Diego beach,” says co-author and former MIT postdoc Jeffrey Moran PhD ’17, who is now an assistant professor at George Mason University. “The feedback was essentially unanimous — there is an urgent need for warmer wetsuits, both in and out of the Arctic. People's eyes lit up when I told them about our results.”
Currently, the only viable cold-water alternatives to wetsuits are dry suits, which have a layer of air between the suit and the skin that must be maintained using a hose and a pump, or warm-water suits, which similarly require a hose and pump connection. In either case, a failure of the pump or a cut or tear in the suit can result in a quick loss of insulation that can be life-threatening within minutes.
But the xenon- or krypton-infused neoprene requires no such support system and has no way of quickly losing its insulating properties, and so does not carry that risk. “We can take anyone’s neoprene wetsuit and pressurize it with xenon or krypton for high-performance operations,” Strano says. MIT graduate student Anton Cottrill, a co-author of the paper, adds, “The gas actually infuses more quickly during treatment than it discharges during its use in an aquatic environment.”
Another possibility, they say, is to produce a wetsuit with the same insulating properties as present ones, but with a small fraction of the thickness, allowing more comfort and freedom of movement that might be appealing to athletes. “Almost everyone I interviewed also said they wanted a wetsuit that was easier to move around in and to put on and take off,” says Moran. “The results of this project suggest that we could make wetsuits that provide the same thermal insulation as traditional ones, but are about half as thick.”
One next step in their research is to look at ways of making a long-term, stable version of a xenon-infused neoprene, perhaps by bonding a protective layer over it, they say. In the meantime, the team is also looking for opportunities to treat the neoprene garments of interested users so that they can collect performance data.
“Their approach to the problem is a remarkable feat of materials science and also very clever engineering,” says John Dabiri, a professor of civil and environmental engineering and of mechanical engineering at Stanford University, who was not involved in this work. “They’ve managed to achieve something close to an ideal air-like thermal barrier, and they’ve accomplished this using materials that are more compatible with end-uses like scuba diving than previous concepts. The overall performance characteristics could be a game-changer for a variety of applications.”
And Charles Amsler, a professor of biology at the University of Alabama at Birmingham, who has made almost 950 research dives in Antarctica but was not connected with this research, says, “It could be very beneficial in cases where flexibility, lack of bulkiness, swimming speed, or reduced drag with diver propulsion vehicles are at a premium, or where environmental hazards make the chance of dive suit puncture high. Normally, diver thermal protection in very cold water is by use of dry suits rather than wetsuits. But wetsuits typically allow much more diver flexibility.”
Amsler adds that “One concern with drysuits is that … should the suit be badly punctured, a diver loses much or all of that insulation. … In a deep or long duration dive where staged decompression would be required to prevent decompression illness (“the bends”), wearing one of these thermally enhanced wetsuits would significantly reduce the chance that a diver with a punctured suit would have to make the choice between potentially fatal hypothermia and potentially debilitating or fatal decompression illness.”
The research team also included former MIT postdoc Jeffrey Moran PhD ’17, now at George Mason University; MIT graduate students Anton Cottrill and Zhe Yuan; former postdoc Jesse Benck; and postdoc Pingwei Liu. The work was supported by the U.S. Office of Naval Research, King Abdullah University of Science and Technology, and the U.S. Department of Energy.
Back in 1988, voters in California passed Proposition 99, a measure that raised cigarette taxes and placed other restrictions on cigarette sales. Has the law worked? To answer that, it would be good to examine an identical state where no such law was enacted, and compare it with California.
But as Californians know, there is no other place exactly like it. For social scientists, this is a problem. Rigorous studies often employ a test group and a control group, to see the effects of a change among otherwise similar sets of people. In this case, the control group is missing.
Alberto Abadie has a solution to this problem. It’s on display as he opens his laptop and shows some data to a visitor in his office in MIT’s Department of Economics. Abadie’s fix is what he calls the “synthetic control method,” which involves constructing an aggregation of demographically similar people from elsewhere, to serve as a kind of statistical control group — a composite non-California.
“You would like to observe what would have happened in California in the absence of the intervention,” Abadie says. “And of course this is something that you cannot observe. But other states in the U.S. have similar dynamics and factors. So you can use a combination of these other states to approximate what would have been tobacco consumption in California, in the absence of the intervention.”
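The idea Abadie describes can be sketched in a few lines of code. In this toy example (hypothetical numbers, not the actual Proposition 99 data), two “donor” states receive weights w and 1 − w chosen so their weighted pre-policy trend best matches California’s; the same weights then project the counterfactual post-policy outcome:

```python
# Toy sketch of the synthetic control idea (hypothetical numbers, not the
# actual Proposition 99 data). With two donor states weighted w and 1 - w,
# the least-squares pre-policy match has a closed form.
ca_pre = [110.0, 106.5, 103.0]   # "California" outcomes before the policy
a_pre  = [100.0,  95.0,  90.0]   # donor state A, same years
b_pre  = [120.0, 118.0, 116.0]   # donor state B, same years

# Minimize sum_t (ca[t] - (w*a[t] + (1-w)*b[t]))^2 over w:
# w = sum((ca - b) * (a - b)) / sum((a - b)^2)
num = sum((c - bb) * (aa - bb) for c, aa, bb in zip(ca_pre, a_pre, b_pre))
den = sum((aa - bb) ** 2 for aa, bb in zip(a_pre, b_pre))
w = max(0.0, min(1.0, num / den))  # clip so weights stay nonnegative

# Apply the fitted weights to post-policy donor outcomes to get the
# "synthetic California" — what consumption might have been without the law.
a_post, b_post = 85.0, 114.0
synthetic_ca = w * a_post + (1 - w) * b_post
actual_ca = 73.5                         # hypothetical observed outcome
effect = actual_ca - synthetic_ca        # negative => policy reduced consumption
```

The real method generalizes this to many donor states and multiple matching variables, with weights constrained to be nonnegative and sum to one; the gap between the actual and synthetic series after the intervention is the estimated policy effect.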
As Abadie explains, looking over the charts popping up on his laptop, he and two co-authors found, in a published study, that cigarette sales dropped markedly in California due to Proposition 99, with annual per-capita consumption about 26 packs lower by the year 2000.
The finding characterizes Abadie’s approach to research. He is an econometrician, someone who works to refine the tools and statistical methods of the discipline. But he is not just a mechanic of method; Abadie also researches concrete answers to policy questions.
“I do methodological work, but most of it is to estimate the effect of policy interventions,” says Abadie. “I try to use data to figure out which interventions have worked well.” Having long established his reputation in the field, Abadie joined the MIT faculty in 2016 as a full professor.
A “mythical place” becomes real
Abadie has traveled a long road to reach MIT as a professor. He grew up in Bilbao, Spain, where he was a good student with an interest in both physics and economics.
“I decided I was going to study physics,” Abadie recalls, about his course of study in college. “But my father told me, ‘No, because if you study physics, the only thing you can become is a college professor.’ Okay, fine, so I went to study economics.” Abadie pauses: “And I became a college professor.”
After receiving his undergraduate degree from the University of the Basque Country, Abadie received a master’s degree in Madrid, and then moved to MIT in 1995 as a doctoral student.
“MIT was this mythical place in my mind where all the most exciting knowledge came from,” Abadie says. As a PhD candidate, he worked with Josh Angrist and Whitney Newey, who are still faculty members at MIT — “my advisors, now my colleagues,” Abadie notes. Angrist has conducted numerous influential studies that refine the methods of research while producing policy evaluations, and Newey is a prominent econometrician.
Abadie joined the faculty at the Harvard Kennedy School directly after receiving his MIT PhD, and started producing some of his own influential work. The synthetic control method made its published debut in 2003, in a study of the economic consequences of terrorism (a subject Abadie has returned to multiple times). He has also published numerous papers more strictly devoted to refining statistical methods, and continues to do work along both avenues of research.
Since rejoining MIT in 2016, Abadie has continued with his scholarship while teaching both undergraduate and graduate courses, hoping to give today’s students the same benefits he once got from the Institute.
“Teaching is complementary with research,” Abadie says. “When you need to explain material that you believe you understand very well, that is when the real test comes. Teaching enriches my understanding of my own research and the research of others, and new research questions often originate in the process of explaining what we know already.”
In addition to his post in the MIT Department of Economics, Abadie is associate director of MIT’s Institute for Data, Systems, and Society (IDSS), an interdisciplinary center launched in 2015 that blends data science and information studies with the social sciences. The IDSS has inaugurated a new undergraduate minor and a new doctoral program, among other programs of study, while serving to advance research across fields.
“I’m learning a lot,” Abadie says of his involvement with IDSS. “This is something new to me. It is very nice to see how they [data scientists and others] are interested in many of the issues we’re interested in. Many fields are colliding and overlapping now.”
As for the value he has provided to scholars in other fields, Abadie closes on a self-effacing note: “Probably I am managing to do something semi-right.”
Robert S. Langer, the David H. Koch (1962) Institute Professor at MIT, has been named one of five U.S. Science Envoys for 2018. As a Science Envoy for Innovation, Langer will focus on novel approaches in biomaterials, drug delivery systems, nanotechnology, tissue engineering, and the U.S. approach to research commercialization.
One of 13 Institute Professors at MIT, Langer has written more than 1,400 articles. He also has over 1,300 issued and pending patents worldwide. Langer's patents have been licensed or sublicensed to over 350 pharmaceutical, chemical, biotechnology and medical device companies. He is the most cited engineer in history (h-index 253 with over 254,000 citations, according to Google Scholar).
Langer is one of four living individuals to have received both the United States National Medal of Science (2006) and the United States National Medal of Technology and Innovation (2011). He has received over 220 major awards, including the 1998 Lemelson-MIT Prize, the world's largest prize for invention, for being "one of history's most prolific inventors in medicine."
Created in 2010, the Science Envoy Program engages eminent U.S. scientists and engineers to help forge connections and identify opportunities for sustained international cooperation. Science Envoys engage internationally at the citizen and government levels to enhance relationships between other nations and the United States, develop partnerships, and improve collaboration. These scientists leverage their international leadership, influence, and expertise in priority countries to advance solutions to shared science and technology challenges. Science Envoys travel as private citizens and usually serve for one year.
Previous Science Envoys with connections to MIT include Susan Hockfield, president emerita of MIT, and Alice P. Gast, president of Lehigh University and former chemical engineering professor at MIT.
Letter regarding the retirement of John Charles, vice president for information systems and technology
The following email was sent today to the MIT community by Executive Vice President and Treasurer Israel Ruiz.
Dear MIT faculty and staff,
I write to share the news that John Charles has let us know of his decision to retire as MIT's Vice President (VP) for Information Systems and Technology (IS&T) at the end of this calendar year, following five years of dedicated service to the Institute.
John came to MIT with extensive experience serving in both public and private research and education institutions. He has had a distinguished 25-year career as an IT leader, and has been a key member of our senior management team. As VP, John set a vision for the transformation of “IT@MIT”. Under his leadership, IS&T migrated 17 years of legacy SAP data to the SAP HANA cloud platform, and moved the majority of IS&T’s managed servers to the cloud, laying the groundwork for a new operating model for enterprise resource planning and data centers. With John’s guidance, the IS&T team implemented a number of cybersecurity enhancements designed to strengthen protections for the Institute’s core administrative systems, demonstrated the benefits of platform-based API-centric architecture, and modernized several key administrative and student systems. Working with the Information Technology Governance Committee, he has been a key contributor to the development of information technology policy at MIT.
I am grateful that John will continue in his current role through the upcoming fall semester. I also appreciate that Deputy Executive Vice President Tony Sharon will work with him and the IS&T team during this transition. As we begin the search for the right individual to build on what John has accomplished and fill the important role of leading IS&T, I welcome your input regarding potential candidates, as well as thoughts about the role. Please send comments or suggestions via email to me at email@example.com, or to Room 4-204. All correspondence received will be treated as confidential.
There will be an occasion to celebrate John and his contributions, but for now, I hope you will join me in expressing our gratitude for his tremendous service to the MIT community.
Executive Vice President and Treasurer
Air pollution has smothered China’s cities in recent decades. In response, the Chinese government has implemented measures to clean up its skies. But are those policies effective? Now an innovative study co-authored by an MIT scholar shows that one of China’s key antipollution laws is indeed working — but unevenly, with one particular set of polluters most readily adapting to it.
The study examines a Chinese law that has required coal-fired power plants to significantly reduce emissions of sulfur dioxide, a pollutant associated with respiratory illnesses, starting in July 2014. Overall, the researchers found that with the policy in place, the concentration of these emissions at coal power plants fell by 13.9 percent.
“There is a significant drop in sulfur dioxide concentrations around the policy deadline,” says Valerie Karplus, an assistant professor at the MIT Sloan School of Management and co-author of a newly published paper detailing the results. “That’s really important. The stakes are really high in China.”
However, that top-line result comes with some quirks. The law called for greater sulfur dioxide emissions reductions in regions that are more heavily polluted and more populous, yet those places — known as “key” regions in policy terms — are precisely where plants have been least compliant, the researchers found.
“We see the lowest correspondence between sulfur dioxide reported by plants and in independent satellite measures in key regions,” Karplus notes. That includes coal-fired plants in the areas around Beijing and Shanghai, among other populous, economically well-off places.
Indeed, the researchers discovered this precisely because the method they employed in the study compares satellite measurements of sulfur dioxide, on the one hand, to data from relatively new, on-the-ground emissions-monitoring systems — an approach that can pinpoint places where emissions exceed legal limits, even if audits and reports do not catch the excess pollution.
The paper, “Quantifying coal power plant responses to tighter SO2 emissions standards in China,” is being published this week in Proceedings of the National Academy of Sciences.
The authors are Karplus, who is the Class of 1943 Career Development Professor and an assistant professor of global economics and management at MIT Sloan; Shuang Zhang, an assistant professor of economics at the University of Colorado at Boulder; and Douglas Almond, a professor in the School of International and Public Affairs and the Department of Economics at Columbia University.
To conduct the study, the researchers examined sulfur dioxide data from Continuous Emissions Monitoring Systems (CEMS), power-plant-based sensor systems used to capture on-the-ground concentrations of pollution emitted in China. The team looked at data from 256 plants in four provinces. They also used NASA satellite data that measures sulfur dioxide concentration levels globally, and in geographic detail. This provided “an objective source for assessing changes in plant-level emitting behavior that is not susceptible to manipulation,” as the researchers write in the paper.
That is, the CEMS data could be affected by actions at power plants that are designed to influence the results — from incomplete reporting to the manipulation of sensors. But the NASA data is not affected by attempts to influence ground-level readings.
Then, focusing on isolated power plants, Karplus, Zhang, and Almond evaluated the results of the two systems together to see how much the data sets corresponded, and where.
“Because we’re comparing patterns in the CEMS to a trusted and well-established data source, that helps make the case that what we’re seeing here is real, and there’s an explanation behind it,” Karplus says.
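The logic of this cross-check can be illustrated with a simple correlation between the two series. This is a minimal sketch with hypothetical numbers, not the paper’s actual estimator (the study uses a more formal regression analysis around the policy deadline): reported CEMS concentrations that genuinely fall should co-move with the satellite signal, while flat or falsified reports should not.

```python
# Illustrative cross-check of "correspondence" between plant-reported
# (CEMS) and satellite SO2 series, using Pearson correlation.
# All numbers are hypothetical; the study's actual analysis is
# regression-based, not a raw correlation.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# In a hypothetical "non-key" region, reported drops track the satellite...
cems_nonkey = [390, 360, 240, 210, 190]   # reported mg/m^3 over time
sat_nonkey  = [5.1, 4.8, 3.2, 2.9, 2.7]   # satellite column measure
# ...while in a hypothetical "key" region, near-flat reports hovering at
# the 50 mg/m^3 limit diverge from an unchanged satellite signal.
cems_key = [49, 48, 50, 49, 48]
sat_key  = [6.0, 5.8, 6.1, 5.9, 6.2]

r_nonkey = pearson(cems_nonkey, sat_nonkey)   # high: the series co-move
r_key = pearson(cems_key, sat_key)            # low: no correspondence
```

A high correlation in non-key regions and a near-zero one in key regions would mirror the paper’s qualitative finding, though the researchers’ approach controls for many factors a raw correlation ignores.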
Intriguingly, data from the two monitoring systems corresponded closely in what the researchers call “non-key” regions, where the maximum allowable concentration of sulfur dioxide was lowered from 400 milligrams per cubic meter to 200 milligrams per cubic meter. But in the heavily polluted and populated “key” regions, where the limit was placed at 50 milligrams per cubic meter, the research found no evidence of correspondence.
That tougher new standard may have been harder for power plants to meet. Thus one potential explanation for the varying results could be that the “stricter new standards and pressure to comply may have generated incentives for plant managers to falsify or selectively omit concentration data,” as the researchers put it in the paper. The study also finds that reported compliance in key regions dropped from 100 percent to around 50 percent, a further indication that the new standard was tough for many plants to meet.
So in addition to the bottom-line results indicating overall progress, the new study may contain a couple of policy lessons. In the first place, Karplus suggests, “Governments can and should use remote sensing data as a way of providing an independent check on the numbers they’re getting from emitters who are subject to a particular policy. Satellite data could help to support central government ambitions to curb air pollution.”
To be sure, she notes, the fact that China not only uses CEMS data but makes it available is “a sign of real progress in environmental management in China.” But the satellite data is vital to accurate monitoring.
Moreover, Karplus adds, tightening pollution standards is necessary, but not sufficient to get emitters to make lasting reductions in pollution. New standards are likely to work best when accompanied by stronger implementing capabilities of firms and local governments, as well as rules and norms that support accurate reporting.
“Environmental policy doesn’t exist in a vacuum,” Karplus says. “It requires reshaping prevailing understanding of firms’ environmental responsibility and establishing credible reporting systems. In China, there is still a long way to go, but recent progress is very encouraging.”