Feed aggregator
Private sector investments in climate change adaptation
Nature Climate Change, Published online: 22 September 2025; doi:10.1038/s41558-025-02423-w
The private sector plays an important role in global adaptation efforts, yet we have a limited understanding of its investment patterns. Using firm adaptation expenditure data across five coastal urban areas, this research shows how adaptation investment differs across regions and sectors.
Global coastal human settlement retreat driven by vulnerability to coastal climate hazards
Nature Climate Change, Published online: 22 September 2025; doi:10.1038/s41558-025-02435-6
Coastal settlement retreat reflects human behavioural adaptation to increasing coastal climate hazards. Using night-time light data over 1992–2019, this study finds that over half of global coastal settlements have retreated, driven by insufficient infrastructure protection and adaptive capacity.
How are MIT entrepreneurs using AI?
The Martin Trust Center for MIT Entrepreneurship strives to teach students the craft of entrepreneurship. Over the last few years, no technology has changed that craft more than artificial intelligence.
While many are predicting a rapid and complete transformation in how startups are built, the Trust Center’s leaders have a more nuanced view.
“The fundamentals of entrepreneurship haven’t changed with AI,” says Trust Center Entrepreneur in Residence Macauley Kenney. “There’s been a shift in how entrepreneurs accomplish tasks, and that trickles down into how you build a company, but we’re thinking of AI as another new tool in the toolkit. In some ways the world is moving a lot faster, but we also need to make sure the fundamental principles of entrepreneurship are well-understood.”
That approach was on display during this summer’s delta v startup accelerator program, where many students regularly turned to AI tools but still ultimately relied on talking to their customers to make the right decisions for their business.
Students in this year’s cohort used AI tools to accelerate their coding, draft presentations, learn about new industries, and brainstorm ideas. The Trust Center is encouraging students to use AI as they see fit while also staying mindful of the technology’s limitations.
The Trust Center itself has also embraced AI, most notably through Jetpack, its generative AI app that walks users through the 24 steps of disciplined entrepreneurship outlined in Managing Director Bill Aulet’s book of the same name. When students input a startup idea, the tool can suggest customer segments, early markets to pursue, business models, pricing, and a product plan.
The way the Trust Center wants students to use Jetpack is apparent in its name: it’s inspired by the acceleration a jetpack provides, but users still need to guide its direction.
Even with AI technology’s current limitations, the Trust Center’s leaders acknowledge it can be a powerful tool for people at any stage of building a business, and their use of AI will continue to evolve with the technology.
“It’s undeniable we’re in the midst of an AI revolution right now,” says Entrepreneur in Residence Ben Soltoff. “AI is reshaping a lot of things we do, and it’s also shaping how we do entrepreneurship and how students build companies. The Trust Center has recognized that for years, and we’ve welcomed AI into how we teach entrepreneurship at all levels, from the earliest stages of idea formation to exploring and testing those ideas and understanding how to commercialize and scale them.”
AI’s strengths and weaknesses
For the past few years, when the Trust Center’s delta v staff get together for strategic retreats, AI has been a central topic. The delta v program’s organizers think about how students can get the most out of the technology each year as they plan their summer-long curriculum.
Everything starts with Orbit, the mobile app designed to help students find entrepreneurial resources, network with peers, access mentorship, and identify events and jobs. Jetpack was added to Orbit last year. It is trained on Aulet’s “Disciplined Entrepreneurship” as well as former Trust Center Executive Director Paul Cheek’s “Startup Tactics” book.
The Trust Center describes Jetpack’s outputs as first drafts designed to help students brainstorm their next steps.
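To make the "first draft" idea concrete, here is a minimal sketch of how an assistant like Jetpack could turn a startup idea into draft suggestions for each step of a framework. This is not Jetpack's actual implementation; the call_llm helper, the abbreviated step list, and the prompt wording are hypothetical placeholders used only for illustration.

```python
# Minimal sketch of generating "first draft" suggestions for a startup idea.
# Not Jetpack's real code: call_llm() and the prompt text are hypothetical.

FRAMEWORK_STEPS = [
    "Market segmentation",
    "Select a beachhead market",
    "Build an end-user profile",
    # ...remaining framework steps would follow
]

def draft_suggestions(idea: str, call_llm) -> dict:
    """Ask a language model for a first-draft answer to each framework step.

    `call_llm` is an injected function (prompt -> str), so this sketch stays
    independent of any particular model provider.
    """
    drafts = {}
    for step in FRAMEWORK_STEPS:
        prompt = (
            f"Startup idea: {idea}\n"
            f"Write a short first-pass draft for the step '{step}'. "
            "Flag every assumption the founder must verify with real customers."
        )
        drafts[step] = call_llm(prompt)
    return drafts
```

The design mirrors the Trust Center's framing: the output is a starting point whose assumptions still have to be validated by talking to customers, not a finished plan.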
“You need to verify everything when you are using AI to build a business,” says Kenney, who is also a lecturer at MIT Sloan and MIT D-Lab. “I have yet to meet anyone who will base their business on the output of something like ChatGPT without verifying everything first. Sometimes, the verification can take longer than if you had done the research yourself from the beginning.”
One company in this year’s cohort, Mendhai Health, uses AI and telehealth to offer personalized physical therapy for women struggling with pelvic floor dysfunction before and after childbirth.
“AI has definitely made the entrepreneurial process more efficient and faster,” says MBA student Aanchal Arora. “Still, overreliance on AI, at least at this point, can hamper your understanding of customers. You need to be careful with every decision you make.”
Kenney notes the way large language models are built can make them less useful for entrepreneurs.
“Some AI tools can increase your speed by doing things like automatically sorting your email or helping you vibe code apps, but many AI tools are built off averages, and those can be less effective when you’re trying to connect with a very specific demographic,” Kenney says. “It’s not helpful to have AI tell you about an average person, you need to personally have strong validation that your specific customer exists. If you try to build a tool for an average person, you may build a tool for no one at all.”
Students eager to embrace AI may also be overwhelmed by the sheer volume of tools available today. Fortunately, MIT students have a long history of being at the forefront of any new technology, and this year’s delta v cohort featured teams leveraging AI at the core of their solutions and in every step of their entrepreneurial journeys.
MIT Sloan MBA candidate Murtaza Jameel, whose company Cognify uses AI to simulate user interactions with websites and apps to improve digital experiences, describes his firm as an AI-native business.
“We’re building a design intelligence tool that replaces product testing with instant, predictive simulations of user behavior,” Jameel explains. “We’re trying to integrate AI into all of our processes: ideation, go to market, programming. All of our building has been done with AI coding tools. I have a custom bot that I’ve fed tons of information about our company to, and it’s a thought partner I’m speaking to every single day.”
The more things change…
One of the fundamentals the Trust Center doesn’t see changing is the need for students to get out of the lab or the classroom to talk to customers.
“There are ways that AI can unlock new capabilities and make things move faster, but we haven’t turned our curriculum on its head because of AI,” Soltoff says. “In delta v, we stress first and foremost: What are you building and who are you building it for? AI alone can’t tell you who your customer is, what they want, and how you can better serve their needs. You need to go out into the world to make that happen.”
Indeed, many of the biggest hurdles delta v teams faced this summer looked a lot like the hurdles entrepreneurs have always faced.
“We were prepared at the Trust Center to see a big change and to adapt to that, but the companies are still building and encountering the same challenges of customer identification, beachhead market identification, team dynamics,” Kenney says. “Those are still the big meaty challenges they’ve always been working on.”
Amid endless hype about AI agents and the future of work, many founders this summer still said the human side of delta v is what makes the program special.
“I came to MIT with one goal: to start a technology company,” Jameel says. “The delta v program was on my radar when I was applying to MIT. The program gives you incredible access to resources — networks, mentorship, advisors. Some of the top folks in our industry are advising us now on how to build our company. It’s really unique. These are folks who have done what you’re doing 10 or 20 years ago, all just rooting for you. That’s why I came to MIT.”
Power-outage exercises strengthen the resilience of US bases
In recent years, power outages caused by extreme weather or substation attacks have exposed the vulnerability of the electric grid. For the nation’s military bases, which are served by the grid, being ready for outages is a matter of national security. What better way to test readiness than to cut the power?
Lincoln Laboratory is doing just that with its Energy Resilience Readiness Exercises (ERREs). During an exercise, a base is disconnected from the grid, testing the ability of backup power systems and service members to work through failure. Lasting up to 15 hours, each exercise mimics a real outage event with limited forewarning to the base population.
“No one thought that this kind of real-world test would be accepted. We’ve now done it at 33 installations, impacting over 800,000 people,” says Jean Sack ’13, SM ’15, who leads the program with Christopher Lashway and Annie Weathers in the laboratory's Energy Systems Group.
According to a Department of Energy report, 70 percent of the nation’s transmission lines are approaching end of life. This aging infrastructure, combined with increasing power demands and interdependencies, threatens cascading failures. In response, the Department of Defense (DoD) has sharpened its focus on energy resilience, or the ability to anticipate, withstand, and recover from outages. On a base, an outage could disrupt critical missions, open the door to physical or cyberattacks, and cut off water supplies.
“Threats to this already-fragile system are increasing. That's why this work is so important,” Sack says.
Safely cutting power
Before an exercise, the laboratory team works closely with base leadership and infrastructure personnel to carefully plan how it will safely disconnect from utility power. Over multiple site visits, they study each building and mission to understand power capabilities, ensure health and safety, and develop contingency plans.
“We get people together who may never have spoken before, but depend on one another. We like to say ‘connecting mission owners to their utility providers,’” says Lashway, a former electrician turned energy-systems researcher. “The planning process is a huge learning opportunity, and a chance to fix issues ahead of the outage.”
On the day of the outage, laboratory staff are on site to ensure the process runs smoothly, but the base is meant to run the exercise. Since beginning in 2018, the ERRE campaign has reached huge installations, including Fort Bragg, a U.S. Army base in North Carolina that sees nearly 150,000 people daily, and sites as far away as England and Japan.
The key is not to limit an exercise’s scope. All facilities and missions, especially critical ones, should be included, and service members are tasked with working through issues. To make exercises even more useful as an evaluation of readiness, some are modified with scripted scenarios simulating real-world incidents. These scenarios might challenge personnel to handle a cyberattack on control systems, the shutdown of a backup power plant, or a rocket launch during an outage.
“We can do all the tabletop exercises in the world, but when you actually pull the plug, the question is, what actually goes on?” former assistant secretary of defense for sustainment Robert McMahon said at a joint House Armed Services subcommittee hearing about initial exercises. “Perhaps the most important lesson that I've seen is a lack of appreciation and understanding by our senior leaders at the installation level, all the way up to my level, of what we thought was going to happen versus what actually occurred, and then being able to apply those lessons learned.”
Illuminating issues
The ERREs have brought to light common issues across bases. One of them is a reliance on fragile or faulty backup systems. For example, electronic equipment experiences a hard shutdown if it isn't supported by a backup battery to bridge power transitions. In some instances, these battery systems failed or unexpectedly depleted due to age or generator issues. “We see a giant comms room drop out, and then phones and computers don’t work. It emphasizes the need for redundancies,” Lashway says.
Generators also present issues. Some fail because they aren’t regularly serviced, or because they aren’t refueled during a long outage. Sometimes, personnel mistakenly assume a generator will support their entire building, requiring reconfigurations after the fact. Air conditioning systems are often excluded from generator-supported emergency circuits, but rooms packed with computers generate a lot of heat, and overheated equipment quickly shuts down.
The exercises also unveiled interdependencies and chain reactions. In one case, a fire-suppression system accidentally went off, dousing a hangar in foam. The cause was a pressure drop at the exact moment a switch reset.
“Executing an operation at this scale stresses how each of these factors need to work harmoniously and efficiently to ensure that the base, and ultimately missions, remain functional,” Lashway says.
Beyond resolving technical issues, the exercises have been valuable for practicing coordination and following chains of command. They’ve also revealed social challenges of operating through outages. For instance, some DoD guidance restricts the use of generators at daycare centers, so parents needed to coordinate care while maintaining their mission.
After an exercise, the laboratory compiles all findings in a report for the base. It provides time stamps of significant events by building, identifies links between issues, and summarizes common problems site-wide. It then provides recommendations to address vulnerabilities. “Our goal is to provide as much justification as possible for the base to get the resources they need to fix a problem,” Sack says.
The researchers also want to help bases prevent issues and avoid costly repairs. Recently, they’ve been using power meters to capture electrical data before, during, and after an exercise. These monitoring tools reveal power-quality issues that are otherwise hidden.
“Not all power is created equal, and standards must be followed to ensure equipment, especially specialized military equipment, operates properly and doesn’t get damaged over the long term. Power metering provides a view into that,” says Lashway.
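As an illustration of the kind of check that power-meter data enables, the sketch below flags intervals where measured voltage drifts outside a tolerance band around nominal. It is not Lincoln Laboratory's tooling; the 120 V nominal value, the ±5 percent band, and the sample format are assumptions made for the example.

```python
# Illustrative power-quality check (assumed thresholds and data format):
# flag meter samples whose RMS voltage falls outside a band around nominal,
# i.e. candidate sags/swells worth correlating with generator transfers.

NOMINAL_V = 120.0   # assumed nominal service voltage
TOLERANCE = 0.05    # assumed +/-5% utilization-voltage band

def flag_voltage_excursions(samples):
    """samples: list of (timestamp, rms_voltage) tuples from a power meter."""
    low = NOMINAL_V * (1 - TOLERANCE)
    high = NOMINAL_V * (1 + TOLERANCE)
    return [(t, v) for t, v in samples if v < low or v > high]

# Example: a dip during a transfer to backup power shows up as an excursion.
readings = [("09:00:00", 119.8), ("09:00:01", 104.2), ("09:00:02", 120.1)]
print(flag_voltage_excursions(readings))   # [('09:00:01', 104.2)]
```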
Sparking resiliency ahead
Lincoln Laboratory’s ERRE campaign has resulted in legislation. In 2021, Congress passed a law requiring each military branch to perform at least five ERREs, or "Black Start Exercises," per year through 2027. That law was recently reauthorized until 2032. The team has transitioned the ERRE process to two private companies, as well as to teams within the Air Force and Army, which will conduct exercises in the coming years.
“It's very exciting that this got Congress' attention and has scaled across the DoD,” says Nick Judson, who leads the portfolio of energy, water, and natural hazard resilience efforts within the Energy Systems Group. “This idea started out as a way to enable change on DoD installations, and included a lot of difficult conversations about turning the power off to critical missions, and now we're seeing significant improvements to the readiness of bases and their missions.”
It may even be encouraging some healthy competition across the services, Lashway says. At a recent regional event in Colorado, three U.S. Space Force installations each vied to push the scope and duration of their exercises.
The team’s focus is now turning to related analysis, such as water resiliency. Water and wastewater systems are vulnerable to disruptions beyond power outages, including equipment failure, sabotage, or water source depletion.
“We are conducting tabletop exercises and workshops uniting stakeholders around the importance of water and wastewater systems to enable missions,” says Amelia Servi, who leads this work. “So far, we’ve seen great engagement from groups managing water systems who have been seeking funds to fix these aging systems, and from missions who have previously taken water for granted.”
They are also working on long-term energy planning, including ways for installations to be less dependent on the grid. One way is to install microgrids, which are self-sufficient systems that can tap into stored energy. According to Sack, microgrids are highly customized and complicated to operate, so one goal is to design a standardized system. The team's recent power-metering data is providing useful initial inputs into such a design.
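One way metered load data could seed such a design is a first-pass sizing estimate: peak demand the microgrid must serve, and the stored energy needed to ride through an outage of a given length. The sketch below is a back-of-envelope illustration under those stated assumptions, not the team's actual design method.

```python
# Back-of-envelope microgrid sizing from metered load data (illustrative only).

def size_storage(hourly_load_kw, outage_hours, margin=1.2):
    """hourly_load_kw: metered average load (kW) for each hour of a typical day.

    Returns (peak_kw, energy_kwh): peak power the system must serve and the
    usable storage needed to cover the worst contiguous outage window,
    with a simple safety margin applied to both.
    """
    peak_kw = max(hourly_load_kw)
    worst_window = max(
        sum(hourly_load_kw[i:i + outage_hours])
        for i in range(len(hourly_load_kw) - outage_hours + 1)
    )
    return peak_kw * margin, worst_window * margin

# Example with a toy 24-hour load profile and a 6-hour outage target.
load = [310, 300, 295, 290, 300, 330, 380, 450, 520, 560, 580, 590,
        600, 595, 580, 560, 540, 520, 480, 430, 400, 370, 340, 320]
print(size_storage(load, outage_hours=6))
```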
The researchers are also considering how this work could improve energy resiliency for civilians. Large-scale exercises might not be feasible for the public, but they could be conducted in areas important to public safety, or in places that rely on military resources. During one exercise in Georgia, city residents partially depended upon a base's power plant, so that exercise included working with the city to ensure its resiliency to the outage.
“Striking that balance of testing readiness without causing harm is a big challenge in this field and a huge motivation for us,” Sack says. “We are encouraged by the outcomes. Our work is impacting the services at the highest level, rewriting infrastructure policy, and making sure people can better sustain operations during grid disruptions.”
Friday Squid Blogging: Giant Squid vs. Blue Whale
A comparison aimed at kids.
Companies Must Provide Accurate and Transparent Information to Users When Posts are Removed
This is the third installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
Imagine sharing information about reproductive health care on social media and receiving a message that your content has been removed for violating a policy intended to curb online extremism. That’s exactly what happened to one person using Instagram who shared her story with our Stop Censoring Abortion project.
Meta’s rules for “Dangerous Organizations and Individuals” (DOI) were supposed to be narrow: a way to prevent the platform from being used by terrorist groups, organized crime, and those engaged in violent or criminal activity. But over the years, we’ve seen these rules applied in far broader—and more troubling—ways, with little transparency and significant impact on marginalized voices.
EFF has long warned that the DOI policy is opaque, inconsistently enforced, and prone to overreach. The policy has been critiqued by others for its opacity and propensity to disproportionately censor marginalized groups.
Samantha Shoemaker's post about Plan C was flagged under Meta's policy on dangerous organizations and individuals
Meta has since added examples and clarifications in its Transparency Center to this and other policies, but their implementation still leaves users in the dark about what’s allowed and what isn’t.
The case we received illustrates just how harmful this lack of clarity can be. Samantha Shoemaker, an individual sharing information about abortion care, posted straightforward facts about accessing abortion pills. Her posts included:
- A video linking to Plan C’s website, which lists organizations that provide abortion pills in different states.
- A reshared image from Plan C’s own Instagram account encouraging people to learn about advance provision of abortion pills.
- A short clip of women talking about their experiences taking abortion pills.
Instead of allowing her to facilitate informed discussion, Instagram flagged some of her posts under its “Prescription Drugs” policy, while others were removed under the DOI policy—the same set of rules meant to stop violent extremism from being shared.
We recognize that moderation systems—both human and automated—will make mistakes. But when Meta equates medically accurate, harm-reducing information about abortion with “dangerous organizations,” it underscores a deeper problem: the blunt tools of content moderation disproportionately silence speech that is lawful, important, and often life-saving.
At a time when access to abortion information is already under political attack in the United States and around the world, platforms must be especially careful not to compound the harm. This incident shows how overly broad rules and opaque enforcement can erase valuable speech and disempower users who most need access to knowledge.
And when content does violate the rules, it’s important that users are provided with accurate information as to why. An individual sharing information about health care will undoubtedly be confused or upset by being told that they have violated a policy meant to curb violent extremism. Moderating content responsibly means offering users as much transparency and clarity as possible. As outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, users should be able to readily understand:
- What types of content are prohibited by the company and will be removed, with detailed guidance and examples of permissible and impermissible content;
- What types of content the company will take action against other than removal, such as algorithmic downranking, with detailed guidance and examples on each type of content and action; and
- The circumstances under which the company will suspend a user’s account, whether permanently or temporarily.
If you find your content removed under Meta’s policies, you do have options:
- Appeal the decision: Every takedown notice should give you the option to appeal within the app. Appeals are sometimes reviewed by a human moderator rather than an automated system.
- Request Oversight Board review: In certain cases, you can escalate to Meta’s independent Oversight Board, which has the power to overturn takedowns and set policy precedents.
- Document your case: Save screenshots of takedown notices, appeals, and your original post. This documentation is essential if you want to report the issue to advocacy groups or in future proceedings.
- Share your story: Projects like Stop Censoring Abortion collect cases of unjust takedowns to build pressure for change. Speaking out, whether to EFF and other advocacy groups or to the media, helps illustrate how policies harm real people.
Abortion is health care. Sharing information about it is not dangerous—it’s necessary. Meta should allow users to share vital information about reproductive care. The company must also ensure that users are provided with clear information about how their policies are being applied and how to appeal seemingly wrongful decisions.
This is the third post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
Burgum celebrated wind power. Then Trump tapped him to kill it.
House Science Democrats seek interview with Judith Curry
Lawyers in landmark climate case get $3M in legal fees
California tightens carbon market to get deeper emissions cuts
40 Democrats file motion in last-gasp effort to save green bank grants
Vermont backtracks on energy code amid housing crunch
EU vows to deliver delayed 2035 climate target before COP30
How Macron joined the climate bad guys club
Global water cycle ‘erratic’ in 2024, leading to floods, drought
In coastal Ghana, female oyster farmers try to save an old practice
China says it wants to protect coral reefs. Experts have doubts.
What does the future hold for generative AI?
When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology.
What comes next for this powerful but imperfect tool?
With that question in mind, hundreds of researchers, business leaders, educators, and students gathered at MIT’s Kresge Auditorium for the inaugural MIT Generative AI Impact Consortium (MGAIC) Symposium on Sept. 17 to share insights and discuss the potential future of generative AI.
“This is a pivotal moment — generative AI is moving fast. It is our job to make sure that, as the technology keeps advancing, our collective wisdom keeps pace,” said MIT Provost Anantha Chandrakasan to kick off this first symposium of the MGAIC, a consortium of industry leaders and MIT researchers launched in February to harness the power of generative AI for the good of society.
Underscoring the critical need for this collaborative effort, MIT President Sally Kornbluth said that the world is counting on faculty, researchers, and business leaders like those in MGAIC to tackle the technological and ethical challenges of generative AI as the technology advances.
“Part of MIT’s responsibility is to keep these advances coming for the world. … How can we manage the magic [of generative AI] so that all of us can confidently rely on it for critical applications in the real world?” Kornbluth said.
To keynote speaker Yann LeCun, chief AI scientist at Meta, the most exciting and significant advances in generative AI will most likely not come from continued improvements or expansions of large language models like Llama, GPT, and Claude. Through training, these enormous generative models learn patterns in huge datasets to produce new outputs.
Instead, LeCun and others are working on the development of “world models” that learn the same way an infant does — by seeing and interacting with the world around them through sensory input.
“A 4-year-old has seen as much data through vision as the largest LLM. … The world model is going to become the key component of future AI systems,” he said.
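The comparison behind that quote is a back-of-envelope estimate of data volumes. The figures below are rough order-of-magnitude assumptions chosen for illustration, not numbers taken from the talk; the point is only that the two quantities land in a comparable range.

```python
# Rough, illustrative version of the visual-data vs. LLM-training-data comparison.
# All figures are order-of-magnitude assumptions, not measurements.

SECONDS_AWAKE = 4 * 365 * 12 * 3600        # ~4 years at ~12 waking hours/day
OPTIC_NERVE_BYTES_PER_S = 1e6              # assumed ~1 MB/s of visual input
visual_bytes = SECONDS_AWAKE * OPTIC_NERVE_BYTES_PER_S

LLM_TRAINING_TOKENS = 2e13                 # assumed ~20 trillion training tokens
BYTES_PER_TOKEN = 4                        # rough average bytes per token
text_bytes = LLM_TRAINING_TOKENS * BYTES_PER_TOKEN

print(f"visual: ~{visual_bytes:.1e} bytes, text: ~{text_bytes:.1e} bytes")
# Both estimates land around 1e13-1e14 bytes, which is the point of the comparison.
```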
A robot with this type of world model could learn to complete a new task on its own with no training. LeCun sees world models as the best approach for companies to make robots smart enough to be generally useful in the real world.
But even if future generative AI systems do get smarter and more human-like through the incorporation of world models, LeCun doesn’t worry about robots escaping from human control.
Scientists and engineers will need to design guardrails to keep future AI systems on track, but as a society, we have already been doing this for millennia by designing rules to align human behavior with the common good, he said.
“We are going to have to design these guardrails, but by construction, the system will not be able to escape those guardrails,” LeCun said.
Keynote speaker Tye Brady, chief technologist at Amazon Robotics, also discussed how generative AI could impact the future of robotics.
For instance, Amazon has already incorporated generative AI technology into many of its warehouses to optimize how robots travel and move material to streamline order processing.
He expects many future innovations will focus on the use of generative AI in collaborative robotics by building machines that allow humans to become more efficient.
“GenAI is probably the most impactful technology I have witnessed throughout my whole robotics career,” he said.
Other presenters and panelists discussed the impacts of generative AI in businesses, from large-scale enterprises like Coca-Cola and Analog Devices to startups like health care AI company Abridge.
Several MIT faculty members also spoke about their latest research projects, including the use of AI to reduce noise in ecological image data, designing new AI systems that mitigate bias and hallucinations, and enabling LLMs to learn more about the visual world.
After a day spent exploring new generative AI technology and discussing its implications for the future, MGAIC faculty co-lead Vivek Farias, the Patrick J. McGovern Professor at MIT Sloan School of Management, said he hoped attendees left with “a sense of possibility, and urgency to make that possibility real.”
