Feed aggregator

Tesla’s sluggish quarter to reset the new normal for EV sales

ClimateWire News - Thu, 04/02/2026 - 6:20am
Slower sales are likely the new normal for Tesla as it increasingly pivots its focus to AI, autonomy and robotics amid weakening global EV demand.

Possible US Government iPhone Hacking Tool Leaked

Schneier on Security - Thu, 04/02/2026 - 6:05am

Wired writes (alternate source):

Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers...

MIT researchers measure traffic emissions, to the block, in real-time

MIT Latest News - Thu, 04/02/2026 - 5:00am

In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.

The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.

“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”

Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”

The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.

“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.

The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.

The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.

Manhattan measurements

To conduct the study, the researchers used images from 331 cameras already in use in Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could correctly place 93 percent of vehicles in the right category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.

The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.

“We just need to input all emission-related information from existing urban data sources, and we can estimate the traffic emissions,” Hu says.
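The estimation idea Hu describes (vehicle counts by category combined with per-category emission rates, summed over road segments) can be sketched in a few lines. Everything below is illustrative: the factor values, function names, and segment data are invented for this example, not taken from the paper's model.

```python
# Minimal sketch of segment-level emission estimation. Emission factors are
# hypothetical grams of CO2 per vehicle-kilometer, not measured values.
EMISSION_FACTORS = {"car": 192.0, "bus": 822.0, "truck": 1100.0}

def segment_emissions(counts_by_type: dict[str, int], segment_km: float) -> float:
    """Estimate grams of CO2 emitted on one road segment in one time window."""
    return sum(
        count * segment_km * EMISSION_FACTORS[vtype]
        for vtype, count in counts_by_type.items()
    )

def city_emissions(segments: list[tuple[dict[str, int], float]]) -> float:
    """Sum segment-level estimates into a city-wide total."""
    return sum(segment_emissions(counts, km) for counts, km in segments)

# Two made-up road segments with hourly vehicle counts and lengths in km.
total = city_emissions([
    ({"car": 120, "bus": 4}, 0.5),
    ({"car": 80, "truck": 10}, 0.3),
])
print(round(total, 1))  # → 21072.0
```

The real framework replaces these fixed factors with rates that depend on vehicle type, speed profile, and the stop-and-go patterns inferred from traffic signals.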

Moreover, the researchers evaluated how emissions might change under different scenarios in which traffic patterns or vehicle types shift.

For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hour times were spread out a bit longer, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages — finding that the rougher estimates could deviate widely from the fine-grained results, from −49 percent to 25 percent. That underscores how seemingly small simplifications can introduce large errors into emission estimates.
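The averaging effect in that last scenario is easy to demonstrate numerically. The figures below are made up purely to show the mechanism: a single citywide average can land far from any individual segment's true estimate.

```python
# Illustrative only: replacing per-segment estimates with one citywide
# average distorts each segment's value, echoing the wide deviation range
# the study reports. These numbers are invented, in arbitrary units.
fine_grained = [12.0, 3.5, 40.0, 7.2]   # hypothetical per-segment estimates
citywide_avg = sum(fine_grained) / len(fine_grained)

# Percent deviation of the averaged value relative to each segment's estimate.
deviations = [100.0 * (citywide_avg - x) / x for x in fine_grained]
print(f"{min(deviations):.1f}% to {max(deviations):.1f}%")  # → -60.8% to 347.9%
```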

Major emissions drop

On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.

To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent — and emissions fell even more sharply, by 16 to 22 percent.

This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.

“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”

There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.

“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.

The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.

It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.

Evaluating the ethics of autonomous systems

MIT Latest News - Thu, 04/02/2026 - 12:00am

Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?

To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.   

The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences. 

The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.

“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.

Evaluating ethics

In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.

Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.

Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.

Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.   

“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.

Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.

For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.

These ethical criteria may not be well-specified, so they can’t be measured analytically.

The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.

SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.

“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.

Encoding subjectivity

To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.

The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.

“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
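The pairwise-comparison step can be sketched as below. The LLM judge is replaced here with a trivial rule-based stub; in the real system, the encoded stakeholder preferences are sent to an LLM as a natural-language prompt along with both scenarios. The names `llm_judge`, `Scenario`, and the preference text are all hypothetical, invented for illustration.

```python
# Sketch of an LLM-as-proxy pairwise comparison (stubbed, no actual LLM call).
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost: float          # objective metric (lower is better)
    outage_gap: float    # outage-risk gap between neighborhoods (lower = fairer)

# Hypothetical encoding of one stakeholder group's ethical preference.
PREFERENCE_PROMPT = (
    "Prefer the power-distribution scenario that keeps outage risk more "
    "evenly spread across neighborhoods, even at modestly higher cost."
)

def llm_judge(prompt: str, a: Scenario, b: Scenario) -> Scenario:
    """Stand-in for the LLM proxy: a real system would send `prompt` plus both
    scenario descriptions to an LLM and parse its choice. Here the same
    preference is encoded directly as a rule."""
    return a if a.outage_gap <= b.outage_gap else b

s1 = Scenario("cheap-but-uneven", cost=1.0e6, outage_gap=0.30)
s2 = Scenario("fairer-but-pricier", cost=1.2e6, outage_gap=0.08)
print(llm_judge(PREFERENCE_PROMPT, s1, s2).name)  # → fairer-but-pricier
```

Swapping the prompt for a different user group's preferences would change which scenario the judge selects, which is the adaptivity the researchers describe.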

SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.

In the end, SEED-SET intelligently selects the most representative scenarios — those that either satisfy the objective metrics and ethical criteria or fall short of them. In this way, users can analyze the performance of the AI system and adjust its strategy.

For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.

To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.

The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.

“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.

To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.

In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.

This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

Is “Hackback” Official US Cybersecurity Strategy?

Schneier on Security - Wed, 04/01/2026 - 12:57pm

The 2026 US “Cyber Strategy for America” document is mostly the same thing we’ve seen out of the White House for over a decade, but with a more aggressive tone.

But one sentence stood out: “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” This sounds like a call for hackback: giving private companies permission to conduct offensive cyber operations.

The Economist noticed (alternate link) this, too.

I think this is an incredibly dumb idea:

In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty...

Digital Hopes, Real Power: From Revolution to Regulation

EFF: Updates - Wed, 04/01/2026 - 9:20am

This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings.

From Russia—where wartime censorship and more stringent platform controls have choked dissenting voices—to Nigeria, with its aggressive takedown orders turning social media into political battlegrounds, and to Turkey, where sweeping “disinformation” laws have made platforms heavily policed spaces, freedom of expression online is under attack. Per Freedom House’s 2023 Freedom on the Net Report, 66% of internet users live where political or social sites are blocked, and 78% are in countries where people have been arrested for online posts. New social media regulations have emerged in dozens of countries in the past year alone.

The online landscape looks markedly different than it did fifteen years ago. Back then, social media was still new and largely free from legal restrictions: platforms moderated content in response to user reports, governments rarely targeted them directly, and blocks (when they happened) were temporary, with censorship mostly focused on whole websites that VPNs or proxies could easily bypass. The internet was far from free, but governments’ crude tactics left space for circumvention.

Those early restrictions, as crude as they were, marked the start of a rapid evolution in online censorship. Governments like Thailand, which blocked thousands of YouTube videos in 2007 over critical content, and Turkey, which demanded takedowns from YouTube before blocking the site entirely, tested legal and technical pressures to mute dissent and force platforms’ compliance. By 2011, governments weren't just reacting—they had learned to pressure platforms into becoming instruments of state censorship, shifting their playbooks from blunt blocks to sophisticated systems of control that simple VPNs could no longer reliably bypass. Governments across the region were watching closely, and by the time the 2011 uprisings began, they were prepared to respond.

Looking Back

After learning that a Facebook page—We Are All Khaled Said, honoring a young man killed by police brutality—sparked Egypt’s street protests, Western media hailed online platforms as engines of democracy. Revolution co-creator Wael Ghonim told a journalist: “This revolution started on Facebook.” That claim was debated and contested for years; critically, Facebook had suspended the page two months earlier over pseudonyms violating its real-name policy, restoring it only after advocates intervened. 

Once the protests moved to the streets, Egypt’s government—alert to social media’s power—quickly blocked Facebook and Twitter, then enacted a near-total shutdown (more on that in part 4 of this series). As history shows, the measures didn’t stop the revolution, and Egyptian president Hosni Mubarak stepped down. For a brief moment, freedom appeared to be on the horizon. Unfortunately, that moment was short-lived.

Egypt’s Digital Dystopia

Just as the Egyptian military government quashed revolution in the streets, it also shut down online civic space. Today, Egypt’s internet ranks low on markers of internet freedom. The military government that has ruled Egypt since 2013 has imprisoned human rights defenders and enacted laws—including 2015’s Counter-terrorism Law and 2018’s Cybercrime Law—that grant the state broad authority to suppress speech and prosecute offenders.

The 2018 law demonstrates the ease with which cybercrime laws can be abused. Article 7 of the law allows for websites that constitute “a threat to national security” or to the “national economy” to be blocked. The Association of Freedom of Thought and Expression (AFTE) has criticized the loose definition of “national security” contained within the law, as “everything related to the independence, stability, security, unity and territorial integrity of the homeland.” Notably, individuals can also be penalized—and sentenced to up to six months imprisonment—for accessing banned websites.

Articles 25, which prohibits the use of technology to “infringe on any family principles or values in Egyptian society,” and 26, which prohibits the dissemination of material that “violates public morals,” have been used in recent years to prosecute young people who use social media in ways the government disapproves of. Many of those prosecuted have been young women; for instance, belly dancer Sama Al Masry was sentenced to three years in prison and fined 300,000 Egyptian pounds under Article 26.

Beyond Egypt: Regional Trends

Egypt’s trajectory reflects a wider regional and global pattern. In the years following the uprisings, governments moved quickly to formalize legal authority over digital space, often under the banner of combating cybercrime, terrorism, or “false information.” These laws often contain vaguely worded provisions criminalizing “misuse of social media” or “harming national unity,” giving authorities wide discretion to prosecute speech.

In Qatar and Bahrain, a social media post can result in up to five years in jail. In 2018, prominent Bahraini human rights defender Nabeel Rajab was convicted of “spreading false rumours in time of war”, “insulting public authorities”, and “insulting a foreign country” for tweets he posted about the killing of civilians in Yemen, and sentenced to five years imprisonment.

Two years later, Qatar amended its penal code by setting criminal penalties for spreading “fake news.” Article 136 (bis) sets criminal penalties for broadcasting, publishing, or republishing “rumors or statements or false or malicious news or sensational propaganda, inside or outside the state, whenever it is intended to harm national interests or incite public opinion or disturb the social or public order of the state” and sets a punishment of a maximum of five years in prison, and/or 100,000 Qatari riyals. The penalty is doubled if the crime is committed in wartime.

Now, as war has once again reached the region, these laws are being put to the test. Bahraini authorities have arrested at least 100 people in relation to protests or expression related to the war, while Qatar has arrested more than 300 people on charges of spreading “misleading information.”

And in the UAE, at least 35 people—most or all of whom are foreign nationals—have been arrested and “accused of spreading misleading and fabricated content online that could harm national defence efforts and fuel public panic,” according to the Times of India. The arrests fall under the UAE’s 2022 Federal Decree Law No. 34 on Combating Rumours and Cybercrimes which—says Human Rights Watch—is, along with the country’s Penal Code, “used to silence dissidents, journalists, activists, and anyone the authorities perceived to be critical of the government, its policies, or its representatives.”

From Regional Practice to Global Pattern

Today roughly four out of five countries worldwide have enacted cybercrime legislation, a dramatic expansion over the past decade, with many governments adopting or revising such laws in the years following the Arab uprisings. 

Outside the region, other nations have repurposed these laws to police speech. In Nigeria, journalists have been detained under the Cybercrime Act, with dozens of prosecutions documented since 2015. Bangladesh’s Digital Security Act has been used in thousands of cases—including hundreds against journalists—while in Uganda, authorities have prosecuted political critics under computer misuse laws for social media posts. 

Cybercrime laws are only one piece of a broader toolkit that governments now deploy to control digital spaces. Over the past decade, authorities have introduced sweeping “disinformation” laws, platform liability rules, age verification laws, and data localization requirements that force companies to store data domestically or appoint legal representatives within national jurisdictions. These measures give governments leverage over global technology firms, enabling them to demand faster content removals, obtain user data, or threaten steep fines and throttling if platforms fail to comply. Rather than relying solely on blunt instruments like blocking entire websites, states increasingly govern speech through layered regulatory systems that pressure platforms to police users on the state’s behalf.

The platforms too have changed. The same social media companies that were once championed as tools of democratic mobilization now operate in more constrained environments—and often act as willing participants in repressing speech. Facing financial penalties and the prospect of being blocked entirely, many companies expanded compliance with takedown requests after 2011, as can be seen in the companies’ own transparency reports. They later invested heavily in automated technologies that remove vast quantities of content before it is ever publicly available.

Rights groups around the world, including EFF, have warned that these dynamics disproportionately impact historically marginalized and vulnerable groups, as well as journalists and other human rights defenders. Research by the Palestinian digital rights organization 7amleh and reporting by Human Rights Watch have documented how content moderation policies, government pressure, and opaque enforcement mechanisms increasingly converge—leaving activists, journalists, and human rights defenders caught between state censorship and platform governance.

The New Architecture of Repression

Looking back now, it’s clear that, fifteen years ago, governments were caught off guard. They crudely blocked platforms, shut down networks, and scrambled to contain movements they did not fully understand. But in the years since, states have systematically adapted, transforming what were once reactive measures into durable systems of control.

Today’s controls are embedded in law, outsourced to platforms, and justified through the language of security, safety, and order. Cybercrime statutes, disinformation frameworks, and platform regulations form a layered architecture that allows states to shape online expression at scale while maintaining a veneer of legality. In this system, repression is often procedural, bureaucratic, and continuous.

The question is no longer whether the internet can enable dissent, but whether it can still sustain it under these conditions.

This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

Trump DOJ claims win as Michigan sidesteps climate lawsuit playbook

ClimateWire News - Wed, 04/01/2026 - 6:27am
The state is trying a novel tactic in climate litigation, accusing the oil and gas industry of violating antitrust laws.

NOAA halts crucial dataset that helps measure Arctic sea ice

ClimateWire News - Wed, 04/01/2026 - 6:25am
The agency says its new dataset is better, but ice measurements will take time. "Bad news for climate monitoring," one scientist lamented.

Urban heat island strategy being written to guide cooling efforts

ClimateWire News - Wed, 04/01/2026 - 6:23am
A standards-setting nonprofit is drafting guidelines that tell local officials how they can reduce pockets of deadly heat in cities.

Electricity prices outpace inflation as data centers proliferate

ClimateWire News - Wed, 04/01/2026 - 6:23am
Last year may mark a turning point, where the pace of data center development exceeds the ability of some regional electric grids to keep up.

Illinois balks at climate superfund bill

ClimateWire News - Wed, 04/01/2026 - 6:20am
The measure would have required that major climate polluters pay into a state resilience fund. But the bill didn’t attract enough support in the Democratic-controlled Statehouse.

Virginia updates laws on EV chargers, transmission lines

ClimateWire News - Wed, 04/01/2026 - 6:15am
The moves come on the heels of Gov. Abigail Spanberger (D) creating a new Cabinet-level role for energy.

Solar panel group buys spread across Michigan as residents band together

ClimateWire News - Wed, 04/01/2026 - 6:13am
In the past few years, a largely grassroots solar installation trend has taken shape across a handful of Michigan towns and counties.

‘Gravel gardens’ gain ground to cut wildfire and heat risks

ClimateWire News - Wed, 04/01/2026 - 6:12am
When airborne embers land on plant-based garden mulches like pine bark, straw or wood chips, they ignite quickly and risk spreading fire.

Japan’s top polluters face new rules as carbon market advances

ClimateWire News - Wed, 04/01/2026 - 6:11am
Reporting requirements begin this month for about 300 to 400 firms with annual Scope 1, or direct, emissions of at least 100,000 metric tons.

India set for searing summer as Iran war strains energy supplies

ClimateWire News - Wed, 04/01/2026 - 6:11am
Slower growth in energy storage capacity, coupled with natural gas shortages linked to the war, will leave India heavily reliant on coal and hydropower, as well as less predictable wind generation.

A Taxonomy of Cognitive Security

Schneier on Security - Wed, 04/01/2026 - 5:59am

Last week, I listened to a fascinating talk by K. Melton on cognitive security, cognitive hacking, and reality pentesting. The slides from the talk are here, but—even better—Melton has a long essay laying out the basic concepts and ideas.

The whole thing is important and well worth reading, and I hesitate to excerpt. Here’s a taste:

The NeuroCompiler is where raw sensory data gets interpreted before you’re consciously aware of it. It decides what things mean, and it does this fast, automatic, and mostly invisible. It’s also where the majority of cognitive exploits actually land, right in this sweet spot between perception and conscious thought...

Preview tool helps makers visualize 3D-printed objects

MIT Latest News - Wed, 04/01/2026 - 12:00am

Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.

But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.

To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.

Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.

The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.

Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.

“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.

She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Accurate aesthetics

The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.

Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.

VisiPrint uses two AI models that work together to overcome those challenges.

The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.

From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.

It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.

The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.

Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.

“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
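The paper does not detail how the two maps are combined, but the idea of pairing a depth map with an edge map as conditioning input can be illustrated with a minimal sketch. Here, a Sobel filter extracts an edge map from a (toy) depth map, and the two are stacked into a multi-channel conditioning tensor of the kind typically fed to a conditioned generative model; the function names, the edge detector, and the weighting scheme are all illustrative assumptions, not VisiPrint’s actual implementation.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Approximate an edge map with Sobel gradient magnitudes (pure NumPy)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)  # normalize to [0, 1]

def build_conditioning(depth: np.ndarray, edge_weight: float = 0.5) -> np.ndarray:
    """Stack a normalized depth map and its (weighted) edge map into a
    2-channel conditioning tensor. The balance between the two channels
    is what the edge_weight knob controls (a hypothetical parameter)."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    e = sobel_edges(d) * edge_weight
    return np.stack([d, e], axis=0)  # shape: (2, H, W)

# Toy "depth map": a raised square on a flat background.
depth = np.zeros((32, 32))
depth[8:24, 8:24] = 1.0
cond = build_conditioning(depth)
print(cond.shape)  # (2, 32, 32)
```

In a real system the resulting tensor would guide the generative model at each denoising or generation step, so that the output respects both the object’s shape (depth channel) and the internal slicing contours (edge channel) rather than hallucinating new geometry.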

A user-focused system

The team also built an easy-to-use interface where users can upload the required images and evaluate the preview.

The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.

In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.

To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said it provided better overall appearance as well as more textural similarity with printed objects.

In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.

“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.

In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond the color of the material.

“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.

“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.

This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.

Food loss and waste associated with misbehaviour drives 11% of global anthropogenic greenhouse gas emissions

Nature Climate Change - Wed, 04/01/2026 - 12:00am

Nature Climate Change, Published online: 01 April 2026; doi:10.1038/s41558-026-02597-x

Food loss and waste (FLW) is often attributed to technoeconomic inefficiencies of food systems. However, using a mechanistic analysis framework, we show that food surplus and misconsumption accounted for 11% of global anthropogenic greenhouse gas emissions in 2021, exceeding FLW-associated emissions that are driven by technoeconomic constraints.

Two physicists and a curious host walk into a studio…

MIT Latest News - Tue, 03/31/2026 - 7:00pm

This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).

Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.

In addition to learning something new, Mavalvala explained how experimenting delivers an added piece of excitement: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”

Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded – adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.

While there, the two long-time colleagues also took a detour to explain how in physics experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure. (No, they assure Herwick, they don’t get into a lot of fights.)

In fact, it’s fantastic to have people from both worlds at MIT, said Vitale.  Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.

As scientists fine-tune the gravitational wave detectors, they will inform what data are collected, what astrophysical objects they might find or hope to find – and the search for certain fainter, farther away, or more exotic objects can inform what enhancements they prioritize.

But what if I’m not interested in any of that, Herwick asked. Why should I care?

“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”

Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:

“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing. 

“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”

Watch the full conversation below and on YouTube:


Planetary defense

Turning to objects beyond Earth – specifically, asteroids – Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to satellites.

“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles – big and small – they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.

There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it benefitted by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”

He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”

Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”

Watch the full conversation below and on the GBH YouTube channel: 

Tune in to the Curiosity Desk some Thursdays to hear from MIT researchers as they visit Herwick and the production team. 
