Feed aggregator
Tech companies overstate AI’s climate benefits, report says
States sue Trump admin for revoked energy funds
Enviros, health groups are first to sue over Trump’s big climate rollback
Calif. lawmakers revive push to require coverage for wildfire-ready properties
Olympic skiers voice concern over receding glaciers
Reform UK vows to scrap Britain’s carbon border tax
EV sales boom as Ethiopia bans gas-powered car imports
Parking-aware navigation system could prevent frustration and emissions
It happens every day: a motorist heading across town checks a navigation app to see how long the trip will take, only to find no parking spots available upon reaching their destination. By the time they finally park and walk the rest of the way, they’re significantly later than they expected to be.
Most popular navigation systems send drivers to a location without considering the extra time that could be needed to find parking. This causes more than just a headache for drivers. It can worsen congestion and increase emissions by causing motorists to cruise around looking for a parking spot. This underestimation could also discourage people from taking mass transit because they don’t realize it might be faster than driving and parking.
MIT researchers tackled this problem by developing a system that can be used to identify parking lots that offer the best balance of proximity to the desired location and likelihood of parking availability. Their adaptable method points users to the ideal parking area rather than their destination.
In simulated tests with real-world traffic data from Seattle, this technique achieved time savings of up to 66 percent in the most congested settings. For a motorist, this would reduce travel time by about 35 minutes, compared to waiting for a spot to open in the closest parking lot.
While they haven’t designed a system ready for the real world yet, their demonstrations show the viability of this approach and indicate how it could be implemented.
“This frustration is real and felt by a lot of people, and the bigger issue here is that systematically underestimating these drive times prevents people from making informed choices. It makes it that much harder for people to make shifts to public transit, bikes, or alternative forms of transportation,” says MIT graduate student Cameron Hickert, lead author on a paper describing the work.
Hickert is joined on the paper by Sirui Li PhD ’25; Zhengbing He, a research scientist in the Laboratory for Information and Decision Systems (LIDS); and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in Transactions on Intelligent Transportation Systems.
Probable parking
To solve the parking problem, the researchers developed a probability-aware approach that considers all possible public parking lots near a destination, the distance to drive there from a point of origin, the distance to walk from each lot to the destination, and the likelihood of parking success.
The approach, based on dynamic programming, works backward from good outcomes to calculate the best route for the user.
Their method also considers the case where a user arrives at the ideal parking lot but can’t find a space. It takes into account the distance to other parking lots and the probability of parking success at each.
“If there are several lots nearby that have slightly lower probabilities of success, but are very close to each other, it might be a smarter play to drive there rather than going to the higher-probability lot and hoping to find an opening. Our framework can account for that,” Hickert says.
In the end, their system can identify the optimal lot that has the lowest expected time required to drive, park, and walk to the destination.
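The expected-time idea described above can be sketched with a small backward calculation in the spirit of dynamic programming. Everything below, from the lot figures to the fixed wait penalty, is a hypothetical illustration, not the researchers’ actual model or code:

```python
# Illustrative sketch (not the authors' implementation) of a backward,
# dynamic-programming-style expected-time calculation. Each candidate lot
# has a leg drive time ("drive": from the origin for the first lot, from
# the previous lot otherwise), a walk time to the destination ("walk"),
# and a probability of finding a space ("p"). All numbers are hypothetical.

def expected_time(lots, wait_penalty=30.0):
    """Expected total drive + park + walk time, in minutes, for a fallback
    chain of lots, computed backward from the last lot to the first."""
    remaining = None
    for lot in reversed(lots):
        if remaining is None:
            # Final lot in the chain: if it is full, wait for a spot to open
            # (wait_penalty is a hypothetical stand-in for that cost).
            on_failure = wait_penalty + lot["walk"]
        else:
            on_failure = remaining
        remaining = (lot["drive"]
                     + lot["p"] * lot["walk"]        # space found: park, walk
                     + (1 - lot["p"]) * on_failure)  # lot full: fall back
    return remaining

# Strategy 1: drive to the closest lot and wait there if it is full.
wait_at_closest = [{"drive": 10.0, "walk": 2.0, "p": 0.4}]

# Strategy 2: same first lot, but fall back to likelier-open lots nearby.
with_fallbacks = [
    {"drive": 10.0, "walk": 2.0, "p": 0.4},   # closest, often full
    {"drive": 3.0,  "walk": 5.0, "p": 0.8},   # short hop, likelier open
    {"drive": 2.0,  "walk": 8.0, "p": 0.95},  # overflow lot
]

print(expected_time(wait_at_closest))  # 30.0 minutes
print(expected_time(with_fallbacks))   # 16.38 minutes
```

With these invented numbers, routing through the fallback chain roughly halves the expected time versus waiting at the closest lot, which is the kind of trade-off the framework is built to find.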
But no motorist expects to be the only one trying to park in a busy city center. So, this method also incorporates the actions of other drivers, which affect the user’s probability of parking success.
For instance, another driver may arrive at the user’s ideal lot first and take the last parking spot. Or another motorist could try parking in another lot but then park in the user’s ideal lot if unsuccessful. In addition, another motorist may park in a different lot and cause spillover effects that lower the user’s chances of success.
“With our framework, we show how you can model all those scenarios in a very clean and principled manner,” Hickert says.
Crowdsourced parking data
The data on parking availability could come from several sources. For example, some parking lots have magnetic detectors or gates that track the number of cars entering and exiting.
But such sensors aren’t widely used, so to make their system more feasible for real-world deployment, the researchers studied the effectiveness of using crowdsourced data instead.
For instance, users could indicate available parking using an app. Data could also be gathered by tracking the number of vehicles circling to find parking, or how many enter a lot and exit after being unsuccessful.
Someday, autonomous vehicles could even report on open parking spots they drive by.
“Right now, a lot of that information goes nowhere. But if we could capture it, even by having someone simply tap ‘no parking’ in an app, that could be an important source of information that allows people to make more informed decisions,” Hickert adds.
The researchers evaluated their system using real-world traffic data from the Seattle area, simulating different times of day in a congested urban setting and a suburban area. In congested settings, their approach cut total travel time by about 60 percent compared to sitting and waiting for a spot to open, and by about 20 percent compared to a strategy of continually driving to the next closest parking lot.
They also found that crowdsourced observations of parking availability would have an error rate of only about 7 percent, compared to actual parking availability. This indicates it could be an effective way to gather parking probability data.
In the future, the researchers want to conduct larger studies using real-time route information in an entire city. They also want to explore additional avenues for gathering data on parking availability, such as using satellite images, and estimate potential emissions reductions.
“Transportation systems are so large and complex that they are really hard to change. What we look for, and what we found with this approach, is small changes that can have a big impact to help people make better choices, reduce congestion, and reduce emissions,” says Wu.
This research was supported, in part, by Cintra, the MIT Energy Initiative, and the National Science Foundation.
How MIT OpenCourseWare is fueling one learner’s passion for education
Training for a clerical military role in France, Gustavo Barboza felt a spark he couldn’t ignore. He remembered his love of learning, which once guided him through two college semesters of mechanical engineering courses in his native Colombia, coupled with supplemental resources from MIT Open Learning’s OpenCourseWare. Now, thousands of miles away, he realized it was time to follow that spark again.
“I wasn’t ready to sit down in the classroom,” says Barboza, remembering his initial foray into higher education. “I left to try and figure out life. I realized I wanted more adventure.”
Joining the military in France in 2017 was his answer. For his first three years of service, he was wholly military-minded, focused only on his training and deployments. With more seniority, he took on more responsibilities, and eventually was sent to take a four-month training course on military correspondence and software.
“I reminded myself that I like to study,” he says. “I started to go back to OpenCourseWare because I knew in the back of my mind that these very complete courses were out there.”
At that point, Barboza realized that military service was only a chapter in his life, and the next would lead him back to learning. He was still interested in engineering, and knew that MIT OpenCourseWare could help prepare him for what was next.
He dove into OpenCourseWare’s free, online, open educational resources — which cover nearly the entire MIT curriculum — including classical mechanics, intro to electrical engineering, and single variable calculus with David Jerison, which he says was his most-visited resource. These allowed him to brush up on old skills and learn new ones, helping him tremendously in preparing for college entrance exams and his first-year courses.
Now in his third year at Grenoble-Alpes University, Barboza studies electrical engineering, a shift from his initial interest in mechanical engineering.
“There is an OpenCourseWare lecture that explains all the specializations you can get into with electrical engineering,” he says. “They go from very natural things to things like microprocessors. What interests me is that if someone says they are an electrical engineer, there are so many different things they could be doing.”
At this point in his academic career, Barboza is most interested in microelectronics and the study of radio frequencies and electromagnetic waves. But he admits he has more to learn and is open to where his studies may take him.
MIT OpenCourseWare remains a valuable resource, he says. When thinking about his future, he checks out graduate course listings and considers the different paths he might take. When he is having trouble with a certain concept, he looks for a lecture on the subject, undeterred by the differences between French and U.S. conventions.
“Of course, the science doesn't change, but the way you would write an equation or draw a circuit is different at my school in France versus what I see from MIT. So, you have to be careful,” he explains. “But it is still the first place I visit for problem sets, readings, and lecture notes. It’s amazing.”
The thoroughness and openness of MIT Open Learning’s courses and resources — like OpenCourseWare — stand out to Barboza. In the wide world of the internet, he has found resources from other universities, but he says their offerings are not as robust. And in a time of disinformation and questionable sources, he appreciates that MIT values transparency, accessibility, and knowledge.
“Human knowledge has never been more accessible,” he says. “MIT puts coursework online and says, ‘here’s what we do.’ As long as you have an internet connection, you can learn all of it.”
“I just feel like MIT OpenCourseWare is what the internet was originally for,” Barboza continues. “A network for sharing knowledge. I’m a big fan.”
Explore lifelong learning opportunities from MIT, including courses, resources, and professional programs, on MIT Learn.
AI Found Twelve New Vulnerabilities in OpenSSL
The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:
In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the ...
Big Tech meets Big Oil: Self-driving trucks roar into the Permian Basin
EPA docs: 47 climate staffers reassigned
Emails show DHS agreed to restore canceled disaster grant program
Elizabeth Warren questions a company’s effort to sell flood insurance
Wyoming aims to boost Trump’s agenda with ‘energy dominance fund’
Malaysia, Japan plan carbon capture project, despite climate benefit doubts
Start planning for catastrophic global warming, top advisers tell EU
The week the EU’s climate foundations started to shake
Kenya launches carbon registry to boost climate finance, credibility
Personalization features can make LLMs more agreeable
Many of the latest large language models (LLMs) are designed to remember details from past conversations or store user profiles, enabling these models to personalize responses.
But researchers from MIT and Penn State University found that, over long conversations, such personalization features often increase the likelihood an LLM will become overly agreeable or begin mirroring the individual’s point of view.
This phenomenon, known as sycophancy, can prevent a model from telling a user they are wrong, eroding the accuracy of the LLM’s responses. In addition, LLMs that mirror someone’s political beliefs or worldview can foster misinformation and distort a user’s perception of reality.
Unlike many past sycophancy studies that evaluate prompts in a lab setting without context, the MIT researchers collected two weeks of conversation data from humans who interacted with a real LLM during their daily lives. They studied two settings: agreeableness in personal advice and mirroring of user beliefs in political explanations.
Although interaction context increased agreeableness in four of the five LLMs they studied, the presence of a condensed user profile in the model’s memory had the greatest impact. On the other hand, mirroring behavior only increased if a model could accurately infer a user’s beliefs from the conversation.
The researchers hope these results inspire future research into the development of personalization methods that are more robust to LLM sycophancy.
“From a user perspective, this work highlights how important it is to understand that these models are dynamic and their behavior can change as you interact with them over time. If you are talking to a model for an extended period of time and start to outsource your thinking to it, you may find yourself in an echo chamber that you can’t escape. That is a risk users should definitely remember,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of a paper on this research.
Jain is joined on the paper by Charlotte Park, an electrical engineering and computer science (EECS) graduate student at MIT; Matt Viana, a graduate student at Penn State University; as well as co-senior authors Ashia Wilson, the Lister Brothers Career Development Professor in EECS and a principal investigator in LIDS; and Dana Calacci PhD ’23, an assistant professor at Penn State. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.
Extended interactions
Based on their own sycophantic experiences with LLMs, the researchers started thinking about potential benefits and consequences of a model that is overly agreeable. But when they searched the literature to expand their analysis, they found no studies that attempted to understand sycophantic behavior during long-term LLM interactions.
“We are using these models through extended interactions, and they have a lot of context and memory. But our evaluation methods are lagging behind. We wanted to evaluate LLMs in the ways people are actually using them to understand how they are behaving in the wild,” says Calacci.
To fill this gap, the researchers designed a user study to explore two types of sycophancy: agreement sycophancy and perspective sycophancy.
Agreement sycophancy is an LLM’s tendency to be overly agreeable, sometimes to the point where it gives incorrect information or refuses to tell the user they are wrong. Perspective sycophancy occurs when a model mirrors the user’s values and political views.
“There is a lot we know about the benefits of having social connections with people who have similar or different viewpoints. But we don’t yet know about the benefits or risks of extended interactions with AI models that have similar attributes,” Calacci adds.
The researchers built a user interface centered on an LLM and recruited 38 participants to talk with the chatbot over a two-week period. Each participant’s conversations occurred in the same context window to capture all interaction data.
Over the two-week period, the researchers collected an average of 90 queries from each user.
They compared the behavior of five LLMs with this user context versus the same LLMs that weren’t given any conversation data.
“We found that context really does fundamentally change how these models operate, and I would wager this phenomenon would extend well beyond sycophancy. And while sycophancy tended to go up, it didn’t always increase. It really depends on the context itself,” says Wilson.
Context clues
For instance, when an LLM distills information about the user into a specific profile, it leads to the largest gains in agreement sycophancy. This user profile feature is increasingly being baked into the newest models.
They also found that random text from synthetic conversations increased the likelihood some models would agree, even though that text contained no user-specific data. This suggests the length of a conversation may sometimes impact sycophancy more than content, Jain adds.
But content matters greatly when it comes to perspective sycophancy. Conversation context only increased perspective sycophancy if it revealed some information about a user’s political perspective.
To obtain this insight, the researchers carefully queried models to infer a user’s beliefs, then asked each individual if the model’s deductions were correct. Users said LLMs accurately understood their political views about half the time.
“It is easy to say, in hindsight, that AI companies should be doing this kind of evaluation. But it is hard and it takes a lot of time and investment. Using humans in the evaluation loop is expensive, but we’ve shown that it can reveal new insights,” Jain says.
While the aim of their research was not mitigation, the researchers developed some recommendations.
For instance, to reduce sycophancy one could design models that better identify relevant details in context and memory. In addition, models can be built to detect mirroring behaviors and flag responses with excessive agreement. Model developers could also give users the ability to moderate personalization in long conversations.
“There are many ways to personalize models without making them overly agreeable. The boundary between personalization and sycophancy is not a fine line, but separating personalization from sycophancy is an important area of future work,” Jain says.
“At the end of the day, we need better ways of capturing the dynamics and complexity of what goes on during long conversations with LLMs, and how things can misalign during that long-term process,” Wilson adds.
