Feed aggregator
The Seine in Paris is open for swimming as temperatures soar
Tropical Storm Erin could become Atlantic season’s 1st hurricane
Podcast Episode: Separating AI Hope from AI Hype
If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.
(You can also find this episode on the Internet Archive and on YouTube.)
Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive.
In this episode you’ll learn about:
- What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
- Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
- How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
- Why “cheapfakes” tend to be just as effective as (or more effective than) deepfakes in shoring up political support
- How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line
Arvind Narayanan is professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; they also authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University's Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton's Web Transparency and Accountability Project, uncovering how companies collect and use our personal information.
Resources:
- The WIRED AI Elections Project
- Axios: “Behind the Curtain: A white-collar bloodbath” (May 28, 2025)
- Bloomberg: “Klarna Slows AI-Driven Job Cuts With Call for Real People” (May 8, 2025)
- Ars Technica: “Air Canada must honor refund policy invented by airline’s chatbot” (Feb. 16, 2024)
What do you think of “How to Fix the Internet?” Share your feedback here.
Transcript
ARVIND NARAYANAN: The people who believe that superintelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chess bots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they're vastly, vastly superhuman.
We don't think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It's not even clear when you've done well and when you've not done well.
We think that human performance is not limited by our biology. It's limited by our state of knowledge of the world, for instance. So the reason we're not better doctors is not because we're not computing fast enough, it's just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.
And the other is you've just hit the ceiling of performance. The reason people are not necessarily better writers is that it's not even clear what it means to be a better writer. It's not as if there's gonna be a magic piece of text, you know, that's gonna, like persuade you of something that you never wanted to believe, for instance, right?
We don't think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.
CINDY COHN: That's Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.
JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.
CINDY COHN: On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.
JASON KELLEY: And our guest is one of the most level-headed and reassuring voices in tech.
CINDY COHN: Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.
He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.
ARVIND NARAYANAN: I think there are multiple kinds of techno-optimism. There's the Mark Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I'm not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is.
So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That's a true story.
CINDY COHN: Oh my God. Whoops.
ARVIND NARAYANAN: And, you know, there weren't any great libraries nearby. And so a lot of what I knew, and I not only had to teach myself, but it was hard to access reliable, good sources of information. We had had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-Rom encyclopedia on it.
That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access even. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science.
Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.
Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in the way that internet access, if done right, has the potential and, and has been bringing, a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world with our institutions and so forth.
CINDY COHN: So let's drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.
So, you know, from I think all around the world, there's this experience and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD Rom of an encyclopedia, but it's that same moment and, I think that that is the promise that we have to hang on to.
So what would an educational world look like? You know, if you're a student or a teacher, if we are getting AI right?
ARVIND NARAYANAN: Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known limitations of these chatbots around accuracy, but it turned out that there are relatively easy ways to work around those limitations.
Uh, one kind of example of a user adaptation to it is to always be in a critical mode where you know that out of 10 things that AI is telling you, one is probably going to be wrong. And so being in that skeptical frame of mind, actually in my view, enhances learning. And that's the right frame of mind to be in anytime you're learning anything, I think. So that's one kind of adaptation.
But there are also technology adaptations, right? Just the simplest example: If you ask AI to be in Socratic mode, for instance, in a conversation, uh, a chat bot will take on a much more appropriate role for helping the user learn as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts and it actually limits their critical thinking and their ability to learn and grow, right? So that's one simple example to make the point that a lot of this is not about AI itself, but how we use AI.
More broadly, in terms of a vision for what integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning sign, but a lot of people in the AI industry have taken it as a manual or a vision for what this should look like.
But even in my experiences with my own kids, right, they're five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, it was 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child did to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right?
So this convinces me that there is in fact a lot of potential in AI and personalization if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.
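For illustration, here is a minimal sketch, in Python, of the kind of fraction-estimation exercise described above; the console-based "slider" and all names here are assumptions made for this sketch, not the actual app Claude generated in the conversation.

```python
# Minimal, illustrative sketch of a fraction-estimation exercise
# (not the app described in the episode; details are assumed).
import random


def fraction_round(segments: int = 5) -> None:
    # Pick a random fraction like 3/5.
    numerator = random.randint(1, segments - 1)
    target = numerator / segments
    print(f"Estimate where {numerator}/{segments} falls on a line from 0 to 1.")

    # A text stand-in for the slider: the child types a position between 0 and 1.
    guess = float(input("Your position (0 to 1): "))

    # Divide the segment into parts and highlight the correct number of them.
    bar = "".join("#" if i < numerator else "-" for i in range(segments))
    print(f"Correct answer: {target:.2f}   [{bar}]")
    print(f"You were off by {abs(guess - target):.2f}.")


if __name__ == "__main__":
    fraction_round()
```

The point of the sketch is only to show how small such a throwaway teaching tool can be, consistent with the "create it, use it for 10 minutes, throw it away" idea described here.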
JASON KELLEY: This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you're talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, Chat GPT bans and things like that.
And we've talked a little bit on EFF's Deeplinks blog about how, you know, that's probably an overstep in terms of, like, people need to know how to use this, whether they're students or not. They need to understand what the capabilities are so they can have the sorts of uses of it that adapt to them, rather than just sort of immediately trying to use it to do their homework.
So do you think schools, you know, given the way you see it, are well positioned to get to the point you're describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you're describing because most teachers are overwhelmed as it is.
ARVIND NARAYANAN: Exactly. That's the root of the problem. I think there needs to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there's less of this kind of adversarial approach. Uh, I think about, you know, the levers for change where I can play a little part. I can't change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now is not the most helpful and can be reframed in a way that is much more actionable to teachers and others. So there's a lot of studies that look at what is the impact of AI in the classroom that, to me, are the equivalent of, is eating food good for you? It's addressing the question at the wrong level of abstraction.
JASON KELLEY: Yeah.
ARVIND NARAYANAN: You can't answer the question at that high level because you haven't specified any of the details that actually matter. Whether food is good for you entirely depends on what food it is, and if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you're measuring properties of your arbitrary sample instead of the underlying phenomenon that you wanna study.
And so I think researchers have to drill down much deeper into what does AI for education actually look like, right? If you ask the question at the level of are chatbots helping or hurting students, you're gonna end up with nonsensical answers. So I think the research can change and then other structural changes need to happen.
CINDY COHN: I heard you on a podcast making kind of a similar point, which is that, you know, what if we were deciding whether vehicles were good or bad, right? Everyone could understand that that's way too broad a characterization of a general purpose kind of device to come to any reasonable conclusion. So you have to look at the difference between, you know, a truck, a car, a taxi, or various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in kind of starting to give us some categories, and the one that we're most focused on at EFF is the difference between predictive technologies and other kinds of AI. Because I think, like you, we have identified these kinds of predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?
ARVIND NARAYANAN: That's our view in the book, yes, in terms of the kinds of AI that has the biggest consequences in people's lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they're predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.
The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There's a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.
So that's the technical aspect, and that's because, you know, it's just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future.
It's something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to suspend common sense and somehow believe that the future is actually accurately predictable.
CINDY COHN: The other piece that I've seen you and others talk about is that the only data you have is what the cops actually do, and that doesn't tell you about crime, it tells you about what the cops do. So my friends at the Human Rights Data Analysis Group called it predicting the police rather than predicting crime.
And we know there's a big difference between the crime that the cops respond to and the general crime. So it's gonna look like the people who commit crimes are the people who always commit crimes when it's just the subset that the police are able to focus on, and we know there's a lot of bias baked into that as well.
So it's not just inside the data, it's outside the data that you have to think about in terms of these prediction algorithms and what they're capturing and what they're not. Is that fair?
ARVIND NARAYANAN: That's totally, yeah, that's exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, and you know, it's not the same morally problematic kind of use where you're denying someone their freedom. But a lot of the same pitfalls apply.
I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They're not able to manually go through all of them. So they want to try to automate the process. But that's not actually addressing what is broken about the system, and when they're doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it's only escalating the arms race, right?
I think the reason this is broken is that we fundamentally don't have good ways of knowing who's going to be a good fit for which position, and so by pretending that we can predict it with AI, we're just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.
Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.
So in our view, the only way to get away from this is to make the necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I'm not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.
JASON KELLEY: One of the themes that you bring up in the newsletter and the book is AI evaluation. Let's say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don't have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?
ARVIND NARAYANAN: There are a few big limitations here. Let's say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.
JASON KELLEY: Hopefully!
ARVIND NARAYANAN: Yeah. Um, yeah, exactly. They often don't wanna know, but even if you do wanna know that in terms of the technical aspect of evaluating this, it's really the same problem as the medical system has in figuring out whether a drug works or not.
And we know how hard that is. That actually requires a randomized, controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years, and figure out whether the treatment group, for which you either, you know, gave the drug or, in the hiring case, implemented your algorithm, has a different outcome on average from the control group, for whom you either gave a placebo or, in the hiring case, used the traditional hiring procedure.
Right. So that's actually what it takes. And, you know, there's just no incentive in most companies to do this because obviously they don't value knowledge for its own sake. And the ROI is just not worth it. The effort that they're gonna put into this kind of evaluation is not going to, uh, allow them to capture the value out of it.
It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we're pretty far from having a cultural understanding that this is the sort of thing that's necessary.
And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it's in criminal justice, hiring, wherever it is. So I think that'll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. These are actually experiments on people. They're gonna be very hard to do.
JASON KELLEY: Let's take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Arvind Narayanan.
CINDY COHN: So let's go to the other end of the AI world. The people who, you know, are, I think they call it AI safety, where they're really focused on the, you know, robots-are-gonna-kill-us-all kind of concerns. 'Cause that's a piece of this story as well. And I'd love to hear your take on, you know, kind of the doom loop version of AI.
ARVIND NARAYANAN: Sure. Yeah. So there's, uh, a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated on a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I'm glad that folks are studying AI safety and the kinds of unusual, let's say, kinds of risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently.
But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who's gonna download these systems and what they're gonna do with them.
So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.
And to even try to enforce that you're kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the companies, the few companies that are gonna be licensed to do this, are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.
Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.
So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that that was so dangerous in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn't release it on an open weights basis.
That's a model that my students now build in an afternoon, just to learn the process of building models, right? So that's how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the WIRED database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder for your own tribe so that they will, you know, continue to support whatever it is you're pushing for.
And for that purpose, it doesn't have to be that convincing or that deceptive, it just has to be cheap fakes as it's called. It's the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators. A lot of the AI misinformation we're seeing are these kinds of cheap fakes that don't even require that kind of sophistication to produce, right?
So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great one is in cybersecurity, which, you know, as you know, I worked in for many years before I started working in AI.
And if the concern is that AI is gonna find software vulnerabilities and exploit them and exploit critical infrastructure, whatever, better than humans can. I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that it has actually helped defenders over attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find vulnerabilities and fix those vulnerabilities in their own software before even putting it out there, where attackers would have a chance to find those vulnerabilities.
So to summarize all of that: a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. Uh, we have other ways to defend; in fact, in a lot of ways, AI itself is the defense against some of these AI-enabled threats we're talking about. And thirdly, the defenses that involve trying to control AI are not going to work, and they are, in our view, pretty dangerous for democracy.
CINDY COHN: Can you talk a little bit about the AI as normal technology? Because I think this is a world that we're headed into that you've been thinking about a little more. 'cause we're, you know, we're not going back.
Anybody who hangs out with people who write computer code, knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to go back. Um, so tell me a little bit about, you know, this, this version of, of AI as normal technology. 'cause I think it, it feels like the future now, but actually I think depending, you know, what do they say, the future is here, it's just not evenly distributed. Like it is not evenly distributed yet. So what, what does it look like?
ARVIND NARAYANAN: Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today's economy at least, and asks, how quickly will this happen? What are the effects going to be?
So a lot of people who think this will happen, think that it's gonna happen this decade and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, to use an analogy to the industrial revolution, where a lot of physical tasks became automated. It didn't mean that human labor was superfluous, because we don't take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?
So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case. What jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.
And so for us, that's a, that's a very positive view. We think that for the most part, that will still be fulfilling jobs in certain sectors. There might be catastrophic impacts, but it's not that across the board you're gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don't really see that happening, and we also don't see this happening in the space of a few years.
We talk a lot about what are the various sources of inertia that are built into the adoption of any new technology, especially general purpose technology like electricity. We talk about, again, another historic analogy where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a, a few decades to, to make this transition and that, even when we do make the transition, it's not going to be as scary as a lot of people seem to think.
CINDY COHN: So let's say we're living in the future, the Arvind future, where we've gotten all these AI questions right. What does it look like for, you know, the average person or somebody doing a job?
ARVIND NARAYANAN: Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now, the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right, in the future AI is gonna be the medium through which knowledge work happens. It's kind of there in the background and automatically doing stuff that we need done without us necessarily having to go to an AI application and ask it something and then bring the result back to something else.
There is this famous definition of AI that AI is whatever hasn't been done yet. So what that means is that when a technology is new and it's not working that well and its effects are double-edged, that's when we're more likely to call it AI.
But eventually it starts working reliably and it kind of fades into the background and we take it for granted as part of our digital or physical environment. And we think that that's gonna happen with generative AI to a large degree. It's just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that's happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation.
I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let's automate government and replace it with a chatbot. Uh, you know, we point out that that's missing the point of democracy: if a chatbot is making decisions, it might be more efficient in some sense, but it's not in any way reflecting the will of the people. So whatever people's concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should. You know, maybe it will, uh, free up more human time to do the things that are intrinsically human and really matter, such as how do we govern ourselves and so forth.
And, um, maybe if I can have one last thought around what this positive vision of the future looks like: uh, I would go back to the very thing we started from, which is AI and education. I do think there's orders of magnitude more human potential to open up, and AI is not a magic bullet here.
You know, technology on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms, uh, hopefully one of those reforms is going to be schools and education systems, uh, being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest achieving children today.
And there's, there's just an enormous range. And so being able to improve human potential, to me is the most exciting thing.
CINDY COHN: Thank you so much, Arvind.
ARVIND NARAYANAN: Thank you Jason and Cindy. This has been really, really fun.
CINDY COHN: I really appreciate Arvind's hopeful and correct idea that actually what most of us do all day isn't really reducible to something a machine can replace. That, you know, real life just isn't like a game of chess or, you know, uh, the, the test you have to pass to be a lawyer or, or things like that. And that there's a huge gap between, you know, the actual job and the thing that the AI can replicate.
JASON KELLEY: Yeah, and he's really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it's sort of like asking if food is good for you, are vehicles good for you, but he's much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to sort of learn what methods they can use to make AI work with you and for you and, and how to make it work for the application you're using it for.
It's not something you can just apply, you know, wholesale across anything which, which makes perfect sense, right? I mean, no one I think thinks that, but I think industries are plugging AI into everything or calling it AI anyway. And he's very critical of that, which I think is, is good and, and most people are too, but it's happening anyway. So it's good to hear someone who's really thinking about it this way point out why that's incorrect.
CINDY COHN: I think that's right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and, and it's bad for others, honestly, the same way computers are, computers are good for some things and bad for others. So, you know, we talk about vehicles and food in the conversation, but actually think you could talk about it for, you know, computing more broadly.
I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he's not saying that it won't, but he's pointing out that, you know, in cybersecurity for example, some of the AI methods which had been around for a while, he talked about fuzzing, but there are others, you know, that those techniques, while they were, you know, bad for old cybersecurity, actually have spurred greater protections in cybersecurity. And that's a lesson we learn all the time in security, especially: the cat and mouse game is just gonna continue.
And anybody who thinks they've checkmated, either on the good side or the bad side, is probably wrong. And that I think is an important insight so that, you know, we don't get too excited about the possibilities of AI, but we also don't go all the way to the, the doomers side.
JASON KELLEY: Yeah. You know, the normal technology thing was really helpful for me, right? It's something that, like you said with computers, it's a tool that has applications in some cases and not others. And, you know, I don't know if anyone thought when the internet was developed that this was going to end the world or save it. I guess some people might have thought either one, but, you know, neither is true. Right? And you know, it's been many years now and we're still learning how to make the internet useful, and I think it'll be a long time before we've necessarily figured out how AI can be useful. But there's a lot of lessons we can take away from the growth of the internet about how to apply AI.
You know, my dishwasher, I don't think needs to have wifi. I don't think it needs to have AI either. I'll probably end up buying one that has to have those things because that's the way the market goes. But it seems like these are things we can learn from the way we've sort of, uh, figured out where the applications are for these different general purpose technologies in the past is just something we can continue to figure out for AI.
CINDY COHN: Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don't have an open market for systems where you can decide, I don't want AI in my dishwasher, or I don't want surveillance in my television.
And that's a market problem. And one of these things that he said a lot is that, you know, “just add AI” doesn't solve problems with broken institutions. And I think it circles back to the fact that we don't have a functional market, we don't have real consumer choice right now. And so that's why some of the fears about AI, it's not just consumers, I mean worker choice, other things as well, it's the problems in those systems in the way power works in those systems.
If you just center this on the tech, you're kind of missing the bigger picture and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet, because of course it reminds me of Barlow's Declaration of the Independence of Cyberspace, which, you know, has been interpreted by a lot of people as saying that the internet would magically make everything better. And, you know, Barlow told me directly, like, what he said was that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right?
And I think this might be another area where we do need to bring about a better future, um, and we need to posit a better future, but we also have to be clear-eyed about the, the risks and, you know, whether we're headed in the right direction or not, despite what we, what we hope for.
JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.
CINDY COHN: And I'm Cindy Cohn.
MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.
Fake Clinics Quietly Edit Their Websites After Being Called Out on HIPAA Claims
In a promising sign that public pressure works, several crisis pregnancy centers (CPCs, also known as “fake clinics”) have quietly scrubbed misleading language about privacy protections from their websites.
Earlier this year, EFF sent complaints to attorneys general in eight states (FL, TX, AR, MO, TN, OK, NE, and NC), asking them to investigate these centers for misleading the public with false claims about their privacy practices—specifically, falsely stating or implying that they are bound by the Health Insurance Portability and Accountability Act (HIPAA). These claims are especially deceptive because many of these centers are not licensed medical clinics or do not have any medical providers on staff, and thus are not subject to HIPAA’s protections.
Now, after an internal follow-up investigation, we’ve found that our efforts are already bearing fruit: Of the 21 CPCs we cited as exhibits in our complaints, six have completely removed HIPAA references from their websites, and one has made partial changes (removed one of two misleading claims). Notably, every center we flagged in our letters to Texas AG Ken Paxton and Arkansas AG Tim Griffin has updated its website—a clear sign that clinics in these states are responding to scrutiny.
While 14 remain unchanged, this is a promising development. These centers are clearly paying attention—and changing their messaging. We haven’t yet received substantive responses from the state attorneys general beyond formal acknowledgements of our complaints, but these early results confirm what we’ve long believed: transparency and public pressure work.
These changes (often quiet edits to privacy policies on their websites or deleting blog posts) signal that the CPC network is trying to clean up their public-facing language in the wake of scrutiny. But removing HIPAA references from a website doesn’t mean the underlying privacy issues have been fixed. Most CPCs are still not subject to HIPAA, because they are not licensed healthcare providers. They continue to collect sensitive information without clearly disclosing how it’s stored, used, or shared. And in the absence of strong federal privacy laws, there is little recourse for people whose data is misused.
These clinics have misled patients who are often navigating complex and emotional decisions about their health, misrepresented themselves as bound by federal privacy law, and falsely referred people to the U.S. Department of Health and Human Services for redress—implying legal oversight and accountability. They made patients believe their sensitive data was protected, when in many cases, it was shared with affiliated networks, or even put on the internet for anyone to see—including churches or political organizations.
That’s why we continue to monitor these centers—and call on state attorneys general to do the same.
The “Incriminating Video” Scam
A few years ago, scammers invented a new phishing email. They would claim to have hacked your computer, turned your webcam on, and videoed you watching porn or having sex. BuzzFeed has an article talking about a “shockingly realistic” variant, which includes photos of you and your house—more specific information.
The article contains “steps you can take to figure out if it’s a scam,” but omits the first and most fundamental piece of advice: If the hacker had incriminating video about you, they would show you a clip. Just a taste, not the worst bits so you had to worry about how bad it could be, but something. If the hacker doesn’t show you any video, they don’t have any video. Everything else is window dressing...
Ørsted scrambles for cash in face of Trump opposition
To boost EV sales, Ford looks to the Model T
Interior demands eagle data from wind developers
Exxon asks Supreme Court — again — to take up climate-damages case
Truck manufacturers sue to dissolve ZEV sales agreement with California
Service cuts planned at two large Pennsylvania transit agencies
Green group launches ad campaign to counter California oil lobbying
North Carolina tourist attraction damaged by hurricane to be demolished
UK’s AI ambitions clash with its climate goals
Bosnia’s mountain resorts pivot to summer tourism as climate changes
Torrential rains in Japan cause flooding, mudslides and travel disruptions
Jessika Trancik named director of the Sociotechnical Systems Research Center
Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society, has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.
Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.
“Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.
Much of Trancik’s research focuses on the domain of energy systems, and establishing methods for energy technology evaluation, including of their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.
A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions includes quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.
Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.
As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.
“The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).
Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.
Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.
Harvey Kent Bowen, ceramics scholar and MIT Leaders for Global Operations co-founder, dies at 83
Harvey Kent Bowen PhD ’71, a longtime MIT professor celebrated for his pioneering work in manufacturing education, innovative ceramics research, and generous mentorship, died July 17 in Belmont, Massachusetts. He was 83.
At MIT, he was the founding engineering faculty leader of Leaders for Manufacturing (LFM) — now Leaders for Global Operations (LGO) — a program that continues to shape engineering and management education nearly four decades later.
Bowen spent 22 years on the MIT faculty, returning to his alma mater after earning both a master’s degree in materials science and a PhD in materials science and ceramics processing there. He held the Ford Professorship of Engineering, with appointments in the departments of Materials Science and Engineering (DMSE) and Electrical Engineering and Computer Science, before transitioning to Harvard Business School, where he bridged the worlds of engineering, manufacturing, and management.
Bowen’s prodigious research output spans 190 articles, 45 Harvard case studies, and two books. In addition to his scholarly contributions, those who knew him best say his visionary understanding of the connection between management and engineering, coupled with his intellect and warm leadership style, set him apart at a time of rapid growth at MIT.
A pioneering physical ceramics researcher
Bowen was born on Nov. 21, 1941, in Salt Lake City, Utah. As an MIT graduate student in the 1970s, he helped to redefine the study of ceramics — transforming it into the scientific field now known as physical ceramics, which focuses on the structure, properties, and behavior of ceramic materials.
“Prior to that, it was the art of ceramic composition,” says Michael Cima, the David H. Koch Professor of Engineering in DMSE. “What Kent and a small group of more-senior DMSE faculty were doing was trying to turn that art into science.”
Bowen advanced the field by applying scientific rigor to how ceramic materials were processed. He applied concepts from the developing field of colloid science — the study of particles evenly distributed in another material — to the manufacturing of ceramics, forever changing how such objects were made.
“That sparked a whole new generation of people taking a different look at how ceramic objects are manufactured,” Cima recalls. “It was an opportunity to make a big change. Despite the fact that physical ceramics — composition, crystal structure and so forth — had turned into a science, there still was this big gap: how do you make these things? Kent thought this was the opportunity for science to have an impact on the field of ceramics.”
One of his greatest scholarly accomplishments was “Introduction to Ceramics, 2nd edition,” with David Kingery and Donald Uhlmann, a foundational textbook he helped write early in his career. The book, published in 1976, helped maintain DMSE’s leading position in ceramics research and education.
“Every PhD student in ceramics studied that book, all 1,000 pages, from beginning to end, to prepare for the PhD qualifying exams,” says Yet-Ming Chiang, Kyocera Professor of Ceramics in DMSE. “It covered almost every aspect of the science and engineering of ceramics known at that time. That was why it was both an outstanding teaching text as well as a reference textbook for data.”
In ceramics processing, Bowen was also known for his control of particle size, shape, and size distribution, and how those factors influence sintering, the process of forming solid materials from powders.
Over time, Bowen’s interest in ceramics processing broadened into a larger focus on manufacturing. As such, Bowen was also deeply connected to industry and traveled frequently, especially to Japan, a leader in ceramics manufacturing.
“One time, he came back from Japan and told all of us graduate students that the students there worked so hard they were sleeping in the labs at night — as a way to prod us,” Chiang recalls.
While Bowen’s work in manufacturing began in ceramics, he also became a consultant to major companies, including automakers, and he worked with Lee Iacocca, the Ford executive behind the Mustang. Those experiences also helped spark LFM, which evolved into LGO. Bowen co-founded LFM with former MIT dean of engineering Tom Magnanti.
“I’m still in awe of Kent’s audacity and vision in starting the LFM program. The scale and scope of the program were, even for MIT standards, highly ambitious. Thirty-seven successful years later, we all owe a great sense of gratitude to Kent,” says LGO Executive Director Thomas Roemer, a senior lecturer at the MIT Sloan School of Management.
Bowen as mentor, teacher
Bowen’s scientific leadership was matched by his personal influence. Colleagues recall him as a patient, thoughtful mentor who valued creativity and experimentation.
“He had a lot of patience, and I think students benefited from that patience. He let them go in the directions they wanted to — and then helped them out of the hole when their experiments didn’t work. He was good at that,” Cima says.
His discipline was another hallmark of his character. Chiang was an undergraduate and graduate student when Bowen was a faculty member. He fondly recalls his tendency to get up early, a source of amusement for his 3.01 (Kinetics of Materials) class.
“One time, some students played a joke on him. They got to class before him, set up an electric griddle, and cooked breakfast in the classroom before he arrived,” says Chiang. “When we all arrived, it smelled like breakfast.”
Bowen took a personal interest in Chiang’s career trajectory, arranging for him to spend a summer in Bowen’s lab through the Undergraduate Research Opportunities Program. Funded by the Department of Energy, the project explored magnetohydrodynamics: shooting a high-temperature plasma made from coal fly ash into a magnetic field between ceramic electrodes to generate electricity.
“My job was just to sift the fly ash, but it opened my eyes to energy research,” Chiang recalls.
Later, when Chiang was an assistant professor at MIT, Bowen served on his career development committee. He was both encouraging and pragmatic.
“He pushed me to get things done — to submit and publish papers at a time when I really needed the push,” Chiang says. “After all the happy talk, he would say, ‘OK, by what date are you going to submit these papers?’ And that was what I needed.”
After leaving MIT, Bowen joined Harvard Business School (HBS), where he wrote numerous detailed case studies, including one on A123 Systems, a battery company Chiang co-founded in 2001.
“He was very supportive of our work to commercialize battery technology, and starting new companies in energy and materials,” Chiang says.
Bowen was also a devoted mentor for LFM/LGO students, even while at HBS. Greg Dibb MBA ’04, SM ’04 recalls that Bowen agreed to oversee his work on the management philosophy known as the Toyota Production System (TPS) — a manufacturing system developed by the Japanese automaker — responding kindly to the young student’s outreach and inspiring him with methodical, real-world advice.
“By some miracle, he agreed and made the time to guide me on my thesis work. In the process, he became a mentor and a lifelong friend,” Dibb says. “He inspired me in his way of working and collaborating. He was a master thinker and listener, and he taught me by example through his Socratic style, asking me simple but difficult questions that required rigor of thought.
“I remember he asked me about my plan to learn about manufacturing and TPS. I came to him enthusiastically with a list of books I planned to read. He responded, ‘Do you think a world expert would read those books?’”
In trying to answer that question, Dibb realized the best way to learn was to go to the factory floor.
“He had a passion for the continuous improvement of manufacturing and operations, and he taught me how to do it by being an observer and a listener just like him — all the time being inspired by his optimism, faith, and charity toward others.”
Faith was a cornerstone of Bowen’s life outside of academia. He served a mission for The Church of Jesus Christ of Latter-day Saints in the Central Germany Mission and held several leadership roles, including bishop of the Cambridge, Massachusetts Ward, stake president of the Cambridge Stake, mission president of the Tacoma, Washington Mission, and temple president of the Boston, Massachusetts Temple.
An enthusiastic role model who inspired excellence
During early-morning conversations, Cima learned about Bowen’s growing interest in manufacturing, which would spur what is now LGO. Bowen eventually became recognized as an expert in the Toyota Production System, the automaker’s operational culture and practice, which became a major influence on the LGO program’s curriculum design.
“I got to hear it from him — I was exposed to his early insights,” Cima says. “The fact that he would take the time every morning to talk to me — it was a huge influence.”
Bowen was a natural leader and set an example for others, Cima says.
“What is a leader? A leader is somebody who has the kind of infectious enthusiasm to convince others to work with them. Kent was really good at that,” Cima says. “What’s the way you learn leadership? Well, you’d look at how leaders behave. And really good leaders behave like Kent Bowen.”
MIT Sloan School of Management professor of the practice Zeynep Ton praises Bowen’s people skills and work ethic: “When you combine his belief in people with his ability to think big, something magical happens through the people Kent mentored. He always pushed us to do more,” Ton recalls. “Whenever I shared with Kent my research making an impact on a company, or my teaching making an impact on a student, his response was never just ‘good job.’ His next question was: ‘How can you make a bigger impact? Do you have the resources at MIT to do it? Who else can help you?’”
A legacy of encouragement and drive
With this drive to do more, Bowen embodied MIT’s ethos, colleagues say.
“Kent Bowen embodies the MIT 'mens et manus' ['mind and hand'] motto professionally and personally as an inveterate experimenter in the lab, in the classroom, as an advisor, and in larger society,” says MIT Sloan senior lecturer Steve Spear. “Kent’s consistency was in creating opportunities to help people become their fullest selves, not only finding expression for their humanity greater than they could have achieved on their own, but greater than they might have even imagined on their own. An extraordinary number of people are directly in his debt because of this personal ethos — and even more have benefited from the ripple effect.”
Gregory Dibb, now a leader in the autonomous vehicle industry, is just one of them.
“Upon hearing of his passing, I immediately felt that I now have even more responsibility to step up and try to fill his shoes in sacrificing and helping others as he did — even if that means helping an unprepared and overwhelmed LGO grad student like me,” Dibb says.
Bowen is survived by his wife, Kathy Jones; his children, Natalie, Jennifer Patraiko, Melissa, Kirsten, and Jonathan; his sister, Kathlene Bowen; and six grandchildren.
Jason Sparapani contributed to this article.
Planets without water could still produce certain liquids, a new study finds
Water is essential for life on Earth, so, the reasoning goes, liquid water must be a requirement for life on other worlds. For decades, scientists’ definition of habitability on other planets has rested on this assumption.
But what makes some planets habitable might have very little to do with water. In fact, an entirely different type of liquid could conceivably support life in worlds where water can barely exist. That’s a possibility that MIT scientists raise in a study appearing this week in the Proceedings of the National Academy of Sciences.
From lab experiments, the researchers found that a type of fluid known as an ionic liquid can readily form from chemical ingredients that are also expected to be found on the surface of some rocky planets and moons. Ionic liquids are salts that exist in liquid form below about 100 degrees Celsius. The team’s experiments showed that a mixture of sulfuric acid and certain nitrogen-containing organic compounds produced such a liquid. On rocky planets, sulfuric acid may be a byproduct of volcanic activity, while nitrogen-containing compounds have been detected on several asteroids and planets in our solar system, suggesting the compounds may be present in other planetary systems.
Ionic liquids have extremely low vapor pressure and do not evaporate; they can form and persist at higher temperatures and lower pressures than liquid water can tolerate. The researchers note that ionic liquids can be a hospitable environment for some biomolecules, such as certain proteins that remain stable in the fluid.
The scientists propose that, even on planets that are too warm or whose atmospheres are too low-pressure to support liquid water, there could still be pockets of ionic liquid. And where there is liquid, there may be potential for life, though likely not anything that resembles Earth’s water-based beings.
“We consider water to be required for life because that is what’s needed for Earth life. But if we look at a more general definition, we see that what we need is a liquid in which metabolism for life can take place,” says Rachana Agrawal, who led the study as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Now if we include ionic liquid as a possibility, this can dramatically increase the habitability zone for all rocky worlds.”
The study’s MIT co-authors are Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, along with Iaroslav Iakubivskyi, Weston Buchanan, Ana Glidden, and Jingcheng Huang. Co-authors also include Maxwell Seager of Worcester Polytechnic Institute, William Bains of Cardiff University, and Janusz Petkowski of Wroclaw University of Science and Technology, in Poland.
A liquid leap
The team’s work with ionic liquid grew out of an effort to search for signs of life on Venus, where clouds of sulfuric acid envelop the planet in a noxious haze. Despite its toxicity, Venus’ clouds may contain signs of life — a notion that scientists plan to test with upcoming missions to the planet’s atmosphere.
Agrawal and Seager, who is leading the Morning Star Missions to Venus, were investigating ways to collect and evaporate sulfuric acid. If a mission collects samples from Venus’ clouds, sulfuric acid would have to be evaporated away in order to reveal any residual organic compounds that could then be analyzed for signs of life.
The researchers were using their custom low-pressure system, designed to evaporate away excess sulfuric acid, to test evaporation of a solution of the acid and an organic compound, glycine. They found that, in every case, while most of the liquid sulfuric acid evaporated, a stubborn layer of liquid remained. They soon realized that the sulfuric acid was chemically reacting with the glycine, transferring hydrogen atoms from the acid to the organic compound. The result was an ionic liquid: a fluid mixture of salts, or ions, that persists as a liquid across a wide range of temperatures and pressures.
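To make the chemistry concrete, one plausible way to write that proton transfer (a simplified sketch, not a reaction scheme given in the study) is sulfuric acid protonating glycine’s amine group, leaving behind an ion pair that behaves as an ionic liquid:

\[
\mathrm{H_2SO_4} + \mathrm{H_2N{-}CH_2{-}COOH} \;\longrightarrow\; \mathrm{[H_3N{-}CH_2{-}COOH]^+} + \mathrm{[HSO_4]^-}
\]

Here the nitrogen on the organic compound accepts the hydrogen that the acid donates, consistent with the exchange the researchers describe.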
This accidental finding kickstarted an idea: Could ionic liquid form on planets that are too warm and host atmospheres too thin for water to exist?
“From there, we took the leap of imagination of what this could mean,” Agrawal says. “Sulfuric acid is found on Earth from volcanoes, and organic compounds have been found on asteroids and other planetary bodies. So, this led us to wonder if ionic liquids could potentially form and exist naturally on exoplanets.”
Rocky oases
On Earth, ionic liquids are mainly synthesized for industrial purposes. They do not occur naturally, except in one specific case, in which the liquid is generated from the mixing of venoms produced by two rival species of ants.
The team set out to investigate under what conditions ionic liquid could be naturally produced, and over what range of temperatures and pressures. In the lab, they mixed sulfuric acid with various nitrogen-containing organic compounds. In previous work, Seager’s team had found that the compounds, some of which can be considered ingredients associated with life, are surprisingly stable in sulfuric acid.
“In high school, you learn that an acid wants to donate a proton,” Seager says. “And oddly enough, we knew from our past work with sulfuric acid (the main component of Venus’ clouds) and nitrogen-containing compounds, that a nitrogen wants to receive a hydrogen. It’s like one person’s trash is another person’s treasure.”
The reaction could produce a bit of ionic liquid if the sulfuric acid and nitrogen-containing organics were in a one-to-one ratio — a ratio that was not a focus of the prior work. For their new study, Seager and Agrawal mixed sulfuric acid with over 30 different nitrogen-containing organic compounds, across a range of temperatures and pressures, then observed whether ionic liquid formed when they evaporated away the sulfuric acid in various vials. They also mixed the ingredients onto basalt rocks, which are known to exist on the surface of many rocky planets.
“We were just astonished that the ionic liquid forms under so many different conditions,” Seager says. “If you put the sulfuric acid and the organic on a rock, the excess sulfuric acid seeps into the rock pores, but you’re still left with a drop of ionic liquid on the rock. Whatever we tried, ionic liquid still formed.”
The team found that the reactions produced ionic liquid at temperatures up to 180 degrees Celsius and at extremely low pressures — much lower than that of the Earth’s atmosphere. Their results suggest that ionic liquid could naturally form on other planets where liquid water cannot exist, under the right conditions.
“We’re envisioning a planet warmer than Earth, that doesn’t have water, and at some point in its past or currently, it has to have had sulfuric acid, formed from volcanic outgassing,” Seager says. “This sulfuric acid has to flow over a little pocket of organics. And organic deposits are extremely common in the solar system.”
Then, she says, the resulting pockets of liquid could stay on the planet’s surface, potentially for years or millennia, where they could theoretically serve as small oases for simple forms of ionic-liquid-based life. Going forward, Seager’s team plans to investigate further, to see what biomolecules and ingredients for life might survive, and even thrive, in ionic liquid.
“We just opened up a Pandora’s box of new research,” Seager says. “It’s been a real journey.”
This research was supported, in part, by the Sloan Foundation and the Volkswagen Foundation.
Surprisingly diverse innovations led to dramatically cheaper solar panels
The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity.
A new MIT study drills down on specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role.
The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.
The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.
“Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.
Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group; Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.
Identifying innovations
This work builds on mathematical models, previously developed by the researchers, that tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems.
In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines.
They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.
“Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.
Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect these drivers. Next, they grouped these innovations to identify patterns, revealing clusters that reduced costs by improving materials or prefabricating components to streamline manufacturing and installation. Finally, the team tracked industry origins and timing for each innovation, and consulted domain experts to zero in on the most significant innovations.
All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.
“With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.
They chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring.
PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.
“By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.
BOS costs depend more on soft technologies: nonphysical elements, such as permitting procedures, which have contributed significantly less to PV’s past cost improvement than hardware innovations have.
“Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.
Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though their impact was not quantified in this study, the team’s framework could support future analysis of their economic effects, along with those of similar innovations that streamline deployment processes.
Interconnected industries
The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also impacted by innovations in software engineering and electric utilities.
Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.
In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.
“I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.
“PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.
The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.
“In terms of knowledge spillovers, what we've seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.
In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation if one has the numerical data to plug into the cost equation.
For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication.
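As a minimal sketch of that kind of calculation, the short program below plugs illustrative numbers into a simplified additive cost model; the function, variable names, and all values are hypothetical placeholders for demonstration, not the study’s actual cost equation or data.

# Simplified sketch: estimate an innovation's cost impact by comparing
# system cost per watt before and after a change in one cost driver.
# All names and numbers are hypothetical, not taken from the study.

def pv_system_cost_per_watt(silicon_cost_per_kg, silicon_kg_per_watt,
                            other_module_cost_per_watt, bos_cost_per_watt):
    # Additive toy model: silicon materials + other module costs + balance-of-system
    module_cost = silicon_cost_per_kg * silicon_kg_per_watt + other_module_cost_per_watt
    return module_cost + bos_cost_per_watt

# Before the innovation: more silicon lost per watt during wafer cutting (placeholder values)
before = pv_system_cost_per_watt(60.0, 0.10, 3.0, 4.0)

# After the innovation: less silicon wasted and higher throughput (placeholder values)
after = pv_system_cost_per_watt(60.0, 0.04, 3.0, 4.0)

print(f"Estimated cost impact: ${before - after:.2f} per watt")

In the study itself, estimates like the wire-sawing figure come from the researchers’ detailed cost model combined with historical data on material prices and manufacturing procedures.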
“Through this retrospective analysis, you learn something valuable for future strategy because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.
Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.
“Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomena,” Trancik says.
This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.