Minnesota Now with Nina Moini

Elon Musk's social media platform X sues Minnesota over political deepfake ban

Elon Musk attends the finals for the NCAA wrestling championship on March 22 in Philadelphia.
Matt Rourke | AP file

The social media platform X, run by Elon Musk, has filed a federal lawsuit challenging a Minnesota law barring the use of deepfakes to influence an election.

The lawsuit, filed Wednesday against Minnesota Attorney General Keith Ellison, claims the law — passed with bipartisan support in the Legislature in 2023 — violates free speech rights.

The law made it a crime to post fake videos that could be used to influence an election. Legislators were concerned that deepfakes, made with AI and digital editing, could be used to spread misinformation about candidates. Violations carry potential penalties of fines and prison time.

X’s lawsuit claims the law’s “requirements are so vague and unintelligible that social media platforms cannot understand what the statute permits and what it prohibits.”

And the lawsuit further claims that the law’s enforcement system — with the potential for criminal penalties — “incentivizes platforms to err on the side of removing any content that presents even a close call as to whether it is a ‘deep fake’ prohibited by the statute.”

Noting that there is no penalty for “erring on the side of too much censorship,” the lawsuit alleges that the Minnesota law establishes a system that “will inevitably result in the censorship of wide swaths of valuable political speech and commentary and will limit the type of ‘uninhibited, robust, and wide-open’ ‘debate on public issues’ that core First Amendment protections are designed to ensure.”

And the lawsuit says that “X and other social media platforms already maintain robust policies and features that address what they view as problematic content,” such as X’s “Community Notes” feature.

The lawsuit seeks a declaration that the Minnesota law is unconstitutional, is null and void, and won’t be enforced by Ellison or other state officials.

State Sen. Erin Maye Quade, DFL-Apple Valley, helped pass the law. She said the law was written to prevent fake material close to an election that might deceive voters.

“I'm not surprised that a man who bought a presidential election and tried to buy a Wisconsin Supreme Court seat now wants to be able to influence other elections through very realistic deep fakes that are meant to injure candidates and that people can't tell aren't real,” she said. “Because that is the standard that we have here for this deep fake law. It's very narrow.”

In a statement to MPR News, Ellison’s office said it “is reviewing the lawsuit and will respond in the appropriate time and manner.”

MPR News reporter Dana Ferguson contributed to this story.


Alan Rozenshtein, a University of Minnesota law professor who specializes in technology and the Constitution, joined Minnesota Now to talk about the lawsuit.


Subscribe to the Minnesota Now podcast on Apple Podcasts, Spotify or wherever you get your podcasts.


Audio transcript

[THEME MUSIC] NINA MOINI: It's our top story this afternoon on Minnesota Now. The social media company X is suing the state of Minnesota. Lawmakers passed a ban in 2023 on using highly realistic fake videos created with artificial intelligence to sway an election. X, which is owned by billionaire Elon Musk and used to be called Twitter, filed a lawsuit yesterday, saying that ban is unconstitutional and that it's too vague so it would, quote, "lead to blanket censorship," including of fully protected core political speech.

So joining me now to explain is a University of Minnesota law professor who specializes in technology and the Constitution. Alan Rozenshtein, welcome to the show. Thank you for sharing your knowledge with us.

ALAN ROZENSHTEIN: Thanks for having me.

NINA MOINI: For starters, I hoped you could talk a little bit about what exactly constitutes a deepfake. What are they, and how might they affect elections?

ALAN ROZENSHTEIN: So a deepfake is any piece of media-- it could be an image. It could be an audio. It could be video-- that is used with artificial intelligence-- that's what makes it a deepfake-- to impersonate someone and make it appear that they've done something or said something that they haven't done. Obviously, impersonations are nothing new. We've had technologically created ones for many decades, and then, of course, people have been impersonating each other probably for as long as human history. But what's special about deepfakes is that they can be particularly realistic, and they can be quite easy to generate.

NINA MOINI: Yeah, and what do you make of the argument from the lawsuit, that this law passed in Minnesota is sort of too vague and that it might lead to censorship?

ALAN ROZENSHTEIN: I'm not generally in the business of agreeing with Elon Musk, but I think in this case, I am forced to. Which is to say I think this lawsuit is actually quite strong and that this law, which is actually already being challenged in another lawsuit that's currently ongoing, is very likely to be struck down on both First Amendment grounds and also federal statutory grounds.

NINA MOINI: Would you talk a little bit more about that, Professor, both of the points there?

ALAN ROZENSHTEIN: Sure. So the issue here, the core argument in the lawsuit and also in the other lawsuit is that this law violates the First Amendment because it restricts a kind of speech. Deepfakes are a kind of speech. Just because they're generated with AI doesn't make them any less First Amendment protected. And just because they're false actually also doesn't make them First Amendment unprotected. The First Amendment protects plenty of false speech.

Now, there are categories of false speech, whether deepfakes or not, that aren't protected by the First Amendment. So if the deepfake is defamatory, for example, then the political candidate could sue. If the deepfake is used to commit specific fraud, then I think the government would have a good argument if it was limited to that specific category of deepfakes. But here the government is asserting the power to not just censor, but indeed to criminalize a whole category of content. And that generally just goes far beyond what the First Amendment allows.

And I think when you get to how that applies to platforms, it's actually even worse. Because it's one thing to say that an individual is going to be held accountable for creating and distributing a deepfake, but this law also arguably applies to platforms, and it creates potential criminal liability for them. And what that means is that to comply with this law, platforms are going to err on the side of taking down a huge amount of content, not just the deepfakes that are intended to be taken down by the law, but anything that might plausibly be a deepfake or that a platform suspects of being a deepfake because of the penalties. And this kind of over-censorship is a huge, huge, huge First Amendment concern. So I think on the constitutional arguments, the lawsuit is quite strong.

NINA MOINI: What about the Supreme Court? Are there any precedents there, where it comes to speech rights and social media companies? I think this idea of how to regulate social media companies has been around a long time.

ALAN ROZENSHTEIN: Yeah. So I think this case is not about regulating social media companies per se. The issue with this case is not the social media companies' own First Amendment rights. The issue here is really about whether or not false speech is itself First Amendment protected. And the court has held over and over again that generally false speech is First Amendment protected, just like any other speech, except for some very narrow exceptions.

NINA MOINI: So a lot of that, the deepfakes, though, do end up on social media. From your standpoint, with your background, what would be a more lawful or a better way to make sure that this issue of deepfakes, as technology and AI progresses, has some regulation around it?

ALAN ROZENSHTEIN: Well, I think that more narrow and targeted regulations could potentially pass constitutional muster. I think a disclosure requirement, whereby if something is a deepfake, then the creator or the distributor, if they know that, have to disclose it. I think that's actually a much more plausible argument. And then there are other categories of deepfakes with respect to, for example, nonconsensual pornography of individuals, where I think the government's argument for preventing that is stronger than in the political context, where the government is potentially censoring a lot of political speech, which is the most highly protected speech under the First Amendment.

NINA MOINI: Where would the regulation come from that you speak of? Would that be passed by members of Congress? Where would it come from? Because what we're talking about right now is a state law. I'm just curious where you think that should come from.

ALAN ROZENSHTEIN: Yeah, so it depends on who the regulation is targeted at. If the regulation is being targeted at individuals who create and disseminate this content, then it could come from either the states or the federal government. But if the regulation is also seeking to regulate platforms, that can only come from the federal government. And the reason for that is that in the 1990s, Congress passed a law called Section 230, part of a broader law called the Communications Decency Act, and that law explicitly immunized, gave liability protection to platforms for any content made by their users.

And part of that law preempts state law that tries to conflict with that. So in our system, the federal government is supreme over the states. And so to the extent that-- and this is also part of the lawsuit. To the extent that this Minnesota law is in violation of that federal law, because it tries to impose liability on a platform for content that the platform's users created, it's also illegal.

NINA MOINI: Do you foresee perhaps other social media companies joining the lawsuit?

ALAN ROZENSHTEIN: Perhaps, but it doesn't really matter. At the end of the day, if the court strikes down this law as to X, it's going to strike the law down as to everyone else.

NINA MOINI: And you did mention, just want to make sure people know, on the nonconsensual sexual deepfake videos, Minnesota also has a law against that. And a lot of these laws are very new in this state. They're working to close a loophole in that ban this year. Do you think if this lawsuit went forward, do you know what it would mean for restrictions on sexual-related deepfakes?

ALAN ROZENSHTEIN: I think it depends, of course, on how the court rules and how broadly or narrowly the court rules. Again, I do think that the case for regulating, and potentially even criminalizing, nonconsensual pornographic deepfakes is stronger than for political deepfakes because I think there, the case that it is creating real, profound harms is easier to make. And also that speech itself and related speech has very little First Amendment value.

Whereas the speech at issue in political deepfakes, though it may be false, is core political speech and so is of higher value. Now, that doesn't mean that any law about pornographic deepfakes would be constitutional. It has to be written very carefully so as, in particular, not to create over-censorship. But I do think that those laws would be easier to defend than political deepfake laws.

NINA MOINI: And just before I let you go, Professor, when we're talking about these deepfakes, it's in the context of really well-known people or politicians. But what does it mean for the everyday person?

ALAN ROZENSHTEIN: Well, I think that it's clearly unfortunate that we have technology that can create nonconsensual images of us. Now, again, I do think if the legislature wanted to pass a law about deepfakes of ordinary individuals outside the political context, I think that would be an easier law to defend than about high-level politicians, especially because-- and I think this is an unfortunate reality. We shouldn't assume that the reason for misinformation and disinformation in society is the existence of deepfakes. At the end of the day, unfortunately, we live in a society where a lot of people, frankly, want to be misinformed. And so I actually think that focusing on political deepfakes is, frankly, not the biggest source of leverage for depolarizing our politics.

NINA MOINI: And where would you focus?

ALAN ROZENSHTEIN: Well, I would focus on trying to convince individuals to do some more critical thinking about how they think and how they read the news. I think precisely the same group of people that would be most swayed by deepfakes are also the same group of people right now that are swayed by stuff they read in the media and don't think critically about. But I don't think it's, frankly, the deepfakes that are the reason why so many people across the political spectrum have such false and, honestly, sometimes ridiculous views about politics.

NINA MOINI: You're seeing people believe often what they want to believe, and that is a challenge. Professor Rozenshtein, thank you so much for your time this afternoon. Really appreciate it.

ALAN ROZENSHTEIN: Thanks for having me.

NINA MOINI: That was Alan Rozenshtein, law professor at the University of Minnesota. And by the way, MPR News has reached out to Attorney General Keith Ellison this morning for an interview. His office said he needs more time to review the complaint.


Transcription services provided by 3Play Media.