Is that the candidate's actual voice and image on the air or online, or one generated by artificial intelligence in an effort to deceive voters? This election season, it could be hard to tell. The recent release of OpenAI's latest text-to-video feature, called Sora, makes hyperrealistic yet complex artificially generated videos. The images are potentially impossible to discern as fake. That's why states, including Wisconsin, are moving to enact laws around the use of AI in campaigns. A bill moving quickly through the legislature would require campaign ads that contain synthetic media, meaning audio or video content substantially produced by means of generative artificial intelligence, to include a disclaimer. That's a good thing, according to Edgar Lin, Wisconsin policy strategist and attorney with the nonpartisan nonprofit Protect Democracy. He joins us now. Thanks very much for being here.

Thank you for having me.

So are there examples that we've already seen out there of artificial intelligence generated campaign materials?

Yes. The U.S. election season is only in the midst of its primaries, and already we have examples of political parties, campaigns, and at least one super PAC using AI generated content in ads, campaign videos, and voter outreach. I'll give you two examples. Last spring, the Republican National Committee responded to President Biden's re-election campaign announcement with an AI generated video illustrating the country's projected future during a second Biden term. The other example is that at the end of last year, Shamaine Daniels, a Democratic congressional candidate in Pennsylvania, launched an interactive AI powered political campaign caller for voter outreach. So they are being used. And what we're seeing in the U.S. is consistent with the proliferation of AI generated content in elections around the world: elections in Slovakia and Argentina last year, and most recently Taiwan and Indonesia.

Are they always used deceptively?
Not necessarily. That will depend on how the content is used.

So who is making this stuff?

Well, there are a lot of technology platforms that are creating these types of artificial intelligence technology, and there's a list of them that people generally hear about. There's ChatGPT. There's a whole slate of them.

Are they easy or hard to spot for the average person?

That's a great question. It depends. Now, I will say that it's an arms race in terms of spotting, in terms of the technology. The bottom line is that detection capabilities are developing, but they are not keeping up, and never will keep up, with the increased sophistication and realism of AI generated content. We've seen photos from maybe last year where perhaps the fingers are a little unusual. But today, that technology has already improved to the point that what we have now is different from last year.

So how likely is it that synthetic media would deceive voters?

You know, it is likely. And this is something that we are not used to, because historically we trust what we see and we trust what we hear, and that's video and audio. In today's world, as the technology ramps up at an increasing speed, detection is very hard. And so with that, the likelihood of deception is very real.

So at the very least, how important is it in your mind that there's a disclosure that says the audio or video contains content generated by AI?

It is incredibly important. Voters should anticipate that they may encounter AI generated content related to the election, and they should not rely on their senses alone to identify what's human generated versus what's AI generated. So disclosure is incredibly important. But I'll just say that this is a portfolio of tools; there is not one silver bullet. Disclosure is one, along with detection and journalistic integrity. All of these are a host of tools that could be helpful for people viewing these ads.
How could this synthetic or AI generated media cause even more mistrust in elections than already exists?

Yeah, that's a great question. The threats to our democracy, the misinformation, the playbook that people use, they're still the same. The difference is that AI makes things bigger, faster, and stronger. And it's about accessibility to the public, because you can imagine that in the past we've watched movies with special effects, and they're very good, but making those is limited to the people who have that skill. Even if you think of photoshopping, that's limited to people who know how to use Photoshop well. But with synthetic media and AI generated content, that door has now been opened for the public to use. And so it's about accessibility to these awesome technologies.

It's pretty scary stuff. Edgar Lin, thanks very much.

Thank you. That's why we support this bill. We support the fact that at least people should be made aware. And how they disclose it, obviously, is going to be a big factor too.

You mean like whether it's in text or part of a verbal disclaimer or something?

Yeah, there are questions like, where is that text on the video? Is it very small? Is it very glaring? All of those matter for voters. And what's interesting about this bill is that it gives the ethics commission rulemaking power, in part so that they have the flexibility to evolve as the technology evolves, versus passing a law every time something comes up.

Yeah, that is interesting. Now we wait for it to get passed and signed.

Yeah, and we'd say hurry up. I think it's the Wild West. But this is one area where policymakers and even technology platforms actually have a consensus that disclosing where the content comes from is pretty important.

How unusual to have a consensus. Edgar Lin, thank you.

Thank you so much. It was really nice talking with you.