So in terms of AI, the legislature passed a bill last year, by unanimous voice vote, that talked about regulating AI. What does it actually do, and what will the impact be this fall for television viewers?

Well, the regulation applies to political advertisements, and it basically requires that advertisers, if they use AI, say so in the ads: that the content was produced by AI in some way. We've actually done research on this in terms of correcting misinformation. We'll show people a misinformation-correction piece and tell them either that a human created it or that an AI bot created it, and when we tell them an AI bot created it, people actually tend to trust it a little bit more. That ran against what we thought might happen; people tended to trust the AI-created information correction.

Now, political advertisements are usually not information corrections. They usually make really strong claims, sometimes claims that aren't true or that really flirt with the line of what's true. Candidates, though, really like to control their message, so I'd be surprised if they fully ceded their own advertisements to AI. They might use AI to generate ideas. They might ask ChatGPT, give me ten potential ways to make this claim, or give me twenty potential crowds that look like this or that, but then they might find the one they like, go shoot it for real, and then they wouldn't have to say it was AI-generated.

I think the real danger with AI in this election is not in campaign advertisements; it's in social media posts that go viral, and there's really no regulation for that.

That's right. There's really nothing the state legislature or Congress can do to stop that, unless they really want to regulate people's speech in a way that would probably be very difficult to do successfully and get past the courts.
So back to the bill in Wisconsin, the one thing that jumped out at me was that the only thing it really seemed to do was relieve television stations of any responsibility, to make sure they couldn't be held liable if the campaigns themselves did not disclose, or disclosed improperly.

Yeah, it's very similar to how social media operates, where if you say something that's completely false on Facebook, it goes viral, and lots of people believe it, Facebook can't get in trouble. Now the same kind of principle applies to AI in television ads: it won't be the TV stations that air the ads that get into trouble, but the candidates who didn't disclose that there was AI.

What obligation is there on the WBA, the Wisconsin Broadcasters Association, or individual television stations? I tried contacting them, and none of them would go on camera to talk about how they handle individual claims or allegations that an ad is false and needs to be pulled down. And we know we're going to be flooded with advertising. So what is the stopgap there? I mean, in terms of just more truth, more ads against each other?

There are examples of ads getting pulled down because they're not true, or because the reaction to them was so negative that some stations will say, we're not going to air this particular ad. Whether that happens because the content of the ad, or part of it, was created by AI is an open question. Probably only if there's misinformation being sown in one way or another, and even then, broadcast companies aren't looking to make history here and be accused of bias by taking down the ad of one side but not the other. And so if one side is really good at disclosing when they use AI in ads and the other side isn't, that might pose a real problem for them.
It's reasonable for viewers of television programming to believe the content they see is verifiably true, and it's reasonable for people to think the content in the ads that stations choose to air is true. So there is a bit of responsibility on the side of the broadcasters, but it's also extremely hard to police. How can they know for sure the video was AI-generated? How can they know for sure the script was AI-generated? These are really difficult problems to solve without universally accepted ways of solving them.

So, you talked about social media. We saw an example of misinformation when Kamala Harris first announced: she had that rally in Michigan at the airport hangar, and then there were all sorts of conspiracy theories saying that was an AI-generated crowd. Is that the most likely way AI comes into play, just being a boogeyman? Something to scare people: this is all fake, you can't trust anything?

I think there are two ways AI can matter. One is using AI as a boogeyman, so something happens and the other side says, oh, that must be AI, it can't possibly be real; you should be skeptical of the other side because they're using AI. The other is that a candidate picks up on a post that uses AI and treats it as true, which has also happened, as when former President Trump shared information that Taylor Swift had endorsed him, which she had not. When these kinds of things happen, especially when the candidates themselves pick them up and share them, they take on a life of their own in really remarkable and fast ways that are hard to regulate.

Is this different than the scare about deepfakes four years ago, or Russian influence? We just saw the Biden administration release new sanctions against Russia today. Is AI new, or is it just the new term for lies and misinformation?

It's another tool. AI can tell us things that are verifiably true, and AI can make things up.
It's not AI that's doing that; it's how the user intends to share that information. It's another tool. It can be used appropriately, and it can be misused. Some people will use it well; some people will misuse it. It's really hard, I think, to think about how we should approach content that's produced by artificial intelligence. Artificial intelligence could look at the box score of a baseball game and write up a story about how this player went two for four and had a home run, this one had the game-winning RBI, and the pitcher went this many innings, and it could write a story that's probably fairly serviceable. But if AI starts describing the crowd reaction, now AI is making that up. What's the line that we want to draw? And the same kind of question is going to come toward us when it comes to political campaigns. Where are we going to draw the line on claims that are artificially generated? And how are we going to police who's making those claims and whether they're disclosing where the claims come from? Candidates presently have to cite sources when they make claims in ads about things their opponents have said. And now in Wisconsin, they have to say if it came from AI. Whether that means it's false or true, though, is still another question.

And what about third-party interest groups who don't care if they get fined $1,000?

These are the places where I think we might have to pay a little more attention. They don't mind paying a fine, and they don't think the negative attention will rise to the level of the good they get for their side by fomenting chaos and sharing things that aren't true. And so in a world where there's not very much regulation on what third-party advocates can do, the law in Wisconsin is trying to at least say, you've got to tell us if it's coming from AI. But that's not the same thing as saying the information is true. That's a separate question.
I think some people think of AI and they think of images, but you've also talked about how it can create the phrasing and the wording. If AI helps generate something but then a human goes in and touches it up, is it really AI anymore? Or does it only count if it's Johnny Cash singing a Beyoncé song or some other use like that?

Right. I think this is an open question. It'll be very difficult, I think, to prove that a script for an ad was AI-generated, even if it was, or even if the impetus was AI-generated. Say I ask ChatGPT, write me an ad that talks about my advocacy for reproductive rights and low taxes, and it gives me three scripts. I like one of them, and I tweak it a little bit to put it in my voice. Have I used AI? Yes. Is the final copy AI? Well, not wholly. So do I have to disclose? Not clear. It's probably something the courts would have to resolve.

Who is most likely to be susceptible to these kinds of advertising? Because we already know the vast majority of people have already made up their minds. Is it the late, low-information people who come in at the end, or is it seniors, or who's out there?

I think, in terms of believing content they haven't encountered before, low-information voters who are paying attention at the last minute are often susceptible to messages because the messages are new to them. They haven't been paying attention to the race, and these things are new. Sometimes, though, those of us who are really invested in politics can also be swayed. If the information hits us in our political sweet spot, if it's something good about our side or really bad about the other side, we might be more susceptible to believing that information even if it's false, and not do the work it takes to suss out whether it's true.

When it comes to the long history of dirty campaigning and all that, where does this fall?
Is this a new era of scariness and low-ball politics, or is this just fitting into the pattern of where we're going?

I think it's another tool to campaign in the way that political candidates in the United States have always campaigned. There have always been negative campaigns. There have always been campaigns that flirted with the truth. There have always been campaigns that go beyond flirting with the truth and just start saying things that aren't true. AI is a tool to help with that. I'm a little more worried about AI-generated deepfakes, where you have Kamala Harris or Donald Trump saying something they did not say, but it looks like they did. That's a little more worrisome than AI generating content or making a crowd seem a little bigger. Those are problems, but they're not the same as putting words in someone's mouth and leading a lot of people to draw a conclusion about someone from a false pretense.

Where is the line when it comes to someone who's not part of a campaign, or maybe is part of a campaign, putting out an ad that's designed to influence but also comes across as maybe a parody: it was clear I was making a joke?

That's really difficult. The Supreme Court has protected our right to engage in satire and parody, and I don't know that the Court imagined we might be able to generate content where it looks like the actual person you're parodying is saying the words. It's a little bit new. I think people who do that are on the safest ground if they disclose prominently that this is parody or satire or something like that. But sometimes people will read something in The Onion and get really upset, even though, of course, The Onion is America's finest fake news source. It's a satirical source, and sometimes people who aren't familiar with it will see it and think it's true, and that will certainly happen with AI-generated images and content as well.
And mentioning The Onion brings us to, say, The Epoch Times or the Wisconsin Independent, both of which are being mailed to people's homes without them asking for them, and The Epoch Times had booths at both conventions. Where do those kinds of outlets fit into this conversation?

They're trying to generate conversations about issues they care about and help candidates they prefer, and that's really no different than any other kind of campaign strategy. They're trying to help their side and hurt the other side. A lot of these sources aren't news sources. They're sharing things that don't go through a rigorous fact check. They don't correct mistakes. They don't punish reporters who make errors. They don't seem to try to be fair to both sides. Now, there are ranges of these kinds of things. I would say Fox News is a little different in that they have different kinds of programming, so Pete Buttigieg, a spokesperson for the Biden administration and a cabinet member, will go on Fox News and answer questions. That's not happening so much with The Epoch Times. Although there are lots of hosts on the opinion-programming side of Fox News, in their prime-time, most-watched shows, where lots of things are said that don't pass the test of verifiable truth.

And in terms of the public and how they're influenced, I mean, Fox News kind of filtering over to The Epoch Times: Fox News clearly has a news division where they do traditional journalism, and then they have their prime-time shows.

That's right, and a lot of people can't tell the difference.

That's right.
It's akin to a newspaper, where newspapers clearly say, this is our opinion page, and some people will still read them and think, oh, this newspaper is so biased, they're sharing their opinions, when that is the purpose of that page. And the purpose of Fox programming at 7 p.m. in the Central time zone is to persuade us, not to inform us.

How bad is it going to be for ads this fall? Do you track those? You've done research on some of these things. What does this look like?

I think we're going to have a historic number of ads. We'll probably have a historic number of negative ads. We're seeing both sides start to devote more resources to social media advertising, which is a little more of the Wild West in terms of political advertising. But the real gray area is when someone else does something using AI, making stuff up, trying to share things that aren't true, and then a candidate picks it up and shares it. It's not an advertisement, but it's still the candidate endorsing content that is false. That's where we're in the most trouble, and where we need journalists the most to help us sort through what's true and what's not.

All right. Anything else you want to add?

I think that covers it.

Great. Thank you.

Thank you very much. I really appreciate it.

Yeah, you bet.