Okay, so we have it for the camera. Could you introduce yourself and tell us what you do?

Sure. Eric Nisbet, professor of communication and policy analysis at Northwestern University.

How would you describe the current state of artificial intelligence and misinformation as it relates to campaigns and elections?

I think it's still evolving. We're in the early stages. I think 2024 will be a proving ground for how generative AI will be used in campaigns, whether by the campaigns themselves or by bad actors spreading false or misleading images and videos.

What should people be on the lookout for if they're online or elsewhere and think they may be seeing something that's fake and not real?

Really, it's about asking whether it comes from a trusted source or not. So verify the source of that video. Is it coming from a verified influencer you know and trust? Is it coming from a mainstream media outlet, for example? Is it coming from a partisan outlet that might be biased? Think about the biases, and verify who is actually spreading that video or image. That gives you a basic sense of whether to take it at face value or cross-check it. One of the great things we have online is the ability to cross-check with other sources. What are other sources saying about this video or image? That way you get a broader view of whether it's accurate or not.

Who's most vulnerable to, or most targeted by, these fakes?

Well, the research shows that it tends to be older Americans, who may not have as much media or new-media literacy when it comes to this type of information. So part of what we can do as citizens is peer-to-peer fact-checking. If you see someone spreading a video or image that you think might not be authentic, might be fake, it's okay to challenge that in a constructive way. Fact-check it yourself, or maybe suggest looking at other sources. Do that for yourself and for others. Don't rely on the media to fact-check it. In our own social networks online, we can help each other make sure we have true and accurate information.

What might regulation on this look like?

When it comes to regulating social media around fake or misleading information, we really don't want to focus on the content. I think we need to focus on transparency. Social media companies have been cutting back on allowing researchers and other accountability institutions to track what is in their algorithms and on their platforms. They've cut access on X. They've downgraded tools that researchers use to track false and misleading information, sometimes using privacy rhetoric: they say they want to keep their users' information private, but they're really using that as an excuse to hide how much misinformation might be on their platforms from the outside parties who could serve as an accountability mechanism. So rather than regulating speech, I really think we need to regulate transparency.

Where do you see this going in the future?

Well, right now, with the polarization in Congress, I don't see any regulation moving forward. It's going to be up to individual states, individual citizens, and civil society to address this problem. So it's really up to us, individually and collectively, to address false and misleading information online, and extremist or violent content that might threaten election workers or politicians.
Really, we need to hold social media companies, our politicians, our political leaders, and the news media accountable across the board to make sure we have greater information integrity, not only in this election but in elections moving forward.

Thank you so much. Thank you.