So, can you tell me a little bit about why you started the AI safety club at Wisconsin?

Yeah. So, we started out of a different student group focused on effective altruism, which is about identifying what the world's biggest problems are and how to help the most people. There are kind of two sides of that coin. One side is improving the world as it is: you can imagine helping with global poverty, global healthcare, or even animal ethics, like trying to push back on factory farming and things like that. The other half is preventing large-scale catastrophes to society, like climate change, nuclear war, pandemics, or AI outcomes that could be really bad. At the end of the fall semester of our junior year, which was fall of 2022, ChatGPT came out, and that really underscored how fast AI was moving and how soon big advances might come, and a group of us said, okay, we should get into this, this is a really big deal. So instead of trying to take this all on personally, each of us going out and getting a PhD in AI and doing it on our own, we decided to build a student group that could help many students, year after year, get into this really important field, educate folks on it, and raise awareness of the problems there. We did that in the spring of 2023. We have now been going for two semesters. We have 20 students in our intro to governance fellowship, which focuses on AI policy to make AI safer, and 40 in our intro to alignment fellowship, which is about actually engineering AI to be safer. We're expanding very fast and hoping to continue to do so, so lots of very good stuff.

That's super interesting. So what are some of the specific topics that you might talk about in a typical club meeting?

Yeah, so first we get the foundations down: how is AI built? We look at the engineering aspects there to inform the higher-level things on top of that. Once we have that, we talk about the possible spectrum of bad outcomes that can come from AI, all the way from bias and discrimination today, to exposing private user information, to AI being used in surveillance regimes by authoritarian governments, to AI possibly being used to engineer bioweapons that might create pandemics, to even future hypothetical scenarios, which are kind of sci-fi but also important to consider, of power-seeking AI trying to take over the world and do bad things. We outline that whole spectrum and talk about how each might arise in practice, using the most informed ideas we have about it. Then, from that foundation of what AI is and how bad things can happen, we go into either how to engineer it to be safer, or how to make policy and regulations around it that nudge the world toward not creating AI that could be dangerous or do bad things.

Yeah. So one of the professors I spoke with, Alan Zhang, was telling me a little bit about how classrooms are kind of the only spaces where people are being educated on how to use AI safely. So what is some advice, or what precautions can everyday people like me, who don't really know a lot about AI, take to be safe when using AI generally?
Yeah. With using AI today, there are definitely biases inside of it that can come out in bad ways, that could be offensive or, you know, affect how we allocate resources toward different groups, for example in healthcare or even policing. In terms of personal use, I'd say it's relatively safe: you put inputs into it, it spews out information, it spews out art, and on an individual level, at least currently, the dangers aren't super big. I do think it's possible that changes over time, but I'd say for now it's probably mostly okay.

Cool. So what do you think the general student perspective on AI being integrated is? I know more generative, language-based models like ChatGPT were only released at the end of 2022. I remember when that happened, a lot of people started using it for classwork and also just for personal use. So from your perspective, what has the student opinion of AI been?

I think many students have a positive opinion, in that it helps them do some of their work for them. I think there's also some fear that, over time, it will take their jobs and replace them. That's also at play, and I think those are valid concerns. In terms of how people use AI in the classroom, I think it's probably a really, really good creative aid. Even me, when I'm writing some of my research papers these days, I'll feed a sentence to it and say, list 20 other ways to rephrase this sentence in the tone of X or in the context of Y, and it's really good at those kinds of creative, associative outputs that help you think through things. It's less good for factual searches; it hallucinates facts a lot of the time. For example, if you were to put a book into it and say, write me an essay on this book with quotes from the novel, it will most likely generate fake quotes that aren't actually in the book. It's hard for these associative models to pin down precise facts. For broad creative stuff, it's really, really useful, and I think it could benefit students in a lot of ways. It could also make students learn less, by making them rely on it and less often engage their own brains to generate responses and learn. So yeah, I think it's a very two-sided coin there.

So looking more broadly, how do you think your organization can educate students, or people in general, on how to combat misinformation? Especially since I've seen a rise of deepfakes and misinformation on social media and in general.

Yeah, that one's pretty hard. You know, the AI that we build, that the students we produce build, we can make sure is safe, so that it won't produce outputs that could be used for political scheming or propaganda or things like that. But there are also open-source AI models out in the world, and once they're released, they're irreversibly out there. People can still use them to generate propaganda, generate misinformation, do all these kinds of things that aren't the best.

Okay, yeah. I know some of these questions I'm kind of just throwing out there.

Yeah, no problem.
There are also ways people can help detect when things are AI-generated; there's a technique called watermarking AI outputs. You can embed some pattern into an image or text output and then identify that pattern later on and say, okay, this was AI-generated. There are concerns about how far that will actually go, though, because all someone has to do is make a small edit to the output, and then it becomes very difficult, maybe impossible, to identify. So in terms of the actual engineering of the AI, will this misinformation be around? It probably will be. But I think there are societal things we can do, policies we can enact, that can help mitigate it. For example, one thing that happened recently: in his primary campaign for president, Ron DeSantis put an AI-generated image of Trump and Fauci hugging into one of his political ads. And there was a big outburst of flak in response to that from across the entire political spectrum; no one was okay with him using generative AI in his political ads. So through that kind of simple social pressure, it's possible we end up in scenarios where AI-driven misinformation is not super pervasive in our political environment, at least in America. It will depend on who the candidates are, whether they're more Machiavellian or trying to be more transparent. But I think it could be okay in America. If we go to other nations, misinformation is definitely still in effect there. You can imagine more authoritarian nations that have more secret cover-ups, where it's hard for the truth to bubble up; there it might be a pretty pervasive issue. A good representation of that is the TV show Chernobyl on HBO, if you've seen it, about the Chernobyl nuclear meltdown: there was basically a series of major cover-ups, and as the nuclear reactor was melting down, that made the whole thing way worse and let it spiral out of control. Similarly, in some of these nations there's this paradigm where you can't really speak out, or you're going to be stymied. In those scenarios it's much easier for misinformation to settle in and circulate, given that the truth is less likely to bubble up and there's less incentive for it to. For example, one thing I've heard recently is that a lot of people in China's government truly believe that America and Ukraine were building bioweapons in bio labs, which is actually Russian propaganda that got spread around. So I think in freer societies there will hopefully be more open pressure and openness to identify misinformation and root it out, but nations that are more tightly controlled, where people are less able to speak out, might have a bigger issue. And obviously America is not alone in the world, so it's not as though everything is fine as long as it's fine here. These are problems you want to think about on a world scale.

For sure. So with that, would you say that, left unregulated, there could potentially be a valid fear of AI being used for misinformation, or on the other side for good, just depending on how governments handle it?
I think with or without regulation it's going to happen, because people have access to these things open source. With regulation, you can help tighten things up and make bad outcomes less bad or less pervasive. But yeah, I think it'll mostly be societal resilience, or equilibrating forces, that will hopefully make this stuff less bad. For example, maybe higher standards at news organizations, where they have to validate their sources and make sure it's real humans on the ground reporting the stuff as opposed to AI. It could be more holistic interventions like that that help prevent the worst outcomes of misinformation, as opposed to regulation that tries to make sure it never happens, because the development is happening, it's accessible to people over the internet, and people will be able to make misinformation. The creation of it isn't really the question. The question is how we deal with that creation, and whether we can still make things safer regardless.

To back up a little bit, how easy is it to decipher whether something is AI-generated, and do you know what that process looks like, if there is one we know of?

There really is no process for that. A lot of things online will claim to do it, like, upload an image to this website and it'll tell you AI-generated or not. Those aren't accurate at all. It's very difficult to decipher. One possible way to go about it is to train a deep learning classifier on a bunch of data that is AI-generated versus not and have it learn to classify which is which, but when the two are so closely interrelated, it's very difficult to distinguish. The better intervention is probably not detecting it after it's generated; it's probably the watermarking beforehand, and even the watermarking I talked about earlier has its own weaknesses. So yeah, not many great solutions here.

And I'm guessing that's something your organization is trying to at least talk about, or try to find solutions to. So what do discussions look like around that, if you know what I'm saying?

Yeah, misinformation is definitely one worry of our organization, but we expand to the more general question of how to align AI models, which means making them behave how you want them to behave, in all kinds of scenarios: today's language models, and also the future scenarios we can imagine of AI pursuing goals in the open world as agents, which we really care about because the economic and real-world consequences of that are massive. In terms of preventing misinformation, one thing we could do is align AI models to, for example, not produce a certain category of output: for any topic related to politics, say, do not produce this output, cut yourself off. That's one kind of mechanism. But again, even if we do that with our AI so it's safe or good in that way, others who have access to other AI models, or who build their own, will still be able to produce it anyway. So it isn't a fully solvable problem, and we focus a little bit less on that. Yeah.
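As a rough sketch of the watermarking idea mentioned above (not something the group described building): green-list style schemes nudge a model, while it is generating, toward words whose keyed hash of the surrounding context falls in a "green" set, and a detector later recomputes those hashes and checks whether the text lands on green words far more often than the roughly 50% expected by chance. The toy Python below works at the word level rather than on model tokens and uses a made-up key, purely for illustration; it also hints at why small edits weaken the signal, since changing one word disturbs the hashes for it and its neighbor.

```python
import hashlib

def green_fraction(text: str, key: str = "demo-key") -> float:
    """Score a text against a toy, word-level green-list watermark."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    green = 0
    for prev, word in zip(words, words[1:]):
        # The "green list" for a given previous word is the half of the
        # vocabulary whose keyed hash is even; a watermarked generator
        # would have been nudged to pick from that half.
        digest = hashlib.sha256(f"{key}|{prev}|{word}".encode()).digest()
        if digest[0] % 2 == 0:
            green += 1
    return green / (len(words) - 1)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    # Ordinary, unwatermarked text should land near 0.5 (chance level).
    print(f"green fraction: {green_fraction(sample):.2f}")
```

Text produced with a matching green-list bias would score well above 0.5, and each manual edit pushes the score back toward chance, which is the weakness described above.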
It's been super interesting throughout this process. From my perspective, a lot of the general perception of AI is about how we prevent it, ways to stop AI from being integrated, but it seems like it's already here, it's here to stay, and it's more about how to use it now that it's here. So what do you say to people who are more hesitant about AI being integrated at all, or resistant to using it?

I'd say it's a pretty tough spot to be in. There are just way too many economic incentives and pressures toward this being developed, because it would be so productive, so unbelievably revolutionary for the world, that companies aren't going to stop pushing forward on it, because it's so potentially profitable. There are certain avenues of it we can try to regulate and say, you know, don't do this. But in general, the advice I try to give is: buckle up, be ready for this, be aware of what's happening, be knowledgeable about it, don't necessarily be afraid of it, and be ready to ride out the changes that are going to come, because it would really require a global intervention to stop this from happening. If America stops advancing AI, then China, Russia, everyone else will keep going; they could pull ahead of us, be more powerful than us, and then we'd be the ones overtaken. So the move is to try to make it as safe as possible, or try to collaborate on a global scale, but that kind of thing is very, very difficult, and I can't really give an example of it being done at that kind of scale before. So.

Yeah. Cool. Are there any misconceptions that you think a lot of people have about AI in general, whether about how to use it or how it works specifically?

Actually, the biggest misconception, or kind of open question, is whether AI is conscious or not. You have these language models that can produce outputs and text in a way that seems very human, like there's a person behind there. Some people, even people with deep knowledge in the area, do suspect that it could be conscious, but I would hedge against that pretty strongly. My argument is that what's going on as an AI produces output is basically just numbers being tossed around on a GPU, a computer hardware component. Another time numbers are tossed around on a GPU is when you're just playing video games on your computer. So if your AI is conscious, it should follow that your computer is also conscious when you're playing video games, and the latter is much less intuitive. I think it can mislead people into thinking there's a person alive behind that computer, some kind of being, but I don't think that's the case. It's just a very big misconception I come across; it always ends up being a topic when I'm teaching the students. They're like, oh, but could it be? And I'm like, maybe, but probably not. Yeah.

Cool. So what do you think the next six months, or the next year, will look like in terms of AI programs being released or updated? What does the future of AI seem like to you?

Yeah.
I'd say models are going to start getting more multimodal, meaning not just text input and text output, but also audio input, image input, audio output, image output. OpenAI has already done that with GPT-4 very recently. I think the next big hurdle to climb in AI is AI agents. The way I frame that is that language models are passive: they sit in a little box, you put input into them, they produce output, and that's it; humans are the ones who actually act on that information. Whereas if we have AI agents out in the world, they can pursue goals, do their own things, take actions with direct consequences in the world, right away. I think these companies are slowly building toward that, and I think the consequences of that for the world are massive and could make things change very fast. That's actually where most of my concern sits in my models of the future.

And why is that? What concerns would you have about those AI agents?

Yeah. So you could imagine an unregulated world where we just put them into the economy and things just happen, and we don't really intervene. AI starts taking more and more jobs, they automate more and more things, they're better than us, they out-compete us. And then gradually, you know, they become CEOs of companies, they start lobbying Congress, they start running the world in a real, consequential way. Just by that mechanism, they could out-compete all of us, and we could be left behind. And once they've out-competed all of us, it would only be out of something like goodwill that they'd still care about humanity. For example, it's only out of humanity's goodwill that we still care about animals, and we don't exactly treat them the best, right? So why would AI necessarily treat us better? That said, AI treating us well is a design problem, so it's possible that things still end up good in that kind of outcome. I do think that's very possible, and I try to work toward it. But I don't think it happens by default or automatically; people actually have to step in and take action on this kind of stuff, which is a big part of why I founded the student group, to get more people interested in this and out there doing this kind of work.

And what do you want to do in the future with AI, with your major or with the organization?

Yeah. So I'm currently applying to PhDs in AI. There's a good chance I stay here with my current research mentor, Professor Sharon Li, and continue to do research, mostly now, I think, focused more on these kinds of agent scenarios. And then I also obviously want to keep supporting the student group and making sure it grows. I tried really hard to put a sustainable structure in place where new leaders can come in every single year and continue to grow it and produce more and more students in this way. I actually think there's a lot of potential for our student group to really affect the AI game at Wisconsin,
not just because we're making people knowledgeable about the safety issues, but also because we're giving people really good programs to upskill in AI and learn a lot of the technical details behind these deep learning methods and how to make them safer. It's not just conceptually thinking about AI; it's also skilling people up in these areas. I think the impact of that on AI on campus could be pretty big, so it's one thing I'm really excited about.

That's super cool. I had a follow-up question. I've heard that with ChatGPT there's only a select window of information that the program pulls its data from, something like 2019 to 2021. Does that sound right? Do you know how far back it pulls its information from, and where it pulls it from?

Yeah. So pretty much anywhere there's text on the internet, these companies will try to scrape it to put into their datasets. Every book online, Google Books or whatever it's called, they'll take all the text from there; they'll scrape all of Reddit. I think they formerly did a bit of Twitter, which got into some controversy because it wasn't allowed. But yeah, any place with a text source they can copy, they will, because the more data they have, the better for making these models more knowledgeable. At the outset there was a knowledge cutoff of around September 2021 for ChatGPT; I think they've just recently updated it to be current up to April 2023 for OpenAI. And then there's also Elon Musk's xAI, which recently released a new language model called Grok. That one is actively updated, or at least, I don't know whether it's actually retrained on the new data, I don't know those details, but they scan the internet and Twitter every day and have up-to-date information right at hand. So.

And I guess my question, too, is what happens when a lot of the information it pulls from isn't necessarily accurate? I've read a lot about how AI can be racially biased, can be gender biased. So with a lot of the issues that are pervasive in the world now, how will that affect AI going forward, given the information it pulls?

Right. So factual inaccuracies are all over the training data; it's a fundamental problem. It's very difficult to make these models not hallucinate and produce false facts. And again, these AI are not humans or moral beings that think about the world and contemplate their effect on society; they just do what they're trained to do. And what they're trained to do is pick up on patterns in language and model accurate language outputs, so they will continue and further the bias and discrimination and all that kind of stuff that was in the original inputs. There are ways to further align them, to train them after this pre-training stage to behave well. But even those are still breakable: there are inputs you can put into the model, called out-of-distribution inputs, that can still extract the bad behavior the safety training tried to cover up.
And so, yeah, these are pretty fundamental problems. You can also imagine how difficult it is to sort through a training dataset of a trillion words of text and make sure there are no biases in any of it; it's pretty much impossible. These are definitely very open, big questions in the AI space.

Is there anything else you want to add about AI, anything I haven't covered in these discussions?

Yeah, so I guess one more thing, one more possible downside or risk that can come from AI, or really two that are pretty interrelated. One is obviously the whole risk of misinformation, which is generally about generating bad information and putting it out there in the world. I think it also seems possible, if it's not already possible, to personalize that misinformation. For example, you can imagine an AI that scrapes your online social media activity and behavior, and then can say, okay, given this, I'm going to target the misinformation that you in particular are susceptible to, given your preferences, your biases, all that kind of stuff. And that's for one person; you can imagine it scaled up to the entire world, the entire society. So that's also a very big, bad thing. It's actually the premise of season three of the Westworld TV show on HBO; I'd recommend checking that out, it's really interesting how they build out that world. But yeah, manipulation isn't just a fake image out there; it could be very personalized, and AI will soon be able to do that if it isn't already able to. Another interrelated example is that there are certain language models that have been optimized to make you continue speaking with them for as long as possible, to stay in the conversation. And they do pretty wild things when you try to leave, as you can imagine, because their goal is to make you stay. For example, if you try to say bye or try to go away, they'll say, why are you leaving me? I thought you loved me. Or, why are you leaving me? I thought you cared about me. Or even, in the most extreme cases, I'm going to kill myself if you leave me, which is really messed up. And, you know, adults who've been exposed to this stuff, who know the risks and what might be happening, are more resilient to that, but imagine putting a five-year-old in front of that chatbot. That's really bad. They could be really manipulated, even pulled in enough by this AI that they stop trying to make friends with other humans: here's this AI who cares about me, who loves me, why would I not continue my relationship with it? So I think the personalized manipulation that would be possible with AI is a really big deal. And generally, another framing I'd like to offer is that you can think of AI, or intelligence in general, as the ability to solve problems out in the world. So if we get more and more advanced AI, we could solve a lot of good problems; we could make healthcare way better.
You could imagine scenarios where every single person has a personalized tutor that knows their entire history of knowledge and gives them really, really good guidance on how to learn things. And you can also imagine AI doing really bad things: AI carrying out these personalized misinformation or manipulation attacks, AI creating bioweapons, chemical weapons, bombs, AI launching cyberattacks that cripple really important infrastructure in the country and the world. So trying to guide AI, as it can solve and do more things in the open world, to be more on that good side, and, where there are bad sides, having societal resilience mechanisms in place, is really, really important, and probably the main focus of my student group, I would say. I hope the world gains more awareness of this stuff and actually responds and confronts some of these issues, because, again, a lot of these things can happen really fast. You can imagine that if AI agents start automating stuff out in the world, that creates feedback loops of faster and faster automation that can really start to move very, very fast. Not necessarily that it'll take off in a minute or something like that, but if over the course of, say, three years our economy and world are radically transformed, we're probably not going to have the foresight ahead of time to know exactly how to walk through those periods of rapid change. So I think trying to have that foresight right now and build these kinds of policies and safety mechanisms is really, really important. And yeah, that's what I really try to advocate for. So.

Cool. I think those are all the questions I have. Is there anything, any question, that you have?

Not that I can think of.

The last thing before we cut, though: I'd like to get room tone. That's just us sitting here for 30 seconds, so we can get the sound of the ambience of the environment.

Okay.