If you want to, just go ahead and say your name and your position or title here at the University.

My name is Annette Zimmerman. I'm an assistant professor of philosophy and an affiliate professor of statistics.

So I guess my first question is: the use of and excitement around AI just seems to be constantly accelerating, with AI being incorporated more into schools and other sectors. As this is happening, what environmental impacts do you think AI could bring with it?

AI requires massive amounts of finite natural resources. Think just of data centers: in order to run a data center, it turns out we need resources to cool it, and that requires massive amounts of water. Water is one of our most valuable resources on this planet. And unfortunately, it turns out that in many cases companies prioritize their own needs over the collective need to have access to these resources in an equitable way. We see that more and more. There were recent cases of American companies buying land in various places in the global south, which is obviously cheaper than doing that over here in the U.S., and just using local resources there to power AI innovation that benefits American companies first, before those ostensible benefits reach everyone else in society.

Sure. So thinking about that, more data centers are popping up in Wisconsin and in the Midwest. We have Microsoft in Mount Pleasant, also making plans for Kenosha, and Meta is reportedly thinking about investing nearly a billion dollars into data centers in central Wisconsin. So my question is: why are those companies making a shift from outsourcing in other countries to the United States?

Yeah. In a way, it's a positive development to move these data centers to places in the world that are just naturally colder, because then, of course, fewer resources are required to keep them cool. On the other hand, we shouldn't be entirely naive about these developments. Obviously, the environmental concern still remains: if AI innovation requires massive amounts of resources, whether we take those resources here in Wisconsin or in other places of the world, that's still a moral and political problem. So it's not plausible to think, in my view, that merely moving data centers to a different location gets rid of all of the ethical and political problems. However, I think one important justice-related concern gets addressed when we try to internalize the environmental costs of AI innovation in this way, rather than expecting countries in the global south to simply bear the burdens of American companies innovating in the space without actually getting those benefits in return. Very often, people who work in data centers outside of the U.S. operate under very exploitative labor conditions. They're underpaid for the kind of work they do. And of course, the local population at large also doesn't benefit in a proportionate way from the fact that these companies are going down there. So in a way, it seems more intuitively fair to move these operations to colder locations, including in the U.S.
But again, we shouldn't be too optimistic about that necessarily being an ethically unobjectionable solution.

Sure. And what makes Wisconsin, or specifically the Great Lakes region, such a prime target and so attractive for these companies?

Well, I think there are multiple reasons. One reason is just the climate-related one: fewer resources are required to operate here in comparison to operating, say, in Texas. But there are other reasons too, and these are more political. AI innovation has been accompanied by a lot of anxiety about people losing jobs to automation, and this has triggered a lot of political conflict as well. My guess is that American companies hope to get a lot of people on their side politically by showing that their innovating doesn't necessarily mean that large swaths of the population will lose their jobs, but that instead those populations can keep being employed in jobs that don't require, say, a PhD, but that are still well-paid positions that are relatively secure, such as working in a data center or in other parts of the AI innovation pipeline. And so that's not an apolitical move. That is a move that calculates that the local population in the U.S. is very concerned about job losses coinciding with AI, and especially the loss of jobs that are not extremely highly technical or highly skilled, but that are nonetheless important. So those two reasons, I think, are the main drivers of this move to locate data centers here.

Sure. Would it be possible that the jobs these data centers provide would outweigh the jobs that AI automation is taking away? Is that something that could benefit these communities?

It's very hard to predict whether the benefits of moving operations here to the Great Lakes region will necessarily outweigh this risk of AI-related job loss. AI has shown that very few things are predictable. This is a highly heterogeneous, rapidly evolving field where things can change in a matter of days, not months. And so it seems very difficult to guarantee that X number of jobs will be secure just because some American companies are moving operations here. That being said, with the information we currently have, it looks like there's great promise in this idea of having more local operations here in the region, because this is a region in which people worry especially about automation-related job loss, much more so than in many other parts of the country. So right now, things might be more promising on that front. Certainly, it's better for the local population than a model of outsourcing all operations outside of the United States, and it might be better for everyone globally as well, given that whenever American companies go outside of the U.S., they are not under very stringent requirements to avoid exploitative labor practices.
So in a way, this could be a win-win, but given that the power of these corporate actors remains largely unchecked, it's impossible to predict whether they will be acting ethically as they embark on this move.

Yeah. Jumping back to what you were saying earlier, why do you think automation concerns are more prevalent here in Wisconsin than in other parts of the country?

Oh, right. Well, because here in the region, a lot of jobs previously had to do with manual labor and manufacturing. These are jobs that are crucial for the American economy, but that are particularly vulnerable to ongoing waves of technological innovation. And so the question that a lot of people locally, I think, were asking themselves was: do I now need to train in some completely different profession in order to have a secure source of income? That's a very real concern. I think it would be unreasonable to expect large parts of the population to suddenly become coders, software developers, and the like. Not everyone needs to have a computer science or data science PhD in order to have a stable income. And so moving data centers here to the region could be a good way of giving opportunities to people who maybe don't want to do a PhD, but who do have something to contribute to this process of innovating in the AI space in other ways.

Sure. That's something to think about, too: a lot of these data centers are popping up faster than people can keep track of, as someone I talked to at Clean Wisconsin put it. So what are the implications of so many data centers popping up, from Mount Pleasant to Kenosha, with Foxconn in the past, and using water resources in this time of uncharted territory, when there are very few regulations about what they can do?

Yeah, the question of regulation is arguably even more pressing in recent weeks and months, given that it's currently unclear whether America will in fact adopt comprehensive AI-sector regulation. There's a lot of disagreement in this policy space. One core problem is that the main actors in this space are just a handful of companies. These companies are so powerful that they have an oligopoly, right? So it's not a type of market where there are thousands of different actors competing with each other. It's just the top five, to put it crudely. And when we have just a handful of very powerful companies, it's actually extremely difficult to legally regulate that market effectively unless our elected public officials are willing to put guardrails on this oligopoly. Whether our public officials will be willing to do so, however, is exactly the open question right now. A lot of this depends on different political ideologies. It seems that right now big tech leaders have actually swung significantly toward the right end of the political spectrum, which tends to be rather hostile to too much government regulation. It would be foolish, though, to just view this as classic libertarianism as we saw it in previous iterations of technological innovation. Silicon Valley has always been known for being quite libertarian ideologically. What we're seeing now is something much more pronounced and much more politically active, where it's not just that big tech CEOs are telling elected officials, hands off, please don't regulate us. Instead, it's more like: we want to collaborate with public officials to end up with exactly the kind of minimal regulation that we want.
And this is why we see so many powerful tech leaders in active conversation and close collaboration with people who currently hold public office in this country. So that's a reason to remain skeptical that comprehensive AI policy and governance will be implemented rapidly enough and strongly enough across all of these different domains, including domains that pertain to environmental damage.

Sure. Do you think lawmakers have a responsibility to hold firm and make sure that these companies face regulation?

Well, I think all public officials have that responsibility. As a political philosopher, a lot of people in my field talk about how the duties one incurs when one holds public office are not just duties to fulfill the promises that your own platform, your own electorate, prefers. It's actually a much more comprehensive duty to protect and promote the interests of all those who are governed by you. And with that in mind, good AI policy and governance can't be partisan politics. It has to be responsive even to the interests of those who disagree with particular policies or with the political preferences of big tech CEOs. And that's a duty that only our elected officials can fulfill, because of course corporate actors are not incentivized to bear civic interests in mind.

Thinking about how much power these companies have: Microsoft has said it has plans to be water positive by 2030, but in 2022 its global water usage was 1.7 billion gallons, which was actually a 34% jump from the prior year. And the World Wildlife Fund has said that by this year, 2025, nearly two thirds of the world could face water shortages. So what are the ethics of these companies being able to choose who gets access to water, whether it's these data centers or everyday people, you know, farmers, everyday households?

One key problem in this area is that the people who currently have the power to decide how finite resources are allocated are actually just these oligopolistic corporate actors. That's because nobody has thought about intervening from the side of government to regulate this space ex ante. So what happens is that corporate actors go ahead and decide to deploy some new innovative tool and to create the infrastructure required for, say, training a powerful new AI model, including massive data centers everywhere. And of course other resources get used for that too; think just of the kinds of resources we need to create cables and chips. All of these technologies that we just imagine as being in the cloud are actually embodied things that require real, tangible materials. And so these decisions are largely made by whoever has the power to deploy some new innovative tool. We're dealing with a very asymmetrical distribution of power: a small number of people and companies who are able to decide, oh, we're now going to operate in this environment, and nobody really can argue against us, because we have very limited competition and almost no regulation. There's a regulatory and competitive vacuum around us. And so that seems very unjust from the perspective of political and moral philosophy. The first question that I think we should ask is: why is that power distributed so asymmetrically?
Normally, when we think about making decisions as a collective in a democracy, we think that everyone whose interests are affected by some decision also has a say in that decision. This whole dynamic that we see in the AI space completely turns that logic on its head. Now we have the few deciding that they have some beneficial idea that, of course, might benefit everyone down the line and thus promote their interests somehow. But of course, it'll benefit them first; it'll benefit that particular company first. And so that's a very unfair distribution of benefits and burdens that we normally don't accept in the same way in other domains of collective decision making. Applying these core democratic values would actually require rethinking exactly that asymmetry. It would require putting that initial decision-making power back into the hands of our elected officials, and of us as citizens, so that we can make an initial judgment call about where exactly we want to innovate, how exactly we want to innovate, what resources we're willing to dedicate, and what risks we're comfortable taking, rather than just waiting to see whatever new innovative jump some company decides to make next. So it's very dangerous from an ethics perspective to just wait and see. It's much better to decide early and often as a democratic collective.

Absolutely. So tell me a little bit more about Google's water usage in Uruguay. What happened there during the drought?

Yes, there was a disturbing case fairly recently where it turned out that Google had built data centers in Uruguay at a time when Uruguay's local population was experiencing the worst drought in the last 75 years, the worst ever experienced there. And so there were these really shocking images coming out of Uruguay of people protesting in the streets because they just didn't have access to clean water, while Google was innovating and developing its fancy AI tools using all of these local resources. That's the perfect example of how a company's unilateral decision to deploy its tool or build a data center in some place can have vastly unjust implications that threaten the core basic interests of people on the ground, not just some political preferences they might have, but really the kinds of interests that are necessary for basic human survival. And so that showcases just how morally and politically urgent this problem actually is.

Sure. Thinking ahead as more of these companies come into Wisconsin: that might be an extreme case, but could similar incidents happen in the Great Lakes region as these companies come in for the potable water here?

One thing I've learned about the AI space is never say never. These risks are always there. It's also important to appreciate that not all risks associated with AI innovation are properly understood at all. There's just a lot that we don't know, because we can't fully predict what the next wave of innovating in the generative AI space will look like. And because we can't predict that, we don't actually know what resources will be required to power that next wave.
And so even if we are pretty confident right now that the same thing that happened in Uruguay won't happen here in Wisconsin, it's not clear whether that will always remain true, because it might be that the next generation of powerful AI tools requires even more resources. That being said, things could also turn out very differently. Presumably innovation means finding less resource-intensive ways of powering technological progress. We just don't know. It could be that the next wave of AI is actually much more eco-friendly than the last one, and that would be good. It's just that we have to think about how we as a society can create the right incentives to actually get there. How can we as a society put the right incentives in place so that these powerful corporate actors pay attention to issues pertaining to environmental damage? Often there are not enough incentives in industry to do that. And so what we can do locally, as well as on a federal level, is to emphasize to our public officials that we as citizens or residents are concerned about these questions, and to really prioritize that when we set an agenda for ourselves as a democratic constituency.

Absolutely. So thinking about that, what questions do you think everyday people should be asking themselves about the ethics of AI, whether socially or environmentally?

I think the biggest question everyone should ask themselves is: do I need to automate this task in the first place? It's very easy for people to assume that they will increase the quality of whatever decision they're trying to make just by using AI, because it's common to think about AI as this all-powerful, highly objective, politically neutral tool. It's important to realize, though, that no technology is neutral and perfectly objective, and AI is not necessarily better than human reasoning in all cases. It really depends on what kinds of questions we're trying to answer and what sorts of tasks we're trying to solve. It looks like, in some cases, our human ability to appreciate complexity and to wrestle with ambiguity is actually what's required for success in a complex task. So it's good to outsource particular mundane tasks to AI, tasks that are easily computable, where we're fairly confident that we can delegate them. But any task that requires reflection, disagreement, or appreciating uncertainty and ambiguity is very hard to replicate just by automating it.

That's perfect. That was actually my last question. Is there anything else you wanted to add?

No, I think that was good.