Okay. Perfect. We're good to go. Awesome. Thank you so much for speaking with me. Of course. I guess my first question is: what are teachers' reactions to AI being released? ChatGPT was only released a year ago. So as a professor, how is it being integrated into the classroom, and what are teachers' takes on it?

As for teachers' takes on it, I think we're all over the place, and rightfully so in some respects, because things are going to vary from instructor to instructor. They're going to vary from class to class, department to department, institution to institution. And it's all based on the different learning objectives we have when we encounter our students. What I'm trying to do in my class might be very different from what somebody's trying to do in a freshman composition class or in a foreign language learning class. And because of that, people have very different opinions about the role that the technology should have in their own particular classroom. Sometimes those are really extreme opinions of outright banning it or outright allowing it. I think most people are more in the middle, trying to think about what their learning objectives are and how they can accommodate this technology responsibly within their current practices. And the real challenge is at an institutional level: trying to craft a coherent policy across different courses, instructors, departments, colleges, and so forth that is rigorous and has clear standards but also accommodates that kind of variation and diversity. And I think UW has done a really good job of that, actually. From my perspective, they've been more focused on general principles or guiding orientations versus hard and fast rigid rules. And that, I think, is good, because it allows instructors like myself a lot of latitude as to how we actually incorporate the technology, whilst also keeping us focused on its responsible uses.
So what's your stance on using? That was quick. Watch, it will start right up again. So what's your stance on using AI in the classroom, depending on the course?

Yeah, that's a great question, and it's something I've thought a lot about and talked a lot about with my students. I've tried to bring them into that thought process, and I've encouraged them to draft some of the policies for my particular course around AI. My ultimate goal for my class is to help them understand what the technology is, how it works, what it does well, and what it does poorly, so that they can make their own choices about how to use it responsibly. That's really my goal. I'm not trying to steer them toward not using it or toward using it. I just want them to understand what it is and, again, how they can make informed choices about its use in school and beyond school as well. And that means really exploring it from a variety of different angles in the classroom and integrating it into all the typical writing tasks that we would do in any given semester. So overlaying the AI on top of the typical trajectory of a research project and asking: all right, topic invention, how would AI supplement this normal process? Research, how would it supplement that process? Macro structure, paragraph arrangement, argumentation? How about sentence-level grammatical fixes, syntax? How does it interface with each of these aspects of writing? What's it good at? What's it bad at? And how can we think about different ways of using it, again, both in school and moving forward?

So what is some feedback that you've heard from students about using AI on their projects or in the classroom? What have you heard there?

In general, we've been pretty much on the same page, which I think is great, in terms of what it does well and in terms of what it does not do well.
We have all found a lot of utility in ChatGPT for things like topic invention and for brainstorming more generally. When you're in that phase of writing where you're trying to think of an idea and you need a sounding board, it's pretty great for that. It's a chatbot, so you can kind of talk to it, and there are ways of having a structured conversation that allow you to talk yourself into a workable topic for a class like mine. I've had a lot of people come up to me and say that was very useful, that it allowed them to find a topic or got them on the right path toward a topic. Pretty consistently, though, at the same time, we agree on what it doesn't do well, which is to produce, in an instant, a final-stage piece of writing. If you tell it, "Just write me a proposal," it's not going to write you a good proposal. It's going to be immediately identifiable as generatively written. And above and beyond that, it's not going to be anything anyone would want to read. It's not going to propose anything that's going to change anything in the world. And it's not going to help anybody build critical skills of any kind when used in that way. So we're still working out how to use it for certain kinds of things, and I'm open to that sort of experimentation, but I do think a consensus is emerging, which is that you don't want to use it to generate its own content. It's much more useful as a discursive partner or discussion partner, and/or as a tool for shaping your own existing writing. I think that as a class, we're zeroing in on that general consensus.

That's super interesting. So what does that look like going forward, do you think, in the near future, the next year, the next five years, for your class specifically, or for AI being integrated into an educational institution?

Yeah, that's a great question. I'll be honest, I'm no forecaster on this.
I don't know what's going to happen. And we talked a little bit beforehand about the potentially faddish nature of something like this. I am of the opinion that AI, sorry, that generative writing and AI-assisted writing is going to be really deeply embedded in most of our writing technologies moving forward. You can already see it popping up in things like Bard. It'll be in all the search engines. It'll be integrated into Microsoft Word very deeply, I think, very soon. This is just my view, but I can see a pathway to that. So what does that mean? That's the question. What does it mean in terms of writing instruction? What does it mean in terms of what we want people to get out of the writing process, in terms of their own development as thinkers, researchers, writers, editors, readers? Ultimately, I think it means that we need to, again, review our expectations on all those points, and really ask rigorously: what are we trying to do when we're teaching people how to write? Is it just about the final product, or is it about the process that leads up to the final product? Both of those things are important. And then we just have to think about how to facilitate those learning objectives using this new tool. That's going to take a lot of different forms, some of which are emerging now and some of which are very unforeseen.

So would you say you think of generative AI or ChatGPT as more of a tool, rather than something to actually create content or writing samples for students?

I do tend to think of it as a tool, which is a problematic metaphor in some senses, because every tool has its own affordances and limitations, and tools ask you to do certain things, right? But in this case, I think a tool is an appropriate metaphor.
It is one tool in a student's or a writer's or a reader's toolkit, and it is something that they can pull out on certain occasions to assist them in certain types of tasks. One of the things that I've really come to accept over years of teaching, and before that over ongoing years of writing and reading myself, is that everybody has their own relationship to reading and writing, and everyone has their own strengths and weaknesses on those points. For that reason, I think it's critical that we teach students to identify their strengths as writers and readers, identify their areas for improvement as writers and readers, and then figure out exactly which tools can help them supplement or augment those areas and which are to be avoided. And ChatGPT is just another one of those tools in the toolkit.

Absolutely. So what, if any, problems or hesitations do you or teachers or students feel about generative AI?

Yeah, that's another great question. One of the things that has really surprised me this semester is the number of students who seem genuinely opposed to using it. Genuinely opposed. And I believe them; I don't think they're just telling me that. I really believe that they are against it, for different reasons. Everyone that opposes ChatGPT in some way has their own reasons for it. Some of them are opposed for privacy reasons: they don't want to give their information to OpenAI, which I get. Some of them are opposed because, frankly, they have said very directly, "I want to earn this myself. I'm taking this class. I'm here. I have to be here. I want to gain these skills myself, and I don't feel right about offloading it to a machine."
Others are worried about plagiarism, or being accused of plagiarism, and others are just worried about the final quality of the writing and how it might impact their grade. So those are all concerns that students have raised to me directly, in support of their decision not to use the technology. Does that answer your question?

Absolutely. We talked about this earlier, too, that you work closely with AI on a weekly basis. Could you give me a quick example of what that might look like if a student comes up with a question, or how you might use AI in a project?

Yeah, I can give you two examples. I could give you a lot of examples, but I'll give you two. One example has to do with an in-class exercise that I devised called automated looping, and you might have heard me talk about this in the summer AI series. Automated looping is a topic invention exercise. Topic invention happens at the outset of a research project, when you're trying to think of something to do: a topic that you can spend X number of weeks on that's going to allow you to answer some kind of unanswered question. It's really important that you go through this process of defining a question, identifying a problem, and then molding it, shaping it, narrowing it, winnowing it into something that's very specific and workable in the context of a research paper. And that's a really tough process for a lot of people. It takes a lot of time, and it's a skill that you can learn. But the question is always: how can we speed up that process a little bit, so that students understand the narrowing process but it doesn't eat up four weeks of a 15-week semester?
So automated looping is one thing that has worked pretty well in that respect, and it's really a combination of established writing principles and established writing practices. One of these is free writing: the idea, in a writing course, that you're given a prompt and you just free write, meaning you put a pencil on a page or your hands on the keys, and for a set amount of time you just keep writing. The only rule is that you can't stop writing for that amount of time on that prompt. The idea is that at the end of the five minutes, or whatever it is, your hands will be cramped and you'll have a bunch of content on the page. Most of it's going to be rubbish, but some of it's going to be interesting and salvageable, right? So that's the first thing, the free writing aspect. There's another exercise called looping, which builds on free writing. In a looping exercise, you take what you've written in the first free write, you identify the salient points, those little diamonds in the rough, and you do another free write on those specific points that narrows them down. And then you just keep looping and looping, narrowing and narrowing, and eventually you're going to get to something that's actually concrete and usable and direct enough for a class like mine. In automated looping, instead of you yourself writing it all down and getting your hands cramped, you have ChatGPT do the work of the free write for you. You prompt it with a question, say, "What are some of the harms associated with climate change?" A broad question of that kind. And it'll give you a list: flooding here, extreme weather here, food insecurity here.
And then you pick the thing that really stands out to you as being, okay, that's kind of interesting to me, I'm interested in food insecurity, and that's an opportunity to loop. You take food insecurity and you add another prompt: "Tell me more about food insecurity." And then you're winnowing and winnowing through this series of interactions with the chatbot. Something like "What are the problems with climate change?" is not a topic that you could do for my class, because it's way too big; no one could ever write a paper on something so big, right? But if you keep looping in this way, and you keep having that back and forth with the technology, eventually you'll get to something like, "What are the effects of flooding on low-income residents in Miami?" or something like that. That's an interesting question, and you can think of specific ways of potentially answering it. And it's something you can do in class, right? As long as everyone's willing to sign up for a ChatGPT account, it's something you can do in class to get the ball rolling on topic development.

That's super interesting. Yeah, you mentioned some hesitancy earlier. Are some students so opposed to using the program that they won't use it at all? Have you come across that?

Yes, I have, and I've tried different ways around that. As UW students, we all have access to Bard, which is Google's generative writing program, so they could log in and use Bard instead. I also tried setting up one ChatGPT account for the whole class, but it turns out that if you have one account and 30 people sign in to it at one time, it goes all crazy and doesn't do what you want it to do, so that didn't really work. Learning process.
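[Editor's note: the automated looping exercise described above can be sketched as a simple prompt loop. This is a minimal illustration, not the instructor's actual materials; the `ask_model` function is a hypothetical stand-in, with canned answers, for whatever chatbot you use (ChatGPT, Bard, or an API client).]

```python
# Minimal sketch of "automated looping": start with a broad question,
# pick the most interesting item from the model's answer, and loop with
# a narrower follow-up until the topic is specific enough.

def ask_model(prompt: str) -> list[str]:
    """Hypothetical stand-in for a chatbot call that returns a list of
    subtopics. In practice this would query ChatGPT (or Bard) and parse
    the bulleted list it produces."""
    canned = {
        "What are some harms associated with climate change?": [
            "flooding", "extreme weather", "food insecurity"],
        "Tell me more about flooding.": [
            "coastal flooding in Miami", "river flooding", "flash floods"],
    }
    return canned.get(prompt, [])

def automated_looping(broad_question: str, pick, max_loops: int = 5) -> list[str]:
    """Run the loop: ask, pick a salient point, ask a narrower follow-up.
    `pick` represents the student's judgment: a function choosing one item."""
    trail = []  # the narrowing trail of chosen subtopics
    prompt = broad_question
    for _ in range(max_loops):
        options = ask_model(prompt)
        if not options:
            break  # specific enough, or the model has nothing more to add
        choice = pick(options)
        trail.append(choice)
        prompt = f"Tell me more about {choice}."
    return trail

# Example run: the "student" here always picks the first option.
trail = automated_looping(
    "What are some harms associated with climate change?",
    pick=lambda opts: opts[0])
print(trail)  # ['flooding', 'coastal flooding in Miami']
```

The key design point, as in the classroom version, is that the model only generates options; the narrowing judgment (the `pick` step) stays with the student.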
Yeah, are there any misconceptions you think people have in general about generative AI?

I think for writing, there are a couple of misconceptions. The first is that it produces good writing, which I don't think it does. It produces grammatically correct writing, which is not something to scoff at; that is important. But it has a house style. ChatGPT has a certain kind of voice to it, and it's not a particularly readable voice. It's very list-like. It forms perfectly organized paragraphs, but they don't really amount to anything readable; the result is never more than the sum of its parts. So that's the biggest misconception: that it produces good writing. It doesn't. It produces grammatically acceptable writing, and grammatically acceptable writing is never compelling reading, and what we're always looking for is compelling reading. Compelling reading means something that has a point of view, something that has something to say about the world, something with a recognizable voice leading readers through a complicated, or complex, thought-through argument. It can't do that. I could keep going, but I think that's where I'm at.

Yeah, absolutely. Is there anything else you want to add about generative AI in your experience, either as a professor or just in general, as someone who sees it?

Yes. Can I add two things? Can I assume you're going to chop this up somehow? Yes. One of your questions was about harms or problems associated with it. There have been a lot of discussions about some of the harms of generative AI, and I'm clued into all of those questions of ethics, inclusion, and misinformation. Those are all seriously real concerns, and things that we need to figure out how to navigate socially, as a general kind of national information economy or global information economy.
But the thing that I'm personally really concerned about is emissions from generative AI. Every time you ask it to write you something and you see the text scrolling through, being produced line by line, I just feel it chugging away and burning something into the air, right? We saw the impact of this kind of computational emissions with cryptocurrencies and blockchain, but if I'm right about the extent to which generative AI, and AI in general, is going to be integrated into basically every facet of our lives, we need to start thinking about how to responsibly manage the energy demands of that enterprise. And there just isn't a good answer for that right now, particularly at the training stage. I'm not feeling great about that.

Yeah, absolutely, that's super interesting. I never even considered that when it came to AI. I don't think many people think about emissions when using a program like ChatGPT, so that's super fascinating.

It's already emitting more than most small countries. And it's going to increase exponentially if, again, my view of things is anywhere in the ballpark of where it ends up. I've got one more thing. This is a little bit of a plug, so you can use it or not. I'll just say the college writing classroom, I think, is more important than ever. With the advent of ChatGPT, the college writing classroom is more important than ever, precisely because I think it's one of the only places where students will be taught how to responsibly use that technology. I don't know where else they'll be trained in this. I don't know where else they'll be taught how to read things critically, to identify the voice of an algorithmically written document. I don't know where else. You know what I'm going to say? You heard it here first.