I was recently asked to discuss my thoughts on AI and teaching. Below are the prompts and my responses. The story as it ran in the Yale Daily News (October 3, 2025) is linked here: https://yaledailynews.com/blog/2025/10/03/the-height-of-self-sabotage-computer-science-professors-bash-ai-use/
1. What are your policies on AI use in your classroom? Were there any events or realizations that influenced you to create or shift policy?
My classroom policy on AI, summarized, is:
Use AI with caution during your time at university. When it comes to your studies, you should be greedily using your four years to build your brain's critical thinking abilities - that is what will truly last you a lifetime! Be cautious of the marketing pitch around current AI technology, and ask yourself whether your use of AI would be taking away a valuable opportunity to train your brain. Remember, you're ultimately trading four years of your time to be here and to learn. Using AI to complete a problem set (PSET) is a bad idea; using AI to search for which book, article, or example to learn a concept from, however, may be a good use of AI tools. And as a reminder, don't underestimate the social and technical skills you build by simply asking a professor, ULA, or fellow student a question and having a conversation. It's one of the major advantages of being at university!
I've had two realizations that have influenced me:
2. What AI policies by the department or specific CS professors do you think have been successful? Do you wish the computer science department provided more (or less) concrete guidelines on AI use across the department?
Each class is different, so it is probably best to let the professor be the expert for their class and set an appropriate policy. I understand that if we end up with many different policies, it could be difficult for students to keep track of them - so this is something that, unfortunately, may take time to settle.
3. Do you use AI in research and/or teaching? If so, how?
Outside of a spellchecker, I currently do not use AI in research or teaching. As an example, I currently discourage my research/senior thesis students from using AI for any writing, for two reasons: 1.) Some AI tools do not cite precisely where their information comes from (i.e., plagiarism is the issue). 2.) Learning how to write and edit your thoughts over time is one of the best ways to crystallize your thoughts and understanding of a topic - again, no shortcuts in learning, and writing is a good way to really cement your understanding of a topic! So when it comes to writing, I think this is one area where I am unlikely to touch an AI tool beyond a spelling/grammar/format checker in the future.
4. How do you think the popularization of AI has changed teaching, researching, and learning computer science at universities? Do you think it’s a net positive?
The icky thing about AI for me personally is when I ask the questions: "Where does the content come from?", "Can we trust the data sources?", "Will my life's work be vacuumed up by an algorithm in a millisecond and repackaged as someone else's - and how should I feel about that?" Whether you are a programmer, artist, or content creator, it's motivationally a net negative knowing that all the free work I have distributed over my life may be claimed by someone else without regard to any copyright law, or at least an acknowledgment. There are other issues regarding the energy consumption and sustainability of current AI technology that have further implications for our planet as a collective - I suspect improvements can be made here. I don't want you to personally feel icky if you've used AI (every Google search seems to produce AI results now anyway), but for me there remains a great tension in the ethics of how AI models are trained, and how they collect and redistribute their results.
But to answer your original question more crisply: has the popularization of AI changed research, teaching, and learning? Probably not that much in practice thus far - the teaching and research advances the world needs will still happen incrementally through the scientific process, even with the existence of AI tools. Current AI models will likely continue to improve while training on the accumulated corpus of human knowledge up to this point. But then you may ask: what happens when all of that data is processed? We'll still have to continue to interpret and process the new data (i.e., with a human-in-the-loop) to incrementally improve teaching, research, and learning. As an example, 10 years ago program synthesis was really exciting and promising, doing for program generation what LLMs are doing now - those program synthesis techniques, which are now less popular (and perhaps less scalable), did take 5-10 years to develop. I suspect we will get the next big advancement after LLMs in another 5-10 years, whatever it turns out to be. I don't think we're there today, but that of course does not mean we should stop trying to innovate - with or without AI. Teaching, learning, and research are incremental. This is why it's incredibly important to continuously fund science. I know Hollywood likes to portray it otherwise (and who doesn't love an inspiring movie), but unfortunately miracle ideas don't appear out of thin air as often depicted in the movies - at least not for me so far.
5. Do you have anything else you want to share with our readers?
Betting against new technology (AI or otherwise) over a long enough time period would probably be silly of me to do in writing. Look, you have to keep an open mind, and change is part of living in our world (scientifically, culturally, etc.). An AI tutor or better search engine that meets students at their skill level, when they need to ask a question at 1am, is useful and perhaps even more equitable. This is a similar advancement to the way the internet at our fingertips provides a more efficient and accessible way to reach information than our parents' generation had, when they drove to the nearest library for it. We still have some tough questions to answer about how AI tools should be used, and how to use them responsibly as they improve - especially in the academic world. We will figure out some best practices. But when it comes to yourself and AI right now, I'll leave you with a question: can AI help you run a marathon?
These were my raw responses. Perhaps you'll agree, perhaps you'll disagree, or perhaps I'll be proven totally wrong in the future.