Over the summer I posted a rant online (included below, after the jump), which circulated widely enough that my university invited me to take the con position in a debate on whether students should be encouraged to use AI in the classroom. What follows is what I wrote in response to that question. It is an attempt to think about what is lost when we automate the acts of reading and writing. I am not sure that what I wrote works, or that anyone will read it, but I decided to share it here as well.
My position is that so-called AI or Large Language Model (LLM) technologies such as ChatGPT should not be used for preparing writing assignments in college classes. There are multiple arguments one could make against such technologies. I am not going to address the ecological impact of AI, except to say in passing that it is substantial enough to have led companies like Google to reassess or scrap their goals for lowering carbon emissions. I am also not going to address the ethical and legal issues raised by the fact that all of these LLMs (and image-generating software) are trained on published and copyrighted works; those issues are best dealt with by people who have expertise in that area. Nor am I concerned with the products of these technologies, the texts, images, and conversations they can produce; I freely admit that they can be impressive as final products. What I am going to address is what I know, and what I worry about: what we lose when we automate or outsource reading and writing to technology. My concern is not with the product but with the process, the process of reading and writing as part of education.