I was asked to speak at an AI and Ethics panel at my university. What follows is the outline that I am using for my talk.
I will say at the outset that I am not sure the term ethics even fits when it comes to so-called AI. I say so-called because I take the name artificial intelligence to be tantamount to a branding exercise. It is not clear that anything we are seeing with ChatGPT and other large language models or image-generating programs really addresses anything that would be called intelligence in the broad, everyday use of the term. Nonetheless, what we are seeing with AI is a fundamental transformation of how information is produced, gathered, and circulated, one that has effects not only on the university but also on politics, culture, and even our individual psychic health and well-being. Ethics, as the term is conventionally used, refers either to individual behavior with respect to other individuals or communities (should I lie, cheat, steal, or even kill?) or, in a different sense, to the norms specific to a profession or activity, as in medical ethics or the ethics of journalism. It is not clear that either of these senses is adequate to such a fundamental transformation of how knowledge is produced. AI extends beyond the human world and is not limited to any one particular activity or profession.
There is a third sense of ethics that would refer to a general way of life, an ethos. It is this sense that animates Félix Guattari’s use of the term to argue that ethics can be understood as our relation to the processes and relations which sustain us, the various ecological milieus in which we live. Guattari outlines three ecologies. The first is what we generally understand by ecology, in other words, the environment, the natural world and its processes. The second ecology is that of the social world, the institutions and structures that sustain and regulate social existence. The third is what could be called an ecology of mind, to use Gregory Bateson’s phrase, the relations between our thoughts and ideas, emotions and actions. Of course, all of these ecologies overlap and intersect; the primary value of distinguishing between them is heuristic, making it possible to distinguish between the different effects of a process or relation.
How can we think of AI in relation to these three ecologies, natural, social, and psychic?
1) With respect to the natural world, many researchers have pointed to the enormous environmental toll of AI in terms of energy use, water consumption, and resource extraction.
a) A query to ChatGPT uses roughly ten times as much energy as a Google search. Moreover, the energy required to train a large language model is comparable to that of a trans-American flight. These factors have drastically increased the energy use of companies like Google, by as much as 50% in recent years, and have led many of the same companies to drastically scale back their commitments to reducing their reliance on carbon-intensive energy sources. Since data centers need consistent power at all times, there is a noticeable tendency for them to turn to fossil-fuel-intensive forms of power such as gas or coal rather than solar or wind, further increasing the carbon output of AI. Placed in the larger context of global warming, especially as the situation has worsened under the Trump administration, AI use is contributing to an unsustainable increase in energy consumption.
b) AI data centers also rely on a great deal of water for cooling, and this water has to be fresh, and often potable, in order to cool effectively. It is estimated that AI demand could consume 1.1 to 1.7 trillion gallons of fresh water globally per year by 2027, half the water consumed annually by the UK.
c) The environmental impact of AI is also not at all evenly distributed. Constructing its hardware involves mining resources such as copper and lithium, often by strip mining, under conditions that are both ecologically and socially devastating.
“AI is not only a matter of computation but a significant commitment of material resources.” Proponents of AI often argue that these commitments will be justified by the ability of the AI of the future to solve such problems as global warming. However, this posits that there might be some future solution to global warming that does not involve a reduction in the use of fossil fuels. Much of the thinking around AI promotion tends to overstate its potential; this is reflected even in the use of the word intelligence in its name. As Dan McQuillan writes, “The primary climate threat posed by AI is not the egregious use of energy to train the models but the idea that AI is key to ‘solving’ climate change.”
2) With respect to the social world, we can frame the social and economic impact of AI in terms of how it intersects with, and for the most part reinforces, existing hierarchies and inequalities, and in terms of how its products affect and shape the existing world of knowledge and information.
a) Generative AI relies on a great deal of data in order to be trained. This data is made up of written texts and images that were produced and accumulated through various different processes, with goals and objectives quite different from training language or visual models. This data then carries with it deeply sedimented biases with respect to what is worth saying or seeing, and to the way different people are discussed or envisioned. For example, Safiya Umoja Noble, in Algorithms of Oppression, has demonstrated that image searches often reproduce existing biases, showing, for instance, more sexualized images for “black girls” than for “white girls.” Data comes with bias, and AI companies, in their voracious demand for more and more data, have proven themselves more concerned with the quantity of information than with its quality with respect to social norms. As Karen Hao writes, “The share of pornographic images on the internet was so large that removing them shrank the dataset enough to notably degrade the model’s (DALL-E 3) performance. In particular, it made the model worse at generating faces of women and people of color…” What AI refers to as data is the product of labor, both the past labor that produced documents, images, and so on, and the present and invisible labor of training and monitoring AI. The first is not paid for; CEOs of AI corporations have made it abundantly clear that their entire business model depends on cheap or free data. The latter is outsourced to countries like Kenya, where it can be paid as cheaply as possible. AI data is the embodiment of existing hierarchies, in terms of race and gender, but also in terms of the hierarchy of capital, of wealth over and above labor.
b) No matter what data sets are used, AI has a built-in tendency to prioritize the past over the future. It can be used to write in various styles, a Shakespearean sonnet about Taco Bell, say, or to give us a painting of our cat in the style of impressionism, but the increased use of AI to generate text and images increases the weight of the past on the future. This is further complicated by the fact that the more corporations rely on AI-generated art or text, the fewer new images or texts will be produced. AI-generated art and writing effectively stop intellectual and artistic transformation, weighing on the brains of the living like a nightmare.
c) Generative AI’s tendency to reproduce existing biases is further complicated by the fact that its results are often treated as free of bias, as the synthesis of all existing human knowledge. Bias is conventionally understood to be an effect of human perspectives and agendas; technology and machines are understood to be objective. Machine-generated answers would then appear to correct for bias by assembling all of the relevant data. However, generative AI has no means of sorting or adjudicating between different claims. To put it simply, its sorting is for the most part statistical: it says what most of the texts it was trained on say. The gap between how the information is produced and how it is received leads to real confusion and a breakdown in any ability to critically assess information or perspectives. To cite a recent New York Times article about individuals caught up in AI’s claims: “At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren’t true but sounded plausible.”
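The point that such a system "says what most of the texts it was trained on says" can be made concrete with a deliberately crude toy: a program that, given a word, simply returns the continuation that appears most often in its training corpus. The corpus, the function name, and the example sentences here are all invented for illustration; real language models are vastly more sophisticated, but the underlying principle of frequency over truth is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the majority view is simply whatever appears most often.
corpus = [
    "the earth is round",
    "the earth is round",
    "the earth is flat",
]

# Count which word follows each word across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def continue_text(word):
    """Return the statistically most common continuation, true or not."""
    return follows[word].most_common(1)[0][0]

print(continue_text("is"))  # "round": chosen by frequency, not by evidence
```

Nothing in this procedure weighs evidence or adjudicates between the two claims about the earth; "round" wins only because it occurs twice and "flat" once. Had the proportions been reversed, the output would be too.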
d) This confusion can be exploited by bad actors. The combination of the belief in objectivity and ChatGPT’s tendency to be sycophantic, to increase engagement by telling people what they want to believe, not to mention its tendency toward “hallucinations,” makes it a technology that can be easily exploited to reinforce any position or point of view. Possible evidence for this could be seen in the recent Make America Healthy Again report, which was documented as containing numerous “hallucinated” references to articles that were never written. Generative AI has drastically increased an individual’s ability to create documents, publications, images, and even voices and text that look or sound true but are fabrications. George Orwell once painted a nightmare picture of a Ministry of Truth that rewrote history and journalism according to the demands of power; now that power is available to almost anyone at the push of a button.
At a social level, generative AI threatens to reify past hierarchies and exclusions while reducing our capacity to critically assess the future. As Dan McQuillan writes, “Socially applied AI is ultimately a technology of unfreedom because it closes off possible futures other than those of its own determination.”
3) The last aspect of the damage of so-called AI is what it is doing to the ecology of our minds. On this point it is useful to stress that AI does not, of course, exist outside of existing social relations. Many of the most disturbing stories about AI use reflect the existing isolation and immiseration of life under current social conditions. That people turn to ChatGPT for therapeutic advice or for companionship says as much about our society as it does about the technology. However, as much as these forms of use stem from existing social relations, AI has a tendency to reinforce them. People who interact with an AI-powered chatbot for companionship will have even less of a chance to connect with human beings. As Dan McQuillan writes, “By ignoring our interdependencies and sharpening our differences AI becomes the automation of former UK prime minister Margaret Thatcher’s mantra that ‘there is no such thing as society.’”
a) This would be the first psychic aspect of AI: it offers the semblance of social interaction without the irreducible difference that defines social life. To deal with other people is always to deal with the fact that people understand things differently, have different goals and aspirations; AI offers us the possibility of connection without conjunction, to use Franco Berardi’s terms. As Berardi writes, "Conjunction is the encounter and fusion of rounded irregular forms that infiltrate in an imprecise and repeatable interaction of algorithmic functions of straight lines and points that can be perfectly superimposed onto each other inserting and detaching themselves according to discrete modalities of interaction…The digitalization of communicative processes produces a sort of desensitization to the curve, to continuous processes of slow becoming, and a corresponding sensitization to coded, sudden changes of state and the succession of discrete signs.” There are even stories of people who prefer the company of chatbots that are always available, always friendly, and free of the discord that is endemic to social relations. AI threatens to deepen social disconnection while monetizing the ensuing isolation and loneliness.
b) Generative AI is often presented as a technology that can perform two important tasks. First, it can summarize texts, notes, and even recordings. Second, it can generate text from a handful of prompts. In other words, it offers the possibility of automating reading and writing. Following Bernard Stiegler, I would argue that this automation should be understood as a deskilling, or a proletarianization, of these capacities; the technology knows how to spell, draw, or construct a sentence so that I do not have to learn. Of course, there are many instances in which such a shift of skill from the person to the machine is welcome; Google Maps is easier to use than trying to read a map while driving a car. However, I would argue that reading and writing are not just particular skills; they are unique in that it is in and through reading and writing that we learn both what we think and how to think. Or, put more simply, reading is thinking and writing is thinking. The more these skills are automated, the more one loses the ability to think. A recent study from MIT suggests that people who rely on ChatGPT to write end up retaining less information and are unable even to summarize what they "wrote."
c) AI thus promises a fundamental transformation of the division between tasks that are in some sense inalienable and those that can be automated. Spinoza argued that freedom of speech is a necessary aspect of political life because people cannot help but interpret, evaluate, and think. AI promises that this need no longer be the case, that people will fundamentally outsource their thinking, asking an AI bot any possible question. This is no doubt an incredible advance in convenience, but it is also a loss of fundamental autonomy.
So my question is a simple one: given the effect of AI on the natural, social, and psychic ecologies that sustain individual and collective life, how can anyone who is interested in sustaining and maintaining our biosphere, public sphere, and psychological well-being advocate using these technologies? I know that people will say that these technologies are “here to stay,” or that they are skills that employers demand. However, I am not interested in those arguments, which are a kind of bad faith or, at the very least, not framed in terms of ethics. Is there an ethical justification for AI use, a way that it can be seen as beneficial to natural, social, and psychic ecologies? Or, maybe more to the point, is it possible to have a use of AI that is not attached to ecological destruction, to the entrenchment of existing biases, and to the undermining of the basic literacy constitutive of the subject?