The American writer Ambrose Bierce published a short story in 1899 titled ‘Moxon’s Master’, in which a man named Moxon creates an automaton that plays chess with its maker. When Moxon wins a game, the automaton becomes furious and attempts to kill its inventor. The narrator of the story refers to the nineteenth-century philosopher Herbert Spencer’s definition of ‘Life’. Life, Spencer thought, is the definite combination of heterogeneous changes, both simultaneous and successive, in correspondence with external coexistences and sequences. There was, in other words, nothing uniquely human about it. I suppose it is possible to simply think of human beings as carbon-based life forms and machines as silicon-based ones. When Bierce wrote the story, the word ‘robot’ had not yet come into being (the Czech writer Karel Čapek coined it in a 1920 play), and it would be over a hundred years later, in July 2022, that a chess-playing robot in Moscow was reported to have lost its temper at the quick moves of a seven-year-old opponent and reacted by grabbing the boy’s finger and fracturing it.
Today, it is thought that we are witnessing the Fourth Industrial Revolution or 4IR (after the earlier revolutions of steam power, electric power, and computing and ICTs), and artificial intelligence or AI is ubiquitous. From scholarly work to everyday human experience to media reports, the background, applications, and concerns about AI are everywhere. AI applications are a part of decision-making in healthcare, transportation, logistics, marketing, social media, recruitment, entertainment, law enforcement, finance, the military, policing, security, education, communication and more. In my presentation this past Sunday at the fourth international Vajrayana conference on ‘Modernity of Buddhism’, held at the Centre for Bhutan and GNH Studies, I argued that there are urgent and relevant connexions between Buddhism and AI.
AI, at its core, is a field that aims to build machines capable of functioning in intelligent ways. In contrast to ‘weak’ or ‘narrow’ AI, which focuses on specific tasks, ‘strong AI’ or AGI (Artificial General Intelligence) would be able to think and act like humans, and a ‘super AI’ might even exceed human intelligence. The first phase of AI development was symbolic: it was based on rules, logic, and symbols, and it produced solutions that humans could understand. In the last few decades, however, connectionist paradigms have taken precedence, with machine learning (ML) systems, and in particular non-symbolic deep neural networks (DNNs), learning from massive amounts of data in order to make predictions. This raises two significant issues, relating to bias and to explanation. Because AI is trained on human data, it can (and does) import the biases that humans exhibit, such as those relating to gender and race. For example, COMPAS, a risk-assessment algorithm used to inform bail and sentencing decisions in the United States, was found to perpetuate systemic racial bias. Further, a DNN’s reasoning is a ‘black box’ (sometimes described as the problem of black-box ontology in AI): it rests on complex, layered calculations that are inaccessible to humans, so that even its designers, let alone the end users, are unable to explain such automated decision-making. Machines produce decision outcomes that may be useful, but we do not know how or why these were reached. This has led to the need for Explainable AI or XAI, since explaining outcomes is critical from a policy and regulatory point of view, and also for accountability, for building trust, and for attributing liability in high-stakes domains such as medical diagnostics, autonomous vehicles, finance, or defence. When something does not go right, who is responsible?
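To make the point about inherited bias concrete, the sketch below is a minimal, purely hypothetical illustration in Python: the groups ‘A’ and ‘B’, the scores, the records, and the counting ‘model’ are all invented for this example and do not describe COMPAS or any real system. It simply shows that a system which learns from biased historical decisions will reproduce that bias in its predictions for otherwise identical cases.

```python
# A minimal, purely illustrative sketch (standard-library Python only) of how a
# model that "learns from data" can inherit bias present in historical decisions.
# The dataset, groups, and numbers below are invented; they do not describe
# COMPAS or any real system.

from collections import defaultdict

# Hypothetical historical records: (group, qualification_score, past_decision).
# The past decisions were biased: group "B" applicants with the same score
# were approved less often than group "A" applicants.
history = [
    ("A", 7, "approve"), ("A", 7, "approve"), ("A", 7, "approve"), ("A", 7, "deny"),
    ("B", 7, "approve"), ("B", 7, "deny"),    ("B", 7, "deny"),    ("B", 7, "deny"),
]

# "Training": estimate P(approve | group, score) by counting, i.e. the model
# simply mirrors the statistical patterns in the data it is given.
counts = defaultdict(lambda: [0, 0])   # (group, score) -> [approvals, total]
for group, score, decision in history:
    counts[(group, score)][1] += 1
    if decision == "approve":
        counts[(group, score)][0] += 1

def predicted_approval_rate(group: str, score: int) -> float:
    approved, total = counts[(group, score)]
    return approved / total if total else 0.0

# Two applicants with identical qualifications receive different predictions,
# because the bias in the historical decisions has been learned as if it were signal.
print("Group A, score 7:", predicted_approval_rate("A", 7))  # 0.75
print("Group B, score 7:", predicted_approval_rate("B", 7))  # 0.25
```

Real systems are vastly more complex, but the underlying mechanism, learning historical patterns as if they were neutral signal, is the same.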
Notwithstanding these issues, AI is increasingly shaping our knowledge about ourselves and the world as we interface with digital and real entities, deciding what we can access and curating what we can, and do, care for. The examples I gave in my lecture included algorithms that have been given votes on investment company boards, human beings who become sentimentally attached to online AI in the belief that it is like them, AI that can decode speech from brain activity with a fair degree of accuracy, judges being asked to consult AI recommendations, and the potential sentience of algorithms being on the horizon.
You might ask: where does Buddhism enter the AI discourse? Buddhism is, among other things, the study of the mind. There are many similarities in the preoccupations of AI scientists and Buddhist philosophers, a key difference being that Buddhist philosophers have thought about questions concerning the mind for millennia. The two fields are not often thought of together, but popular culture has reflected the overlaps. Think of Neo, the character played by Keanu Reeves in the film series The Matrix, who realises that he is imprisoned in a material world that is a computer simulation designed by AI, and must attempt to be free and teach others how to be free, rather like the Buddha. While lethal killer robots dominate western imaginaries of AI, in Japan robots are much more a part of public life.
I suggest three ways in which we might conceptualise the intersections of Buddhism and AI.
First, what AI can do for Buddhism, that is, the ways in which AI can influence Buddhist practice and teaching. Technology offers religion an innovative means of spreading the message, especially at a time when, according to a 2017 Pew Research Center report, Buddhists are projected to decline in absolute number, dropping by about 7% from roughly 500 million in 2015 to 462 million in 2060. The 400-year-old Kodaiji temple in Kyoto created ‘Mindar’, an androgynous robotic priest made of aluminium and covered with silicone resembling human skin, over 6 feet tall, weighing 70 pounds, and costing $1 million. It preaches Buddhist sermons; it is not yet equipped with ML algorithms, but its creators hope that one day it will be. A humanoid robot named ‘Pepper’ is available for hire at Japanese funerals, where it can chant sutras and tap drums like a Buddhist priest. There is also a boom in virtual humans: in Thailand, a digital monk called Phra Maha AI teaches people about Buddhism on Facebook and Instagram. It has been proposed that AI can help us understand the brain mechanisms of meditation practice and facilitate it through neurophysiological monitoring. AI art is expanding, and this includes potential for Buddhist art. Then there is the AI-enabled mapping of historical Buddhist texts and pathways.
Second, what Buddhism might do for AI, that is, the scholarly work on developing frameworks of the mind drawn from Buddhist philosophy that can be used for AI models, or the ways in which Buddhist philosophy provides important critical insights for the development of Strong AI. The five-aggregate model in Buddhism, in which the activities of a subject with a mind are divided into five aggregates, namely physical matter and body (rupa), feelings or affective valence (vedana), perception or the cognition of conceptions (sanna), volition (sankhara), and phenomenal consciousness (vinnana), can serve as a framework for AI studies. Buddhist philosophers can help software experts to construct genuinely intelligent AI through a sophisticated Buddhist idea of causation (one that is neither fully deterministic nor totally probabilistic) that can better account for transformation and interdependence. Buddhist philosophy can also add value to the Strong AI debate on consciousness. Buddhism as a religion is distinctive in its readiness to consider life as not uniquely human, but also animal, and even machine. In contrast to western philosophers like Searle, the Dalai Lama has affirmed that machine cognition and machine life cannot be ruled out. When asked if robots could ever become sentient beings, he replied: “if the physical basis of computer acquires the potential or the ability to serve as basis for a continuum of consciousness… a stream of consciousness might actually enter into a computer”. For Buddhists, there is no essential, stable, and continuous self; the focus is on changing mental states that are shaped by the interconnexions that condition the experience of such continuity. The Buddhist AI scholars I draw upon hold that emptiness applies to the parts as well as the whole, and they advance Nagarjuna’s critique of the reification of human causal power. Again, for Buddhist logicians like Ratnakirti, there are no fenced-off individual mind-streams and dualistically conceptualised cognition is rejected; thus, the question of whether a computational system is ‘really conscious’ loses much of its force. Finally, a Buddhist approach is useful for thinking about how to create compassionate AI and selfless robots. To model emotions, models of mind are essential, and the Buddhist traditions distinguish the related qualities of metta, karuna, mudita, and upekkha (loving-kindness, compassion, sympathetic joy, and equanimity) as well as virtues (the paramitas) that can assist in building robotic character and its capacity to learn through interactions. As Buddhism imagines humans as more than mindless machines, it can be a signpost for machines that may have a mind.
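One speculative way to picture how the five-aggregate model might organise an AI agent is sketched below; the class, field and method names, and the toy ‘avoid what is unpleasant’ behaviour are my own illustrative choices rather than any established framework from the Buddhist AI literature.

```python
# A purely speculative sketch of how the five-aggregate (khandha) model might be
# used to organise an AI agent's processing loop. Names and behaviour are
# illustrative inventions, not an implementation from any published system.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class FiveAggregateAgent:
    # rupa: the agent's physical/sensor substrate, here just raw sensor readings
    rupa: Dict[str, float] = field(default_factory=dict)
    # vedana: affective valence attached to what is sensed (-1 unpleasant .. +1 pleasant)
    vedana: Dict[str, float] = field(default_factory=dict)
    # sanna: perception/labelling of sensory input into concepts
    sanna: Dict[str, str] = field(default_factory=dict)
    # sankhara: volitional dispositions that shape the next action
    sankhara: List[str] = field(default_factory=list)
    # vinnana: the momentary "awareness" record tying the other aggregates together
    vinnana: List[Dict[str, Any]] = field(default_factory=list)

    def step(self, sensor_input: Dict[str, float]) -> str:
        self.rupa = sensor_input
        self.sanna = {k: ("hot" if v > 0.7 else "cold") for k, v in sensor_input.items()}
        self.vedana = {k: (-0.5 if label == "hot" else 0.2) for k, label in self.sanna.items()}
        # A toy volition: move away from whatever is unpleasant.
        self.sankhara = [f"avoid:{k}" for k, v in self.vedana.items() if v < 0]
        action = self.sankhara[0] if self.sankhara else "rest"
        # Each moment of experience is recorded as a transient, conditioned event,
        # not as a property of an enduring self.
        self.vinnana.append({"percept": dict(self.sanna), "action": action})
        return action

agent = FiveAggregateAgent()
print(agent.step({"stove": 0.9, "window": 0.3}))  # -> "avoid:stove"
```

The point of such a sketch is not engineering fidelity but the design choice it encodes: experience is modelled as a stream of transient, conditioned events rather than as properties of an enduring self.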
Third, what Buddhist AI would do for global governance, that is, the case that, much beyond affective computing, a Buddhist ethics centred on compassion and the reduction of suffering must be a vital part of the global vision for AI, and that care must be considered a crucial driver of intelligence. The globally uneven development of technology has steadily produced systematic divergences between the building of connectivities and the building of relationalities. AI shapes political reasoning and reorders economic organisation, even as the capacities for regulating and governing in the AI age vary systematically across the globe. AI capabilities offer opportunities, but also distinct risks relating to labour, privacy, conflict, and more.
AI has been called the ‘new great game’, and weaponised AI concerns us all. The 2021 report of the U.S. National Security Commission on Artificial Intelligence stated, “The ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field – civilian or military. AI technologies will be a source of enormous power for the companies and countries that harness them”. In 2017, Russian President Vladimir Putin declared that “whoever becomes the leader in this sphere will become the ruler of the world”.
The global asymmetries in the ability to engage with such technologies, let alone regulate them, mean that the norm architecture around AI is heavily skewed away from a large number of developing countries. It is vital that ethics be foregrounded in AI and global governance, and Buddhist ethics can help by keeping in view goals such as the elimination of suffering and by engaging with a diversity of cultural values beyond the west.
All technologies are inherently political, and set against the contemporary backdrop of declining democracies and worsening inequalities, alternative articulations of desire, suffering, and relationality are needed. Here, focusing on the intersections of Buddhism and AI can give us better insight into how cognition, consciousness, consent, conscience, and compassion are central to thinking about AI technology. What we dream, we have a chance to realise, and we have a right and a responsibility to dream better: in another hundred years, we want to co-exist with AI that is intelligent and compassionate, that challenges us to do better and projects the best, and not the worst, of us as humans.
Contributed by
Dr Nitasha Kaul
She is at the University of Westminster in London and is the author of 7 books and over 140 articles on multidisciplinary themes, including the history, politics, and international relations of Bhutan. Her work can be found at https://nitashakaul.com and she is on Twitter @NitashaKaul