What begins as an emergency response to the spread of disease can soon give way to wide-open questions of what we really live for, and what we really want. Historically, plagues and other lethal contagions have tended to stir religious fervor, and it is hardly a surprise that the current pandemic, too, spawns existential questioning. But such contemplations need not deteriorate into metaphysical flights of fancy. In fact, they often arise with a clear concern for the world we live in, here and now. Covid-19 forces us to think, in concrete terms, about seemingly abstract questions that we otherwise tend to relegate, more or less automatically, to "some other time" in a vague future. As we try to ward off disaster and chart a wholesome course of action, powerful AI is there to assist us in computing the vast and diverse data collected on the humanitarian, social, and economic battle fronts. Yet as we rely on the crucial help of our newfound virtual friends to visualize the lay of the land and chart a way forward, a computation-based method requires that we translate important human values, like that of a saved human life, into sometimes disturbingly quantifiable terms. And while working toward a somewhat reasonable formulation of the value of one human life, we soon realize that we must also be able to say more about what gives life value in the first place. What is "a good life"? And what do we really want this world to be? In the end, building a reliable model of disease control requires good-enough answers to questions like these. Indeed, if the values of a model's central variables turn out to be dramatically off, the consequences of relying on its predictions may be disastrous.
Perhaps precisely because there are hardly any clear-cut or universally applicable explanations of what it means to be alive, and valuably so, human minds have for millennia kept working on these questions in a wealth of often radically different contexts. In other words, now that we are called on to give practically meaningful answers to notoriously deep questions, there are plenty of resources to consider. Buddhism, for one, abounds with discourse on benevolent intelligence and informed networks of expert care. The promise of a wholesome and productive synergy between such approaches to life and the cognitive prowess of artificial intelligences is there to be explored. Buddhism has a bent for unrelenting inquiry into the nature and underlying structures of life, but it also cherishes pragmatism and solution-oriented approaches to the concrete challenges we encounter. These two commitments may seem to pull apart, and yet Buddhists claim otherwise. We try to take them at their word, and so we ask ourselves: what would happen if the ideas and concepts of Buddhism were made clear and useful to the current AI efforts around Covid-19 and the concomitant economic challenges?
Human-AI Symbiosis under Viral Pressure
AI is at the heart of the fight against Covid-19. As we grapple with a plethora of complex and often painful questions raised by the pandemic, the models used to predict both the spread and the containment of the virus are largely driven and developed with the help of AI. Where governments have been most successful in keeping the disease at bay, AI has typically been employed aggressively to track and disseminate information about carriers of the virus within society. In the race toward a successful vaccine, AI is also at the forefront, charting the structures of viral proteins with astronomical speed. As in so many spheres of life (and death) in the 21st century, AI involvement in this now global battle against disease has become just about inextricable from, and hence intrinsic to, human efforts.
And AI is arguably naturally suited to assist us in facing this existential challenge. Machine intelligence still struggles to make crude common sense out of complexity, but neural networks thrive on vast data, registering both the minute detail and the exponential growth that humans struggle to comprehend. In fact, the more data, the better for neural networks. In other words, where we humans are strong (in making ad hoc common sense, for example) the networks are weak, and where we fail (as in keeping vast complexity in mind) machine intelligence flourishes. Who would decline the services of a friend, artificial or not, who so clearly possesses something that we humans both lack and need, given the current challenges? The fit seems natural and the synergy undeniable.
But with increasing ability comes greater responsibility. As humans we have for ages been able to live, evolve, and develop despite, and perhaps even by virtue of, wide swaths of cognitive illusions and a wealth of moral idiosyncrasies. But as our capabilities increase dramatically with the help of AI, so do the ramifications of our underlying imperfections. The question of what things really should be like is taking on a new concreteness, if not urgency, because AI enables us to move much faster toward the fulfillment of our wishes than would otherwise have been possible. Which outcomes should we aim for, and how will we know whether the results match our intentions? The design of "the ideal world" is therefore not just a dreamy pastime, nor the reclusive domain of professional philosophers, preachers, demagogues, and the like. Today, most of us will need a rough-and-ready answer to this tantalizing but barely tangible question, because we cannot quite afford to go by our ordinary inclinations, hunches, and half-baked conclusions. As fairy-tale listeners have known for centuries, once a genie pops up, we had better be good at making wishes. Yet as AI moves forward in leaps of progress, our human capacity for making sound and reasonable choices is easily strained.
This rather novel predicament, needing to know quite well what we want because we might soon get it, is well illustrated by the pandemic. Covid-19 is an enemy that must be effectively neutralized. But once the data on the enemy and its swift movements begin to stream in, how do we best treat and respond to the information? The clear perception of a distinct public enemy is soon complicated once we begin to look into the different roadmaps for containing and defeating the disease. How do we weigh the value of a saved human life against the economic and social costs incurred in saving it? As the relevant experts and authorities build our computerized models for disease control, questions such as these must be addressed explicitly for the models to make sense and be effective. The pandemic requires a forceful and global response, and AI can help us design and test potential treatments, vaccines, and preventive social behaviors. But as comprehensive courses of action begin to emerge on our screens and in our minds, we may soon find occasion to pause, asking ourselves what sort of world it really is that we are seeking to protect, or create. Accounting for the spread of a disease across the world requires myriad data from distinct fields of learning and spheres of life. As AI helps us bring them all together, a rich map of interdependent physical, biological, and social factors begins to emerge. But as the map is constructed and its myriad variables are revalued and adjusted, the otherwise clear contours of "the enemy" are perhaps no longer as distinct. Where, really, do we want the model to take us?
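The sensitivity described above can be made concrete with a deliberately simplified sketch. All figures and policy names below are hypothetical illustrations, not drawn from any real epidemiological or economic model; the point is only that a tenfold change in one valuation parameter, the assumed worth of a saved life, flips which policy a naive cost-benefit comparison recommends.

```python
# Toy illustration (all numbers hypothetical): how a single valuation
# parameter -- the assumed "value" of one saved life -- can flip which
# policy a simple cost-benefit model recommends.

def net_benefit(lives_saved, economic_cost, value_per_life):
    """Net benefit of a policy under a given valuation of one life."""
    return lives_saved * value_per_life - economic_cost

# Two hypothetical containment policies.
policies = {
    "strict lockdown": {"lives_saved": 10_000, "economic_cost": 50e9},
    "light measures":  {"lives_saved": 2_000,  "economic_cost": 5e9},
}

# Compare recommendations under two different valuations of a life.
for value_per_life in (1e6, 10e6):
    scores = {
        name: net_benefit(p["lives_saved"], p["economic_cost"], value_per_life)
        for name, p in policies.items()
    }
    best = max(scores, key=scores.get)
    print(f"value per life = {value_per_life:.0e}: recommend {best}")
```

Under the lower valuation the model recommends the lighter measures; under the higher valuation it recommends the strict lockdown. Nothing about the disease changed, only the value we assigned to a central variable, which is precisely why such variables deserve explicit scrutiny.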
Acknowledging Our Combined Responsibilities
AI helps us compute the (very) big picture, but it is then our responsibility to know rather well what we mean when we say, for example, "life," or speak of "a valuable life." Of course, as any thinking adult or child knows, concepts such as these are not self-explanatory. Quite the contrary. But for our newly gained symbiosis with AI to make sense and be successful, we will have to keep applying our minds and hearts to exploring them. Value misalignment can be detrimental, whether in the development of AI or in the fight against contagion. So let's give it our best. The reach of our hybrid minds seems greater than ever.
Supported by the Diverse Intelligences initiative of the Templeton World Charity Foundation, our team of mind scientists, thinkers, and AI professionals is working on a resource for making thoughts and practices from Buddhist contexts practically comprehensible to AI science and, perhaps, to AI as such. In other words, we are putting lofty claims to truly noble ethics and profound insight, claims and slogans that for believing Buddhists carry a long and distinguished pedigree, to a practical test in the unforgiving here and now of AI implemented in the real world. To carry out that experiment we must, of course, be able to establish a sense of mutual understanding, and so our task is also to render key concepts from AI concretely meaningful within Buddhist conceptual spheres and, indeed, within traditional Buddhist communities of learning and contemplation. In these ways, we hope to contribute to the global emergence of powerful, mature, and ethically informed collaborations between biological and artificially engendered intelligences. Together we are strong.