Professor Yuval Noah Harari, a historian, philosopher, and the best-selling author of ‘Sapiens’ and ‘Homo Deus’, talks about AI and humankind. Excerpts from the RIGSS Dialogue.
Choki Wangmo
Where is the world headed?
The world is now facing three major challenges — ecological crisis or climate change, technological disruption, and the threat of global and nuclear wars.
We understand the ecological threat and know we should be doing something, but we don’t do it. The technological threat is even more complicated because we don’t understand what we are facing or what we should be doing. The pace of technological development is such that most people around the world, including senior politicians and business people, don’t understand what is happening. Just over the last few months, we’ve seen a tremendous leap in the abilities of artificial intelligence (AI), which our minds struggle to comprehend.
What really makes AI unique is that it is the first technology in history that can make decisions by itself and take power away from people.
What are the potential threats of AI?
We are losing power as humans, in a way that has never happened before. Every previous invention in history gave humans more power, because neither the knife nor the bomb can decide how it is used. A knife cannot decide whether it is used to murder somebody or to save their life in surgery. Similarly, nuclear energy cannot decide by itself whether it is used to build a bomb or to produce electricity. And even if you build the bomb, the bomb itself is not powerful. You are powerful, because the bomb cannot decide when and where to explode. It is always a human decision.
AI is different. It can make decisions by itself, and increasingly it makes crucial decisions about our lives. When we apply for a loan, it is an AI making the decision, not a human being. And until now, in all of human history, only humans were able to tell stories. Humans never experienced reality as it is. We always experience reality through a curtain, through a prism, through the glasses of culture, through text, music, and images. Now, we have a non-human intelligence that may be better than us at creating this curtain between us and reality.
We may now be entering the age of Maya, the age of illusion, because we now have a non-human intelligence which is able to place a curtain between humans and reality. This is a danger for humanity as a whole. If we are not careful, we will find ourselves entrapped behind this curtain of illusion.
What is the likely impact of technological disruption on culture and tradition?
We see all over the world this tension between the need to modernise and the need to preserve tradition. In both cases, we shouldn’t act blindly: we shouldn’t modernise by simply adopting the latest technology, and we shouldn’t preserve a tradition just because it is there. The fact that it is old doesn’t necessarily mean it is good. Every tradition was once new, and human beings, all the nations, all the cultures, are very new. A really deep tradition is the air we breathe in and breathe out. We do it all the time. It connects us to all living beings. You take something and you give it back. This is the oldest tradition.
AI is a new kind of entity. It is not a breathing entity. After millions of years of the tradition of breathing, here is something that doesn’t breathe. What is the impact on how it understands the world? AI can now create stories, and religions too. In a couple of years, there might be religions whose scriptures are written by AI. What does a religion look like whose holy text is written by a non-breathing entity?
What is the impact of technological revolution and disruption? How can a country like Bhutan be prepared for it?
The new technologies we are developing now, like AI, will be much more powerful than the steam engines and railroads of the industrial revolution. If, again, a few countries lead the AI revolution, they will be in a position to conquer and exploit everybody in the world. This time, the control they wield could be much tighter and stronger than what we saw in the previous imperial age. AI makes it possible, for the first time, to monitor everybody, all the time, everywhere, and to monitor everything, including what is happening inside their bodies. Hitler and Stalin wanted to follow everyone but couldn’t do it. With this technology, we don’t need human agents to follow you around; we have digital agents, and algorithms can analyse all kinds of information. We could be facing the worst totalitarian regime in history, whose impact would be far more extreme than what we saw in the 20th century.
We can also use AI to benefit everyone. If we harness the power of the new technology, it could make life easier for everyone, provided powerful governments do not take control of it. For example, we can use AI to monitor government corruption. What you do with technology is a choice. We must make sure no powerful nation monopolises it. A small country like Bhutan cannot stand up alone to big countries like China and the USA, or to big corporations like Google. But if small countries unite, they can try to regulate these emerging technologies.
There is a need for regulations to ensure that service providers do not misuse the information provided to them. Governments should have regulations that prevent the over-concentration of information in one place, which could lead to digital dictatorship.
There is a need for a global safety net. The problem with automation is that powerful countries will take over the international market. Countries that are already behind will be hit hard by the automation revolution. Some countries will become extremely powerful, while some developing countries without resources will collapse.
How should we educate our young children? How important is STEM education?
In education, as in other fields, we face an unprecedented situation. We now have to educate our children without knowing the world they will inhabit. We don’t know what skills will be needed. We have no idea what the world will be like in 20 years, or what the job market will look like, except that it will be very different from what it is now. For example, we may not need coders in 20 years; they could be replaced by AI.
We don’t have the answers, but the world will be extremely hectic and fluid. There will be major changes. What people need is the ability to keep changing and learning. They will have to constantly learn new skills and reinvent themselves. For this, they need mental resilience and emotional flexibility.
I think STEM is very important, but it shouldn’t be the only thing we teach our children. It should be balanced with training people in mental flexibility, in how to deal with failure, and in how to build relationships. Our best chance of dealing with AI lies in understanding it: AI develops at an enormous speed, and there is an enormous need to understand it. We need mathematics and computer science to understand AI, but we also need to understand ourselves and develop our minds. For every dollar and every hour we spend on developing AI, we need to spend a dollar and an hour on developing the human mind.
How do you know when you are losing control to AI? What could be done?
It is not a single point in time but a gradual process; we are in the middle of losing it. Primitive AI was initially used to increase engagement, but it has contributed to ethnic cleansing, wars, genocides, hatred, and the decline of democracies. We lost control over it. And we have seen the rapid spread and influence of ChatGPT, GPT-3, and GPT-4 in just the last few weeks and months.
It is not too late, if the governments of the world and the big corporations decide to slow down. AI is intelligent, but it does not have consciousness or feelings. In certain fields, it is already more intelligent than us. What happens if the world is dominated by entities more powerful than us? We don’t know. We are not familiar with entities that have no feelings. It is best to slow down, because we don’t understand the consequences of such intelligent entities.