'Yesterday, we stood on the edge of the abyss, but today we took an important leap forward,' a colleague once said. At the time, an ambitious systems renewal project was faltering and about to fail. Individual employees could do little about it. We played our part in the drama and watched it happen. But if you listened to the corporate propaganda, we were doing great. In the end, 100 million euros had gone down the drain. But that was only child's play compared to humanity's latest undertaking. We are about to make another leap forward, a jump into the abyss, with artificial intelligence (AI). And we cannot stop it. We helplessly watch the drama unfold. We have no control over our future. In the current political and economic system, states and corporations compete to stay ahead of their rivals. Many people think we are doing fine and that life has never been better, but if you fall into the abyss, the weightlessness can give you an ecstatic sensation.
During an interview, the historian Yuval Noah Harari lamented, 'Humans have become like the gods. We have the power to create new life forms and destroy life on Earth, including ourselves. We face two threats: ecological collapse and technological disruption. Instead of uniting as humanity to face these common challenges, we are divided and fighting each other more and more. If we are so intelligent, why are we doing these stupid things? We are engaged in self-destructive activities and seem unable to stop ourselves.' The death toll of Mao's Great Leap Forward was thirty million. Our future could be much worse. Harvests around the globe are failing, likely because of climate change. At the same time, we make computers more intelligent than ourselves. Perhaps those computers will find solutions to our problems. But we do not need computers to tell us what we should do. It is not that we do not know. It is the hope for technological solutions that discourages us from taking the necessary drastic actions.
Scary technology

Since time immemorial, people have been scare-mongering about new technologies. We can use every technology for good and evil. You can use a kitchen knife to peel potatoes or to kill someone. So far, the apprehension has been overdone. As soon as humans mastered fire, some probably warned against using it. Fire could escape our control and kill us. Socrates dreaded writing because written texts could replace our memories and make us dumb. Legend has it that Socrates was the wisest man around at the time. How could he be so mistaken? Later, the printing press caused anguish about information overload. There would be so many books; how could anyone ever read them all?
That was a sheer underestimation of human problem-solving capabilities. It was a problem only intellectuals could think of. You do not have to read every book. And illiterates figured that out quite quickly. How could illiterates know better than intelligent people? But our talent for fretting is eternal. Train travel would cause infertility, the telegraph's short sentences would undermine human language, telephones would cause electrocution, television would destroy our social life, Internet search engines would make us stupid, and 5G would change human bodies and enable the coronavirus to spread. We survived all that, so scare-mongers seem silly now, just like people expecting the end times and the return of Jesus. That could be precisely the moment for our hubris to take us down.
An atomic bomb can obliterate a city and kill everyone inside it. These bombs have been around for nearly eighty years now. And we are not dead yet. But we might all die within a matter of hours. There are enough weapons of mass destruction to wipe us out several times over. And you cannot prove that these weapons will kill us until they do. So, those who demand proof are not the brightest minds. To illustrate the point, imagine a one per cent chance of a destructive world war starting each year. That chance is there every year. Over 10 years, the likelihood of war becomes nearly 10%. And over 50 years, it becomes close to 40%. In the long run, a world war is inevitable even if the likelihood in any given year is only 1%. It is impossible to assess the exact chance of world war, but there is one, and the example demonstrates that, given enough time, world war becomes a certainty unless we achieve permanent world peace.
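The arithmetic behind these figures is simple. Assuming a constant and independent yearly risk p (the one per cent is illustrative, not an estimate), the probability of at least one world war within n years is

P(war within n years) = 1 − (1 − p)^n.

With p = 0.01, that gives 1 − 0.99^10 ≈ 0.10 after ten years and 1 − 0.99^50 ≈ 0.40 after fifty, and the figure creeps towards certainty as n grows, whatever the non-zero yearly risk.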
Should we fear AI? Several experts certainly are scared. AI could mean the end of humanity, they claim. Others disagree. AI could escape our control, leading to unintended outcomes. A low chance of that happening in any given year is not particularly reassuring. The same applies to other technologies like genetic engineering. And perhaps accidents are not our biggest concern. So, why is AI more dangerous than other technologies that can go wrong? Harari came up with the following:
- AI constantly improves. It will be faster, more accurate and can outcompete us.
- AI can create new ideas that are better than ours. It can think for us.
- AI can make decisions by itself, and these decisions are better. It can decide for us.
- AI can exploit our weaknesses. It can make us do what its makers want us to do.
We cannot compete because we need rest, can be distracted and do not learn as fast. Change is stressful to us. Things are moving too fast. We are close to the point where we cannot take it anymore. We increasingly entrust ourselves to entities that learn constantly, at a pace we cannot match. AI will take over many tasks humans perform, and this time, there may be no new jobs for us, leading to widespread unemployment and economic disruption. If everyone is without a job, who can afford food and housing?
And why should we make decisions if computers make better ones? For instance, why should you drive your car when self-driving cars cause fewer accidents? Why do we need doctors if AI can make better diagnoses and operate on patients with fewer errors? And AI may know more about us than we know about ourselves. So, why should you decide what films to watch and what books to read if a computer knows better? AI already makes personalised suggestions on web stores.
Socrates feared writing would make us dumb. If we write things down, we do not have to remember them. But we make far more memories than we can put into writing. And Socrates did not anticipate that written texts could make us more intelligent. Text can last longer and be more accurate than human memory. If you write down your thoughts or the data you have gathered, you do not have to reinvent your ideas or gather the data again. Instead, you can start where you ended, improve your thoughts and write them down again, or find more data to arrive at better conclusions. Likewise, spelling and grammar checkers relieve us of the need to write correctly. They can help us focus on our ideas rather than on spelling and grammar. As a result, we may express ourselves less clearly. And navigation systems can erode our ability to orient ourselves in our environment. But AI goes further than that. It can generate ideas by itself and make decisions for us.
Soon, there may be no point in thinking and learning because AI knows better. Students already use ChatGPT to write their essays. Soon, AI will write better articles than humans on almost every subject. And what is the point in learning if you can ask a computer any question and get an instant answer that is better than anything you could come up with after months of research?
Algorithms on social media discovered that inciting hatred, outrage and fear is a successful way of attracting our attention and keeping us hooked on a platform like Facebook. And that was simple AI. Today, AI can generate fake news stories and videos. Soon, it might be impossible to discern truth from fiction. In the future, AI could develop intimate relationships with us, make us buy things or change our opinions. Soon, computers and robots may control or manipulate us without our knowledge.
Military applications may be the most troublesome. You cannot afford to lose a war. And so, there is cut-throat competition. Militaries worldwide race to develop AI faster than their adversaries. AI can make decisions faster and better than humans. A human pilot who fights an AI pilot has no chance. AI can accelerate weapons development. A computer has already generated thousands of ideas for new chemical weapons. Killer robots that decide who to kill are on the way. And we may consider that morally acceptable if AI makes fewer errors in distinguishing between civilians and combatants.
Whether AI is dangerous is not the question; the danger is real. We can use it for beneficial purposes but also for destructive ones. The atomic bomb can kill us in ways we can foresee; with AI, we cannot predict how it might end us. It constantly evolves. And we have no control over the technology. And so, the AI created by the competition between nations and corporations determines what happens, while no one intends the outcome.
The system that ends us

We cannot choose the world and time we live in. If you lived in Germany in 1620, that determined your options in life. You could not go on vacation by aeroplane to Spain, watch television, or post your life on Instagram. You did not have these options. And you did not know what happened in China or England, or you only learned about it years later. Look at all the choices we have today. There are shampoos for every type of hair and from many different brands. And that is just shampoo. In 1620, you washed your hair with water or not at all. For every desire, countless products are on the market. But despite the infinite options, you cannot choose the system. You can go off-grid or become a homeless vagabond, but you cannot choose another system. Indeed, the world and time we live in are a given, and we can only change the system together if we agree on what we must do.
Marxists have long railed against this system, which aims at profit rather than our happiness and well-being. It lures us with the satisfaction of false desires. We must work harder and learn faster. Our governments can send us to war. We cannot escape the survival-of-the-fittest competition between corporations and governments. But mediaeval people had no choice either. And they lived under worse conditions than most of us. We could live with the current system if it did not kill us. But that is not the case. Wherever the system arrived, it brought destruction. Traditional communities became integrated or disappeared. And we are about to destroy ourselves. The Marxist alternative collapsed under capitalist competition. Marxism supposedly was a rational alternative, but it did not work out as envisioned. Still, we need inspiring stories. The writing is on the wall. We must end the system to survive, but how?
The list of failures is endless. Mao's Great Leap Forward was one of them. Instead of envisioning something new, we can better look at what works. Few have succeeded in building communities outside the system that surrounds us. The Amish are a notable exception. They adopt modern technologies only if these do not affect their way of life. The Amish place a high value on family time and face-to-face conversations and aim to maintain self-sufficiency. They value rural life, manual labour, humility and submission to God. The Amish choose their path willingly, and their numbers are growing. They have built a utopian society. Unlike other idealist communities, their way of life has stood the test of time. Like us, they cannot choose the world they live in. The Amish can do as they please because they pose no threat to the system. But they can show us how to survive the coming Great Collapse.