What is complexity? One way to recognize it is a situation in which individuals participate in a production process without understanding it as a whole. Suppose that success depends on input received from ten or twenty people. Each has a well-defined task but no ability to evaluate the entire process. The system is tolerant, and mistakes are not immediately recognizable; they are eventually discovered, but not before too much damage has been done. A traditional carpenter or machinist has complete control of the production process, and mistakes are recognizable at once. But this might no longer be true in a totally computerized factory.
Reliance on software (artificial intelligence) often helps incompetent people. A scientist who is not familiar with quantum chromodynamics, for example, might hide his or her limitations by using a computer code created by someone else. Using the code, that scientist might be able to analyze some experimental data successfully and make correct predictions. It is amazing how much can be accomplished by a person who knows nothing more than how to enter data and which buttons to press. But each code, like each traditional theory, has limitations; it works well in some situations and not in others. An expert is aware of this and is able to avoid mistakes resulting from inappropriate use of software. Unlike an incompetent pretender, a real expert knows that any model of reality, whether a traditional theory or a computer simulation, is never more reliable than the assumptions on which it is based.
Is it possible that division of labor, and the negative aspects of artificial intelligence, were responsible for the monumental errors made in the financial sphere? I do not know how to answer this question. Yes, artificial intelligence is often very useful. But the rate of progress in that area can easily become too fast for human beings to follow. Our memories are limited, and we need time to get used to new ways of doing things. This becomes impossible when changes are introduced too frequently. Wikipedia has an article on complexity in economics:
< http://en.wikipedia.org/wiki/Complexity_economics >
It reminds us that "in the light of the new concepts introduced, economic systems shall no more be considered as 'naturally' inclined to achieve equilibrium states. On the contrary, economic systems - like most complex and self-organized systems - are intrinsically evolutionary systems, which tend to develop, prevailingly toward levels of higher internal organization; though the possibility of involution processes - or even of catastrophic events - remains immanent. . . . " I find such observations interesting, especially those concerning the role of computer simulation. Computer simulations, in the hands of competent economists, should have prevented the disaster. Why was the disaster allowed to happen? Does it mean that the models are not reliable, or that our decision makers are not competent?
Being neither an economist nor a politician, I would very much like to learn what more knowledgeable people say about the complexity of our economic system and about how widely simulations are used by economists. I know that physical scientists and engineers learn a lot from simulations before building complex devices such as oil tankers, airplanes and nuclear reactors. They study the effects of all conceivable parameters and try to minimize the probability of failure. Is this approach possible in the field of economics? If not, then why not?
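To make the engineering approach concrete, here is a minimal sketch, in Python, of the kind of study I have in mind: drawing many random combinations of uncertain parameters and counting how often the system fails. The "load" and "strength" distributions and the numbers below are invented purely for illustration; they do not describe any real device.

import random

# Toy illustration: estimate the probability that a hypothetical component
# fails when an uncertain load exceeds an uncertain strength.
# All distributions and numbers are assumptions made up for this sketch.

def simulate_failure_probability(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        load = rng.gauss(100.0, 15.0)      # assumed uncertain demand on the component
        strength = rng.gauss(140.0, 20.0)  # assumed uncertain capacity of the component
        if load > strength:                # failure whenever demand exceeds capacity
            failures += 1
    return failures / n_trials

if __name__ == "__main__":
    p_fail = simulate_failure_probability()
    print(f"Estimated probability of failure: {p_fail:.4f}")

Running many such trials, while varying the assumed distributions, is how one can see which parameters the probability of failure is most sensitive to.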
P.S.
In a 2004 paper,
< http://www.paecon.net/PAEReview/issue26/Smith26.htm >
Lewis L. Smith wrote about "a proliferation of business advisory groups, conferences, consultants, fellowships, journals, research institutes, seminars, workshops, et cetera." He was referring to research in the field of "complexity economics." All this was not sufficient to prevent the current financial crisis. How can this be explained? Quoting Alan Greenspan, the author wrote:
"A well-known proposition is that, under a very restrictive set of assumptions, uncertainty has no bearing on the actions that policy makers might choose ... These assumptions are never met in the real world.
... policy makers need to consider not only the most likely future path ... but also the distribution of possible outcomes about that path ...
A policy action that is calculated to be optimal ... may not in fact be optimal, once the full extent of uncertainty ... is taken into account ...
... only a limited number of risks can be quantified with any confidence. And even these risks are generally quantifiable only if we accept the assumption that the future will replicate the past ... "
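Greenspan's remark about "the distribution of possible outcomes about that path" can be illustrated with another small sketch, again with invented numbers: simulate many possible future paths of some indicator and compare the average outcome with the bad tail of the distribution. The drift and volatility figures are assumptions for illustration only.

import random
import statistics

# Toy illustration: the average (most likely) outcome can look benign
# while the distribution of outcomes still contains severe cases.

def simulate_outcomes(n_paths=10_000, n_years=5, drift=0.02, volatility=0.15, seed=1):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_paths):
        value = 100.0                      # hypothetical starting value of some indicator
        for _ in range(n_years):
            shock = rng.gauss(drift, volatility)
            value *= (1.0 + shock)         # each year the indicator grows or shrinks randomly
        outcomes.append(value)
    return outcomes

if __name__ == "__main__":
    outcomes = sorted(simulate_outcomes())
    mean = statistics.fmean(outcomes)
    worst_5pct = outcomes[len(outcomes) // 20]   # roughly the 5th-percentile outcome
    print(f"Average outcome after 5 years: {mean:.1f}")
    print(f"5th-percentile (bad) outcome:  {worst_5pct:.1f}")

A policy that looks optimal for the average path may look very different once the 5th-percentile outcome is taken into account, which is precisely the point of the quotation above.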