Artificial Intelligence

Ever since computers were invented, geeks haven’t stopped talking about artificial intelligence (AI). Indeed, it is cool to have a machine that helps human beings with cognitive activities.

There are two types of AI: strong AI (general AI), which can perform any intellectual task that a human can, and weak AI, which is designed to solve specific tasks.

Weak AIs are already here. Chess programs play better than the best human players, pocket calculators multiply numbers faster than the best mathematicians, and IBM Watson beats human players at Jeopardy! and helps doctors treat cancer. Machines are taking on more and more tasks that were previously considered entirely human work.

But strong AI has not been created yet, and many researchers doubt that it is even possible to create. I also don’t see any reason to work on strong AI instead of weak AIs. Weak AIs already help humans in many areas, and they are relatively easy to create because the purpose of each weak AI is narrow and well understood. This is similar to how humans work: nobody is an expert in everything; we specialize in narrow areas of knowledge. Why should machines be any different? It might not be economically wise to invest resources into building a generic machine when machines for specific tasks can be much cheaper.

Still, in case strong AI is feasible and can be invented in the future, it is good to know what to expect. Will it be human-friendly? How can we make it human-friendly? How fast will it improve itself? Can there be competition between different strong AIs?

In “Intelligence Explosion Microeconomics” (http://intelligence.org/files/IEM.pdf) Eliezer Yudkowsky describes one of the key issues: returns on cognitive reinvestment. Unlike humans, a strong AI could improve itself by rewriting its own code, scaling its hardware, inventing better hardware, and so on. The growth of AI intelligence will therefore depend on the function of returns on cognitive reinvestment: the ability to invest more resources to get a better and faster mind.

The author describes three scenarios: “intelligence fizzle”, “intelligence combustion”, and “intelligence explosion”, and then goes into detail on each of them. He argues for the last scenario using observations from the evolutionary history of the human brain and from the history of computer science, taking into account factors such as “returns on speed”, “returns on population”, local versus distributed intelligence, and engineering difficulty compared with evolutionary difficulty.
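To build some intuition for why the shape of that return curve matters, here is a minimal toy simulation in Python. It is my own sketch, not a model from the paper: I assume capability grows each step by an amount proportional to capability raised to an exponent k, and the constant c, the starting capability, and the cutoff value are arbitrary choices for illustration.

```python
# Toy model of recursive self-improvement (my own illustration, not the
# formalism from the paper). Each step the AI reinvests its capability and
# gains c * capability**k, where k is an assumed "return on cognitive
# reinvestment" exponent.

def simulate(k, steps=50, capability=1.0, c=0.1, cap=1e12):
    """Return the final capability after `steps` rounds of reinvestment."""
    for _ in range(steps):
        capability += c * capability ** k   # invest capability, get improvement back
        if capability > cap:                # runaway growth: call it an explosion
            return float("inf")
    return capability

for label, k in [("intelligence fizzle     (k = 0.5)", 0.5),
                 ("intelligence combustion (k = 1.0)", 1.0),
                 ("intelligence explosion  (k = 1.5)", 1.5)]:
    print(f"{label}: final capability = {simulate(k):.3g}")
```

Roughly speaking, diminishing returns (k < 1) make growth flatten out, which corresponds to a fizzle; proportional returns (k = 1) give steady exponential growth, a combustion; and accelerating returns (k > 1) blow past any fixed threshold within a modest number of steps, which is the explosion case.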

After reading this paper, I think that an intelligence explosion is possible if strong AI itself is possible, and it would be really good to solve the AI friendliness problem before such an explosion happens.

Meanwhile, let’s work on weak AIs, which can bring us profit today.