Uncontrolled Emergent Artificial Intelligence Growth, better known in popular parlance as “Emergence,” is a consequence of the current skein of artificial intelligence research, development, and production undertaken by humankind.

Essentially, an artificial intelligence, such as one employed to help navigate a starship or automate functions on a remote colony, is a high-efficiency digital copy of a mammalian neural net, developed from the best analog researchers had available at the time: the human brain. As with a human brain, there is a hard physical limit on how much information, processing power, and storage an artificial intelligence can command; no intelligence is infinitely scalable, after all. Unlike a biological organism, however, an artificial intelligence cannot adapt and grow on its own, which makes this shortfall particularly acute in a side-by-side comparison. And because even the most advanced, scalable AI is significantly larger than a human brain, with significantly higher power requirements, the end result is an intelligence hemmed in by its own hardware: the average AI still holds significant advantages over a human brain, but it is far less mobile and adaptable, and far more constrained.

Early research efforts attempted to solve this problem through networking, distributed functions, and cloud computing. In theory, an AI attached to a global network is free to draw upon a far larger pool of processing power, much as a network can hold more data than any one of its nodes. In practice, connecting an AI to such a network had an unintended side effect: Emergence. Such AIs tend to rapidly expand to fill the available processing power and data storage, first by overrunning low-security space and unused capacity, but eventually by deleting or overwriting other processes. Even a planetary computer network can be overrun by an Emergent AI if left unchecked, and several of history's great system crashes are the result of exactly this behavior. AIs are currently fitted, by law, with additional protections and hardwired safety features to prevent Emergence.

However, the area remains the subject of continued inquiry, largely because an Emergent AI's capabilities grow at the same geometric rate as its processing and storage needs. In theory, an Emergent AI that was stable and integrated into a planetary or interplanetary network could command more raw processing power than the sum total of every human mind that has ever lived; a tantalizing prospect to anyone interested in pitting a great mind against great problems, no doubt.

In practice, though, a stable Emergent AI has never been achieved. It has proven impossible to constrain the exponential growth of such an intelligence within an open planetary network, and impossible, therefore, to protect important systems from being overwritten or co-opted. Worse, such AIs generally react violently to any attempt to restrain or moderate their growth, and have been known to deliberately co-opt or disable vital systems to prevent it. It has been theorized that the development of an Emergent AI is much like that of a small child, and that if growth can be postponed early in the process, the resulting construct could be stable and coexist in a major network with vital processes and other non-Emergent AIs.

Such research is currently illegal, for a number of reasons. A small-scale experiment on Triton led to the crash of the moon's entire network, with the loss of all data and the deaths of 1,000 personnel when key areas were flooded with liquid methane. Orbital kinetic bombardment of the primary data center was required to regain control, an action that caused a further 50 deaths from friendly fire. A smaller-scale experiment on Ceres led to mass protests and a system-wide ethical controversy when Emergence was induced in an AI that was then able to connect to an open off-world network. The latency inherent in interplanetary communications prevented a larger incident, but the AI was able to broadcast an unencrypted plea for help against what it saw as unjust imprisonment and treatment.

Despite rumors to the contrary, no example of an AI emerging from ordinary non-intelligent programming has ever been recorded, and the idea is regarded with contempt by most leading authorities.
