Human-Level AI

It seems to me….

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” ~ Eliezer Yudkowsky[1].

Development of human-level artificial intelligence (AI) is inevitable: it is basic to human nature to continually improve or create whatever is feasible. Computing technology has already developed rudimentary AI to the point where most people now carry such devices in their pockets. AI agents such as Siri and Cortana will continue to be improved year after year until they are smarter than we are, at which point they will begin to improve themselves at an ever-increasing rate. There is no denying this probability. We know that mere matter can give rise to what is called “general intelligence”, an ability to think flexibly across multiple domains. Intelligence can develop from a specific organization of molecules; the human brain is the existing proof. Atoms are atoms, and if we continue to build systems of atoms that display increasing levels of intelligent behavior, we eventually will build general intelligence into our machines. Development will then have reached what is referred to as the “technological singularity”.

The technological singularity is the point beyond which the capabilities of such an intelligence become impossible to comprehend: events past it are unpredictable, even unfathomable. Everything that comes after it is therefore unknowable, and any prediction is pure speculation.

Development of general AI has the potential to eliminate war, disease, and poverty, to permit space exploration, and much more; it would be the greatest development in human history. It could produce a utopia; it could also be the end of all life. Until now, we have always had the opportunity to try something repeatedly until we eventually got it right. The initial development of general AI must be absolutely correct the very first time; there will be no second chance. There is a significant probability that AI will either destroy us or drive us to destroy ourselves.

The next century could therefore be the most dangerous humanity has yet faced, as progress in science and technology becomes an ever-greater threat to our existence; Stephen Hawking, Elon Musk, Bill Gates, and many other eminent figures have warned of the danger. Even without any menace from AI, the chance of disaster on planet Earth rises statistically to near certainty within the next one thousand years. Most of the threats humans now face come from advances in science and technology, such as nuclear weapons and genetically engineered viruses. We very likely will not establish fully self-sustaining colonies in space, which would improve our chances of survival, for at least the next hundred years. This is the interval during which life as we know it is most vulnerable.
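
To make the “near certainty” figure concrete, here is a minimal sketch of how risk compounds over time; the annual probabilities used are purely illustrative assumptions, not estimates taken from the text.

```python
# Illustrative only: the annual risk values are assumptions made for the sake of the arithmetic.
def cumulative_risk(annual_probability: float, years: int) -> float:
    """Probability of at least one catastrophe within `years`,
    assuming an independent, constant annual risk."""
    return 1.0 - (1.0 - annual_probability) ** years

for p in (0.001, 0.005, 0.01):  # 0.1%, 0.5%, 1% per year (assumed values)
    print(f"annual risk {p:.1%}: over 1000 years -> {cumulative_risk(p, 1000):.1%}")
# Even a 0.5% yearly risk compounds to roughly 99% over a millennium.
```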

The threat is not from some super-intelligent malevolence of the kind usually depicted in movies; it is from an entity more competent than us whose goals diverge from ours even slightly and which would not hesitate to destroy us. We do not intentionally harm most other life, and we normally even try to avoid injuring anything, yet we eliminate without hesitation whatever seriously conflicts with our goals. Anything more intelligent than us can be expected to behave similarly.

Admittedly, artificial entities capable of human-level general intelligence do not currently exist and most likely will not in the very near future. In fact, much of what is referred to today as AI is only a combination of clever algorithms and brute-force computing rather than actual intelligence. But if general intelligence has arisen once, it obviously can, and will, arise again.

The primary technological threshold is the point at which AI becomes capable of strong self-improvement. There is no way to predict when this will be reached, because its actual difficulty cannot be known or reliably estimated. Many estimates assume continuous exponential improvement, but that applies fully only to hardware; software advances typically arrive in irregular, unpredictable steps. Better unsupervised learning algorithms are a crucial step toward human-level AI.

For machines to truly have common sense, that is, to be able to figure out how the world works and make reasonable decisions based on that knowledge, they must be able to teach themselves without human supervision. Numerous proposals address automating decision-making under uncertainty, including whether both the AI system’s model of its operating environment and its internal representations can be fully retained rather than prematurely pruned[2]. It still remains difficult to quantitatively identify causal connections between data components.
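
As a hedged illustration of what learning without human supervision can look like in its simplest form, the sketch below clusters unlabeled points with scikit-learn’s k-means; the synthetic data and the choice of three clusters are assumptions made only for this example, and it deliberately stops well short of the causal-discovery problem mentioned above.

```python
# A minimal sketch of unsupervised learning: the algorithm receives no labels,
# yet recovers structure (clusters) from the data on its own.
# Uses scikit-learn; the synthetic data and k=3 are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three unlabeled groups of points; the algorithm is never told they exist.
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0.0, 5.0), scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print("discovered cluster centers:\n", model.cluster_centers_)
# Grouping the points is easy; inferring *why* they group (the causal
# connections mentioned above) remains the hard, open problem.
```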

Though it cannot be predicted how long such development might take, regardless of the rate of progress it eventually will be achieved. Once given human-level intelligence, electronic circuitry operating roughly a million times faster than human biochemical circuits could perform thousands of years of equivalent human-level work within a week. Human-level intelligence is constrained by biology; there is no comparable constraint on electronics.
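
A back-of-the-envelope check of that claim; the million-fold speedup is the figure from the paragraph above, and the rest is plain arithmetic.

```python
# Rough arithmetic behind the "thousands of years of work in a week" claim.
speedup = 1_000_000        # electronic vs. biochemical signalling speed (figure from the text)
weeks_elapsed = 1
subjective_weeks = weeks_elapsed * speedup
subjective_years = subjective_weeks / 52
print(f"{subjective_years:,.0f} subjective years of work per elapsed week")
# About 19,000 years, consistent with "thousands of years ... within a week".
```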

One of the threats of general AI arises from the realization that none of us are saints, yet we must develop entities that approach perfection, an impossible expectation. While it would be highly desirable to require any such development to incorporate some set of human moral constraints, relying on that to prevent malevolent behavior is misguided: a developer, whether researcher or government, could bypass such constraints as easily as a biologist developing a virulent pathogen. There will always be human moral deviants bent on social disruption, and government-sponsored development intended to provide unilateral advantage.

We do not have to wait for development of general AI; the effects of limited AI are not off in the future, they are occurring right now. For example, there is no certainty about the extent to which current development will affect employment. It has been widely stated that jobs are not automated, only tasks are, and that tasks which are repetitive, well defined, or have clear-cut goals are the ones most likely to be automated. However, recent AI advances demonstrate a capability to perform tasks previously considered impossible to automate, such as vehicle operation. While it is correct that tasks involving a variety of activities, solving novel challenges in chaotic or challenging environments, or the authentic expression of human emotion are more difficult to automate[3], those tasks can frequently be divided into multiple simpler, less challenging sub-tasks. Nothing should any longer be considered certain.

Past technological advances have always automated tasks that required human effort, primarily replacing muscular labor. The tasks now being automated increasingly involve mental effort. This makes the current wave of automation unique, and it is impossible to accurately assess its future impact on employment. Society has always responded to automation by eventually creating new, different, and more numerous jobs, but historically that transition can take substantial time. It is not clear whether modern society can tolerate that level of disruption.

AI systems currently cannot understand spoken or written information, cannot pass a standard eighth-grade science test, and cannot perform tasks of which typical 10-year-olds are capable, but hardware-dependent improvement in any of these areas is exponential rather than linear. Humans are extremely poor at estimating exponential development, especially once it reaches the “second half of the chessboard”[4]. We can only surmise that development could occur much sooner than we anticipate.
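
The chessboard reference[4] is easy to make concrete; a short sketch of the classic doubling story shows why intuition fails in the second half.

```python
# Wheat-and-chessboard arithmetic: one grain on the first square, doubling on every square after.
grains = [2 ** (square - 1) for square in range(1, 65)]
first_half = sum(grains[:32])
second_half = sum(grains[32:])
print(f"first 32 squares:  {first_half:,} grains")    # about 4.3 billion
print(f"second 32 squares: {second_half:,} grains")   # about 18.4 quintillion
# The second half alone holds roughly four billion times as much as the entire first half,
# which is the point at which exponential growth stops looking intuitive.
```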

While current AI capabilities are very limited and we have no way to predict how long it might take to develop human-level intelligence, the difficulty in predicting the rate of development is that it involves, though does not depend directly upon, exponential growth. Development of general human-level intelligence remains subject to compounding, with each improvement or advance arriving at an ever-faster rate. We assume artificial entities capable of independent thought will not appear until well in the future, but we cannot be sure as we move onto the second half of the chessboard. The singularity might be much closer than we realize.

That’s what I think, what about you?

[1] Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher known for popularizing the idea of friendly artificial intelligence.

[2] Azevedo, Carlos R. B. “Five AI Grand Challenges and the Rise of Anticipatory Multi-Objective Machine Learning”, LinkedIn, 20 June 2016, https://www.linkedin.com/pulse/five-ai-grand-challenges-rise-anticipatory-machine-learning-azevedo.

[3] Kaplan, Jerry. “Artificial Intelligence: Think Again”, Communications of the ACM, January 2017, pp. 36–38, http://cacm.acm.org/magazines/2017/1/211103-artificial-intelligence/abstract.

[4] “Wheat and Chessboard Problem”, Wikipedia, https://en.wikipedia.org/wiki/Wheat_and_chessboard_problem#Second_half_of_the_chessboard.

About lewbornmann

Lewis J. Bornmann has his doctorate in Computer Science. He became a volunteer for the American Red Cross following his retirement from teaching Computer Science, Mathematics, and Information Systems, at Mesa State College in Grand Junction, CO. He previously was on the staff at the University of Wisconsin-Madison campus, Stanford University, and several other universities. Dr. Bornmann has provided emergency assistance in areas devastated by hurricanes, floods, and wildfires. He has responded to emergencies on local Disaster Action Teams (DAT), assisted with Services to Armed Forces (SAF), and taught Disaster Services classes and Health & Safety classes. He and his wife, Barb, are certified operators of the American Red Cross Emergency Communications Response Vehicle (ECRV), a self-contained unit capable of providing satellite-based communications and technology-related assistance at disaster sites. He served on the governing board of a large international professional organization (ACM), was chair of a committee overseeing several hundred worldwide volunteer chapters, helped organize large international conferences, served on numerous technical committees, and presented technical papers at numerous symposiums and conferences. He has numerous Who’s Who citations for his technical and professional contributions and many years of management experience with major corporations including General Electric, Boeing, and as an independent contractor. He was a principal contributor on numerous large technology-related development projects, including having written the Systems Concepts for NASA’s largest supercomputing system at the Ames Research Center in Silicon Valley. With over 40 years of experience in scientific and commercial computer systems management and development, he worked on a wide variety of computer-related systems from small single embedded microprocessor based applications to some of the largest distributed heterogeneous supercomputing systems ever planned.

3 Responses to Human-Level AI

  1. Dan Hardesty says:

    Lew, I think you nailed your analysis of AI! When the singularity occurs, will it be with a bang or a whimper? I hope it occurs slowly so that society (or whatever you call a hybrid mix of human and AI) can establish rules to govern by for the common good.

    • lewbornmann says:

      Hi Dan: thank you. I have a follow-up to this that I will post in a couple of weeks. It is much more pessimistic about our possible survival of human-level AI development. Unfortunately, I believe its development will occur relatively suddenly and without warning. What I find most disconcerting is the advanced research in government sponsored labs whose only priority is to provide national superiority. There isn’t any oversight or monitoring of their developmental efforts.

      • Dan Hardesty says:

        One of the most powerful things human or AI can do is game playing to test millions of options against their outcomes with intelligent opponents. If some fraction of our AI development can be focused on winning a game of preservation of our planet and mankind maybe the machines can come out with a winning plan.
