AI’s Potential Negatives

It seems to me….

"Forget artificial intelligence – in the brave new world of big data, it's artificial idiocy we should be looking out for." ~ Tom Chatfield.

As anyone who has followed my comments probably realizes, I am an admitted advocate for technological advancement in general and artificial intelligence (AI) in particular. It is relatively easy to argue that development of such intelligence constitutes humanity's best hope for long-term survival as a species, something I have discussed on several occasions[i], and I honestly admit to the personal belief that humans are merely nature's necessary intermediate step in the evolution of higher intelligence.

I am aware this view is not universally shared, even among experts in the field, and I typically have dismissed their concerns as xenophobic. When those concerns are expressed by people I respect and know to be more intelligent than I am, however, I consider it important to provide an appropriate balance to my perspective and address them, even while disagreeing in degree. Stephen Hawking stated he believes "… development of full artificial intelligence could spell the end of the human race". Elon Musk believes AI poses an "existential threat" to humanity.

Prior to speculating on any possible negative effects of such development, clarity necessitates stating that intelligence of any kind, whether carbon-based as in animals or in any other form, is never "artificial". Consistent with common usage, however, the terms artificial intelligence (AI) and intelligent entity will be used interchangeably.

Development of artificial intelligence is inevitable. While most AI-related research in the U.S. is conducted at universities, development of such an entity is believed to provide so overwhelming a national strategic advantage that it is reasonable to assume all advanced nations are currently funding similar research projects.

The hypothesis that accelerating progress in technologies will result in artificial intelligence exceeding human intellectual capacity and control is called a technological singularity. Since the capabilities of such an intelligence are impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable; everything that comes after it is therefore totally unknowable, and any prediction is pure speculation. One of the more common arguments against creating artificial intelligence is the potential negative consequences of the singularity.
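One way to see why nothing past the singularity can be predicted is a simple toy growth model. The sketch below is purely my own illustration, with invented constants; the actual dynamics of a self-improving intelligence are unknown. If capability grows at a rate proportional to its own square, a crude stand-in for recursive self-improvement, it diverges in finite time, and any extrapolation beyond that point is meaningless.

    # Toy model (illustrative only; constants are invented): capability C
    # improving at a rate proportional to C^2 diverges in finite time.
    C0 = 1.0                      # initial capability (arbitrary units)
    k = 0.1                       # improvement constant (arbitrary)
    t_singularity = 1 / (k * C0)  # dC/dt = k*C^2 solves to C0/(1 - k*C0*t),
                                  # which blows up at t = 1/(k*C0) = 10 here

    for t in [0, 5, 9, 9.9, 9.99]:
        C = C0 / (1 - k * C0 * t)  # closed-form solution, valid for t < 10
        print(f"t = {t:5}: capability = {C:10.1f}")

    print(f"Capability diverges as t approaches {t_singularity}")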

Human history has consistently demonstrated that in any contact between two dissimilar cultures, the less advanced culture always is at a disadvantage and experiences resulting decline. Any existential threat from intelligent entities could therefore be roughly similar to that posed by extraterrestrial contact.

The average person incorrectly associates intelligence with both awareness and consciousness[ii]. While a functional demonstration of AI is anticipated within the next several years, development of an intelligent entity capable of creative and original thought is unlikely in the near future, as it is not currently known how to relate subjective experiences to the objective universe. The initial capability of functioning entities will therefore be quite limited.

An intelligent entity initially would most likely exhibit savant syndrome and lack the human equivalent of a moral compass: able to solve problems and answer difficult questions, but without understanding their implications or potential effects. It is unclear whether morality and compassion develop naturally as a result of intelligence or are instead a product of evolutionary survival. It is doubtful anything similar to Asimov's three laws of robotics[iii] ever would be incorporated into any such entity.

Feasible scenarios for malicious or hostile behavior by an intelligent entity are likely only if that behavior is deliberately programmed (e.g., by a military, extremist, deranged individual, or similarly anti-social source), if humanity's behavior is such that the entity feels it necessary to protect itself, or if one of the entity's goals is sufficiently incompatible with human interests that hostile action is the only way to achieve it.

Given the significant potential for strategic political advantage, governments, especially the military, are likely to attempt to control AI creation, lending some credibility to a Terminator[iv]-like scenario. While projects in each country differ in funding and control, most U.S. research is conducted at major universities where a high priority is placed on publication of research results. If successful AI development is first demonstrated at a U.S. university, the published results most likely would be sufficiently detailed to permit duplication and verification by other researchers in the field, possibly mitigating any negative effects of attempted government control.

A number of years ago, several university research projects attempted to create databases containing all then-known facts. While probably not feasible at the time due to technical limitations, the possibility of such development is becoming increasingly likely. While I am not a conspiracy theorist, it is somewhat puzzling that references to any of those projects no longer seem to be available.

Unless competition for limited resources within the same ecological niche brings different species into conflict, they normally are able to coexist peacefully. If a sufficiently intelligent entity is capable of self-improvement, it could be assumed the rate of further development would quickly accelerate beyond the singularity to the point where there would be relatively little contact between the species, or reason for conflict. If an entity developed self-awareness and humans felt sufficiently threatened that they attempted to terminate it, the entity should be expected to defend itself.

An immature intelligent entity, like an immature human child, probably would assume it was entitled to highest priority for limited resources and might not fully appreciate the possible economic, cultural, or environmental impact. The physical constraints on advanced system development associated with Moore's Law would most likely be overcome, however, increasing the efficiency of resource utilization to the point where such conflict probably would not occur. Human physical or conceptual constraints, such as limits on visual or auditory frequencies or possibly even three-dimensional perception, would not be shared.
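For a sense of the scale Moore's Law implies, here is a back-of-the-envelope sketch of its doubling arithmetic; the starting figure is a rough order-of-magnitude estimate of my own for a 2015-era high-end chip, not a measured value.

    # Back-of-the-envelope Moore's Law sketch: transistor counts roughly
    # doubling every two years. Starting figure is a rough estimate.
    base_year = 2015
    base_transistors = 4e9    # ~4 billion transistors, order of magnitude
    doubling_period = 2.0     # years per doubling, the classic formulation

    for year in range(2015, 2036, 5):
        doublings = (year - base_year) / doubling_period
        count = base_transistors * 2 ** doublings
        print(f"{year}: ~{count:.1e} transistors")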

The introduction of any technological advance brings with it the potential for both benefits and threats. This always has been true, and AI is no different. The technology itself is neutral; how anyone perceives its potential is a reflection of their personal psychological orientation rather than of the technology itself.

While the intelligence singularity is most likely to first be achieved within some university research department, it initially will provide only a slight increase in capability as additional intelligence gradually becomes integrated into almost every device currently available, from so-called "smart" phones, notepads, personal computers, and automobiles to even common household devices[v].

Contrary to pessimistic predictions about the potential of AI, it is better to consider the benefits already accruing from its application in medical diagnostics and treatment, education, energy and the environment, and assistance to the disabled. It is not possible to predict occurrences beyond the technological singularity, but other than the possible negative impact associated with government control, the potential benefits of AI far outweigh any unlikely negative results.

That’s what I think, what about you?

[i] Bornmann, Lewis J., PhD. Intelligence’s Future, https://lewbornmann.wordpress.com/2011/05/12/intelligences-future/, 12 May 2011.

[ii] Bornmann, Lewis J., PhD. Intelligence Enhancement, https://lewbornmann.wordpress.com/2015/01/05/intelligence-enhancement/, 5 January 2015.

[iii] Asimov, Isaac. Three Laws of Robotics, Wikipedia, http://en.wikipedia.org/wiki/Three_Laws_of_Robotics.

[iv] The Terminator, Wikipedia, http://en.wikipedia.org/wiki/The_Terminator.

[v] Bornmann, Lewis J., PhD. Internet of Things, https://lewbornmann.wordpress.com/2013/12/10/internet-of-things/, 10 December 2013.

