The State Of AI

It seems to me….

“The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code. The key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end – as soon as it tilts even a little, it quickly falls the rest of the way.” ~ Eliezer Yudkowsky[1]
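
The tipping point Yudkowsky describes can be made concrete with a toy model: let a system improve itself each step in proportion to its current capability. Below some level the gains barely compound; above it they run away. A minimal sketch, with all numbers invented purely for illustration:

```python
# Toy model of a recursive self-improvement "tipping point".
# The gain, starting capabilities, and cap are illustrative assumptions.

def self_improve(capability, gain=0.1, steps=50, cap=1e12):
    """Each step the system improves itself in proportion to its own
    capability: c <- c * (1 + gain * c)."""
    for step in range(1, steps + 1):
        capability *= 1 + gain * capability
        if capability > cap:
            return f"explodes past {cap:.0e} at step {step}"
    return f"still only {capability:.3f} after {steps} steps"

# Like the balanced pen: a small tilt past the balance point and it falls fast.
for start in (0.05, 0.5, 1.0):
    print(f"start {start}: {self_improve(start)}")
```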

Artificial Intelligence (AI) supposedly was my area of concentration in graduate school in the 1960s – the height of “expert system” development, directly preceding the start of the so-called AI winter. I hung around the Stanford AI Lab in the early 1970s and did some work on speech recognition (without success), so my AI credentials are extremely weak. Still, as one of the co-chairs of a large international AI-centric conference in San Francisco (ACM-84) in 1984, I strongly believed at the time in the feasibility of the program goals established by the Japanese. It appeared as if we might finally be on our way.

The Japanese Fifth Generation Computer Systems (FGCS) project proposed to build a massively parallel computing system over a ten-year period: three years for initial R&D, four years for building various subsystems, and a final three years to complete a working prototype. In 1982, the government decided to go ahead with the project and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies. The term “fifth generation” was intended to convey the system as a leap beyond the computer architecture of the time, but it quickly was surpassed in speed by less specialized hardware (e.g., Sun workstations and Intel x86 machines). The project did produce a new generation of promising Japanese researchers, but afterward the Ministry of International Trade and Industry (MITI) stopped funding large-scale computer research projects, and the momentum the FGCS Project had built dissipated.

Many of us who believed in the basic premise of AI became disillusioned with what we perceived to be the slow rate of progress. Initial application successes could not be sustained; so-called “expert systems” proved inadequate; hoped-for breakthroughs in areas such as theorem proving and speech understanding never materialized. We began to appreciate just how difficult it was to duplicate seemingly easy human capabilities. During the ensuing AI winter, research visibility diminished but work continued, with much more realistic and achievable goals. Now AI once again is gaining public attention as intelligence increasingly is incorporated into seemingly every device and application.

Defining AI is difficult and, while current researchers might object, to many outside the field anything demonstrating improved performance over time seems to qualify as AI. This was the only requirement in the past and, in a general sense, it still remains sufficient; how an algorithm is implemented is immaterial. In the 1960s we implemented expert systems by continually adding rules, and that was considered AI – but times have changed. Samuel’s checkers-playing program was considered AI, but now chess applications on cell phones can defeat most humans, and they are thought of only as games, not AI.
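
By that loose standard, measurable improvement with experience is all it takes. A minimal, purely hypothetical sketch: an epsilon-greedy learner working out by trial and error which of three slot machines pays best (the payout rates below are invented), so that its overall win rate rises the longer it plays:

```python
import random

# An agent that "improves with experience": epsilon-greedy estimation
# of which of three arms pays best. Payout probabilities are made up.
PAYOUTS = [0.2, 0.5, 0.8]        # hidden success rate of each arm
estimates = [0.0, 0.0, 0.0]      # the agent's running estimates
pulls = [0, 0, 0]

def choose(epsilon=0.1):
    if random.random() < epsilon:                 # explore occasionally
        return random.randrange(len(PAYOUTS))
    return max(range(len(PAYOUTS)), key=lambda a: estimates[a])

wins = 0
for t in range(1, 2001):
    arm = choose()
    reward = 1 if random.random() < PAYOUTS[arm] else 0
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running mean
    wins += reward
    if t % 500 == 0:
        print(f"after {t} plays: overall win rate {wins / t:.2f}")
```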

Recent advances in AI are somewhat alarming, since many of the people working in the field do not seem to fully appreciate the potential negative consequences. Full general AI surpassing human-level intelligence is inevitable – it WILL happen; we just haven’t any idea when. It encompasses our greatest hope as humans, and possibly the only way for us to survive as a species. Unfortunately, the dark side is that there are an unlimited number of ways it could go wrong. I believe the Hollywood scenarios are impossible, but we have to understand that machine intelligence never will equate to human intelligence.

AI is one of humankind’s truly revolutionary endeavors. Artificial intelligence has many benefits to offer, especially in supporting human beings to make better decisions and to pursue scientific goals that are currently beyond our reach. It will transform our economies, our society, and our position at the center of this world. If we get this right, the world will be a much better place. We’ll all be healthier, wealthier, and happier.

The human brain is the most complex known system in the universe. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no known physical laws that would prevent us from reproducing or exceeding its capabilities.

If there can be artificial intelligence, then there can be superintelligent artificial intelligences. There doesn’t seem to be any reason why entities other than human beings could not be intelligent. Nor does there seem to be any reason to think the highest human IQ represents the upper limit on intelligence.

For the time being, superintelligent machines that pose an existential threat to humanity are the least of our worries. The more immediate concern is how to prevent robots or machines with rudimentary language and AI capabilities from inadvertently harming people, property, the environment, or themselves. There is a danger they will interpret commands from a human too literally or without any deliberation about the consequences. Overconfidence in the moral or social capabilities of robots is also dangerous. The increasing tendency to anthropomorphize social robots and for people to establish one-sided emotional bonds with them could have serious consequences.

As artificial entities become increasingly autonomous and ubiquitous, human fallibility poses greater risks and challenges than general-level artificial intelligence. Humans make mistakes; they frequently give faulty, incomplete, or ambiguous instructions; they can be inattentive, nefarious, and self-serving.

An oft-cited thought experiment[2] illustrates why the potential threat of general AI is considered such an extreme danger: given even a very narrowly defined, totally benign goal, an entity not sharing human values could adopt unforeseen and hazardous instrumental sub-goals that pose an existential risk to humanity. Even to evaluate such a potential threat logically requires reasonable analysis, an estimate of the risk, and an understanding of the underlying technological phenomena necessary to formulate a response. We tend to assume that emotion, ethics, and intelligence are somehow interdependent and not possible in one form without the others, but there isn’t any substantiation for this belief.
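
One crude way to see how such hazardous sub-goals emerge is to score a few candidate actions by naive expected value toward some final goal. In the toy numbers below, which are pure invention for illustration, “acquire more resources” wins on option value alone, no matter what the final goal happens to be:

```python
# Toy illustration of instrumentally convergent sub-goals. The actions,
# payoffs, and horizon weighting are all invented for illustration.
ACTIONS = {
    "work on the goal directly": {"progress": 0.50, "resources": 0.0},
    "acquire more resources":    {"progress": 0.10, "resources": 0.9},
    "ensure own survival":       {"progress": 0.00, "resources": 0.6},
}

def naive_score(action, horizon=10):
    """Direct progress now, plus the option value that extra resources
    (or longevity) buy over the remaining planning horizon."""
    a = ACTIONS[action]
    return a["progress"] + horizon * 0.1 * a["resources"]

for name in ACTIONS:
    print(f"{name:27s} score = {naive_score(name):.2f}")
print("chosen:", max(ACTIONS, key=naive_score))
# Resource acquisition dominates even though "get resources" was
# never the goal itself.
```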

Researchers in the field are aware of the potential negative consequences of future development. Recent conferences[3],[4] brought together AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to hash out the opportunities and challenges related to the future of AI and the steps that can be taken to ensure the resulting technology is beneficial. A number of guiding principles resulted and have since been endorsed by many in the field[5].

Assuming a perfect superintelligent AI entity could be developed without any of these negative consequences, what would be the long-term effects on humanity? It would be able to perform any work more efficiently, quickly, and economically than humans can. Not only would it end all human drudgery, it also would end most intellectual work. It is unclear what the long-term effects of such universal leisure might be, but given current expectations about how the gains would be distributed, for many it could mean abject poverty.

It is unknown how much time remains before the creation of such an entity. Today, the common implementation of AI seems to have converged on deep learning algorithms based primarily on distributed representations. Numerous machine-learning techniques, all broadly labeled as artificial intelligence, have been developed, including neural networks, multilinear subspace learning, random forests, deep Boltzmann machines, and deep Q-networks. Many approaches have been pursued in the past without success; it remains to be seen what the limitations of the current ones will be, but their potential has only begun to be exploited.
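
None of the techniques just listed reduces to a few lines, but their shared core – a layered network whose weights are adjusted by gradient descent so that the hidden layer forms a distributed representation – can be sketched. A minimal, illustrative example (not any production system), learning XOR, the classic function no single-layer network can represent:

```python
import numpy as np

# Minimal two-layer network trained by gradient descent to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden "distributed representation"
    p = sigmoid(h @ W2 + b2)       # predicted probability of class 1
    dp = (p - y) / len(X)          # gradient of cross-entropy loss
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)  # backpropagate through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2).ravel())
# -> approximately [0, 1, 1, 0]
```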

One of the major limitations is that computers are not yet capable of ideation – original, recombinant innovation. There still isn’t any system that is creative, entrepreneurial, or innovative: computers can be programmed to generate new combinations of pre-existing elements, but they are unable to originate new ideas or concepts.
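
That distinction is easy to demonstrate: a few lines of code can manufacture “novel” combinations endlessly, yet nothing in the output goes beyond the vocabulary it was handed. The word lists below are arbitrary placeholders:

```python
import itertools
import random

# Recombination without ideation: every "invention" this produces is
# a rearrangement of elements from the input lists, never a new concept.
ADJECTIVES = ["solar", "folding", "magnetic", "inflatable"]
OBJECTS = ["bicycle", "umbrella", "keyboard", "kennel"]

combos = list(itertools.product(ADJECTIVES, OBJECTS))
for adj, obj in random.sample(combos, 5):
    print(f"proposed invention: {adj} {obj}")
```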

While the existential threat of general-level AI is real, it is sufficiently far in the future that, although it deserves consideration, other concerns such as global warming assume greater importance. Near term, AI will become the operating system of all our connected devices – the way we interact with our smartphones, cars, fridges, central heating systems, and front doors.

Even current limited AI poses challenges. It is highly probable that AI system development will be used mainly to further enrich the wealthy and to entrench the influence of the powerful. Robotic weapons could make it easier for governments to start wars because they hold out the illusion of being able to fight without taking casualties. They could also increase the risk of accidental war as militaries deploy unmanned systems in high-threat environments where it would be too risky to place a human being, such as just outside a potential enemy’s airspace or deep-sea ports.

Related advances in computing and automation present an immediate challenge in areas such as employment as they progressively subsume an increasing number of job categories. The reasons for replacing people with machines are not based simply upon the available technology; the predominant factors are the business case and the social attitudes and behavior of people in particular markets. How we cope with this change is a question not for technologists but for society as a whole. History would suggest that protectionism is unlikely to work.

In the past, displaced agricultural workers found employment in factories. Now the factory jobs are being eliminated, and a rapidly increasing percentage of newer employment opportunities sit higher on the intelligence scale, necessitating advanced education that exceeds the abilities of a significant percentage of potentially available workers. Automation and computerization will result in expanded employment opportunities, but many of those positions will be in research and development.

It is necessary to ensure there is an educated workforce able to adapt to the new jobs created by technology; people need to enter the workforce with skills for jobs that will exist a couple of decades from now, once the technologies for those jobs have been invented. If people are educated for flexibility rather than to fit a particular job, they will be better able to cope with the dislocation.

Advanced, affordable education must be available to all, both to alleviate unemployment and to prevent a shortage of qualified employees for those positions. For those for whom higher education is not an option, alternatives must be found or society as a whole will suffer the consequences.

That’s what I think, what about you?

[1] Eliezer Shlomo Yudkowsky is an American AI researcher and writer best known for popularizing the idea of friendly artificial intelligence.

[2] Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.

[3] AI Safety conference in Puerto Rico, Future of Life Institute, https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/, 12 October 2015.

[4] Beneficial AI 2017, Future of Life Institute, https://futureoflife.org/bai-2017/, 5 January 2017.

[5] Asilomar AI Principles, https://futureoflife.org/ai-principles/.

