It seems to me…
Rather than being a skeptic, I will be honest and admit to the personal belief that humans are merely a necessary step in the evolution of higher intelligence. When I was a computer science graduate student back in the now-primitive 1960s, the apparent progress being made at the time led most of us to assume that a program capable of passing the Turing Test[i] was essentially within reach and probably would be achieved within ten to twenty years. When I organized a large international conference on artificial intelligence (AI) in San Francisco in 1984, the general assumption was that it could be achieved before the turn of the century. The task has proven much more difficult than anyone originally thought, and I personally am now much less optimistic about seeing true machine intelligence, capable of creative curiosity and originality, in the near future. Still, I believe there is at least a fifty percent probability that machine intelligence will be developed by the year 2050.
There always have been skeptics. In 1965, while teaching at the Massachusetts Institute of Technology, Hubert Dreyfus published “Alchemy and Artificial Intelligence”, an attack on the work of Allen Newell and Herbert Simon, two of the leading researchers in the field of artificial intelligence at the time. Dreyfus not only questioned the results they had obtained so far, but also criticized their basic presupposition (that intelligence consists of the manipulation of physical symbols according to formal rules) and argued that the AI research program was doomed to failure.
The term “artificial intelligence” is an oxymoron: intelligence is intelligence, regardless of how one prefers to define it. (Some writers prefer to define intelligence as something done only by humans, which conveniently eliminates the problem by definition.) There probably is some threshold at which an entity, such as an insect, can be judged to act intelligently rather than instinctively, but any attempt to declare exactly where that point lies probably is not meaningful. By that line of reasoning, it would be relatively simple to extend the definition of intelligence so that any device programmed to respond in a certain way to some input or stimulus is acting intelligently, regardless of how (in)significant the action is. Considering the rapid advances in computing, it is relatively easy to create a system displaying some limited degree of intelligence.
I originally intended to write an assessment of current developments in the field but, after writing over a half dozen pages and still not having started on any technical considerations, realized that many excellent reviews of the current state of artificial intelligence are already available, at least as good as any I could write. While not specifically endorsing any of these, one that seems fairly complete can be found at http://www.mind.ilstu.edu/curriculum/extraordinary_future/PhillipsDetailedTOC.php?modGUI=247&compGUI=1944&itemGUI=3395 – specifically Chapter 2: Computers and Intelligence. It also does not seem necessary to restate either the arguments in support of the field or the theoretical minimum requirements in memory size or processor speed on which such development might depend, as I had intended.
The definition of artificial intelligence has changed over the years, and many programs now available demonstrate capabilities we were trying to develop a half century ago. While only minor improvements in language understanding have been made, speech recognition is readily available (though handling multiple simultaneous speakers still needs improvement). Tic-tac-toe and checkers were the best games using the AI constructs then available, but now a computer can defeat the world’s best chess players (though not yet at the game of Go). Computer-generated graphics are routinely used in movies, advertising, and other videos (though the human face still requires additional improvement). With each development, the definition of what constitutes artificial intelligence shifts to a still higher level of difficulty.
It is somewhat surprising that AI constructs have not been utilized, at least not so far, in the human-computer interface. The touch interface available on most tablets is the first significant improvement since the graphical user interface originally developed at Xerox PARC on the Alto workstation in 1973. Given the speed of development and obsolescence in technology, it sometimes is difficult to believe that even the ubiquitous computer mouse, created by Douglas Engelbart back in 1964 while he was working at the Stanford Research Institute (SRI), remains one of the primary user-computer interfaces. AI constructs, however, still are not being exploited on even the latest systems now appearing on the market.
There would be obvious and significant advantages to such system extensions. None of us would consider directly conversing with another person by typing – speaking is the natural preferred method of human communication. While voice input is available on most systems, it remains awkward and unnatural to use. Why isn’t a computer (or, for that matter, almost any device) able to recognize a pattern of usage and automatically perform routine tasks? For example, after turning on my computer every morning, I always check the weather and read any new e-mail messages; when I have made the same change to several items in a file, why can’t the system recognize that pattern and at least offer to make the remaining changes for me? Yes, I could program the system to do it, but why doesn’t it have sufficient intelligence to do it for me?
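The repeated-edit wish above is simple enough to sketch as a heuristic. The Python fragment below is purely illustrative (every name in it is a hypothetical assumption, not any real system’s API): it watches the user’s last few manual edits and, once the same find-and-replace has been performed several times in a row, offers to apply it to the items the user has not yet touched.

```python
# Hypothetical sketch of a system that notices a repeated edit and
# offers to finish the job. Names are illustrative assumptions only.

def suggest_completion(edits, remaining_items, threshold=3):
    """Each entry in `edits` records one manual edit as a (find, replace)
    pair. Once the user has made the same substitution `threshold` times
    in a row, return that edit applied to the untouched items so the
    system can *offer* the result; otherwise return None and stay quiet."""
    if len(edits) < threshold or len(set(edits[-threshold:])) != 1:
        return None
    find, replace = edits[-1]
    return [item.replace(find, replace)
            for item in remaining_items if find in item]

# After the user has renamed three "draft_" files by hand...
edits = [("draft_", "final_")] * 3
print(suggest_completion(edits, ["draft_report", "draft_budget", "notes"]))
# -> ['final_report', 'final_budget']
```

A real assistant would of course need to infer the substitution from raw before/after text rather than receive it explicitly, which is where the genuine intelligence the essay asks for would come in.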
Perhaps partly as a result of machine intelligence as a theme in science fiction, there has been much speculation as to whether the development of AI should be considered potentially beneficial to humans or a possible threat. While acknowledging the negative possibilities, I believe the risk to be relatively low:
Beneficial ≥ 60 percent
No apparent effect ~ 30 percent
Negative ≤ 10 percent
AI potentially could provide countless benefits in practically every area of human activity. Enhancements to scientific research and development would make possible breakthroughs of which we now can only dream.
It also is relatively easy to argue that the development of such intelligence constitutes humanity’s best hope for long-term survival as a species. Though it is not readily apparent, we have reached a point of technological development at which we have become an endangered species. We continue to develop increasingly powerful weapons. Nuclear proliferation increases the threat of annihilation. Global warming could result in a runaway greenhouse effect. Resource depletion threatens our sustained quality of life. Overpopulation eventually will exceed Malthusian limits. Super-resistant viruses eventually will cause a pandemic. Humans have demonstrated only a highly questionable ability to prevent any of these possibilities. If we cannot, maybe an entity possessing higher intelligence can.
Once developed, AI probably would continue to improve at an ever-increasing exponential rate, quickly exceeding our ability to understand it. A relationship of benign neglect, similar to our treatment of most of the other species with which we share this planet (though we most certainly would not be a source of food), in which we are “managed” for our own good, might be the best for which we can hope.
That’s what I think, what about you?
[i]The Turing test is a proposal for a test of a machine’s capability to perform human-like conversation. Described by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”, it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, the machine is said to pass the test. It is assumed that both the human and the machine try to appear human. To keep the test setting simple and universal (to test the linguistic capability of the machine rather than its ability to render words into audio), the conversation usually is limited to a text-only channel such as the teletype Turing suggested or, more recently, IRC or instant messaging.