It seems to me….
“We are on the edge of change comparable to the rise of human life on Earth.” ~ Vernor Vinge.
I have frequently stated my belief that the development of artificial intelligence (AI) is inevitable; that humans are merely a necessary step in Nature’s evolutionary development of higher intelligence. All AI to date has been “narrow” AI, able to specialize in only one area. What I am actually referring to here is the attempt to develop a general form of “super” intelligence: one smarter and more capable than the human brain in every area.
This belief has other significant implications. For instance, if those who believe AI is inevitable on Earth are correct, then a significant percentage of alien civilizations that reach human-level intelligence presumably also would have created AI. If we are ever visited by aliens, those aliens therefore likely will be artificial, not biological. Why we have so far failed to detect the extraterrestrial civilizations predicted by the Drake equation is the central question posed by Fermi’s paradox. Perhaps AI is the disconcerting answer.
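For context, the Drake equation estimates the number N of detectable civilizations in our galaxy as a product of factors: the rate of star formation, the fractions of stars with planets, of planets that could and do develop life, then intelligence, then detectable technology, and finally the lifetime of such a civilization:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Even modest guesses for each factor suggest N should be well above zero, which is precisely why our silence-filled sky is paradoxical.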
But how did we, or some other similarly advanced society, reach this point? At what point does AI development become at least extremely likely, if not inevitable? Nothing indicates that higher intelligence is a viable characteristic for long-term species survival. That we, as a species, have managed to survive to this point is primarily attributable to good luck, and along the way we have developed numerous technologies capable of initiating a future catastrophic event leading to our extinction. While the development of AI, nanotechnology, or synthetic biology is potentially among the greatest and most beneficial achievements in our history, each most likely also constitutes one of the greatest threats we will ever face. AI is possibly one of the reasons why we have so far failed to detect extraterrestrial life.
The scientific method has existed throughout history but required time to reach a critical mass. Much had been learned in the basic sciences and engineering (mathematics, physics, chemistry…) before innovation finally exploded. It is relatively easy to graph population or technological development as a dependent variable against time and declare the point of steepest slope (the curve’s greatest first derivative) to be the decisive moment, but this ignores the effect of omitted variables and invites false conclusions. The pace of innovation has continued to increase exponentially, and we have no idea where it might lead or what its ultimate effect might be.
But this does not address when, or if, the development of AI became inevitable. While it is impossible to answer when or why, that path most likely began quite some time ago, building upon itself until reaching inevitability. It is this aspect that suggests it also probably would have occurred in all other advanced extraterrestrial civilizations. We are not likely to have been the first down this path; we should tread cautiously, aware of the possibly dire outcomes awaiting at what hopefully will not be the final realization of our quest.
A reasonable case can be made that the original discovery of electricity opened the road upon which we now find ourselves travelling at an ever-increasing pace.
No one knows who first discovered electricity; very early humans certainly were aware of lightning, but that awareness did not suggest any useful application would ever be made of it. Benjamin Franklin is generally credited as the first to seek proof that lightning was caused by electricity, describing an experiment in 1750 in which an electrical conductor could be used to draw a charge from a thundercloud, though it is not clear exactly when, how, or even whether Franklin actually performed his lightning experiment. A French experimenter named Thomas-François Dalibard, who had read Franklin’s writings on the subject, successfully obtained an electrical discharge from a thundercloud in May 1752.
While there still wasn’t any practical application for this discovery, it apparently was sufficient to encourage other experimenters to begin investigating what at the time was nothing more than a new curiosity. The Italian scientists Luigi Galvani and Alessandro Volta both played a role in the development of the first battery in the late 18th and early 19th centuries.
Experimental work on the connection between electricity and magnetism began around 1820 with Hans Christian Ørsted and continued with André-Marie Ampère, Joseph Henry, and Michael Faraday. Their research culminated in the theory of electromagnetism developed by James Clerk Maxwell, whose A Treatise on Electricity and Magnetism (1873) predicted the existence of electromagnetic waves.
Until this point, while the foundation for future discoveries was being formed, electromagnetism remained little more than a curiosity and might not have progressed any further. It would take something of practical significance to definitively establish the start of the path leading to where we are today. That development was radio.
Many people were involved in the invention of radio in its current form, but the first intentional transmission of a signal by means of electromagnetic waves probably was accomplished by David Edward Hughes around 1880, although it was attributed to induction at the time. The first systematic and unequivocal transmission of electromagnetic waves was performed by Heinrich Rudolf Hertz, who described it in papers published in 1887 and 1890 (though he considered his results to be of little practical value).
After Hertz’s work, many people became involved in further development of electronic components and methods to improve the transmission and detection of electromagnetic waves. Around the turn of the 20th century Guglielmo Marconi developed the first apparatus for long distance radio communication (though there is evidence he might have “borrowed” from the work of Nikola Tesla).
Based on these developments, especially after Marconi’s first transatlantic radio transmission in 1901, the field branched in numerous directions, with additional discoveries occurring at an exponential pace, eventually leading to where we are today. Reginald A. Fessenden became the first to send audio (wireless telephony) by means of electromagnetic waves, but it was not until around 1910 that these systems came to be referred to by the common name “radio”.
When most people today think of computers, they think only of digital devices, but analog calculators were developed many years earlier. Charles Babbage designed (but never completed) the Difference Engine, a mechanical calculator, in the 1830s, and later the Analytical Engine, generally considered the first design for a fully programmable computer. This, however, along with the fact that a “computer” was a human being (someone who did computing) until well into the 20th century, is a mere side-note in the development of AI.
The actual date when the first digital computer was developed is subject to some disagreement, as several systems were built at approximately the same time. Iowa State College professor John V. Atanasoff and his graduate student, Clifford Berry, constructed a working model, started in 1937 and completed in 1941. Two University of Pennsylvania professors, John Mauchly and J. Presper Eckert, constructed the ENIAC (Electronic Numerical Integrator and Computer) between 1943 and 1945. The very first functioning digital computer might also have been Konrad Zuse’s Z3, completed (in his parents’ living room) in Berlin in 1941 but unfortunately destroyed in World War II.
Whichever rightly deserves the title of first digital computer is immaterial, since they quickly were followed by numerous other devices, each faster and more capable than any of its predecessors. The race was on.
Immediately prior to the onset of World War II, a team of researchers at Bell Laboratories in Murray Hill, NJ, including William Shockley, John Bardeen, and Walter Brattain, was attempting to develop a solid-state switch for use in telephone switching circuits. The work was interrupted by World War II, during which team members worked on radar and other military research, learning much about semiconductor crystals before returning to Bell Labs after the war. They announced the transistor in 1947, shrinking electronic switches and eliminating the need for vacuum enclosures. A number of inventors, including the German Werner Jacobi, the Briton Geoffrey Dummer, and the Americans Harwick Johnson, Jack Kilby, and Robert Noyce, were instrumental in inventing the integrated circuit (IC), first demonstrated in 1958. The invention of the transistor and integrated circuit has to be considered one of the most significant developments in history.
The formal study of AI can possibly be traced to a paper titled Computing Machinery and Intelligence, written in 1950 by the British mathematician Alan Turing, in which he posed the question “Can machines think?”; but the actual field of AI research began at a conference at Dartmouth College in the summer of 1956, attended by John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel, and Herbert Simon, who became the leaders of AI research for several following decades.
Their initial success was nothing less than astonishing: computers were winning at checkers, solving word problems in algebra, proving logical theorems, speaking English…. Everyone was extremely optimistic and research laboratories were established around the world. Herbert Simon predicted “machines will be capable, within twenty years, of doing any work a man can do” and Marvin Minsky agreed, writing that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”.
That early optimism gradually faded as it became generally recognized that AI was much more complex than initially realized: by 1974, in what is now called the “AI winter”, much of the initial funding had been terminated. Interest briefly revived in the 1980s with the commercial success of so-called “expert systems”: computer systems that emulate the decision-making ability of a human expert, attempting to solve complex problems by reasoning over knowledge represented primarily as if–then rules rather than through conventional procedural code. Oversold, AI once again fell into disrepute.
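The if–then reasoning behind those expert systems can be illustrated with a minimal sketch of forward chaining: rules fire whenever all of their conditions are among the known facts, adding new facts until nothing more can be derived. The rules and facts below are invented for illustration, not taken from any historical system.

```python
# Minimal forward-chaining rule engine sketch (illustrative only).
# Each rule is a (conditions, conclusion) pair; the engine repeatedly
# fires any rule whose conditions are all established facts.

def forward_chain(rules, facts):
    """Apply if-then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)  # the rule "fires"
                changed = True
    return facts

# Toy knowledge base in the spirit of a medical expert system.
rules = [
    ({"fever", "cough"}, "flu-suspected"),
    ({"flu-suspected", "short-of-breath"}, "refer-to-doctor"),
]

print(forward_chain(rules, {"fever", "cough", "short-of-breath"}))
```

Note that conclusions chain: “flu-suspected” derived by the first rule enables the second. The brittleness of such systems, where every scrap of expertise had to be hand-encoded as a rule, is part of why they were oversold.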
While it seemed AI had disappeared for over a quarter century, much research was quietly being conducted, creating a much broader and more realistic foundation for future growth. Now the challenges are better understood, and substantial progress occurs on what seems to be a daily basis. But where actually are we? The development of what might be considered general artificial superintelligence is still somewhere in the future, but sufficient progress has been made to provide confidence that it definitely will occur.
Examples of narrow AI, systems that specialize in one area, seem to be everywhere. Cell phones provide general assistance through AI entities such as Siri and Cortana. Automobiles incorporate numerous AI systems, including automatic braking and collision avoidance, and fully self-driving autonomous vehicles are already operating on public roads and highways. Web searches using Google or Bing are powered by AI engines. AI systems are now able to defeat the world’s best experts at all board-type games. Product recommendations from Amazon are generated by AI. Home devices capable of numerous activities, from environmental control to security monitoring, are available at every hardware outlet. Many news articles, from sports results to stock market reports and weather forecasts, are now “written” by AI applications rather than humans. Corporate manufacturing, assembly, and supply chains rely on AI-controlled robotics and computerization. Such systems are everywhere and sufficiently common that they are no longer considered out of the ordinary.
It is difficult to predict how soon we can anticipate the development of general artificial superintelligence, as progress is never linear: a single discovery can instantly change the rate of advancement. Additionally, humans intuitively think about time linearly, expecting the future to resemble the present, rather than in terms of the exponential growth history has actually delivered. From this perspective, artificial superintelligence could arrive sooner than we are prepared for; the general consensus of those working in the field is around 2040, with even the pessimistic forecasts no later than 2070.
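The gap between linear intuition and exponential reality is easy to quantify with a toy projection (the numbers here are illustrative, not forecasts): extrapolate the first decade’s gain linearly versus letting a quantity double every decade.

```python
# Illustrative contrast between linear extrapolation and exponential growth.

def linear_projection(start, per_decade_gain, decades):
    """Naive linear intuition: repeat the first decade's absolute gain."""
    return start + per_decade_gain * decades

def exponential_projection(start, decades):
    """Doubling every decade, as exponential trends behave."""
    return start * 2 ** decades

start = 1.0
# After one decade both views agree (1 -> 2). After five decades the
# linear extrapolation predicts 6x, while doubling yields 32x.
print(linear_projection(start, 1.0, 5))   # 6.0
print(exponential_projection(start, 5))   # 32.0
```

The divergence only widens: after ten decades the linear view predicts 11x while doubling yields 1024x, which is why linear intuition consistently underestimates exponential change.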
It is impossible to detail more than a small fraction of the multitude of mostly small steps taken so far on this path, while fully accepting that a long way remains to go. Many extremely important developments necessarily had to be omitted for this treatise to remain even somewhat acceptable in length; it still contains too many names and too much history to be of general interest, but it would be quite remiss if more were omitted.
I remain confident we eventually will get there – and somewhat terrified of what that outcome will be when we do.
That’s what I think, what about you?
 Vernor Steffen Vinge is a retired San Diego State University professor of mathematics, computer scientist, and science fiction author.
 Urban, Tim. The AI Revolution: The Road to Superintelligence, Wait But Why, http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html, 22 January 2015.
 Who Discovered Electricity?, WiseGeek, http://www.wisegeek.org/who-discovered-electricity.htm.
 Artificial Intelligence, Wikipedia, https://en.wikipedia.org/wiki/Artificial_intelligence.
 Disclosure: I was a graduate student at the University of Wisconsin – Madison in the late 1960s, majoring in Computer Science with a concentration in Artificial Intelligence.
 Kurzweil, Ray. The Law of Accelerating Returns, KurzweilAINetwork, http://www.kurzweilai.net/the-law-of-accelerating-returns, 7 March 2001.