What Is AI?

It seems to me….

“There is huge demand for artificial intelligence technologies.” ~ Yuri Milner[1].

When artificial intelligence (AI) originated in the 1950s, it was described as any task performed by a program or a machine that, had the same activity been carried out by a human, we would say required intelligence to accomplish. This is obviously too broad a definition and has resulted in disagreements as to whether something is AI or not. Artificial intelligence describes machine capabilities that we expect would require human intelligence. Since this definition is subjective, whether a certain machine capability counts as artificial intelligence may change as our expectations of computers evolve. AI systems typically demonstrate at least some behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation and, to a lesser extent, social intelligence and creativity.

What really distinguishes human intelligence from machine learning algorithms is that the former can handle a wide variety of complex tasks. We have not yet achieved “strong” artificial intelligence (also known as artificial general intelligence), which could complete all the intellectual tasks of which a human being is capable. However, significant progress in AI in the last few years suggests that strong AI may be achievable in the foreseeable future.

Some people continue to doubt whether general AI can ever be developed – but the possibility should not be dismissed. Humans have always anthropocentrically considered themselves unique, standing at a pinnacle of development unshared with any other form of creation. That assumption not only deserves but needs to be challenged.

Consider…. The Earth formed about 4.5 billion years ago. The earliest known life forms, single-celled organisms, appeared some 700 million years later (3.8 to 3.5 billion years ago), and more complex eukaryotic cells did not evolve until around two billion years ago. Modern humans did not evolve until about 200,000 years ago and only began to exhibit evidence of behavioral modernity around 50,000 years ago. Written history began only about 8,000 years ago.

Regardless of whether homo sapiens is eventually supplanted by carbon, silicon, or some other form of life, the following observation is worth consideration.

“Most educated people are aware that we are the outcome of nearly 4 billion years of Darwinian selection, but many tend to think that humans are somehow the culmination. Our sun, however, is less than halfway through its life span. It will not be humans who watch the sun’s demise, 6 billion years from now. Any creatures that then exist will be as different from us as we are from bacteria or amoebae.” ~ Martin Rees[2].

Intelligence can develop from a specific organization of molecules – the human brain is an example. It is therefore hard to deny that ordinary matter can give rise to general AI: the ability to think flexibly across multiple domains. Atoms are atoms, and if we continue to build systems of atoms that display increasing levels of intelligent behavior, we eventually will build general intelligence into our machines. Rudimentary AI agents, such as Siri and Cortana, exist today and will continue to improve year after year until they exceed human intelligence – at which point they will begin to improve themselves at an ever-increasing rate.

For comparison, it has taken us around 200,000 years to reach where we are now as a species. In a period of six months, a general AI could accumulate half a million years’ worth of knowledge on top of everything we already know, meaning we would be unable to compete with it within days of its reaching human-level intelligence. We won’t even have time to react, because machines will have surpassed us literally within the blink of an eye.

AI can be split into two broad types: narrow AI and general AI.

Narrow AI is what is currently available in computers today: intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed to do so. This is typically accomplished using some form of machine learning.

General AI is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks – anything from cutting hair to building spreadsheets to reasoning about a wide variety of topics based on its accumulated experience.

Machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task such as understanding speech or captioning a photograph.
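The idea of learning a task from examples rather than explicit rules can be illustrated with a minimal sketch. The classifier, the fruit data, and the labels below are all made up for illustration; a 1-nearest-neighbor classifier simply labels a new item by finding the most similar example it was "fed":

```python
# Illustrative sketch of learning from data: a 1-nearest-neighbor classifier.
# All training data here is hypothetical (weight in grams, diameter in cm).

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    features, label = min(train, key=lambda ex: distance(ex[0], query))
    return label

# Toy training set: (features, label) pairs.
train = [((150, 7.0), "apple"), ((160, 7.5), "apple"),
         ((8, 2.0), "grape"), ((10, 2.2), "grape")]

print(nearest_neighbor(train, (145, 6.8)))  # → apple
print(nearest_neighbor(train, (9, 2.1)))    # → grape
```

No rule for "apple" was ever written; the behavior comes entirely from the examples, which is the essence of the approach, however simple the method.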

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers trained on massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.
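The "layers feeding into each other" structure can be sketched in a few lines of plain Python. The weights below are arbitrary numbers chosen only to show the data flow; in a real network they would be learned:

```python
# A forward pass through a small stack of fully connected layers.
# Weights and biases are arbitrary illustrative values, not learned ones.

import math

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    """One layer: a weighted sum of inputs per neuron, then an activation."""
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two hidden layers (3 and 2 neurons), then one sigmoid output neuron.
x = [0.5, -1.2]
h1 = layer(x, [[0.4, -0.6], [0.3, 0.8], [-0.5, 0.2]], [0.1, 0.0, 0.2], relu)
h2 = layer(h1, [[0.7, -0.3, 0.5], [0.2, 0.9, -0.4]], [0.0, 0.1], relu)
y = layer(h2, [[1.0, -1.0]], [0.0], lambda z: 1 / (1 + math.exp(-z)))
print(y)  # a single value between 0 and 1
```

A "deep" network is the same pattern repeated dozens or hundreds of times, with the layer outputs of one stage becoming the inputs of the next.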

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition.
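The two ideas can be contrasted with toy one-line mechanisms (the weights here are arbitrary): a convolution slides one small set of shared weights across its input, which suits images, while a recurrent step carries a hidden state forward through a sequence, which suits language and speech:

```python
# Toy illustration of the two mechanisms; weights are arbitrary examples.

def conv1d(signal, kernel):
    """Slide `kernel` across `signal`, reusing the same weights at each position."""
    k = len(kernel)
    return [sum(w * s for w, s in zip(kernel, signal[i:i + k]))
            for i in range(len(signal) - k + 1)]

def rnn_fold(sequence, w_in=0.5, w_state=0.9):
    """Fold a sequence into one hidden state, one element at a time."""
    h = 0.0
    for x in sequence:
        h = w_state * h + w_in * x  # the state carries earlier context forward
    return h

print(conv1d([1, 2, 3, 4], [1, -1]))  # detects constant slope: [-1, -1, -1]
print(rnn_fold([1, 0, 0]))            # an early input still echoes in the state
```

The convolution responds the same way wherever a pattern appears, while the recurrent state lets earlier inputs influence later outputs – the respective strengths noted above.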

Another area of AI research is evolutionary computation, which borrows from Darwin’s theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
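The selection-mutation-crossover cycle can be sketched with a deliberately simple problem (the target pattern, population size, and mutation rate below are arbitrary choices for illustration):

```python
# Minimal genetic-algorithm sketch: evolve bit strings toward a target
# pattern via selection, crossover, and random mutation. All parameters
# are illustrative.

import random
random.seed(0)  # fixed seed so the run is repeatable

TARGET = [1] * 12            # the "optimal solution" we hope to evolve
POP, GENS, MUT = 20, 60, 0.05

def fitness(ind):
    return sum(g == t for g, t in zip(ind, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind):
    return [1 - g if random.random() < MUT else g for g in ind]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children          # next generation

best = max(pop, key=fitness)
print(fitness(best), best)
```

Because the fittest individuals survive each generation unchanged, the best fitness can only improve, and random mutation keeps supplying the variation that selection acts on – the same logic Darwin described, run in silicon.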

Finally, there are expert systems, where computers are programmed with rules that allow them to make a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain.
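A sketch makes the rule-based approach concrete. The medical-sounding rules below are invented for illustration, not real diagnostic knowledge; the interesting part is the forward-chaining loop, which keeps firing rules until no new conclusion can be drawn:

```python
# Toy expert system: facts plus if-then rules, applied by forward chaining.
# The rules are hypothetical examples, not real domain knowledge.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
    ({"has_rash"}, "possible_allergy"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # a new conclusion becomes a new fact
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
```

Note that the second rule can only fire after the first has added its conclusion – chains of rules like this are how such systems mimic an expert's multi-step reasoning.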

Artificial intelligence requires an appropriate machine learning algorithm, data, and computing power. Many of today’s most promising AI technologies use neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs are varied until the output from the neural network is very close to what is desired, at which point the network will have “learned” how to carry out a particular task.
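The weight-adjustment process described above can be shown at its smallest possible scale: a single artificial neuron whose weights are nudged after each wrong answer until its outputs match the desired ones (here, learning logical AND – a classic textbook example, chosen only for brevity):

```python
# A single neuron learning logical AND by repeated weight adjustment
# (the perceptron learning rule). A minimal sketch of the training idea.

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# (inputs, desired output) for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                       # training epochs
    for inputs, target in data:
        error = target - predict(weights, bias, inputs)
        # nudge each weight in proportion to its input and the error
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([predict(weights, bias, i) for i, _ in data])  # → [0, 0, 0, 1]
```

Deep networks use a more elaborate update rule (backpropagation) across millions of weights, but the principle is the same: vary the weights until the outputs are close to what is desired.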

A neural network can be small and simple or large and powerful; in general, the larger the network, the more capable it is. Pushing neural networks – and therefore today’s artificial intelligence performance – to their limits requires powerful computer chips. Several IC manufacturers are developing chips specifically for deep learning and other artificial intelligence uses.

There are very good reasons for the rapid progress currently being made in AI development:

More computing power: AI requires a lot of computing power, and many recent advances have come because complex deep learning models can now be deployed. One technology that made this possible is the graphics processing unit (GPU), a specialized electronic circuit originally designed to rapidly manipulate memory to accelerate the creation of images for display – and whose highly parallel architecture turns out to suit the arithmetic at the heart of neural networks.

More data: For an AI agent to make smart decisions – such as recommending the next item while you shop online, or classifying an object in an image – it requires extensive training. AI agents are trained on large data sets, and the growth of big data has made such training far more effective.

Better algorithms: Most state-of-the-art algorithms are based on neural networks, and refinements to them are constantly producing better results.

Broad investment: Universities, governments, startups and tech giants (Google, Amazon, Facebook, Baidu, Microsoft…) are all investing heavily in AI.

Much remains to be learned before general AI can be developed; just how much remains unknown. Amara’s Law[3] states that people tend to overestimate a technology’s short-term effect and underestimate its long-term effect. Another factor is people’s inability to imagine a nonexistent technology’s limitations, and a third is their often-mistaken assumption that an AI’s performance of a task implies competence at it. The tendency to equate AI progress in learning a certain task with the same process in humans is misleading. In addition, it should not be assumed AI will progress steadily along an exponential performance path; progress is more likely to come in fits and starts. Nor should anyone believe media-promulgated visions of sudden AI scenarios: gradual deployments are far more likely than abrupt ones.

Expectations of AI’s overall benefits stem partly from human history, in which technological innovation has helped humanity more often than harmed it. Even if AI eventually surpasses us intellectually, should it lack consciousness there would still be an important sense in which humans are superior to machines: the smartest beings on the planet would not be conscious or sentient. Much depends on the issue of machine consciousness, but neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of its nature.

Perhaps we should not be in too great a hurry to develop a general AI capability. Arthur C. Clarke[4] observed that “Any sufficiently advanced technology is indistinguishable from magic”. Besides, creating human-level artificial intelligence could be comparable to creating a god – and it might not be a friendly one.

That’s what I think, what about you?

[1] Yuri Borisovich Bentsionovich Milner is a Russian entrepreneur, venture capitalist, and physicist.

[2] Martin John Rees, Baron Rees of Ludlow, OM, FRS, FREng, FMedSci, FRAS is a British cosmologist and astrophysicist.

[3] Roy Charles Amara was an American researcher, scientist, futurist, and president of the Institute for the Future best known for coining Amara’s Law on the effect of technology.

[4] Sir Arthur Charles Clarke, CBE, FRAS was a British science fiction writer, science writer and futurist, inventor, undersea explorer, and television series host.

About lewbornmann

Lewis J. Bornmann has his doctorate in Computer Science. He became a volunteer for the American Red Cross following his retirement from teaching Computer Science, Mathematics, and Information Systems at Mesa State College in Grand Junction, CO. He previously was on the staff at the University of Wisconsin-Madison campus, Stanford University, and several other universities. Dr. Bornmann has provided emergency assistance in areas devastated by hurricanes, floods, and wildfires. He has responded to emergencies on local Disaster Action Teams (DAT), assisted with Services to Armed Forces (SAF), and taught Disaster Services classes and Health & Safety classes. He and his wife, Barb, are certified operators of the American Red Cross Emergency Communications Response Vehicle (ECRV), a self-contained unit capable of providing satellite-based communications and technology-related assistance at disaster sites. He served on the governing board of a large international professional organization (ACM), was chair of a committee overseeing several hundred worldwide volunteer chapters, helped organize large international conferences, served on numerous technical committees, and presented technical papers at numerous symposiums and conferences. He has numerous Who’s Who citations for his technical and professional contributions and many years of management experience with major corporations including General Electric, Boeing, and as an independent contractor. He was a principal contributor on numerous large technology-related development projects, including having written the Systems Concepts for NASA’s largest supercomputing system at the Ames Research Center in Silicon Valley. With over 40 years of experience in scientific and commercial computer systems management and development, he worked on a wide variety of computer-related systems, from small single embedded microprocessor-based applications to some of the largest distributed heterogeneous supercomputing systems ever planned.