It seems to me….
“By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.” ~ Daniel Kahneman.
It is very difficult to define Artificial Intelligence (AI) in a way acceptable to everyone working in the field. This is partly the result of the rapid changes currently taking place in what many consider to be AI. Applications that only a few years ago would have been considered AI are now used every day without any thought of the technology that makes them possible: automated perception, machine learning and reasoning, decision making, and much more. Applications such as Siri (Apple) or Cortana (Microsoft) recognize spoken input. Machine vision is capable of facial recognition. Internet search engines. Language translation. Route planning. Self-driving cars. Medical diagnosis. The list grows at an ever-increasing pace.
It is important to differentiate between narrow and general AI.
Narrow AI: systems smart enough to do one complex task but just that one task.
General AI: can adapt to whatever task it is confronted with, in much the same way a human can.
General AI, or strong AI as it’s sometimes known, is a far broader concept. It indicates a system that can work much like the human brain, understanding new concepts, grasping feedback from several senses at once, and finding creative solutions to unfamiliar problems, all with minimal training data. Humans have general intelligence: the ability to learn from one situation and apply it to another.
For some, it is seen as trying to recreate the human brain itself. Recreating that kind of intelligence in computers could be decades away. Progress, though, is constant.
To many people AI is big banks analyzing data in minutes instead of hours, automating everything from investment performance reviews to suspicious-activity reports to wealth-advisory communications on an unprecedented scale and with very little human capital, or IBM Watson's natural-language processing providing financial advice to a bank's customers. Marketing departments are able to cater to customers one-to-one, on a massive scale. Big-data analytics in patient care is claimed to be 30 to 35 percent more successful than a comparable doctor's performance. Automated machine learning helps systems make sense of structured and unstructured data so that users can connect with the people, information, and advertisers they need. The U.S. Navy has developed robots that can fight fires and automate many of a firefighter's most dangerous tasks. Machine-learning models are applied to identify new vulnerabilities and suspicious activity, reducing external threats to customer data. Examples are everywhere one chooses to look.
The next big hurdle is endowing computers and robots with common sense: the ability to anticipate the consequences of ordinary, everyday actions on people and things. The other is endowing them with creativity, and that will be incredibly difficult.
Machines currently do well only at responding to certain predictable questions. Apple's Siri, if you ask the right things, sounds quite competent, but much of the time its responses are inappropriate. The next step, elusive thus far, is developing a program that actually understands the meaning of words and phrases. Computer scientists generally concede that jokes and sarcasm are still entirely beyond computers.
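The gap between responding to predictable questions and actually understanding language can be illustrated with a minimal sketch. This is a hypothetical keyword matcher, not any real assistant's logic; the keywords and canned replies are invented for illustration. It handles an expected phrasing fine, but it treats a sarcastic remark exactly like a literal question, because it matches surface patterns rather than meaning.

```python
# A toy keyword-based responder: pattern matching without understanding.
# The intents and replies below are invented for illustration only.

RESPONSES = {
    "weather": "It is sunny and 72 degrees.",
    "time": "It is 3:00 PM.",
}

def respond(utterance: str) -> str:
    """Return the canned reply for the first known keyword found.

    There is no model of meaning here: the function never knows
    whether the speaker is asking, complaining, or being sarcastic.
    """
    lowered = utterance.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in lowered:
            return reply
    return "Sorry, I don't understand."

# A predictable question gets a sensible-looking answer:
print(respond("What's the weather today?"))

# A sarcastic remark triggers the very same reply, since the matcher
# sees only the keyword "weather", not the speaker's intent:
print(respond("Oh great, more lovely weather, I'm sure."))
```

The failure mode is the point: both inputs contain the word "weather", so both receive the identical canned answer, which is why jokes and sarcasm remain out of reach for systems built on surface patterns.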
These systems should not be designed to mimic humans: evolution has in many ways been a terrible designer. What is needed is a system that is rational, predictable, and fast. While it's certainly interesting to ponder what might constitute machine consciousness, trying to create a human-like "machine mind" would be a mistake.
Would/could an artificially intelligent entity value or even understand the ephemeral or esoteric? We are human and feel this to be a necessary aspect of intelligence (we place a high importance on it), but it is not necessarily an aspect equally shared by the other life with which we share this planet. Would a horse appreciate a Shakespearean sonnet; a dog a Rembrandt; a lizard a Beethoven symphony? Regardless of what we imagine, AI entities will not "think" like humans.
While a non-carbon-based intelligent entity might be capable of creating a piece of art, possibly even superior to any comparable human created work, it would experience life far differently than humans. It would combat digital rather than biological viruses; it could modify its own logic; its nutritional needs would be energy-related rather than for carbohydrates or proteins; through parallel processing it could multitask working on a virtually unlimited number of chores while simultaneously experiencing everything occurring through remote sensors.
AI entities will almost certainly be capable of eventually experiencing emotions, as these are simply an additional artifact of intelligence, but unless we merely reinvent human-like life, those emotions (pain, elation, love, loss, longing, ego) will differ from the human experience.
Just as the full spectrum of human experience differs from that of non-human species, so will it differ for an AI-based entity. Such entities can never be human, and any pretense or expectation to the contrary is false. The most we as humans can aspire to achieve is the recognition and acceptance of those differences, just as with any other non-human species.
AI researchers have made significant progress in recent years, though the actual extent of that progress goes unappreciated by most people because the most advanced systems currently run on large university and corporate servers rather than in humanoid robots. Internet connectivity enables these systems to access external sensory devices, including audio/visual, media, and literary resources.
And another thing to consider: humanity's dominant position on this planet depends upon our superior intelligence; if another entity's intelligence surpasses our own, we will most likely not retain that dominance.
That’s what I think, what about you?
Daniel Kahneman is an Israeli-American psychologist notable for his work on the psychology of judgment and decision-making, as well as behavioral economics, for which he was awarded the 2002 Nobel Memorial Prize in Economic Sciences.