Artificial Intelligence (AI) has left the confines of science fiction to shape the contours of our lives. While Cortana, Siri, Alexa and other “intelligent assistants” help us perform all sorts of tasks, tech startups relentlessly launch other AI-driven products to tackle a growing list of human concerns — from personal finance (robo advisors) and customer service (chatbots) to romance (dating bots) and health (medical bots). Given the spreading influence of AI, Alphabet’s Eric Schmidt recently remarked that we’re inexorably heading towards the “Age of Intelligence.”
But, have we truly imbued machines with intelligence?
The answer depends on how you define AI. The term “AI” is widely abused and used to describe all levels of automation, even rule-based scripting. AI experts maintain a much higher bar, setting “artificial general intelligence (AGI)” and the most stringent variants of the Turing Test as the field’s Holy Grail. Making things murkier, there are other terms to consider: strong AI, weak AI, machine learning, deep learning – what do they all mean?
The Difficulty With Definitions
Nearly all of the AI-driven systems running today can be classified as “Weak AI” or “Narrow AI”. In his book The Singularity Is Near, computer scientist and futurist Ray Kurzweil defined weak AI as AI that is proficient in one area.
A narrowly intelligent program could become World Chess Champion or the planet’s top Go player – but still suck miserably at other tasks such as distinguishing and analyzing images. Deep Blue (which defeated the reigning world chess champion in 1997), AlphaGo (which has beaten the world’s top Go players since 2016), Alexa, Siri, and the chatbot that booked your last vacation are all examples of Weak AI. So far, it’s the best we’ve come up with.
On the other hand, “Strong AI” has two alternate definitions. The term was originally coined by philosopher John Searle in his paper Minds, Brains, and Programs, where he defined it as a programmed computer with a mind in exactly the same sense that human beings have minds. The term grew out of his famous Chinese Room argument, which posits that Strong AI cannot exist because no program can give a computer a true “mind”, no matter how intelligently it behaves. Searle probed the issue as a philosopher, but few AI researchers or computer scientists really care about the distinction between a computer with a human “mind” and a computer with a mind that behaves indistinguishably from that of a human.
More recently, Strong AI has emerged as an antonym for Narrow AI, which is driven largely by machine learning and deep learning. Machine learning (ML) refers to software that can learn and make predictions from data without being explicitly programmed to do so. Deep learning, a subfield of machine learning, has recently achieved breakthrough performance by using mathematical architectures loosely inspired by how neurons work in the biological brain. Most of the best-performing AI systems today are built on deep learning algorithms, also known as deep neural networks. Nvidia has a great post going into further detail about the differences between machine learning and deep learning.
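To make that distinction concrete, here is a minimal sketch of “learning from data” rather than rule-writing: a tiny two-layer neural network that learns the XOR function from four examples. (The Python/NumPy code, the network size, the learning rate, and the XOR task are all illustrative choices, not anything referenced above.)

```python
import numpy as np

# Toy training data: the XOR function, which no single linear rule captures.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # weights: inputs -> hidden layer of 4 "neurons"
W2 = rng.normal(size=(4, 1))   # weights: hidden layer -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learn from data: repeatedly nudge the weights to reduce prediction error,
# rather than programming the XOR rule explicitly.
for step in range(10000):
    hidden = sigmoid(X @ W1)            # forward pass through the hidden layer
    output = sigmoid(hidden @ W2)       # network's current predictions
    error = output - y                  # how wrong are we?

    # Backpropagation: gradients of the squared error w.r.t. each weight matrix.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

# Typically converges to values close to [[0], [1], [1], [0]].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

On a typical run the network ends up predicting roughly 0, 1, 1, 0 for the four inputs, even though no XOR rule was ever written down; the mapping was extracted from the examples.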
Another term often used synonymously with Strong AI is Artificial General Intelligence (AGI). AI luminary and author Ben Goertzel defines AGI as “a synthetic intelligence that has a general scope and is good at generalization across various goals and contexts.” Self-made robotics tycoon Peter Voss believes AGI to be an artificial program capable of “learning anything, in principle”, but further clarifies that the learning should be “autonomous, goal-directed, and highly adaptive”. Unlike Goertzel, Voss deprioritizes human-like emotions and social empathy as requisite components of AGI. Meanwhile, Temple University professor and author Pei Wang describes it in terms of the core elements and assumptions of AGI research:
- Stressing the general-purpose nature of intelligence
- Taking a holistic or integrative viewpoint on intelligence
- Believing the time has come to build an AI that is comparable to human intelligence.
While AI experts disagree on details, most will likely agree with Goertzel’s Core AGI Hypothesis:
“The creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability, is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.”
Narrow AI is not AGI — despite what marketers are telling you.
What is General Intelligence?
Settling on a definition for AGI has been so problematic largely because general intelligence itself is difficult to define. General intelligence need not even be human-like, and its potential breadth makes it hard to pin down and harder still to characterize through tests and metrics. Indeed, the past two decades have seen many approaches taken, none of which has taken hold as the ideal.
In a 2005 article, Nils Nilsson proposed a pragmatic approach, wherein any AI that can carry out the same practical tasks as a human can be considered to have human-level intelligence. This presupposes that human-like intelligence is the goal, which is true in many practical senses. A psychological approach to general intelligence, under study since the early 20th century, also relies on a human baseline, but attempts to isolate the deeper underlying characteristics that enable pragmatic results. This approach is exemplified by Gardner’s theory of multiple intelligences and more recent work describing human cognitive competencies.
The adaptationist approach states that greater general intelligence is demonstrated by a greater ability to adapt to new environments, particularly with insufficient resources. This approach raises a new debate on whether a system’s intelligence lies in its ability to achieve results or in achieving them with minimal resources. Similarly, the embodiment approach holds that intelligence is best understood by focusing on the modulation of the body-environment interaction: an intelligent system operates within the rules of its environment to produce optimal behavior.
More esoteric approaches include the cognitive architecture approach, which develops requirements for human-level intelligence from the standpoint of cognitive functions such as knowledge and skill representation, reasoning and planning, perception and action, and so on. Finally, there is the mathematical approach, which attempts to define intelligence based on the reward-achieving capability of a system. In this highly generalized approach, humans are not taken as a benchmark and are indeed far from maximally intelligent.
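One well-known formalization in this spirit (not named above, but representative of the reward-based approach) is Legg and Hutter’s universal intelligence measure, sketched here: it scores an agent by the expected reward it earns across all computable environments, with simpler environments weighted more heavily.

```latex
% Universal intelligence of an agent \pi (Legg & Hutter's measure):
% sum the expected reward V^{\pi}_{\mu} the agent earns in each computable
% environment \mu, weighted by 2^{-K(\mu)}, where K(\mu) is the Kolmogorov
% complexity (description length) of that environment.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Because the sum ranges over every computable environment, no physically realizable agent, human or machine, comes close to the theoretical maximum, which is consistent with the point above that humans are far from maximally intelligent under this approach.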
How Can We Test For AGI?
Given the difficulty in achieving a universal definition for AGI, it’s no surprise that developing a single test or metric for its presence is equally controversial. Further complicating matters is the importance of external environment in analyzing an AI’s behavior.
Numerous tests have been proposed, beginning with the Turing Test put forth by Alan Turing in 1950. In this test, a machine passes if it is able to successfully imitate human conversation and fool an evaluator. A similar version, the Virtual Turing Test, plays out the same scenario through avatars in a virtual world. A Text Compression Test challenges an AI to compress a text by recognizing and understanding the patterns contained within it.
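As a rough illustration of why compression serves as a proxy for understanding (a minimal sketch; zlib’s pattern matching is a stand-in for far deeper modeling), a compressor can only shrink data to the extent that it finds predictable structure in it:

```python
import os
import zlib

# Highly patterned text: easy to predict, therefore easy to compress.
patterned = b"the cat sat on the mat. " * 400
# Random bytes of the same length: no structure to exploit.
unpredictable = os.urandom(len(patterned))

for label, data in [("patterned text", patterned), ("random bytes", unpredictable)]:
    compressed = zlib.compress(data, level=9)
    print(f"{label}: {len(data)} bytes -> {len(compressed)} bytes")
```

On a typical run, the repetitive text shrinks to a small fraction of its original size while the random bytes barely compress at all, because there are no patterns to recognize.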
There are also various tests challenging an AI or robot to accomplish human educational goals – graduate from an online university, graduate from a physical university, or win a Nobel Prize. Some of these clearly overshoot human-level intelligence as few humans are Nobel Prize winners. Practical tests have been put forth by those advocating for a pragmatic approach to AGI. The Coffee Test, proposed by Apple co-founder Steve Wozniak, asks whether an AI can enter an average American home and make a cup of coffee. Similarly, the Employment Test challenges an AI to hold down a human job.
While settling on a test for AGI is difficult, establishing a metric for partial progress toward AGI is harder still. Practical tests have been proposed, such as putting an AI through elementary school or using the coffee test, but these can be easily gamed by systems designed with the test in mind. Some researchers suggest that it is fundamentally impossible to quantify progress toward AGI due to the principle of “cognitive synergy”: the components of a general intelligence deliver most of their value only when working together, so a fully functional AI may achieve 100% on a test while a 90% functional AI may score only 50% on the same test.
All in all, achieving AGI is an extraordinary undertaking and a far cry from the “AI” currently touted by start-ups and marketers. Developing a true artificial general intelligence, and establishing it as such, is an ongoing challenge that continues to be hindered by difficulties in devising definitions, metrics, and tests. However, its inevitable arrival promises to herald a new era in human-machine interactions and perhaps force a redefinition of “intelligence” itself.