What is Artificial Intelligence? Definition, History, and Types

What Is Artificial Intelligence (AI)?

Artificial intelligence (AI) is technology that allows computers and machines to simulate human intelligence and problem-solving capabilities. The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that achieve a specific goal. AI research began in the 1950s, and by the 1960s the United States Department of Defense was training computers to mimic human reasoning.

A subset of artificial intelligence is machine learning (ML), which refers to the idea that computer programs can automatically learn from and adapt to new data without human assistance.

Key Takeaways

  • Artificial intelligence technology allows computers and machines to simulate human intelligence and problem-solving capabilities.
  • Algorithms are a core part of the structure of artificial intelligence: simple algorithms power simple applications, while more complex ones underpin stronger forms of artificial intelligence.
  • Artificial intelligence technology is apparent in chess-playing computers, self-driving cars, and banking systems that detect fraudulent activity.

How Does Artificial Intelligence (AI) Work?

While the specifics vary across different AI techniques, the core principle revolves around data. AI systems learn and improve through exposure to vast amounts of data, identifying patterns and relationships that humans may miss.

This learning process often involves algorithms, which are sets of rules or instructions that guide the AI’s analysis and decision-making. In machine learning, a popular subset of AI, algorithms are trained on labeled or unlabeled data to make predictions or categorize information. 
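
As a concrete illustration, here is a minimal supervised-learning sketch, assuming the scikit-learn library is available; the tiny "transaction" dataset and its features are invented purely for demonstration:

```python
# Minimal supervised machine-learning sketch (illustrative only).
# Assumes scikit-learn is installed; the toy transaction data is invented.
from sklearn.linear_model import LogisticRegression

# Labeled training data: [amount_in_usd, hour_of_day] -> 1 = fraud, 0 = legitimate
X_train = [[12.5, 14], [900.0, 3], [45.0, 10], [1500.0, 2], [30.0, 16], [1200.0, 1]]
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # the algorithm learns patterns from the labeled examples

print(model.predict([[1000.0, 2]]))  # predict whether an unseen transaction looks fraudulent
```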

Deep learning, a further specialization, utilizes artificial neural networks with multiple layers to process information, mimicking the structure and function of the human brain. Through continuous learning and adaptation, AI systems become increasingly adept at performing specific tasks, from recognizing images to translating languages and beyond.
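
To make the "multiple layers" idea concrete, the following sketch runs a forward pass through a tiny two-layer neural network using only NumPy; the weights are random placeholders rather than a trained model:

```python
# Forward pass through a tiny two-layer neural network (illustrative sketch).
# Weights are random placeholders; a real network would learn them from data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                            # input features, e.g. four pixel intensities

W1, b1 = rng.random((8, 4)), rng.random(8)   # first (hidden) layer parameters
W2, b2 = rng.random((3, 8)), rng.random(3)   # second (output) layer parameters

hidden = np.maximum(0, W1 @ x + b1)          # layer 1: linear transform plus ReLU activation
logits = W2 @ hidden + b2                    # layer 2: one raw score per output class

probs = np.exp(logits) / np.exp(logits).sum()  # softmax turns scores into probabilities
print(probs)
```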

Types of artificial intelligence

Artificial intelligence can be categorized in several ways, depending on its stage of development or the actions it performs.

For instance, four stages of AI development are commonly recognized.

  1. Reactive machines: Limited AI that only reacts to different kinds of stimuli based on preprogrammed rules. It does not use memory and thus cannot learn from new data. IBM’s Deep Blue, which beat chess champion Garry Kasparov in 1997, was an example of a reactive machine.
  2. Limited memory: Most modern AI is considered to be limited memory. It can use memory to improve over time by being trained with new data, typically through an artificial neural network or other training model. Deep learning, a subset of machine learning, is considered limited memory artificial intelligence.
  3. Theory of mind: Theory of mind AI does not currently exist, but research is ongoing into its possibilities. It describes AI that can emulate the human mind and has decision-making capabilities equal to that of a human, including recognizing and remembering emotions and reacting in social situations as a human would. 
  4. Self-aware: A step above theory of mind AI, self-aware AI describes a hypothetical machine that is aware of its own existence and has the intellectual and emotional capabilities of a human. Like theory of mind AI, self-aware AI does not currently exist.

A more useful way of broadly categorizing types of artificial intelligence is by what the machine can do. All of what we currently call artificial intelligence is considered artificial “narrow” intelligence, in that it can perform only narrow sets of actions based on its programming and training. For instance, an AI algorithm that is used for object classification won’t be able to perform natural language processing. Google Search is a form of narrow AI, as are predictive analytics and virtual assistants.

Artificial general intelligence (AGI) would be the ability of a machine to “sense, think, and act” just like a human. AGI does not currently exist. The next level would be artificial superintelligence (ASI), in which the machine would be able to function in ways superior to a human in every respect.

History of artificial intelligence: Key dates and names

The idea of “a machine that thinks” dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article), important events and milestones in the evolution of artificial intelligence include the following:

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing, famous for breaking the German ENIGMA code during WWII and often referred to as the “father of computer science,” asks the question “Can machines think?” From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer-generated and a human-written text response. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas about linguistics.
  • 1956: John McCarthy coins the term “artificial intelligence” at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.
  • 1958: Frank Rosenblatt develops the perceptron, an early neural network that “learned” through trial and error; it is later implemented in hardware as the Mark 1 Perceptron. In 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against further neural network research.
  • 1980s: Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
  • 1995: Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting.
  • 1997: IBM’s Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).
  • 2004: John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI.
  • 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
  • 2015: Baidu’s Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
  • 2016: DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had acquired DeepMind in 2014 for a reported USD 400 million.
  • 2023: A rise in large language models (LLMs) such as ChatGPT creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pre-trained on vast amounts of raw, unlabeled data.

Artificial Intelligence Examples 

Though the humanoid robots often associated with AI (think Star Trek: The Next Generation’s Data or Terminator’s T-800) don’t exist yet, you’ve likely interacted with machine learning-powered services or devices many times before.

At the simplest level, machine learning uses algorithms trained on data sets to create machine learning models that allow computer systems to perform tasks like making song recommendations, identifying the fastest way to travel to a destination, or translating text from one language to another (a rough sketch of the recommendation idea follows the list below). Some of the most common examples of AI in use today include:

  • ChatGPT: Uses large language models (LLMs) to generate text in response to questions or comments posed to it. 
  • Google Translate: Uses deep learning algorithms to translate text from one language to another. 
  • Netflix: Uses machine learning algorithms to create personalized recommendation engines for users based on their previous viewing history. 
  • Tesla: Uses computer vision to power self-driving features on its cars.
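
As a rough illustration of the song-recommendation idea mentioned above, the following sketch ranks songs by their similarity to a user’s listening profile using only NumPy; the song names and feature values are made up for demonstration, and real recommendation engines use far richer data:

```python
# Rough sketch of a content-based song recommender (illustrative only).
# Song names and feature values are invented; real systems use far richer data.
import numpy as np

songs = {
    "Song A": np.array([0.9, 0.1, 0.3]),   # hypothetical features: energy, acousticness, tempo
    "Song B": np.array([0.2, 0.8, 0.4]),
    "Song C": np.array([0.8, 0.2, 0.5]),
}

# The user's taste profile, e.g. an average of the features of songs they already liked.
user_profile = np.array([0.85, 0.15, 0.4])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidate songs by similarity to the user's profile.
ranked = sorted(songs, key=lambda s: cosine_similarity(songs[s], user_profile), reverse=True)
print(ranked)  # most similar (recommended) songs first
```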

What is artificial general intelligence (AGI)? 

Artificial general intelligence (AGI) refers to a theoretical state in which computer systems will be able to achieve or exceed human intelligence. In other words, AGI is “true” artificial intelligence as depicted in countless science fiction novels, television shows, movies, and comics. 

As for how we would recognize “true” artificial general intelligence when it appears, researchers don’t quite agree. However, the most famous approach to identifying whether a machine is intelligent is known as the Turing Test or Imitation Game, an experiment first outlined by the influential mathematician, computer scientist, and cryptanalyst Alan Turing in a 1950 paper on computer intelligence. There, Turing described a three-player game in which a human “interrogator” is asked to communicate via text with another human and a machine and judge who composed each response. If the interrogator cannot reliably identify the human, then Turing says the machine can be said to be intelligent.

To complicate matters, researchers and philosophers also can’t quite agree on whether we’re beginning to achieve AGI, whether it’s still far off, or whether it’s totally impossible. For example, while a recent paper from Microsoft Research and OpenAI argues that GPT-4 is an early form of AGI, many other researchers are skeptical of these claims and argue that they were just made for publicity.

Regardless of how far we are from achieving AGI, you can assume that when someone uses the term artificial general intelligence, they’re referring to the kind of sentient computer programs and machines that are commonly found in popular science fiction. 

Benefits of AI

Automation

AI can automate workflows and processes or work independently and autonomously from a human team. For example, AI can help automate aspects of cybersecurity by continuously monitoring and analyzing network traffic. Similarly, a smart factory may have dozens of different kinds of AI in use, such as robots that use computer vision to navigate the factory floor or inspect products for defects, systems that create digital twins, or real-time analytics that measure efficiency and output.
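
For instance, one simple way to approach the network-monitoring use case is unsupervised anomaly detection. The sketch below flags an unusual traffic record with scikit-learn’s IsolationForest; the traffic features are synthetic and purely illustrative:

```python
# Illustrative anomaly detection on synthetic "network traffic" features.
# Assumes scikit-learn; real monitoring pipelines use many more signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[500, 60], scale=[50, 5], size=(200, 2))   # [bytes/s, packets/s]
spike = np.array([[5000, 300]])                                            # an unusual burst

detector = IsolationForest(random_state=0).fit(normal_traffic)
print(detector.predict(spike))  # -1 flags the record as anomalous, 1 as normal
```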

Reduce human error

AI can eliminate manual errors in data processing, analytics, assembly in manufacturing, and other tasks through automation and algorithms that follow the same processes every single time.

Eliminate repetitive tasks

AI can be used to perform repetitive tasks, freeing human capital to work on higher-impact problems. AI can be used to automate processes, like verifying documents, transcribing phone calls, or answering simple customer questions like “What time do you close?” Robots are often used to perform “dull, dirty, or dangerous” tasks in the place of a human.

Fast and accurate

AI can process more information more quickly than a human, finding patterns and discovering relationships in data that a human may miss.

Infinite availability

AI is not limited by time of day, the need for breaks, or other human encumbrances. When running in the cloud, AI and machine learning can be “always on,” continuously working on their assigned tasks.

Accelerated research and development 

The ability to analyze vast amounts of data quickly can lead to accelerated breakthroughs in research and development. For instance, AI has been used in predictive modeling of potential new pharmaceutical treatments and in analyzing the human genome.
