The ultimate glossary of AI terms

The definition of artificial intelligence (AI) differs depending on who you ask. Why? Because there are many different types of AI, for many different purposes. In our glossary of artificial intelligence terms, you’ll learn what all the key terms mean.

Agents

Also known as bots, droids or intelligent agents. Agents are autonomous software programs that respond to their environment and act on behalf of humans to accomplish a system’s target function. When multiple agents are used together in a system, they interact with one another to achieve the goals of the overall system.

Algorithm

A specific set of instructions, contained within a computer program, that determines how the program analyzes and uses data to complete its specified task (target function).

Artificial Intelligence

Computer programs designed to solve difficult problems that humans (and animals) routinely solve. The goal of AI is to develop programs that can solve such problems independently, although the methods they use often differ significantly from the way humans solve them.

Artificial neural network

An approach to machine learning that is loosely based on the biological neural networks of the brain. ANNs learn nonlinear relationships between input data and output data.
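
To make the idea concrete, here is a minimal sketch (assuming NumPy) of a one-hidden-layer network computing an output from an input; the weights are arbitrary illustrative values rather than learned ones.

```python
import numpy as np

# A minimal one-hidden-layer network: input -> hidden (nonlinear) -> output.
# The weights below are arbitrary illustrative values; in practice they are
# learned from training data.

def forward(x, W1, b1, W2, b2):
    hidden = np.tanh(W1 @ x + b1)    # nonlinear "neurons"
    return W2 @ hidden + b2          # output layer

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # 2 inputs -> 3 hidden units
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # 3 hidden units -> 1 output

print(forward(np.array([0.5, -1.0]), W1, b1, W2, b2))
```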

Bayesian network

A model that represents and calculates the probabilistic relationships among a set of random variables in an uncertain domain via a directed acyclic graph. The nodes of the graph represent the random variables, and the links between them represent their conditional dependencies.

For example, a Bayesian network can be used to calculate the probabilities of various diseases being present (the uncertain domain) based on the given symptoms (the variables).
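
As a toy illustration of that example, the sketch below uses a two-node network (disease → symptom) with made-up probabilities and applies Bayes’ rule to compute how likely the disease is once the symptom is observed.

```python
# A toy two-node network (Disease -> Symptom) with made-up probabilities,
# used to compute P(disease | symptom observed) via Bayes' rule.

p_disease = 0.01                       # prior: P(Disease = yes)
p_symptom_given_disease = 0.90         # P(Symptom | Disease = yes)
p_symptom_given_no_disease = 0.05      # P(Symptom | Disease = no)

# Total probability of observing the symptom.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_no_disease * (1 - p_disease))

# Posterior probability of the disease given the symptom.
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))  # ~0.154
```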

Chatbot

A computer program that conducts conversations with human users by simulating how humans would behave as conversational partners.

Clustering

A method of unsupervised learning and a common statistical data analysis technique. In this method, observations that show similarities to each other are organized into groups (called clusters).
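
For instance, here is a minimal sketch of k-means (one common clustering algorithm), assuming NumPy and scikit-learn are available; the points are made up so that two groups are obvious.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious groups of 2-D points; k-means should recover them.
points = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                   [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1] (cluster ids may swap)
print(kmeans.cluster_centers_)  # one center near (1, 1), one near (8, 8)
```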

Combinatorial explosion

A fundamental problem in computing whereby the number of combinations that a computer has to examine grows exponentially. The number of combinations can become so large that even the fastest computers aren’t able to examine them all in a conceivable time frame (we are talking hundreds of thousands of years here!).
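
A quick illustration in Python: the number of possible orderings of n items grows factorially, and even at a billion checks per second, exhausting the orderings of just 25 items would take on the order of half a billion years.

```python
from math import factorial

# The number of possible orderings (permutations) of n items grows
# factorially, a classic source of combinatorial explosion.
for n in (5, 10, 15, 20, 25):
    print(n, factorial(n))

# At one billion orderings checked per second, exhausting 25! orderings
# would take roughly half a billion years.
print(factorial(25) / 1e9 / (60 * 60 * 24 * 365))  # seconds -> years
```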

Computational creativity

A multidisciplinary research area that draws on the fields of art, science, philosophy and AI to engineer computational systems that are able to model, simulate and replicate human creativity. For example, IBM researchers are currently exploring how computational creativity can be used in the food industry to create recipes for dishes that humans have never imagined before.

Data

Any collection of information converted into a digital form.

Data mining

The process of combing through a data set to identify patterns and extract information. Often such patterns and information are only clear when a large enough data set is analyzed. For this reason, AI and machine learning are extremely helpful in such a process.

Decision model

A model that uses prescriptive analytics to establish the best course of action for a given situation. The model assesses the relationships between the elements of a decision to recommend one or more possible courses of action. It may also predict what is likely to happen if a certain action is taken.

Deep Blue (chess-playing AI)

A computer developed by IBM that is able to play chess without human guidance. It was the first computer chess-playing system to win both a game and a match against a reigning human world champion.

Deep learning

A subset of AI and machine learning in which neural networks are “layered”, i.e. stacked many levels deep. Combined with plenty of computing power and a large volume of training data, these layered networks create extremely powerful learning models capable of processing data in new and exciting ways, e.g. advancing the field of computer vision.

Descriptive model

A summary of a data set that describes its main features and quantifies relationships in the data. Some common measures used to describe a data set are measures of central tendency (mean, median and mode).
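
For example, Python’s standard statistics module can compute those measures of central tendency for a small, made-up data set:

```python
import statistics

# Summarizing a small data set with measures of central tendency.
data = [2, 3, 3, 5, 7, 9, 9, 9, 11]
print(statistics.mean(data))    # 6.44...
print(statistics.median(data))  # 7
print(statistics.mode(data))    # 9
```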

Genetic algorithm

A method for solving optimization problems by mimicking the process of natural selection and biological evolution. The algorithm randomly selects pairs of individuals from the population to be used as parents. These are then crossed over to create two new individuals, or children, which form the next generation. The process is repeated until a satisfactory solution to the optimization problem is found.
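
Here is a minimal sketch of the idea on the classic “OneMax” toy problem (evolve a bit string of all 1s); a small mutation step, a common ingredient of genetic algorithms, is included alongside selection and crossover.

```python
import random

# A tiny genetic algorithm for the "OneMax" toy problem: evolve a bit string
# of all 1s. Fitness is simply the number of 1s.

random.seed(0)
LENGTH, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(individual):
    return sum(individual)

def pick_parent(population):
    # Tournament selection: the fittest of three random individuals wins.
    return max(random.sample(population, 3), key=fitness)

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    next_generation = []
    while len(next_generation) < POP_SIZE:
        mum, dad = pick_parent(population), pick_parent(population)
        cut = random.randint(1, LENGTH - 1)        # one-point crossover
        child = mum[:cut] + dad[cut:]
        if random.random() < 0.1:                  # occasional mutation
            i = random.randrange(LENGTH)
            child[i] = 1 - child[i]
        next_generation.append(child)
    population = next_generation

best = max(population, key=fitness)
print(fitness(best), best)   # typically all (or nearly all) 1s
```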

Inductive logic programming (ILP)

An approach to machine learning whereby a hypothesis expressed in logic is derived from known background knowledge and a set of both positive and negative examples of what is and isn’t true.

Inductive reasoning

The ability to derive key generalized conclusions or theories by analyzing patterns in a large data set.

Machine learning

A subfield of AI in which algorithms “learn” how to complete a specified task. The traditional approach to computer programming relies on explicit instructions written by a human. In contrast, machine learning uses statistical pattern recognition and inference to derive its own mapping between input data and output data.

Natural language generation (NLG)

The use of algorithms to generate language that is comprehensible and human-sounding. The end goal is to produce computer-generated language that is indistinguishable from language written by humans.

Natural language processing (NLP)

A field of AI and machine learning concerned with the interaction between humans and computers. It focuses on helping machines better understand human language in order to improve human-computer interfaces.

Optical character recognition (OCR)

A computer system that takes images of typed, handwritten or printed text and converts them into machine-readable text. For example, when you deposit a check at an ATM, OCR software is used to recognize the information written on the check.
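
As a rough sketch, the open-source Tesseract engine can be driven from Python via the pytesseract wrapper (both must be installed separately); 'check.png' here is a hypothetical image file.

```python
from PIL import Image
import pytesseract

# Convert an image of text into a machine-readable string.
text = pytesseract.image_to_string(Image.open("check.png"))
print(text)
```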

Overfitting

Overfitting occurs when a machine learning model is tuned to the training data too closely. In other words, the model fits patterns in the data that are due to noise rather than the true relationship between the input and output. Overfit models perform poorly when tested on unseen, real-world data.
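
The sketch below (assuming NumPy) makes this visible with made-up data: a degree-9 polynomial fits ten noisy training points almost perfectly but typically does worse than a simple straight line on unseen test data.

```python
import numpy as np

rng = np.random.default_rng(1)

# True relationship is a straight line plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.2, size=10)
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(scale=0.2, size=100)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_error, 4), round(test_error, 4))

# The degree-9 polynomial has a tiny training error but typically a larger
# test error than the straight line: it has overfit the noise.
```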

Parameter

Any characteristic that can be used to help define or classify a system such as an event, thing, person, project or situation. In AI, parameters define exactly what an algorithm should treat as important in the data when performing its target function.

Predictive analysis

The act of analyzing current and past data to look for patterns that can help make predictions about future events or performance.

Predictive model

A model that uses observations measured in a sample to predict the probability that a different sample or remainder of the population will exhibit the same behaviour or have the same outcome.

Pruning

The process of removing sections of decision trees that provide little power to classify instances (the occurrence of something). This technique reduces the size and complexity of the final decision tree and improves accuracy by eliminating the parts of the tree that are likely to cause overfitting.
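
As an illustration using scikit-learn, cost-complexity pruning (the ccp_alpha setting) shrinks a decision tree; the exact numbers depend on the data, but the pruned tree is much smaller and often scores similarly or better on unseen data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unpruned tree versus a cost-complexity-pruned tree
# (a larger ccp_alpha removes more of the tree).
full = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_train, y_train)

print(full.tree_.node_count, full.score(X_test, y_test))
print(pruned.tree_.node_count, pruned.score(X_test, y_test))
```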

Recurrent neural network

A type of artificial neural network in which outputs and internal states are fed back into the network, forming a cycle. This feedback gives the network an internal memory, allowing it to process sequences of data such as text, speech or time series.
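
A minimal sketch (assuming NumPy) of a single recurrent cell: the hidden state from the previous step is fed back in with each new input, which is what gives the network its memory. The weights are arbitrary illustrative values, not trained.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))    # input (3 features) -> hidden (4 units)
W_rec = rng.normal(size=(4, 4))   # previous hidden state -> hidden
b = np.zeros(4)

hidden = np.zeros(4)                 # the network's "memory"
sequence = rng.normal(size=(5, 3))   # a sequence of 5 time steps

for x_t in sequence:
    # Each step combines the new input with the previous hidden state.
    hidden = np.tanh(W_in @ x_t + W_rec @ hidden + b)
    print(hidden)
```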

Regression

A statistical method to determine the relationships between input (independent) and output (dependent) variables.
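
For example, fitting a straight line to noisy, made-up data with ordinary least squares (assuming NumPy) recovers the underlying relationship:

```python
import numpy as np

# Fit a straight line y = a*x + b to noisy data with ordinary least squares.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0 + rng.normal(scale=1.0, size=50)   # true slope 3, intercept 1

slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 2), round(intercept, 2))  # close to 3 and 1
```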

Reinforcement learning

A type of machine learning in which machines are “taught” to achieve their target function through a process of experimentation and reward. In reinforcement learning, the machine receives positive reinforcement when its processes produce the desired result and negative reinforcement when they do not.
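
A minimal sketch of the idea: a two-armed bandit in which the agent tries actions, receives rewards, and gradually learns which action pays off more using a simple epsilon-greedy strategy. The payout probabilities are made up for illustration.

```python
import random

random.seed(0)
true_payout = {"A": 0.3, "B": 0.7}       # hidden from the agent
value = {"A": 0.0, "B": 0.0}             # the agent's reward estimates
counts = {"A": 0, "B": 0}
epsilon = 0.1                            # how often to explore at random

for _ in range(2000):
    if random.random() < epsilon:
        action = random.choice(["A", "B"])          # explore
    else:
        action = max(value, key=value.get)          # exploit best estimate
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # Move the estimate toward the observed reward (incremental average).
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the estimate for "B" ends up higher, so the agent prefers it
```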

Supervised learning

A type of machine learning in which examples of input/output pairs are provided to the machine learning algorithm. The goal of the algorithm is to learn the relationship between the input and output.
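
For instance, given a handful of made-up labeled measurements, a nearest-neighbor classifier (assuming scikit-learn) learns the input-to-label mapping and applies it to a new measurement:

```python
from sklearn.neighbors import KNeighborsClassifier

# Input/output pairs: flower measurements (inputs) with made-up
# species labels (outputs).
X = [[1.4, 0.2], [1.3, 0.2], [4.7, 1.4], [4.5, 1.5]]   # petal length, width
y = ["setosa", "setosa", "versicolor", "versicolor"]    # labels

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[4.6, 1.3]]))  # -> ['versicolor']
```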

Swarm intelligence

An approach to artificial intelligence that is based on the idea that when individual agents come together, the interactions between them lead to the emergence of a more “intelligent” collective behaviour. It stems from the natural behaviour of animals such as bees, which combine into swarms to work more intelligently.
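
One well-known swarm method is particle swarm optimization; the bare-bones sketch below shows simple particles sharing the best position found so far and collectively homing in on the minimum of a toy function.

```python
import random

# Bare-bones particle swarm optimization: minimize f(x) = (x - 3)^2.
random.seed(0)
f = lambda x: (x - 3) ** 2

positions = [random.uniform(-10, 10) for _ in range(20)]
velocities = [0.0] * 20
personal_best = positions[:]
global_best = min(positions, key=f)

for _ in range(100):
    for i in range(20):
        # Each particle is pulled toward its own best and the swarm's best.
        velocities[i] = (0.5 * velocities[i]
                         + random.random() * (personal_best[i] - positions[i])
                         + random.random() * (global_best - positions[i]))
        positions[i] += velocities[i]
        if f(positions[i]) < f(personal_best[i]):
            personal_best[i] = positions[i]
        if f(positions[i]) < f(global_best):
            global_best = positions[i]

print(round(global_best, 3))  # close to 3
```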

Target function

The specific task an AI or learning machine has been designed and programmed to complete.

Test data set

In machine learning, the test data set is the data given to the machine after the training and validation phases have been completed. The test data set is used to check how the model produced in the first two phases performs when presented with unknown data. This gives a good indication of the accuracy, sensitivity and specificity of the algorithm’s predictive powers.

Training data set

In machine learning, the training data set is the data given to the machine during the initial “learning” or “training” phase. From this data set, the machine is meant to gain insight into how to complete its assigned task efficiently by identifying relationships within the data.

Turing test

A test developed by Alan Turing in 1950. It assesses a machine’s ability to exhibit behaviour that is indistinguishable from that of a human. In the test, a series of judges attempt to distinguish interactions with a human control from interactions with the machine being tested.

Unsupervised learning

A type of machine learning in which no examples are provided of the desired output. In unsupervised learning, the machine is left to identify patterns and draw its own conclusions from the data sets it is given.

Validation data set

In machine learning, the validation data set is the data given to the machine after the initial learning phase has been completed. The validation data is used to identify which of the relationships identified during the learning phase will be the most effective to use in predicting future performance.
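
In practice, the training, validation and test sets described above are often produced by splitting one data set; a sketch using scikit-learn’s train_test_split (with stand-in data) is shown below.

```python
from sklearn.model_selection import train_test_split

# Split one data set into training, validation and test portions
# (here 60% / 20% / 20%).
X = list(range(100))          # stand-in for real feature data
y = [v % 2 for v in X]        # stand-in for real labels

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```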

Discover how applying actual AI to language can deliver big results!