What is a Decision Tree?

A decision tree is a useful machine learning algorithm used for both regression and classification tasks. The name “decision tree” comes from the fact that the algorithm keeps dividing the dataset into smaller and smaller portions until the data has been divided into single instances, which are then classified. If you were to visualize the results of the algorithm, the way the categories are divided would resemble a tree with many leaves.

That’s a quick definition of a decision tree, but let’s take a deep dive into how decision trees work. Having a better understanding of how decision trees operate, as well as their use cases, will assist you in knowing when to utilize them during your machine learning projects.

General Format of a Decision Tree

A decision tree is a lot like a flowchart. To utilize a flowchart, you start at the starting point, or root, of the chart, and then, based on how you answer the filtering criteria of that starting node, you move to one of the next possible nodes. This process is repeated until an end point is reached.

Decision trees operate in essentially the same manner, with every internal node in the tree being some sort of test/filtering criteria. The nodes on the outside, the endpoints of the tree, are the labels for the datapoint in question and they are dubbed “leaves”. The branches that lead from the internal nodes to the next node are features or conjunctions of features. The rules used to classify the datapoints are the paths that run from the root to the leaves.

Steps and Algorithms

Decision trees operate on an algorithmic approach which splits the dataset up into individual data points based on different criteria. These splits are done with different variables, or the different features of the dataset. For example, if the goal is to determine whether or not a dog or cat is being described by the input features, variables the data is split on might be things like “claws” and “barks”.

So what algorithms are used to actually split the data into branches and leaves? There are various methods that can be used to split a tree up, but the most common is probably a technique referred to as “recursive binary split”. When carrying out this method, the process starts at the root, and the number of features in the dataset determines the number of possible splits. A function is used to determine how much accuracy each possible split will cost, and the split is made using the criterion that sacrifices the least accuracy. This process is carried out recursively, and sub-groups are formed using the same general strategy.

In order to determine the cost of a split, a cost function is used; a different cost function is used for regression tasks than for classification tasks. The goal of both cost functions is to determine which branches have the most similar response values, or the most homogeneous branches. This makes intuitive sense: you want test data of a certain class to mostly follow the same paths through the tree.

In terms of the regression cost function for recursive binary split, the algorithm used to calculate the cost is as follows:

sum(y - prediction)^2

The prediction for a particular group of data points is the mean of the responses of the training data for that group. All the data points are run through the cost function to determine the cost for all the possible splits and the split with the lowest cost is selected.
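
To make this concrete, here is a minimal Python sketch of how a single split on one feature might be chosen with this cost function. The helper names and toy data are purely illustrative, not a real library's API.

```python
import numpy as np

def sse_cost(y):
    """Sum of squared errors for a group, using the group mean as the prediction."""
    return np.sum((y - y.mean()) ** 2) if len(y) else 0.0

def best_split(x, y):
    """Try every threshold on a single feature and return the split
    with the lowest combined cost over the two resulting groups."""
    best_threshold, best_cost = None, np.inf
    for threshold in np.unique(x):
        left, right = y[x <= threshold], y[x > threshold]
        cost = sse_cost(left) + sse_cost(right)
        if cost < best_cost:
            best_threshold, best_cost = threshold, cost
    return best_threshold, best_cost

# Toy data: the response jumps once the feature passes 3.
x = np.array([1, 2, 3, 6, 7, 8])
y = np.array([1.0, 1.2, 0.9, 5.1, 5.0, 4.8])
print(best_split(x, y))  # splits at 3, with a near-zero cost
```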

Regarding the cost function for classification, the function is as follows:

G = sum(pk * (1 - pk))

This is the Gini score, and it is a measurement of the effectiveness of a split, based on how many instances of different classes are in the groups resulting from the split. Here “pk” is the proportion of instances of class k in a given group, so the score quantifies how mixed the groups are after the split. An optimal split is when all the groups resulting from the split consist only of inputs from one class; in that case every “pk” value will be either 0 or 1 and G will be equal to zero. You might be able to guess that the worst-case split, in the case of binary classification, is one where there is a 50-50 representation of the classes. In this case, the “pk” values would be 0.5 and G would also be 0.5.
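
As a rough illustration, here is a small Python sketch of the Gini score computed for the groups produced by a split; the function name and toy labels are illustrative only.

```python
def gini(groups, classes):
    """Weighted Gini score for the groups produced by a split.
    Each group is a list of class labels; lower is better (0 = pure)."""
    total = sum(len(g) for g in groups)
    score = 0.0
    for group in groups:
        if not group:
            continue
        # sum(pk * (1 - pk)) over the classes
        g = sum((group.count(c) / len(group)) * (1 - group.count(c) / len(group))
                for c in classes)
        score += g * (len(group) / total)  # weight by group size
    return score

print(gini([["cat", "cat"], ["dog", "dog"]], ["cat", "dog"]))  # 0.0, a perfect split
print(gini([["cat", "dog"], ["dog", "cat"]], ["cat", "dog"]))  # 0.5, the worst case
```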

The splitting process is terminated when all the data points have been turned into leaves and classified. However, you may want to stop the growth of the tree early. Large complex trees are prone to overfitting, but several different methods can be used to combat this. One method of reducing overfitting is to specify a minimum number of data points that will be used to create a leaf. Another method of controlling for overfitting is restricting the tree to a certain maximum depth, which controls how long a path can stretch from the root to a leaf.
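
In practice, libraries such as scikit-learn expose these growth controls directly. A minimal sketch, assuming scikit-learn and its built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Restricting the maximum depth and requiring a minimum number of
# samples per leaf are two common guards against overfitting.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)
tree.fit(X, y)

print(tree.get_depth(), tree.get_n_leaves())
```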

Another process involved in the creation of decision trees is pruning. Pruning can help increase the performance of a decision tree by stripping out branches containing features that have little predictive power/little importance for the model. In this way, the complexity of the tree is reduced, it becomes less likely to overfit, and the predictive utility of the model is increased.

When conducting pruning, the process can start at either the top of the tree or the bottom of the tree. However, the easiest method of pruning is to start with the leaves, attempting to replace each node with its most common class. If the accuracy of the model doesn’t deteriorate when this is done, then the change is preserved. There are other techniques used to carry out pruning, but the method described above, reduced error pruning, is probably the most common method of decision tree pruning.

Considerations For Using Decision Trees

Decision trees are often useful when classification needs to be carried out but computation time is a major constraint. Decision trees can make it clear which features in the chosen datasets wield the most predictive power. Furthermore, unlike many machine learning algorithms where the rules used to classify the data may be hard to interpret, decision trees can render interpretable rules. Decision trees are also able to make use of both categorical and continuous variables which means that less preprocessing is needed, compared to algorithms that can only handle one of these variable types.

Decision trees tend not to perform very well when used to determine the values of continuous attributes. Another limitation of decision trees is that, when doing classification, if there are few training examples but many classes the decision tree tends to be inaccurate.

To Learn More

Recommended Artificial Intelligence Courses:

| Course | Offered By | Duration | Difficulty |
| --- | --- | --- | --- |
| Introduction to Artificial Intelligence | IBM | 9 Hours | Beginner |
| Deep Learning for Business | Yonsei University | 8 Hours | Beginner |
| An Introduction to Practical Deep Learning | Intel Software | 12 Hours | Intermediate |
| Machine Learning Foundations | University of Washington | 24 Hours | Intermediate |
What is Computer Vision?

Computer vision algorithms are one of the most transformative and powerful AI systems in the world, at the moment. Computer vision systems see use in autonomous vehicles, robot navigation, facial recognition systems, and more. However, what are computer vision algorithms exactly? How do they work? In order to answer these questions, we’ll dive deep into the theory behind computer vision, computer vision algorithms, and applications for computer vision systems.

How Do Computer Vision Systems Work?

In order to fully appreciate how computer vision systems work, let’s first take a moment to discuss how humans recognize objects. The best explanation neuropsychology has for how we recognize objects is a model in which the basic components of objects, such as form, color, and depth, are interpreted by the brain first. The signals from the eye that enter the brain are analyzed to pull out the edges of an object first, and these edges are joined together into a more complex representation that completes the object’s form.

Computer vision systems operate very similarly to the human visual system, by first discerning the edges of an object and then joining these edges together into the object’s form. The big difference is that because computers interpret images as numbers, a computer vision system needs some way to interpret the individual pixels that comprise the image. The computer vision system will assign values to the pixels in the image and by examining the difference in values between one region of pixels and another region of pixels, the computer can discern edges. For instance, if the image in question is greyscale, then the values will range from black (represented by 0) to white (represented by 255). A sudden change in the range of values of pixels near each other will indicate an edge.
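
As a rough sketch of this idea, the following Python snippet (using NumPy and a made-up miniature “image”) finds a vertical edge by looking for large jumps between adjacent pixel values:

```python
import numpy as np

# A tiny greyscale "image": 0 is black, 255 is white.
image = np.array([
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
], dtype=float)

# A large jump between horizontally adjacent pixel values marks a vertical edge.
horizontal_diff = np.abs(np.diff(image, axis=1))
print(horizontal_diff)
# The column of 255s shows exactly where the dark region meets the light one.
```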

This basic principle of comparing pixel values also applies to colored images, with the computer comparing differences between the different RGB color channels. So now that we know how a computer vision system examines pixel values to interpret an image, let’s take a look at the architecture of a computer vision system.

Convolutional Neural Networks

The primary type of AI used in computer vision tasks is one based on convolutional neural networks. What’s a convolution exactly?

Convolutions are mathematical processes the network uses to determine the difference in values between pixels. If you envision a grid of pixel values, picture a smaller grid being moved over this main grid. The values underneath the second grid are being analyzed by the network, so the network is only examining a handful of pixels at a time. This is often called the “sliding windows” technique. The values being analyzed by the sliding window are summarized by the network, which helps reduce the complexity of the image and make it easier for the network to extract patterns.
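
Here is a minimal Python sketch of the sliding-window idea, with an illustrative hand-rolled helper; deep learning libraries implement this far more efficiently (and, strictly speaking, most implement the related cross-correlation operation while calling it convolution):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and sum the elementwise
    products at each position (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i:i + kh, j:j + kw]  # the current "sliding window"
            out[i, j] = np.sum(window * kernel)
    return out

image = np.array([[0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [0, 0, 255, 255]], dtype=float)

# A simple vertical-edge kernel: it responds strongly where the
# left and right halves of the window differ.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])
print(convolve2d(image, kernel))
```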

Convolutional neural networks are divided into two different sections, the convolutional section and the fully connected section. The convolutional layers of the network are the feature extractors, whose job is to analyze the pixels within the image and form representations of them that the densely connected layers of the neural network can learn patterns from. The convolutional layers start by just examining the pixels and extracting the low-level features of the image like edges. Later convolutional layers join the edges together into more complex shapes. By the end, the network will hopefully have a representation of the edges and details of the image that it can pass to the fully connected layers.
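
A minimal sketch of this two-section architecture, assuming the Keras API; the input shape (28x28 greyscale images), layer sizes, and number of output classes are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),               # e.g. 28x28 greyscale images
    # Convolutional section: the feature extractor.
    layers.Conv2D(16, (3, 3), activation="relu"),  # early layers pick out edges
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # later layers join edges into shapes
    layers.MaxPooling2D((2, 2)),
    # Fully connected section: learns patterns from the extracted features.
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),        # e.g. 10 output classes
])
model.summary()
```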

Image Annotation

While a convolutional neural network can extract patterns from images by itself, the accuracy of the computer vision system can be greatly improved by annotating the images. Image annotation is the process of adding metadata to the image that assists the classifier in detecting important objects in the image. The use of image annotation is important whenever computer vision systems need to be highly accurate, such as when controlling an autonomous vehicle or robot.

There are various ways that images can be annotated to improve the performance of a computer vision classifier. Image annotation is often done with bounding boxes, a box that surrounds the edges of the target object and tells the computer to focus its attention within the box. Semantic segmentation is another type of image annotation, which operates by assigning an image class to every pixel in an image. In other words, every pixel that could be considered “grass” or “trees” will be labeled as belonging to those classes. The technique provides pixel-level precision, but creating semantic segmentation annotations is more complex and time-consuming than creating simple bounding boxes. Other annotation methods, like lines and points, also exist.
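
As a rough illustration of what bounding-box metadata might look like, here is a hypothetical annotation record; real datasets (such as COCO) use similar but more elaborate schemas, and the field names and filename below are made up:

```python
# Hypothetical annotation for one image: each box is [x_min, y_min, x_max, y_max].
annotation = {
    "image": "street_scene.jpg",  # illustrative filename
    "bounding_boxes": [
        {"label": "car",        "box": [34, 120, 210, 260]},
        {"label": "pedestrian", "box": [250, 100, 300, 240]},
    ],
}
```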


What Are Neural Networks?

Many of the biggest advances in AI are driven by artificial neural networks. Artificial Neural Networks (ANNs) are collections of mathematical functions joined together in a format inspired by the neural networks found in the human brain. These ANNs are capable of extracting complex patterns from data and applying these patterns to unseen data in order to classify/recognize that data. In this way, the machine “learns”. That’s a quick rundown on neural networks, but let’s take a closer look at neural networks to better understand what they are and how they operate.

Understanding The Multi-layer Perceptron

Before we look at more complex neural networks, we’re going to take a moment to look at a simple version of an ANN, a Multi-Layer Perceptron (MLP).

Photo: Sky99 via Wikimedia Commons, CC BY-SA 3.0 (https://commons.wikimedia.org/wiki/File:MultiLayerPerceptron.svg)

Imagine an assembly line at a factory. On this assembly line, one worker receives an item, makes some adjustments to it, and then passes it on to the next worker in the line who does the same. This process continues until the last worker in the line puts the finishing touches on the item and puts it on a belt that will take it out of the factory. In this analogy, there are multiple “layers” to the assembly line, and products move between layers as they move from worker to worker. The assembly line also has an entry point and an exit point.

A Multi-Layer Perceptron can be thought of as a very simple production line, made out of three layers total: an input layer, a hidden layer, and an output layer. The input layer is where the data is fed into the MLP, and in the hidden layer some number of “workers” handle the data before passing it on to the output layer, which gives the product to the outside world. In the case of an MLP, these workers are called “neurons” (or sometimes nodes), and when they handle the data they manipulate it through a series of mathematical functions.

Within the network, there are structures connecting node to node called “weights”. Weights reflect the level of influence that one neuron has over another neuron; to put that another way, they are the network’s assumptions about how data points are related. At each node, the weighted inputs are summed and the result is passed through an “activation function” as it leaves the node. Activation functions are mathematical functions that transform linear combinations of the inputs into non-linear representations, which enables the network to analyze complex patterns.
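
A minimal NumPy sketch of this flow of data through weights and an activation function; biases are omitted for simplicity, and all the sizes are illustrative:

```python
import numpy as np

def relu(x):
    # A common activation function: a non-linear transformation of the summed inputs.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A tiny MLP: 3 inputs -> 4 hidden neurons -> 2 outputs.
W_hidden = rng.normal(size=(3, 4))  # weights between input and hidden layer
W_output = rng.normal(size=(4, 2))  # weights between hidden and output layer

x = np.array([0.5, -1.2, 2.0])      # one input example
hidden = relu(x @ W_hidden)          # weighted sums pass through the activation
output = hidden @ W_output
print(output)
```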

The analogy to the human brain implied by “artificial neural network” comes from the fact that the neurons which make up the human brain are joined together in a similar fashion to how nodes in an ANN are linked.

While the ideas behind artificial neurons date back to the 1940s, a number of limitations prevented early networks from being especially useful. A technique called “backpropagation”, popularized in the 1980s, allowed networks to adjust the weights of their neurons and thereby learn much more effectively. Backpropagation changes the weights in the neural network, allowing the network to better capture the actual patterns within the data.
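
As a rough sketch of the underlying idea, here is gradient descent on a single weight; backpropagation extends this kind of gradient computation to every weight in a multi-layer network. The numbers are made up for illustration.

```python
# Minimal gradient-descent sketch for one "neuron" with a squared-error loss.
w = 0.0                # initial weight
x, target = 2.0, 6.0   # one training example: we want w * x == target
lr = 0.05              # learning rate

for step in range(50):
    prediction = w * x
    error = prediction - target
    gradient = 2 * error * x  # derivative of (w*x - target)^2 with respect to w
    w -= lr * gradient        # nudge the weight to reduce the error

print(w)  # converges towards 3.0, since 3.0 * 2.0 == 6.0
```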

Deep Neural Networks

Deep neural networks take the basic form of the MLP and make it larger by adding more hidden layers in the middle of the model. So instead of there being an input layer, a hidden layer, and an output layer, there are many hidden layers in the middle and the outputs of one hidden layer become the inputs for the next hidden layer until the data has made it all the way through the network and been returned.
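
A minimal sketch of this stacking, assuming the Keras API with illustrative layer sizes; each hidden layer consumes the previous layer’s outputs:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),             # e.g. 20 input features
    layers.Dense(64, activation="relu"),  # hidden layer 1
    layers.Dense(64, activation="relu"),  # hidden layer 2
    layers.Dense(32, activation="relu"),  # hidden layer 3
    layers.Dense(1),                      # output layer
])
model.summary()
```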

The multiple hidden layers of a deep neural network are able to interpret more complex patterns than the traditional multilayer perceptron. Different layers of the deep neural network learn the patterns of different parts of the data. For instance, if the input data consists of images, the first portion of the network might interpret the brightness or darkness of pixels while the later layers will pick out shapes and edges that can be used to recognize objects in the image.

Different Types Of Neural Networks

Photo: cecebur via Wikimedia Commons, CC BY-SA 4.0 (https://commons.wikimedia.org/wiki/File:Convolutional_Neural_Network_NeuralNetworkFeatureLayers.gif)

There are various types of neural networks, and each type has its own advantages and disadvantages (and therefore its own use cases). The type of deep neural network described above is the most common type of neural network, and it is often referred to as a feedforward neural network.

One variation on neural networks is the Recurrent Neural Network (RNN). In the case of Recurrent Neural Networks, looping mechanisms are used to hold information from previous states of analysis, meaning that they can interpret data where the order matters. RNNs are useful in deriving patterns from sequential/chronological data. Recurrent Neural Networks can be either unidirectional or bidirectional. In the case of a bi-directional neural network, the network can take information from later in the sequence as well as earlier portions of the sequence. Since the bi-directional RNN takes more information into account, it’s better able to draw the right patterns from the data.
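
A minimal sketch of a bidirectional RNN, assuming the Keras API; the vocabulary size, layer sizes, and sentiment-classification framing are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=32),  # vocabulary of 10,000 tokens
    layers.Bidirectional(layers.LSTM(32)),             # reads the sequence in both directions
    layers.Dense(1, activation="sigmoid"),             # e.g. binary sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```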

A Convolutional Neural Network is a special type of neural network that is adept at interpreting the patterns found within images. A CNN operates by passing a filter over the pixels of the image to achieve a numerical representation of the pixels, which it can then analyze for patterns. A CNN is structured so that the convolutional layers, which pull features out of the image, come first, and the densely connected feed-forward layers, which actually learn to recognize objects, come after.


Supervised vs Unsupervised Learning

In machine learning, most tasks can be easily categorized into one of two different classes: supervised learning problems or unsupervised learning problems. In supervised learning, data has labels or classes appended to it, while in the case of unsupervised learning the data is unlabeled. Let’s take a close look at why this distinction is important and look at some of the algorithms associated with each type of learning.

Supervised Vs. Unsupervised Learning

Most machine learning tasks are in the domain of supervised learning. In supervised learning algorithms, the individual instances/data points in the dataset have a class or label assigned to them. This means that the machine learning model can learn to distinguish which features are correlated with a given class, and that the machine learning engineer can check the model’s performance by seeing how many instances were properly classified. Classification algorithms can be used to discern many complex patterns, as long as the data is labeled with the proper classes. For instance, a machine learning algorithm can learn to distinguish different animals from each other based on characteristics like “whiskers”, “tail”, “claws”, etc.

In contrast to supervised learning, unsupervised learning involves creating a model that is able to extract patterns from unlabeled data. In other words, the computer analyzes the input features and determines for itself what the most important features and patterns are. Unsupervised learning tries to find the inherent similarities between different instances. If a supervised learning algorithm aims to place data points into known classes, unsupervised learning algorithms will examine the features common to the object instances and place them into groups based on these features, essentially creating its own classes.

Examples of supervised learning algorithms are Linear Regression, Logistic Regression, K-nearest Neighbors, Decision Trees, and Support Vector Machines.

Meanwhile, some examples of unsupervised learning algorithms are Principal Component Analysis and K-Means Clustering.

Supervised Learning Algorithm Examples

Linear Regression is an algorithm that takes two variables and models the relationship between them. Linear Regression is used to predict numerical values in relation to other numerical variables. Linear Regression has the equation Y = a + bX, where b is the line’s slope and a is the intercept, the point where the line crosses the Y-axis.
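
A minimal sketch, assuming scikit-learn and made-up data points that roughly follow Y = 1 + 2X:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4], [5]])
y = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)  # a (intercept) and b (slope)
print(model.predict([[6]]))           # predict Y for a new X
```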

Logistic Regression is a binary classification algorithm. The algorithm examines the relationship between numerical features and finds the probability that the instance can be classified into one of two different classes. The probability values are “squeezed” towards either 0 or 1. In other words, strong probabilities will approach 1 while weak probabilities will approach 0.
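
This “squeezing” is done by the sigmoid (logistic) function; a quick Python illustration:

```python
import numpy as np

def sigmoid(z):
    """Squeezes any real number into the (0, 1) range."""
    return 1 / (1 + np.exp(-z))

# Large positive scores map close to 1, large negative scores close
# to 0, and a score of 0 maps to exactly 0.5 (the decision boundary).
for z in (-5, -1, 0, 1, 5):
    print(z, round(float(sigmoid(z)), 3))
```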

K-Nearest Neighbors assigns a class to new data points based on the assigned classes of some chosen amount of neighbors in the training set. The number of neighbors considered by the algorithm is important, and too few or too many neighbors can misclassify points.

Decision Trees are a type of classification and regression algorithm. A decision tree operates by splitting a dataset into smaller and smaller portions until the subsets can’t be split any further, and what results is a tree with nodes and leaves. The nodes are where decisions about data points are made using different filtering criteria, while the leaves are the instances that have been assigned some label (a data point that has been classified). Decision tree algorithms are capable of handling both numerical and categorical data. Splits are made in the tree on specific variables/features.

Support Vector Machines are a classification algorithm that operates by drawing hyperplanes, or lines of separation, between data points. Data points are separated into classes based upon which side of the hyperplane they are on. Multiple hyperplanes can be drawn across a plane, dividing a dataset into multiple classes. The classifier will try to maximize the distance between the dividing hyperplane and the points on either side of it, and the greater the distance between the line and the points, the more confident the classifier is.
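
As a rough illustration of fitting two of these classifiers, a minimal sketch assuming scikit-learn and its built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_neighbors controls how many neighbors vote on each new point.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# The SVM draws separating hyperplanes between the classes.
svm = SVC().fit(X_train, y_train)

print(knn.score(X_test, y_test), svm.score(X_test, y_test))
```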

Unsupervised Learning Algorithms

Principal Component Analysis is a technique used for dimensionality reduction, meaning that the complexity of the data is represented in a simpler fashion. The Principal Component Analysis algorithm finds new dimensions for the data that are orthogonal. While the dimensionality of the data is reduced, the variance in the data should be preserved as much as possible. What this means in practical terms is that it takes the features in the dataset and distills them down into fewer features that represent most of the data.
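
A minimal sketch, assuming scikit-learn and its iris dataset, with its four features reduced to two components:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)  # 4 features per instance

# Distill the 4 original features down into 2 new orthogonal dimensions.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (150, 2)
print(pca.explained_variance_ratio_)  # how much variance each new dimension keeps
```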

K-Means Clustering is an algorithm that automatically groups data points into clusters based on similar features. The patterns within the dataset are analyzed and the data points are split into groups based on these patterns. Essentially, K-means creates its own classes out of unlabeled data. The K-Means algorithm operates by assigning centers to the clusters, called centroids, and moving the centroids until the optimal position for the centroids is found. The optimal position will be one where the distance from each centroid to the surrounding data points within its cluster is minimized. The “K” in K-means clustering refers to how many centroids have been chosen.
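
A minimal sketch, assuming scikit-learn and two made-up blobs of unlabeled points:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two obvious blobs of unlabeled 2D points, centered near (0, 0) and (5, 5).
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
               rng.normal(5, 0.5, size=(50, 2))])

# K = 2: ask the algorithm for two centroids.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)  # near (0, 0) and (5, 5)
print(kmeans.labels_[:5], kmeans.labels_[-5:])
```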

Summing Up

To close, let’s quickly go over the key differences between supervised and unsupervised learning.

As we previously discussed, in supervised learning tasks the input data is labeled and the number of classes is known, while in unsupervised learning cases the input data is unlabeled and the number of classes is not known. Unsupervised learning tends to be less computationally complex than supervised learning, but supervised learning results tend to be highly accurate while unsupervised learning results tend to be only moderately accurate.

To Learn More

Recommended Machine Learning Courses:

| Course | Offered By | Duration | Difficulty |
| --- | --- | --- | --- |
| Introduction to Artificial Intelligence | IBM | 9 Hours | Beginner |
| Deep Learning for Business | Yonsei University | 8 Hours | Beginner |
| An Introduction to Practical Deep Learning | Intel Software | 12 Hours | Intermediate |
| Machine Learning Foundations | University of Washington | 24 Hours | Intermediate |